Sample records for outcome-based reference thresholds

  1. Threshold-driven optimization for reference-based auto-planning

    NASA Astrophysics Data System (ADS)

    Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo

    2018-02-01

    We study a threshold-driven optimization methodology for automatically generating a treatment plan that is driven by a reference DVH for IMRT treatment planning, and present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and an associated penalty weight. Conventional manual and auto-planning with such a function involves iteratively updating the penalty weights while keeping the thresholds constant, an unintuitive and often inconsistent way to plan toward a reference DVH. Driving the dose distribution by threshold values instead of penalty weights, however, can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values and iteratively improves the quality of that assignment; it effectively handles both sub-optimal and infeasible reference DVHs. TORA was applied to a prostate case and a liver case as a proof of concept. Reference DVHs were generated using a conventional voxel-based objective and then altered to be either infeasible or easy to achieve. TORA closely recreated the reference DVHs in 5-15 iterations of solving a simple convex sub-problem, and thus has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning become more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. Threshold-focused objective tuning should be explored as an alternative to the conventional practice of updating penalty weights for DVH-guided treatment planning.
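
    A minimal numerical sketch of the kind of objective being discussed, not the authors' TORA implementation: a voxel-based quadratic penalty with under- and over-dose thresholds, where threshold-driven tuning moves the thresholds toward values read off a reference DVH while the penalty weights stay fixed. The dose array and the threshold and weight values are illustrative assumptions.

        import numpy as np

        def voxel_penalty(dose, t_under, t_over, w_under=1.0, w_over=1.0):
            """Quadratic penalty for under-dosing below t_under and over-dosing above t_over."""
            under = np.clip(t_under - dose, 0.0, None)   # shortfall below the lower threshold
            over = np.clip(dose - t_over, 0.0, None)     # excess above the upper threshold
            return np.sum(w_under * under**2 + w_over * over**2)

        # Threshold-driven tuning keeps the weights fixed and adjusts the thresholds
        # toward the reference DVH, rather than re-weighting individual voxels.
        dose = np.array([58.0, 60.5, 62.0, 65.0])
        print(voxel_penalty(dose, t_under=60.0, t_over=63.0))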

  2. Outcome-driven thresholds for home blood pressure measurement: international database of home blood pressure in relation to cardiovascular outcome.

    PubMed

    Niiranen, Teemu J; Asayama, Kei; Thijs, Lutgarde; Johansson, Jouni K; Ohkubo, Takayoshi; Kikuya, Masahiro; Boggia, José; Hozawa, Atsushi; Sandoya, Edgardo; Stergiou, George S; Tsuji, Ichiro; Jula, Antti M; Imai, Yutaka; Staessen, Jan A

    2013-01-01

    The lack of outcome-driven operational thresholds limits the clinical application of home blood pressure (BP) measurement. Our objective was to determine an outcome-driven reference frame for home BP measurement. We measured home and clinic BP in 6470 participants (mean age, 59.3 years; 56.9% women; 22.4% on antihypertensive treatment) recruited in Ohasama, Japan (n=2520); Montevideo, Uruguay (n=399); Tsurugaya, Japan (n=811); Didima, Greece (n=665); and nationwide in Finland (n=2075). In multivariable-adjusted analyses of individual subject data, we determined home BP thresholds, which yielded 10-year cardiovascular risks similar to those associated with stages 1 (120/80 mm Hg) and 2 (130/85 mm Hg) prehypertension, and stages 1 (140/90 mm Hg) and 2 (160/100 mm Hg) hypertension on clinic measurement. During 8.3 years of follow-up (median), 716 cardiovascular end points, 294 cardiovascular deaths, 393 strokes, and 336 cardiac events occurred in the whole cohort; in untreated participants these numbers were 414, 158, 225, and 194, respectively. In the whole cohort, outcome-driven systolic/diastolic thresholds for the home BP corresponding with stages 1 and 2 prehypertension and stages 1 and 2 hypertension were 121.4/77.7, 127.4/79.9, 133.4/82.2, and 145.4/86.8 mm Hg; in 5018 untreated participants, these thresholds were 118.5/76.9, 125.2/79.7, 131.9/82.4, and 145.3/87.9 mm Hg, respectively. Rounded thresholds for stages 1 and 2 prehypertension and stages 1 and 2 hypertension amounted to 120/75, 125/80, 130/85, and 145/90 mm Hg, respectively. Population-based outcome-driven thresholds for home BP are slightly lower than those currently proposed in hypertension guidelines. Our current findings could inform guidelines and help clinicians in diagnosing and managing patients.
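
    A toy sketch of the outcome-driven threshold idea described above, on synthetic data rather than the study cohort: fit separate risk models for clinic and home systolic BP, then search for the home BP value whose predicted risk matches the risk at a conventional clinic threshold such as 140 mm Hg. The data-generating model, sample size and search grid are assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 5000
        clinic = rng.normal(135, 18, n)
        home = clinic - 5 + rng.normal(0, 8, n)            # home BP runs a little lower
        risk = 1 / (1 + np.exp(-(-9 + 0.05 * clinic)))     # synthetic "true" event risk
        event = rng.random(n) < risk

        m_clinic = LogisticRegression(max_iter=1000).fit(clinic.reshape(-1, 1), event)
        m_home = LogisticRegression(max_iter=1000).fit(home.reshape(-1, 1), event)

        target = m_clinic.predict_proba([[140.0]])[0, 1]   # predicted risk at the clinic threshold
        grid = np.linspace(100, 180, 801).reshape(-1, 1)
        home_risk = m_home.predict_proba(grid)[:, 1]
        equivalent = grid[np.argmin(np.abs(home_risk - target)), 0]
        print(f"home-BP threshold with matching predicted risk: {equivalent:.1f} mm Hg")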

  3. Managing moral hazard in motor vehicle accident insurance claims.

    PubMed

    Ebrahim, Shanil; Busse, Jason W; Guyatt, Gordon H; Birch, Stephen

    2013-05-01

    Motor vehicle accident (MVA) insurance in Canada is based primarily on two different compensation systems: (i) no-fault, in which policyholders are unable to seek recovery for losses caused by other parties (unless losses exceed specified dollar or verbal thresholds) and (ii) tort, in which policyholders may seek general damages. As insurance companies pay for MVA-related health care costs, excess use of health care services may occur as a result of consumers' (accident victims) and/or producers' (health care providers) behavior, often referred to as the moral hazard of insurance. In the United States, moral hazard is greater for low dollar threshold no-fault insurance compared with tort systems. In Canada, high dollar threshold or pure no-fault systems are associated with faster patient recovery and reduced MVA claims compared with tort systems. These findings suggest that high threshold no-fault or pure no-fault compensation systems may be associated with improved outcomes for patients and reduced moral hazard.

  4. An analysis of population-based prenatal screening for overt hypothyroidism.

    PubMed

    Bryant, Stefanie N; Nelson, David B; McIntire, Donald D; Casey, Brian M; Cunningham, F Gary

    2015-10-01

    The purpose of the study was to evaluate pregnancy outcomes of hypothyroidism that were identified in a population-based prenatal screening program. This is a secondary analysis of a prospective prenatal population-based study in which serum thyroid analytes were obtained from November 2000 to April 2003. Initial screening thresholds were intentionally inclusive (thyroid-stimulating hormone [TSH], >3.0 mU/L; free thyroxine [fT4], <0.9 ng/dL); those who screened positive were referred for confirmatory testing in a hospital-based laboratory. Hypothyroidism was identified and treated if TSH level was >4.5 mU/L and if fT4 level was <0.76 ng/dL. Perinatal outcomes in these women, and in those who screened positive but were not confirmed to have hypothyroidism, were compared with those of women with euthyroidism. Outcomes were then analyzed according to initial TSH levels. A total of 26,518 women completed initial screening: 24,584 women (93%) were euthyroid, and 284 women (1%) had abnormal initial values that suggested hypothyroidism. Of those referred, 232 women (82%) underwent repeat testing, and 47 women (0.2% of those initially screened) were confirmed to have hypothyroidism. Perinatal outcomes of women with treated overt hypothyroidism were similar to those of women with euthyroidism. Higher rates of pregnancy-related hypertension were identified in the 182 women with unconfirmed hypothyroidism when compared with women with euthyroidism (P < .001); however, this association was seen only in women with initial TSH >4.5 mU/L (adjusted odds ratio, 2.53; 95% confidence interval, 1.4-4.5). The identification and treatment of overt hypothyroidism results in pregnancy outcomes similar to those of women with euthyroidism. Unconfirmed screening results suggestive of hypothyroidism portend pregnancy risks similar to those of women with subclinical hypothyroidism, specifically preeclampsia; however, this increased risk was seen only in women with initial TSH levels of >4.5 mU/L and suggests that this is a more clinically relevant threshold than 3.0 mU/L. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Cost-effectiveness of different strategies for selecting and treating individuals at increased risk of osteoporosis or osteopenia: a systematic review.

    PubMed

    Müller, Dirk; Pulm, Jannis; Gandjour, Afschin

    2012-01-01

    To compare cost-effectiveness modeling analyses of strategies to prevent osteoporotic and osteopenic fractures either based on fixed thresholds using bone mineral density or based on variable thresholds including bone mineral density and clinical risk factors. A systematic review was performed by using the MEDLINE database and reference lists from previous reviews. On the basis of predefined inclusion/exclusion criteria, we identified relevant studies published since January 2006. Articles included for the review were assessed for their methodological quality and results. The literature search resulted in 24 analyses, 14 of them using a fixed-threshold approach and 10 using a variable-threshold approach. On average, 70% of the criteria for methodological quality were fulfilled, but almost half of the analyses did not include medication adherence in the base case. The results of variable-threshold strategies were more homogeneous and showed more favorable incremental cost-effectiveness ratios compared with those based on a fixed threshold with bone mineral density. For analyses with fixed thresholds, incremental cost-effectiveness ratios varied from €80,000 per quality-adjusted life-year in women aged 55 years to cost saving in women aged 80 years. For analyses with variable thresholds, the range was €47,000 to cost savings. Risk assessment using variable thresholds appears to be more cost-effective than selecting high-risk individuals by fixed thresholds. Although the overall quality of the studies was fairly good, future economic analyses should further improve their methods, particularly in terms of including more fracture types, incorporating medication adherence, and including or discussing unrelated costs during added life-years. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. Methods for interpreting change over time in patient-reported outcome measures.

    PubMed

    Wyrwich, K W; Norquist, J M; Lenderking, W R; Acaster, S

    2013-04-01

    Interpretation guidelines are needed for patient-reported outcome (PRO) measures' change scores to evaluate efficacy of an intervention and to communicate PRO results to regulators, patients, physicians, and providers. The 2009 Food and Drug Administration (FDA) Guidance for Industry Patient-Reported Outcomes (PRO) Measures: Use in Medical Product Development to Support Labeling Claims (hereafter referred to as the final FDA PRO Guidance) provides some recommendations for the interpretation of change in PRO scores as evidence of treatment efficacy. This article reviews the evolution of the methods and the terminology used to describe and aid in the communication of meaningful PRO change score thresholds. Anchor- and distribution-based methods have played important roles, and the FDA has recently stressed the importance of cross-sectional patient global assessments of concept as anchor-based methods for estimation of the responder definition, which describes an individual-level treatment benefit. The final FDA PRO Guidance proposes the cumulative distribution function (CDF) of responses as a useful method to depict the effect of treatments across the study population. While CDFs serve an important role, they should not be a replacement for the careful investigation of a PRO's relevant responder definition using anchor-based methods and providing stakeholders with a relevant threshold for the interpretation of change over time.

  7. Setting nutrient thresholds to support an ecological assessment based on nutrient enrichment, potential primary production and undesirable disturbance.

    PubMed

    Devlin, Michelle; Painting, Suzanne; Best, Mike

    2007-01-01

    The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients, taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including distance of confidence limits away from a reference threshold and how well the model is populated with real data. This evidence-based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.

  8. A cross-sectional study of hearing thresholds among 4627 Norwegian train and track maintenance workers

    PubMed Central

    Lie, Arve; Skogstad, Marit; Johnsen, Torstein Seip; Engdahl, Bo; Tambs, Kristian

    2014-01-01

    Objective Railway workers performing maintenance work of trains and tracks could be at risk of developing noise-induced hearing loss, since they are exposed to noise levels of 75–90 dB(A) with peak exposures of 130–140 dB(C). The objective was to make a risk assessment by comparing the hearing thresholds among train and track maintenance workers with a reference group not exposed to noise and reference values from the ISO 1999. Design Cross-sectional. Setting A major Norwegian railway company. Participants 1897 and 2730 male train and track maintenance workers, respectively, all exposed to noise, and 2872 male railway traffic controllers and office workers not exposed to noise. Outcome measures The primary outcome was the hearing threshold (pure tone audiometry, frequencies from 0.5 to 8 kHz), and the secondary outcome was the prevalence of audiometric notches (Coles notch) of the most recent audiogram. Results Train and track maintenance workers aged 45 years or older had a small mean hearing loss in the 3–6 kHz area of 3–5 dB. The hearing loss was less among workers younger than 45 years. Audiometric notches were slightly more prevalent among the noise exposed (59–64%) group compared with controls (49%) for all age groups. They may therefore be a sensitive measure in disclosing an early hearing loss at a group level. Conclusions Train and track maintenance workers aged 45 years or older, on average, have a slightly greater hearing loss and more audiometric notches compared with reference groups not exposed to noise. Younger (<45 years) workers have hearing thresholds comparable to the controls. PMID:25324318

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendell, Mark J.; Fisk, William J.

    Background - The goal of this project, with a focus on commercial buildings in California, was to develop a new framework for evidence-based minimum ventilation rate (MVR) standards that protect occupants in buildings while also considering energy use and cost. This was motivated by research findings suggesting that current prescriptive MVRs in commercial buildings do not provide occupants with fully safe and satisfactory indoor environments. Methods - The project began with a broad review in several areas: the diverse strategies now used for standards or guidelines for MVRs or for environmental contaminant exposures, current knowledge about adverse human effects associated with VRs, and current knowledge about contaminants in commercial buildings, including their presence, their adverse human effects, and their relationships with VRs. Based on a synthesis of the reviewed information, new principles and approaches are proposed for setting evidence-based VR standards for commercial buildings, considering a range of human effects including health, performance, and acceptability of air. Results - A review and evaluation is first presented of current approaches to setting prescriptive building ventilation standards and setting acceptable limits for human contaminant exposures in outdoor air and occupational settings. Recent research on approaches to setting acceptable levels of environmental exposures in evidence-based MVR standards is also described. From a synthesis and critique of these materials, a set of principles for setting MVRs is presented, along with an example approach based on these principles. The approach combines two sequential strategies. In a first step, an acceptable threshold is set for each adverse outcome that has a demonstrated relationship to VRs, expressed as an increase from the (low) outcome level at a high reference ventilation rate (RVR, the VR needed to attain the best achievable levels of the adverse outcome); the MVR required to meet each specific outcome threshold is estimated; and the highest of these MVRs, which would then meet all outcome thresholds, is selected as the target MVR. In a second step, implemented only if the target MVR from step 1 is judged impractically high, costs and benefits are estimated and this information is used in a risk management process. Four human outcomes with substantial quantitative evidence of relationships to VRs are identified for initial consideration in setting MVR standards. These are: building-related symptoms (sometimes called sick building syndrome symptoms), poor perceived indoor air quality, and diminished work performance, all with data relating them directly to VRs; and cancer and non-cancer chronic outcomes, related indirectly to VRs through specific VR-influenced indoor contaminants. In an application of step 1 for offices using a set of example outcome thresholds, a target MVR of 9 L/s (19 cfm) per person was needed. Because this target MVR was close to MVRs in current standards, use of a cost/benefit process seemed unnecessary. Selection of more stringent thresholds for one or more human outcomes, however, could raise the target MVR to 14 L/s (30 cfm) per person or higher, triggering the step 2 risk management process. Consideration of outdoor air pollutant effects would add further complexity to the framework. For balancing the objective and subjective factors involved in setting MVRs in a cost-benefit process, it is suggested that a diverse group of stakeholders make the determination after assembling as much quantitative data as possible.
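
    A schematic sketch of the step-1 rule described above, using invented outcome-versus-ventilation-rate curves rather than the report's evidence base: for each outcome, find the lowest ventilation rate that keeps the outcome within an allowed increase over its level at a high reference ventilation rate, then take the maximum across outcomes as the target MVR. The curve shapes, the RVR of 30 L/s per person and the allowed increase are assumptions.

        import numpy as np

        vr = np.linspace(2, 30, 281)                        # candidate VRs, L/s per person
        curves = {                                          # assumed monotone outcome models
            "building-related symptoms": 0.10 + 0.8 / vr,
            "poor perceived air quality": 0.05 + 1.2 / vr,
            "work performance decrement": 0.01 + 0.3 / vr,
        }
        rvr = 30.0                                          # high reference ventilation rate
        allowed_increase = 0.05                             # example outcome threshold above the RVR level

        required = {}
        for name, outcome in curves.items():
            baseline = outcome[np.argmin(np.abs(vr - rvr))]  # outcome level at the RVR
            meets = vr[outcome <= baseline + allowed_increase]
            required[name] = meets.min()                     # lowest VR meeting this outcome threshold

        target_mvr = max(required.values())                  # must satisfy every outcome threshold
        print(required)
        print("target MVR:", round(target_mvr, 1), "L/s per person")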

  10. A comparison of South Asian specific and established BMI thresholds for determining obesity prevalence in pregnancy and predicting pregnancy complications: findings from the Born in Bradford cohort.

    PubMed

    Bryant, M; Santorelli, G; Lawlor, D A; Farrar, D; Tuffnell, D; Bhopal, R; Wright, J

    2014-03-01

    To describe how maternal obesity prevalence varies by established international and South Asian-specific body mass index (BMI) cut-offs in women of Pakistani origin and investigate whether different BMI thresholds can help to identify women at risk of adverse pregnancy and birth outcomes. Prospective bi-ethnic birth cohort study (the Born in Bradford (BiB) cohort). Bradford, a deprived city in the North of the UK. A total of 8478 South Asian and White British pregnant women participated in the BiB cohort study. Maternal obesity prevalence; prevalence of known obesity-related adverse pregnancy outcomes: mode of birth, hypertensive disorders of pregnancy (HDP), gestational diabetes, macrosomia and pre-term births. Application of South Asian BMI cut-offs increased prevalence of obesity in Pakistani women from 18.8% (95% confidence interval (CI) 17.6-19.9) to 30.9% (95% CI 29.5-32.2). With the exception of pre-term births, there was a positive linear relationship between BMI and prevalence of adverse pregnancy and birth outcomes, across almost the whole BMI distribution. Risk of gestational diabetes and HDP increased more sharply in Pakistani women after a BMI threshold of at least 30 kg/m², but there was no evidence of a sharp increase in any risk factors at the new, lower thresholds suggested for use in South Asian women. BMI was a good single predictor of outcomes (area under the receiver operating characteristic curve: 0.596-0.685 for different outcomes); prediction was more discriminatory and accurate with BMI as a continuous variable than as a binary variable for any possible cut-off point. Applying the new South Asian threshold to pregnant women would markedly increase the number of women referred for monitoring and lifestyle advice. However, our results suggest that lowering the BMI threshold in South Asian women would not improve the predictive ability for identifying those who were at risk of adverse pregnancy outcomes.
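
    A quick sketch of the continuous-versus-binary comparison mentioned above, on synthetic data rather than the BiB cohort: the AUROC for BMI used as a continuous predictor of a binary outcome versus BMI dichotomised at a single cut-off. The risk model and the 30 kg/m² cut-off in the example are assumptions.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)
        n = 5000
        bmi = rng.normal(27, 5, n)
        risk = 1 / (1 + np.exp(-(-4.5 + 0.1 * bmi)))        # assumed smooth increase in risk with BMI
        outcome = rng.random(n) < risk

        print("AUROC, BMI continuous:    ", round(roc_auc_score(outcome, bmi), 3))
        print("AUROC, BMI >= 30 (binary):", round(roc_auc_score(outcome, (bmi >= 30).astype(int)), 3))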

  11. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

    The specific binding ratio (SBR), first reported by Tossici-Bolt et al., is a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with that threshold we could not always define the ROI for the non-specific binding concentration (the reference region) and calculate the SBR appropriately. We therefore sought a new method for determining the reference region when calculating the SBR. Using data from 20 patients who had undergone DAT imaging in our hospital, we calculated the non-specific binding concentration by two methods: fixing the threshold that defines the reference region at specific values (the fixing method) and having an examiner visually optimize the reference region at every examination (the visual optimization method). First, we assessed the reference region from each method visually; afterward, we quantitatively compared the SBR calculated with each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
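
    A simplified sketch of the threshold-defined reference region, not the exact Tossici-Bolt implementation: threshold a synthetic brain image at a fraction of its maximum to get the outer contour, erode inward as a stand-in for the "20 mm inside the contour" rule, exclude the striatal VOI, and form an SBR-style ratio. The image, the erosion depth and the 50%/30% comparison are illustrative assumptions; in this toy example the 50% threshold leaves no definable reference region, mirroring the problem described above.

        import numpy as np
        from scipy.ndimage import binary_erosion

        rng = np.random.default_rng(1)
        img = np.zeros((60, 60, 40))
        img[5:55, 5:55, 5:35] = 10 + rng.normal(0, 0.1, (50, 50, 30))   # synthetic "brain" background
        striatum = np.zeros(img.shape, bool)
        striatum[25:35, 25:35, 15:25] = True
        img[striatum] += 20                                             # synthetic striatal uptake

        def reference_region(image, threshold_fraction, erode_voxels=5):
            outer = image > threshold_fraction * image.max()            # threshold-defined outer contour
            inner = binary_erosion(outer, iterations=erode_voxels)      # stand-in for "20 mm inside"
            return inner & ~striatum

        for frac in (0.5, 0.3):                                         # 50% vs 30% fixing method
            ref = reference_region(img, frac)
            if not ref.any():
                print(f"threshold {int(frac * 100)}%: reference region could not be defined")
                continue
            c_ref = img[ref].mean()
            sbr_like = (img[striatum].mean() - c_ref) / c_ref
            print(f"threshold {int(frac * 100)}%: reference voxels={ref.sum()}, SBR-like ratio={sbr_like:.2f}")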

  12. Evaluation of bone formation in calcium phosphate scaffolds with μCT-method validation using SEM.

    PubMed

    Lewin, S; Barba, A; Persson, C; Franch, J; Ginebra, M-P; Öhman-Mägi, C

    2017-10-05

    There is a plethora of calcium phosphate (CaP) scaffolds used as synthetic substitutes for bone grafts. Scaffold performance is often evaluated from the quantity of bone formed within or in direct contact with the scaffold. Micro-computed tomography (μCT) allows three-dimensional evaluation of bone formation inside scaffolds. However, the almost identical X-ray attenuation of CaP and bone hampers the separation of these phases in μCT images. Commonly, segmentation of bone in μCT images is based on gray scale intensity, with manually determined global thresholds. However, image analysis methods, and methods for manual thresholding in particular, lack standardization and may consequently suffer from subjectivity. The aim of the present study was to provide a methodological framework for addressing these issues. Bone formation in two types of CaP scaffold architectures (foamed and robocast), obtained from a larger animal study (a 12-week canine model), was evaluated by μCT. In addition, cross-sectional scanning electron microscopy (SEM) images were acquired as references to determine thresholds and to validate the result. μCT datasets were registered to the corresponding SEM reference. Global thresholds were then determined by quantitatively matching the area fractions in the μCT image to the area fractions in the corresponding SEM image. For comparison, area fractions were also quantified using global thresholds determined manually by two different approaches. In the validation, the manually determined thresholds resulted in large average errors in area fraction (up to 17%), whereas for the evaluation using SEM references, the errors were estimated to be less than 3%. Furthermore, it was found that basing the thresholds on one single SEM reference gave lower errors than determining them manually. This study provides an objective, robust and less error-prone method to determine global thresholds for the evaluation of bone formation in CaP scaffolds.
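
    A conceptual sketch of the threshold-selection step, not the authors' registration pipeline: given gray values from a registered μCT slice and the bone area fraction measured on the corresponding SEM reference, pick the global threshold whose segmented area fraction best matches the SEM value. The gray-value distributions and the SEM fraction are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        uct_values = np.concatenate([rng.normal(90, 10, 7000),     # scaffold/background voxels
                                     rng.normal(140, 12, 3000)])   # bone-like voxels
        sem_bone_fraction = 0.30                                    # area fraction from the SEM reference

        thresholds = np.arange(60, 200)
        fractions = np.array([(uct_values >= t).mean() for t in thresholds])
        best = thresholds[np.argmin(np.abs(fractions - sem_bone_fraction))]
        print("global threshold matched to the SEM area fraction:", best)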

  13. Derivation of soil screening thresholds to protect chisel-toothed kangaroo rat from uranium mine waste in northern Arizona

    USGS Publications Warehouse

    Hinck, Jo E.; Linder, Greg L.; Otton, James K.; Finger, Susan E.; Little, Edward E.; Tillitt, Donald E.

    2013-01-01

    Chemical data from soil and weathered waste material samples collected from five uranium mines north of the Grand Canyon (three reclaimed, one mined but not reclaimed, and one never mined) were used in a screening-level risk analysis for the Arizona chisel-toothed kangaroo rat (Dipodomys microps leucotis); risks from radiation exposure were not evaluated. Dietary toxicity reference values were used to estimate soil-screening thresholds presenting risk to kangaroo rats. Sensitivity analyses indicated that body weight critically affected outcomes of exposed-dose calculations; juvenile kangaroo rats were more sensitive to the inorganic constituent toxicities than adult kangaroo rats. Species-specific soil-screening thresholds were derived for arsenic (137 mg/kg), cadmium (16 mg/kg), copper (1,461 mg/kg), lead (1,143 mg/kg), nickel (771 mg/kg), thallium (1.3 mg/kg), uranium (1,513 mg/kg), and zinc (731 mg/kg) using toxicity reference values that incorporate expected chronic field exposures. Inorganic contaminants in soils within and near the mine areas generally posed minimal risk to kangaroo rats. Most exceedances of soil thresholds were for arsenic and thallium and were associated with weathered mine wastes.
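
    An illustrative back-calculation in the spirit of the screening-level analysis described above; every parameter value (TRV, body weight, ingestion rates, soil-to-diet factor) is a hypothetical placeholder, not a published USGS input. It shows why body weight matters: the same TRV yields a lower, more protective soil threshold for a lighter (juvenile) animal.

        def soil_threshold(trv_mg_per_kgbw_day, body_weight_kg,
                           soil_ingestion_kg_day, food_ingestion_kg_day,
                           soil_to_diet_factor):
            # Soil-equivalent intake per kg body weight per day; the threshold is the
            # soil concentration at which the estimated daily dose equals the TRV.
            intake_per_bw = (soil_ingestion_kg_day
                             + food_ingestion_kg_day * soil_to_diet_factor) / body_weight_kg
            return trv_mg_per_kgbw_day / intake_per_bw

        for label, bw_kg in [("adult", 0.060), ("juvenile", 0.030)]:
            print(label, round(soil_threshold(5.0, bw_kg, 0.0002, 0.004, 0.1), 1), "mg/kg soil")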

  14. The use of ketamine in ECT anaesthesia: A systematic review and critical commentary on efficacy, cognitive, safety and seizure outcomes.

    PubMed

    Gálvez, Verònica; McGuirk, Lucy; Loo, Colleen K

    2017-09-01

    This review will discuss ECT efficacy and cognitive outcomes when using ketamine as an ECT anaesthetic compared to other anaesthetics, taking into account important moderator variables that have often not been considered to date. It will also include information on safety and other ECT outcomes (seizure threshold and quality). A systematic search through MEDLINE, PubMed, PsychINFO, Cochrane Databases and reference lists from retrieved articles was performed. Search terms were: "ketamine" and "Electroconvulsive Therapy", from 1995 to September 2016. Meta-analyses, randomised controlled trials, open-label and retrospective studies published in English of depressed samples receiving ECT with ketamine anaesthesia were included (n = 24). Studies were heterogeneous in the clinical populations included and ECT treatment and anaesthetic methods. Frequently, studies did not report on ECT factors (i.e., pulse-width, treatment schedule). Findings regarding efficacy were mixed. Tolerance from repeated use may explain why several studies found that ketamine enhanced efficacy early in the ECT course but not at the end. The majority of studies did not comprehensively examine cognition and adverse effects were not systematically studied. Only a minority of the studies reported on seizure threshold and expression. The routine use of ketamine anaesthesia for ECT in clinical settings cannot yet be recommended based on published data. Larger randomised controlled trials, taking into account moderator variables, specifically reporting on ECT parameters and systematically assessing outcomes are encouraged.

  15. A cross-sectional study of hearing thresholds among 4627 Norwegian train and track maintenance workers.

    PubMed

    Lie, Arve; Skogstad, Marit; Johnsen, Torstein Seip; Engdahl, Bo; Tambs, Kristian

    2014-10-16

    Railway workers performing maintenance work of trains and tracks could be at risk of developing noise-induced hearing loss, since they are exposed to noise levels of 75-90 dB(A) with peak exposures of 130-140 dB(C). The objective was to make a risk assessment by comparing the hearing thresholds among train and track maintenance workers with a reference group not exposed to noise and reference values from the ISO 1999. Cross-sectional. A major Norwegian railway company. 1897 and 2730 male train and track maintenance workers, respectively, all exposed to noise, and 2872 male railway traffic controllers and office workers not exposed to noise. The primary outcome was the hearing threshold (pure tone audiometry, frequencies from 0.5 to 8 kHz), and the secondary outcome was the prevalence of audiometric notches (Coles notch) of the most recent audiogram. Train and track maintenance workers aged 45 years or older had a small mean hearing loss in the 3-6 kHz area of 3-5 dB. The hearing loss was less among workers younger than 45 years. Audiometric notches were slightly more prevalent among the noise exposed (59-64%) group compared with controls (49%) for all age groups. They may therefore be a sensitive measure in disclosing an early hearing loss at a group level. Train and track maintenance workers aged 45 years or older, on average, have a slightly greater hearing loss and more audiometric notches compared with reference groups not exposed to noise. Younger (<45 years) workers have hearing thresholds comparable to the controls. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Application of machine learning methodology for PET-based definition of lung cancer

    PubMed Central

    Kerhet, A.; Small, C.; Quon, H.; Riauka, T.; Schrader, L.; Greiner, R.; Yee, D.; McEwan, A.; Roa, W.

    2010-01-01

    We applied a learning methodology framework to assist in the threshold-based segmentation of non-small-cell lung cancer (NSCLC) tumours in positron-emission tomography-computed tomography (PET-CT) imaging for use in radiotherapy planning. Gated and standard free-breathing studies of two patients were independently analysed (four studies in total). Each study had a PET-CT and a treatment-planning CT image. The reference gross tumour volume (GTV) was identified by two experienced radiation oncologists who also determined reference standardized uptake value (SUV) thresholds that most closely approximated the GTV contour on each slice. A set of uptake distribution-related attributes was calculated for each PET slice. A machine learning algorithm was trained on a subset of the PET slices to cope with slice-to-slice variation in the optimal SUV threshold: that is, to predict the most appropriate SUV threshold from the calculated attributes for each slice. The algorithm's performance was evaluated using the remainder of the PET slices. A high degree of geometric similarity was achieved between the areas outlined by the predicted and the reference SUV thresholds (Jaccard index exceeding 0.82). No significant difference was found between the gated and the free-breathing results in the same patient. In this preliminary work, we demonstrated the potential applicability of a machine learning methodology as an auxiliary tool for radiation treatment planning in NSCLC. PMID:20179802
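
    A schematic sketch of the per-slice threshold learning idea, not the authors' implementation: describe each PET slice by simple uptake-distribution attributes and train a regressor to predict the slice-specific SUV threshold. The synthetic "slices", the attribute set and the rule generating the reference thresholds are all assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)

        def slice_attributes(suv):
            return [suv.max(), suv.mean(), suv.std(), np.percentile(suv, 90)]

        # Synthetic training data: each "slice" is a sample of SUVs; the reference
        # threshold is taken, for illustration only, as a noisy fraction of SUVmax.
        slices = [rng.gamma(shape=2.0, scale=rng.uniform(1, 3), size=2000) for _ in range(60)]
        X = np.array([slice_attributes(s) for s in slices])
        y = np.array([0.4 * s.max() + rng.normal(0, 0.1) for s in slices])

        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:45], y[:45])
        pred = model.predict(X[45:])
        print("mean absolute threshold error on held-out slices:",
              round(float(np.mean(np.abs(pred - y[45:]))), 3))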

  17. A Bayesian predictive two-stage design for phase II clinical trials.

    PubMed

    Sambucini, Valeria

    2008-04-15

    In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined by specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The assessment of the design's performance is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
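
    A toy numerical sketch of the kind of quantity involved, not Sambucini's actual design calculations: with a Beta analysis prior and a separate Beta design prior on the response rate, compute the prior-predictive probability that stage-1 data will produce a large posterior probability that the true rate exceeds the target. The target, cut-off, sample size and prior parameters are illustrative assumptions.

        import numpy as np
        from scipy.stats import beta, betabinom

        p0 = 0.20                      # target response rate
        lam = 0.90                     # required posterior probability that p > p0
        n1 = 25                        # stage-1 sample size
        a_analysis, b_analysis = 1, 1  # vague analysis prior
        a_design, b_design = 8, 12     # design prior centred on an optimistic response rate

        # Posterior probability that p > p0 after observing x responses out of n1.
        post_prob = np.array([1 - beta.cdf(p0, a_analysis + x, b_analysis + n1 - x)
                              for x in range(n1 + 1)])
        promising = post_prob >= lam   # stage-1 outcomes counted as "promising"

        # Prior-predictive (design prior) probability of a promising first stage.
        pred = betabinom.pmf(np.arange(n1 + 1), n1, a_design, b_design)
        print("predictive probability of a promising first stage:",
              round(float(pred[promising].sum()), 3))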

  18. Altered quantitative sensory testing outcome in subjects with opioid therapy.

    PubMed

    Chen, Lucy; Malarick, Charlene; Seefeld, Lindsey; Wang, Shuxing; Houghton, Mary; Mao, Jianren

    2009-05-01

    Preclinical studies have suggested that opioid exposure may induce a paradoxical decrease in the nociceptive threshold, commonly referred to as opioid-induced hyperalgesia (OIH). While OIH may have implications in acute and chronic pain management, its clinical features remain unclear. Using an office-based quantitative sensory testing (QST) method, we compared pain threshold, pain tolerance, and the degree of temporal summation of the second pain in response to thermal stimulation among three groups of subjects: those with neither pain nor opioid therapy (group 1), with chronic pain but without opioid therapy (group 2), and with both chronic pain and opioid therapy (group 3). We also examined the possible correlation between QST responses to thermal stimulation and opioid dose, opioid treatment duration, opioid analgesic type, pain duration, or gender in group 3 subjects. As compared with both group 1 (n=41) and group 2 (n=41) subjects, group 3 subjects (n=58) displayed a decreased heat pain threshold and exacerbated temporal summation of the second pain to thermal stimulation. In contrast, there were no differences in cold or warm sensation among the three groups. Among clinical factors, daily opioid dose consistently correlated with the decreased heat pain threshold and exacerbated temporal summation of the second pain in group 3 subjects. These results indicate that decreased heat pain threshold and exacerbated temporal summation of the second pain may be characteristic QST changes in subjects with opioid therapy. The data suggest that QST may be a useful tool in the clinical assessment of OIH.

  19. Risk Associated with Pulse Pressure on Out-of-Office Blood Pressure Measurement

    PubMed Central

    Gu, Yu-Mei; Aparicio, Lucas S.; Liu, Yan-Ping; Asayama, Kei; Hansen, Tine W.; Niiranen, Teemu J.; Boggia, José; Thijs, Lutgarde; Staessen, Jan A.

    2014-01-01

    Background Longitudinal studies have demonstrated that the risk of cardiovascular disease increases with pulse pressure (PP). However, PP remains an elusive cardiovascular risk factor with findings being inconsistent between studies. The 2013 ESH/ESC guideline proposed that PP is useful in stratification and suggested a threshold of 60 mm Hg, which is 10 mm Hg higher compared to that in the 2007 guideline; however, no justification for this increase was provided. Methodology Published thresholds of PP are based on office blood pressure measurement and often on arbitrary categorical analyses. In the International Database on Ambulatory blood pressure in relation to Cardiovascular Outcomes (IDACO) and the International Database on HOme blood pressure in relation to Cardiovascular Outcome (IDHOCO), we determined outcome-driven thresholds for PP based on ambulatory or home blood pressure measurement, respectively. Results The main findings were that for people aged <60 years, PP did not refine risk stratification, whereas in older people the thresholds were 64 and 76 mm Hg for the ambulatory and home PP, respectively. However, PP provided little added predictive value over and beyond classical risk factors. PMID:26587443

  20. Convergence of decision rules for value-based pricing of new innovative drugs.

    PubMed

    Gandjour, Afschin

    2015-04-01

    Given the high costs of innovative new drugs, most European countries have introduced policies for price control, in particular value-based pricing (VBP) and international reference pricing. The purpose of this study is to describe how profit-maximizing manufacturers would optimally adjust their launch sequence to these policies and how VBP countries may best respond. To decide about the launching sequence, a manufacturer must consider a tradeoff between price and sales volume in any given country as well as the effect of price in a VBP country on the price in international reference pricing countries. Based on the manufacturer's rationale, it is best for VBP countries in Europe to implicitly collude in the long term and set cost-effectiveness thresholds at the level of the lowest acceptable VBP country. This way, international reference pricing countries would also converge towards the lowest acceptable threshold in Europe.

  1. Regression Discontinuity for Causal Effect Estimation in Epidemiology.

    PubMed

    Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till

    Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold, exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of the exposure itself, utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules, such as treatments for low birth weight, hypertension, or diabetes.
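
    A minimal simulated sketch of the RD-ITT estimate described above, not taken from any of the reviewed studies: generate an assignment variable with a treatment rule at a threshold, then compare local linear fits just below and just above the cut-off. The bandwidth, outcome model and true effect size are assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 4000
        z = rng.normal(0, 1, n)                               # continuous assignment variable
        treated = z >= 0.0                                    # exposure assigned by a threshold rule
        y = 0.5 * z - 0.8 * treated + rng.normal(0, 1, n)     # outcome; true ITT effect is -0.8

        h = 0.5                                               # bandwidth around the threshold
        below = (z > -h) & (z < 0)
        above = (z >= 0) & (z < h)

        # Local linear fits on each side, evaluated at the threshold (z = 0).
        fit_below = np.polyfit(z[below], y[below], 1)
        fit_above = np.polyfit(z[above], y[above], 1)
        rd_itt = np.polyval(fit_above, 0.0) - np.polyval(fit_below, 0.0)
        print("RD-ITT estimate at the threshold:", round(rd_itt, 2))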

  2. Hearing Tests Based on Biologically Calibrated Mobile Devices: Comparison With Pure-Tone Audiometry

    PubMed Central

    Masalski, Marcin; Grysiński, Tomasz; Kręcicki, Tomasz

    2018-01-01

    Background Hearing screening tests based on pure-tone audiometry may be conducted on mobile devices, provided that the devices are specially calibrated for the purpose. Calibration consists of determining the reference sound level and can be performed in relation to the hearing threshold of normal-hearing persons. In the case of devices provided by the manufacturer, together with bundled headphones, the reference sound level can be calculated once for all devices of the same model. Objective This study aimed to compare the hearing threshold measured by a mobile device that was calibrated using a model-specific, biologically determined reference sound level with the hearing threshold obtained in pure-tone audiometry. Methods Trial participants were recruited offline using face-to-face prompting from among Otolaryngology Clinic patients, who own Android-based mobile devices with bundled headphones. The hearing threshold was obtained on a mobile device by means of an open access app, Hearing Test, with incorporated model-specific reference sound levels. These reference sound levels were previously determined in uncontrolled conditions in relation to the hearing threshold of normal-hearing persons. An audiologist-assisted self-measurement was conducted by the participants in a sound booth, and it involved determining the lowest audible sound generated by the device within the frequency range of 250 Hz to 8 kHz. The results were compared with pure-tone audiometry. Results A total of 70 subjects, 34 men and 36 women, aged 18-71 years (mean 36, standard deviation [SD] 11) participated in the trial. The hearing threshold obtained on mobile devices was significantly different from the one determined by pure-tone audiometry with a mean difference of 2.6 dB (95% CI 2.0-3.1) and SD of 8.3 dB (95% CI 7.9-8.7). The number of differences not greater than 10 dB reached 89% (95% CI 88-91), whereas the mean absolute difference was obtained at 6.5 dB (95% CI 6.2-6.9). Sensitivity and specificity for a mobile-based screening method were calculated at 98% (95% CI 93-100.0) and 79% (95% CI 71-87), respectively. Conclusions The method of hearing self-test carried out on mobile devices with bundled headphones demonstrates high compatibility with pure-tone audiometry, which confirms its potential application in hearing monitoring, screening tests, or epidemiological examinations on a large scale. PMID:29321124

  3. Hearing Tests Based on Biologically Calibrated Mobile Devices: Comparison With Pure-Tone Audiometry.

    PubMed

    Masalski, Marcin; Grysiński, Tomasz; Kręcicki, Tomasz

    2018-01-10

    Hearing screening tests based on pure-tone audiometry may be conducted on mobile devices, provided that the devices are specially calibrated for the purpose. Calibration consists of determining the reference sound level and can be performed in relation to the hearing threshold of normal-hearing persons. In the case of devices provided by the manufacturer, together with bundled headphones, the reference sound level can be calculated once for all devices of the same model. This study aimed to compare the hearing threshold measured by a mobile device that was calibrated using a model-specific, biologically determined reference sound level with the hearing threshold obtained in pure-tone audiometry. Trial participants were recruited offline using face-to-face prompting from among Otolaryngology Clinic patients, who own Android-based mobile devices with bundled headphones. The hearing threshold was obtained on a mobile device by means of an open access app, Hearing Test, with incorporated model-specific reference sound levels. These reference sound levels were previously determined in uncontrolled conditions in relation to the hearing threshold of normal-hearing persons. An audiologist-assisted self-measurement was conducted by the participants in a sound booth, and it involved determining the lowest audible sound generated by the device within the frequency range of 250 Hz to 8 kHz. The results were compared with pure-tone audiometry. A total of 70 subjects, 34 men and 36 women, aged 18-71 years (mean 36, standard deviation [SD] 11) participated in the trial. The hearing threshold obtained on mobile devices was significantly different from the one determined by pure-tone audiometry with a mean difference of 2.6 dB (95% CI 2.0-3.1) and SD of 8.3 dB (95% CI 7.9-8.7). The number of differences not greater than 10 dB reached 89% (95% CI 88-91), whereas the mean absolute difference was obtained at 6.5 dB (95% CI 6.2-6.9). Sensitivity and specificity for a mobile-based screening method were calculated at 98% (95% CI 93-100.0) and 79% (95% CI 71-87), respectively. The method of hearing self-test carried out on mobile devices with bundled headphones demonstrates high compatibility with pure-tone audiometry, which confirms its potential application in hearing monitoring, screening tests, or epidemiological examinations on a large scale. ©Marcin Masalski, Tomasz Grysiński, Tomasz Kręcicki. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 10.01.2018.

  4. Searching for the definition of macrosomia through an outcome-based approach.

    PubMed

    Ye, Jiangfeng; Zhang, Lin; Chen, Yan; Fang, Fang; Luo, ZhongCheng; Zhang, Jun

    2014-01-01

    Macrosomia has been defined in various ways by obstetricians and researchers. The purpose of the present study was to search for a definition of macrosomia through an outcome-based approach. In a study of 30,831,694 singleton term live births and 38,053 stillbirths in the U.S. Linked Birth-Infant Death Cohort datasets (1995-2004), we compared the occurrence of stillbirth, neonatal death, and 5-min Apgar score less than four in subgroups of birthweight (4000-4099 g, 4100-4199 g, 4200-4299 g, 4300-4399 g, 4400-4499 g, 4500-4999 g vs. reference group 3500-4000 g) and birthweight percentile for gestational age (90th-94th percentile, 95th-96th, and ≥ 97th percentile, vs. reference group 75th-90th percentile). There was no significant increase in adverse perinatal outcomes until birthweight exceeded the 97th percentile. Weight-specific adjusted odds ratios (aORs) rose substantially, reaching 2, when birthweight exceeded 4500 g in Whites. In Blacks and Hispanics, the aORs exceeded 2 for a 5-min Apgar score less than four when birthweight exceeded 4300 g. For vaginal deliveries, the aORs of perinatal morbidity and mortality were larger for most of the subgroups, but the patterns remained the same. A birthweight greater than 4500 g in Whites, or 4300 g in Blacks and Hispanics, regardless of gestational age, is the optimal threshold to define macrosomia. A birthweight greater than the 97th percentile for a given gestational age, irrespective of race, is also reasonable to define macrosomia. The former may be more clinically useful and simpler to apply.

  5. Threshold concepts in finance: student perspectives

    NASA Astrophysics Data System (ADS)

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-10-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by finance academics. In addition, we investigate the potential of a framework of different types of knowledge to differentiate the delivery of the finance curriculum and the role of modelling in finance. Our purpose is to identify ways to improve curriculum design and delivery, leading to better student outcomes. Whilst we find that there is significant overlap between what students identify as important in finance and the threshold concepts identified by academics, much of this overlap is expressed by indirect reference to the concepts. Further, whilst different types of knowledge are apparent in the student data, there is evidence that students do not necessarily distinguish conceptual from other types of knowledge. As well as investigating the finance curriculum, the research demonstrates the use of threshold concepts to compare and contrast student and academic perceptions of a discipline and, as such, is of interest to researchers in education and other disciplines.

  6. Definitions of cardiovascular insufficiency and relation to outcomes in critically ill newborn infants

    PubMed Central

    Fernandez, Erika; Watterberg, Kristi L.; Faix, Roger G.; Yoder, Bradley A.; Walsh, Michele C.; Lacy, Conra Backstrom; Osborne, Karen A.; Das, Abhik; Kendrick, Douglas E.; Stoll, Barbara J.; Poindexter, Brenda B.; Laptook, Abbot R.; Kennedy, Kathleen A.; Schibler, Kurt; Bell, Edward F.; Van Meurs, Krisa P.; Frantz, Ivan D.; Goldberg, Ronald N.; Shankaran, Seetha; Carlo, Waldemar A.; Ehrenkranz, Richard A.; Sanchez, Pablo J.; Higgins, Rosemary D.

    2015-01-01

    Background We previously reported on the overall incidence, management and outcomes in infants with cardiovascular insufficiency (CVI). However, there are limited data on the relationship of specific definitions of CVI to short-term outcomes in term and late preterm newborn infants. Objective To evaluate how 4 definitions of CVI relate to short-term outcomes and death. Study Design The previously reported study was a multicenter, prospective cohort study of 647 infants ≥ 34 weeks gestation admitted to a Neonatal Research Network (NRN) newborn intensive care unit (NICU) and mechanically ventilated (MV) during their first 72 hours. The relationship between five short-term outcomes at discharge and the 4 different definitions of CVI was further analyzed. Results All 4 definitions were associated with a greater number of days on MV and days on O2. The definition using a threshold blood pressure (BP) measurement alone was not associated with days to full feeding, days in the NICU, or death. The definition based on treatment of CVI was associated with all outcomes, including death. Conclusions The definition using a threshold BP alone was not consistently associated with adverse short-term outcomes. Using only a threshold BP to determine therapy may not improve outcomes. PMID:25825962

  7. Comparison of bedside screening methods for frailty assessment in older adult trauma patients in the emergency department.

    PubMed

    Shah, Sachita P; Penn, Kevin; Kaplan, Stephen J; Vrablik, Michael; Jablonowski, Karl; Pham, Tam N; Reed, May J

    2018-04-14

    Frailty is linked to poor outcomes in older patients. We prospectively compared the utility of the picture-based Clinical Frailty Scale (CFS9), clinical assessments, and ultrasound muscle measurements against the reference FRAIL scale in older adult trauma patients in the emergency department (ED). We recruited a convenience sample of adults 65 years or older with blunt trauma and injury severity scores <9. We queried subjects (or surrogates) on the FRAIL scale, and compared this to: physician-based and subject/surrogate-based CFS9; mid-upper arm circumference (MUAC) and grip strength; and ultrasound (US) measures of muscle thickness (limbs and abdominal wall). We derived optimal diagnostic thresholds and calculated performance metrics for each comparison using sensitivity, specificity, predictive values, and area under receiver operating characteristic curves (AUROC). Fifteen of 65 patients were frail by the FRAIL scale (23%). CFS9 performed well when assessed by subject/surrogate (AUROC 0.91 [95% CI 0.84-0.98]) or physician (AUROC 0.77 [95% CI 0.63-0.91]). Optimal thresholds for both physician and subject/surrogate assessments were a CFS9 of 4 or greater. If both physician and subject/surrogate provided scores <4, sensitivity and negative predictive value were 90.0% (54.1-99.5%) and 95.0% (73.1-99.7%). Grip strength and MUAC were not predictors. US measures that combined biceps and quadriceps thickness showed an AUROC of 0.75 compared to the reference standard. The ED needs rapid, validated tools to screen for frailty. The CFS9 has excellent negative predictive value in ruling out frailty. Ultrasound of combined biceps and quadriceps has modest concordance as an alternative in trauma patients who cannot provide a history. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. The relationship between extreme precipitation events and landslides distributions in 2009 in Lower Austria

    NASA Astrophysics Data System (ADS)

    Katzensteiner, H.; Bell, R.; Petschko, H.; Glade, T.

    2012-04-01

    The prediction and forecasting of widespread landsliding for a given triggering event is an open research question. Numerous studies have tried to link spatial rainfall and landslide distributions. This study focuses on analysing the relationship between intensive precipitation and rainfall-triggered shallow landslides in the year 2009 in Lower Austria. Landslide distributions were obtained from the building ground register, which is maintained by the Geological Survey of Lower Austria and contains detailed information on landslides registered through damage reports. Spatially distributed rainfall estimates were extracted from the INCA (Integrated Nowcasting through Comprehensive Analysis) precipitation analysis, a combination of station data interpolation and radar data at a spatial resolution of 1 km developed by the Central Institute for Meteorology and Geodynamics (ZAMG), Vienna, Austria. The importance of the data source is shown by comparing rainfall data based on reference gauges, spatial interpolation and the INCA analysis for a certain storm period. INCA precipitation data can detect precipitating cells that do not hit a station but might trigger a landslide, which is an advantage over the use of reference stations for the definition of rainfall thresholds. Empirical thresholds at the regional scale were determined based on rainfall intensity and duration in the year 2009 and landslide information. These thresholds depend on the criteria that separate landslide-triggering from non-triggering precipitation events, and different approaches for defining thresholds alter the shape of the threshold as well. A preliminary threshold of I = 8.8263 * D^(-0.672) for extreme rainfall events in summer in Lower Austria was defined. A verification of the threshold with similar events from other years, as well as further analyses based on a larger landslide database, is in progress.
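
    A small sketch applying the intensity-duration threshold quoted above, I = 8.8263 * D^(-0.672); the units (duration in hours, intensity in mm/h) and the example events are assumptions, since neither is stated in the abstract. Events whose mean intensity lies above the curve are flagged as potentially landslide-triggering.

        def critical_intensity(duration_h):
            # Empirical summer threshold for Lower Austria quoted in the abstract.
            return 8.8263 * duration_h ** (-0.672)

        events = [(2.0, 12.0), (6.0, 4.0), (24.0, 0.8)]   # (duration in hours, mean intensity in mm/h)
        for duration, intensity in events:
            limit = critical_intensity(duration)
            print(f"D={duration:5.1f} h, I={intensity:4.1f} mm/h, "
                  f"threshold={limit:4.2f} mm/h, exceeds={intensity >= limit}")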

  9. Different Imaging Strategies in Patients With Possible Basilar Artery Occlusion: Cost-Effectiveness Analysis.

    PubMed

    Beyer, Sebastian E; Hunink, Myriam G; Schöberl, Florian; von Baumgarten, Louisa; Petersen, Steffen E; Dichgans, Martin; Janssen, Hendrik; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H

    2015-07-01

    This study evaluated the cost-effectiveness of different noninvasive imaging strategies in patients with possible basilar artery occlusion. A Markov decision analytic model was used to evaluate long-term outcomes resulting from strategies using computed tomographic angiography (CTA), magnetic resonance imaging, nonenhanced CT, or duplex ultrasound with intravenous (IV) thrombolysis being administered after positive findings. The analysis was performed from the societal perspective based on US recommendations. Input parameters were derived from the literature. Costs were obtained from United States costing sources and published literature. Outcomes were lifetime costs, quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and net monetary benefits, with a willingness-to-pay threshold of $80,000 per QALY. The strategy with the highest net monetary benefit was considered the most cost-effective. Extensive deterministic and probabilistic sensitivity analyses were performed to explore the effect of varying parameter values. In the reference case analysis, CTA dominated all other imaging strategies. CTA yielded 0.02 QALYs more than magnetic resonance imaging and 0.04 QALYs more than duplex ultrasound followed by CTA. At a willingness-to-pay threshold of $80,000 per QALY, CTA yielded the highest net monetary benefits. The probability that CTA is cost-effective was 96% at a willingness-to-pay threshold of $80,000/QALY. Sensitivity analyses showed that duplex ultrasound was cost-effective only for a prior probability of ≤0.02 and that these results were only minimally influenced by duplex ultrasound sensitivity and specificity. Nonenhanced CT and magnetic resonance imaging never became the most cost-effective strategy. Our results suggest that CTA in patients with possible basilar artery occlusion is cost-effective. © 2015 The Authors.
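
    A toy illustration of the decision rule used in such analyses, with invented cost and QALY numbers rather than the study's model outputs: compute the net monetary benefit NMB = QALYs × willingness-to-pay - cost for each imaging strategy and pick the largest at the $80,000/QALY threshold.

        wtp = 80_000                                 # willingness-to-pay per QALY, US$
        strategies = {                               # hypothetical (lifetime cost US$, lifetime QALYs)
            "CTA":                (40_000, 8.30),
            "MRI":                (42_500, 8.28),
            "duplex US then CTA": (40_500, 8.26),
            "nonenhanced CT":     (39_000, 8.10),
        }

        nmb = {name: qalys * wtp - cost for name, (cost, qalys) in strategies.items()}
        for name, value in sorted(nmb.items(), key=lambda kv: -kv[1]):
            print(f"{name:20s} NMB = ${value:,.0f}")
        print("most cost-effective at this threshold:", max(nmb, key=nmb.get))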

  10. Different Imaging Strategies in Patients With Possible Basilar Artery Occlusion

    PubMed Central

    Beyer, Sebastian E.; Hunink, Myriam G.; Schöberl, Florian; von Baumgarten, Louisa; Petersen, Steffen E.; Dichgans, Martin; Janssen, Hendrik; Ertl-Wagner, Birgit; Reiser, Maximilian F.

    2015-01-01

    Background and Purpose— This study evaluated the cost-effectiveness of different noninvasive imaging strategies in patients with possible basilar artery occlusion. Methods— A Markov decision analytic model was used to evaluate long-term outcomes resulting from strategies using computed tomographic angiography (CTA), magnetic resonance imaging, nonenhanced CT, or duplex ultrasound with intravenous (IV) thrombolysis being administered after positive findings. The analysis was performed from the societal perspective based on US recommendations. Input parameters were derived from the literature. Costs were obtained from United States costing sources and published literature. Outcomes were lifetime costs, quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and net monetary benefits, with a willingness-to-pay threshold of $80 000 per QALY. The strategy with the highest net monetary benefit was considered the most cost-effective. Extensive deterministic and probabilistic sensitivity analyses were performed to explore the effect of varying parameter values. Results— In the reference case analysis, CTA dominated all other imaging strategies. CTA yielded 0.02 QALYs more than magnetic resonance imaging and 0.04 QALYs more than duplex ultrasound followed by CTA. At a willingness-to-pay threshold of $80 000 per QALY, CTA yielded the highest net monetary benefits. The probability that CTA is cost-effective was 96% at a willingness-to-pay threshold of $80 000/QALY. Sensitivity analyses showed that duplex ultrasound was cost-effective only for a prior probability of ≤0.02 and that these results were only minimally influenced by duplex ultrasound sensitivity and specificity. Nonenhanced CT and magnetic resonance imaging never became the most cost-effective strategy. Conclusions— Our results suggest that CTA in patients with possible basilar artery occlusion is cost-effective. PMID:26022634

  11. MEthods of ASsessing blood pressUre: identifying thReshold and target valuEs (MeasureBP): a review & study protocol.

    PubMed

    Blom, Kimberly C; Farina, Sasha; Gomez, Yessica-Haydee; Campbell, Norm R C; Hemmelgarn, Brenda R; Cloutier, Lyne; McKay, Donald W; Dawes, Martin; Tobe, Sheldon W; Bolli, Peter; Gelfer, Mark; McLean, Donna; Bartlett, Gillian; Joseph, Lawrence; Featherstone, Robin; Schiffrin, Ernesto L; Daskalopoulou, Stella S

    2015-04-01

    Despite progress in automated blood pressure measurement (BPM) technology, there is limited research linking hard outcomes to automated office BPM (OBPM) treatment targets and thresholds. Equivalences for automated BPM devices have been estimated from approximations of standardized manual measurements of 140/90 mmHg. Until outcome-driven targets and thresholds become available for automated measurement methods, deriving evidence-based equivalences between automated methods and standardized manual OBPM is the next best solution. The MeasureBP study group was initiated by the Canadian Hypertension Education Program to close this critical knowledge gap. MeasureBP aims to define evidence-based equivalent values between standardized manual OBPM and automated BPM methods by synthesizing available evidence using a systematic review and individual subject-level data meta-analyses. This manuscript provides a review of the literature and the MeasureBP study protocol. These results will lay the evidence-based foundation to resolve uncertainties within blood pressure guidelines which, in turn, will improve the management of hypertension.

  12. Defining indoor heat thresholds for health in the UK.

    PubMed

    Anderson, Mindy; Carmichael, Catriona; Murray, Virginia; Dengel, Andy; Swainson, Michael

    2013-05-01

    It has been recognised that as outdoor ambient temperatures increase past a particular threshold, so do mortality/morbidity rates. However, similar thresholds for indoor temperatures have not yet been identified. Due to a warming climate, the non-sustainability of air conditioning as a solution, and the desire for more energy-efficient airtight homes, thresholds for indoor temperature should be defined as a public health issue. The aim of this paper is to outline the need for indoor heat thresholds and to establish whether they can be identified. Our objectives include: describing how indoor temperature is measured; highlighting threshold measurements and indices; describing adaptation to heat; summarising the risk that heat poses to susceptible groups; reviewing the current evidence on the link between sleep, heat and health; exploring current heat and health warning systems and thresholds; exploring the built environment and the risk of overheating; and identifying the gaps in current knowledge and research. A global literature search of key databases was conducted using a pre-defined set of keywords to retrieve peer-reviewed and grey literature. The paper applies the findings to the context of the UK. In total, 96 articles, reports, government documents and textbooks were analysed and a gap analysis was conducted. Evidence on the effects of indoor heat on health implies that buildings are modifiers of the effect of climate on health outcomes. Personal exposure and place-based heat studies showed the most significant correlations between indoor heat and health outcomes. However, the data are sparse and inconclusive in terms of identifying evidence-based definitions for thresholds. Further research needs to be conducted in order to provide an evidence base for threshold determination. Indoor and outdoor heat are related but differ in terms of language and measurement. Future collaboration between the health and building sectors is needed to develop a common language and an index for indoor heat and health thresholds in a changing climate.

  13. Deactivating stimulation sites based on low-rate thresholds improves spectral ripple and speech reception thresholds in cochlear implant users.

    PubMed

    Zhou, Ning

    2017-03-01

    The study examined whether the benefit of deactivating stimulation sites estimated to have broad neural excitation could be attributed to improved spectral resolution in cochlear implant users. The subjects' spatial neural excitation pattern was estimated by measuring low-rate detection thresholds across the array [see Zhou (2016). PLoS One 11, e0165476]. Spectral resolution, as assessed by spectral-ripple discrimination thresholds, significantly improved after deactivation of five high-threshold sites. The magnitude of improvement in spectral-ripple discrimination thresholds predicted the magnitude of improvement in speech reception thresholds after deactivation. Results suggested that a smaller number of relatively independent channels provides a better outcome than using all channels, some of which might interact.

  14. Assessing the nutrient intake of a low-carbohydrate, high-fat (LCHF) diet: a hypothetical case study design

    PubMed Central

    Zinn, Caryn; Rush, Amy; Johnson, Rebecca

    2018-01-01

    Objective The low-carbohydrate, high-fat (LCHF) diet is becoming increasingly employed in clinical dietetic practice as a means to manage many health-related conditions. Yet, it remains contentious in nutrition circles due to a belief that the diet is devoid of nutrients and concern around its saturated fat content. This work aimed to assess the micronutrient intake of the LCHF diet under two conditions of saturated fat thresholds. Design In this descriptive study, two LCHF meal plans were designed for two hypothetical cases representing the average Australian male and female weight-stable adult. Nationally documented heights, a body mass index of 22.5 (used to establish weight) and an activity factor of 1.6 were used to estimate total energy intake with the Schofield equation. Carbohydrate was limited to <130 g, protein was set at 15%–25% of total energy and fat supplied the remaining calories. One version of the diet aligned with the national saturated fat guideline threshold of <10% of total energy and the other included saturated fat ad libitum. Primary outcomes The primary outcomes included all micronutrients, which were assessed using FoodWorks dietary analysis software against national Australian/New Zealand nutrient reference value (NRV) thresholds. Results All of the meal plans exceeded the minimum NRV thresholds, apart from iron in the female meal plans, which achieved 86%–98% of the threshold. Saturated fat intake could not practically be reduced below the 10% threshold in the male plan, exceeding it by 2 g (0.6%). Conclusion Despite macronutrient proportions not aligning with current national dietary guidelines, a well-planned LCHF meal plan can be considered micronutrient replete. This is an important finding for health professionals, consumers and critics of LCHF nutrition, as it dispels the myth that these diets are suboptimal in their micronutrient supply. As with any diet, for optimal nutrient achievement, meals need to be well formulated. PMID:29439004
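    The energy-estimation step described above (weight back-calculated from BMI and height, basal rate via the Schofield equation, scaled by an activity factor) can be sketched as follows. The Schofield coefficients shown are commonly cited values for adults aged 30 to 60 years and should be verified against the original source; the heights are illustrative, not the national figures used in the study.

```python
# Minimal sketch of the energy-estimation step, under stated assumptions:
# weight is back-calculated from a target BMI of 22.5 and a documented height,
# resting energy is estimated with a Schofield-type equation, and total energy
# applies an activity factor of 1.6. Coefficients are commonly cited values for
# adults aged 30-60 (kcal/day) and should be checked; heights are hypothetical.

def weight_from_bmi(bmi: float, height_m: float) -> float:
    """Weight (kg) implied by a BMI and height: BMI = weight / height^2."""
    return bmi * height_m ** 2

def schofield_bmr_kcal(weight_kg: float, sex: str) -> float:
    """Approximate Schofield basal metabolic rate for adults aged 30-60 (kcal/day)."""
    if sex == "male":
        return 11.472 * weight_kg + 873.1
    if sex == "female":
        return 8.126 * weight_kg + 845.6
    raise ValueError("sex must be 'male' or 'female'")

def total_energy_kcal(bmi: float, height_m: float, sex: str,
                      activity_factor: float = 1.6) -> float:
    weight = weight_from_bmi(bmi, height_m)
    return schofield_bmr_kcal(weight, sex) * activity_factor

if __name__ == "__main__":
    # Hypothetical average heights (m); substitute the national reference values.
    for sex, height in (("male", 1.75), ("female", 1.62)):
        print(sex, round(total_energy_kcal(22.5, height, sex)), "kcal/day")
```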

  15. Pre-operative Thresholds for Achieving Meaningful Clinical Improvement after Arthroscopic Treatment of Femoroacetabular Impingement

    PubMed Central

    Nwachukwu, Benedict U.; Fields, Kara G.; Nawabi, Danyal H.; Kelly, Bryan T.; Ranawat, Anil S.

    2016-01-01

    Objectives: Knowledge of the thresholds and determinants for successful femoroacetabular impingement (FAI) treatment is evolving. The primary purpose of this study was to define pre-operative outcome score thresholds that can be used to predict patients most likely to achieve a meaningful clinically important difference (MCID) after arthroscopic FAI treatment. Secondarily, determinants of achieving MCID were evaluated. Methods: A prospective institutional hip arthroscopy registry was reviewed to identify patients with FAI treated with arthroscopic labral surgery, acetabular rim trimming, and femoral osteochondroplasty. The modified Harris Hip Score (mHHS), the Hip Outcome Score (HOS) and the international Hip Outcome Tool (iHOT-33) tools were administered at baseline and at one year post-operatively. MCID was calculated using a distribution-based method. A receiver operating characteristic (ROC) analysis was used to calculate cohort-based threshold values predictive of achieving MCID. Area under the curve (AUC) was used to define predictive ability (strength of association), with AUC >0.7 considered acceptably predictive. Univariate and multivariable analyses were used to analyze demographic, radiographic and intra-operative factors associated with achieving MCID. Results: There were 374 patients (mean ± SD age, 32.9 ± 10.5 years), and 56.4% were female. The MCID for mHHS, HOS activities of daily living (HOS-ADL), HOS Sports, and iHOT-33 was 8.2, 8.4, 14.5, and 12.0, respectively. ROC analysis (threshold, % achieving MCID, strength of association) for these tools in our population was: mHHS (61.6, 78%, 0.68), HOS-ADL (83.8, 68%, 0.84), HOS-Sports (63.9, 64%, 0.74), and iHOT-33 (54.3, 82%, 0.65). Likelihood of achieving MCID declined above and increased below these thresholds. In univariate analysis, female sex, femoral version, lower acetabular Outerbridge score and increasing CT sagittal center edge angle (CEA) were predictive of achieving MCID. In multivariable analysis, sagittal CEA was the only variable maintaining significance (p = 0.032). Conclusion: We used a large prospective hip arthroscopy database to identify pre-operative patient outcome score thresholds predictive of meaningful post-operative outcome improvement after arthroscopic FAI treatment. This is the largest reported hip arthroscopy cohort to define MCID and the first to do so for iHOT-33. The HOS-ADL may have the best predictive ability for achieving MCID after hip arthroscopy. Patients with relatively high pre-operative ADL, quality of life and functional status appear to have a high chance of achieving MCID up to our defined thresholds. Hip dysplasia is an important outcome modifier. The findings of this study may be useful for managing preoperative expectations for patients undergoing arthroscopic FAI surgery.
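    The two threshold calculations described here can be sketched as below, under assumptions: MCID is taken as half the standard deviation of the change scores (one common distribution-based definition; the record does not say which variant was used), and the pre-operative cut-off is the baseline score that maximises Youden's J for predicting MCID achievement, with AUC as the strength of association. All data in the demo are simulated.

```python
# Minimal sketch: distribution-based MCID plus a ROC-derived baseline threshold.
import numpy as np

def distribution_mcid(change_scores: np.ndarray) -> float:
    """Half-SD distribution-based MCID of a change score (assumed variant)."""
    return 0.5 * np.std(change_scores, ddof=1)

def roc_threshold(baseline: np.ndarray, achieved_mcid: np.ndarray):
    """Baseline-score cut-off maximising Youden's J, plus the AUC.

    Lower baseline scores are assumed to favour achieving MCID, as in the abstract
    ('likelihood ... increased below these thresholds').
    """
    thresholds = np.sort(baseline)
    pos = achieved_mcid.sum()
    neg = len(achieved_mcid) - pos
    tpr, fpr = [], []
    for t in thresholds:
        pred = baseline <= t                      # predict "will achieve MCID"
        tpr.append((pred & (achieved_mcid == 1)).sum() / pos)
        fpr.append((pred & (achieved_mcid == 0)).sum() / neg)
    tpr, fpr = np.array(tpr), np.array(fpr)
    auc = np.trapz(tpr, fpr)                      # area under the ROC curve
    best = np.argmax(tpr - fpr)                   # Youden's J = TPR - FPR
    return thresholds[best], auc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(60, 15, 300)            # simulated baseline scores
    change = rng.normal(20, 16, 300) - 0.2 * (baseline - 60)
    mcid = distribution_mcid(change)
    achieved = (change >= mcid).astype(int)
    cutoff, auc = roc_threshold(baseline, achieved)
    print(f"MCID = {mcid:.1f}, baseline threshold = {cutoff:.1f}, AUC = {auc:.2f}")
```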

  16. Accuracy of cancellous bone volume fraction measured by micro-CT scanning.

    PubMed

    Ding, M; Odgaard, A; Hvid, I

    1999-03-01

    Volume fraction, the single most important parameter in describing trabecular microstructure, can easily be calculated from three-dimensional reconstructions of micro-CT images. This study sought to quantify the accuracy of this measurement. One hundred and sixty human cancellous bone specimens covering a large range of volume fractions (9.8-39.8%) were produced. The specimens were micro-CT scanned, and the volume fraction based on Archimedes' principle was determined as a reference. After scanning, all micro-CT data were segmented using individual thresholds determined by the scanner-supplied algorithm (method I). A significant deviation of the volume fraction from method I was found: both the y-intercept and the slope of the regression line were significantly different from those of the Archimedes-based volume fraction (p < 0.001). New individual thresholds were then determined by calibrating the volume fraction to the Archimedes-based volume fractions (method II). The mean thresholds of the two methods were applied to segment 20 randomly selected specimens. The results showed that the volume fraction obtained with the mean threshold of method I was underestimated by 4% (p = 0.001), whereas the mean threshold of method II yielded accurate values. The precision of the measurement was excellent. Our data show that care must be taken when applying thresholds in generating 3-D data, and that a fixed threshold may be used to obtain reliable volume fraction data. This fixed threshold may be determined from the Archimedes-based volume fraction of a subgroup of specimens. The threshold may vary between different materials, and so it should be determined whenever a study series is performed.
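    Method II amounts to choosing a grey-level threshold so that segmented volume fractions agree, on average, with the Archimedes-based reference values of a calibration subgroup. A minimal sketch of that calibration is given below; the grid-search approach, the synthetic volumes and all numeric values are assumptions for illustration, not the study's procedure.

```python
# Minimal sketch of threshold calibration against Archimedes-based volume fractions.
import numpy as np

def volume_fraction(volume: np.ndarray, threshold: float) -> float:
    """Fraction of voxels classified as bone at a given grey-level threshold."""
    return float((volume >= threshold).mean())

def calibrate_threshold(volumes, reference_vf, lo=0.0, hi=1.0, steps=200):
    """Grid-search the threshold whose mean segmented VF best matches the reference mean."""
    candidates = np.linspace(lo, hi, steps)
    target = float(np.mean(reference_vf))
    errors = [abs(np.mean([volume_fraction(v, t) for v in volumes]) - target)
              for t in candidates]
    return candidates[int(np.argmin(errors))]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic "scans": bright bone voxels, dark marrow voxels, plus noise.
    volumes, reference_vf = [], []
    for true_vf in rng.uniform(0.10, 0.40, 20):
        bone = rng.random((32, 32, 32)) < true_vf
        grey = np.where(bone, 0.7, 0.3) + rng.normal(0, 0.05, bone.shape)
        volumes.append(grey)
        reference_vf.append(true_vf)      # stand-in for the Archimedes measurement
    print(f"Calibrated threshold: {calibrate_threshold(volumes, reference_vf):.3f}")
```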

  17. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms that are classical in fields such as optical character recognition, tissue engineering or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
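    One of the algorithms evaluated, the Ridler method (iterative intermeans, also known as ISODATA), can be stated in a few lines: the threshold is repeatedly set to the midpoint of the mean intensities above and below it until it converges. The sketch below is a generic implementation applied to a synthetic image, not the study's code.

```python
# Minimal sketch of Ridler (iterative intermeans) thresholding: the threshold is the
# midpoint of the means of the two intensity classes it induces, iterated to convergence.
import numpy as np

def ridler_threshold(image: np.ndarray, tol: float = 1e-3, max_iter: int = 100) -> float:
    t = float(image.mean())                 # start at the global mean intensity
    for _ in range(max_iter):
        low, high = image[image < t], image[image >= t]
        if low.size == 0 or high.size == 0:
            break
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
    return t

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic "PET slice": a hot sphere on a warm background, with noise.
    yy, xx = np.mgrid[:64, :64]
    sphere = ((yy - 32) ** 2 + (xx - 32) ** 2) < 10 ** 2
    img = np.where(sphere, 8.0, 2.0) + rng.normal(0, 0.5, sphere.shape)
    t = ridler_threshold(img)
    print(f"Automatic threshold: {t:.2f}; 42%-of-max threshold: {0.42 * img.max():.2f}")
```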

  18. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms that are classical in fields such as optical character recognition, tissue engineering or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  19. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle

    PubMed Central

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
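    The trigger described here reduces to comparing a scalar error measure derived from the covariance matrix with a (possibly distance-dependent) threshold. A minimal sketch follows; the linear threshold form and all numbers are assumptions, and the covariance block would come from the vehicle's unscented Kalman filter rather than being hard-coded.

```python
# Minimal sketch of the event-triggering rule: request a measurement whenever the DRMS
# computed from the position block of the estimator's covariance exceeds a threshold
# that depends on the distance to the reference location (assumed linear form).
import numpy as np

def drms(P: np.ndarray) -> float:
    """DRMS of the horizontal position estimate: sqrt(var_x + var_y)."""
    return float(np.sqrt(P[0, 0] + P[1, 1]))

def adaptive_threshold(dist_to_ref: float, base: float = 0.2, gain: float = 0.05) -> float:
    """Looser threshold far from the reference point, tighter close to it (assumed form)."""
    return base + gain * dist_to_ref

def measurement_requested(P: np.ndarray, dist_to_ref: float) -> bool:
    return drms(P) > adaptive_threshold(dist_to_ref)

if __name__ == "__main__":
    # Hypothetical 2x2 position covariance (m^2) taken from the filter's state covariance.
    P_pos = np.array([[0.09, 0.01],
                      [0.01, 0.16]])
    for d in (20.0, 2.0):
        print(f"distance {d:5.1f} m -> request measurement: {measurement_requested(P_pos, d)}")
```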

  20. Utility of High Temporal Resolution Observations for Heat Health Event Characterization

    NASA Astrophysics Data System (ADS)

    Palecki, M. A.

    2017-12-01

    Many heat health watch systems produce a binary on/off warning when conditions are predicted to exceed a given threshold during a day. Days with warnings and their mortality/morbidity statistics are analyzed relative to days not warned to determine the impacts of the event on human health, the effectiveness of warnings, and other statistics. The climate analyses of the heat waves or extreme temperature events are often performed with hourly or daily observations of air temperature, humidity, and other measured or derived variables, especially the maxima and minima of these data. However, since the beginning of the century, 5-minute observations have been readily available for many weather and climate stations in the United States. The NOAA National Centers for Environmental Information (NCEI) has been collecting 5-minute observations from the NOAA Automated Surface Observing System (ASOS) stations since 2000, and from the U.S. Climate Reference Network (USCRN) stations since 2005. This presentation will demonstrate the efficacy of utilizing 5-minute environmental observations to characterize heat waves by counting the length of time conditions exceed extreme thresholds, based on individual and multiple variables and on derived variables such as the heat index. The length and depth of recovery periods between daytime heating periods will also be examined. The length of time under extreme conditions will influence health outcomes for those directly exposed. Longer periods of dangerous conditions could also increase the chances of poor health outcomes, through cumulative impacts, for those exposed only intermittently.
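    Counting exceedance time from a 5-minute series is straightforward; the sketch below computes the total time above a threshold and the longest continuous run above it for one synthetic day. The 35 degree heat-index threshold and the synthetic series are illustrative, not values from the presentation.

```python
# Minimal sketch of exceedance bookkeeping on 5-minute observations: total minutes above
# a threshold and the longest continuous run above it. Threshold and data are synthetic.
import numpy as np

def exceedance_stats(values: np.ndarray, threshold: float, step_minutes: int = 5):
    above = values > threshold
    total_minutes = int(above.sum()) * step_minutes
    longest = run = 0
    for flag in above:
        run = run + 1 if flag else 0
        longest = max(longest, run)
    return total_minutes, longest * step_minutes

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # One synthetic day of 5-minute heat-index values (deg C) with an afternoon peak.
    minutes = np.arange(0, 24 * 60, 5)
    heat_index = (28 + 9 * np.sin((minutes - 300) / (24 * 60) * 2 * np.pi)
                  + rng.normal(0, 0.5, minutes.size))
    total, longest = exceedance_stats(heat_index, threshold=35.0)
    print(f"time above 35 C: {total} min; longest continuous run: {longest} min")
```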

  1. Better Informing Decision Making with Multiple Outcomes Cost-Effectiveness Analysis under Uncertainty in Cost-Disutility Space

    PubMed Central

    McCaffrey, Nikki; Agar, Meera; Harlum, Janeane; Karnon, Jonathon; Currow, David; Eckermann, Simon

    2015-01-01

    Introduction Comparing multiple, diverse outcomes with cost-effectiveness analysis (CEA) is important, yet challenging in areas like palliative care where domains are unamenable to integration with survival. Generic multi-attribute utility values exclude important domains and non-health outcomes, while partial analyses (where outcomes are considered separately, with their joint relationship under uncertainty ignored) lead to incorrect inference regarding preferred strategies. Objective The objective of this paper is to consider whether such decision making can be better informed with alternative presentation and summary measures, extending methods previously shown to have advantages in multiple strategy comparison. Methods Multiple outcomes CEA of a home-based palliative care model (PEACH) relative to usual care is undertaken in cost-disutility (CDU) space and compared with analysis on the cost-effectiveness plane. Summary measures developed for comparing strategies across potential threshold values for multiple outcomes include: expected net loss (ENL) planes quantifying differences in expected net benefit; the ENL contour identifying preferred strategies minimising ENL and their expected value of perfect information; and cost-effectiveness acceptability planes showing the probability of strategies minimising ENL. Results Conventional analysis suggests PEACH is cost-effective when the threshold value per additional day at home exceeds $1,068, or dominated by usual care when only the proportion of home deaths is considered. In contrast, neither alternative dominates in CDU space, where costs and outcomes are jointly considered and the optimal strategy depends on threshold values. For example, PEACH minimises ENL when the threshold value per additional day at home and the threshold value for dying at home are both $2,000, with a 51.6% chance of PEACH being cost-effective. Conclusion Comparison in CDU space and the associated summary measures have distinct advantages for multiple domain comparisons, aiding transparent and robust joint comparison of costs and multiple effects under uncertainty across potential threshold values for effect, better informing net benefit assessment and related reimbursement and research decisions. PMID:25751629

  2. Trends in Biometric Health Indices Within an Employer-Sponsored Wellness Program With Outcome-Based Incentives.

    PubMed

    Fu, Patricia Lin; Bradley, Kent L; Viswanathan, Sheila; Chan, June M; Stampfer, Meir

    2016-07-01

    To evaluate changes in employees' biometrics over time relative to outcome-based incentive thresholds. Retrospective cohort analysis of biometric screening participants (n = 26 388). Large employer primarily in the Western United States. Office, retail, and distribution workforce. A voluntary outcome-based biometric screening program, incentivized with health insurance premium discounts. Body mass index (BMI), cholesterol, blood glucose, blood pressure, and nicotine. Participants were followed from their first year of participation, and changes in these measures were evaluated. On average, participants who did not meet the incentive threshold at baseline decreased their BMI (1%), glucose (8%), blood pressure (systolic 9%, diastolic 8%), and total cholesterol (8%) by year 2, with improvements generally sustained or continued during each additional year of participation. On average, individuals at high health risk who participated in a financially incentivized biometric assessment program improved their health indices over time. Further research is needed to understand the key determinants that drive the health improvement indicated here. © The Author(s) 2016.

  3. Routine magnetic resonance imaging for idiopathic olfactory loss: a modeling-based economic evaluation.

    PubMed

    Rudmik, Luke; Smith, Kristine A; Soler, Zachary M; Schlosser, Rodney J; Smith, Timothy L

    2014-10-01

    Idiopathic olfactory loss is a common clinical scenario encountered by otolaryngologists. When trying to allocate limited health care resources appropriately, clinicians can find it difficult to decide whether to obtain a magnetic resonance imaging (MRI) scan to investigate for a rare intracranial abnormality. To evaluate the cost-effectiveness of ordering routine MRI in patients with idiopathic olfactory loss. We performed a modeling-based economic evaluation with a time horizon of less than 1 year. Patients included in the analysis had idiopathic olfactory loss, defined by no preceding viral illness or head trauma and negative findings on physical examination and nasal endoscopy. Routine MRI vs no-imaging strategies. We developed a decision tree economic model from the societal perspective. Effectiveness, probability, and cost data were obtained from the published literature. Litigation rates and costs related to a missed diagnosis were obtained from the Physicians Insurers Association of America. A univariate threshold analysis and multivariate probabilistic sensitivity analysis were performed to quantify the degree of certainty in the economic conclusion of the reference case. The comparative groups included those who underwent routine MRI of the brain with contrast alone and those who underwent no brain imaging. The primary outcome was the cost per correct diagnosis of idiopathic olfactory loss. The mean (SD) cost for the MRI strategy totaled $2400.00 ($1717.54) and was effective 100% of the time, whereas the mean (SD) cost for the no-imaging strategy totaled $86.61 ($107.40) and was effective 98% of the time. The incremental cost-effectiveness ratio for the MRI strategy compared with the no-imaging strategy was $115 669.50, which is higher than most acceptable willingness-to-pay thresholds. The threshold analysis demonstrated that when the probability of having a treatable intracranial disease process reached 7.9%, the incremental cost-effectiveness ratio for MRI vs no imaging was $24 654.38. The probabilistic sensitivity analysis demonstrated that the no-imaging strategy was the cost-effective decision with 81% certainty at a willingness-to-pay threshold of $50 000. This economic evaluation suggests that the most cost-effective decision is not to obtain a routine MRI scan of the brain in patients with idiopathic olfactory loss. Outcomes from this study may be used to counsel patients and aid in the decision-making process.

  4. An approach to defect inspection for packing presswork with virtual orientation points and threshold template image

    NASA Astrophysics Data System (ADS)

    Hao, Xiangyang; Liu, Songlin; Zhao, Fulai; Jiang, Lixing

    2015-05-01

    Packing presswork is an important aspect of industrial products, especially luxury commodities such as cigarettes. To ensure that the packing presswork is of acceptable quality, products should be inspected piece by piece and defective ones rejected using a vision-based inspection method, which offers advantages such as non-contact inspection, high efficiency and automation. Vision-based inspection of packing presswork mainly consists of image acquisition, image registration and defect inspection. Registration between the inspected image and the reference image is the foundation and premise of visual inspection. In order to achieve fast, reliable and accurate image registration, a registration method based on virtual orientation points is put forward; the registration precision between the inspected image and the reference image reaches the sub-pixel level. Since defects have no fixed position, shape, size or color, three measures are taken to improve the inspection. Firstly, the concept of a threshold template image is put forward to resolve the problem of a variable threshold on the intensity difference. Secondly, the color difference is calculated by comparing each pixel with the pixels adjacent to its corresponding position in the reference image, to avoid false defects resulting from color registration errors. Thirdly, an image pyramid strategy is applied in the inspection algorithm to improve inspection efficiency. Experiments show that the algorithm is effective for defect inspection and takes 27.4 ms on average to inspect a piece of cigarette packing presswork.
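    The first measure, a per-pixel tolerance image rather than a single global difference threshold, can be illustrated compactly. In the sketch below the tolerance is the local min-max range of the reference image plus a fixed margin; that particular formulation, and all numeric values, are illustrative assumptions rather than the paper's exact method.

```python
# Minimal sketch of a "threshold template image": each pixel gets its own tolerance,
# larger where the reference has strong local contrast (edges), smaller in flat regions.
import numpy as np

def threshold_template(reference: np.ndarray, window: int = 3, margin: float = 10.0) -> np.ndarray:
    """Per-pixel tolerance: local min-max range of the reference plus a fixed margin."""
    h, w = reference.shape
    pad = window // 2
    padded = np.pad(reference, pad, mode="edge")
    tol = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            tol[y, x] = float(patch.max() - patch.min()) + margin
    return tol

def defect_mask(inspected: np.ndarray, reference: np.ndarray, tol: np.ndarray) -> np.ndarray:
    """Flag pixels whose absolute difference from the reference exceeds the local tolerance."""
    return np.abs(inspected.astype(float) - reference.astype(float)) > tol

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # Synthetic reference print: a smooth horizontal intensity gradient.
    reference = np.tile(np.linspace(100.0, 160.0, 64), (64, 1))
    inspected = reference + rng.normal(0, 2, reference.shape)   # registration/print noise
    inspected[20:24, 30:34] += 60.0                             # simulated printing defect
    mask = defect_mask(inspected, reference, threshold_template(reference))
    print("defective pixels flagged:", int(mask.sum()))
```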

  5. Signal processing system for electrotherapy applications

    NASA Astrophysics Data System (ADS)

    Płaza, Mirosław; Szcześniak, Zbigniew

    2017-08-01

    A signal processing system for electrotherapy applications is proposed in this paper. The system makes it possible to model the curve of threshold human sensitivity to current (Dalziel's curve) over the full medium-frequency range (1 kHz-100 kHz). Tests based on the proposed solution were conducted, and their results were compared with those obtained under the assumptions of the High Tone Power Therapy method and referred to optimum values. The proposed system has high dynamics and precision in mapping the curve of threshold human sensitivity to current and can be used in all methods where threshold curves are modelled.

  6. Age-related normative values for handgrip strength and grip strength’s usefulness as a predictor of mortality and both cognitive and physical decline in older adults in northwest Russia

    PubMed Central

    Turusheva, A.; Frolova, E.; Degryse, J-M.

    2017-01-01

    Objectives: This paper sought to provide normative values for grip strength among older adults across different age groups in northwest Russia and to investigate their predictive value for adverse events. Methods: A population-based prospective cohort study of 611 community-dwelling individuals aged 65+. Grip strength was measured using the standard protocol applied in the Groningen Elderly Tests. The cut-off thresholds for grip strength were defined separately for men and women of different ages using a weighted polynomial regression. A Cox regression analysis, the c-statistic, a risk reclassification analysis, and bootstrapping techniques were used to analyze the data. The outcomes were the 5-year mortality rate, the loss of autonomy and mental decline. Results: We determined the age-related reference intervals of grip strength for older adults. Grip strength below the 5th and 10th percentiles was associated with a higher risk of malnutrition, low autonomy, poorer physical and mental functioning and 5-year mortality. Grip strength below the 5th percentile was associated with a decline in autonomy. Conclusions: This study presents age- and sex-specific reference values for grip strength in the 65+ Russian population derived from a prospective cohort study. The norms can be used in clinical practice to identify patients at increased risk for adverse outcomes. PMID:28250246

  7. Sustainable thresholds for cooperative epidemiological models.

    PubMed

    Barrios, Edwin; Gajardo, Pedro; Vasilieva, Olga

    2018-05-22

    In this paper, we introduce a method for computing sustainable thresholds for controlled cooperative models described by a system of ordinary differential equations, a property shared by a wide class of compartmental models in epidemiology. The set of sustainable thresholds refers to constraints (e.g., maximal "allowable" number of human infections; maximal "affordable" budget for disease prevention, diagnosis and treatments; etc.), parameterized by thresholds, that can be sustained by applying an admissible control strategy starting at the given initial state and lasting the whole period of the control intervention. This set, determined by the initial state of the dynamical system, virtually provides useful information for more efficient (or cost-effective) decision-making by exhibiting the trade-offs between different types of constraints and allowing the user to assess future outcomes of control measures on transient behavior of the dynamical system. In order to accentuate the originality of our approach and to reveal its potential significance in real-life applications, we present an example relying on the 2013 dengue outbreak in Cali, Colombia, where we compute the set of sustainable thresholds (in terms of the maximal "affordable" budget and the maximal "allowable" levels of active infections among human and vector populations) that could be sustained during the epidemic outbreak. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Predictive value of neuron-specific enolase for prognosis in patients with moderate or severe traumatic brain injury: a systematic review and meta-analysis

    PubMed Central

    Mercier, Eric; Boutin, Amélie; Shemilt, Michèle; Lauzier, François; Zarychanski, Ryan; Fergusson, Dean A.; Moore, Lynne; McIntyre, Lauralyn A.; Archambault, Patrick; Légaré, France; Rousseau, François; Lamontagne, François; Nadeau, Linda; Turgeon, Alexis F.

    2016-01-01

    Background: Prognosis is difficult to establish early after moderate or severe traumatic brain injury despite representing an important concern for patients, families and medical teams. Biomarkers, such as neuron-specific enolase, have been proposed as potential early prognostic indicators. Our objective was to determine the association between neuron-specific enolase and clinical outcomes, and the prognostic value of neuron-specific enolase after a moderate or severe traumatic brain injury. Methods: We searched MEDLINE, Embase, The Cochrane Library and Biosis Previews, and reviewed reference lists of eligible articles to identify studies. We included cohort studies and randomized controlled trials that evaluated the prognostic value of neuron-specific enolase to predict mortality or Glasgow Outcome Scale score in patients with moderate or severe traumatic brain injury. Two reviewers independently collected data. The pooled mean differences were analyzed using random-effects models. We assessed risk of bias using a customized Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Subgroup and sensitivity analyses were performed based on a priori hypotheses. Results: We screened 5026 citations from which 30 studies (involving 1321 participants) met our eligibility criteria. We found a significant positive association between neuron-specific enolase serum levels and mortality (10 studies, n = 474; mean difference [MD] 18.46 µg/L, 95% confidence interval [CI] 10.81 to 26.11 µg/L; I2 = 83%) and a Glasgow Outcome Scale ≤ 3 (14 studies, n = 603; MD 17.25 µg/L, 95% CI 11.42 to 23.07 µg/L; I2 = 82%). We were unable to determine a clinical threshold value using the available patient data. Interpretation: In patients with moderate or severe traumatic brain injury, increased neuron-specific enolase serum levels are associated with unfavourable outcomes. The optimal neuron-specific enolase threshold value to predict unfavourable prognosis remains unknown and clinical decision-making is currently not recommended until additional studies are made available. PMID:27975043

  9. Motor control theories and their applications.

    PubMed

    Latash, Mark L; Levin, Mindy F; Scholz, John P; Schöner, Gregor

    2010-01-01

    We describe several influential hypotheses in the field of motor control including the equilibrium-point (referent configuration) hypothesis, the uncontrolled manifold hypothesis, and the idea of synergies based on the principle of motor abundance. The equilibrium-point hypothesis is based on the idea of control with thresholds for activation of neuronal pools; it provides a framework for analysis of both voluntary and involuntary movements. In particular, control of a single muscle can be adequately described with changes in the threshold of motor unit recruitment during slow muscle stretch (threshold of the tonic stretch reflex). Unlike the ideas of internal models, the equilibrium-point hypothesis does not assume neural computations of mechanical variables. The uncontrolled manifold hypothesis is based on the dynamic system approach to movements; it offers a toolbox to analyze synergic changes within redundant sets of elements related to stabilization of potentially important performance variables. The referent configuration hypothesis and the principle of abundance can be naturally combined into a single coherent scheme of control of multi-element systems. A body of experimental data on healthy persons and patients with movement disorders is reviewed in support of the mentioned hypotheses. In particular, movement disorders associated with spasticity are considered as consequences of an impaired ability to shift the threshold of the tonic stretch reflex within the whole normal range. Technical details and applications of the mentioned hypotheses to studies of motor learning are described. We view the mentioned hypotheses as the most promising ones in the field of motor control, based on a solid physical and neurophysiological foundation.

  10. Simplified risk assessment of noise induced hearing loss by means of 2 spreadsheet models.

    PubMed

    Lie, Arve; Engdahl, Bo; Tambs, Kristian

    2016-11-18

    The objective of this study was to test 2 spreadsheet models that compare observed with expected hearing loss for a Norwegian reference population. The prevalence rates of hearing outcomes under the Norwegian and the National Institute for Occupational Safety and Health (NIOSH) definitions were calculated by sex and age (20-64 years old) for a screened (no occupational noise exposure) (N = 18 858) and an unscreened (N = 38 333) Norwegian reference population from the Nord-Trøndelag Hearing Loss Study (NTHLS). Based on these prevalence rates, 2 different spreadsheet models were constructed in order to compare the prevalence rates of various groups of workers with the expected rates. The spreadsheets were then tested on 10 different occupational groups with varying degrees of hearing loss as compared to the reference population. The hearing of office workers, train drivers, conductors and teachers differed little from the screened reference values based on the Norwegian and the NIOSH criteria. The construction workers, miners, farmers and military personnel had impaired hearing, and railway maintenance workers and bus drivers had mildly impaired hearing. The spreadsheet models give a valid assessment of the hearing loss. The use of spreadsheet models to compare hearing in occupational groups with that of a reference population is a simple and quick method. The results are in line with comparable hearing thresholds and allow for significance testing. The method is believed to be useful for occupational health services in the assessment of the risk of noise induced hearing loss (NIHL) and the preventive potential in groups of noise-exposed workers. Int J Occup Med Environ Health 2016;29(6):991-999. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  11. Noise reduction in Lidar signal using correlation-based EMD combined with soft thresholding and roughness penalty

    NASA Astrophysics Data System (ADS)

    Chang, Jianhua; Zhu, Lingyan; Li, Hongxu; Xu, Fan; Liu, Binggang; Yang, Zhenbo

    2018-01-01

    Empirical mode decomposition (EMD) is widely used to analyze non-linear and non-stationary signals for noise reduction. In this study, a novel EMD-based denoising method, referred to as EMD with soft thresholding and roughness penalty (EMD-STRP), is proposed for Lidar signal denoising. With the proposed method, the relevant and irrelevant intrinsic mode functions are first distinguished via a correlation coefficient. Then, the soft thresholding technique is applied to the irrelevant modes, and the roughness penalty technique is applied to the relevant modes to extract as much information as possible. The effectiveness of the proposed method was evaluated using three typical signals contaminated by white Gaussian noise, and its denoising performance was compared with that of other techniques, such as correlation-based EMD partial reconstruction, correlation-based EMD hard thresholding, and the wavelet transform. The use of EMD-STRP on the measured Lidar signal resulted in the noise being efficiently suppressed, with an improved signal-to-noise ratio of 22.25 dB and an extended detection range of 11 km.
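    The mode-handling step (correlation-based selection followed by soft thresholding of the noise-dominated modes) can be sketched as below, assuming the intrinsic mode functions have already been produced by an EMD routine that is not shown. The correlation cut-off, the universal-style threshold level and the stand-in "IMFs" in the demo are all illustrative assumptions; the roughness-penalty smoothing of the relevant modes is replaced by a pass-through placeholder.

```python
# Minimal sketch of correlation-based mode selection with soft thresholding, given IMFs.
import numpy as np

def soft_threshold(x: np.ndarray, thr: float) -> np.ndarray:
    """Classical soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def denoise_from_imfs(signal: np.ndarray, imfs, corr_cutoff: float = 0.2) -> np.ndarray:
    out = np.zeros_like(signal, dtype=float)
    for imf in imfs:
        corr = abs(np.corrcoef(signal, imf)[0, 1])
        if corr < corr_cutoff:
            # irrelevant (noise-dominated) mode: soft threshold at a universal-style level
            thr = np.std(imf) * np.sqrt(2 * np.log(len(imf)))
            out += soft_threshold(imf, thr)
        else:
            # relevant mode: roughness-penalty smoothing would go here (placeholder)
            out += imf
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    t = np.linspace(0, 1, 1000)
    clean = np.exp(-3 * t) * np.sin(20 * np.pi * t)        # toy Lidar-like return
    noisy = clean + rng.normal(0, 0.05, t.size)
    # Stand-in "IMFs": the noise residual and the clean part, purely for illustration.
    imfs = [noisy - clean, clean]
    print("RMSE before:", round(float(np.sqrt(np.mean((noisy - clean) ** 2))), 4))
    print("RMSE after :", round(float(np.sqrt(np.mean((denoise_from_imfs(noisy, imfs) - clean) ** 2))), 4))
```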

  12. Molecular taxonomy of phytopathogenic fungi: a case study in Peronospora.

    PubMed

    Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M Teresa; Martín, María P

    2009-07-29

    Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, for this purpose clustering approaches to delineate molecular operational taxonomic units have been applied using arbitrary choices regarding the distance threshold values, and the clustering algorithms. Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence.

  13. Molecular Taxonomy of Phytopathogenic Fungi: A Case Study in Peronospora

    PubMed Central

    Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M. Teresa; Martín, María P.

    2009-01-01

    Background Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, for this purpose clustering approaches to delineate molecular operational taxonomic units have been applied using arbitrary choices regarding the distance threshold values, and the clustering algorithms. Methodology Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. Conclusions A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence. PMID:19641601

  14. Classification of HCV and HIV-1 Sequences with the Branching Index

    PubMed Central

    Hraber, Peter; Kuiken, Carla; Waugh, Mark; Geer, Shaun; Bruno, William J.; Leitner, Thomas

    2009-01-01

    SUMMARY Classification of viral sequences should be fast, objective, accurate, and reproducible. Most methods that classify sequences use either pairwise distances or phylogenetic relations, but cannot discern when a sequence is unclassifiable. The branching index (BI) combines distance and phylogeny methods to compute a ratio that quantifies how closely a query sequence clusters with a subtype clade. In the hypothesis-testing framework of statistical inference, the BI is compared with a threshold to test whether sufficient evidence exists for the query sequence to be classified among known sequences. If above the threshold, the null hypothesis of no support for the subtype relation is rejected and the sequence is taken as belonging to the subtype clade with which it clusters on the tree. This study evaluates statistical properties of the branching index for subtype classification in HCV and HIV-1. Pairs of BI values with known positive and negative test results were computed from 10,000 random fragments of reference alignments. Sampled fragments were of sufficient length to contain phylogenetic signal that groups reference sequences together properly into subtype clades. For HCV, a threshold BI of 0.71 yields 95.1% agreement with reference subtypes, with equal false positive and false negative rates. For HIV-1, a threshold of 0.66 yields 93.5% agreement. Higher thresholds can be used where lower false positive rates are required. In synthetic recombinants, regions without breakpoints are recognized accurately; regions with breakpoints do not uniquely represent any known subtype. Web-based services for viral subtype classification with the branching index are available online. PMID:18753218

  15. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare several image segmentation methods for the lungs based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation applied to lung images. These three methods require one important parameter, i.e. the threshold, whose interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were compared using the performance evaluation parameters computed in MATLAB. A segmentation method is considered to be of good quality if it has the smallest MSE value and the highest PSNR. The results show that for four of the sample images the connected threshold method performs best, while for one sample the threshold level set segmentation performs best. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
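    The two evaluation metrics are simple to compute; the sketch below scores two hypothetical segmentations against a reference mask, assuming 8-bit images (MAX = 255). The masks and error rates are synthetic placeholders, not data from the study.

```python
# Minimal sketch of the MSE / PSNR comparison: the method with the smallest MSE and
# highest PSNR against the reference is judged best. 8-bit images are assumed.
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    reference = (rng.random((128, 128)) > 0.5).astype(np.uint8) * 255   # reference mask
    method_a = reference.copy()
    method_a[rng.random(reference.shape) < 0.02] ^= 255                 # 2% wrong pixels
    method_b = reference.copy()
    method_b[rng.random(reference.shape) < 0.10] ^= 255                 # 10% wrong pixels
    for name, seg in (("method A", method_a), ("method B", method_b)):
        print(f"{name}: MSE = {mse(seg, reference):8.1f}, PSNR = {psnr(seg, reference):5.2f} dB")
```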

  16. Threshold considerations in fair allocation of health resources: justice beyond scarcity.

    PubMed

    Alvarez, Allen Andrew A

    2007-10-01

    Application of egalitarian and prioritarian accounts of health resource allocation in low-income countries have both been criticized for implying distribution outcomes that allow decreasing/undermining health gains and for tolerating unacceptable standards of health care and health status that result from such allocation schemes. Insufficient health care and severe deprivation of health resources are difficult to accept even when justified by aggregative efficiency or legitimized by fair deliberative process in pursuing equality and priority oriented outcomes. I affirm the sufficientarian argument that, given extreme scarcity of public health resources in low-income countries, neither health status equality between populations nor priority for the worse off is normatively adequate. Nevertheless, the threshold norm alone need not be the sole consideration when a country's total health budget is extremely scarce. Threshold considerations are necessary in developing a theory of fair distribution of health resources that is sensitive to the lexically prior norm of sufficiency. Based on the intuition that shares must not be taken away from those who barely achieve a minimal level of health, I argue that assessments based on standards of minimal physical/mental health must be developed to evaluate the sufficiency of the total resources of health systems in low-income countries prior to pursuing equality, priority, and efficiency based resource allocation. I also begin to examine how threshold sensitive health resource assessment could be used in the Philippines.

  17. Reference guide to odor thresholds for hazardous air pollutants listed in the Clean Air Act amendments of 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.

    1992-03-01

    In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role of providing technical assistance to State and Local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of the criteria used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated based on the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.

  18. Automatic segmentation of lung parenchyma based on curvature of ribs using HRCT images in scleroderma studies

    NASA Astrophysics Data System (ADS)

    Prasad, M. N.; Brown, M. S.; Ahmad, S.; Abtin, F.; Allen, J.; da Costa, I.; Kim, H. J.; McNitt-Gray, M. F.; Goldin, J. G.

    2008-03-01

    Segmentation of the lungs in the setting of scleroderma is a major challenge in medical image analysis. Threshold-based techniques tend to leave out lung regions that have increased attenuation, for example in the presence of interstitial lung disease or in noisy low-dose CT scans. The purpose of this work is to segment the lungs using a technique that selects an optimal threshold for a given scleroderma patient by comparing the curvature of the lung boundary to that of the ribs. Our approach is based on adaptive thresholding and exploits the fact that the curvature of the ribs and the curvature of the lung boundary are closely matched. First, the ribs are segmented and a polynomial is used to represent the ribs' curvature. A threshold value to segment the lungs is then selected iteratively such that the deviation of the lung boundary from the polynomial is minimized. A Naive Bayes classifier is used to build the model for selecting the best-fitting lung boundary. The performance of the new technique was compared against a standard approach using a simple fixed threshold of -400 HU followed by region growing. The two techniques were evaluated against manual reference segmentations using a volumetric overlap fraction (VOF), and the adaptive threshold technique was found to be significantly better than the fixed threshold technique.

  19. Cost-effectiveness analysis of total ankle arthroplasty.

    PubMed

    SooHoo, Nelson F; Kominski, Gerald

    2004-11-01

    There is renewed interest in total ankle arthroplasty as an alternative to ankle fusion in the treatment of end-stage ankle arthritis. Despite a lack of long-term data on the clinical outcomes associated with these implants, the use of ankle arthroplasty is expanding. The purpose of this cost-effectiveness analysis was to evaluate whether the currently available literature justifies the emerging use of total ankle arthroplasty. This study also identifies thresholds for the durability and function of ankle prostheses that, if met, would support more widespread dissemination of this new technology. A decision model was created for the treatment of ankle arthritis. The literature was reviewed to identify possible outcomes and their probabilities following ankle fusion and ankle arthroplasty. Each outcome was weighted for quality of life with use of a utility factor, and effectiveness was expressed in units of quality-adjusted life years. Gross costs were estimated from Medicare charge and reimbursement data for the relevant codes. The effect of the uncertainty of estimates of costs and effectiveness was assessed with sensitivity analysis. The reference case of our model assumed a ten-year duration of survival of the prosthesis, resulting in an incremental cost-effectiveness ratio for ankle arthroplasty of $18,419 per quality-adjusted life year gained. This reflects a gain of 0.52 quality-adjusted life years at a cost of $9578 when ankle arthroplasty is chosen over fusion. This ratio compares favorably with the cost-effectiveness of other medical and surgical interventions. Sensitivity analysis determined that the cost per quality-adjusted life year gained with ankle arthroplasty rises above $50,000 if the prosthesis is assumed to fail before seven years. Treatment options with ratios above $50,000 per quality-adjusted life year are commonly considered to have limited cost-effectiveness. This threshold is also crossed when the theoretical functional advantages of ankle arthroplasty are eliminated in sensitivity analysis. The currently available literature has not yet shown that total ankle arthroplasty predictably results in levels of durability and function that make it cost-effective at this time. However, the reference case of this analysis does demonstrate that total ankle arthroplasty has the potential to be a cost-effective alternative to ankle fusion. This reference case assumes that the theoretical functional advantages of ankle arthroplasty over ankle fusion will be borne out in future clinical studies. Performance of total ankle replacement will be better justified if these thresholds are met in published long-term clinical trials.

  20. The Bilirubin Albumin Ratio in the Management of Hyperbilirubinemia in Preterm Infants to Improve Neurodevelopmental Outcome: A Randomized Controlled Trial – BARTrial

    PubMed Central

    van Imhoff, Deirdre E.; Bos, Arend F.; Lopriore, Enrico; Offringa, Martin; Ruiter, Selma A. J.; van Braeckel, Koen N. J. A.; Krabbe, Paul F. M.; Quik, Elise H.; van Toledo-Eppinga, Letty; Nuytemans, Debbie H. G. M.; van Wassenaer-Leemhuis, Aleid G.; Benders, Manon J. N.; Korbeeck-van Hof, Karen K. M.; van Lingen, Richard A.; Groot Jebbink, Liesbeth J. M.; Liem, Djien; Mansvelt, Petri; Buijs, Jan; Govaert, Paul; van Vliet, Ineke; Mulder, Twan L. M.; Wolfs, Cecile; Fetter, Willem P. F.; Laarman, Celeste

    2014-01-01

    Background and Objective High bilirubin/albumin (B/A) ratios increase the risk of bilirubin neurotoxicity. The B/A ratio may be a valuable measure, in addition to the total serum bilirubin (TSB), in the management of hyperbilirubinemia. We aimed to assess whether the additional use of B/A ratios in the management of hyperbilirubinemia in preterm infants improved neurodevelopmental outcome. Methods In a prospective, randomized controlled trial, 615 preterm infants of 32 weeks' gestation or less were randomly assigned to treatment based on either B/A ratio and TSB thresholds (consensus-based), whichever threshold was crossed first, or on the TSB thresholds only. The primary outcome was neurodevelopment at 18 to 24 months' corrected age as assessed with the Bayley Scales of Infant Development III by investigators unaware of treatment allocation. Secondary outcomes included complications of preterm birth and death. Results Composite motor (100±13 vs. 101±12) and cognitive (101±12 vs. 101±11) scores did not differ between the B/A ratio and TSB groups. Demographic characteristics, maximal TSB levels, B/A ratios, and other secondary outcomes were similar. The rates of death and/or severe neurodevelopmental impairment for the B/A ratio versus TSB groups were 15.4% versus 15.5% (P = 1.0) and 2.8% versus 1.4% (P = 0.62) for birth weights ≤1000 g and 1.8% versus 5.8% (P = 0.03) and 4.1% versus 2.0% (P = 0.26) for birth weights of >1000 g. Conclusions The additional use of B/A ratio in the management of hyperbilirubinemia in preterm infants did not improve their neurodevelopmental outcome. Trial Registration Controlled-Trials.com ISRCTN74465643 PMID:24927259

  1. CD4 Enumeration Technologies: A Systematic Review of Test Performance for Determining Eligibility for Antiretroviral Therapy

    PubMed Central

    Peeling, Rosanna W.; Sollis, Kimberly A.; Glover, Sarah; Crowe, Suzanne M.; Landay, Alan L.; Cheng, Ben; Barnett, David; Denny, Thomas N.; Spira, Thomas J.; Stevens, Wendy S.; Crowley, Siobhan; Essajee, Shaffiq; Vitoria, Marco; Ford, Nathan

    2015-01-01

    Background Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. Methods and Findings Studies were identified by searching electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as % coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which less than half presented data on bias and misclassification compared to the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1-29% for upward classification, resulting in under-treatment, and 7-68% for downward classification resulting in overtreatment. Less than half of these studies reported within laboratory precision or reproducibility of the CD4 values obtained. Conclusions A wide range of bias and percent misclassification around treatment thresholds were reported on the CD4 enumeration technologies included in this review, with few studies reporting assay precision. The lack of standardised methodology on test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed. PMID:25790185
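
    A small sketch, using assumed toy data, of the two quantities the review extracts from each evaluation: mean bias against a reference assay and the probabilities of upward and downward misclassification around a treatment threshold. The function and variable names are illustrative, not part of the review's methodology.

```python
import numpy as np

def threshold_misclassification(index_cd4, reference_cd4, cutoff=350):
    """Bias and misclassification of an index CD4 assay around a treatment cutoff."""
    idx = np.asarray(index_cd4, float)
    ref = np.asarray(reference_cd4, float)
    bias = np.mean(idx - ref)
    eligible = ref < cutoff                        # eligible for ART by the reference
    upward = np.mean(idx[eligible] >= cutoff) if eligible.any() else np.nan
    downward = np.mean(idx[~eligible] < cutoff) if (~eligible).any() else np.nan
    return {"bias_cells_per_ul": round(bias, 1),
            "upward_misclassified": round(upward, 3),      # missed treatment
            "downward_misclassified": round(downward, 3)}  # unnecessary treatment

# toy paired counts: index assay reads ~20 cells/ul high with some scatter
rng = np.random.default_rng(9)
ref = rng.uniform(100, 700, 500)
idx = ref + 20 + rng.normal(0, 60, 500)
print(threshold_misclassification(idx, ref))
```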

  2. Towards a comprehensive barcode library for arctic life - Ephemeroptera, Plecoptera, and Trichoptera of Churchill, Manitoba, Canada

    PubMed Central

    2009-01-01

    Background This study reports progress in assembling a DNA barcode reference library for Ephemeroptera, Plecoptera, and Trichoptera ("EPTs") from a Canadian subarctic site, which is the focus of a comprehensive biodiversity inventory using DNA barcoding. These three groups of aquatic insects exhibit a moderate level of species diversity, making them ideal for testing the feasibility of DNA barcoding for routine biotic surveys. We explore the correlation between the morphological species delineations, DNA barcode-based haplotype clusters delimited by a sequence threshold (2%), and a threshold-free approach to biodiversity quantification, phylogenetic diversity. Results A DNA barcode reference library is built for 112 EPT species for the focal region, consisting of 2277 COI sequences. Close correspondence was found between EPT morphospecies and haplotype clusters as designated using a standard threshold value. Similarly, the shapes of taxon accumulation curves based upon haplotype clusters were very similar to those generated using phylogenetic diversity, but were much less computationally demanding to produce. Conclusion The results of this study will facilitate other lines of research on northern EPTs and also bode well for rapidly conducting initial biodiversity assessments in unknown EPT faunas. PMID:20003245
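
    Threshold-based haplotype clustering of the kind mentioned above can be illustrated with a small single-linkage sketch on aligned sequences: any two sequences closer than the divergence threshold are joined into the same cluster. The toy sequences and the union-find implementation below are illustrative assumptions, not the study's pipeline.

```python
import itertools

def p_distance(a, b):
    """Proportion of mismatched positions between two aligned COI fragments."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def threshold_clusters(seqs, threshold=0.02):
    """Single-linkage clusters: join any two sequences closer than the threshold."""
    parent = list(range(len(seqs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in itertools.combinations(range(len(seqs)), 2):
        if p_distance(seqs[i], seqs[j]) < threshold:
            parent[find(i)] = find(j)
    labels = [find(i) for i in range(len(seqs))]
    return len(set(labels)), labels

# toy 100-bp aligned fragments: a haplotype 1% divergent from the first sequence
# falls in the same cluster; a ~25% divergent "species" forms its own cluster
base = "ACGT" * 25
seqs = [base, base[:99] + "A", base.replace("A", "T")]
print(threshold_clusters(seqs))
```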

  3. Physiology-Based Modeling May Predict Surgical Treatment Outcome for Obstructive Sleep Apnea

    PubMed Central

    Li, Yanru; Ye, Jingying; Han, Demin; Cao, Xin; Ding, Xiu; Zhang, Yuhuan; Xu, Wen; Orr, Jeremy; Jen, Rachel; Sands, Scott; Malhotra, Atul; Owens, Robert

    2017-01-01

    Study Objectives: To test whether the integration of both anatomical and nonanatomical parameters (ventilatory control, arousal threshold, muscle responsiveness) in a physiology-based model will improve the ability to predict outcomes after upper airway surgery for obstructive sleep apnea (OSA). Methods: In 31 patients who underwent upper airway surgery for OSA, loop gain and arousal threshold were calculated from preoperative polysomnography (PSG). Three models were compared: (1) a multiple regression based on an extensive list of PSG parameters alone; (2) a multivariate regression using PSG parameters plus PSG-derived estimates of loop gain, arousal threshold, and other trait surrogates; (3) a physiological model incorporating selected variables as surrogates of anatomical and nonanatomical traits important for OSA pathogenesis. Results: Although preoperative loop gain was positively correlated with postoperative apnea-hypopnea index (AHI) (P = .008) and arousal threshold was negatively correlated (P = .011), in both model 1 and 2, the only significant variable was preoperative AHI, which explained 42% of the variance in postoperative AHI. In contrast, the physiological model (model 3), which included AHIREM (anatomy term), fraction of events that were hypopnea (arousal term), the ratio of AHIREM and AHINREM (muscle responsiveness term), loop gain, and central/mixed apnea index (control of breathing terms), was able to explain 61% of the variance in postoperative AHI. Conclusions: Although loop gain and arousal threshold are associated with residual AHI after surgery, only preoperative AHI was predictive using multivariate regression modeling. Instead, incorporating selected surrogates of physiological traits on the basis of OSA pathophysiology created a model that has more association with actual residual AHI. Commentary: A commentary on this article appears in this issue on page 1023. Clinical Trial Registration: ClinicalTrials.Gov; Title: The Impact of Sleep Apnea Treatment on Physiology Traits in Chinese Patients With Obstructive Sleep Apnea; Identifier: NCT02696629; URL: https://clinicaltrials.gov/show/NCT02696629 Citation: Li Y, Ye J, Han D, Cao X, Ding X, Zhang Y, Xu W, Orr J, Jen R, Sands S, Malhotra A, Owens R. Physiology-based modeling may predict surgical treatment outcome for obstructive sleep apnea. J Clin Sleep Med. 2017;13(9):1029–1037. PMID:28818154

  4. Statistical Analysis of SSMIS Sea Ice Concentration Threshold at the Arctic Sea Ice Edge during Summer Based on MODIS and Ship-Based Observational Data.

    PubMed

    Ji, Qing; Li, Fei; Pang, Xiaoping; Luo, Cong

    2018-04-05

    The threshold of sea ice concentration (SIC) is the basis for accurately calculating sea ice extent from passive microwave (PM) remote sensing data. However, the PM SIC threshold at the sea ice edge used in previous studies and in released sea ice products has not always been consistent. To determine a representative PM SIC threshold corresponding, on average, to the position of the Arctic sea ice edge during summer in recent years, we extracted sea ice edge boundaries from the Moderate-resolution Imaging Spectroradiometer (MODIS) sea ice product (MOD29, with a spatial resolution of 1 km), MODIS images (250 m), and sea ice ship-based observation points (1 km) during the fifth (CHINARE-2012) and sixth (CHINARE-2014) Chinese National Arctic Research Expeditions, and performed an overlay and comparison analysis against PM SIC derived from the Special Sensor Microwave Imager Sounder (SSMIS, with a spatial resolution of 25 km) in the summers of 2012 and 2014. Results showed that the average SSMIS SIC threshold at the Arctic sea ice edge based on ice-water boundary lines extracted from MOD29 was 33%, higher than the commonly used 15% discriminant threshold. The average SIC threshold at the sea ice edge based on ice-water boundary lines extracted by visual interpretation from four scenes of MODIS imagery was 35%, compared with an average value of 36% from the MOD29-extracted ice-edge pixels for the same days. The average SIC of 31% at the sea ice edge points extracted from ship-based observations also supports choosing around 30% as the summer SIC threshold for sea ice extent calculations based on SSMIS PM data. These results can provide a reference for further study of sea ice variation in the rapidly changing Arctic.

  5. [The alpha-fetoprotein in prognosis of survival of and functional rehabilitation of patients with ischemic stroke].

    PubMed

    Arkhipkin, A A; Liang, O V; Kochetov, A G

    2014-10-01

    The study was carried out to determine the prognostic value of alpha-fetoprotein for lethal outcome and for the degree of functional rehabilitation in patients with ischemic stroke. The sample included 216 patients in the acute period of ischemic stroke. On the first day of the disease, the level of human alpha-fetoprotein was measured. On the second day of the disease, the degree of functional rehabilitation was evaluated and the rate of lethal outcomes was calculated. Previously, the reference interval for alpha-fetoprotein had been calculated according to the guidelines of the International Federation of Clinical Chemistry and the national standard. The reference interval amounted to 0.59-3.78 mE/l. The study results demonstrated that a low level of alpha-fetoprotein is related to a higher risk of lethal outcome (SE=1.7, p=0.012). An increase in the level of alpha-fetoprotein above the mentioned threshold value statistically significantly increases the probability of survival. A further increase to more than 2.28 mE/l is related to subsequent good functional rehabilitation according to the modified Rankin scale (SE=1.4, p=0.001) and the Barthel index (SE=1.49, p<0.001).

  6. Definition, prevalence, and outcome of feeding intolerance in intensive care: a systematic review and meta-analysis.

    PubMed

    Blaser, A Reintam; Starkopf, J; Kirsimägi, Ü; Deane, A M

    2014-09-01

    Clinicians and researchers frequently use the phrase 'feeding intolerance' (FI) as a descriptive term in enterally fed critically ill patients. We aimed to: (1) determine the most accepted definition of FI; (2) estimate the prevalence of FI; and (3) evaluate whether FI is associated with important outcomes. Systematic searches of peer-reviewed publications using PubMed, MEDLINE, and Web of Science were performed, and studies reporting FI were extracted. We identified 72 studies defining FI. In 33 studies, the definition was based on large gastric residual volumes (GRVs) together with other gastrointestinal symptoms, while 30 studies relied solely on large GRVs, six studies used inadequate delivery of enteral nutrition (EN) as a threshold, and three studies used gastrointestinal symptoms without reference to GRV. The median volume used to define a 'large' GRV was 250 ml (range, 75 to 500 ml). The pooled proportion (n = 31 studies) of FI was 38.3% (95% CI 30.7-46.2). Five studies reported outcomes; all of them observed adverse outcomes in patients with FI. In three studies, FI was associated with increased mortality and ICU length of stay. In summary, FI is inconsistently defined but appears to occur frequently. There are preliminary data indicating that FI is associated with adverse outcomes. A standard definition of FI is required to determine the accuracy of these preliminary data. © 2014 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  7. Determining the Threshold for HbA1c as a Predictor for Adverse Outcomes After Total Joint Arthroplasty: A Multicenter, Retrospective Study.

    PubMed

    Tarabichi, Majd; Shohat, Noam; Kheir, Michael M; Adelani, Muyibat; Brigati, David; Kearns, Sean M; Patel, Pankajkumar; Clohisy, John C; Higuera, Carlos A; Levine, Brett R; Schwarzkopf, Ran; Parvizi, Javad; Jiranek, William A

    2017-09-01

    Although HbA1c is commonly used for assessing glycemic control before surgery, there is no consensus regarding its role and the appropriate threshold in predicting adverse outcomes. This study was designed to evaluate the potential link between HbA1c and subsequent periprosthetic joint infection (PJI), with the intention of determining the optimal threshold for HbA1c. This is a multicenter retrospective study, which identified 1645 diabetic patients who underwent primary total joint arthroplasty (1004 knees and 641 hips) between 2001 and 2015. All patients had an HbA1c measured within 3 months of surgery. The primary outcome of interest was a PJI at 1 year based on the Musculoskeletal Infection Society criteria. Secondary outcomes included orthopedic (wound and mechanical complications) and nonorthopedic complications (sepsis, thromboembolism, genitourinary, and cardiovascular complications). A regression analysis was performed to determine the independent influence of HbA1c for predicting PJI. Overall 22 cases of PJI occurred at 1 year (1.3%). HbA1c at a threshold of 7.7 was distinct for predicting PJI (area under the curve, 0.65; 95% confidence interval, 0.51-0.78). Using this threshold, PJI rates increased from 0.8% (11 of 1441) to 5.4% (11 of 204). In the stepwise logistic regression analysis, PJI remained the only variable associated with higher HbA1c (odds ratio, 1.5; confidence interval, 1.2-2.0; P = .0001). There was no association between high HbA1c levels and other complications assessed. High HbA1c levels are associated with an increased risk for PJI. A threshold of 7.7% seems to be more indicative of infection than the commonly used 7% and should perhaps be the goal in preoperative patient optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
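
    A generic sketch of deriving a cutoff for a continuous marker with ROC-style reasoning (Youden's J statistic), using simulated data. The study's actual 7.7% threshold came from area-under-the-curve analysis of its clinical cohort, so the data, the choice of Youden's J, and the result below are illustrative only.

```python
import numpy as np

def youden_threshold(marker, outcome):
    """Return the marker cutoff maximising sensitivity + specificity - 1."""
    marker = np.asarray(marker, float)
    outcome = np.asarray(outcome, bool)
    best_c, best_j = None, -np.inf
    for c in np.unique(marker):
        pred = marker >= c
        sens = np.mean(pred[outcome]) if outcome.any() else 0.0
        spec = np.mean(~pred[~outcome]) if (~outcome).any() else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# toy data: infection made more likely above ~7.7% purely for illustration
rng = np.random.default_rng(0)
hba1c = np.round(rng.normal(7.0, 1.0, 500), 1)
pji = rng.random(500) < np.where(hba1c >= 7.7, 0.06, 0.01)
print(youden_threshold(hba1c, pji))
```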

  8. Determinants of 25(OH)D sufficiency in obese minority children: selecting outcome measures and analytic approaches.

    PubMed

    Zhou, Ping; Schechter, Clyde; Cai, Ziyong; Markowitz, Morri

    2011-06-01

    To highlight complexities in defining vitamin D sufficiency in children. Serum 25-(OH) vitamin D [25(OH)D] levels from 140 healthy obese children age 6 to 21 years living in the inner city were compared with multiple health outcome measures, including bone biomarkers and cardiovascular risk factors. Several statistical analytic approaches were used, including Pearson correlation, analysis of covariance (ANCOVA), and "hockey stick" regression modeling. Potential threshold levels for vitamin D sufficiency varied by outcome variable and analytic approach. Only systolic blood pressure (SBP) was significantly correlated with 25(OH)D (r = -0.261; P = .038). ANCOVA revealed that SBP and triglyceride levels were statistically significant in the test groups [25(OH)D <10, <15 and <20 ng/mL] compared with the reference group [25(OH)D >25 ng/mL]. ANCOVA also showed that only children with severe vitamin D deficiency [25(OH)D <10 ng/mL] had significantly higher parathyroid hormone levels (Δ = 15; P = .0334). Hockey stick model regression analyses found evidence of a threshold level in SBP, with a 25(OH)D breakpoint of 27 ng/mL, along with a 25(OH)D breakpoint of 18 ng/mL for triglycerides, but no relationship between 25(OH)D and parathyroid hormone. Defining vitamin D sufficiency should take into account different vitamin D-related health outcome measures and analytic methodologies. Copyright © 2011 Mosby, Inc. All rights reserved.
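
    A numpy-only sketch of "hockey stick" (one-breakpoint segmented) regression via grid search over candidate breakpoints; the toy 25(OH)D and systolic blood pressure data are assumptions for illustration and are not the study's measurements.

```python
import numpy as np

def hockey_stick_fit(x, y, n_grid=100):
    """Fit y = b0 + b1*x + b2*max(x - bp, 0) over a grid of candidate breakpoints."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    grid = np.linspace(np.percentile(x, 5), np.percentile(x, 95), n_grid)
    best = None
    for bp in grid:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, bp, beta)
    return {"breakpoint": best[1], "coef": best[2], "sse": best[0]}

# toy data: SBP falls with 25(OH)D up to ~27 ng/mL, then flattens
rng = np.random.default_rng(1)
d25 = rng.uniform(5, 45, 140)
sbp = 125 - 0.6 * np.minimum(d25, 27) + rng.normal(0, 4, 140)
print(hockey_stick_fit(d25, sbp)["breakpoint"])
```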

  9. Device for monitoring cell voltage

    DOEpatents

    Doepke, Matthias [Garbsen, DE; Eisermann, Henning [Edermissen, DE

    2012-08-21

    A device for monitoring a rechargeable battery having a number of electrically connected cells includes at least one current interruption switch for interrupting current flowing through at least one associated cell and a plurality of monitoring units for detecting cell voltage. Each monitoring unit is associated with a single cell and includes a reference voltage unit for producing a defined reference threshold voltage and a voltage comparison unit for comparing the reference threshold voltage with a partial cell voltage of the associated cell. The reference voltage unit is electrically supplied from the cell voltage of the associated cell. The voltage comparison unit is coupled to the at least one current interruption switch for interrupting the current of at least the current flowing through the associated cell, with a defined minimum difference between the reference threshold voltage and the partial cell voltage.

  10. Cost-effectiveness thresholds: methods for setting and examples from around the world.

    PubMed

    Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano

    2018-06-01

    Cost-effectiveness thresholds (CETs) are used to judge if an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect followed by a complementary search of references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: the willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. An explicit threshold has never been formally adopted in most places. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by research of funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.

  11. [The analysis of threshold effect using Empower Stats software].

    PubMed

    Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan

    2013-11-01

    In many biomedical studies of a factor's influence on an outcome variable, the factor has no influence, or a consistent effect, only within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes, which is called a threshold effect. Whether there is a threshold effect of a factor (x) on the outcome variable (y) can be examined by fitting a smooth curve and checking for a piecewise linear relationship, and then analysing the threshold effect using a segmented regression model, a likelihood ratio test (LRT) and bootstrap resampling. The Empower Stats software developed by X & Y Solutions Inc. (USA) has a threshold-effect analysis module. The user can either specify a threshold value at which to segment the data, or let the software determine the optimal threshold automatically and calculate the confidence interval of the threshold.

  12. Regional rainfall thresholds for landslide occurrence using a centenary database

    NASA Astrophysics Data System (ADS)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia

    2018-04-01

    This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.

  13. Statistical approaches for the definition of landslide rainfall thresholds and their uncertainty using rain gauge and satellite data

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-05-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the analysis of the reliability of rainfall thresholds based on rainfall remote sensed and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct ED conditions responsible for the landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall threshold for the possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. The NLS method among the others performed better in calculating thresholds in the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the "ground" rainfall registered by rain gauges.

  14. Statistical Approaches for the Definition of Landslide Rainfall Thresholds and their Uncertainty Using Rain Gauge and Satellite Data

    NASA Technical Reports Server (NTRS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-01-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the analysis of the reliability of rainfall thresholds based on rainfall remote sensed and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct ED conditions responsible for the landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall threshold for the possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. The NLS method among the others performed better in calculating thresholds in the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the 'ground' rainfall registered by rain gauges.
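
    As a rough illustration of the cumulated rainfall-duration threshold idea described in the two records above (not a reproduction of the paper's LS, QR or NLS methods), the sketch below fits a power law E = alpha * D^gamma in log space, shifts the intercept to a low percentile of the residuals to define the threshold, and uses a bootstrap for parameter uncertainty. All data are simulated.

```python
import numpy as np

def rainfall_threshold(duration_h, cumulated_mm, percentile=5, n_boot=1000, seed=0):
    """Power-law threshold E = alpha * D^gamma at a chosen non-exceedance percentile.

    Fits log10(E) = a + gamma*log10(D) by least squares, shifts the intercept to
    the given percentile of the residuals, and bootstraps the uncertainty.
    """
    rng = np.random.default_rng(seed)
    logD = np.log10(np.asarray(duration_h, float))
    logE = np.log10(np.asarray(cumulated_mm, float))

    def fit(idx):
        g, a = np.polyfit(logD[idx], logE[idx], 1)       # slope, intercept
        resid = logE[idx] - (a + g * logD[idx])
        a_thr = a + np.percentile(resid, percentile)
        return 10 ** a_thr, g

    n = logD.size
    alpha, gamma = fit(np.arange(n))
    boots = np.array([fit(rng.integers(0, n, n)) for _ in range(n_boot)])
    return {"alpha": alpha, "gamma": gamma,
            "alpha_sd": boots[:, 0].std(), "gamma_sd": boots[:, 1].std()}

# toy (D, E) pairs for rainfall events assumed to have triggered landslides
rng = np.random.default_rng(2)
D = 10 ** rng.uniform(0, 2.5, 200)                   # 1 to ~300 hours
E = 10 * D ** 0.45 * 10 ** rng.normal(0, 0.15, 200)  # scatter around a power law
print(rainfall_threshold(D, E))
```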

  15. Estimating daily climatologies for climate indices derived from climate model data and observations

    PubMed Central

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

    Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues in either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds, and the method also shows potential for use in climate change studies. Key Points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192

  16. Defining a reference range for vital signs in healthy term pregnant women undergoing caesarean section.

    PubMed

    Dennis, A; Hardy, L

    2016-11-01

    Early warning systems (EWS), used to identify deteriorating hospitalised patients, are based on measurement of vital signs. When the patients are pregnant, most EWS still use non-pregnant reference ranges of vital signs to determine trigger thresholds. There are no published reference ranges for all vital signs in pregnancy. We aimed to define vital signs reference ranges for term pregnancy in the preoperative period, and to determine the appropriateness of EWS trigger criteria in pregnancy. We conducted a one-year retrospective study in a tertiary referral obstetric hospital. The study sample was healthy term women undergoing planned caesarean section (CS). Systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR), respiratory rate (RR), oxygen saturation (SpO2) and temperature were all measured automatically and data were extracted from the medical record. Two hundred and fifty-eight women met inclusion criteria. Results were (mean ± SD [standard deviation]) SBP 118 ± 11.2 mmHg, DBP 75 ± 10.3 mmHg, HR 84 ± 10.2/minute, RR 18 ± 1.5/minute, SpO2 99% ± 1.0% and temperature 36.4°C ± 0.43°C. The reference ranges (mean ± 2SD) determined were SBP 96-140 mmHg, DBP 54-96 mmHg, HR 64-104/minute, RR 15-21/minute, SpO2 97%-100% and temperature 35.5°C-37.3°C. This study defined a reference range for vital signs in healthy term pregnant women undergoing CS. Study findings suggest that currently used criteria for EWS triggers, based on non-pregnant values, may be too extreme for timely detection of deteriorating pregnant patients. Further research examining the modified HR triggers of ≤50 and ≥110/minute in pregnant women and their relationship to clinical outcomes is required.
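
    The reference ranges above follow the mean ± 2SD convention, which assumes an approximately normal distribution. A minimal sketch with simulated readings:

```python
import numpy as np

def reference_range(values, k=2.0):
    """Mean +/- k standard deviations, the convention used in the study above."""
    v = np.asarray(values, float)
    m, s = v.mean(), v.std(ddof=1)
    return round(m - k * s, 1), round(m + k * s, 1)

# toy systolic blood pressure readings (mmHg) shaped like the reported cohort
rng = np.random.default_rng(3)
sbp = rng.normal(118, 11.2, 258)
print(reference_range(sbp))   # approximately (96, 140)
```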

  17. Geospatial association between adverse birth outcomes and arsenic in groundwater in New Hampshire, USA.

    PubMed

    Shi, Xun; Ayotte, Joseph D; Onda, Akikazu; Miller, Stephanie; Rees, Judy; Gilbert-Diamond, Diane; Onega, Tracy; Gui, Jiang; Karagas, Margaret; Moeschler, John

    2015-04-01

    There is increasing evidence of the role of arsenic in the etiology of adverse human reproductive outcomes. Because drinking water can be a major source of arsenic to pregnant women, the effect of arsenic exposure through drinking water on human birth may be revealed by a geospatial association between arsenic concentration in groundwater and birth problems, particularly in a region where private wells substantially account for water supply, like New Hampshire, USA. We calculated town-level rates of preterm birth and term low birth weight (term LBW) for New Hampshire, by using data for 1997-2009 stratified by maternal age. We smoothed the rates by using a locally weighted averaging method to increase the statistical stability. The town-level groundwater arsenic probability values are from three GIS data layers generated by the US Geological Survey: probability of local groundwater arsenic concentration >1 µg/L, probability >5 µg/L, and probability >10 µg/L. We calculated Pearson's correlation coefficients (r) between the reproductive outcomes (preterm birth and term LBW) and the arsenic probability values, at both state and county levels. For preterm birth, younger mothers (maternal age <20) have a statewide r = 0.70 between the rates smoothed with a threshold = 2,000 births and the town mean arsenic level based on the data of probability >10 µg/L; for older mothers, r = 0.19 when the smoothing threshold = 3,500; a majority of county level r values are positive based on the arsenic data of probability >10 µg/L. For term LBW, younger mothers (maternal age <25) have a statewide r = 0.44 between the rates smoothed with a threshold = 3,500 and town minimum arsenic concentration based on the data of probability >1 µg/L; for older mothers, r = 0.14 when the rates are smoothed with a threshold = 1,000 births and also adjusted by town median household income in 1999, and the arsenic values are the town minimum based on probability >10 µg/L. At the county level for younger mothers, positive r values prevail, but for older mothers, it is a mix. For both birth problems, the several most populous counties-with 60-80 % of the state's population and clustering at the southwest corner of the state-are largely consistent in having a positive r across different smoothing thresholds. We found evident spatial associations between the two adverse human reproductive outcomes and groundwater arsenic in New Hampshire, USA. However, the degree of associations and their sensitivity to different representations of arsenic level are variable. Generally, preterm birth has a stronger spatial association with groundwater arsenic than term LBW, suggesting an inconsistency in the impact of arsenic on the two reproductive outcomes. For both outcomes, younger maternal age has stronger spatial associations with groundwater arsenic.

  18. The response time threshold for predicting favourable neurological outcomes in patients with bystander-witnessed out-of-hospital cardiac arrest.

    PubMed

    Ono, Yuichi; Hayakawa, Mineji; Iijima, Hiroaki; Maekawa, Kunihiko; Kodate, Akira; Sadamoto, Yoshihiro; Mizugaki, Asumi; Murakami, Hiromoto; Katabami, Kenichi; Sawamura, Atsushi; Gando, Satoshi

    2016-10-01

    It is well established that the period of time between a call being made to emergency medical services (EMS) and the time at which the EMS arrive at the scene (i.e. the response time) affects survival outcomes in patients who experience out-of-hospital cardiac arrest (OHCA). However, the relationship between the response time and favourable neurological outcomes remains unclear. We therefore aimed to determine a response time threshold in patients with bystander-witnessed OHCA that is associated with positive neurological outcomes and to assess the relationship between the response time and neurological outcomes in patients with OHCA. This study was a retrospective, observational analysis of data from 204,277 episodes of bystander-witnessed OHCA between 2006 and 2012 in Japan. We used classification and regression trees (CARTs) and receiver operating characteristic (ROC) curve analyses to determine the threshold of response time associated with favourable neurological outcomes (Cerebral Performance Category 1 or 2) 1 month after cardiac arrest. Both CARTs and ROC analyses indicated that a threshold of 6.5min was associated with improved neurological outcomes in all bystander-witnessed OHCA events of cardiac origin. Furthermore, bystander cardiopulmonary resuscitation (CPR) prolonged the threshold of response time by 1min (up to 7.5min). The adjusted odds ratio for favourable neurological outcomes in patients with OHCA who received care within ≤6.5min was 1.935 (95% confidential interval: 1.834-2.041, P<0.001). A response time of ≤6.5min was closely associated with favourable neurological outcomes in all bystander-witnessed patients with OHCA. Bystander CPR prolonged the response time threshold by 1min. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
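
    A toy, single-split CART-style sketch of choosing a response-time cutpoint by maximising the reduction in Gini impurity of the outcome; the data are simulated and the 6.5-minute signal is built in purely for illustration (the study combined full CART and ROC analyses on registry data).

```python
import numpy as np

def best_gini_split(response_time, favourable):
    """Single CART-style split: cutpoint that most reduces Gini impurity of the outcome."""
    t = np.asarray(response_time, float)
    y = np.asarray(favourable, bool)

    def gini(mask):
        if mask.sum() == 0:
            return 0.0
        p = y[mask].mean()
        return 2 * p * (1 - p)

    parent = gini(np.ones(y.size, dtype=bool))
    cuts = np.unique(t)
    best_cut, best_gain = None, -np.inf
    for c in cuts[:-1]:
        left = t <= c
        gain = parent - (left.mean() * gini(left) + (~left).mean() * gini(~left))
        if gain > best_gain:
            best_cut, best_gain = c, gain
    return best_cut, best_gain

# toy data: favourable outcome more likely when EMS response time <= 6.5 min
rng = np.random.default_rng(4)
rt = np.round(rng.uniform(2, 15, 5000), 1)
fav = rng.random(5000) < np.where(rt <= 6.5, 0.12, 0.05)
print(best_gini_split(rt, fav))
```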

  19. Analysis of Critical Mass in Threshold Model of Diffusion

    NASA Astrophysics Data System (ADS)

    Kim, Jeehong; Hur, Wonchang; Kang, Suk-Ho

    2012-04-01

    Why does diffusion sometimes show cascade phenomena but at other times is impeded? In addressing this question, we considered a threshold model of diffusion, focusing on the formation of a critical mass, which enables diffusion to be self-sustaining. Performing an agent-based simulation, we found that the diffusion model produces only two outcomes: Almost perfect adoption or relatively few adoptions. In order to explain the difference, we considered the various properties of network structures and found that the manner in which thresholds are arrayed over a network is the most critical factor determining the size of a cascade. On the basis of the results, we derived a threshold arrangement method effective for generation of a critical mass and calculated the size required for perfect adoption.
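
    A compact agent-based sketch of a threshold diffusion model on a random network: a node adopts once the adopted fraction of its neighbours reaches its personal threshold, and the final count shows whether a given seed set reaches a self-sustaining critical mass. The network model, thresholds, and seed sizes are illustrative assumptions, not the simulation used in the paper.

```python
import numpy as np

def threshold_cascade(adj, thresholds, seeds, max_steps=1000):
    """Granovetter-style diffusion: a node adopts once the adopted fraction of
    its neighbours reaches its personal threshold."""
    n = adj.shape[0]
    adopted = np.zeros(n, bool)
    adopted[list(seeds)] = True
    deg = adj.sum(axis=1)
    for _ in range(max_steps):
        frac = (adj @ adopted.astype(int)) / np.maximum(deg, 1)
        newly = (~adopted) & (frac >= thresholds)
        if not newly.any():
            break
        adopted |= newly
    return int(adopted.sum())

# illustrative experiment: random network, heterogeneous thresholds, growing seed sets
rng = np.random.default_rng(5)
n, p = 500, 0.02
adj = rng.random((n, n)) < p
adj = np.triu(adj, 1)
adj = (adj | adj.T).astype(int)             # undirected, no self-loops
thresholds = rng.uniform(0.05, 0.35, n)     # each agent's adoption threshold
for k in (2, 10, 40):
    seeds = rng.choice(n, size=k, replace=False)
    print(k, "seeds ->", threshold_cascade(adj, thresholds, seeds), "adopters")
```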

  20. A Temporal Model of Level-Invariant, Tone-in-Noise Detection

    ERIC Educational Resources Information Center

    Berg, Bruce G.

    2004-01-01

    Level-invariant detection refers to findings that thresholds in tone-in-noise detection are unaffected by roving-level procedures that degrade energy cues. Such data are inconsistent with ideas that detection is based on the energy passed by an auditory filter. A hypothesis that detection is based on a level-invariant temporal cue is advanced.…

  1. An emerging evidence base for the management of neonatal hypoglycaemia.

    PubMed

    Harding, Jane E; Harris, Deborah L; Hegarty, Joanne E; Alsweiler, Jane M; McKinlay, Christopher Jd

    2017-01-01

    Neonatal hypoglycaemia is common, and screening and treatment of babies considered at risk is widespread, despite there being little reliable evidence upon which to base management decisions. Although there is now evidence about which babies are at greatest risk, the threshold for diagnosis, best approach to treatment and later outcomes all remain uncertain. Recent studies suggest that treatment with dextrose gel is safe and effective and may help support breast feeding. Thresholds for intervention require a wide margin of safety in light of information that babies with glycaemic instability and with low glucose concentrations may be associated with a higher risk of later higher order cognitive and learning problems. Randomised trials are urgently needed to inform optimal thresholds for intervention and appropriate treatment strategies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. An Emerging Evidence Base for the Management of Neonatal Hypoglycaemia

    PubMed Central

    Harding, Jane E; Harris, Deborah L; Hegarty, Joanne E; Alsweiler, Jane M; McKinlay, Christopher JD

    2016-01-01

    Neonatal hypoglycaemia is common, and screening and treatment of babies considered at risk is widespread, despite there being little reliable evidence upon which to base management decisions. Although there is now evidence about which babies are at greatest risk, the threshold for diagnosis, best approach to treatment and later outcomes all remain uncertain. Recent studies suggest that treatment with dextrose gel is safe and effective and may help support breast feeding. Thresholds for intervention require a wide margin of safety in light of information that babies with glycaemic instability and with low glucose concentrations may be associated with a higher risk of later higher order cognitive and learning problems. Randomised trials are urgently needed to inform optimal thresholds for intervention and appropriate treatment strategies. PMID:27989586

  3. Proposing an Empirically Justified Reference Threshold for Blood Culture Sampling Rates in Intensive Care Units

    PubMed Central

    Castell, Stefanie; Schwab, Frank; Geffers, Christine; Bongartz, Hannah; Brunkhorst, Frank M.; Gastmeier, Petra; Mikolajczyk, Rafael T.

    2014-01-01

    Early and appropriate blood culture sampling is recommended as a standard of care for patients with suspected bloodstream infections (BSI) but is rarely taken into account when quality indicators for BSI are evaluated. To date, sampling of about 100 to 200 blood culture sets per 1,000 patient-days is recommended as the target range for blood culture rates. However, the empirical basis of this recommendation is not clear. The aim of the current study was to analyze the association between blood culture rates and observed BSI rates and to derive a reference threshold for blood culture rates in intensive care units (ICUs). This study is based on data from 223 ICUs taking part in the German hospital infection surveillance system. We applied locally weighted regression and segmented Poisson regression to assess the association between blood culture rates and BSI rates. Below 80 to 90 blood culture sets per 1,000 patient-days, observed BSI rates increased with increasing blood culture rates, while there was no further increase above this threshold. Segmented Poisson regression located the threshold at 87 (95% confidence interval, 54 to 120) blood culture sets per 1,000 patient-days. Only one-third of the investigated ICUs displayed blood culture rates above this threshold. We provided empirical justification for a blood culture target threshold in ICUs. In the majority of the studied ICUs, blood culture sampling rates were below this threshold. This suggests that a substantial fraction of BSI cases might remain undetected; reporting observed BSI rates as a quality indicator without sufficiently high blood culture rates might be misleading. PMID:25520442
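
    A sketch of locating the breakpoint of a segmented Poisson model by grid search, using statsmodels for the GLM fits with patient-days as the exposure. The simulated ICU data (including the built-in plateau at 87 sets per 1,000 patient-days) are purely illustrative, and the paper's own segmented Poisson specification may differ in detail.

```python
import numpy as np
import statsmodels.api as sm

def segmented_poisson(bc_rate, bsi_count, patient_days, grid=None):
    """Grid-search the breakpoint bp of a segmented Poisson model:
    log E[BSI] = log(patient_days) + b0 + b1*x + b2*max(x - bp, 0)."""
    x = np.asarray(bc_rate, float)
    y = np.asarray(bsi_count, float)
    if grid is None:
        grid = np.linspace(np.percentile(x, 10), np.percentile(x, 90), 40)
    best = None
    for bp in grid:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0.0)])
        fit = sm.GLM(y, X, family=sm.families.Poisson(),
                     exposure=np.asarray(patient_days, float)).fit()
        if best is None or fit.deviance < best[0]:
            best = (fit.deviance, bp, fit.params)
    return {"breakpoint": best[1], "coef": best[2]}

# toy ICU data: BSI detection rises with the blood-culture rate, then plateaus
rng = np.random.default_rng(6)
bc = rng.uniform(20, 200, 223)                        # sets per 1,000 patient-days
pdays = rng.uniform(3000, 12000, 223)                 # patient-days per ICU
lam = np.exp(-7.5 + 0.02 * np.minimum(bc, 87)) * pdays
bsi = rng.poisson(lam)
print(segmented_poisson(bc, bsi, pdays)["breakpoint"])
```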

  4. An economic evaluation of maxillary implant overdentures based on six vs. four implants.

    PubMed

    Listl, Stefan; Fischer, Leonhard; Giannakopoulos, Nikolaos Nikitas

    2014-08-18

    The purpose of the present study was to assess the value for money achieved by bar-retained implant overdentures based on six implants compared with four implants as treatment alternatives for the edentulous maxilla. A Markov decision tree model was constructed and populated with parameter estimates for implant and denture failure as well as patient-centred health outcomes as available from recent literature. The decision scenario was modelled within a ten year time horizon and relied on cost reimbursement regulations of the German health care system. The cost-effectiveness threshold was identified above which the six-implant solution is preferable over the four-implant solution. Uncertainties regarding input parameters were incorporated via one-way and probabilistic sensitivity analysis based on Monte-Carlo simulation. Within a base case scenario of average treatment complexity, the cost-effectiveness threshold was identified to be 17,564 € per year of denture satisfaction gained above of which the alternative with six implants is preferable over treatment including four implants. Sensitivity analysis yielded that, depending on the specification of model input parameters such as patients' denture satisfaction, the respective cost-effectiveness threshold varies substantially. The results of the present study suggest that bar-retained maxillary overdentures based on six implants provide better patient satisfaction than bar-retained overdentures based on four implants but are considerably more expensive. Final judgements about value for money require more comprehensive clinical evidence including patient-centred health outcomes.

  5. An economic evaluation of maxillary implant overdentures based on six vs. four implants

    PubMed Central

    2014-01-01

    Background The purpose of the present study was to assess the value for money achieved by bar-retained implant overdentures based on six implants compared with four implants as treatment alternatives for the edentulous maxilla. Methods A Markov decision tree model was constructed and populated with parameter estimates for implant and denture failure as well as patient-centred health outcomes as available from recent literature. The decision scenario was modelled within a ten year time horizon and relied on cost reimbursement regulations of the German health care system. The cost-effectiveness threshold was identified above which the six-implant solution is preferable over the four-implant solution. Uncertainties regarding input parameters were incorporated via one-way and probabilistic sensitivity analysis based on Monte-Carlo simulation. Results Within a base case scenario of average treatment complexity, the cost-effectiveness threshold was identified to be 17,564 € per year of denture satisfaction gained above of which the alternative with six implants is preferable over treatment including four implants. Sensitivity analysis yielded that, depending on the specification of model input parameters such as patients’ denture satisfaction, the respective cost-effectiveness threshold varies substantially. Conclusions The results of the present study suggest that bar-retained maxillary overdentures based on six implants provide better patient satisfaction than bar-retained overdentures based on four implants but are considerably more expensive. Final judgements about value for money require more comprehensive clinical evidence including patient-centred health outcomes. PMID:25135370
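
    A heavily simplified, hypothetical sketch of the kind of probabilistic sensitivity analysis described in the two records above: each Monte-Carlo draw samples failure probabilities and satisfaction utilities for the two strategies, accumulates costs and satisfaction-weighted years over a ten-year horizon, and records the incremental ratio. Every number below is an assumption for illustration, not an input from the study.

```python
import numpy as np

def overdenture_psa(n_draws=5000, horizon_years=10, seed=7):
    """Toy probabilistic sensitivity analysis for 'six vs. four implants'."""
    rng = np.random.default_rng(seed)
    init_cost = {"four": 8000.0, "six": 11000.0}   # hypothetical upfront costs, EUR
    repair_cost = 1500.0                           # hypothetical cost per failure
    icers = []
    for _ in range(n_draws):
        p_fail = {"four": rng.beta(4, 60), "six": rng.beta(2, 60)}   # annual failure
        util = {"four": rng.beta(70, 30), "six": rng.beta(80, 20)}   # satisfaction
        totals = {}
        for s in ("four", "six"):
            cost, effect = init_cost[s], 0.0
            for _ in range(horizon_years):
                cost += p_fail[s] * repair_cost    # expected repair spending this year
                effect += util[s]                  # expected satisfaction-weighted year
            totals[s] = (cost, effect)
        d_cost = totals["six"][0] - totals["four"][0]
        d_eff = totals["six"][1] - totals["four"][1]
        if d_eff > 0:                              # ratio only meaningful for a gain
            icers.append(d_cost / d_eff)
    return np.percentile(icers, [2.5, 50, 97.5])

print(overdenture_psa())   # EUR per additional year of denture satisfaction
```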

  6. Oxygen saturation in healthy children aged 5 to 16 years residing in Huayllay, Peru at 4340 m.

    PubMed

    Schult, Sandra; Canelo-Aybar, Carlos

    2011-01-01

    Hypoxemia is a major life-threatening complication of childhood pneumonia. The threshold points for hypoxemia vary with altitude. However, few published data describe the normal range of variation. The purpose of this study was to establish reference values of normal mean SaO2 levels and an approximate cutoff point to define hypoxemia for clinical purposes above 4300 meters above sea level (masl). Children aged 5 to 16 yr were examined during primary care visits at the Huayllay Health Center. Huayllay is a rural community located at 4340 m in the province of Pasco in the Peruvian Andes. We collected basic sociodemographic data and evaluated three outcomes: arterial oxygen saturation (SaO2) with a pulse oximeter, heart rate, and respiratory rate. Comparisons of main outcomes among age groups (5-6, 7-8, 9-10, 11-12, 13-14, and 15-16 yr) and sex were performed using linear regression models. The correlation of SaO2 with heart rate and respiration rate was established by Pearson's correlation test. We evaluated 583 children, of whom 386 were included in the study. The average age was 10.3 yr; 55.7% were female. The average SaO2, heart rate, and respiratory rate were 85.7% (95% CI: 85.2-86.2), 80.4/min (95% CI: 79.0-81.9), and 19.9/min (95% CI: 19.6-20.2), respectively. SaO2 increased with age (p < 0.001). No differences by sex were observed. The mean minus two standard deviations of SaO2 (threshold point for hypoxemia) ranged from 73.8% to 81.8% by age group. At 4300 m, the reference values for hypoxemia may be 14.2% lower than at sea level. This difference must be considered when diagnosing hypoxemia or deciding oxygen supplementation at high altitude. Other studies are needed to determine whether this reference value is appropriate for clinical use.

  7. Detection and classification of alarm threshold violations in condition monitoring systems working in highly varying operational conditions

    NASA Astrophysics Data System (ADS)

    Strączkiewicz, M.; Barszcz, T.; Jabłoński, A.

    2015-07-01

    All commonly used condition monitoring systems (CMS) enable defining alarm thresholds that support efficient surveillance and maintenance of the dynamic state of machinery. The thresholds are imposed on measured values such as vibration-based indicators, temperature, pressure, etc. For complex machinery such as a wind turbine (WT), the total number of thresholds can run into the hundreds, multiplied by the number of operational states. All the parameters vary not only due to possible machinery malfunctions, but also due to changes in operating conditions, and the latter changes are typically much stronger than the former. Such behavior can often lead to hundreds of false alarms. Therefore, the authors propose a novel approach based on a parameterized description of the threshold violation. For this purpose, novelty and severity factors are introduced. The first refers to the time of violation occurrence, while the second describes the impact of the indicator increase on the entire machine. This approach increases the reliability of the CMS by providing the operator with the most useful information about system events. The idea of the procedure is demonstrated on simulated data similar to those from a wind turbine.

  8. The conventional tuning fork as a quantitative tool for vibration threshold.

    PubMed

    Alanazy, Mohammed H; Alfurayh, Nuha A; Almweisheer, Shaza N; Aljafen, Bandar N; Muayqil, Taim

    2018-01-01

    This study was undertaken to describe a method for quantifying vibration when using a conventional tuning fork (CTF) in comparison to a Rydel-Seiffer tuning fork (RSTF) and to provide reference values. Vibration thresholds at index finger and big toe were obtained in 281 participants. Spearman's correlations were performed. Age, weight, and height were analyzed for their covariate effects on vibration threshold. Reference values at the fifth percentile were obtained by quantile regression. The correlation coefficients between CTF and RSTF values at finger/toe were 0.59/0.64 (P = 0.001 for both). Among covariates, only age had a significant effect on vibration threshold. Reference values for CTF at finger/toe for the age groups 20-39 and 40-60 years were 7.4/4.9 and 5.8/4.6 s, respectively. Reference values for RSTF at finger/toe for the age groups 20-39 and 40-60 years were 6.9/5.5 and 6.2/4.7, respectively. CTF provides quantitative values that are as good as those provided by RSTF. Age-stratified reference data are provided. Muscle Nerve 57: 49-53, 2018. © 2017 Wiley Periodicals, Inc.

  9. ToTCompute: A Novel EEG-Based TimeOnTask Threshold Computation Mechanism for Engagement Modelling and Monitoring

    ERIC Educational Resources Information Center

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2016-01-01

    Engagement influences participation, progression and retention in game-based e-learning (GBeL). Therefore, GBeL systems should engage the players in order to support them to maximize their learning outcomes, and provide the players with adequate feedback to maintain their motivation. Innovative engagement monitoring solutions based on players'…

  10. Should English healthcare providers be penalised for failing to collect patient-reported outcome measures? A retrospective analysis

    PubMed Central

    Street, Andrew; Gomes, Manuel; Bojke, Chris

    2015-01-01

    Summary Objective The best practice tariff for hip and knee replacement in the English National Health Service (NHS) rewards providers based on improvements in patient-reported outcome measures (PROMs) collected before and after surgery. Providers only receive a bonus if at least 50% of their patients complete the preoperative questionnaire. We determined how many providers failed to meet this threshold prior to the policy introduction and assessed longitudinal stability of participation rates. Design Retrospective observational study using data from Hospital Episode Statistics and the national PROM programme from April 2009 to March 2012. We calculated participation rates based on either (a) all PROM records or (b) only those that could be linked to inpatient records; constructed confidence intervals around rates to account for sampling variation; applied precision weighting to allow for volume; and applied risk adjustment. Setting NHS hospitals and private providers in England. Participants NHS patients undergoing elective unilateral hip and knee replacement surgery. Main outcome measures Number of providers with participation rates statistically significantly below 50%. Results Crude rates identified many providers that failed to achieve the 50% threshold but there were substantially fewer after adjusting for uncertainty and precision. While important, risk adjustment required restricting the analysis to linked data. Year-on-year correlation between provider participation rates was moderate. Conclusions Participation rates have improved over time and only a small number of providers now fall below the threshold, but administering preoperative questionnaires remains problematic in some providers. We recommend that participation rates are based on linked data and take into account sampling variation. PMID:25827906
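
    One simple way to "adjust for uncertainty", as the study does, is to flag a provider only when the whole confidence interval of its participation rate lies below 50%. The sketch below uses a Wilson score interval; the study's exact interval construction and precision weighting may differ.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def below_threshold(k_preop_questionnaires, n_operations, threshold=0.5):
    """Flag a provider only if the whole CI sits below the participation threshold."""
    lo, hi = wilson_ci(k_preop_questionnaires, n_operations)
    return hi < threshold, (round(lo, 3), round(hi, 3))

# a small provider at 45% crude participation is not flagged once sampling
# variation is taken into account, while a large one at the same rate is
print(below_threshold(45, 100))    # (False, (0.356, 0.548))
print(below_threshold(450, 1000))  # (True,  (0.419, 0.481))
```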

  11. The importance of reference materials in doping-control analysis.

    PubMed

    Mackay, Lindsey G; Kazlauskas, Rymantas

    2011-08-01

    Currently a large range of pure substance reference materials are available for calibration of doping-control methods. These materials enable traceability to the International System of Units (SI) for the results generated by World Anti-Doping Agency (WADA)-accredited laboratories. Only a small number of prohibited substances have threshold limits for which quantification is highly important. For these analytes only the highest quality reference materials that are available should be used. Many prohibited substances have no threshold limits and reference materials provide essential identity confirmation. For these reference materials the correct identity is critical and the methods used to assess identity in these cases should be critically evaluated. There is still a lack of certified matrix reference materials to support many aspects of doping analysis. However, in key areas a range of urine matrix materials have been produced for substances with threshold limits, for example 19-norandrosterone and testosterone/epitestosterone (T/E) ratio. These matrix-certified reference materials (CRMs) are an excellent independent means of checking method recovery and bias and will typically be used in method validation and then regularly as quality-control checks. They can be particularly important in the analysis of samples close to threshold limits, in which measurement accuracy becomes critical. Some reference materials for isotope ratio mass spectrometry (IRMS) analysis are available and a matrix material certified for steroid delta values is currently under production. In other new areas, for example the Athlete Biological Passport, peptide hormone testing, designer steroids, and gene doping, reference material needs still need to be thoroughly assessed and prioritised.

  12. Pressure and cold pain threshold reference values in a large, young adult, pain-free population.

    PubMed

    Waller, Robert; Smith, Anne Julia; O'Sullivan, Peter Bruce; Slater, Helen; Sterling, Michele; McVeigh, Joanne Alexandra; Straker, Leon Melville

    2016-10-01

    Currently there is a lack of large population studies that have investigated pain sensitivity distributions in healthy pain free people. The aims of this study were: (1) to provide sex-specific reference values of pressure and cold pain thresholds in young pain-free adults; (2) to examine the association of potential correlates of pain sensitivity with pain threshold values. This study investigated sex specific pressure and cold pain threshold estimates for young pain free adults aged 21-24 years. A cross-sectional design was utilised using participants (n=617) from the Western Australian Pregnancy Cohort (Raine) Study at the 22-year follow-up. The association of site, sex, height, weight, smoking, health related quality of life, psychological measures and activity with pain threshold values was examined. Pressure pain threshold (lumbar spine, tibialis anterior, neck and dorsal wrist) and cold pain threshold (dorsal wrist) were assessed using standardised quantitative sensory testing protocols. Reference values for pressure pain threshold (four body sites) stratified by sex and site, and cold pain threshold (dorsal wrist) stratified by sex are provided. Statistically significant, independent correlates of increased pressure pain sensitivity measures were site (neck, dorsal wrist), sex (female), higher waist-hip ratio and poorer mental health. Statistically significant, independent correlates of increased cold pain sensitivity measures were, sex (female), poorer mental health and smoking. These data provide the most comprehensive and robust sex specific reference values for pressure pain threshold specific to four body sites and cold pain threshold at the dorsal wrist for young adults aged 21-24 years. Establishing normative values in this young age group is important given that the transition from adolescence to adulthood is a critical temporal period during which trajectories for persistent pain can be established. These data will provide an important research resource to enable more accurate profiling and interpretation of pain sensitivity in clinical pain disorders in young adults. The robust and comprehensive data can assist interpretation of future clinical pain studies and provide further insight into the complex associations of pain sensitivity that can be used in future research. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  13. Quantifying patterns of change in marine ecosystem response to multiple pressures.

    PubMed

    Large, Scott I; Fay, Gavin; Friedland, Kevin D; Link, Jason S

    2015-01-01

    The ability to understand and ultimately predict ecosystem response to multiple pressures is paramount to the successful implementation of ecosystem-based management. Threshold shifts and nonlinear patterns in ecosystem responses can be used to determine reference points that identify levels of a pressure that may drastically alter ecosystem status, which can inform management action. However, quantifying ecosystem reference points has proven elusive due in large part to the multi-dimensional nature of both ecosystem pressures and ecosystem responses. We used ecological indicators, synthetic measures of ecosystem status and functioning, to enumerate important ecosystem attributes and to reduce the complexity of the Northeast Shelf Large Marine Ecosystem (NES LME). Random forests were used to quantify the importance of four environmental and four anthropogenic pressure variables to the value of ecological indicators, and to quantify shifts in aggregate ecological indicator response along pressure gradients. Anthropogenic pressure variables were critical defining features and were able to predict an average of 8-13% (up to 25-66% for individual ecological indicators) of the variation in ecological indicator values, whereas environmental pressures were able to predict an average of 1-5% (up to 9-26% for individual ecological indicators) of ecological indicator variation. Each pressure variable predicted variation in a different suite of ecological indicators, and the shapes of ecological indicator responses along pressure gradients were generally nonlinear. Threshold shifts in ecosystem response to exploitation, the most important pressure variable, occurred when commercial landings were 20% and 60% of total surveyed biomass. Although present, threshold shifts in ecosystem response to environmental pressures were much less important, which suggests that anthropogenic pressures have significantly altered the ecosystem structure and functioning of the NES LME. Gradient response curves provide ecologically informed transformations of pressure variables to explain patterns of ecosystem structure and functioning. By concurrently identifying thresholds for a suite of ecological indicator responses to multiple pressures, we demonstrate that ecosystem reference points can be evaluated and used to support ecosystem-based management.
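    As a rough illustration of the analysis pattern described above, the sketch below fits a random forest to predict one ecological indicator from several pressure variables, ranks the pressures by importance, and traces the fitted response along the dominant pressure gradient to look for a threshold-like shift. The data and variable names are entirely hypothetical; this is not the study's NES LME analysis.

    ```python
    # Minimal sketch (hypothetical data): random-forest importance ranking plus
    # a gradient response curve for the most important pressure variable.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 300
    pressures = rng.uniform(0, 1, size=(n, 4))          # 4 made-up pressure variables
    exploitation = pressures[:, 0]
    # Hypothetical indicator with a built-in abrupt shift along "exploitation".
    indicator = np.where(exploitation > 0.6, 2.0, 5.0) + rng.normal(0, 0.5, n)

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(pressures, indicator)

    names = ["exploitation", "temperature", "nutrient_load", "chlorophyll"]
    for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
        print(f"{name:>15s}  importance = {imp:.2f}")

    # Gradient response curve: vary the top pressure, hold the others at their
    # medians; an abrupt drop in the prediction marks a candidate reference point.
    grid = np.linspace(0, 1, 25)
    probe = np.tile(np.median(pressures, axis=0), (len(grid), 1))
    probe[:, 0] = grid
    for x, y in zip(grid, rf.predict(probe)):
        print(f"exploitation = {x:.2f}   predicted indicator = {y:.2f}")
    ```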

  14. Audibility threshold spectrum for prominent discrete tone analysis

    NASA Astrophysics Data System (ADS)

    Kimizuka, Ikuo

    2005-09-01

    To evaluate the annoyance of tonal components in noise emissions, ANSI S1.13 (for general purposes) and ISO 7779/ECMA-74 (dedicated to IT equipment) specify two similar metrics: tone-to-noise ratio (TNR) and prominence ratio (PR). Using one or both of these parameters, a noise in question with a sharp spectral peak is analyzed by high-resolution FFT and classified as prominent when it exceeds a criterion curve. Under the present procedures, however, this designation depends only on the spectral shape. To resolve this problem, the author proposes a threshold spectrum of human audibility. The spectrum is based on the reference threshold of hearing defined in ISO 389-7 and/or ISO 226. With this spectrum, one can objectively determine whether the noise peak in question is audible by simply comparing the peak amplitude of the noise emission with the corresponding threshold value. Applying the threshold, one can avoid overkill or unnecessary action for a peak whose absolute amplitude is too low to be audible.
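    The comparison the author proposes can be sketched as follows: take the discrete-tone peak level from the FFT spectrum and compare it with a threshold-of-hearing curve interpolated at the tone frequency. The threshold points below are rough illustrative numbers only, not the normative ISO 389-7 / ISO 226 data, and the function name is hypothetical.

    ```python
    # Sketch of the proposed audibility check: a tone is flagged audible only if
    # its peak level exceeds the hearing threshold interpolated at its frequency.
    # Threshold values here are illustrative placeholders, not ISO data.
    import numpy as np

    freq_hz = np.array([125, 250, 500, 1000, 2000, 4000, 8000])
    thr_db_spl = np.array([22.0, 11.0, 4.0, 2.0, -1.0, -4.0, 12.0])  # illustrative

    def tone_is_audible(tone_freq_hz: float, tone_level_db_spl: float) -> bool:
        """True if the tone level exceeds the (interpolated) hearing threshold."""
        # Interpolate the threshold curve on a log-frequency axis.
        thr = np.interp(np.log10(tone_freq_hz), np.log10(freq_hz), thr_db_spl)
        return tone_level_db_spl > thr

    print(tone_is_audible(3000, -5.0))   # low-level peak: below threshold, inaudible
    print(tone_is_audible(3000, 20.0))   # clearly above threshold: audible
    ```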

  15. Ecological thresholds as a basis for defining management triggers for National Park Service vital signs: case studies for dryland ecosystems

    USGS Publications Warehouse

    Bowker, Matthew A.; Miller, Mark E.; Belote, R. Travis; Garman, Steven L.

    2013-01-01

    Threshold concepts are used in research and management of ecological systems to describe and interpret abrupt and persistent reorganization of ecosystem properties (Walker and Meyers, 2004; Groffman and others, 2006). Abrupt change, referred to as a threshold crossing, and the progression of reorganization can be triggered by one or more interactive disturbances such as land-use activities and climatic events (Paine and others, 1998). Threshold crossings occur when feedback mechanisms that typically absorb forces of change are replaced with those that promote development of alternative equilibria or states (Suding and others, 2004; Walker and Meyers, 2004; Briske and others, 2008). The alternative states that emerge from a threshold crossing vary and often exhibit reduced ecological integrity and value in terms of management goals relative to the original or reference system. Alternative stable states with some limited residual properties of the original system may develop along the progression after a crossing; an eventual outcome may be the complete loss of pre-threshold properties of the original ecosystem. Reverting to the more desirable reference state through ecological restoration becomes increasingly difficult and expensive along the progression gradient and may eventually become impossible. Ecological threshold concepts have been applied as a heuristic framework and to aid in the management of rangelands (Bestelmeyer, 2006; Briske and others, 2006, 2008), aquatic (Scheffer and others, 1993; Rapport and Whitford 1999), riparian (Stringham and others, 2001; Scott and others, 2005), and forested ecosystems (Allen and others, 2002; Digiovinazzo and others, 2010). These concepts are also topical in ecological restoration (Hobbs and Norton 1996; Whisenant 1999; Suding and others, 2004; King and Hobbs, 2006) and ecosystem sustainability (Herrick, 2000; Chapin and others, 1996; Davenport and others, 1998). Achieving conservation management goals requires the protection of resources within the range of desired conditions (Cook and others, 2010). The goal of conservation management for natural resources in the U.S. National Park System is to maintain native species and habitat unimpaired for the enjoyment of future generations. Achieving this goal requires, in part, early detection of system change and timely implementation of remediation. The recent National Park Service Inventory and Monitoring program (NPS I&M) was established to provide early warning of declining ecosystem conditions relative to a desired native or reference system (Fancy and others, 2009). To be an effective tool for resource protection, monitoring must be designed to alert managers of impending thresholds so that preventive actions can be taken. This requires an understanding of the ecosystem attributes and processes associated with threshold-type behavior; how these attributes and processes become degraded; and how risks of degradation vary among ecosystems and in relation to environmental factors such as soil properties, climatic conditions, and exposure to stressors. In general, the utility of the threshold concept for long-term monitoring depends on the ability of scientists and managers to detect, predict, and prevent the occurrence of threshold crossings associated with persistent, undesirable shifts among ecosystem states (Briske and others, 2006). 
Because of the scientific challenges associated with understanding these factors, the application of threshold concepts to monitoring designs has been very limited to date (Groffman and others, 2006). As a case in point, the monitoring efforts across the 32 NPS I&M networks were largely designed with the knowledge that they would not be used to their full potential until the development of a systematic method for understanding threshold dynamics and methods for estimating key attributes of threshold crossings. This report describes and demonstrates a generalized approach that we implemented to formalize the understanding and estimation of threshold dynamics for terrestrial dryland ecosystems in national parks of the Colorado Plateau. We provide a structured approach to identify and describe degradation processes associated with threshold behavior and to estimate indicator levels that characterize the point at which a threshold crossing has occurred or is imminent (tipping points) or points where investigative or preventive management action should be triggered (assessment points). We illustrate this method for several case studies in national parks included in the Northern and Southern Colorado Plateau NPS I&M networks, where historical livestock grazing, climatic change, and invasive species are key agents of change. The approaches developed in these case studies are intended to enhance the design, effectiveness, and management relevance of monitoring efforts in support of conservation management in dryland systems. They specifically enhance National Park Service (NPS) capacity for protecting park resources on the Colorado Plateau but have applicability to monitoring and conservation management of dryland ecosystems worldwide.

  16. Identifying Thresholds for Ecosystem-Based Management

    PubMed Central

    Samhouri, Jameal F.; Levin, Phillip S.; Ainsworth, Cameron H.

    2010-01-01

    Background One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. Methodology/Principal Findings To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. Conclusions/Significance For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management. PMID:20126647
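    The core of the approach described above is locating the pressure level at which the attribute-pressure relationship bends sharply. A minimal way to operationalise that, on synthetic data, is to fit a piecewise-linear curve and read off the breakpoint as the candidate utility threshold; the data and functional form below are illustrative, not the study's ecosystem model.

    ```python
    # Minimal sketch (synthetic data): estimate a utility threshold as the
    # breakpoint of a piecewise-linear fit of an ecosystem attribute
    # (e.g. sablefish density) against a pressure (e.g. fishing).
    import numpy as np
    from scipy.optimize import curve_fit

    def piecewise(x, x0, y0, k1, k2):
        # Two line segments joined continuously at the breakpoint x0.
        return np.where(x < x0, y0 + k1 * (x - x0), y0 + k2 * (x - x0))

    rng = np.random.default_rng(4)
    pressure = np.sort(rng.uniform(0, 1, 200))
    attribute = piecewise(pressure, 0.55, 3.0, -0.5, -6.0) + rng.normal(0, 0.2, 200)

    params, _ = curve_fit(piecewise, pressure, attribute, p0=[0.5, 3.0, -1.0, -1.0])
    print(f"estimated utility threshold (breakpoint): pressure ≈ {params[0]:.2f}")
    ```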

  17. Cost-effectiveness thresholds: pros and cons.

    PubMed

    Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-12-01

    Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this - in addition to uncertainty in the modelled cost-effectiveness ratios - can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations - e.g. budget impact and feasibility considerations - in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.
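    For concreteness, the GDP-multiple decision rule discussed above reduces to a simple comparison of an incremental cost-effectiveness ratio (ICER) against one and three times GDP per capita. The figures and function name below are hypothetical, and the sketch deliberately ignores the contextual factors (budget impact, feasibility) the abstract argues should accompany any threshold.

    ```python
    # Tiny illustration of the GDP-multiple rule: classify an ICER
    # (cost per DALY averted or QALY gained) against 1x and 3x GDP per capita.
    def classify_icer(icer: float, gdp_per_capita: float) -> str:
        if icer < gdp_per_capita:
            return "highly cost-effective (< 1x GDP per capita)"
        if icer < 3 * gdp_per_capita:
            return "cost-effective (1-3x GDP per capita)"
        return "not cost-effective (> 3x GDP per capita)"

    print(classify_icer(icer=4500.0, gdp_per_capita=3000.0))  # falls in the 1-3x band
    ```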

  18. Flood and landslide warning based on rainfall thresholds and soil moisture indexes: the HEWS (Hydrohazards Early Warning System) for Sicily

    NASA Astrophysics Data System (ADS)

    Brigandì, Giuseppina; Tito Aronica, Giuseppe; Bonaccorso, Brunella; Gueli, Roberto; Basile, Giuseppe

    2017-09-01

    The main focus of the paper is to present a flood and landslide early warning system, named HEWS (Hydrohazards Early Warning System), specifically developed for the Civil Protection Department of Sicily and based on the combined use of rainfall thresholds, soil moisture modelling and quantitative precipitation forecast (QPF). The warning system covers the 9 Alert Zones into which Sicily has been divided and is based on a threshold system with three increasing critical levels: ordinary, moderate and high. In this system, for early flood warning, a Soil Moisture Accounting (SMA) model provides daily soil moisture conditions, which are used to select a specific set of three rainfall thresholds, one for each critical level, for issuing the alert bulletin. Wetness indexes, representative of the soil moisture conditions of a catchment, are calculated using a simple, spatially lumped rainfall-streamflow model based on the SCS-CN method and the unit hydrograph approach, which requires daily observed and/or predicted rainfall and temperature data as input. For the calibration of this model, daily continuous time series of rainfall, streamflow and air temperature data are used. An event-based lumped rainfall-runoff model was instead used to derive the rainfall thresholds for each catchment in Sicily with an area larger than 50 km2. In particular, a Kinematic Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall was developed for this purpose. For rainfall-induced shallow landslide warning, the empirical rainfall thresholds provided by Gariano et al. (2015) have been included in the system. They were derived on an empirical basis from a catalogue of 265 shallow landslides in Sicily in the period 2002-2012. Finally, the Delft-FEWS operational forecasting platform has been applied to link input data, the SMA model and the rainfall threshold models to produce warnings on a daily basis for the entire region.
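    The SCS-CN routine mentioned above converts total event rainfall into net rainfall (direct runoff) through the standard curve-number relations. The sketch below shows that computation in isolation; the curve number and rainfall depth are hypothetical example values, and the full HEWS chain of course does much more than this.

    ```python
    # Minimal sketch of the SCS-CN net-rainfall (direct runoff) computation used
    # as the abstraction routine inside lumped rainfall-runoff models of this kind.
    def scs_cn_runoff(rain_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
        """Direct runoff depth (mm) from total event rainfall via the SCS-CN method."""
        s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
        ia = ia_ratio * s                 # initial abstraction (mm)
        if rain_mm <= ia:
            return 0.0
        return (rain_mm - ia) ** 2 / (rain_mm - ia + s)

    print(scs_cn_runoff(rain_mm=60.0, cn=75.0))  # ≈ 14 mm of net rainfall
    ```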

  19. Rainfall threshold definition using an entropy decision approach and radar data

    NASA Astrophysics Data System (ADS)

    Montesarchio, V.; Ridolfi, E.; Russo, F.; Napolitano, F.

    2011-07-01

    Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation, for a given duration, that generates a critical discharge in a given river cross section. If the threshold values are exceeded, a critical situation can arise at river sites exposed to alluvial risk. It is therefore possible to directly compare observed or forecasted precipitation with critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis in terms of correctly issued warnings, false alarms and missed alarms.
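    The general idea of picking a rainfall threshold by minimising a utility function can be sketched with a much simpler stand-in than the entropy-based utility used in the study: score each candidate threshold by a weighted count of missed alarms and false alarms over a set of (here simulated) events and keep the minimiser. All data, cost weights, and names below are made up.

    ```python
    # Simplified, generic sketch of threshold selection by utility minimisation.
    # This is NOT the paper's entropy-based utility function.
    import numpy as np

    rng = np.random.default_rng(1)
    rain_24h = rng.gamma(shape=2.0, scale=20.0, size=500)          # event rainfall (mm)
    critical = rain_24h + rng.normal(0, 15, 500) > 80.0            # "critical discharge" events

    def warning_cost(threshold_mm, rainfall, critical, c_missed=10.0, c_false=1.0):
        warn = rainfall >= threshold_mm
        missed = np.sum(critical & ~warn)     # critical event, no warning issued
        false = np.sum(~critical & warn)      # warning issued, nothing happened
        return c_missed * missed + c_false * false

    candidates = np.arange(20, 150, 1.0)
    costs = [warning_cost(t, rain_24h, critical) for t in candidates]
    best = candidates[int(np.argmin(costs))]
    print(f"selected rainfall threshold: {best:.0f} mm / 24 h")
    ```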

  20. Marginally perceptible outcome feedback, motor learning and implicit processes.

    PubMed

    Masters, Rich S W; Maxwell, Jon P; Eves, Frank F

    2009-09-01

    Participants struck 500 golf balls to a concealed target. Outcome feedback was presented at the subjective or objective threshold of awareness of each participant or at a supraliminal threshold. Participants who received fully perceptible (supraliminal) feedback learned to strike the ball onto the target, as did participants who received feedback that was only marginally perceptible (subjective threshold). Participants who received feedback that was not perceptible (objective threshold) showed no learning. Upon transfer to a condition in which the target was unconcealed, performance increased in both the subjective and the objective threshold condition, but decreased in the supraliminal condition. In all three conditions, participants reported minimal declarative knowledge of their movements, suggesting that deliberate hypothesis testing about how best to move in order to perform the motor task successfully was disrupted by the impoverished disposition of the visual outcome feedback. It was concluded that sub-optimally perceptible visual feedback evokes implicit processes.

  1. Geospatial association between adverse birth outcomes and arsenic in groundwater in New Hampshire, USA

    USGS Publications Warehouse

    Xun Shi,; Ayotte, Joseph; Akikazu Onda,; Stephanie Miller,; Judy Rees,; Diane Gilbert-Diamond,; Onega, Tracy L; Gui, Jiang; Karagas, Margaret R.; Moeschler, John B

    2015-01-01

    There is increasing evidence of the role of arsenic in the etiology of adverse human reproductive outcomes. Because drinking water can be a major source of arsenic to pregnant women, the effect of arsenic exposure through drinking water on human birth may be revealed by a geospatial association between arsenic concentration in groundwater and birth problems, particularly in a region where private wells substantially account for water supply, like New Hampshire, USA. We calculated town-level rates of preterm birth and term low birth weight (term LBW) for New Hampshire, by using data for 1997–2009 stratified by maternal age. We smoothed the rates by using a locally weighted averaging method to increase the statistical stability. The town-level groundwater arsenic probability values are from three GIS data layers generated by the US Geological Survey: probability of local groundwater arsenic concentration >1 µg/L, probability >5 µg/L, and probability >10 µg/L. We calculated Pearson’s correlation coefficients (r) between the reproductive outcomes (preterm birth and term LBW) and the arsenic probability values, at both state and county levels. For preterm birth, younger mothers (maternal age <20) have a statewide r = 0.70 between the rates smoothed with a threshold = 2,000 births and the town mean arsenic level based on the data of probability >10 µg/L; for older mothers, r = 0.19 when the smoothing threshold = 3,500; a majority of county level r values are positive based on the arsenic data of probability >10 µg/L. For term LBW, younger mothers (maternal age <25) have a statewide r = 0.44 between the rates smoothed with a threshold = 3,500 and town minimum arsenic concentration based on the data of probability >1 µg/L; for older mothers, r = 0.14 when the rates are smoothed with a threshold = 1,000 births and also adjusted by town median household income in 1999, and the arsenic values are the town minimum based on probability >10 µg/L. At the county level for younger mothers, positive r values prevail, but for older mothers, it is a mix. For both birth problems, the several most populous counties—with 60–80% of the state’s population and clustering at the southwest corner of the state—are largely consistent in having a positive r across different smoothing thresholds. We found evident spatial associations between the two adverse human reproductive outcomes and groundwater arsenic in New Hampshire, USA. However, the degree of associations and their sensitivity to different representations of arsenic level are variable. Generally, preterm birth has a stronger spatial association with groundwater arsenic than term LBW, suggesting an inconsistency in the impact of arsenic on the two reproductive outcomes. For both outcomes, younger maternal age has stronger spatial associations with groundwater arsenic.
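    The smoothing-then-correlation pattern described above can be sketched on hypothetical data: each town's rate is re-estimated by pooling nearby towns until the pooled birth count reaches the smoothing threshold, and the smoothed rates are then correlated with a groundwater arsenic indicator. The pooling scheme (nearest towns by centroid distance) is an assumed simplification of the study's locally weighted averaging; all counts, distances, and arsenic values are invented.

    ```python
    # Simplified sketch (hypothetical data) of birth-count-threshold smoothing
    # followed by a Pearson correlation with an arsenic probability indicator.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    n_towns = 120
    xy = rng.uniform(0, 100, size=(n_towns, 2))           # town centroids
    births = rng.integers(50, 800, size=n_towns)          # births per town
    arsenic = rng.uniform(0, 1, size=n_towns)             # P(arsenic > 10 ug/L), made up
    # Hypothetical outcome: risk rises mildly with arsenic.
    preterm = rng.binomial(births, 0.08 + 0.03 * arsenic)

    def smoothed_rate(i, threshold_births=2000):
        order = np.argsort(np.linalg.norm(xy - xy[i], axis=1))  # nearest towns first
        pooled = np.cumsum(births[order])
        k = np.searchsorted(pooled, threshold_births) + 1       # towns needed to reach threshold
        idx = order[:k]
        return preterm[idx].sum() / births[idx].sum()

    rates = np.array([smoothed_rate(i) for i in range(n_towns)])
    r, p = pearsonr(rates, arsenic)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
    ```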

  2. Geospatial Association between Low Birth Weight and Arsenic in Groundwater in New Hampshire, USA

    PubMed Central

    Shi, Xun; Ayotte, Joseph D.; Onda, Akikazu; Miller, Stephanie; Rees, Judy; Gilbert-Diamond, Diane; Onega, Tracy; Gui, Jiang; Karagas, Margaret; Moeschler, John

    2015-01-01

    Background There is increasing evidence of the role of arsenic in the etiology of adverse human reproductive outcomes. Since drinking water can be a major source of arsenic to pregnant women, the effect of arsenic exposure through drinking water on human birth may be revealed by a geospatial association between arsenic concentration in groundwater and birth problems, particularly in a region where private wells substantially account for water supply, like New Hampshire, US. Methods We calculated town-level rates of preterm birth and term low birth weight (term LBW) for New Hampshire, using data for 1997-2009 and stratified by maternal age. We smoothed the rates using a locally-weighted averaging method to increase the statistical stability. The town-level groundwater arsenic values are from three GIS data layers generated by the US Geological Survey: probability of local groundwater arsenic concentration > 1 μg/L, probability > 5 μg/L, and probability > 10 μg/L. We calculated Pearson's correlation coefficients (r) between the reproductive outcomes (preterm birth and term LBW) and the arsenic values, at both state and county levels. Results For preterm birth, younger mothers (maternal age < 20) have a statewide r = 0.70 between the rates smoothed with a threshold = 2,000 births and the town mean arsenic level based on the data of probability > 10 μg/L; for older mothers, r = 0.19 when the smoothing threshold = 3,500; a majority of county level r values are positive based on the arsenic data of probability > 10 μg/L. For term LBW, younger mothers (maternal age < 25) have a statewide r = 0.44 between the rates smoothed with a threshold = 3,500 and town minimum arsenic level based on the data of probability > 1 μg/L; for older mothers, r = 0.14 when the rates are smoothed with a threshold = 1,000 births and also adjusted by town median household income in 1999, and the arsenic values are the town minimum based on probability > 10 μg/L. At the county level, for younger mothers positive r values prevail, but for older mothers it is a mix. For both birth problems, the several most populous counties - with 60-80% of the state's population and clustering at the southwest corner of the state - are largely consistent in having a positive r across different smoothing thresholds. Conclusion We found evident spatial associations between the two adverse human reproductive outcomes and groundwater arsenic in New Hampshire, US. However, the degree of associations and their sensitivity to different representations of arsenic level are variable. Generally, preterm birth has a stronger spatial association with groundwater arsenic than term LBW, suggesting an inconsistency in the impact of arsenic on the two reproductive outcomes. For both outcomes, younger maternal age has stronger spatial associations with groundwater arsenic. PMID:25326895

  3. High-frequency (8 to 16 kHz) reference thresholds and intrasubject threshold variability relative to ototoxicity criteria using a Sennheiser HDA 200 earphone.

    PubMed

    Frank, T

    2001-04-01

    The first purpose of this study was to determine high-frequency (8 to 16 kHz) thresholds for standardizing reference equivalent threshold sound pressure levels (RETSPLs) for a Sennheiser HDA 200 earphone. The second and perhaps more important purpose of this study was to determine whether repeated high-frequency thresholds using a Sennheiser HDA 200 earphone had a lower intrasubject threshold variability than the ASHA 1994 significant threshold shift criteria for ototoxicity. High-frequency thresholds (8 to 16 kHz) were obtained for 100 (50 male, 50 female) normally hearing (0.25 to 8 kHz) young adults (mean age of 21.2 yr) in four separate test sessions using a Sennheiser HDA 200 earphone. The mean and median high-frequency thresholds were similar for each test session and increased as frequency increased. At each frequency, the high-frequency thresholds were not significantly (p > 0.05) different for gender, test ear, or test session. The median thresholds at each frequency were similar to the 1998 interim ISO RETSPLs; however, large standard deviations and wide threshold distributions indicated very high intersubject threshold variability, especially at 14 and 16 kHz. Threshold repeatability was determined by finding the threshold differences between each possible test session comparison (N = 6). About 98% of all of the threshold differences were within a clinically acceptable range of ±10 dB from 8 to 14 kHz. The threshold differences between each subject's second, third, and fourth minus their first test session were also found to determine whether intrasubject threshold variability was less than the ASHA 1994 criteria for determining a significant threshold shift due to ototoxicity. The results indicated a false-positive rate of 0% for a threshold shift ≥20 dB at any frequency and a false-positive rate of 2% for a threshold shift >10 dB at two consecutive frequencies. This study verified that the output of high-frequency audiometers at 0 dB HL using Sennheiser HDA 200 earphones should equal the 1998 interim ISO RETSPLs from 8 to 16 kHz. Further, because the differences between repeated thresholds were well within ±10 dB and had an extremely low false-positive rate in reference to the ASHA 1994 criteria for a significant threshold shift due to ototoxicity, a Sennheiser HDA 200 earphone can be used for serial monitoring to determine whether significant high-frequency threshold shifts have occurred for patients receiving potentially ototoxic drug therapy.
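    A serial-monitoring check built from the two shift criteria mentioned in this abstract (a worsening of at least 20 dB at any single frequency, or of more than 10 dB at two consecutive test frequencies) can be sketched as below. Only those two criteria are implemented; the full ASHA 1994 rules may include further conditions, and the audiogram values are invented.

    ```python
    # Sketch of a threshold-shift flag using the two criteria named above.
    # Thresholds are in dB HL; a positive shift means hearing worsened vs baseline.
    from typing import Sequence

    def significant_shift(baseline_db: Sequence[float], followup_db: Sequence[float]) -> bool:
        shifts = [f - b for b, f in zip(baseline_db, followup_db)]
        if any(s >= 20 for s in shifts):                       # >=20 dB at any frequency
            return True
        return any(s1 > 10 and s2 > 10                         # >10 dB at two consecutive frequencies
                   for s1, s2 in zip(shifts, shifts[1:]))

    # Example frequencies 8, 9, 10, 11.2, 12.5, 14, 16 kHz (made-up dB HL values)
    baseline = [10, 10, 15, 15, 20, 25, 30]
    followup = [10, 25, 30, 15, 20, 25, 35]
    print(significant_shift(baseline, followup))  # True: >10 dB at two consecutive frequencies
    ```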

  4. Primary care REFerral for EchocaRdiogram (REFER) in heart failure: a diagnostic accuracy study

    PubMed Central

    Taylor, Clare J; Roalfe, Andrea K; Iles, Rachel; Hobbs, FD Richard; Barton, P; Deeks, J; McCahon, D; Cowie, MR; Sutton, G; Davis, RC; Mant, J; McDonagh, T; Tait, L

    2017-01-01

    Background Symptoms of breathlessness, fatigue, and ankle swelling are common in general practice but deciding which patients are likely to have heart failure is challenging. Aim To evaluate the performance of a clinical decision rule (CDR), with or without N-Terminal pro-B type natriuretic peptide (NT-proBNP) assay, for identifying heart failure. Design and setting Prospective, observational, diagnostic validation study of patients aged >55 years, presenting with shortness of breath, lethargy, or ankle oedema, from 28 general practices in England. Method The outcome was test performance of the CDR and natriuretic peptide test in determining a diagnosis of heart failure. The reference standard was an expert consensus panel of three cardiologists. Results Three hundred and four participants were recruited, with 104 (34.2%; 95% confidence interval [CI] = 28.9 to 39.8) having a confirmed diagnosis of heart failure. The CDR+NT-proBNP had a sensitivity of 90.4% (95% CI = 83.0 to 95.3) and specificity 45.5% (95% CI = 38.5 to 52.7). NT-proBNP level alone with a cut-off <400 pg/ml had sensitivity 76.9% (95% CI = 67.6 to 84.6) and specificity 91.5% (95% CI = 86.7 to 95.0). At the lower cut-off of NT-proBNP <125 pg/ml, sensitivity was 94.2% (95% CI = 87.9 to 97.9) and specificity 49.0% (95% CI = 41.9 to 56.1). Conclusion At the low threshold of NT-proBNP <125 pg/ml, natriuretic peptide testing alone was better than a validated CDR+NT-proBNP in determining which patients presenting with symptoms went on to have a diagnosis of heart failure. The higher NT-proBNP threshold of 400 pg/ml may mean more than one in five patients with heart failure are not appropriately referred. Guideline natriuretic peptide thresholds may need to be revised. PMID:27919937

  5. Transfusion thresholds and other strategies for guiding allogeneic red blood cell transfusion.

    PubMed

    Hill, S R; Carless, P A; Henry, D A; Carson, J L; Hebert, P C; McClelland, D B; Henderson, K M

    2002-01-01

    Most clinical practice guidelines recommend restrictive red cell transfusion practices with the goal of minimising exposure to allogeneic blood (from an unrelated donor). The purpose of this review is to compare clinical outcomes in patients randomised to restrictive versus liberal transfusion thresholds (triggers). To examine the evidence on the effect of transfusion thresholds, on the use of allogeneic and/or autologous blood, and the evidence for any effect on clinical outcomes. Trials were identified by: computer searches of OVID Medline (1966 to December 2000), Current Contents (1993 to Week 48 2000), and the Cochrane Controlled Trials Register (2000 Issue 4). References in identified trials and review articles were checked and authors contacted to identify any additional studies. Controlled trials in which patients were randomised to an intervention group or to a control group. Trials were included where the intervention groups were assigned on the basis of a clear transfusion "trigger", described as a haemoglobin (Hb) or haematocrit (Hct) level below which a RBC transfusion was to be administered. Trial quality was assessed using criteria proposed by Schulz et al. (1995). Relative risks of requiring allogeneic blood transfusion, transfused blood volumes and other clinical outcomes were pooled across trials using a random effects model. Ten trials were identified that reported outcomes for a total of 1780 patients. Restrictive transfusion strategies reduced the risk of receiving a red blood cell (RBC) transfusion by a relative 42% (RR=0.58: 95%CI=0.47,0.71). This equates to an average absolute risk reduction (ARR) of 40% (95%CI=24% to 56%). The volume of RBCs transfused was reduced on average by 0.93 units (95%CI=0.36,1.5 units). However, heterogeneity between these trials was statistically significant (p<0.00001) for these outcomes. Mortality, rates of cardiac events, morbidity, and length of hospital stay were unaffected. Trials were of poor methodological quality. The limited published evidence supports the use of restrictive transfusion triggers in patients who are free of serious cardiac disease. However, most of the data on clinical outcomes were generated by a single trial. The effects of conservative transfusion triggers on functional status, morbidity and mortality, particularly in patients with cardiac disease, need to be tested in further large clinical trials. In countries with inadequate screening of donor blood the data may constitute a stronger basis for avoiding transfusion with allogeneic red cells.

  6. Time-based partitioning model for predicting neurologically favorable outcome among adults with witnessed bystander out-of-hospital CPA.

    PubMed

    Abe, Toshikazu; Tokuda, Yasuharu; Cook, E Francis

    2011-01-01

    Optimal acceptable time intervals from collapse to bystander cardiopulmonary resuscitation (CPR) for neurologically favorable outcome among adults with witnessed out-of-hospital cardiopulmonary arrest (CPA) have been unclear. Our aim was to assess the optimal acceptable thresholds of the time intervals of CPR for neurologically favorable outcome and survival using a recursive partitioning model. From January 1, 2005 through December 31, 2009, we conducted a prospective population-based observational study across Japan involving consecutive out-of-hospital CPA patients (N = 69,648) who received witnessed bystander CPR. Of 69,648 patients, 34,605 were assigned to the derivation data set and 35,043 to the validation data set. The main outcome measures were time factors associated with better outcomes; the better outcomes were survival and a neurologically favorable outcome at one month, defined as category one (good cerebral performance) or two (moderate cerebral disability) of the cerebral performance categories. Based on the recursive partitioning model from the derivation dataset (n = 34,605) to predict the neurologically favorable outcome at one month, a 5-min threshold was the acceptable time interval from collapse to CPR initiation; 11 min from collapse to ambulance arrival; 18 min from collapse to return of spontaneous circulation (ROSC); and 19 min from collapse to hospital arrival. Among the validation dataset (n = 35,043), 209/2,292 (9.1%) of all patients with the acceptable time intervals and 1,388/2,706 (52.1%) in the subgroup with the acceptable time intervals and pre-hospital ROSC showed a neurologically favorable outcome. Initiation of CPR should be within 5 min for obtaining a neurologically favorable outcome among adults with witnessed out-of-hospital CPA. Patients with the acceptable time intervals of bystander CPR and pre-hospital ROSC within 18 min could have a 50% chance of a neurologically favorable outcome.
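    The recursive-partitioning idea can be illustrated on synthetic data: fit a shallow classification tree on time-interval variables and read the split points off as candidate "acceptable" thresholds. The tree settings and all data below are invented and are not the study's model.

    ```python
    # Minimal sketch (synthetic data) of recovering time thresholds from a
    # shallow decision tree, in the spirit of recursive partitioning.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(3)
    n = 5000
    collapse_to_cpr = rng.exponential(6, n)        # minutes, synthetic
    collapse_to_rosc = rng.exponential(20, n)      # minutes, synthetic
    # Synthetic outcome: favorable only when both intervals are short (plus noise).
    favorable = ((collapse_to_cpr < 5) & (collapse_to_rosc < 18)
                 & (rng.uniform(size=n) < 0.6)).astype(int)

    X = np.column_stack([collapse_to_cpr, collapse_to_rosc])
    tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=200, random_state=0)
    tree.fit(X, favorable)
    # The printed split values approximate the built-in 5- and 18-minute cut points.
    print(export_text(tree, feature_names=["collapse_to_cpr_min", "collapse_to_rosc_min"]))
    ```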

  7. On what basis are medical cost-effectiveness thresholds set? Clashing opinions and an absence of data: a systematic review.

    PubMed

    Cameron, David; Ubels, Jasper; Norström, Fredrik

    2018-01-01

    The amount a government should be willing to invest in adopting new medical treatments has long been under debate. With many countries using formal cost-effectiveness (C/E) thresholds when examining potential new treatments and ever-growing medical costs, accurately setting the level of a C/E threshold can be essential for an efficient healthcare system. The aim of this systematic review is to describe the prominent approaches to setting a C/E threshold, compile available national-level C/E threshold data and willingness-to-pay (WTP) data, and to discern whether associations exist between these values, gross domestic product (GDP) and health-adjusted life expectancy (HALE). This review further examines current obstacles faced with the presently available data. A systematic review was performed to collect articles which have studied national C/E thresholds and willingness-to-pay (WTP) per quality-adjusted life year (QALY) in the general population. Associations between GDP, HALE, WTP, and C/E thresholds were analyzed with correlations. Seventeen countries were identified from nine unique sources to have formal C/E thresholds within our inclusion criteria. Thirteen countries from nine sources were identified to have WTP per QALY data within our inclusion criteria. Two possible associations were identified: C/E thresholds with HALE (quadratic correlation of 0.63), and C/E thresholds with GDP per capita (polynomial correlation of 0.84). However, these results are based on few observations and therefore firm conclusions cannot be made. Most national C/E thresholds identified in our review fall within the WHO's recommended range of one-to-three times GDP per capita. However, the quality and quantity of data available regarding national average WTP per QALY, opportunity costs, and C/E thresholds is poor in comparison to the importance of adequate investment in healthcare. There exists an obvious risk that countries might either over- or underinvest in healthcare if they base their decision-making process on erroneous presumptions or non-evidence-based methodologies. The commonly referred to value of 100,000$ USD per QALY may potentially have some basis.

  8. On what basis are medical cost-effectiveness thresholds set? Clashing opinions and an absence of data: a systematic review

    PubMed Central

    Cameron, David; Ubels, Jasper; Norström, Fredrik

    2018-01-01

    ABSTRACT Background: The amount a government should be willing to invest in adopting new medical treatments has long been under debate. With many countries using formal cost-effectiveness (C/E) thresholds when examining potential new treatments and ever-growing medical costs, accurately setting the level of a C/E threshold can be essential for an efficient healthcare system. Objectives: The aim of this systematic review is to describe the prominent approaches to setting a C/E threshold, compile available national-level C/E threshold data and willingness-to-pay (WTP) data, and to discern whether associations exist between these values, gross domestic product (GDP) and health-adjusted life expectancy (HALE). This review further examines current obstacles faced with the presently available data. Methods: A systematic review was performed to collect articles which have studied national C/E thresholds and willingness-to-pay (WTP) per quality-adjusted life year (QALY) in the general population. Associations between GDP, HALE, WTP, and C/E thresholds were analyzed with correlations. Results: Seventeen countries were identified from nine unique sources to have formal C/E thresholds within our inclusion criteria. Thirteen countries from nine sources were identified to have WTP per QALY data within our inclusion criteria. Two possible associations were identified: C/E thresholds with HALE (quadratic correlation of 0.63), and C/E thresholds with GDP per capita (polynomial correlation of 0.84). However, these results are based on few observations and therefore firm conclusions cannot be made. Conclusions: Most national C/E thresholds identified in our review fall within the WHO’s recommended range of one-to-three times GDP per capita. However, the quality and quantity of data available regarding national average WTP per QALY, opportunity costs, and C/E thresholds is poor in comparison to the importance of adequate investment in healthcare. There exists an obvious risk that countries might either over- or underinvest in healthcare if they base their decision-making process on erroneous presumptions or non-evidence-based methodologies. The commonly referred to value of 100,000$ USD per QALY may potentially have some basis. PMID:29564962

  9. Automatic recognition of falls in gait-slip training: Harness load cell based criteria.

    PubMed

    Yang, Feng; Pai, Yi-Chung

    2011-08-11

    Over-head-harness systems, equipped with load cell sensors, are essential to the participants' safety and to the outcome assessment in perturbation training. The purpose of this study was to first develop an automatic outcome recognition criterion among young adults for gait-slip training and then verify such criterion among older adults. Each of 39 young and 71 older subjects, all protected by safety harness, experienced 8 unannounced, repeated slips, while walking on a 7-m walkway. Each trial was monitored with a motion capture system, bilateral ground reaction force (GRF), harness force, and video recording. The fall trials were first unambiguously identified with careful visual inspection of all video records. The recoveries without balance loss (in which subjects' trailing foot landed anteriorly to the slipping foot) were also first fully recognized from motion and GRF analyses. These analyses then set the gold standard for the outcome recognition with load cell measurements. Logistic regression analyses based on young subjects' data revealed that the peak load cell force was the best predictor of falls (with 100% accuracy) at the threshold of 30% body weight. On the other hand, the peak moving-average force of the load cell over a 1-s period was the best predictor (with 100% accuracy) separating recoveries with backward balance loss (in which the recovery step landed posterior to the slipping foot) from harness assistance at the threshold of 4.5% body weight. These threshold values were fully verified using the data from older adults (100% accuracy in recognizing falls). Because of the increasing popularity of perturbation training coupled with a protective over-head-harness system, this new criterion could have far-reaching implications for automatic outcome recognition during movement therapy. Copyright © 2011 Elsevier Ltd. All rights reserved.
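    The two load-cell criteria reported above translate into a simple classification rule: flag a fall when the peak harness force exceeds 30% body weight, and otherwise use the peak 1-s moving-average force at 4.5% body weight to separate harness-assisted trials from balance-loss recoveries. The direction of that second criterion (higher sustained force taken to indicate harness assistance) is an assumption here, and the force trace, sampling rate, and names below are invented.

    ```python
    # Sketch of trial classification from a harness load-cell trace using the
    # 30% and 4.5% body-weight thresholds described above (hypothetical data).
    import numpy as np

    def classify_trial(force_n: np.ndarray, body_weight_n: float, fs_hz: float) -> str:
        peak = force_n.max()
        win = int(fs_hz)                                  # 1-second moving window
        moving_avg = np.convolve(force_n, np.ones(win) / win, mode="valid")
        peak_1s = moving_avg.max()
        if peak > 0.30 * body_weight_n:
            return "fall"
        if peak_1s > 0.045 * body_weight_n:
            return "harness assistance"                   # assumed direction of the criterion
        return "recovery"

    fs = 600.0                                            # Hz (example)
    bw = 700.0                                            # N  (example body weight)
    t = np.arange(0, 3, 1 / fs)
    trace = 0.02 * bw + 0.35 * bw * np.exp(-((t - 1.5) ** 2) / 0.01)   # brief force spike
    print(classify_trial(trace, bw, fs))                  # -> "fall"
    ```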

  10. AUTOMATIC RECOGNITION OF FALLS IN GAIT-SLIP: A HARNESS LOAD CELL BASED CRITERION

    PubMed Central

    Yang, Feng; Pai, Yi-Chung

    2012-01-01

    Over-head-harness systems, equipped with load cell sensors, are essential to the participants’ safety and to the outcome assessment in perturbation training. The purpose of this study was to first develop an automatic outcome recognition criterion among young adults for gait-slip training and then verify such criterion among older adults. Each of 39 young and 71 older subjects, all protected by safety harness, experienced 8 unannounced, repeated slips, while walking on a 7-m walkway. Each trial was monitored with a motion capture system, bilateral ground reaction force (GRF), harness force and video recording. The fall trials were first unambiguously identified with careful visual inspection of all video records. The recoveries without balance loss (in which subjects’ trailing foot landed anteriorly to the slipping foot) were also first fully recognized from motion and GRF analyses. These analyses then set the gold standard for the outcome recognition with load cell measurements. Logistic regression analyses based on young subjects’ data revealed that peak load cell force was the best predictor of falls (with 100% accuracy) at the threshold of 30% body weight. On the other hand, the peak moving-average force of the load cell over a 1-s period was the best predictor (with 100% accuracy) separating recoveries with backward balance loss (in which the recovery step landed posterior to the slipping foot) from harness assistance at the threshold of 4.5% body weight. These threshold values were fully verified using the data from older adults (100% accuracy in recognizing falls). Because of the increasing popularity of perturbation training coupled with a protective over-head-harness system, this new criterion could have far-reaching implications for automatic outcome recognition during movement therapy. PMID:21696744

  11. Diagnostic accuracy of spot urinary protein and albumin to creatinine ratios for detection of significant proteinuria or adverse pregnancy outcome in patients with suspected pre-eclampsia: systematic review and meta-analysis

    PubMed Central

    Morris, R K; Riley, R D; Doug, M; Deeks, J J

    2012-01-01

    Objective To determine the diagnostic accuracy of two “spot urine” tests for significant proteinuria or adverse pregnancy outcome in pregnant women with suspected pre-eclampsia. Design Systematic review and meta-analysis. Data sources Searches of electronic databases 1980 to January 2011, reference list checking, hand searching of journals, and contact with experts. Inclusion criteria Diagnostic studies, in pregnant women with hypertension, that compared the urinary spot protein to creatinine ratio or albumin to creatinine ratio with urinary protein excretion over 24 hours or adverse pregnancy outcome. Study characteristics, design, and methodological and reporting quality were objectively assessed. Data extraction Study results relating to diagnostic accuracy were extracted and synthesised using multivariate random effects meta-analysis methods. Results Twenty studies, testing 2978 women (pregnancies), were included. Thirteen studies examining protein to creatinine ratio for the detection of significant proteinuria were included in the multivariate analysis. Threshold values for protein to creatinine ratio ranged between 0.13 and 0.5, with estimates of sensitivity ranging from 0.65 to 0.89 and estimates of specificity from 0.63 to 0.87; the area under the summary receiver operating characteristics curve was 0.69. On average, across all studies, the optimum threshold (that optimises sensitivity and specificity combined) seems to be between 0.30 and 0.35 inclusive. However, no threshold gave a summary estimate above 80% for both sensitivity and specificity, and considerable heterogeneity existed in diagnostic accuracy across studies at most thresholds. No studies looked at protein to creatinine ratio and adverse pregnancy outcome. For albumin to creatinine ratio, meta-analysis was not possible. Results from a single study suggested that the most predictive result, for significant proteinuria, was with the DCA 2000 quantitative analyser (>2 mg/mmol) with a summary sensitivity of 0.94 (95% confidence interval 0.86 to 0.98) and a specificity of 0.94 (0.87 to 0.98). In a single study of adverse pregnancy outcome, results for perinatal death were a sensitivity of 0.82 (0.48 to 0.98) and a specificity of 0.59 (0.51 to 0.67). Conclusion The maternal “spot urine” estimate of protein to creatinine ratio shows promising diagnostic value for significant proteinuria in suspected pre-eclampsia. The existing evidence is not, however, sufficient to determine how protein to creatinine ratio should be used in clinical practice, owing to the heterogeneity in test accuracy and prevalence across studies. Insufficient evidence is available on the use of albumin to creatinine ratio in this area. Insufficient evidence exists for either test to predict adverse pregnancy outcome. PMID:22777026

  12. Prognostic Effect and Longitudinal Hemodynamic Assessment of Borderline Pulmonary Hypertension.

    PubMed

    Assad, Tufik R; Maron, Bradley A; Robbins, Ivan M; Xu, Meng; Huang, Shi; Harrell, Frank E; Farber-Eger, Eric H; Wells, Quinn S; Choudhary, Gaurav; Hemnes, Anna R; Brittain, Evan L

    2017-12-01

    Pulmonary hypertension (PH) is diagnosed by a mean pulmonary arterial pressure (mPAP) value of at least 25 mm Hg during right heart catheterization (RHC). While several studies have demonstrated increased mortality in patients with mPAP less than that threshold, little is known about the natural history of borderline PH. To test the hypothesis that patients with borderline PH have decreased survival compared with patients with lower mPAP and frequently develop overt PH and to identify clinical correlates of borderline PH. Retrospective cohort study from 1998 to 2014 at Vanderbilt University Medical Center, comprising all patients undergoing routine RHC for clinical indication. We extracted demographics, clinical data, invasive hemodynamics, echocardiography, and vital status for all patients. Patients with mPAP values of 18 mm Hg or less, 19 to 24 mm Hg, and at least 25 mm Hg were classified as reference, borderline PH, and PH, respectively. Mean pulmonary arterial pressure. Our primary outcome was all-cause mortality after adjusting for clinically relevant covariates in a Cox proportional hazards model. Our secondary outcome was the diagnosis of overt PH in patients initially diagnosed with borderline PH. Both outcomes were determined prior to data analysis. We identified 4343 patients (mean [SD] age, 59 [15] years, 51% women, and 86% white) among whom the prevalence of PH and borderline PH was 62% and 18%, respectively. Advanced age, features of the metabolic syndrome, and chronic heart and lung disease were independently associated with a higher likelihood of borderline PH compared with reference patients in a logistic regression model. After adjusting for 34 covariates in a Cox proportional hazards model, borderline PH was associated with increased mortality compared with reference patients (hazard ratio, 1.31; 95% CI, 1.04-1.65; P = .001). The hazard of death increased incrementally with higher mPAP, without an observed threshold. In the 70 patients with borderline PH who underwent a repeated RHC, 43 (61%) had developed overt PH, with a median increase in mPAP of 5 mm Hg (interquartile range, -1 to 11 mm Hg; P < .001). Borderline PH is common in patients undergoing RHC and is associated with significant comorbidities, progression to overt PH, and decreased survival. Small increases in mPAP, even at values currently considered normal, are independently associated with increased mortality. Prospective studies are warranted to determine whether early intervention or closer monitoring improves clinical outcomes in these patients.

  13. [Effect of transparent yellow and orange colored contact lenses on color discrimination in the yellow color range].

    PubMed

    Schürer, M; Walter, A; Brünner, H; Langenbucher, A

    2015-08-01

    Colored transparent filters cause a change in color perception and have an impact on the perceptible amount of different colors and especially on the ability to discriminate between them. Yellow or orange tinted contact lenses worn to enhance contrast vision by reducing or blocking short wavelengths also have an effect on color perception. The impact of the yellow and orange tinted contact lenses Wöhlk SPORT CONTRAST on color discrimination was investigated with the Erlangen colour measurement system in a study with 14 and 16 subjects, respectively. In relation to a yellow reference color located at u' = 0.2487/v' = 0.5433, measurements of color discrimination thresholds were taken in up to 6 different color coordinate axes. Based on these thresholds, color discrimination ellipses were calculated. These results are given in the Derrington, Krauskopf and Lennie (DKL) color system. Both contact lenses caused a shift of the reference color towards higher saturated colors. Color discrimination ability with the yellow and orange colored lenses was significantly enhanced along the blue-yellow axis in comparison to the reference measurements without a tinted filter. Along the red-green axis only the orange lens caused a significant reduction of color discrimination threshold distance to the reference color. Yellow and orange tinted contact lenses enhance the ability of color discrimination. If the transmission spectra and the induced changes are taken into account, these results can also be applied to other filter media, such as blue filter intraocular lenses.

  14. Bayesian estimation of dose thresholds

    NASA Technical Reports Server (NTRS)

    Groer, P. G.; Carnes, B. A.

    2003-01-01

    An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.
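    The grid-based mechanics of estimating a posterior density for a threshold dose can be illustrated with a deliberately simplified toy model, not the paper's Weibull relative-risk model: excess risk is taken as zero below the threshold and linear in dose above it, group deaths are binomial, the prior on the threshold is flat, and the posterior is evaluated on a grid and normalised numerically. All counts and the baseline/slope parameters below are invented.

    ```python
    # Toy sketch of grid-based Bayesian estimation of a dose-threshold parameter.
    import numpy as np
    from scipy.stats import binom

    doses = np.array([0.0, 0.19, 0.38, 0.86, 1.37])      # Gy (group mean doses)
    n_animals = np.array([400, 200, 200, 200, 200])       # hypothetical group sizes
    deaths = np.array([40, 22, 30, 38, 60])               # hypothetical cause-specific deaths
    baseline, slope = 0.10, 0.15                          # assumed known, toy model only

    def likelihood(threshold: float) -> float:
        # Risk is baseline below the threshold, rises linearly above it.
        p = baseline + slope * np.clip(doses - threshold, 0.0, None)
        return float(np.prod(binom.pmf(deaths, n_animals, p)))

    grid = np.linspace(0.0, 1.4, 281)
    post = np.array([likelihood(t) for t in grid])        # flat prior => posterior ∝ likelihood
    post /= post.sum() * (grid[1] - grid[0])              # normalise to a density
    print(f"posterior mode of the threshold dose: {grid[post.argmax()]:.2f} Gy")
    ```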

  15. The anaerobic threshold: over-valued or under-utilized? A novel concept to enhance lipid optimization!

    PubMed

    Connolly, Declan A J

    2012-09-01

    The purpose of this article is to assess the value of the anaerobic threshold for use in clinical populations with the intent to improve exercise adaptations and outcomes. The anaerobic threshold is generally poorly understood, improperly used, and poorly measured. It is rarely used in clinical settings and often reserved for athletic performance testing. Increased exercise participation within both clinical and other less healthy populations has increased our attention to optimizing exercise outcomes. Of particular interest is the optimization of lipid metabolism during exercise in order to improve numerous conditions such as blood lipid profile, insulin sensitivity and secretion, and weight loss. Numerous authors report on the benefits of appropriate exercise intensity in optimizing outcomes even though regulation of intensity has proved difficult for many. Despite limited use, selected exercise physiology markers have considerable merit in exercise-intensity regulation. The anaerobic threshold, and other markers such as heart rate, may well provide a simple and valuable mechanism for regulating exercising intensity. The use of the anaerobic threshold and accurate target heart rate to regulate exercise intensity is a valuable approach that is under-utilized across populations. The measurement of the anaerobic threshold can be simplified to allow clients to use nonlaboratory measures, for example heart rate, in order to self-regulate exercise intensity and improve outcomes.

  16. Spot protein-creatinine ratio and spot albumin-creatinine ratio in the assessment of pre-eclampsia: a diagnostic accuracy study with decision-analytic model-based economic evaluation and acceptability analysis.

    PubMed

    Waugh, Jason; Hooper, Richard; Lamb, Edmund; Robson, Stephen; Shennan, Andrew; Milne, Fiona; Price, Christopher; Thangaratinam, Shakila; Berdunov, Vladislav; Bingham, Jenn

    2017-10-01

    The National Institute for Health and Care Excellence (NICE) guidelines highlighted the need for 'large, high-quality prospective studies comparing the various methods of measuring proteinuria in women with new-onset hypertensive disorders during pregnancy'. The primary objective was to evaluate quantitative assessments of spot protein-creatinine ratio (SPCR) and spot albumin-creatinine ratio (SACR) in predicting severe pre-eclampsia (PE) compared with 24-hour urine protein measurement. The secondary objectives were to investigate interlaboratory assay variation, to evaluate SPCR and SACR thresholds in predicting adverse maternal and fetal outcomes and to assess the cost-effectiveness of these models. This was a prospective diagnostic accuracy cohort study, with decision-analytic modelling and a cost-effectiveness analysis. The setting was 36 obstetric units in England, UK. Pregnant women (aged ≥ 16 years), who were at > 20 weeks' gestation with confirmed gestational hypertension and trace or more proteinuria on an automated dipstick urinalysis. Women provided a spot urine sample for protein analysis (the recruitment sample) and were asked to collect a 24-hour urine sample, which was stored for secondary analysis. A further spot sample of urine was taken immediately before delivery. Outcome data were collected from hospital records. There were four index tests on a spot sample of urine: (1) SPCR test (conducted at the local laboratory); (2) SPCR test [conducted at the central laboratory using the benzethonium chloride (BZC) assay]; (3) SPCR test [conducted at the central laboratory using the pyrogallol red (PGR) assay]; and (4) SACR test (conducted at the central laboratory using an automated chemistry analyser). The comparator tests on 24-hour urine collection were a central test using the BZC assay and a central test using the PGR assay. The primary reference standard was the NICE definition of severe PE. Secondary reference standards were a clinician diagnosis of severe PE, which is defined as treatment with magnesium sulphate or with severe PE protocol; adverse perinatal outcome; one or more of perinatal or infant mortality, bronchopulmonary dysplasia, necrotising enterocolitis or grade III/IV intraventricular haemorrhage; and economic cost and outcomes. Health service data on service use and costs followed published economic models. In total, 959 women were available for primary analysis and 417 of them had severe PE. The diagnostic accuracy of the four assays on spot urine samples against the reference standards was similar. The three SPCR tests had sensitivities in excess of 90% at prespecified thresholds, with poor specificities and negative likelihood ratios of ≥ 0.1. The SACR test had a significantly higher sensitivity of 99% (confidence interval 98% to 100%) and lower specificity. Receiver operating characteristic (ROC) curves were similar (area under ROC curve between 0.87 and 0.89); the area under the central laboratory's SACR curve was significantly higher ( p  = 0.004). The central laboratory's SACR test was the most cost-effective option, generating an additional 0.03 quality-adjusted life-years at an additional cost of £45.07 compared with the local laboratory's SPCR test. The probabilistic analysis showed it to have a 100% probability of being cost-effective at the standard willingness-to-pay threshold recommended by NICE. 
Implementation of NICE guidelines has led to an increased intervention rate in the study population, which affected recruitment rates and led to revised sample size calculations. Evidence from this clinical study does not support the recommendation of 24-hour urine sample collection in hypertensive pregnant women. The SACR test had better diagnostic performance for predicting severe pre-eclampsia. All four tests could potentially be used as rule-out tests for the NICE definition of severe PE. Testing SACR at a threshold of 8 mg/mmol should be studied as a 'rule-out' test of proteinuria. Current Controlled Trials ISRCTN82607486. This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 21, No. 61. See the NIHR Journals Library website for further project information.
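    To make the reported accuracy measures concrete, the sketch below computes sensitivity, specificity and likelihood ratios from a 2x2 table; the counts are hypothetical, not DAPPA data.

```python
# Illustrative only: diagnostic-accuracy summary measures from a 2x2 table.
# The counts below are hypothetical, not data from the study above.

def diagnostic_summary(tp, fp, fn, tn):
    """Return sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)               # proportion of severe PE cases that test positive
    specificity = tn / (tn + fp)               # proportion of non-severe cases that test negative
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio (rule-out strength)
    return sensitivity, specificity, lr_pos, lr_neg

sens, spec, lrp, lrn = diagnostic_summary(tp=400, fp=300, fn=17, tn=242)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lrp:.2f} LR-={lrn:.2f}")
```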

  17. Room-Temperature Low-Threshold Lasing from Monolithically Integrated Nanostructured Porous Silicon Hybrid Microcavities.

    PubMed

    Robbiano, Valentina; Paternò, Giuseppe M; La Mattina, Antonino A; Motti, Silvia G; Lanzani, Guglielmo; Scotognella, Francesco; Barillaro, Giuseppe

    2018-05-22

    Silicon photonics would strongly benefit from a monolithically integrated, low-threshold silicon-based laser operating at room temperature, which today represents the main challenge toward low-cost and power-efficient electronic-photonic integrated circuits. Here we demonstrate low-threshold lasing from fully transparent nanostructured porous silicon (PSi) monolithic microcavities (MCs) infiltrated with a polyfluorene derivative, namely, poly(9,9-di-n-octylfluorenyl-2,7-diyl) (PFO). The PFO-infiltrated PSiMCs support single-mode blue lasing at the resonance wavelength of 466 nm, with a line width of ∼1.3 nm and a lasing threshold of 5 nJ (15 μJ/cm²), a value that is at the state of the art for PFO lasers. Furthermore, time-resolved photoluminescence shows a significant shortening (∼57%) of the PFO emission lifetime in the PSiMCs, with respect to nonresonant PSi reference structures, confirming a dramatic variation of the radiative decay rate due to a Purcell effect. Our results, given also that blue lasing is a worst case for silicon photonics, are highly appealing for the development of low-cost, low-threshold silicon-based lasers with wavelengths tunable from the visible to the near-infrared region by simple infiltration of suitable emitting polymers into monolithically integrated nanostructured PSiMCs.
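    As a rough arithmetic illustration (with an assumed reference lifetime, not a value from the paper), the reported ∼57% lifetime shortening implies roughly a 2.3-fold faster decay rate:

```python
# Back-of-the-envelope sketch: decay-rate enhancement implied by a lifetime shortening.
# The reference lifetime below is an assumed placeholder, not a measured value.

tau_ref = 1.0          # emission lifetime in the nonresonant reference structure (arbitrary units)
shortening = 0.57      # ~57% lifetime shortening reported for the microcavity
tau_mc = tau_ref * (1 - shortening)

rate_ref = 1 / tau_ref
rate_mc = 1 / tau_mc
print(f"decay-rate enhancement ~ {rate_mc / rate_ref:.2f}x")   # ~2.33x faster decay
```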

  18. Association between the Part D coverage gap and adverse health outcomes.

    PubMed

    Polinski, Jennifer M; Shrank, William H; Glynn, Robert J; Huskamp, Haiden A; Christopher Roebuck, M; Schneeweiss, Sebastian

    2012-08-01

    To determine whether Part D coverage gap entry is associated with risk of death or hospitalization for cardiovascular outcomes. Prospective cohort study. Beneficiaries entered the study upon reaching the coverage gap spending threshold and were observed until an outcome reaching the threshold for catastrophic coverage occurred or year's end. Nine thousand four hundred thirty-six exposed individuals (those who were responsible for drug costs in the gap) were compared with 9,436 unexposed individuals (those who received financial assistance) based on propensity score (PS) or high-dimensional propensity score (hdPS). Medicare Part D drug insurance. Three hundred three thousand nine hundred seventy-eight Medicare beneficiaries aged 65 and older in 2006 and 2007 with linked prescription and medical claims who enrolled in stand-alone Part D or retiree drug plans and reached the gap spending threshold. Rates of death and hospitalization for any of five cardiovascular outcomes, including acute coronary syndrome with revascularization (ACS), after reaching the coverage gap spending threshold were compared using Cox proportional hazards models. In PS-matched analyses, exposed beneficiaries had higher, albeit not significantly so, hazard of death (hazard ratio (HR) = 1.25, 95% confidence interval (CI) = 0.98-1.59) and ACS (HR = 1.16, 95% CI = 0.83-1.62) than unexposed beneficiaries. hdPS-matched analyses minimized residual confounding and confirmed results (death: HR = 0.99, 95% CI = 0.78-1.24; ACS: HR = 1.07, 95% CI = 0.81-1.41). Exposed beneficiaries were no more or less likely to experience other outcomes than were those who were unexposed. During the short-term coverage gap period, having no financial assistance to pay for drugs was not associated with greater risk of death or hospitalization for cardiovascular causes, although long-term health consequences remain unclear. © 2012, Copyright the Authors Journal compilation © 2012, The American Geriatrics Society.
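    A minimal sketch of the style of analysis described (a Cox proportional hazards model comparing exposed versus unexposed beneficiaries) is shown below; the data frame, column names and values are hypothetical, and this is not the authors' code.

```python
# Illustrative Cox proportional hazards comparison of exposed vs. unexposed beneficiaries.
# All data and column names are hypothetical.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "weeks_to_event": [12, 20, 8, 30, 15, 25, 10, 18],  # follow-up time after entering the gap
    "event":          [1, 0, 1, 0, 0, 1, 1, 0],          # 1 = death/hospitalization, 0 = censored
    "exposed":        [1, 0, 1, 0, 1, 0, 1, 1],          # 1 = no financial assistance in the gap
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_event", event_col="event")
cph.print_summary()   # exp(coef) for 'exposed' is the hazard ratio, analogous to the reported HRs
```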

  19. A pilot study of reference vibrotactile perception thresholds on the fingertip obtained with Malaysian healthy people using ISO 13091-1 equipment.

    PubMed

    Daud, Roshada; Maeda, Setsuo; Kameel, Nur Nazmin Mustafa; Ripin, Muhamad Yunus; Bakrun, Norazman; Md Zein, Raemy; Kido, Masaharu; Higuchi, Kiyotaka

    2004-04-01

    The purpose of this paper is to establish reference vibrotactile perception thresholds (VPTs) for healthy people in Malaysia. ISO 13091-1, the standard for measurement equipment used to assess vibrotactile perception thresholds in the evaluation of nerve dysfunction, and ISO 13091-2, the standard for analysis and interpretation of measurements at the fingertips, were published by ISO/TC108/SC4/WG8 in 2001 and 2003, respectively. The reference VPT data in ISO 13091-2 were obtained from a small number of research papers and do not include data for Malaysian people. In Malaysia, when VPTs are used to diagnose hand-arm vibration syndrome, workers' values need to be compared with reference VPT data, but no Malaysian reference data are yet available. In this paper, VPTs were therefore measured using equipment conforming to ISO 13091-1 to obtain reference data for Malaysian people, and these data were compared with the reference data in ISO 13091-2. The comparison showed that the VPT data of healthy Malaysian people were consistent with the reference data of the ISO 13091-2 standard.

  20. Statistical equivalence and test-retest reliability of delay and probability discounting using real and hypothetical rewards.

    PubMed

    Matusiewicz, Alexis K; Carter, Anne E; Landes, Reid D; Yi, Richard

    2013-11-01

    Delay discounting (DD) and probability discounting (PD) refer to the reduction in the subjective value of outcomes as a function of delay and uncertainty, respectively. Elevated measures of discounting are associated with a variety of maladaptive behaviors, and confidence in the validity of these measures is imperative. The present research examined (1) the statistical equivalence of discounting measures when rewards were hypothetical or real, and (2) their 1-week reliability. While previous research has partially explored these issues using the low threshold of nonsignificant difference, the present study fully addressed this issue using the more-compelling threshold of statistical equivalence. DD and PD measures were collected from 28 healthy adults using real and hypothetical $50 rewards during each of two experimental sessions, one week apart. Analyses using area-under-the-curve measures revealed a general pattern of statistical equivalence, indicating equivalence of real/hypothetical conditions as well as 1-week reliability. Exceptions are identified and discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
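    The area-under-the-curve measure referred to above is commonly computed from normalized delays and indifference values; a minimal sketch with hypothetical indifference points:

```python
# Sketch of the area-under-the-curve (AUC) discounting measure, assuming the common
# normalization of delays and indifference values (all values below are hypothetical).

import numpy as np

delays = np.array([0, 7, 30, 90, 180, 365], dtype=float)        # days
indifference = np.array([50, 45, 38, 30, 22, 15], dtype=float)  # subjective value of a $50 reward

x = delays / delays.max()          # normalize delays to [0, 1]
y = indifference / 50.0            # normalize values by the nominal amount

auc = np.trapz(y, x)               # area under the discounting curve; 1 = no discounting
print(f"AUC = {auc:.3f}")
```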

  1. 77 FR 9288 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-16

    ... relates to posted electronic executions and to the deletion of references to Royalty Fees for foreign currency options. [Excerpt from the fee schedule: ... -0.36; Threshold 3, more than 1,200,000, -0.42; Threshold 4, more than 3,500,000, -0.43; Royalty Fees ...]

  2. Speech-in-Noise Tests and Supra-threshold Auditory Evoked Potentials as Metrics for Noise Damage and Clinical Trial Outcome Measures.

    PubMed

    Le Prell, Colleen G; Brungart, Douglas S

    2016-09-01

    In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.

  3. Cost–effectiveness thresholds: pros and cons

    PubMed Central

    Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-01-01

    Abstract Cost–effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost–effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost–effectiveness thresholds allow cost–effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization’s Commission on Macroeconomics in Health suggested cost–effectiveness thresholds based on multiples of a country’s per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this – in addition to uncertainty in the modelled cost–effectiveness ratios – can lead to the wrong decision on how to spend health-care resources. Cost–effectiveness information should be used alongside other considerations – e.g. budget impact and feasibility considerations – in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost–effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair. PMID:27994285
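    A minimal worked example of the incremental cost-effectiveness ratio and a GDP-multiple threshold comparison follows; all figures are hypothetical.

```python
# Illustrative calculation of an incremental cost-effectiveness ratio (ICER) and a
# comparison against a GDP-based threshold; all numbers are hypothetical.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per incremental unit of health effect (e.g., per DALY averted)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

gdp_per_capita = 6000.0                    # hypothetical country GDP per capita (US$)
threshold = 3 * gdp_per_capita             # one multiple suggested historically (3x GDP per DALY)

ratio = icer(cost_new=1_200_000, cost_old=800_000, effect_new=150, effect_old=100)
print(f"ICER = {ratio:.0f} per unit of health gained; "
      f"{'below' if ratio < threshold else 'above'} the {threshold:.0f} threshold")
```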

  4. Economic Analysis of Neoadjuvant Chemotherapy Versus Primary Debulking Surgery for Advanced Epithelial Ovarian Cancer Using an Aggressive Surgical Paradigm.

    PubMed

    Cole, Ashley L; Barber, Emma L; Gogate, Anagha; Tran, Arthur-Quan; Wheeler, Stephanie B

    2018-04-21

    Neoadjuvant chemotherapy (NACT) versus primary debulking surgery (PDS) for advanced epithelial ovarian cancer (AEOC) remains controversial in the United States. Generalizability of existing trial results has been criticized because of less aggressive debulking procedures than commonly used in the United States. As a result, economic evaluations using input data from these trials may not accurately reflect costs and outcomes associated with more aggressive primary surgery. Using data from an ongoing trial performing aggressive debulking, we investigated the cost-effectiveness and cost-utility of NACT versus PDS for AEOC. A decision tree model was constructed to estimate differences in short-term outcomes and costs for a hypothetical cohort of 15,000 AEOC patients (US annual incidence of AEOC) treated with NACT versus PDS over a 1-year time horizon from a Medicare payer perspective. Outcomes included costs per cancer-related death averted, life-years and quality-adjusted life-years (QALYs) gained. Base-case probabilities, costs, and utilities were based on the Surgical Complications Related to Primary or Interval Debulking in Ovarian Neoplasms trial. Base-case analyses assumed equivalent survival; threshold analysis estimated the maximum survival difference that would result in NACT being cost-effective at $50,000/QALY and $100,000/QALY willingness-to-pay thresholds. Probabilistic sensitivity analysis was used to characterize model uncertainty. Compared with PDS, NACT was associated with $142 million in cost savings, 1098 fewer cancer-related deaths, and 1355 life-years and 1715 QALYs gained, making it the dominant treatment strategy for all outcomes. In sensitivity analysis, NACT remained dominant in 99.3% of simulations. Neoadjuvant chemotherapy remained cost-effective at $50,000/QALY and $100,000/QALY willingness-to-pay thresholds if survival differences were less than 2.7 and 1.4 months, respectively. In the short term, NACT is cost-saving with improved outcomes. However, if PDS provides a longer-term survival advantage, it may be cost-effective. Research is needed on the role of patient preferences in tradeoffs between survival and quality of life.
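    The probabilistic sensitivity analysis mentioned above can be sketched as Monte Carlo draws of incremental costs and QALYs evaluated against a willingness-to-pay threshold; the distributions below are illustrative, not the trial-based inputs.

```python
# Minimal sketch of a probabilistic sensitivity analysis: draw incremental costs and
# QALYs from assumed distributions and report how often the new strategy is
# cost-effective at a willingness-to-pay threshold. Distributions are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
wtp = 100_000  # $/QALY willingness-to-pay threshold

inc_cost = rng.normal(loc=-9_500, scale=4_000, size=n)     # negative = cost saving per patient
inc_qaly = rng.normal(loc=0.11, scale=0.05, size=n)        # incremental QALYs per patient

net_monetary_benefit = wtp * inc_qaly - inc_cost
prob_cost_effective = np.mean(net_monetary_benefit > 0)
prob_dominant = np.mean((inc_cost < 0) & (inc_qaly > 0))   # cheaper and more effective

print(f"P(cost-effective at ${wtp:,}/QALY) = {prob_cost_effective:.1%}")
print(f"P(dominant) = {prob_dominant:.1%}")
```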

  5. Identification of Pure-Tone Audiologic Thresholds for Pediatric Cochlear Implant Candidacy: A Systematic Review.

    PubMed

    de Kleijn, Jasper L; van Kalmthout, Ludwike W M; van der Vossen, Martijn J B; Vonck, Bernard M D; Topsakal, Vedat; Bruijnzeel, Hanneke

    2018-05-24

    Although current guidelines recommend cochlear implantation only for children with profound hearing impairment (HI) (>90 decibel [dB] hearing level [HL]), studies show that children with severe hearing impairment (>70-90 dB HL) could also benefit from cochlear implantation. To perform a systematic review to identify audiologic thresholds (in dB HL) that could serve as an audiologic candidacy criterion for pediatric cochlear implantation using 4 domains of speech and language development as independent outcome measures (speech production, speech perception, receptive language, and auditory performance). PubMed and Embase databases were searched up to June 28, 2017, to identify studies comparing speech and language development between children who were profoundly deaf using cochlear implants and children with severe hearing loss using hearing aids, because no studies are available directly comparing children with severe HI in both groups. If cochlear implant users with profound HI score better on speech and language tests than those with severe HI who use hearing aids, this outcome could support adjusting cochlear implantation candidacy criteria to lower audiologic thresholds. Literature search, screening, and article selection were performed using a predefined strategy. Article screening was executed independently by 4 authors in 2 pairs; consensus on article inclusion was reached by discussion between these 4 authors. This study is reported according to the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) statement. Title and abstract screening of 2822 articles resulted in selection of 130 articles for full-text review. Twenty-one studies were selected for critical appraisal, resulting in selection of 10 articles for data extraction. Two studies formulated audiologic thresholds (in dB HLs) at which children could qualify for cochlear implantation: (1) at 4-frequency pure-tone average (PTA) thresholds of 80 dB HL or greater based on speech perception and auditory performance subtests and (2) at PTA thresholds of 88 and 96 dB HL based on a speech perception subtest. In 8 of the 18 outcome measures, children with profound HI using cochlear implants performed similarly to children with severe HI using hearing aids. Better performance of cochlear implant users was shown with a picture-naming test and a speech perception in noise test. Owing to large heterogeneity in study population and selected tests, it was not possible to conduct a meta-analysis. Studies indicate that lower audiologic thresholds (≥80 dB HL) than are advised in current national and manufacturer guidelines would be appropriate as audiologic candidacy criteria for pediatric cochlear implantation.
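    A minimal sketch of a 4-frequency pure-tone average check against an 80 dB HL criterion follows; it assumes the common 500/1000/2000/4000 Hz frequency set, and the thresholds are hypothetical.

```python
# Sketch of a 4-frequency pure-tone average (PTA), here assumed to be the mean of
# thresholds at 500, 1000, 2000 and 4000 Hz, checked against an 80 dB HL candidacy
# level. Thresholds are hypothetical.

def four_freq_pta(thresholds_db_hl):
    """Mean threshold (dB HL) across the four audiometric frequencies."""
    assert len(thresholds_db_hl) == 4
    return sum(thresholds_db_hl) / 4

child = {"500": 70, "1000": 80, "2000": 90, "4000": 95}   # dB HL, better ear
pta = four_freq_pta(list(child.values()))
print(f"PTA = {pta:.1f} dB HL -> {'meets' if pta >= 80 else 'does not meet'} an 80 dB HL criterion")
```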

  6. Public sector low threshold office-based buprenorphine treatment: outcomes at year 7.

    PubMed

    Bhatraju, Elenore Patterson; Grossman, Ellie; Tofighi, Babak; McNeely, Jennifer; DiRocco, Danae; Flannery, Mara; Garment, Ann; Goldfeld, Keith; Gourevitch, Marc N; Lee, Joshua D

    2017-02-28

    Buprenorphine maintenance for opioid dependence remains of limited availability among underserved populations, despite increases in US opioid misuse and overdose deaths. Low threshold primary care treatment models, including the use of unobserved ("home") buprenorphine induction, may simplify initiation of care and improve access. Unobserved induction and long-term treatment outcomes have not been reported recently among large, naturalistic cohorts treated in low threshold safety net primary care settings. This prospective clinical registry cohort design estimated rates of induction-related adverse events, treatment retention, and urine opioid results for opioid dependent adults offered buprenorphine maintenance in a New York City public hospital primary care office-based practice from 2006 to 2013. This clinic relied on typical ambulatory care individual provider-patient visits, prescribed unobserved induction exclusively, saw patients no more than weekly, and did not require additional psychosocial treatment. Unobserved induction consisted of an in-person screening and diagnostic visit followed by a 1-week written buprenorphine prescription, a pamphlet, and telephone support. Primary outcomes analyzed were rates of induction-related adverse events (AE), week 1 drop-out, and long-term treatment retention. Factors associated with treatment retention were examined using a Cox proportional hazard model among inductions and all patients. Secondary outcomes included overall clinic retention, buprenorphine dosages, and urine sample results. Of the 485 total patients in our registry, 306 were inducted, and 179 were transfers already on buprenorphine. Post-induction (n = 306), week 1 drop-out was 17%. Rates of any induction-related AE were 12%; serious adverse events, 0%; precipitated withdrawal, 3%; prolonged withdrawal, 4%. Treatment retention was a median 38 weeks (range 0-320) for inductions, compared to 110 (0-354) weeks for transfers and 57 for the entire clinic population. Older age, later years of first clinic visit (vs. 2006-2007), and baseline heroin abstinence were associated with increased treatment retention overall. Unobserved "home" buprenorphine induction in a public sector primary care setting appeared to be a feasible and safe clinical practice. Post-induction treatment retention of a median 38 weeks was in line with previous naturalistic studies of real-world office-based opioid treatment. Low threshold treatment protocols, as compared to national guidelines, may complement recently increased prescriber patient limits and expand access to buprenorphine among public sector opioid use disorder patients.

  7. Clinical-outcome-based demand management in health services.

    PubMed

    Brogan, C; Lawrence, D; Mayhew, L

    2008-01-01

    THE PROBLEM OF MANAGING DEMAND: Most healthcare systems have 'third-party payers' who face the problem of keeping within budgets despite pressures to increase resources due to the ageing population, new technologies and patient demands to lower thresholds for care. This paper uses the UK National Health Service as a case study to suggest techniques for system-based demand management, which aims to control demand and costs whilst maintaining the cost-effectiveness of the system. The technique for managing demand in primary, elective and urgent care consists of managing treatment thresholds for appropriate care, using a whole-systems approach and costing the care elements in the system. It is important to analyse activity in relation to capacity and demand. Examples of using these techniques in practice are given. The practical effects of using such techniques need evaluation. If these techniques are not used, managing demand and limiting healthcare expenditure will be at the expense of clinical outcomes and unmet need, which will perpetuate financial crises.

  8. Electrophysiological and psychophysical asymmetries in sensitivity to interaural correlation gaps and implications for binaural integration time.

    PubMed

    Lüddemann, Helge; Kollmeier, Birger; Riedel, Helmut

    2016-02-01

    Brief deviations of interaural correlation (IAC) can provide valuable cues for detection, segregation and localization of acoustic signals. This study investigated the processing of such "binaural gaps" in continuously running noise (100-2000 Hz), in comparison to silent "monaural gaps", by measuring late auditory evoked potentials (LAEPs) and perceptual thresholds with novel, iteratively optimized stimuli. Mean perceptual binaural gap duration thresholds exhibited a major asymmetry: they were substantially shorter for uncorrelated gaps in correlated and anticorrelated reference noise (1.75 ms and 4.1 ms) than for correlated and anticorrelated gaps in uncorrelated reference noise (26.5 ms and 39.0 ms). The thresholds also showed a minor asymmetry: they were shorter in the positive than in the negative IAC range. The mean behavioral threshold for monaural gaps was 5.5 ms. For all five gap types, the amplitude of LAEP components N1 and P2 increased linearly with the logarithm of gap duration. While perceptual and electrophysiological thresholds matched for monaural gaps, LAEP thresholds were about twice as long as perceptual thresholds for uncorrelated gaps, but half as long for correlated and anticorrelated gaps. Nevertheless, LAEP thresholds showed the same asymmetries as perceptual thresholds. For gap durations below 30 ms, LAEPs were dominated by the processing of the leading edge of a gap. For longer gap durations, in contrast, both the leading and the lagging edge of a gap contributed to the evoked response. Formulae for the equivalent rectangular duration (ERD) of the binaural system's temporal window were derived for three common window shapes. The psychophysical ERD was 68 ms for diotic and about 40 ms for anti- and uncorrelated noise. After a nonlinear Z-transform of the stimulus IAC prior to temporal integration, ERDs were about 10 ms for reference correlations of ±1 and 80 ms for uncorrelated reference. Hence, a physiologically motivated peripheral nonlinearity changed the rank order of ERDs across experimental conditions in a plausible manner. Copyright © 2015 Elsevier B.V. All rights reserved.
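    The equivalent rectangular duration referred to above is commonly defined as the window's area divided by its peak height; a numerical sketch for an assumed two-sided exponential window (not the fitted windows from this study):

```python
# Numerical sketch of an equivalent rectangular duration (ERD), using the common
# definition ERD = (integral of the window) / (peak of the window). The window shape
# and time constant below are illustrative, not the fitted values from the study.

import numpy as np

t = np.linspace(-0.5, 0.5, 20001)            # seconds
tau = 0.020                                   # 20 ms time constant (assumed)
w = np.exp(-np.abs(t) / tau)                  # two-sided exponential temporal window

erd = np.trapz(w, t) / w.max()
print(f"ERD = {erd * 1000:.1f} ms")           # ~2*tau = 40 ms for this window shape
```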

  9. The consequences of ignoring measurement invariance for path coefficients in structural equation models

    PubMed Central

    Guenole, Nigel; Brown, Anna

    2014-01-01

    We report a Monte Carlo study examining the effects of two strategies for handling measurement non-invariance – modeling and ignoring non-invariant items – on structural regression coefficients between latent variables measured with item response theory models for categorical indicators. These strategies were examined across four levels and three types of non-invariance – non-invariant loadings, non-invariant thresholds, and combined non-invariance on loadings and thresholds – in simple, partial, mediated and moderated regression models where the non-invariant latent variable occupied predictor, mediator, and criterion positions in the structural regression models. When non-invariance is ignored in the latent predictor, the focal group regression parameters are biased in the opposite direction to the difference in loadings and thresholds relative to the referent group (i.e., lower loadings and thresholds for the focal group lead to overestimated regression parameters). With criterion non-invariance, the focal group regression parameters are biased in the same direction as the difference in loadings and thresholds relative to the referent group. While unacceptable levels of parameter bias were confined to the focal group, bias occurred at considerably lower levels of ignored non-invariance than was previously recognized in referent and focal groups. PMID:25278911

  10. Restrictive or Liberal Red-Cell Transfusion for Cardiac Surgery.

    PubMed

    Mazer, C David; Whitlock, Richard P; Fergusson, Dean A; Hall, Judith; Belley-Cote, Emilie; Connolly, Katherine; Khanykin, Boris; Gregory, Alexander J; de Médicis, Étienne; McGuinness, Shay; Royse, Alistair; Carrier, François M; Young, Paul J; Villar, Juan C; Grocott, Hilary P; Seeberger, Manfred D; Fremes, Stephen; Lellouche, François; Syed, Summer; Byrne, Kelly; Bagshaw, Sean M; Hwang, Nian C; Mehta, Chirag; Painter, Thomas W; Royse, Colin; Verma, Subodh; Hare, Gregory M T; Cohen, Ashley; Thorpe, Kevin E; Jüni, Peter; Shehata, Nadine

    2017-11-30

    The effect of a restrictive versus liberal red-cell transfusion strategy on clinical outcomes in patients undergoing cardiac surgery remains unclear. In this multicenter, open-label, noninferiority trial, we randomly assigned 5243 adults undergoing cardiac surgery who had a European System for Cardiac Operative Risk Evaluation (EuroSCORE) I of 6 or more (on a scale from 0 to 47, with higher scores indicating a higher risk of death after cardiac surgery) to a restrictive red-cell transfusion threshold (transfuse if hemoglobin level was <7.5 g per deciliter, starting from induction of anesthesia) or a liberal red-cell transfusion threshold (transfuse if hemoglobin level was <9.5 g per deciliter in the operating room or intensive care unit [ICU] or was <8.5 g per deciliter in the non-ICU ward). The primary composite outcome was death from any cause, myocardial infarction, stroke, or new-onset renal failure with dialysis by hospital discharge or by day 28, whichever came first. Secondary outcomes included red-cell transfusion and other clinical outcomes. The primary outcome occurred in 11.4% of the patients in the restrictive-threshold group, as compared with 12.5% of those in the liberal-threshold group (absolute risk difference, -1.11 percentage points; 95% confidence interval [CI], -2.93 to 0.72; odds ratio, 0.90; 95% CI, 0.76 to 1.07; P<0.001 for noninferiority). Mortality was 3.0% in the restrictive-threshold group and 3.6% in the liberal-threshold group (odds ratio, 0.85; 95% CI, 0.62 to 1.16). Red-cell transfusion occurred in 52.3% of the patients in the restrictive-threshold group, as compared with 72.6% of those in the liberal-threshold group (odds ratio, 0.41; 95% CI, 0.37 to 0.47). There were no significant between-group differences with regard to the other secondary outcomes. In patients undergoing cardiac surgery who were at moderate-to-high risk for death, a restrictive strategy regarding red-cell transfusion was noninferior to a liberal strategy with respect to the composite outcome of death from any cause, myocardial infarction, stroke, or new-onset renal failure with dialysis, with less blood transfused. (Funded by the Canadian Institutes of Health Research and others; TRICS III ClinicalTrials.gov number, NCT02042898 .).
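    The noninferiority comparison reported above can be sketched as an absolute risk difference with a Wald confidence interval checked against a prespecified margin; the counts below are illustrative (chosen only to roughly match the reported percentages) and the margin is assumed.

```python
# Sketch of a noninferiority-style comparison: absolute risk difference with a Wald
# 95% CI, compared against a prespecified margin. Counts and margin are illustrative.

from math import sqrt

def risk_difference_ci(x1, n1, x0, n0, z=1.96):
    p1, p0 = x1 / n1, x0 / n0
    rd = p1 - p0
    se = sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return rd, rd - z * se, rd + z * se

rd, lo, hi = risk_difference_ci(x1=299, n1=2622, x0=327, n0=2621)   # illustrative counts
margin = 0.03                                                        # assumed margin
print(f"risk difference = {rd:+.3f} (95% CI {lo:+.3f} to {hi:+.3f}); "
      f"noninferior if upper CI < {margin:+.3f}")
```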

  11. Site- and bond-percolation thresholds in K_{n,n}-based lattices: Vulnerability of quantum annealers to random qubit and coupler failures on chimera topologies.

    PubMed

    Melchert, O; Katzgraber, Helmut G; Novotny, M A

    2016-04-01

    We estimate the critical thresholds of bond and site percolation on nonplanar, effectively two-dimensional graphs with chimeralike topology. The building blocks of these graphs are complete and symmetric bipartite subgraphs of size 2n, referred to as K_{n,n} graphs. For the numerical simulations we use an efficient union-find-based algorithm and employ a finite-size scaling analysis to obtain the critical properties for both bond and site percolation. We report the respective percolation thresholds for different sizes of the bipartite subgraph and verify that the associated universality class is that of standard two-dimensional percolation. For the canonical chimera graph used in the D-Wave Systems Inc. quantum annealer (n=4), we discuss device failure in terms of network vulnerability, i.e., we determine the critical fraction of qubits and couplers that can be absent due to random failures prior to losing large-scale connectivity throughout the device.
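    A toy version of a union-find-based percolation check is sketched below for site percolation on a small square lattice; it illustrates the algorithmic idea only and is not the chimera-graph analysis of the paper.

```python
# Toy illustration of a union-find based percolation check (site percolation on a
# small square lattice, not the K_{n,n}-based chimera graphs analysed in the paper).

import random

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def spans(L, p, seed=0):
    """True if an occupied cluster connects the top row to the bottom row."""
    random.seed(seed)
    occupied = [random.random() < p for _ in range(L * L)]
    parent = list(range(L * L))
    for r in range(L):
        for c in range(L):
            i = r * L + c
            if not occupied[i]:
                continue
            for dr, dc in ((1, 0), (0, 1)):            # link to right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < L and cc < L and occupied[rr * L + cc]:
                    union(parent, i, rr * L + cc)
    top = {find(parent, c) for c in range(L) if occupied[c]}
    bottom = {find(parent, (L - 1) * L + c) for c in range(L) if occupied[(L - 1) * L + c]}
    return bool(top & bottom)

print(spans(L=64, p=0.70))   # site threshold on the square lattice is ~0.5927
```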

  12. Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Reed, Seann M.

    2003-09-01

    The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT), has been developed to automatically, accurately, and efficiently assign flow directions to coarse-resolution grid cells using information from any higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency to produce diagonal flow directions. Analyses of results at output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that produces minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
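    The elementary step underlying such flow-direction assignment is the D8 rule (steepest descent to one of eight neighbours); the sketch below shows plain D8 on a made-up DEM and is not the COTAT algorithm itself.

```python
# A basic D8 flow-direction assignment (steepest descent to one of 8 neighbours).
# This is the elementary building block such tracing algorithms start from; it is
# not the COTAT algorithm itself. The small DEM below is made up for illustration.

import numpy as np

def d8_direction(dem, r, c):
    """Return the (dr, dc) of the steepest downslope neighbour, or None for a pit."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
                dist = np.hypot(dr, dc)                 # 1 for edges, sqrt(2) for diagonals
                drop = (dem[r, c] - dem[rr, cc]) / dist
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
    return best

dem = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 3.]])
print(d8_direction(dem, 1, 1))   # -> (1, 1): flow toward the lowest corner
```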

  13. Methodological aspects of crossover and maximum fat-oxidation rate point determination.

    PubMed

    Michallet, A-S; Tonini, J; Regnier, J; Guinot, M; Favre-Juvin, A; Bricout, V; Halimi, S; Wuyam, B; Flore, P

    2008-11-01

    Indirect calorimetry during exercise provides two metabolic indices of substrate oxidation balance: the crossover point (COP) and maximum fat oxidation rate (LIPOXmax). We aimed to study the effects of the analytical device, protocol type and ventilatory response on variability of these indices, and the relationship with lactate and ventilation thresholds. After maximum exercise testing, 14 relatively fit subjects (aged 32+/-10 years; nine men, five women) performed three submaximum graded tests: one was based on a theoretical maximum power (tMAP) reference; and two were based on the true maximum aerobic power (MAP). Gas exchange was measured concomitantly using a Douglas bag (D) and an ergospirometer (E). All metabolic indices were interpretable only when obtained by the D reference method and MAP protocol. Bland and Altman analysis showed overestimation of both indices with E versus D. Despite no mean differences between COP and LIPOXmax whether tMAP or MAP was used, the individual data clearly showed disagreement between the two protocols. Ventilation explained 10-16% of the metabolic index variations. COP was correlated with ventilation (r=0.96, P<0.01) and the rate of increase in blood lactate (r=0.79, P<0.01), and LIPOXmax correlated with the ventilation threshold (r=0.95, P<0.01). This study shows that, in fit healthy subjects, the analytical device, reference used to build the protocol and ventilation responses affect metabolic indices. In this population, and particularly to obtain interpretable metabolic indices, we recommend a protocol based on the true MAP or one adapted to include the transition from fat to carbohydrate. The correlation between metabolic indices and lactate/ventilation thresholds suggests that shorter, classical maximum progressive exercise testing may be an alternative means of estimating these indices in relatively fit subjects. However, this needs to be confirmed in patients who have metabolic defects.
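    Substrate-oxidation rates underlying indices such as LIPOXmax and the crossover point are often derived from VO2 and VCO2 with Frayn's stoichiometric equations; the sketch below uses hypothetical gas-exchange values and approximate energy densities, and is not the authors' analysis.

```python
# Sketch of how substrate-oxidation indices can be derived from indirect calorimetry,
# using Frayn's commonly cited stoichiometric equations (protein oxidation neglected).
# Gas-exchange values below are hypothetical.

powers = [60, 90, 120, 150, 180]          # W
vo2    = [1.20, 1.60, 2.00, 2.40, 2.80]   # L/min
vco2   = [0.96, 1.34, 1.80, 2.30, 2.80]   # L/min

fat_ox = [1.67 * o2 - 1.67 * co2 for o2, co2 in zip(vo2, vco2)]   # g/min
cho_ox = [4.55 * co2 - 3.21 * o2 for o2, co2 in zip(vo2, vco2)]   # g/min

# LIPOXmax: intensity with the highest fat-oxidation rate
lipoxmax_power = powers[max(range(len(powers)), key=lambda i: fat_ox[i])]

# Crossover point (approximated here): first intensity where carbohydrate-derived
# energy (~4.2 kcal/g) exceeds fat-derived energy (~9.4 kcal/g)
cop_power = next(p for p, f, c in zip(powers, fat_ox, cho_ox) if c * 4.2 > f * 9.4)

print(f"LIPOXmax ~ {lipoxmax_power} W, crossover point ~ {cop_power} W")
```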

  14. Identifying thresholds for relationships between impacts of rationing of nursing care and nurse- and patient-reported outcomes in Swiss hospitals: a correlational study.

    PubMed

    Schubert, Maria; Clarke, Sean P; Glass, Tracy R; Schaffert-Witvliet, Bianca; De Geest, Sabina

    2009-07-01

    In the Rationing of Nursing Care in Switzerland Study, implicit rationing of care was the only factor consistently significantly associated with all six studied patient outcomes. These results highlight the importance of rationing as a new system factor regarding patient safety and quality of care. Since at least some rationing of care appears inevitable, it is important to identify the thresholds of its influence in order to minimize its negative effects on patient outcomes. To describe the levels of implicit rationing of nursing care in a sample of Swiss acute care hospitals and to identify clinically meaningful thresholds of rationing. Descriptive cross-sectional multi-center study. Five Swiss-German and three Swiss-French acute care hospitals. 1338 nurses and 779 patients. Implicit rationing of nursing care was measured using the newly developed Basel Extent of Rationing of Nursing Care (BERNCA) instrument. Other variables were measured using survey items from the International Hospital Outcomes Study battery. Data were summarized using appropriate descriptive measures, and logistic regression models were used to define a clinically meaningful rationing threshold level. For the studied patient outcomes, identified rationing threshold levels varied from 0.5 (i.e., between 0 ('never') and 1 ('rarely')) to 2 ('sometimes'). Three of the identified patient outcomes (nosocomial infections, pressure ulcers, and patient satisfaction) were particularly sensitive to rationing, showing negative consequences wherever it was consistently reported (i.e., average BERNCA scores of 0.5 or above). In other cases, increases in negative outcomes were first observed from the level of 1 (average ratings of 'rarely'). Rationing scores generated using the BERNCA instrument provide a clinically meaningful method for tracking the correlates of low resources or difficulties in resource allocation on patient outcomes. Thresholds identified here provide parameters for administrators to respond to whenever rationing reports exceed the determined level of '0.5' or '1'. Since even very low levels of rationing had negative consequences on three of the six studied outcomes, it is advisable to treat consistent evidence of any rationing as a significant threat to patient safety and quality of care.

  15. Thresholds for Shifting Visually Perceived Eye Level Due to Incremental Pitches

    NASA Technical Reports Server (NTRS)

    Scott, Donald M.; Welch, Robert; Cohen, M. M.; Hill, Cyndi

    2001-01-01

    Visually perceived eye level (VPEL) was judged by subjects as they viewed a luminous grid pattern that was pitched in 2 or 5 deg increments between -20 deg and +20 deg. Subjects were dark adapted for 20 min and indicated VPEL by directing the beam of a laser pointer to the rear wall of a 1.25 m cubic pitch box that rotated about a horizontal axis at the midpoint of the rear wall. Data were analyzed by ANOVA and the Tukey HSD procedure. Results showed a 10.0 deg threshold for pitches P(sub i) above the reference pitch P(sub 0), and a -10.3 deg threshold for pitches P(sub i) below the reference pitch P(sub 0). Threshold data for pitches P(sub i) < P(sub 0) suggest an asymmetric threshold for VPEL below and above physical eye level.

  16. Should sensory function after median nerve injury and repair be quantified using two-point discrimination as the critical measure?

    PubMed

    Jerosch-Herold, C

    2000-12-01

    Two-point discrimination (2PD) is widely used for evaluating outcome from peripheral nerve injury and repair. It is the only quantifiable measure used in the British Medical Research Council (MRC) classification that was developed by Highet in 1954. This paper reports the results of a study of 41 patients with complete median nerve lacerations to the wrist or forearm. Two-point discrimination thresholds were assessed together with locognosia (locognosia is the ability to localise a sensory stimulus on the body's surface), tactile gnosis, and touch threshold. Using the MRC classification 29 (71%) patients had a result of S2 or below, 11 (27%) were S3, and only one scored S3+. Patients scored much better on the other tests and showed progressive recovery. It remains too difficult for patients to obtain a measurable threshold value on 2PD and the test therefore lacks responsiveness. The rating of outcome from peripheral nerve repair should not be based solely on 2PD testing and must include other tests of tactile sensibility.

  17. Machine Learning Approach to Extract Diagnostic and Prognostic Thresholds: Application in Prognosis of Cardiovascular Mortality

    PubMed Central

    Mena, Luis J.; Orozco, Eber E.; Felix, Vanessa G.; Ostos, Rodolfo; Melgarejo, Jesus; Maestre, Gladys E.

    2012-01-01

    Machine learning has become a powerful tool for analysing medical domains, assessing the importance of clinical parameters, and extracting medical knowledge for outcomes research. In this paper, we present a machine learning method for extracting diagnostic and prognostic thresholds, based on a symbolic classification algorithm called REMED. We evaluated the performance of our method by determining new prognostic thresholds for well-known and potential cardiovascular risk factors that are used to support medical decisions in the prognosis of fatal cardiovascular diseases. Our approach predicted 36% of cardiovascular deaths with 80% specificity and 75% general accuracy. The new method provides an innovative approach that might be useful to support decisions about medical diagnoses and prognoses. PMID:22924062

  18. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.

    PubMed

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.

  19. What Would Be the Effect of Referral to High-Volume Hospitals in a Largely Rural State?

    ERIC Educational Resources Information Center

    Ward, Marcia M.; Jaana, Mirou; Wakefield, Douglas S.; Ohsfeldt, Robert L.; Schneider, John E.; Miller, Thomas; Lei, Yang

    2004-01-01

    Volume of certain surgical procedures has been linked to patient outcomes. The Leapfrog Group and others have recommended evidence-based referral using specific volume thresholds for nonemergent cases. The literature is limited on the effect of such referral on hospitals, especially in rural areas. To examine the impact of evidence-based referral…

  20. Use of continuous glucose monitoring as an outcome measure in clinical trials.

    PubMed

    Beck, Roy W; Calhoun, Peter; Kollman, Craig

    2012-10-01

    Although developed to be a management tool for individuals with diabetes, continuous glucose monitoring (CGM) also has potential value for the assessment of outcomes in clinical studies. We evaluated using CGM as such an outcome measure. Data were analyzed from six previously completed inpatient studies in which both CGM (Freestyle Navigator™ [Abbott Diabetes Care, Alameda, CA] or Guardian(®) [Medtronic, Northridge, CA]) and reference glucose measurements were available. The analyses included 97 days of data from 93 participants with type 1 diabetes (age range, 5-57 years; mean, 18 ± 12 years). Mean glucose levels per day were similar for the CGM and reference measurements (median, 148 mg/dL vs. 143 mg/dL, respectively; P = 0.92), and the correlation of the two was high (r = 0.89). Similarly, most glycemia metrics showed no significant differences comparing CGM and reference values, except that the nadir glucose tended to be slightly lower and peak glucose slightly higher with reference measurements than CGM measurements (respective median, 59 mg/dL vs. 66 mg/dL [P = 0.05] and 262 mg/dL vs. 257 mg/dL [P = 0.003]) and glucose variability as measured with the coefficient of variation was slightly lower with CGM than reference measurements (respective median, 31% vs. 35%; P<0.001). A reasonably high degree of concordance exists when comparing outcomes based on CGM measurements with outcomes based on reference blood glucose measurements. CGM inaccuracy and underestimation of the extremes of hyperglycemia and hypoglycemia can be accounted for in a clinical trial's study design. Thus, in appropriate settings, CGM can be a very meaningful and feasible outcome measure for clinical trials.
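    The per-day glycemia metrics compared above (mean, nadir, peak, coefficient of variation) can be computed directly from a CGM trace; a minimal sketch with an illustrative trace:

```python
# Minimal sketch of per-day glycemia metrics (mean, nadir, peak, coefficient of
# variation), computed from an illustrative CGM trace.

import numpy as np

glucose = np.array([110, 95, 80, 66, 90, 140, 190, 240, 262, 210, 160, 130], dtype=float)  # mg/dL

metrics = {
    "mean": glucose.mean(),
    "nadir": glucose.min(),
    "peak": glucose.max(),
    "cv_percent": 100 * glucose.std(ddof=1) / glucose.mean(),   # glucose variability
}
print({k: round(v, 1) for k, v in metrics.items()})
```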

  2. Controlling for Frailty in Pharmacoepidemiologic Studies of Older Adults: Validation of an Existing Medicare Claims-based Algorithm.

    PubMed

    Cuthbertson, Carmen C; Kucharska-Newton, Anna; Faurot, Keturah R; Stürmer, Til; Jonsson Funk, Michele; Palta, Priya; Windham, B Gwen; Thai, Sydney; Lund, Jennifer L

    2018-07-01

    Frailty is a geriatric syndrome characterized by weakness and weight loss and is associated with adverse health outcomes. It is often an unmeasured confounder in pharmacoepidemiologic and comparative effectiveness studies using administrative claims data. Among the Atherosclerosis Risk in Communities (ARIC) Study Visit 5 participants (2011-2013; n = 3,146), we conducted a validation study to compare a Medicare claims-based algorithm of dependency in activities of daily living (or dependency) developed as a proxy for frailty with a reference standard measure of phenotypic frailty. We applied the algorithm to the ARIC participants' claims data to generate a predicted probability of dependency. Using the claims-based algorithm, we estimated the C-statistic for predicting phenotypic frailty. We further categorized participants by their predicted probability of dependency (<5%, 5% to <20%, and ≥20%) and estimated associations with difficulties in physical abilities, falls, and mortality. The claims-based algorithm showed good discrimination of phenotypic frailty (C-statistic = 0.71; 95% confidence interval [CI] = 0.67, 0.74). Participants classified with a high predicted probability of dependency (≥20%) had higher prevalence of falls and difficulty in physical ability, and a greater risk of 1-year all-cause mortality (hazard ratio = 5.7 [95% CI = 2.5, 13]) than participants classified with a low predicted probability (<5%). Sensitivity and specificity varied across predicted probability of dependency thresholds. The Medicare claims-based algorithm showed good discrimination of phenotypic frailty and high predictive ability with adverse health outcomes. This algorithm can be used in future Medicare claims analyses to reduce confounding by frailty and improve study validity.
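    A minimal sketch of the validation logic (C-statistic for a predicted probability against a reference standard, plus grouping by probability cut points) is shown below with simulated, hypothetical data.

```python
# Sketch of discrimination and probability grouping for a claims-based predicted
# probability of dependency against a reference-standard frailty measure.
# All data are simulated and hypothetical.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
frail = rng.integers(0, 2, size=500)                                   # reference-standard frailty
pred = np.clip(0.10 + 0.15 * frail + rng.normal(0, 0.08, 500), 0, 1)   # predicted P(dependency)

c_statistic = roc_auc_score(frail, pred)
groups = np.digitize(pred, bins=[0.05, 0.20])                          # <5%, 5-<20%, >=20%
print(f"C-statistic = {c_statistic:.2f}; group sizes = {np.bincount(groups, minlength=3)}")
```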

  3. Assessment of Minimum Important Difference and Substantial Clinical Benefit with the Vascular Quality of Life Questionnaire-6 when Evaluating Revascularisation Procedures in Peripheral Arterial Disease.

    PubMed

    Nordanstig, J; Pettersson, M; Morgan, M; Falkenberg, M; Kumlien, C

    2017-09-01

    Patient reported outcomes are increasingly used to assess outcomes after peripheral arterial disease (PAD) interventions. VascuQoL-6 (VQ-6) is a PAD specific health-related quality of life (HRQoL) instrument for routine clinical practice and clinical research. This study assessed the minimum important difference for the VQ-6 and determined thresholds for the minimum important difference and substantial clinical benefit following PAD revascularisation. This was a population-based observational cohort study. VQ-6 data from the Swedvasc Registry (January 2014 to September 2016) was analysed for revascularised PAD patients. The minimum important difference was determined using a combination of a distribution based and an anchor-based method, while receiver operating characteristic curve analysis (ROC) was used to determine optimal thresholds for a substantial clinical benefit following revascularisation. A total of 3194 revascularised PAD patients with complete VQ-6 baseline recordings (intermittent claudication (IC) n = 1622 and critical limb ischaemia (CLI) n = 1572) were studied, of which 2996 had complete VQ-6 recordings 30 days and 1092 a year after the vascular intervention. The minimum important difference 1 year after revascularisation for IC patients ranged from 1.7 to 2.2 scale steps, depending on the method of analysis. Among CLI patients, the minimum important difference after 1 year was 1.9 scale steps. ROC analyses demonstrated that the VQ-6 discriminative properties for a substantial clinical benefit was excellent for IC patients (area under curve (AUC) 0.87, sensitivity 0.81, specificity 0.76) and acceptable in CLI (AUC 0.736, sensitivity 0.63, specificity 0.72). An optimal VQ-6 threshold for a substantial clinical benefit was determined at 3.5 scale steps among IC patients and 4.5 in CLI patients. The suggested thresholds for minimum important difference and substantial clinical benefit could be used when evaluating VQ-6 outcomes following different interventions in PAD and in the design of clinical trials. Copyright © 2017 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  4. Vertical structure of cumulonimbus towers and intense convective clouds over the South Asian region during the summer monsoon season

    NASA Astrophysics Data System (ADS)

    Bhat, G. S.; Kumar, Shailendra

    2015-03-01

    The vertical structure of the radar reflectivity factor in active convective clouds that form during the South Asian monsoon season is reported using the 2A25 version 6 data product derived from the precipitation radar measurements on board the Tropical Rainfall Measuring Mission satellite. We define two types of convective cells, namely, cumulonimbus towers (CbTs) and intense convective cells (ICCs). A CbT is defined with reference to a reflectivity threshold of 20 dBZ at 12 km altitude and is at least 9 km thick. ICCs are constructed with reference to reflectivity thresholds at 8 km and 3 km altitudes. Cloud properties reported here are based on a 10-year climatology. The frequency of occurrence of CbTs is highest over the foothills of the Himalayas, the plains of northern India and Bangladesh, and lowest over the Arabian Sea and the equatorial Indian Ocean west of 90°E. The regional differences depend on the reference height selected, being small in the case of CbTs and prominent in the 6-13 km height range for ICCs. Land cells are more intense than oceanic ones for convective cells defined using the reflectivity threshold at 3 km, whereas land versus ocean contrasts are not observed in the case of CbTs. Compared with cumulonimbus clouds elsewhere in the tropics, the South Asian counterparts have higher reflectivity values above 11 km altitude.

  5. Clinical Utility of Risk Models to Refer Patients with Adnexal Masses to Specialized Oncology Care: Multicenter External Validation Using Decision Curve Analysis.

    PubMed

    Wynants, Laure; Timmerman, Dirk; Verbakel, Jan Y; Testa, Antonia; Savelli, Luca; Fischerova, Daniela; Franchi, Dorella; Van Holsbeke, Caroline; Epstein, Elisabeth; Froyman, Wouter; Guerriero, Stefano; Rossi, Alberto; Fruscio, Robert; Leone, Francesco Pg; Bourne, Tom; Valentin, Lil; Van Calster, Ben

    2017-09-01

    Purpose: To evaluate the utility of preoperative diagnostic models for ovarian cancer based on ultrasound and/or biomarkers for referring patients to specialized oncology care. The investigated models were RMI, ROMA, and 3 models from the International Ovarian Tumor Analysis (IOTA) group [LR2, ADNEX, and the Simple Rules risk score (SRRisk)]. Experimental Design: A secondary analysis of prospectively collected data from 2 cross-sectional cohort studies was performed to externally validate diagnostic models. A total of 2,763 patients (2,403 in dataset 1 and 360 in dataset 2) from 18 centers (11 oncology centers and 7 nononcology hospitals) in 6 countries participated. Excised tissue was histologically classified as benign or malignant. The clinical utility of the preoperative diagnostic models was assessed with net benefit (NB) at a range of risk thresholds (5%-50% risk of malignancy) to refer patients to specialized oncology care. We visualized results with decision curves and generated bootstrap confidence intervals. Results: The prevalence of malignancy was 41% in dataset 1 and 40% in dataset 2. For thresholds up to 10% to 15%, RMI and ROMA had a lower NB than referring all patients. SRRisks and ADNEX demonstrated the highest NB. At a threshold of 20%, the NBs of ADNEX, SRrisks, and RMI were 0.348, 0.350, and 0.270, respectively. Results by menopausal status and type of center (oncology vs. nononcology) were similar. Conclusions: All tested IOTA methods, especially ADNEX and SRRisks, are clinically more useful than RMI and ROMA to select patients with adnexal masses for specialized oncology care. Clin Cancer Res; 23(17); 5082-90. ©2017 AACR . ©2017 American Association for Cancer Research.
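    Net benefit at a risk threshold pt is conventionally computed as TP/n - (FP/n) x pt/(1 - pt); a short sketch with hypothetical counts (not the IOTA validation data):

```python
# Sketch of the net-benefit calculation used in decision curve analysis:
# NB = TP/n - (FP/n) * pt/(1 - pt) at a chosen risk threshold pt.
# Counts below are hypothetical.

def net_benefit(tp, fp, n, pt):
    return tp / n - (fp / n) * pt / (1 - pt)

n = 2763
nb_model = net_benefit(tp=1050, fp=700, n=n, pt=0.20)
nb_refer_all = net_benefit(tp=int(0.41 * n), fp=n - int(0.41 * n), n=n, pt=0.20)  # refer everyone
print(f"NB(model) = {nb_model:.3f}, NB(refer all) = {nb_refer_all:.3f}")
```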

  6. Postoperative seizure outcome-guided machine learning for interictal electrocorticography in neocortical epilepsy.

    PubMed

    Park, Seong-Cheol; Chung, Chun Kee

    2018-06-01

    The objective of this study was to introduce a new machine learning guided by outcome of resective epilepsy surgery defined as the presence/absence of seizures to improve data mining for interictal pathological activities in neocortical epilepsy. Electrocorticographies for 39 patients with medically intractable neocortical epilepsy were analyzed. We separately analyzed 38 frequencies from 0.9 to 800 Hz including both high-frequency activities and low-frequency activities to select bands related to seizure outcome. An automatic detector using amplitude-duration-number thresholds was used. Interictal electrocorticography data sets of 8 min for each patient were selected. In the first training data set of 20 patients, the automatic detector was optimized to best differentiate the seizure-free group from not-seizure-free-group based on ranks of resection percentages of activities detected using a genetic algorithm. The optimization was validated in a different data set of 19 patients. There were 16 (41%) seizure-free patients. The mean follow-up duration was 21 ± 11 mo (range, 13-44 mo). After validation, frequencies significantly related to seizure outcome were 5.8, 8.4-25, 30, 36, 52, and 75 among low-frequency activities and 108 and 800 Hz among high-frequency activities. Resection for 5.8, 8.4-25, 108, and 800 Hz activities consistently improved seizure outcome. Resection effects of 17-36, 52, and 75 Hz activities on seizure outcome were variable according to thresholds. We developed and validated an automated detector for monitoring interictal pathological and inhibitory/physiological activities in neocortical epilepsy using a data-driven approach through outcome-guided machine learning. NEW & NOTEWORTHY Outcome-guided machine learning based on seizure outcome was used to improve detections for interictal electrocorticographic low- and high-frequency activities. This method resulted in better separation of seizure outcome groups than others reported in the literature. The automatic detector can be trained without human intervention and no prior information. It is based only on objective seizure outcome data without relying on an expert's manual annotations. Using the method, we could find and characterize pathological and inhibitory activities.
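    In the spirit of the amplitude-duration-number thresholding described above (though not the authors' detector), a generic amplitude-duration event counter might look like this; the sampling rate, thresholds and signal are stand-ins.

```python
# Generic sketch of an amplitude-duration event detector: count segments where a
# band-filtered signal stays above an amplitude threshold for at least a minimum
# duration. All parameters and the signal below are illustrative stand-ins.

import numpy as np

def count_events(signal, fs, amp_thresh, min_dur_s):
    above = np.abs(signal) > amp_thresh
    # find run starts/ends of consecutive supra-threshold samples
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    durations = (ends - starts) / fs
    return int(np.sum(durations >= min_dur_s))

fs = 2000.0                                                # Hz (assumed sampling rate)
t = np.arange(0, 8 * 60, 1 / fs)                           # an 8-minute segment, as in the study
signal = np.random.default_rng(0).normal(0, 10, t.size)    # stand-in for filtered ECoG (uV)
print(count_events(signal, fs, amp_thresh=20.0, min_dur_s=0.002))
```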

  7. Longitudinal predictors of aided speech audibility in infants and children

    PubMed Central

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Bentler, Ruth; Holte, Lenore; Roush, Patricia; Oleson, Jacob; Van Buren, John; Moeller, Mary Pat

    2015-01-01

    Objectives Amplification is a core component of early intervention for children who are hard of hearing (CHH), but hearing aids (HAs) have unique effects that may be independent of other components of the early intervention process, such as caregiver training or speech and language intervention. The specific effects of amplification are rarely described in studies of developmental outcomes. The primary purpose of this manuscript is to quantify aided speech audibility during the early childhood years and examine the factors that influence audibility with amplification for children in the Outcomes of Children with Hearing Loss (OCHL) study. Design Participants were 288 children with permanent hearing loss who were followed as part of the OCHL study. All of the children in this analysis had bilateral hearing loss and wore air-conduction behind-the-ear HAs. At every study visit, hearing thresholds were measured using developmentally appropriate behavioral methods. Data were obtained for a total of 1043 audiometric evaluations across all subjects for the first four study visits. In addition, the aided audibility of speech through the HA was assessed using probe microphone measures. Hearing thresholds and aided audibility were analyzed. Repeated-measures analyses of variance were conducted to determine if patterns of thresholds and aided audibility were significantly different between ears (left vs. right) or across the first four study visits. Furthermore, a cluster analysis was performed based on the aided audibility at entry into the study, aided audibility at the child’s final visit, and change in aided audibility between these two intervals to determine if there were different patterns of longitudinal aided audibility within the sample. Results Eighty-four percent of children in the study had stable audiometric thresholds during the study, defined as threshold changes <10 dB for any single study visit. There were no significant differences in hearing thresholds, aided audibility, or deviation of the HA fitting from prescriptive targets between ears or across test intervals for the first four visits. Approximately 35% of the children in the study had aided audibility that was below the average for the normative range for the Speech Intelligibility Index (SII) based on degree of hearing loss. The cluster analysis of longitudinal aided audibility revealed three distinct groups of children: a group with consistently high aided audibility throughout the study, a group with decreasing audibility during the study, and a group with consistently low aided audibility. Conclusions The current results indicated that approximately 65% of children in the study had adequate aided audibility of speech and stable hearing during the study period. Limited audibility was associated with greater degrees of hearing loss and larger deviations from prescriptive targets. Studies of developmental outcomes will help to determine how much aided audibility is necessary to support development in CHH. PMID:26731156

  8. Clinical Practice Guidelines From the AABB: Red Blood Cell Transfusion Thresholds and Storage.

    PubMed

    Carson, Jeffrey L; Guyatt, Gordon; Heddle, Nancy M; Grossman, Brenda J; Cohn, Claudia S; Fung, Mark K; Gernsheimer, Terry; Holcomb, John B; Kaplan, Lewis J; Katz, Louis M; Peterson, Nikki; Ramsey, Glenn; Rao, Sunil V; Roback, John D; Shander, Aryeh; Tobian, Aaron A R

    2016-11-15

    More than 100 million units of blood are collected worldwide each year, yet the indication for red blood cell (RBC) transfusion and the optimal length of RBC storage prior to transfusion are uncertain. To provide recommendations for the target hemoglobin level for RBC transfusion among hospitalized adult patients who are hemodynamically stable and the length of time RBCs should be stored prior to transfusion. Reference librarians conducted a literature search for randomized clinical trials (RCTs) evaluating hemoglobin thresholds for RBC transfusion (1950-May 2016) and RBC storage duration (1948-May 2016) without language restrictions. The results were summarized using the Grading of Recommendations Assessment, Development and Evaluation method. For RBC transfusion thresholds, 31 RCTs included 12 587 participants and compared restrictive thresholds (transfusion not indicated until the hemoglobin level is 7-8 g/dL) with liberal thresholds (transfusion not indicated until the hemoglobin level is 9-10 g/dL). The summary estimates across trials demonstrated that restrictive RBC transfusion thresholds were not associated with higher rates of adverse clinical outcomes, including 30-day mortality, myocardial infarction, cerebrovascular accident, rebleeding, pneumonia, or thromboembolism. For RBC storage duration, 13 RCTs included 5515 participants randomly allocated to receive fresher blood or standard-issue blood. These RCTs demonstrated that fresher blood did not improve clinical outcomes. It is good practice to consider the hemoglobin level, the overall clinical context, patient preferences, and alternative therapies when making transfusion decisions regarding an individual patient. Recommendation 1: a restrictive RBC transfusion threshold in which the transfusion is not indicated until the hemoglobin level is 7 g/dL is recommended for hospitalized adult patients who are hemodynamically stable, including critically ill patients, rather than when the hemoglobin level is 10 g/dL (strong recommendation, moderate quality evidence). A restrictive RBC transfusion threshold of 8 g/dL is recommended for patients undergoing orthopedic surgery, cardiac surgery, and those with preexisting cardiovascular disease (strong recommendation, moderate quality evidence). The restrictive transfusion threshold of 7 g/dL is likely comparable with 8 g/dL, but RCT evidence is not available for all patient categories. These recommendations do not apply to patients with acute coronary syndrome, severe thrombocytopenia (patients treated for hematological or oncological reasons who are at risk of bleeding), and chronic transfusion-dependent anemia (not recommended due to insufficient evidence). Recommendation 2: patients, including neonates, should receive RBC units selected at any point within their licensed dating period (standard issue) rather than limiting patients to transfusion of only fresh (storage length: <10 days) RBC units (strong recommendation, moderate quality evidence). Research in RBC transfusion medicine has significantly advanced the science in recent years and provides high-quality evidence to inform guidelines. A restrictive transfusion threshold is safe in most clinical settings and the current blood banking practices of using standard-issue blood should be continued.

  9. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
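
    A minimal sketch of the detection rule implied above: score each observed shape by its cosine similarity to the reference and flag it when the score falls below a threshold adapted to the estimated noise level. The quantile function standing in for the paper's analytical score distribution is a placeholder assumption.

    ```python
    import numpy as np

    def cosine_similarity(reference, observed):
        """Cosine similarity between a reference shape and a noisy observed shape."""
        return float(np.dot(reference, observed) /
                     (np.linalg.norm(reference) * np.linalg.norm(observed)))

    def adaptive_threshold(noise_level, quantile_fn):
        """Threshold taken from the noise-dependent null distribution of the score.

        quantile_fn stands in for the paper's analytical distribution of the cosine
        similarity under noise: any callable mapping the noise level to the similarity
        below which a shape is flagged as anomalous.
        """
        return quantile_fn(noise_level)

    def is_anomalous(reference, observed, noise_level, quantile_fn):
        """Flag e.g. an ECG cycle when its similarity to the reference template
        falls below the noise-adapted threshold."""
        return cosine_similarity(reference, observed) < adaptive_threshold(noise_level, quantile_fn)
    ```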

  10. Discriminating the precipitation phase based on different temperature thresholds in the Songhua River Basin, China

    NASA Astrophysics Data System (ADS)

    Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao

    2018-06-01

    Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change at high latitudes and in mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB) based on 20 years of daily precipitation data collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB. These thresholds included air temperatures from 0 to 5.5 °C at intervals of 0.5 °C and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R2), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in the landscape conditions at the different stations, the optimum threshold varied by station. The optimal thresholds ranged from 1.5 to 4.0 °C; 19, 17 and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C, and 3.5 °C, respectively, together accounting for 90% of all stations. Compared with using a single suitable temperature threshold to discriminate snowfall throughout the basin, it was more accurate to use the optimum threshold at each station to estimate snowfall in the basin. In addition, snowfall was underestimated when the temperature threshold was the WBT and when the temperature threshold was below 2.5 °C, whereas snowfall was overestimated when the temperature threshold exceeded 4.0 °C at most stations. The results of this study provide information for climate change research and hydrological process simulations in the SRB, as well as reference information for discriminating the precipitation phase in other regions.
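
    The core of the method is a simple rule, classifying a day's precipitation as snow when temperature is at or below a candidate threshold, evaluated against observed phase records. The sketch below shows that rule with rough stand-ins for the Ep and Re indices; the exact index definitions used in the study are not reproduced here, so these formulas are assumptions.

    ```python
    import numpy as np

    def snow_days(temperature, threshold):
        """Classify daily precipitation as snow when the daily temperature <= threshold (°C)."""
        return np.asarray(temperature, dtype=float) <= threshold

    def evaluate_threshold(precip, temperature, observed_snow, threshold):
        """Rough stand-ins for the indices described: error percentage of discriminated
        snowfall days (Ep) and relative error of discriminated snowfall amount (Re)."""
        precip = np.asarray(precip, dtype=float)
        observed_snow = np.asarray(observed_snow, dtype=bool)
        predicted_snow = snow_days(temperature, threshold)
        Ep = 100.0 * abs(predicted_snow.sum() - observed_snow.sum()) / observed_snow.sum()
        Re = 100.0 * abs(precip[predicted_snow].sum() - precip[observed_snow].sum()) / precip[observed_snow].sum()
        return Ep, Re

    # Sweep candidate air-temperature thresholds (0-5.5 °C in 0.5 °C steps), per station
    # or basin-wide, and keep the threshold with the smallest errors.
    candidate_thresholds = np.arange(0.0, 6.0, 0.5)
    ```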

  11. The characteristics of vibrotactile perception threshold among shipyard workers in a tropical environment.

    PubMed

    Tamrin, Shamsul Bahri Mohd; Jamalohdin, Mohd Nazri; Ng, Yee Guan; Maeda, Setsuo; Ali, Nurul Asyiqin Mohd

    2012-01-01

    The objectives of this study are to determine the prevalence of hand-arm vibration syndrome (HAVS) and the characteristics of the vibrotactile perception threshold (VPT) among users of hand-held vibrating tools working in a tropical environment. A cross-sectional study was done among 47 shipyard workers using instruments and a questionnaire to determine HAVS-related symptoms. The vibration acceleration magnitude was determined using a Human Vibration Meter (Maestro). A P8 Pallesthesiometer (EMSON-MAT, Poland) was used to determine the VPT of the index and little fingers at frequencies of 31.5 Hz and 125 Hz. The mean reference threshold shift was determined from the reference threshold shift derived from the VPT value. The results show a moderate prevalence of HAVS (49%) among the shipyard workers. They were exposed to the same high vibration intensity (mean = 4.19 ± 1.94 m/s²) from the use of vibrating hand-held tools. The VPT values were found to be higher for both fingers and both frequencies (index, 31.5 Hz = 110.91 ± 7.36 dB, 125 Hz = 117.0 ± 10.25 dB; little, 31.5 Hz = 110.70 ± 6.75 dB, 125 Hz = 117.71 ± 10.25 dB) compared to the normal healthy population, with a mean threshold shift of between 9.20 and 10.61 decibels. The frequency of 31.5 Hz had a higher percentage of positive mean reference threshold shifts (index finger = 93.6%, little finger = 100%) compared to 125 Hz (index finger = 85.1%, little finger = 78.7%). In conclusion, the prevalence of HAVS was lower than in workers in cold environments; however, all workers had higher mean VPT values compared to the normal population, and all those reported as having HAVS showed a positive mean reference threshold shift of the VPT value.

  12. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    PubMed

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862. Choice of calibration method depends on availability of validation data and on performance. Improperly calibrated models may contribute to higher costs of intervention as measured via clinical usefulness. Decision-makers must understand underlying utilities or costs inherent in the use-case at hand to assess usefulness and will obtain the optimal risk threshold to trigger intervention with intervention cost limits as a result. Copyright © 2017 Elsevier Inc. All rights reserved.
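
    As a concrete illustration of two of the calibration methods compared, the sketch below shows Platt Scaling (a logistic fit on raw model scores) and Logistic Calibration (a logistic fit on the log-odds of the predicted probabilities, whose slope and intercept correspond to the Calibration Slope and Intercept metrics). Function names are mine and regularization is left at library defaults, so this is a schematic rather than the study's code.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def platt_scale(raw_scores, outcomes):
        """Platt Scaling: fit a logistic model mapping raw model scores to probabilities
        on a validation set, then reuse it to calibrate future predictions."""
        lr = LogisticRegression()
        lr.fit(np.asarray(raw_scores).reshape(-1, 1), outcomes)
        return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]

    def logistic_calibration(pred_probs, outcomes, eps=1e-6):
        """Logistic Calibration: same idea, but the predictor is the log-odds of the
        (possibly miscalibrated) predicted probabilities, so the fitted slope and
        intercept play the role of the Calibration Slope and Intercept."""
        p = np.clip(np.asarray(pred_probs, dtype=float), eps, 1 - eps)
        lr = LogisticRegression()
        lr.fit(np.log(p / (1 - p)).reshape(-1, 1), outcomes)

        def recalibrate(q):
            q = np.clip(np.asarray(q, dtype=float), eps, 1 - eps)
            return lr.predict_proba(np.log(q / (1 - q)).reshape(-1, 1))[:, 1]

        return recalibrate, float(lr.coef_[0][0]), float(lr.intercept_[0])
    ```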

  13. [Definition of low threshold volumes for quality assurance: conceptual and methodological issues involved in the definition and evaluation of thresholds for volume outcome relations in clinical care].

    PubMed

    Wetzel, Hermann

    2006-01-01

    In a large number of mostly retrospective association studies, a statistical relationship between volume and quality of health care has been reported. However, the relevance of these results is frequently limited by methodological shortcomings. In this article, criteria for the evidence and definition of thresholds for volume-outcome relations are proposed, e.g. the specification of relevant outcomes for quality indicators, analysis of volume as a continuous variable with an adequate case-mix and risk adjustment, accounting for cluster effects and considering mathematical models for the derivation of cut-off values. Moreover, volume thresholds are regarded as surrogate parameters for the indirect classification of the quality of care, whose diagnostic validity and effectiveness in improving health care quality need to be evaluated in prospective studies.

  14. A unifying framework for marginalized random intercept models of correlated binary outcomes

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871
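
    A common latent-threshold formulation underlying such random-intercept models can be written as follows; the notation is generic and not taken from the paper.

    ```latex
    % Latent-threshold random-intercept model for correlated binary outcomes
    % Y_{ij} (subject i, measurement j):
    \begin{align*}
      Y_{ij} &= \mathbf{1}\{\, Y^{*}_{ij} > 0 \,\}, \\
      Y^{*}_{ij} &= \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + b_i + \varepsilon_{ij},
      \qquad b_i \sim F_b, \quad \varepsilon_{ij} \sim F_{\varepsilon}.
    \end{align*}
    % The joint choice of F_b and F_{\varepsilon} induces the exchangeable
    % within-subject correlation; the copula view in the paper works directly with
    % the joint distribution of the latent variables.
    ```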

  15. An Automated Energy Detection Algorithm Based on Consecutive Mean Excision

    DTIC Science & Technology

    2018-01-01

    SUBJECT TERMS: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistical…
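
    A generic consecutive mean excision (CME) iteration for setting a detection threshold on an RF power spectrum is sketched below; the scale factor and convergence rule are textbook defaults, not necessarily those used in the report.

    ```python
    import numpy as np

    def consecutive_mean_excision(spectrum, scale=1.5, max_iter=100):
        """Generic consecutive mean excision (CME) detection threshold.

        Iteratively estimate the noise floor from bins below the current threshold,
        excising bins that exceed scale * mean until the set of noise bins stabilizes.
        Returns the final threshold; bins above it are flagged as signals.
        """
        x = np.asarray(spectrum, dtype=float)
        noise = np.ones_like(x, dtype=bool)          # start by treating every bin as noise
        threshold = scale * x.mean()
        for _ in range(max_iter):
            threshold = scale * x[noise].mean()
            new_noise = x <= threshold
            if np.array_equal(new_noise, noise):     # converged
                break
            noise = new_noise
        return threshold

    # detected = spectrum > consecutive_mean_excision(spectrum)
    ```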

  16. Predictive performance of rainfall thresholds for shallow landslides in Switzerland from gridded daily data

    NASA Astrophysics Data System (ADS)

    Leonarduzzi, Elena; Molnar, Peter; McArdell, Brian W.

    2017-08-01

    A high-resolution gridded daily precipitation data set was combined with a landslide inventory containing over 2000 events in the period 1972-2012 to analyze rainfall thresholds which lead to landsliding in Switzerland. We colocated triggering rainfall to landslides, developed distributions of triggering and nontriggering rainfall event properties, and determined rainfall thresholds and intensity-duration ID curves and validated their performance. The best predictive performance was obtained by the intensity-duration ID threshold curve, followed by peak daily intensity Imax and mean event intensity Imean. Event duration by itself had very low predictive power. A single country-wide threshold of Imax = 28 mm/d was extended into space by regionalization based on surface erodibility and local climate (mean daily precipitation). It was found that wetter local climate and lower erodibility led to significantly higher rainfall thresholds required to trigger landslides. However, we showed that the improvement in model performance due to regionalization was marginal and much lower than what can be achieved by having a high-quality landslide database. Reference cases in which the landslide locations and timing were randomized and the landslide sample size was reduced showed the sensitivity of the Imax rainfall threshold model. Jack-knife and cross-validation experiments demonstrated that the model was robust. The results reported here highlight the potential of using rainfall ID threshold curves and rainfall threshold values for predicting the occurrence of landslides on a country or regional scale with possible applications in landslide warning systems, even with daily data.
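
    A minimal sketch of how a single rainfall threshold such as Imax can be scored against a landslide inventory: build the contingency table of triggering versus nontriggering events and compute a skill measure. The true skill statistic used here is an assumed stand-in for the study's validation metric.

    ```python
    import numpy as np

    def threshold_skill(imax, landslide_observed, threshold):
        """Contingency-table skill of a single peak-daily-intensity (Imax) threshold.

        imax: peak daily rainfall intensity per rainfall event (mm/d)
        landslide_observed: boolean, whether the event triggered a landslide
        Returns hit rate, false-alarm rate, and true skill statistic (TSS = HR - FAR).
        """
        predicted = np.asarray(imax, dtype=float) >= threshold
        observed = np.asarray(landslide_observed, dtype=bool)
        hits = np.sum(predicted & observed)
        misses = np.sum(~predicted & observed)
        false_alarms = np.sum(predicted & ~observed)
        correct_negatives = np.sum(~predicted & ~observed)
        hit_rate = hits / (hits + misses)
        false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
        return hit_rate, false_alarm_rate, hit_rate - false_alarm_rate

    # Sweep candidate thresholds and keep the one maximizing TSS, analogous to the
    # country-wide Imax = 28 mm/d threshold reported above.
    ```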

  17. Optimal threshold estimator of a prognostic marker by maximizing a time-dependent expected utility function for a patient-centered stratified medicine.

    PubMed

    Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe

    2018-06-01

    Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare the thresholds estimated with the proposed expected utility approach and with purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data from a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze data from an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
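
    In schematic terms, the approach scans candidate marker cut-offs and keeps the one maximizing an expected-utility criterion. The time-dependent, censoring-aware QALY utility is implemented in the authors' R package ROCt; the sketch below treats it as a black-box function, which is an assumption for illustration only.

    ```python
    import numpy as np

    def optimal_threshold(marker_values, expected_utility):
        """Return the marker cut-off maximizing a supplied expected-utility function.

        expected_utility(threshold) must return the estimated mean utility (e.g. expected
        QALYs) of a treat-if-above-threshold strategy; here it is a black box standing in
        for the paper's time-dependent, censoring-aware estimator.
        """
        candidates = np.unique(np.asarray(marker_values, dtype=float))
        utilities = np.array([expected_utility(t) for t in candidates])
        best = int(np.argmax(utilities))
        return candidates[best], utilities[best]
    ```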

  18. Bone-anchored Hearing Aids: correlation between pure-tone thresholds and outcome in three user groups.

    PubMed

    Pfiffner, Flurin; Kompis, Martin; Stieger, Christof

    2009-10-01

    To investigate correlations between preoperative hearing thresholds and postoperative aided thresholds and speech understanding of users of Bone-anchored Hearing Aids (BAHA). Such correlations may be useful to estimate the postoperative outcome with BAHA from preoperative data. Retrospective case review. Tertiary referral center. Ninety-two adult unilaterally implanted BAHA users in 3 groups: (A) 24 subjects with a unilateral conductive hearing loss, (B) 38 subjects with a bilateral conductive hearing loss, and (C) 30 subjects with single-sided deafness. Preoperative air-conduction and bone-conduction thresholds and 3-month postoperative aided and unaided sound-field thresholds as well as speech understanding using German 2-digit numbers and monosyllabic words were measured and analyzed. Correlation between preoperative air-conduction and bone-conduction thresholds of the better and of the poorer ear and postoperative aided thresholds as well as correlations between gain in sound-field threshold and gain in speech understanding. Aided postoperative sound-field thresholds correlate best with BC threshold of the better ear (correlation coefficients, r2 = 0.237 to 0.419, p = 0.0006 to 0.0064, depending on the group of subjects). Improvements in sound-field threshold correspond to improvements in speech understanding. When estimating expected postoperative aided sound-field thresholds of BAHA users from preoperative hearing thresholds, the BC threshold of the better ear should be used. For the patient groups considered, speech understanding in quiet can be estimated from the improvement in sound-field thresholds.

  19. A pilot study to assess feasibility of value based pricing in Cyprus through pharmacoeconomic modelling and assessment of its operational framework: sorafenib for second line renal cell cancer.

    PubMed

    Petrou, Panagiotis; Talias, Michael A

    2014-01-01

    The continuing increase in pharmaceutical expenditure calls for new approaches to pricing and reimbursement of pharmaceuticals. Value-based pricing of pharmaceuticals is emerging as a useful tool and possesses theoretical attributes that can help health systems cope with rising pharmaceutical expenditure. The objective was to assess the feasibility of introducing a value-based pricing scheme for pharmaceuticals in Cyprus and to explore the integrative framework. A probabilistic Markov chain Monte Carlo model was created to simulate progression of advanced renal cell cancer, comparing sorafenib to standard best supportive care. A literature review was performed and efficacy data were transferred from a published landmark trial, while official price lists and clinical guidelines from the Cyprus Ministry of Health were used for cost calculations. Based on a proposed willingness-to-pay threshold, the maximum price of sorafenib for the indication of second-line renal cell cancer was assessed. The value-based price of sorafenib was found to be significantly lower than its current reference price. The feasibility of value-based pricing is documented, and pharmacoeconomic modelling can lead to robust results. Integration of value and affordability into the price is its main advantage, which has to be weighed against the lack of documentation for several theoretical parameters that influence the outcome. Smaller countries such as Cyprus may experience difficulties in establishing and sustaining the essential structures for this scheme.

  20. The use of quality-adjusted life-years in the economic evaluation of health technologies in Spain: a review of the 1990-2009 literature.

    PubMed

    Rodriguez, José Manuel; Paz, Silvia; Lizan, Luis; Gonzalez, Paloma

    2011-06-01

    To appraise economic evaluations of health technologies that included quality-adjusted life-years (QALYs) as an outcome measure conducted over the past 20 years in Spain. A systematic review of the literature was conducted. Economic evaluations that included QALYs as an outcome measure, conducted in Spain and published between January 1990 and December 2009 were identified. Primary and gray literature sources were reviewed. A total of 60 articles and 4 health technology assessment reports were included. Key findings were 1) the vast majority of articles (77.1%) referred to therapeutic interventions; 2) 63.2% dealt with pharmaceutical products and much fewer with preventive strategies, medical devices, or diagnostic interventions; 3) most evaluations referred to cardiovascular- (19.8%), respiratory- (16.3%), and cancer- (13.0%) related processes; 4) 80.3% were based on a theoretical model, most commonly Markov models (71.4%); 5) 67.3% adopted the National Health System perspective; 6) information on the methods used to describe the health states was given in 45.1% of studies; 7) 40.3% used the EuroQoL-5D to elicit preferences, whereas 66.1% gave no details on the methods applied to determine patients' choices; 8) it was possible to state who completed the questionnaires in only 17.7% of studies; 9) 77.1% of the interventions assessed were below the €30,000/QALY suggested affordable threshold in Spain. An increasing number of economic evaluations using QALYs had been conducted. Most of them relied on theoretical models. Several methodological issues remain unsolved. Great disparity exists regarding the reporting of the methods used to determine health states and utility values. Copyright © 2011. Published by Elsevier Inc.

  1. Neural activity in cortical area V4 underlies fine disparity discrimination.

    PubMed

    Shiozaki, Hiroshi M; Tanabe, Seiji; Doi, Takahiro; Fujita, Ichiro

    2012-03-14

    Primates are capable of discriminating depth with remarkable precision using binocular disparity. Neurons in area V4 are selective for relative disparity, which is the crucial visual cue for discrimination of fine disparity. Here, we investigated the contribution of V4 neurons to fine disparity discrimination. Monkeys discriminated whether the center disk of a dynamic random-dot stereogram was in front of or behind its surrounding annulus. We first behaviorally tested the reference frame of the disparity representation used for performing this task. After learning the task with a set of surround disparities, the monkey generalized its responses to untrained surround disparities, indicating that the perceptual decisions were generated from a disparity representation in a relative frame of reference. We then recorded single-unit responses from V4 while the monkeys performed the task. On average, neuronal thresholds were higher than the behavioral thresholds. The most sensitive neurons reached thresholds as low as the psychophysical thresholds. For subthreshold disparities, the monkeys made frequent errors. The variable decisions were predictable from the fluctuation in the neuronal responses. The predictions were based on a decision model in which each V4 neuron transmits the evidence for the disparity it prefers. We finally altered the disparity representation artificially by means of microstimulation to V4. The decisions were systematically biased when microstimulation boosted the V4 responses. The bias was toward the direction predicted from the decision model. We suggest that disparity signals carried by V4 neurons underlie precise discrimination of fine stereoscopic depth.

  2. The influence of music and stress on musicians' hearing

    NASA Astrophysics Data System (ADS)

    Kähäri, Kim; Zachau, Gunilla; Eklöf, Mats; Möller, Claes

    2004-10-01

    Hearing and hearing disorders among classical and rock/jazz musicians were investigated. Pure-tone audiometry was done in 140 classical and 139 rock/jazz musicians. The rock/jazz musicians answered a questionnaire concerning hearing disorders and psychosocial exposure. All results were compared to age-appropriate reference materials. Hearing thresholds showed a notch configuration in both classical and rock/jazz musicians, indicating exposure to high sound levels, but hearing thresholds were on the whole well preserved. Female musicians had significantly better hearing thresholds in the high-frequency region than males. Rock/jazz musicians showed slightly worse hearing thresholds than classical musicians. A large proportion of the rock/jazz musicians (74%) suffered from one or more hearing disorders. Hearing loss, tinnitus and hyperacusis were the most common disorders and were significantly more frequent than in various reference populations. Among classical musicians, no accelerated deterioration of pure-tone hearing thresholds was found despite a further 16 years of musical noise exposure. In rock/jazz musicians, there were no relationships between psychosocial factors at work and hearing disorders. The rock/jazz musicians reported low stress and a high degree of energy. On average, the rock/jazz musicians reported higher control, lower stress and higher energy than a reference group of white-collar workers.

  3. Threshold Pricing: A Strategy for the Marketing of Adult Education Courses.

    ERIC Educational Resources Information Center

    Lamoureux, Marvin E.

    Because threshold pricing's scope for course price development had a good potential for application to the marketing of services by nonprofit organizations, this study's purpose was to determine the existence and applicability of course price thresholds or ranges to the decisionmaking framework of adult educators, with special reference to…

  4. Crossing the Threshold: Bringing Biological Variation to the Foreground

    ERIC Educational Resources Information Center

    Batzli, Janet M.; Knight, Jennifer K.; Hartley, Laurel M.; Maskiewicz, April Cordero; Desy, Elizabeth A.

    2016-01-01

    Threshold concepts have been referred to as "jewels in the curriculum": concepts that are key to competency in a discipline but not taught explicitly. In biology, researchers have proposed the idea of threshold concepts that include such topics as variation, randomness, uncertainty, and scale. In this essay, we explore how the notion of…

  5. Serum Cystatin C– Versus Creatinine-Based Definitions of Acute Kidney Injury Following Cardiac Surgery: A Prospective Cohort Study

    PubMed Central

    Spahillari, Aferdita; Parikh, Chirag R.; Sint, Kyaw; Koyner, Jay L.; Patel, Uptal D.; Edelstein, Charles L.; Passik, Cary S.; Thiessen-Philbrook, Heather; Swaminathan, Madhav; Shlipak, Michael G.

    2012-01-01

    Background The primary aim of this study was to compare the sensitivity and rapidity of AKI detection by cystatin C relative to creatinine following cardiac surgery. Study Design Prospective cohort study Settings and Participants 1,150 high-risk, adult cardiac surgery patients in the TRIBE-AKI (Translational Research Investigating Biomarker Endpoints for Acute Kidney Injury) Consortium. Predictor Changes in serum creatinine and cystatin C Outcome Post-surgical incidence of AKI Measurements Serum creatinine and cystatin C were measured at the preoperative visit and daily on postoperative days 1–5. To allow comparisons between changes in creatinine and cystatin C, AKI endpoints were defined by the relative increases in each marker from baseline (25, 50 and 100%) and the incidence of AKI was compared based upon each marker. Secondary aims were to compare clinical outcomes among patients defined as having AKI by cystatin C and/or creatinine. Results Overall, serum creatinine detected more cases of AKI than cystatin C: 35% developed a ≥25% increase in serum creatinine, whereas only 23% had ≥25% increase in cystatin C (p < 0.001). Creatinine also had higher proportions meeting the 50% (14% and 8%, p<0.001) and 100% (4% and 2%, p=0.005) thresholds for AKI diagnosis. Clinical outcomes were generally not statistically different for AKI cases detected by creatinine or cystatin C. However, for each AKI threshold, patients with AKI confirmed by both markers had significantly higher risk of the combined mortality/dialysis outcome compared with patients with AKI detected by creatinine alone (p=0.002). Limitations There were few adverse clinical outcomes, limiting our ability to detect differences in outcomes between subgroups of patients based upon their definitions of AKI. Conclusion In this large multicenter study, we found that cystatin C was less sensitive for AKI detection compared with creatinine. However, confirmation by cystatin C appeared to identify a subset of AKI patients with substantially higher risk of adverse outcomes. PMID:22809763
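
    The AKI endpoints above are simple relative-increase rules applied to each marker. A minimal sketch, with hypothetical values, of classifying a patient against the 25/50/100% thresholds:

    ```python
    def aki_by_relative_increase(baseline, postop_values, pct=25):
        """Flag AKI when any postoperative value rises at least pct% above baseline.

        Applied separately to serum creatinine and cystatin C so that the two markers
        can be compared at the same 25/50/100% relative thresholds.
        """
        limit = baseline * (1 + pct / 100.0)
        return any(v >= limit for v in postop_values)

    # Example: baseline creatinine 1.0 mg/dL followed by 1.1 and 1.3 mg/dL crosses
    # the 25% threshold but not the 50% threshold.
    print(aki_by_relative_increase(1.0, [1.1, 1.3], pct=25))  # True
    print(aki_by_relative_increase(1.0, [1.1, 1.3], pct=50))  # False
    ```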

  6. Segmentation of liver region with tumorous tissues

    NASA Astrophysics Data System (ADS)

    Zhang, Xuejun; Lee, Gobert; Tajima, Tetsuji; Kitagawa, Teruhiko; Kanematsu, Masayuki; Zhou, Xiangrong; Hara, Takeshi; Fujita, Hiroshi; Yokoyama, Ryujiro; Kondo, Hiroshi; Hoshi, Hiroaki; Nawano, Shigeru; Shinozaki, Kenji

    2007-03-01

    Segmentation of an abnormal liver region based on CT or MR images is a crucial step in surgical planning. However, carrying out this step precisely remains a challenge, both because of connections of the liver to other organs and because the shape, internal texture, and homogeneity of the liver may be extensively affected by liver disease. Here, we propose a non-density-based method for extracting the liver region containing tumor tissues by edge detection processing. Falsely extracted regions are eliminated by a shape analysis method and thresholding. If multi-phase images are available, the overall segmentation outcome can be improved by subtracting two phase images, and connections to other organs can be further eliminated by referring to the intensity in another phase image. Within an edge liver map, tumor candidates are identified by their gray values, which differ from those of the liver. After elimination of small and nonspherical over-extracted regions, the final liver region integrates the tumor region with the liver tissue. In our experiment, 40 cases of MDCT images were used, and the results showed that our fully automatic method for segmenting the liver region is effective and robust despite the presence of hepatic tumors.
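
    A very rough sketch of the kind of edge-and-shape pipeline described (edge detection, hole filling, removal of small spurious regions); the specific operators, percentile cut-off, and size limit are illustrative assumptions, and the multi-phase subtraction step is omitted.

    ```python
    import numpy as np
    from scipy import ndimage

    def rough_liver_mask(ct_slice, edge_percentile=90, min_size=500):
        """Edge map, hole filling, and removal of small regions, keeping the largest
        connected component as a crude liver candidate."""
        img = np.asarray(ct_slice, dtype=float)
        grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
        edges = grad > np.percentile(grad, edge_percentile)
        filled = ndimage.binary_fill_holes(edges)
        labels, n = ndimage.label(filled)
        if n == 0:
            return np.zeros_like(filled)
        sizes = ndimage.sum(filled, labels, index=range(1, n + 1))
        mask = labels == (1 + int(np.argmax(sizes)))
        return mask if mask.sum() >= min_size else np.zeros_like(mask)
    ```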

  7. Rapid identification of illegal synthetic adulterants in herbal anti-diabetic medicines using near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Feng, Yanchun; Lei, Deqing; Hu, Changqin

    We created a rapid detection procedure for identifying herbal medicines illegally adulterated with synthetic drugs using near infrared spectroscopy. This procedure includes a reverse correlation coefficient method (RCCM) and comparison of characteristic peaks. Moreover, we made improvements to the RCCM based on new strategies for threshold settings. Any tested herbal medicine must meet two criteria to be identified with our procedure as adulterated. First, the correlation coefficient between the tested sample and the reference must be greater than the RCCM threshold. Next, the NIR spectrum of the tested sample must contain the same characteristic peaks as the reference. In this study, four pure synthetic anti-diabetic drugs (i.e., metformin, gliclazide, glibenclamide and glimepiride), 174 batches of laboratory samples and 127 batches of herbal anti-diabetic medicines were used to construct and validate the procedure. The accuracy of this procedure was greater than 80%. Our data suggest that this protocol is a rapid screening tool to identify synthetic drug adulterants in herbal medicines on the market.
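
    The screening rule combines a correlation-coefficient threshold with a check for the reference drug's characteristic peaks. The sketch below mirrors that two-step logic; the correlation threshold, peak indices, and the simple local-maximum peak test are assumptions, not the published procedure.

    ```python
    import numpy as np

    def flags_adulteration(test_spectrum, reference_spectrum, cc_threshold, peak_indices, peak_tol=0.02):
        """Two-step screening rule mirroring the described procedure.

        1. The correlation coefficient between the test and reference NIR spectra must
           exceed the method's threshold.
        2. The test spectrum must show the reference drug's characteristic peaks, here
           checked crudely as local maxima around the given wavelength indices.
        """
        t = np.asarray(test_spectrum, dtype=float)
        r = np.asarray(reference_spectrum, dtype=float)
        if np.corrcoef(t, r)[0, 1] <= cc_threshold:
            return False
        for i in peak_indices:
            window = t[max(i - 2, 0): i + 3]
            if t[i] < window.max() - peak_tol:      # not a local maximum within tolerance
                return False
        return True
    ```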

  8. New diagnostic criteria for gestational diabetes mellitus and their impact on the number of diagnoses and pregnancy outcomes.

    PubMed

    Koning, Sarah H; van Zanden, Jelmer J; Hoogenberg, Klaas; Lutgers, Helen L; Klomp, Alberdina W; Korteweg, Fleurisca J; van Loon, Aren J; Wolffenbuttel, Bruce H R; van den Berg, Paul P

    2018-04-01

    Detection and management of gestational diabetes mellitus (GDM) are crucial to reduce the risk of pregnancy-related complications for both mother and child. In 2013, the WHO adopted new diagnostic criteria for GDM to improve pregnancy outcomes. However, the evidence supporting these criteria is limited. Consequently, these new criteria have not yet been endorsed in the Netherlands. The aim of this study was to determine the impact of these criteria on the number of GDM diagnoses and pregnancy outcomes. Data were available from 10,642 women who underwent a 75 g OGTT because of risk factors or signs suggestive of GDM. Women were treated if diagnosed with GDM according to the WHO 1999 criteria. Data on pregnancy outcomes were obtained from extensive chart reviews from 4,431 women and were compared between women with normal glucose tolerance (NGT) and women classified into the following groups: (1) GDM according to WHO 1999 criteria; (2) GDM according to WHO 2013 criteria; (3) GDM according to WHO 2013 fasting glucose threshold, but not WHO 1999 criteria; and (4) GDM according to WHO 1999 2 h plasma glucose threshold (2HG), but not WHO 2013 criteria. Applying the new WHO 2013 criteria would have increased the number of diagnoses by 45% (32% vs 22%) in this population of women at higher risk for GDM. In comparison with women with NGT, women classified as having GDM based only on the WHO 2013 threshold for fasting glucose, who were not treated for GDM, were more likely to have been obese (46.1% vs 28.1%, p < 0.001) and hypertensive (3.3% vs 1.2%, p < 0.001) before pregnancy, and to have had higher rates of gestational hypertension (7.8% vs 4.9%, p = 0.003), planned Caesarean section (10.3% vs 6.5%, p = 0.001) and induction of labour (34.8% vs 28.0%, p = 0.001). In addition, their neonates were more likely to have had an Apgar score <7 at 5 min (4.4% vs 2.6%, p = 0.015) and to have been admitted to the Neonatology Department (15.0% vs 11.1%, p = 0.004). The number of large for gestational age (LGA) neonates was not significantly different between the two groups. Women potentially missed owing to the higher 2HG threshold set by WHO 2013 had similar pregnancy outcomes to women with NGT. These women were all treated for GDM with diet and 20.5% received additional insulin. Applying the WHO 2013 criteria will have a major impact on the number of GDM diagnoses. Using the fasting glucose threshold set by WHO 2013 identifies a group of women with an increased risk of adverse outcomes compared with women with NGT. We therefore support the use of a lower fasting glucose threshold in the Dutch national guideline for GDM diagnosis. However, adopting the WHO 2013 criteria with a higher 2HG threshold would exclude women in whom treatment for GDM seems to be effective.

  9. An evidence- and risk-based approach to a harmonized laboratory alert list in Australia and New Zealand.

    PubMed

    Campbell, Craig A; Lam, Que; Horvath, Andrea R

    2018-04-19

    Individual laboratories are required to compose an alert list for identifying critical and significant risk results. The high-risk result working party of the Royal College of Pathologists of Australasia (RCPA) and the Australasian Association of Clinical Biochemists (AACB) has developed a risk-based approach for a harmonized alert list for laboratories throughout Australia and New Zealand. The six-step process for alert threshold identification and assessment involves reviewing the literature, rating the available evidence, performing a risk analysis, assessing method transferability, considering workload implications and seeking endorsement from stakeholders. To demonstrate this approach, a worked example for deciding the upper alert threshold for potassium is described. The findings of the worked example are for infants aged 0-6 months, a recommended upper potassium alert threshold of >7.0 mmol/L in serum and >6.5 mmol/L in plasma, and for individuals older than 6 months, a threshold of >6.2 mmol/L in both serum and plasma. Limitations in defining alert thresholds include the lack of well-designed studies that measure the relationship between high-risk results and patient outcomes or the benefits of treatment to prevent harm, and the existence of a wide range of clinical practice guidelines with conflicting decision points at which treatment is required. The risk-based approach described presents a transparent, evidence- and consensus-based methodology that can be used by any laboratory when designing an alert list for local use. The RCPA-AACB harmonized alert list serves as a starter set for further local adaptation or adoption after consultation with clinical users.

  10. Using natural range of variation to set decision thresholds: a case study for great plains grasslands

    USGS Publications Warehouse

    Symstad, Amy J.; Jonas, Jayne L.; Edited by Guntenspergen, Glenn R.

    2014-01-01

    Natural range of variation (NRV) may be used to establish decision thresholds or action assessment points when ecological thresholds are either unknown or do not exist for attributes of interest in a managed ecosystem. The process for estimating NRV involves identifying spatial and temporal scales that adequately capture the heterogeneity of the ecosystem; compiling data for the attributes of interest via study of historic records, analysis and interpretation of proxy records, modeling, space-for-time substitutions, or analysis of long-term monitoring data; and quantifying the NRV from those data. At least 19 National Park Service (NPS) units in North America’s Great Plains are monitoring plant species richness and evenness as indicators of vegetation integrity in native grasslands, but little information on natural, temporal variability of these indicators is available. In this case study, we use six long-term vegetation monitoring datasets to quantify the temporal variability of these attributes in reference conditions for a variety of Great Plains grassland types, and then illustrate the implications of using different NRVs based on these quantities for setting management decision thresholds. Temporal variability of richness (as measured by the coefficient of variation, CV) is fairly consistent across the wide variety of conditions occurring in Colorado shortgrass prairie to Minnesota tallgrass sand savanna (CV 0.20–0.45) and generally less than that of production at the same sites. Temporal variability of evenness spans a greater range of CV than richness, and it is greater than that of production in some sites but less in other sites. This natural temporal variability may mask undesirable changes in Great Plains grasslands vegetation. Consequently, we suggest that managers consider using a relatively narrow NRV (interquartile range of all richness or evenness values observed in reference conditions) for designating a surveillance threshold, at which greater attention to the situation would be paid, and a broader NRV for designating management thresholds, at which action would be instigated.
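
    In practice, deriving such thresholds from reference monitoring data amounts to summarizing the temporal distribution of richness or evenness. The sketch below computes the coefficient of variation, an interquartile-range surveillance band (the narrow NRV suggested above), and a wider band for management thresholds; the specific wide quantiles are an illustrative choice rather than the authors' prescription.

    ```python
    import numpy as np

    def nrv_thresholds(values, broad_quantiles=(0.05, 0.95)):
        """Derive surveillance and management decision thresholds from reference data.

        values: richness or evenness measured over time under reference conditions.
        Surveillance thresholds use the interquartile range; management thresholds
        use a wider, assumed quantile range.
        """
        values = np.asarray(values, dtype=float)
        cv = values.std(ddof=1) / values.mean()
        surveillance = tuple(np.percentile(values, [25, 75]))
        management = tuple(np.quantile(values, broad_quantiles))
        return cv, surveillance, management
    ```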

  11. Individualized Prediction of Heat Stress in Firefighters: A Data-Driven Approach Using Classification and Regression Trees.

    PubMed

    Mani, Ashutosh; Rao, Marepalli; James, Kelley; Bhattacharya, Amit

    2015-01-01

    The purpose of this study was to explore data-driven models, based on decision trees, to develop practical and easy to use predictive models for early identification of firefighters who are likely to cross the threshold of hyperthermia during live-fire training. Predictive models were created for three consecutive live-fire training scenarios. The final predicted outcome was a categorical variable: will a firefighter cross the upper threshold of hyperthermia - Yes/No. Two tiers of models were built, one with and one without taking into account the outcome (whether a firefighter crossed hyperthermia or not) from the previous training scenario. First tier of models included age, baseline heart rate and core body temperature, body mass index, and duration of training scenario as predictors. The second tier of models included the outcome of the previous scenario in the prediction space, in addition to all the predictors from the first tier of models. Classification and regression trees were used independently for prediction. The response variable for the regression tree was the quantitative variable: core body temperature at the end of each scenario. The predicted quantitative variable from regression trees was compared to the upper threshold of hyperthermia (38°C) to predict whether a firefighter would enter hyperthermia. The performance of classification and regression tree models was satisfactory for the second (success rate = 79%) and third (success rate = 89%) training scenarios but not for the first (success rate = 43%). Data-driven models based on decision trees can be a useful tool for predicting physiological response without modeling the underlying physiological systems. Early prediction of heat stress coupled with proactive interventions, such as pre-cooling, can help reduce heat stress in firefighters.
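
    A minimal sketch of the first-tier classification-tree model with hypothetical training data; the predictor values, tree depth, and toy data set are assumptions, shown only to make the model structure concrete.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical first-tier predictors for one training scenario:
    # [age, baseline heart rate, baseline core temp (°C), BMI, scenario duration (min)]
    X = np.array([[25, 70, 37.0, 24.0, 20], [40, 82, 37.2, 29.5, 25],
                  [33, 75, 37.1, 26.0, 22], [47, 90, 37.4, 31.0, 25],
                  [29, 68, 36.9, 23.5, 18], [52, 88, 37.3, 30.0, 24]])
    y = np.array([0, 1, 0, 1, 0, 1])   # 1 = crossed the 38 °C hyperthermia threshold

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(clf.predict([[35, 80, 37.2, 28.0, 23]]))
    # A second-tier model would append the outcome of the previous scenario as an extra
    # predictor; a regression tree would instead predict end-of-scenario core temperature
    # and compare it with the 38 °C threshold.
    ```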

  12. 48 CFR 736.602-5 - Short selection process for procurements not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 Federal Acquisition Regulations System, Section 736.602-5 (2010-10-01): Short selection process for procurements not to exceed the simplified acquisition threshold. References to FAR 36…

  13. Frame-of-Reference Training Effectiveness: Effects of Goal Orientation and Self-Efficacy on Affective, Cognitive, Skill-Based, and Transfer Outcomes

    ERIC Educational Resources Information Center

    Dierdorff, Erich C.; Surface, Eric A.; Brown, Kenneth G.

    2010-01-01

    Empirical evidence supporting frame-of-reference (FOR) training as an effective intervention for calibrating raters is convincing. Yet very little is known about who does better or worse in FOR training. We conducted a field study of how motivational factors influence affective, cognitive, and behavioral learning outcomes, as well as near transfer…

  14. Structured decision making as a conceptual framework to identify thresholds for conservation and management

    USGS Publications Warehouse

    Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.

    2009-01-01

    Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.

  15. Minimum change in spherical aberration that can be perceived

    PubMed Central

    Manzanera, Silvestre; Artal, Pablo

    2016-01-01

    It is important to know the visual sensitivity to optical blur from both a basic science perspective and a practical point of view. Of particular interest is the sensitivity to blur induced by spherical aberration because it is being used to increase depth of focus as a component of a presbyopic solution. Using a flicker detection-based procedure implemented on an adaptive optics visual simulator, we measured the spherical aberration thresholds that produce just-noticeable differences in perceived image quality. The thresholds were measured for positive and negative values of spherical aberration, for best focus and + 0.5 D and + 1.0 D of defocus. At best focus, the SA thresholds were 0.20 ± 0.01 µm and −0.17 ± 0.03 µm for positive and negative spherical aberration respectively (referred to a 6-mm pupil). These experimental values may be useful in setting spherical aberration permissible levels in different ophthalmic techniques. PMID:27699113

  16. Effects of stimulation parameters and electrode location on thresholds for epidural stimulation of cat motor cortex

    NASA Astrophysics Data System (ADS)

    Wongsarnpigoon, Amorn; Grill, Warren M.

    2011-12-01

    Epidural electrical stimulation (ECS) of the motor cortex is a developing therapy for neurological disorders. Both placement and programming of ECS systems may affect the therapeutic outcome, but the treatment parameters that will maximize therapeutic outcomes and minimize side effects are not known. We delivered ECS to the motor cortex of anesthetized cats and investigated the effects of electrode placement and stimulation parameters on thresholds for evoking motor responses in the contralateral forelimb. Thresholds were inversely related to stimulation frequency and the number of pulses per stimulus train. Thresholds were lower over the forelimb representation in motor cortex (primary site) than surrounding sites (secondary sites), and thresholds at sites <4 mm away from the primary site were significantly lower than at sites >4 mm away. Electrode location and montage influenced the effects of polarity on thresholds: monopolar anodic and cathodic thresholds were not significantly different over the primary site, cathodic thresholds were significantly lower than anodic thresholds over secondary sites and bipolar thresholds were significantly lower with the anode over the primary site than with the cathode over the primary site. A majority of bipolar thresholds were either between or equal to the respective monopolar thresholds, but several bipolar thresholds were greater than or less than the monopolar thresholds of both the anode and cathode. During bipolar stimulation, thresholds were influenced by both electric field superposition and indirect, synaptically mediated interactions. These results demonstrate the influence of stimulation parameters and electrode location during cortical stimulation, and these effects should be considered during the programming of systems for therapeutic cortical stimulation.

  17. Quantitative sensory testing in the German Research Network on Neuropathic Pain (DFNS): reference data for the trunk and application in patients with chronic postherpetic neuralgia.

    PubMed

    Pfau, Doreen B; Krumova, Elena K; Treede, Rolf-Detlef; Baron, Ralf; Toelle, Thomas; Birklein, Frank; Eich, Wolfgang; Geber, Christian; Gerhardt, Andreas; Weiss, Thomas; Magerl, Walter; Maier, Christoph

    2014-05-01

    Age- and gender-matched reference values are essential for the clinical use of quantitative sensory testing (QST). To extend the standard test sites for QST-according to the German Research Network on Neuropathic Pain-to the trunk, we collected QST profiles on the back in 162 healthy subjects. Sensory profiles for standard test sites were within normal interlaboratory differences. QST revealed lower sensitivity on the upper back than the hand, and higher sensitivity on the lower back than the foot, but no systematic differences between these trunk sites. Age effects were significant for most parameters. Females exhibited lower pressure pain thresholds (PPT) than males, which was the only significant gender difference. Values outside the 95% confidence interval of healthy subjects (considered abnormal) required temperature changes of >3.3-8.2 °C for thermal detection. For cold pain thresholds, confidence intervals extended mostly beyond safety cutoffs, hence only relative reference data (left-right differences, hand-trunk differences) were sufficiently sensitive. For mechanical detection and pain thresholds, left-right differences were 1.5-2.3 times more sensitive than absolute reference data. The most sensitive parameter was PPT, where already side-to-side differences >35% were abnormal. Compared to trunk reference data, patients with postherpetic neuralgia exhibited thermal and tactile deficits and dynamic mechanical allodynia, mostly without reduced mechanical pain thresholds. This pattern deviates from other types of neuropathic pain. QST reference data for the trunk will also be useful for patients with postthoracotomy pain or chronic back pain. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  18. Does Teacher Certification Program Lead to Better Quality Teachers? Evidence from Indonesia

    ERIC Educational Resources Information Center

    Kusumawardhani, Prita Nurmalia

    2017-01-01

    This paper examines the impact of the teacher certification program in Indonesia in 2007 and 2008 on student and teacher outcomes. I create a rule-based instrumental variable from discontinuities arising from the assignment mechanism of teachers into certification program. The thresholds are determined empirically. The study applies a two-sample…

  19. Mechanical and biomechanical analysis of a linear piston design for angular-velocity-based orthotic control.

    PubMed

    Lemaire, Edward D; Samadi, Reza; Goudreau, Louis; Kofman, Jonathan

    2013-01-01

    A linear piston hydraulic angular-velocity-based control knee joint was designed for people with knee-extensor weakness to engage knee-flexion resistance when knee-flexion angular velocity reaches a preset threshold, such as during a stumble, but to otherwise allow free knee motion. During mechanical testing at the lowest angular-velocity threshold, the device engaged within 2 degrees knee flexion and resisted moment loads of over 150 Nm. The device completed 400,000 loading cycles without mechanical failure or wear that would affect function. Gait patterns of nondisabled participants were similar to normal at walking speeds that produced below-threshold knee angular velocities. Fast walking speeds, employed purposely to attain the angular-velocity threshold and cause knee-flexion resistance, reduced maximum knee flexion by approximately 25 degrees but did not lead to unsafe gait patterns in foot ground clearance during swing. In knee collapse tests, the device successfully engaged knee-flexion resistance and stopped knee flexion with peak knee moments of up to 235.6 Nm. The outcomes from this study support the potential for the linear piston hydraulic knee joint in knee and knee-ankle-foot orthoses for people with lower-limb weakness.

  20. Perceptual color difference metric including a CSF based on the perception threshold

    NASA Astrophysics Data System (ADS)

    Rosselli, Vincent; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2008-01-01

    The study of the Human Visual System (HVS) is valuable for quantifying the quality of a picture, predicting which information will be perceived in it, applying adapted tools ... The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts the behavior for the three channels. The CSF has commonly been constructed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach for spatio-chromatic CSF construction based on matching experiments to estimate the perception threshold. It consists of matching the contrast of a test stimulus with that of a reference one. The results obtained are quite different from those of the standard approaches, as the chromatic CSFs show band-pass rather than low-pass behavior. The resulting model has been integrated into a perceptual color difference metric inspired by s-CIELAB. The metric is then evaluated with both objective and subjective procedures.

  1. Degenerate band edge laser

    NASA Astrophysics Data System (ADS)

    Veysi, Mehdi; Othman, Mohamed A. K.; Figotin, Alexander; Capolino, Filippo

    2018-05-01

    We propose a class of lasers based on a fourth-order exceptional point of degeneracy (EPD) referred to as the degenerate band edge (DBE). EPDs have been found in parity-time-symmetric photonic structures that require loss and/or gain; here we show that the DBE is a different kind of EPD since it occurs in periodic structures that are lossless and gainless. Because of this property, a small level of gain is sufficient to induce single-frequency lasing based on a synchronous operation of four degenerate Floquet-Bloch eigenwaves. This lasing scheme constitutes a light-matter interaction mechanism that leads also to a unique scaling law of the laser threshold with the inverse of the fifth power of the laser-cavity length. The DBE laser has the lowest lasing threshold in comparison to a regular band edge laser and to a conventional laser in cavities with the same loaded quality (Q ) factor and length. In particular, even without mirror reflectors the DBE laser exhibits a lasing threshold which is an order of magnitude lower than that of a uniform cavity laser of the same length and with very high mirror reflectivity. Importantly, this novel DBE lasing regime enforces mode selectivity and coherent single-frequency operation even for pumping rates well beyond the lasing threshold, in contrast to the multifrequency nature of conventional uniform cavity lasers.

  2. Wideband acoustic reflex test in a test battery to predict middle-ear dysfunction

    PubMed Central

    Keefe, Douglas H.; Fitzpatrick, Denis; Liu, Yi-Wen; Sanford, Chris A.; Gorga, Michael P.

    2013-01-01

    A wideband (WB) aural acoustical test battery of middle-ear status, including acoustic-reflex thresholds (ARTs) and acoustic-transfer functions (ATFs, i.e., absorbance and admittance) was hypothesized to be more accurate than 1-kHz tympanometry in classifying ears that pass or refer on a newborn hearing screening (NHS) protocol based on otoacoustic emissions. Assessment of middle-ear status may improve NHS programs by identifying conductive dysfunction and cases in which auditory neuropathy exists. Ipsilateral ARTs were assessed with a stimulus including four broadband-noise or tonal activator pulses alternating with five clicks presented before, between and after the pulses. The reflex shift was defined as the difference between final and initial click responses. ARTs were measured using maximum likelihood both at low frequencies (0.8–2.8 kHz) and high (2.8–8 kHz). The median low-frequency ART was elevated by 24 dB in NHS refers compared to passes. An optimal combination of ATF and ART tests performed better than either test alone in predicting NHS outcomes, and WB tests performed better than 1-kHz tympanometry. Medial olivocochlear efferent shifts in cochlear function may influence ARs, but their presence would also be consistent with normal conductive function. Baseline clinical and WB ARTs were also compared in ipsilateral and contralateral measurements in adults. PMID:19772907

  3. Threshold concepts in finance: conceptualizing the curriculum

    NASA Astrophysics Data System (ADS)

    Hoadley, Susan; Tickle, Leonie; Wood, Leigh N.; Kyng, Tim

    2015-08-01

    Graduates with well-developed capabilities in finance are invaluable to our society and in increasing demand. Universities face the challenge of designing finance programmes to develop these capabilities and the essential knowledge that underpins them. Our research responds to this challenge by identifying threshold concepts that are central to the mastery of finance and by exploring their potential for informing curriculum design and pedagogical practices to improve student outcomes. In this paper, we report the results of an online survey of finance academics at multiple institutions in Australia, Canada, New Zealand, South Africa and the United Kingdom. The outcomes of our research are recommendations for threshold concepts in finance endorsed by quantitative evidence, as well as a model of the finance curriculum incorporating finance, modelling and statistics threshold concepts. In addition, we draw conclusions about the application of threshold concept theory supported by both quantitative and qualitative evidence. Our methodology and findings have general relevance to the application of threshold concept theory as a means to investigate and inform curriculum design and delivery in higher education.

  4. Efficacy and safety of N-acetylcysteine in prevention of noise induced hearing loss: a randomized clinical trial.

    PubMed

    Kopke, Richard; Slade, Martin D; Jackson, Ronald; Hammill, Tanisha; Fausti, Stephen; Lonsbury-Martin, Brenda; Sanderson, Alicia; Dreisbach, Laura; Rabinowitz, Peter; Torre, Peter; Balough, Ben

    2015-05-01

    Despite a robust hearing conservation program, military personnel continue to be at high risk for noise induced hearing loss (NIHL). For more than a decade, a number of laboratories have investigated the use of antioxidants as a safe and effective adjunct to hearing conservation programs. Of the antioxidants that have been investigated, N-acetylcysteine (NAC) has consistently reduced permanent NIHL in the laboratory, but its clinical efficacy is still controversial. This study provides a prospective, randomized, double-blinded, placebo-controlled clinical trial investigating the safety profile and the efficacy of NAC to prevent hearing loss in a military population after weapons training. Of the 566 total study subjects, 277 received NAC while 289 were given placebo. The null hypothesis for the rate of STS was not rejected based on the measured results. While no significant differences were found for the primary outcome, rate of threshold shifts, the right ear threshold shift rate difference did approach significance (p = 0.0562). No significant difference was found in the second primary outcome, percentage of subjects experiencing an adverse event between placebo and NAC groups (26.7% and 27.4%, respectively, p = 0.4465). Results for the secondary outcome, STS rate in the trigger hand ear, did show a significant difference (34.98% for placebo-treated, 27.14% for NAC-treated, p-value = 0.0288). Additionally, post-hoc analysis showed significant differences in threshold shift rates when handedness was taken into account. While the secondary outcomes and post-hoc analysis suggest that NAC treatment is superior to the placebo, the present study design failed to confirm this. The lack of significant differences in overall hearing loss between the treatment and placebo groups may be due to a number of factors, including suboptimal dosing, premature post-exposure audiograms, or differences in risk between ears or subjects. Based on secondary outcomes and post hoc analyses however, further studies seem warranted and are needed to clarify dose response and the factors that may have played a role in the observed results. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. A Review of Physical Fitness as It Pertains to the Military Services

    DTIC Science & Technology

    1985-07-01

    muscle metabolic capacity. The results of this research have led to a measure which is commonly referred to as "anaerobic threshold" (40). This is an...unfortunate term in that it is really a measure of aerobic metabolic capacity, not anaerobic. Anaerobic threshold is defined as the point of exercise...the individual can exercise without lactate accumulation, the higher the anaerobic threshold value. Untrained individuals have a threshold at a work

  6. Using thresholds based on risk of cardiovascular disease to target treatment for hypertension: modelling events averted and number treated

    PubMed Central

    Baker, Simon; Priest, Patricia; Jackson, Rod

    2000-01-01

    Objective To estimate the impact of using thresholds based on absolute risk of cardiovascular disease to target drug treatment to lower blood pressure in the community. Design Modelling of three thresholds of treatment for hypertension based on the absolute risk of cardiovascular disease. 5 year risk of disease was estimated for each participant using an equation to predict risk. Net predicted impact of the thresholds on the number of people treated and the number of disease events averted over 5 years was calculated assuming a relative treatment benefit of one quarter. Setting Auckland, New Zealand. Participants 2158 men and women aged 35-79 years randomly sampled from the general electoral rolls. Main outcome measures Predicted 5 year risk of cardiovascular disease event, estimated number of people for whom treatment would be recommended, and disease events averted over 5 years at different treatment thresholds. Results 46 374 (12%) Auckland residents aged 35-79 receive drug treatment to lower their blood pressure, averting an estimated 1689 disease events over 5 years. Restricting treatment to individuals with blood pressure ⩾170/100 mm Hg and those with blood pressure between 150/90-169/99 mm Hg who have a predicted 5 year risk of disease ⩾10% would increase the net number for whom treatment would be recommended by 19 401. This 42% relative increase is predicted to avert 1139/1689 (68%) additional disease events overall over 5 years compared with current treatment. If the threshold for 5 year risk of disease is set at 15% the number recommended for treatment increases by <10% but about 620/1689 (37%) additional events can be averted. A 20% threshold decreases the net number of patients recommended for treatment by about 10% but averts 204/1689 (12%) more disease events than current treatment. Conclusions Implementing treatment guidelines that use treatment thresholds based on absolute risk could significantly improve the efficiency of drug treatment to lower blood pressure in primary care. PMID:10710577
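
    The modelling arithmetic described above reduces to summing predicted 5-year risks over the people who meet a treatment rule and applying the assumed relative treatment benefit of one quarter. A minimal Python sketch follows; the cohort, the risk values and the helper name events_averted are illustrative inventions, not the study's risk equation or data.

        # Sketch of the events-averted arithmetic: treatment is recommended for
        # BP >= 170/100, or for BP >= 150/90 when predicted 5-year CVD risk meets
        # the threshold; expected events averted assume a relative benefit of 0.25.
        def events_averted(people, risk_threshold, relative_benefit=0.25):
            """people: dicts with 'risk' (5-year CVD risk, 0-1), 'sbp', 'dbp' (mm Hg)."""
            treated, averted = 0, 0.0
            for p in people:
                high_bp = p["sbp"] >= 170 or p["dbp"] >= 100
                mid_bp = p["sbp"] >= 150 or p["dbp"] >= 90
                if high_bp or (mid_bp and p["risk"] >= risk_threshold):
                    treated += 1
                    averted += p["risk"] * relative_benefit  # expected events prevented
            return treated, averted

        # Toy cohort of three people (illustrative values only).
        cohort = [
            {"sbp": 172, "dbp": 96, "risk": 0.20},
            {"sbp": 154, "dbp": 92, "risk": 0.12},
            {"sbp": 152, "dbp": 88, "risk": 0.06},
        ]
        for thr in (0.10, 0.15, 0.20):
            n, ev = events_averted(cohort, thr)
            print(f"risk threshold {thr:.0%}: treat {n}, avert ~{ev:.2f} events over 5 years")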

  7. Should English healthcare providers be penalised for failing to collect patient-reported outcome measures? A retrospective analysis.

    PubMed

    Gutacker, Nils; Street, Andrew; Gomes, Manuel; Bojke, Chris

    2015-08-01

    The best practice tariff for hip and knee replacement in the English National Health Service (NHS) rewards providers based on improvements in patient-reported outcome measures (PROMs) collected before and after surgery. Providers only receive a bonus if at least 50% of their patients complete the preoperative questionnaire. We determined how many providers failed to meet this threshold prior to the policy introduction and assessed longitudinal stability of participation rates. Retrospective observational study using data from Hospital Episode Statistics and the national PROM programme from April 2009 to March 2012. We calculated participation rates based on either (a) all PROM records or (b) only those that could be linked to inpatient records; constructed confidence intervals around rates to account for sampling variation; applied precision weighting to allow for volume; and applied risk adjustment. NHS hospitals and private providers in England. NHS patients undergoing elective unilateral hip and knee replacement surgery. Number of providers with participation rates statistically significantly below 50%. Crude rates identified many providers that failed to achieve the 50% threshold but there were substantially fewer after adjusting for uncertainty and precision. While important, risk adjustment required restricting the analysis to linked data. Year-on-year correlation between provider participation rates was moderate. Participation rates have improved over time and only a small number of providers now fall below the threshold, but administering preoperative questionnaires remains problematic in some providers. We recommend that participation rates are based on linked data and take into account sampling variation. © The Royal Society of Medicine.
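
    A small sketch of the participation-rate check described above: compute a confidence interval around each provider's rate and flag the provider only if the whole interval sits below the 50% threshold. The Wilson score interval and the provider counts are illustrative assumptions; the paper's exact interval construction, precision weighting and risk adjustment are not reproduced here.

        # Flag providers whose PROM participation rate is statistically
        # significantly below 50%, using a Wilson score interval.
        from math import sqrt

        def wilson_interval(completed, eligible, z=1.96):
            """95% Wilson score interval for a binomial proportion."""
            if eligible == 0:
                return (0.0, 1.0)
            p = completed / eligible
            denom = 1 + z**2 / eligible
            centre = (p + z**2 / (2 * eligible)) / denom
            half = z * sqrt(p * (1 - p) / eligible + z**2 / (4 * eligible**2)) / denom
            return (centre - half, centre + half)

        def below_threshold(completed, eligible, threshold=0.50):
            """True only if the whole 95% CI lies below the threshold."""
            lo, hi = wilson_interval(completed, eligible)
            return hi < threshold

        # Hypothetical providers: (preoperative questionnaires returned, eligible episodes).
        providers = {"A": (180, 400), "B": (45, 60), "C": (20, 45)}
        for name, (c, n) in providers.items():
            print(name, wilson_interval(c, n), "flag" if below_threshold(c, n) else "ok")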

  8. A zone-specific fish-based biotic index as a management tool for the Zeeschelde estuary (Belgium).

    PubMed

    Breine, Jan; Quataert, Paul; Stevens, Maarten; Ollevier, Frans; Volckaert, Filip A M; Van den Bergh, Ericia; Maes, Joachim

    2010-07-01

    Fish-based indices monitor changes in surface waters and are a valuable aid in communication by summarising complex information about the environment (Harrison and Whitfield, 2004). A zone-specific fish-based multimetric estuarine index of biotic integrity (Z-EBI) was developed based on a 13 year time series of fish surveys from the Zeeschelde estuary (Belgium). Sites were pre-classified using indicators of anthropogenic impact. Metrics showing a monotone response with pressure classes were selected for further analysis. Thresholds for the good ecological potential (GEP) were defined from references. A modified trisection was applied for the other thresholds. The Z-EBI is defined by the average of the metric scores calculated over a one year period and translated into an ecological quality ratio (EQR). The indices integrate structural and functional qualities of the estuarine fish communities. The Z-EBI performances were successfully validated for habitat degradation in the various habitat zones. Copyright 2010 Elsevier Ltd. All rights reserved.

  9. Comparison of the Between the Flags calling criteria to the MEWS, NEWS and the electronic Cardiac Arrest Risk Triage (eCART) score for the identification of deteriorating ward patients.

    PubMed

    Green, Malcolm; Lander, Harvey; Snyder, Ashley; Hudson, Paul; Churpek, Matthew; Edelson, Dana

    2018-02-01

    Traditionally, paper-based observation charts have been used to identify deteriorating patients, with the recent emergence of electronic medical records allowing electronic algorithms to risk stratify and help direct the response to deterioration. We sought to compare the Between the Flags (BTF) calling criteria to the Modified Early Warning Score (MEWS), National Early Warning Score (NEWS) and electronic Cardiac Arrest Risk Triage (eCART) score. Multicenter retrospective analysis of electronic health record data from all patients admitted to five US hospitals from November 2008-August 2013. The outcome was cardiac arrest, ICU transfer or death within 24 h of a score. Overall accuracy was highest for eCART, with an AUC of 0.801 (95% CI 0.799-0.802), followed by NEWS, MEWS and BTF respectively (0.718 [0.716-0.720]; 0.698 [0.696-0.700]; 0.663 [0.661-0.664]). BTF criteria had a high risk (Red Zone) specificity of 95.0% and a moderate risk (Yellow Zone) specificity of 27.5%, which corresponded to MEWS thresholds of >=4 and >=2, NEWS thresholds of >=5 and >=2, and eCART thresholds of >=12 and >=4, respectively. At those thresholds, eCART caught 22 more adverse events per 10,000 patients than BTF using the moderate risk criteria and 13 more using high risk criteria, while MEWS and NEWS identified the same or fewer. An electronically generated eCART score was more accurate than commonly used paper-based observation tools for predicting the composite outcome of in-hospital cardiac arrest, ICU transfer and death within 24 h of observation. The outcomes of this analysis lend weight to a move towards an algorithm-based electronic risk identification tool for deteriorating patients to ensure earlier detection and prevent adverse events in the hospital. Copyright © 2017 Elsevier B.V. All rights reserved.
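
    The threshold-matching step reported above (finding the MEWS, NEWS and eCART cut-offs whose specificity matches the BTF Red and Yellow Zones) can be sketched as follows. The scores and outcomes are simulated placeholders, and sklearn's roc_auc_score stands in for the study's AUC calculation.

        # Choose the lowest score threshold whose specificity is at least that of a
        # reference criterion (e.g. the BTF Red Zone), then report overall AUC.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        def threshold_at_specificity(scores, outcomes, target_spec):
            """Smallest threshold t such that 'score >= t is positive' has
            specificity >= target_spec; returns (t, achieved specificity)."""
            scores = np.asarray(scores)
            outcomes = np.asarray(outcomes).astype(bool)
            negatives = scores[~outcomes]
            for t in np.sort(np.unique(scores)):
                spec = np.mean(negatives < t)      # true negatives / all negatives
                if spec >= target_spec:
                    return t, spec
            return None, None

        rng = np.random.default_rng(0)
        outcome = rng.random(5000) < 0.05                  # ~5% adverse events
        score = rng.poisson(3, 5000) + 4 * outcome         # events score higher

        print("AUC:", round(roc_auc_score(outcome, score), 3))
        print("threshold matching 95% specificity:",
              threshold_at_specificity(score, outcome, 0.95))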

  10. Neurologic outcomes of toxic oil syndrome patients 18 years after the epidemic.

    PubMed Central

    de la Paz, Manuel Posada; Philen, Rossanne M; Gerr, Fredric; Letz, Richard; Ferrari Arroyo, Maria José; Vela, Lydia; Izquierdo, Maravillas; Arribas, Concepción Martín; Borda, Ignacio Abaitua; Ramos, Alejandro; Mora, Cristina; Matesanz, Gloria; Roldán, Maria Teresa; Pareja, Juan

    2003-01-01

    Toxic oil syndrome (TOS) resulted from consumption of rapeseed oil denatured with 2% aniline and affected more than 20,000 persons. Eighteen years after the epidemic, many patients continue to report neurologic symptoms that are difficult to evaluate using conventional techniques. We conducted an epidemiologic study to determine whether an exposure to toxic oil 18 years ago was associated with current adverse neurobehavioral effects. We studied a case group of 80 adults exposed to toxic oil 18 years ago and a referent group of 79 adult age- and sex-frequency-matched unexposed subjects. We interviewed subjects for demographics, health status, exposures to neurotoxicants, and responses to the Kaufman Brief Intelligence Test (K-BIT), Programa Integrado de Exploracion Neuropsicologica (PIEN), and Goldberg depression questionnaires and administered quantitative neurobehavioral and neurophysiologic tests by computer or trained nurses. The groups did not differ with respect to educational background or other critical variables. We examined associations between case and referent groups and the neurobehavioral and neurophysiologic outcomes of interest. Decreased distal strength of the dominant and nondominant hands and increased vibrotactile thresholds of the fingers and toes were significantly associated with exposure to toxic oil. Finger tapping, simple reaction time latency, sequence B latency, symbol digit latency, and auditory digit span were also significantly associated with exposure. Case subjects also had statistically significantly more neuropsychologic symptoms compared with referents. Using quantitative neurologic tests, we found significant adverse central and peripheral neurologic effects in a group of TOS patients 18 years after exposure to toxic oil when compared with a nonexposed referent group. These effects were not documented by standard clinical examination and were found more frequently in women. PMID:12896854

  11. Neurologic outcomes of toxic oil syndrome patients 18 years after the epidemic.

    PubMed

    de la Paz, Manuel Posada; Philen, Rossanne M; Gerr, Fredric; Letz, Richard; Ferrari Arroyo, Maria José; Vela, Lydia; Izquierdo, Maravillas; Arribas, Concepción Martín; Borda, Ignacio Abaitua; Ramos, Alejandro; Mora, Cristina; Matesanz, Gloria; Roldán, Maria Teresa; Pareja, Juan

    2003-08-01

    Toxic oil syndrome (TOS) resulted from consumption of rapeseed oil denatured with 2% aniline and affected more than 20,000 persons. Eighteen years after the epidemic, many patients continue to report neurologic symptoms that are difficult to evaluate using conventional techniques. We conducted an epidemiologic study to determine whether an exposure to toxic oil 18 years ago was associated with current adverse neurobehavioral effects. We studied a case group of 80 adults exposed to toxic oil 18 years ago and a referent group of 79 adult age- and sex-frequency-matched unexposed subjects. We interviewed subjects for demographics, health status, exposures to neurotoxicants, and responses to the Kaufman Brief Intelligence Test (K-BIT), Programa Integrado de Exploracion Neuropsicologica (PIEN), and Goldberg depression questionnaires and administered quantitative neurobehavioral and neurophysiologic tests by computer or trained nurses. The groups did not differ with respect to educational background or other critical variables. We examined associations between case and referent groups and the neurobehavioral and neurophysiologic outcomes of interest. Decreased distal strength of the dominant and nondominant hands and increased vibrotactile thresholds of the fingers and toes were significantly associated with exposure to toxic oil. Finger tapping, simple reaction time latency, sequence B latency, symbol digit latency, and auditory digit span were also significantly associated with exposure. Case subjects also had statistically significantly more neuropsychologic symptoms compared with referents. Using quantitative neurologic tests, we found significant adverse central and peripheral neurologic effects in a group of TOS patients 18 years after exposure to toxic oil when compared with a nonexposed referent group. These effects were not documented by standard clinical examination and were found more frequently in women.

  12. Prioritizing CD4 Count Monitoring in Response to ART in Resource-Constrained Settings: A Retrospective Application of Prediction-Based Classification

    PubMed Central

    Liu, Yan; Li, Xiaohong; Johnson, Margaret; Smith, Collette; Kamarulzaman, Adeeba bte; Montaner, Julio; Mounzer, Karam; Saag, Michael; Cahn, Pedro; Cesar, Carina; Krolewiecki, Alejandro; Sanne, Ian; Montaner, Luis J.

    2012-01-01

    Background Global programs of anti-HIV treatment depend on sustained laboratory capacity to assess treatment initiation thresholds and treatment response over time. Currently, there is no valid alternative to CD4 count testing for monitoring immunologic responses to treatment, but laboratory cost and capacity limit access to CD4 testing in resource-constrained settings. Thus, methods to prioritize patients for CD4 count testing could improve treatment monitoring by optimizing resource allocation. Methods and Findings Using a prospective cohort of HIV-infected patients (n = 1,956) monitored upon antiretroviral therapy initiation in seven clinical sites with distinct geographical and socio-economic settings, we retrospectively apply a novel prediction-based classification (PBC) modeling method. The model uses repeatedly measured biomarkers (white blood cell count and lymphocyte percent) to predict CD4+ T cell outcome through first-stage modeling and subsequent classification based on clinically relevant thresholds (CD4+ T cell count of 200 or 350 cells/µl). The algorithm correctly classified 90% (cross-validation estimate = 91.5%, standard deviation [SD] = 4.5%) of CD4 count measurements <200 cells/µl in the first year of follow-up; if laboratory testing is applied only to patients predicted to be below the 200-cells/µl threshold, we estimate a potential savings of 54.3% (SD = 4.2%) in CD4 testing capacity. A capacity savings of 34% (SD = 3.9%) is predicted using a CD4 threshold of 350 cells/µl. Similar results were obtained over the 3 y of follow-up available (n = 619). Limitations include a need for future economic healthcare outcome analysis, a need for assessment of extensibility beyond the 3-y observation time, and the need to assign a false positive threshold. Conclusions Our results support the use of PBC modeling as a triage point at the laboratory, lessening the need for laboratory-based CD4+ T cell count testing; implementation of this tool could help optimize the use of laboratory resources, directing CD4 testing towards higher-risk patients. However, further prospective studies and economic analyses are needed to demonstrate that the PBC model can be effectively applied in clinical settings. Please see later in the article for the Editors' Summary PMID:22529752
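
    A minimal sketch of the prediction-based classification (PBC) triage idea: a first-stage model predicts CD4 from white blood cell count and lymphocyte percent, and laboratory CD4 testing is reserved for patients predicted to fall below the clinical threshold. The linear model and the simulated data are illustrative assumptions, not the published first-stage model.

        # Predict CD4 from WBC and lymphocyte percent, then send only patients
        # predicted below the clinical threshold for a laboratory CD4 count.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 1000
        wbc = rng.normal(6.0, 1.5, n)             # white blood cells, 10^3 cells/ul
        lymph_pct = rng.normal(28.0, 9.0, n)      # lymphocyte percentage
        # Toy relationship only: CD4 rises with the absolute lymphocyte count.
        cd4 = 0.3 * wbc * 1000 * lymph_pct / 100 + rng.normal(0, 120, n)

        X = np.column_stack([wbc, lymph_pct])
        pred = LinearRegression().fit(X, cd4).predict(X)

        threshold = 200                            # cells/ul clinical threshold
        send_to_lab = pred < threshold             # only these get a CD4 test
        agreement = np.mean((pred < threshold) == (cd4 < threshold))
        print(f"CD4 tests saved: {1 - send_to_lab.mean():.1%}, "
              f"classification agreement: {agreement:.1%}")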

  13. Internal validation of the GlobalFiler™ Express PCR Amplification Kit for the direct amplification of reference DNA samples on a high-throughput automated workflow.

    PubMed

    Flores, Shahida; Sun, Jie; King, Jonathan; Budowle, Bruce

    2014-05-01

    The GlobalFiler™ Express PCR Amplification Kit uses 6-dye fluorescent chemistry to enable multiplexing of 21 autosomal STRs, 1 Y-STR, 1 Y-indel and the sex-determining marker amelogenin. The kit is specifically designed for processing reference DNA samples in a high-throughput manner. Validation studies were conducted to assess the performance and define the limitations of this direct amplification kit for typing blood and buccal reference DNA samples on various punchable collection media. Studies included thermal cycling sensitivity, reproducibility, precision, sensitivity of detection, minimum detection threshold, system contamination, stochastic threshold and concordance. Results showed that optimal amplification and injection parameters for a 1.2 mm punch from blood and buccal samples were 27 and 28 cycles, respectively, combined with a 12 s injection on an ABI 3500xL Genetic Analyzer. Minimum detection thresholds were set at 100 and 120 RFU for 27 and 28 cycles, respectively, and it was suggested that data from positive amplification controls provided a better threshold representation. Stochastic thresholds were set at 250 and 400 RFU for 27 and 28 cycles, respectively, as stochastic effects increased with cycle number. The minimum amount of input DNA resulting in a full profile was 0.5 ng; however, the optimum range determined was 2.5-10 ng. Profile quality from the GlobalFiler™ Express Kit and the previously validated AmpFlSTR® Identifiler® Direct Kit was comparable. The validation data support that reliable DNA typing results from reference DNA samples can be obtained using the GlobalFiler™ Express PCR Amplification Kit. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Rapid exclusion of the diagnosis of immune HIT by AcuStar HIT and heparin-induced multiple electrode aggregometry.

    PubMed

    Minet, V; Baudar, J; Bailly, N; Douxfils, J; Laloy, J; Lessire, S; Gourdin, M; Devalet, B; Chatelain, B; Dogné, J M; Mullier, F

    2014-06-01

    Accurate diagnosis of heparin-induced thrombocytopenia (HIT) is essential but remains challenging. We have previously demonstrated, in a retrospective study, the usefulness of the combination of the 4Ts score, AcuStar HIT and heparin-induced multiple electrode aggregometry (HIMEA) with optimized thresholds. We aimed to explore prospectively the performance of our optimized diagnostic algorithm in patients with suspected HIT. The secondary objective was to evaluate the performance of AcuStar HIT-Ab (PF4-H) in comparison with the clinical outcome. 116 inpatients with clinically suspected immune HIT were included. Our optimized diagnostic algorithm was applied to each patient. The sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) of the overall diagnostic strategy, as well as of AcuStar HIT-Ab (at the manufacturer's thresholds and at our thresholds), were calculated using clinical diagnosis as the reference. Among 116 patients, 2 patients had clinically diagnosed HIT. These 2 patients were positive on AcuStar HIT-Ab, AcuStar HIT-IgG and HIMEA. Using our optimized algorithm, all patients were correctly diagnosed. AcuStar HIT-Ab at our cut-off (>9.41 U/mL) and at the manufacturer's cut-off (>1.00 U/mL) both showed a sensitivity of 100.0%, with specificities of 99.1% and 90.4%, respectively. The combination of the 4Ts score, the HemosIL® AcuStar HIT and HIMEA with optimized thresholds may be useful for the rapid and accurate exclusion of the diagnosis of immune HIT. Copyright © 2014 Elsevier Ltd. All rights reserved.
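
    The diagnostic-performance quantities reported above follow directly from a 2x2 table against the clinical reference. The sketch below uses cell counts chosen to be consistent with the quoted 100% sensitivity and 99.1% specificity (2 true HIT cases, 1 false positive among 114 non-HIT patients); these counts are inferred for illustration only.

        # Sensitivity, specificity, PPV and NPV of a binary test against a
        # clinical reference diagnosis; the counts below are illustrative.
        def diagnostic_performance(tp, fp, fn, tn):
            sens = tp / (tp + fn) if tp + fn else float("nan")
            spec = tn / (tn + fp) if tn + fp else float("nan")
            ppv = tp / (tp + fp) if tp + fp else float("nan")
            npv = tn / (tn + fn) if tn + fn else float("nan")
            return {"sensitivity": sens, "specificity": spec, "PPV": ppv, "NPV": npv}

        # Example: 2 true HIT cases detected, 1 false positive among 114 non-HIT patients.
        print(diagnostic_performance(tp=2, fp=1, fn=0, tn=113))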

  15. A lane line segmentation algorithm based on adaptive threshold and connected domain theory

    NASA Astrophysics Data System (ADS)

    Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang

    2018-04-01

    Before detecting cracks and repairs on road lanes, it is necessary to eliminate the influence of lane lines on the recognition results in road lane images. To address the problems caused by lane lines, an image segmentation algorithm based on adaptive thresholding and connected-domain analysis is proposed. First, by analyzing features such as the grey-level distribution and illumination of the images, the algorithm uses the Hough transform to divide the images into different sections and converts each section into a binary image separately. It then uses connected-domain theory to refine the segmentation, remove noise and fill the interior of the lane lines. Experiments show that this method can eliminate the influence of illumination and lane-line abrasion, removing noise thoroughly while maintaining high segmentation precision.
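
    A compact OpenCV sketch of the two main stages described above: locally adaptive thresholding to binarise the image despite uneven illumination, followed by connected-component filtering to discard small noise blobs and fill the retained lane-line regions. The Hough-transform sectioning step is omitted, and all parameter values (block size, offset, minimum area) are illustrative.

        # Adaptive thresholding plus connected-component filtering for lane-line masks.
        import cv2
        import numpy as np

        def segment_lane_lines(gray, min_area=500):
            """gray: single-channel road image (uint8). Returns a binary mask."""
            # Locally adaptive threshold copes with uneven illumination.
            binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                           cv2.THRESH_BINARY, 31, -5)
            # Connected-domain analysis: keep only sufficiently large components.
            n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
            mask = np.zeros_like(binary)
            for i in range(1, n):                        # label 0 is the background
                if stats[i, cv2.CC_STAT_AREA] >= min_area:
                    mask[labels == i] = 255
            # Fill the interior of the retained components.
            return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))

        # Usage: mask = segment_lane_lines(cv2.imread("road.png", cv2.IMREAD_GRAYSCALE))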

  16. Extended high-frequency thresholds in college students: effects of music player use and other recreational noise.

    PubMed

    Le Prell, Colleen G; Spankovich, Christopher; Lobariñas, Edward; Griffiths, Scott K

    2013-09-01

    Human hearing is sensitive to sounds from as low as 20 Hz to as high as 20,000 Hz in normal ears. However, clinical tests of human hearing rarely include extended high-frequency (EHF) threshold assessments, at frequencies extending beyond 8000 Hz. EHF thresholds have been suggested for use monitoring the earliest effects of noise on the inner ear, although the clinical usefulness of EHF threshold testing is not well established for this purpose. The primary objective of this study was to determine if EHF thresholds in healthy, young adult college students vary as a function of recreational noise exposure. A retrospective analysis of a laboratory database was conducted; all participants with both EHF threshold testing and noise history data were included. The potential for "preclinical" EHF deficits was assessed based on the measured thresholds, with the noise surveys used to estimate recreational noise exposure. EHF thresholds measured during participation in other ongoing studies were available from 87 participants (34 male and 53 female); all participants had hearing within normal clinical limits (≤25 HL) at conventional frequencies (0.25-8 kHz). EHF thresholds closely matched standard reference thresholds [ANSI S3.6 (1996) Annex C]. There were statistically reliable threshold differences in participants who used music players, with 3-6 dB worse thresholds at the highest test frequencies (10-16 kHz) in participants who reported long-term use of music player devices (>5 yr), or higher listening levels during music player use. It should be possible to detect small changes in high-frequency hearing for patients or participants who undergo repeated testing at periodic intervals. However, the increased population-level variability in thresholds at the highest frequencies will make it difficult to identify the presence of small but potentially important deficits in otherwise normal-hearing individuals who do not have previously established baseline data. American Academy of Audiology.

  17. Impact of tumor size and tracer uptake heterogeneity in (18)F-FDG PET and CT non-small cell lung cancer tumor delineation.

    PubMed

    Hatt, Mathieu; Cheze-le Rest, Catherine; van Baardwijk, Angela; Lambin, Philippe; Pradier, Olivier; Visvikis, Dimitris

    2011-11-01

    The objectives of this study were to investigate the relationship between CT- and 18F-FDG PET-based tumor volumes in non-small cell lung cancer (NSCLC) and the impact of tumor size and uptake heterogeneity on various approaches to delineating uptake on PET images. Twenty-five NSCLC patients with 18F-FDG PET/CT were considered. Seventeen underwent surgical resection of their tumor, and the maximum diameter was measured. Two observers manually delineated the tumors on the CT images and the tumor uptake on the corresponding PET images, using a fixed threshold at 50% of the maximum (T50), an adaptive threshold methodology, and the fuzzy locally adaptive Bayesian (FLAB) algorithm. Maximum diameters of the delineated volumes were compared with the histopathology reference when available. The volumes of the tumors were compared, and correlations between the anatomic volume and PET uptake heterogeneity and the differences between delineations were investigated. All maximum diameters measured on PET and CT images significantly correlated with the histopathology reference (r > 0.89, P < 0.0001). Significant differences were observed among the approaches: CT delineation resulted in large overestimation (+32% ± 37%), whereas all delineations on PET images resulted in underestimation (from -15% ± 17% for T50 to -4% ± 8% for FLAB) except manual delineation (+8% ± 17%). Overall, CT volumes were significantly larger than PET volumes (55 ± 74 cm3 for CT vs. from 18 ± 25 to 47 ± 76 cm3 for PET). A significant correlation was found between anatomic tumor size and heterogeneity (larger lesions were more heterogeneous). Finally, the more heterogeneous the tumor uptake, the larger was the underestimation of PET volumes by threshold-based techniques. Volumes based on CT images were larger than those based on PET images. Tumor size and tracer uptake heterogeneity have an impact on threshold-based methods, which should not be used for the delineation of cases of large heterogeneous NSCLC, as these methods tend to largely underestimate the spatial extent of the functional tumor in such cases. For an accurate delineation of PET volumes in NSCLC, advanced image segmentation algorithms able to deal with tracer uptake heterogeneity should be preferred.
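
    Of the delineation approaches compared above, the fixed 50%-of-maximum threshold (T50) is the simplest and can be sketched in a few lines; the adaptive-threshold and FLAB methods are substantially more involved and are not reproduced here. The toy uptake volume and voxel size are invented.

        # Fixed-threshold delineation at 50% of the maximum uptake (T50) on a toy volume.
        import numpy as np

        def t50_segmentation(pet_volume, fraction=0.5):
            """Return a boolean mask of voxels above fraction * max uptake."""
            return pet_volume >= fraction * pet_volume.max()

        # Toy example: a blurred 'hot' sphere in a 32^3 volume.
        z, y, x = np.mgrid[:32, :32, :32]
        uptake = np.exp(-((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2) / 40.0)
        mask = t50_segmentation(uptake)
        voxel_volume_ml = 0.4 ** 3          # e.g. 4 mm isotropic voxels -> 0.064 ml
        print("segmented volume:", round(mask.sum() * voxel_volume_ml, 1), "ml")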

  18. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration.

    PubMed

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

    This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: How variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records.To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals' decreased, KPNC hospitals' performance would appear better. Future hospital benchmarking should consider the impact of variation in admission thresholds.

  19. Meeting Our Standards for Educational Justice: Doing Our Best with the Evidence

    ERIC Educational Resources Information Center

    Joyce, Kathryn E.; Cartwright, Nancy

    2018-01-01

    The United States considers educating all students to a threshold of adequate outcomes to be a central goal of educational justice. The No Child Left Behind Act introduced evidence-based policy and accountability protocols to ensure that all students receive an education that enables them to meet adequacy standards. Unfortunately, evidence-based…

  20. Costo-Efectividad Del Uso Profiláctico Del Factor Estimulante De Colonias De Granulocitos En Adultos Con Leucemia Linfoblástica Aguda en Colombia.

    PubMed

    Casadiego Rincón, Elkin Javier; Díaz Rojas, Jorge Augusto; Bermúdez, Carlos Daniel; Martínez, Víctor Prieto

    2016-12-01

    To assess the cost-effectiveness of prophylactic administration of Granulocyte Colony-Stimulating Factor (G-CSF), compared with no prophylaxis, during the induction phase of chemotherapy in adults with Acute Lymphoblastic Leukemia (ALL) in Colombia. A decision tree with a time horizon of 30 days was built from the Colombian health system perspective, including only direct costs. The costs of procedures and medications were taken from official sources and a national reference institution for oncology services. The safety and effectiveness data were taken from the literature and two Colombian cohorts of patients older than 15 years. The unit of outcome was the proportion of deaths avoided. Base-case results based on clinical trial data indicate that using G-CSF is a dominant strategy. The variable that most impacted the outcome was the incidence of febrile neutropenia. Considering a threshold of $22,228 USD, the use of G-CSF was cost-effective in 80% of cases. However, the use of G-CSF is not cost-effective for the country for incidences of febrile neutropenia > 48%. It was not possible to establish the cost-effectiveness of pegfilgrastim because no information was found. As per Colombian data, the use of prophylactic G-CSF during induction chemotherapy in adults with ALL turns out not to be cost-effective. The difference in the results suggests the need for careful extrapolation of information from clinical trials (ideal world) when developing economic evaluations in Colombia. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
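
    A minimal sketch of the decision-tree arithmetic behind this kind of comparison, with deaths avoided as the effectiveness unit and the quoted willingness-to-pay threshold used as the decision rule. All probabilities and costs below are placeholders, not the Colombian inputs.

        # Two-arm decision-tree sketch: prophylactic G-CSF versus no prophylaxis.
        def arm(p_fn, p_death_given_fn, cost_prophylaxis, cost_fn_episode, cost_base):
            """Expected cost and probability of death for one strategy arm."""
            p_death = p_fn * p_death_given_fn
            expected_cost = cost_base + cost_prophylaxis + p_fn * cost_fn_episode
            return expected_cost, p_death

        # Hypothetical inputs in USD.
        cost_gcsf, death_gcsf = arm(p_fn=0.30, p_death_given_fn=0.10,
                                    cost_prophylaxis=800, cost_fn_episode=5000,
                                    cost_base=10000)
        cost_none, death_none = arm(p_fn=0.48, p_death_given_fn=0.10,
                                    cost_prophylaxis=0, cost_fn_episode=5000,
                                    cost_base=10000)

        d_cost = cost_gcsf - cost_none
        deaths_avoided = death_none - death_gcsf
        WTP = 22228                                  # willingness-to-pay threshold (USD)
        if d_cost <= 0 and deaths_avoided > 0:
            print("G-CSF dominates: cheaper and more effective")
        elif deaths_avoided > 0:
            icer = d_cost / deaths_avoided
            print(f"ICER = {icer:,.0f} USD per death avoided ->",
                  "cost-effective" if icer <= WTP else "not cost-effective")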

  1. A novel approach for human whole transcriptome analysis based on absolute gene expression of microarray data.

    PubMed

    Bikel, Shirley; Jacobo-Albavera, Leonor; Sánchez-Muñoz, Fausto; Cornejo-Granados, Fernanda; Canizales-Quinteros, Samuel; Soberón, Xavier; Sotelo-Mundo, Rogerio R; Del Río-Navarro, Blanca E; Mendoza-Vargas, Alfredo; Sánchez, Filiberto; Ochoa-Leyva, Adrian

    2017-01-01

    In spite of the emergence of RNA sequencing (RNA-seq), microarrays remain in widespread use for gene expression analysis in the clinic. There are over 767,000 RNA microarrays from human samples in public repositories, which are an invaluable resource for biomedical research and personalized medicine. The absolute gene expression analysis allows the transcriptome profiling of all expressed genes under a specific biological condition without the need of a reference sample. However, the background fluorescence represents a challenge to determine the absolute gene expression in microarrays. Given that the Y chromosome is absent in female subjects, we used it as a new approach for absolute gene expression analysis in which the fluorescence of the Y chromosome genes of female subjects was used as the background fluorescence for all the probes in the microarray. This fluorescence was used to establish an absolute gene expression threshold, allowing the differentiation between expressed and non-expressed genes in microarrays. We extracted RNA from leukocyte samples of 16 children (nine males and seven females, ages 6-10 years). An Affymetrix Gene Chip Human Gene 1.0 ST Array was carried out for each sample and the fluorescence of 124 genes of the Y chromosome was used to calculate the absolute gene expression threshold. After that, several expressed and non-expressed genes according to our absolute gene expression threshold were compared against the expression obtained using real-time quantitative polymerase chain reaction (RT-qPCR). From the 124 genes of the Y chromosome, three genes (DDX3Y, TXLNG2P and EIF1AY) that displayed significant differences between sexes were used to calculate the absolute gene expression threshold. Using this threshold, we selected 13 expressed and non-expressed genes and confirmed their expression level by RT-qPCR. Then, we selected the top 5% most expressed genes and found that several KEGG pathways were significantly enriched. Interestingly, these pathways were related to the typical functions of leukocytes, such as antigen processing and presentation and natural killer cell mediated cytotoxicity. We also applied this method to obtain the absolute gene expression threshold in already published microarray data of liver cells, where the top 5% expressed genes showed an enrichment of typical KEGG pathways for liver cells. Our results suggest that the three selected genes of the Y chromosome can be used to calculate an absolute gene expression threshold, allowing a transcriptome profiling of microarray data without the need of an additional reference experiment. Our approach based on the establishment of a threshold for absolute gene expression analysis will allow a new way to analyze thousands of microarrays from public databases. This allows the study of different human diseases without the need of having additional samples for relative expression experiments.
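
    The core of the absolute-expression approach can be sketched as follows: treat the fluorescence of Y-chromosome probes measured in female arrays as background, derive a threshold from it, and call every probe above the threshold expressed. The mean-plus-2-SD rule and the simulated intensities are illustrative assumptions; the paper derives its threshold from three specific Y-chromosome genes.

        # Background-derived absolute expression threshold from female Y-probe signal.
        import numpy as np

        def absolute_expression_threshold(y_probe_intensity_females, k=2.0):
            """y_probe_intensity_females: (arrays x Y-probes) log2 intensities from
            female samples, where Y genes cannot be truly expressed."""
            background = np.asarray(y_probe_intensity_females).ravel()
            return background.mean() + k * background.std()

        def call_expressed(sample_intensities, threshold):
            """Boolean vector: True where a probe exceeds the absolute threshold."""
            return np.asarray(sample_intensities) > threshold

        # Toy data: 7 female arrays x 124 Y probes, then one sample's whole-array probes.
        rng = np.random.default_rng(2)
        female_y = rng.normal(5.0, 0.4, size=(7, 124))     # background fluorescence
        thr = absolute_expression_threshold(female_y)
        sample = rng.normal(6.0, 1.5, size=20000)          # whole-array intensities
        print(f"threshold = {thr:.2f} (log2); "
              f"expressed probes: {call_expressed(sample, thr).sum()}")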

  2. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part II. Statistical Methods of Meta-Analysis

    PubMed Central

    Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models including the bivariate model and the hierarchical summary receiver operating characteristic model are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
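
    As a point of reference, the bivariate random-effects model mentioned above is usually written in the following form; this is the standard formulation from the general literature, not an equation reproduced from this article. The logit-transformed sensitivity and specificity of study i are modelled jointly as

        \begin{pmatrix} \operatorname{logit}(\mathrm{Se}_i) \\ \operatorname{logit}(\mathrm{Sp}_i) \end{pmatrix}
        \sim N\!\left(
        \begin{pmatrix} \mu_{\mathrm{Se}} \\ \mu_{\mathrm{Sp}} \end{pmatrix},
        \begin{pmatrix} \sigma_{\mathrm{Se}}^{2} & \sigma_{\mathrm{SeSp}} \\ \sigma_{\mathrm{SeSp}} & \sigma_{\mathrm{Sp}}^{2} \end{pmatrix}
        \right),

    with the observed numbers of true positives and true negatives in each study following binomial distributions given Se_i and Sp_i; the covariance term is what captures the threshold-driven trade-off between sensitivity and specificity.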

  3. Robot-Assisted End-Effector-Based Stair Climbing for Cardiopulmonary Exercise Testing: Feasibility, Reliability, and Repeatability.

    PubMed

    Stoller, Oliver; Schindelholz, Matthias; Hunt, Kenneth J

    2016-01-01

    Neurological impairments can limit the implementation of conventional cardiopulmonary exercise testing (CPET) and cardiovascular training strategies. A promising approach to provoke cardiovascular stress while facilitating task-specific exercise in people with disabilities is feedback-controlled robot-assisted end-effector-based stair climbing (RASC). The aim of this study was to evaluate the feasibility, reliability, and repeatability of augmented RASC-based CPET in able-bodied subjects, with a view towards future research and applications in neurologically impaired populations. Twenty able-bodied subjects performed a familiarisation session and 2 consecutive incremental CPETs using augmented RASC. Outcome measures focussed on standard cardiopulmonary performance parameters and on accuracy of work rate tracking (RMSEP-root mean square error). Criteria for feasibility were cardiopulmonary responsiveness and technical implementation. Relative and absolute test-retest reliability were assessed by intraclass correlation coefficients (ICC), standard error of the measurement (SEM), and minimal detectable change (MDC). Mean differences, limits of agreement, and coefficients of variation (CoV) were estimated to assess repeatability. All criteria for feasibility were achieved. Mean V'O2peak was 106±9% of predicted V'O2max and mean HRpeak was 99±3% of predicted HRmax. 95% of the subjects achieved at least 1 criterion for V'O2max, and the detection of the sub-maximal ventilatory thresholds was successful (ventilatory anaerobic threshold 100%, respiratory compensation point 90% of the subjects). Excellent reliability was found for peak cardiopulmonary outcome measures (ICC ≥ 0.890, SEM ≤ 0.60%, MDC ≤ 1.67%). Repeatability for the primary outcomes was good (CoV ≤ 0.12). RASC-based CPET with feedback-guided exercise intensity demonstrated comparable or higher peak cardiopulmonary performance variables relative to predicted values, achieved the criteria for V'O2max, and allowed determination of sub-maximal ventilatory thresholds. The reliability and repeatability were found to be high. There is potential for augmented RASC to be used for exercise testing and prescription in populations with neurological impairments who would benefit from repetitive task-specific training.
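
    The reliability indices reported above are conventionally related by SEM = SD x sqrt(1 - ICC) and MDC95 = 1.96 x sqrt(2) x SEM. A tiny sketch with illustrative numbers follows; the study's own values are not reproduced.

        # Standard error of measurement (SEM) and minimal detectable change (MDC95).
        from math import sqrt

        def sem(sd_between_subjects, icc):
            return sd_between_subjects * sqrt(1 - icc)

        def mdc95(sem_value):
            return 1.96 * sqrt(2) * sem_value

        # Illustrative values for a peak-VO2 measure (ml/kg/min).
        sd, icc = 6.0, 0.95
        s = sem(sd, icc)
        print(f"SEM = {s:.2f}, MDC95 = {mdc95(s):.2f} ml/kg/min")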

  4. Robot-Assisted End-Effector-Based Stair Climbing for Cardiopulmonary Exercise Testing: Feasibility, Reliability, and Repeatability

    PubMed Central

    Stoller, Oliver; Schindelholz, Matthias; Hunt, Kenneth J.

    2016-01-01

    Background Neurological impairments can limit the implementation of conventional cardiopulmonary exercise testing (CPET) and cardiovascular training strategies. A promising approach to provoke cardiovascular stress while facilitating task-specific exercise in people with disabilities is feedback-controlled robot-assisted end-effector-based stair climbing (RASC). The aim of this study was to evaluate the feasibility, reliability, and repeatability of augmented RASC-based CPET in able-bodied subjects, with a view towards future research and applications in neurologically impaired populations. Methods Twenty able-bodied subjects performed a familiarisation session and 2 consecutive incremental CPETs using augmented RASC. Outcome measures focussed on standard cardiopulmonary performance parameters and on accuracy of work rate tracking (RMSEP−root mean square error). Criteria for feasibility were cardiopulmonary responsiveness and technical implementation. Relative and absolute test-retest reliability were assessed by intraclass correlation coefficients (ICC), standard error of the measurement (SEM), and minimal detectable change (MDC). Mean differences, limits of agreement, and coefficients of variation (CoV) were estimated to assess repeatability. Results All criteria for feasibility were achieved. Mean V′O2peak was 106±9% of predicted V′O2max and mean HRpeak was 99±3% of predicted HRmax. 95% of the subjects achieved at least 1 criterion for V′O2max, and the detection of the sub-maximal ventilatory thresholds was successful (ventilatory anaerobic threshold 100%, respiratory compensation point 90% of the subjects). Excellent reliability was found for peak cardiopulmonary outcome measures (ICC ≥ 0.890, SEM ≤ 0.60%, MDC ≤ 1.67%). Repeatability for the primary outcomes was good (CoV ≤ 0.12). Conclusions RASC-based CPET with feedback-guided exercise intensity demonstrated comparable or higher peak cardiopulmonary performance variables relative to predicted values, achieved the criteria for V′O2max, and allowed determination of sub-maximal ventilatory thresholds. The reliability and repeatability were found to be high. There is potential for augmented RASC to be used for exercise testing and prescription in populations with neurological impairments who would benefit from repetitive task-specific training. PMID:26849137

  5. A pilot study to assess feasibility of value based pricing in Cyprus through pharmacoeconomic modelling and assessment of its operational framework: sorafenib for second line renal cell cancer

    PubMed Central

    2014-01-01

    Background The continuing increase in pharmaceutical expenditure calls for new approaches to the pricing and reimbursement of pharmaceuticals. Value-based pricing of pharmaceuticals is emerging as a useful tool and possesses theoretical attributes that can help health systems cope with rising pharmaceutical expenditure. Aim To assess the feasibility of introducing a value-based pricing scheme for pharmaceuticals in Cyprus and explore the integrative framework. Methods A probabilistic Markov chain Monte Carlo model was created to simulate progression of advanced renal cell cancer for comparison of sorafenib to standard best supportive care. A literature review was performed and efficacy data were transferred from a published landmark trial, while official pricelists and clinical guidelines from the Cyprus Ministry of Health were utilised for cost calculation. Based on a proposed willingness-to-pay threshold, the maximum price of sorafenib for the indication of second-line renal cell cancer was assessed. Results The value-based price of sorafenib was found to be significantly lower than its current reference price. Conclusion The feasibility of value-based pricing is documented and pharmacoeconomic modelling can lead to robust results. Integration of value and affordability into the price is the scheme's main advantage, which has to be weighed against the lack of documentation for several theoretical parameters that influence the outcome. Smaller countries such as Cyprus may experience difficulties in establishing and sustaining the essential structures for this scheme. PMID:24910539

  6. Identifying insects with incomplete DNA barcode libraries, African fruit flies (Diptera: Tephritidae) as a test case.

    PubMed

    Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc

    2012-01-01

    We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. According to the expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods.
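
    The ad hoc threshold calculation described above can be sketched as a simple linear regression of observed relative identification error against candidate distance thresholds, solved for the threshold expected to give a 5% error. The calibration points below are invented.

        # Fit error = a * threshold + b and solve for the threshold at the target error.
        import numpy as np

        def ad_hoc_threshold(thresholds, relative_errors, target_error=0.05):
            a, b = np.polyfit(thresholds, relative_errors, deg=1)
            return (target_error - b) / a

        # Hypothetical calibration: larger distance thresholds admit more errors.
        candidate_thresholds = np.array([0.005, 0.010, 0.015, 0.020, 0.025])
        observed_errors      = np.array([0.010, 0.025, 0.040, 0.055, 0.070])
        print("ad hoc threshold for ~5% error:",
              round(ad_hoc_threshold(candidate_thresholds, observed_errors), 4))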

  7. Identifying Insects with Incomplete DNA Barcode Libraries, African Fruit Flies (Diptera: Tephritidae) as a Test Case

    PubMed Central

    Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C.; Backeljau, Thierry; De Meyer, Marc

    2012-01-01

    We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. According to the expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods. PMID:22359600

  8. Evaluation of different radon guideline values based on characterization of ecological risk and visualization of lung cancer mortality trends in British Columbia, Canada.

    PubMed

    Branion-Calles, Michael C; Nelson, Trisalyn A; Henderson, Sarah B

    2015-11-19

    There is no safe concentration of radon gas, but guideline values provide threshold concentrations that are used to map areas at higher risk. These values vary between regions, countries, and organizations, which can lead to differential classification of risk. For example, the World Health Organization suggests a value of 100 Bq m⁻³, while Health Canada recommends 200 Bq m⁻³. Our objective was to describe how different thresholds characterized ecological radon risk and their visual association with lung cancer mortality trends in British Columbia, Canada. Eight threshold values between 50 and 600 Bq m⁻³ were identified, and classes of radon vulnerability were defined based on whether the observed 95th percentile radon concentration was above or below each value. A balanced random forest algorithm was used to model vulnerability, and the results were mapped. We compared high vulnerability areas, their estimated populations, and differences in lung cancer mortality trends stratified by smoking prevalence and sex. Classification accuracy improved as the threshold concentrations decreased and the area classified as high vulnerability increased. The majority of the population lived within areas of lower vulnerability regardless of the threshold value. Thresholds as low as 50 Bq m⁻³ were associated with higher lung cancer mortality, even in areas with low smoking prevalence. Temporal trends in lung cancer mortality were increasing for women, while decreasing for men. Radon contributes to lung cancer in British Columbia. The results of the study contribute evidence supporting the use of a reference level lower than the current guideline of 200 Bq m⁻³ for the province.
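
    A rough sketch of the vulnerability-mapping step: classify areas as vulnerable when the 95th-percentile radon concentration exceeds a candidate guideline value, then fit a class-balanced random forest. Here sklearn's class_weight='balanced_subsample' option stands in for the balanced random forest algorithm named above, and the covariates and radon values are simulated placeholders.

        # Radon vulnerability classification at several candidate guideline values.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(3)
        n = 2000
        X = rng.normal(size=(n, 4))                 # hypothetical geology/terrain covariates
        # Vulnerable = 95th-percentile indoor radon exceeds the guideline value.
        p95_radon = np.exp(4 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n))

        def fit_vulnerability_model(guideline_bq_m3):
            y = (p95_radon > guideline_bq_m3).astype(int)
            clf = RandomForestClassifier(n_estimators=300,
                                         class_weight="balanced_subsample",
                                         random_state=0)
            clf.fit(X, y)
            return y.mean(), clf.score(X, y)        # prevalence, in-sample accuracy

        for guideline in (50, 100, 200, 600):
            prevalence, acc = fit_vulnerability_model(guideline)
            print(f"{guideline} Bq/m3: {prevalence:.1%} areas vulnerable, "
                  f"in-sample accuracy {acc:.2f}")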

  9. Fuzzy Matching Based on Gray-scale Difference for Quantum Images

    NASA Astrophysics Data System (ADS)

    Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia

    2018-05-01

    Quantum image processing has recently emerged as an important topic for practical tasks such as real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find, in a reference image, the target region that closely resembles the template image. First, we employ the novel enhanced quantum representation (NEQR) to store digital images. Certain quantum operations are then used to evaluate the gray-scale difference between two quantum images by thresholding. If none of the obtained gray-scale differences exceeds the threshold value, the fuzzy matching of the quantum images is deemed successful. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at low cost and also enables a significant speedup via quantum parallel computation.
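
    A classical (non-quantum) sketch of the matching criterion only: slide the template over the reference image and declare a fuzzy match wherever every per-pixel gray-scale difference is at most the threshold. The NEQR encoding and the quantum parallelism that give the scheme its speedup are not modelled here.

        # Brute-force classical analogue of gray-scale-difference fuzzy matching.
        import numpy as np

        def fuzzy_match(reference, template, threshold):
            """Return (row, col) positions where the template matches within threshold."""
            H, W = reference.shape
            h, w = template.shape
            tpl = template.astype(int)
            hits = []
            for r in range(H - h + 1):
                for c in range(W - w + 1):
                    window = reference[r:r + h, c:c + w].astype(int)
                    if np.all(np.abs(window - tpl) <= threshold):
                        hits.append((r, c))
            return hits

        # Toy example: embed a noisy copy of the template in a random image.
        rng = np.random.default_rng(4)
        ref = rng.integers(0, 256, size=(64, 64))
        tpl = rng.integers(0, 256, size=(8, 8))
        ref[20:28, 30:38] = np.clip(tpl + rng.integers(-3, 4, size=(8, 8)), 0, 255)
        print(fuzzy_match(ref, tpl, threshold=5))   # expected to include (20, 30)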

  10. AmpliVar: mutation detection in high-throughput sequence from amplicon-based libraries.

    PubMed

    Hsu, Arthur L; Kondrashova, Olga; Lunke, Sebastian; Love, Clare J; Meldrum, Cliff; Marquis-Nicholson, Renate; Corboy, Greg; Pham, Kym; Wakefield, Matthew; Waring, Paul M; Taylor, Graham R

    2015-04-01

    Conventional means of identifying variants in high-throughput sequencing align each read against a reference sequence, and then call variants at each position. Here, we demonstrate an orthogonal means of identifying sequence variation by grouping the reads as amplicons prior to any alignment. We used AmpliVar to make key-value hashes of sequence reads and group reads as individual amplicons using a table of flanking sequences. Low-abundance reads were removed according to a selectable threshold, and reads above this threshold were aligned as groups, rather than as individual reads, permitting the use of sensitive alignment tools. We show that this approach is more sensitive, more specific, and more computationally efficient than comparable methods for the analysis of amplicon-based high-throughput sequencing data. The method can be extended to enable alignment-free confirmation of variants seen in hybridization capture target-enrichment data. © 2015 WILEY PERIODICALS, INC.
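
    A minimal sketch of the alignment-free grouping step described above: reads are assigned to amplicons by their flanking sequences, insert sequences are counted in a hash, and low-abundance groups are discarded before any alignment. The flank table, read handling and min_count value are illustrative inventions, not AmpliVar's actual implementation.

        # Group reads as amplicons by flanking sequences, then drop low-abundance inserts.
        from collections import Counter

        FLANKS = {  # amplicon name -> (left flank, right flank); illustrative only
            "AMP1": ("ACGTTG", "TTGCAA"),
            "AMP2": ("GGATCC", "CCTAGG"),
        }

        def group_reads(reads, flanks, min_count=10):
            """Return {amplicon: Counter({insert_sequence: count})}, keeping only
            insert sequences seen at least min_count times."""
            groups = {name: Counter() for name in flanks}
            for read in reads:
                for name, (left, right) in flanks.items():
                    i, j = read.find(left), read.rfind(right)
                    if i != -1 and j != -1 and i + len(left) < j:
                        groups[name][read[i + len(left):j]] += 1
                        break
            return {name: Counter({seq: c for seq, c in counts.items() if c >= min_count})
                    for name, counts in groups.items()}

        # Abundant insert sequences can then be aligned as groups rather than as
        # individual reads, permitting the use of more sensitive alignment tools.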

  11. Cost Savings Threshold Analysis of a Capacity-Building Program for HIV Prevention Organizations

    ERIC Educational Resources Information Center

    Dauner, Kim Nichols; Oglesby, Willie H.; Richter, Donna L.; LaRose, Christopher M.; Holtgrave, David R.

    2008-01-01

    Although the incidence of HIV each year remains steady, prevention funding is increasingly competitive. Programs need to justify costs in terms of evaluation outcomes, including economic ones. Threshold analyses set performance standards to determine program effectiveness relative to that threshold. This method was used to evaluate the potential…

  12. Threshold Concepts in Higher Education: A Synthesis of the Literature Relating to Measurement of Threshold Crossing

    ERIC Educational Resources Information Center

    Nicola-Richmond, Kelli; Pépin, Geneviève; Larkin, Helen; Taylor, Charlotte

    2018-01-01

    In relation to teaching and learning approaches that improve student learning outcomes, threshold concepts have generated substantial interest in higher education. They have been described as "portals" that lead to a transformed way of understanding or thinking, enabling learners to progress, and have been enthusiastically adopted to…

  13. Threshold-based insulin-pump interruption for reduction of hypoglycemia.

    PubMed

    Bergenstal, Richard M; Klonoff, David C; Garg, Satish K; Bode, Bruce W; Meredith, Melissa; Slover, Robert H; Ahmann, Andrew J; Welsh, John B; Lee, Scott W; Kaufman, Francine R

    2013-07-18

    The threshold-suspend feature of sensor-augmented insulin pumps is designed to minimize the risk of hypoglycemia by interrupting insulin delivery at a preset sensor glucose value. We evaluated sensor-augmented insulin-pump therapy with and without the threshold-suspend feature in patients with nocturnal hypoglycemia. We randomly assigned patients with type 1 diabetes and documented nocturnal hypoglycemia to receive sensor-augmented insulin-pump therapy with or without the threshold-suspend feature for 3 months. The primary safety outcome was the change in the glycated hemoglobin level. The primary efficacy outcome was the area under the curve (AUC) for nocturnal hypoglycemic events. Two-hour threshold-suspend events were analyzed with respect to subsequent sensor glucose values. A total of 247 patients were randomly assigned to receive sensor-augmented insulin-pump therapy with the threshold-suspend feature (threshold-suspend group, 121 patients) or standard sensor-augmented insulin-pump therapy (control group, 126 patients). The changes in glycated hemoglobin values were similar in the two groups. The mean AUC for nocturnal hypoglycemic events was 37.5% lower in the threshold-suspend group than in the control group (980 ± 1200 mg per deciliter [54.4 ± 66.6 mmol per liter] × minutes vs. 1568 ± 1995 mg per deciliter [87.0 ± 110.7 mmol per liter] × minutes, P<0.001). Nocturnal hypoglycemic events occurred 31.8% less frequently in the threshold-suspend group than in the control group (1.5 ± 1.0 vs. 2.2 ± 1.3 per patient-week, P<0.001). The percentages of nocturnal sensor glucose values of less than 50 mg per deciliter (2.8 mmol per liter), 50 to less than 60 mg per deciliter (3.3 mmol per liter), and 60 to less than 70 mg per deciliter (3.9 mmol per liter) were significantly reduced in the threshold-suspend group (P<0.001 for each range). After 1438 instances at night in which the pump was stopped for 2 hours, the mean sensor glucose value was 92.6 ± 40.7 mg per deciliter (5.1 ± 2.3 mmol per liter). Four patients (all in the control group) had a severe hypoglycemic event; no patients had diabetic ketoacidosis. This study showed that over a 3-month period the use of sensor-augmented insulin-pump therapy with the threshold-suspend feature reduced nocturnal hypoglycemia, without increasing glycated hemoglobin values. (Funded by Medtronic MiniMed; ASPIRE ClinicalTrials.gov number, NCT01497938.).

  14. Outcomes of Late Implantation in Usher Syndrome Patients.

    PubMed

    Hoshino, Ana Cristina H; Echegoyen, Agustina; Goffi-Gomez, Maria Valéria Schmidt; Tsuji, Robinson Koji; Bento, Ricardo Ferreira

    2017-04-01

    Introduction  Usher syndrome (US) is an autosomal recessive disorder characterized by hearing loss and progressive visual impairment. Some deaf Usher syndrome patients learn to communicate using sign language. During adolescence, as they start losing vision, they are usually referred for cochlear implantation as a salvage option for their new condition. Is late implantation beneficial to these children? Objective  The objective of this study is to describe the outcomes of US patients who received cochlear implants at a later age. Methods  This is a retrospective study of ten patients diagnosed with US1. We collected pure-tone thresholds and speech perception test results preoperatively and one year post-implant. Results  Average age at implantation was 18.9 years (5-49). Average aided thresholds were 103 dB HL preoperatively and 35 dB HL one year post-implant. Speech perception could be measured preoperatively in only four patients, who scored 13.3, 26.67, and 46% (vowels) and 56% (4-choice). All patients except one had some form of communication. Two were bilingual. After one year of using the device, seven patients were able to perform the speech tests (from four-choice to closed-set sentences) and three patients abandoned use of the implant. Conclusion  We observed that detection of sounds can be achieved with late implantation, but speech recognition is only possible in patients with previous hearing stimulation, since it depends on the development of hearing skills and the maturation of the auditory pathways.

  15. Effect of peripheral arterial disease on the onset of lactate threshold during cardiopulmonary exercise test: study protocol.

    PubMed

    Key, Angela; Ali, Tamara; Walker, Paul; Duffy, Nick; Barkat, Mo; Snellgrove, Jayne; Torella, Francesco

    2016-12-19

    Cardiopulmonary exercise testing (CPET) is widely used in preoperative assessment and cardiopulmonary rehabilitation. The effect of peripheral arterial disease (PAD) on oxygen delivery (VO2) measured by CPET is not known. The aim of this study was to investigate the effect of PAD on VO2 measurements during CPET. We designed a prospective cohort study, which will recruit 30 patients with PAD, who will undergo CPET before and after treatment of iliofemoral occlusive arterial disease. The main outcome measure is the difference in VO2 at the lactate threshold (LT) between the two CPETs. The secondary outcome measure is the relationship between the change in VO2 at the LT and at peak exercise pretreatment and post-treatment and haemodynamic measures of PAD improvement (ankle-brachial index differential). For VO2 changes, only simple paired bivariate comparisons, not multivariate analyses, are planned, due to the small sample size. The correlation between ABI and VO2 rise will be tested by linear regression. The study was approved by the North West-Lancaster Research and Ethics committee (reference 15/NW/0801). Results will be disseminated through scientific journals and conference presentations. Completion of recruitment is expected by the end of 2016, and submission for publication by March 2017. NCT02657278. Published by the BMJ Publishing Group Limited.

  16. Rapid resolution of brain ischemic hypoxia after cerebral revascularization in moyamoya disease.

    PubMed

    Arikan, Fuat; Vilalta, Jordi; Torne, Ramon; Noguer, Montserrat; Lorenzo-Bosquet, Carles; Sahuquillo, Juan

    2015-03-01

    In moyamoya disease (MMD), cerebral revascularization is recommended in patients with recurrent or progressive ischemic events and associated reduced cerebral perfusion reserve. Low-flow bypass with or without indirect revascularization is generally the standard surgical treatment. Intraoperative monitoring of cerebral partial pressure of oxygen (PtiO2) with polarographic Clark-type probes in cerebral artery bypass surgery for MMD-induced chronic cerebral ischemia has not yet been described. The aim was to describe basal brain tissue oxygenation in MMD patients before revascularization, as well as the immediate changes produced by the surgical procedure, using intraoperative PtiO2 monitoring. Between October 2011 and January 2013, all patients with a diagnosis of MMD were intraoperatively monitored. Cerebral oxygenation status was analyzed based on the PtiO2/PaO2 ratio. Reference thresholds of PtiO2/PaO2 had been previously defined as below 0.1 for the lower reference threshold (hypoxia) and above 0.35 for the upper reference threshold (hyperoxia). Before STA-MCA bypass, all patients presented with severe tissue hypoxia, confirmed by a PtiO2/PaO2 ratio <0.1. After bypass, all patients showed a rapid and sustained increase in PtiO2, which reached normal values (PtiO2/PaO2 ratio between 0.1 and 0.35). One patient showed an initial PtiO2 improvement followed by a decrease due to bypass occlusion. After repeat anastomosis, the patient's PtiO2 increased again and stabilized. Direct anastomosis quickly improves cerebral oxygenation, immediately reducing the risk of ischemic stroke in both pediatric and adult patients. Intraoperative PtiO2 monitoring is a very reliable tool to verify the effectiveness of this revascularization procedure.
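
    The oxygenation classification described here reduces to a ratio test against the two quoted reference thresholds. A minimal sketch, with purely illustrative pressure values:

```python
def classify_ptio2_ratio(ptio2_mmHg, pao2_mmHg, lower=0.1, upper=0.35):
    """Classify cerebral oxygenation from the PtiO2/PaO2 ratio using the
    reference thresholds quoted above (<0.1 hypoxia, >0.35 hyperoxia)."""
    ratio = ptio2_mmHg / pao2_mmHg
    if ratio < lower:
        return ratio, "tissue hypoxia"
    if ratio > upper:
        return ratio, "hyperoxia"
    return ratio, "normal range"

print(classify_ptio2_ratio(12.0, 150.0))   # pre-bypass-like values -> hypoxia
print(classify_ptio2_ratio(30.0, 150.0))   # post-bypass-like values -> normal range
```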

  17. Contribution of myofascial trigger points to migraine symptoms.

    PubMed

    Giamberardino, Maria Adele; Tafuri, Emmanuele; Savini, Antonella; Fabrizio, Alessandra; Affaitati, Giannapia; Lerza, Rosanna; Di Ianni, Livio; Lapenna, Domenico; Mezzetti, Andrea

    2007-11-01

    This study evaluated the contribution of myofascial trigger points (TrPs) to migraine pain. Seventy-eight migraine patients with cervical active TrPs whose referred areas (RAs) coincided with migraine sites (frontal/temporal) underwent electrical pain threshold measurement in skin, subcutis, and muscle in TrPs and RAs at baseline and after 3, 10, 30, and 60 days; migraine pain assessment (number and intensity of attacks) for 60 days before and 60 days after study start. Fifty-four patients (group 1) underwent TrP anesthetic infiltration on the 3rd, 10th, 30th, and 60th day (after threshold measurement); 24 (group 2) received no treatment. Twenty normal subjects underwent threshold measurements in the same sites and time points as patients. At baseline, all patients showed lower than normal thresholds in TrPs and RAs in all tissues (P < .001). During treatment in group 1, all thresholds increased progressively in TrPs and RAs (P < .0001), with sensory normalization of skin/subcutis in RAs at the end of treatment; migraine pain decreased (P < .001). Threshold increase in RAs and migraine reduction correlated linearly (.0001 < P < .006). In group 2 and normal subjects, no changes occurred. Cervical TrPs with referred areas in migraine sites thus contribute substantially to migraine symptoms, the peripheral nociceptive input from TrPs probably enhancing the sensitization level of central sensory neurons. This article shows the beneficial effects of local therapy of active myofascial trigger points (TrPs) on migraine symptoms in patients in whom migraine sites coincide with the referred areas of the TrPs. These results suggest that migraine pain is often contributed to by myofascial inputs that enhance the level of central neuronal excitability.

  18. Clinical laboratory: bigger is not always better.

    PubMed

    Plebani, Mario

    2018-06-27

    Laboratory services around the world are undergoing substantial consolidation and change through mechanisms ranging from mergers and acquisitions to outsourcing, primarily based on expectations of improving efficiency by increasing volumes and reducing the cost per test. However, the relationship between volume and costs is not linear, and numerous variables influence the end cost per test. In particular, the relationship between volumes and costs does not hold across the entire spectrum of clinical laboratories: high costs are associated with low volumes up to a threshold of 1 million tests per year. Above this threshold, there is no linear association between volumes and costs, as laboratory organization rather than test volume more significantly affects the final costs. Currently, data on laboratory errors and associated diagnostic errors and risk for patient harm emphasize the need for a paradigmatic shift: from a focus on volumes and efficiency to a patient-centered vision restoring the nature of laboratory services as an integral part of the diagnostic and therapeutic process. Process and outcome quality indicators are effective tools to measure and improve laboratory services, by stimulating competition based on intra- and extra-analytical performance specifications, intermediate outcomes and customer satisfaction. Rather than competing on economic value alone, clinical laboratories should adopt a strategy based on a set of harmonized quality indicators and performance specifications, active laboratory stewardship, and improved patient safety.

  19. Pack-Year Cigarette Smoking History for Determination of Lung Cancer Screening Eligibility. Comparison of the Electronic Medical Record versus a Shared Decision-making Conversation.

    PubMed

    Modin, Hannah E; Fathi, Joelle T; Gilbert, Christopher R; Wilshire, Candice L; Wilson, Andrew K; Aye, Ralph W; Farivar, Alexander S; Louie, Brian E; Vallières, Eric; Gorden, Jed A

    2017-08-01

    Implementation of lung cancer screening programs is occurring across the United States. Programs vary in approaches to patient identification and shared decision-making. The eligibility of persons referred to screening programs, the outcomes of eligibility determination during shared decision-making, and the potential for the electronic medical record (EMR) to identify eligible individuals have not been well described. Our objectives were to assess the eligibility of individuals referred for lung cancer screening and compare information extracted from the EMR to information derived from a shared decision-making conversation for the determination of eligibility for lung cancer screening. We performed a retrospective analysis of individuals referred to a centralized lung cancer screening program serving a five-hospital health services system in Seattle, Washington between October 2014 and January 2016. Demographics, referral, and outcomes data were collected. A pack-year smoking history derived from the EMR was compared with the pack-year history obtained during a shared decision-making conversation performed by a licensed nurse professional representing the lung cancer screening program. A total of 423 individuals were referred to the program, of whom 59.6% (252 of 423) were eligible. Of those, 88.9% (224 of 252) elected screening. There was 96.2% (230 of 239) discordance in pack-year smoking history between the EMR and the shared decision-making conversation. The EMR underreported pack-years of smoking for 85.2% (196 of 230) of the participants, with a median difference of 29.2 pack-years. If identification of eligible individuals relied solely on the accuracy of the pack-year smoking history recorded in the EMR, 53.6% (128 of 239) would have failed to meet the 30-pack-year threshold for screening. Many individuals referred for lung cancer screening may be ineligible. Overreliance on the EMR for identification of individuals at risk may lead to missed opportunities for appropriate lung cancer screening.
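
    The eligibility arithmetic behind the 30-pack-year criterion is the standard pack-year product (average packs per day multiplied by years smoked). A minimal sketch with hypothetical smoking histories, showing how an underreported history can place the same patient below the threshold:

```python
def pack_years(packs_per_day, years_smoked):
    """Pack-years = average packs smoked per day x years smoked."""
    return packs_per_day * years_smoked

def meets_pack_year_criterion(packs_per_day, years_smoked, threshold=30):
    """Check the 30-pack-year screening threshold discussed above."""
    return pack_years(packs_per_day, years_smoked) >= threshold

# Hypothetical patient: EMR-style underreport vs. conversation-derived history.
print(meets_pack_year_criterion(0.5, 40))   # 20 pack-years -> ineligible
print(meets_pack_year_criterion(1.5, 40))   # 60 pack-years -> eligible
```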

  20. A Comparative Study of the Applied Methods for Estimating Deflection of the Vertical in Terrestrial Geodetic Measurements

    PubMed Central

    Vittuari, Luca; Tini, Maria Alessandra; Sarti, Pierguido; Serantoni, Eugenio; Borghi, Alessandra; Negusini, Monia; Guillaume, Sébastien

    2016-01-01

    This paper compares three different methods capable of estimating the deflection of the vertical (DoV): one is based on the joint use of high-precision spirit leveling and Global Navigation Satellite Systems (GNSS), a second uses astro-geodetic measurements, and the third uses gravimetric geoid models. The working data sets refer to the geodetic International Terrestrial Reference Frame (ITRF) co-location sites of Medicina (Northern Italy) and Noto (Sicily), both of which are excellent test beds for our investigations. The measurements were planned and realized to estimate the DoV with a level of precision comparable to the angular accuracy achievable in high-precision networks measured by modern high-end total stations. The three methods are in excellent agreement, with the astro-geodetic method being operationally superior, as it is faster and more precise than the others. The method that combines leveling and GNSS has slightly larger standard deviations, although these remain well within the 1 arcsec level that was assumed as the threshold. Finally, the geoid-model-based method, whose 2.5 arcsec standard deviations exceed this threshold, is also statistically consistent with the others and should be used to determine the DoV components where local ad hoc measurements are lacking. PMID:27104544

  1. Reactive power and voltage control strategy based on dynamic and adaptive segment for DG inverter

    NASA Astrophysics Data System (ADS)

    Zhai, Jianwei; Lin, Xiaoming; Zhang, Yongjun

    2018-03-01

    The inverter of a distributed generation (DG) unit can supply reactive power to help solve the problem of out-of-limit voltage in an active distribution network (ADN). This paper therefore puts forward a reactive power and voltage control strategy for DG inverters based on dynamic, adaptive segmentation. The proposed strategy dynamically and adaptively adjusts the segmented voltage thresholds of the Q(U) droop curve according to the voltage at the grid-connected point and the power-flow direction of the adjacent downstream line. The reactive power reference of the DG inverter is then obtained from the modified Q(U) control strategy, and the inverter's reactive power is controlled to track this reference value. The proposed control strategy not only regulates the local voltage at the grid-connected point but also helps keep voltage within the qualified range, considering the terminal voltage of the distribution feeder and the reactive support for adjacent downstream DG. The scheme using the proposed strategy is compared with a scheme without reactive support from the DG inverter and with a Q(U) control strategy using constant segmented voltage thresholds. The simulation results suggest that the proposed method significantly mitigates out-of-limit voltage, restrains voltage variation, and improves voltage quality.
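
    The abstract does not give the exact adjustment law, but the underlying Q(U) droop with segmented voltage thresholds can be sketched generically. In the sketch below, the per-unit threshold values are illustrative, and the "adapted" set merely stands in for thresholds tightened after, say, reverse power flow is detected on the adjacent downstream line.

```python
def q_reference(u_pu, u1, u2, u3, u4, q_max):
    """Piecewise-linear Q(U) droop: full injection below u1, a dead band in
    [u2, u3], full absorption above u4, and linear ramps in between. The
    segment voltages u1..u4 are the thresholds a dynamic, adaptive strategy
    would adjust."""
    if u_pu <= u1:
        return q_max
    if u_pu < u2:
        return q_max * (u2 - u_pu) / (u2 - u1)
    if u_pu <= u3:
        return 0.0
    if u_pu < u4:
        return -q_max * (u_pu - u3) / (u4 - u3)
    return -q_max

static = (0.95, 0.98, 1.02, 1.05)    # constant segmented thresholds (p.u.)
adapted = (0.95, 0.97, 1.01, 1.04)   # illustrative dynamically tightened thresholds
for u in (0.96, 1.015, 1.03):
    print(u, q_reference(u, *static, q_max=0.3), q_reference(u, *adapted, q_max=0.3))
```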

  2. On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?

    PubMed

    Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro

    2016-01-01

    Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. GREAT: a gradient-based color-sampling scheme for Retinex.

    PubMed

    Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo

    2017-04-01

    Modeling the local color spatial distribution is a crucial step for the algorithms of the Milano Retinex family. Here we present GREAT, a novel, noise-free Milano Retinex implementation based on an image-aware spatial color sampling. For each channel of a color input image, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the channel intensity of each image pixel, called target, by the average of the intensities of the selected edges weighted by a function of their positions, gradient magnitudes, and intensities relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as pixels relevant to color filtering is justified by the importance that edges play in human color sensation. The name GREAT comes from the expression "Gradient RElevAnce for ReTinex," which refers to the threshold-based definition of a gradient relevance map for edge selection and thus for image color filtering.
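
    A heavily simplified sketch of the idea, not the published GREAT implementation: edges are selected by a gradient-magnitude threshold, and each pixel is rescaled by a weighted average of the selected edge intensities. The inverse-distance weighting below is an assumption standing in for GREAT's actual function of positions, gradient magnitudes, and relative intensities.

```python
import numpy as np

def great_like_rescale(channel, grad_threshold=20.0, eps=1e-6):
    """Toy single-channel rescaling: threshold the gradient magnitude to pick
    'relevant' edge pixels, then divide each target pixel by a distance-weighted
    average of the selected edge intensities (Retinex-style normalisation)."""
    gy, gx = np.gradient(channel.astype(float))
    grad_mag = np.hypot(gx, gy)
    ey, ex = np.nonzero(grad_mag > grad_threshold)   # selected edges
    edge_int = channel[ey, ex].astype(float)
    h, w = channel.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            d2 = (ey - y) ** 2 + (ex - x) ** 2
            weights = 1.0 / (1.0 + d2)               # nearer edges weigh more
            ref = np.sum(weights * edge_int) / (np.sum(weights) + eps)
            out[y, x] = channel[y, x] / (ref + eps)
    return np.clip(out, 0.0, 1.0)

img = np.random.randint(0, 256, size=(32, 32)).astype(float)
print(great_like_rescale(img).shape)
```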

  4. Prediction of pKa Values for Neutral and Basic Drugs based on Hybrid Artificial Intelligence Methods.

    PubMed

    Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin

    2018-03-05

    The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm was proposed based on population entropy as a diversity measure. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; when the population entropy was between the minimum and maximum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied to the training of a radial basis function artificial neural network (RBF ANN) model and to the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 neutral and basic drugs and was then validated on another database containing 20 molecules. The validation results showed that the model had good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
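
    The entropy-driven switching rule can be sketched independently of the full PSO update step. The binning, entropy thresholds, and strategy labels below are illustrative assumptions, not the paper's values.

```python
import math
import random

def swarm_entropy(positions, bins=10, lo=-5.0, hi=5.0):
    """Shannon entropy of the swarm's distribution over position bins,
    used as the population diversity measure."""
    counts = [0] * bins
    for p in positions:
        idx = min(bins - 1, max(0, int((p - lo) / (hi - lo) * bins)))
        counts[idx] += 1
    n = len(positions)
    return -sum((c / n) * math.log(c / n) for c in counts if c)

def choose_strategy(entropy, h_min=0.8, h_max=1.8):
    """Switching rule described in the abstract: converge when diversity is
    above the maximum threshold, diverge when it is below the minimum,
    otherwise keep the self-adaptive adjustment."""
    if entropy > h_max:
        return "convergence"
    if entropy < h_min:
        return "divergence"
    return "self-adaptive"

positions = [random.uniform(-5, 5) for _ in range(30)]
h = swarm_entropy(positions)
print(round(h, 3), choose_strategy(h))
```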

  5. A comparison of modality-specific somatosensory changes during menstruation in dysmenorrheic and nondysmenorrheic women.

    PubMed

    Bajaj, Priti; Bajaj, Prem; Madsen, Hans; Arendt-Nielsen, Lars

    2002-01-01

    The objective was to evaluate somatosensory thresholds to a multimodality stimulation regimen applied both within and outside areas of referred menstrual pain in dysmenorrheic women, over four phases of confirmed ovulatory cycles, and to compare them with thresholds in nondysmenorrheic women during menstruation. Twenty dysmenorrheic women with menstrual pain scoring 5.45 +/- 0.39 cm (mean +/- standard error of mean) on a visual analog scale (10 cm) participated. Fifteen nondysmenorrheic women with a menstrual pain score of 0.4 +/- 0.2 cm participated as controls. Ovulation was confirmed by an enzyme-multiplied immunoassay technique. Menstrual pain was described with the McGill Pain Questionnaire. Areas within menstrual pain referral were two abdominal sites and the midline of the low back, and the arm and thigh were the control areas. The pressure pain threshold (PPT) and pinch pain threshold were determined by a hand-held electronic pressure algometer, the heat pain threshold (HPT) by a contact thermode, and the tactile threshold with von Frey hairs. In dysmenorrheic women the McGill Pain Questionnaire showed a larger sensory and affective component of pain than the evaluative and miscellaneous groups. The HPT and PPT were lower in the menstrual phase than in the ovulatory, luteal, and premenstrual phases, both within and outside areas of referred menstrual pain (p <0.01), with a more pronounced decrease at the referral pain areas. The pinch pain threshold was lower in the menstrual phase than in the ovulatory phase (p <0.02), and the tactile threshold did not differ significantly across the menstrual phases or within any site. Dysmenorrheic women had a lower HPT at the control sites and a lower PPT at the abdomen, back, and control sites, than in those of nondysmenorrheic women in the menstrual phase. The results show reduced somatosensory pain thresholds during menstruation to heat and pressure stimulation, both within and outside areas of referred menstrual pain in dysmenorrheic women. Dysmenorrheic women showed a lower HPT at the control sites and a lower PPT at all the sites than those for nondysmenorrheic women in the menstrual phase. The altered somatosensory thresholds may be dependent on a spinal mechanism of central hyperexcitability, induced by recurrent moderate to severe menstrual pain.

  6. Reconstruction of Sea State One

    DTIC Science & Technology

    1988-02-01

    this section only a general overview of the wave computer system will be offered. A more comprehensive treatment of this subject is available in Appendix...1) Sync Strip and Threshold Processing Card (2) Pulse Generation Logic Card (3) X Vector Logic Card (4) Y Vector Logic Card (5) Blanking Interval...output by this comparator when the threshold is crossed, which shall be referred to as threshold crossing (THC). (2) PULSE GENERATION LOGIC CARD Turning

  7. Determination of the Oswestry Disability Index score equivalent to a "satisfactory symptom state" in patients undergoing surgery for degenerative disorders of the lumbar spine-a Spine Tango registry-based study.

    PubMed

    van Hooff, Miranda L; Mannion, Anne F; Staub, Lukas P; Ostelo, Raymond W J G; Fairbank, Jeremy C T

    2016-10-01

    The achievement of a given change score on a valid outcome instrument is commonly used to indicate whether a clinically relevant change has occurred after spine surgery. However, the achievement of such a change score can be dependent on baseline values and does not necessarily indicate whether the patient is satisfied with the current state. The achievement of an absolute score equivalent to a patient acceptable symptom state (PASS) may be a more stringent measure to indicate treatment success. This study aimed to estimate the score on the Oswestry Disability Index (ODI, version 2.1a; 0-100) corresponding to a PASS in patients who had undergone surgery for degenerative disorders of the lumbar spine. This is a cross-sectional study of diagnostic accuracy using follow-up data from an international spine surgery registry. The sample includes 1,288 patients with degenerative lumbar spine disorders who had undergone elective spine surgery, registered in the EUROSPINE Spine Tango Spine Surgery Registry. The main outcome measure was the ODI (version 2.1a). Surgical data and data from the ODI and Core Outcome Measures Index (COMI) were included to determine the ODI threshold equivalent to PASS at 1 year (±1.5 months; n=780) and 2 years (±2 months; n=508) postoperatively. The symptom-specific well-being item of the COMI was used as the external criterion in the receiver operating characteristic (ROC) analysis to determine the ODI threshold equivalent to PASS. Separate sensitivity analyses were performed based on the different definitions of an "acceptable state" and for subgroups of patients. JF is a copyright holder of the ODI. The ODI threshold for PASS was 22, irrespective of the time of follow-up (area under the curve [AUC]: 0.89 [sensitivity {Se}: 78.3%, specificity {Sp}: 82.1%] and AUC: 0.91 [Se: 80.7%, Sp: 85.6%] for the 1- and 2-year follow-ups, respectively). Sensitivity analyses showed that the absolute ODI-22 threshold was robust at both follow-up time-points. A stricter definition of PASS resulted in lower ODI thresholds, varying from 16 (AUC=0.89; Se: 80.2%, Sp: 82.0%) to 18 (AUC=0.90; Se: 82.4%, Sp: 80.4%) depending on the time of follow-up. An ODI score ≤22 indicates the achievement of an acceptable symptom state and can hence be used as a criterion of treatment success alongside the commonly used change score measures. At the individual level, the threshold could be used to indicate whether or not a patient with a lumbar spine disorder is a "responder" after elective surgery. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Restrictive transfusion threshold is safe in high-risk patients undergoing brain tumor surgery.

    PubMed

    Alkhalid, Yasmine; Lagman, Carlito; Sheppard, John P; Nguyen, Thien; Prashant, Giyarpuram N; Ziman, Alyssa F; Yang, Isaac

    2017-12-01

    The aim was to assess the safety of a restrictive threshold for the transfusion of red blood cells (RBCs) compared to a liberal threshold in high-risk patients undergoing brain tumor surgery. We reviewed patients who were 50 years of age or older with a preoperative American Society of Anesthesiologists physical status class II to V who underwent open craniotomy for tumor resection and were transfused packed RBCs during or after surgery. We retrospectively assigned patients to a restrictive-threshold group (a pretransfusion hemoglobin level <8 g/dL) or a liberal-threshold group (a pretransfusion hemoglobin level of 8-10 g/dL). The primary outcome was the in-hospital mortality rate. Secondary outcomes were in-hospital complication rates, length of stay, and discharge disposition. Twenty-five patients were included in the study, of whom 17 were assigned to the restrictive-threshold group and 8 to the liberal-threshold group. The in-hospital mortality rates were 12% for the restrictive-threshold group (odds ratio [OR] 0.93, 95% confidence interval [CI] 0.07-12.11) and 13% for the liberal-threshold group. The in-hospital complication rates were 52.9% for the restrictive-threshold group (OR 1.13, 95% CI 0.21-6.05) and 50% for the liberal-threshold group. The average numbers of days in the intensive care unit and hospital were 8.6 and 22.4 days in the restrictive-threshold group and 6 and 15 days in the liberal-threshold group, respectively (P=0.69 and P=0.20). The rates of non-routine discharge were 71% in the restrictive-threshold group (OR 2.40, 95% CI 0.42-13.60) and 50% in the liberal-threshold group. A restrictive transfusion threshold did not significantly influence in-hospital mortality or complication rates, length of stay, or discharge disposition in patients at high operative risk. Copyright © 2017. Published by Elsevier B.V.

  9. Self-calibrating threshold detector

    NASA Technical Reports Server (NTRS)

    Barnes, J. R.; Huang, M. Y. (Inventor)

    1980-01-01

    A self-calibrating threshold detector comprises a single demodulating channel which includes a mixer having one input receiving the incoming signal and another input receiving a local replica code. During a short time interval, an incorrect local code is applied to the mixer to incorrectly demodulate the incoming signal and to provide a reference level that calibrates the noise propagating through the channel. A sample-and-hold circuit is coupled to the channel for storing a sample of the reference level. During a relatively long time interval, the correct replica code provides an output level which ranges between the reference level and a maximum level that represents incoming signal presence and synchronism with the replica code. A summer subtracts the stored sample reference from the output level to provide a resultant difference signal indicative of the acquisition of the expected signal.
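
    A toy numerical sketch of the calibration idea, with all signal parameters invented for illustration: correlating against an incorrect code gives a noise reference, which is stored and later subtracted from the output obtained with the correct replica code before the threshold comparison.

```python
import random

def correlate(signal, code):
    """Toy correlation of a received signal with a +/-1 replica code."""
    return sum(s * c for s, c in zip(signal, code)) / len(code)

code = [random.choice((-1, 1)) for _ in range(256)]        # correct replica code
wrong_code = [random.choice((-1, 1)) for _ in range(256)]  # deliberately incorrect code
signal = [c + random.gauss(0, 0.5) for c in code]          # incoming signal + noise

# Short interval: mix with the incorrect code to obtain a noise reference level.
noise_reference = correlate(signal, wrong_code)            # "sample and hold"

# Long interval: mix with the correct replica code to obtain the output level.
output_level = correlate(signal, code)

# The summer subtracts the stored reference; the difference is compared with a
# detection threshold that now tracks the channel's own noise floor.
difference = output_level - noise_reference
print(difference, difference > 0.5)                        # illustrative threshold
```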

  10. Oxygen Saturation in Healthy Children Aged 5 to 16 Years Residing in Huayllay, Peru at 4340 m

    PubMed Central

    Schult, Sandra

    2011-01-01

    Abstract Schult, Sandra, and Carlos Canelo-Aybar. Oxygen saturation in healthy children aged 5 to 16 years residing in Huayllay, Peru, at 4340 m. High Alt. Med. Biol. 12:89–92, 2011. Hypoxemia is a major life-threatening complication of childhood pneumonia. The threshold points for hypoxemia vary with altitude. However, few published data describe the normal range of variation. The purpose of this study was to establish reference values of normal mean Sao2 levels and an approximate cutoff point to define hypoxemia for clinical purposes above 4300 meters above sea level (masl). Children aged 5 to 16 yr were examined during primary care visits at the Huayllay Health Center. Huayllay is a rural community located at 4340 m in the province of Pasco in the Peruvian Andes. We collected basic sociodemographic data and evaluated three outcomes: arterial oxygen saturation (Sao2) with a pulse oximeter, heart rate, and respiratory rate. Comparisons of main outcomes among age groups (5–6, 7–8, 9–10, 11–12, 13–14, and 15–16 yr) and sex were performed using linear regression models. The correlation of Sao2 with heart rate and respiration rate was established by Pearson's correlation test. We evaluated 583 children, of whom 386 were included in the study. The average age was 10.3 yr; 55.7% were female. The average Sao2, heart rate, and respiratory rate were 85.7% (95% CI: 85.2–86.2), 80.4/min (95% CI: 79.0–81.9), and 19.9/min (95% CI: 19.6–20.2), respectively. Sao2 increased with age (p < 0.001). No differences by sex were observed. The mean minus two standard deviations of Sao2 (the threshold point for hypoxemia) ranged from 73.8% to 81.8% by age group. At 4300 m, the reference values for hypoxemia may be 14.2% lower than at sea level. This difference must be considered when diagnosing hypoxemia or deciding on oxygen supplementation at high altitude. Other studies are needed to determine whether this reference value is appropriate for clinical use. PMID:21452970

  11. Nonlinear distortion analysis for single heterojunction GaAs HEMT with frequency and temperature

    NASA Astrophysics Data System (ADS)

    Alim, Mohammad A.; Ali, Mayahsa M.; Rezazadeh, Ali A.

    2018-07-01

    Nonlinearity of a 0.5 μm gate-length AlGaAs/GaAs-based high electron mobility transistor has been investigated using the two-tone intermodulation distortion (IMD) technique as a function of biasing conditions, input power, frequency, and temperature. The outcomes indicate a significant modification of the output IMD power as well as of the minimum distortion level. The input IMD power affects the output current and subsequently reduces the threshold voltage, resulting in an increase in the output IMD power. Both frequency and temperature reduce the magnitude of the output IMDs. In addition, the threshold voltage response with temperature alters the notch point of the nonlinear output IMDs accordingly. This investigation will help circuit designers evaluate the best biasing option in terms of minimum distortion and maximum gain for future design optimization.

  12. Outcomes of Community-Based Screening for Depression and Suicide Prevention among Japanese Elders

    ERIC Educational Resources Information Center

    Oyama, Hirofumi; Fujita, Motoi; Goto, Masahiro; Shibuya, Hiroshi; Sakashita, Tomoe

    2006-01-01

    Purpose: In this study we evaluate outcomes of a community-based program to prevent suicide among elderly individuals aged 65 and older. Design and Methods: We used a quasi-experimental design with intervention and referent municipalities. The program included a 7-year implementation of depression screening with follow-up by general practitioners…

  13. An entropy decision approach in flash flood warning: rainfall thresholds definition

    NASA Astrophysics Data System (ADS)

    Montesarchio, V.; Napolitano, F.; Ridolfi, E.

    2009-09-01

    Flash flood events are floods characterised by a very rapid response of the basin to the storm, and they often involve loss of life and damage to public and private property. Due to the specific space-time scale of this kind of flood, generally only a short lead time is available for triggering civil protection measures. Threshold values specify the precipitation amount for a given duration that generates a critical discharge in a given cross section. Exceeding these values could produce a critical situation in river sites exposed to alluvial risk, so observed or forecasted precipitation can be compared directly with critical reference values, without running online real-time forecasting systems. This study is focused on the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the concept of informative entropy. The study concludes with a system performance analysis in terms of correctly issued warnings, false alarms and missed alarms.

  14. Quantitative measurement of interocular suppression in anisometropic amblyopia: a case-control study.

    PubMed

    Li, Jinrong; Hess, Robert F; Chan, Lily Y L; Deng, Daming; Yang, Xiao; Chen, Xiang; Yu, Minbin; Thompson, Benjamin

    2013-08-01

    The aims of this study were to assess (1) the relationship between interocular suppression and visual function in patients with anisometropic amblyopia, (2) whether suppression can be simulated in matched controls using monocular defocus or neutral density filters, (3) the effects of spectacle or rigid gas-permeable contact lens correction on suppression in patients with anisometropic amblyopia, and (4) the relationship between interocular suppression and outcomes of occlusion therapy. Case-control study (aims 1-3) and cohort study (aim 4). Forty-five participants with anisometropic amblyopia and 45 matched controls (mean age, 8.8 years for both groups). Interocular suppression was assessed using Bagolini striated lenses, neutral density filters, and an objective psychophysical technique that measures the amount of contrast imbalance between the 2 eyes that is required to overcome suppression (dichoptic motion coherence thresholds). Visual acuity was assessed using a logarithm minimum angle of resolution tumbling E chart and stereopsis using the Randot preschool test. Interocular suppression assessed using dichoptic motion coherence thresholds. Patients exhibited significantly stronger suppression than controls, and stronger suppression was correlated significantly with poorer visual acuity in amblyopic eyes. Reducing monocular acuity in controls to match that of cases using neutral density filters (luminance reduction) resulted in levels of interocular suppression comparable with that in patients. This was not the case for monocular defocus (optical blur). Rigid gas-permeable contact lens correction resulted in less suppression than spectacle correction, and stronger suppression was associated with poorer outcomes after occlusion therapy. Interocular suppression plays a key role in the visual deficits associated with anisometropic amblyopia and can be simulated in controls by inducing a luminance difference between the eyes. Accurate quantification of suppression using the dichoptic motion coherence threshold technique may provide useful information for the management and treatment of anisometropic amblyopia. Proprietary or commercial disclosure may be found after the references. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  15. Reference geometry-based detection of (4D-)CT motion artifacts: a feasibility study

    NASA Astrophysics Data System (ADS)

    Werner, René; Gauer, Tobias

    2015-03-01

    Respiration-correlated computed tomography (4D or 3D+t CT) can be considered standard of care in radiation therapy treatment planning for lung and liver lesions. The decision about applying motion management devices and the estimation of patient-specific motion effects on the dose distribution rely on precise motion assessment in the planning 4D CT data, which is impeded in the presence of CT motion artifacts. The development of image-based/post-processing approaches to reduce motion artifacts would benefit from precise detection and localization of the artifacts. Simple slice-by-slice comparison of intensity values and threshold-based analysis of related metrics suffer from high false-positive or false-negative rates, depending on the threshold. In this work, we propose exploiting prior knowledge about 'ideal' (artifact-free) reference geometries to stabilize metric-based artifact detection by transferring (multi-)atlas-based concepts to this specific task. Two variants are introduced and evaluated: (S1) analysis and comparison of warped atlas data obtained by repeated non-linear atlas-to-patient registration with different levels of regularization; (S2) direct analysis of vector field properties (divergence, curl magnitude) of the atlas-to-patient transformation. Feasibility of approaches (S1) and (S2) is evaluated on motion-phantom data and in intra-subject experiments (four patients) as well as, adopting a multi-atlas strategy, in inter-subject investigations (twelve patients). It is demonstrated that especially sorting/double-structure artifacts can be precisely detected and localized by (S1). In contrast, (S2) suffers from high false-positive rates.
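
    Variant (S2) rests on standard vector-calculus properties of the atlas-to-patient displacement field. The sketch below uses a synthetic field and arbitrary thresholds; it illustrates the quantities involved rather than the paper's detection rule.

```python
import numpy as np

def field_properties(ux, uy, spacing=1.0):
    """Divergence and scalar curl magnitude of a 2D displacement field
    (ux = column/x displacement, uy = row/y displacement)."""
    duy_dy, duy_dx = np.gradient(uy, spacing)
    dux_dy, dux_dx = np.gradient(ux, spacing)
    divergence = dux_dx + duy_dy
    curl_mag = np.abs(duy_dx - dux_dy)
    return divergence, curl_mag

def flag_artifact_rows(divergence, curl_mag, div_thresh=0.5, curl_thresh=0.5):
    """Flag rows (standing in for slices) where either property exceeds an
    illustrative threshold."""
    div_flag = np.abs(divergence).max(axis=1) > div_thresh
    curl_flag = curl_mag.max(axis=1) > curl_thresh
    return np.nonzero(div_flag | curl_flag)[0]

# Synthetic field with a displacement jump along one row, loosely mimicking a
# sorting/double-structure artifact.
ux = np.zeros((20, 20))
uy = np.zeros((20, 20))
uy[10, :] = 3.0
div, curl = field_properties(ux, uy)
print(flag_artifact_rows(div, curl))
```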

  16. Psychophysical and Patient Factors as Determinants of Pain, Function and Health Status in Shoulder Disorders

    PubMed Central

    Uddin, Zakir; MacDermid, Joy C.; Moro, Jaydeep; Galea, Victoria; Gross, Anita R.

    2016-01-01

    Objective: To estimate the extent to which psychophysical quantitative sensory tests (QST) and patient factors (gender, age and comorbidity) predict pain, function and health status in people with shoulder disorders. To determine if there are gender differences for QST measures in current perception threshold (CPT), vibration threshold (VT) and pressure pain (PP) threshold and tolerance. Design: A cross-sectional study design. Setting: MacHAND Clinical Research Lab at McMaster University. Subjects: 34 surgical and 10 nonsurgical participants with shoulder pain were recruited. Method: Participants completed the following patient-reported outcomes: pain (Numeric Pain Rating, Pain Catastrophizing Scale, Shoulder Pain and Disability Index) and health status (Short Form-12). Participants completed QST at 4 standardized locations and then an upper extremity performance-based endurance test (FIT-HaNSA). Pearson r's were computed to determine the relationships of QST variables and patient factors with pain, function and health status. Eight regression models were built to analyze QST and patient factors separately as predictors of either pain, function or health status. An independent-sample t-test was done to evaluate the gender effect on QST. Results: Greater PP threshold and PP tolerance were significantly correlated with higher shoulder functional performance on the FIT-HaNSA (r =0.31-0.44) and lower self-reported shoulder disability (r = -0.32 to -0.36). Higher comorbidity was consistently correlated (r =0.31-0.46) with more pain, and less function and health status. Older age was correlated with more pain intensity and less function (r =0.31-0.57). In multivariate models, patient factors contributed significantly to the pain, function or health status models (r2 =0.19-0.36), whereas QST did not. QST was significantly different between males and females [in PP threshold (3.9 vs. 6.2, p < .001) and PP tolerance (7.6 vs. 2.6, p < .001) and CPT (1.6 vs. 2.3, p =.02)]. Conclusion: Psychophysical dimensions and patient factors (gender, age and comorbidity) affect self-reported and performance-based outcome measures in people with shoulder disorders. PMID:29399220

  17. Assessment of long-term outcomes associated with urinary prostate cancer antigen 3 and TMPRSS2:ERG gene fusion at repeat biopsy.

    PubMed

    Merdan, Selin; Tomlins, Scott A; Barnett, Christine L; Morgan, Todd M; Montie, James E; Wei, John T; Denton, Brian T

    2015-11-15

    In men with clinically localized prostate cancer who have undergone at least 1 previous negative biopsy and have elevated serum prostate-specific antigen (PSA) levels, long-term health outcomes associated with the assessment of urinary prostate cancer antigen 3 (PCA3) and the transmembrane protease, serine 2 (TMPRSS2):v-ets erythroblastosis virus E26 oncogene homolog (avian) (ERG) gene fusion (T2:ERG) have not been investigated previously in relation to the decision to recommend a repeat biopsy. The authors performed a decision analysis using a decision tree for men with elevated PSA levels. The probability of cancer was estimated using the Prostate Cancer Prevention Trial Risk Calculator (version 2.0). The use of PSA alone was compared with the use of PCA3 and T2:ERG scores, with each evaluated independently, in combination with PSA to trigger a repeat biopsy. When PCA3 and T2:ERG score evaluations were used, predefined thresholds were established to determine whether the patient should undergo a repeat biopsy. Biopsy outcomes were defined as either positive (with a Gleason score of <7, 7, or >7) or negative. Probabilities and estimates of 10-year overall survival and 15-year cancer-specific survival were derived from previous studies and a literature review. Outcomes were defined as age-dependent and Gleason score-dependent 10-year overall and 15-year cancer-specific survival rates and the percentage of biopsies avoided. Incorporating the PCA3 score (biopsy threshold, 25; generated based on the urine PCA3 level normalized to the amount of PSA messenger RNA) or the T2:ERG score (biopsy threshold, 10; based on the urine T2:ERG level normalized to the amount of PSA messenger RNA) into the decision to recommend repeat biopsy would have avoided 55.4% or 64.7% of repeat biopsies for the base-case patient, respectively, and changes in the 10-year survival rate were only 0.93% or 1.41%, respectively. Multi-way sensitivity analyses suggested that these results were robust with respect to the model parameters. The use of PCA3 or T2:ERG testing for repeat biopsy decisions can substantially reduce the number of biopsies without significantly affecting 10-year survival. © 2015 American Cancer Society.

  18. Cost effectiveness of a manual based coping strategy programme in promoting the mental health of family carers of people with dementia (the START (STrAtegies for RelaTives) study): a pragmatic randomised controlled trial

    PubMed Central

    King, Derek; Romeo, Renee; Schehl, Barbara; Barber, Julie; Griffin, Mark; Rapaport, Penny; Livingston, Debbie; Mummery, Cath; Walker, Zuzana; Hoe, Juanita; Sampson, Elizabeth L; Cooper, Claudia; Livingston, Gill

    2013-01-01

    Objective To assess whether the START (STrAtegies for RelaTives) intervention added to treatment as usual is cost effective compared with usual treatment alone. Design Cost effectiveness analysis nested within a pragmatic randomised controlled trial. Setting Three mental health and one neurological outpatient dementia service in London and Essex, UK. Participants Family carers of people with dementia. Intervention An eight-session, manual-based coping intervention delivered by supervised psychology graduates to family carers of people with dementia added to usual treatment, compared with usual treatment alone. Primary outcome measures Costs measured from a health and social care perspective were analysed alongside the Hospital Anxiety and Depression Scale total score (HADS-T) of affective symptoms and quality adjusted life years (QALYs) in cost effectiveness analyses over eight months from baseline. Results Of the 260 participants recruited to the study, 173 were randomised to the START intervention and 87 to usual treatment alone. Mean HADS-T scores were lower in the intervention group than in the usual treatment group over the 8 month evaluation period (mean difference −1.79 (95% CI −3.32 to −0.33)), indicating better outcomes associated with the START intervention. There was a small improvement in health related quality of life as measured by QALYs (0.03 (−0.01 to 0.08)). Costs were no different between the intervention and usual treatment groups (£252 (−28 to 565) higher for the START group). The cost effectiveness calculations suggested that START had a greater than 99% chance of being cost effective compared with usual treatment alone at a willingness to pay threshold of £30 000 per QALY gained, and a high probability of cost effectiveness on the HADS-T measure. Conclusions The manual based coping intervention START, when added to treatment as usual, was cost effective compared with treatment as usual alone by reference to both outcome measures (affective symptoms for family carers, and carer based QALYs). Trial Registration ISRCTN 70017938 PMID:24162943

  19. A better way to evaluate remote monitoring programs in chronic disease care: receiver operating characteristic analysis.

    PubMed

    Brown Connolly, Nancy E

    2014-12-01

    This foundational study applies receiver operating characteristic (ROC) analysis to evaluate the utility and predictive value of a disease management (DM) model that uses remote monitoring (RM) devices for chronic obstructive pulmonary disease (COPD). The literature identifies a need for a more rigorous method to validate and quantify evidence-based value for RM systems being used to monitor persons with a chronic disease. ROC analysis is an engineering approach widely applied in medical testing, but it has not been evaluated for its utility in RM. Classifiers (peripheral oxygen saturation [SpO2], blood pressure [BP], and pulse), optimum thresholds, and predictive accuracy are evaluated based on patient outcomes. Parametric and nonparametric methods were used. Event-based patient outcomes included inpatient hospitalization, accident and emergency, and home health visits. Statistical analysis tools included Microsoft (Redmond, WA) Excel® and MedCalc® (MedCalc Software, Ostend, Belgium) version 12 to generate ROC curves and statistics. Persons with COPD were monitored a minimum of 183 days, with at least one inpatient hospitalization within 12 months prior to monitoring. Retrospective, de-identified patient data from a United Kingdom National Health Service COPD program were used. Datasets included biometric readings, alerts, and resource utilization. SpO2 was identified as a predictive classifier, with an optimal average threshold setting of 85-86%. BP and pulse were failed classifiers, and areas of design were identified that may improve utility and predictive capacity. A cost avoidance methodology was developed. Results can be applied to health services planning decisions. The methods can be applied to system design and evaluation based on patient outcomes. This study validated the use of ROC analysis in RM program evaluation.
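
    The core ROC step, sweeping candidate thresholds and picking the one that best separates event from non-event patients, can be sketched as follows. The data, the alert direction (an alert when SpO2 is at or below the threshold), and the use of the Youden index are illustrative assumptions rather than the study's exact procedure.

```python
import numpy as np

def roc_curve_points(scores, events):
    """Sensitivity and specificity at every candidate threshold, treating a
    low SpO2 score as indicating higher risk of an event."""
    points = []
    for thr in np.unique(scores):
        alert = scores <= thr
        tp = np.sum(alert & events)
        fp = np.sum(alert & ~events)
        fn = np.sum(~alert & events)
        tn = np.sum(~alert & ~events)
        points.append((thr, tp / (tp + fn), tn / (tn + fp)))
    return points

def best_threshold(points):
    """Optimum threshold by the Youden index (sensitivity + specificity - 1)."""
    return max(points, key=lambda p: p[1] + p[2] - 1)

# Illustrative data: mean SpO2 (%) and whether an adverse event occurred.
spo2 = np.array([82, 84, 85, 86, 88, 90, 92, 93, 95, 96])
event = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0], dtype=bool)
thr, se, sp = best_threshold(roc_curve_points(spo2, event))
print(f"threshold {thr}%, sensitivity {se:.2f}, specificity {sp:.2f}")
```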

  20. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration

    PubMed Central

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

    Introduction This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: How variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. Objectives The Centers for Medicare and Medicaid Services’ Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California’s (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. Methods We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. Results We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals’ mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals’ decreased, KPNC hospitals’ performance would appear better. Conclusion Future hospital benchmarking should consider the impact of variation in admission thresholds. PMID:29035176

  1. Transfusion strategy for acute upper gastrointestinal bleeding.

    PubMed

    Handel, James; Lang, Eddy

    2015-09-01

    Clinical question Does a hemoglobin transfusion threshold of 70 g/L yield better patient outcomes than a threshold of 90 g/L in patients with acute upper gastrointestinal bleeding? Article chosen Villanueva C, Colomo A, Bosch A, et al. Transfusion strategies for acute upper gastrointestinal bleeding. N Engl J Med 2013;368(1):11-21. Study objectives The authors of this study measured mortality, from any cause, within the first 45 days, in patients with acute upper gastrointestinal bleeding, who were managed with a hemoglobin threshold for red cell transfusion of either 70 g/L or 90 g/L. The secondary outcome measures included rate of further bleeding and rate of adverse events.

  2. Visualizing the pressure and time burden of intracranial hypertension in adult and paediatric traumatic brain injury.

    PubMed

    Güiza, Fabian; Depreitere, Bart; Piper, Ian; Citerio, Giuseppe; Chambers, Iain; Jones, Patricia A; Lo, Tsz-Yan Milly; Enblad, Per; Nillson, Pelle; Feyen, Bart; Jorens, Philippe; Maas, Andrew; Schuhmann, Martin U; Donald, Rob; Moss, Laura; Van den Berghe, Greet; Meyfroidt, Geert

    2015-06-01

    To assess the impact of the duration and intensity of episodes of increased intracranial pressure on 6-month neurological outcome in adult and paediatric traumatic brain injury. Analysis of prospectively collected minute-by-minute intracranial pressure and mean arterial blood pressure data of 261 adult and 99 paediatric traumatic brain injury patients from multiple European centres. The relationship of episodes of elevated intracranial pressure (defined as a pressure above a certain threshold during a certain time) with 6-month Glasgow Outcome Scale was visualized in a colour-coded plot. The colour-coded plot illustrates the intuitive concept that episodes of higher intracranial pressure can only be tolerated for shorter durations: the curve that delineates the duration and intensity of those intracranial pressure episodes associated with worse outcome is an approximately exponential decay curve. In children, the curve resembles that of adults, but the delineation between episodes associated with worse outcome occurs at lower intracranial pressure thresholds. Intracranial pressures above 20 mmHg lasting longer than 37 min in adults, and longer than 8 min in children, are associated with worse outcomes. In a multivariate model, together with known baseline risk factors for outcome in severe traumatic brain injury, the cumulative intracranial pressure-time burden is independently associated with mortality. When cerebrovascular autoregulation, assessed with the low-frequency autoregulation index, is impaired, the ability to tolerate elevated intracranial pressures is reduced. When the cerebral perfusion pressure is below 50 mmHg, all intracranial pressure insults, regardless of duration, are associated with worse outcome. The intracranial pressure-time burden associated with worse outcome is visualised in a colour-coded plot. In children, secondary injury occurs at lower intracranial pressure thresholds as compared to adults. Impaired cerebrovascular autoregulation reduces the ability to tolerate intracranial pressure insults. Thus, 50 mmHg might be the lower acceptable threshold for cerebral perfusion pressure.
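
    The dose-duration idea behind the colour-coded plot can be sketched by scanning a minute-by-minute ICP trace for runs above a threshold and by accumulating a pressure-time burden. The trace below is synthetic, and the episode rule is a simplification of the published analysis.

```python
def icp_episodes(icp_mmHg_per_min, threshold=20, min_minutes=37):
    """Contiguous runs of minute-by-minute ICP above the threshold that last at
    least min_minutes (37 min at 20 mmHg is the adult figure quoted above;
    about 8 min applies to children)."""
    episodes, run = [], 0
    for value in icp_mmHg_per_min:
        if value > threshold:
            run += 1
        else:
            if run >= min_minutes:
                episodes.append(run)
            run = 0
    if run >= min_minutes:
        episodes.append(run)
    return episodes

def pressure_time_burden(icp_mmHg_per_min, threshold=20):
    """Cumulative burden: sum of (ICP - threshold) over all minutes spent above
    the threshold (mmHg x min)."""
    return sum(v - threshold for v in icp_mmHg_per_min if v > threshold)

# Synthetic trace: one 45-minute insult at 25 mmHg.
trace = [12] * 60 + [25] * 45 + [14] * 60
print(icp_episodes(trace), pressure_time_burden(trace))   # [45] 225
```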

  3. Manual therapy in the management of a patient with a symptomatic Morton's Neuroma: A case report.

    PubMed

    Sault, Josiah D; Morris, Matthew V; Jayaseelan, Dhinu J; Emerson-Kavchak, Alicia J

    2016-02-01

    Patients with Morton's neuroma are rarely referred to physical therapy. This case reports the resolution of pain, increase in local pressure pain thresholds, and improvement of scores on the Lower Extremity Functional Scale and Foot and Ankle Ability Measure following a course of joint based manual therapy for a patient who had failed standard conservative medical treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Relationships between rainfall and Combined Sewer Overflow (CSO) occurrences

    NASA Astrophysics Data System (ADS)

    Mailhot, A.; Talbot, G.; Lavallée, B.

    2015-04-01

    Combined Sewer Overflow (CSO) has been recognized as a major environmental issue in many countries. In Canada, the proposed reinforcement of the CSO frequency regulations will result in new constraints on municipal development. Municipalities will have to demonstrate that new developments do not increase CSO frequency above a reference level based on historical CSO records. Governmental agencies will also have to define a framework to assess the impact of new developments on CSO frequency and the efficiency of the various proposed measures to maintain CSO frequency at its historic level. In such a context, it is important to correctly assess the average number of days with CSO and to define relationships between CSO frequency and rainfall characteristics. This paper investigates such relationships using available CSO and rainfall datasets for Quebec. CSO records for 4285 overflow structures (OS) were analyzed. A simple model based on rainfall thresholds was developed to forecast the occurrence of CSO on a given day based on daily rainfall values. The estimated probability of days with CSO has been used to estimate the rainfall threshold value at each OS by imposing that the probability of exceeding this rainfall value on a given day be equal to the estimated probability of days with CSO. The forecast skill of this model was assessed for 3437 OS using contingency tables. The statistical significance of the forecast skill could be assessed for 64.2% of these OS. The threshold model demonstrated significant forecast skill for 91.3% of these OS, confirming that for most OS a simple threshold model can be used to assess the occurrence of CSO.
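
    The quantile-matching step described above (choose the rainfall value whose exceedance probability equals the observed fraction of CSO days) can be sketched in a few lines. This is an illustrative reconstruction with synthetic data, not the authors' implementation; the function names are hypothetical.

```python
import numpy as np

def cso_rain_threshold(daily_rain_mm, cso_day):
    """Rainfall threshold whose exceedance probability matches the observed
    fraction of days with a CSO (quantile matching)."""
    p_cso = np.mean(cso_day)                       # fraction of days with CSO
    return np.quantile(daily_rain_mm, 1.0 - p_cso)

def contingency(daily_rain_mm, cso_day, threshold):
    """2x2 contingency table for forecasting CSO days from daily rainfall."""
    pred = np.asarray(daily_rain_mm) > threshold
    obs = np.asarray(cso_day, dtype=bool)
    return {"hits": int(np.sum(pred & obs)),
            "false_alarms": int(np.sum(pred & ~obs)),
            "misses": int(np.sum(~pred & obs)),
            "correct_negatives": int(np.sum(~pred & ~obs))}

# Toy example: overflows tend to occur on wetter days.
rng = np.random.default_rng(1)
rain = rng.gamma(0.4, 8.0, 1000)                   # skewed daily rainfall (mm)
cso = (rain + rng.normal(0, 3, 1000)) > 12         # noisy overflow process
t = cso_rain_threshold(rain, cso)
print(round(t, 1), contingency(rain, cso, t))
```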

  5. A cost-effectiveness analysis of a preventive exercise program for patients with advanced head and neck cancer treated with concomitant chemo-radiotherapy

    PubMed Central

    2011-01-01

    Background Concomitant chemo-radiotherapy (CCRT) has become an indispensable organ-preserving, but not always function-preserving, treatment modality for advanced head and neck cancer. To prevent/limit the functional side effects of CCRT, special exercise programs are increasingly explored. This study presents cost-effectiveness analyses of a preventive (swallowing) exercise program (PREP) compared to usual care (UC) from a health care perspective. Methods A Markov decision model of PREP versus UC was developed for CCRT in advanced head and neck cancer. Main outcome variables were tube dependency at one year and number of post-CCRT hospital admission days. Primary outcome was costs per quality adjusted life year (cost/QALY), with an incremental cost-effectiveness ratio (ICER) as outcome parameter. The Expected Value of Perfect Information (EVPI) was calculated to obtain the value of further research. Results PREP resulted in less tube dependency than UC (3% and 25%, respectively) and in fewer hospital admission days (3.2 and 4.5 days, respectively). Total costs for UC amounted to €41,986 and for PREP to €42,271. Quality adjusted life years for UC amounted to 0.68 and for PREP to 0.77. Based on costs per QALY, PREP has a higher probability of being cost-effective as long as the willingness to pay threshold for 1 additional QALY is at least €3,200/QALY. At the prevailing threshold of €20,000/QALY the probability of PREP being cost-effective compared to UC was 83%. The EVPI demonstrated potential value in undertaking additional research to reduce the existing decision uncertainty. Conclusions Based on current evidence, PREP for CCRT in advanced head and neck cancer has the higher probability of being cost-effective when compared to UC. Moreover, the majority of sensitivity analyses produced ICERs that are well below the prevailing willingness to pay threshold for an additional QALY (ranging from dominance to €45,906/QALY). PMID:22051143
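
    The headline cost-effectiveness figure follows directly from the point estimates quoted above. The snippet below simply recomputes the incremental cost-effectiveness ratio from those numbers as a back-of-the-envelope check; it is not the Markov model itself.

```python
# ICER from the point estimates reported in the abstract.
cost_uc, qaly_uc = 41_986.0, 0.68        # usual care
cost_prep, qaly_prep = 42_271.0, 0.77    # preventive exercise program

icer = (cost_prep - cost_uc) / (qaly_prep - qaly_uc)
print(f"ICER = €{icer:,.0f} per QALY gained")   # ~€3,167/QALY, i.e. roughly the
                                                # €3,200/QALY figure quoted above
```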

  6. Fluctuation scaling in the visual cortex at threshold

    NASA Astrophysics Data System (ADS)

    Medina, José M.; Díaz, José A.

    2016-05-01

    Fluctuation scaling relates trial-to-trial variability to the average response by a power function in many physical processes. Here we address whether fluctuation scaling holds in sensory psychophysics and its functional role in visual processing. We report experimental evidence of fluctuation scaling in human color vision and form perception at threshold. Subjects detected thresholds in a psychophysical masking experiment that is considered a standard reference for studying suppression between neurons in the visual cortex. For all subjects, the analysis of threshold variability that results from the masking task indicates that fluctuation scaling is a global property that modulates detection thresholds with a scaling exponent that departs from 2, β = 2.48 ± 0.07. We also examine a generalized version of fluctuation scaling between the sample kurtosis K and the sample skewness S of threshold distributions. We find that K and S are related and follow a unique quadratic form K = (1.19 ± 0.04)S² + (2.68 ± 0.06) that departs from the expected 4/3 power-function regime. A random multiplicative process with weak additive noise is proposed based on a Langevin-type equation. The multiplicative process provides a unifying description of fluctuation scaling and the quadratic S-K relation and is related to on-off intermittency in sensory perception. Our findings provide an insight into how the human visual system interacts with the external environment. The theoretical methods open perspectives for investigating fluctuation scaling and intermittency effects in a wide variety of natural, economic, and cognitive phenomena.
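
    The scaling exponent reported above can be estimated with an ordinary log-log regression of the trial-to-trial variance against the mean. The sketch below does this on synthetic threshold data; it is a generic fluctuation-scaling fit, not the authors' analysis pipeline.

```python
import numpy as np

def fluctuation_scaling_exponent(samples_by_condition):
    """Fit variance = a * mean**beta across conditions by least squares in
    log-log space and return beta."""
    means = np.array([np.mean(s) for s in samples_by_condition])
    variances = np.array([np.var(s, ddof=1) for s in samples_by_condition])
    slope, _intercept = np.polyfit(np.log(means), np.log(variances), 1)
    return slope

# Synthetic thresholds whose spread grows faster than the mean (true beta = 2.5).
rng = np.random.default_rng(2)
conditions = [rng.normal(mu, 0.05 * mu**1.25, 200) for mu in (1, 2, 4, 8, 16)]
print(round(fluctuation_scaling_exponent(conditions), 2))
```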

  7. Collision count in rugby union: A comparison of micro-technology and video analysis methods.

    PubMed

    Reardon, Cillian; Tobin, Daniel P; Tierney, Peter; Delahunt, Eamonn

    2017-10-01

    The aim of our study was to determine if there is a role for manipulation of g force thresholds acquired via micro-technology for accurately detecting collisions in rugby union. In total, 36 players were recruited from an elite Guinness Pro12 rugby union team. Player movement profiles and collisions were acquired via individual global positioning system (GPS) micro-technology units. Players were assigned to a sub-category of positions in order to determine positional collision demands. The coding of collisions by micro-technology at g force thresholds between 2 and 5.5 g (0.5 g increments) was compared with collision coding by an expert video analyst using Bland-Altman assessments. The most appropriate g force threshold (smallest mean difference compared with video analyst coding) was lower for all forwards positions (2.5 g) than for all backs positions (3.5 g). The Bland-Altman 95% limits of agreement indicated that there may be a substantial over- or underestimation of collisions coded via GPS micro-technology when using expert video analyst coding as the reference comparator. The manipulation of the g force thresholds applied to data acquired by GPS micro-technology units based on incremental thresholds of 0.5 g does not provide a reliable tool for the accurate coding of collisions in rugby union. Future research should aim to investigate smaller g force threshold increments and determine the events that cause coding of false positives.
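
    The Bland-Altman comparison used above reduces to a mean difference (bias) and 95% limits of agreement between the two coding methods. Here is a minimal sketch with hypothetical per-match counts; the numbers are invented for illustration and the threshold label is an assumption.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two collision-count methods."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical per-match counts: GPS units at a 2.5 g threshold vs video analyst.
gps_25g = np.array([25, 29, 22, 31, 24, 35, 17, 28])
video = np.array([22, 31, 18, 27, 25, 30, 19, 24])
bias, loa = bland_altman(gps_25g, video)
print(f"bias = {bias:+.1f} collisions, 95% LoA = ({loa[0]:.1f}, {loa[1]:.1f})")
```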

  8. A simulation framework for auditory discrimination experiments: Revealing the importance of across-frequency processing in speech perception.

    PubMed

    Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger

    2016-05-01

    A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with less assumptions compared to traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with-and hence to predict-empirical data from the literature. Across-frequency processing was found to be crucial to accurately model the lower speech reception threshold in modulated noise conditions than in stationary noise conditions.

  9. The Identification of a Threshold of Long Work Hours for Predicting Elevated Risks of Adverse Health Outcomes.

    PubMed

    Conway, Sadie H; Pompeii, Lisa A; Gimeno Ruiz de Porras, David; Follis, Jack L; Roberts, Robert E

    2017-07-15

    Working long hours has been associated with adverse health outcomes. However, a definition of long work hours relative to adverse health risk has not been established. Repeated measures of work hours among approximately 2,000 participants from the Panel Study of Income Dynamics (1986-2011), conducted in the United States, were retrospectively analyzed to derive statistically optimized cutpoints of long work hours that best predicted three health outcomes. Work-hours cutpoints were assessed for model fit, calibration, and discrimination separately for the outcomes of poor self-reported general health, incident cardiovascular disease, and incident cancer. For each outcome, the work-hours threshold that best predicted increased risk was 52 hours per week or more for a minimum of 10 years. Workers exposed at this level had a higher risk of poor self-reported general health (relative risk (RR) = 1.28; 95% confidence interval (CI): 1.06, 1.53), cardiovascular disease (RR = 1.42; 95% CI: 1.24, 1.63), and cancer (RR = 1.62; 95% CI: 1.22, 2.17) compared with those working 35-51 hours per week for the same duration. This study provides the first health risk-based definition of long work hours. Further examination of the predictive power of this cutpoint on other health outcomes and in other study populations is needed. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Mortality differences by surgical volume among patients with stomach cancer: a threshold for a favorable volume-outcome relationship.

    PubMed

    Choi, Hyeok; Yang, Seong-Yoon; Cho, Hee-Seung; Kim, Woorim; Park, Eun-Cheol; Han, Kyu-Tae

    2017-07-17

    Many studies have assessed the volume-outcome relationship in cancer patients, but most focused on better outcomes in higher volume groups rather than identifying a specific threshold that could assist in clinical decision-making for achieving the best outcomes. The current study suggests an optimal volume for achieving good outcome, as an extension of previous studies on the volume-outcome relationship in stomach cancer patients. We used National Health Insurance Service (NHIS) Sampling Cohort data during 2004-2013, comprising healthcare claims for 2550 patients with newly diagnosed stomach cancer. We conducted survival analyses adopting the Cox proportional hazard model to investigate the association of three threshold values for surgical volume of stomach cancer patients for cancer-specific mortality using the Youden index. Overall, 17.10% of patients died due to cancer during the study period. The risk of mortality among patients who received surgical treatment gradually decreased with increasing surgical volume at the hospital, while the risk of mortality increased again in "high" surgical volume hospitals, resulting in a j-shaped curve (mid-low = hazard ratio (HR) 0.773, 95% confidence interval (CI) 0.608-0.983; mid-high = HR 0.541, 95% CI 0.372-0.788; high = HR 0.659, 95% CI 0.473-0.917; ref = low). These associations were especially significant in regions with unsubstantial surgical volumes and less severe cases. The optimal surgical volume threshold was about 727.3 surgical cases for stomach cancer per hospital over the 1-year study period in South Korea. However, such positive effects decreased after exceeding a certain volume of surgeries.
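
    The Youden index mentioned above picks the cut-point maximizing sensitivity + specificity - 1. The sketch below applies it to a single continuous exposure (annual surgical volume) against a binary outcome; this is a deliberately simplified stand-in for the Cox modelling in the study, with synthetic data.

```python
import numpy as np

def youden_threshold(volume, died):
    """Cut-point of `volume` maximizing Youden's J for predicting `died`,
    treating low volume as the 'exposed' group."""
    volume = np.asarray(volume, dtype=float)
    died = np.asarray(died, dtype=bool)
    best_j, best_t = -np.inf, None
    for t in np.unique(volume):
        low_volume = volume < t
        sens = np.mean(low_volume[died])      # deaths occurring in low-volume group
        spec = np.mean(~low_volume[~died])    # survivors in high-volume group
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

rng = np.random.default_rng(3)
vol = rng.uniform(50, 1500, 500)                          # annual surgical volume
death = rng.random(500) < (0.25 - 0.10 * (vol > 700))     # lower risk above ~700 cases
print(youden_threshold(vol, death))
```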

  11. Direct and simultaneous detection of organic and inorganic ingredients in herbal powder preparations by Fourier transform infrared microspectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Chen, Jian-bo; Sun, Su-qin; Tang, Xu-dong; Zhang, Jing-zhao; Zhou, Qun

    2016-08-01

    A herbal powder preparation is a widely used herbal product in the form of a powder mixture of herbal ingredients. Identification of the herbal ingredients is the first and foremost step in assuring the quality, safety and efficacy of herbal powder preparations. In this research, a Fourier transform infrared (FT-IR) microspectroscopic identification method is proposed for the direct and simultaneous recognition of multiple organic and inorganic ingredients in herbal powder preparations. First, the reference spectrum of characteristic particles of each herbal ingredient is assigned according to FT-IR results and other available information. Next, a statistical correlation threshold is determined as the lower limit of correlation coefficients between the reference spectrum and a large number of calibration characteristic particles. After validation, the reference spectrum and correlation threshold can be used to identify the herbal ingredient in mixture preparations. A herbal ingredient is considered to be present if correlation coefficients between the reference spectrum and some sample particles are above the threshold. Using this method, all the herbal materials in the powder preparation Kouqiang Kuiyang San were identified successfully. This research shows the potential of the FT-IR microspectroscopic identification method for the accurate and quick identification of ingredients in herbal powder preparations.
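
    The correlation-threshold rule described above can be sketched as follows: derive a lower limit from the correlations between the reference spectrum and calibration particles, then declare an ingredient present when sample particles exceed it. The mean - k*SD rule and all spectra below are assumptions for illustration, not the paper's exact statistics.

```python
import numpy as np

def correlation_threshold(reference, calibration_particles, k=3.0):
    """Lower limit of acceptable correlation, set here as mean - k*SD of the
    correlations between the reference spectrum and calibration particles."""
    corrs = np.array([np.corrcoef(reference, p)[0, 1] for p in calibration_particles])
    return corrs.mean() - k * corrs.std(ddof=1)

def ingredient_present(reference, sample_particles, threshold):
    corrs = np.array([np.corrcoef(reference, p)[0, 1] for p in sample_particles])
    return bool(np.any(corrs >= threshold))

# Synthetic spectra: the ingredient's particles are noisy copies of the reference.
rng = np.random.default_rng(4)
ref = np.sin(np.linspace(0, 20, 400)) + 0.3 * np.cos(np.linspace(0, 7, 400))
calib = [ref + rng.normal(0, 0.1, 400) for _ in range(50)]
thr = correlation_threshold(ref, calib)
mix = [ref + rng.normal(0, 0.1, 400), rng.normal(0, 1, 400)]  # one match, one unrelated
print(round(thr, 3), ingredient_present(ref, mix, thr))
```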

  12. Industry guidelines, laws and regulations ignored: quality of drug advertising in medical journals.

    PubMed

    Lankinen, Kari S; Levola, Tero; Marttinen, Kati; Puumalainen, Inka; Helin-Salmivaara, Arja

    2004-11-01

    To document the quality of evidence base for marketing claims in prescription drug advertisements, to facilitate identification of potential targets for quality improvement. A sample of 1036 advertisements from four major Finnish medical journals published in 2002. Marketing claims were classified in four groups: unambiguous clinical outcome, vague clinical outcome, emotive or immeasurable outcome and non-clinical outcome. Medline references were traced and classified according to the level of evidence available. The statistical variables used in the advertisements were also documented. The sample included 245 distinct advertisements with 883 marketing claims, 1-10 claims per advertisement. Three hundred thirty seven (38%) of the claims were referenced. Each claim could be supported by one reference or more, so the number of references analysed totalled 381, 1-9 references per advertisement. Nine percent of the claims implied unambiguous clinical outcomes, 68% included vague or emotive statements. Twenty one percent of the references were irrelevant to the claim. There was a fair amount of non-scientific and scientific support for the 73 unambiguous claims, but not a single claim was supported by strong scientific evidence. Vague, emotive and non-clinical claims were significantly more often supported by non-Medline or irrelevant references than unambiguous claims. Statistical parameters were stated only 34 times. Referenced marketing claims may appear more scientific, but the use of references does not guarantee the quality of the claims. For the benefit of all stakeholders, both the regulatory control and industry's self-control of drug marketing should adopt more active monitoring roles, and apply sanctions when appropriate. Concerted efforts by several stakeholders might be more effective. Copyright 2004 John Wiley & Sons, Ltd.

  13. Computer-Based Rehabilitation for Developing Speech and Language in Hearing-Impaired Children: A Systematic Review

    ERIC Educational Resources Information Center

    Simpson, Andrea; El-Refaie, Amr; Stephenson, Caitlin; Chen, Yi-Ping Phoebe; Deng, Dennis; Erickson, Shane; Tay, David; Morris, Meg E.; Doube, Wendy; Caelli, Terry

    2015-01-01

    The purpose of this systematic review was to examine whether online or computer-based technologies were effective in assisting the development of speech and language skills in children with hearing loss. Relevant studies of children with hearing loss were analysed with reference to (1) therapy outcomes, (2) factors affecting outcomes, and (3)…

  14. Using computer assisted image analysis to determine the optimal Ki67 threshold for predicting outcome of invasive breast cancer

    PubMed Central

    Tay, Timothy Kwang Yong; Thike, Aye Aye; Pathmanathan, Nirmala; Jara-Lazaro, Ana Richelia; Iqbal, Jabed; Sng, Adeline Shi Hui; Ye, Heng Seow; Lim, Jeffrey Chun Tatt; Koh, Valerie Cui Yun; Tan, Jane Sie Yong; Yeong, Joe Poh Sheng; Chow, Zi Long; Li, Hui Hua; Cheng, Chee Leong; Tan, Puay Hoon

    2018-01-01

    Background Ki67 positivity in invasive breast cancers has an inverse correlation with survival outcomes and serves as an immunohistochemical surrogate for molecular subtyping of breast cancer, particularly ER positive breast cancer. The optimal threshold of Ki67 in both settings, however, remains elusive. We use computer assisted image analysis (CAIA) to determine the optimal threshold for Ki67 in predicting survival outcomes and differentiating luminal B from luminal A breast cancers. Methods Quantitative scoring of Ki67 on tissue microarray (TMA) sections of 440 invasive breast cancers was performed using Aperio ePathology ImmunoHistochemistry Nuclear Image Analysis algorithm, with TMA slides digitally scanned via Aperio ScanScope XT System. Results On multivariate analysis, tumours with Ki67 ≥14% had an increased likelihood of recurrence (HR 1.941, p=0.021) and shorter overall survival (HR 2.201, p=0.016). Similar findings were observed in the subset of 343 ER positive breast cancers (HR 2.409, p=0.012 and HR 2.787, p=0.012 respectively). The value of Ki67 associated with ER+HER2-PR<20% tumours (Luminal B subtype) was found to be <17%. Conclusion Using CAIA, we found optimal thresholds for Ki67 that predict a poorer prognosis and an association with the Luminal B subtype of breast cancer. Further investigation and validation of these thresholds are recommended. PMID:29545924

  15. A Cost-effectiveness Analysis of Early vs Late Tracheostomy.

    PubMed

    Liu, C Carrie; Rudmik, Luke

    2016-10-01

    The timing of tracheostomy in critically ill patients requiring mechanical ventilation is controversial. An important consideration that is currently missing in the literature is an evaluation of the economic impact of an early tracheostomy strategy vs a late tracheostomy strategy. To evaluate the cost-effectiveness of the early tracheostomy strategy vs the late tracheostomy strategy. This economic analysis was performed using a decision tree model with a 90-day time horizon. The economic perspective was that of the US health care third-party payer. The primary outcome was the incremental cost per tracheostomy avoided. Probabilities were obtained from meta-analyses of randomized clinical trials. Costs were obtained from the published literature and the Healthcare Cost and Utilization Project database. A multivariate probabilistic sensitivity analysis was performed to account for uncertainty surrounding mean values used in the reference case. The reference case demonstrated that the cost of the late tracheostomy strategy was $45 943.81 for 0.36 of effectiveness. The cost of the early tracheostomy strategy was $31 979.12 for 0.19 of effectiveness. The incremental cost-effectiveness ratio for the late tracheostomy strategy compared with the early tracheostomy strategy was $82 145.24 per tracheostomy avoided. With a willingness-to-pay threshold of $50 000, the early tracheostomy strategy is cost-effective with 56% certainty. The adoption of an early vs a late tracheostomy strategy depends on the priorities of the decision-maker. Up to a willingness-to-pay threshold of $80 000 per tracheostomy avoided, the early tracheostomy strategy has a higher probability of being the more cost-effective intervention.

  16. Performance-based planning and programming guidebook.

    DOT National Transportation Integrated Search

    2013-09-01

    "Performance-based planning and programming (PBPP) refers to the application of performance management principles within the planning and programming processes of transportation agencies to achieve desired performance outcomes for the multimodal tran...

  17. Revision surgery of metal-on-metal hip arthroplasties for adverse reactions to metal debris.

    PubMed

    Matharu, Gulraj S; Eskelinen, Antti; Judge, Andrew; Pandit, Hemant G; Murray, David W

    2018-06-01

    Background and purpose - The initial outcomes following metal-on-metal hip arthroplasty (MoMHA) revision surgery performed for adverse reactions to metal debris (ARMD) were poor. Furthermore, robust thresholds for performing ARMD revision are lacking. This article is the second of 2. The first article considered the various investigative modalities used during MoMHA patient surveillance (Matharu et al. 2018a ). The present article aims to provide a clinical update regarding ARMD revision surgery in MoMHA patients (hip resurfacing and large-diameter MoM total hip arthroplasty), with specific focus on the threshold for performing ARMD revision, the surgical strategy, and the outcomes following revision. Results and interpretation - The outcomes following ARMD revision surgery appear to have improved with time for several reasons, among them the introduction of regular patient surveillance and lowering of the threshold for performing revision. Furthermore, registry data suggest that outcomes following ARMD revision are influenced by modifiable factors (type of revision procedure and bearing surface implanted), meaning surgeons could potentially reduce failure rates. However, additional large multi-center studies are needed to develop robust thresholds for performing ARMD revision surgery, which will guide surgeons' treatment of MoMHA patients. The long-term systemic effects of metal ion exposure in patients with these implants must also be investigated, which will help establish whether there are any systemic reasons to recommend revision of MoMHAs.

  18. Stuttering Thoughts: Negative Self-Referent Thinking Is Less Sensitive to Aversive Outcomes in People with Higher Levels of Depressive Symptoms

    PubMed Central

    Iijima, Yudai; Takano, Keisuke; Boddez, Yannick; Raes, Filip; Tanno, Yoshihiko

    2017-01-01

    Learning theories of depression have proposed that depressive cognitions, such as negative thoughts with reference to oneself, can develop through a reinforcement learning mechanism. This negative self-reference is considered to be positively reinforced by rewarding experiences such as genuine support from others after negative self-disclosure, and negatively reinforced by avoidance of potential aversive situations. The learning account additionally predicts that negative self-reference would be maintained by an inability to adjust one’s behavior when negative self-reference no longer leads to such reward. To test this prediction, we designed an adapted version of the reversal-learning task. In this task, participants were reinforced to choose and engage in either negative or positive self-reference by probabilistic economic reward and punishment. Although participants were initially trained to choose negative self-reference, the stimulus-reward contingencies were reversed to prompt a shift toward positive self-reference (Study 1) and a further shift toward negative self-reference (Study 2). Model-based computational analyses showed that depressive symptoms were associated with a low learning rate of negative self-reference, indicating a high level of reward expectancy for negative self-reference even after the contingency reversal. Furthermore, the difficulty in updating outcome predictions of negative self-reference was significantly associated with the extent to which one possesses negative self-images. These results suggest that difficulty in adjusting action-outcome estimates for negative self-reference increases the chance to be faced with negative aspects of self, which may result in depressive symptoms. PMID:28824511

  19. Stuttering Thoughts: Negative Self-Referent Thinking Is Less Sensitive to Aversive Outcomes in People with Higher Levels of Depressive Symptoms.

    PubMed

    Iijima, Yudai; Takano, Keisuke; Boddez, Yannick; Raes, Filip; Tanno, Yoshihiko

    2017-01-01

    Learning theories of depression have proposed that depressive cognitions, such as negative thoughts with reference to oneself, can develop through a reinforcement learning mechanism. This negative self-reference is considered to be positively reinforced by rewarding experiences such as genuine support from others after negative self-disclosure, and negatively reinforced by avoidance of potential aversive situations. The learning account additionally predicts that negative self-reference would be maintained by an inability to adjust one's behavior when negative self-reference no longer leads to such reward. To test this prediction, we designed an adapted version of the reversal-learning task. In this task, participants were reinforced to choose and engage in either negative or positive self-reference by probabilistic economic reward and punishment. Although participants were initially trained to choose negative self-reference, the stimulus-reward contingencies were reversed to prompt a shift toward positive self-reference (Study 1) and a further shift toward negative self-reference (Study 2). Model-based computational analyses showed that depressive symptoms were associated with a low learning rate of negative self-reference, indicating a high level of reward expectancy for negative self-reference even after the contingency reversal. Furthermore, the difficulty in updating outcome predictions of negative self-reference was significantly associated with the extent to which one possesses negative self-images. These results suggest that difficulty in adjusting action-outcome estimates for negative self-reference increases the chance to be faced with negative aspects of self, which may result in depressive symptoms.

  20. SU-F-J-113: Multi-Atlas Based Automatic Organ Segmentation for Lung Radiotherapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Han, J; Ailawadi, S

    Purpose: Normal organ segmentation is a time-consuming and labor-intensive step in lung radiotherapy treatment planning. The aim of this study is to evaluate the performance of a multi-atlas based segmentation approach for automatic organs at risk (OAR) delineation. Methods: Fifteen lung stereotactic body radiation therapy patients were randomly selected. Planning CT images and OAR contours of the heart - HT, aorta - AO, vena cava - VC, pulmonary trunk - PT, and esophagus - ES were exported and used as reference and atlas sets. For automatic organ delineation for a given target CT, 1) all atlas sets were deformably warped to the target CT, 2) the deformed sets were accumulated and normalized to produce organ probability density (OPD) maps, and 3) the OPD maps were converted to contours via image thresholding. The optimal threshold for each organ was empirically determined by comparing the auto-segmented contours against their respective reference contours. The delineated results were evaluated by measuring contour similarity metrics: DICE, mean distance (MD), and true detection rate (TD), where DICE = (intersection volume/sum of two volumes) and TD = {1.0 - (false positive + false negative)/2.0}. The Diffeomorphic Demons algorithm was employed for CT-CT deformable image registrations. Results: Optimal thresholds were determined to be 0.53 for HT, 0.38 for AO, 0.28 for PT, 0.43 for VC, and 0.31 for ES. The mean similarity metrics (DICE[%], MD[mm], TD[%]) were (88, 3.2, 89) for HT, (79, 3.2, 82) for AO, (75, 2.7, 77) for PT, (68, 3.4, 73) for VC, and (51, 2.7, 60) for ES. Conclusion: The investigated multi-atlas based approach produced reliable segmentations for the organs with large and relatively clear boundaries (HT and AO). However, the detection of small and narrow organs with diffuse boundaries (ES) was challenging. Sophisticated atlas selection and multi-atlas fusion algorithms may further improve the quality of segmentations.
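
    Steps 2-3 of the workflow above (accumulate warped atlas masks into an organ probability density map, threshold it, and score the result) can be sketched without the registration step. The 2-D toy example below uses the heart threshold of 0.53 quoted in the abstract; everything else is synthetic and not the authors' code.

```python
import numpy as np

def opd_to_contour(warped_atlas_masks, threshold):
    """Average binary atlas masks into an organ probability density (OPD) map
    and threshold it to obtain an auto-segmented mask."""
    opd = np.mean(np.stack(warped_atlas_masks).astype(float), axis=0)
    return opd >= threshold

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 2-D example: a circular "heart" with slightly shifted atlas masks.
yy, xx = np.mgrid[0:64, 0:64]
reference = (xx - 32) ** 2 + (yy - 32) ** 2 < 14 ** 2
rng = np.random.default_rng(5)
atlases = [((xx - 32 - rng.integers(-3, 4)) ** 2 +
            (yy - 32 - rng.integers(-3, 4)) ** 2) < 14 ** 2 for _ in range(15)]
auto = opd_to_contour(atlases, threshold=0.53)
print(round(dice(auto, reference), 3))
```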

  1. Prediction of Tissue Outcome and Assessment of Treatment Effect in Acute Ischemic Stroke Using Deep Learning.

    PubMed

    Nielsen, Anne; Hansen, Mikkel Bo; Tietze, Anna; Mouridsen, Kim

    2018-06-01

    Treatment options for patients with acute ischemic stroke depend on the volume of salvageable tissue. This volume assessment is currently based on fixed thresholds and single imaging modalities, limiting accuracy. We wish to develop and validate a predictive model capable of automatically identifying and combining acute imaging features to accurately predict final lesion volume. Using acute magnetic resonance imaging, we developed and trained a deep convolutional neural network (CNNdeep) to predict final imaging outcome. A total of 222 patients were included, of which 187 were treated with rtPA (recombinant tissue-type plasminogen activator). The performance of CNNdeep was compared with a shallow CNN based on the perfusion-weighted imaging biomarker Tmax (CNNTmax), a shallow CNN based on a combination of 9 different biomarkers (CNNshallow), a generalized linear model, and thresholding of the diffusion-weighted imaging biomarker apparent diffusion coefficient (ADC) at 600×10⁻⁶ mm²/s (ADCthres). To assess whether CNNdeep is capable of differentiating outcomes with and without intravenous rtPA, patients not receiving intravenous rtPA were included to train CNNdeep,-rtPA to assess a treatment effect. The networks' performances were evaluated using visual inspection, area under the receiver operating characteristic curve (AUC), and contrast. CNNdeep yields significantly better performance in predicting final outcome (AUC=0.88±0.12) than the generalized linear model (AUC=0.78±0.12; P=0.005), CNNTmax (AUC=0.72±0.14; P<0.003), and ADCthres (AUC=0.66±0.13; P<0.0001), and a substantially better performance than CNNshallow (AUC=0.85±0.11; P=0.063). Measured by contrast, CNNdeep improves the predictions significantly, showing superiority to all other methods (P≤0.003). CNNdeep also seems to be able to differentiate outcomes based on treatment strategy, with the volume of final infarct being significantly different (P=0.048). The considerable improvement in prediction accuracy over the current state of the art increases the potential for automated decision support in providing recommendations for personalized treatment plans. © 2018 American Heart Association, Inc.

  2. Serum cystatin C- versus creatinine-based definitions of acute kidney injury following cardiac surgery: a prospective cohort study.

    PubMed

    Spahillari, Aferdita; Parikh, Chirag R; Sint, Kyaw; Koyner, Jay L; Patel, Uptal D; Edelstein, Charles L; Passik, Cary S; Thiessen-Philbrook, Heather; Swaminathan, Madhav; Shlipak, Michael G

    2012-12-01

    The primary aim of this study was to compare the sensitivity and rapidity of acute kidney injury (AKI) detection by cystatin C level relative to creatinine level after cardiac surgery. Prospective cohort study. 1,150 high-risk adult cardiac surgery patients in the TRIBE-AKI (Translational Research Investigating Biomarker Endpoints for Acute Kidney Injury) Consortium. Changes in serum creatinine and cystatin C levels. Postsurgical incidence of AKI. Serum creatinine and cystatin C were measured at the preoperative visit and daily on postoperative days 1-5. To allow comparisons between changes in creatinine and cystatin C levels, AKI end points were defined by the relative increases in each marker from baseline (25%, 50%, and 100%) and the incidence of AKI was compared based on each marker. Secondary aims were to compare clinical outcomes among patients defined as having AKI by cystatin C and/or creatinine levels. Overall, serum creatinine level detected more cases of AKI than cystatin C level: 35% developed a ≥25% increase in serum creatinine level, whereas only 23% had a ≥25% increase in cystatin C level (P < 0.001). Creatinine level also had higher proportions meeting the 50% (14% and 8%; P < 0.001) and 100% (4% and 2%; P = 0.005) thresholds for AKI diagnosis. Clinical outcomes generally were not statistically different for AKI cases detected by creatinine or cystatin C level. However, for each AKI threshold, patients with AKI confirmed by both markers had a significantly higher risk of the combined mortality/dialysis outcome compared with patients with AKI detected by creatinine level alone (P = 0.002). There were few adverse clinical outcomes, limiting our ability to detect differences in outcomes between subgroups of patients based on their definitions of AKI. In this large multicenter study, we found that cystatin C level was less sensitive for AKI detection than creatinine level. However, confirmation by cystatin C level appeared to identify a subset of patients with AKI with a substantially higher risk of adverse outcomes. Published by Elsevier Inc.

  3. CD4+ T-cell-guided structured treatment interruptions of antiretroviral therapy in HIV disease: projecting beyond clinical trials.

    PubMed

    Yazdanpanah, Yazdan; Wolf, Lindsey L; Anglaret, Xavier; Gabillard, Delphine; Walensky, Rochelle P; Moh, Raoul; Danel, Christine; Sloan, Caroline E; Losina, Elena; Freedberg, Kenneth A

    2010-01-01

    International trials have shown that CD4+ T-cell-guided structured treatment interruptions (STI) of antiretroviral therapy (ART) lead to worse outcomes than continuous treatment. We simulated continuous ART and STI strategies with higher CD4+ T-cell interruption/reintroduction thresholds than those assessed in actual trials. Using a model of HIV, we simulated cohorts of African adults with different baseline CD4+ T-cell counts (≤200; 201-350; and 351-500 cells/μl). We varied ART initiation criteria (immediate; CD4+ T-cell count <350 cells/μl or ≥350 cells/μl with severe HIV-related disease; and CD4+ T-cell count <200 cells/μl or ≥200 cells/μl with severe HIV-related disease), and ART interruption/reintroduction thresholds (350/250; 500/350; and 700/500 cells/μl). First-line therapy was non-nucleoside reverse transcriptase inhibitor (NNRTI)-based and second-line therapy was protease inhibitor (PI)-based. STI generally reduced life expectancy compared with continuous ART. Life expectancy increased with earlier ART initiation and higher interruption/reintroduction thresholds. STI reduced life expectancy by 48-69 and 11-30 months compared with continuous ART when interruption/reintroduction thresholds were 350/250 and 500/350 cells/μl, depending on ART initiation criteria. When patients interrupted/reintroduced ART at 700/500 cells/μl, life expectancies ranged from 2 months lower to 1 month higher than continuous ART. STI-related life expectancy increased with decreased risk of virological resistance after ART interruptions. STI with NNRTI-based regimens was almost always less effective than continuous treatment, regardless of interruption/reintroduction thresholds. The risks associated with STI decrease only if patients start ART earlier, interrupt/reintroduce treatment at very high CD4+ T-cell thresholds (700/500 cells/μl) and use first-line medications with higher resistance barriers, such as PIs.

  4. A novel approach for human whole transcriptome analysis based on absolute gene expression of microarray data

    PubMed Central

    Bikel, Shirley; Jacobo-Albavera, Leonor; Sánchez-Muñoz, Fausto; Cornejo-Granados, Fernanda; Canizales-Quinteros, Samuel; Soberón, Xavier; Sotelo-Mundo, Rogerio R.; del Río-Navarro, Blanca E.; Mendoza-Vargas, Alfredo; Sánchez, Filiberto

    2017-01-01

    Background In spite of the emergence of RNA sequencing (RNA-seq), microarrays remain in widespread use for gene expression analysis in the clinic. There are over 767,000 RNA microarrays from human samples in public repositories, which are an invaluable resource for biomedical research and personalized medicine. The absolute gene expression analysis allows the transcriptome profiling of all expressed genes under a specific biological condition without the need of a reference sample. However, the background fluorescence represents a challenge to determine the absolute gene expression in microarrays. Given that the Y chromosome is absent in female subjects, we used it as a new approach for absolute gene expression analysis in which the fluorescence of the Y chromosome genes of female subjects was used as the background fluorescence for all the probes in the microarray. This fluorescence was used to establish an absolute gene expression threshold, allowing the differentiation between expressed and non-expressed genes in microarrays. Methods We extracted the RNA from 16 children leukocyte samples (nine males and seven females, ages 6–10 years). An Affymetrix Gene Chip Human Gene 1.0 ST Array was carried out for each sample and the fluorescence of 124 genes of the Y chromosome was used to calculate the absolute gene expression threshold. After that, several expressed and non-expressed genes according to our absolute gene expression threshold were compared against the expression obtained using real-time quantitative polymerase chain reaction (RT-qPCR). Results From the 124 genes of the Y chromosome, three genes (DDX3Y, TXLNG2P and EIF1AY) that displayed significant differences between sexes were used to calculate the absolute gene expression threshold. Using this threshold, we selected 13 expressed and non-expressed genes and confirmed their expression level by RT-qPCR. Then, we selected the top 5% most expressed genes and found that several KEGG pathways were significantly enriched. Interestingly, these pathways were related to the typical functions of leukocytes cells, such as antigen processing and presentation and natural killer cell mediated cytotoxicity. We also applied this method to obtain the absolute gene expression threshold in already published microarray data of liver cells, where the top 5% expressed genes showed an enrichment of typical KEGG pathways for liver cells. Our results suggest that the three selected genes of the Y chromosome can be used to calculate an absolute gene expression threshold, allowing a transcriptome profiling of microarray data without the need of an additional reference experiment. Discussion Our approach based on the establishment of a threshold for absolute gene expression analysis will allow a new way to analyze thousands of microarrays from public databases. This allows the study of different human diseases without the need of having additional samples for relative expression experiments. PMID:29230367
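
    The core idea above (use the fluorescence of Y-chromosome probes in female samples as background, then call genes expressed when they exceed that level) can be sketched as follows. The 95th-percentile rule and the synthetic data are assumptions for illustration; the paper's actual threshold construction may differ.

```python
import numpy as np

def absolute_expression_threshold(y_probe_signal_in_females, percentile=95):
    """Background threshold taken as an upper percentile of Y-chromosome probe
    fluorescence in female samples (which carry no Y chromosome)."""
    return np.percentile(y_probe_signal_in_females, percentile)

def classify_expressed(expression_matrix, threshold):
    """Call a gene 'expressed' if its median signal across samples exceeds
    the background threshold."""
    return np.median(expression_matrix, axis=1) > threshold

rng = np.random.default_rng(6)
y_in_females = rng.normal(4.0, 0.4, 3 * 7)            # 3 informative probes x 7 females
expr = np.vstack([rng.normal(4.1, 0.4, (500, 16)),    # genes near background
                  rng.normal(8.0, 1.0, (500, 16))])   # clearly expressed genes
thr = absolute_expression_threshold(y_in_females)
print(round(thr, 2), int(classify_expressed(expr, thr).sum()), "of", expr.shape[0])
```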

  5. Performance of new thresholds of the Glasgow Blatchford score in managing patients with upper gastrointestinal bleeding.

    PubMed

    Laursen, Stig B; Dalton, Harry R; Murray, Iain A; Michell, Nick; Johnston, Matt R; Schultz, Michael; Hansen, Jane M; Schaffalitzky de Muckadell, Ove B; Blatchford, Oliver; Stanley, Adrian J

    2015-01-01

    Upper gastrointestinal hemorrhage (UGIH) is a common cause of hospital admission. The Glasgow Blatchford score (GBS) is an accurate determinant of patients' risk for hospital-based intervention or death. Patients with a GBS of 0 are at low risk for poor outcome and could be managed as outpatients. Some investigators therefore have proposed extending the definition of low-risk patients by using a higher GBS cut-off value, possibly with an age adjustment. We compared 3 thresholds of the GBS and 2 age-adjusted modifications to identify the optimal cut-off value or modification. We performed an observational study of 2305 consecutive patients presenting with UGIH at 4 centers (Scotland, England, Denmark, and New Zealand). The performance of each threshold and modification was evaluated based on sensitivity and specificity analyses, the proportion of low-risk patients identified, and outcomes of patients classified as low risk. There were differences in age (P = .0001), need for intervention (P < .0001), mortality (P < .015), and GBS (P = .0001) among sites. All systems identified low-risk patients with high levels of sensitivity (>97%). The GBS at cut-off values of ≤1 and ≤2, and both modifications, identified low-risk patients with higher levels of specificity (40%-49%) than the GBS with a cut-off value of 0 (22% specificity; P < .001). The GBS at a cut-off value of ≤2 had the highest specificity, but 3% of patients classified as low-risk patients had adverse outcomes. All GBS cut-off values, and score modifications, had low levels of specificity when tested in New Zealand (2.5%-11%). A GBS cut-off value of ≤1 and both GBS modifications identify almost twice as many low-risk patients with UGIH as a GBS at a cut-off value of 0. Implementing a protocol for outpatient management, based on one of these scores, could reduce hospital admissions by 15% to 20%. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
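
    Comparing candidate GBS cutoffs comes down to tabulating, for each cutoff, the sensitivity and specificity for a poor outcome and the share of patients labelled low risk. The sketch below does this on simulated data; the relationship between GBS and outcome is invented, so the printed numbers are illustrative only.

```python
import numpy as np

def evaluate_cutoff(gbs, poor_outcome, cutoff):
    """Patients with GBS <= cutoff are 'low risk'; sensitivity/specificity are
    for detecting a poor outcome (need for intervention or death)."""
    gbs = np.asarray(gbs)
    poor = np.asarray(poor_outcome, dtype=bool)
    high_risk = gbs > cutoff
    sens = np.mean(high_risk[poor])            # poor outcomes flagged high risk
    spec = np.mean(~high_risk[~poor])          # benign courses flagged low risk
    return sens, spec, np.mean(~high_risk)     # plus proportion managed as low risk

rng = np.random.default_rng(7)
gbs = rng.integers(0, 16, 2000)
poor = rng.random(2000) < np.clip(0.02 + 0.05 * gbs, 0, 1)
for c in (0, 1, 2):
    sens, spec, frac_low = evaluate_cutoff(gbs, poor, c)
    print(f"GBS <= {c}: sens={sens:.3f} spec={spec:.3f} low-risk={frac_low:.1%}")
```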

  6. Altitude training causes haematological fluctuations with relevance for the Athlete Biological Passport.

    PubMed

    Bonne, Thomas Christian; Lundby, Carsten; Lundby, Anne Kristine; Sander, Mikael; Bejder, Jacob; Nordsborg, Nikolai Baastrup

    2015-08-01

    The impact of altitude training on haematological parameters and the Athlete Biological Passport (ABP) was evaluated in international-level elite athletes. One group of swimmers lived high and trained high (LHTH, n = 10) for three to four weeks at 2130 m or higher whereas a control group (n = 10) completed a three-week training camp at sea-level. Haematological parameters were determined weekly three times before and four times after the training camps. ABP thresholds for haemoglobin concentration ([Hb]), reticulocyte percentage (RET%), OFF score and the abnormal blood profile score (ABPS) were calculated using the Bayesian model. After altitude training, six swimmers exceeded the 99% ABP thresholds: two swimmers exceeded the OFF score thresholds at day +7; one swimmer exceeded the OFF score threshold at day +28; one swimmer exceeded the threshold for RET% at day +14; and one swimmer surpassed the ABPS threshold at day +14. In the control group, no values exceeded the individual ABP reference range. In conclusion, LHTH induces haematological changes in Olympic-level elite athletes which can exceed the individually generated references in the ABP. Training at altitude should be considered a confounding factor for ABP interpretation for up to four weeks after altitude exposure but does not consistently cause abnormal values in the ABP. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Abnormal glomerular filtration rate in children, adolescents and young adults starts below 75 mL/min/1.73 m².

    PubMed

    Pottel, Hans; Hoste, Liesbeth; Delanaye, Pierre

    2015-05-01

    The chronic kidney disease (CKD) classification system for children is similar to that for adults, with both mainly based on estimated glomerular filtration rate (eGFR) combined with fixed cut-off values. The main cut-off eGFR value used to define CKD is 60 mL/min/1.73 m², a value that is also applied for children older than 2 years of age, adolescents and young adults. Based on a literature search, we evaluated inclusion criteria for eGFR in clinical trials or research studies on CKD for children. We also collected information on direct measurements of GFR (mGFR) in children and adolescents, with the aim to estimate the normal reference range for GFR. Using serum creatinine (Scr) normal reference values and Scr-based eGFR equations, we also evaluated the correspondence between Scr normal reference values and (e)GFR normal reference values. Based on our literature search, the inclusion of children in published CKD studies has been based on cut-off values for eGFR of >60 mL/min/1.73 m². The lower reference limits for mGFR far exceed this adult threshold. Using eGFR values calculated using Scr-based formulas, we found that abnormal Scr levels in children already correspond to eGFR values that are below a cut-off of 75 mL/min/1.73 m². Abnormal GFR in children, adolescents and young adults starts below 75 mL/min/1.73 m², and as abnormality is a sign of disease, we recommend referring children, adolescents and young adults with an (e)GFR of <75 mL/min/1.73 m² for further clinical assessment.

  8. Symmetrical and asymmetrical outcomes of leader anger expression: A qualitative study of army personnel

    PubMed Central

    Lindebaum, Dirk; Jordan, Peter J; Morris, Lucy

    2016-01-01

    Recent studies have highlighted the utility of anger at work, suggesting that anger can have positive outcomes. Using the Dual Threshold Model, we assess the positive and negative consequences of anger expressions at work and focus on the conditions under which expressions of anger crossing the impropriety threshold are perceived as productive or counterproductive by observers or targets of that anger. To explore this phenomenon, we conducted a phenomenological study (n = 20) to probe the lived experiences of followers (as observers and targets) associated with anger expressions by military leaders. The nature of task (e.g. the display rules prescribed for combat situations) emerged as one condition under which the crossing of the impropriety threshold leads to positive outcomes of anger expressions. Our data reveal tensions between emotional display rules and emotional display norms in the military, thereby fostering paradoxical attitudes toward anger expression and its consequences among followers. Within this paradoxical space, anger expressions have both positive (asymmetrical) and negative (symmetrical) consequences. We place our findings in the context of the Dual Threshold Model, discuss the practical implications of our research and offer avenues for future studies. PMID:26900171

  9. Symmetrical and asymmetrical outcomes of leader anger expression: A qualitative study of army personnel.

    PubMed

    Lindebaum, Dirk; Jordan, Peter J; Morris, Lucy

    2016-02-01

    Recent studies have highlighted the utility of anger at work, suggesting that anger can have positive outcomes. Using the Dual Threshold Model, we assess the positive and negative consequences of anger expressions at work and focus on the conditions under which expressions of anger crossing the impropriety threshold are perceived as productive or counterproductive by observers or targets of that anger. To explore this phenomenon, we conducted a phenomenological study ( n = 20) to probe the lived experiences of followers (as observers and targets) associated with anger expressions by military leaders. The nature of task (e.g. the display rules prescribed for combat situations) emerged as one condition under which the crossing of the impropriety threshold leads to positive outcomes of anger expressions. Our data reveal tensions between emotional display rules and emotional display norms in the military, thereby fostering paradoxical attitudes toward anger expression and its consequences among followers. Within this paradoxical space, anger expressions have both positive (asymmetrical) and negative (symmetrical) consequences. We place our findings in the context of the Dual Threshold Model, discuss the practical implications of our research and offer avenues for future studies.

  10. Preoperative cow-side lactatemia measurement predicts negative outcome in Holstein dairy cattle with right abomasal disorders.

    PubMed

    Boulay, G; Francoz, D; Doré, E; Dufour, S; Veillette, M; Badillo, M; Bélanger, A-M; Buczinski, S

    2014-01-01

    The objectives of the current study were (1) to determine the gain in prognostic accuracy of preoperative l-lactate concentration (LAC) measured on farm on cows with right displaced abomasum (RDA) or abomasal volvulus (AV) for predicting negative outcome; and (2) to suggest clinically relevant thresholds for such use. A cohort of 102 cows with on-farm surgical diagnostic of RDA or AV was obtained from June 2009 through December 2011. Blood was drawn from coccygeal vessels before surgery and plasma LAC was immediately measured by using a portable clinical analyzer. Dairy producers were interviewed by phone 30 d following surgery and the outcome was determined: a positive outcome if the owner was satisfied of the overall evolution 30 d postoperatively, and a negative outcome if the cow was culled, died, or if the owner reported being unsatisfied 30 d postoperatively. The area under the curve of the receiver operating characteristic curve for LAC was 0.92 and was significantly greater than the area under the curve of the receiver operating characteristic curve of heart rate (HR; 0.77), indicating that LAC, in general, performed better than HR to predict a negative outcome. Furthermore, the ability to predict a negative outcome was significantly improved when LAC measurement was considered in addition to the already available HR data (area under the curve: 0.93 and 95% confidence interval: 0.87, 0.99). Important inflection points of the misclassification cost term function were noted at thresholds of 2 and 6 mmol/L, suggesting the potential utility of these cut-points. The 2 and 6 mmol/L thresholds had a sensitivity, specificity, positive predictive value, and negative predictive value for predicting a negative outcome of 76.2, 82.7, 53.3, and 93.1%, and of 28.6, 97.5, 75, and 84%, respectively. In terms of clinical interpretation, LAC ≤2 mmol/L appeared to be a good indicator of positive outcome and could be used to support a surgical treatment decision. The treatment decision for cows with LAC between 2 and 6 mmol/L, however, would depend on the economic context and the owner's attitude to risk in regard to potential return on its investment. Finally, performing a surgical correction on commercial cows with RDA or AV and a LAC ≥6 mmol/L appeared to be unjustified and these animals should be culled based on their high probability of negative outcome. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Program Evaluation of Group-based Cognitive Behavioral Therapy for Insomnia: a Focus on Treatment Adherence and Outcomes in Older Adults with Co-morbidities.

    PubMed

    Ludwin, Brian M; Bamonti, Patricia; Mulligan, Elizabeth A

    2017-11-21

    To describe a program evaluation of the interrelationship of adherence and treatment outcomes in a sample of veteran older adults with co-morbidities who participated in group-based cognitive behavioral therapy for insomnia. Retrospective data extraction was performed for 14 older adults. Adherence measures and sleep outcomes were measured with sleep diaries and Insomnia Severity Index. Demographic and clinical information was extracted through chart review. Adherence with prescribed time in bed, daily sleep diaries, and maintaining consistent time out of bed and time in bed was generally high. There were moderate, though not significant, improvements in consistency of time in bed and time out of bed over time. Adherence was not significantly associated with sleep outcomes despite improvements in most sleep outcomes. The non-significant relationship between sleep outcomes and adherence may reflect the moderating influence of co-morbidities or may suggest a threshold effect beyond which stricter adherence has a limited impact on outcomes. Development of multi-method adherence measures across all treatment components will be important to understand the influence of adherence on treatment outcomes as monitoring adherence to time in bed and time out of bed had limited utility for understanding treatment outcomes in our sample.

  12. Association of daily asthma emergency department visits and hospital admissions with ambient air pollutants among the pediatric Medicaid population in Detroit: time-series and time-stratified case-crossover analyses with threshold effects.

    PubMed

    Li, Shi; Batterman, Stuart; Wasilevich, Elizabeth; Wahl, Robert; Wirth, Julie; Su, Feng-Chiao; Mukherjee, Bhramar

    2011-11-01

    Asthma morbidity has been associated with ambient air pollutants in time-series and case-crossover studies. In such study designs, threshold effects of air pollutants on asthma outcomes, which are of potential interest for characterizing concentration-response relationships, have been relatively unexplored. This study analyzes daily data on the asthma morbidity experienced by the pediatric Medicaid population (ages 2-18 years) of Detroit, Michigan and concentrations of the pollutants fine particles (PM2.5), CO, NO2 and SO2 for the 2004-2006 period, using both time-series and case-crossover designs. We use a simple, testable and readily implementable profile likelihood-based approach to estimate threshold parameters in both designs. Evidence of significant increases in daily acute asthma events was found for SO2 and PM2.5, and a significant threshold effect was estimated for PM2.5 at 13 and 11 μg/m³ using generalized additive models and conditional logistic regression models, respectively. Stronger effect sizes above the threshold were typically noted compared with the standard linear relationship; for example, in the time-series analysis, an interquartile range increase (9.2 μg/m³) in PM2.5 (5-day moving average) had a risk ratio of 1.030 (95% CI: 1.001, 1.061) in the generalized additive models, and 1.066 (95% CI: 1.031, 1.102) in the threshold generalized additive models. The corresponding estimates for the case-crossover design were 1.039 (95% CI: 1.013, 1.066) in the conditional logistic regression, and 1.054 (95% CI: 1.023, 1.086) in the threshold conditional logistic regression. This study indicates that the associations of SO2 and PM2.5 concentrations with asthma emergency department visits and hospitalizations, as well as the estimated PM2.5 threshold, were fairly consistent across time-series and case-crossover analyses, and suggests that effect estimates based on linear models (without thresholds) may underestimate the true risk. Copyright © 2011 Elsevier Inc. All rights reserved.
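
    The threshold estimation described above can be approximated by scanning candidate thresholds and refitting a model with a hinge term max(x - t, 0), keeping the best-fitting t. The sketch below uses ordinary least squares on a toy continuous outcome as a stand-in for the generalized additive and conditional logistic models in the study; the variable names and data are hypothetical.

```python
import numpy as np

def hinge_threshold_fit(x, y, candidates):
    """Grid-search the threshold t in y ~ b0 + b1 * max(x - t, 0), choosing the
    t with the smallest residual sum of squares (a least-squares analogue of
    the profile-likelihood approach)."""
    best = (np.inf, None, None)
    for t in candidates:
        X = np.column_stack([np.ones_like(x), np.maximum(x - t, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if rss < best[0]:
            best = (rss, t, beta)
    return best[1], best[2]

rng = np.random.default_rng(8)
pm25 = rng.uniform(2, 40, 1500)                              # daily PM2.5 (ug/m3)
visits = 5 + 0.4 * np.maximum(pm25 - 13, 0) + rng.normal(0, 1, 1500)
t_hat, coef = hinge_threshold_fit(pm25, visits, np.arange(5, 30, 0.5))
print(t_hat, np.round(coef, 2))    # threshold near 13, slope near 0.4 above it
```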

  13. Optimizing detection and analysis of slow waves in sleep EEG.

    PubMed

    Mensen, Armand; Riedner, Brady; Tononi, Giulio

    2016-12-01

    Analysis of individual slow waves in EEG recordings during sleep provides both greater sensitivity and specificity compared to spectral power measures. However, parameters for detection and analysis have not been widely explored and validated. We present a new, open-source, Matlab-based toolbox for the automatic detection and analysis of slow waves, with adjustable parameter settings, as well as manual correction and exploration of the results using a multi-faceted visualization tool. We explore a large search space of parameter settings for slow wave detection and measure their effects on a selection of outcome parameters. Every choice of parameter setting had some effect on at least one outcome parameter. In general, the largest effect sizes were found when choosing the EEG reference, type of canonical waveform, and amplitude thresholding. Previously published methods accurately detect large, global waves but are conservative and miss the detection of smaller amplitude, local slow waves. The toolbox has additional benefits in terms of speed, user interface, and visualization options to compare and contrast slow waves. The exploration of parameter settings in the toolbox highlights the importance of careful selection of detection methods. The sensitivity and specificity of the automated detection can be improved by manually adding or deleting entire waves and/or specific channels using the toolbox visualization functions. The toolbox standardizes the detection procedure, sets the stage for reliable results and comparisons, and is easy to use without previous programming experience. Copyright © 2016 Elsevier B.V. All rights reserved.
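
    Amplitude thresholding, one of the influential parameters noted above, can be illustrated with a toy detector that finds negative half-waves between zero crossings and keeps those whose trough exceeds an amplitude criterion. This is not the toolbox's code; the threshold and duration limits below are common choices, used here only as assumptions.

```python
import numpy as np

def detect_slow_waves(eeg_uV, fs, amp_threshold_uV=-40.0,
                      min_dur_s=0.25, max_dur_s=1.0):
    """Negative half-waves (down- to up-going zero crossing) whose trough is
    below `amp_threshold_uV` and whose duration lies in [min_dur_s, max_dur_s]."""
    x = np.asarray(eeg_uV, dtype=float)
    sign = np.signbit(x).astype(int)
    down = np.flatnonzero(np.diff(sign) == 1)     # positive -> negative crossings
    up = np.flatnonzero(np.diff(sign) == -1)      # negative -> positive crossings
    waves = []
    for d in down:
        later_up = up[up > d]
        if later_up.size == 0:
            continue
        u = later_up[0]
        dur = (u - d) / fs
        trough = x[d:u + 1].min()
        if trough <= amp_threshold_uV and min_dur_s <= dur <= max_dur_s:
            waves.append((d / fs, dur, trough))
    return waves

fs = 200
t = np.arange(0, 10, 1 / fs)
eeg = 60 * np.sin(2 * np.pi * 0.9 * t) + np.random.default_rng(9).normal(0, 5, t.size)
print(len(detect_slow_waves(eeg, fs)))            # roughly one wave per 0.9 Hz cycle
```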

  14. Identifying optimal threshold statistics for elimination of hookworm using a stochastic simulation model.

    PubMed

    Truscott, James E; Werkman, Marleen; Wright, James E; Farrell, Sam H; Sarkar, Rajiv; Ásbjörnsdóttir, Kristjana; Anderson, Roy M

    2017-06-30

    There is an increased focus on whether mass drug administration (MDA) programmes alone can interrupt the transmission of soil-transmitted helminths (STH). Mathematical models can be used to model these interventions and are increasingly being implemented to inform investigators about expected trial outcomes and the choice of optimum study design. One key factor is the choice of threshold for detecting elimination. However, no thresholds for determining whether STH transmission has been broken have yet been defined. We develop a simulation of an elimination study, based on the DeWorm3 project, using an individual-based stochastic disease transmission model in conjunction with models of MDA, sampling, diagnostics and the construction of study clusters. The simulation is then used to analyse the relationship between the study end-point elimination threshold and whether elimination is achieved in the long term within the model. We analyse the quality of a range of statistics in terms of their positive predictive values (PPV) and how they depend on a range of covariates, including threshold values, baseline prevalence, measurement time point and how clusters are constructed. End-point infection prevalence performs well in discriminating between villages that achieve interruption of transmission and those that do not, although the quality of the threshold is sensitive to baseline prevalence and threshold value. The optimal post-treatment prevalence threshold for determining elimination is 2% or less when the baseline prevalence range is broad. For multiple clusters of communities, both the probability of elimination and the ability of thresholds to detect it are strongly dependent on the size of the cluster and the size distribution of the constituent communities. The number of communities in a cluster is a key indicator of the probability of elimination and of PPV. Measuring the threshold statistic later after the study end point improves the PPV for discriminating between clusters that eliminate and those that bounce back. The probability of elimination and PPV are very sensitive to baseline prevalence for individual communities. However, most studies and programmes are constructed on the basis of clusters. Since elimination occurs within smaller population sub-units, the construction of clusters introduces new sensitivities of elimination threshold values to cluster size and the underlying population structure. Study simulation offers an opportunity to investigate key sources of sensitivity for elimination studies and programme designs in advance and to tailor interventions to prevailing local or national conditions.
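
    As a toy illustration of how a threshold statistic's positive predictive value can be scored against simulated long-term outcomes (the transmission model itself is far more involved), one might compute:

        import numpy as np

        def threshold_ppv(endpoint_prev, eliminated, threshold=0.02):
            """PPV of declaring elimination when end-point prevalence <= threshold.
            endpoint_prev: per-cluster prevalence at the study end point (simulated).
            eliminated:    boolean, whether transmission was truly interrupted long term."""
            endpoint_prev = np.asarray(endpoint_prev)
            eliminated = np.asarray(eliminated, dtype=bool)
            declared = endpoint_prev <= threshold
            return eliminated[declared].mean() if declared.any() else np.nan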

  15. Point estimation following two-stage adaptive threshold enrichment clinical trials.

    PubMed

    Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel

    2018-05-31

    Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  16. Drug Adverse Event Detection in Health Plan Data Using the Gamma Poisson Shrinker and Comparison to the Tree-based Scan Statistic

    PubMed Central

    Brown, Jeffrey S.; Petronis, Kenneth R.; Bate, Andrew; Zhang, Fang; Dashevsky, Inna; Kulldorff, Martin; Avery, Taliser R.; Davis, Robert L.; Chan, K. Arnold; Andrade, Susan E.; Boudreau, Denise; Gunter, Margaret J.; Herrinton, Lisa; Pawloski, Pamala A.; Raebel, Marsha A.; Roblin, Douglas; Smith, David; Reynolds, Robert

    2013-01-01

    Background: Drug adverse event (AE) signal detection using the Gamma Poisson Shrinker (GPS) is commonly applied in spontaneous reporting. AE signal detection using large observational health plan databases can expand medication safety surveillance. Methods: Using data from nine health plans, we conducted a pilot study to evaluate the implementation and findings of the GPS approach for two antifungal drugs, terbinafine and itraconazole, and two diabetes drugs, pioglitazone and rosiglitazone. We evaluated 1676 diagnosis codes grouped into 183 different clinical concepts and four levels of granularity. Several signaling thresholds were assessed. GPS results were compared to findings from a companion study using the identical analytic dataset but an alternative statistical method—the tree-based scan statistic (TreeScan). Results: We identified 71 statistical signals across two signaling thresholds and two methods, including closely-related signals of overlapping diagnosis definitions. Initial review found that most signals represented known adverse drug reactions or confounding. About 31% of signals met the highest signaling threshold. Conclusions: The GPS method was successfully applied to observational health plan data in a distributed data environment as a drug safety data mining method. There was substantial concordance between the GPS and TreeScan approaches. Key method implementation decisions relate to defining exposures and outcomes and informed choice of signaling thresholds. PMID:24300404
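
    The core shrinkage idea can be illustrated with a single gamma prior on the relative reporting rate; the actual GPS uses an empirical-Bayes mixture of two gamma distributions, so this is a simplified sketch with assumed prior parameters.

        def shrunken_relative_rate(n_observed, n_expected, alpha=1.0, beta=1.0):
            """Posterior-mean relative reporting rate for one drug-event pair.

            With a Gamma(alpha, beta) prior on the true rate ratio and a Poisson
            likelihood for the observed count, the posterior mean shrinks the raw
            observed/expected ratio toward the prior mean alpha/beta.
            """
            return (alpha + n_observed) / (beta + n_expected)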

  17. The Ling 6(HL) test: typical pediatric performance data and clinical use evaluation.

    PubMed

    Glista, Danielle; Scollie, Susan; Moodie, Sheila; Easwar, Vijayalakshmi

    2014-01-01

    The Ling 6(HL) test offers a calibrated version of naturally produced speech sounds in dB HL for evaluation of detection thresholds. Aided performance has been previously characterized in adults. The purpose of this work was to evaluate and refine the Ling 6(HL) test for use in pediatric hearing aid outcome measurement. This work is presented across two studies incorporating an integrated knowledge translation approach in the characterization of normative and typical performance, and in the evaluation of clinical feasibility, utility, acceptability, and implementation. A total of 57 children, 28 with normal hearing and 29 with binaural sensorineural hearing loss, were included in Study 1. Children wore their own hearing aids fitted using Desired Sensation Level v5.0. Nine clinicians from The Network of Pediatric Audiologists participated in Study 2. A CD-based test format was used in the collection of unaided and aided detection thresholds in laboratory and clinical settings; thresholds were measured clinically as part of routine clinical care. Confidence intervals were derived to characterize normal performance and typical aided performance according to hearing loss severity. Unaided-aided performance was analyzed using a repeated-measures analysis of variance. The audiologists completed an online questionnaire evaluating the quality, feasibility/executability, utility/comparative value/relative advantage, acceptability/applicability, and interpretability of the test, in addition to recommendation and general comments sections. Ling 6(HL) thresholds were reliably measured with children 3-18 yr old. Normative and typical performance ranges were translated into a scoring tool for use in pediatric outcome measurement. Questionnaire respondents generally agreed that the Ling 6(HL) test was a high-quality outcome evaluation tool that can be implemented successfully in clinical settings. By actively collaborating with pediatric audiologists and using an integrated knowledge translation framework, this work supported the creation of an evidence-based clinical tool that has the potential to be implemented in, and useful to, clinical practice. More research is needed to characterize performance in alternative listening conditions, for example to facilitate use with infants. Future efforts focused on monitoring the use of the Ling 6(HL) test in daily clinical practice may help describe whether clinical use has been maintained across time and if any additional adaptations are necessary to facilitate clinical uptake. American Academy of Audiology.

  18. Coupling a regional warning system to a semantic engine on online news for enhancing landslide prediction

    NASA Astrophysics Data System (ADS)

    Battistini, Alessandro; Rosi, Ascanio; Segoni, Samuele; Catani, Filippo; Casagli, Nicola

    2017-04-01

    Landslide inventories are basic data for large scale landslide modelling, e.g. they are needed to calibrate and validate rainfall thresholds, physically based models and early warning systems. The setting up of landslide inventories with traditional methods (e.g. remote sensing, field surveys and manual retrieval of data from technical reports and local newspapers) is time consuming. The objective of this work is to automatically set up a landslide inventory using a state-of-the-art semantic engine based on data mining of online news (Battistini et al., 2013) and to evaluate whether the automatically generated inventory can be used to validate a regional scale landslide warning system based on rainfall thresholds. The semantic engine scanned internet news in real time over a 50-month test period. At the end of the process, an inventory of approximately 900 landslides was set up for the Tuscany region (23,000 km2, Italy). The inventory was compared with the outputs of the regional landslide early warning system based on rainfall thresholds, and a good correspondence was found: e.g. 84% of the events reported in the news are correctly identified by the model. In addition, the cases of non-correspondence were forwarded to the rainfall threshold developers, who used these inputs to update some of the thresholds. On the basis of the results obtained, we conclude that automatic validation of landslide models using geolocalized landslide event feedback is possible. The source of data for validation can be obtained directly from the internet channel using an appropriate semantic engine. We also automated the validation procedure, which is based on a comparison between forecasts and reported events. We verified that our approach can be used automatically for near real-time validation of the warning system and for a semi-automatic update of the rainfall thresholds, which could lead to an improvement in the forecasting effectiveness of the warning system. In the near future, the proposed procedure could operate in continuous time and could allow for a periodic update of landslide hazard models and landslide early warning systems with minimum human intervention. References: Battistini, A., Segoni, S., Manzo, G., Catani, F., Casagli, N. (2013). Web data mining for automatic inventory of geohazards at national scale. Applied Geography, 43, 147-158.
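
    A minimal sketch of the forecast-versus-reported-event comparison might look like the following; the matching rule (same municipality, within one day) and the field names are hypothetical, since the paper does not spell out its exact matching criteria.

        def hit_rate(reported_events, warnings, max_days=1):
            """Fraction of news-reported landslides matched by a warning issued for the
            same municipality within +/- max_days. Both inputs are lists of dicts with
            hypothetical 'municipality' and 'date' keys (dates as datetime.date)."""
            hits = sum(
                any(w["municipality"] == ev["municipality"]
                    and abs((w["date"] - ev["date"]).days) <= max_days
                    for w in warnings)
                for ev in reported_events
            )
            return hits / len(reported_events)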

  19. Reference Guide to Odor Thresholds for Hazardous Air Pollutants Listed in the Clean Air Act Amendments of 1990.

    EPA Science Inventory

    In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role in providing technical assistance to State and Local government agencies on risk assessment of air pollutants. Discussion...

  20. Cost effectiveness of a manual based coping strategy programme in promoting the mental health of family carers of people with dementia (the START (STrAtegies for RelaTives) study): a pragmatic randomised controlled trial.

    PubMed

    Knapp, Martin; King, Derek; Romeo, Renee; Schehl, Barbara; Barber, Julie; Griffin, Mark; Rapaport, Penny; Livingston, Debbie; Mummery, Cath; Walker, Zuzana; Hoe, Juanita; Sampson, Elizabeth L; Cooper, Claudia; Livingston, Gill

    2013-10-25

    To assess whether the START (STrAtegies for RelaTives) intervention added to treatment as usual is cost effective compared with usual treatment alone. Cost effectiveness analysis nested within a pragmatic randomised controlled trial. Three mental health and one neurological outpatient dementia service in London and Essex, UK. Family carers of people with dementia. Eight-session, manual-based coping intervention delivered by supervised psychology graduates to family carers of people with dementia added to usual treatment, compared with usual treatment alone. Costs measured from a health and social care perspective were analysed alongside the Hospital Anxiety and Depression Scale total score (HADS-T) of affective symptoms and quality adjusted life years (QALYs) in cost effectiveness analyses over eight months from baseline. Of the 260 participants recruited to the study, 173 were randomised to the START intervention and 87 to usual treatment alone. Mean HADS-T scores were lower in the intervention group than the usual treatment group over the 8 month evaluation period (mean difference -1.79 (95% CI -3.32 to -0.33)), indicating better outcomes associated with the START intervention. There was a small improvement in health related quality of life as measured by QALYs (0.03 (-0.01 to 0.08)). Costs were no different between the intervention and usual treatment groups (£252 (-28 to 565) higher for the START group). The cost effectiveness calculations suggested that START had a greater than 99% chance of being cost effective compared with usual treatment alone at a willingness to pay threshold of £30,000 per QALY gained, and a high probability of cost effectiveness on the HADS-T measure. The manual-based coping intervention START, when added to treatment as usual, was cost effective compared with treatment as usual alone by reference to both outcome measures (affective symptoms for family carers, and carer based QALYs). ISRCTN 70017938.

  1. Weight loss outcomes among patients referred after primary bariatric procedure.

    PubMed

    Obeid, Nabeel R; Malick, Waqas; Baxter, Andrew; Molina, Bianca; Schwack, Bradley F; Kurian, Marina S; Ren-Fielding, Christine J; Fielding, George A

    2016-07-01

    Bariatric patients may not always obtain long-term care by their primary surgeon. Our aim was to evaluate weight loss outcomes in patients who had surgery elsewhere. We conducted a retrospective analysis. Postreferral management included nonsurgical, revision, or conversion. Primary outcomes were percent excess weight loss (%EWL) overall, according to original operation, and based on postreferral management. Between 2001 and 2013, there were 569 patients. Mean follow-up was 3.1 years. Management was 42% nonsurgical, 41% revision, and 17% conversion. Overall, mean %EWL was 45.3%. Based on original surgery type, %EWL was 41.2% for adjustable gastric banding vs 58.3% for Roux-en-Y gastric bypass (P ≤ .0001). Management affected %EWL (41.2% nonsurgical vs 45.3% revision vs 55.1% conversion, P ≤ .0001). Patients referred after bariatric surgery can achieve satisfactory weight loss. This differs based on surgery type and management strategy. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. The influence of different signal-to-background ratios on spatial resolution and F18-FDG-PET quantification using point spread function and time-of-flight reconstruction.

    PubMed

    Rogasch, Julian Mm; Hofheinz, Frank; Lougovski, Alexandr; Furth, Christian; Ruf, Juri; Großer, Oliver S; Mohnike, Konrad; Hass, Peter; Walke, Mathias; Amthauer, Holger; Steffen, Ingo G

    2014-12-01

    F18-fluorodeoxyglucose positron-emission tomography (FDG-PET) reconstruction algorithms can have substantial influence on quantitative image data used, e.g., for therapy planning or monitoring in oncology. We analyzed radial activity concentration profiles of differently reconstructed FDG-PET images to determine the influence of varying signal-to-background ratios (SBRs) on the respective spatial resolution, activity concentration distribution, and quantification (standardized uptake value [SUV], metabolic tumor volume [MTV]). Measurements were performed on a Siemens Biograph mCT 64 using a cylindrical phantom containing four spheres (diameter, 30 to 70 mm) filled with F18-FDG applying three SBRs (SBR1, 16:1; SBR2, 6:1; SBR3, 2:1). Images were reconstructed employing six algorithms (filtered backprojection [FBP], FBP + time-of-flight analysis [FBP + TOF], 3D-ordered subset expectation maximization [3D-OSEM], 3D-OSEM + TOF, point spread function [PSF], PSF + TOF). Spatial resolution was determined by fitting the convolution of the object geometry with a Gaussian point spread function to radial activity concentration profiles. MTV delineation was performed using fixed thresholds and semiautomatic background-adapted thresholding (ROVER, ABX, Radeberg, Germany). The pairwise Wilcoxon test revealed significantly higher spatial resolutions for PSF + TOF (up to 4.0 mm) compared to PSF, FBP, FBP + TOF, 3D-OSEM, and 3D-OSEM + TOF at all SBRs (each P < 0.05) with the highest differences for SBR1 decreasing to the lowest for SBR3. Edge elevations in radial activity profiles (Gibbs artifacts) were highest for PSF and PSF + TOF declining with decreasing SBR (PSF + TOF largest sphere; SBR1, 6.3%; SBR3, 2.7%). These artifacts induce substantial SUVmax overestimation compared to the reference SUV for PSF algorithms at SBR1 and SBR2 leading to substantial MTV underestimation in threshold-based segmentation. In contrast, both PSF algorithms provided the lowest deviation of SUVmean from reference SUV at SBR1 and SBR2. At high contrast, the PSF algorithms provided the highest spatial resolution and lowest SUVmean deviation from the reference SUV. In contrast, both algorithms showed the highest deviations in SUVmax and threshold-based MTV definition. At low contrast, all investigated reconstruction algorithms performed approximately equally. The use of PSF algorithms for quantitative PET data, e.g., for target volume definition or in serial PET studies, should be performed with caution - especially if comparing SUV of lesions with high and low contrasts.

  3. Using ROC Curves to Choose Minimally Important Change Thresholds when Sensitivity and Specificity Are Valued Equally: The Forgotten Lesson of Pythagoras. Theoretical Considerations and an Example Application of Change in Health Status

    PubMed Central

    Froud, Robert; Abel, Gary

    2014-01-01

    Background: Receiver Operator Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Notwithstanding methodologists agreeing that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives: We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods: Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach that is based on the addition of the sums of squares of 1-sensitivity and 1-specificity. Results: There can be divergence in the thresholds chosen by different estimators. The cut-point selected by different estimators is dependent on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affects threshold selection. Conclusion: Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum of squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
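
    A minimal sketch of the sum-of-squares estimator, assuming binary improvement labels and a continuous change score (scikit-learn is used here only for convenience, not because the authors did):

        import numpy as np
        from sklearn.metrics import roc_curve

        def mic_sum_of_squares(improved, change_score):
            """MIC cut-point minimizing (1 - sensitivity)^2 + (1 - specificity)^2,
            i.e. the point on the ROC curve closest to the top-left corner."""
            fpr, tpr, thresholds = roc_curve(improved, change_score)
            distance_sq = (1 - tpr) ** 2 + fpr ** 2   # fpr = 1 - specificity
            return thresholds[np.argmin(distance_sq)]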

  4. Motion-based nearest vector metric for reference frame selection in the perception of motion.

    PubMed

    Agaoglu, Mehmet N; Clarke, Aaron M; Herzog, Michael H; Ögmen, Haluk

    2016-05-01

    We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave midflight whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point during its motion. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial- and angular-distance measures. Drastic changes in the relative sizes of the arcs did not influence motion reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well. We found no effect of whether or not a reference frame has a constant motion. We examined several form- and motion-based metrics, which could potentially unify our findings. We found that a motion-based nearest vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process, but instead, can be explained by a field whose strength decreases with the distance between the nearest motion vectors regardless of the form of the moving objects.

  5. Can Preoperative Patient-reported Outcome Measures Be Used to Predict Meaningful Improvement in Function After TKA?

    PubMed

    Berliner, Jonathan L; Brodke, Dane J; Chan, Vanessa; SooHoo, Nelson F; Bozic, Kevin J

    2017-01-01

    Despite the overall effectiveness of total knee arthroplasty (TKA), a subset of patients do not experience expected improvements in pain, physical function, and quality of life as documented by patient-reported outcome measures (PROMs), which assess a patient's physical and emotional health and pain. It is therefore important to develop preoperative tools capable of identifying patients unlikely to improve by a clinically important margin after surgery. The purpose of this study was to determine if an association exists between preoperative PROM scores and patients' likelihood of experiencing a clinically meaningful change in function 1 year after TKA. A retrospective study design was used to evaluate preoperative and 1-year postoperative Knee injury and Osteoarthritis Outcome Score (KOOS) and SF-12 version 2 (SF12v2) scores from 562 patients who underwent primary unilateral TKA. This cohort represented 75% of the 750 patients who underwent surgery during that time period; a total of 188 others (25%) either did not complete PROM scores at the designated times or were lost to follow-up. Minimum clinically important differences (MCIDs) were calculated for each PROM using a distribution-based method and were used to define meaningful clinical improvement. MCID values for KOOS and SF12v2 physical component summary (PCS) scores were calculated to be 10 and 5, respectively. A receiver operating characteristic analysis was used to determine threshold values for preoperative KOOS and SF12v2 PCS scores and their respective predictive abilities. Threshold values defined the point after which the likelihood of clinically meaningful improvement began to diminish. Multivariate regression was used to control for the effect of preoperative mental and emotional health, patient attributes quantified by SF12v2 mental component summary (MCS) scores, on patients' likelihood of experiencing meaningful improvement in function after surgery. Threshold values for preoperative KOOS and SF12v2 PCS scores were a maximum of 58 (area under the curve [AUC], 0.76; p < 0.001) and 34 (AUC, 0.65; p < 0.001), respectively. Patients scoring above these thresholds, indicating better preoperative function, were less likely to experience a clinically meaningful improvement in function after TKA. When accounting for mental and emotional health with a multivariate analysis, the predictive ability of both KOOS and SF12v2 PCS threshold values improved (AUCs increased to 0.80 and 0.71, respectively). Better preoperative mental and emotional health, as reflected by a higher MCS score, resulted in higher threshold values for KOOS and SF12v2 PCS. We identified preoperative PROM threshold values that are associated with clinically meaningful improvements in functional outcome after TKA. Patients with preoperative KOOS or SF12v2 PCS scores above the defined threshold values have a diminishing probability of experiencing clinically meaningful improvement after TKA. Patients with worse baseline mental and emotional health (as defined by SF12v2 MCS score) have a lower probability of experiencing clinically important levels of functional improvement after surgery. The results of this study are directly applicable to patient-centered informed decision-making tools and may be used to facilitate discussions with patients regarding the expected benefit after TKA. Level III, prognostic study.

  6. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum frequency of reconstruction hinges on the size of the array and maximum frequency depends on the spacing distance between the microphones. For the sake of enlarging the frequency of reconstruction and reducing the cost of an acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV,2016,372:31-49). In this paper, the Propagation based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed and the only making assumption is that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adaptive to practical scenarios of acoustical measurements benefiting from the introduction of propagation based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is next illustrated with an industrial case.
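
    For orientation, a generic real-valued FISTA iteration for an l1-penalized least-squares problem is sketched below; the paper's Propagation-FISTA operates on acoustic propagation matrices and sequential array measurements, so this shows only the algorithmic skeleton.

        import numpy as np

        def fista_l1(A, b, lam, n_iter=200):
            """Generic FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (real-valued)."""
            L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(n_iter):
                grad = A.T @ (A @ y - b)                  # gradient of the smooth term at y
                z = y - grad / L
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)
                x, t = x_new, t_new
            return x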

  7. Six-minute Stepper Test to Set Pulmonary Rehabilitation Intensity in Patients with COPD - A Retrospective Study.

    PubMed

    Bonnevie, Tristan; Gravier, Francis-Edouard; Leboullenger, Marie; Médrinal, Clément; Viacroze, Catherine; Cuvelier, Antoine; Muir, Jean-François; Tardif, Catherine; Debeaumont, David

    2017-06-01

    Pulmonary rehabilitation (PR) improves outcomes in patients with chronic obstructive pulmonary disease (COPD). Optimal assessment includes cardiopulmonary exercise testing (CPET), but consultations are limited. Field tests could be used to individualize PR instead of CPET. The six-minute stepper test (6MST) is easy to set up, and its sensitivity and reproducibility have previously been reported in patients with COPD. The aim of this study was to develop a prediction equation, based on the 6MST, to set training intensity in patients attending PR. The following relationships were analyzed: mean heart rate (HR) during the first (HR1-3) and last (HR4-6) 3 minutes of the 6MST and HR at the ventilatory threshold (HRvt) from CPET; step count at the end of the 6MST and workload at the ventilatory threshold (VT) (Wvt); and forced expiratory volume in 1 second and step count during the 6MST. This retrospective study included patients with COPD referred for PR who underwent CPET, pulmonary function evaluations and the 6MST. Twenty-four patients were included. Prediction equations were HRvt = 0.7887 × HR1-3 + 20.83 and HRvt = 0.6180 × HR4-6 + 30.77. There was a strong correlation between HR1-3 and HR4-6 and HRvt (r = 0.69, p < 0.001 and r = 0.57, p < 0.01, respectively). A significant correlation was also found between step count and LogWvt (r = 0.63, p < 0.01). The prediction equation was LogWvt = 0.001722 × step count + 1.248. The 6MST could be used to individualize aerobic training in patients with COPD. Further prospective studies are needed to confirm these results.
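
    Applying the reported equations is straightforward; the only assumption in the sketch below is that LogWvt denotes a base-10 logarithm, which the abstract does not state explicitly.

        def predicted_training_targets(hr_first_3min, hr_last_3min, step_count):
            """Ventilatory-threshold targets from the published 6MST regression equations."""
            hrvt_from_first = 0.7887 * hr_first_3min + 20.83   # beats/min
            hrvt_from_last = 0.6180 * hr_last_3min + 30.77     # beats/min
            wvt = 10 ** (0.001722 * step_count + 1.248)        # watts, assuming log10
            return hrvt_from_first, hrvt_from_last, wvt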

  8. Customization of Advia 120 thresholds for canine erythrocyte volume and hemoglobin concentration, and effects on morphology flagging results.

    PubMed

    Grimes, Carolyn N; Fry, Michael M

    2014-12-01

    This study sought to develop customized morphology flagging thresholds for canine erythrocyte volume and hemoglobin concentration [Hgb] on the ADVIA 120 hematology analyzer; compare automated morphology flagging with results of microscopic blood smear evaluation; and examine effects of customized thresholds on morphology flagging results. Customized thresholds were determined using data from 52 clinically healthy dogs. Blood smear evaluation and automated morphology flagging results were correlated with mean cell volume (MCV) and cellular hemoglobin concentration mean (CHCM) in 26 dogs. Customized thresholds were applied retroactively to complete blood (cell) count (CBC) data from 5 groups of dogs, including a reference sample group, clinical cases, and animals with experimentally induced iron deficiency anemia. Automated morphology flagging correlated more highly with MCV or CHCM than did blood smear evaluation; correlation with MCV was highest using customized thresholds. Customized morphology flagging thresholds resulted in more sensitive detection of microcytosis, macrocytosis, and hypochromasia than default thresholds.

  9. Comparative effectiveness and cost-effectiveness analyses frequently agree on value.

    PubMed

    Glick, Henry A; McElligott, Sean; Pauly, Mark V; Willke, Richard J; Bergquist, Henry; Doshi, Jalpa; Fleisher, Lee A; Kinosian, Bruce; Perfetto, Eleanor; Polsky, Daniel E; Schwartz, J Sanford

    2015-05-01

    The Patient-Centered Outcomes Research Institute, known as PCORI, was established by Congress as part of the Affordable Care Act (ACA) to promote evidence-based treatment. Provisions of the ACA prohibit the use of a cost-effectiveness analysis threshold and quality-adjusted life-years (QALYs) in PCORI comparative effectiveness studies, which has been understood as a prohibition on support for PCORI's conducting conventional cost-effectiveness analyses. This constraint complicates evidence-based choices where incremental improvements in outcomes are achieved at increased costs of care. How frequently this limitation inhibits efficient cost containment, also a goal of the ACA, depends on how often more effective treatment is not cost-effective relative to less effective treatment. We examined the largest database of studies of comparisons of effectiveness and cost-effectiveness to see how often there is disagreement between the more effective treatment and the cost-effective treatment, for various thresholds that may define good value. We found that under the benchmark assumption, disagreement between the two types of analyses occurs in 19 percent of cases. Disagreement is more likely to occur if a treatment intervention is musculoskeletal and less likely to occur if it is surgical or involves secondary prevention, or if the study was funded by a pharmaceutical company. Project HOPE—The People-to-People Health Foundation, Inc.

  10. Methodology Series Module 2: Case-control Studies.

    PubMed

    Setia, Maninder Singh

    2016-01-01

    Case-Control study design is a type of observational study. In this design, participants are selected for the study based on their outcome status. Thus, some participants have the outcome of interest (referred to as cases), whereas others do not have the outcome of interest (referred to as controls). The investigator then assesses the exposure in both these groups. The investigator should define the cases as specifically as possible. Sometimes, definition of a disease may be based on multiple criteria; thus, all these points should be explicitly stated in the case definition. An important aspect of selecting a control is that they should be from the same 'study base' as that of the cases. We can select controls from a variety of groups. Some of them are: general population; relatives or friends; and hospital patients. Matching is often used in case-control studies to ensure that the cases and controls are similar in certain characteristics, and it is a useful technique to increase the efficiency of the study. Case-Control studies can usually be conducted relatively faster and are inexpensive - particularly when compared with cohort studies (prospective). It is useful to study rare outcomes and outcomes with long latent periods. This design is not very useful to study rare exposures. Furthermore, they may also be prone to certain biases - selection bias and recall bias.

  11. Risk factors and screening instruments to predict adverse outcomes for undifferentiated older emergency department patients: a systematic review and meta-analysis.

    PubMed

    Carpenter, Christopher R; Shelton, Erica; Fowler, Susan; Suffoletto, Brian; Platts-Mills, Timothy F; Rothman, Richard E; Hogan, Teresita M

    2015-01-01

    A significant proportion of geriatric patients experience suboptimal outcomes following episodes of emergency department (ED) care. Risk stratification screening instruments exist to distinguish vulnerable subsets, but their prognostic accuracy varies. This systematic review quantifies the prognostic accuracy of individual risk factors and ED-validated screening instruments to distinguish patients more or less likely to experience short-term adverse outcomes like unanticipated ED returns, hospital readmissions, functional decline, or death. A medical librarian and two emergency physicians conducted a medical literature search of PubMed, EMBASE, SCOPUS, CENTRAL, and ClinicalTrials.gov using numerous combinations of search terms, including emergency medical services, risk stratification, geriatric, and multiple related MeSH terms in hundreds of combinations. Two authors hand-searched relevant specialty society research abstracts. Two physicians independently reviewed all abstracts and used the revised Quality Assessment of Diagnostic Accuracy Studies instrument to assess individual study quality. When two or more qualitatively similar studies were identified, meta-analysis was conducted using Meta-DiSc software. Primary outcomes were sensitivity, specificity, positive likelihood ratio (LR+), and negative likelihood ratio (LR-) for predictors of adverse outcomes at 1 to 12 months after the ED encounters. A hypothetical test-treatment threshold analysis was constructed based on the meta-analytic summary estimate of prognostic accuracy for one outcome. A total of 7,940 unique citations were identified yielding 34 studies for inclusion in this systematic review. Studies were significantly heterogeneous in terms of country, outcomes assessed, and the timing of post-ED outcome assessments. All studies occurred in ED settings and none used published clinical decision rule derivation methodology. Individual risk factors assessed included dementia, delirium, age, dependency, malnutrition, pressure sore risk, and self-rated health. None of these risk factors significantly increased the risk of adverse outcome (LR+ range = 0.78 to 2.84). The absence of dependency reduces the risk of 1-year mortality (LR- = 0.27) and nursing home placement (LR- = 0.27). Five constructs of frailty were evaluated, but none increased or decreased the risk of adverse outcome. Three instruments were evaluated in the meta-analysis: Identification of Seniors at Risk, Triage Risk Screening Tool, and Variables Indicative of Placement Risk. None of these instruments significantly increased (LR+ range for various outcomes = 0.98 to 1.40) or decreased (LR- range = 0.53 to 1.11) the risk of adverse outcomes. The test threshold for 3-month functional decline based on the most accurate instrument was 42%, and the treatment threshold was 61%. Risk stratification of geriatric adults following ED care is limited by the lack of pragmatic, accurate, and reliable instruments. Although absence of dependency reduces the risk of 1-year mortality, no individual risk factor, frailty construct, or risk assessment instrument accurately predicts risk of adverse outcomes in older ED patients. Existing instruments designed to risk stratify older ED patients do not accurately distinguish high- or low-risk subsets. Clinicians, educators, and policy-makers should not use these instruments as valid predictors of post-ED adverse outcomes. 
Future research to derive and validate feasible ED instruments to distinguish vulnerable elders should employ published decision instrument methods and examine the contributions of alternative variables, such as health literacy and dementia, which often remain clinically occult. © 2014 by the Society for Academic Emergency Medicine.

  12. A REFERENCE-INVARIANT HEALTH DISPARITY INDEX BASED ON RÉNYI DIVERGENCE

    PubMed Central

    Talih, Makram

    2015-01-01

    One of four overarching goals of Healthy People 2020 (HP2020) is to achieve health equity, eliminate disparities, and improve the health of all groups. In health disparity indices (HDIs) such as the mean log deviation (MLD) and Theil index (TI), disparities are relative to the population average, whereas in the index of disparity (IDisp) the reference is the group with the least adverse health outcome. Although the latter may be preferable, identification of a reference group can be affected by statistical reliability. To address this issue, we propose a new HDI, the Rényi index (RI), which is reference-invariant. When standardized, the RI extends the Atkinson index, where a disparity aversion parameter can incorporate societal values associated with health equity. In addition, both the MLD and TI are limiting cases of the RI. Also, a symmetrized Rényi index (SRI) can be constructed, resulting in a symmetric measure in the two distributions whose relative entropy is being evaluated. We discuss alternative symmetric and reference-invariant HDIs derived from the generalized entropy (GE) class and the Bregman divergence, and argue that the SRI is more robust than its GE-based counterpart to small changes in the distribution of the adverse health outcome. We evaluate the design-based standard errors and bootstrapped sampling distributions for the SRI, and illustrate the proposed methodology using data from the National Health and Nutrition Examination Survey (NHANES) on the 2001–04 prevalence of moderate or severe periodontitis among adults aged 45–74, which tracks Oral Health objective OH-5 in HP2020. Such data, which uses a binary individual-level outcome variable, are typical of HP2020 data. PMID:26568778

  14. Influence of the angular scattering of electrons on the runaway threshold in air

    NASA Astrophysics Data System (ADS)

    Chanrion, O.; Bonaventura, Z.; Bourdon, A.; Neubert, T.

    2016-04-01

    The runaway electron mechanism is of great importance for the understanding of the generation of x- and gamma rays in atmospheric discharges. In 1991, terrestrial gamma-ray flashes (TGFs) were discovered by the Compton Gamma-Ray Observatory. Those emissions are bremsstrahlung from high-energy electrons that run away in electric fields associated with thunderstorms. In this paper, we discuss the runaway threshold definition with a particular interest in the influence of angular scattering for electron energies close to the threshold. In order to understand the mechanism of runaway, we compare the outcome of different Fokker-Planck and Monte Carlo models with increasing complexity in the description of the scattering. The results show that the inclusion of the stochastic nature of collisions smooths the probability to run away around the threshold. Furthermore, we observe that a significant number of electrons diffuse out of the runaway regime when the diffusion in angle due to scattering is taken into account. These results suggest using a runaway threshold energy, based on the Fokker-Planck model assuming angular equilibrium, that is 1.6 to 1.8 times higher than the one proposed in [1, 2], depending on the magnitude of the ambient electric field. The threshold is also found to be 5 to 26 times higher than the one assuming forward scattering. We give a fitted formula for the threshold field valid over a large range of electric fields. Furthermore, we show that the assumption of forward scattering is not valid below 1 MeV, where the runaway threshold is usually defined. These results are important for the thermal runaway and runaway electron avalanche discharge mechanisms suggested to participate in TGF generation.

  15. Does a 3-month multidisciplinary intervention improve pain, body composition and physical fitness in women with fibromyalgia?

    PubMed

    Carbonell-Baeza, Ana; Aparicio, Virginia A; Ortega, Francisco B; Cuevas, Ana M; Alvarez, Inmaculada C; Ruiz, Jonatan R; Delgado-Fernandez, Manuel

    2011-12-01

    To determine the effects of a 3-month multidisciplinary intervention on pain (primary outcome), body composition and physical fitness (secondary outcomes) in women with fibromyalgia (FM). 75 women with FM were allocated to a low-moderate intensity 3-month (three times/week) multidisciplinary (pool, land-based and psychological sessions) programme (n=33) or to a usual care group (n=32). The outcome variables were pain threshold, body composition (body mass index and estimated body fat percentage) and physical fitness (30 s chair stand, handgrip strength, chair sit and reach, back scratch, blind flamingo, 8 feet up and go and 6 min walk test). The authors observed a significant interaction effect (group*time) for the left (L) and right (R) side of the anterior cervical (p<0.001) and the lateral epicondyle R (p=0.001) tender point. Post hoc analysis revealed that pain threshold increased in the intervention group (positive) in the anterior cervical R (p<0.001) and L (p=0.012), and in the lateral epicondyle R (p=0.010), whereas it decreased (negative) in the anterior cervical R (p<0.001) and L (p=0.002) in the usual care group. There was also a significant interaction effect for chair sit and reach. Post hoc analysis revealed improvement in the intervention group (p=0.002). No significant improvement attributed to the training was observed in the rest of physical fitness or body composition variables. A 3-month multidisciplinary intervention three times/week had a positive effect on pain threshold in several tender points in women with FM. Though no overall improvements were observed in physical fitness or body composition, the intervention had positive effects on lower-body flexibility.

  16. Exploiting the potential of free software to evaluate root canal biomechanical preparation outcomes through micro-CT images.

    PubMed

    Neves, A A; Silva, E J; Roter, J M; Belladona, F G; Alves, H D; Lopes, R T; Paciornik, S; De-Deus, G A

    2015-11-01

    To propose an automated image processing routine based on free software to quantify root canal preparation outcomes in pairs of sound and instrumented roots after micro-CT scanning procedures. Seven mesial roots of human mandibular molars with different canal configuration systems were studied: (i) Vertucci's type 1, (ii) Vertucci's type 2, (iii) two individual canals, (iv) Vertucci's type 6, canals (v) with and (vi) without debris, and (vii) a canal with visible pulp calcification. All teeth were instrumented with the BioRaCe system and scanned in a Skyscan 1173 micro-CT before and after canal preparation. After reconstruction, the instrumented stack of images (IS) was registered against the preoperative sound stack of images (SS). Image processing included contrast equalization and noise filtering. Sound canal volumes were obtained by a minimum threshold. For the IS, a fixed conservative threshold was chosen as the best compromise between instrumented canal and dentine whilst avoiding debris, resulting in the instrumented canal plus empty spaces. Arithmetic and logical operations between sound and instrumented stacks were used to identify debris. Noninstrumented dentine was calculated by applying a minimum threshold to the IS and subtracting from the SS and total debris. Removed dentine volume was obtained by subtracting SS from IS. Quantitative data on total debris present in the root canal space after instrumentation, noninstrumented areas and removed dentine volume were obtained for each test case, as well as three-dimensional volume renderings. After standardization of acquisition, reconstruction and image processing of the micro-CT images, a quantitative approach for calculating root canal biomechanical preparation outcomes was achieved using free software. © 2014 International Endodontic Journal. Published by John Wiley & Sons Ltd.
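
    One plausible reading of the arithmetic and logical operations on the registered stacks, using NumPy boolean masks, is sketched below; the thresholds and the exact definitions of debris and removed dentine are assumptions for illustration, not the authors' routine.

        import numpy as np

        def preparation_outcomes(sound_stack, instr_stack, canal_thr, conservative_thr):
            """Voxel-wise comparison of registered pre- and post-instrumentation stacks.

            Canal space is assumed darker (lower gray value) than dentine, so a voxel
            below the threshold is treated as empty space. Returns voxel counts;
            multiply by the voxel volume for physical units.
            """
            sound_canal = sound_stack < canal_thr          # preoperative canal space
            instr_space = instr_stack < conservative_thr   # canal plus empty spaces after shaping
            debris = sound_canal & ~instr_space            # formerly open voxels now filled
            removed_dentine = instr_space & ~sound_canal   # formerly dentine voxels now open
            return int(debris.sum()), int(removed_dentine.sum())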

  17. Climate variability in Andalusia (southern Spain) during the period 1701-1850 AD from documentary sources: evaluation and comparison with climate model simulations

    NASA Astrophysics Data System (ADS)

    Rodrigo, F. S.; Gómez-Navarro, J. J.; Montávez Gómez, J. P.

    2011-07-01

    In this work, a reconstruction of climatic conditions in Andalusia (southern Iberian Peninsula) during the period 1701-1850, as well as an evaluation of its associated uncertainties, is presented. This period is interesting because it is characterized by a minimum in solar irradiance (the Dalton Minimum, around 1800) as well as intense volcanic activity (for instance, the eruption of Tambora in 1815), while the increasing atmospheric CO2 concentrations were of minor importance. The reconstruction is based on the analysis of a wide variety of documentary data. The reconstruction methodology is based on counting the number of extreme events in the past and inferring the mean value and standard deviation under the assumption of a normal distribution for the seasonal means of climate variables. This reconstruction methodology is tested within the pseudoreality of a high-resolution paleoclimate simulation performed with the regional climate model MM5 coupled to the global model ECHO-G. Results show that the reconstructions are influenced by the reference period chosen and the threshold values used to define extreme values. This creates uncertainties which are assessed within the context of the climate simulation. An ensemble of reconstructions was obtained using two different reference periods and two pairs of percentiles as threshold values. Results correspond to winter temperature and to winter, spring, and autumn rainfall, and they are compared with simulations of the climate model for the considered period. The comparison of the distribution functions corresponding to the 1790-1820 and 1960-1990 periods indicates that during the Dalton Minimum the frequency of dry and warm (wet and cold) winters was lower (higher) than during the reference period. In spring and autumn, an increase (decrease) in the frequency of wet (dry) seasons was detected. Future research challenges are outlined.
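
    One way to formalize the step from counted extremes to a mean and standard deviation under the normality assumption is sketched below; the exact estimator used by the authors may differ, and the inputs (threshold values from the reference period and observed exceedance frequencies from the documentary counts) are assumed to be available.

        from scipy.stats import norm

        def mean_sd_from_extremes(x_low, x_high, freq_below, freq_above):
            """Infer mean and SD of a normally distributed seasonal variable from the
            observed frequency of seasons below x_low and above x_high."""
            z_low = norm.ppf(freq_below)          # standardized position of lower threshold
            z_high = norm.ppf(1.0 - freq_above)   # standardized position of upper threshold
            sigma = (x_high - x_low) / (z_high - z_low)
            mu = x_low - sigma * z_low
            return mu, sigma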

  18. Domestic violence and mental health: a cross-sectional survey of women seeking help from domestic violence support services.

    PubMed

    Ferrari, Giulia; Agnew-Davies, Roxane; Bailey, Jayne; Howard, Louise; Howarth, Emma; Peters, Tim J; Sardinha, Lynnmarie; Feder, Gene

    2014-01-01

    Domestic violence and abuse (DVA) are associated with an increased risk of mental illness, but we know little about the mental health of female DVA survivors seeking support from domestic violence services. Baseline data on 260 women enrolled in a randomized controlled trial of a psychological intervention for DVA survivors was analyzed. We report prevalence of and associations between mental health status and severity of abuse at the time of recruitment. We used logistic and normal regression models for binary and continuous outcomes, respectively. Mental health measures used were: Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM), Patient Health Questionnaire, Generalized Anxiety Disorder Assessment, and the Posttraumatic Diagnostic Scale (PDS) to measure posttraumatic stress disorder. The Composite Abuse Scale (CAS) measured abuse. Exposure to DVA was high, with a mean CAS score of 56 (SD 34). The mean CORE-OM score was 18 (SD 8) with 76% above the clinical threshold (95% confidence interval: 70-81%). Depression and anxiety levels were high, with means close to clinical thresholds, and all respondents recorded PTSD scores above the clinical threshold. Symptoms of mental illness increased stepwise with increasing severity of DVA.

  19. Evaluation of markers and risk prediction models: Overview of relationships between NRI and decision-analytic measures

    PubMed Central

    Calster, Ben Van; Vickers, Andrew J; Pencina, Michael J; Baker, Stuart G; Timmerman, Dirk; Steyerberg, Ewout W

    2014-01-01

    BACKGROUND: For the evaluation and comparison of markers and risk prediction models, various novel measures have recently been introduced as alternatives to the commonly used difference in the area under the ROC curve (ΔAUC). The Net Reclassification Improvement (NRI) is increasingly popular to compare predictions with one or more risk thresholds, but decision-analytic approaches have also been proposed. OBJECTIVE: We aimed to identify the mathematical relationships between novel performance measures for the situation that a single risk threshold T is used to classify patients as having the outcome or not. METHODS: We considered the NRI and three utility-based measures that take misclassification costs into account: difference in Net Benefit (ΔNB), difference in Relative Utility (ΔRU), and weighted NRI (wNRI). We illustrate the behavior of these measures in 1938 women suspected of ovarian cancer (prevalence 28%). RESULTS: The three utility-based measures appear to be transformations of each other, and hence always lead to consistent conclusions. On the other hand, conclusions may differ when using the standard NRI, depending on the adopted risk threshold T, the prevalence P and the obtained differences in sensitivity and specificity of the two models that are compared. In the case study, adding the CA-125 tumor marker to a baseline set of covariates yielded a negative NRI yet a positive value for the utility-based measures. CONCLUSIONS: The decision-analytic measures are each appropriate to indicate the clinical usefulness of an added marker or to compare prediction models, since these measures each reflect misclassification costs. This is of practical importance as these measures may thus adjust conclusions based on purely statistical measures. A range of risk thresholds should be considered in applying these measures. PMID:23313931
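
    The Net Benefit at a single risk threshold T, whose difference between two models gives ΔNB, can be computed as in the sketch below (a standard decision-curve formula, not code from the paper):

        import numpy as np

        def net_benefit(outcome, predicted_risk, threshold):
            """Net Benefit of treating patients whose predicted risk is >= threshold:
            true positives minus false positives weighted by the odds of the threshold,
            both expressed as proportions of the whole cohort."""
            outcome = np.asarray(outcome, dtype=bool)
            treat = np.asarray(predicted_risk) >= threshold
            n = outcome.size
            tp = np.sum(treat & outcome)
            fp = np.sum(treat & ~outcome)
            return tp / n - (fp / n) * threshold / (1.0 - threshold)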

  20. Survey of abdominal obesities in an adult urban population of Kinshasa, Democratic Republic of Congo

    PubMed Central

    Kasiam Lasi On’kin, JB; Longo-Mbenza, B; Okwe, A Nge; Kabangu, N Kangola

    2007-01-01

    Background: The prevalence of overweight/obesity, which is an important cardiovascular risk factor, is rapidly increasing worldwide. Abdominal obesity, a fundamental component of the metabolic syndrome, is not defined by appropriate cut-off points for sub-Saharan Africa. Objective: To provide baseline and reference data on the anthropometry/body composition and the prevalence rates of obesity types and levels in the adult urban population of Kinshasa, DRC, Central Africa. Methods: During this cross-sectional study carried out within a random sample of adults in Kinshasa town, body mass index, waist circumference and fatty mass were measured using standard methods. Their reference and local thresholds (cut-off points) were compared with those of the WHO, NCEP and IDF to define the types and levels of obesity in the population. Results: In this sample of 11 511 subjects (5 676 men and 5 835 women), the men presented with body mass index and fatty mass values similar to those of the women, but higher waist measurements. The international thresholds overestimated the prevalence of undernutrition, but underestimated that of general and abdominal obesity. The two types of obesity were more prevalent among women than men when using both international and local thresholds. Body mass index was negatively associated with age, but abdominal obesity was more frequent before 20 years of age and between 40 and 60 years of age. Local thresholds of body mass index (≥ 23, ≥ 27 and ≥ 30 kg/m2) and waist measurement (≥ 80, ≥ 90 and ≥ 94 cm) defined epidemic rates of overweight/general obesity (52%) and abdominal obesity (40.9%). The threshold of waist circumference ≥ 94 cm (90th percentile), corresponding to the threshold of body mass index ≥ 30 kg/m2 (90th percentile), was proposed as the specific threshold for definition of the metabolic syndrome, without reference to gender, for the cities of sub-Saharan Africa. Conclusion: Further studies are required to define the optimal threshold of waist circumference in rural settings. The present local cut-off points of body mass index and waist circumference could be appropriate for the identification of Africans at risk of obesity-related disorders, and indicate the need to implement interventions to reverse increasing levels of obesity. PMID:17985031

  1. A Set of Image Processing Algorithms for Computer-Aided Diagnosis in Nuclear Medicine Whole Body Bone Scan Images

    NASA Astrophysics Data System (ADS)

    Huang, Jia-Yann; Kao, Pan-Fu; Chen, Yung-Sheng

    2007-06-01

    Adjustment of brightness and contrast in nuclear medicine whole body bone scan images may confuse nuclear medicine physicians when identifying small bone lesions, and it makes the identification of subtle bone lesion changes in sequential studies difficult. In this study, we developed a computer-aided diagnosis system, based on fuzzy-set histogram thresholding and anatomical knowledge-based image segmentation, that was able to analyze and quantify raw image data and identify the possible location of a lesion. To locate anatomical reference points, fuzzy-set histogram thresholding was adopted as a first processing stage to suppress the soft tissue in the bone images. The anatomical knowledge-based image segmentation method was then applied to segment the skeletal frame into different regions of homogeneous bones. For the different segmented bone regions, the lesion thresholds were set at different cut-offs. To obtain lesion thresholds in the different segmented regions, the ranges and standard deviations of the image gray-level distribution were obtained from 100 normal patients' whole body bone images; another 62 patients' images were then used for testing. The two groups of images were independent. The sensitivity and the mean number of false lesions detected were used as performance indices to evaluate the proposed system. The overall sensitivity of the system is 92.1% (222 of 241), with 7.58 false detections per patient scan image. With a high sensitivity and an acceptable false lesion detection rate, this computer-aided automatic lesion detection system shows promise for helping nuclear medicine physicians identify possible bone lesions.
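
    Fuzzy-set histogram thresholding, used here to suppress soft tissue before segmentation, can be sketched as choosing the gray level that minimizes a measure of fuzziness of the resulting two-class partition. The following Python sketch implements one common (Huang-style) formulation under that assumption; the paper's exact variant may differ.

```python
import numpy as np

def fuzzy_threshold(image, n_bins=256):
    """Return the gray level that minimizes a Huang-style measure of fuzziness.

    For each candidate threshold the histogram is split into background and
    foreground; memberships are based on distance to the class means, and the
    Shannon function of the memberships gives the fuzziness to minimize."""
    hist, edges = np.histogram(np.asarray(image, float).ravel(), bins=n_bins)
    levels = 0.5 * (edges[:-1] + edges[1:])
    c = levels.max() - levels.min()          # normalisation constant for memberships
    total = hist.sum()
    best_t, best_fuzz = levels[0], np.inf
    for k in range(1, n_bins - 1):
        w0, w1 = hist[:k].sum(), hist[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:k] * levels[:k]).sum() / w0     # background mean gray level
        mu1 = (hist[k:] * levels[k:]).sum() / w1     # foreground mean gray level
        mu = np.where(np.arange(n_bins) < k, mu0, mu1)
        m = 1.0 / (1.0 + np.abs(levels - mu) / c)    # membership in own class (0.5..1)
        m = np.clip(m, 1e-9, 1 - 1e-9)
        shannon = -(m * np.log(m) + (1 - m) * np.log(1 - m))
        fuzz = (hist * shannon).sum() / total
        if fuzz < best_fuzz:
            best_fuzz, best_t = fuzz, levels[k]
    return best_t

# Hypothetical usage: keep only pixels above the fuzzy threshold before
# skeletal segmentation (variable name `scan` is illustrative).
# bone_mask = scan > fuzzy_threshold(scan)
```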

  2. A classification model of Hyperion image base on SAM combined decision tree

    NASA Astrophysics Data System (ADS)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications in remote sensing. A very high-dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. Moreover, as the input dimensionality increases, the hypothesis space grows exponentially, which makes classification performance highly unreliable. Traditional classification algorithms struggle under these conditions, so classification of hyperspectral images remains challenging and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically based spectral classifier that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. A key difficulty is that the SAM threshold must be defined manually, and classification precision depends on how well that threshold is chosen. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. The model automatically chooses an appropriate SAM threshold and improves SAM classification precision based on analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, thereby improving classification precision. Compared with likelihood classification evaluated against field survey data, the classification precision of this model is 9.9% higher.
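
    The core SAM computation, and the per-class angle thresholds that the decision tree is used to tune, can be illustrated as follows. This is a generic sketch: the threshold values and the way they would be derived from field spectra are assumptions, not the paper's procedure.

```python
import numpy as np

def spectral_angle(pixels, reference):
    """Spectral angle (radians) between each pixel spectrum and one reference
    spectrum; `pixels` has shape (n_pixels, n_bands)."""
    pixels = np.asarray(pixels, float)
    reference = np.asarray(reference, float)
    denom = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference) + 1e-12
    cos = pixels @ reference / denom
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixels, references, thresholds):
    """Assign each pixel to the reference with the smallest angle, or -1
    (unclassified) if that angle exceeds the class-specific threshold."""
    angles = np.stack([spectral_angle(pixels, r) for r in references], axis=1)
    best = angles.argmin(axis=1)
    best_angle = angles[np.arange(len(angles)), best]
    return np.where(best_angle <= np.asarray(thresholds)[best], best, -1)

# Hypothetical usage: four endmember spectra with per-class angle thresholds
# (radians) that would be tuned, e.g. by a decision tree over field spectra.
# labels = sam_classify(image.reshape(-1, n_bands), endmembers, [0.10, 0.12, 0.08, 0.15])
```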

  3. Automatic delineation of functional lung volumes with 68Ga-ventilation/perfusion PET/CT.

    PubMed

    Le Roux, Pierre-Yves; Siva, Shankar; Callahan, Jason; Claudic, Yannis; Bourhis, David; Steinfort, Daniel P; Hicks, Rodney J; Hofman, Michael S

    2017-10-10

    Functional volumes computed from 68Ga-ventilation/perfusion (V/Q) PET/CT, which we have shown to correlate with pulmonary function test (PFT) parameters, have potential diagnostic utility in a variety of clinical applications, including radiotherapy planning. An automatic segmentation method would facilitate delineation of such volumes. The aim of this study was to develop an automated threshold-based approach to delineate functional volumes that best correlates with manual delineation. Thirty lung cancer patients undergoing both V/Q PET/CT and PFTs were analyzed. Images were acquired following inhalation of Galligas and, subsequently, intravenous administration of 68Ga-macroaggregated albumin (MAA). Using visually defined manual contours as the reference standard, various cutoff values, expressed as a percentage of the maximal pixel value, were applied. The average volume difference and Dice similarity coefficient (DSC) were calculated, measuring the similarity of the automatic segmentation and the reference standard. Pearson's correlation was also calculated to compare automated volumes with manual volumes, and automated volumes optimized to PFT indices. For ventilation volumes, mean volume difference was lowest (-0.4%) using a 15%max threshold, with a Pearson's coefficient of 0.71. Applying this cutoff, median DSC was 0.93 (0.87-0.95). Nevertheless, limits of agreement in volume differences were large (-31.0% and 30.2%), with differences ranging from -40.4% to +33.0%. For perfusion volumes, mean volume difference was lowest and Pearson's coefficient was highest using a 15%max threshold (3.3% and 0.81, respectively). Applying this cutoff, median DSC was 0.93 (0.88-0.93). Nevertheless, limits of agreement were again large (-21.1% and 27.8%), with volume differences ranging from -18.6% to +35.5%. Using the 15%max threshold, moderate correlation was demonstrated with FEV1/FVC (r = 0.48 and r = 0.46 for ventilation and perfusion images, respectively). No correlation was found with other PFT indices. To automatically delineate functional volumes with 68Ga-V/Q PET/CT, the most appropriate cutoff was 15%max for both ventilation and perfusion images. However, using this single threshold systematically provided unacceptable variability compared to the reference volume and relatively poor correlation with PFT parameters. Accordingly, a visually adapted semi-automatic method is favored, enabling rapid and quantitative delineation of lung functional volumes with 68Ga-V/Q PET/CT.
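
    The automated delineation step reduces to thresholding the PET volume at a fraction of its maximum uptake and comparing the result to a manual contour with the Dice coefficient. A minimal sketch, assuming a NumPy array for the PET volume and an optional lung mask (both illustrative assumptions):

```python
import numpy as np

def threshold_volume(pet, cutoff_fraction=0.15, lung_mask=None):
    """Binary functional volume: voxels at or above cutoff_fraction * max uptake.
    The 15% of maximum cutoff is the value reported in the abstract; the
    optional lung mask is an illustrative assumption."""
    img = np.asarray(pet, float)
    if lung_mask is not None:
        img = np.where(lung_mask, img, 0.0)
    return img >= cutoff_fraction * img.max()

def dice(a, b):
    """Dice similarity coefficient between two binary volumes."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Hypothetical usage (variable names are illustrative):
# auto = threshold_volume(ventilation_pet, 0.15)
# print(dice(auto, manual_contour), auto.sum() / manual_contour.sum() - 1.0)
```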

  4. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    PubMed

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescence. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
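
    EGT's threshold model itself is fitted empirically to the reference data set, but the overall pipeline, thresholding the gradient-magnitude image and cleaning up the resulting mask, can be sketched as below. The fixed percentile cutoff stands in for the empirically derived threshold and is an assumption for illustration only.

```python
import numpy as np
from scipy import ndimage

def gradient_threshold_segmentation(image, keep_percent=25, min_object_size=100):
    """Separate foreground from background by thresholding the gradient-magnitude
    image and cleaning up the mask. The fixed percentile stands in for EGT's
    empirically derived threshold model, which is fitted to a reference data set."""
    img = np.asarray(image, float)
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    mask = grad > np.percentile(grad, 100 - keep_percent)    # keep strongest gradients
    mask = ndimage.binary_fill_holes(mask)                   # close cell/colony interiors
    labels, n = ndimage.label(mask)                          # drop small specks
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return np.isin(labels, np.flatnonzero(sizes >= min_object_size) + 1)
```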

  5. Automated prediction of tissue outcome after acute ischemic stroke in computed tomography perfusion images

    NASA Astrophysics Data System (ADS)

    Vos, Pieter C.; Bennink, Edwin; de Jong, Hugo; Velthuis, Birgitta K.; Viergever, Max A.; Dankbaar, Jan Willem

    2015-03-01

    Assessment of the extent of cerebral damage on admission in patients with acute ischemic stroke could play an important role in treatment decision making. Computed tomography perfusion (CTP) imaging can be used to determine the extent of damage. However, clinical application is hindered by differences among vendors and methodologies. As a result, threshold-based methods and visual assessment of CTP images have not yet been shown to be useful in treatment decision making or in predicting clinical outcome. Preliminary results in MR studies have shown the benefit of using supervised classifiers for predicting tissue outcome, but this has not been demonstrated for CTP. We present a novel method for the automatic prediction of tissue outcome by combining multi-parametric CTP images into a tissue outcome probability map. A supervised classification scheme was developed to extract absolute and relative perfusion values from processed CTP images that are summarized by a trained classifier into a likelihood of infarction. Training was performed using follow-up CT scans of 20 acute stroke patients with complete recanalization of the vessel that was occluded on admission. Infarcted regions were annotated by expert neuroradiologists. Multiple classifiers were evaluated in a leave-one-patient-out strategy for their discriminating performance using receiver operating characteristic (ROC) statistics. Results showed that a random forest classifier performed optimally, with an area under the ROC of 0.90 for discriminating infarct tissue. The obtained results are an improvement over existing thresholding methods and are in line with results found in the literature where MR perfusion was used.
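
    A leave-one-patient-out evaluation of a random forest that maps voxel-wise perfusion features to infarction probability can be sketched as follows. The feature layout, hyperparameters and dictionary-based data structures are illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def leave_one_patient_out_auc(features_by_patient, labels_by_patient):
    """Leave-one-patient-out evaluation of a random forest mapping voxel-wise
    CTP features (e.g. absolute and relative perfusion values) to infarction
    probability. Assumes every held-out patient has both infarcted and
    non-infarcted voxels so that an AUC can be computed."""
    aucs = []
    patients = list(features_by_patient)
    for held_out in patients:
        X_train = np.vstack([features_by_patient[p] for p in patients if p != held_out])
        y_train = np.concatenate([labels_by_patient[p] for p in patients if p != held_out])
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)
        prob = clf.predict_proba(features_by_patient[held_out])[:, 1]
        aucs.append(roc_auc_score(labels_by_patient[held_out], prob))
    return float(np.mean(aucs))
```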

  6. Threshold behaviors of social dynamics and financial outcomes of Ponzi scheme diffusion in complex networks

    NASA Astrophysics Data System (ADS)

    Fu, Peihua; Zhu, Anding; Ni, He; Zhao, Xin; Li, Xiulin

    2018-01-01

    Ponzi schemes always lead to mass disasters after collapse. It is important to study the critical behaviors of both social dynamics and financial outcomes for Ponzi scheme diffusion in complex networks. We develop the potential-investor-divestor-investor (PIDI) model by considering the individual behavior of direct reinvestment. We find that only the spreading rate relates to the epidemic outbreak, while the reinvestment rate relates to the zero and non-zero final states of the social dynamics in both homogeneous and inhomogeneous networks. Financially, we find that there is a critical spreading threshold above which the scheme need not use its own initial capital to take off, i.e. the starting cost is covered by the rapidly inflowing funds. However, the higher the cost per recruit, the larger the critical spreading threshold and the worse the financial outcomes. Theoretical and simulation results also reveal that schemes take off more easily in inhomogeneous networks. The reinvestment rate does not affect take-off; however, it improves the financial outcome in the early stages and postpones the outbreak of financial collapse. Some policy suggestions for the regulator from the perspective of social physics are proposed at the end of the paper.
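
    The qualitative behavior described above can be explored with a minimal mean-field simulation of recruitment, exit and reinvestment alongside a simple cash-flow ledger. The compartments, rates and bookkeeping below are illustrative assumptions, not the PIDI model's exact equations.

```python
def simulate_scheme(beta=0.3, exit_rate=0.1, reinvest=0.2, deposit=1.0,
                    cost_per_recruit=0.05, steps=300, dt=0.1):
    """Mean-field sketch of scheme diffusion and cash flow: potential investors
    (P) are recruited at rate beta*P*I, investors (I) cash out at exit_rate,
    and a fraction `reinvest` of those re-enter immediately. All compartments,
    rates and bookkeeping are illustrative, not the PIDI equations."""
    P, I, D = 0.99, 0.01, 0.0          # potential investors, investors, divestors
    cash, trajectory = 0.0, []
    for _ in range(steps):
        new = beta * P * I * dt        # newly recruited investors this step
        leave = exit_rate * I * dt     # investors cashing out this step
        back = reinvest * leave        # divestors who reinvest directly
        P -= new
        I += new - leave + back
        D += leave - back
        # deposits flow in; recruiting costs and payouts flow out
        cash += deposit * (new + back) - cost_per_recruit * new - deposit * leave
        trajectory.append(cash)
    return trajectory

# If the balance dips below zero during the start-up phase, the scheme would
# have needed its own capital to take off.
for beta in (0.5, 0.05):
    startup_balance = simulate_scheme(beta=beta)[:100]
    print(beta, min(startup_balance))
```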

  7. Pilot study about dose-effect relationship of ocular injury in argon laser photocoagulation

    NASA Astrophysics Data System (ADS)

    Chen, P.; Zhang, C. P.; Fu, X. B.; Zhang, T. M.; Wang, C. Z.; Qian, H. W.; San, Q.

    2011-03-01

    The aim of this article was to study the injury effect of either a convergent or a parallel argon laser beam on the rabbit retina, obtain the dose-effect relationship for the two types of laser beam, and calculate the damage threshold of argon laser for the human retina. An argon laser therapeutic instrument for ophthalmology was used in this study. A total of 80 rabbit eyes were irradiated to produce 600 lesions, half of which were treated with the convergent laser beam and the other half with the parallel laser beam. After irradiation, a slit lamp microscope and fundus photography were used to observe the lesions; lesion changes and the incidence of injury were analysed statistically to obtain the damage threshold of the rabbit retina. Based on the results from the experiments on animals and the data from clinical cases of laser treatment, the photocoagulation damage thresholds of human retinas for convergent and parallel argon laser beams were calculated to be 0.464 and 0.285 mJ respectively. These data provide a biological reference for safe operation when employing laser photocoagulation in clinical practice and other fields.
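
    Damage thresholds of this kind are typically estimated by fitting a dose-response curve to binary lesion data and reading off the dose at 50% lesion probability (ED50). The abstract does not state the exact statistical procedure used, so the log-probit maximum-likelihood fit below, together with its exposure data, is an illustrative assumption only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_ed50(doses, lesion_observed):
    """Maximum-likelihood log-probit fit to binary lesion data; returns the
    ED50, the pulse energy at which a lesion is expected in half of exposures."""
    log_d = np.log(np.asarray(doses, float))
    y = np.asarray(lesion_observed, float)

    def neg_log_likelihood(params):
        mu, log_sigma = params
        p = norm.cdf((log_d - mu) / np.exp(log_sigma))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize(neg_log_likelihood, x0=[log_d.mean(), 0.0], method="Nelder-Mead")
    return float(np.exp(res.x[0]))

# Hypothetical exposures: pulse energy in mJ, 1 = visible retinal lesion
doses  = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.50, 0.60, 0.80]
lesion = [0,    0,    0,    0,    1,    0,    1,    1,    1,    1]
print(fit_ed50(doses, lesion))
```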

  8. Biological thresholds of nitrogen and phosphorus in a typical urban river system of the Yangtze delta, China.

    PubMed

    Liang, Xinqiang; Zhu, Sirui; Ye, Rongzhong; Guo, Ru; Zhu, Chunyan; Fu, Chaodong; Tian, Guangming; Chen, Yingxu

    2014-09-01

    River health and associated risks are fundamentally dependent on the levels of primary productivity, i.e., sestonic and benthic chlorophyll-a. We selected a typical urban river system of the Yangtze delta to investigate nutrient and non-nutrient responses of chlorophyll-a contents and to determine biological thresholds of N and P. Results showed the mean contents of sestonic and benthic chlorophyll-a across all sampling points reached 10.2 μg L(-1) and 149.3 mg m(-2). The self-organized mapping analysis suggested both chlorophyll-a contents clearly responded to measurements of N, P, and water temperature. Based on the chlorophyll-a criteria for fresh water and the measured variables, we recommend that the biological thresholds of N and P for our river system be set at 2.4 mg N L(-1) and 0.2 mg P L(-1), and that these be used as initial nutrient reference values for local river managers to implement appropriate strategies to alleviate nutrient loads and trophic status. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Counselling versus low-intensity cognitive behavioural therapy for persistent sub-threshold and mild depression (CLICD): a pilot/feasibility randomised controlled trial.

    PubMed

    Freire, Elizabeth; Williams, Christopher; Messow, Claudia-Martina; Cooper, Mick; Elliott, Robert; McConnachie, Alex; Walker, Andrew; Heard, Deborah; Morrison, Jill

    2015-08-15

    Persistent depressive symptoms below the threshold criteria for major depression represent a chronic condition with high risk of progression to a diagnosis of major depression. The evidence base for psychological treatments such as Person-Centred Counselling and Low-Intensity Cognitive Behavioural Therapy for sub-threshold depressive symptoms and mild depression is limited, particularly for longer-term outcomes. This study aimed to test the feasibility of delivering a randomised controlled trial into the clinical and cost effectiveness of Low-Intensity Cognitive Behavioural Therapy versus Person-Centred Counselling for patients with persistent sub-threshold depressive symptoms and mild depression. The primary outcome measures for this pilot/feasibility trial were recruitment, adherence and retention rates at six months from baseline. An important secondary outcome measure was recovery from, or prevention of, depression at six months assessed via a structured clinical interview by an independent assessor blind to the participant's treatment condition. Thirty-six patients were recruited in five general practices and were randomised to either eight weekly sessions of person-centred counselling each lasting up to an hour, or up to eight weeks of cognitive-behavioural self-help resources with guided telephone support sessions lasting 20-30 minutes each. Recruitment rate in relation to the number of patients approached at the general practices was 1.8 %. Patients attended an average of 5.5 sessions in both interventions. Retention rate for the 6-month follow-up assessments was 72.2 %. Of participants assessed at six months, 71.4 % of participants with a diagnosis of mild depression at baseline had recovered, while 66.7 % with a diagnosis of persistent subthreshold depression at baseline had not developed major depression. There were no significant differences between treatment groups for both recovery and prevention of depression at six months or on any of the outcome measures. It is feasible to recruit participants and successfully deliver both interventions in a primary care setting to patients with subthreshold and mild depression; however recruiting requires significant input at the general practices. The evidence from this study suggests that short-term Person-Centred Counselling and Low-Intensity Cognitive Behaviour Therapy are potentially effective and their effectiveness should be evaluated in a larger randomised controlled study which includes a health economic evaluation. Current Controlled Trials ISRCTN60972025 .

  10. Sub-threshold Post Traumatic Stress Disorder in the WHO World Mental Health Surveys

    PubMed Central

    McLaughlin, Katie A.; Koenen, Karestan C.; Friedman, Matthew J.; Ruscio, Ayelet Meron; Karam, Elie G.; Shahly, Victoria; Stein, Dan J.; Hill, Eric D.; Petukhova, Maria; Alonso, Jordi; Andrade, Laura Helena; Angermeyer, Matthias C.; Borges, Guilherme; de Girolamo, Giovanni; de Graaf, Ron; Demyttenaere, Koen; Florescu, Silvia E.; Mladenova, Maya; Posada-Villa, Jose; Scott, Kate M.; Takeshima, Tadashi; Kessler, Ronald C.

    2014-01-01

    Background Although only a minority of people exposed to a traumatic event (TE) develops PTSD, symptoms not meeting full PTSD criteria are common and often clinically significant. Individuals with these symptoms have sometimes been characterized as having sub-threshold PTSD, but no consensus exists on the optimal definition of this term. Data from a large cross-national epidemiological survey are used to provide a principled basis for such a definition. Methods The WHO World Mental Health (WMH) Surveys administered fully-structured psychiatric diagnostic interviews to community samples in 13 countries containing assessments of PTSD associated with randomly selected TEs. Focusing on the 23,936 respondents reporting lifetime TE exposure, associations of approximated DSM-5 PTSD symptom profiles with six outcomes (distress-impairment, suicidality, comorbid fear-distress disorders, PTSD symptom duration) were examined to investigate implications of different sub-threshold definitions. Results Although the highest levels of distress-impairment, suicidality, comorbidity, and symptom duration were consistently observed among the 3.0% of respondents with DSM-5 PTSD rather than other symptom profiles, the additional 3.6% of respondents meeting two or three of DSM-5 Criteria B-E also had significantly elevated scores for most outcomes. The proportion of cases with threshold versus sub-threshold PTSD varied depending on TE type, with threshold PTSD more common following interpersonal violence and sub-threshold PTSD more common following events happening to loved ones. Conclusions Sub-threshold DSM-5 PTSD is most usefully defined as meeting two or three of the DSM-5 Criteria B-E. Use of a consistent definition is critical to advance understanding of the prevalence, predictors, and clinical significance of sub-threshold PTSD. PMID:24842116

  11. Reporting individual surgeon outcomes does not lead to risk aversion in abdominal aortic aneurysm surgery.

    PubMed

    Saratzis, A; Thatcher, A; Bath, M F; Sidloff, D A; Bown, M J; Shakespeare, J; Sayers, R D; Imray, C

    2017-02-01

    INTRODUCTION Reporting surgeons' outcomes has recently been introduced in the UK. This has the potential to result in surgeons becoming risk-averse. The aim of this study was to investigate whether reporting outcomes for abdominal aortic aneurysm (AAA) surgery impacts on the number and risk profile (level of fitness) of patients offered elective treatment. METHODS Publicly available National Vascular Registry data were used to compare the number of AAAs treated in those centres across the UK that reported outcomes for the periods 2008-2012, 2009-2013 and 2010-2014. Furthermore, the number and characteristics of patients referred for consideration of elective AAA repair at a single tertiary unit were analysed yearly between 2010 and 2014. Clinic, casualty and theatre event codes were searched to obtain all AAAs treated. The results of cardiopulmonary exercise testing (CPET) were assessed. RESULTS For the 85 centres that reported outcomes in all three five-year periods, the median number of AAAs treated per unit increased between the periods 2008-2012 and 2010-2014 from 192 to 214 per year (p=0.006). In the single centre cohort study, the proportion of patients offered elective AAA repair increased from 74% in 2009-2010 to 81% in 2013-2014, with a maximum of 84% in 2012-2013. The age, aneurysm size and CPET results (anaerobic threshold levels) for those eventually offered elective treatment did not differ significantly between 2010 and 2014. CONCLUSIONS The results do not support the assumption that reporting individual surgeon outcomes is associated with a risk-averse strategy regarding patient selection in aneurysm surgery at present.

  12. Red blood cell transfusion for people undergoing hip fracture surgery.

    PubMed

    Brunskill, Susan J; Millette, Sarah L; Shokoohi, Ali; Pulford, E C; Doree, Carolyn; Murphy, Michael F; Stanworth, Simon

    2015-04-21

    The incidence of hip fracture is increasing and it is more common with increasing age. Surgery is used for almost all hip fractures. Blood loss occurs as a consequence of both the fracture and the surgery and thus red blood cell transfusion is frequently used. However, red blood cell transfusion is not without risks. Therefore, it is important to identify the evidence for the effective and safe use of red blood cell transfusion in people with hip fracture. To assess the effects (benefits and harms) of red blood cell transfusion in people undergoing surgery for hip fracture. We searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register (31 October 2014), the Cochrane Central Register of Controlled Trials (The Cochrane Library, 2014, Issue 10), MEDLINE (January 1946 to 20 November 2014), EMBASE (January 1974 to 20 November 2014), CINAHL (January 1982 to 20 November 2014), British Nursing Index Database (January 1992 to 20 November 2014), the Systematic Review Initiative's Transfusion Evidence Library, PubMed for e-publications, various other databases and ongoing trial registers. Randomised controlled trials comparing red blood cell transfusion versus no transfusion or an alternative to transfusion, different transfusion protocols or different transfusion thresholds in people undergoing surgery for hip fracture. Three review authors independently assessed each study's risk of bias and extracted data using a study-specific form. We pooled data where there was homogeneity in the trial comparisons and the timing of outcome measurement. We used GRADE criteria to assess the quality (low, moderate or high) of the evidence for each outcome. We included six trials (2722 participants): all compared two thresholds for red blood cell transfusion: a 'liberal' strategy to maintain a haemoglobin concentration of usually 10 g/dL versus a more 'restrictive' strategy based on symptoms of anaemia or a lower haemoglobin concentration, usually 8 g/dL. The exact nature of the transfusion interventions, types of surgery and participants varied between trials. The mean age of participants ranged from 81 to 87 years and approximately 24% of participants were men. The largest trial enrolled 2016 participants, over 60% of whom had a history of cardiovascular disease. The percentage of participants receiving a red blood cell transfusion ranged from 74% to 100% in the liberal transfusion threshold group and from 11% to 45% in the restrictive transfusion threshold group. There were no results available for the smallest trial (18 participants). All studies were at some risk of bias, in particular performance bias relating to the absence of blinding of personnel. We judged the evidence for all outcomes, except myocardial infarction, was low quality reflecting risk of bias primarily from imbalances in protocol violations in the largest trial and imprecision, often because of insufficient events. Thus, further research is likely to have an important impact on these results.There was no evidence of a difference between a liberal versus restricted threshold transfusion in mortality, at 30 days post hip fracture surgery (risk ratio (RR) 0.92, 95% confidence interval (CI) 0.67 to 1.26; five trials; 2683 participants; low quality evidence) or at 60 days post surgery (RR 1.08, 95% CI 0.80 to 1.44; three trials; 2283 participants; low quality evidence). 
Assuming an illustrative baseline risk of 50 deaths per 1000 participants in the restricted threshold group at 30 days, these data equate to four fewer (95% CI 17 fewer to 14 more) deaths per 1000 in the liberal threshold group at 30 days.There was no evidence of a difference between a liberal versus restricted threshold transfusion in functional recovery at 60 days, assessed in terms of the inability to walk 10 feet (3 m) without human assistance (RR 1.00, 95% CI 0.87 to 1.15; two trials; 2083 participants; low quality evidence).There was low quality evidence of no difference between the transfusion thresholds in postoperative morbidity for the following complications: thromboembolism (RR 1.15 favouring a restrictive threshold, 95% CI 0.56 to 2.37; four trials; 2416 participants), stroke (RR 2.40 favouring a restrictive threshold, 95% CI 0.85 to 6.79; four trials; 2416 participants), wound infection (RR 1.61 favouring a restrictive threshold, 95% CI 0.77 to 3.35; three trials; 2332 participants), respiratory infection (pneumonia) (RR 1.35 favouring a restrictive threshold, 95% CI 0.95 to 1.92; four trials; 2416 participants) and new diagnosis of congestive heart failure (RR 0.77 favouring a liberal threshold, 95% CI 0.48 to 1.23; three trials; 2332 participants). There was very low quality evidence of a lower risk of myocardial infarction in the liberal compared with the restrictive transfusion threshold group (RR 0.59, 95% CI 0.36 to 0.96; three trials; 2217 participants). Assuming an illustrative baseline risk of myocardial infarction of 24 per 1000 participants in the restricted threshold group, this result was compatible with between one and 15 fewer myocardial infarctions in the liberal threshold group. We found low quality evidence of no difference in mortality, functional recovery or postoperative morbidity between 'liberal' versus 'restrictive' thresholds for red blood cell transfusion in people undergoing surgery for hip fracture. Although further research may change the estimates of effect, the currently available evidence does not support the use of liberal red blood cell transfusion thresholds based on a 10 g/dL haemoglobin trigger in preference to more restrictive transfusion thresholds based on lower haemoglobin levels or symptoms of anaemia in these people. Future research needs to address the effectiveness of red blood cell transfusions at different time points in the surgical pathway, whether pre-operative, peri-operative or postoperative. In particular, such research would need to consider people who are symptomatic or haemodynamically unstable who were excluded from most of these trials.

  13. Qcritical as a Geomorphically and Biologically Relevant Flow Threshold for Stormwater Management and Catchment-scale Stream Restoration

    NASA Astrophysics Data System (ADS)

    Hawley, R. J.; Vietz, G. J.; Wooten, M. S.

    2016-12-01

    The threshold discharge that initiates streambed mobilization (Qcritical) is one of the most mechanistically-important flows for geomorphic function and biological integrity in stream ecosystems. Increased frequency and duration of flows that exceed Qcritical are a dominant driver of geomorphic instability and excess benthic disturbance in urban/suburban streams (i.e. the urban disturbance regime). Qcritical frequency also corresponds to measures of stream integrity in reference streams, with both geomorphic stability and biological indices significantly correlated to time since a Qcritical event in one 7-y study. Indeed, reference site macroinvertebrate communities during years with atypically frequent Qcritical events were more similar to sites draining watersheds with 30% imperviousness than to reference site communities of more typical rainfall years. Despite its biophysical relevance to stream ecosystems, Qcritical is one of the most overlooked and misunderstood flows in the stormwater management and stream restoration fields. Regional stormwater policies and stream restoration design guidance are often based on the misplaced assumption that streambed erosion does not occur at sub-bankfull events (often assumed to correspond to the 1-y recurrence discharge). Using an international database of nearly 200 sites we show that Qcritical varies by several orders of magnitude as a function of streambed particle size. Qcritical in sand-dominated streams is likely to be orders of magnitude less than the 1-yr discharge, whereas Qcritical in cobble/boulder dominated streams could be much larger than the 1-yr discharge, implying that stormwater/restoration policies focused on the 1-yr event could lack efficacy in many stream settings. Qcritical is a geomorphically- and biologically-relevant discharge threshold when developing stormwater management policies intended to protect streams from excess erosion, designing watershed-scale restoration efforts to restore a more natural disturbance regime, or reconstructing stream reaches designed to restore sediment continuity. Incorporation of Qcritical into such restoration and management efforts ensures that designs are actually tailored to the mechanisms that drive channel erosion and disturbance to the benthos.
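
    The order-of-magnitude dependence of Qcritical on bed particle size can be illustrated with a textbook incipient-motion calculation (Shields criterion plus Manning's equation). The abstract does not describe how Qcritical was computed for the database sites, so the formulation and all parameter values below are illustrative assumptions.

```python
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water/sediment density (kg/m3), gravity (m/s2)

def critical_discharge(d50_m, slope, width_m, manning_n=0.035, theta_c=0.045):
    """Order-of-magnitude Qcritical (m3/s) for a wide rectangular channel:
    Shields criterion for incipient motion of the median grain size, then
    Manning's equation at the depth where bed shear reaches the critical value."""
    tau_c = theta_c * (RHO_S - RHO_W) * G * d50_m      # critical shear stress (Pa)
    h_c = tau_c / (RHO_W * G * slope)                  # depth where rho*g*h*S = tau_c
    v = (1.0 / manning_n) * h_c ** (2.0 / 3.0) * slope ** 0.5
    return v * h_c * width_m

# Sand bed (0.5 mm) versus cobble bed (100 mm) at the same slope and width
print(critical_discharge(0.0005, slope=0.002, width_m=10.0))   # roughly 0.02 m3/s
print(critical_discharge(0.100, slope=0.002, width_m=10.0))    # roughly 100 m3/s
```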

  14. Postoperative hand therapy in Dupuytren's disease.

    PubMed

    Herweijer, Hester; Dijkstra, Pieter U; Nicolai, Jean-Philippe A; Van der Sluis, Corry K

    2007-11-30

    Postoperative hand therapy in patients after surgery for Dupuytren's contracture is common medical practice to improve outcomes. Until now, patients have been referred for postoperative hand rehabilitation on an empirical basis. To evaluate whether referral criteria after surgery for Dupuytren's disease were actually adhered to, and to analyse differences in outcomes between patients who were referred according to the criteria (correctly referred) and those who were not referred but should have been (incorrectly not referred). The referral pattern was evaluated prospectively in 46 patients. Total active/passive range of joint motion (TAM/TPM), sensibility, pinch force, the Disability Arm Shoulder Hand questionnaire (DASH) and the Michigan Hand Outcomes Questionnaire (MHQ) were used as outcome measures preoperatively and 10 months postoperatively. In total, 21 patients were referred correctly and 17 patients were incorrectly not referred. Significant improvements in TAM/TPM, DASH and MHQ were found at follow-up for the total group. No differences in outcomes were found between patients correctly referred and patients incorrectly not referred for postoperative hand therapy. Referral criteria were not adhered to. Given the lack of differences in outcomes between patients correctly referred and patients incorrectly not referred, postoperative hand therapy in Dupuytren's disease should be reconsidered.

  15. Multi-GHz Synchronous Waveform Acquisition With Real-Time Pattern-Matching Trigger Generation

    NASA Astrophysics Data System (ADS)

    Kleinfelder, Stuart A.; Chiang, Shiuh-hua Wood; Huang, Wei

    2013-10-01

    A transient waveform capture and digitization circuit with continuous synchronous 2-GHz sampling capability and real-time programmable windowed trigger generation has been fabricated and tested. Designed in 0.25 μm CMOS, the digitizer contains a circular array of 128 sample and hold circuits for continuous sample acquisition, and attains 2-GHz sample speeds with over 800-MHz analog bandwidth. Sample clock generation is synchronous, combining a phase-locked loop for high-speed clock generation and a high-speed fully-differential shift register for distributing clocks to all 128 sample circuits. Using two comparators per sample, the sampled voltage levels are compared against two reference levels, a high threshold and a low threshold, that are set via per-comparator digital to analog converters (DACs). The 256 per-comparator 5-bit DACs compensate for comparator offsets and allow for fine reference level adjustment. The comparator results are matched in 8-sample-wide windows against up to 72 programmable patterns in real time using an on-chip programmable logic array. Each 8-sample trigger window is equivalent to 4 ns of acquisition, overlapped sample by sample in a circular fashion through the entire 128-sample array. The 72 pattern-matching trigger criteria can be programmed to be any combination of High (above the high threshold), Low (below the low threshold), Middle (between the two thresholds), or “Don't Care” (any state is accepted). A trigger pattern of “HLHLHLHL,” for example, watches for a waveform that is oscillating at about 1 GHz given the 2-GHz sample rate. A trigger is flagged in under 20 ns if there is a match, after which sampling is stopped, and on-chip digitization can proceed via 128 parallel 10-bit converters, or off-chip conversion can proceed via an analog readout. The chip exceeds 11 bits of dynamic range, achieves over 800 MHz of -3 dB bandwidth in a realistic system, and jitter in the PLL-based sampling clock has been measured to be about 1 part per million, RMS.
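
    The windowed trigger logic, two comparators per sample producing High/Low/Middle states that are matched against programmable patterns with don't-care positions, can be mimicked in software as follows. This sketch operates on a linear sample buffer; the actual chip matches patterns circularly across its 128-sample array in hardware.

```python
from typing import List, Sequence

def comparator_states(samples: Sequence[float], lo: float, hi: float) -> str:
    """Map each sample to 'H' (above hi), 'L' (below lo) or 'M' (in between)."""
    return ''.join('H' if s > hi else 'L' if s < lo else 'M' for s in samples)

def pattern_trigger(samples: Sequence[float], lo: float, hi: float,
                    patterns: List[str], window: int = 8) -> int:
    """Return the index of the first window whose comparator states match any
    programmed pattern ('X' = don't care), or -1 if nothing matches."""
    states = comparator_states(samples, lo, hi)
    for i in range(len(states) - window + 1):
        win = states[i:i + window]
        if any(all(p in ('X', s) for p, s in zip(pat, win)) for pat in patterns):
            return i
    return -1

# A waveform oscillating near 1 GHz sampled at 2 GHz alternates H and L states
print(pattern_trigger([0.9, -0.9] * 8, lo=-0.5, hi=0.5, patterns=['HLHLHLHL']))  # -> 0
```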

  16. Psychophysics, reliability, and norm values for temporal contrast sensitivity implemented on the two alternative forced choice C-Quant device.

    PubMed

    van den Berg, Thomas J T P; Franssen, Luuk; Kruijt, Bastiaan; Coppens, Joris E

    2011-08-01

    The current paper describes the design and population testing of a flicker sensitivity assessment technique corresponding to the psychophysical approach for straylight measurement. The purpose is twofold: to check the subjects' capability to perform the straylight test and as a test for retinal integrity for other purposes. The test was implemented in the Oculus C-Quant straylight meter, using homemade software (MATLAB). The geometry of the visual field lay-out was identical, as was the subjects' 2AFC task. A comparable reliability criterion ("unc") was developed. Outcome measure was logTCS (temporal contrast sensitivity). The population test was performed in science fair settings on about 400 subjects. Moreover, 2 subjects underwent extensive tests to check whether optical defects, mimicked with trial lenses and scatter filters, affected the TCS outcome. Repeated measures standard deviation was 0.11 log units for the reference population. Normal values for logTCS were around 2 (threshold 1%) with some dependence on age (range 6 to 85 years). The test outcome did not change upon a tenfold (optical) deterioration in visual acuity or straylight. The test has adequate precision for checking a subject's capability to perform straylight assessment. The unc reliability criterion ensures sufficient precision, also for assessment of retinal sensitivity loss.

  17. Vibrant soundbridge in aural atresia: does severity matter?

    PubMed

    McKinnon, B J; Dumon, T; Hagen, R; Lesinskas, E; Mlynski, R; Profant, M; Spindel, J; Van Beek-King, J; Zernotti, M

    2014-07-01

    Congenital aural atresia (CAA) poses significant challenges to surgical remediation. Both bone anchored hearing aids (BAHA) and the Vibrant Soundbridge (VSB) have been considered as alternatives or adjuncts to conventional atresiaplasty. A consensus statement on VSB implantation in children and adolescents recommended against implantation when the Jahrsdoerfer score was less than 8. More recent publications suggest that patients with Jahrsdoerfer scores between 3 and 7 may benefit from VSB implantation. The purpose of this study was to further investigate the outcomes of VSB implantation in CAA. The study was a multi-center retrospective review of data (patient demographic, clinical, implant and audiological information) from four collaborating centers that have performed VSB implantation in CAA. Outcomes based on severity of the atresia using the Jahrsdoerfer and Yellon-Branstetter scoring systems were also evaluated. Data from 28 patients from the four centers revealed no iatrogenic facial nerve injuries or change in bone thresholds. Post-operative speech threshold and speech recognition were, respectively, 39 dB and 94%. Jahrsdoerfer and Yellon scores ranged from 4 to 9 and 4 to 12, respectively. The scores did not correlate with or predict outcomes. Three individual elements of the scores did correlate with initial, but not long-term, outcomes. Atresiaplasty and BAHA in the management of CAA are not complete solutions. VSB may offer an alternative in these surgically complex patients for achieving amplification, though better metrics for patient selection need to be developed. LEVEL OF EVIDENCE: IV.

  18. Evaluation of the 'Fitting to Outcomes eXpert' (FOX®) with established cochlear implant users.

    PubMed

    Buechner, Andreas; Vaerenberg, Bart; Gazibegovic, Dzemal; Brendel, Martina; De Ceulaer, Geert; Govaerts, Paul; Lenarz, Thomas

    2015-01-01

    To evaluate the possible impact of 'Fitting to Outcomes eXpert (FOX(®))' on cochlear implant (CI) fitting in a clinic with extensive experience of fitting a range of CI systems, as a way to assess whether a software tool such as FOX is able to complement standard clinical procedures. Ten adult post-lingually deafened and unilateral long-term users of the Advanced Bionics(TM) CI system (Clarion CII or HiRes 90K(TM)) underwent speech perception assessment with their current clinical program. One cycle 'iteration' of FOX optimization was performed and the program adjusted accordingly. After a month of using both clinical and FOX programs, a second iteration of FOX optimization was performed. Following this, the assessments were repeated without further acclimatization. FOX prescribed programming modifications in all subjects. Soundfield-aided thresholds were significantly lower for FOX than the clinical program. Group speech scores in noise were not significantly different between the two programs but three individual subjects had improved speech scores with the FOX MAP, two had worse speech scores, and five were the same. FOX provided a standardized approach to fitting based on outcome measures rather than comfort alone. The results indicated that for this group of well-fitted patients, FOX improved outcomes in some individuals. There were significant changes, both better and worse, in individual speech perception scores but median scores remained unchanged. Soundfield-aided thresholds were significantly improved for the FOX group.

  19. Standardised method of determining vibratory perception thresholds for diagnosis and screening in neurological investigation.

    PubMed Central

    Goldberg, J M; Lindblom, U

    1979-01-01

    Vibration threshold determinations were made by means of an electromagnetic vibrator at three sites (carpal, tibial, and tarsal), which were primarily selected for examining patients with polyneuropathy. Because of the vast variation demonstrated for both vibrator output and tissue damping, the thresholds were expressed in terms of amplitude of stimulator movement measured by means of an accelerometer, instead of applied voltage, which is commonly used. Statistical analysis revealed a higher power of discrimination for amplitude measurements at all three stimulus sites. Digital read-out gave the best statistical result and was also most practical. Reference values obtained from 110 healthy males, 10 to 74 years of age, were highly correlated with age for both upper and lower extremities. The variance of the vibration perception threshold was less than that of the disappearance threshold, and determination of the perception threshold alone may be sufficient in most cases. PMID:501379

  20. Comparison of tiered formularies and reference pricing policies: a systematic review

    PubMed Central

    Morgan, Steve; Hanley, Gillian; Greyson, Devon

    2009-01-01

    Objectives To synthesize methodologically comparable evidence from the published literature regarding the outcomes of tiered formularies and therapeutic reference pricing of prescription drugs. Methods We searched the following electronic databases: ABI/Inform, CINAHL, Clinical Evidence, Digital Dissertations & Theses, Evidence-Based Medicine Reviews (which incorporates ACP Journal Club, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Cochrane Methodology Register, Database of Abstracts of Reviews of Effectiveness, Health Technology Assessments and NHS Economic Evaluation Database), EconLit, EMBASE, International Pharmaceutical Abstracts, MEDLINE, PAIS International and PAIS Archive, and the Web of Science. We also searched the reference lists of relevant articles and several grey literature sources. We sought English-language studies published from 1986 to 2007 that examined the effects of either therapeutic reference pricing or tiered formularies, reported on outcomes relevant to patient care and cost-effectiveness, and employed quantitative study designs that included concurrent or historical comparison groups. We abstracted and assessed potentially appropriate articles using a modified version of the data abstraction form developed by the Cochrane Effective Practice and Organisation of Care Group. Results From an initial list of 2964 citations, 12 citations (representing 11 studies) were deemed eligible for inclusion in our review: 3 studies (reported in 4 articles) of reference pricing and 8 studies of tiered formularies. The introduction of reference pricing was associated with reduced plan spending, switching to preferred medicines, reduced overall drug utilization and short-term increases in the use of physician services. Reference pricing was not associated with adverse health impacts. The introduction of tiered formularies was associated with reduced plan expenditures, greater patient costs and increased rates of non-compliance with prescribed drug therapy. From the data available, we were unable to examine the hypothesis that tiered formulary policies result in greater use of physician services and potentially worse health outcomes. Conclusion The available evidence does not clearly differentiate between reference pricing and tiered formularies in terms of policy outcomes. Reference pricing appears to have a slight evidentiary advantage, given that patients’ health outcomes under tiered formularies have not been well studied and that tiered formularies are associated with increased rates of medicine discontinuation. PMID:21603047

  1. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements.

    PubMed

    Heil, Peter; Matysiak, Artur; Neubauer, Heinrich

    2017-09-01

    Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC task is assumed to choose the interval in which the greatest number of events occurred or randomly chooses among intervals which are tied for the greatest number of events. The subject is further assumed to count events over the duration of an evaluation interval that has the same timing and duration as the expected stimulus. The increase in the rate of the events caused by stimulation is proportional to the time-varying amplitude envelope of the bandpass-filtered signal raised to an exponent. We find the exponent to be about 3, consistent with our previous studies. This challenges models that are based on the assumption of the integration of a neural response that is directly proportional to the stimulus amplitude or proportional to its square (i.e., proportional to the stimulus intensity or power). Copyright © 2017 Elsevier B.V. All rights reserved.
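
    The proposed Poisson-counting model of the 3I-3AFC task can be simulated directly: event rates follow the stimulus envelope raised to an exponent of about 3, counts are accumulated over the evaluation interval, and the interval with the most events is chosen, with ties broken at random. The rate constants in the sketch below are illustrative; in the paper they are fitted to the threshold data.

```python
import numpy as np

def percent_correct_3afc(envelope, dt, base_rate=20.0, gain=3000.0, exponent=3.0,
                         n_trials=5000, rng=None):
    """Monte-Carlo percentage correct in a 3I-3AFC detection task under the
    Poisson-counting model: event rate = base_rate + gain * envelope**exponent,
    counts are accumulated over the evaluation interval, the interval with the
    most events is chosen, and ties are broken at random."""
    rng = rng or np.random.default_rng(0)
    env = np.asarray(envelope, float)
    driven = (base_rate + gain * env ** exponent) * dt   # expected events per bin
    quiet = np.full_like(env, base_rate * dt)
    correct = 0
    for _ in range(n_trials):
        counts = [rng.poisson(driven).sum(),   # signal interval
                  rng.poisson(quiet).sum(),    # noise interval
                  rng.poisson(quiet).sum()]    # noise interval
        winners = [i for i, c in enumerate(counts) if c == max(counts)]
        correct += rng.choice(winners) == 0
    return correct / n_trials

# Longer plateau bursts accumulate more events, so detectability rises with duration
for duration in (0.005, 0.02, 0.08):
    env = np.full(int(duration / 1e-4), 0.25)   # constant plateau, arbitrary amplitude
    print(duration, percent_correct_3afc(env, dt=1e-4))
```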

  2. Spatial early warning signals in a lake manipulation

    USGS Publications Warehouse

    Butitta, Vince L.; Carpenter, Stephen R.; Loken, Luke; Pace, Michael L.; Stanley, Emily H.

    2017-01-01

    Rapid changes in state have been documented for many of Earth's ecosystems. Despite a growing toolbox of methods for detecting declining resilience or early warning indicators (EWIs) of ecosystem transitions, these methods have rarely been evaluated in whole-ecosystem trials using reference ecosystems. In this study, we experimentally tested EWIs of cyanobacteria blooms based on changes in the spatial structure of a lake. We induced a cyanobacteria bloom by adding nutrients to an experimental lake and mapped fine-resolution spatial patterning of cyanobacteria using a mobile sensor platform. Prior to the bloom, we detected theoretically predicted spatial EWIs based on variance and spatial autocorrelation, as well as a new index based on the extreme values. Changes in EWIs were not discernible in an unenriched reference lake. Despite the fluid environment of a lake where spatial heterogeneity driven by biological processes may be overwhelmed by physical mixing, spatial EWIs detected an approaching bloom suggesting the utility of spatial metrics for signaling ecological thresholds.
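
    Spatial early-warning indicators of the kind used here, rising spatial variance, rising spatial autocorrelation and more extreme values, can be computed from a gridded concentration map as in the sketch below. The exact indicator definitions and the extreme-value index used in the study may differ from these generic choices.

```python
import numpy as np

def spatial_ewis(grid):
    """Spatial early-warning indicators from a 2-D concentration map: spatial
    variance, lag-1 spatial autocorrelation (a Moran's I with rook neighbours),
    and an extreme-value index (99th percentile relative to the median)."""
    z = np.asarray(grid, float)
    dev = z - z.mean()
    variance = dev.var()
    # average product of horizontally and vertically adjacent deviations
    num = (dev[:, :-1] * dev[:, 1:]).sum() + (dev[:-1, :] * dev[1:, :]).sum()
    n_pairs = dev[:, :-1].size + dev[:-1, :].size
    morans_i = (num / n_pairs) / variance
    extreme_index = np.percentile(z, 99) / np.median(z)
    return variance, morans_i, extreme_index

# The indicators would be tracked through time for the manipulated and the
# reference lake; sustained increases in the manipulated lake only would
# signal the approaching bloom.
```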

  3. Seizure disorders in 43 cattle.

    PubMed

    D'Angelo, A; Bellino, C; Bertone, I; Cagnotti, G; Iulini, B; Miniscalco, B; Casalone, C; Gianella, P; Cagnasso, A

    2015-01-01

    Large animals have a relatively high seizure threshold, and in most cases seizures are acquired. No published case series have described this syndrome in cattle. To describe clinical findings and outcomes in cattle referred to the Veterinary Teaching Hospital of the University of Turin (Italy) because of seizures. Client-owned cattle with documented evidence of seizures. Medical records of cattle with episodes of seizures reported between January 2002 and February 2014 were reviewed. Evidence of seizures was identified based on the evaluation of seizure episodes by the referring veterinarian or 1 of the authors. Animals were recruited if physical and neurologic examinations were performed and if diagnostic laboratory test results were available. Forty-three of 49 cases fulfilled the inclusion criteria. The mean age was 8 months. Thirty-one animals were male and 12 were female. Piedmontese breed accounted for 39/43 (91%) animals. Seizures were etiologically classified as reactive in 30 patients (70%) and secondary or structural in 13 (30%). Thirty-six animals survived, 2 died naturally, and 5 were euthanized for reasons of animal welfare. The definitive cause of reactive seizures was diagnosed as hypomagnesemia (n = 2), hypocalcemia (n = 12), and hypomagnesemia-hypocalcemia (n = 16). The cause of structural seizures was diagnosed as cerebrocortical necrosis (n = 8), inflammatory diseases (n = 4), and lead (Pb) intoxication (n = 1). The study results indicate that seizures largely are reported in beef cattle and that the cause can be identified and successfully treated in most cases. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  4. Developmental Changes during Childhood in Single-Letter Acuity and Its Crowding by Surrounding Contours

    ERIC Educational Resources Information Center

    Jeon, Seong Taek; Hamid, Joshua; Maurer, Daphne; Lewis, Terri L.

    2010-01-01

    Crowding refers to impaired target recognition caused by surrounding contours. We investigated the development of crowding in central vision by comparing single-letter and crowding thresholds in groups of 5-year-olds, 8-year-olds, 11-year-olds, and adults. The task was to discriminate the orientation of a Sloan letter E. Single-letter thresholds,…

  5. Gravity matters: Motion perceptions modified by direction and body position.

    PubMed

    Claassen, Jens; Bardins, Stanislavs; Spiegel, Rainer; Strupp, Michael; Kalla, Roger

    2016-07-01

    Motion coherence thresholds are consistently higher at lower velocities. In this study we analysed the influence of the position and direction of moving objects on their perception and thereby the influence of gravity. This paradigm allows a differentiation to be made between coherent and randomly moving objects in an upright and a reclining position with a horizontal or vertical axis of motion. 18 young healthy participants were examined in this coherent threshold paradigm. Motion coherence thresholds were significantly lower when position and motion were congruent with gravity independent of motion velocity (p=0.024). In the other conditions higher motion coherence thresholds (MCT) were found at lower velocities and vice versa (p<0.001). This result confirms previous studies with higher MCT at lower velocity but is in contrast to studies concerning perception of virtual turns and optokinetic nystagmus, in which differences of perception were due to different directions irrespective of body position, i.e. perception took place in an egocentric reference frame. Since the observed differences occurred in an upright position only, perception of coherent motion in this study is defined by an earth-centered reference frame rather than by an ego-centric frame. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Comparison of cytology, HPV DNA testing and HPV 16/18 genotyping alone or combined targeting to the more balanced methodology for cervical cancer screening.

    PubMed

    Chatzistamatiou, Kimon; Moysiadis, Theodoros; Moschaki, Viktoria; Panteleris, Nikolaos; Agorastos, Theodoros

    2016-07-01

    The objective of the present study was to identify the most effective cervical cancer screening algorithm incorporating different combinations of cytology, HPV testing and genotyping. Women 25-55 years old recruited for the "HERMES" (HEllenic Real life Multicentric cErvical Screening) study were screened in terms of cytology and high-risk (hr) HPV testing with HPV 16/18 genotyping. Women positive for cytology and/or hrHPV were referred for colposcopy, biopsy and treatment. Ten screening algorithms based on different combinations of cytology, HPV testing and HPV 16/18 genotyping were investigated in terms of diagnostic accuracy. Three clusters of algorithms were formed according to the balance between effectiveness and harm caused by screening. The cluster showing the best balance included two algorithms based on co-testing and two based on HPV primary screening with HPV 16/18 genotyping. Among these, hrHPV testing with HPV 16/18 genotyping and reflex cytology (atypical squamous cells of undetermined significance, ASCUS, threshold) presented the optimal combination of sensitivity (82.9%) and specificity relative to cytology alone (0.99), with a false-positive rate of 1.26 relative to cytology alone. HPV testing with HPV 16/18 genotyping, referring HPV 16/18 positive women directly to colposcopy, and hrHPV (non 16/18) positive women to reflex cytology (ASCUS threshold), as a triage method to colposcopy, reflects the best equilibrium between screening effectiveness and harm. Algorithms based on cytology as the initial screening method, on co-testing or primary HPV screening without genotyping, or on primary HPV screening with genotyping but without cytology triage are not supported according to the present analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Normative behavioral thresholds for short tone-bursts.

    PubMed

    Beattie, R C; Rochverger, I

    2001-10-01

    Although tone-bursts have been commonly used in auditory brainstem response (ABR) evaluations for many years, national standards describing normal calibration values have not been established. This study was designed to gather normative threshold data to establish a physical reference for tone-burst stimuli that can be reproduced across clinics and laboratories. More specifically, we obtained norms for 3-msec tone-bursts presented at two repetition rates (9.3/sec and 39/sec), two gating functions (Trapezoid and Blackman), and four frequencies (500, 1000, 2000, and 4000 Hz). Our results are specified using three physical references: dB peak sound pressure level, dB peak-to-peak equivalent sound pressure level, and dB SPL (fast meter response, rate = 50 stimuli/sec). These data are offered for consideration when calibrating ABR equipment. The 39/sec stimulus rate yielded tone-burst thresholds that were approximately 3 dB lower than the 9.3/sec rate. The improvement in threshold with increasing stimulus rate may reflect the ability of the auditory system to integrate energy that occurs within a time interval of 200 to 500 msec (temporal integration). The Trapezoid gating function yielded thresholds that averaged 1.4 dB lower than the Blackman function. Although these differences are small and of little clinical importance, the cumulative effects of several instrument and/or procedural variables may yield clinically important differences.

  8. Consciousness Indexing and Outcome Prediction with Resting-State EEG in Severe Disorders of Consciousness.

    PubMed

    Stefan, Sabina; Schorr, Barbara; Lopez-Rolon, Alex; Kolassa, Iris-Tatjana; Shock, Jonathan P; Rosenfelder, Martin; Heck, Suzette; Bender, Andreas

    2018-04-17

    We applied the following methods to resting-state EEG data from patients with disorders of consciousness (DOC) for consciousness indexing and outcome prediction: microstates, entropy (i.e. approximate, permutation), power in alpha and delta frequency bands, and connectivity (i.e. weighted symbolic mutual information, symbolic transfer entropy, complex network analysis). Patients with unresponsive wakefulness syndrome (UWS) and patients in a minimally conscious state (MCS) were classified into these two categories by fitting and testing a generalised linear model. We aimed subsequently to develop an automated system for outcome prediction in severe DOC by selecting an optimal subset of features using sequential floating forward selection (SFFS). The two outcome categories were defined as UWS or dead, and MCS or emerged from MCS. Percentage of time spent in microstate D in the alpha frequency band performed best at distinguishing MCS from UWS patients. The average clustering coefficient obtained from thresholding beta coherence performed best at predicting outcome. The optimal subset of features selected with SFFS consisted of the frequency of microstate A in the 2-20 Hz frequency band, path length obtained from thresholding alpha coherence, and average path length obtained from thresholding alpha coherence. Combining these features seemed to afford high prediction power. Python and MATLAB toolboxes for the above calculations are freely available under the GNU public license for non-commercial use ( https://qeeg.wordpress.com ).
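
    As an illustration of the feature-selection step described above, the sketch below implements a simplified sequential floating forward selection (SFFS) loop around a cross-validated logistic regression (a generalised linear model). The data, feature count and scoring choices are assumptions, not the authors' pipeline; off-the-shelf implementations (e.g. mlxtend's SequentialFeatureSelector) offer the same idea.

    ```python
    # Simplified SFFS sketch (assumptions, not the authors' code): forward steps add
    # the best feature by cross-validated accuracy; a floating (conditional
    # exclusion) step removes a feature only if the smaller subset beats the best
    # score previously recorded at that size.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def cv_score(X, y, cols):
        return cross_val_score(LogisticRegression(max_iter=1000), X[:, cols], y, cv=5).mean()

    def sffs(X, y, k):
        selected, best_by_size = [], {}
        while len(selected) < k:
            remaining = [j for j in range(X.shape[1]) if j not in selected]
            score, j = max((cv_score(X, y, selected + [j]), j) for j in remaining)
            selected.append(j)
            best_by_size[len(selected)] = max(score, best_by_size.get(len(selected), -np.inf))
            improved = True
            while improved and len(selected) > 2:
                improved = False
                for r in list(selected):
                    trial = [f for f in selected if f != r]
                    s = cv_score(X, y, trial)
                    if s > best_by_size.get(len(trial), -np.inf):
                        selected, best_by_size[len(trial)], improved = trial, s, True
                        break
        return selected, cv_score(X, y, selected)

    # Random stand-ins for resting-state EEG features (microstate statistics,
    # spectral power, connectivity measures) and a binary outcome label.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 12))
    y = rng.integers(0, 2, size=60)
    features, score = sffs(X, y, k=3)
    print("selected feature indices:", features, "| cv accuracy:", round(score, 2))
    ```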

  9. Changes to Hearing Levels Over the First Year After Stapes Surgery: An Analysis of 139 Patients.

    PubMed

    Nash, Robert; Patel, Bhavesh; Lavy, Jeremy

    2018-06-15

    Stapes surgery is performed for hearing restoration in patients with otosclerosis. Results from stapes surgery are good, although a small proportion of patients will have a persistent conductive hearing loss and will consider revision surgery. The timing of such surgery depends on expected changes to hearing thresholds during the postoperative period. We performed a retrospective case series analysis of a database of outcomes from stapes surgery performed between July 26, 2013 and March 11, 2016 at one center. Hearing outcomes over the year subsequent to surgery were recorded. There was a significant improvement in hearing outcomes between the postoperative visit at 6 weeks (mean air-bone gap 6.0 dB) and the hearing outcome at 6 months (mean air-bone gap 3.3 dB) (p < 0.01). This improvement was maintained at 12 months (mean air-bone gap 3.1 dB), although there were individual patients whose hearing outcome improved or deteriorated during this period. Improvements in air conduction thresholds mirrored improvements in air-bone gap measurements. Patients with an initial suboptimal or poor result after stapes surgery may observe improvement in their hearing thresholds in the year after surgery. These patients may have large preoperative air-bone gaps and tend to have obliterated footplates. Revision surgery should not be considered until at least 6 months after primary surgery.

  10. Predictive performance of rainfall thresholds for shallow landslide triggering in Switzerland from daily gridded precipitation data

    NASA Astrophysics Data System (ADS)

    Leonarduzzi, E.; Molnar, P.; McArdell, B. W.

    2017-12-01

    In Switzerland, floods are responsible for most of the damage caused by rainfall-triggered natural hazards (89%), followed by landslides (6%, almost 600 M USD), as reported in Hilker et al. (2009) for the period 1972-2007. A high-resolution gridded daily precipitation dataset is combined with a landslide inventory containing over 2000 events in the period 1972-2012 to analyze rainfall thresholds that lead to landsliding in Switzerland. First, triggering rainfall and landslides are co-located to obtain the distributions of triggering and non-triggering rainfall event properties at the scale of the precipitation data (2 × 2 km²), considering 1 day as the interarrival time to separate events. Then, rainfall thresholds are obtained by maximizing true positives (correct predictions) while minimizing false positives (false alarms), using the True Skill Statistic. The best predictive performance is obtained by the intensity-duration (ID) threshold curve, followed by peak daily intensity (Imax) and mean event intensity (Imean). Event duration by itself has very low predictive power. In addition to country-wide thresholds, local ones are also defined by regionalization based on surface erodibility and local long-term climate (mean daily precipitation). Different Imax thresholds are determined for each of the regions separately. It is found that wetter local climate and lower erodibility lead to significantly higher rainfall thresholds required to trigger landslides. However, the improvement in model performance due to regionalization is marginal and much lower than what can be achieved by having a high-quality landslide database. In order to validate the performance of the Imax rainfall threshold model, reference cases are presented in which the landslide locations and timing are randomized and the landslide sample size is reduced. Jack-knife and cross-validation experiments demonstrate that the model is robust. The results highlight the potential of using rainfall I-D threshold curves and Imax threshold values for predicting the occurrence of landslides on a country or regional scale even with daily precipitation data, with possible applications in landslide warning systems.
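
    The threshold-selection step lends itself to a compact illustration. The sketch below (synthetic intensities, not the Swiss dataset) sweeps candidate Imax values and keeps the one maximising the True Skill Statistic, TSS = hit rate - false alarm rate.

    ```python
    # Hedged sketch: pick a peak-daily-intensity (Imax) threshold by maximising the
    # True Skill Statistic over synthetic triggering / non-triggering rainfall events.
    import numpy as np

    rng = np.random.default_rng(1)
    trig = rng.gamma(shape=4.0, scale=15.0, size=300)       # Imax of landslide-triggering events (mm/day), invented
    non_trig = rng.gamma(shape=2.0, scale=8.0, size=5000)   # Imax of non-triggering events (mm/day), invented

    best_tss, best_thr = -1.0, None
    for thr in np.linspace(1.0, 150.0, 300):
        hit_rate = np.mean(trig >= thr)          # true positive rate
        false_alarm = np.mean(non_trig >= thr)   # false positive rate
        tss = hit_rate - false_alarm
        if tss > best_tss:
            best_tss, best_thr = tss, thr

    print(f"best Imax threshold ~ {best_thr:.0f} mm/day, TSS = {best_tss:.2f}")
    ```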

  11. Cornea nerve fiber quantification and construction of phenotypes in patients with fibromyalgia

    PubMed Central

    Oudejans, Linda; He, Xuan; Niesters, Marieke; Dahan, Albert; Brines, Michael; van Velzen, Monique

    2016-01-01

    Cornea confocal microscopy (CCM) is a novel non-invasive method to detect small nerve fiber pathology. CCM generally correlates with outcomes of skin biopsies in patients with small fiber pathology. The aim of this study was to quantify the morphology of small nerve fibers of the cornea of patients with fibromyalgia in terms of density, length and branching, and to further phenotype these patients using standardized quantitative sensory testing (QST). Small fiber pathology was detected in the cornea of 51% of patients: nerve fiber length was significantly decreased in 44% of patients compared to age- and sex-matched reference values; nerve fiber density and branching were significantly decreased in 10% and 28% of patients, respectively. The combination of the CCM parameters and sensory tests for central sensitization (cold pain threshold, mechanical pain threshold, mechanical pain sensitivity, allodynia and/or windup) yielded four phenotypes of fibromyalgia patients in a subgroup analysis: two groups with normal cornea morphology, without and with signs of central sensitization, and two groups with abnormal cornea morphology, without and with signs of central sensitization. In conclusion, half of the tested fibromyalgia population demonstrates signs of small fiber pathology as measured by CCM. The four distinct phenotypes suggest possible differences in disease mechanisms and may require different treatment approaches. PMID:27006259

  12. A cost-effectiveness threshold analysis of a multidisciplinary structured educational intervention in pediatric asthma.

    PubMed

    Rodriguez-Martinez, Carlos E; Sossa-Briceño, Monica P; Castro-Rodriguez, Jose A

    2018-05-01

    Asthma educational interventions have been shown to improve several clinically and economically important outcomes. However, these interventions are costly in themselves and could lead to even higher disease costs. A cost-effectiveness threshold analysis would be helpful in determining the cost below which such educational interventions remain cost-effective. The aim of the present study was to perform a cost-effectiveness threshold analysis to determine the level at which the cost of a pediatric asthma educational intervention would be cost-effective and cost-saving. A Markov-type model was developed in order to estimate costs and health outcomes of a simulated cohort of pediatric patients with persistent asthma treated over a 12-month period. Effectiveness parameters were obtained from a single uncontrolled before-and-after study performed with Colombian asthmatic children. Cost data were obtained from official databases provided by the Colombian Ministry of Health. The main outcome was the variable "quality-adjusted life-years" (QALYs). A deterministic threshold sensitivity analysis showed that the asthma educational intervention will be cost-saving to the health system if its cost is under US$513.20. Additionally, the analysis showed that the cost of the intervention would have to be below US$967.40 in order to be cost-effective. This study identified the level at which the cost of a pediatric asthma educational intervention will be cost-effective and cost-saving for the health system in Colombia. Our findings could be a useful aid for decision makers in efficiently allocating limited resources when planning asthma educational interventions for pediatric patients.
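
    The threshold logic itself is simple to show directly. In the sketch below the QALY gain, downstream savings and willingness-to-pay value are invented placeholders (picked only so the derived thresholds line up with the US$513.20 and US$967.40 figures above); it is not the published Markov model.

    ```python
    # Hedged sketch of a cost-effectiveness threshold analysis for an intervention
    # that gains QALYs and averts downstream costs. All inputs are assumptions.
    incremental_qalys  = 0.012      # QALYs gained per child over 12 months (hypothetical)
    downstream_savings = 513.20     # averted disease costs per child, US$ (hypothetical)
    wtp_per_qaly       = 37_850.0   # willingness-to-pay per QALY, US$ (hypothetical)

    cost_saving_threshold    = downstream_savings
    cost_effective_threshold = downstream_savings + wtp_per_qaly * incremental_qalys

    print(f"cost-saving if the intervention costs less than US${cost_saving_threshold:,.2f}")
    print(f"cost-effective if the intervention costs less than US${cost_effective_threshold:,.2f}")
    ```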

  13. Is there a threshold level of maternal education sufficient to reduce child undernutrition? Evidence from Malawi, Tanzania and Zimbabwe.

    PubMed

    Makoka, Donald; Masibo, Peninah Kinya

    2015-08-22

    Maternal education is strongly associated with young child nutrition outcomes. However, the threshold level of maternal education that reduces undernutrition in children is not well established. This paper investigates the threshold level of maternal education that influences child nutrition outcomes using Demographic and Health Survey data from Malawi (2010), Tanzania (2009-10) and Zimbabwe (2005-06). The total number of children (weighted sample) was 4,563 in the Malawi survey, 4,821 in the Tanzania survey, and 3,473 in the Zimbabwe survey. Using three measures of child nutritional status (stunting, wasting and underweight), we employ a survey logistic regression to analyse the influence of various levels of maternal education on child nutrition outcomes. In Malawi, 45% of the children were stunted, compared with 42% in Tanzania and 33% in Zimbabwe. Twelve percent of children were underweight in Malawi and Zimbabwe, and 16% in Tanzania. The prevalence of wasting was 6% in Malawi, 5% in Tanzania and 4% in Zimbabwe. Stunting was significantly (p values < 0.0001) associated with mother's educational level in all three countries. Higher levels of maternal education reduced the odds of child stunting, underweight and wasting in the three countries. The maternal education threshold for stunting is more than ten years of schooling; wasting and underweight have lower threshold levels. These results imply that free primary education in the three African countries may not be sufficient, and that policies to keep girls in school beyond primary school hold more promise for addressing child undernutrition.

  14. Clinical course and prognosis of musculoskeletal pain in patients referred for physiotherapy: does pain site matter?

    PubMed

    de Vos Andersen, Nils-Bo; Kent, Peter; Hjort, Jakob; Christiansen, David Høyrup

    2017-03-29

    Danish patients with musculoskeletal disorders are commonly referred for primary care physiotherapy treatment but little is known about their general health status, pain diagnoses, clinical course and prognosis. The objectives of this study were to 1) describe the clinical course of patients with musculoskeletal disorders referred to physiotherapy, 2) identify predictors associated with a satisfactory outcome, and 3) determine the influence of the primary pain site diagnosis relative to those predictors. This was a prospective cohort study of patients (n = 2,706) newly referred because of musculoskeletal pain to 30 physiotherapy practices from January 2012 to May 2012. Data were collected via a web-based questionnaire 1-2 days prior to the first physiotherapy consultation and at 6 weeks, 3 and 6 months, from clinical records (including primary musculoskeletal symptom diagnosis based on the ICPC-2 classification system), and from national registry data. The main outcome was the Patient Acceptable Symptom State. Potential predictors were analysed using backwards step-wise selection during longitudinal Generalised Estimating Equation regression modelling. To assess the influence of pain site on these associations, primary pain site diagnosis was added to the model. Of the patients included, 66% were female and the mean age was 48 (SD 15). The percentage of patients reporting their symptoms as acceptable was 32% at 6 weeks, 43% at 3 months and 52% at 6 months. A higher probability of satisfactory outcome was associated with place of residence, being retired, no compensation claim, less frequent pain, shorter duration of pain, lower levels of disability and fear avoidance, better mental health and being a non-smoker. Primary pain site diagnosis had little influence on these associations, and was not predictive of a satisfactory outcome. Only half of the patients rated their symptoms as acceptable at 6 months. Although satisfactory outcome was difficult to predict at an individual patient level, there were a number of prognostic factors that were associated with this outcome. These factors should be considered when developing generic prediction tools to assess the probability of satisfactory outcome in musculoskeletal physiotherapy patients, because the site of pain did not affect that prognostic association.

  15. What is the optimal rate of caesarean section at population level? A systematic review of ecologic studies.

    PubMed

    Betran, Ana Pilar; Torloni, Maria Regina; Zhang, Jun; Ye, Jiangfeng; Mikolajczyk, Rafael; Deneux-Tharaux, Catherine; Oladapo, Olufemi Taiwo; Souza, João Paulo; Tunçalp, Özge; Vogel, Joshua Peter; Gülmezoglu, Ahmet Metin

    2015-06-21

    In 1985, WHO stated that there was no justification for caesarean section (CS) rates higher than 10-15% at the population level. While CS rates worldwide have continued to increase in an unprecedented manner over the subsequent three decades, concern has been raised about the validity of the 1985 landmark statement. We conducted a systematic review to identify, critically appraise and synthesize the analyses of the ecologic association between CS rates and maternal, neonatal and infant outcomes. Four electronic databases were searched for ecologic studies published between 2000 and 2014 that analysed the possible association between CS rates and maternal, neonatal or infant mortality or morbidity. Two reviewers performed study selection, data extraction and quality assessment independently. We identified 11,832 unique citations and eight studies were included in the review. Seven studies correlated CS rates with maternal mortality, five with neonatal mortality, four with infant mortality, two with low birthweight (LBW) and one with stillbirths. Except for one, all studies were cross-sectional in design and five were global analyses of national-level CS rates versus mortality outcomes. Although the overall quality of the studies was acceptable, only two studies controlled for socio-economic factors and none controlled for clinical or demographic characteristics of the population. In unadjusted analyses, authors found a strong inverse relationship between CS rates and the mortality outcomes, so that maternal, neonatal and infant mortality decrease as CS rates increase up to a certain threshold. In the eight studies included in this review, this threshold was at CS rates between 9 and 16%. However, in the two studies that adjusted for socio-economic factors, this relationship was either weakened or disappeared after controlling for these confounders. CS rates above the threshold of 9-16% were not associated with decreases in mortality outcomes regardless of adjustments. Our findings could be interpreted to mean that at CS rates below this threshold, socio-economic development may be driving the ecologic association between CS rates and mortality. On the other hand, at rates higher than this threshold, there is no association between CS and mortality outcomes regardless of adjustment. The ecological association between CS rates and relevant morbidity outcomes needs to be evaluated before drawing more definite conclusions at population level.

  16. 76 FR 34017 - Claims for Credit or Refund

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-10

    ... revises the reference in Sec. 301.6402-4 to the Joint Committee on Taxation threshold referral amount... the Joint Committee on Taxation regarding specified types of refunds or credits in excess of a..., 90 Stat. 1520, 1835, amended section 6405 to reference the ``Joint Committee on Taxation,'' instead...

  17. Study of blur discrimination for 3D stereo viewing

    NASA Astrophysics Data System (ADS)

    Subedar, Mahesh; Karam, Lina J.

    2014-03-01

    Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination was studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on the blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case where both the eyes will observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as we vary the disparity value. This further indicates that binocular disparity does not affect blur discrimination thresholds and the models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D blur discrimination thresholds. We have presented fitting of the Weber model to the 3D blur discrimination thresholds measured from the subjective experiments.
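
    One way to summarise such data is a Weber-type fit of discrimination threshold against reference blur. The sketch below assumes a linear Weber model with an intrinsic-blur offset and invented data points; it is not the authors' analysis.

    ```python
    # Hedged sketch: fit delta_b = w * (b_ref + b0) to blur discrimination
    # thresholds measured at several reference blur levels (values invented).
    import numpy as np
    from scipy.optimize import curve_fit

    def weber(b_ref, w, b0):
        return w * (b_ref + b0)

    b_ref  = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # reference blur (arbitrary units)
    thresh = np.array([0.30, 0.38, 0.52, 0.85, 1.55])   # discrimination thresholds (invented)

    (w, b0), _ = curve_fit(weber, b_ref, thresh, p0=[0.3, 1.0])
    print(f"Weber fraction w = {w:.2f}, intrinsic blur b0 = {b0:.2f}")
    ```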

  18. The Role of Problem-Based Learning in the Enhancement of Allied Health Education.

    ERIC Educational Resources Information Center

    Tavakol, Kamran; Reicherter, E. Anne

    2003-01-01

    Analyzes the literature on problem-based learning (PBL) and explains its rationale, process, and current outcomes research. Cites examples of PBL in medical education and its application to allied health education. (Contains 49 references.) (JOW)

  19. On the minimal risk threshold in research with children.

    PubMed

    Binik, Ariella

    2014-01-01

    To protect children in research, procedures that are not administered in the medical interests of a child must be restricted. The risk threshold for these procedures is generally measured according to the concept of minimal risk. Minimal risk is often defined according to the risks of "daily life." But it is not clear whose daily life should serve as the baseline; that is, it is not clear to whom minimal risk should refer. Commentators in research ethics often argue that "minimal risk" should refer to healthy children or the subjects of the research. I argue that neither of these interpretations is successful. I propose a new interpretation in which minimal risk refers to children who are not unduly burdened by their daily lives. I argue that children are not unduly burdened when they fare well, and I defend a substantive goods account of children's welfare.

  20. The effect of minimally invasive posterior cervical approaches versus open anterior approaches on neck pain and disability

    PubMed Central

    Steinberg, Jeffrey A.; German, John W.

    2012-01-01

    Background The choice of surgical approach to the cervical spine may have an influence on patient outcome, particularly with respect to future neck pain and disability. Some surgeons suggest that patients with myelopathy or radiculopathy and significant axial pain should be treated with an anterior interbody fusion because a posterior decompression alone may exacerbate the patients’ neck pain. To date, the effect of a minimally invasive posterior cervical decompression approach (miPCD) on neck pain has not been compared with that of an anterior cervical diskectomy or corpectomy with interbody fusion (ACF). Methods A retrospective review was undertaken of 63 patients undergoing either an miPCD (n = 35) or ACF (n = 28) for treatment of myelopathy or radiculopathy who had achieved a minimum of 6 months’ follow-up. Clinical outcomes were assessed by a patient-derived neck visual analog scale (VAS) score and the neck disability index (NDI). Outcomes were analyzed by use of (1) a threshold in which outcomes were classified as success (NDI < 40, VAS score < 4.0) or failure (NDI > 40, VAS score > 4.0) and (2) perioperative change in which outcomes were classified as success (ΔNDI ≥ – 15, ΔVAS score ≥ – 2.0) or failure (ΔNDI < – 15, ΔVAS score < –2.0). Groups were compared by use of χ2 tests with significance taken at P < .05. Results At last follow-up, the percentages of patients classified as successful using the perioperative change criteria were as follows: 42% for miPCD group versus 63% for ACF group based on neck VAS score (P = not significant [NS]) and 33% for miPCD group versus 50% for ACF group based on NDI (P < .05). At last follow-up, the percentages of patients classified as successful using the threshold criteria were as follows: 71% for miPCD group versus 82% for ACF group based on neck VAS score (P = NS) and 69% for miPCD group versus 68% for ACF group based on NDI (P = NS). Conclusions In this small retrospective analysis, miPCD was associated with similar neck pain and disability to ACF. Given the avoidance of cervical instrumentation and interbody fusion in the miPCD group, these results suggest that further comparative effectiveness study is warranted. PMID:25694872

  1. Defining a Hospital Volume Threshold for Minimally Invasive Pancreaticoduodenectomy in the United States

    PubMed Central

    Adam, Mohamed Abdelgadir; Thomas, Samantha; Youngwirth, Linda; Pappas, Theodore; Roman, Sanziana A.

    2016-01-01

    Importance There is increasing interest in expanding use of minimally invasive pancreaticoduodenectomy (MIPD). This procedure is complex, with data suggesting a significant association between hospital volume and outcomes. Objective To determine whether there is an MIPD hospital volume threshold for which patient outcomes could be optimized. Design, Setting, and Participants Adult patients undergoing MIPD were identified from the Healthcare Cost and Utilization Project National Inpatient Sample from 2000 to 2012. Multivariable models with restricted cubic splines were used to identify a hospital volume threshold by plotting annual hospital volume against the adjusted odds of postoperative complications. The current analysis was conducted on August 16, 2016. Main Outcomes and Measures Incidence of any complication. Results Of the 865 patients who underwent MIPD, 474 (55%) were male and the median patient age was 67 years (interquartile range, 59-74 years). Among the patients, 747 (86%) had cancer and 91 (11%) had benign conditions/pancreatitis. Overall, 410 patients (47%) had postoperative complications and 31 (4%) died in-hospital. After adjustment for demographic and clinical characteristics, increasing hospital volume was associated with reduced complications (overall association P < .001); the likelihood of experiencing a complication declined as hospital volume increased up to 22 cases per year (95% CI, 21-23). Median hospital volume was 6 cases per year (range, 1-60). Most patients (n = 717; 83%) underwent the procedure at low-volume (≤22 cases per year) hospitals. After adjustment for patient mix, undergoing MIPD at low- vs high-volume hospitals was significantly associated with increased odds for postoperative complications (odds ratio, 1.74; 95% CI, 1.03-2.94; P = .04). Conclusions and Relevance Hospital volume is significantly associated with improved outcomes from MIPD, with a threshold of 22 cases per year. Most patients undergo MIPD at low-volume hospitals. Protocols outlining minimum procedural volume thresholds should be considered to facilitate safer dissemination of MIPD. PMID:28030713
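
    A rough sketch of the spline-based threshold idea is shown below, on synthetic data rather than the National Inpatient Sample: the adjusted probability of complication is modelled as a restricted cubic spline of annual hospital volume, and the threshold is read off where the fitted curve flattens. The variable names, knot count and flattening rule are assumptions.

    ```python
    # Hedged sketch: logistic model with a restricted cubic spline of hospital
    # volume (patsy's cr() inside a statsmodels formula), on synthetic data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 865
    data = pd.DataFrame({
        "volume": rng.integers(1, 61, n).astype(float),   # annual hospital case volume
        "age": rng.normal(67, 9, n),
    })
    # Synthetic truth: risk falls with volume up to ~22 cases/year, then flattens.
    risk = np.clip(0.62 - 0.012 * np.minimum(data["volume"], 22) + 0.002 * (data["age"] - 67), 0.05, 0.95)
    data["complication"] = rng.binomial(1, risk)

    model = smf.logit("complication ~ cr(volume, df=4) + age", data=data).fit(disp=0)

    grid = pd.DataFrame({"volume": np.arange(1.0, 61.0), "age": 67.0})
    pred = np.asarray(model.predict(grid))
    threshold = int(grid["volume"][pred <= pred.min() + 0.01].iloc[0])   # first volume near the plateau
    print("approximate hospital volume threshold:", threshold, "cases/year")
    ```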

  2. A multicentre randomised controlled trial of Transfusion Indication Threshold Reduction on transfusion rates, morbidity and health-care resource use following cardiac surgery (TITRe2).

    PubMed

    Reeves, Barnaby C; Pike, Katie; Rogers, Chris A; Brierley, Rachel Cm; Stokes, Elizabeth A; Wordsworth, Sarah; Nash, Rachel L; Miles, Alice; Mumford, Andrew D; Cohen, Alan; Angelini, Gianni D; Murphy, Gavin J

    2016-08-01

    Uncertainty about optimal red blood cell transfusion thresholds in cardiac surgery is reflected in widely varying transfusion rates between surgeons and cardiac centres. To test the hypothesis that a restrictive compared with a liberal threshold for red blood cell transfusion after cardiac surgery reduces post-operative morbidity and health-care costs. Multicentre, parallel randomised controlled trial and within-trial cost-utility analysis from a UK NHS and Personal Social Services perspective. We could not blind health-care staff but tried to blind participants. Random allocations were generated by computer and minimised by centre and operation. Seventeen specialist cardiac surgery centres in UK NHS hospitals. Patients aged > 16 years undergoing non-emergency cardiac surgery with post-operative haemoglobin < 9 g/dl. Exclusion criteria were: unwilling to have transfusion owing to beliefs; platelet, red blood cell or clotting disorder; ongoing or recurrent sepsis; and critical limb ischaemia. Participants in the liberal group were eligible for transfusion immediately after randomisation (post-operative haemoglobin < 9 g/dl); participants in the restrictive group were eligible for transfusion if their post-operative haemoglobin fell to < 7.5 g/dl during the index hospital stay. The primary outcome was a composite outcome of any serious infectious (sepsis or wound infection) or ischaemic event (permanent stroke, myocardial infarction, gut infarction or acute kidney injury) during the 3 months after randomisation. Events were verified or adjudicated by blinded personnel. Secondary outcomes included blood products transfused; infectious events; ischaemic events; quality of life (European Quality of Life-5 Dimensions); duration of intensive care or high-dependency unit stay; duration of hospital stay; significant pulmonary morbidity; all-cause mortality; resource use, costs and cost-effectiveness. We randomised 2007 participants between 15 July 2009 and 18 February 2013; four withdrew, leaving 1000 and 1003 in the restrictive and liberal groups, respectively. Transfusion rates after randomisation were 53.4% (534/1000) and 92.2% (925/1003). The primary outcome occurred in 35.1% (331/944) and 33.0% (317/962) of participants in the restrictive and liberal groups [odds ratio (OR) 1.11, 95% confidence interval (CI) 0.91 to 1.34; p = 0.30], respectively. There were no subgroup effects for the primary outcome, although some sensitivity analyses substantially altered the estimated OR. There were no differences for secondary clinical outcomes except for mortality, with more deaths in the restrictive group (4.2%, 42/1000 vs. 2.6%, 26/1003; hazard ratio 1.64, 95% CI 1.00 to 2.67; p = 0.045). Serious post-operative complications excluding primary outcome events occurred in 35.7% (354/991) and 34.2% (339/991) of participants in the restrictive and liberal groups, respectively. The total cost per participant from surgery to 3 months postoperatively differed little by group, just £182 less (standard error £488) in the restrictive group, largely owing to the difference in red blood cells cost. In the base-case cost-effectiveness results, the point estimate suggested that the restrictive threshold was cost-effective; however, this result was very uncertain partly owing to the negligible difference in quality-adjusted life-years gained. A restrictive transfusion threshold is not superior to a liberal threshold after cardiac surgery. 
This finding supports restrictive transfusion due to reduced consumption and costs of red blood cells. However, secondary findings create uncertainty about recommending restrictive transfusion and prompt a new hypothesis that liberal transfusion may be superior after cardiac surgery. Reanalyses of existing trial datasets, excluding all participants who did not breach the liberal threshold, followed by a meta-analysis of the reanalysed results are the most obvious research steps to address the new hypothesis about the possible harm of red blood cell transfusion. Current Controlled Trials ISRCTN70923932. This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 20, No. 60. See the NIHR Journals Library website for further project information.

  3. Longitudinal Treatment Outcomes of Microsurgical Treatment of Neurosensory Deficit after Lower Third Molar Surgery: A Prospective Case Series.

    PubMed

    Leung, Yiu Yan; Cheung, Lim Kwong

    2016-01-01

    To prospectively evaluate the longitudinal subjective and objective outcomes of the microsurgical treatment of lingual nerve (LN) and inferior alveolar nerve (IAN) injury after third molar surgery. A 1-year longitudinal observational study was conducted on patients who received LN or IAN repair after third molar surgery-induced nerve injury. Subjective assessments ("numbness", "hyperaesthesia", "pain", "taste disturbance", "speech" and "social life impact") and objective assessments (light touch threshold, two-point discrimination, pain threshold, and taste discrimination) were recorded. Twelve patients (10 females) with 10 LN and 2 IAN repairs were recruited. Subjective outcomes for LN and IAN repair had improved at 12 months post-operatively; "pain" and "hyperaesthesia" improved most markedly. Light touch threshold improved from 44.7 g to 1.2 g for LN repair and from 2 g to 0.5 g for IAN repair. Microsurgical treatment of moderate to severe LN injury after lower third molar surgery offered significant subjective and objective sensory improvements. Functional sensory recovery (FSR) was achieved in 100% of patients by 6 months post-operatively.

  4. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    Ecological thresholds are values of system state variables at which small changes produce substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  5. In-vivo detectability index: development and validation of an automated methodology

    NASA Astrophysics Data System (ADS)

    Smith, Taylor Brunton; Solomon, Justin; Samei, Ehsan

    2017-03-01

    The purpose of this study was to develop and validate a method to estimate patient-specific detectability indices directly from patients' CT images (i.e., "in vivo"). The method works by automatically extracting noise (noise power spectrum, NPS) and resolution (modulation transfer function, MTF) properties from each patient's CT series based on previously validated techniques. Patient images are thresholded into skin-air interfaces to form edge-spread functions, which are further binned, differentiated, and Fourier transformed to form the MTF. The NPS is likewise estimated from uniform areas of the image. These are combined with assumed task functions (reference function: 10 mm disk lesion with contrast of -15 HU) to compute detectability indices for a non-prewhitening matched filter model observer predicting observer performance. The results were compared to those from a previous human detection study on 105 subtle, hypo-attenuating liver lesions, using a two-alternative forced-choice (2AFC) method, over 6 dose levels using 16 readers. The in vivo detectability indices estimated for all patient images were compared to binary 2AFC outcomes with a generalized linear mixed-effects statistical model (Probit link function, linear terms only, no interactions, random term for readers). The model showed that the in vivo detectability indices were strongly predictive of 2AFC outcomes (P < 0.05). A linear comparison between the human detection accuracy and model-predicted detection accuracy (for like conditions) resulted in Pearson and Spearman correlation coefficients of 0.86 and 0.87, respectively. These data provide evidence that the in vivo detectability index could potentially be used to automatically estimate and track image quality in a clinical operation.
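
    The detectability calculation itself can be written compactly. The sketch below uses the standard non-prewhitening (NPW) model-observer expression, d'^2 = [integral of W^2*MTF^2 df]^2 / integral of W^2*MTF^2*NPS df, with assumed MTF and NPS shapes and the 10 mm, -15 HU disk task; it is a schematic, not the validated pipeline.

    ```python
    # Hedged sketch of an NPW detectability index from radial MTF, NPS and a disk
    # task function. The MTF/NPS forms and units are assumptions.
    import numpy as np
    from scipy.special import j1

    f = np.linspace(0.01, 1.0, 500)                    # spatial frequency (cycles/mm)
    df = f[1] - f[0]
    mtf = np.exp(-(f / 0.35) ** 1.5)                   # assumed system MTF
    nps = 60.0 * f * np.exp(-(f / 0.25) ** 2)          # assumed ramp-like CT NPS (HU^2 mm^2)

    radius, contrast = 5.0, 15.0                       # 10 mm disk lesion, |contrast| = 15 HU
    w = contrast * radius * j1(2 * np.pi * f * radius) / f   # radial Fourier profile of the disk

    # 2D polar integration: dA = 2*pi*f*df
    num = (np.sum(w**2 * mtf**2 * 2 * np.pi * f) * df) ** 2
    den = np.sum(w**2 * mtf**2 * nps * 2 * np.pi * f) * df
    print(f"NPW detectability index d' = {np.sqrt(num / den):.2f}")
    ```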

  6. A novel approach to estimation of the time to biomarker threshold: applications to HIV.

    PubMed

    Reddy, Tarylee; Molenberghs, Geert; Njagi, Edmund Njeru; Aerts, Marc

    2016-11-01

    In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have less than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.
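
    A Monte Carlo version of the persistence-criterion outcome conveys the idea (the paper derives a closed-form expectation instead). In the sketch below the trajectory parameters, visit schedule and residual SD are assumptions.

    ```python
    # Hedged sketch: expected time until two consecutive CD4 counts fall below a
    # threshold, for one patient-specific linear trajectory plus measurement error.
    import numpy as np

    rng = np.random.default_rng(3)
    threshold = 350.0                          # cells/uL
    intercept, slope = 620.0, -45.0            # patient-specific trend (cells/uL per year), assumed
    sigma_resid = 60.0                         # residual (measurement-error) SD, assumed
    visits = np.arange(0.0, 12.0, 0.5)         # six-monthly visits over 12 years

    def expected_time(n_sim=20_000):
        times = []
        for _ in range(n_sim):
            obs = intercept + slope * visits + rng.normal(0, sigma_resid, visits.size)
            below = obs < threshold
            hits = np.flatnonzero(below[:-1] & below[1:])     # pairs of consecutive sub-threshold counts
            times.append(visits[hits[0] + 1] if hits.size else np.nan)
        return np.nanmean(times)

    print(f"expected time to two consecutive counts < {threshold:.0f}: {expected_time():.1f} years")
    ```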

  7. A treatment trade‐off based decision aid for patients with locally advanced non‐small cell lung cancer

    PubMed Central

    Brundage, Michael D.; Feldman‐Stewart, Deb; Dixon, Peter; Gregg, Richard; Youssef, Youssef; Davies, Diane; Mackillop, William J.

    2008-01-01

    Purpose To describe the structure and use of a decision aid for patients with locally advanced non‐small cell lung cancer (LA‐NSCLC) who are eligible for combined‐modality treatment (CMT) or for radiotherapy alone (RT). Methods The aid included a structured description of the treatment options and trade‐off exercises designed to help clarify the patient’s values for the relevant outcomes by determining the patient’s survival advantage threshold (the increase in survival conferred by CMT over RT that the patient deemed necessary for choosing CMT). Additional outcome measures included each patient’s strength of treatment preference, decisional conflict, objective understanding of survival information, decisional role preference, and evaluation of the aid itself. Results Twenty‐five patients met the eligibility criteria for study. Of these, seven declined the decision aid because they had a clear treatment preference (four chose CMT and three chose RT). The remaining 18 participants completed the decision aid; 16 chose CMT and two chose RT. All 18 patients wished to participate in the decision to some extent. All patients reported that using the decision support was useful to them and recommended its use for others. No patient or physician reported that the aid interfered with the physician‐patient relationship. Patients’ 3‐year survival advantage thresholds, and their median survival advantage thresholds, were each strongly correlated with their strengths of treatment preference (ρ=0.80, P < 0.001 and ρ=0.77, P < 0.001, respectively). For all but one patient, either their 3‐year or median survival threshold was consistent with their final treatment choice. Eight patients reported a stronger treatment preference after using the decision aid. Conclusions We conclude that a treatment trade‐off based decision aid for patients with locally advanced non‐small cell lung cancer is feasible, that it demonstrates internal consistency and convergent validity, and that it is favourably evaluated by patients and their physicians. The aid seems to help patients understand the benefits and risks of treatment and to choose the treatment that is most consistent with their values. PMID:11281912

  8. Academic Achievement in Adults with a History of Childhood Attention-Deficit/Hyperactivity Disorder: A Population-Based Prospective Study.

    PubMed

    Voigt, Robert G; Katusic, Slavica K; Colligan, Robert C; Killian, Jill M; Weaver, Amy L; Barbaresi, William J

    2017-01-01

    Previous research on the developmental course of attention-deficit/hyperactivity disorder (ADHD) is limited by biased clinic-referred samples and other methodological problems. Thus, questions about adult academic outcomes associated with childhood ADHD remain unanswered. The objective of this study was therefore to describe academic outcomes in adulthood among incident cases of research-identified childhood ADHD versus non-ADHD referents from a population-based birth cohort. Young adults with research-identified childhood ADHD (N = 232; mean age 27.0 yr; 72.0% men) and referents (N = 335; mean age 28.6 yr; 62.7% men) from a 1976 to 1982 birth cohort (N = 5699) were invited to participate in a follow-up study and were administered an academic achievement battery consisting of the basic reading component of the Woodcock-Johnson III Tests of Achievement (WJ-III) and the arithmetic subtest of the Wide Range Achievement Test-Third Edition (WRAT-3). Outcomes were compared between the 2 groups using linear regression models, adjusted for age, sex, and comorbid learning disability status. Childhood ADHD cases scored from 3 to 5 grade equivalents lower on all academic tests compared with referents, with mean (SD) standard scores of 95.7 (8.4) versus 101.8 (8.1) in basic reading; 95.0 (9.3) versus 101.9 (8.5) in letter-word identification; 98.2 (8.6) versus 103.2 (9.2) in passage comprehension; 95.7 (9.1) versus 100.9 (9.0) in word attack; and 87.8 (12.9) versus 98.0 (12.0) in arithmetic. This is the first prospective, population-based study of adult academic outcomes of childhood ADHD. Our data provide evidence that childhood-onset ADHD is associated with long-term underachievement in reading and math that may negatively impact ultimate educational attainment and occupational functioning in adulthood.

  9. Association of Borderline Pulmonary Hypertension With Mortality and Hospitalization in a Large Patient Cohort: Insights From the Veterans Affairs Clinical Assessment, Reporting, and Tracking Program.

    PubMed

    Maron, Bradley A; Hess, Edward; Maddox, Thomas M; Opotowsky, Alexander R; Tedford, Ryan J; Lahm, Tim; Joynt, Karen E; Kass, Daniel J; Stephens, Thomas; Stanislawski, Maggie A; Swenson, Erik R; Goldstein, Ronald H; Leopold, Jane A; Zamanian, Roham T; Elwing, Jean M; Plomondon, Mary E; Grunwald, Gary K; Barón, Anna E; Rumsfeld, John S; Choudhary, Gaurav

    2016-03-29

    Pulmonary hypertension (PH) is associated with increased morbidity across the cardiopulmonary disease spectrum. Based primarily on expert consensus opinion, PH is defined by a mean pulmonary artery pressure (mPAP) ≥25 mm Hg. Although mPAP levels below this threshold are common among populations at risk for PH, the relevance of mPAP <25 mm Hg to clinical outcome is unknown. We analyzed retrospectively all US veterans undergoing right heart catheterization (2007-2012) in the Veterans Affairs healthcare system (n=21,727; 908-day median follow-up). Cox proportional hazards models were used to evaluate the association between mPAP and outcomes of all-cause mortality and hospitalization, adjusted for clinical covariates. When treating mPAP as a continuous variable, the mortality hazard increased beginning at 19 mm Hg (hazard ratio [HR]=1.183; 95% confidence interval [CI], 1.004-1.393) relative to 10 mm Hg. Therefore, patients were stratified into 3 groups: (1) referent (≤18 mm Hg; n=4,207); (2) borderline PH (19-24 mm Hg; n=5,030); and (3) PH (≥25 mm Hg; n=12,490). The adjusted mortality hazard was increased for borderline PH (HR=1.23; 95% CI, 1.12-1.36; P<0.0001) and PH (HR=2.16; 95% CI, 1.96-2.38; P<0.0001) compared with the referent group. The adjusted hazard for hospitalization was also increased in borderline PH (HR=1.07; 95% CI, 1.01-1.12; P=0.0149) and PH (HR=1.15; 95% CI, 1.09-1.22; P<0.0001). The borderline PH cohort remained at increased risk for mortality after excluding the following high-risk subgroups: (1) patients with pulmonary artery wedge pressure >15 mm Hg; (2) pulmonary vascular resistance ≥3.0 Wood units; or (3) inpatient status at the time of right heart catheterization. These data illustrate a continuum of risk according to mPAP level and that borderline PH is associated with increased mortality and hospitalization. Future investigations are needed to test the generalizability of our findings to other populations and study the effect of treatment on outcome in borderline PH. © 2016 American Heart Association, Inc.
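
    A toy version of the survival model is sketched below with the lifelines library and synthetic data (not the VA cohort); the mPAP groups are coded as indicator variables against the referent (mPAP of 18 mm Hg or less), and the simulated hazard ratios are placeholders.

    ```python
    # Hedged sketch: Cox proportional hazards for mortality with referent /
    # borderline / PH groups plus an age covariate, on synthetic data.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(4)
    n = 2000
    mpap = rng.uniform(8, 45, n)                 # mean pulmonary artery pressure (mm Hg)
    age = rng.normal(65, 10, n)
    true_hr = np.where(mpap <= 18, 1.0, np.where(mpap < 25, 1.23, 2.16))   # assumed group hazard ratios
    time = rng.exponential(1.0 / (0.05 * true_hr * np.exp(0.02 * (age - 65))))

    data = pd.DataFrame({
        "time": np.minimum(time, 8.0),           # administrative censoring at 8 years
        "event": (time <= 8.0).astype(int),
        "borderline": ((mpap > 18) & (mpap < 25)).astype(int),
        "ph": (mpap >= 25).astype(int),
        "age": age,
    })
    cph = CoxPHFitter().fit(data, duration_col="time", event_col="event")
    print(cph.summary[["exp(coef)", "p"]])       # hazard ratios vs. the referent group
    ```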

  10. Effects of Admission and Treatment Strategies of DWI Courts on Offender Outcomes

    PubMed Central

    Sloan, Frank A.; Chepke, Lindsey M.; Davis, Dontrell V.; Acquah, Kofi; Zold-Kilbourne, Phyllis

    2013-01-01

    Purpose The purpose of this study is to classify DWI courts on the basis of the mix of difficult cases participating in the court (casemix severity) and the amount of involvement between the court and participant (service intensity). Using our classification typology, we assess how casemix severity and service intensity are associated with program outcomes. We expected that, holding other factors constant, greater service intensity would improve program outcomes while a relatively severe casemix would result in worse program outcomes. Methods The study used data from 8 DWI courts, 7 from Michigan and 1 from North Carolina. Using a 2-way classification system based on court casemix severity and program intensity, we selected participants in 1 of the courts, and alternatively 2 courts, as reference groups. Reference group courts had relatively severe casemixes and high service intensity. We used propensity score matching to match participants in the other courts to participants in the reference group court programs. Program outcome measures were the probabilities of failing to complete the court’s program; increasing educational attainment; improving employment from the time of program enrollment; and re-arrest. Results For most outcomes, our main finding was that higher service intensity is associated with better outcomes for court participants, as anticipated, but a court’s casemix severity was unrelated to study outcomes. Conclusions Our results imply that devoting more resources to increasing duration of treatment is productive in terms of better outcomes, irrespective of the mix of participants in the court’s program. PMID:23416679
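
    The matching step can be illustrated briefly. The sketch below (synthetic covariates, not the court data) estimates a propensity score with logistic regression and performs 1:1 nearest-neighbour matching without replacement.

    ```python
    # Hedged sketch of propensity-score matching: model the probability of being in
    # the reference-court group, then match each such participant to the nearest
    # comparison participant on that score.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n = 600
    X = rng.normal(size=(n, 4))                   # covariates, e.g. age, prior offences, BAC, employment
    in_reference = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1]))))

    ps = LogisticRegression(max_iter=1000).fit(X, in_reference).predict_proba(X)[:, 1]

    treated = np.flatnonzero(in_reference == 1)
    available = set(np.flatnonzero(in_reference == 0))
    pairs = []
    for t in treated:
        if not available:
            break
        cands = np.array(sorted(available))
        j = cands[np.argmin(np.abs(ps[cands] - ps[t]))]   # nearest neighbour without replacement
        pairs.append((t, j))
        available.remove(j)

    print(f"matched {len(pairs)} reference-court participants to comparison participants")
    ```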

  11. Evaluating links between forest harvest and stream temperature threshold exceedances: the value of spatial and temporal data

    Treesearch

    Jeremiah D. Groom; Sherri L. Johnson; Joshua D. Seeds; George G. Ice

    2017-01-01

    We present the results of a replicated before-after-control-impact study on 33 streams to test the effectiveness of riparian rules for private and State forests at meeting temperature criteria in streams in western Oregon. Many states have established regulatory temperature thresholds, referred to as numeric criteria, to protect cold-water fishes such as salmon and...

  12. An evidence-based decision assistance model for predicting training outcome in juvenile guide dogs.

    PubMed

    Harvey, Naomi D; Craigon, Peter J; Blythe, Simon A; England, Gary C W; Asher, Lucy

    2017-01-01

    Working dog organisations, such as Guide Dogs, need to regularly assess the behaviour of the dogs they train. In this study we developed a questionnaire-style behaviour assessment completed by training supervisors of juvenile guide dogs aged 5, 8 and 12 months old (n = 1,401), and evaluated aspects of its reliability and validity. Specifically, internal reliability, temporal consistency, construct validity, predictive criterion validity (comparing against later training outcome) and concurrent criterion validity (comparing against a standardised behaviour test) were evaluated. Thirty-nine questions were sourced either from previously published literature or created to meet requirements identified via Guide Dogs staff surveys and staff feedback. Internal reliability analyses revealed seven reliable and interpretable trait scales named according to the questions within them as: Adaptability; Body Sensitivity; Distractibility; Excitability; General Anxiety; Trainability and Stair Anxiety. Intra-individual temporal consistency of the scale scores between 5-8, 8-12 and 5-12 months was high. All scales excepting Body Sensitivity showed some degree of concurrent criterion validity. Predictive criterion validity was supported for all seven scales, since associations were found with training outcome, at at-least one age. Thresholds of z-scores on the scales were identified that were able to distinguish later training outcome by identifying 8.4% of all dogs withdrawn for behaviour and 8.5% of all qualified dogs, with 84% and 85% specificity. The questionnaire assessment was reliable and could detect traits that are consistent within individuals over time, despite juvenile dogs undergoing development during the study period. By applying thresholds to scores produced from the questionnaire this assessment could prove to be a highly valuable decision-making tool for Guide Dogs. This is the first questionnaire-style assessment of juvenile dogs that has shown value in predicting the training outcome of individual working dogs.

  13. Assessment of the performances of AcuStar HIT and the combination with heparin-induced multiple electrode aggregometry: a retrospective study.

    PubMed

    Minet, V; Bailly, N; Douxfils, J; Osselaer, J C; Laloy, J; Chatelain, C; Elalamy, I; Chatelain, B; Dogné, J M; Mullier, F

    2013-09-01

    Early diagnosis of immune heparin-induced thrombocytopenia (HIT) is challenging. HemosIL® AcuStar HIT and heparin-induced multiple electrode aggregometry (HIMEA) were recently proposed as rapid diagnostic methods. We conducted a study to assess the performance of AcuStar HIT-IgG (PF4-H) and AcuStar HIT-Ab (PF4-H). The secondary objective was to compare the performance of the combination of AcuStar HIT and HIMEA with standardised clinical diagnosis. Sera of 104 suspected HIT patients were retrospectively tested with AcuStar HIT. HIMEA was performed on available sera (n=81). The clinical diagnosis was established by analysing the patients' medical records in a standardized manner. These tests were also compared with PF4-Enhanced®, LTA, and SRA in subsets of patients. Thresholds were determined using ROC curve analysis with clinical outcome as reference. Using the recommended thresholds (1.00 AU), the negative predictive values (NPV) of HIT-IgG and HIT-Ab were 100.0% (95% CI: 95.9%-100.0% and 95.7%-100.0%). The positive predictive values (PPV) were 64.3% (95% CI: 35.1%-87.2%) and 45.0% (95% CI: 23.2%-68.6%), respectively. Using our thresholds (HIT-IgG: 2.89 AU, HIT-Ab: 9.41 AU), the NPV of HIT-IgG and HIT-Ab were 100.0% (95% CI: 96.0%-100.0% and 96.1%-100.0%). The PPV were 75.0% (95% CI: 42.7%-94.5%) and 81.8% (95% CI: 48.3%-97.7%), respectively. Of the 79 patients with a medium-high pretest probability score, 67 were negative using the HIT-IgG (PF4-H) test at our thresholds. HIMEA was performed on HIT-IgG positive patients. Using this combination, only one patient out of 79 was incorrectly diagnosed. AcuStar HIT showed good performance in excluding the diagnosis of HIT. Combination with HIMEA improves the PPV. Copyright © 2013 Elsevier Ltd. All rights reserved.
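
    The threshold bookkeeping can be sketched with synthetic assay values (not patient data): derive a cut-off from a ROC analysis against the clinical reference and report PPV/NPV at both the manufacturer's 1.00 AU cut-off and the ROC-derived one.

    ```python
    # Hedged sketch: ROC-derived assay threshold plus predictive values at two
    # cut-offs. Signal distributions and prevalence are invented.
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(6)
    hit = rng.binomial(1, 0.12, 104)                    # clinical HIT diagnosis (reference standard)
    signal = np.where(hit == 1,
                      rng.lognormal(1.8, 0.6, 104),     # assay signal (AU) for true HIT, invented
                      rng.lognormal(-1.2, 0.9, 104))    # assay signal (AU) for non-HIT, invented

    fpr, tpr, thr = roc_curve(hit, signal)
    roc_cutoff = thr[np.argmax(tpr - fpr)]              # Youden-style optimum
    for cutoff in (1.00, roc_cutoff):
        pos = signal >= cutoff
        ppv = (hit[pos] == 1).mean() if pos.any() else float("nan")
        npv = (hit[~pos] == 0).mean() if (~pos).any() else float("nan")
        print(f"cut-off {cutoff:5.2f} AU: PPV = {ppv:.1%}, NPV = {npv:.1%}")
    ```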

  14. Nonbinary quantification technique accounting for myocardial infarct heterogeneity: Feasibility of applying percent infarct mapping in patients.

    PubMed

    Mastrodicasa, Domenico; Elgavish, Gabriel A; Schoepf, U Joseph; Suranyi, Pal; van Assen, Marly; Albrecht, Moritz H; De Cecco, Carlo N; van der Geest, Rob J; Hardy, Rayphael; Mantini, Cesare; Griffith, L Parkwood; Ruzsics, Balazs; Varga-Szemes, Akos

    2018-02-15

    Binary threshold-based quantification techniques ignore myocardial infarct (MI) heterogeneity, yielding substantial misquantification of MI. To assess the technical feasibility of MI quantification using percent infarct mapping (PIM), a prototype nonbinary algorithm, in patients with suspected MI. Prospective cohort study of patients (n = 171) with suspected MI referred for cardiac MRI. Inversion recovery balanced steady-state free-precession for late gadolinium enhancement (LGE) and modified Look-Locker inversion recovery (MOLLI) T1 mapping on a 1.5T system. Infarct volume (IV) and infarct fraction (IF) were quantified by two observers based on manual delineation, binary approaches (2-5 standard deviations [SD] and full-width at half-maximum [FWHM] thresholds) in LGE images, and by applying the PIM algorithm to T1 and LGE images (PIM-T1; PIM-LGE). IV and IF were analyzed using repeated measures analysis of variance (ANOVA). Agreement between the approaches was determined with Bland-Altman analysis. Interobserver agreement was assessed by intraclass correlation coefficient (ICC) analysis. MI was observed in 89 (54.9%) patients and 185 (38%) short-axis slices. IF values with the 2SD, 3SD, 4SD, 5SD and FWHM techniques were 15.7 ± 6.6, 13.4 ± 5.6, 11.6 ± 5.0, 10.8 ± 5.2, and 10.0 ± 5.2%, respectively. The 5SD and FWHM techniques had the best agreement with manual IF determination (9.9 ± 4.8%) (bias 1.0 and 0.2%; P = 0.1426 and P = 0.8094, respectively). The 2SD and 3SD algorithms significantly overestimated manual IF (9.9 ± 4.8%; both P < 0.0001). PIM-LGE measured significantly lower IF (7.8 ± 3.7%) compared to manual values (P < 0.0001). PIM-LGE, however, showed the best agreement with the PIM-T1 reference (7.6 ± 3.6%, P = 0.3156). Interobserver agreement was rated good to excellent for IV (ICCs between 0.727 and 0.820) and fair to good for IF (0.589-0.736). The application of the PIM-LGE technique for MI quantification in patients is feasible. PIM-LGE, with its ability to account for voxelwise MI content, provides significantly smaller IF than any thresholding technique and shows excellent agreement with the T1-based reference. Level of Evidence: 2. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
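
    The contrast between binary thresholding and fractional (PIM-style) weighting is easy to see on a toy signal model. The sketch below uses invented 1D voxel intensities, a 5SD rule, one common FWHM variant, and a simple linear interpolation between remote and core signal; it is conceptual only, not the prototype algorithm.

    ```python
    # Hedged sketch: binary 5SD and FWHM infarct masks versus a per-voxel fractional
    # infarct weighting, on synthetic LGE-like intensities.
    import numpy as np

    rng = np.random.default_rng(7)
    remote = rng.normal(300, 20, 200)                     # remote (healthy) myocardium signal
    voxels = np.concatenate([remote,
                             rng.normal(450, 30, 30),     # peri-infarct "gray zone"
                             rng.normal(700, 40, 40)])    # infarct core

    mu, sd, core = remote.mean(), remote.std(), voxels.max()

    infarct_5sd  = voxels > mu + 5 * sd                   # binary: 5 standard deviations above remote
    infarct_fwhm = voxels > (mu + core) / 2               # binary: half-way between remote mean and maximum
    pim          = np.clip((voxels - mu) / (core - mu), 0, 1)   # fractional infarct content per voxel

    print(f"infarct fraction, 5SD : {infarct_5sd.mean():.1%}")
    print(f"infarct fraction, FWHM: {infarct_fwhm.mean():.1%}")
    print(f"infarct fraction, PIM : {pim.mean():.1%}")
    ```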

  15. Threshold Concepts, Student Learning and Curriculum: Making Connections between Theory and Practice

    ERIC Educational Resources Information Center

    Barradell, Sarah; Kennedy-Jones, Mary

    2015-01-01

    Threshold concepts, student learning and curriculum are constructs within a learning and teaching discourse foregrounded by Meyer and Land. In this paper, we introduce a conceptual model that integrates these three constructs and identifies desired outcomes at the intersects: namely the processes of (1) ways of thinking and practising, (2)…

  16. Development of Short-term Molecular Thresholds to Predict Long-term Mouse Liver Tumor Outcomes: Phthalate Case Study

    EPA Science Inventory

    Molecular Thresholds for Early Key Events in Liver Tumorigenesis: Phthalate Case Study. Short-term changes in molecular profiles are a central component of strategies to model health effects of environmental chemicals such as phthalates, for which there is widespread human exp...

  17. Rapid identification of bacteria from positive blood culture bottles by use of matrix-assisted laser desorption-ionization time of flight mass spectrometry fingerprinting.

    PubMed

    Christner, Martin; Rohde, Holger; Wolters, Manuel; Sobottka, Ingo; Wegscheider, Karl; Aepfelbacher, Martin

    2010-05-01

    Early and adequate antimicrobial therapy has been shown to improve the clinical outcome in bloodstream infections (BSI). To provide rapid pathogen identification for targeted treatment, we applied matrix-assisted laser desorption-ionization time of flight (MALDI-TOF) mass spectrometry fingerprinting to bacteria directly recovered from blood culture bottles. A total of 304 aerobic and anaerobic blood cultures, reported positive by a Bactec 9240 system, were subjected in parallel to differential centrifugation with subsequent mass spectrometry fingerprinting and reference identification using established microbiological methods. A representative spectrum of bloodstream pathogens was recovered from 277 samples that grew a single bacterial isolate. Species identification by direct mass spectrometry fingerprinting matched reference identification in 95% of these samples and worked equally well for aerobic and anaerobic culture bottles. Application of commonly used score cutoffs to classify the fingerprinting results led to an identification rate of 87%. Mismatching mostly resulted from insufficient bacterial numbers and preferentially occurred with Gram-positive samples. The respective spectra showed low concordance to database references and were effectively rejected by score thresholds. Spiking experiments and examination of the respective study samples even suggested applicability of the method to mixed cultures. With turnaround times around 100 min, the approach allowed for reliable pathogen identification at the day of blood culture positivity, providing treatment-relevant information within the critical phase of septic illness.

  18. High uptake of hepatitis C virus treatment in HIV/hepatitis C virus co-infected patients attending an integrated HIV/hepatitis C virus clinic.

    PubMed

    Kieran, J; Dillon, A; Farrell, G; Jackson, A; Norris, S; Mulcahy, F; Bergin, C

    2011-10-01

    Hepatitis C virus (HCV) is a major cause of liver disease in HIV-infected patients. The HCV treatment outcomes and barriers to HCV referral were examined in a centre with an HIV/HCV co-infection clinic. Patients who were antibody positive for both HIV and HCV between 1987 and January 2009 were identified. A retrospective chart review was undertaken. Multivariate analysis was performed to assess predictors of HCV clinic referral. Data were collected on 386 HIV/HCV patients; 202/386 had been referred to the co-infection clinic and 107/202 had received HCV treatment. In addition, 29/202 were undergoing pretreatment work-up. The overall sustained virologic response (SVR) rate was 44%; SVR was equivalent in those who acquired HIV/HCV infection from intravenous drug use (IDU) and others. On multivariate analysis, patients who missed appointments, were younger, had active IDU and advanced HIV, and were not offered HCV treatment were less likely to be referred to the clinic. Patients attending the clinic were more likely to have been screened for hepatocellular carcinoma than those attending the general HIV service. Two-thirds of patients referred to the clinic had engaged with the HCV treatment programme. Dedicated co-infection clinics lower the threshold for treatment and improve management of liver disease in co-infected patients.

  19. Icing detection from geostationary satellite data using machine learning approaches

    NASA Astrophysics Data System (ADS)

    Lee, J.; Ha, S.; Sim, S.; Im, J.

    2015-12-01

    Icing can cause significant structural damage to aircraft during flight, resulting in various aviation accidents. Icing studies have typically been performed using two approaches: one is a numerical model-based approach and the other is a remote sensing-based approach. The model-based approach diagnoses aircraft icing using numerical atmospheric parameters such as temperature, relative humidity, and vertical thermodynamic structure. This approach tends to overestimate icing according to the literature. The remote sensing-based approach typically uses meteorological satellite/ground sensor data such as Geostationary Operational Environmental Satellite (GOES) and Dual-Polarization radar data. This approach detects icing areas by applying thresholds to parameters such as liquid water path and cloud optical thickness derived from remote sensing data. In this study, we propose an aircraft icing detection approach which optimizes thresholds for L1B bands and/or Cloud Optical Thickness (COT) from the Communication, Ocean and Meteorological Satellite-Meteorological Imager (COMS MI) and the newly launched Himawari-8 Advanced Himawari Imager (AHI) over East Asia. The proposed approach uses machine learning algorithms including decision trees (DT) and random forest (RF) for optimizing thresholds of L1B data and/or COT. Pilot Reports (PIREPs) from South Korea and Japan were used as icing reference data. Results show that RF produced a lower false alarm rate (1.5%) and a higher overall accuracy (98.8%) than DT (8.5% and 75.3%, respectively). The RF-based approach was also compared with the existing COMS MI and GOES-R icing mask algorithms. The agreements of the proposed approach with the existing two algorithms were 89.2% and 45.5%, respectively. The lower agreement with the GOES-R algorithm was possibly due to the high uncertainty of the cloud phase product from COMS MI.
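
    To make the tree-based threshold optimization idea concrete, the minimal sketch below trains a decision tree and a random forest on synthetic stand-ins for L1B brightness temperatures and cloud optical thickness and compares accuracy and false-alarm rate. The features, labels, and parameters are invented for illustration (using scikit-learn); this is not the study's COMS MI/AHI processing chain or its PIREP matching.

```python
# Illustrative sketch (not the authors' pipeline): compare DT and RF
# classifiers on hypothetical satellite-derived icing predictors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(250, 10, n),   # hypothetical 10.8 um brightness temperature (K)
    rng.normal(5, 3, n),      # hypothetical brightness-temperature difference (K)
    rng.gamma(2.0, 5.0, n),   # hypothetical cloud optical thickness
])
# Synthetic icing label loosely tied to cold, optically thick cloud
y = ((X[:, 0] < 255) & (X[:, 2] > 8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("DT", DecisionTreeClassifier(max_depth=5, random_state=0)),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    far = fp / (fp + tn)  # false alarm rate among non-icing cases
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, false alarm rate={far:.3f}")
```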

  20. The effect of phasic auditory alerting on visual perception.

    PubMed

    Petersen, Anders; Petersen, Annemarie Hilkjær; Bundesen, Claus; Vangkilde, Signe; Habekost, Thomas

    2017-08-01

    Phasic alertness refers to a short-lived change in the preparatory state of the cognitive system following an alerting signal. In the present study, we examined the effect of phasic auditory alerting on distinct perceptual processes, unconfounded by motor components. We combined an alerting/no-alerting design with a pure accuracy-based single-letter recognition task. Computational modeling based on Bundesen's Theory of Visual Attention was used to examine the effect of phasic alertness on visual processing speed and threshold of conscious perception. Results show that phasic auditory alertness affects visual perception by increasing the visual processing speed and lowering the threshold of conscious perception (Experiment 1). By manipulating the intensity of the alerting cue, we further observed a positive relationship between alerting intensity and processing speed, which was not seen for the threshold of conscious perception (Experiment 2). This was replicated in a third experiment, in which pupil size was measured as a physiological marker of alertness. Results revealed that the increase in processing speed was accompanied by an increase in pupil size, substantiating the link between alertness and processing speed (Experiment 3). The implications of these results are discussed in relation to a newly developed mathematical model of the relationship between levels of alertness and the speed with which humans process visual information. Copyright © 2017 Elsevier B.V. All rights reserved.
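
    For readers unfamiliar with how processing speed and the perceptual threshold are separated, the sketch below fits a simplified TVA-style exponential model of single-letter report accuracy, p(t) = 1 - exp(-v(t - t0)), where v stands in for visual processing speed and t0 for the threshold of conscious perception. The exposure durations and accuracies are invented, and the model omits the full parameterization used in the study.

```python
# Minimal sketch: fit processing speed v and perceptual threshold t0 from
# accuracy as a function of exposure duration (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def tva_accuracy(t, v, t0):
    # probability of correct report for exposure duration t (ms)
    return 1.0 - np.exp(-v * np.clip(t - t0, 0.0, None))

exposure_ms = np.array([10, 20, 40, 80, 140, 200], dtype=float)
accuracy = np.array([0.05, 0.22, 0.55, 0.82, 0.93, 0.97])

(v_hat, t0_hat), _ = curve_fit(tva_accuracy, exposure_ms, accuracy,
                               p0=[0.02, 10.0], bounds=([0, 0], [1, 100]))
print(f"processing speed v ~ {v_hat:.3f} /ms, threshold t0 ~ {t0_hat:.1f} ms")
```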

  1. ROCker: accurate detection and quantification of target genes in short-read metagenomic data sets by modeling sliding-window bitscores

    DOE PAGES

    Orellana, Luis H.; Rodriguez-R, Luis M.; Konstantinidis, Konstantinos T.

    2016-10-07

    Functional annotation of metagenomic and metatranscriptomic data sets relies on similarity searches based on e-value thresholds resulting in an unknown number of false positive and negative matches. To overcome these limitations, we introduce ROCker, aimed at identifying position-specific, most-discriminant thresholds in sliding windows along the sequence of a target protein, accounting for non-discriminative domains shared by unrelated proteins. ROCker employs the receiver operating characteristic (ROC) curve to minimize false discovery rate (FDR) and calculate the best thresholds based on how simulated shotgun metagenomic reads of known composition map onto well-curated reference protein sequences and thus, differs from HMM profiles and related methods. We showcase ROCker using ammonia monooxygenase (amoA) and nitrous oxide reductase (nosZ) genes, mediating oxidation of ammonia and the reduction of the potent greenhouse gas, N2O, to inert N2, respectively. ROCker typically showed 60-fold lower FDR when compared to the common practice of using fixed e-values. Previously uncounted ‘atypical’ nosZ genes were found to be two times more abundant, on average, than their typical counterparts in most soil metagenomes and the abundance of bacterial amoA was quantified against the highly-related particulate methane monooxygenase (pmoA). Therefore, ROCker can reliably detect and quantify target genes in short-read metagenomes.
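
    A rough sketch of the underlying idea (not the ROCker implementation) follows: for each sliding window along a reference protein, collect bitscores of simulated reads labeled as target or non-target and pick the bitscore cutoff that best separates them on a ROC curve. The read positions, bitscores, window size and threshold rule below are all assumed placeholders.

```python
# Position-specific bitscore thresholds via per-window ROC analysis (sketch).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
protein_len, window, step = 300, 50, 25

# Synthetic read hits: alignment midpoint position, bitscore, target label
pos = rng.integers(0, protein_len, 4000)
label = rng.integers(0, 2, 4000)
score = np.where(label == 1, rng.normal(80, 10, 4000), rng.normal(50, 10, 4000))
# A shared (non-discriminative) domain at positions 100-150 inflates
# non-target scores, which is what per-window thresholds are meant to handle.
score += np.where((label == 0) & (pos > 100) & (pos < 150), 20, 0)

for start in range(0, protein_len - window + 1, step):
    in_win = (pos >= start) & (pos < start + window)
    if label[in_win].min() == label[in_win].max():
        continue  # need both classes to compute a ROC curve
    fpr, tpr, thr = roc_curve(label[in_win], score[in_win])
    best = np.argmax(tpr - fpr)  # Youden's J as a simple stand-in criterion
    print(f"window {start}-{start + window}: bitscore threshold ~ {thr[best]:.1f}")
```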

  2. Zseq: An Approach for Preprocessing Next-Generation Sequencing Data.

    PubMed

    Alkhateeb, Abedalrhman; Rueda, Luis

    2017-08-01

    Next-generation sequencing technology generates a huge number of reads (short sequences), which contain a vast amount of genomic data. The sequencing process, however, comes with artifacts. Preprocessing of sequences is mandatory for further downstream analysis. We present Zseq, a linear method that identifies the most informative genomic sequences and reduces the number of biased sequences, sequence duplications, and ambiguous nucleotides. Zseq scores the complexity of each sequence by counting the number of unique k-mers it contains and also takes into account other factors such as ambiguous nucleotides or high GC-content percentage in k-mers. Based on a z-score threshold, Zseq sweeps through the sequences again and filters those with a z-score less than the user-defined threshold. The Zseq algorithm provides a better mapping rate; it reduces the number of ambiguous bases significantly in comparison with other methods. Evaluation of the filtered reads has been conducted by aligning the reads and assembling the transcripts using the reference genome as well as de novo assembly. The assembled transcripts show a better discriminative ability to separate cancer and normal samples in comparison with another state-of-the-art method. Moreover, de novo assembled transcripts from the reads filtered by Zseq have longer genomic sequences than other tested methods. A method for estimating the cutoff threshold using labeling rules is also introduced, with promising results.
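
    The core filtering step is easy to illustrate. The simplified sketch below scores each read by its number of unique k-mers, converts the scores to z-scores, and keeps reads above a user-defined cutoff; it ignores the additional factors (ambiguous bases, GC content) and is an assumption-laden illustration, not the published implementation.

```python
# Zseq-style complexity filter, reduced to the unique-k-mer z-score idea only.
import numpy as np

def unique_kmer_count(seq: str, k: int = 5) -> int:
    return len({seq[i:i + k] for i in range(len(seq) - k + 1)})

def zscore_filter(reads, k=5, z_cutoff=-1.0):
    scores = np.array([unique_kmer_count(r, k) for r in reads], dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=0)
    return [r for r, zi in zip(reads, z) if zi >= z_cutoff]

reads = [
    "ACGTACGTACGTACGTACGT",        # low complexity (few unique k-mers)
    "AAAAAAAAAAAAAAAAAAAA",        # lowest complexity
    "ACGTTGCAGGCTAAGCTTGACCTAG",   # higher complexity
]
print(zscore_filter(reads, k=5, z_cutoff=-0.5))
```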

  3. Comparative analysis of risk-based cleanup levels and associated remediation costs using linearized multistage model (cancer slope factor) vs. threshold approach (reference dose) for three chlorinated alkenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, L.J.; Mihalich, J.P.

    1995-12-31

    The chlorinated alkenes 1,1-dichloroethene (1,1-DCE), tetrachloroethene (PCE), and trichloroethene (TCE) are common environmental contaminants found in soil and groundwater at hazardous waste sites. Recent assessment of data from epidemiology and mechanistic studies indicates that although exposure to 1,1-DCE, PCE, and TCE causes tumor formation in rodents, it is unlikely that these chemicals are carcinogenic to humans. Nevertheless, many state and federal agencies continue to regulate these compounds as carcinogens through the use of the linearized multistage model and resulting cancer slope factor (CSF). The available data indicate that 1,1-DCE, PCE, and TCE should be assessed using a threshold (i.e., reference dose [RfD]) approach rather than a CSF. This paper summarizes the available metabolic, toxicologic, and epidemiologic data that question the use of the linear multistage model (and CSF) for extrapolation from rodents to humans. A comparative analysis of potential risk-based cleanup goals (RBGs) for these three compounds in soil is presented for a hazardous waste site. Goals were calculated using the USEPA CSFs and using a threshold (i.e., RfD) approach. Costs associated with remediation activities required to meet each set of these cleanup goals are presented and compared.
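
    To show how the two toxicity frameworks translate into different cleanup numbers, the sketch below computes screening-level soil cleanup goals for the ingestion pathway only, once from a cancer slope factor and once from a reference dose. The exposure defaults and toxicity values are generic placeholders in the style of standard risk-assessment equations, not the values or pathways used in the paper.

```python
# Contrast of CSF-based vs RfD-based soil cleanup goals (ingestion pathway only;
# illustrative defaults, not the paper's site-specific inputs).

def rbg_cancer(csf, target_risk=1e-6, bw=70.0, at_days=70 * 365,
               ef=350, ed=30, ir_soil_mg=100):
    """Cleanup goal (mg/kg soil) using a cancer slope factor (per mg/kg-day)."""
    return (target_risk * bw * at_days) / (csf * ef * ed * ir_soil_mg * 1e-6)

def rbg_threshold(rfd, thq=1.0, bw=70.0, at_days=30 * 365,
                  ef=350, ed=30, ir_soil_mg=100):
    """Cleanup goal (mg/kg soil) using a threshold reference dose (mg/kg-day)."""
    return (thq * rfd * bw * at_days) / (ef * ed * ir_soil_mg * 1e-6)

# Hypothetical toxicity values for a chlorinated alkene-like compound
csf_example = 0.05   # (mg/kg-day)^-1, placeholder
rfd_example = 0.01   # mg/kg-day, placeholder
print(f"CSF-based goal: {rbg_cancer(csf_example):.1f} mg/kg")
print(f"RfD-based goal: {rbg_threshold(rfd_example):.0f} mg/kg")
```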

  4. ROCker: accurate detection and quantification of target genes in short-read metagenomic data sets by modeling sliding-window bitscores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orellana, Luis H.; Rodriguez-R, Luis M.; Konstantinidis, Konstantinos T.

    Functional annotation of metagenomic and metatranscriptomic data sets relies on similarity searches based on e-value thresholds resulting in an unknown number of false positive and negative matches. To overcome these limitations, we introduce ROCker, aimed at identifying position-specific, most-discriminant thresholds in sliding windows along the sequence of a target protein, accounting for non-discriminative domains shared by unrelated proteins. ROCker employs the receiver operating characteristic (ROC) curve to minimize false discovery rate (FDR) and calculate the best thresholds based on how simulated shotgun metagenomic reads of known composition map onto well-curated reference protein sequences and thus, differs from HMM profiles and related methods. We showcase ROCker using ammonia monooxygenase (amoA) and nitrous oxide reductase (nosZ) genes, mediating oxidation of ammonia and the reduction of the potent greenhouse gas, N2O, to inert N2, respectively. ROCker typically showed 60-fold lower FDR when compared to the common practice of using fixed e-values. Previously uncounted ‘atypical’ nosZ genes were found to be two times more abundant, on average, than their typical counterparts in most soil metagenomes and the abundance of bacterial amoA was quantified against the highly-related particulate methane monooxygenase (pmoA). Therefore, ROCker can reliably detect and quantify target genes in short-read metagenomes.

  5. ROCker: accurate detection and quantification of target genes in short-read metagenomic data sets by modeling sliding-window bitscores

    PubMed Central

    2017-01-01

    Abstract Functional annotation of metagenomic and metatranscriptomic data sets relies on similarity searches based on e-value thresholds resulting in an unknown number of false positive and negative matches. To overcome these limitations, we introduce ROCker, aimed at identifying position-specific, most-discriminant thresholds in sliding windows along the sequence of a target protein, accounting for non-discriminative domains shared by unrelated proteins. ROCker employs the receiver operating characteristic (ROC) curve to minimize false discovery rate (FDR) and calculate the best thresholds based on how simulated shotgun metagenomic reads of known composition map onto well-curated reference protein sequences and thus, differs from HMM profiles and related methods. We showcase ROCker using ammonia monooxygenase (amoA) and nitrous oxide reductase (nosZ) genes, mediating oxidation of ammonia and the reduction of the potent greenhouse gas, N2O, to inert N2, respectively. ROCker typically showed 60-fold lower FDR when compared to the common practice of using fixed e-values. Previously uncounted ‘atypical’ nosZ genes were found to be two times more abundant, on average, than their typical counterparts in most soil metagenomes and the abundance of bacterial amoA was quantified against the highly-related particulate methane monooxygenase (pmoA). Therefore, ROCker can reliably detect and quantify target genes in short-read metagenomes. PMID:28180325

  6. WHO Environmental Noise Guidelines for the European Region: A Systematic Review on Environmental Noise and Permanent Hearing Loss and Tinnitus.

    PubMed

    Śliwińska-Kowalska, Mariola; Zaborowski, Kamil

    2017-09-27

    Background: Hearing loss is defined as worsening of hearing acuity and is usually expressed as an increase in the hearing threshold. Tinnitus, defined as "ringing in the ear", is a common and often disturbing accompaniment of hearing loss. Hearing loss and environmental exposures to noise are increasingly recognized health problems. Objectives: The objective was to assess whether the exposure-response relationship can be established between exposures to non-occupational noise and permanent hearing outcomes such as permanent hearing loss and tinnitus. Methods: Information sources: Computer searches of all accessible medical and other databases (PubMed, Web of Science, Scopus) were performed and complemented with manual searches. The search was not limited to a particular time span, except for the effects of personal listening devices (PLDs). The latter was limited to the years 2008-June 2015, since previous knowledge was summarized by the SCENIHR descriptive systematic review published in 2008. Study eligibility criteria: The inclusion criteria were as follows: the exposure to noise was measured in sound pressure levels (SPLs) and expressed in individual equivalent decibel values (LEX,8h), the studies included both exposed and reference groups, the outcome was a permanent health effect, i.e., permanent hearing loss assessed with pure-tone audiometry and/or permanent tinnitus assessed with a questionnaire. The eligibility criteria were evaluated by two independent reviewers. Study appraisal and synthesis methods: The risk of bias was assessed for all of the papers using a template for assessment of quality and the risk of bias. The GRADE (grading of recommendations assessment, development, and evaluation) approach was used to assess the overall quality of evidence. Meta-analysis was not possible due to methodological heterogeneity of included studies and the inadequacy of data. Results: Out of 220 references identified, five studies fulfilled the inclusion criteria. All of them were related to the use of PLDs and comprised a total of 1551 teenagers and young adults. Three studies used hearing loss as the outcome and three used tinnitus. There was a positive correlation between noise level and hearing loss either at standard or extended high frequencies in all three of the studies on hearing loss. In one study, there was also a positive correlation between the duration of PLD use and hearing loss. There was no association between prolonged listening to loud music through PLDs and tinnitus, or the results were contradictory. All of the evidence was of low quality. Limitations: The studies are cross-sectional. No study provides odds ratios of hearing loss by the level of exposure to noise. Conclusions: While using very strict inclusion criteria, there is low quality GRADE evidence that prolonged listening to loud music through PLDs increases the risk of hearing loss and results in worsening standard frequency audiometric thresholds. However, specific threshold analyses focused on stratifying risk according to clearly defined levels of exposure are missing. Future studies are needed to provide actionable guidance for PLD users. No studies fulfilling the inclusion criteria related to other isolated or combined exposures to environmental noise were identified.

  7. WHO Environmental Noise Guidelines for the European Region: A Systematic Review on Environmental Noise and Permanent Hearing Loss and Tinnitus

    PubMed Central

    Śliwińska-Kowalska, Mariola; Zaborowski, Kamil

    2017-01-01

    Background: Hearing loss is defined as worsening of hearing acuity and is usually expressed as an increase in the hearing threshold. Tinnitus, defined as “ringing in the ear”, is a common and often disturbing accompaniment of hearing loss. Hearing loss and environmental exposures to noise are increasingly recognized health problems. Objectives: The objective was to assess whether the exposure-response relationship can be established between exposures to non-occupational noise and permanent hearing outcomes such as permanent hearing loss and tinnitus. Methods: Information sources: Computer searches of all accessible medical and other databases (PubMed, Web of Science, Scopus) were performed and complemented with manual searches. The search was not limited to a particular time span, except for the effects of personal listening devices (PLDs). The latter was limited to the years 2008–June 2015, since previous knowledge was summarized by the SCENIHR descriptive systematic review published in 2008. Study eligibility criteria: The inclusion criteria were as follows: the exposure to noise was measured in sound pressure levels (SPLs) and expressed in individual equivalent decibel values (LEX,8h), the studies included both exposed and reference groups, the outcome was a permanent health effect, i.e., permanent hearing loss assessed with pure-tone audiometry and/or permanent tinnitus assessed with a questionnaire. The eligibility criteria were evaluated by two independent reviewers. Study appraisal and synthesis methods: The risk of bias was assessed for all of the papers using a template for assessment of quality and the risk of bias. The GRADE (grading of recommendations assessment, development, and evaluation) approach was used to assess the overall quality of evidence. Meta-analysis was not possible due to methodological heterogeneity of included studies and the inadequacy of data. Results: Out of 220 references identified, five studies fulfilled the inclusion criteria. All of them were related to the use of PLDs and comprised a total of 1551 teenagers and young adults. Three studies used hearing loss as the outcome and three used tinnitus. There was a positive correlation between noise level and hearing loss either at standard or extended high frequencies in all three of the studies on hearing loss. In one study, there was also a positive correlation between the duration of PLD use and hearing loss. There was no association between prolonged listening to loud music through PLDs and tinnitus, or the results were contradictory. All of the evidence was of low quality. Limitations: The studies are cross-sectional. No study provides odds ratios of hearing loss by the level of exposure to noise. Conclusions: While using very strict inclusion criteria, there is low quality GRADE evidence that prolonged listening to loud music through PLDs increases the risk of hearing loss and results in worsening standard frequency audiometric thresholds. However, specific threshold analyses focused on stratifying risk according to clearly defined levels of exposure are missing. Future studies are needed to provide actionable guidance for PLD users. No studies fulfilling the inclusion criteria related to other isolated or combined exposures to environmental noise were identified. PMID:28953238

  8. Cost-effectiveness of the faecal immunochemical test at a range of positivity thresholds compared with the guaiac faecal occult blood test in the NHS Bowel Cancer Screening Programme in England

    PubMed Central

    Halloran, Stephen

    2017-01-01

    Objectives Through the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP), men and women in England aged between 60 and 74 years are invited for colorectal cancer (CRC) screening every 2 years using the guaiac faecal occult blood test (gFOBT). The aim of this analysis was to estimate the cost–utility of the faecal immunochemical test for haemoglobin (FIT) compared with gFOBT for a cohort beginning screening aged 60 years at a range of FIT positivity thresholds. Design We constructed a cohort-based Markov state transition model of CRC disease progression and screening. Screening uptake, detection, adverse event, mortality and cost data were taken from BCSP data and national sources, including a recent large pilot study of FIT screening in the BCSP. Results Our results suggest that FIT is cost-effective compared with gFOBT at all thresholds, resulting in cost savings and quality-adjusted life years (QALYs) gained over a lifetime time horizon. FIT was cost-saving (p<0.001) and resulted in QALY gains of 0.014 (95% CI 0.012 to 0.017) at the base case threshold of 180 µg Hb/g faeces. Greater health gains and cost savings were achieved as the FIT threshold was decreased due to savings in cancer management costs. However, at lower thresholds, FIT was also associated with more colonoscopies (increasing from 32 additional colonoscopies per 1000 people invited for screening for FIT 180 µg Hb/g faeces to 421 additional colonoscopies per 1000 people invited for screening for FIT 20 µg Hb/g faeces over a 40-year time horizon). Parameter uncertainty had limited impact on the conclusions. Conclusions This is the first published economic analysis of FIT screening in England using data directly comparing FIT with gFOBT in the NHS BCSP. These results for a cohort starting screening aged 60 years suggest that FIT is highly cost-effective at all thresholds considered. Further modelling is needed to estimate economic outcomes for screening across all age cohorts simultaneously. PMID:29079605
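
    The mechanics of a cohort-based Markov state transition model are easy to sketch: a cohort moves between health states each annual cycle, accruing discounted costs and QALYs. The toy model below uses invented states, transition probabilities, costs and utilities purely to illustrate the accounting; it does not reproduce the BCSP screening model.

```python
# Toy three-state Markov cohort model with discounted cost and QALY totals.
import numpy as np

states = ["well", "crc", "dead"]
P = np.array([[0.985, 0.010, 0.005],   # from well
              [0.000, 0.900, 0.100],   # from crc
              [0.000, 0.000, 1.000]])  # dead is absorbing
cost = np.array([5.0, 12000.0, 0.0])   # annual cost per person in each state
utility = np.array([0.85, 0.60, 0.0])  # annual QALY weight per state
discount = 0.035

cohort = np.array([1.0, 0.0, 0.0])     # start the whole cohort in "well"
total_cost = total_qaly = 0.0
for year in range(40):                 # 40-year time horizon
    df = 1.0 / (1.0 + discount) ** year
    total_cost += df * cohort @ cost
    total_qaly += df * cohort @ utility
    cohort = cohort @ P                # advance one annual cycle

print(f"discounted cost/person: {total_cost:.0f}, discounted QALYs/person: {total_qaly:.2f}")
```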

  9. Is skin penetration a determining factor in skin sensitization ...

    EPA Pesticide Factsheets

    Summary: Background. It is widely accepted that substances that cannot penetrate through the skin will not be sensitisers. Thresholds based on relevant physicochemical parameters such as a LogKow > 1 and a MW 1 is a true requirement for sensitisation. Methods. A large dataset of substances that had been evaluated for their skin sensitisation potential, together with measured LogKow values was compiled from the REACH database. The incidence of skin sensitisers relative to non-skin sensitisers below and above the LogKow = 1 threshold was evaluated. Results. 1482 substances with associated skin sensitisation outcomes and measured LogKow values were identified. 305 substances had a measured LogKow < 0 and of those, 38 were sensitisers. Conclusions. There was no significant difference in the incidence of skin sensitisation above and below the LogKow = 1 threshold. Reaction chemistry considerations could explain the skin sensitisation observed for the 38 sensitisers with a LogKow < 0. The LogKow threshold is a self-evident truth borne out from the widespread misconception that the ability to efficiently penetrate the stratum corneum is a key determinant of skin sensitisation potential and potency. Using the REACH data extracted to test out the validity of common assumptions in the skin sensitization AOP. Builds on trying to develop a proof of concept IATA

  10. Randomised Controlled Trial of a Parenting Intervention in the Voluntary Sector for Reducing Child Conduct Problems: Outcomes and Mechanisms of Change

    ERIC Educational Resources Information Center

    Gardner, Frances; Burton, Jennifer; Klimes, Ivana

    2006-01-01

    Background: To test effectiveness of a parenting intervention, delivered in a community-based voluntary-sector organisation, for reducing conduct problems in clinically-referred children. Methods: Randomised controlled trial, follow-up at 6, 18 months, assessors blind to treatment status. Participants--76 children referred for conduct problems,…

  11. Comparison of MRI segmentation techniques for measuring liver cyst volumes in autosomal dominant polycystic kidney disease.

    PubMed

    Farooq, Zerwa; Behzadi, Ashkan Heshmatzadeh; Blumenfeld, Jon D; Zhao, Yize; Prince, Martin R

    To compare MRI segmentation methods for measuring liver cyst volumes in autosomal dominant polycystic kidney disease (ADPKD). Liver cyst volumes in 42 ADPKD patients were measured using region growing, thresholding and cyst diameter techniques. Manual segmentation was the reference standard. Root mean square deviation was 113, 155, and 500 for cyst diameter, thresholding and region growing, respectively. Thresholding error for cyst volumes below 500 ml was 550% vs 17% for cyst volumes above 500 ml (p<0.001). For measuring the volume of a small number of cysts, cyst diameter and manual segmentation methods are recommended. For severe disease with numerous, large hepatic cysts, thresholding is an acceptable alternative. Copyright © 2017 Elsevier Inc. All rights reserved.
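
    The thresholding approach itself is conceptually simple: voxels above an intensity cutoff are counted and scaled by the voxel volume. The bare-bones sketch below runs on a synthetic 3D array with placeholder intensity cutoff and voxel dimensions; it is not the study's acquisition or segmentation protocol.

```python
# Threshold-based cyst volume estimation on a synthetic volume (illustration only).
import numpy as np

rng = np.random.default_rng(2)
image = rng.normal(100, 15, size=(40, 256, 256))                     # synthetic T2-like volume
image[15:25, 100:140, 100:140] = rng.normal(300, 20, (10, 40, 40))   # bright "cysts"

threshold = 200.0                         # intensity cutoff (placeholder)
voxel_volume_ml = 0.1 * 0.1 * 0.5         # 1 x 1 x 5 mm voxels, expressed in ml (cm^3)

cyst_mask = image > threshold
cyst_volume_ml = cyst_mask.sum() * voxel_volume_ml
print(f"estimated cyst volume: {cyst_volume_ml:.1f} ml")
```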

  12. Application of outlier analysis for baseline-free damage diagnosis

    NASA Astrophysics Data System (ADS)

    Kim, Seung Dae; In, Chi Won; Cronin, Kelly E.; Sohn, Hoon; Harries, Kent

    2006-03-01

    As carbon fiber-reinforced polymer (CFRP) laminates have been widely accepted as valuable materials for retrofitting civil infrastructure systems, an appropriate assessment of bonding conditions between host structures and CFRP laminates becomes a critical issue to guarantee the performance of CFRP strengthened structures. This study attempts to develop a continuous performance monitoring system for CFRP strengthened structures by autonomously inspecting the bonding conditions between the CFRP layers and the host structure. The uniqueness of this study is to develop a new concept and theoretical framework of nondestructive testing (NDT), in which debonding is detected "without using past baseline data." The proposed baseline-free damage diagnosis is achieved in two stages. In the first step, features sensitive to debonding of the CFRP layers but insensitive to loading conditions are extracted based on a concept referred to as a time reversal process. This time reversal process allows extracting damage-sensitive features without direct comparison with past baseline data. Then, a statistical damage classifier will be developed in the second step to make a decision regarding the bonding condition of the CFRP layers. The threshold necessary for decision making will be adaptively determined without predetermined threshold values. Monotonic and fatigue load tests of full-scale CFRP strengthened RC beams are conducted to demonstrate the potential of the proposed reference-free debonding monitoring system.
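
    To illustrate what an adaptively determined, data-driven threshold can look like, the sketch below applies a generic median-plus-MAD outlier rule to damage-sensitive feature values, flagging paths that exceed the threshold computed from the data themselves. The rule and the synthetic feature values are assumptions for illustration, not the paper's time-reversal-based classifier.

```python
# Adaptive outlier threshold on damage-sensitive features (generic sketch).
import numpy as np

rng = np.random.default_rng(3)
features = rng.normal(1.0, 0.05, 30)      # intact sensor paths (synthetic)
features[[4, 17]] = [1.6, 1.9]            # hypothetical debonded paths

median = np.median(features)
mad = np.median(np.abs(features - median))
threshold = median + 3.0 * 1.4826 * mad   # ~3-sigma equivalent under normality

damaged = np.flatnonzero(features > threshold)
print(f"adaptive threshold = {threshold:.3f}, flagged paths: {damaged.tolist()}")
```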

  13. Does pulse oximeter use impact health outcomes? A systematic review

    PubMed Central

    English, Mike; Shepperd, Sasha

    2016-01-01

    Objective Do newborns, children and adolescents up to 19 years have lower mortality rates, lower morbidity and shorter length of stay in health facilities where pulse oximeters are used to inform diagnosis and treatment (excluding surgical care) compared with health facilities where pulse oximeters are not used? Design Studies were obtained for this systematic literature review by systematically searching the Database of Abstracts of Reviews of Effects, Cochrane, Medion, PubMed, Web of Science, Embase, Global Health, CINAHL, WHO Global Health Library, international health organisation and NGO websites, and study references. Patients Children 0–19 years presenting for the first time to hospitals, emergency departments or primary care facilities. Interventions Included studies compared outcomes where pulse oximeters were used for diagnosis and/or management, with outcomes where pulse oximeters were not used. Main outcome measures: mortality, morbidity, length of stay, and treatment and management changes. Results The evidence is low quality and hypoxaemia definitions varied across studies, but the evidence suggests pulse oximeter use with children can reduce mortality rates (when combined with improved oxygen administration) and length of emergency department stay, increase admission of children with previously unrecognised hypoxaemia, and change physicians’ decisions on illness severity, diagnosis and treatment. Pulse oximeter use generally increased resource utilisation. Conclusions As international organisations are investing in programmes to increase pulse oximeter use in low-income settings, more research is needed on the optimal use of pulse oximeters (eg, appropriate oxygen saturation thresholds), and how pulse oximeter use affects referral and admission rates, length of stay, resource utilisation and health outcomes. PMID:26699537

  14. Methods for the estimation of the National Institute for Health and Care Excellence cost-effectiveness threshold.

    PubMed

    Claxton, Karl; Martin, Steve; Soares, Marta; Rice, Nigel; Spackman, Eldon; Hinde, Sebastian; Devlin, Nancy; Smith, Peter C; Sculpher, Mark

    2015-02-01

    Cost-effectiveness analysis involves the comparison of the incremental cost-effectiveness ratio of a new technology, which is more costly than existing alternatives, with the cost-effectiveness threshold. This indicates whether or not the health expected to be gained from its use exceeds the health expected to be lost elsewhere as other health-care activities are displaced. The threshold therefore represents the additional cost that has to be imposed on the system to forgo 1 quality-adjusted life-year (QALY) of health through displacement. There are no empirical estimates of the cost-effectiveness threshold used by the National Institute for Health and Care Excellence. (1) To provide a conceptual framework to define the cost-effectiveness threshold and to provide the basis for its empirical estimation. (2) Using programme budgeting data for the English NHS, to estimate the relationship between changes in overall NHS expenditure and changes in mortality. (3) To extend this mortality measure of the health effects of a change in expenditure to life-years and to QALYs by estimating the quality-of-life (QoL) associated with effects on years of life and the additional direct impact on QoL itself. (4) To present the best estimate of the cost-effectiveness threshold for policy purposes. Earlier econometric analysis estimated the relationship between differences in primary care trust (PCT) spending, across programme budget categories (PBCs), and associated disease-specific mortality. This research is extended in several ways including estimating the impact of marginal increases or decreases in overall NHS expenditure on spending in each of the 23 PBCs. Further stages of work link the econometrics to broader health effects in terms of QALYs. The most relevant 'central' threshold is estimated to be £12,936 per QALY (2008 expenditure, 2008-10 mortality). Uncertainty analysis indicates that the probability that the threshold is < £20,000 per QALY is 0.89 and the probability that it is < £30,000 per QALY is 0.97. Additional 'structural' uncertainty suggests, on balance, that the central or best estimate is, if anything, likely to be an overestimate. The health effects of changes in expenditure are greater when PCTs are under more financial pressure and are more likely to be disinvesting than investing. This indicates that the central estimate of the threshold is likely to be an overestimate for all technologies which impose net costs on the NHS and the appropriate threshold to apply should be lower for technologies which have a greater impact on NHS costs. The central estimate is based on identifying a preferred analysis at each stage based on the analysis that made the best use of available information, whether or not the assumptions required appeared more reasonable than the other alternatives available, and which provided a more complete picture of the likely health effects of a change in expenditure. However, the limitation of currently available data means that there is substantial uncertainty associated with the estimate of the overall threshold. The methods go some way to providing an empirical estimate of the scale of opportunity costs the NHS faces when considering whether or not the health benefits associated with new technologies are greater than the health that is likely to be lost elsewhere in the NHS. 
Priorities for future research include estimating the threshold for subsequent waves of expenditure and outcome data, for example by utilising expenditure and outcomes available at the level of Clinical Commissioning Groups as well as additional data collected on QoL and updated estimates of incidence (by age and gender) and duration of disease. Nonetheless, the study also starts to make the other NHS patients, who ultimately bear the opportunity costs of such decisions, less abstract and more 'known' in social decisions. The National Institute for Health Research-Medical Research Council Methodology Research Programme.
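
    At its core, the estimate rests on a simple ratio: the threshold is the marginal change in expenditure divided by the marginal change in QALYs that expenditure produces (mortality effects converted to QALYs, plus direct quality-of-life effects). The back-of-the-envelope sketch below shows only that arithmetic; all numbers are invented and chosen merely to land near the reported central estimate, and none of the econometric machinery of the study is represented.

```python
# Illustrative cost-per-QALY threshold arithmetic (invented numbers).
delta_spend_gbp = 10_000_000          # hypothetical marginal change in NHS spend
deaths_averted = 180                  # hypothetical mortality effect of that spend
qaly_per_death_averted = 4.0          # hypothetical QALYs gained per death averted
qaly_from_quality_of_life = 53.0      # hypothetical additional QoL-only QALYs

total_qalys = deaths_averted * qaly_per_death_averted + qaly_from_quality_of_life
threshold = delta_spend_gbp / total_qalys
print(f"implied threshold: £{threshold:,.0f} per QALY")
```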

  15. Measuring Gait Quality in Parkinson’s Disease through Real-Time Gait Phase Recognition

    PubMed Central

    Mileti, Ilaria; Germanotta, Marco; Di Sipio, Enrica; Imbimbo, Isabella; Pacilli, Alessandra; Erra, Carmen; Petracca, Martina; Del Prete, Zaccaria; Bentivoglio, Anna Rita; Padua, Luca

    2018-01-01

    Monitoring gait quality in daily activities through wearable sensors has the potential to improve medical assessment in Parkinson’s Disease (PD). In this study, four gait partitioning methods, two based on thresholds and two based on a machine learning approach, considering the four-phase model, were compared. The methods were tested on 26 PD patients, both in OFF and ON levodopa conditions, and 11 healthy subjects, during walking tasks. All subjects were equipped with inertial sensors placed on the feet. Force resistive sensors were used to assess the reference time sequence of gait phases. The Goodness Index (G) was evaluated to assess accuracy in gait phase estimation. A novel synthetic index called the Gait Phase Quality Index (GPQI) was proposed for gait quality assessment. Results revealed optimum performance (G < 0.25) for three tested methods and good performance (0.25 < G < 0.70) for one threshold method. The GPQI was significantly higher in PD patients than in healthy subjects, showing a moderate correlation with clinical scale scores. Furthermore, in patients with severe gait impairment, GPQI was higher in the OFF state than in the ON state. Our results demonstrate the feasibility of monitoring gait quality in PD through real-time gait partitioning based on wearable sensors. PMID:29558410
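
    A minimal sketch of four-phase gait partitioning from heel and toe force signals follows: heel-only contact is labeled heel strike, heel-plus-toe contact flat foot, toe-only contact push-off, and no contact swing. The contact threshold and the synthetic signals are placeholders, not the study's sensor configuration or its threshold/machine-learning algorithms.

```python
# Four-phase gait partitioning from heel/toe load signals (illustrative only).
import numpy as np

def gait_phases(heel_force, toe_force, on_threshold=5.0):
    heel_on = heel_force > on_threshold
    toe_on = toe_force > on_threshold
    phases = np.empty(heel_force.shape, dtype=object)
    phases[heel_on & ~toe_on] = "heel strike"
    phases[heel_on & toe_on] = "flat foot"
    phases[~heel_on & toe_on] = "push-off"
    phases[~heel_on & ~toe_on] = "swing"
    return phases

t = np.arange(0, 2, 0.01)                                       # two seconds at 100 Hz
heel = 50 * np.clip(np.sin(2 * np.pi * 1 * t), 0, None)         # synthetic heel load
toe = 50 * np.clip(np.sin(2 * np.pi * 1 * t - 0.8), 0, None)    # delayed toe load
print(gait_phases(heel, toe)[:10])
```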

  16. Pure-tone audiometry outside a sound booth using earphone attenuation, integrated noise monitoring, and automation.

    PubMed

    Swanepoel, De Wet; Matthysen, Cornelia; Eikelboom, Robert H; Clark, Jackie L; Hall, James W

    2015-01-01

    Accessibility of audiometry is hindered by the cost of sound booths and shortage of hearing health personnel. This study investigated the validity of an automated mobile diagnostic audiometer with increased attenuation and real-time noise monitoring for clinical testing outside a sound booth. Attenuation characteristics and reference ambient noise levels for the computer-based audiometer (KUDUwave) were evaluated alongside the validity of environmental noise monitoring. Clinical validity was determined by comparing air- and bone-conduction thresholds obtained inside and outside the sound booth (23 subjects). Twenty-three normal-hearing subjects (age range, 20-75 years; average age 35.5) participated, and a subgroup of 11 subjects was retested to establish test-retest reliability. Improved passive attenuation and valid environmental noise monitoring were demonstrated. Clinically, air-conduction thresholds inside and outside the sound booth corresponded within 5 dB or less in more than 90% of instances (mean absolute difference 3.3 ± 3.2 SD). Bone-conduction thresholds corresponded within 5 dB or less in 80% of comparisons between test environments, with a mean absolute difference of 4.6 dB (3.7 SD). Threshold differences were not statistically significant. Mean absolute test-retest differences outside the sound booth were similar to those in the booth. Diagnostic pure-tone audiometry outside a sound booth, using automated testing, improved passive attenuation, and real-time environmental noise monitoring demonstrated reliable hearing assessments.

  17. Precision Voltage Referencing Techniques in MOS Technology.

    NASA Astrophysics Data System (ADS)

    Song, Bang-Sup

    With the increasing complexity of functions on a single MOS chip, precision analog circuits implemented in the same technology are in great demand so that they can be integrated with digital circuits. The future development of MOS data acquisition systems will require precision on-chip MOS voltage references. This dissertation examines the two most promising configurations of on-chip voltage references in both NMOS and CMOS technologies. In NMOS, an ion-implantation effect on the temperature behavior of MOS devices is investigated to identify the fundamental limiting factors of a threshold voltage difference as an NMOS voltage source. For this kind of voltage reference, temperature stability on the order of 20 ppm/°C is achievable with a shallow single-threshold implant and a low-current, high-body bias operation. In CMOS, a monolithic prototype bandgap reference is designed, fabricated and tested which embodies curvature compensation and exhibits minimized sensitivity to process parameter variation. Experimental results imply that an average temperature stability on the order of 10 ppm/°C with a production spread of less than 10 ppm/°C is feasible over the commercial temperature range.

  18. "Mind the gap!" Evaluation of the performance gap attributable to exception reporting and target thresholds in the new GMS contract: National database analysis.

    PubMed

    Fleetcroft, Robert; Steel, Nicholas; Cookson, Richard; Howe, Amanda

    2008-06-17

    The 2003 revision of the UK GMS contract rewards general practices for performance against clinical quality indicators. Practices can exempt patients from treatment, and can receive maximum payment for less than full coverage of eligible patients. This paper aims to estimate the gap between the percentage of maximum incentive gained and the percentage of patients receiving indicated care (the pay-performance gap), and to estimate how much of the gap is attributable respectively to thresholds and to exception reporting. Analysis of Quality and Outcomes Framework data in the National Primary Care Database and exception reporting data from the Information Centre from 8407 practices in England in 2005-6. The main outcome measures were the gap between the percentage of maximum incentive gained and the percentage of patients receiving indicated care at the practice level, both for individual indicators and a combined composite score. An additional outcome was the percentage of that gap attributable respectively to exception reporting and maximum threshold targets set at less than 100%. The mean pay-performance gap for the 65 aggregated clinical indicators was 13.3% (range 2.9% to 48%). 52% of this gap (6.9% of eligible patients) is attributable to thresholds being set at less than 100%, and 48% to patients being exception reported. The gap was greater than 25% in 9 indicators: beta blockers and cholesterol control in heart disease; cholesterol control in stroke; influenza immunization in asthma; blood pressure, sugar and cholesterol control in diabetes; seizures in epilepsy and treatment of hypertension. Threshold targets and exception reporting introduce an incentive ceiling, which substantially reduces the percentage of eligible patients that UK practices need to treat in order to receive maximum incentive payments for delivering that care. There are good clinical reasons for exception reporting, but after unsuitable patients have been exempted from treatment, there is no reason why all maximum thresholds should not be 100%, whilst retaining the current lower thresholds to provide incentives for lower performing practices.
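
    The arithmetic behind the pay-performance gap for a single indicator can be sketched as follows: if maximum payment is reached at a treatment rate of max_threshold among non-exempted patients, then the share of all eligible patients treated at the point of maximum payment is max_threshold × (1 − exception_rate), and the remaining gap splits into a threshold component and an exception component. This decomposition is an illustration of the idea under simplifying assumptions, not the paper's exact method.

```python
# Rough pay-performance gap decomposition for one indicator (illustrative).
def pay_performance_gap(max_threshold=0.90, exception_rate=0.07):
    treated_share = max_threshold * (1.0 - exception_rate)  # of all eligible patients
    gap = 1.0 - treated_share
    threshold_component = 1.0 - max_threshold                # gap from ceiling < 100%
    exception_component = max_threshold * exception_rate     # gap from exempted patients
    return gap, threshold_component, exception_component

gap, from_threshold, from_exceptions = pay_performance_gap()
print(f"gap={gap:.1%} (threshold: {from_threshold:.1%}, exceptions: {from_exceptions:.1%})")
```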

  19. Disruption in the relationship between blood pressure and salty taste thresholds among overweight and obese children

    PubMed Central

    Bobowski, Nuala K.

    2015-01-01

    Background Prevalence of high blood pressure (BP) among American children has increased over the past two decades, due in part to increasing rates of obesity and excessive dietary salt intake. Objective We tested the hypotheses that the relationships among BP, salty taste sensitivity, and salt intake differ between normal-weight and overweight/obese children. Design In an observational study, sodium chloride (NaCl) and monosodium glutamate (MSG) taste detection thresholds were measured using the Monell two-alternative, forced-choice, paired-comparison tracking method. Weight and BP were measured, and salt intake was determined by 24-hour dietary recall. Participants/Setting Eight- to 14-year-olds (N=97; 52% overweight or obese) from the Philadelphia area completed anthropometrics and BP measurements; 97% completed one or both thresholds. Seventy-six percent provided valid dietary recall data. Testing was completed between December 2011 and August 2012. Main outcome measures NaCl and MSG detection thresholds, BP, and dietary salt intake. Statistical analyses Outcome measures were compared between normal-weight and overweight/obese children with t-tests. Relationships among outcome measures within groups were examined with Pearson correlations, and multiple regression analysis was used to examine the relationship between BP and thresholds, controlling for age, BMI-Z score, and dietary salt intake. Results Salt and MSG thresholds were positively correlated (r(71)=0.30, p=0.01) and did not differ between body-weight groups (p>0.20). Controlling for age, BMI-Z score, and salt intake, systolic BP was associated with NaCl thresholds among normal-weight children (p=0.01), but not among overweight/obese children. All children consumed excess salt (>8 g/day). Grain and meat products were the primary source of dietary sodium. Conclusions The apparent disruption in the relationship between salty taste response and BP among overweight/obese children suggests the relationship may be influenced by body weight. Further research is warranted to explore this relationship as a potential measure to prevent development of hypertension. PMID:25843808

  20. Exploring childhood lead exposure through GIS: a review of the recent literature.

    PubMed

    Akkus, Cem; Ozdenerol, Esra

    2014-06-18

    Childhood exposure to lead remains a critical health control problem in the US. Integration of Geographic Information Systems (GIS) into childhood lead exposure studies has significantly enhanced the identification of lead hazards in the environment and of at-risk children. Research indicates that the toxic threshold for lead exposure was updated three times in the last four decades: 60 to 30 micrograms per deciliter (µg/dL) in 1975, 25 µg/dL in 1985, and 10 µg/dL in 1991. These changes revealed the extent of lead poisoning. By 2012 it was evident that no safe blood lead threshold for the adverse effects of lead on children had been identified and the Centers for Disease Control and Prevention (CDC) currently uses a reference value of 5 µg/dL. Review of the recent literature on GIS-based studies suggests that numerous environmental risk factors might be critical for lead exposure. New GIS-based studies are used in surveillance data management, risk analysis, lead exposure visualization, and community intervention strategies where geographically-targeted, specific intervention measures are taken.

  1. Exploring Childhood Lead Exposure through GIS: A Review of the Recent Literature

    PubMed Central

    Akkus, Cem; Ozdenerol, Esra

    2014-01-01

    Childhood exposure to lead remains a critical health control problem in the US. Integration of Geographic Information Systems (GIS) into childhood lead exposure studies has significantly enhanced the identification of lead hazards in the environment and of at-risk children. Research indicates that the toxic threshold for lead exposure was updated three times in the last four decades: 60 to 30 micrograms per deciliter (µg/dL) in 1975, 25 µg/dL in 1985, and 10 µg/dL in 1991. These changes revealed the extent of lead poisoning. By 2012 it was evident that no safe blood lead threshold for the adverse effects of lead on children had been identified and the Centers for Disease Control and Prevention (CDC) currently uses a reference value of 5 µg/dL. Review of the recent literature on GIS-based studies suggests that numerous environmental risk factors might be critical for lead exposure. New GIS-based studies are used in surveillance data management, risk analysis, lead exposure visualization, and community intervention strategies where geographically-targeted, specific intervention measures are taken. PMID:24945189

  2. Extreme umbilical cord lengths, cord knot and entanglement: Risk factors and risk of adverse outcomes, a population-based study

    PubMed Central

    Kessler, Jörg

    2018-01-01

    Objectives To determine risk factors for short and long umbilical cord, entanglement and knot. Explore their associated risks of adverse maternal and perinatal outcome, including risk of recurrence in a subsequent pregnancy. To provide population-based, gestational age-, sex- and parity-specific reference ranges for cord length. Design Population based registry study. Setting Medical Birth Registry of Norway 1999–2013. Population All singleton births (gestational age >22 weeks and <45 weeks) (n = 856 300). Methods Descriptive statistics and odds ratios of risk factors for extreme cord length and adverse outcomes based on logistic regression adjusted for confounders. Main outcome measures Short or long cord (<10th or >90th percentile), cord knot and entanglement, adverse pregnancy outcomes including perinatal and intrauterine death. Results Increasing parity, maternal height and body mass index, and diabetes were associated with increased risk of a long cord. Large placental and birth weight, and fetal male sex were risk factors for a long cord, which in turn was associated with a doubled risk of intrauterine and perinatal death, and increased risk of adverse neonatal outcome. Anomalous cord insertion, female sex, and a small placenta were associated with a short cord, which was associated with increased risk of fetal malformations, placental complications, caesarean delivery, non-cephalic presentation, perinatal and intrauterine death. At term, cord knot was associated with a quadrupled risk of perinatal death. The combination of a cord knot and entanglement had a more than additive effect on the association with perinatal death. There was a more than doubled risk of recurrence of a long or short cord, knot and entanglement in a subsequent pregnancy of the same woman. Conclusion Cord length is influenced both by maternal and fetal factors, and there is increased risk of recurrence. Extreme cord length, entanglement and cord knot are associated with increased risk of adverse outcomes including perinatal death. We provide population-based reference ranges for umbilical cord length. PMID:29584790

  3. Segmentation Approach Towards Phase-Contrast Microscopic Images of Activated Sludge to Monitor the Wastewater Treatment.

    PubMed

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Lai, Koon Chun

    2017-12-01

    Image processing and analysis is an effective tool for monitoring and fault diagnosis of activated sludge (AS) wastewater treatment plants. AS images comprise flocs (microbial aggregates) and filamentous bacteria. In this paper, nine different approaches are proposed for image segmentation of phase-contrast microscopic (PCM) images of AS samples. The proposed strategies are assessed for their effectiveness from the perspective of microscopic artifacts associated with PCM. The first approach is based on the idea that color space representations other than red-green-blue may offer better contrast. The second uses an edge detection approach. The third strategy employs a clustering algorithm for the segmentation, and the fourth applies local adaptive thresholding. The fifth technique is based on texture-based segmentation and the sixth uses the watershed algorithm. The seventh adopts a split-and-merge approach. The eighth employs Kittler's thresholding. Finally, the ninth uses a top-hat and bottom-hat filtering-based technique. The approaches are assessed and analyzed critically with reference to the artifacts of PCM. Gold approximations of ground truth images are prepared to assess the segmentations. Overall, the edge detection-based approach exhibits the best results in terms of accuracy, and the texture-based algorithm in terms of false negative ratio. The respective scenarios are explained for suitability of edge detection and texture-based algorithms.
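
    As a flavor of the edge-detection family that performed best here, the sketch below runs Canny edge detection followed by morphological closing and hole filling on a synthetic image (using scikit-image). The test image, sigma, structuring element and size filter are placeholders and do not reproduce the paper's pipeline for phase-contrast sludge images.

```python
# Edge-detection based segmentation sketch: Canny edges -> closing -> fill holes.
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, morphology

rng = np.random.default_rng(4)
image = rng.normal(0.5, 0.05, (256, 256))
image[80:170, 90:180] += 0.3                      # synthetic bright "floc"

edges = feature.canny(image, sigma=2.0)           # detect object boundaries
closed = morphology.binary_closing(edges, morphology.disk(3))
mask = ndi.binary_fill_holes(closed)              # fill enclosed regions
mask = morphology.remove_small_objects(mask, min_size=64)

print(f"segmented foreground fraction: {mask.mean():.3f}")
```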

  4. Randomized, Prospective, Three-Arm Study to Confirm the Auditory Safety and Efficacy of Artemether-Lumefantrine in Colombian Patients with Uncomplicated Plasmodium falciparum Malaria

    PubMed Central

    Carrasquilla, Gabriel; Barón, Clemencia; Monsell, Edwin M.; Cousin, Marc; Walter, Verena; Lefèvre, Gilbert; Sander, Oliver; Fisher, Laurel M.

    2012-01-01

    The safety of artemether-lumefantrine in patients with acute, uncomplicated Plasmodium falciparum malaria was investigated prospectively using the auditory brainstem response (ABR) and pure-tone thresholds. Secondary outcomes included polymerase chain reaction-corrected cure rates. Patients were randomly assigned in a 3:1:1 ratio to either artemether-lumefantrine (N = 159), atovaquone-proguanil (N = 53), or artesunate-mefloquine (N = 53). The null hypothesis (primary outcome), claiming that the percentage of patients with a baseline to Day-7 ABR Wave III latency increase of > 0.30 msec is ≥ 15% after administration of artemether-lumefantrine, was rejected; 2.6% of patients (95% confidence interval: 0.7–6.6) exceeded 0.30 msec, i.e., significantly below 15% (P < 0.0001). A model-based analysis found no apparent relationship between drug exposure and ABR change. In all three groups, average improvements (2–4 dB) in pure-tone thresholds were observed, and polymerase chain reaction-corrected cure rates were > 95% to Day 42. The results support the continued safe and efficacious use of artemether-lumefantrine in uncomplicated falciparum malaria. PMID:22232454

  5. Randomized, prospective, three-arm study to confirm the auditory safety and efficacy of artemether-lumefantrine in Colombian patients with uncomplicated Plasmodium falciparum malaria.

    PubMed

    Carrasquilla, Gabriel; Barón, Clemencia; Monsell, Edwin M; Cousin, Marc; Walter, Verena; Lefèvre, Gilbert; Sander, Oliver; Fisher, Laurel M

    2012-01-01

    The safety of artemether-lumefantrine in patients with acute, uncomplicated Plasmodium falciparum malaria was investigated prospectively using the auditory brainstem response (ABR) and pure-tone thresholds. Secondary outcomes included polymerase chain reaction-corrected cure rates. Patients were randomly assigned in a 3:1:1 ratio to either artemether-lumefantrine (N = 159), atovaquone-proguanil (N = 53), or artesunate-mefloquine (N = 53). The null hypothesis (primary outcome), claiming that the percentage of patients with a baseline to Day-7 ABR Wave III latency increase of > 0.30 msec is ≥ 15% after administration of artemether-lumefantrine, was rejected; 2.6% of patients (95% confidence interval: 0.7-6.6) exceeded 0.30 msec, i.e., significantly below 15% (P < 0.0001). A model-based analysis found no apparent relationship between drug exposure and ABR change. In all three groups, average improvements (2-4 dB) in pure-tone thresholds were observed, and polymerase chain reaction-corrected cure rates were > 95% to Day 42. The results support the continued safe and efficacious use of artemether-lumefantrine in uncomplicated falciparum malaria.

  6. Resuscitation Outcomes Consortium (ROC) PRIMED cardiac arrest trial methods part 1: rationale and methodology for the impedance threshold device (ITD) protocol.

    PubMed

    Aufderheide, Tom P; Kudenchuk, Peter J; Hedges, Jerris R; Nichol, Graham; Kerber, Richard E; Dorian, Paul; Davis, Daniel P; Idris, Ahamed H; Callaway, Clifton W; Emerson, Scott; Stiell, Ian G; Terndrup, Thomas E

    2008-08-01

    The primary aim of this study is to compare survival to hospital discharge with a modified Rankin score (MRS) ≤ 3 between standard cardiopulmonary resuscitation (CPR) plus an active impedance threshold device (ITD) versus standard CPR plus a sham ITD in patients with out-of-hospital cardiac arrest. Secondary aims are to compare functional status and depression at discharge and at 3 and 6 months post-discharge in survivors. Prospective, double-blind, randomized, controlled, clinical trial. Patients with non-traumatic out-of-hospital cardiac arrest treated by emergency medical services (EMS) providers. EMS systems participating in the Resuscitation Outcomes Consortium. Based on a one-sided significance level of 0.025, power of 0.90, a rate of survival to discharge with MRS ≤ 3 of 5.33% with standard CPR and sham ITD, and two interim analyses, a maximum of 14,742 evaluable patients are needed to detect a rate of 6.69% with standard CPR and active ITD (1.36% absolute survival difference). If the ITD demonstrates the hypothesized improvement in survival, it is estimated that 2700 deaths from cardiac arrest per year would be averted in North America alone.
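
    A quick cross-check of the quoted design parameters can be done with the standard fixed-design, normal-approximation sample-size formula for comparing two proportions. The sketch below uses the rates from the abstract; it is only a simplified check, since the trial's maximum of 14,742 evaluable patients also reflects its group-sequential design with two interim analyses (which this formula ignores).

```python
# Fixed-design sample size for comparing two proportions (normal approximation).
from scipy.stats import norm

def n_per_group(p1, p2, alpha_one_sided=0.025, power=0.90):
    z_a, z_b = norm.ppf(1 - alpha_one_sided), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

n = n_per_group(0.0533, 0.0669)
print(f"~{n:.0f} per group, ~{2 * n:.0f} total (fixed design, no interim analyses)")
```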

  7. Optimization of IVF pregnancy outcomes with donor spermatozoa.

    PubMed

    Wang, Jeff G; Douglas, Nataki C; Prosser, Robert; Kort, Daniel; Choi, Janet M; Sauer, Mark V

    2009-03-01

    To identify risk factors for suboptimal IVF outcomes using insemination with donor spermatozoa and to define a lower threshold that may signal a conversion to fertilization by ICSI rather than insemination. Retrospective, age-matched, case-control study of women undergoing non-donor oocyte IVF cycles using either freshly ejaculated (N=138) or cryopreserved donor spermatozoa (N=69). Associations between method of fertilization, semen sample parameters, and pregnancy rates were analyzed. In vitro fertilization of oocytes with donor spermatozoa by insemination results in equivalent fertilization and pregnancy rates compared to those of freshly ejaculated spermatozoa from men with normal semen analyses when the post-processing motility is greater than or equal to 88%. IVF by insemination with donor spermatozoa when the post-processing motility is less than 88% is associated with a 5-fold reduction in pregnancy rates when compared to those of donor spermatozoa above this motility threshold. When the post-processing donor spermatozoa motility is low, fertilization by ICSI is associated with significantly higher pregnancy rates compared to those of insemination. While ICSI does not need to be categorically instituted when using donor spermatozoa in IVF, patients should be counseled that conversion from insemination to ICSI may be recommended based on low post-processing motility.

  8. Crossing the Threshold: Bringing Biological Variation to the Foreground

    PubMed Central

    Batzli, Janet M.; Knight, Jennifer K.; Hartley, Laurel M.; Maskiewicz, April Cordero; Desy, Elizabeth A.

    2016-01-01

    Threshold concepts have been referred to as “jewels in the curriculum”: concepts that are key to competency in a discipline but not taught explicitly. In biology, researchers have proposed the idea of threshold concepts that include such topics as variation, randomness, uncertainty, and scale. In this essay, we explore how the notion of threshold concepts can be used alongside other frameworks meant to guide instructional and curricular decisions, and we examine the proposed threshold concept of variation and how it might influence students’ understanding of core concepts in biology focused on genetics and evolution. Using dimensions of scientific inquiry, we outline a schema that may allow students to experience and apply the idea of variation in such a way that it transforms their future understanding and learning of genetics and evolution. We encourage others to consider the idea of threshold concepts alongside the Vision and Change core concepts to provide a lens for targeted instruction and as an integrative bridge between concepts and competencies. PMID:27856553

  9. Lowering the hemoglobin threshold for transfusion in coronary artery bypass procedures: effect on patient outcome.

    PubMed

    Bracey, A W; Radovancevic, R; Riggs, S A; Houston, S; Cozart, H; Vaughn, W K; Radovancevic, B; McAllister, H A; Cooley, D A

    1999-10-01

    There is controversy regarding the application of transfusion triggers in cardiac surgery. The goal of this study was to determine if lowering the hemoglobin threshold for red cell (RBC) transfusion to 8 g per dL after coronary artery bypass graft surgery would reduce blood use without adversely affecting patient outcome. Consecutive patients (n = 428) undergoing elective primary coronary artery bypass graft surgery were randomly assigned to two groups: study patients (n = 212) received RBC transfusions in the postoperative period if the Hb level was < 8 g per dL or if predetermined clinical conditions required RBC support, and control patients (n = 216) were treated according to individual physician's orders (hemoglobin levels < 9 g/dL as the institutional guideline). Multiple demographic, procedure-related, transfusion, laboratory, and outcome data were analyzed. Questionnaires were administered for patient self-assessment of fatigue and anemia. Preoperative and operative clinical characteristics, as well as the intraoperative transfusion rate, were similar for both groups. There was a significant difference between the postoperative RBC transfusion rates in study (0.9 +/- 1.5 RBC units) and control (1.4 +/- 1.8 RBC units) groups (p = 0.005). There was no difference in clinical outcome, including morbidity and mortality rates, in the two groups; group scores for self-assessment of fatigue and anemia were also similar. A lower Hb threshold of 8 g per dL does not adversely affect patient outcome. Moreover, RBC resources can be saved without increased risk to the patient.

  10. Direct access compared with referred physical therapy episodes of care: a systematic review.

    PubMed

    Ojha, Heidi A; Snyder, Rachel S; Davenport, Todd E

    2014-01-01

    Evidence suggests that physical therapy through direct access may help decrease costs and improve patient outcomes compared with physical therapy by physician referral. The purpose of this study was to conduct a systematic review of the literature on patients with musculoskeletal injuries and compare health care costs and patient outcomes in episodes of physical therapy by direct access compared with referred physical therapy. Ovid MEDLINE, CINAHL (EBSCO), Web of Science, and PEDro were searched using terms related to physical therapy and direct access. Included articles were hand searched for additional references. Included studies compared data from physical therapy by direct access with physical therapy by physician referral, studying cost, outcomes, or harm. The studies were appraised using the Centre for Evidence-Based Medicine (CEBM) levels of evidence criteria and assigned a methodological score. Of the 1,501 articles that were screened, 8 articles at levels 3 to 4 on the CEBM scale were included. There were statistically significant and clinically meaningful findings across studies that satisfaction and outcomes were superior, and numbers of physical therapy visits, imaging ordered, medications prescribed, and additional non-physical therapy appointments were less in cohorts receiving physical therapy by direct access compared with referred episodes of care. There was no evidence for harm. There is evidence across level 3 and 4 studies (grade B to C CEBM level of recommendation) that physical therapy by direct access compared with referred episodes of care is associated with improved patient outcomes and decreased costs. Primary limitations were lack of group randomization, potential for selection bias, and limited generalizability. Physical therapy by way of direct access may contain health care costs and promote high-quality health care. Third-party payers should consider paying for physical therapy by direct access to decrease health care costs and incentivize optimal patient outcomes.

  11. A Carotenoid Health Index Based on Plasma Carotenoids and Health Outcomes

    PubMed Central

    Donaldson, Michael S.

    2011-01-01

    While there have been many studies on health outcomes that have included measurements of plasma carotenoids, these data have not been reviewed and assembled into a useful form. In this review sixty-two studies of plasma carotenoids and health outcomes, mostly prospective cohort studies or population-based case-control studies, are analyzed together to establish a carotenoid health index. Five cutoff points are established across the percentiles of carotenoid concentrations in populations, from the tenth to ninetieth percentile. The cutoff points (mean ± standard error of the mean) are 1.11 ± 0.08, 1.47 ± 0.08, 1.89 ± 0.08, 2.52 ± 0.13, and 3.07 ± 0.20 µM. For all-cause mortality there seems to be a low threshold effect, with protection above every cutoff point but the lowest. For metabolic syndrome and cancer outcomes, however, significant positive health outcomes tend to appear only above the higher cutoff points, perhaps as a triage effect. Based on these data a carotenoid health index is proposed with risk categories as follows: very high risk: <1 µM, high risk: 1-1.5 µM, moderate risk: 1.5-2.5 µM, low risk: 2.5-4 µM, and very low risk: >4 µM. Over 95 percent of the USA population falls into the moderate or high risk category of the carotenoid health index. PMID:22292108
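    The proposed index amounts to a piecewise classification of total plasma carotenoid concentration. A minimal sketch in Python (purely illustrative; the function name is not from the paper, only the cutoffs are):

        def carotenoid_risk_category(total_carotenoids_uM: float) -> str:
            """Map a total plasma carotenoid concentration (µM) to the proposed risk category."""
            if total_carotenoids_uM < 1.0:
                return "very high risk"
            if total_carotenoids_uM < 1.5:
                return "high risk"
            if total_carotenoids_uM < 2.5:
                return "moderate risk"
            if total_carotenoids_uM <= 4.0:
                return "low risk"
            return "very low risk"

        print(carotenoid_risk_category(1.89))   # -> "moderate risk"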

  12. Ventilatory thresholds determined from HRV: comparison of 2 methods in obese adolescents.

    PubMed

    Quinart, S; Mourot, L; Nègre, V; Simon-Rigaud, M-L; Nicolet-Guénat, M; Bertrand, A-M; Meneveau, N; Mougin, F

    2014-03-01

    The development of personalised training programmes is crucial in the management of obesity. We evaluated the ability of 2 heart rate variability analyses to determine ventilatory thresholds (VT) in obese adolescents. 20 adolescents (mean age 14.3±1.6 years and body mass index z-score 4.2±0.1) performed an incremental test to exhaustion before and after a 9-month multidisciplinary management programme. The first (VT1) and second (VT2) ventilatory thresholds were identified by the reference method (gas exchanges). We recorded RR intervals to estimate VT1 and VT2 from heart rate variability using time-domain analysis and time-varying spectral-domain analysis. The correlation coefficients between thresholds were higher with spectral-domain analysis than with time-domain analysis (heart rate at VT1: r=0.91 vs. 0.66 and VT2: r=0.91 vs. 0.66; power at VT1: r=0.91 vs. 0.74 and VT2: r=0.93 vs. 0.78; spectral-domain vs. time-domain analysis, respectively). No systematic bias in heart rate at VT1 and VT2 was found, with standard deviations <6 bpm, confirming that spectral-domain analysis could replace the reference method for the detection of ventilatory thresholds. Furthermore, this technique is sensitive to rehabilitation and re-training, which underlines its utility in clinical practice. This inexpensive and non-invasive tool is promising for prescribing physical activity programs in obese adolescents. © Georg Thieme Verlag KG Stuttgart · New York.

  13. Changes in ecosystem resilience detected in automated measures of ecosystem metabolism during a whole-lake manipulation

    PubMed Central

    Batt, Ryan D.; Carpenter, Stephen R.; Cole, Jonathan J.; Pace, Michael L.; Johnson, Robert A.

    2013-01-01

    Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems. PMID:24101479

  14. Changes in ecosystem resilience detected in automated measures of ecosystem metabolism during a whole-lake manipulation.

    PubMed

    Batt, Ryan D; Carpenter, Stephen R; Cole, Jonathan J; Pace, Michael L; Johnson, Robert A

    2013-10-22

    Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems.

  15. Gluten and celiac disease--an immunological perspective.

    PubMed

    Rallabhandi, Prasad

    2012-01-01

    Gluten, a complex protein group in wheat, rye, and barley, causes celiac disease (CD), an autoimmune enteropathy of the small intestine, in genetically susceptible individuals. CD affects about 1% of the general population and causes significant health problems. Adverse inflammatory reactions to gluten are mediated by inappropriate T-cell activation leading to severe damage of the gastrointestinal mucosa, causing atrophy of absorptive surface villi. Gluten peptides bind to the chemokine receptor, CXCR3, and induce release of zonulin, which mediates tight-junction disassembly and subsequent increase in intestinal permeability. Proinflammatory cytokine IL-15 also contributes to the pathology of CD, by driving the expansion of intra-epithelial lymphocytes that damage the epithelium and promote the onset of T-cell lymphomas. There is no cure or treatment for CD, except for avoiding dietary gluten. Current gluten thresholds for food labeling have been established based on the available analytical methods, which show variation in gluten detection and quantification. Also, the clinical heterogeneity of celiac patients poses difficulty in defining clinically acceptable gluten thresholds in gluten-free foods. Presently, there is no bioassay available to measure gluten-induced immunobiological responses. This review focuses on various aspects of CD, and the importance of gluten thresholds and reference material from an immunological perspective.

  16. Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation.

    PubMed

    Nesti, Alessandro; de Winkel, Ksander; Bülthoff, Heinrich H

    2017-01-01

    While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing us to complete crucial tasks such as maintaining balance. However, little is known about how the duration of the motion stimuli influences our performance in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performance. Observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence.

  17. Cost-Savings Analysis of Renal Scintigraphy, Stratified by Renal Function Thresholds: Mercaptoacetyltriglycine Versus Diethylene Triamine Penta-Acetic Acid.

    PubMed

    Parikh, Kushal R; Davenport, Matthew S; Viglianti, Benjamin L; Hubers, David; Brown, Richard K J

    2016-07-01

    To determine the financial implications of switching technetium (Tc)-99m mercaptoacetyltriglycine (MAG-3) to Tc-99m diethylene triamine penta-acetic acid (DTPA) at certain renal function thresholds before renal scintigraphy. Institutional review board approval was obtained, and informed consent was waived for this HIPAA-compliant, retrospective, cohort study. Consecutive adult subjects (27 inpatients; 124 outpatients) who underwent MAG-3 renal scintigraphy, in the period from July 1, 2012 to June 30, 2013, were stratified retrospectively by hypothetical serum creatinine and estimated glomerular filtration rate (eGFR) thresholds, based on pre-procedure renal function. Thresholds were used to estimate the financial effects of using MAG-3 when renal function was at or worse than a given cutoff value, and DTPA otherwise. Cost analysis was performed with consideration of raw material and preparation costs, with radiotracer costs estimated by both vendor list pricing and proprietary institutional pricing. The primary outcome was a comparison of each hypothetical threshold to the clinical reality in which all subjects received MAG-3, and the results were supported by univariate sensitivity analysis. Annual cost savings by serum creatinine threshold were as follows (threshold given in mg/dL): $17,319 if ≥1.0; $33,015 if ≥1.5; and $35,180 if ≥2.0. Annual cost savings by eGFR threshold were as follows (threshold given in mL/min/1.73 m(2)): $21,649 if ≤60; $28,414 if ≤45; and $32,744 if ≤30. Cost-savings inflection points were approximately 1.25 mg/dL (serum creatinine) and 60 mL/min/1.73m(2) (eGFR). Secondary analysis by proprietary institutional pricing revealed similar trends, and cost savings of similar magnitude. Sensitivity analysis confirmed cost savings at all tested thresholds. Reserving MAG-3 utilization for patients who have impaired renal function can impart substantial annual cost savings to a radiology department. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
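    The cost question reduces to a threshold rule for radiotracer selection. A hedged sketch in Python (the cutoff, the cost values, and the helper names are placeholders; the study evaluates serum creatinine and eGFR criteria separately):

        def select_radiotracer(creatinine_mg_dl: float, cutoff_mg_dl: float = 1.5) -> str:
            """Reserve MAG-3 for renal function at or worse than the cutoff; use DTPA otherwise."""
            return "MAG-3" if creatinine_mg_dl >= cutoff_mg_dl else "DTPA"

        def annual_savings(creatinine_values, cost_mag3=400.0, cost_dtpa=100.0):
            """Savings versus giving every patient MAG-3 (costs are hypothetical)."""
            n_dtpa = sum(1 for cr in creatinine_values
                         if select_radiotracer(cr) == "DTPA")
            return n_dtpa * (cost_mag3 - cost_dtpa)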

  18. An Image Segmentation System Based on Thresholding.

    DTIC Science & Technology

    1978-12-01

    (Abstract not legible: the scanned excerpt is OCR-garbled. The recoverable fragments refer to two types of thresholding errors, with examples provided in Figure 2 of the report.)

  19. Accuracy of Cochlear Implant Recipients on Speech Reception in Background Music

    PubMed Central

    Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia

    2012-01-01

    Objectives This study (a) examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of three contrasting types of background music, and (b) compared performance based upon listener groups: CI recipients using conventional long-electrode (LE) devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing (NH) adults. Methods We tested 154 LE CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 NH adults on closed-set recognition of spondees presented in three contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Outcomes Signal-to-noise thresholds for speech in music (SRTM) were examined in relation to measures of speech recognition in background noise and multi-talker babble, pitch perception, and music experience. Results SRTM thresholds varied as a function of category of background music, group membership (LE, Hybrid, NH), and age. Thresholds for speech in background music were significantly correlated with measures of pitch perception and speech in background noise thresholds; auditory status was an important predictor. Conclusions Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music. PMID:23342550

  20. Threshold Laws for Two-Electron Ejection Processes: A Still Controversial Problem in Atomic Physics

    NASA Technical Reports Server (NTRS)

    Temkin, Aaron

    2003-01-01

    This talk deals with collision processes of the following kind: (a) an ionizing collision of an electron with a neutral atom, (b) a photon incident on a negative ion resulting in two-electron ejection. In both cases the final state is a positive ion and two outgoing electrons, and in principle both processes should be governed by the same form of threshold law. It is generally conceded that this is one of the most difficult basic problems in nonrelativistic quantum mechanics. The standard treatment (due to Wannier) will be briefly reviewed in terms of the derivation of his well-known threshold law for the yield (Q) of positive ions vs. the excess energy (E): Q_W ∝ E^1.127.... The derivation is a brilliant analysis based on Newton's equations, leading to the dominance of events in which the two electrons emerge on opposite sides of the residual ion with similar energies. In contrast, I will argue on the basis of quantum mechanical ideas that in the threshold limit the more likely outcomes are events in which the electrons emerge with decidedly different energies, leading to a formally different (Coulomb-dipole) threshold law: Q_CD ∝ E [1 + C sin(α ln E + μ)] / (ln E)^2. Additional aspects of that approach will be discussed. Some experimental results will be presented, and more incisive predictions involving polarized projectiles and targets will be given.

  1. Robust Adaptive Thresholder For Document Scanning Applications

    NASA Astrophysics Data System (ADS)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. The algorithm produces high-quality binary images from different types of simulated test patterns. The software algorithm is described and experimental results are presented to illustrate the procedure. Results also show that the techniques described here can be used for real-time signal processing in a variety of applications.
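    The abstract does not give the exact update rule, so the following Python sketch is only one plausible reading of a memory-type thresholder: running black and white reference levels are updated as pixels are classified, and each pixel is binarized against their midpoint. The exponential-smoothing update is an assumption, not the paper's algorithm.

        import numpy as np

        def adaptive_threshold(scanline, alpha=0.05, init_black=0.0, init_white=255.0):
            """Binarize a scanline with a memory-type local threshold."""
            black, white = init_black, init_white
            out = np.zeros(len(scanline), dtype=np.uint8)
            for i, p in enumerate(scanline):
                thr = (black + white) / 2.0
                if p < thr:                                   # classified as text (dark)
                    out[i] = 0
                    black = (1 - alpha) * black + alpha * p   # update black reference level
                else:                                         # classified as background (light)
                    out[i] = 1
                    white = (1 - alpha) * white + alpha * p   # update white reference level
            return out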

  2. Defining major trauma using the 2008 Abbreviated Injury Scale.

    PubMed

    Palmer, Cameron S; Gabbe, Belinda J; Cameron, Peter A

    2016-01-01

    The Injury Severity Score (ISS) is the most ubiquitous summary score derived from Abbreviated Injury Scale (AIS) data. It is frequently used to classify patients as 'major trauma' using a threshold of ISS >15. However, it is not known whether this is still appropriate, given the changes which have been made to the AIS codeset since this threshold was first used. This study aimed to identify appropriate ISS and New Injury Severity Score (NISS) thresholds for use with the 2008 AIS (AIS08) which predict mortality and in-hospital resource use comparably to ISS >15 using AIS98. Data from 37,760 patients in a state trauma registry were retrieved and reviewed. AIS data coded using the 1998 AIS (AIS98) were mapped to AIS08. ISS and NISS were calculated, and their effects on patient classification compared. The ability of selected ISS and NISS thresholds to predict mortality or high-level in-hospital resource use (the need for ICU or urgent surgery) was assessed. An ISS >12 using AIS08 was similar to an ISS >15 using AIS98 in terms of both the number of patients classified major trauma, and overall major trauma mortality. A 10% mortality level was only seen for ISS 25 or greater. A NISS >15 performed similarly to both of these ISS thresholds. However, the AIS08-based ISS >12 threshold correctly classified significantly more patients than a NISS >15 threshold for all three severity measures assessed. When coding injuries using AIS08, an ISS >12 appears to function similarly to an ISS >15 in AIS98 for the purposes of identifying a population with an elevated risk of death after injury. Where mortality is a primary outcome of trauma monitoring, an ISS >12 threshold could be adopted to identify major trauma patients. Level II evidence--diagnostic tests and criteria. Copyright © 2015 Elsevier Ltd. All rights reserved.
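    For readers unfamiliar with the score, the ISS itself is a simple threshold-friendly quantity: the sum of squares of the highest AIS severity in the three most severely injured ISS body regions (set to 75 if any injury is AIS 6). A minimal Python sketch with the AIS08-based cutoff proposed above (region handling and the helper names are illustrative):

        def injury_severity_score(injuries):
            """injuries: iterable of (body_region, ais_severity) pairs.
            ISS = sum of squares of the highest AIS in the three worst regions;
            any AIS 6 injury sets ISS to 75."""
            if any(sev == 6 for _, sev in injuries):
                return 75
            worst_per_region = {}
            for region, sev in injuries:
                worst_per_region[region] = max(worst_per_region.get(region, 0), sev)
            top3 = sorted(worst_per_region.values(), reverse=True)[:3]
            return sum(s * s for s in top3)

        def is_major_trauma(iss, ais_version="AIS08"):
            """ISS > 12 under AIS08 was found comparable to ISS > 15 under AIS98."""
            return iss > 12 if ais_version == "AIS08" else iss > 15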

  3. The benefits and tradeoffs for varied high-severity injury risk thresholds for advanced automatic crash notification systems.

    PubMed

    Bahouth, George; Graygo, Jill; Digges, Kennerly; Schulman, Carl; Baur, Peter

    2014-01-01

    The objectives of this study are to (1) characterize the population of crashes meeting the Centers for Disease Control and Prevention (CDC)-recommended 20% risk of Injury Severity Score (ISS)>15 injury and (2) explore the positive and negative effects of an advanced automatic crash notification (AACN) system whose threshold for high-risk indications is 10% versus 20%. Binary logistic regression analysis was performed to predict the occurrence of motor vehicle crash injuries at both the ISS>15 and Maximum Abbreviated Injury Scale (MAIS) 3+ level. Models were trained using crash characteristics recommended by the CDC Committee on Advanced Automatic Collision Notification and Triage of the Injured Patient. Each model was used to assign the probability of severe injury (defined as MAIS 3+ or ISS>15 injury) to a subset of NASS-CDS cases based on crash attributes. Subsequently, actual AIS and ISS levels were compared with the predicted probability of injury to determine the extent to which the seriously injured had corresponding probabilities exceeding the 10% and 20% risk thresholds. Models were developed using an 80% sample of NASS-CDS data from 2002 to 2012 and evaluations were performed using the remaining 20% of cases from the same period. Within the population of seriously injured (i.e., those having one or more AIS 3 or higher injuries), the numbers of occupants whose injury risk did not exceed the 10% and 20% thresholds were estimated to be 11,700 and 18,600 per year, respectively, using the MAIS 3+ injury model. For the ISS>15 model, 8,100 and 11,000 occupants sustained ISS>15 injuries yet their injury probability did not reach the 10% and 20% probability for severe injury, respectively. Conversely, model predictions suggested that, at the 10% and 20% thresholds, 207,700 and 55,400 drivers respectively would be incorrectly flagged as injured when their injuries had not reached the AIS 3 level. For the ISS>15 model, 87,300 and 41,900 drivers would be incorrectly flagged as injured when injury severity had not reached the ISS>15 injury level. This article provides important information comparing the expected positive and negative effects of an AACN system with thresholds at the 10% and 20% levels using 2 outcome metrics. Overall, results suggest that the 20% risk threshold would not provide a useful notification to improve the quality of care for a large number of seriously injured crash victims. At the same time, a lower threshold may increase the overtriage rate. Based on the vehicle damage observed for crashes reaching and exceeding the 10% risk threshold, we anticipate that rescue services would have been deployed based on current Public Safety Answering Point (PSAP) practices.
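    The notification decision described above boils down to comparing a model-predicted probability with a fixed risk threshold. A hedged Python sketch (the feature names and coefficients would come from the fitted model and are hypothetical here, not NASS-CDS estimates):

        import math

        def severe_injury_probability(features, coefficients, intercept):
            """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
            z = intercept + sum(coefficients[name] * value for name, value in features.items())
            return 1.0 / (1.0 + math.exp(-z))

        def flag_for_notification(prob, threshold=0.20):
            """CDC-recommended threshold is 20%; the study also examines 10%."""
            return prob >= threshold

        # Hypothetical crash attributes and coefficients, for illustration only.
        p = severe_injury_probability({"delta_v_kph": 45.0, "belted": 0.0, "rollover": 1.0},
                                      {"delta_v_kph": 0.05, "belted": -1.2, "rollover": 1.1},
                                      intercept=-4.0)
        print(p, flag_for_notification(p, threshold=0.10))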

  4. The prognostic value of standardized reference values for speckle-tracking global longitudinal strain in hypertrophic cardiomyopathy.

    PubMed

    Hartlage, Gregory R; Kim, Jonathan H; Strickland, Patrick T; Cheng, Alan C; Ghasemzadeh, Nima; Pernetz, Maria A; Clements, Stephen D; Williams, B Robinson

    2015-03-01

    Speckle-tracking left ventricular global longitudinal strain (GLS) assessment may provide substantial prognostic information for hypertrophic cardiomyopathy (HCM) patients. Reference values for GLS have been recently published. We aimed to evaluate the prognostic value of standardized reference values for GLS in HCM patients. An analysis of HCM clinic patients who underwent GLS was performed. GLS was defined as normal (more negative or equal to -16%) and abnormal (less negative than -16%) based on recently published reference values. Patients were followed for a composite of events including heart failure hospitalization, sustained ventricular arrhythmia, and all-cause death. The power of GLS to predict outcomes was assessed relative to traditional clinical and echocardiographic variables present in HCM. 79 HCM patients were followed for a median of 22 months (interquartile range 9-30 months) after imaging. During follow-up, 15 patients (19%) met the primary outcome. Abnormal GLS was the only echocardiographic variable independently predictive of the primary outcome [multivariate Hazard ratio 5.05 (95% confidence interval 1.09-23.4, p = 0.038)]. When combined with traditional clinical variables, abnormal GLS remained independently predictive of the primary outcome [multivariate Hazard ratio 5.31 (95 % confidence interval 1.18-24, p = 0.030)]. In a model including the strongest clinical and echocardiographic predictors of the primary outcome, abnormal GLS demonstrated significant incremental benefit for risk stratification [net reclassification improvement 0.75 (95 % confidence interval 0.21-1.23, p < 0.0001)]. Abnormal GLS is an independent predictor of adverse outcomes in HCM patients. Standardized use of GLS may provide significant incremental value over traditional variables for risk stratification.

  5. Substance use disorder and risk of suicidal ideation, suicide attempt and suicide death: a meta-analysis.

    PubMed

    Poorolajal, Jalal; Haghtalab, Tahereh; Farhadi, Mehran; Darvishi, Nahid

    2016-09-01

    This meta-analysis addressed the association between substance use disorder (SUD) and suicide outcomes based on current evidence. We searched PubMed, Web of Science and Scopus until May 2015. We also searched the reference lists of included studies and the PsycINFO website. We included observational (cohort, case-control, cross-sectional) studies addressing the association between SUD and suicide. Our outcomes of interest were suicide ideation, suicide attempt and suicide death. For each outcome, we calculated the odds ratio (OR) or risk ratio (RR) with 95% confidence intervals (CI) based on the random-effects model. We identified a total of 12 413 references and included 43 studies with 870 967 participants. There was a significant association between SUD and suicidal ideation: OR 2.04 (95% CI: 1.59, 2.50; I² = 88.8%, 16 studies); suicide attempt: OR 2.49 (95% CI: 2.00, 2.98; I² = 94.3%, 24 studies); and suicide death: OR 1.49 (95% CI: 0.97, 2.00; I² = 82.7%, 7 studies). Based on current evidence, there is a strong association between SUD and suicide outcomes. However, evidence based on long-term prospective cohort studies is limited and needs further investigation. Moreover, further evidence is required to assess and compare the association between suicide outcomes and different types of illicit drugs, dose-response relationship and the way they are used. © The Author 2015. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Technical Note: An operational landslide early warning system at regional scale based on space-time variable rainfall thresholds

    NASA Astrophysics Data System (ADS)

    Segoni, S.; Battistini, A.; Rossi, G.; Rosi, A.; Lagomarsino, D.; Catani, F.; Moretti, S.; Casagli, N.

    2014-10-01

    We set up an early warning system for rainfall-induced landslides in Tuscany (23 000 km²). The system is based on a set of state-of-the-art intensity-duration rainfall thresholds (Segoni et al., 2014b), makes use of LAMI rainfall forecasts and real-time rainfall data provided by an automated network of more than 300 rain-gauges. The system was implemented in a WebGIS to ease the operational use in civil protection procedures: it is simple and intuitive to consult and it provides different outputs. Switching among different views, the system is able to focus both on monitoring of real-time data and on forecasting at different lead times up to 48 h. Moreover, the system can switch between a very straightforward view, where a synoptic scenario of the hazard is shown over the whole region, and a more in-depth view, where the rainfall paths of individual rain-gauges can be displayed and constantly compared with the rainfall thresholds. To better account for the high spatial variability of the physical features, which affects the relationship between rainfall and landslides, the region is subdivided into 25 alert zones, each provided with a specific threshold. The warning system reflects this subdivision: using a network of 332 rain gauges, it allows monitoring each alert zone separately, and warnings can be issued independently from one alert zone to another. An important feature of the warning system is the use of thresholds that may vary in time, adapting to the conditions of the rainfall path recorded by the rain-gauges. Depending on when the starting time of the rainfall event is set, the comparison with the threshold may produce different outcomes. Therefore, a recursive algorithm was developed to check all possible starting times against the thresholds, highlighting the worst-case scenario and showing in the WebGIS interface at what time and by how much the rainfall path has exceeded or will exceed the most critical threshold. Besides forecasting and monitoring the hazard scenario over the whole region with hazard levels differentiated for 25 distinct alert zones, the system can be used to gather, analyze, visualize, explore, interpret and store rainfall data, thus representing a potential support to both decision makers and scientists.
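    The scan over candidate starting times can be sketched as follows (Python; the power-law threshold form I = a·D^b, the coefficient values, and the hourly-series layout are assumptions, and the operational system uses one calibrated threshold per alert zone):

        def worst_exceedance(hourly_rain_mm, a, b):
            """Return (start_index, ratio) maximizing mean intensity / threshold
            over all possible event starting times, for a series ending now.
            A ratio above 1 means the most critical threshold is exceeded."""
            n = len(hourly_rain_mm)
            worst = (None, 0.0)
            for start in range(n):
                duration_h = n - start
                intensity = sum(hourly_rain_mm[start:]) / duration_h   # mm/h
                threshold = a * duration_h ** b                        # mm/h
                ratio = intensity / threshold
                if ratio > worst[1]:
                    worst = (start, ratio)
            return worst

        # Example with made-up numbers: 12 h of rain, threshold I = 20 * D^(-0.6).
        print(worst_exceedance([0, 0, 2, 5, 8, 10, 12, 9, 6, 4, 3, 2], a=20.0, b=-0.6))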

  7. Total knee replacement plus physical and medical therapy or treatment with physical and medical therapy alone: a randomised controlled trial in patients with knee osteoarthritis (the MEDIC-study).

    PubMed

    Skou, Soren T; Roos, Ewa M; Laursen, Mogens B; Rathleff, Michael S; Arendt-Nielsen, Lars; Simonsen, Ole H; Rasmussen, Sten

    2012-05-09

    There is a lack of high quality evidence concerning the efficacy of total knee arthroplasty (TKA). According to international evidence-based guidelines, treatment of knee osteoarthritis (KOA) should include patient education, exercise and weight loss. Insoles and pharmacological treatment can be included as supplementary treatments. If the combination of these non-surgical treatment modalities is ineffective, TKA may be indicated. The purpose of this randomised controlled trial is to examine whether TKA provides further improvement in pain, function and quality of life in addition to optimised non-surgical treatment in patients with KOA defined as definite radiographic OA and up to moderate pain. The study will be conducted in The North Denmark Region. 100 participants with radiographic KOA (K-L grade ≥2) and mean pain during the previous week of ≤ 60 mm (0-100, best to worst scale) who are considered eligible for TKA by an orthopaedic surgeon will be included. The treatment will consist of 12 weeks of optimised non-surgical treatment consisting of patient education, exercise, diet, insoles, analgesics and/or NSAIDs. Patients will be randomised to either receiving or not receiving a TKA in addition to the optimised non-surgical treatment. The primary outcome will be the change from baseline to 12 months on the Knee Injury and Osteoarthritis Outcome Score (KOOS)(4) defined as the average score for the subscale scores for pain, symptoms, activities of daily living, and quality of life. Secondary outcomes include the five individual KOOS subscale scores, EQ-5D, pain on a 100 mm Visual Analogue Scale, self-efficacy, pain pressure thresholds, and isometric knee flexion and knee extension strength. This is the first randomised controlled trial to investigate the efficacy of TKA as an adjunct treatment to optimised non-surgical treatment in patients with KOA. The results will significantly contribute to evidence-based recommendations for the treatment of patients with KOA. Clinicaltrials.gov reference: NCT01410409.

  8. Total knee replacement plus physical and medical therapy or treatment with physical and medical therapy alone: a randomised controlled trial in patients with knee osteoarthritis (the MEDIC-study)

    PubMed Central

    2012-01-01

    Background There is a lack of high quality evidence concerning the efficacy of total knee arthroplasty (TKA). According to international evidence-based guidelines, treatment of knee osteoarthritis (KOA) should include patient education, exercise and weight loss. Insoles and pharmacological treatment can be included as supplementary treatments. If the combination of these non-surgical treatment modalities is ineffective, TKA may be indicated. The purpose of this randomised controlled trial is to examine whether TKA provides further improvement in pain, function and quality of life in addition to optimised non-surgical treatment in patients with KOA defined as definite radiographic OA and up to moderate pain. Methods/Design The study will be conducted in The North Denmark Region. 100 participants with radiographic KOA (K-L grade ≥2) and mean pain during the previous week of ≤ 60 mm (0–100, best to worst scale) who are considered eligible for TKA by an orthopaedic surgeon will be included. The treatment will consist of 12 weeks of optimised non-surgical treatment consisting of patient education, exercise, diet, insoles, analgesics and/or NSAIDs. Patients will be randomised to either receiving or not receiving a TKA in addition to the optimised non-surgical treatment. The primary outcome will be the change from baseline to 12 months on the Knee Injury and Osteoarthritis Outcome Score (KOOS)4 defined as the average score for the subscale scores for pain, symptoms, activities of daily living, and quality of life. Secondary outcomes include the five individual KOOS subscale scores, EQ-5D, pain on a 100 mm Visual Analogue Scale, self-efficacy, pain pressure thresholds, and isometric knee flexion and knee extension strength. Discussion This is the first randomised controlled trial to investigate the efficacy of TKA as an adjunct treatment to optimised non-surgical treatment in patients with KOA. The results will significantly contribute to evidence-based recommendations for the treatment of patients with KOA. Trial registration Clinicaltrials.gov reference: NCT01410409 PMID:22571284

  9. The Potential for Spatial Distribution Indices to Signal Thresholds in Marine Fish Biomass

    PubMed Central

    Reuchlin-Hugenholtz, Emilie

    2015-01-01

    The frequently observed positive relationship between fish population abundance and spatial distribution suggests that changes in distribution can be indicative of trends in abundance. If contractions in spatial distribution precede declines in spawning stock biomass (SSB), spatial distribution reference points could complement the SSB reference points that are commonly used in marine conservation biology and fisheries management. When relevant spatial distribution information is integrated into fisheries management and recovery plans, risks and uncertainties associated with a plan based solely on the SSB criterion would be reduced. To assess the added value of spatial distribution data, we examine the relationship between SSB and four metrics of spatial distribution intended to reflect changes in population range, concentration, and density for 10 demersal populations (9 species) inhabiting the Scotian Shelf, Northwest Atlantic. Our primary purpose is to assess their potential to serve as indices of SSB, using fisheries independent survey data. We find that metrics of density offer the best correlate of spawner biomass. A decline in the frequency of encountering high density areas is associated with, and in a few cases preceded by, rapid declines in SSB in 6 of 10 populations. Density-based indices have considerable potential to serve both as an indicator of SSB and as spatially based reference points in fisheries management. PMID:25789624

  10. Dual Processing Model for Medical Decision-Making: An Extension to Diagnostic Testing

    PubMed Central

    Tsalatsanis, Athanasios; Hozo, Iztok; Kumar, Ambuj; Djulbegovic, Benjamin

    2015-01-01

    Dual Processing Theories (DPT) assume that human cognition is governed by two distinct types of processes typically referred to as type 1 (intuitive) and type 2 (deliberative). Based on DPT we have derived a Dual Processing Model (DPM) to describe and explain therapeutic medical decision-making. The DPM model indicates that doctors decide to treat when treatment benefits outweigh its harms, which occurs when the probability of the disease is greater than the so called “threshold probability” at which treatment benefits are equal to treatment harms. Here we extend our work to include a wider class of decision problems that involve diagnostic testing. We illustrate applicability of the proposed model in a typical clinical scenario considering the management of a patient with prostate cancer. To that end, we calculate and compare two types of decision-thresholds: one that adheres to expected utility theory (EUT) and the second according to DPM. Our results showed that the decisions to administer a diagnostic test could be better explained using the DPM threshold. This is because such decisions depend on objective evidence of test/treatment benefits and harms as well as type 1 cognition of benefits and harms, which are not considered under EUT. Given that type 1 processes are unique to each decision-maker, this means that the DPM threshold will vary among different individuals. We also showed that when type 1 processes exclusively dominate decisions, ordering a diagnostic test does not affect a decision; the decision is based on the assessment of benefits and harms of treatment. These findings could explain variations in the treatment and diagnostic patterns documented in today’s clinical practice. PMID:26244571
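    For reference, the expected-utility threshold mentioned above has the standard closed form (treat when the disease probability exceeds the point at which expected treatment benefit and harm balance); the paper's DPM threshold adds type 1 terms to this expression, which are not reproduced here. With B the net benefit of treating a diseased patient and H the net harm of treating a non-diseased one, in LaTeX notation:

        p_t \;=\; \frac{H}{H + B}, \qquad \text{treat when } p > p_t .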

  11. Dual Processing Model for Medical Decision-Making: An Extension to Diagnostic Testing.

    PubMed

    Tsalatsanis, Athanasios; Hozo, Iztok; Kumar, Ambuj; Djulbegovic, Benjamin

    2015-01-01

    Dual Processing Theories (DPT) assume that human cognition is governed by two distinct types of processes typically referred to as type 1 (intuitive) and type 2 (deliberative). Based on DPT we have derived a Dual Processing Model (DPM) to describe and explain therapeutic medical decision-making. The DPM model indicates that doctors decide to treat when treatment benefits outweigh its harms, which occurs when the probability of the disease is greater than the so called "threshold probability" at which treatment benefits are equal to treatment harms. Here we extend our work to include a wider class of decision problems that involve diagnostic testing. We illustrate applicability of the proposed model in a typical clinical scenario considering the management of a patient with prostate cancer. To that end, we calculate and compare two types of decision-thresholds: one that adheres to expected utility theory (EUT) and the second according to DPM. Our results showed that the decisions to administer a diagnostic test could be better explained using the DPM threshold. This is because such decisions depend on objective evidence of test/treatment benefits and harms as well as type 1 cognition of benefits and harms, which are not considered under EUT. Given that type 1 processes are unique to each decision-maker, this means that the DPM threshold will vary among different individuals. We also showed that when type 1 processes exclusively dominate decisions, ordering a diagnostic test does not affect a decision; the decision is based on the assessment of benefits and harms of treatment. These findings could explain variations in the treatment and diagnostic patterns documented in today's clinical practice.

  12. [Simulation on area threshold of urban building land based on water environmental response in watersheds].

    PubMed

    He, Zhi Chao; Huang, Shuo; Guo, Qing Hai; Xiao, Li Shan; Yang, De Wei; Wang, Ying; Yang, Yi Fu

    2016-08-01

    Urban sprawl has an increasing impact on water environment quality in watersheds. Based on water environmental response, the simulation and prediction of the expansion threshold of urban building land can provide an alternative reference for urban construction planning. Taking three watersheds (i.e., Yundang Lake at the complete urbanization phase, Maluan Bay at the peri-urbanization phase and Xinglin Bay at the early urbanization phase) with 2009-2012 observation data as examples, we calculated the upper limit of TN and TP capacity in the three watersheds and identified the threshold value of urban building land in each watershed using the regional nutrient management (ReNuMa) model, and also predicted the water environmental effects associated with changes in urban landscape pattern. Results indicated that the upper limit value of TN was 12900, 42800 and 43120 kg, while that of TP was 340, 420 and 450 kg for the Yundang, Maluan and Xinglin watersheds, respectively. At present, the environmental capacity for pollutants in Yundang Lake is not yet saturated, while the annual pollutant loads in Maluan Bay and Xinglin Bay are close to the upper limit. However, an obvious upward trend of annual TN and TP loads was observed in Xinglin Bay. The annual pollutant load did not exceed the annual upper limit in the three watersheds under Scenario 1, whereas it did under Scenario 3. Under Scenario 2, the annual pollutant load in Yundang Lake remained below saturation, while TN and TP in Maluan Bay were over their limits. The area thresholds of urban building land were 1320, 5600 and 4750 hm² in Yundang Lake, Maluan Bay and Xinglin Bay, respectively. This study could benefit the regulation of urban landscape planning.

  13. A hybrid hydrologically complemented warning model for shallow landslides induced by extreme rainfall in Korean Mountain

    NASA Astrophysics Data System (ADS)

    Singh Pradhan, Ananta Man; Kang, Hyo-Sub; Kim, Yun-Tae

    2016-04-01

    This study uses a physically based approach to evaluate the factor of safety of hillslopes under different hydrological conditions in Mt. Umyeon, south of Seoul. The hydrological conditions were determined using the rainfall intensity and duration of a Korea-wide landslide inventory. The quantile regression statistical method was used to ascertain different probability warning levels on the basis of rainfall thresholds. Physically based models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical probabilistic methods can include other causative factors which influence slope stability, such as forest, soil and geology, but rely on good landslide inventories of the site. In this study a hybrid approach is described that combines the physically based landslide susceptibility obtained for different hydrological conditions with a statistical model. A presence-only maximum entropy model was used to perform the hybridization and to analyze the relation of landslides to the conditioning factors. About 80% of the landslides were listed among the unstable sites identified by the proposed model, demonstrating its effectiveness and accuracy in determining unstable areas and areas that require evacuation. These cumulative rainfall thresholds provide a valuable reference to guide disaster prevention authorities in the issuance of warning levels, with the ability to reduce losses and save lives.
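    A hedged sketch of how quantile regression can yield probability-graded rainfall thresholds of the kind used here (Python with statsmodels; the column names, quantile choices, and the log-log power-law form I = a·D^b are assumptions rather than the study's exact setup):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        def rainfall_thresholds(events: pd.DataFrame, quantiles=(0.05, 0.5, 0.95)):
            """events must hold 'intensity' (mm/h) and 'duration' (h) columns for
            rainfall that triggered landslides; returns {quantile: (a, b)} with
            threshold intensity I = a * D**b."""
            df = pd.DataFrame({"logI": np.log10(events["intensity"]),
                               "logD": np.log10(events["duration"])})
            params = {}
            for q in quantiles:
                res = smf.quantreg("logI ~ logD", df).fit(q=q)
                a = 10 ** res.params["Intercept"]
                b = res.params["logD"]
                params[q] = (a, b)
            return params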

  14. Precipitation thresholds for triggering floods in Corgo hydrographic basin (Northern Portugal)

    NASA Astrophysics Data System (ADS)

    Santos, Monica; Fragoso, Marcelo

    2016-04-01

    Precipitation is a major cause of natural hazards and is therefore closely related to flood events (Borga et al., 2011; Gaál et al., 2014; Wilhelmi & Morss, 2013). The severity of a precipitation event and its potential damage depend on the total amount of rain but also on the intensity and duration of the event (Gaál et al., 2014). In this work, thresholds were established based on critical amount/duration combinations of flood events, using daily rainfall data for the Corgo hydrographic basin in northern Portugal. In the Corgo basin, 31 flood events were recorded between 1865 and 2011 (Santos et al., 2015; Zêzere et al., 2014). We determined the minimum, maximum and pre-warning thresholds that define the boundaries for a flood event to occur. Additionally, we applied these thresholds to different flood events that occurred in the past in the study basin. The results show that the ratio of flood events to precipitation events above the minimum threshold is low, meaning that an event above the minimum threshold has a relatively low probability of producing a flood. This result may be related to the small number of flood events in the record (only events that were reported by the media and produced some type of damage). The maximum threshold is not useful for flood forecasting, since the majority of true positives are below this limit. The retrospective analysis of the defined thresholds suggests that the minimum and pre-warning thresholds are well adjusted. The application of rainfall thresholds contributes to minimizing possible situations of pre-crisis or immediate crisis, reducing the consequences and the resources involved in the emergency response to flood events.
    References:
    Borga, M., Anagnostou, E. N., Blöschl, G., & Creutin, J. D. (2011). Flash flood forecasting, warning and risk management: the HYDRATE project. Environmental Science & Policy, 14(7), 834-844. doi: 10.1016/j.envsci.2011.05.017
    Gaál, L., Molnar, P., & Szolgay, J. (2014). Selection of intense rainfall events based on intensity thresholds and lightning data in Switzerland. Hydrol. Earth Syst. Sci., 18(5), 1561-1573. doi: 10.5194/hess-18-1561-2014
    Santos, M., Santos, J. A., & Fragoso, M. (2015). Historical damaging flood records for 1871-2011 in Northern Portugal and underlying atmospheric forcings. Journal of Hydrology, 530, 591-603. doi: 10.1016/j.jhydrol.2015.10.011
    Wilhelmi, O. V., & Morss, R. E. (2013). Integrated analysis of societal vulnerability in an extreme precipitation event: A Fort Collins case study. Environmental Science & Policy, 26, 49-62. doi: 10.1016/j.envsci.2012.07.005
    Zêzere, J. L., Pereira, S., Tavares, A. O., Bateira, C., Trigo, R. M., Quaresma, I., Santos, P. P., Santos, M., & Verde, J. (2014). DISASTER: a GIS database on hydro-geomorphologic disasters in Portugal. Nat. Hazards, 1-30. doi: 10.1007/s11069-013-1018-y
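    Operationally, an event is compared against the three thresholds in order. A minimal Python sketch (the threshold curves are passed in as callables of event duration because their fitted form for the Corgo basin is not reproduced here):

        def warning_level(cum_precip_mm, duration_days, minimum, pre_warning, maximum):
            """Compare an event's cumulative precipitation with duration-dependent
            minimum, pre-warning and maximum threshold curves."""
            if cum_precip_mm >= maximum(duration_days):
                return "above maximum threshold"
            if cum_precip_mm >= pre_warning(duration_days):
                return "pre-warning"
            if cum_precip_mm >= minimum(duration_days):
                return "above minimum threshold (flood possible)"
            return "below minimum threshold"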

  15. Development of a real-time PCR method for the differential detection and quantification of four solanaceae in GMO analysis: potato (Solanum tuberosum), tomato (Solanum lycopersicum), eggplant (Solanum melongena), and pepper (Capsicum annuum).

    PubMed

    Chaouachi, Maher; El Malki, Redouane; Berard, Aurélie; Romaniuk, Marcel; Laval, Valérie; Brunel, Dominique; Bertheau, Yves

    2008-03-26

    The labeling of products containing genetically modified organisms (GMO) is linked to their quantification since a threshold for the presence of fortuitous GMOs in food has been established. This threshold is calculated from a combination of two absolute quantification values: one for the specific GMO target and the second for an endogenous reference gene specific to the taxon. Thus, the development of reliable methods to quantify GMOs using endogenous reference genes in complex matrixes such as food and feed is needed. Plant identification can be difficult in the case of closely related taxa, which moreover are subject to introgression events. Based on the homology of beta-fructosidase sequences obtained from public databases, two couples of consensus primers were designed for the detection, quantification, and differentiation of four Solanaceae: potato (Solanum tuberosum), tomato (Solanum lycopersicum), pepper (Capsicum annuum), and eggplant (Solanum melongena). Sequence variability was studied first using lines and cultivars (intraspecies sequence variability), then using taxa involved in gene introgressions, and finally, using taxonomically close taxa (interspecies sequence variability). This study allowed us to design four highly specific TaqMan-MGB probes. A duplex real time PCR assay was developed for simultaneous quantification of tomato and potato. For eggplant and pepper, only simplex real time PCR tests were developed. The results demonstrated the high specificity and sensitivity of the assays. We therefore conclude that beta-fructosidase can be used as an endogenous reference gene for GMO analysis.
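    The labeling threshold mentioned above is applied to a ratio of two absolute quantifications. A hedged sketch of that final step (Python; the standard-curve parameters are hypothetical, and in practice separate curves are fitted for the GMO-specific target and the endogenous reference gene):

        def copies_from_ct(ct, slope, intercept):
            """Invert a qPCR standard curve Ct = slope*log10(copies) + intercept."""
            return 10 ** ((ct - intercept) / slope)

        def gmo_percentage(ct_gmo, ct_reference,
                           gmo_curve=(-3.32, 38.0), ref_curve=(-3.30, 37.5)):
            """Ratio of GMO-target copies to taxon-specific reference-gene copies,
            expressed as a percentage (curve parameters are placeholders)."""
            gmo_copies = copies_from_ct(ct_gmo, *gmo_curve)
            ref_copies = copies_from_ct(ct_reference, *ref_curve)
            return 100.0 * gmo_copies / ref_copies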

  16. Maintaining evaluation designs in long term community based health promotion programmes: Heartbeat Wales case study.

    PubMed Central

    Nutbeam, D; Smith, C; Murphy, S; Catford, J

    1993-01-01

    STUDY OBJECTIVE--To examine the difficulties of developing and maintaining outcome evaluation designs in long term, community based health promotion programmes. DESIGN--Semistructured interviews of health promotion managers. SETTING--Wales and two reference health regions in England. PARTICIPANTS--Nine health promotion managers in Wales and 18 in England. MEASUREMENTS AND MAIN RESULTS--Information on selected heart health promotion activity undertaken or coordinated by health authorities from 1985-90 was collected. The Heartbeat Wales coronary heart disease prevention programme was set up in 1985, and a research and evaluation strategy was established to complement the intervention. A substantial increase in the budget occurred over the period. In the reference health regions in England this initiative was noted and rapidly taken up, thus compromising their use as control areas. CONCLUSION--Information on large scale, community based health promotion programmes can disseminate quickly and interfere with classic intervention/evaluation control designs through contamination. Alternative experimental designs for assessing the effectiveness of long term intervention programmes need to be considered. These should not rely solely on the use of reference populations, but should balance the measurement of outcome with an assessment of the process of change in communities. The development and use of intervention exposure measures together with well structured and comprehensive process evaluation in both the intervention and reference areas is recommended. PMID:8326270

  17. Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.

    2012-03-01

    Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, and crossing with other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-squares estimate of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9+/-10.2% using the MHES method to 9.9+/-7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
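    A rough Python sketch of the local-threshold idea: fit a smoothing spline to the gray levels sampled along the straightened vessel and use the fitted curve, scaled by a factor, as a position-dependent threshold. The spline call is standard SciPy, but the array shapes, the scaling factor, and the way the threshold is applied are assumptions, not the authors' implementation.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def local_thresholds(centerline_gray, smoothing=None, scale=0.5):
            """Least-squares smoothing spline through the centerline gray levels;
            the returned curve serves as a spatially varying threshold."""
            x = np.arange(len(centerline_gray))
            spline = UnivariateSpline(x, centerline_gray, s=smoothing)
            return scale * spline(x)

        def refine_segmentation(straightened_volume, centerline_gray):
            """Keep voxels brighter than the local threshold at their slice position.
            straightened_volume has shape (slices, y, x); centerline_gray has one
            sample per slice."""
            thr = local_thresholds(centerline_gray)
            return straightened_volume >= thr[:, None, None]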

  18. Complementarity and Correlations

    NASA Astrophysics Data System (ADS)

    Maccone, Lorenzo; Bruß, Dagmar; Macchiavello, Chiara

    2015-04-01

    We provide an interpretation of entanglement based on classical correlations between measurement outcomes of complementary properties: States that have correlations beyond a certain threshold are entangled. The reverse is not true, however. We also show that, surprisingly, all separable nonclassical states exhibit smaller correlations for complementary observables than some strictly classical states. We use mutual information as a measure of classical correlations, but we conjecture that the first result holds also for other measures (e.g., the Pearson correlation coefficient or the sum of conditional probabilities).

  19. Threshold units: A correct metric for reaction time?

    PubMed Central

    Zele, Andrew J.; Cao, Dingcai; Pokorny, Joel

    2007-01-01

    Purpose To compare reaction time (RT) to rod incremental and decremental stimuli expressed in physical contrast units or psychophysical threshold units. Methods Rod contrast detection thresholds and suprathreshold RTs were measured for Rapid-On and Rapid-Off ramp stimuli. Results Threshold sensitivity to Rapid-Off stimuli was higher than to Rapid-On stimuli. Suprathreshold RTs specified in Weber contrast for Rapid-Off stimuli were shorter than for Rapid-On stimuli. Reaction time data expressed in multiples of threshold reversed the outcomes: Reaction times for Rapid-On stimuli were shorter than those for Rapid-Off stimuli. The use of alternative contrast metrics also failed to equate RTs. Conclusions A case is made that the interpretation of RT data may be confounded when expressed in threshold units. Stimulus energy or contrast is the only metric common to the response characteristics of the cells underlying speeded responses. The use of threshold metrics for RT can confuse the interpretation of an underlying physiological process. PMID:17240416

  20. Psychological Factors Predict Local and Referred Experimental Muscle Pain: A Cluster Analysis in Healthy Adults

    PubMed Central

    Lee, Jennifer E.; Watson, David; Frey-Law, Laura A.

    2012-01-01

    Background Recent studies suggest an underlying three- or four-factor structure explains the conceptual overlap and distinctiveness of several negative emotionality and pain-related constructs. However, the validity of these latent factors for predicting pain has not been examined. Methods A cohort of 189 (99F; 90M) healthy volunteers completed eight self-report negative emotionality and pain-related measures (Eysenck Personality Questionnaire-Revised; Positive and Negative Affect Schedule; State-Trait Anxiety Inventory; Pain Catastrophizing Scale; Fear of Pain Questionnaire; Somatosensory Amplification Scale; Anxiety Sensitivity Index; Whiteley Index). Using principal axis factoring, three primary latent factors were extracted: General Distress; Catastrophic Thinking; and Pain-Related Fear. Using these factors, individuals clustered into three subgroups of high, moderate, and low negative emotionality responses. Experimental pain was induced via intramuscular acidic infusion into the anterior tibialis muscle, producing local (infusion site) and/or referred (anterior ankle) pain and hyperalgesia. Results Pain outcomes differed between clusters (multivariate analysis of variance and multinomial regression), with individuals in the highest negative emotionality cluster reporting the greatest local pain (p = 0.05), mechanical hyperalgesia (pressure pain thresholds; p = 0.009) and greater odds (2.21 OR) of experiencing referred pain compared to the lowest negative emotionality cluster. Conclusion Our results provide support for three latent psychological factors explaining the majority of the variance between several pain-related psychological measures, and that individuals in the high negative emotionality subgroup are at increased risk for (1) acute local muscle pain; (2) local hyperalgesia; and (3) referred pain using a standardized nociceptive input. PMID:23165778

  1. Dixie Valley, Nevada playa bathymetry constructed from Landsat TM data

    NASA Astrophysics Data System (ADS)

    Groeneveld, David P.; Barz, David D.

    2014-05-01

    A bathymetry model was developed from a series of Landsat Thematic Mapper (TM) images to assist discrimination of hydrologic processes on a low-relief, stable saline playa in Dixie Valley, Nevada, USA. The slope of the playa surface, established by field survey on a reference transect, enabled calculation of relative elevation of the edges of pooled brine mapped from Landsat TM5 band 5 reflectance (TMB5) in the 1.55-1.75 μm shortwave infrared region (SWIR) of the spectrum. A 0.02 TMB5 reflectance threshold accurately differentiated the shallow (1-2 mm depth) edges of pools. Isocontours of equal elevations of pool margins were mapped with the TMB5 threshold, forming concentric rings that were assigned relative elevations according to the position that the pool edges intersected the reference transect. These data were used to fit a digital elevation model and a curve for estimating pooled volume given the distance from the playa edge to the intersection of the pool edge with the reference transect. To project pooled volume using the bathymetric model for any TM snapshot, within a geographic information system, the 0.02 TMB5 threshold is first used to define the edge of the exposed brine. The distance of this edge from the playa edge along the reference transect is then measured and input to the bathymetric equation to yield pooled volume. Other satellite platforms with appropriate SWIR bands require calibration to Landsat TMB5. The method has applicability for filling reservoirs, bodies of water that fluctuate and especially bodies of water inaccessible to acoustic or sounding methods.
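
    As a rough sketch of the mapping step described above (not the authors' implementation), the Python fragment below applies the reflectance threshold to a SWIR band to delineate pooled brine and then evaluates a hypothetical fitted volume curve from the transect distance; the array values, the darker-than-threshold convention, and the curve coefficients are all illustrative assumptions.

      import numpy as np

      TMB5_THRESHOLD = 0.02  # SWIR reflectance threshold separating pooled brine from dry playa

      def pool_mask(tmb5_reflectance):
          """Boolean mask of pooled brine from a TM band-5 reflectance array."""
          # assumption: standing brine is darker (lower SWIR reflectance) than the surrounding playa
          return np.asarray(tmb5_reflectance) <= TMB5_THRESHOLD

      def pooled_volume(distance_to_pool_edge_m, coeffs=(2.0e3, 1.5)):
          """Hypothetical power-law volume curve V = a * d**b standing in for the fitted bathymetric equation."""
          a, b = coeffs
          return a * distance_to_pool_edge_m ** b

      # Illustrative use: threshold a small reflectance tile, then evaluate the curve.
      tile = np.array([[0.010, 0.015, 0.040],
                       [0.012, 0.030, 0.050]])
      print(pool_mask(tile))
      print(pooled_volume(1200.0))  # m^3, for a pool edge 1200 m along the reference transect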

  2. A New DEM Generalization Method Based on Watershed and Tree Structure

    PubMed Central

    Chen, Yonggang; Ma, Tianwu; Chen, Xiaoyin; Chen, Zhende; Yang, Chunju; Lin, Chenzhi; Shan, Ligang

    2016-01-01

    DEM generalization underpins multi-scale observation, representation, and analysis of terrain, and it is central to building multi-scale geographic databases; many researchers have therefore studied both its theory and its methods. This paper proposes a new terrain generalization method that extracts feature points from a tree model constructed to capture the nested relationship of watershed characteristics. Using the 5 m resolution DEM of the Jiuyuan gully watersheds on the Loess Plateau as input, feature points were extracted in every watershed and used to reconstruct the DEM. Generalization from a 1:10000 DEM to a 1:50000 DEM was achieved by computing the best threshold, which was 0.06. The height accuracy of the generalized DEM is then analyzed by comparison with classic methods such as aggregation, resampling, and VIP, using the original 1:50000 DEM as reference, and the results show that the method performs well. The method chooses the best threshold according to the target generalization scale, which determines the density of feature points within each watershed, and it preserves the skeleton of the terrain, meeting the needs of different levels of generalization. Additionally, overlapped contour comparison, elevation statistics, and slope and aspect analysis show that the W8D algorithm performs well and represents the terrain effectively. PMID:27517296

  3. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    YU, HENGYONG; WANG, GE

    2014-01-01

    ℓ1 regularization has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for ℓ1∕2 regularization via an iterative thresholding operation, also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define sparsity. However, the DGT is noninvertible and cannot be applied directly to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the state-of-the-art soft-threshold and hard-threshold filtering counterparts. PMID:25530928
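
    For orientation, a minimal sketch of a componentwise half-thresholding operator in the form commonly cited from Xu et al. for ℓ1∕2 regularization is given below; the constants are written from that published closed form and should be treated as assumptions rather than as the exact operator used in this paper.

      import numpy as np

      def half_threshold(x, lam):
          """Componentwise half-threshold operator for l_{1/2} regularization (Xu et al. form).

          Entries with |x| below about 0.945 * lam**(2/3) are set to zero; larger entries
          are shrunk according to the closed-form cosine expression.
          """
          x = np.asarray(x, dtype=float)
          out = np.zeros_like(x)
          thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
          keep = np.abs(x) > thresh
          phi = np.arccos((lam / 8.0) * (np.abs(x[keep]) / 3.0) ** (-1.5))
          out[keep] = (2.0 / 3.0) * x[keep] * (1.0 + np.cos((2.0 / 3.0) * (np.pi - phi)))
          return out

      # In a SART-type loop the operator would be applied to sparsifying-transform
      # coefficients after each algebraic update, e.g.:
      #   coeffs = transform(image); image = inverse_transform(half_threshold(coeffs, lam))
      print(half_threshold(np.array([-2.0, -0.1, 0.05, 0.5, 3.0]), lam=0.5))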

  4. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    ℓ1 regularization has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for ℓ1∕2 regularization via an iterative thresholding operation, also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define sparsity. However, the DGT is noninvertible and cannot be applied directly to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the state-of-the-art soft-threshold and hard-threshold filtering counterparts.

  5. Climate variability in Andalusia (southern Spain) during the period 1701-1850 based on documentary sources: evaluation and comparison with climate model simulations

    NASA Astrophysics Data System (ADS)

    Rodrigo, F. S.; Gómez-Navarro, J. J.; Montávez Gómez, J. P.

    2012-01-01

    In this work, a reconstruction of climatic conditions in Andalusia (southern Iberian Peninsula) during the period 1701-1850, as well as an evaluation of its associated uncertainties, is presented. This period is interesting because it is characterized by a minimum in solar irradiance (Dalton Minimum, around 1800), as well as intense volcanic activity (for instance, the eruption of Tambora in 1815), at a time when any increase in atmospheric CO2 concentrations was of minor importance. The reconstruction is based on the analysis of a wide variety of documentary data. The reconstruction methodology is based on counting the number of extreme events in the past, and inferring mean value and standard deviation using the assumption of normal distribution for the seasonal means of climate variables. This reconstruction methodology is tested within the pseudoreality of a high-resolution paleoclimate simulation performed with the regional climate model MM5 coupled to the global model ECHO-G. The results show that the reconstructions are influenced by the reference period chosen and the threshold values used to define extreme values. This creates uncertainties which are assessed within the context of climate simulation. An ensemble of reconstructions was obtained using two different reference periods (1885-1915 and 1960-1990) and two pairs of percentiles as threshold values (10-90 and 25-75). The results correspond to winter temperature, and winter, spring and autumn rainfall, and they are compared with simulations of the climate model for the considered period. The mean value of winter temperature for the period 1781-1850 was 10.6 ± 0.1 °C (11.0 °C for the reference period 1960-1990). The mean value of winter rainfall for the period 1701-1850 was 267 ± 18 mm (224 mm for 1960-1990). The mean values of spring and autumn rainfall were 164 ± 11 and 194 ± 16 mm (129 and 162 mm for 1960-1990, respectively). Comparison of the distribution functions corresponding to 1790-1820 and 1960-1990 indicates that during the Dalton Minimum the frequency of dry and warm (wet and cold) winters was lower (higher) than during the reference period: temperatures were up to 0.5 °C lower than the 1960-1990 value, and rainfall was 4% higher.
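
    The inference step described above, recovering a seasonal mean and standard deviation from counts of years beyond fixed percentile thresholds under a normality assumption, can be sketched as follows; the threshold values and exceedance frequencies are invented numbers used only for illustration.

      from scipy.stats import norm

      def normal_params_from_two_quantiles(t_low, p_low, t_high, p_high):
          """Mean and standard deviation of a normal variable given two of its quantiles.

          p_low  = P(X <= t_low),  e.g. the observed fraction of extremely dry winters
          p_high = P(X <= t_high), e.g. one minus the fraction of extremely wet winters
          """
          z_low, z_high = norm.ppf(p_low), norm.ppf(p_high)
          sigma = (t_high - t_low) / (z_high - z_low)
          mu = t_low - sigma * z_low
          return mu, sigma

      # Illustration: 10th/90th percentile rainfall thresholds of a reference period
      # (180 mm and 360 mm, invented), with 14% of documented winters below the lower
      # threshold and 12% above the upper one.
      mu, sigma = normal_params_from_two_quantiles(180.0, 0.14, 360.0, 1.0 - 0.12)
      print(round(mu, 1), round(sigma, 1))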

  6. A Vulnerability-Based, Bottom-up Assessment of Future Riverine Flood Risk Using a Modified Peaks-Over-Threshold Approach and a Physically Based Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Knighton, James; Steinschneider, Scott; Walter, M. Todd

    2017-12-01

    There is a chronic disconnection among purely probabilistic flood frequency analysis of flood hazards, flood risks, and hydrological flood mechanisms, which hampers our ability to assess future flood impacts. We present a vulnerability-based approach to estimating riverine flood risk that accommodates a more direct linkage between decision-relevant metrics of risk and the dominant mechanisms that cause riverine flooding. We adapt the conventional peaks-over-threshold (POT) framework to be used with extreme precipitation from different climate processes and rainfall-runoff-based model output. We quantify the probability that at least one adverse hydrologic threshold, potentially defined by stakeholders, will be exceeded within the next N years. This approach allows us to consider flood risk as the summation of risk from separate atmospheric mechanisms, and supports a more direct mapping between hazards and societal outcomes. We perform this analysis within a bottom-up framework to consider the relevance and consequences of information, with varying levels of credibility, on changes to atmospheric patterns driving extreme precipitation events. We demonstrate our proposed approach using a case study for Fall Creek in Ithaca, NY, USA, where we estimate the risk of stakeholder-defined flood metrics from three dominant mechanisms: summer convection, tropical cyclones, and spring rain and snowmelt. Using downscaled climate projections, we determine how flood risk associated with a subset of mechanisms may change in the future, and the resultant shift to annual flood risk. The flood risk approach we propose can provide powerful new insights into future flood threats.
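
    A minimal sketch of the headline quantity, the probability that at least one stakeholder-defined threshold is exceeded within the next N years when exceedances from separate mechanisms are treated as independent Poisson processes, is given below; the independence assumption and the annual rates are illustrative, not values from the study.

      import math

      def prob_at_least_one_exceedance(annual_rates, n_years):
          """P(at least one exceedance in n_years) when each flood mechanism contributes
          an independent Poisson process of threshold exceedances."""
          total_rate = sum(annual_rates)  # rates of independent Poisson processes add
          return 1.0 - math.exp(-total_rate * n_years)

      # Invented annual exceedance rates for three mechanisms.
      rates = {"summer_convection": 0.04, "tropical_cyclone": 0.01, "rain_and_snowmelt": 0.02}
      print(prob_at_least_one_exceedance(rates.values(), n_years=30))  # about 0.88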

  7. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
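
    A compressed sketch of the three-step procedure described above, a Fourier transform of each subject's repeated measures, an ordinary least-squares fit per transformed coefficient, and an adaptive Neyman statistic on the standardized coefficients, is shown below; the data are simulated, and the test statistic is written from its commonly cited form, so the details should be read as assumptions rather than as the authors' code.

      import numpy as np

      rng = np.random.default_rng(0)

      # Step 0: simulated repeated measures (n subjects x T time points) and a binary predictor.
      n, T = 60, 64
      group = rng.integers(0, 2, n)
      time = np.linspace(0, 1, T)
      curves = 0.5 * group[:, None] * np.sin(2 * np.pi * time) + rng.normal(size=(n, T))

      # Step 1: transform each curve (real FFT here; a wavelet transform is analogous).
      coeffs = np.fft.rfft(curves, axis=1).real  # real parts only, to keep the sketch simple

      # Step 2: fit a linear model per coefficient: coeff_j ~ intercept + group.
      X = np.column_stack([np.ones(n), group])
      beta, *_ = np.linalg.lstsq(X, coeffs, rcond=None)
      resid = coeffs - X @ beta
      sigma2 = (resid ** 2).sum(axis=0) / (n - X.shape[1])
      xtx_inv = np.linalg.inv(X.T @ X)
      z = beta[1] / np.sqrt(sigma2 * xtx_inv[1, 1])  # standardized group effect per coefficient

      # Step 3: adaptive Neyman statistic, max over m of sum_{j<=m} (z_j^2 - 1) / sqrt(2m).
      m = np.arange(1, z.size + 1)
      t_an = np.max(np.cumsum(z ** 2 - 1.0) / np.sqrt(2.0 * m))
      print(round(t_an, 2))  # large values suggest the predictor affects the curves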

  8. Unidentified Chronic Fatigue Syndrome/myalgic encephalomyelitis (CFS/ME) is a major cause of school absence: surveillance outcomes from school-based clinics.

    PubMed

    Crawley, Esther M; Emond, Alan M; Sterne, Jonathan A C

    2011-01-01

    Objective To investigate the feasibility of conducting clinics for chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME) in schools. Design School-based clinical project. Participants Children aged 11-16 years were enrolled in three state secondary schools in England. Main outcome measures Number of children newly diagnosed as having CFS/ME. Methods Attendance officers identified children missing ≥20% of school in a 6-week term without a known cause, excluding those with a single episode off school, a known medical illness explaining the absence or known to be truanting. Children with fatigue were referred to a specialist CFS/ME service for further assessment. The authors compared children with CFS/ME identified through school-based clinics with those referred via health services. Outcomes of CFS/ME were evaluated at 6 weeks and 6 months. Results 461 of the 2855 enrolled children had missed ≥20% school over a 6-week period. In 315, of whom three had CFS/ME, the reason for absence was known. 112 of the 146 children with unexplained absence attended clinical review at school; two had been previously diagnosed as having CFS/ME and 42 were referred on to a specialist clinic, where 23 were newly diagnosed as having CFS/ME. Therefore, 28 of the 2855 (1.0%) children had CFS/ME. Children with CFS/ME identified through surveillance had been ill for an amount of time comparable to those referred via health services but had less fatigue (mean difference 4.4, 95% CI 2.2 to 6.6), less disability (mean difference -5.7, 95% CI -7.9 to -3.5) and fewer symptoms (mean difference 1.86, 95% CI 0.8 to 2.93). Of 19 children followed up, six had fully recovered at 6 weeks and a further six at 6 months. Conclusions Chronic fatigue is an important cause of unexplained absence from school. Children diagnosed through school-based clinics are less severely affected than those referred to specialist services and appear to make rapid progress when they access treatment.

  9. Serum creatinine elevation after renin-angiotensin system blockade and long term cardiorenal risks: cohort study

    PubMed Central

    Mansfield, Kathryn E; Bhaskaran, Krishnan; Nitsch, Dorothea; Sørensen, Henrik Toft; Smeeth, Liam; Tomlinson, Laurie A

    2017-01-01

    Objective To examine long term cardiorenal outcomes associated with increased concentrations of creatinine after the start of angiotensin converting enzyme inhibitor/angiotensin receptor blocker treatment. Design Population based cohort study using electronic health records from the Clinical Practice Research Datalink and Hospital Episode Statistics. Setting UK primary care, 1997-2014. Participants Patients starting treatment with angiotensin converting enzyme inhibitors or angiotensin receptor blockers (n=122 363). Main outcome measures Poisson regression was used to compare rates of end stage renal disease, myocardial infarction, heart failure, and death among patients with creatinine increases of 30% or more after starting treatment against those without such increases, and for each 10% increase in creatinine. Analyses were adjusted for age, sex, calendar period, socioeconomic status, lifestyle factors, chronic kidney disease, diabetes, cardiovascular comorbidities, and use of other antihypertensive drugs and non-steroidal anti-inflammatory drugs. Results Among the 2078 (1.7%) patients with creatinine increases of 30% or more, a higher proportion were female, were elderly, had cardiorenal comorbidity, and used non-steroidal anti-inflammatory drugs, loop diuretics, or potassium sparing diuretics. Creatinine increases of 30% or more were associated with an increased adjusted incidence rate ratio for all outcomes, compared with increases of less than 30%: 3.43 (95% confidence interval 2.40 to 4.91) for end stage renal disease, 1.46 (1.16 to 1.84) for myocardial infarction, 1.37 (1.14 to 1.65) for heart failure, and 1.84 (1.65 to 2.05) for death. The detailed categorisation of increases in creatinine concentrations (<10%, 10-19%, 20-29%, 30-39%, and ≥40%) showed a graduated relation for all outcomes (all P values for trends <0.001). Notably, creatinine increases of less than 30% were also associated with increased incidence rate ratios for all outcomes, including death (1.15 (1.09 to 1.22) for increases of 10-19% and 1.35 (1.23 to 1.49) for increases of 20-29%, using <10% as reference). Results were consistent across calendar periods, across subgroups of patients, and among continuing users. Conclusions Increases in creatinine after the start of angiotensin converting enzyme inhibitor/angiotensin receptor blocker treatment were associated with adverse cardiorenal outcomes in a graduated relation, even below the guideline recommended threshold of a 30% increase for stopping treatment. PMID:28279964

  10. [Value of cumulative electrodermal responses in subliminal auditory perception. A preliminary study].

    PubMed

    Borgeat, F; Pannetier, M F

    1982-01-01

    This exploratory study examined the usefulness of averaging electrodermal potential responses for research on subliminal auditory perception. Eighteen female subjects were exposed to three kinds (emotional, neutral and 1000 Hz tone) of auditory stimulation which were repeated six times at three intensities (detection threshold, 10 dB under this threshold and 10 dB above identification threshold). Analysis of electrodermal potential responses showed that the number of responses was related to the emotionality of subliminal stimuli presented at detection threshold but not at 10 dB under it. The data interpretation proposed refers to perceptual defence theory. This study indicates that electrodermal response count constitutes a useful measure for subliminal auditory perception research, but averaging those responses was not shown to bring additional information.

  11. Determination of Cross-Sectional Area of Focused Picosecond Gaussian Laser Beam

    NASA Technical Reports Server (NTRS)

    Ledesma, Rodolfo; Fitz-Gerald, James; Palmieri, Frank; Connell, John

    2018-01-01

    Measurement of the waist diameter of a focused Gaussian beam at the 1/e² intensity, also referred to as spot size, is key to determining the fluence in laser processing experiments. Spot size measurements are also helpful to calculate the threshold energy and threshold fluence of a given material. This work reports an application of a conventional method, by analyzing single laser-ablated spots for different laser pulse energies, to determine the cross-sectional area of a focused Gaussian beam, which has a nominal pulse width of approx. 10 ps. Polished tungsten was used as the target material, due to its low surface roughness and low ablation threshold, to measure the beam waist diameter. From the ablative spot measurements, the ablation threshold fluence of the tungsten substrate was also calculated.
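
    The single-spot analysis mentioned above is commonly done with the Liu relation D^2 = 2*w0^2*ln(E/E_th), in which the squared ablated-spot diameter is linear in the logarithm of pulse energy; a brief Python sketch of that fit follows, with invented measurements standing in for real data.

      import numpy as np

      # Invented measurements: pulse energies (uJ) and ablated-spot diameters (um) on tungsten.
      energy_uj = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
      diameter_um = np.array([8.5, 14.0, 18.5, 22.5, 26.0])

      # Liu relation: D^2 = 2 * w0^2 * (ln E - ln E_th), i.e. D^2 is linear in ln E.
      slope, intercept = np.polyfit(np.log(energy_uj), diameter_um ** 2, 1)
      w0_um = np.sqrt(slope / 2.0)           # 1/e^2 beam waist radius
      e_th_uj = np.exp(-intercept / slope)   # threshold pulse energy where D^2 extrapolates to 0

      area_cm2 = np.pi * (w0_um * 1e-4) ** 2 / 2.0    # effective Gaussian area, pi * w0^2 / 2
      fluence_th_j_cm2 = e_th_uj * 1e-6 / area_cm2    # peak threshold fluence, J/cm^2
      print(round(w0_um, 1), round(e_th_uj, 2), round(fluence_th_j_cm2, 2))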

  12. Segmentation and classification of brain images using firefly and hybrid kernel-based support vector machine

    NASA Astrophysics Data System (ADS)

    Selva Bhuvaneswari, K.; Geetha, P.

    2017-05-01

    Magnetic resonance imaging segmentation refers to the process of assigning labels to sets of pixels or to multiple regions. It plays a major role in biomedical applications, where radiologists use it to partition medical images into meaningful regions. In recent years, various brain tumour detection techniques have been presented in the literature. The segmentation process of the proposed work comprises three phases: threshold generation with dynamic modified region growing, texture feature generation, and region merging. In the first phase, the input image undergoes dynamic modified region growing, in which the firefly optimisation algorithm is used to optimise the two thresholds that are changed dynamically during region growing. After the region-grown segmentation is obtained, edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results of the texture feature generation phase are combined with those of the dynamic modified region growing phase, and similar regions are merged using a distance comparison between regions. After the abnormal tissues are identified, classification is performed by a hybrid kernel-based SVM (support vector machine). The performance of the proposed method is analysed using k-fold cross-validation, and the method is implemented in MATLAB on various images.

  13. A volumetric pulmonary CT segmentation method with applications in emphysema assessment

    NASA Astrophysics Data System (ADS)

    Silva, José Silvestre; Silva, Augusto; Santos, Beatriz S.

    2006-03-01

    A segmentation method is a mandatory pre-processing step in many automated or semi-automated analysis tasks such as region identification and densitometric analysis, or even for 3D visualization purposes. In this work we present a fully automated volumetric pulmonary segmentation algorithm based on intensity discrimination and morphologic procedures. Our method first identifies the trachea as well as the primary bronchi, and then the pulmonary region is identified by applying a threshold and morphologic operations. When both lungs are in contact, additional procedures are performed to obtain two separated lung volumes. To evaluate the performance of the method, we compared contours extracted from 3D lung surfaces with reference contours, using several figures of merit. Results show that the worst case generally occurs at the middle sections of high-resolution CT exams, due to the presence of airway and vascular structures. Nevertheless, the average error is lower than the average error associated with radiologist inter-observer variability, which suggests that our method produces lung contours similar to those drawn by radiologists. The information created by our segmentation algorithm is used by an identification and representation method for pulmonary emphysema that also classifies emphysema according to its degree of severity. Two clinically proven thresholds are applied, identifying regions with severe emphysema and with highly severe emphysema. Based on this thresholding strategy, an application for volumetric emphysema assessment was developed, offering new display paradigms for the visualization of classification results. This framework is easily extendable to accommodate other classifiers, namely those related to texture-based segmentation, as is often the case with interstitial diseases.
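
    A greatly simplified sketch of the intensity-threshold-plus-morphology stage described above is given below; it is not the authors' algorithm, and the Hounsfield thresholds and structuring element are generic assumptions.

      import numpy as np
      from scipy import ndimage

      def rough_lung_mask(ct_hu, air_threshold_hu=-320):
          """Very rough lung mask from a CT volume in Hounsfield units:
          threshold air-like voxels, drop background air touching the volume border,
          keep the two largest remaining components, then close small holes."""
          air = ct_hu < air_threshold_hu
          labels, _ = ndimage.label(air)
          border = np.unique(np.concatenate([labels[0].ravel(), labels[-1].ravel(),
                                             labels[:, 0].ravel(), labels[:, -1].ravel(),
                                             labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
          air[np.isin(labels, border[border > 0])] = False
          labels, n = ndimage.label(air)
          if n >= 2:
              sizes = ndimage.sum(air, labels, index=np.arange(1, n + 1))
              air = np.isin(labels, np.argsort(sizes)[-2:] + 1)
          return ndimage.binary_closing(air, structure=np.ones((3, 3, 3)))

      def emphysema_index(ct_hu, lung_mask, threshold_hu=-950):
          """Fraction of lung voxels below an emphysema threshold (e.g. -950 HU)."""
          return float((ct_hu[lung_mask] < threshold_hu).mean())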

  14. Estimate of the neutron fields in ATLAS based on ATLAS-MPX detectors data

    NASA Astrophysics Data System (ADS)

    Bouchami, J.; Dallaire, F.; Gutiérrez, A.; Idarraga, J.; Král, V.; Leroy, C.; Picard, S.; Pospíšil, S.; Scallon, O.; Solc, J.; Suk, M.; Turecek, D.; Vykydal, Z.; Žemlièka, J.

    2011-01-01

    The ATLAS-MPX detectors are based on Medipix2 silicon devices designed by CERN for the detection of different types of radiation. These detectors are covered with converting layers of 6LiF and polyethylene (PE) to increase their sensitivity to thermal and fast neutrons, respectively. These devices allow the measurement of the composition and spectroscopic characteristics of the radiation field in ATLAS, particularly of neutrons. The detectors can operate in low or high preset energy threshold mode. The signature of particles interacting in an ATLAS-MPX detector at low threshold consists of clusters of adjacent pixels whose size and shape depend on the particle type, energy and incidence angle. The classification of particles into different categories can be done using the geometrical parameters of these clusters. The Medipix analysis framework (MAFalda), which is based on the ROOT application, allows the recognition of particle tracks left in ATLAS-MPX devices located at various positions in the ATLAS detector and cavern. The pattern recognition obtained from the application of MAFalda was configured to distinguish the response of neutrons from other radiation. The neutron response at low threshold is characterized by clusters of adjoining pixels (heavy tracks and heavy blobs) left by protons and heavy ions resulting from neutron interactions in the converting layers of the ATLAS-MPX devices. The neutron detection efficiency of ATLAS-MPX devices was determined by exposing two reference detectors to radionuclide neutron sources (252Cf and 241AmBe). With these results, an estimate of the neutron fields produced at the device locations during ATLAS operation was obtained.

  15. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    PubMed

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on understanding the distributional characteristics of such uncertainty. Our approach provides a tool to improve decision making. © 2013 Society for Conservation Biology.
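
    The decision criterion described above, maximizing the probability that the outcome exceeds a threshold of acceptability rather than maximizing the expected outcome, can be illustrated with a toy two-action budget split in which each action's effect per unit of funding is normally distributed; all numbers below are invented.

      import numpy as np
      from scipy.stats import norm

      mu = np.array([0.8, 0.5])  # expected effect per unit of budget (action 1 larger but riskier)
      sd = np.array([0.6, 0.1])  # uncertainty of that effect

      def prob_above_threshold(frac_to_action1, threshold, budget=1.0):
          """P(total outcome > threshold) for a given split of the budget between the two actions."""
          alloc = np.array([frac_to_action1, 1.0 - frac_to_action1]) * budget
          mean = float(alloc @ mu)
          std = float(np.sqrt((alloc ** 2) @ (sd ** 2)))
          return 1.0 - norm.cdf(threshold, loc=mean, scale=std)

      fracs = np.linspace(0.0, 1.0, 101)
      for threshold in (0.4, 0.9):  # low versus high aspiration
          best = max(fracs, key=lambda f: prob_above_threshold(f, threshold))
          print(threshold, round(best, 2))
      # The low threshold favours a diversified split, while the high threshold pushes the
      # whole budget toward the action with the largest potential effect, mirroring the
      # qualitative pattern reported in the record.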

  16. Myelodysplastic Syndromes and Iron Chelation Therapy

    PubMed Central

    Angelucci, Emanuele; Urru, Silvana Anna Maria; Pilo, Federica; Piperno, Alberto

    2017-01-01

    Over recent decades we have been fortunate to witness the advent of new technologies and of an expanded knowledge and application of chelation therapies to the benefit of patients with iron overload. However, extrapolation of learnings from thalassemia to the myelodysplastic syndromes (MDS) has resulted in a fragmented and uncoordinated clinical evidence base. We are therefore forced to revise our understanding of MDS, looking afresh at observational studies that inform us about the relationship between iron and tissue damage in these patients. The available evidence suggests that iron accumulation is prognostically significant in MDS, but levels of accumulation historically associated with organ damage (based on data generated in the thalassemias) are infrequent. Emerging experimental data have provided some insight into this paradox, as our understanding of iron-induced tissue damage has evolved from a process of progressive bulking of organs through high-volume iron deposition to one of ‘toxic’ damage inflicted through multiple cellular pathways. Damage from iron may, therefore, occur before reference thresholds are reached, and similarly, chelation may be of benefit before overt iron overload is seen. In this review, we revisit the scientific and clinical evidence for iron overload in MDS to better characterize the iron overload phenotype in these patients, which differs from the classical transfusional and non-transfusional iron overload syndromes. We hope this will provide a conceptual framework to better understand the complex associations between anemia, iron and clinical outcomes, and to accelerate progress in this area. PMID:28293409

  17. The effects of context and musical training on auditory temporal-interval discrimination.

    PubMed

    Banai, Karen; Fisher, Shirley; Ganot, Ron

    2012-02-01

    Nonsensory factors such as stimulus context and musical experience are known to influence auditory frequency discrimination, but whether the context effect extends to auditory temporal processing remains unknown. Whether individual experiences such as musical training alter the context effect is also unknown. The goal of the present study was therefore to investigate the effects of stimulus context and musical experience on auditory temporal-interval discrimination. In experiment 1, temporal-interval discrimination was compared between fixed context conditions, in which a single base temporal interval was presented repeatedly across all trials, and variable context conditions, in which one of two base intervals was randomly presented on each trial. Discrimination was significantly better in the fixed than in the variable context conditions. In experiment 2, temporal discrimination thresholds of musicians and non-musicians were compared across three conditions: a fixed context condition in which the target interval was presented repeatedly across trials, and two variable context conditions differing in the frequencies used for the tones marking the temporal intervals. Musicians outperformed non-musicians in all three conditions, but the effects of context were similar for the two groups. Overall, it appears that, like frequency discrimination, temporal-interval discrimination benefits from having a fixed reference. Musical experience, while improving performance, did not alter the context effect, suggesting that improved discrimination skills among musicians are probably not an outcome of more sensitive contextual facilitation or predictive coding mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Numerical simulation of high intensity focused ultrasound temperature distribution for transcranial brain therapy

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Wang, Yizhe; Zhou, Wenzheng; Zhang, Ji; Jian, Xiqi

    2017-03-01

    To provide a reference for HIFU clinical therapeutic planning, the temperature distribution and lesion volume are analyzed by numerical simulation. The simulation is based on a transcranial ultrasound therapy model that includes an 8-annular-element curved phased-array transducer. The acoustic pressure and temperature elevation are calculated using an approximation of the Westervelt equation and the Pennes bioheat transfer equation. In addition, time-reversal theory is combined with a hot-spot elimination technique to optimize the temperature distribution. For different input powers and exposure times, the lesion volume is evaluated based on a temperature threshold. The lesion region can be restored to the expected location using time reversal. Although the lesion volume is reduced after eliminating the peak temperature in the skull, and more input power and a longer exposure time are required, the injury to normal tissue around the skull can be reduced during HIFU therapy. The prediction of thermal deposition in the skull and of the lesion region can provide a reference for the clinical therapeutic dose.
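
    For orientation, a one-dimensional explicit finite-difference update of the Pennes bioheat equation, rho*c*dT/dt = k*d2T/dx2 - w_b*rho_b*c_b*(T - T_a) + Q, is sketched below with generic soft-tissue parameters and an invented focal heat source; it is not the coupled Westervelt/Pennes solver used in this record.

      import numpy as np

      rho, c, k = 1050.0, 3600.0, 0.5                    # tissue density, heat capacity, conductivity
      wb, rho_b, cb, Ta = 0.5e-3, 1050.0, 3600.0, 37.0   # perfusion (1/s), blood properties, arterial temp (C)

      nx, dx, dt, steps = 200, 0.5e-3, 0.01, 500         # 10 cm domain, 5 s of heating
      x = np.arange(nx) * dx
      q = 5.0e5 * np.exp(-((x - 0.05) / 0.004) ** 2)     # invented focal heat deposition, W/m^3

      T = np.full(nx, 37.0)
      alpha = k / (rho * c)
      assert alpha * dt / dx ** 2 < 0.5                  # stability condition for the explicit scheme

      for _ in range(steps):
          lap = np.zeros_like(T)
          lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
          T = T + dt * (alpha * lap - wb * rho_b * cb * (T - Ta) / (rho * c) + q / (rho * c))
          T[0] = T[-1] = 37.0                            # fixed boundary temperature
      print(round(T.max(), 1), "C peak after", steps * dt, "s")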

  19. Defining the Lower Limit of a "Critical Bone Defect" in Open Diaphyseal Tibial Fractures.

    PubMed

    Haines, Nikkole M; Lack, William D; Seymour, Rachel B; Bosse, Michael J

    2016-05-01

    To determine healing outcomes of open diaphyseal tibial shaft fractures treated with reamed intramedullary nailing (IMN) with a bone gap of 10-50 mm on ≥50% of the cortical circumference and to better define a "critical bone defect" based on healing outcome. Retrospective cohort study. Forty patients, age 18-65, with open diaphyseal tibial fractures with a bone gap of 10-50 mm on ≥50% of the circumference as measured on standard anteroposterior and lateral postoperative radiographs treated with IMN. IMN of an open diaphyseal tibial fracture with a bone gap. Level-1 trauma center. Healing outcomes, union or nonunion. Forty patients were analyzed. Twenty-one (52.5%) went on to nonunion and nineteen (47.5%) achieved union. Radiographic apparent bone gap (RABG) and infection were the only 2 covariates predicting nonunion outcome (P = 0.046 for infection). The RABG was determined by measuring the bone gap on each cortex and averaging over 4 cortices. Fractures achieving union had a RABG of 12 ± 1 mm versus 20 ± 2 mm in those going on to nonunion (P < 0.01). This remained significant when patients with infection were removed. Receiver operator characteristic analysis demonstrated that RABG was predictive of outcome (area under the curve of 0.79). A RABG of 25 mm was the statistically optimal threshold for prediction of healing outcome. Patients with open diaphyseal tibial fractures treated with IMN and a <25 mm RABG have a reasonable probability of achieving union without additional intervention, whereas those with larger gaps have a higher probability of nonunion. Research investigating interventions for RABGs should use a predictive threshold for defining a critical bone defect that is associated with greater than 50% risk of nonunion without supplementary treatment. Prognostic Level III. See Instructions for Authors for a complete description of levels of evidence.

  20. Hyperglycaemia and risk of adverse perinatal outcomes: systematic review and meta-analysis.

    PubMed

    Farrar, Diane; Simmonds, Mark; Bryant, Maria; Sheldon, Trevor A; Tuffnell, Derek; Golder, Su; Dunne, Fidelma; Lawlor, Debbie A

    2016-09-13

     To assess the association between maternal glucose concentrations and adverse perinatal outcomes in women without gestational or existing diabetes and to determine whether clear thresholds for identifying women at risk of perinatal outcomes can be identified.  Systematic review and meta-analysis of prospective cohort studies and control arms of randomised trials.  Databases including Medline and Embase were searched up to October 2014 and combined with individual participant data from two additional birth cohorts.  Studies including pregnant women with oral glucose tolerance (OGTT) or challenge (OGCT) test results, with data on at least one adverse perinatal outcome.  Glucose test results were extracted for OGCT (50 g) and OGTT (75 g and 100 g) at fasting and one and two hour post-load timings. Data were extracted on induction of labour; caesarean and instrumental delivery; pregnancy induced hypertension; pre-eclampsia; macrosomia; large for gestational age; preterm birth; birth injury; and neonatal hypoglycaemia. Risk of bias was assessed with a modified version of the critical appraisal skills programme and quality in prognostic studies tools.  25 reports from 23 published studies and two individual participant data cohorts were included, with up to 207 172 women (numbers varied by the test and outcome analysed in the meta-analyses). Overall most studies were judged as having a low risk of bias. There were positive linear associations with caesarean section, induction of labour, large for gestational age, macrosomia, and shoulder dystocia for all glucose exposures across the distribution of glucose concentrations. There was no clear evidence of a threshold effect. In general, associations were stronger for fasting concentration than for post-load concentration. For example, the odds ratios for large for gestational age per 1 mmol/L increase of fasting and two hour post-load glucose concentrations (after a 75 g OGTT) were 2.15 (95% confidence interval 1.60 to 2.91) and 1.20 (1.13 to 1.28), respectively. Heterogeneity was low between studies in all analyses.  This review and meta-analysis identified a large number of studies in various countries. There was a graded linear association between fasting and post-load glucose concentration across the whole glucose distribution and most adverse perinatal outcomes in women without pre-existing or gestational diabetes. The lack of a clear threshold at which risk increases means that decisions regarding thresholds for diagnosing gestational diabetes are somewhat arbitrary. Research should now investigate the clinical and cost-effectiveness of applying different glucose thresholds for diagnosis of gestational diabetes on perinatal and longer term outcomes.  PROSPERO CRD42013004608. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  1. An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition

    NASA Astrophysics Data System (ADS)

    Mousavi, S. M.; Langston, C. A.

    2016-12-01

    Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small-magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening a time-frequency picture. Noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of thresholding has been improved by adding a pre-processing step based on higher-order statistics and a post-processing step based on adaptive hard-thresholding. In doing so, both the accuracy and the speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either remove the noise (white or colored) and keep the signal, or remove the signal and keep the noise. Hence, it can be used either in normal denoising applications or in ambient noise studies. Application of the proposed method to synthetic and real seismic data shows its effectiveness for denoising/designaling of local microseismic and ocean-bottom seismic data. References: Mousavi, S.M., C. A. Langston, and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed-Continuous Wavelet Transform, Geophysics, 81, V341-V355, doi: 10.1190/GEO2015-0598.1. Mousavi, S.M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding, Bull. Seismol. Soc. Am., 106, doi: 10.1785/0120150345. Mousavi, S.M., and C.A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics, doi: 10.1016/j.jappgeo.2016.06.008.

  2. Identification of a dynamic temperature threshold for soil moisture freeze/thaw (F/T) state classification using soil real dielectric constant derivatives.

    NASA Astrophysics Data System (ADS)

    Pardo, R.; Berg, A. A.; Warland, J. S.

    2017-12-01

    The use of microwave remote sensing for surface ground ice detection has been well documented using both active and passive systems. Typical validation of these remotely sensed F/T state products relies on in-situ air or soil temperature measurements and a threshold of 0°C to identify frozen soil. However, in soil pores, the effects of capillary and adsorptive forces combine with the presence of dissolved salts to depress the freezing point. This is further confounded by the fact that water over this temperature range releases/absorbs latent heat of freezing/fusion. Indeed, recent results from SLAPEx2015, a campaign conducted to evaluate the ability to detect F/T state and examine the controls on F/T detection at multiple resolutions, suggest that using a soil temperature of 0°C as a threshold for freezing may not be appropriate. Coaxial impedance sensors, such as the Stevens HydraProbe II (HP), are among the most widely used soil sensors in water-supply forecasting and climatological networks. These soil moisture probes have recently been used to validate remote sensing F/T products. This kind of validation is still relatively uncommon and dependent on categorical techniques based on seasonal reference states of frozen and non-frozen soil conditions. An experiment was conducted to identify the correlation between the phase state of the soil moisture and the probe measurements. Eight soil cores were subjected to F/T transitions in an environmental chamber. For each core, at a depth of 2.5 cm, the temperature and real dielectric constant (rdc) were measured every five minutes using HPs, while two heat pulse probes captured the apparent heat capacity 24 minutes apart. Preliminary results show that the phase transition of water is bounded by inflection points in the soil temperature, attributed to latent heat. The rdc, however, appears to be highly sensitive to changes in the water preceding the phase change. This opens the possibility of estimating a dynamic temperature threshold for soil F/T by identifying the soil temperatures at the times at which these inflection points in the soil rdc occur. This technique provides a more accurate threshold for F/T products than the static reference temperature currently used.
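
    The thresholding idea described above, reading the soil temperature at the times where the real dielectric constant has inflection points, can be sketched as follows; the five-minute synthetic series, the freeze-curve shape, and the smoothing length are assumptions made for illustration.

      import numpy as np

      def inflection_indices(series, smooth_window=7):
          """Indices where the second derivative of a smoothed series changes sign."""
          kernel = np.ones(smooth_window) / smooth_window
          smoothed = np.convolve(series, kernel, mode="same")
          d2 = np.gradient(np.gradient(smoothed))
          crossings = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0]
          # discard edge artefacts introduced by the moving-average padding
          return crossings[(crossings > smooth_window) & (crossings < series.size - smooth_window)]

      # Synthetic example: soil cooling through a freeze event, sampled every five minutes.
      t_minutes = np.arange(0, 24 * 60, 5)
      soil_temp = 3.0 - 5.0 * t_minutes / t_minutes.max()            # cools from +3 C to -2 C
      rdc = 6.0 + 14.0 / (1.0 + np.exp(-(soil_temp + 0.4) / 0.15))   # rdc drops as pore water freezes

      idx = inflection_indices(rdc)
      print(np.round(soil_temp[idx], 2))  # candidate dynamic F/T thresholds, here slightly below 0 C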

  3. Measuring patient tolerance for future adverse events in low-risk emergency department chest pain patients.

    PubMed

    Chen, Jennifer C; Cooper, Richelle J; Lopez-O'Sullivan, Ana; Schriger, David L

    2014-08-01

    We assess emergency department (ED) patients' risk thresholds for preferring admission versus discharge when presenting with chest pain and determine how the method of information presentation affects patients' choices. In this cross-sectional survey, we enrolled a convenience sample of lower-risk acute chest pain patients from an urban ED. We presented patients with a hypothetical value for the risk of adverse outcome that could be decreased by hospitalization and asked them to identify the risk threshold at which they preferred admission versus discharge. We randomized patients to a method of numeric presentation (natural frequency or percentage) and the initial risk presented (low or high) and followed each numeric assessment with an assessment based on visually depicted risks. We enrolled 246 patients and analyzed data on 234 with complete information. The geometric mean risk threshold with numeric presentation was 1 in 736 (1 in 233 with a percentage presentation; 1 in 2,425 with a natural frequency presentation) and 1 in 490 with a visual presentation. Fifty-nine percent of patients (137/234) chose the lowest or highest risk values offered. One hundred fourteen patients chose different thresholds for numeric and visual risk presentations. We observed strong anchoring effects; patients starting with the lowest risk chose a lower threshold than those starting with the highest risk possible and vice versa. Using an expected utility model to measure patients' risk thresholds does not seem to work, either to find a stable risk preference within individuals or in groups. Further work in measurement of patients' risk tolerance or methods of shared decisionmaking not dependent on assessment of risk tolerance is needed. Copyright © 2014 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.

  4. Weak wide-band signal detection method based on small-scale periodic state of Duffing oscillator

    NASA Astrophysics Data System (ADS)

    Hou, Jian; Yan, Xiao-peng; Li, Ping; Hao, Xin-hong

    2018-03-01

    The conventional Duffing oscillator weak signal detection method, which is based on a strong reference signal, has inherent deficiencies. To address these issues, the characteristics of the Duffing oscillator's phase trajectory in a small-scale periodic state are analyzed by introducing the theory of the stopping oscillation system. Based on this approach, a novel Duffing oscillator weak wide-band signal detection method is proposed. In this novel method, the reference signal is discarded, and the to-be-detected signal is directly used as the driving force. By calculating the cosine function of a phase-space angle, a single Duffing oscillator can be used for weak wide-band signal detection instead of an array of uncoupled Duffing oscillators. Simulation results indicate that, compared with the conventional Duffing oscillator detection method, this approach performs better in frequency detection intervals and reduces the signal-to-noise ratio detection threshold, while improving the real-time performance of the system. Project supported by the National Natural Science Foundation of China (Grant No. 61673066).
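
    As a generic illustration (not the authors' detection scheme), the Holmes-type Duffing equation x'' + k x' - x + x^3 = f cos(t) + s(t) can be integrated numerically to see how the phase trajectory (x, x') responds when a weak input s(t) is added to the driving force; the parameter values below are the usual textbook choices, not those of the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      def duffing_rhs(t, y, k=0.5, f=0.826, weak_amp=0.0, weak_freq=1.0):
          """Duffing oscillator driven near its transition, with an optional weak added signal."""
          x, v = y
          drive = f * np.cos(t) + weak_amp * np.cos(weak_freq * t)
          return [v, -k * v + x - x ** 3 + drive]

      t_eval = np.linspace(0, 200, 20000)
      for amp in (0.0, 0.01):  # without and with a weak to-be-detected signal
          sol = solve_ivp(duffing_rhs, (0, 200), [0.0, 0.0], t_eval=t_eval,
                          args=(0.5, 0.826, amp, 1.0), rtol=1e-8)
          x, v = sol.y
          cos_angle = x / np.sqrt(x ** 2 + v ** 2 + 1e-12)  # cosine of the phase-space angle
          print(amp, round(cos_angle[-5000:].std(), 3))     # summary of the late-time trajectory state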

  5. Outcome-based ventilation: A framework for assessing performance, health, and energy impacts to inform office building ventilation decisions.

    PubMed

    Rackes, A; Ben-David, T; Waring, M S

    2018-07-01

    This article presents an outcome-based ventilation (OBV) framework, which combines competing ventilation impacts into a monetized loss function ($/occ/h) used to inform ventilation rate decisions. The OBV framework, developed for U.S. offices, considers six outcomes of increasing ventilation: profitable outcomes realized from improvements in occupant work performance and sick leave absenteeism; health outcomes from occupant exposure to outdoor fine particles and ozone; and energy outcomes from electricity and natural gas usage. We used the literature to set low, medium, and high reference values for OBV loss function parameters, and evaluated the framework and outcome-based ventilation rates using a simulated U.S. office stock dataset and a case study in New York City. With parameters for all outcomes set at medium values derived from literature-based central estimates, higher ventilation rates' profitable benefits dominated negative health and energy impacts, and the OBV framework suggested ventilation should be ≥45 L/s/occ, much higher than the baseline ~8.5 L/s/occ rate prescribed by ASHRAE 62.1. Only when combining very low parameter estimates for profitable impacts with very high ones for health and energy impacts were all outcomes on the same order. Even then, however, outcome-based ventilation rates were often twice the baseline rate or more. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
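
    A toy version of the loss-function idea, monetizing performance, absenteeism, particle and ozone exposure, and energy as functions of the outdoor air ventilation rate and picking the rate that minimizes the total loss, is sketched below; every coefficient is an invented placeholder rather than a value from the OBV framework.

      import numpy as np

      def outcome_based_loss(vr_lps_per_occ,
                             perf_gain_per_log=0.60,     # $/occ/h gained per unit ln(rate), invented
                             absence_gain_per_log=0.10,  # invented
                             pm_cost_per_lps=0.004,      # $/occ/h of PM2.5 harm per L/s/occ, invented
                             ozone_cost_per_lps=0.002,   # invented
                             energy_cost_per_lps=0.010): # invented
          """Monetized loss ($/occ/h): benefits enter negatively, exposure and energy costs positively."""
          vr = np.asarray(vr_lps_per_occ, dtype=float)
          benefits = (perf_gain_per_log + absence_gain_per_log) * np.log(vr / 8.5)  # relative to ~8.5 L/s/occ
          costs = (pm_cost_per_lps + ozone_cost_per_lps + energy_cost_per_lps) * vr
          return costs - benefits

      rates = np.linspace(5.0, 80.0, 300)
      best_rate = rates[np.argmin(outcome_based_loss(rates))]
      print(round(best_rate, 1), "L/s/occ minimizes the toy loss")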

  6. The meaning of (quality of) life in patients with eating disorders: a comparison of generic and disease-specific measures across diagnosis and outcome.

    PubMed

    Ackard, Diann M; Richter, Sara; Egan, Amber; Engel, Scott; Cronemeyer, Catherine L

    2014-04-01

    Compare general and disease-specific health-related quality of life (HRQoL) among female patients with an eating disorder (ED). Female patients (n = 221; 95.3% Caucasian; 94.0% never married) completed the Medical Outcome Short Form Health Survey (SF-36) and Eating Disorders Quality of Life (EDQoL) as part of a study of treatment outcomes. Multivariate regression models were used to compare HRQoL differences across initial ED diagnosis (85 AN-R, 19 AN-B/P, 27 BN, 90 EDNOS) and ED diagnostic classification at time of outcome assessment (140 no ED, 38 subthreshold ED, 43 full threshold ED). There were no significant differences across ED diagnosis at initial assessment on either of the SF-36 Component Summary scores. However, patients with AN-B/P scored poorer on the work/school EDQoL subscales than other ED diagnoses, and on the psychological EDQoL subscale compared to AN-R and EDNOS. At outcome assessment, comparisons across full threshold, subthreshold and no ED classification indicated that those with no ED reported better HRQoL than those with full threshold ED on the SF-36 Mental Components Summary and three of four EDQoL subscales. Furthermore, those with no ED reported better psychological HRQoL than those with subthreshold ED. Disease-specific HRQOL measures are important to use when comparing HRQoL in ED patients across treatment and outcome, and may have the sensitivity to detect meaningful differences by diagnosis more so than generic instruments. EDQoL scores from patients remitted from symptoms approach but do not reach scores for unaffected college females; thus, treatment should continue until quality of life is restored. Copyright © 2013 Wiley Periodicals, Inc.

  7. Upper Airway Stimulation for Obstructive Sleep Apnea: Durability of the Treatment Effect at 18 Months.

    PubMed

    Strollo, Patrick J; Gillespie, M Boyd; Soose, Ryan J; Maurer, Joachim T; de Vries, Nico; Cornelius, Jason; Hanson, Ronald D; Padhya, Tapan A; Steward, David L; Woodson, B Tucker; Verbraecken, Johan; Vanderveken, Olivier M; Goetting, Mark G; Feldman, Neil; Chabolle, Frédéric; Badr, M Safwan; Randerath, Winfried; Strohl, Kingman P

    2015-10-01

    To determine the stability of improvement in polysomnographic measures of sleep disordered breathing, patient reported outcomes, the durability of hypoglossal nerve recruitment and safety at 18 months in the Stimulation Treatment for Apnea Reduction (STAR) trial participants. Prospective multicenter single group trial with participants serving as their own controls. Twenty-two community and academic sleep medicine and otolaryngology practices. Primary outcome measures were the apnea-hypopnea index (AHI) and the 4% oxygen desaturation index (ODI). Secondary outcome measures were the Epworth Sleepiness Scale (ESS), the Functional Outcomes of Sleep Questionnaire (FOSQ), and oxygen saturation percent time < 90% during sleep. Stimulation level for each participant was collected at three predefined thresholds during awake testing. Procedure- and/or device-related adverse events were reviewed and coded by the Clinical Events Committee. The median AHI was reduced by 67.4% from the baseline of 29.3 to 9.7/h at 18 mo. The median ODI was reduced by 67.5% from 25.4 to 8.6/h at 18 mo. The FOSQ and ESS improved significantly at 18 mo compared to baseline values. The functional threshold was unchanged from baseline at 18 mo. Two participants experienced a serious device-related adverse event requiring neurostimulator repositioning and fixation. No tongue weakness reported at 18 mo. Upper airway stimulation via the hypoglossal nerve maintained a durable effect of improving airway stability during sleep and improved patient reported outcomes (Epworth Sleepiness Scale and Functional Outcomes of Sleep Questionnaire) without an increase of the stimulation thresholds or tongue injury at 18 mo of follow-up. © 2015 Associated Professional Sleep Societies, LLC.

  8. Relationship of Blood Pressure With Mortality and Cardiovascular Events Among Hypertensive Patients aged ≥60 years in Rural Areas of China

    PubMed Central

    Zheng, Liqiang; Li, Jue; Sun, Zhaoqing; Zhang, Xingang; Hu, Dayi; Sun, Yingxian

    2015-01-01

    The Eighth Joint National Committee (JNC-8) panel recently recommended a systolic blood pressure (BP) threshold of ≥150 mmHg for the initiation of drug therapy and a therapeutic target of <150/90 mmHg in patients ≥60 years of age. However, results from some post-hoc analyses of randomized controlled trials and observational studies did not support these recommendations. In this prospective cohort study, 5006 eligible hypertensive patients aged ≥60 years from rural areas of China were enrolled for the present analysis. The association between average follow-up BP and outcomes (all-cause and cardiovascular death, incident coronary heart disease [CHD], and stroke), over a median follow-up of 4.8 years, was evaluated using Cox proportional hazards models adjusting for other potential confounders. Both systolic and diastolic BP showed an increasing or J-shaped association with adverse outcomes. Compared with the reference group of BP <140/90 mmHg, the risk of all-cause death (hazard ratio [HR]: 2.698; 95% confidence interval [CI]: 1.989–3.659), cardiovascular death (HR: 2.702; 95% CI: 1.855–3.935), incident CHD (HR: 3.263; 95% CI: 2.063–5.161), and stroke (HR: 2.334; 95% CI: 1.559–3.945) was still significantly increased in the group with BP of 140–149/<90 mmHg. Older hypertensive patients with BP of 140–149/<90 mmHg were at higher risk of developing adverse outcomes, implying that lenient BP control of 140–149/<90 mmHg, based on the JNC-8 guidelines, may not be appropriate for hypertensive patients aged ≥60 years in rural areas of China. PMID:26426621

  9. Formalizing the Role of Agent-Based Modeling in Causal Inference and Epidemiology

    PubMed Central

    Marshall, Brandon D. L.; Galea, Sandro

    2015-01-01

    Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. PMID:25480821

  10. Suppression of LH during ovarian stimulation: analysing threshold values and effects on ovarian response and the outcome of assisted reproduction in down-regulated women stimulated with recombinant FSH.

    PubMed

    Balasch, J; Vidal, E; Peñarrubia, J; Casamitjana, R; Carmona, F; Creus, M; Fábregues, F; Vanrell, J A

    2001-08-01

    It has been recently suggested that gonadotrophin-releasing hormone agonist down-regulation in some normogonadotrophic women may result in profound suppression of LH concentrations, impairing adequate oestradiol synthesis and IVF and pregnancy outcome. The aims of this study, where receiver-operating characteristic (ROC) analysis was used, were: (i) to assess the usefulness of serum LH measurement on stimulation day 7 (S7) as a predictor of ovarian response, IVF outcome, implantation, and the outcome of pregnancy in patients treated with recombinant FSH under pituitary suppression; and (ii) to define the best threshold value, if any, to discriminate between women with 'low' or 'normal' LH concentrations. A total of 144 infertile women undergoing IVF/intracytoplasmic sperm injection (ICSI) treatment were included. Seventy-two consecutive patients having a positive pregnancy test (including 58 ongoing pregnancies and 14 early pregnancy losses) were initially selected. As a control non-pregnant group, the next non-conception IVF/ICSI cycle after each conceptual cycle in our assisted reproduction programme was used. The median and range of LH values in non-conception cycles, conception cycles, ongoing pregnancies, and early pregnancy losses, clearly overlapped. ROC analysis showed that serum LH concentration on S7 was unable to discriminate between conception and non-conception cycles (AUC(ROC) = 0.52; 95% CI: 0.44 to 0.61) or ongoing pregnancy versus early pregnancy loss groups (AUC(ROC) = 0.59; 95% CI: 0.46 to 0.70). To assess further the potential impact of suppressed concentrations of circulating LH during ovarian stimulation on the outcome of IVF/ICSI treatment, the three threshold values of mid-follicular serum LH proposed in the literature (<1, < or =0.7, <0.5 IU/l) to discriminate between women with 'low' or 'normal' LH were applied to our study population. No significant differences were found with respect to ovarian response, IVF/ICSI outcome, implantation, and the outcome of pregnancy between 'low' and 'normal' S7 LH women as defined by those threshold values. Our results do not support the need for additional exogenous LH supplementation in down-regulated women receiving a recombinant FSH-only preparation.

  11. Legal access to surrogate motherhood in illness that does not cause infertility.

    PubMed

    Jordaan, Donrich W

    2016-06-17

    The threshold requirement for surrogate motherhood entails that a commissioning parent or parents must be permanently unable to give birth to a child. The question has arisen in practice of whether a commissioning mother who suffers from a permanent illness that does not cause infertility as such, but that renders pregnancy a significant health risk to her and/or to her prospective child in utero, satisfies this requirement. In this article, I propose that the inability to give birth to a child as per the threshold requirement should not be interpreted narrowly as referring only to a commissioning parent's inherent inability to give birth to a child, but should rather be interpreted broadly as referring to a commissioning parent's effective inability to give birth to a child - allowing consideration of the medical sequelae of pregnancy for the commissioning mother and her prospective child. I argue that such a broad interpretation of the threshold requirement is compatible with legislative intent and case law, and is demanded by our country's constitutional commitment to human rights.

  12. Children with Auditory Neuropathy Spectrum Disorder Fitted with Hearing Aids Applying the American Academy of Audiology Pediatric Amplification Guideline: Current Practice and Outcomes.

    PubMed

    Walker, Elizabeth; McCreery, Ryan; Spratford, Meredith; Roush, Patricia

    2016-03-01

    Up to 15% of children with permanent hearing loss (HL) have auditory neuropathy spectrum disorder (ANSD), which involves normal outer hair cell function and disordered afferent neural activity in the auditory nerve or brainstem. Given the varying presentations of ANSD in children, there is a need for more evidence-based research on appropriate clinical interventions for this population. This study compared the speech production, speech perception, and language outcomes of children with ANSD, who are hard of hearing, to children with similar degrees of mild-to-moderately severe sensorineural hearing loss (SNHL), all of whom were fitted with bilateral hearing aids (HAs) based on the American Academy of Audiology pediatric amplification guidelines. Speech perception and communication outcomes data were gathered in a prospective accelerated longitudinal design, with entry into the study between six mo and seven yr of age. Three sites were involved in participant recruitment: Boys Town National Research Hospital, the University of North Carolina at Chapel Hill, and the University of Iowa. The sample consisted of 12 children with ANSD and 22 children with SNHL. The groups were matched based on better-ear pure-tone average, better-ear aided speech intelligibility index, gender, maternal education level, and newborn hearing screening result (i.e., pass or refer). Children and their families participated in an initial baseline visit, followed by visits twice a year for children <2 yr of age and once a year for children >2 yr of age. Paired-sample t-tests were used to compare children with ANSD to children with SNHL. Paired t-tests indicated no significant differences between the ANSD and SNHL groups on language and articulation measures. Children with ANSD displayed functional speech perception skills in quiet. Although the number of participants was too small to conduct statistical analyses for speech perception testing, there appeared to be a trend in which the ANSD group performed more poorly in background noise with HAs, compared to the SNHL group. The American Academy of Audiology Pediatric Amplification Guidelines recommend that children with ANSD receive an HA trial if their behavioral thresholds are sufficiently high to impede speech perception at conversational levels. For children with ANSD in the mild-to-severe HL range, the current results support this recommendation, as children with ANSD can achieve functional outcomes similar to peers with SNHL. American Academy of Audiology.

  13. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    PubMed

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxae of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxae of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. The DSP2 software replaces the former version, which should no longer be used. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
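
    DSP2 itself is distributed as an R tool; the Python sketch below only illustrates the underlying logic of linear discriminant analysis with a 0.95 posterior-probability threshold, below which a case is left indeterminate. The measurements and variable counts here are simulated placeholders, not the reference collection.

    ```python
    # Illustrative sketch (not DSP2): LDA classification with a 0.95 posterior threshold.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n = 400
    sex = rng.integers(0, 2, n)                        # 0 = female, 1 = male (reference sample)
    X = rng.normal(0, 1, (n, 4)) + sex[:, None] * 0.9  # four hypothetical pelvic measurements

    lda = LinearDiscriminantAnalysis().fit(X, sex)
    posterior = lda.predict_proba(X)

    labels = np.where(posterior.max(axis=1) >= 0.95,
                      lda.classes_[posterior.argmax(axis=1)],
                      -1)                              # -1 = indeterminate
    determinate = labels != -1
    accuracy = (labels[determinate] == sex[determinate]).mean()
    print(f"classified: {determinate.mean():.0%}, accuracy among classified: {accuracy:.0%}")
    ```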

  14. From comparison to classification: a cortical tool for boosting perception.

    PubMed

    Nahum, Mor; Daikhin, Luba; Lubin, Yedida; Cohen, Yamit; Ahissar, Merav

    2010-01-20

    Humans are much better in relative than in absolute judgments. This common assertion is based on findings that discrimination thresholds are much lower when measured with methods that allow interstimuli comparisons than when measured with methods that require classification of one stimulus at a time and are hence sensitive to memory load. We now challenged this notion by measuring discrimination thresholds and evoked potentials while listeners performed a two-tone frequency discrimination task. We tested various protocols that differed in the pattern of cross-trial tone repetition. We found that best performance was achieved only when listeners effectively used cross-trial repetition to avoid interstimulus comparisons with the repeated reference tone. Instead, they classified one tone, the nonreference tone, as either high or low by comparing it with a recently formed internal reference. Listeners were not aware of the switch from interstimulus comparison to classification. Its successful use was revealed by the conjunction of improved behavioral performance and an event-related potential component (P3), indicating an implicit perceptual decision, which followed the nonreference tone in each trial. Interestingly, tone repetition itself did not suffice for the switch, implying that the bottleneck to discrimination does not reside at the lower, sensory stage. Rather, the temporal consistency of repetition was important, suggesting the involvement of higher-level mechanisms with longer time constants. These findings suggest that classification is based on more automatic and accurate mechanisms than interstimulus comparisons and that the ability to effectively use them depends on a dynamic interplay between higher- and lower-level cortical mechanisms.

  15. A Pilot Study of Immune and Mood Outcomes of a Community-Based Intervention for Dementia Caregivers: The PLST Intervention

    PubMed Central

    Garand, Linda; Buckwalter, Kathleen C.; Lubaroff, David M.; Tripp-Reimer, Toni; Frantz, Rita A.; Ansley, Timothy N.

    2010-01-01

    Providing care to a family member with dementia is conceptualized as a chronic stressor with adverse psychological and physical effects. The purpose of this pilot study was to evaluate mood and immune outcomes of caregivers exposed to a community-based psychoeducational nursing intervention based on the Progressively Lowered Stress Threshold (PLST) model. The PLST intervention is designed to strengthen the psychological resources of dementia caregivers by teaching methods of preventing and/or managing behavioral problems exhibited by the person with dementia. Mood and immune outcomes were compared between caregivers randomly assigned to receive either the PLST or a comparison intervention. Results of this pilot study suggest that caregivers who received the PLST intervention demonstrated significantly stronger T-cell proliferative responses to both PHA and ConA, indicating an improvement in T-cell immune function immediately after the in-home intervention (T2) and again after six months of telephone support for application of the PLST model (T3). Findings do not support the hypothesis that the PLST intervention had a significant effect on total mood disturbance or NK cell cytotoxicity over the course of the study. PMID:12143075

  16. Patient-based outcomes in patients with primary tinnitus undergoing tinnitus retraining therapy.

    PubMed

    Berry, Julie A; Gold, Susan L; Frederick, Ellen Alvarez; Gray, William C; Staecker, Hinrich

    2002-10-01

    To determine whether the Tinnitus Handicap Inventory (THI), a validated patient-based outcomes measure, may improve our ability to quantify impact and assess therapy for patients with tinnitus. Nonrandomized, prospective analysis of 32 patients undergoing tinnitus retraining therapy (TRT). Assessment tools included comprehensive audiology, a subjective self-assessment survey of tinnitus characteristics, and the THI. Tinnitus Handicap Inventory scores were assessed at baseline and 6 months following TRT. Baseline analysis revealed significant correlation between the subjective presence of hyperacusis and higher total, emotional, and catastrophic THI scores. Tinnitus Handicap Inventory scores correlated with subjective perception of overall tinnitus effect (P<.001). Mean pure-tone threshold average was 17.4 dB, and mean speech discrimination was 97.0%. There were no consistent correlations between baseline audiologic parameters and THI scores. Following 6 months of TRT, the total, emotional, functional, and catastrophic THI scores significantly improved (P<.001). Loudness discomfort levels also significantly improved (P≤.02). There is significant improvement in self-perceived disability following TRT as measured by the THI. The results confirm the utility of the THI as a patient-based outcomes measure for quantifying treatment status in patients with primary tinnitus.

  17. A New Approach to Threshold Attribute Based Signatures

    DTIC Science & Technology

    2011-01-01

    Inspired by developments in attribute-based encryption and signatures, there has recently been a spurt of progress in the direction of threshold attribute-based signatures (t-ABS). In this work we propose a novel approach to construct threshold attribute-based signatures inspired by ring signatures. Threshold attribute-based signatures, defined by a (t, n) threshold predicate, ensure that the signer holds at least t out of a specified set of n attributes.
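
    The (t, n) threshold predicate at the heart of t-ABS is easy to state concretely; the toy check below (with invented attribute names) shows the predicate only, not the cryptographic signature construction.

    ```python
    # Toy (t, n) threshold predicate: does the signer hold at least t of the n required attributes?
    def satisfies_threshold(signer_attributes: set, required_attributes: set, t: int) -> bool:
        return len(signer_attributes & required_attributes) >= t

    required = {"doctor", "cardiology", "hospital_A", "senior"}   # hypothetical attribute set
    print(satisfies_threshold({"doctor", "hospital_A"}, required, t=2))  # True
    print(satisfies_threshold({"doctor"}, required, t=2))                # False
    ```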

  18. Experimental evidence of a pathogen invasion threshold

    PubMed Central

    Krkošek, Martin

    2018-01-01

    Host density thresholds to pathogen invasion separate regions of parameter space corresponding to endemic and disease-free states. The host density threshold is a central concept in theoretical epidemiology and a common target of human and wildlife disease control programmes, but there is mixed evidence supporting the existence of thresholds, especially in wildlife populations or for pathogens with complex transmission modes (e.g. environmental transmission). Here, we demonstrate the existence of a host density threshold for an environmentally transmitted pathogen by combining an epidemiological model with a microcosm experiment. Experimental epidemics consisted of replicate populations of naive crustacean zooplankton (Daphnia dentifera) hosts across a range of host densities (20–640 hosts l−1) that were exposed to an environmentally transmitted fungal pathogen (Metschnikowia bicuspidata). Epidemiological model simulations, parametrized independently of the experiment, qualitatively predicted experimental pathogen invasion thresholds. Variability in parameter estimates did not strongly influence outcomes, though systematic changes to key parameters have the potential to shift pathogen invasion thresholds. In summary, we provide one of the first clear experimental demonstrations of pathogen invasion thresholds in a replicated experimental system, and provide evidence that such thresholds may be predictable using independently constructed epidemiological models. PMID:29410876
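
    A minimal way to see how a host-density invasion threshold arises for an environmentally transmitted pathogen is to write R0 for a simple SIZ-type model and solve R0 = 1 for host density; the parameter values below are illustrative assumptions, not the parametrization used in the study.

    ```python
    # R0 for environmental (spore-mediated) transmission increases with host density;
    # invasion requires R0 > 1. Parameter values are illustrative assumptions.
    def r0(host_density, shed=0.15, uptake=0.01, host_death=0.1, spore_loss=0.5):
        # spores shed per infection over its lifetime x probability a spore is taken up
        return (shed / host_death) * (uptake * host_density) / (spore_loss + uptake * host_density)

    def invasion_threshold(shed=0.15, uptake=0.01, host_death=0.1, spore_loss=0.5):
        # solve R0 = 1 for host density
        return spore_loss / (uptake * (shed / host_death - 1.0))

    for density in (20, 40, 80, 160, 320, 640):   # hosts per litre, as in the experiment
        print(density, round(r0(density), 2), r0(density) > 1.0)
    print("threshold density:", round(invasion_threshold(), 1), "hosts per litre")
    ```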

  19. Auditory Performance and Electrical Stimulation Measures in Cochlear Implant Recipients With Auditory Neuropathy Compared With Severe to Profound Sensorineural Hearing Loss.

    PubMed

    Attias, Joseph; Greenstein, Tally; Peled, Miriam; Ulanovski, David; Wohlgelernter, Jay; Raveh, Eyal

    The aim of the study was to compare auditory and speech outcomes and electrical parameters on average 8 years after cochlear implantation between children with isolated auditory neuropathy (AN) and children with sensorineural hearing loss (SNHL). The study was conducted at a tertiary, university-affiliated pediatric medical center. The cohort included 16 patients with isolated AN with current age of 5 to 12.2 years who had been using a cochlear implant for at least 3.4 years and 16 control patients with SNHL matched for duration of deafness, age at implantation, type of implant, and unilateral/bilateral implant placement. All participants had had extensive auditory rehabilitation before and after implantation, including the use of conventional hearing aids. Most patients received Cochlear Nucleus devices, and the remainder either Med-El or Advanced Bionics devices. Unaided pure-tone audiograms were evaluated before and after implantation. Implantation outcomes were assessed by auditory and speech recognition tests in quiet and in noise. Data were also collected on the educational setting at 1 year after implantation and at school age. The electrical stimulation measures were evaluated only in the Cochlear Nucleus implant recipients in the two groups. Similar mapping and electrical measurement techniques were used in the two groups. Electrical thresholds, comfortable level, dynamic range, and objective neural response telemetry threshold were measured across the 22-electrode array in each patient. Main outcome measures were between-group differences in the following parameters: (1) Auditory and speech tests. (2) Residual hearing. (3) Electrical stimulation parameters. (4) Correlations of residual hearing at low frequencies with electrical thresholds at the basal, middle, and apical electrodes. The children with isolated AN performed equally well to the children with SNHL on auditory and speech recognition tests in both quiet and noise. More children in the AN group than the SNHL group were attending mainstream educational settings at school age, but the difference was not statistically significant. Significant between-group differences were noted in electrical measurements: the AN group was characterized by a lower current charge to reach subjective electrical thresholds, lower comfortable level and dynamic range, and lower telemetric neural response threshold. Based on pure-tone audiograms, the children with AN also had more residual hearing before and after implantation. Highly positive coefficients were found on correlation analysis between T levels across the basal and midcochlear electrodes and low-frequency acoustic thresholds. Prelingual children with isolated AN who fail to show expected oral and auditory progress after extensive rehabilitation with conventional hearing aids should be considered for cochlear implantation. Children with isolated AN had similar pattern as children with SNHL on auditory performance tests after cochlear implantation. The lower current charge required to evoke subjective and objective electrical thresholds in children with AN compared with children with SNHL may be attributed to the contribution to electrophonic hearing from the remaining neurons and hair cells. In addition, it is also possible that mechanical stimulation of the basilar membrane, as in acoustic stimulation, is added to the electrical stimulation of the cochlear implant.

  20. Patient-reported questionnaires in MS rehabilitation: responsiveness and minimal important difference of the multiple sclerosis questionnaire for physiotherapists (MSQPT).

    PubMed

    van der Maas, Nico Arie

    2017-03-16

    The Multiple Sclerosis Questionnaire for Physical Therapists (MSQPT) is a patient-rated outcome questionnaire for evaluating the rehabilitation of persons with multiple sclerosis (MS). Responsiveness was evaluated, and minimal important difference (MID) estimates were calculated to provide thresholds for clinical change for four items, three sections and the total score of the MSQPT. This multicentre study used a combined distribution- and anchor-based approach with multiple anchors and multiple rating of change questions. Responsiveness was evaluated using effect size, standardized response mean (SRM), modified SRM and relative efficiency. For distribution-based MID estimates, 0.2 and 0.33 standard deviations (SD), standard error of measurement (SEM) and minimal detectable change were used. Triangulation of anchor- and distribution-based MID estimates provided a range of MID values for each of the four items, the three sections and the total score of the MSQPT. The MID values were tested for their sensitivity and specificity for amelioration and deterioration for each of the four items, the three sections and the total score of the MSQPT. The MID values of each item and section and of the total score with the best sensitivity and specificity were selected as thresholds for clinical change. The outcome measures were the MSQPT, Hamburg Quality of Life Questionnaire for Multiple Sclerosis (HAQUAMS), rating of change questionnaires, Expanded Disability Status Scale, 6-metre timed walking test, Berg Balance Scale and 6-minute walking test. The effect size ranged from 0.46 to 1.49. The SRM data showed comparable results. The modified SRM ranged from 0.00 to 0.60. Anchor-based MID estimates were very low and were comparable with SD- and SEM-based estimates. The MSQPT was more responsive than the HAQUAMS in detecting improvement but less responsive in finding deterioration. The best MID estimates of the items, sections and total score, expressed in percentage of their maximum score, were between 5.4% (activity) and 22% (item 10) change for improvement and between 5.7% (total score) and 22% (item 10) change for deterioration. The MSQPT is a responsive questionnaire with an adequate MID that may be used as threshold for change during rehabilitation of MS patients. This trial was retrospectively (01/24/2015) registered in ClinicalTrials.gov as NCT02346279.
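
    The distribution-based estimates named above follow directly from the baseline spread and a reliability coefficient; a small sketch with placeholder scores and an assumed test-retest reliability is:

    ```python
    # Distribution-based MID estimates: 0.2 SD, 0.33 SD, SEM, and SEM-based MDC95.
    # Baseline scores and the reliability coefficient are hypothetical placeholders.
    import math

    baseline_scores = [42.0, 55.5, 38.0, 61.0, 47.5, 52.0, 44.0, 58.5]
    reliability = 0.90                                  # e.g. an assumed test-retest ICC

    n = len(baseline_scores)
    mean = sum(baseline_scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in baseline_scores) / (n - 1))

    sem = sd * math.sqrt(1.0 - reliability)             # standard error of measurement
    mdc95 = 1.96 * math.sqrt(2.0) * sem                 # minimal detectable change (95%)

    print(f"0.2 SD  = {0.2 * sd:.2f}")
    print(f"0.33 SD = {0.33 * sd:.2f}")
    print(f"SEM     = {sem:.2f}")
    print(f"MDC95   = {mdc95:.2f}")
    ```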

  1. STATISTICAL METHODOLOGY FOR THE SIMULTANEOUS ANALYSIS OF MULTIPLE TYPES OF OUTCOMES IN NONLINEAR THRESHOLD MODELS.

    EPA Science Inventory

    Multiple outcomes are often measured on each experimental unit in toxicology experiments. These multiple observations typically imply the existence of correlation between endpoints, and a statistical analysis that incorporates it may result in improved inference. When both disc...

  2. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
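
    The core of the simulation argument can be reproduced in a few lines: generate skewed sum scores under the null, remove |Z| > 2 cases, run the t test, and compare with a rank-based test on the untrimmed data. The data-generating choices below are illustrative, not the article's exact simulation design.

    ```python
    # Under H0, outlier removal at |Z| > 2 on skewed sum scores tends to inflate the
    # Type I error of the t test; the rank-based test on full data stays near nominal.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_sim, n_per_group, alpha = 2000, 25, 0.05

    def drop_outliers(x, z_cut=2.0):
        z = (x - x.mean()) / x.std(ddof=1)
        return x[np.abs(z) <= z_cut]

    t_rejections = mw_rejections = 0
    for _ in range(n_sim):
        g1 = rng.binomial(10, 0.15, n_per_group).astype(float)   # skewed "sum scores"
        g2 = rng.binomial(10, 0.15, n_per_group).astype(float)
        t_rejections += stats.ttest_ind(drop_outliers(g1), drop_outliers(g2)).pvalue < alpha
        mw_rejections += stats.mannwhitneyu(g1, g2).pvalue < alpha

    print("t test after outlier removal:", t_rejections / n_sim)
    print("Mann-Whitney on full data:   ", mw_rejections / n_sim)
    ```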

  3. Statin Selection in Qatar Based on Multi-indication Pharmacotherapeutic Multi-criteria Scoring Model, and Clinician Preference.

    PubMed

    Al-Badriyeh, Daoud; Fahey, Michael; Alabbadi, Ibrahim; Al-Khal, Abdullatif; Zaidan, Manal

    2015-12-01

    Statin selection for the largest hospital formulary in Qatar is not systematic, not comparative, and does not consider the multi-indication nature of statins. There are no reports in the literature of multi-indication-based comparative scoring models of statins or of statin selection criteria weights that are based primarily on local clinicians' preferences and experiences. This study sought to comparatively evaluate statins for first-line therapy in Qatar, and to quantify the economic impact of this. An evidence-based, multi-indication, multi-criteria pharmacotherapeutic model was developed for the scoring of statins from the perspective of the main health care provider in Qatar. The literature and an expert panel informed the selection criteria of statins. Relative weighting of selection criteria was based on the input of the relevant local clinician population. Statins were comparatively scored based on literature evidence, with those exceeding a defined scoring threshold being recommended for use. With 95% CI and 5% margin of error, the scoring model was successfully developed. Selection criteria comprised 28 subcriteria under the following main criteria: clinical efficacy, best published evidence and experience, adverse effects, drug interaction, dosing time, and fixed dose combination availability. Outcome measures for multiple indications were related to effects on LDL cholesterol, HDL cholesterol, triglyceride, total cholesterol, and C-reactive protein. Atorvastatin, pravastatin, and rosuvastatin exceeded defined pharmacotherapeutic thresholds. Atorvastatin and pravastatin were recommended for first-line use and rosuvastatin as a nonformulary alternative. It was estimated that this would produce a 17.6% cost savings in statins expenditure. Sensitivity analyses confirmed the robustness of the evaluation's outcomes against input uncertainties. Incorporating a comparative evaluation of statins in Qatari practices based on a locally developed, transparent, multi-indication, multi-criteria scoring model has the potential to considerably reduce expenditures on statins. Atorvastatin and pravastatin should be the first-line statin therapies in the main Qatari health care provider, with rosuvastatin as an alternative. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.

  4. The relationship between separation anxiety and impairment

    PubMed Central

    Foley, Debra L; Rowe, Richard; Maes, Hermine; Silberg, Judy; Eaves, Lindon; Pickles, Andrew

    2009-01-01

    The goal of this study was to characterize the contemporaneous and prognostic relationship between symptoms of separation anxiety disorder (SAD) and associated functional impairment. The sample comprised n=2067 8–16 year-old twins from a community-based registry. Juvenile subjects and their parents completed a personal interview on two occasions, separated by an average follow-up period of 18 months, about the subject’s current history of SAD and associated functional impairment. Results showed that SAD symptoms typically caused very little impairment but demonstrated significant continuity over time. Older youth had significantly more persistent symptoms than younger children. Prior symptom level independently predicted future symptom level and diagnostic symptom threshold, with and without impairment. Neither diagnostic threshold nor severity of impairment independently predicted outcomes after taking account of prior symptom levels. The results indicate that impairment may index current treatment need but symptom levels provide the best information about severity and prognosis. PMID:17658718

  5. Temporal Sensitivity Measured Shortly After Cochlear Implantation Predicts 6-Month Speech Recognition Outcome.

    PubMed

    Erb, Julia; Ludwig, Alexandra Annemarie; Kunke, Dunja; Fuchs, Michael; Obleser, Jonas

    2018-04-24

    Psychoacoustic tests assessed shortly after cochlear implantation are useful predictors of the rehabilitative speech outcome. While largely independent, both spectral and temporal resolution tests are important to provide an accurate prediction of speech recognition. However, rapid tests of temporal sensitivity are currently lacking. Here, we propose a simple amplitude modulation rate discrimination (AMRD) paradigm that is validated by predicting future speech recognition in adult cochlear implant (CI) patients. In 34 newly implanted patients, we used an adaptive AMRD paradigm, where broadband noise was modulated at the speech-relevant rate of ~4 Hz. In a longitudinal study, speech recognition in quiet was assessed using the closed-set Freiburger number test shortly after cochlear implantation (t0) as well as the open-set Freiburger monosyllabic word test 6 months later (t6). Both AMRD thresholds at t0 (r = -0.51) and speech recognition scores at t0 (r = 0.56) predicted speech recognition scores at t6. However, AMRD and speech recognition at t0 were uncorrelated, suggesting that those measures capture partially distinct perceptual abilities. A multiple regression model predicting 6-month speech recognition outcome with deafness duration and speech recognition at t0 improved from adjusted R² = 0.30 to adjusted R² = 0.44 when AMRD threshold was added as a predictor. These findings identify AMRD thresholds as a reliable, nonredundant predictor above and beyond established speech tests for CI outcome. This AMRD test could potentially be developed into a rapid clinical temporal-resolution test to be integrated into the postoperative test battery to improve the reliability of speech outcome prognosis.
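
    The incremental-validity comparison described above amounts to checking whether adjusted R² improves when the AMRD threshold joins the baseline predictors; a sketch with simulated placeholder variables (not the patient data) is:

    ```python
    # Does adding an extra predictor raise adjusted R^2? Simulated placeholder data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 34
    deafness_years = rng.uniform(1, 40, n)
    speech_t0 = rng.uniform(0, 100, n)
    amrd_threshold = rng.uniform(0.5, 8.0, n)           # hypothetical discrimination thresholds
    speech_t6 = (20 + 0.5 * speech_t0 - 0.4 * deafness_years
                 - 3.0 * amrd_threshold + rng.normal(0, 10, n))

    base = sm.OLS(speech_t6, sm.add_constant(np.column_stack([deafness_years, speech_t0]))).fit()
    full = sm.OLS(speech_t6, sm.add_constant(np.column_stack([deafness_years, speech_t0,
                                                              amrd_threshold]))).fit()
    print(f"adjusted R^2: {base.rsquared_adj:.2f} -> {full.rsquared_adj:.2f}")
    ```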

  6. Perceived discrimination and health outcomes: a gender comparison among Asian-Americans nationwide.

    PubMed

    Hahm, Hyeouk Chris; Ozonoff, Al; Gaumond, Jillian; Sue, Stanley

    2010-09-01

    We examined whether similarities and differences exist in the association between perceived discrimination and poor mental and physical health among Asian-American adult women and men. We also tested whether Asian-American women would have a lower perceived discrimination threshold for developing negative health outcomes than Asian-American men. Data were derived from the National Latino and Asian-American Study (2002-2003). A nationally representative sample of Asian-American adults (1,075 women and 972 men) was examined. There were more gender similarities than differences in the strong association between discrimination and health. More prominent gender differences were found for the specific level of discrimination and its potential health effects. Specifically, for both Asian women and men, a high level of perceived discrimination showed stronger associations with mental health than with physical health outcomes. And yet, compared with men, the threshold of discrimination was lower for women in affecting mental and physical health status. The findings underscore that a high level of discrimination was associated with negative mental and physical health outcomes for both women and men. However, women had more negative mental and physical health outcomes when exposed to a lower threshold of discrimination than men. These findings suggest that failing to examine women and men separately in discrimination research may no longer be appropriate among the Asian-American population. Future research should focus attention on the biological, social, and political mechanisms that mitigate the adverse health effects of discrimination in order to develop a more comprehensive approach to eliminate disparities in health. 2010 Jacobs Institute of Women's Health.

  7. A universal approach to determine footfall timings from kinematics of a single foot marker in hoofed animals

    PubMed Central

    Clayton, Hilary M.

    2015-01-01

    The study of animal movement commonly requires the segmentation of continuous data streams into individual strides. The use of forceplates and foot-mounted accelerometers readily allows the detection of the foot-on and foot-off events that define a stride. However, when relying on optical methods such as motion capture, there is lack of validated robust, universally applicable stride event detection methods. To date, no method has been validated for movement on a circle, while algorithms are commonly specific to front/hind limbs or gait. In this study, we aimed to develop and validate kinematic stride segmentation methods applicable to movement on straight line and circle at walk and trot, which exclusively rely on a single, dorsal hoof marker. The advantage of such marker placement is the robustness to marker loss and occlusion. Eight horses walked and trotted on a straight line and in a circle over an array of multiple forceplates. Kinetic events were detected based on the vertical force profile and used as the reference values. Kinematic events were detected based on displacement, velocity or acceleration signals of the dorsal hoof marker depending on the algorithm using (i) defined thresholds associated with derived movement signals and (ii) specific events in the derived movement signals. Method comparison was performed by calculating limits of agreement, accuracy, between-horse precision and within-horse precision based on differences between kinetic and kinematic event. In addition, we examined the effect of force thresholds ranging from 50 to 150 N on the timings of kinetic events. The two approaches resulted in very good and comparable performance: of the 3,074 processed footfall events, 95% of individual foot on and foot off events differed by no more than 26 ms from the kinetic event, with average accuracy between −11 and 10 ms and average within- and between horse precision ≤8 ms. While the event-based method may be less likely to suffer from scaling effects, on soft ground the threshold-based method may prove more valuable. While we found that use of velocity thresholds for foot on detection results in biased event estimates for the foot on the inside of the circle at trot, adjusting thresholds for this condition negated the effect. For the final four algorithms, we found no noteworthy bias between conditions or between front- and hind-foot timings. Different force thresholds in the range of 50 to 150 N had the greatest systematic effect on foot-off estimates in the hind limbs (up to on average 16 ms per condition), being greater than the effect on foot-on estimates or foot-off estimates in the forelimbs (up to on average ±7 ms per condition). PMID:26157641
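
    A threshold-based detector of the kind evaluated above can be reduced to a simple state machine on the marker's speed signal: arm during a swing phase, fire when the speed drops below a foot-on threshold. The threshold values, sampling rate, and toy speed trace below are illustrative assumptions.

    ```python
    # Threshold-based foot-on detection from a single marker's speed signal.
    import numpy as np

    def detect_foot_on(speed, threshold=0.2, min_swing_speed=1.0):
        """Return sample indices where speed crosses below `threshold` after having
        exceeded `min_swing_speed` (i.e. after a genuine swing phase)."""
        events, in_swing = [], False
        for i, v in enumerate(speed):
            if v > min_swing_speed:
                in_swing = True
            elif in_swing and v < threshold:
                events.append(i)
                in_swing = False
        return events

    # Toy speed trace (m/s) sampled at 200 Hz: alternating swing and stance phases.
    t = np.arange(0, 2, 1 / 200)
    speed = np.clip(np.abs(np.sin(2 * np.pi * 1.0 * t)) * 3.0 - 0.5, 0.0, None)
    print(detect_foot_on(speed))
    ```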

  8. Impact of maternal age on obstetric and neonatal outcome with emphasis on primiparous adolescents and older women: a Swedish Medical Birth Register Study.

    PubMed

    Blomberg, Marie; Birch Tyrberg, Rasmus; Kjølhede, Preben

    2014-11-11

    To evaluate the associations between maternal age and obstetric and neonatal outcomes in primiparous women with emphasis on teenagers and older women. A population-based cohort study. The Swedish Medical Birth Register. Primiparous women with singleton births from 1992 through 2010 (N=798,674) were divided into seven age groups: <17 years, 17-19 years and an additional five 5-year classes. The reference group consisted of the women aged 25-29 years. Obstetric and neonatal outcome. The teenager groups had significantly more vaginal births (adjusted OR (aOR) 2.04 (1.79 to 2.32) and 1.95 (1.88 to 2.02) for age <17 years and 17-19 years, respectively); fewer caesarean sections (aOR 0.57 (0.48 to 0.67) and 0.55 (0.53 to 0.58)), and instrumental vaginal births (aOR 0.43 (0.36 to 0.52) and 0.50 (0.48 to 0.53)) compared with the reference group. The opposite was found among older women reaching a fourfold increased OR for caesarean section. The teenagers showed no increased risk of adverse neonatal outcome but presented an increased risk of prematurity <32 weeks (aOR 1.66 (1.10 to 2.51) and 1.20 (1.04 to 1.38)). Women with advancing age (≥30 years) revealed significantly increased risk of prematurity, perineal lacerations, preeclampsia, abruption, placenta previa, postpartum haemorrhage and unfavourable neonatal outcomes compared with the reference group. For clinicians counselling young women it is of importance to highlight the obstetrically positive consequences that fewer maternal complications and favourable neonatal outcomes are expected. The results imply that there is a need for individualising antenatal surveillance programmes and obstetric care based on age grouping in order to attempt to improve the outcomes in the age groups with less favourable obstetric and neonatal outcomes. Such changes in surveillance programmes and obstetric interventions need to be evaluated in further studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. Invited commentary: the incremental value of customization in defining abnormal fetal growth status.

    PubMed

    Zhang, Jun; Sun, Kun

    2013-10-15

    Reference tools based on birth weight percentiles at a given gestational week have long been used to define fetuses or infants that are small or large for their gestational ages. However, important deficiencies of the birth weight reference are being increasingly recognized. Overwhelming evidence indicates that an ultrasonography-based fetal weight reference should be used to classify fetal and newborn sizes during pregnancy and at birth, respectively. Questions have been raised as to whether further adjustments for race/ethnicity, parity, sex, and maternal height and weight are helpful to improve the accuracy of the classification. In this issue of the Journal, Carberry et al. (Am J Epidemiol. 2013;178(8):1301-1308) show that adjustment for race/ethnicity is useful, but that additional fine tuning for other factors (i.e., full customization) in the classification may not further improve the ability to predict infant morbidity, mortality, and other fetal growth indicators. Thus, the theoretical advantage of full customization may have limited incremental value for pediatric outcomes, particularly in term births. Literature on the prediction of short-term maternal outcomes and very long-term outcomes (adult diseases) is too scarce to draw any conclusions. Given that each additional variable being incorporated in the classification scheme increases complexity and costs in practice, the clinical utility of full customization in obstetric practice requires further testing.

  10. Test-retest reliability of neurophysiological tests of hand-arm vibration syndrome in vibration exposed workers and unexposed referents.

    PubMed

    Gerhardsson, Lars; Gillström, Lennart; Hagberg, Mats

    2014-01-01

    Exposure to hand-held vibrating tools may cause the hand-arm vibration syndrome (HAVS). The aim was to study the test-retest reliability of hand and muscle strength tests, and tests for the determination of thermal and vibration perception thresholds, which are used when investigating signs of neuropathy in vibration exposed workers. In this study, 47 vibration exposed workers who had been investigated at the department of Occupational and Environmental Medicine in Gothenburg were compared with a randomized sample of 18 unexposed subjects from the general population of the city of Gothenburg. All participants passed a structured interview, answered several questionnaires and had a physical examination including hand and finger muscle strength tests, determination of vibrotactile (VPT) and thermal perception thresholds (TPT). Two weeks later, 23 workers and referents, selected in a randomized manner, were called back for the same test-procedures for the evaluation of test-retest reliability. The test-retest reliability after a two week interval expressed as limits of agreement (LOA; Bland-Altman), intra-class correlation coefficients (ICC) and Pearson correlation coefficients was excellent for tests with the Baseline hand grip, Pinch-grip and 3-Chuck grip among the exposed workers and referents (N = 23: percentage of differences within LOA 91 - 100%; ICC-values ≥0.93; Pearson r ≥0.93). The test-retest reliability was also excellent (percentage of differences within LOA 96-100 %) for the determination of vibration perception thresholds in digits 2 and 5 bilaterally as well as for temperature perception thresholds in digits 2 and 5, bilaterally (percentage of differences within LOA 91 - 96%). For ICC and Pearson r the results for vibration perception thresholds were good for digit 2, left hand and for digit 5, bilaterally (ICC ≥ 0.84; r ≥0.85), and lower (ICC = 0.59; r = 0.59) for digit 2, right hand. For the latter two indices the test-retest reliability for the determination of temperature thresholds was lower and showed more varying results. The strong test-retest reliability for hand and muscle strength tests as well as for the determination of VPTs makes these procedures useful for diagnostic purposes and follow-up studies in vibration exposed workers.

  11. Sentence Recognition Prediction for Hearing-impaired Listeners in Stationary and Fluctuation Noise With FADE

    PubMed Central

    Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas

    2016-01-01

    To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only few assumptions. It builds on the closed-set matrix sentence recognition test which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The “typical” audiogram shapes from Bisgaard et al with or without a “typical” level uncertainty and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach which is based on the individual threshold data only. PMID:27604782

  12. The asymmetry of U.S. monetary policy: Evidence from a threshold Taylor rule with time-varying threshold values

    NASA Astrophysics Data System (ADS)

    Zhu, Yanli; Chen, Haiqiang

    2017-05-01

    In this paper, we revisit the issue of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture potential heterogeneity in the regime-shift mechanism under different economic conditions, we modify the threshold model by treating the threshold value as a latent variable that follows an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support the view that U.S. monetary policy operations are asymmetric across these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate to reflect their assessment of the general health of the economy.
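
    The regime-switching mechanism can be made concrete with a small forward simulation: the policy rate follows a Taylor rule whose response coefficients depend on whether unemployment exceeds a latent AR(1) threshold. All coefficients and data-generating choices below are illustrative assumptions, not the paper's estimates.

    ```python
    # Toy simulation of a threshold Taylor rule with a time-varying (AR(1)) threshold.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 240                                    # quarters
    r_star, pi_star = 2.0, 2.0                 # neutral real rate, inflation target (%)

    threshold = np.empty(T)
    threshold[0] = 6.0
    for t in range(1, T):                      # latent targeted unemployment rate
        threshold[t] = 0.5 + 0.92 * threshold[t - 1] + rng.normal(0, 0.1)

    unemployment = 6.0 + np.cumsum(rng.normal(0, 0.2, T)).clip(-2.0, 4.0)
    inflation = pi_star + rng.normal(0, 1.0, T)
    output_gap = rng.normal(0, 1.5, T)

    rate = np.empty(T)
    for t in range(T):
        recession = unemployment[t] > threshold[t]
        phi_pi, phi_y = (0.8, 1.2) if recession else (1.5, 0.5)   # weaker/stronger responses
        rate[t] = r_star + inflation[t] + phi_pi * (inflation[t] - pi_star) + phi_y * output_gap[t]

    print(rate[:8].round(2))
    ```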

  13. Methodology Series Module 2: Case-control Studies

    PubMed Central

    Setia, Maninder Singh

    2016-01-01

    Case-control study design is a type of observational study. In this design, participants are selected for the study based on their outcome status. Thus, some participants have the outcome of interest (referred to as cases), whereas others do not have the outcome of interest (referred to as controls). The investigator then assesses the exposure in both these groups. The investigator should define the cases as specifically as possible. Sometimes, definition of a disease may be based on multiple criteria; thus, all these points should be explicitly stated in the case definition. An important aspect of selecting a control is that they should be from the same ‘study base’ as that of the cases. We can select controls from a variety of groups. Some of them are: general population; relatives or friends; and hospital patients. Matching is often used in case-control studies to ensure that the cases and controls are similar in certain characteristics, and it is a useful technique to increase the efficiency of the study. Case-control studies can usually be conducted relatively quickly and are inexpensive – particularly when compared with cohort studies (prospective). This design is useful for studying rare outcomes and outcomes with long latent periods, but it is not very useful for studying rare exposures. Furthermore, case-control studies may also be prone to certain biases – selection bias and recall bias. PMID:27057012

  14. Newly postulated neurodevelopmental risks of pediatric anesthesia: theories that could rock our world.

    PubMed

    Hays, Stephen Robert; Deshpande, Jayant K

    2013-04-01

    General anesthetics can induce apoptotic neurodegeneration and subsequent maladaptive behaviors in animals. Retrospective human studies suggest associations between early anesthetic exposure and subsequent adverse neurodevelopmental outcomes. The relevance of animal data to clinical practice is unclear and to our knowledge the causality underlying observed associations in humans is unknown. We reviewed newly postulated neurodevelopmental risks of pediatric anesthesia and discuss implications for the surgical care of children. We queried the MEDLINE®/PubMed® and EMBASE® databases for citations in English on pediatric anesthetic neurotoxicity with the focus on references from the last decade. Animal studies in rodents and primates demonstrate apoptotic neuropathology and subsequent maladaptive behaviors after exposure to all currently available general anesthetics with the possible exception of α2-adrenergic agonists. Similar adverse pathological and clinical effects occur after untreated pain. Anesthetic neurotoxicity in animals develops only after exposure above threshold doses and durations during a critical neurodevelopmental window of maximal synaptogenesis in the absence of concomitant painful stimuli. Anesthetic exposure outside this window or below threshold doses and durations shows no apparent neurotoxicity, while exposure in the context of concomitant painful stimuli is neuroprotective. Retrospective human studies suggest associations between early anesthetic exposure and subsequent adverse neurodevelopmental outcomes, particularly after multiple exposures. The causality underlying the associations is unknown. Ongoing investigations may clarify the risks associated with current practice. Surgical care of all patients mandates appropriate anesthesia. Neurotoxic doses and the duration of anesthetic exposure in animals may have little relevance to clinical practice, particularly surgical anesthesia for perioperative pain. The causality underlying the observed associations between early anesthetic exposure and subsequent adverse neurodevelopmental outcomes is unknown. Anesthetic exposure may be a marker of increased risk. Especially in young children, procedures requiring general anesthesia should be performed only as necessary and general anesthesia duration should be minimized. Alternatives to general anesthesia and the deferral of elective procedures beyond the first few years of life should be considered, as appropriate. Participation in ongoing efforts should be encouraged to generate further data. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  15. A Method for Evaluating Outcomes of Restoration When No Reference Sites Exist

    Treesearch

    J. Stephen Brewer; Timothy Menzel

    2009-01-01

    Ecological restoration typically seeks to shift species composition toward that of existing reference sites. Yet, comparing the assemblages in restored and reference habitats assumes that similarity to the reference habitat is the optimal outcome of restoration and does not provide a perspective on regionally rare off-site species. When no such reference assemblages of...

  16. Determining the True Cost to Deliver Total Hip and Knee Arthroplasty Over the Full Cycle of Care: Preparing for Bundling and Reference-Based Pricing.

    PubMed

    DiGioia, Anthony M; Greenhouse, Pamela K; Giarrusso, Michelle L; Kress, Justina M

    2016-01-01

    The Affordable Care Act accelerates health care providers' need to prepare for new care delivery platforms and payment models such as bundling and reference-based pricing (RBP). Thriving in this environment will be difficult without knowing the true cost of care delivery at the level of the clinical condition over the full cycle of care. We describe a project in which we identified true costs for both total hip and total knee arthroplasty. With the same tool, we identified cost drivers in each segment of care delivery and collected patient experience information. Combining cost and experience information with outcomes data we already collect allows us to drive costs down while protecting outcomes and experiences, and compete successfully in bundling and RBP programs. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Development of a diabetes treatment simulation model: with application to assessing alternative treatment intensification strategies on survival and diabetes-related complications.

    PubMed

    Chen, J; Alemao, E; Yin, D; Cook, J

    2008-06-01

    The objective of this analysis is to project the long-term impacts on life expectancy and occurrence over 5, 10, and 40 years of microvascular and macrovascular complications of diabetes when using different haemoglobin A1c (HbA1c) thresholds for intensifying treatment of type 2 diabetes. A flexible, discrete-event simulation model has been developed to evaluate alternative treatment strategies based on the United Kingdom Prospective Diabetes Study Outcomes Model. In the present analysis, the model is used to investigate the impact of alternative HbA1c thresholds for treatment intensification ranging from 7.0 to 9.0%. For each intensification strategy, the model is run using 80 simulated patients for each of 1224 patient profiles from the Real-Life Effectiveness and Care Patterns of Diabetes Management study (for a total of 97,920 simulated patients) to project the number of patients who will experience diabetes-related complications over time. The use of lower HbA1c thresholds for intensifying treatment is associated with improved long-term outcomes. When the HbA1c threshold for intensifying therapy from oral treatment to basal insulin (T1) is 7.0% and the threshold for intensifying basal insulin to multiple-dose insulin (T2) is 7.0%, simulated patients spend 54% of their time with HbA1c >7.0%, but 95% of their time with HbA1c >7.0% if T1 and T2 are set to 9.0%. More aggressive or proactive treatment postures are projected to reduce clinical events, including diabetes-related deaths and diabetes-related complications, particularly myocardial infarctions (MIs). When T1 and T2 are set to 7.0%, there are 592 fewer diabetes-related deaths in the first 5 years of the simulation and 3740 fewer deaths over 40 years compared with the results when T1 and T2 are set to 9.0%. These decreases in deaths were also associated with a 0.35 year gain in projected life expectancy. Compared with an aggressive strategy with both T1 and T2 being 7%, 644 more patients are projected to experience at least one episode of MI in the first 5 years if treatment intensification is delayed until HbA1c reaches 9.0%. This number increases over time, reaching 2906 additional patients experiencing at least one MI over a 40-year time period. We report results from a discrete-event simulation model to explore the impact of alternative treatment strategies for patients with type 2 diabetes. Strategies that intensify therapy (in response to rising HbA1c levels) at lower HbA1c thresholds (e.g. 7.0%) are associated with enhanced projected long-term health outcomes.
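
    The effect of the intensification thresholds T1 and T2 can be illustrated with a deliberately crude discrete-time sketch (annual steps, fixed drift and treatment effects); all values below are assumptions for illustration and are not taken from the UKPDS-based simulation model.

    ```python
    # Toy HbA1c trajectory: intensify at T1 (oral -> basal insulin) and T2
    # (basal -> multiple-dose insulin); track the share of years spent above 7.0%.
    def simulate_hba1c(t1, t2, years=10, start=7.5, drift=0.25, effect=1.2):
        hba1c, step, above_7 = start, 0, 0
        for _ in range(years):
            if step == 0 and hba1c > t1:
                hba1c, step = hba1c - effect, 1      # intensify to basal insulin
            elif step == 1 and hba1c > t2:
                hba1c, step = hba1c - effect, 2      # intensify to multiple-dose insulin
            above_7 += hba1c > 7.0
            hba1c += drift                           # assumed annual deterioration
        return above_7 / years

    print("share of years with HbA1c > 7%:",
          simulate_hba1c(7.0, 7.0), "vs", simulate_hba1c(9.0, 9.0))
    ```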

  18. Support by trained mentor mothers for abused women: a promising intervention in primary care.

    PubMed

    Prosman, Gert-Jan; Lo Fo Wong, Sylvie H; Lagro-Janssen, Antoine L M

    2014-02-01

    Intimate partner violence (IPV) against women is a major health problem and negatively affects the victim's mental and physical health. Evidence-based interventions in family practice are scarce. We aimed to evaluate a low threshold home-visiting intervention for abused women provided by trained mentor mothers in family practice. The aim was to reduce exposure to IPV, symptoms of depression as well as to improve social support, participation in society and acceptance of mental health care. A pre-post study of a 16-week mentoring intervention with identified abused women with children was conducted. After referral by a family doctor, a mentor mother visited the abused woman weekly. Primary outcomes are IPV assessed with the Composite Abuse Scale (CAS), depressive symptoms using the Symptom Checklist (SCL 90) and social support by the Utrecht Coping List. Secondary outcomes are analysed qualitatively: participation in society defined as employment and education and the acceptance of mental health care. At baseline, 63 out of 66 abused women were referred to mentor support. Forty-three participants completed the intervention programme. IPV decreased from CASt otal 46.7 (SD 24.7) to 9.0 (SD 9.1) (P ≤ 0.001) after the mentor mother support programme. Symptoms of depression decreased from 53.3 (SD 13.7) to 34.8 (SD 11.5) (P ≤ 0.001) and social support increased from 13.2 (SD 4.0) to 15.2 (SD 3.5) (P ≤ 0.001). Participation in society and the acceptance of mental health for mother and child improved. Sixteen weekly visits by trained mentor mothers are a promising intervention to decrease exposure to IPV and symptoms of depression, as well as to improve social support, participation in society and the acceptance of professional help for abused women and their children.

  19. Systematic Review of Health Economic Impact Evaluations of Risk Prediction Models: Stop Developing, Start Evaluating.

    PubMed

    van Giessen, Anoukh; Peters, Jaime; Wilcher, Britni; Hyde, Chris; Moons, Carl; de Wit, Ardine; Koffijberg, Erik

    2017-04-01

    Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs). To evaluate the current state of HEEs of PMs by performing a comprehensive systematic review. Four databases were searched for HEEs of PM-based strategies. Two reviewers independently selected eligible articles. A checklist was compiled to score items focusing on general characteristics of HEEs of PMs, model characteristics and quality of HEEs, evidence on PMs typically used in the HEEs, and the specific challenges in performing HEEs of PMs. After screening 791 abstracts, 171 full texts, and reference checking, 40 eligible HEEs evaluating 60 PMs were identified. In these HEEs, PM strategies were compared with current practice (n = 32; 80%), to other stratification methods for patient management (n = 19; 48%), to an extended PM (n = 9; 23%), or to alternative PMs (n = 5; 13%). The PMs guided decisions on treatment (n = 42; 70%), further testing (n = 18; 30%), or treatment prioritization (n = 4; 7%). For 36 (60%) PMs, only a single decision threshold was evaluated. Costs of risk prediction were ignored for 28 (46%) PMs. Uncertainty in outcomes was assessed using probabilistic sensitivity analyses in 22 (55%) HEEs. Despite the huge number of PMs in the medical literature, HEE of PMs remains rare. In addition, we observed great variety in their quality and methodology, which may complicate interpretation of HEE results and implementation of PMs in practice. Guidance on HEE of PMs could encourage and standardize their application and enhance methodological quality, thereby improving adequate use of PM strategies. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  20. Recursive estimators of mean-areal and local bias in precipitation products that account for conditional bias

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Seo, Dong-Jun

    2017-03-01

    This paper presents novel formulations of mean field bias (MFB) and local bias (LB) correction schemes that incorporate conditional bias (CB) penalty. These schemes are based on the operational MFB and LB algorithms in the National Weather Service (NWS) Multisensor Precipitation Estimator (MPE). By incorporating CB penalty in the cost function of exponential smoothers, we are able to derive augmented versions of recursive estimators of MFB and LB. Two extended versions of MFB algorithms are presented, one incorporating spatial variation of gauge locations only (MFB-L), and the second integrating both gauge locations and CB penalty (MFB-X). These two MFB schemes and the extended LB scheme (LB-X) are assessed relative to the original MFB and LB algorithms (referred to as MFB-O and LB-O, respectively) through a retrospective experiment over a radar domain in north-central Texas, and through a synthetic experiment over the Mid-Atlantic region. The outcome of the former experiment indicates that introducing the CB penalty to the MFB formulation leads to small, but consistent improvements in bias and CB, while its impacts on hourly correlation and Root Mean Square Error (RMSE) are mixed. Incorporating CB penalty in LB formulation tends to improve the RMSE at high rainfall thresholds, but its impacts on bias are also mixed. The synthetic experiment suggests that beneficial impacts are more conspicuous at low gauge density (9 per 58,000 km²), and tend to diminish at higher gauge density. The improvement at high rainfall intensity is partly an outcome of the conservativeness of the extended LB scheme. This conservativeness arises in part from the more frequent presence of negative eigenvalues in the extended covariance matrix which leads to no, or smaller incremental changes to the smoothed rainfall amounts.
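
    For orientation, the conventional recursive estimator that these extended schemes build on can be sketched as an exponential smoother of the gauge/radar ratio; the CB-penalised variants modify the underlying cost function and are not reproduced here. The smoothing weight and hourly sample pairs are illustrative assumptions.

    ```python
    # Simplified recursive (exponentially smoothed) mean-field-bias update.
    def update_mfb(prev_bias, gauge_sum, radar_sum, alpha=0.05):
        """One hourly update of a multiplicative mean field bias estimate."""
        if radar_sum <= 0.0:            # no radar rain: carry the previous estimate forward
            return prev_bias
        sample_bias = gauge_sum / radar_sum
        return (1.0 - alpha) * prev_bias + alpha * sample_bias

    bias = 1.0
    hourly_pairs = [(4.2, 5.0), (2.8, 3.5), (0.0, 0.0), (6.1, 5.4)]  # (gauge mm, radar mm)
    for gauge, radar in hourly_pairs:
        bias = update_mfb(bias, gauge, radar)
        print(round(bias, 3))
    ```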

  1. Basal Insulin Regimens for Adults with Type 1 Diabetes Mellitus: A Cost-Utility Analysis.

    PubMed

    Dawoud, Dalia; Fenu, Elisabetta; Higgins, Bernard; Wonderling, David; Amiel, Stephanie A

    2017-12-01

    To assess the cost-effectiveness of basal insulin regimens for adults with type 1 diabetes mellitus in England. A cost-utility analysis was conducted in accordance with the National Institute for Health and Care Excellence reference case. The UK National Health Service and personal and social services perspective was used and a 3.5% discount rate was applied for both costs and outcomes. Relative effectiveness estimates were based on a systematic review of published trials and a Bayesian network meta-analysis. The IMS CORE Diabetes Model was used, in which net monetary benefit (NMB) was calculated using a threshold of £20,000 per quality-adjusted life-year (QALY) gained. A wide range of sensitivity analyses were conducted. Insulin detemir (twice daily) [iDet (bid)] had the highest mean QALY gain (11.09 QALYs) and NMB (£181,456) per patient over the model time horizon. Compared with the lowest cost strategy (insulin neutral protamine Hagedorn once daily), it had an incremental cost-effectiveness ratio of £7844/QALY gained. Insulin glargine (od) [iGlarg (od)] and iDet (od) were ranked as second and third, with NMBs of £180,893 and £180,423, respectively. iDet (bid) remained the most cost-effective treatment in all the sensitivity analyses performed except when high doses were assumed (>30% increment compared with other regimens), where iGlarg (od) ranked first. iDet (bid) is the most cost-effective regimen, providing the highest QALY gain and NMB. iGlarg (od) and iDet (od) are possible options for those for whom the iDet (bid) regimen is not acceptable or does not achieve required glycemic control. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
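
    The net monetary benefit arithmetic behind these rankings is straightforward; the Python sketch below uses the reported QALY gain and NMB for iDet (bid) together with the £20,000/QALY threshold, so the lifetime cost shown is only an implied, back-calculated figure rather than a reported one.

        WTP = 20_000   # GBP per QALY gained, the threshold used in the analysis

        def nmb(qalys, cost, wtp=WTP):
            """Net monetary benefit: QALYs valued at the threshold, minus cost."""
            return qalys * wtp - cost

        # Reported figures for iDet (bid): 11.09 QALYs and an NMB of 181,456 GBP.
        qalys_idet_bid, nmb_idet_bid = 11.09, 181_456

        # Lifetime cost implied by those two figures (back-calculated, approximate).
        implied_cost = qalys_idet_bid * WTP - nmb_idet_bid
        print(f"Implied discounted lifetime cost of iDet (bid): {implied_cost:,.0f} GBP")
        print(f"Check: NMB = {nmb(qalys_idet_bid, implied_cost):,.0f} GBP")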

  2. A systematic review and meta-analysis of the diagnostic accuracy of point-of-care tests for the detection of hyperketonemia in dairy cows.

    PubMed

    Tatone, Elise H; Gordon, Jessica L; Hubbs, Jessie; LeBlanc, Stephen J; DeVries, Trevor J; Duffield, Todd F

    2016-08-01

    Several rapid tests for use on farm have been validated for the detection of hyperketonemia (HK) in dairy cattle; however, the reported sensitivity and specificity of each method vary, and no single study has compared them all. Meta-analysis of diagnostic test accuracy is becoming more common in the human medical literature, but there are few veterinary examples. The objective of this work was to perform a systematic review and meta-analysis to determine the point-of-care testing method with the highest combined sensitivity and specificity, the optimal threshold for each method, and to identify gaps in the literature. A comprehensive literature search resulted in 5196 references. After removing duplicates and performing relevance screening, 23 studies were included for the qualitative synthesis and 18 for the meta-analysis. The three index tests evaluated in the meta-analysis were: the Precision Xtra® handheld device measuring beta-hydroxybutyrate (BHB) concentration in whole blood, and Ketostix® and KetoTest® semi-quantitative strips measuring the concentration of acetoacetate in urine and BHB in milk, respectively. The diagnostic accuracy of the 3 index tests relative to the reference standard measurement of BHB in serum or whole blood between 1.0 and 1.4 mmol/L was compared using the hierarchical summary receiver operator characteristic (HSROC) method. Subgroup analysis was conducted for each index test to examine the accuracy at different thresholds. The impact of the reference standard threshold, the reference standard method, the prevalence of HK in the population, the primary study source, and the risk of bias of the primary study were explored using meta-regression. The Precision Xtra® device had the highest summary sensitivity for whole-blood BHB at 1.2 mmol/L, 94.8% (95% CI: 92.6-97.0), and specificity, 97.5% (95% CI: 96.9-98.1). The threshold employed (1.2-1.4 mmol/L) did not impact the diagnostic accuracy of the test. The Ketostix® and KetoTest® strips had the highest summary sensitivity and specificity when the trace and weak positive thresholds were used, respectively. Controlling for the source of publication, HK prevalence and reference standard employed did not impact the estimated sensitivity and specificity of the tests. Including only peer-reviewed studies reduced the number of primary studies evaluating the Precision Xtra® by 43% and Ketostix® by 33%. Diagnosing HK with blood, urine, or milk is a valid option; however, the lower diagnostic accuracy of the urine and milk tests should be considered when making economic and treatment decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
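
    For readers unfamiliar with how threshold-based sensitivity and specificity are computed, the Python sketch below classifies simulated cows against the 1.2 mmol/L blood BHB cut-off and compares the result with a reference-standard classification; the data are placeholders, not values from the meta-analysis.

        import numpy as np

        rng = np.random.default_rng(1)
        serum_bhb = rng.lognormal(mean=-0.3, sigma=0.6, size=500)   # reference standard (mmol/L)
        poc_bhb = serum_bhb * rng.normal(1.0, 0.1, size=500)        # device reading with noise

        REFERENCE_CUTOFF = 1.2   # reference-standard definition of hyperketonemia (mmol/L)
        INDEX_CUTOFF = 1.2       # point-of-care decision threshold (mmol/L)

        truth = serum_bhb >= REFERENCE_CUTOFF
        test_pos = poc_bhb >= INDEX_CUTOFF

        sensitivity = (test_pos & truth).sum() / truth.sum()
        specificity = (~test_pos & ~truth).sum() / (~truth).sum()
        print(f"Sensitivity: {sensitivity:.1%}, Specificity: {specificity:.1%}")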

  3. Effects of a cost-effective surgical workflow on cosmesis and patient's satisfaction in open thyroid surgery.

    PubMed

    Billmann, Franck; Bokor-Billmann, Therezia; Voigt, Joachim; Kiffner, Erhard

    2013-01-01

    In thyroid surgery, minimally invasive procedures are thought to improve cosmesis and patient satisfaction. However, studies using standardized tools are scarce, and results are controversial. Moreover, minimally invasive techniques raise the question of material costs in a context of health spending cuts. The aim of the present study is to test a cost-effective surgical workflow to improve cosmesis in conventional open thyroid surgery. Our study ran between January 2009 and November 2010, and was based on a prospectively maintained thyroid surgery register. Patients operated for benign thyroid diseases were included. Since January 2010, a standardized surgical workflow was used in addition to the reference open procedure to improve the outcome. Two groups were created: (1) G1 group (patients operated with the reference technique), (2) G2 group (patients operated with our workflow in addition to the reference technique). Patients were investigated for postoperative outcomes, self-evaluated body image, and cosmetic and self-confidence scores. 820 patients were included in the present study. The overall body image and cosmetic scores were significantly better in the G2 group (P < 0.05). No significant difference was noted in terms of surgical outcomes, scar length, and self-confidence. Our surgical workflow in conjunction with the reference technique is safe and shows significantly better results in terms of body image and cosmesis than does the reference technique alone. Thus, we recommend its implementation in order to improve outcomes in a cost-effective way. The limitations of the present study should be kept in mind in the elaboration of future studies. Copyright © 2012 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  4. Maternal and child psychological outcomes of HIV disclosure to young children in rural South Africa: the Amagugu intervention.

    PubMed

    Rochat, Tamsen J; Arteche, Adriane X; Stein, Alan; Mitchell, Joanie; Bland, Ruth M

    2015-06-01

    Increasingly, HIV-infected parents are surviving to nurture their children. Parental HIV disclosure is beneficial, but disclosure rates to younger children remain low. Previously, we demonstrated that the 'Amagugu' intervention increased disclosure to young children; however, effects on psychological outcomes have not been examined in detail. This study investigates the impact of the intervention on maternal and child psychological outcomes. This pre-post evaluation design enrolled 281 HIV-infected women and their HIV-uninfected children (6-10 years) at the Africa Centre for Health and Population Studies, in rural South Africa. The intervention included six home-based counselling sessions delivered by lay counsellors. Psychological outcomes included maternal psychological functioning (General Health Questionnaire, GHQ12, using 0,1,2,3 scoring); parenting stress (Parenting Stress Index, PSI36); and child emotional and behavioural functioning (Child Behaviour Checklist, CBCL). The proportions of mothers with psychological distress decreased after the intervention: GHQ threshold at least 12 (from 41.3 to 24.9%, P < 0.001) and GHQ threshold at least 20 (from 17.8 to 11.7%, P = 0.040). Parenting stress scores also decreased (Pre M = 79.8; Post M = 76.2, P < 0.001): two subscales, parental distress and parent-child relationship, showed significant improvement, while mothers' perception of 'child as difficult' was not significantly improved. Reductions in scores were not moderated by disclosure level (full/partial). There was a significant reduction in child emotional and behavioural problems (CBCL Pre M = 56.1; Post M = 48.9, P < 0.001). Amagugu led to improvements in mothers' and children's mental health and to reduced parenting stress, irrespective of disclosure level, suggesting general nonspecific positive effects on family relationships. Findings require validation in a randomized controlled trial.
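
    As a small illustration of the GHQ-12 scoring and cut-offs referred to above, the Python sketch below sums twelve items scored 0-3 and flags the two thresholds used in the study; the item responses are invented.

        def ghq12_score(item_responses):
            """Sum of 12 items, each scored 0-3 on the Likert (0,1,2,3) scale."""
            assert len(item_responses) == 12
            assert all(0 <= r <= 3 for r in item_responses)
            return sum(item_responses)

        responses = [1, 2, 0, 3, 1, 1, 2, 0, 1, 2, 1, 0]   # hypothetical participant
        total = ghq12_score(responses)
        print(f"GHQ-12 total: {total}; "
              f"above cut-off 12: {total >= 12}; above cut-off 20: {total >= 20}")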

  5. Discrimination of enclosed images by weighted storage in an optical associative memory

    NASA Astrophysics Data System (ADS)

    Duelli, M.; Cudney, R. S.; Günter, P.

    1996-02-01

    We present an all-optical associative memory that can distinguish objects that are enclosed by or strongly overlap other objects. This is done by appropriately weighting the exposure of the stored images during recording. The images to be recalled associatively are stored in a photorefractive LiNbO3 crystal via angular multiplexing. Thresholding of the reconstructed reference beams during associative readout is achieved by using a saturable absorber with an intensity-tunable threshold.

  6. Preservation of motor maps with increased motor evoked potential amplitude threshold in RMT determination.

    PubMed

    Lucente, Giuseppe; Lam, Steven; Schneider, Heike; Picht, Thomas

    2018-02-01

    Non-invasive pre-surgical mapping of eloquent brain areas with navigated transcranial magnetic stimulation (nTMS) is a useful technique linked to the improvement of surgical planning and patient outcomes. The stimulator output intensity and subsequent resting motor threshold (rMT) determination are based on the motor-evoked potential (MEP) elicited in the target muscle with an amplitude above a predetermined threshold of 50 μV. However, a subset of patients is unable to achieve complete relaxation in the target muscles, resulting in false positives that jeopardize mapping validity with conventional MEP determination protocols. Our aim is to explore the feasibility and reproducibility of a novel mapping approach that investigates how an increase of the MEP amplitude threshold to 300 and 500 μV affects subsequent motor maps. Seven healthy subjects underwent motor mapping with nTMS. rMT was calculated with the conventional methodology in conjunction with experimental 300- and 500-μV MEP amplitude thresholds. Motor mapping was performed at 105% of rMT stimulator intensity using the first dorsal interosseous (FDI) as the target muscle. Motor mapping was possible in all subjects with both the conventional and experimental setups. Motor area maps with a conventional 50-μV threshold showed poor correlation with 300-μV maps (α = 0.446, p < 0.001), but excellent consistency with 500-μV motor area maps (α = 0.974, p < 0.001). MEP latencies were significantly less variable (23 ms for 50 μV vs. 23.7 ms for 300 μV vs. 23.7 ms for 500 μV, p < 0.001). A slight but significant increase of the electric field (EF) value was found (EF: 60.8 V/m vs. 64.8 V/m vs. 66 V/m, p < 0.001). Our study demonstrates the feasibility of increasing the MEP detection threshold to 500 μV in rMT determination and motor area mapping with nTMS without losing precision.
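
    A minimal Python sketch of a relative-frequency rMT search with a configurable MEP amplitude criterion (50, 300, or 500 μV) is given below; the 10-trial, half-of-trials rule and the trial data are illustrative assumptions, not the authors' exact protocol.

        def resting_motor_threshold(trials_by_intensity, amplitude_threshold_uv):
            """Lowest intensity at which at least half of the trials exceed the MEP criterion.

            trials_by_intensity: {stimulator intensity (% max output): [MEP amplitudes in uV]}
            """
            for intensity in sorted(trials_by_intensity):
                amplitudes = trials_by_intensity[intensity]
                hits = sum(a >= amplitude_threshold_uv for a in amplitudes)
                if hits >= len(amplitudes) / 2:
                    return intensity
            return None   # criterion not reached within the tested range

        # Hypothetical MEP amplitudes (uV), 10 trials per tested intensity.
        trials = {
            34: [20, 45, 30, 60, 10, 25, 40, 15, 55, 35],
            36: [70, 40, 120, 90, 30, 60, 150, 45, 80, 100],
            38: [320, 150, 400, 520, 280, 610, 350, 90, 450, 500],
            40: [650, 700, 480, 900, 520, 300, 750, 610, 400, 820],
        }
        for amp in (50, 300, 500):
            print(f"rMT with {amp} uV criterion: {resting_motor_threshold(trials, amp)} %")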

  7. A Financial Market Model Incorporating Herd Behaviour.

    PubMed

    Wray, Christopher M; Bishop, Steven R

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market price of an equity index option.
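
    The sketch below is a deliberately simplified Python rendering of a stochastic pulse-coupled threshold cascade (a single information threshold and all-to-all coupling with probability p); it is not the authors' model, but it illustrates how cascade sizes arise from threshold crossings and a coupling probability.

        import numpy as np

        rng = np.random.default_rng(2)

        def cascade_size(states, threshold, p):
            """Inject one external information pulse and return the number of trades it triggers.

            states persist between external pulses, so agents accumulate information over time.
            """
            n = len(states)
            queue = [int(rng.integers(n))]     # the agent hit by the external pulse
            fired = 0
            while queue:
                i = queue.pop()
                states[i] += 1
                if states[i] >= threshold:     # agent trades and resets its information state
                    states[i] = 0
                    fired += 1
                    coupled = np.nonzero(rng.random(n) < p)[0]
                    queue.extend(int(j) for j in coupled if j != i)
            return fired

        n_agents, threshold, p = 200, 3, 0.012   # assumed parameters for a subcritical regime
        states = rng.integers(0, threshold, size=n_agents)
        sizes = [cascade_size(states, threshold, p) for _ in range(5000)]
        print("mean cascade size:", np.mean(sizes), "max:", max(sizes))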

  8. The Key Events Dose-Response Framework: a cross-disciplinary mode-of-action based approach to examining dose-response and thresholds.

    PubMed

    Julien, Elizabeth; Boobis, Alan R; Olin, Stephen S

    2009-09-01

    The ILSI Research Foundation convened a cross-disciplinary working group to examine current approaches for assessing dose-response and identifying safe levels of intake or exposure for four categories of bioactive agents: food allergens, nutrients, pathogenic microorganisms, and environmental chemicals. This effort generated a common analytical framework, the Key Events Dose-Response Framework (KEDRF), for systematically examining key events that occur between the initial dose of a bioactive agent and the effect of concern. Individual key events are considered with regard to factors that influence the dose-response relationship and factors that underlie variability in that relationship. This approach illuminates the connection between the processes occurring at the level of fundamental biology and the outcomes observed at the individual and population levels. Thus, it promotes an evidence-based approach for using mechanistic data to reduce reliance on default assumptions, to quantify variability, and to better characterize biological thresholds. This paper provides an overview of the KEDRF and introduces a series of four companion papers that illustrate initial application of the approach to a range of bioactive agents.

  9. A modular case-mix classification system for medical rehabilitation illustrated.

    PubMed

    Stineman, M G; Granger, C V

    1997-01-01

    The authors present a modular set of patient classification systems designed for medical rehabilitation that predict resource use and outcomes for clinically similar groups of individuals. The systems, based on the Functional Independence Measure, are referred to as Function-Related Groups (FIM-FRGs). Using data from 23,637 lower extremity fracture patients from 458 inpatient medical rehabilitation facilities, 1995 benchmarks are provided and illustrated for length of stay, functional outcome, and discharge to home and skilled nursing facilities (SNFs). The FIM-FRG modules may be used in parallel to study interactions between resource use and quality and could ultimately yield an integrated strategy for payment and outcomes measurement. This could position the rehabilitation community to take a pioneering role in the application of outcomes-based clinical indicators.

  10. Physiology, intervention, and outcome: three critical questions about cerebral tissue oxygen saturation monitoring.

    PubMed

    Meng, Lingzhong; Gruenbaum, Shaun E; Dai, Feng; Wang, Tianlong

    2018-05-01

    The balance between cerebral tissue oxygen consumption and supply can be continuously assessed by a cerebral tissue oxygen saturation (SctO2) monitor. A construct consisting of three sequential questions, targeting the physiology monitored, the intervention implemented, and the outcomes affected, is proposed to critically appraise this monitor. The impact of SctO2-guided care on patient outcome was examined through a systematic literature search and meta-analysis. We concluded that the physiology monitored by SctO2 is robust and dynamic, fragile (prone to derangement), and adversely consequential when deranged. The inter-individual variability of SctO2 measurement advocates for an intervention threshold based on a relative, not absolute, change. The intra-individual variability has multiple determinants, which is the foundation of intervention. A variety of therapeutic options are available; however, none are 100% efficacious in treating cerebral dys-oxygenation. The therapeutic efficacy likely depends on both an appropriate differential diagnosis and the functional status of the regulatory mechanisms of cerebral blood flow. Meta-analysis based on five randomized controlled trials suggested a reduced incidence of early postoperative cognitive decline after major surgeries (RR = 0.53; 95% CI: 0.33-0.87; I² = 82%; P = 0.01). However, its effects on other neurocognitive outcomes remain unclear. These results need to be interpreted with caution due to the high risks of bias. Quality RCTs based on improved intervention protocols and standardized outcome assessment are warranted in the future.
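
    Reflecting the point that intervention thresholds for SctO2 are better expressed as relative rather than absolute changes, the Python sketch below flags samples that fall more than an assumed 20% below an individual baseline; the criterion and the trace are illustrative only.

        def desaturation_events(scto2_trace, baseline, relative_drop=0.20):
            """Return indices where SctO2 falls below (1 - relative_drop) * baseline."""
            limit = (1.0 - relative_drop) * baseline
            return [i for i, v in enumerate(scto2_trace) if v < limit]

        baseline = 68.0                                  # pre-induction SctO2 (%), assumed
        trace = [67, 66, 63, 58, 52, 51, 55, 60, 64]     # intraoperative readings (%), assumed
        print("samples below the relative threshold:", desaturation_events(trace, baseline))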

  11. A Randomized Controlled Trial of Brief and Ultrabrief Pulse Right Unilateral Electroconvulsive Therapy

    PubMed Central

    Katalinic, Natalie; Smith, Deirdre J.; Ingram, Anna; Dowling, Nathan; Martin, Donel; Addison, Kerryn; Hadzi-Pavlovic, Dusan; Simpson, Brett; Schweitzer, Isaac

    2015-01-01

    Background: Some studies suggest better overall outcomes when right unilateral electroconvulsive therapy (RUL ECT) is given with an ultrabrief, rather than brief, pulse width. Methods: The aim of the study was to test if ultrabrief-pulse RUL ECT results in fewer cognitive side effects than brief-pulse RUL ECT, when given at doses which achieve comparable efficacy. One hundred and two participants were assigned to receive ultrabrief (at 8 times seizure threshold) or brief (at 5 times seizure threshold) pulse RUL ECT in a double-blind, randomized controlled trial. Blinded raters assessed mood and cognitive functioning over the ECT course. Results: Efficacy outcomes were not found to be significantly different. The ultrabrief group showed less cognitive impairment immediately after a single session of ECT, and over the treatment course (autobiographical memory, orientation). Conclusions: In summary, when ultrabrief RUL ECT was given at a higher dosage than brief RUL ECT (8 versus 5 times seizure threshold), efficacy was comparable while cognitive impairment was less. PMID:25522389

  12. Required Area for a Crew Person in a Space Vehicle

    NASA Technical Reports Server (NTRS)

    Mount, Frances E.

    1998-01-01

    This 176-page report was written circa 1966 to examine the effects of confinement during space flight. One of the topics covered was the required size of a space vehicle for extended missions. Analysis was done using size of crew and length of time in a confined space. The report was based on all information available at that time. The data collected and analyzed included both NASA and (when possible) Russian missions flown to date, analogs (such as submarines), and ground studies. Both psychological and physiological responses to confinement were examined. Factors evaluated in estimating the degree of impairment included the level of performance of intellectual, perceptual, manual and co-ordinated tasks, response to psychological testing, subjective comments of the participants, the nature and extent of physiological change, the nature and extent of behavioral change, and the nature and extent of somatic complaints. Information was not included from studies where elements of perceptual isolation were more than mildly incidental: water immersion studies, studies in darkened and acoustically insulated rooms, and studies with distorted environmental inputs (unpatterned light and white noise). Using the graph from the document, the upper line provides a threshold of minimum acceptable volume; all points above the line may be considered acceptable. The lower line provides a threshold of unacceptable volume; all points below the line are unacceptable. The area in between the two lines is the area of doubtful acceptability, where impairment tends to increase with reduction in volume and increased duration of exposure. Reference is made to the Gemini VII 14-day mission, which had detectable impairment with 40 cubic feet per man for 14 days. In line with all other data, this point should be in the 'marked impairment' zone. It is assumed that the state of fitness, dedication, and experience influenced this outcome.

  13. Common and Specific Factors Approaches to Home-Based Treatment: I-FAST and MST

    ERIC Educational Resources Information Center

    Lee, Mo Yee; Greene, Gilbert J.; Fraser, J. Scott; Edwards, Shivani G.; Grove, David; Solovey, Andrew D.; Scott, Pamela

    2013-01-01

    Objectives: This study examined the treatment outcomes of Integrated Families and Systems Treatment (I-FAST), a moderated common factors approach, in reference to multisystemic therapy (MST), an established specific factor approach, for treating at-risk children and adolescents and their families in an intensive community-based setting. Method:…

  14. Measuring Changes in Tactile Sensitivity in the Hind Paw of Mice Using an Electronic von Frey Apparatus

    PubMed Central

    Martinov, Tijana; Mack, Madison; Sykes, Akilah; Chatterjea, Devavani

    2013-01-01

    Measuring inflammation-induced changes in thresholds of hind paw withdrawal from mechanical pressure is a useful technique to assess changes in pain perception in rodents. Withdrawal thresholds can be measured first at baseline and then following drug, venom, injury, allergen, or otherwise evoked inflammation by applying an accurate force on very specific areas of the skin. In contrast to classical von Frey measurements, an electronic von Frey apparatus allows precise assessment of mouse hind paw withdrawal thresholds without being limited by the available filament sizes. The ease and rapidity of measurements allow for incorporation of tactile sensitivity outcomes in diverse models of rapid-onset inflammatory and neuropathic pain, as multiple measurements can be taken within a short time period. Experimental measurements for individual rodent subjects can be internally controlled against individual baseline responses, and exclusion criteria can easily be established to standardize baseline responses within and across experimental groups. Thus, measurements using an electronic von Frey apparatus represent a useful modification of the well-established classical von Frey filament-based assays for rodent mechanical allodynia that may also be applied to other nonhuman mammalian models. PMID:24378519

  15. Melt layer erosion of pure and lanthanum doped tungsten under VDE-like high heat flux loads

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Greuner, H.; Böswirth, B.; Luo, G.-N.; Fu, B. Q.; Xu, H. Y.; Liu, W.

    2013-07-01

    Heat loads expected for vertical displacement events (VDEs) in ITER were applied in the neutral beam facility GLADIS at IPP Garching. Several ~3 mm thick rolled pure W and W-1 wt% La2O3 plates were exposed to pulsed hydrogen beams with a central heat flux of 23 MW/m² for 1.5-1.8 s. The melting thresholds are determined, and melt layer motion as well as material structure evolutions are shown. The melting thresholds of the two W grades are very close in this experimental setup. Many large bubbles with diameters from several μm to several tens of μm were observed in the re-solidified layer of W, and they spread deeper with increasing heat flux. However, for W-1 wt% La2O3, no large bubbles were found in the corrugated melt layer. The underlying mechanisms behind the melt layer motion and bubble formation are tentatively discussed based on a comparison of the erosion characteristics of the two W grades.

  16. Exploration of Use of Copulas in Analysing the Relationship between Precipitation and Meteorological Drought in Beijing, China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Linlin; Wang, Hongrui; Wang, Cheng

    Drought risk analysis is essential for regional water resource management. In this study, the probabilistic relationship between precipitation and meteorological drought in Beijing, China, was calculated under three different precipitation conditions (precipitation equal to, greater than, or less than a threshold) based on copulas. The Standardized Precipitation Evapotranspiration Index (SPEI) was calculated based on monthly total precipitation and monthly mean temperature data. The trends and variations in the SPEI were analysed using Hilbert-Huang Transform (HHT) and Mann-Kendall (MK) trend tests with a running approach. The results of the HHT and MK test indicated a significant decreasing trend in the SPEI. The copula-based conditional probability indicated that the probability of meteorological drought decreased as monthly precipitation increased and that 10 mm can be regarded as the threshold for triggering extreme drought. From a quantitative perspective, when R ≤ mm, the probabilities of moderate drought, severe drought, and extreme drought were 22.1%, 18%, and 13.6%, respectively. This conditional probability distribution not only revealed the occurrence of meteorological drought in Beijing but also provided a quantitative way to analyse the probability of drought under different precipitation conditions. Furthermore, the results provide a useful reference for future drought prediction.
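
    To make the copula-based conditional probability concrete, the Python sketch below evaluates P(drought | precipitation below a cut-off) under a Gaussian copula; the copula family, marginal probabilities, and correlation are assumptions for illustration, not the fitted model from the study.

        import numpy as np
        from scipy.stats import multivariate_normal, norm

        def conditional_drought_prob(u_precip, v_spei, rho):
            """P(SPEI <= drought cut-off | R <= precipitation cut-off) under a Gaussian copula.

            u_precip : marginal probability F_R(r0), e.g. P(monthly precipitation <= 10 mm)
            v_spei   : marginal probability F_SPEI(s0), e.g. P(SPEI <= extreme-drought cut-off)
            rho      : Gaussian copula correlation between the transformed variables (assumed)
            """
            z = [norm.ppf(u_precip), norm.ppf(v_spei)]
            joint = multivariate_normal.cdf(z, mean=[0.0, 0.0],
                                            cov=[[1.0, rho], [rho, 1.0]])
            return joint / u_precip   # P(A and B) / P(A)

        # Hypothetical marginal probabilities and dependence strength.
        print(conditional_drought_prob(u_precip=0.25, v_spei=0.05, rho=0.6))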

  17. The Impact of Community Based Health Insurance in Enhancing Better Accessibility and Lowering the Chance of Having Financial Catastrophe Due to Health Service Utilization: A Case Study of Savannakhet Province, Laos.

    PubMed

    Bodhisane, Somdeth; Pongpanich, Sathirakorn

    2017-07-01

    The Lao population mostly relies on out-of-pocket expenditures for health care services. This study aims to determine the role of community-based health insurance in making health care services accessible and in preventing financial catastrophe resulting from personal payment for inpatient services. A cross-sectional study design was applied. Data collection involved 126 insured and 126 uninsured households in identical study sites. Two logistic regression models were used to predict and compare the probability of hospitalization and financial catastrophe that occurred in both insured and uninsured households within the previous year. The findings show that insurance status does not significantly improve accessibility and financial protection against catastrophic expenditure. The reason is relatively simple, as catastrophic health expenditure refers to a total out-of-pocket payment equal to or more than 40% of household income minus subsistence. When household income declines as a result of inability to work due to illness, the 40% threshold is quickly reached. Despite this, results suggest that insured households are not significantly better off under community-based health insurance. However, compared to uninsured households, insured households do have better accessibility and a lower probability of reaching the financial catastrophe threshold.
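
    The catastrophic-expenditure rule described above is easy to express directly; the Python sketch below flags a household when out-of-pocket payments reach 40% of income net of subsistence, with invented figures showing how a fall in income brings the threshold closer.

        def is_catastrophic(out_of_pocket, income, subsistence, threshold=0.40):
            """True if out-of-pocket spending >= threshold * (income - subsistence)."""
            capacity_to_pay = max(income - subsistence, 0)
            if capacity_to_pay == 0:
                return out_of_pocket > 0
            return out_of_pocket >= threshold * capacity_to_pay

        # A household whose income falls because of illness reaches the threshold sooner.
        print(is_catastrophic(out_of_pocket=300, income=1200, subsistence=500))  # True
        print(is_catastrophic(out_of_pocket=300, income=2000, subsistence=500))  # False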

  18. Exploration of Use of Copulas in Analysing the Relationship between Precipitation and Meteorological Drought in Beijing, China

    DOE PAGES

    Fan, Linlin; Wang, Hongrui; Wang, Cheng; ...

    2017-05-16

    Drought risk analysis is essential for regional water resource management. In this study, the probabilistic relationship between precipitation and meteorological drought in Beijing, China, was calculated under three different precipitation conditions (precipitation equal to, greater than, or less than a threshold) based on copulas. The Standardized Precipitation Evapotranspiration Index (SPEI) was calculated based on monthly total precipitation and monthly mean temperature data. The trends and variations in the SPEI were analysed using Hilbert-Huang Transform (HHT) and Mann-Kendall (MK) trend tests with a running approach. The results of the HHT and MK test indicated a significant decreasing trend in the SPEI. The copula-based conditional probability indicated that the probability of meteorological drought decreased as monthly precipitation increased and that 10 mm can be regarded as the threshold for triggering extreme drought. From a quantitative perspective, when R ≤ mm, the probabilities of moderate drought, severe drought, and extreme drought were 22.1%, 18%, and 13.6%, respectively. This conditional probability distribution not only revealed the occurrence of meteorological drought in Beijing but also provided a quantitative way to analyse the probability of drought under different precipitation conditions. Furthermore, the results provide a useful reference for future drought prediction.

  19. Automatic detection of muscle activity from mechanomyogram signals: a comparison of amplitude and wavelet-based methods.

    PubMed

    Alves, Natasha; Chau, Tom

    2010-04-01

    Knowledge of muscle activity timing is critical to many clinical applications, such as the assessment of muscle coordination and the prescription of muscle-activated switches for individuals with disabilities. In this study, we introduce a continuous wavelet transform (CWT) algorithm for the detection of muscle activity via mechanomyogram (MMG) signals. CWT coefficients of the MMG signal were compared to scale-specific thresholds derived from the baseline signal to estimate the timing of muscle activity. Test signals were recorded from the flexor carpi radialis muscles of 15 able-bodied participants as they squeezed and released a hand dynamometer. Using the dynamometer signal as a reference, the proposed CWT detection algorithm was compared against a global-threshold CWT detector as well as amplitude-based event detection for sensitivity and specificity to voluntary contractions. The scale-specific CWT-based algorithm exhibited superior detection performance over the other detectors. CWT detection also showed good muscle selectivity during hand movement, particularly when a given muscle was the primary facilitator of the contraction. This may suggest that, during contraction, the compound MMG signal has a recurring morphological pattern that is not prevalent in the baseline signal. The ability of CWT analysis to be implemented in real time makes it a candidate for muscle-activity detection in clinical applications.
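
    A minimal Python sketch of scale-specific CWT thresholding in the spirit of the proposed detector is shown below; the wavelet, scales, and threshold multiplier are assumptions, and the synthetic signal stands in for a real MMG recording.

        import numpy as np
        import pywt

        FS = 1000                      # sampling rate (Hz), assumed
        SCALES = np.arange(4, 64)      # CWT scales, assumed
        K = 4.0                        # threshold = K * std of baseline coefficients, assumed

        def detect_activity(mmg, baseline, scales=SCALES, wavelet="morl", k=K):
            """Flag samples whose CWT coefficients exceed a scale-specific baseline threshold."""
            base_coeffs, _ = pywt.cwt(baseline, scales, wavelet)
            thresholds = k * np.std(base_coeffs, axis=1, keepdims=True)   # one per scale
            coeffs, _ = pywt.cwt(mmg, scales, wavelet)
            return np.any(np.abs(coeffs) > thresholds, axis=0)            # boolean per sample

        # Synthetic example: quiet baseline, then a burst standing in for muscle activity.
        rng = np.random.default_rng(3)
        baseline = 0.01 * rng.standard_normal(2 * FS)
        signal = 0.01 * rng.standard_normal(3 * FS)
        signal[FS:2 * FS] += 0.2 * np.sin(2 * np.pi * 30 * np.arange(FS) / FS)
        active = detect_activity(signal, baseline)
        print(f"fraction of samples flagged active: {active.mean():.2f}")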

  20. Assessing the Science Knowledge of University Students: Perils, Pitfalls and Possibilities

    ERIC Educational Resources Information Center

    Jones, Susan M.

    2014-01-01

    Science content knowledge is internationally regarded as a fundamentally important learning outcome for graduates of bachelor level science degrees: the Science Threshold Learning Outcomes (TLOs) recently adopted in Australia as a nationally agreed framework include "Science Knowledge" as TLO 2. Science knowledge is commonly assessed…
