2013-01-01
Background All rigorous primary cardiovascular disease (CVD) prevention guidelines recommend absolute CVD risk scores to identify high- and low-risk patients, but laboratory testing can be impractical in low- and middle-income countries. The purpose of this study was to compare the ranking performance of a simple, non-laboratory-based risk score to laboratory-based scores in various South African populations. Methods We calculated and compared 10-year CVD (or coronary heart disease (CHD)) risk for 14,772 adults from thirteen cross-sectional South African populations (data collected from 1987 to 2009). Risk characterization performance for the non-laboratory-based score was assessed by comparing rankings of risk with six laboratory-based scores (three versions of Framingham risk, SCORE for high- and low-risk countries, and CUORE) using Spearman rank correlation and the percent of the population equivalently characterized as ‘high’ or ‘low’ risk. Total 10-year non-laboratory-based risk of CVD death was also calculated for a representative cross-section from the 1998 South African Demographic Health Survey (DHS, n = 9,379) to estimate the national burden of CVD mortality risk. Results Spearman correlation coefficients for the non-laboratory-based score with the laboratory-based scores ranged from 0.88 to 0.986. Using conventional thresholds for CVD risk (10% to 20% 10-year CVD risk), 90% to 92% of men and 94% to 97% of women were equivalently characterized as ‘high’ or ‘low’ risk by the non-laboratory-based and Framingham (2008) CVD risk scores. These results were robust across the six risk scores evaluated and the thirteen cross-sectional datasets, with few exceptions (lower agreement between the non-laboratory-based and Framingham (1991) CHD risk scores). Approximately 18% of adults in the DHS population were characterized as ‘high CVD risk’ (10-year CVD death risk >20%) using the non-laboratory-based score. Conclusions We found a high level of correlation between a simple, non-laboratory-based CVD risk score and commonly used laboratory-based risk scores. The burden of CVD mortality risk was high for men and women in South Africa. The policy and clinical implication is that fast, low-cost screening tools can yield risk assessments similar to those from time- and resource-intensive approaches. Until setting-specific cohort studies can derive and validate country-specific risk scores, non-laboratory-based CVD risk assessment could be an effective and efficient primary CVD screening approach in South Africa. PMID:23880010
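As a rough illustration of the agreement analysis described in this abstract, the sketch below computes a Spearman rank correlation and the percentage of people equivalently classified at a 10% risk threshold. The two score vectors are synthetic stand-ins, not the South African data or the actual Framingham/SCORE/CUORE formulas.

```python
# Minimal sketch: compare rankings from a non-laboratory-based score and a
# laboratory-based score. Scores below are simulated placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
lab_risk = rng.uniform(0.01, 0.40, size=1000)                            # hypothetical lab-based 10-year risks
nonlab_risk = np.clip(lab_risk + rng.normal(0, 0.03, size=1000), 0, 1)   # correlated non-lab score

rho, _ = spearmanr(nonlab_risk, lab_risk)                                # Spearman rank correlation

threshold = 0.10                                                         # 10% 'high risk' cut-off
agreement = np.mean((nonlab_risk >= threshold) == (lab_risk >= threshold))

print(f"Spearman rho = {rho:.3f}; equivalently classified = {100 * agreement:.1f}%")
```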
A summary risk score for the prediction of Alzheimer disease in elderly persons.
Reitz, Christiane; Tang, Ming-Xin; Schupf, Nicole; Manly, Jennifer J; Mayeux, Richard; Luchsinger, José A
2010-07-01
To develop a simple summary risk score for the prediction of Alzheimer disease in elderly persons based on their vascular risk profiles. A longitudinal, community-based study set in New York, New York. Participants were 1,051 Medicare recipients aged 65 years or older and residing in New York who were free of dementia or cognitive impairment at baseline. We separately explored the associations of several vascular risk factors with late-onset Alzheimer disease (LOAD) using Cox proportional hazards models to identify factors that would contribute to the risk score. We then estimated the score value of each factor from its beta coefficient and created the LOAD vascular risk score by summing these individual scores. Risk factors contributing to the risk score were age, sex, education, ethnicity, APOE epsilon4 genotype, history of diabetes, hypertension or smoking, high-density lipoprotein levels, and waist-to-hip ratio. The resulting risk score predicted dementia well. Across the vascular risk score quintiles, the relative risk of developing probable LOAD was 1.0 (reference) for persons with a score of 0 to 14 and increased 3.7-fold for persons with a score of 15 to 18, 3.6-fold for persons with a score of 19 to 22, 12.6-fold for persons with a score of 23 to 28, and 20.5-fold for persons with a score higher than 28. While additional studies in other populations are needed to validate and further develop the score, our study suggests that this vascular risk score could be a valuable tool for identifying elderly individuals at risk of LOAD; it could also be used to adjust for confounders in epidemiologic studies.
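A minimal sketch of the points-construction step described above: each factor's beta coefficient is scaled and rounded to an integer score, and the summary score is the sum over factors present. The coefficients and factor names are hypothetical placeholders, not the published LOAD model.

```python
betas = {                       # hypothetical log hazard ratios, not the published LOAD coefficients
    "age_75_plus": 0.9,
    "apoe_e4": 0.8,
    "diabetes": 0.5,
    "hypertension": 0.4,
    "current_smoker": 0.3,
}
unit = min(betas.values())      # the smallest effect defines one point

points = {factor: round(beta / unit) for factor, beta in betas.items()}

def summary_score(profile):
    """Sum the integer points for the risk factors present in a person's profile."""
    return sum(points[f] for f, present in profile.items() if present)

person = {"age_75_plus": True, "apoe_e4": True, "diabetes": False,
          "hypertension": True, "current_smoker": False}
print(points, summary_score(person))
```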
Ueda, Peter; Woodward, Mark; Lu, Yuan; Hajifathalian, Kaveh; Al-Wotayan, Rihab; Aguilar-Salinas, Carlos A; Ahmadvand, Alireza; Azizi, Fereidoun; Bentham, James; Cifkova, Renata; Di Cesare, Mariachiara; Eriksen, Louise; Farzadfar, Farshad; Ferguson, Trevor S; Ikeda, Nayu; Khalili, Davood; Khang, Young-Ho; Lanska, Vera; León-Muñoz, Luz; Magliano, Dianna J; Margozzini, Paula; Msyamboza, Kelias P; Mutungi, Gerald; Oh, Kyungwon; Oum, Sophal; Rodríguez-Artalejo, Fernando; Rojas-Martinez, Rosalba; Valdivia, Gonzalo; Wilks, Rainford; Shaw, Jonathan E; Stevens, Gretchen A; Tolstrup, Janne S; Zhou, Bin; Salomon, Joshua A; Ezzati, Majid; Danaei, Goodarz
2017-03-01
Worldwide implementation of risk-based cardiovascular disease (CVD) prevention requires risk prediction tools that are contemporarily recalibrated for the target country and can be used where laboratory measurements are unavailable. We present two cardiovascular risk scores, with and without laboratory-based measurements, and the corresponding risk charts for 182 countries to predict 10-year risk of fatal and non-fatal CVD in adults aged 40-74 years. Based on our previous laboratory-based prediction model (Globorisk), we used data from eight prospective studies to estimate coefficients of the risk equations using proportional hazard regressions. The laboratory-based risk score included age, sex, smoking, blood pressure, diabetes, and total cholesterol; in the non-laboratory (office-based) risk score, we replaced diabetes and total cholesterol with BMI. We recalibrated risk scores for each sex and age group in each country using country-specific mean risk factor levels and CVD rates. We used recalibrated risk scores and data from national surveys (using data from adults aged 40-64 years) to estimate the proportion of the population at different levels of CVD risk for ten countries from different world regions as examples of the information the risk scores provide; we applied a risk threshold for high risk of at least 10% for high-income countries (HICs) and at least 20% for low-income and middle-income countries (LMICs) on the basis of national and international guidelines for CVD prevention. We estimated the proportion of men and women who were similarly categorised as high risk or low risk by the two risk scores. Predicted risks for the same risk factor profile were generally lower in HICs than in LMICs, with the highest risks in countries in central and southeast Asia and eastern Europe, including China and Russia. In HICs, the proportion of people aged 40-64 years at high risk of CVD ranged from 1% for South Korean women to 42% for Czech men (using a ≥10% risk threshold), and in low-income countries ranged from 2% in Uganda (men and women) to 13% in Iranian men (using a ≥20% risk threshold). More than 80% of adults were similarly classified as low or high risk by the laboratory-based and office-based risk scores. However, the office-based model substantially underestimated the risk among patients with diabetes. Our risk charts provide risk assessment tools that are recalibrated for each country and make the estimation of CVD risk possible without using laboratory-based measurements. National Institutes of Health. Copyright © 2017 Elsevier Ltd. All rights reserved.
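One common recalibration recipe consistent with this description, shown as a hedged sketch: keep the prognostic coefficients, centre the linear predictor on the target country's mean risk factor levels, and use a country-specific baseline survival implied by its CVD rates. All numbers below are illustrative placeholders, not the Globorisk coefficients.

```python
import math

coefs = {"age": 0.07, "sbp": 0.012, "smoker": 0.55, "bmi": 0.03}     # hypothetical betas, not Globorisk

# Country-specific inputs: mean risk factor levels and the 10-year baseline
# survival implied by that country's observed CVD rates (both placeholders).
country_means = {"age": 55.0, "sbp": 135.0, "smoker": 0.3, "bmi": 26.0}
country_baseline_survival_10y = 0.95

def risk_10y(person):
    """Recalibrated 10-year risk: 1 - S0_country ** exp(linear predictor centred on country means)."""
    lp = sum(coefs[k] * (person[k] - country_means[k]) for k in coefs)
    return 1.0 - country_baseline_survival_10y ** math.exp(lp)

print(round(risk_10y({"age": 62, "sbp": 150, "smoker": 1, "bmi": 29}), 3))
```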
Developing points-based risk-scoring systems in the presence of competing risks.
Austin, Peter C; Lee, Douglas S; D'Agostino, Ralph B; Fine, Jason P
2016-09-30
Predicting the occurrence of an adverse event over time is an important issue in clinical medicine. Clinical prediction models and associated points-based risk-scoring systems are popular statistical methods for summarizing the relationship between a multivariable set of patient risk factors and the risk of the occurrence of an adverse event. Points-based risk-scoring systems are popular amongst physicians as they permit a rapid assessment of patient risk without the use of computers or other electronic devices. The use of such points-based risk-scoring systems facilitates evidence-based clinical decision making. There is a growing interest in cause-specific mortality and in non-fatal outcomes. However, when considering these types of outcomes, one must account for competing risks whose occurrence precludes the occurrence of the event of interest. We describe how points-based risk-scoring systems can be developed in the presence of competing events. We illustrate the application of these methods by developing risk-scoring systems for predicting cardiovascular mortality in patients hospitalized with acute myocardial infarction. Code in the R statistical programming language is provided for the implementation of the described methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
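The authors supply R code; as a rough Python analogue of the competing-risks ingredient, the sketch below estimates the cumulative incidence of the event of interest with the Aalen-Johansen estimator from the lifelines package, using synthetic data. It illustrates the competing-risks bookkeeping only, not the full points-based scoring method of the paper.

```python
import numpy as np
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(1)
durations = rng.exponential(5.0, size=500)                       # follow-up time (years), synthetic
# event codes: 0 = censored, 1 = cardiovascular death, 2 = competing (non-CV) death
events = rng.choice([0, 1, 2], size=500, p=[0.6, 0.25, 0.15])

ajf = AalenJohansenFitter()
ajf.fit(durations, events, event_of_interest=1)                  # cumulative incidence of event 1

print(ajf.cumulative_density_.tail())
```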
Wilson, Sandra R; Fink, Arlene; Verghese, Shinu; Beck, John C; Nguyen, Khue; Lavori, Philip
2007-03-01
To evaluate a new alcohol-related risk score for research use. Using data from a previously reported trial of a screening and education system for older adults (Computerized Alcohol-Related Problems Survey), secondary analyses were conducted comparing the ability of two different measures of risk to detect post-intervention group differences: the original categorical outcome measure and a new, finely grained quantitative risk score based on the same research-based risk factors. The setting was three primary care group practices in southern California, and the participants were 665 patients aged 65 and older. The measures were a previously calculated, three-level categorical classification of alcohol-related risk and a newly developed quantitative risk score. Mean post-intervention risk scores differed between the three experimental conditions: usual care, patient report, and combined report (P<.001). The difference between the combined report and usual care was significant (P<.001) and directly proportional to baseline risk. The three-level risk classification failed to detect approximately 57.3% of the intervention effect that the risk score detected. The risk score was also sufficiently sensitive to detect the intervention effect within the subset of hypertensive patients (n=112; P=.001). As an outcome measure in intervention trials, the finely grained risk score is more sensitive than the trinary risk classification. The additional clinical value of the risk score relative to the categorical measure needs to be determined.
Krabbe, Christine Emma Maria; Schipf, Sabine; Ittermann, Till; Dörr, Marcus; Nauck, Matthias; Chenot, Jean-François; Markus, Marcello Ricardo Paulista; Völzke, Henry
2017-11-01
To compare the performance of diabetes risk scores and glycated hemoglobin (HbA1c) in estimating the risk of incident type 2 diabetes mellitus (T2DM) in Northeast Germany. We studied 2916 subjects (20 to 81 years) from the Study of Health in Pomerania (SHIP) over a 5-year follow-up period. Diabetes risk scores included the Cooperative Health Research in the Region of Augsburg (KORA) base model, the Danish diabetes risk score and the Data from the Epidemiological Study on the Insulin Resistance Syndrome (D.E.S.I.R.) clinical risk score. We assessed the performance of each of the diabetes risk scores and of HbA1c for 5-year risk of T2DM using the area under the receiver-operating characteristic curve (AUC) and calibration plots. In SHIP, the incidence of T2DM was 5.4% (n=157) over the 5-year follow-up period. Diabetes risk scores and HbA1c achieved AUCs ranging from 0.76 for the D.E.S.I.R. clinical risk score to 0.82 for the KORA base model. For the diabetes risk scores, the discriminative ability was lower in the age group 55 to 74 years. For HbA1c, the discriminative ability also decreased in the group 55 to 74 years, while it was stable in the age group 30 to 64 years. All diabetes risk scores and HbA1c showed good prediction of the risk of T2DM in SHIP. Which model or biomarker should be used depends on the context of use, e.g. practicability, implementation of interventions and availability of measurements. Copyright © 2017 Elsevier Inc. All rights reserved.
Proposal for a new categorization of aseptic processing facilities based on risk assessment scores.
Katayama, Hirohito; Toda, Atsushi; Tokunaga, Yuji; Katoh, Shigeo
2008-01-01
Risk assessment of aseptic processing facilities was performed using two published risk assessment tools. Calculated risk scores were compared with experimental test results, including environmental monitoring and media fill run results, in three different types of facilities. The two risk assessment tools gave generally similar outcomes. However, depending on the tool used, variations were observed in the relative scores between the facilities. For the facility yielding the lowest risk scores, the corresponding experimental test results showed no contamination, indicating that these ordinary testing methods are insufficient to evaluate this kind of facility. A conventional facility having acceptable aseptic processing lines gave relatively high risk scores. The facility showing a rather high risk score demonstrated the usefulness of conventional microbiological test methods. Considering the significant gaps observed in calculated risk scores and in the ordinary microbiological test results between advanced and conventional facilities, we propose a facility categorization based on risk assessment. The most important risk factor in aseptic processing is human intervention. When human intervention is eliminated from the process by advanced hardware design, the aseptic processing facility can be classified into a new risk category that is better suited for assuring sterility based on a new set of criteria rather than on currently used microbiological analysis. To fully benefit from advanced technologies, we propose three risk categories for these aseptic facilities.
Hamilton-Craig, Christian R; Chow, Clara K; Younger, John F; Jelinek, V M; Chan, Jonathan; Liew, Gary Yh
2017-10-16
Introduction This article summarises the Cardiac Society of Australia and New Zealand position statement on coronary artery calcium (CAC) scoring. CAC scoring is a non-invasive method for quantifying coronary artery calcification using computed tomography. It is a marker of atherosclerotic plaque burden and the strongest independent predictor of future myocardial infarction and mortality. CAC scoring provides incremental risk information beyond traditional risk calculators such as the Framingham Risk Score. Its use for risk stratification is confined to primary prevention of cardiovascular events, and can be considered as individualised coronary risk scoring for intermediate risk patients, allowing reclassification to low or high risk based on the score. Medical practitioners should carefully counsel patients before CAC testing, which should only be undertaken if an alteration in therapy, including embarking on pharmacotherapy, is being considered based on the test result. Main recommendations CAC scoring should primarily be performed on individuals without coronary disease aged 45-75 years (absolute 5-year cardiovascular risk of 10-15%) who are asymptomatic. CAC scoring is also reasonable in lower risk groups (absolute 5-year cardiovascular risk, < 10%) where risk scores traditionally underestimate risk (eg, family history of premature CVD) and in patients with diabetes aged 40-60 years. We recommend aspirin and a high efficacy statin in high risk patients, defined as those with a CAC score ≥ 400, or a CAC score of 100-399 and above the 75th percentile for age and sex. It is reasonable to treat patients with CAC scores ≥ 100 with aspirin and a statin. It is reasonable not to treat asymptomatic patients with a CAC score of zero. Changes in management as a result of this statement Cardiovascular risk is reclassified according to CAC score. High risk patients are treated with a high efficacy statin and aspirin. Very low risk patients (ie, CAC score of zero) do not benefit from treatment.
Epstein, Arnold M.; Orav, E. John; Filice, Clara E.; Samson, Lok Wong; Joynt Maddox, Karen E.
2017-01-01
Importance Medicare recently launched the Physician Value-Based Payment Modifier (PVBM) Program, a mandatory pay-for-performance program for physician practices. Little is known about performance by practices that serve socially or medically high-risk patients. Objective To compare performance in the PVBM Program by practice characteristics. Design, Setting, and Participants Cross-sectional observational study using PVBM Program data for payments made in 2015 based on performance of large US physician practices caring for fee-for-service Medicare beneficiaries in 2013. Exposures High social risk (defined as practices in the top quartile of proportion of patients dually eligible for Medicare and Medicaid) and high medical risk (defined as practices in the top quartile of mean Hierarchical Condition Category risk score among fee-for-service beneficiaries). Main Outcomes and Measures Quality and cost z scores based on a composite of individual measures. Higher z scores reflect better performance on quality; lower scores, better performance on costs. Results Among 899 physician practices with 5 189 880 beneficiaries, 547 practices were categorized as low risk (neither high social nor high medical risk) (mean, 7909 beneficiaries; mean, 320 clinicians), 128 were high medical risk only (mean, 3675 beneficiaries; mean, 370 clinicians), 102 were high social risk only (mean, 1635 beneficiaries; mean, 284 clinicians), and 122 were high medical and social risk (mean, 1858 beneficiaries; mean, 269 clinicians). Practices categorized as low risk performed the best on the composite quality score (z score, 0.18 [95% CI, 0.09 to 0.28]) compared with each of the practices categorized as high risk (high medical risk only: z score, −0.55 [95% CI, −0.77 to −0.32]; high social risk only: z score, −0.86 [95% CI, −1.17 to −0.54]; and high medical and social risk: −0.78 [95% CI, −1.04 to −0.51]) (P < .001 across groups). Practices categorized as high social risk only performed the best on the composite cost score (z score, −0.52 [95% CI, −0.71 to −0.33]), low risk had the next best cost score (z score, −0.18 [95% CI, −0.25 to −0.10]), then high medical and social risk (z score, 0.40 [95% CI, 0.23 to 0.57]), and then high medical risk only (z score, 0.82 [95% CI, 0.65 to 0.99]) (P < .001 across groups). Total per capita costs were $9506 for practices categorized as low risk, $13 683 for high medical risk only, $8214 for high social risk only, and $11 692 for high medical and social risk. These patterns were associated with fewer bonuses and more penalties for high-risk practices. Conclusions and Relevance During the first year of the Medicare Physician Value-Based Payment Modifier Program, physician practices that served more socially high-risk patients had lower quality and lower costs, and practices that served more medically high-risk patients had lower quality and higher costs. PMID:28763549
Hilkens, Nina A; Li, Linxin; Rothwell, Peter M; Algra, Ale; Greving, Jacoba P
2018-03-01
The S2TOP-BLEED score may help to identify patients at high risk of bleeding on antiplatelet drugs after a transient ischemic attack or ischemic stroke. The score was derived on trial populations, and its performance in a real-world setting is unknown. We aimed to externally validate the S2TOP-BLEED score for major bleeding in a population-based cohort and to compare its performance with other risk scores for bleeding. We studied risk of bleeding in 2072 patients with a transient ischemic attack or ischemic stroke on antiplatelet agents in the population-based OXVASC (Oxford Vascular Study) according to 3 scores: S2TOP-BLEED, REACH, and Intracranial-B2LEED3S. Performance was assessed with C statistics and calibration plots. During 8302 patient-years of follow-up, 117 patients had a major bleed. The S2TOP-BLEED score showed a C statistic of 0.69 (95% confidence interval [CI], 0.64-0.73) and accurate calibration for 3-year risk of major bleeding. The S2TOP-BLEED score was much more predictive of fatal bleeding than nonmajor bleeding (C statistics 0.77; 95% CI, 0.69-0.85 and 0.50; 95% CI, 0.44-0.58). The REACH score had a C statistic of 0.63 (95% CI, 0.58-0.69) for major bleeding and the Intracranial-B2LEED3S score a C statistic of 0.60 (95% CI, 0.51-0.70) for intracranial bleeding. The ratio of ischemic events versus bleeds decreased across risk groups of bleeding from 6.6:1 in the low-risk group to 1.8:1 in the high-risk group. The S2TOP-BLEED score shows modest performance in a population-based cohort of patients with a transient ischemic attack or ischemic stroke. Although bleeding risks were associated with risks of ischemic events, risk stratification may still be useful to identify a subgroup of patients at particularly high risk of bleeding, in whom preventive measures are indicated. © 2018 The Authors.
van Rosendael, Alexander R; Maliakal, Gabriel; Kolli, Kranthi K; Beecy, Ashley; Al'Aref, Subhi J; Dwivedi, Aeshita; Singh, Gurpreet; Panday, Mohit; Kumar, Amit; Ma, Xiaoyue; Achenbach, Stephan; Al-Mallah, Mouaz H; Andreini, Daniele; Bax, Jeroen J; Berman, Daniel S; Budoff, Matthew J; Cademartiri, Filippo; Callister, Tracy Q; Chang, Hyuk-Jae; Chinnaiyan, Kavitha; Chow, Benjamin J W; Cury, Ricardo C; DeLago, Augustin; Feuchtner, Gudrun; Hadamitzky, Martin; Hausleiter, Joerg; Kaufmann, Philipp A; Kim, Yong-Jin; Leipsic, Jonathon A; Maffei, Erica; Marques, Hugo; Pontone, Gianluca; Raff, Gilbert L; Rubinshtein, Ronen; Shaw, Leslee J; Villines, Todd C; Gransar, Heidi; Lu, Yao; Jones, Erica C; Peña, Jessica M; Lin, Fay Y; Min, James K
Machine learning (ML), a field of computer science, has been shown to integrate clinical and imaging data effectively for the creation of prognostic scores. The current study investigated whether a ML score, incorporating only the 16-segment coronary tree information derived from coronary computed tomography angiography (CCTA), provides enhanced risk stratification compared with current CCTA-based risk scores. From the multi-center CONFIRM registry, patients were included with complete CCTA risk score information and ≥3 year follow-up for myocardial infarction and death (primary endpoint). Patients with prior coronary artery disease were excluded. Conventional CCTA risk scores (conventional CCTA approach, segment involvement score, Duke prognostic index, segment stenosis score, and the Leaman risk score) and a score created using ML were compared for the area under the receiver operating characteristic curve (AUC). Only 16-segment-based coronary stenosis (0%, 1-24%, 25-49%, 50-69%, 70-99% and 100%) and composition (calcified, mixed and non-calcified plaque) were provided to the ML model. A boosted ensemble algorithm (extreme gradient boosting; XGBoost) was used and the entire data set was randomly split into a training set (80%) and a testing set (20%). First, tuned hyperparameters were used to generate a trained model from the training data set (80% of data). Second, the performance of this trained model was independently tested on the unseen test set (20% of data). In total, 8844 patients (mean age 58.0 ± 11.5 years, 57.7% male) were included. During a mean follow-up time of 4.6 ± 1.5 years, 609 events occurred (6.9%). No CAD was observed in 48.7% (3.5% event rate), non-obstructive CAD in 31.8% (6.8% event rate), and obstructive CAD in 19.5% (15.6% event rate). Discrimination of events as expressed by AUC was significantly better for the ML-based approach (0.771) vs the other scores (ranging from 0.685 to 0.701), P < 0.001. Net reclassification improvement analysis showed that the improved risk stratification was the result of down-classification of risk among patients who did not experience events (non-events). A risk score created by a ML-based algorithm, which utilizes standard 16-segment coronary stenosis and composition information derived from detailed CCTA reading, has greater prognostic accuracy than current CCTA-integrated risk scores. These findings indicate that a ML-based algorithm can improve the integration of CCTA-derived plaque information to improve risk stratification. Published by Elsevier Inc.
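A minimal sketch of the modelling pipeline described here (80/20 split, gradient-boosted trees, held-out AUC) using the xgboost and scikit-learn APIs. The feature matrix is a synthetic stand-in for the 16-segment stenosis and composition variables, and the hyperparameters are arbitrary, not the tuned values from the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
# Synthetic stand-in: 16 stenosis grades (0-5) plus 16 composition codes per patient.
X = rng.integers(0, 6, size=(2000, 32))
y = (X[:, :16].sum(axis=1) + rng.normal(0, 5, size=2000) > 40).astype(int)  # synthetic MI/death label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, eval_metric="logloss")
model.fit(X_train, y_train)                       # train on 80% of the data

print("held-out AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```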
Gabriel, Rafael; Brotons, Carlos; Tormo, M José; Segura, Antonio; Rigo, Fernando; Elosua, Roberto; Carbayo, Julio A; Gavrila, Diana; Moral, Irene; Tuomilehto, Jaakko; Muñiz, Javier
2015-03-01
In Spain, data from large population-based cohorts adequate to provide an accurate prediction of cardiovascular risk have been scarce. Thus, calibration of the EuroSCORE and Framingham scores has been proposed and performed for our population. The aim was to develop a native risk prediction score to accurately estimate individual cardiovascular risk in the Spanish population. Seven Spanish population-based cohorts including middle-aged and elderly participants were assembled. There were 11800 people (6387 women) representing 107915 person-years of follow-up. A total of 1214 cardiovascular events were identified, of which 633 were fatal. Cox regression analyses were conducted to examine the contributions of the different variables to the 10-year total cardiovascular risk. Age was the strongest cardiovascular risk factor. High systolic blood pressure, diabetes mellitus and smoking were strong predictive factors. The contribution of serum total cholesterol was small. Antihypertensive treatment also had a significant impact on cardiovascular risk, greater in men than in women. The model showed good discriminative power (C-statistic = 0.789 in men and 0.816 in women). Ten-year risk estimations are displayed graphically in risk charts separately for men and women. ERICE is a new native cardiovascular risk score for the Spanish population, derived from the background and contemporaneous risk of several Spanish cohorts. The ERICE score offers direct and reliable estimation of total cardiovascular risk, taking into consideration the effect of diabetes mellitus and cardiovascular risk factor management. The ERICE score is a practical and useful tool for clinicians to estimate the total individual cardiovascular risk in Spain. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
An efficient sampling strategy for selection of biobank samples using risk scores.
Björk, Jonas; Malmqvist, Ebba; Rylander, Lars; Rignell-Hydbom, Anna
2017-07-01
The aim of this study was to suggest a new sample-selection strategy based on risk scores in case-control studies with biobank data. An ongoing Swedish case-control study on fetal exposure to endocrine disruptors and overweight in early childhood was used as the empirical example. Cases were defined as children with a body mass index (BMI) ⩾18 kg/m² (n=545) at four years of age, and controls as children with a BMI of ⩽17 kg/m² (n=4472 available). The risk of being overweight was modelled using logistic regression based on available covariates from the health examination and prior to selecting samples from the biobank. A risk score was estimated for each child and categorised as low (0-5%), medium (6-13%) or high (⩾14%) risk of being overweight. The final risk-score model, with smoking during pregnancy (p=0.001), birth weight (p<0.001), BMI of both parents (p<0.001 for both), type of residence (p=0.04) and economic situation (p=0.12), yielded an area under the receiver operating characteristic curve of 67% (n=3945 with complete data). The case group (n=416) had the following risk-score profile: low (12%), medium (46%) and high risk (43%). Twice as many controls were selected from each risk group, with further matching on sex. Computer simulations showed that the proposed selection strategy with stratification on risk scores yielded consistent improvements in statistical precision. Using risk scores based on available survey or register data as a basis for sample selection may improve possibilities to study heterogeneity of exposure effects in biobank-based studies.
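A hedged sketch of the selection strategy: fit a logistic model on covariates available before biobank retrieval, bin predicted risk into strata, and draw twice as many controls as cases from each stratum. Column names, risk cut-offs and the handling of the 2:1 ratio are illustrative assumptions, and the sex matching used in the study is omitted here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "case": rng.choice([0, 1], size=5000, p=[0.9, 0.1]),          # overweight at age four (synthetic)
    "birth_weight": rng.normal(3500, 500, size=5000),
    "parental_bmi": rng.normal(26, 4, size=5000),
})

covariates = ["birth_weight", "parental_bmi"]
model = LogisticRegression().fit(df[covariates], df["case"])
df["risk"] = model.predict_proba(df[covariates])[:, 1]
df["stratum"] = pd.cut(df["risk"], bins=[0, 0.05, 0.13, 1.0], labels=["low", "medium", "high"])

controls = []
for stratum, case_group in df[df["case"] == 1].groupby("stratum", observed=True):
    pool = df[(df["case"] == 0) & (df["stratum"] == stratum)]
    # draw twice as many controls as cases from the same risk stratum (capped by pool size)
    controls.append(pool.sample(n=min(2 * len(case_group), len(pool)), random_state=0))

selected_controls = pd.concat(controls)
print(len(df[df["case"] == 1]), len(selected_controls))
```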
Carrillo-Larco, Rodrigo M; Miranda, J Jaime; Gilman, Robert H; Medina-Lezama, Josefina; Chirinos-Pacheco, Julio A; Muñoz-Retamozo, Paola V; Smeeth, Liam; Checkley, William; Bernabe-Ortiz, Antonio
2017-11-29
Chronic Kidney Disease (CKD) represents a great burden for the patient and the health system, particularly if diagnosed at late stages. Consequently, tools to identify patients at high risk of having CKD are needed, particularly in limited-resources settings where laboratory facilities are scarce. This study aimed to develop a risk score for prevalent undiagnosed CKD using data from four settings in Peru: a complete risk score including all associated risk factors and another excluding laboratory-based variables. Cross-sectional study. We used two population-based studies: one for development and internal validation (CRONICAS), and another (PREVENCION) for external validation. Risk factors included clinical- and laboratory-based variables, among others: sex, age, hypertension and obesity; and lipid profile, anemia and glucose metabolism. The outcome was undiagnosed CKD: eGFR < 60 ml/min/1.73 m². We tested the performance of the risk scores using the area under the receiver operating characteristic (ROC) curve, sensitivity, specificity, positive/negative predictive values and positive/negative likelihood ratios. Participants in both studies averaged 57.7 years old, and over 50% were females. Age, hypertension and anemia were strongly associated with undiagnosed CKD. In the external validation, at a cut-off point of 2, the complete and laboratory-free risk scores performed similarly well, with ROC areas of 76.2% and 76.0%, respectively (P = 0.784). The best assessment parameter of these risk scores was their negative predictive value: 99.1% and 99.0% for the complete and laboratory-free scores, respectively. The developed risk scores showed a moderate performance as a screening test. People with a score of ≥ 2 points should undergo further testing to rule out CKD. Using the laboratory-free risk score is a practical approach in developing countries where laboratories are not readily available and undiagnosed CKD has significant morbidity and mortality.
Chen, Hong-Lin; Cao, Ying-Juan; Wang, Jing; Huai, Bao-Sha
2015-09-01
The Braden Scale is the most widely used pressure ulcer risk assessment in the world, but the currently used 5 risk classification groups do not accurately discriminate among their risk categories. To optimize risk classification based on Braden Scale scores, a retrospective analysis of all consecutively admitted patients in an acute care facility who were at risk for pressure ulcer development was performed between January 2013 and December 2013. Predicted pressure ulcer incidence was first calculated with a logistic regression model based on the original Braden score. The risk classification was then modified based on the predicted pressure ulcer incidence, and observed incidence was compared between risk categories in the modified (3-group) classification and the traditional (5-group) classification using the chi-square test. Two thousand, six hundred, twenty-five (2,625) patients (mean age 59.8 ± 16.5, range 1 month to 98 years, 1,601 of whom were men) were included in the study; 81 patients (3.1%) developed a pressure ulcer. The predicted pressure ulcer incidence ranged from 0.1% to 49.7%. When the predicted pressure ulcer incidence was greater than 10.0% (high risk), the corresponding Braden scores were less than 11; when the predicted incidence ranged from 1.0% to 10.0% (moderate risk), the corresponding Braden scores ranged from 12 to 16; and when the predicted incidence was less than 1.0% (mild risk), the corresponding Braden scores were greater than 17. In the modified classification, observed pressure ulcer incidence was significantly different between each of the 3 risk categories (P <0.05). However, in the traditional classification, the observed incidence was not significantly different between the high-risk category and the moderate-risk category (P >0.05) or between the mild-risk category and the no-risk category (P >0.05). If future studies confirm the validity of these findings, pressure ulcer prevention protocols of care based on Braden Scale scores can be simplified.
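A small sketch of the regrouping idea: model pressure ulcer occurrence as a function of the Braden score with logistic regression, then re-bin scores by predicted incidence (<1%, 1-10%, >10%). The data are simulated, not the study cohort, and the cut-offs simply mirror the thresholds quoted in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
braden = rng.integers(6, 24, size=3000).reshape(-1, 1)              # Braden scores 6-23 (synthetic)
p_true = 1 / (1 + np.exp(0.5 * (braden.ravel() - 12)))              # risk rises as the score falls
ulcer = rng.binomial(1, p_true)

model = LogisticRegression().fit(braden, ulcer)

for score in range(6, 24):
    pred = model.predict_proba([[score]])[0, 1]                     # predicted pressure ulcer incidence
    group = "high" if pred > 0.10 else ("moderate" if pred >= 0.01 else "mild")
    print(score, round(pred, 3), group)
```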
Hannan, Edward L; Farrell, Louise Szypulski; Walford, Gary; Jacobs, Alice K; Berger, Peter B; Holmes, David R; Stamato, Nicholas J; Sharma, Samin; King, Spencer B
2013-06-01
This study sought to develop a percutaneous coronary intervention (PCI) risk score for in-hospital/30-day mortality. Risk scores are simplified linear scores that provide clinicians with quick estimates of patients' short-term mortality rates for informed consent and to determine the appropriate intervention. Earlier PCI risk scores were based on in-hospital mortality. However, for PCI, a substantial percentage of patients die within 30 days of the procedure after discharge. New York's Percutaneous Coronary Interventions Reporting System was used to develop an in-hospital/30-day logistic regression model for patients undergoing PCI in 2010, and this model was converted into a simple linear risk score that estimates mortality rates. The score was validated by applying it to 2009 New York PCI data. Subsequent analyses evaluated the ability of the score to predict complications and length of stay. A total of 54,223 patients were used to develop the risk score. There are 11 risk factors that make up the score, with risk factor scores ranging from 1 to 9, and the highest total score is 34. The score was validated based on patients undergoing PCI in the previous year, and accurately predicted mortality for all patients as well as patients who recently suffered a myocardial infarction (MI). The PCI risk score developed here enables clinicians to estimate in-hospital/30-day mortality very quickly and quite accurately. It accurately predicts mortality for patients undergoing PCI in the previous year and for MI patients, and is also moderately related to perioperative complications and length of stay. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
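A hedged sketch of how a fitted logistic model can be turned into a simple linear points score with a score-to-mortality lookup, which is the general idea behind risk scores like this one. The intercept, coefficients, factor names and points scale below are invented for illustration and are not the New York PCI model.

```python
import math

# Hypothetical fitted logistic model on the log-odds scale; not the published model.
intercept = -6.0
coefs = {"age_80_plus": 1.1, "cardiogenic_shock": 2.2, "renal_failure": 1.5, "ef_below_30": 0.9}

unit = 0.25                                          # log-odds represented by one point
points = {factor: round(beta / unit) for factor, beta in coefs.items()}
max_total = sum(points.values())

# Tabulate the mortality rate implied by each possible total score.
for total in range(max_total + 1):
    log_odds = intercept + total * unit              # map the points back to the model scale
    mortality = 1 / (1 + math.exp(-log_odds))
    print(total, f"{100 * mortality:.2f}%")
```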
Hannan, Edward L; Farrell, Louise Szypulski; Wechsler, Andrew; Jordan, Desmond; Lahey, Stephen J; Culliford, Alfred T; Gold, Jeffrey P; Higgins, Robert S D; Smith, Craig R
2013-01-01
Simplified risk scores for coronary artery bypass graft surgery are frequently used in lieu of more complicated statistical models and are valuable for informed consent and choice of intervention. Previous risk scores have been based on in-hospital mortality, but a substantial number of patients die within 30 days of the procedure. These deaths should also be accounted for, so we have developed a risk score based on in-hospital and 30-day mortality. New York's Cardiac Surgery Reporting System was used to develop an in-hospital and 30-day logistic regression model for patients undergoing coronary artery bypass graft surgery in 2009, and this model was converted into a simple linear risk score that provides estimated in-hospital and 30-day mortality rates for different values of the score. The accuracy of the risk score in predicting mortality was tested. This score was also validated by applying it to 2008 New York coronary artery bypass graft data. Subsequent analyses evaluated the ability of the risk score to predict complications and length of stay. The overall in-hospital and 30-day mortality rate for the 10,148 patients in the study was 1.79%. There are seven risk factors comprising the score, with risk factor scores ranging from 1 to 5, and the highest possible total score is 23. The score accurately predicted mortality in 2009 as well as in 2008, and was strongly correlated with complications and length of stay. The risk score is a simple way of estimating short-term mortality that accurately predicts mortality in the year the model was developed as well as in the previous year. Perioperative complications and length of stay are also well predicted by the risk score. Copyright © 2013 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Hijazi, Ziad; Oldgren, Jonas; Lindbäck, Johan; Alexander, John H; Connolly, Stuart J; Eikelboom, John W; Ezekowitz, Michael D; Held, Claes; Hylek, Elaine M; Lopes, Renato D; Yusuf, Salim; Granger, Christopher B; Siegbahn, Agneta; Wallentin, Lars
2018-01-01
Aims In atrial fibrillation (AF), mortality remains high despite effective anticoagulation. A model predicting the risk of death in these patients is currently not available. We developed and validated a risk score for death in anticoagulated patients with AF including both clinical information and biomarkers. Methods and results The new risk score was developed and internally validated in 14 611 patients with AF randomized to apixaban vs. warfarin for a median of 1.9 years. External validation was performed in 8548 patients with AF randomized to dabigatran vs. warfarin for 2.0 years. Biomarker samples were obtained at study entry. Variables significantly contributing to the prediction of all-cause mortality were assessed by Cox regression. Each variable obtained a weight proportional to the model coefficients. There were 1047 all-cause deaths in the derivation and 594 in the validation cohort. The most important predictors of death were N-terminal pro B-type natriuretic peptide, troponin-T, growth differentiation factor-15, age, and heart failure, and these were included in the ABC (Age, Biomarkers, Clinical history)-death risk score. The score was well calibrated and yielded higher c-indices than a model based on all clinical variables in both the derivation (0.74 vs. 0.68) and validation cohorts (0.74 vs. 0.67). The reduction in mortality with apixaban was most pronounced in patients with a high ABC-death score. Conclusion A new biomarker-based score for predicting risk of death in anticoagulated AF patients was developed, internally and externally validated, and well calibrated in two large cohorts. The ABC-death risk score performed well and may contribute to overall risk assessment in AF. ClinicalTrials.gov identifiers NCT00412984 and NCT00262600. PMID:29069359
White Matter Hyperintensities Improve Ischemic Stroke Recurrence Prediction.
Andersen, Søren Due; Larsen, Torben Bjerregaard; Gorst-Rasmussen, Anders; Yavarian, Yousef; Lip, Gregory Y H; Bach, Flemming W
2017-01-01
Nearly one in 5 patients with ischemic stroke will experience a second stroke within 5 years. Stroke risk stratification schemes based solely on clinical variables perform only modestly in non-atrial fibrillation (AF) patients and improvement of these schemes will enhance their clinical utility. Cerebral white matter hyperintensities are associated with an increased risk of incident ischemic stroke in the general population, whereas their association with the risk of ischemic stroke recurrence is more ambiguous. In a non-AF stroke cohort, we investigated the association between cerebral white matter hyperintensities and the risk of recurrent ischemic stroke, and we evaluated the predictive performance of the CHA2DS2VASc score and the Essen Stroke Risk Score (clinical scores) when augmented with information on white matter hyperintensities. In a registry-based, observational cohort study, we included 832 patients (mean age 59.6 (SD 13.9); 42.0% females) with incident ischemic stroke and no AF. We assessed the severity of white matter hyperintensities using MRI. Hazard ratios stratified by the white matter hyperintensities score and adjusted for the components of the CHA2DS2VASc score were calculated based on Cox proportional hazards analysis. Recalibrated clinical scores were calculated by adding one point to the score for the presence of moderate to severe white matter hyperintensities. The discriminatory performance of the scores was assessed with the C-statistic. White matter hyperintensities were significantly associated with the risk of recurrent ischemic stroke after adjusting for clinical risk factors. The hazard ratios ranged from 1.65 (95% CI 0.70-3.86) for mild changes to 5.28 (95% CI 1.98-14.07) for the most severe changes. C-statistics for the prediction of recurrent ischemic stroke were 0.59 (95% CI 0.51-0.65) for the CHA2DS2VASc score and 0.60 (95% CI 0.53-0.68) for the Essen Stroke Risk Score. The recalibrated clinical scores showed improved C-statistics: the recalibrated CHA2DS2VASc score 0.62 (95% CI 0.54-0.70; p = 0.024) and the recalibrated Essen Stroke Risk Score 0.63 (95% CI 0.56-0.71; p = 0.031). C-statistics of the white matter hyperintensities score were 0.62 (95% CI 0.52-0.68) to 0.65 (95% CI 0.58-0.73). An increasing burden of white matter hyperintensities was independently associated with recurrent ischemic stroke in a cohort of non-AF ischemic stroke patients. Recalibration of the CHA2DS2VASc score and the Essen Stroke Risk Score with one point for the presence of moderate to severe white matter hyperintensities led to improved discriminatory performance in ischemic stroke recurrence prediction. Risk scores based on white matter hyperintensities alone were at least as accurate as the established clinical risk scores in the prediction of ischemic stroke recurrence. © 2016 S. Karger AG, Basel.
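A small sketch of the augmentation step: add one point to the clinical score when moderate-to-severe white matter hyperintensities are present and compare discrimination (for a binary outcome the AUC equals the C-statistic). All inputs are simulated, not the OXVASC-style registry data, and the coefficients in the simulated outcome model are arbitrary.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
chads_vasc = rng.integers(0, 10, size=800)                  # synthetic CHA2DS2-VASc scores
wmh_mod_severe = rng.integers(0, 2, size=800)               # 1 = moderate/severe white matter hyperintensities
p = 1 / (1 + np.exp(-(-3.0 + 0.2 * chads_vasc + 0.9 * wmh_mod_severe)))
recurrence = rng.binomial(1, p)                             # simulated recurrent ischemic stroke

augmented = chads_vasc + wmh_mod_severe                     # one extra point for WMH

print("C-statistic, clinical score: ", round(roc_auc_score(recurrence, chads_vasc), 3))
print("C-statistic, augmented score:", round(roc_auc_score(recurrence, augmented), 3))
```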
Kuo, Ho-Chang; Wong, Henry Sung-Ching; Chang, Wei-Pin; Chen, Ben-Kuen; Wu, Mei-Shin; Yang, Kuender D; Hsieh, Kai-Sheng; Hsu, Yu-Wen; Liu, Shih-Feng; Liu, Xiao; Chang, Wei-Chiao
2017-10-01
Intravenous immunoglobulin (IVIG) is the treatment of choice in Kawasaki disease (KD). IVIG is used to prevent cardiovascular complications related to KD. However, a proportion of KD patients have persistent fever after IVIG treatment and are defined as IVIG resistant. To develop a risk scoring system based on genetic markers to predict IVIG responsiveness in KD patients, a total of 150 KD patients (126 IVIG responders and 24 IVIG nonresponders) were recruited for this study. A genome-wide association analysis was performed to compare the 2 groups and identify risk alleles for IVIG resistance. A weighted genetic risk score was calculated as the natural log of the odds ratio multiplied by the number of risk alleles. Eleven single-nucleotide polymorphisms were identified by the genome-wide association study. The KD patients were categorized into 3 groups based on their calculated weighted genetic risk score. Results indicated a significant association between weighted genetic risk score (groups 3 and 4 versus group 1) and the response to IVIG (Fisher's exact P values 4.518×10⁻³ and 8.224×10⁻¹⁰, respectively). This is the first weighted genetic risk score study based on a genome-wide association study in KD. The predictive model integrated the additive effects of all 11 single-nucleotide polymorphisms to provide a prediction of the responsiveness to IVIG. © 2017 The Authors.
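A minimal sketch of the weighted genetic risk score described above: multiply each SNP's risk-allele count by the natural log of its odds ratio and sum over SNPs. The SNP identifiers and odds ratios are invented placeholders, not the 11 SNPs identified in this study.

```python
import math

snp_odds_ratios = {"rsA": 1.8, "rsB": 2.3, "rsC": 1.4, "rsD": 3.1}   # hypothetical SNPs and ORs

def weighted_grs(genotype):
    """genotype maps SNP id -> risk-allele count (0, 1 or 2)."""
    return sum(math.log(snp_odds_ratios[s]) * n for s, n in genotype.items())

patient = {"rsA": 2, "rsB": 1, "rsC": 0, "rsD": 1}
print(round(weighted_grs(patient), 3))
```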
Kawai, Vivian K.; Chung, Cecilia P.; Solus, Joseph F.; Oeser, Annette; Raggi, Paolo; Stein, C. Michael
2014-01-01
Objective Patients with rheumatoid arthritis (RA) have increased risk of atherosclerotic cardiovascular disease (ASCVD) that is underestimated by the Framingham risk score (FRS). We hypothesized that the 2013 ACC/AHA 10-year risk score would perform better than the FRS and the Reynolds risk score (RRS) in identifying RA patients known to have elevated cardiovascular risk based on high coronary artery calcification (CAC) scores. Methods Among 98 RA patients eligible for risk stratification using the ACC/AHA score we identified 34 patients with high CAC (≥ 300 Agatston units or ≥75th percentile) and compared the ability of the 10-year FRS, RRS and the ACC/AHA risk scores to correctly assign these patients to an elevated risk category. Results All three risk scores were higher in patients with high CAC (P values <0.05). The percentage of patients with high CAC correctly assigned to the elevated risk category was similar among the three scores (FRS 32%, RRS 32%, ACC/AHA 41%) (P=0.233). The c-statistics for the FRS, RRS and ACC/AHA risk scores predicting the presence of high CAC were 0.65, 0.66, and 0.65, respectively. Conclusions The ACC/AHA 10-year risk score does not offer any advantage compared to the traditional FRS and RRS in the identification of RA patients with elevated risk as determined by high CAC. The ACC/AHA risk score assigned almost 60% of patients with high CAC into a low risk category. Risk scores and standard risk prediction models used in the general population do not adequately identify many RA patients with elevated cardiovascular risk. PMID:25371313
VanWagner, Lisa B; Ning, Hongyan; Whitsett, Maureen; Levitsky, Josh; Uttal, Sarah; Wilkins, John T; Abecassis, Michael M; Ladner, Daniela P; Skaro, Anton I; Lloyd-Jones, Donald M
2017-12-01
Cardiovascular disease (CVD) complications are important causes of morbidity and mortality after orthotopic liver transplantation (OLT). There is currently no preoperative risk-assessment tool that allows physicians to estimate the risk for CVD events following OLT. We sought to develop a point-based prediction model (risk score) for CVD complications after OLT, the Cardiovascular Risk in Orthotopic Liver Transplantation risk score, among a cohort of 1,024 consecutive patients aged 18-75 years who underwent first OLT in a tertiary-care teaching hospital (2002-2011). The main outcome measures were major 1-year CVD complications, defined as death from a CVD cause or hospitalization for a major CVD event (myocardial infarction, revascularization, heart failure, atrial fibrillation, cardiac arrest, pulmonary embolism, and/or stroke). The bootstrap method yielded bias-corrected 95% confidence intervals for the regression coefficients of the final model. Among 1,024 first OLT recipients, major CVD complications occurred in 329 (32.1%). Variables selected for inclusion in the model (using model optimization strategies) included preoperative recipient age, sex, race, employment status, education status, history of hepatocellular carcinoma, diabetes, heart failure, atrial fibrillation, pulmonary or systemic hypertension, and respiratory failure. The discriminative performance of the point-based score (C statistic = 0.78, bias-corrected C statistic = 0.77) was superior to other published risk models for postoperative CVD morbidity and mortality, and it had appropriate calibration (Hosmer-Lemeshow P = 0.33). The point-based risk score can identify patients at risk for CVD complications after OLT surgery (available at www.carolt.us); this score may be useful for identification of candidates for further risk stratification or other management strategies to improve CVD outcomes after OLT. (Hepatology 2017;66:1968-1979). © 2017 by the American Association for the Study of Liver Diseases.
Talmud, Philippa J; Hingorani, Aroon D; Cooper, Jackie A; Marmot, Michael G; Brunner, Eric J; Kumari, Meena; Kivimäki, Mika; Humphries, Steve E
2010-01-14
To assess the performance of a panel of common single nucleotide polymorphisms (genotypes) associated with type 2 diabetes in distinguishing incident cases of future type 2 diabetes (discrimination), and to examine the effect of adding genetic information to previously validated non-genetic (phenotype-based) models developed to estimate the absolute risk of type 2 diabetes. Workplace-based prospective cohort study with three 5-yearly medical screenings. 5535 initially healthy people (mean age 49 years; 33% women), of whom 302 developed new-onset type 2 diabetes over 10 years. Non-genetic variables included in two established risk models - the Cambridge type 2 diabetes risk score (age, sex, drug treatment, family history of type 2 diabetes, body mass index, smoking status) and the Framingham offspring study type 2 diabetes risk score (age, sex, parental history of type 2 diabetes, body mass index, high density lipoprotein cholesterol, triglycerides, fasting glucose) - and 20 single nucleotide polymorphisms associated with susceptibility to type 2 diabetes. Cases of incident type 2 diabetes were defined on the basis of a standard oral glucose tolerance test, self-report of a doctor's diagnosis, or the use of anti-diabetic drugs. A genetic score based on the number of risk alleles carried (range 0-40; area under receiver operating characteristic curve 0.54, 95% confidence interval 0.50 to 0.58) and a genetic risk function in which carriage of risk alleles was weighted according to the summary odds ratios of their effect from meta-analyses of genetic studies (area under receiver operating characteristic curve 0.55, 0.51 to 0.59) did not effectively discriminate cases of diabetes. The Cambridge risk score (area under curve 0.72, 0.69 to 0.76) and the Framingham offspring risk score (area under curve 0.78, 0.75 to 0.82) led to better discrimination of cases than did genotype-based tests. Adding genetic information to phenotype-based risk models did not improve discrimination and provided only a small improvement in model calibration and a modest net reclassification improvement of about 5% when added to the Cambridge risk score but not when added to the Framingham offspring risk score. The phenotype-based risk models provided greater discrimination for type 2 diabetes than did models based on 20 common independently inherited diabetes risk alleles. The addition of genotypes to phenotype-based risk models produced only minimal improvement in accuracy of risk estimation assessed by recalibration and, at best, a minor net reclassification improvement. The major translational application of the currently known common, small-effect genetic variants influencing susceptibility to type 2 diabetes is likely to come from the insight they provide on causes of disease and potential therapeutic targets.
Cardiac Society of Australia and New Zealand Position Statement: Coronary Artery Calcium Scoring.
Liew, Gary; Chow, Clara; van Pelt, Niels; Younger, John; Jelinek, Michael; Chan, Jonathan; Hamilton-Craig, Christian
2017-12-01
Coronary Artery Calcium Scoring (CAC) is a non-invasive quantitation of coronary artery calcification using computed tomography (CT). It is a marker of atherosclerotic plaque burden and an independent predictor of future myocardial infarction and mortality. Coronary Artery Calcium Scoring provides incremental risk information beyond traditional risk calculators (e.g. the Framingham Risk Score). Its use for risk stratification is confined to primary prevention of cardiovascular events, and can be considered as "individualised coronary risk scoring" for those not considered to be of high or low risk. Medical practitioners should carefully counsel patients prior to CAC. Coronary Artery Calcium Scoring should only be undertaken if an alteration in therapy, including embarking on pharmacotherapy, is being considered based on the test result. Patient groups in whom coronary calcium scoring should be considered, and groups in whom it should not be considered (patients for whom CAC is not recommended), are listed in the full statement. Interpretation of CAC: CAC=0, a zero score confers a very low risk of death (<1% at 10 years); CAC=1-100, low risk (<10%); CAC=101-400, intermediate risk (10-20%); CAC=101-400 and >75th centile, moderately high risk (15-20%); CAC>400, high risk (>20%). Management recommendations based on CAC: Optimal diet and lifestyle measures are encouraged in all risk groups and form the basis of primary prevention strategies. Patients with moderately high or high risk based on CAC score are recommended to receive preventive medical therapy such as aspirin and statins. The evidence for pharmacotherapy is less robust in patients at intermediate levels of CAC (100-400), with modest benefit for aspirin use, though statins may be reasonable if they are above the 75th centile. Aspirin and statins are generally not recommended in patients with CAC <100. Repeat CAC testing: In patients with a CAC of 0, a repeat CAC may be considered in 5 years but not sooner. In patients with a positive calcium score, routine re-scanning is not currently recommended; however, an annual increase in CAC of >15% or an annual increase of CAC >100 units is predictive of future myocardial infarction and mortality. Cost effectiveness of CAC-based primary prevention recommendations: There are currently no data in Australia and New Zealand showing that CAC is cost-effective in informing primary prevention decisions. Given the cost of testing is currently borne entirely by the patient, discussion regarding the implications of CAC results should occur before CAC is recommended and undertaken. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
Adkin, A; Brouwer, A; Simons, R R L; Smith, R P; Arnold, M E; Broughan, J; Kosmider, R; Downs, S H
2016-01-01
Identifying and ranking cattle herds with a higher risk of being or becoming infected, based on known risk factors, can help target farm biosecurity and surveillance schemes and reduce spread through animal trading. This paper describes a quantitative approach to develop risk scores, based on the probability of infection in a herd with bovine tuberculosis (bTB), to be used in a risk-based trading (RBT) scheme in England and Wales. To produce a practical scoring system, the risk factors included need to be simple and quick to understand, sufficiently informative and derived from centralised national databases to enable verification and assess compliance. A logistic regression identified herd history of bTB, local bTB prevalence, herd size and movements of animals onto farms in batches from high-risk areas as being significantly associated with the probability of bTB infection on farm. Risk factors were assigned points using the estimated odds ratios to weight them. The farm risk score was defined as the sum of these individual points, yielding a range from 1 to 5, and was calculated for each cattle farm that was trading animals in England and Wales at the start of a year. Within 12 months, of those farms tested, 30.3% of score-5 farms had a breakdown (sensitivity). Of farms scoring 1-4, only 5.4% incurred a breakdown (1-specificity). The use of this risk scoring system within RBT has the potential to reduce infected cattle movements; however, there are cost implications in ensuring that the information underpinning any system is accurate and up to date. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Dynamic TIMI Risk Score for STEMI
Amin, Sameer T.; Morrow, David A.; Braunwald, Eugene; Sloan, Sarah; Contant, Charles; Murphy, Sabina; Antman, Elliott M.
2013-01-01
Background Although there are multiple methods of risk stratification for ST‐elevation myocardial infarction (STEMI), this study presents a prospectively validated method for reclassification of patients based on in‐hospital events. A dynamic risk score provides an initial risk stratification and reassessment at discharge. Methods and Results The dynamic TIMI risk score for STEMI was derived in ExTRACT‐TIMI 25 and validated in TRITON‐TIMI 38. Baseline variables were from the original TIMI risk score for STEMI. New variables were major clinical events occurring during the index hospitalization. Each variable was tested individually in a univariate Cox proportional hazards regression. Variables with P<0.05 were incorporated into a full multivariable Cox model to assess the risk of death at 1 year. Each variable was assigned an integer value based on the odds ratio, and the final score was the sum of these values. The dynamic score included the development of in‐hospital MI, arrhythmia, major bleed, stroke, congestive heart failure, recurrent ischemia, and renal failure. The C‐statistic produced by the dynamic score in the derivation database was 0.76, with a net reclassification improvement (NRI) of 0.33 (P<0.0001) from the inclusion of dynamic events to the original TIMI risk score. In the validation database, the C‐statistic was 0.81, with an NRI of 0.35 (P=0.01). Conclusions This score is a prospectively derived, validated means of estimating 1‐year mortality of STEMI at hospital discharge and can serve as a clinically useful tool. By incorporating events during the index hospitalization, it can better define risk and help to guide treatment decisions. PMID:23525425
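A minimal sketch of the point-assignment step described above: each in-hospital event receives an integer weight derived from its multivariable Cox model estimate and the weights are summed onto the baseline score. The coefficients and the rounding rule below are assumptions for illustration, not the published values.

```python
# Hypothetical Cox model coefficients for the in-hospital events named above
# (placeholders, not the published values).
cox_coefficients = {
    "in_hospital_MI": 0.65, "arrhythmia": 0.80, "major_bleed": 0.55,
    "stroke": 1.10, "congestive_heart_failure": 0.90,
    "recurrent_ischemia": 0.40, "renal_failure": 1.30,
}

# One common convention: scale each coefficient by the smallest one and round to an integer.
base = min(cox_coefficients.values())
points = {event: int(round(beta / base)) for event, beta in cox_coefficients.items()}

def dynamic_score(events_during_admission, baseline_timi_score):
    """Add integer points for in-hospital events to the baseline TIMI risk score."""
    return baseline_timi_score + sum(points[e] for e in events_during_admission)

print(points)
print(dynamic_score({"arrhythmia", "congestive_heart_failure"}, baseline_timi_score=4))
```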
Wang, Hui; Liu, Tao; Qiu, Quan; Ding, Peng; He, Yan-Hui; Chen, Wei-Qing
2015-01-23
This study aimed to develop and validate a simple risk score for detecting individuals with impaired fasting glucose (IFG) among the Southern Chinese population. A sample of participants aged ≥20 years and without known diabetes from the 2006-2007 Guangzhou diabetes cross-sectional survey was used to develop separate risk scores for men and women. The participants completed a self-administered structured questionnaire and underwent simple clinical measurements. The risk scores were developed by multiple logistic regression analysis. External validation was performed based on three other studies: the 2007 Zhuhai rural population-based study, the 2008-2010 Guangzhou diabetes cross-sectional study and the 2007 Tibet population-based study. Performance of the scores was measured with the Hosmer-Lemeshow goodness-of-fit test and ROC c-statistic. Age, waist circumference, body mass index and family history of diabetes were included in the risk score for both men and women, with the additional factor of hypertension for men. The ROC c-statistic was 0.70 for both men and women in the derivation samples. Risk scores of ≥28 for men and ≥18 for women showed respective sensitivity, specificity, positive predictive value and negative predictive value of 56.6%, 71.7%, 13.0% and 96.0% for men and 68.7%, 60.2%, 11% and 96.0% for women in the derivation population. The scores performed comparably with the Zhuhai rural sample and the 2008-2010 Guangzhou urban samples but poorly in the Tibet sample. The performance of pre-existing USA, Shanghai, and Chengdu risk scores was poorer in our population than in their original study populations. The results suggest that the developed simple IFG risk scores can be generalized in Guangzhou city and nearby rural regions and may help primary health care workers to identify individuals with IFG in their practice.
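The screening decision described above reduces to applying the sex-specific cutoffs to a computed point total. A minimal sketch, assuming only the cutoffs quoted in the abstract; the point weights themselves are not reproduced here.

```python
def ifg_screen_positive(score, sex):
    """Apply the sex-specific cutoffs reported above (>=28 for men, >=18 for women).

    The score itself must first be computed from the published point weights for age,
    waist circumference, BMI, family history (and hypertension in men), which are
    not reproduced in this sketch.
    """
    cutoff = 28 if sex == "male" else 18
    return score >= cutoff

print(ifg_screen_positive(30, "male"), ifg_screen_positive(15, "female"))  # True False
```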
Monnereau, Claire; Vogelezang, Suzanne; Kruithof, Claudia J; Jaddoe, Vincent W V; Felix, Janine F
2016-08-18
Results from genome-wide association studies (GWAS) identified many loci and biological pathways that influence adult body mass index (BMI). We aimed to determine whether biological pathways related to adult BMI also affect infant growth and childhood adiposity measures. We used data from a population-based prospective cohort study among 3,975 children with a mean age of 6 years. Genetic risk scores were constructed based on the 97 SNPs associated with adult BMI previously identified in GWAS and on 28 BMI-related biological pathways defined by subsets of these 97 SNPs. Outcomes were infant peak weight velocity, BMI at adiposity peak and age at adiposity peak, and childhood BMI, total fat mass percentage, android/gynoid fat ratio, and preperitoneal fat area. Analyses were performed using linear regression models. A higher overall adult BMI risk score was associated with infant BMI at adiposity peak and childhood BMI, total fat mass, android/gynoid fat ratio, and preperitoneal fat area (all p-values < 0.05). Analyses focused on specific biological pathways showed that the membrane proteins genetic risk score was associated with infant peak weight velocity, and the genetic risk scores related to neuronal developmental processes, hypothalamic processes, cyclic AMP, WNT-signaling, membrane proteins, monogenic obesity and/or energy homeostasis, glucose homeostasis, cell cycle, and muscle biology pathways were associated with childhood adiposity measures (all p-values <0.05). None of the pathways were associated with childhood preperitoneal fat area. A genetic risk score based on 97 SNPs related to adult BMI was associated with peak weight velocity during infancy and general and abdominal fat measurements at the age of 6 years. Risk scores based on genetic variants linked to specific biological pathways, including central nervous system and hypothalamic processes, influence body fat development from early life onwards.
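A genetic risk score of this kind is typically a weighted sum of risk-allele counts across the selected SNPs. The sketch below uses hypothetical SNP IDs and weights purely for illustration.

```python
# A genetic risk score is commonly the weighted sum of risk-allele counts (0, 1 or 2)
# across the selected SNPs. SNP IDs and weights below are hypothetical placeholders.
snp_weights = {"rs0000001": 0.08, "rs0000002": 0.05, "rs0000003": 0.03}

def genetic_risk_score(genotypes):
    """genotypes maps SNP ID -> risk-allele count (0, 1 or 2)."""
    return sum(w * genotypes.get(snp, 0) for snp, w in snp_weights.items())

print(genetic_risk_score({"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}))  # 0.21
```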
Albéniz, Eduardo; Fraile, María; Ibáñez, Berta; Alonso-Aguirre, Pedro; Martínez-Ares, David; Soto, Santiago; Gargallo, Carla Jerusalén; Ramos Zabala, Felipe; Álvarez, Marco Antonio; Rodríguez-Sánchez, Joaquín; Múgica, Fernando; Nogales, Óscar; Herreros de Tejada, Alberto; Redondo, Eduardo; Guarner-Argente, Carlos; Pin, Noel; León-Brito, Helena; Pardeiro, Remedios; López-Roses, Leopoldo; Rodríguez-Téllez, Manuel; Jiménez, Alejandra; Martínez-Alcalá, Felipe; García, Orlando; de la Peña, Joaquín; Ono, Akiko; Alberca de Las Parras, Fernando; Pellisé, María; Rivero, Liseth; Saperas, Esteban; Pérez-Roldán, Francisco; Pueyo Royo, Antonio; Eguaras Ros, Javier; Zúñiga Ripa, Alba; Concepción-Martín, Mar; Huelin-Álvarez, Patricia; Colán-Hernández, Juan; Cubiella, Joaquín; Remedios, David; Bessa I Caserras, Xavier; López-Viedma, Bartolomé; Cobian, Julyssa; González-Haba, Mariano; Santiago, José; Martínez-Cara, Juan Gabriel; Valdivielso, Eduardo
2016-08-01
After endoscopic mucosal resection (EMR) of colorectal lesions, delayed bleeding is the most common serious complication, but there are no guidelines for its prevention. We aimed to identify risk factors associated with delayed bleeding that required medical attention after discharge until day 15 and develop a scoring system to identify patients at risk. We performed a prospective study of 1214 consecutive patients with nonpedunculated colorectal lesions 20 mm or larger treated by EMR (n = 1255) at 23 hospitals in Spain, from February 2013 through February 2015. Patients were examined 15 days after the procedure, and medical data were collected. We used the data to create a delayed bleeding scoring system, and assigned a weight to each risk factor based on the β parameter from multivariate logistic regression analysis. Patients were classified as being at low, average, or high risk for delayed bleeding. Delayed bleeding occurred in 46 cases (3.7%; 95% confidence interval, 2.7%-4.9%). In multivariate analysis, factors associated with delayed bleeding included age ≥75 years (odds ratio [OR], 2.36; P < .01), American Society of Anesthesiologist classification scores of III or IV (OR, 1.90; P ≤ .05), aspirin use during EMR (OR, 3.16; P < .05), right-sided lesions (OR, 4.86; P < .01), lesion size ≥40 mm (OR, 1.91; P ≤ .05), and a mucosal gap not closed by hemoclips (OR, 3.63; P ≤ .01). We developed a risk scoring system based on these 6 variables that assigned patients to the low-risk (score, 0-3), average-risk (score, 4-7), or high-risk (score, 8-10) categories, with an area under the receiver operating characteristic (ROC) curve of 0.77 (95% confidence interval, 0.70-0.83). In these groups, the probabilities of delayed bleeding were 0.6%, 5.5%, and 40%, respectively. The risk of delayed bleeding after EMR of large colorectal lesions is 3.7%. We developed a risk scoring system based on 6 factors that determined the risk for delayed bleeding (area under the ROC curve, 0.77). The factors most strongly associated with delayed bleeding were right-sided lesions, aspirin use, and mucosal defects not closed by hemoclips. Patients considered to be high risk (score, 8-10) had a 40% probability of delayed bleeding. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
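The published categories (0-3 low, 4-7 average, 8-10 high) can be applied with a simple lookup. The point values below are hypothetical placeholders standing in for the β-derived weights of the six factors listed above.

```python
# Illustrative only: point values are hypothetical; the published score assigns
# weights derived from the beta coefficients of the six factors listed above.
HYPOTHETICAL_POINTS = {
    "age_ge_75": 1, "asa_class_III_IV": 1, "aspirin_during_EMR": 2,
    "right_sided_lesion": 3, "size_ge_40mm": 1, "mucosal_gap_not_clipped": 2,
}

def bleeding_risk_category(factors):
    score = sum(HYPOTHETICAL_POINTS[f] for f in factors)
    if score <= 3:
        return f"low risk (score {score})"
    if score <= 7:
        return f"average risk (score {score})"
    return f"high risk (score {score})"

print(bleeding_risk_category({"right_sided_lesion", "aspirin_during_EMR", "mucosal_gap_not_clipped"}))  # average risk (score 7)
```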
A risk score for predicting near-term incidence of hypertension: the Framingham Heart Study.
Parikh, Nisha I; Pencina, Michael J; Wang, Thomas J; Benjamin, Emelia J; Lanier, Katherine J; Levy, Daniel; D'Agostino, Ralph B; Kannel, William B; Vasan, Ramachandran S
2008-01-15
Studies suggest that targeting high-risk, nonhypertensive individuals for treatment may delay hypertension onset, thereby possibly mitigating vascular complications. Risk stratification may facilitate cost-effective approaches to management. To develop a simple risk score for predicting hypertension incidence by using measures readily obtained in the physician's office. Longitudinal cohort study. Framingham Heart Study, Framingham, Massachusetts. 1717 nonhypertensive white individuals 20 to 69 years of age (mean age, 42 years; 54% women), without diabetes and with both parents in the original cohort of the Framingham Heart Study, contributed 5814 person-examinations. Scores were developed for predicting the 1-, 2-, and 4-year risk for new-onset hypertension, and performance characteristics of the prediction algorithm were assessed by using calibration and discrimination measures. Parental hypertension was ascertained from examinations of the original cohort of the Framingham Heart Study. During follow-up (median time over all person-examinations, 3.8 years), 796 persons (52% women) developed new-onset hypertension. In multivariable analyses, age, sex, systolic and diastolic blood pressure, body mass index, parental hypertension, and cigarette smoking were significant predictors of hypertension. According to the risk score based on these factors, the 4-year risk for incident hypertension was classified as low (<5%) in 34% of participants, medium (5% to 10%) in 19%, and high (>10%) in 47%. The c-statistic for the prediction model was 0.788, and calibration was very good. The risk score findings may not be generalizable to persons of nonwhite race or ethnicity or to persons with diabetes. The risk score algorithm has not been validated in an independent cohort and is based on single measurements of risk factors and blood pressure. The hypertension risk prediction score can be used to estimate an individual's absolute risk for hypertension on short-term follow-up, and it represents a simple, office-based tool that may facilitate management of high-risk individuals with prehypertension.
Riaz, Saima; Bashir, Humayun; Niazi, Imran Khalid; Butt, Sumera; Qamar, Faisal
2018-06-01
Mirels' scoring system quantifies the risk of sustaining a pathologic fracture in osseous metastases of weight-bearing long bones. Conventional Mirels' scoring is based on radiographs. Our pilot study proposes a Tc-MDP bone SPECT-CT-based modified Mirels' scoring system and compares it with conventional Mirels' scoring. Cortical lysis was noted in 8 (24%) patients by SPECT-CT versus 2 (6.3%) on X-rays. Additional SPECT-CT parameters were circumferential involvement [1/4 (31%), 1/2 (3%), 3/4 (37.5%), 4/4 (28%)] and extra-osseous soft tissue [3%]. Our pilot study suggests a potential role for SPECT-CT in predicting risk of fracture in osseous metastases.
O’Bryant, Sid E.; Xiao, Guanghua; Barber, Robert; Cullum, C. Munro; Weiner, Myron; Hall, James; Edwards, Melissa; Grammas, Paula; Wilhelmsen, Kirk; Doody, Rachelle; Diaz-Arrastia, Ramon
2015-01-01
Background Prior work on the link between blood-based biomarkers and cognitive status has largely been based on dichotomous classifications rather than detailed neuropsychological functioning. The current project was designed to create serum-based biomarker algorithms that predict neuropsychological test performance. Methods A battery of neuropsychological measures was administered. Random forest analyses were utilized to create neuropsychological test-specific biomarker risk scores in a training set that were entered into linear regression models predicting the respective test scores in the test set. Serum multiplex biomarker data were analyzed on 108 proteins from 395 participants (197 AD cases and 198 controls) from the Texas Alzheimer’s Research and Care Consortium. Results The biomarker risk scores were significant predictors (p<0.05) of scores on all neuropsychological tests. With the exception of premorbid intellectual status (6.6%), the biomarker risk scores alone accounted for a minimum of 12.9% of the variance in neuropsychological scores. Biomarker algorithms (biomarker risk scores + demographics) accounted for substantially more variance in scores. Review of the variable importance plots indicated differential patterns of biomarker significance for each test, suggesting the possibility of domain-specific biomarker algorithms. Conclusions Our findings provide proof-of-concept for a novel area of scientific discovery, which we term “molecular neuropsychology.” PMID:24107792
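A minimal sketch of the two-stage approach described above: a random forest produces a biomarker risk score, which then enters a linear model predicting the neuropsychological test score. Data, array shapes and hyperparameters are synthetic placeholders, not TARCC data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 108))                   # synthetic serum biomarker panel (108 proteins)
y = X[:, :5].sum(axis=1) + rng.normal(size=400)   # synthetic neuropsychological test score

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: a random forest trained on the training set produces a biomarker "risk score".
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
risk_score_test = rf.predict(X_test)

# Stage 2: the risk score (optionally plus demographics) enters a linear model
# predicting the observed test score in the held-out set.
lm = LinearRegression().fit(risk_score_test.reshape(-1, 1), y_test)
print("variance explained by the biomarker risk score:", lm.score(risk_score_test.reshape(-1, 1), y_test))
```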
Benza, Raymond L; Miller, Dave P; Foreman, Aimee J; Frost, Adaani E; Badesch, David B; Benton, Wade W; McGoon, Michael D
2015-03-01
Data from the Registry to Evaluate Early and Long-Term Pulmonary Arterial Hypertension Disease Management (REVEAL) were used previously to develop a risk score calculator to predict 1-year survival. We evaluated prognostic implications of changes in the risk score and individual risk-score parameters over 12 months. Patients were grouped by decreased, unchanged, or increased risk score from enrollment to 12 months. Kaplan-Meier estimates of subsequent 1-year survival were made based on change in the risk score during the initial 12 months of follow-up. Cox regression was used for multivariable analysis. Of 2,529 patients in the analysis cohort, the risk score was decreased in 800, unchanged in 959, and increased in 770 at 12 months post-enrollment. Six parameters (functional class, systolic blood pressure, heart rate, 6-minute walk distance, brain natriuretic peptide levels, and pericardial effusion) each changed sufficiently over time to improve or worsen risk scores in ≥5% of patients. One-year survival estimates in the subsequent year were 93.7%, 90.3%, and 84.6% in patients with a decreased, unchanged, and increased risk score at 12 months, respectively. Change in risk score significantly predicted future survival, adjusting for risk at enrollment. Considering follow-up risk concurrently with risk at enrollment, follow-up risk was a much stronger predictor, although risk at enrollment maintained a significant effect on future survival. Changes in REVEAL risk scores occur in most patients with pulmonary arterial hypertension over a 12-month period and are predictive of survival. Thus, serial risk score assessments can identify changes in disease trajectory that may warrant treatment modifications. Copyright © 2015 International Society for Heart and Lung Transplantation. All rights reserved.
Neeki, Michael M.; Dong, Fanglong; Au, Christine; Toy, Jake; Khoshab, Nima; Lee, Carol; Kwong, Eugene; Yuen, Ho Wang; Lee, Jonathan; Ayvazian, Arbi; Lux, Pamela; Borger, Rodney
2017-01-01
Introduction Necrotizing fasciitis (NF) is an uncommon but rapidly progressive infection that results in gross morbidity and mortality if not treated in its early stages. The Laboratory Risk Indicator for Necrotizing Fasciitis (LRINEC) score is used to distinguish NF from other soft tissue infections such as cellulitis or abscess. This study analyzed the ability of the LRINEC score to accurately rule out NF in patients who were confirmed to have cellulitis, as well as the capability to differentiate cellulitis from NF. Methods This was a 10-year retrospective chart-review study that included emergency department (ED) patients ≥18 years old with a diagnosis of cellulitis or NF. We calculated a LRINEC score ranging from 0–13 for each patient with all pertinent laboratory values. Three categories were developed per the original LRINEC score guidelines denoting NF risk stratification: high risk (LRINEC score ≥8), moderate risk (LRINEC score 6–7), and low risk (LRINEC score ≤5). All cases missing laboratory values were due to the absence of a C-reactive protein (CRP) value. Since the score for a negative or positive CRP value for the LRINEC score was 0 or 4 respectively, a LRINEC score of 0 or 1 without a CRP value would have placed the patient in the “low risk” group and a LRINEC score of 8 or greater without CRP value would have placed the patient in the “high risk” group. These patients missing CRP values were added to these respective groups. Results Among the 948 ED patients with cellulitis, more than one-tenth (10.7%, n=102 of 948) were moderate or high risk for NF based on LRINEC score. Of the 135 ED patients with a diagnosis of NF, 22 patients had valid CRP laboratory values and LRINEC scores were calculated. Among the other 113 patients without CRP values, six patients had a LRINEC score ≥ 8, and 19 patients had a LRINEC score ≤ 1. Thus, a total of 47 patients were further classified based on LRINEC score without a CRP value. More than half of the NF group (63.8%, n=30 of 47) had a low risk based on LRINEC ≤5. Moreover, LRINEC appeared to perform better in the diabetes population than in the non-diabetes population. Conclusion The LRINEC score may not be an accurate tool for NF risk stratification and differentiation between cellulitis and NF in the ED setting. This decision instrument demonstrated a high false positive rate when determining NF risk stratification in confirmed cases of cellulitis and a high false negative rate in cases of confirmed NF. PMID:28611889
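The missing-CRP logic described above can be made explicit in code. This is a sketch of that specific rule only; the remaining LRINEC components are assumed to have been scored already.

```python
def lrinec_risk_group(score_without_crp, crp_points=None):
    """Assign the LRINEC risk group, reproducing the missing-CRP logic described above.

    crp_points is 0 (CRP negative), 4 (CRP positive), or None if CRP was not measured.
    """
    if crp_points is None:
        # CRP is worth 0 or 4 points, so without it the group is only determined when
        # the remaining score already pins it down: <=1 stays low risk, >=8 stays high risk.
        if score_without_crp <= 1:
            return "low risk"
        if score_without_crp >= 8:
            return "high risk"
        return None  # indeterminate without CRP
    total = score_without_crp + crp_points
    if total <= 5:
        return "low risk"
    if total <= 7:
        return "moderate risk"
    return "high risk"

print(lrinec_risk_group(1, None), lrinec_risk_group(4, 4))  # low risk, high risk
```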
Brautbar, Ariel; Pompeii, Lisa A.; Dehghan, Abbas; Ngwa, Julius S.; Nambi, Vijay; Virani, Salim S.; Rivadeneira, Fernando; Uitterlinden, André G.; Hofman, Albert; Witteman, Jacqueline C.M.; Pencina, Michael J.; Folsom, Aaron R.; Cupples, L. Adrienne; Ballantyne, Christie M.; Boerwinkle, Eric
2013-01-01
Objective Multiple studies have identified single-nucleotide polymorphisms (SNPs) that are associated with coronary heart disease (CHD). We examined whether SNPs selected based on predefined criteria will improve CHD risk prediction when added to traditional risk factors (TRFs). Methods SNPs were selected from the literature based on association with CHD, lack of association with a known CHD risk factor, and successful replication. A genetic risk score (GRS) was constructed based on these SNPs. Cox proportional hazards model was used to calculate CHD risk based on the Atherosclerosis Risk in Communities (ARIC) and Framingham CHD risk scores with and without the GRS. Results The GRS was associated with risk for CHD (hazard ratio [HR] = 1.10; 95% confidence interval [CI]: 1.07–1.13). Addition of the GRS to the ARIC risk score significantly improved discrimination, reclassification, and calibration beyond that afforded by TRFs alone in non-Hispanic whites in the ARIC study. The area under the receiver operating characteristic curve (AUC) increased from 0.742 to 0.749 (Δ= 0.007; 95% CI, 0.004–0.013), and the net reclassification index (NRI) was 6.3%. Although the risk estimates for CHD in the Framingham Offspring (HR = 1.12; 95% CI: 1.10–1.14) and Rotterdam (HR = 1.08; 95% CI: 1.02–1.14) Studies were significantly improved by adding the GRS to TRFs, improvements in AUC and NRI were modest. Conclusion Addition of a GRS based on direct associations with CHD to TRFs significantly improved discrimination and reclassification in white participants of the ARIC Study, with no significant improvement in the Rotterdam and Framingham Offspring Studies. PMID:22789513
Recalibration of the ACC/AHA Risk Score in Two Population-Based German Cohorts
de las Heras Gala, Tonia; Geisel, Marie Henrike; Peters, Annette; Thorand, Barbara; Baumert, Jens; Lehmann, Nils; Jöckel, Karl-Heinz; Moebus, Susanne; Erbel, Raimund; Meisinger, Christine
2016-01-01
Background The 2013 ACC/AHA guidelines introduced an algorithm for risk assessment of atherosclerotic cardiovascular disease (ASCVD) within 10 years. In Germany, risk assessment with the ESC SCORE is limited to cardiovascular mortality. Applicability of the novel ACC/AHA risk score to the German population has not yet been assessed. We therefore sought to recalibrate and evaluate the ACC/AHA risk score in two German cohorts and to compare it to the ESC SCORE. Methods We studied 5,238 participants from the KORA surveys S3 (1994–1995) and S4 (1999–2001) and 4,208 subjects from the Heinz Nixdorf Recall (HNR) Study (2000–2003). There were 383 (7.3%) and 271 (6.4%) first non-fatal or fatal ASCVD events within 10 years in KORA and in HNR, respectively. Risk scores were evaluated in terms of calibration and discrimination performance. Results The original ACC/AHA risk score overestimated 10-year ASCVD rates by 37% in KORA and 66% in HNR. After recalibration, miscalibration diminished to 8% underestimation in KORA and 12% overestimation in HNR. Discrimination performance of the ACC/AHA risk score was not affected by the recalibration (KORA: C = 0.78, HNR: C = 0.74). The ESC SCORE overestimated by 5% in KORA and by 85% in HNR. The corresponding C-statistic was 0.82 in KORA and 0.76 in HNR. Conclusions The recalibrated ACC/AHA risk score showed strongly improved calibration compared to the original ACC/AHA risk score. Predicting only cardiovascular mortality, discrimination performance of the commonly used ESC SCORE remained somewhat superior to the ACC/AHA risk score. Nevertheless, the recalibrated ACC/AHA risk score may provide a meaningful tool for estimating 10-year risk of fatal and non-fatal cardiovascular disease in Germany. PMID:27732641
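Recalibration of an existing risk equation is commonly done by keeping the original coefficients but substituting the target population's mean risk-factor values and baseline survival. The sketch below shows that generic idea under hypothetical coefficients; it is not the paper's exact procedure.

```python
import numpy as np

def recalibrated_10yr_risk(x, beta, x_mean_target, s0_target):
    """Generic recalibration sketch (not the paper's exact procedure): keep the original
    coefficients but substitute the target population's mean risk-factor values and
    baseline 10-year survival."""
    linear_predictor = np.dot(beta, x - x_mean_target)
    return 1.0 - s0_target ** np.exp(linear_predictor)

beta = np.array([0.07, 0.02, 0.60])      # hypothetical coefficients: age, systolic BP, smoker
x = np.array([61.0, 150.0, 1.0])         # an individual's risk-factor values
x_mean = np.array([55.0, 135.0, 0.25])   # hypothetical target-population means
print(f"{recalibrated_10yr_risk(x, beta, x_mean, s0_target=0.95):.1%}")
```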
Sharma, Sheena; Denburg, Michelle R; Furth, Susan L
2017-08-01
Children with chronic kidney disease (CKD) have a high prevalence of cardiovascular disease (CVD) risk factors which may contribute to the development of cardiovascular events in adulthood. Among adults with CKD, cystatin C-based estimates of glomerular filtration rate (eGFR) demonstrate a stronger predictive value for cardiovascular events than creatinine-based eGFR. The PDAY (Pathobiological Determinants of Atherosclerosis in Youth) risk score is a validated tool used to estimate the probability of advanced coronary atherosclerotic lesions in young adults. To assess the association between cystatin C-based versus creatinine-based eGFR (eGFR cystatin C and eGFR creatinine, respectively) and cardiovascular risk using a modified PDAY risk score as a proxy for CVD in children and young adults, we performed a cross-sectional study of 71 participants with CKD [median age 15.5 years; inter-quartile range (IQR) 13, 17] and 33 healthy controls (median age 15.1 years; IQR 13, 17). eGFR was calculated using age-appropriate creatinine- and cystatin C-based formulas. Median eGFR creatinine and eGFR cystatin C for CKD participants were 50 (IQR 30, 75) and 53 (IQR 32, 74) mL/min/1.73 m², respectively. For the healthy controls, median eGFR creatinine and eGFR cystatin C were 112 (IQR 85, 128) and 106 (IQR 95, 123) mL/min/1.73 m², respectively. A modified PDAY risk score was calculated based on sex, age, serum lipoprotein concentrations, obesity, smoking status, hypertension, and hyperglycemia. Modified PDAY scores ranged from -2 to 20. The Spearman's correlations of eGFR creatinine and eGFR cystatin C with coronary artery PDAY scores were -0.23 (p = 0.02) and -0.28 (p = 0.004), respectively. Ordinal logistic regression also showed a similar association of higher eGFR creatinine and higher eGFR cystatin C with lower PDAY scores. When stratified by age <18 or ≥18 years, the correlations of eGFR creatinine and eGFR cystatin C with PDAY score were modest and similar in children [-0.29 (p = 0.008) vs. -0.32 (p = 0.004), respectively]. Despite a smaller sample size, the correlation in adults was stronger for eGFR cystatin C (-0.57; p = 0.006) than for eGFR creatinine (-0.40; p = 0.07). Overall, the correlation between cystatin C- or creatinine-based eGFR and PDAY risk score was similar in children. Further studies in children with CKD should explore the association between cystatin C and cardiovascular risk.
Vázquez-Acosta, Jorge A; Ramírez-Gutiérrez, Álvaro E; Cerecedo-Rosendo, Mario A; Olivera-Barrera, Francisco M; Tenorio-Sánchez, Salvador S; Nieto-Villarreal, Javier; González-Borjas, José M; Villanueva-Rodríguez, Estefanie
2016-01-01
To evaluate the risk of stroke and bleeding using the CHA2DS2-VASc and HAS-BLED scores in Mexican patients with atrial fibrillation and to analyze whether the risk score obtained determined treatment decisions regarding antithrombotic therapy. This is an observational, retrospective study in Mexican patients recently diagnosed with atrial fibrillation. The risk of stroke was assessed using the CHA2DS2-VASc score. The bleeding risk was evaluated using the HAS-BLED score. The frequency of use of antithrombotic therapy was calculated according to the results of the risk score assessment. A total of 350 patients with non-valvular atrial fibrillation were analyzed. Overall, 92.9% of patients had a high risk of stroke (score ≥2) according to the CHA2DS2-VASc score, yet only 17.2% were treated with anticoagulants. A high proportion of patients with atrial fibrillation (72.5%) showed both a high risk of stroke and a high risk of bleeding based on the HAS-BLED score. In this group of patients with atrial fibrillation from Northeast Mexico, there is remarkable underutilization of anticoagulation despite these patients' high risk of stroke.
Austin, Peter C; Walraven, Carl van
2011-10-01
Logistic regression models that incorporated age, sex, and indicator variables for the Johns Hopkins Aggregated Diagnosis Groups (ADGs) categories have been shown to accurately predict all-cause mortality in adults. We aimed to develop 2 different point-scoring systems using the ADGs. The Mortality Risk Score (MRS) collapses age, sex, and the ADGs to a single summary score that predicts the annual risk of all-cause death in adults. The ADG Score derives weights for the individual ADG diagnosis groups. This was a retrospective cohort study constructed using population-based administrative data. All 10,498,413 residents of Ontario, Canada, between the ages of 20 and 100 years who were alive on their birthday in 2007 participated in this study. Participants were randomly divided into derivation and validation samples. The outcome was death within 1 year. In the derivation cohort, the MRS ranged from -21 to 139 (median value 29, IQR 17 to 44). In the validation group, a logistic regression model with the MRS as the sole predictor significantly predicted the risk of 1-year mortality with a c-statistic of 0.917. A regression model with age, sex, and the ADG Score had similar performance. Both methods accurately predicted the risk of 1-year mortality across the 20 vigintiles of risk. The MRS combined values for a person's age, sex, and the Johns Hopkins ADGs to accurately predict 1-year mortality in adults. The ADG Score is a weighted score representing the presence or absence of the 32 ADG diagnosis groups. These scores will facilitate health services researchers conducting risk adjustment using administrative health care databases.
Mahabadi, Amir A; Möhlenkamp, Stefan; Moebus, Susanne; Dragano, Nico; Kälsch, Hagen; Bauer, Marcus; Jöckel, Karl-Heinz; Erbel, Raimund
2011-10-01
Non-contrast-enhanced computed tomography (CT) imaging of the heart enables noninvasive quantification of coronary artery calcification (CAC), a surrogate marker of the atherosclerotic burden in the coronary artery tree. Multiple studies have underlined the ability of the CAC score to improve individual risk stratification and, accordingly, the American Heart Association recommended cardiac CT for risk assessment in individuals with an intermediate risk of cardiovascular events as measured by the Framingham Risk Score. However, limitations in transcribing risk stratification algorithms based on American cohort studies into European populations have been acknowledged in the past. Moreover, data on implications for reclassification into higher- or lower-risk groups based on CAC scores were lacking. The Heinz Nixdorf Recall (HNR) study is a population-based cohort study that investigated the ability of CAC scoring to predict major cardiovascular events above and beyond traditional cardiovascular risk factors. According to the Heinz Nixdorf Recall findings, CAC can be used for reclassification, especially in those in the intermediate-risk group, to advise on lifestyle changes for the reclassified low-risk category, or to implement intensive treatments for the reclassified high-risk individuals. This article discusses the present findings of the Heinz Nixdorf Recall Study with respect to the current literature, risk stratification algorithms, and current European guidelines for risk prediction.
Predicting risk of substantial weight gain in German adults-a multi-center cohort approach.
Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf; Steffen, Annika
2017-08-01
A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32,204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed, assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score's discriminatory accuracy. The cross-validated c index (95% CI) was 0.71 (0.67-0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. © The Author 2016. Published by Oxford University Press on behalf of the European Public Health Association.
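A minimal sketch of applying such a score: a linear combination of β-weighted factors compared against the ≥475-point cutoff. Factor names and weights are hypothetical placeholders, not the published model.

```python
# Illustrative only: predictor names and weights are hypothetical; the published score
# is a linear combination of 15 socio-demographic, dietary and lifestyle factors weighted
# by proportional-hazards beta coefficients, with >=475 points flagging high risk.
HYPOTHETICAL_WEIGHTS = {"age_under_35": 120, "short_sleep": 60, "high_soft_drink_intake": 90,
                        "low_physical_activity": 110, "smoking_cessation_planned": 100}

def weight_gain_risk_points(profile):
    return int(sum(w for factor, w in HYPOTHETICAL_WEIGHTS.items() if profile.get(factor)))

profile = {"age_under_35": True, "low_physical_activity": True,
           "high_soft_drink_intake": True, "smoking_cessation_planned": True}
pts = weight_gain_risk_points(profile)
print(pts, "-> high risk" if pts >= 475 else "-> below cutoff")
```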
Rücker, Viktoria; Keil, Ulrich; Fitzgerald, Anthony P; Malzahn, Uwe; Prugger, Christof; Ertl, Georg; Heuschmann, Peter U; Neuhauser, Hannelore
2016-01-01
Estimation of absolute risk of cardiovascular disease (CVD), preferably with population-specific risk charts, has become a cornerstone of CVD primary prevention. Regular recalibration of risk charts may be necessary due to decreasing CVD rates and CVD risk factor levels. The SCORE risk charts for fatal CVD risk assessment were first calibrated for Germany with 1998 risk factor level data and 1999 mortality statistics. We present an update of these risk charts based on the SCORE methodology including estimates of relative risks from SCORE, risk factor levels from the German Health Interview and Examination Survey for Adults 2008–11 (DEGS1) and official mortality statistics from 2012. Competing risks methods were applied and estimates were independently validated. Updated risk charts were calculated based on cholesterol, smoking, systolic blood pressure risk factor levels, sex and 5-year age-groups. The absolute 10-year risk estimates of fatal CVD were lower according to the updated risk charts compared to the first calibration for Germany. In a nationwide sample of 3062 adults aged 40–65 years free of major CVD from DEGS1, the mean 10-year risk of fatal CVD estimated by the updated charts was lower by 29% and the estimated proportion of high-risk people (10-year risk ≥5%) by 50% compared to the older risk charts. This recalibration shows a need for regular updates of risk charts according to changes in mortality and risk factor levels in order to sustain the identification of people with a high CVD risk. PMID:27612145
Novel risk score of contrast-induced nephropathy after percutaneous coronary intervention.
Ji, Ling; Su, XiaoFeng; Qin, Wei; Mi, XuHua; Liu, Fei; Tang, XiaoHong; Li, Zi; Yang, LiChuan
2015-08-01
Contrast-induced nephropathy (CIN) post-percutaneous coronary intervention (PCI) is a major cause of acute kidney injury. In this study, we established a comprehensive risk score model to assess the risk of CIN after the PCI procedure, which could be easily used in a clinical environment. A total of 805 PCI patients, divided into an analysis cohort (70%) and a validation cohort (30%), were enrolled retrospectively in this study. Risk factors for CIN were identified using univariate analysis and multivariate logistic regression in the analysis cohort. The risk score model was developed based on multiple regression coefficients. Sensitivity and specificity of the new risk score system were validated in the validation cohort. Comparisons between the new risk score model and previously reported models were applied. The incidence of post-PCI CIN in the analysis cohort (n = 565) was 12%. A considerably higher CIN incidence (50%) was observed in patients with chronic kidney disease (CKD). Age >75 years, body mass index (BMI) >25, myoglobin level, cardiac function level, hypoalbuminaemia, history of CKD, intra-aortic balloon pump (IABP) use and peripheral vascular disease (PVD) were identified as independent risk factors of post-PCI CIN. A novel risk score model was established using multivariate regression coefficients, which showed the highest sensitivity and specificity (0.917; 95% CI 0.877-0.957) compared with previous models. A new post-PCI CIN risk score model was developed based on a retrospective study of 805 patients. Application of this model might be helpful to predict CIN in patients undergoing the PCI procedure. © 2015 Asian Pacific Society of Nephrology.
Eapen, Danny J; Manocha, Pankaj; Patel, Riyaz S; Hammadah, Muhammad; Veledar, Emir; Wassel, Christina; Nanjundappa, Ravi A; Sikora, Sergey; Malayter, Dylan; Wilson, Peter W F; Sperling, Laurence; Quyyumi, Arshed A; Epstein, Stephen E
2013-07-23
This study sought to determine an aggregate, pathway-specific risk score for enhanced prediction of death and myocardial infarction (MI). Activation of inflammatory, coagulation, and cellular stress pathways contributes to atherosclerotic plaque rupture. We hypothesized that an aggregate risk score comprised of biomarkers involved in these different pathways-high-sensitivity C-reactive protein (CRP), fibrin degradation products (FDP), and heat shock protein 70 (HSP70) levels-would be a powerful predictor of death and MI. Serum levels of CRP, FDP, and HSP70 were measured in 3,415 consecutive patients with suspected or confirmed coronary artery disease (CAD) undergoing cardiac catheterization. Survival analyses were performed with models adjusted for established risk factors. Median follow-up was 2.3 years. Hazard ratios (HRs) for all-cause death and MI based on cutpoints were as follows: CRP ≥3.0 mg/l, HR: 1.61; HSP70 >0.625 ng/ml, HR: 2.26; and FDP ≥1.0 μg/ml, HR: 1.62 (p < 0.0001 for all). An aggregate biomarker score between 0 and 3 was calculated based on these cutpoints. Compared with the group with a 0 score, HRs for all-cause death and MI were 1.83, 3.46, and 4.99 for those with scores of 1, 2, and 3, respectively (p for each: <0.001). Annual event rates were 16.3% for the 4.2% of patients with a score of 3 compared with 2.4% in the 36.4% of patients with a score of 0. The C statistic and net reclassification improved (p < 0.0001) with the addition of the biomarker score. An aggregate score based on serum levels of CRP, FDP, and HSP70 is a predictor of future risk of death and MI in patients with suspected or known CAD. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
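The aggregate score itself is fully specified by the three cutpoints quoted above, so it can be computed directly:

```python
def biomarker_score(crp_mg_l, hsp70_ng_ml, fdp_ug_ml):
    """Aggregate 0-3 score: one point per biomarker at or beyond the cutpoints quoted above."""
    return int(crp_mg_l >= 3.0) + int(hsp70_ng_ml > 0.625) + int(fdp_ug_ml >= 1.0)

print(biomarker_score(crp_mg_l=4.1, hsp70_ng_ml=0.2, fdp_ug_ml=1.5))  # 2
```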
Vikhireva, Olga; Broda, Grazyna; Kubinova, Ruzena; Malyutina, Sofia; Pająk, Andrzej; Tamosiunas, Abdonas; Skodova, Zdena; Simonova, Galina; Bobak, Martin; Pikhart, Hynek
2014-01-01
The SCORE scale predicts the 10-year risk of fatal atherosclerotic cardiovascular disease (CVD), based on conventional risk factors. The high-risk version of SCORE is recommended for Central and Eastern Europe and former Soviet Union (CEE/FSU), due to high CVD mortality rates in these countries. Given the pronounced social gradient in cardiovascular mortality in the region, it is important to consider social factors in the CVD risk prediction. We investigated whether adding education and marital status to SCORE benefits its prognostic performance in two sets of population-based CEE/FSU cohorts. The WHO MONICA (MONItoring of trends and determinants in CArdiovascular disease) cohorts from the Czech Republic, Poland (Warsaw and Tarnobrzeg), Lithuania (Kaunas), and Russia (Novosibirsk) were followed from the mid-1980s (577 atherosclerotic CVD deaths among 14,969 participants with non-missing data). The HAPIEE (Health, Alcohol, and Psychosocial factors In Eastern Europe) study follows Czech, Polish (Krakow), and Russian (Novosibirsk) cohorts from 2002-05 (395 atherosclerotic CVD deaths in 19,900 individuals with non-missing data). In MONICA and HAPIEE, the high-risk SCORE ≥5% at baseline strongly and significantly predicted fatal CVD both before and after adjustment for education and marital status. After controlling for SCORE, lower education and non-married status were significantly associated with CVD mortality in some samples. SCORE extension by these additional risk factors only slightly improved indices of calibration and discrimination (integrated discrimination improvement <5% in men and ≤1% in women). Extending SCORE by education and marital status failed to substantially improve its prognostic performance in population-based CEE/FSU cohorts.
Dudek, Dominika; Siwek, Marcin; Jaeschke, Rafał; Drozdowicz, Katarzyna; Styczeń, Krzysztof; Arciszewska, Aleksandra; Chrobak, Adrian A; Rybakowski, Janusz K
2016-06-01
We hypothesised that men and women who engage in extreme or high-risk sports would score higher on standardised measures of bipolarity and impulsivity compared to age- and gender-matched controls. Four hundred and eighty extreme or high-risk athletes (255 males and 225 females) and 235 age-matched control persons (107 males and 128 females) were enrolled into the web-based case-control study. The Mood Disorder Questionnaire (MDQ) and Barratt Impulsiveness Scale (BIS-11) were administered to screen for bipolarity and impulsive behaviours, respectively. Results indicated that extreme or high-risk athletes had significantly higher scores of bipolarity and impulsivity, and lower scores on cognitive complexity of the BIS-11, compared to controls. Further, there were positive correlations between the MDQ and BIS-11 scores. These results showed greater rates of bipolarity and impulsivity in the extreme or high-risk athletes, suggesting these measures are sensitive to high-risk behaviours.
Klein, A A; Collier, T; Yeates, J; Miles, L F; Fletcher, S N; Evans, C; Richards, T
2017-09-01
A simple and accurate scoring system to predict risk of transfusion for patients undergoing cardiac surgery is lacking. We identified independent risk factors associated with transfusion by performing univariate analysis, followed by logistic regression. We then simplified the score to an integer-based system and tested it using the area under the receiver operating characteristic curve (AUC) statistic with a Hosmer-Lemeshow goodness-of-fit test. Finally, the scoring system was applied to the external validation dataset and the same statistical methods were applied to test the accuracy of the ACTA-PORT score. Several factors were independently associated with risk of transfusion, including age, sex, body surface area, logistic EuroSCORE, preoperative haemoglobin and creatinine, and type of surgery. In our primary dataset, the score accurately predicted risk of perioperative transfusion in cardiac surgery patients with an AUC of 0.76. The external validation confirmed the accuracy of the scoring method with an AUC of 0.84 and good agreement across all scores, with a minor tendency to under-estimate transfusion risk in very high-risk patients. The ACTA-PORT score is a reliable, validated tool for predicting risk of transfusion for patients undergoing cardiac surgery. This and other scores can be used in research studies for risk adjustment when assessing outcomes, and might also be incorporated into a Patient Blood Management programme. © The Author 2017. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Agerbo, Esben; Sullivan, Patrick F; Vilhjálmsson, Bjarni J; Pedersen, Carsten B; Mors, Ole; Børglum, Anders D; Hougaard, David M; Hollegaard, Mads V; Meier, Sandra; Mattheisen, Manuel; Ripke, Stephan; Wray, Naomi R; Mortensen, Preben B
2015-07-01
Schizophrenia has a complex etiology influenced both by genetic and nongenetic factors but disentangling these factors is difficult. To estimate (1) how strongly the risk for schizophrenia relates to the mutual effect of the polygenic risk score, parental socioeconomic status, and family history of psychiatric disorders; (2) the fraction of cases that could be prevented if no one was exposed to these factors; (3) whether family background interacts with an individual's genetic liability so that specific subgroups are particularly risk prone; and (4) to what extent a proband's genetic makeup mediates the risk associated with familial background. We conducted a nested case-control study based on Danish population-based registers. The study consisted of 866 patients diagnosed as having schizophrenia between January 1, 1994, and December 31, 2006, and 871 matched control individuals. Genome-wide data and family psychiatric and socioeconomic background information were obtained from neonatal biobanks and national registers. Results from a separate meta-analysis (34,600 cases and 45,968 control individuals) were applied to calculate polygenic risk scores. Polygenic risk scores, parental socioeconomic status, and family psychiatric history. Odds ratios (ORs), attributable risks, liability R2 values, and proportions mediated. Schizophrenia was associated with the polygenic risk score (OR, 8.01; 95% CI, 4.53-14.16 for highest vs lowest decile), socioeconomic status (OR, 8.10; 95% CI, 3.24-20.3 for 6 vs no exposures), and a history of schizophrenia/psychoses (OR, 4.18; 95% CI, 2.57-6.79). The R2 values were 3.4% (95% CI, 2.1-4.6) for the polygenic risk score, 3.1% (95% CI, 1.9-4.3) for parental socioeconomic status, and 3.4% (95% CI, 2.1-4.6) for family history. Socioeconomic status and psychiatric history accounted for 45.8% (95% CI, 36.1-55.5) and 25.8% (95% CI, 21.2-30.5) of cases, respectively. There was an interaction between the polygenic risk score and family history (P = .03). A total of 17.4% (95% CI, 9.1-26.6) of the effect associated with family history of schizophrenia/psychoses was mediated through the polygenic risk score. Schizophrenia was associated with the polygenic risk score, family psychiatric history, and socioeconomic status. Our study demonstrated that family history of schizophrenia/psychoses is partly mediated through the individual's genetic liability.
Personalized Risk Scoring for Critical Care Prognosis Using Mixtures of Gaussian Processes.
Alaa, Ahmed M; Yoon, Jinsung; Hu, Scott; van der Schaar, Mihaela
2018-01-01
In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients' heterogeneity. The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.
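A minimal sketch of the mixture-of-Gaussian-Process-experts idea under strong simplifying assumptions: two hypothetical patient subtypes, synthetic trajectories, and fixed subtype responsibilities (which in the paper are learned from static admission information). This is not the authors' algorithm, only the general construction.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
t = np.linspace(0, 24, 25).reshape(-1, 1)           # hours since ward admission

# One GP expert per latent subtype, trained on that subtype's (synthetic) vital-sign trajectory.
experts = []
for slope in (0.02, 0.15):                          # stable vs deteriorating subtype
    y = 0.2 + slope * t.ravel() + rng.normal(scale=0.05, size=t.shape[0])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(0.01)).fit(t, y)
    experts.append(gp)

def personalized_risk(t_new, responsibilities):
    """Risk forecast as a responsibility-weighted average of the experts' predictions."""
    predictions = np.column_stack([gp.predict(t_new) for gp in experts])
    return predictions @ np.asarray(responsibilities)

# Responsibilities would come from static admission information (age, ICD-9 codes, ...);
# here they are fixed purely for illustration.
print(personalized_risk(np.array([[30.0]]), responsibilities=[0.3, 0.7]))
```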
New scoring system for intra-abdominal injury diagnosis after blunt trauma.
Shojaee, Majid; Faridaalaee, Gholamreza; Yousefifard, Mahmoud; Yaseri, Mehdi; Arhami Dolatabadi, Ali; Sabzghabaei, Anita; Malekirastekenari, Ali
2014-01-01
An accurate scoring system for intra-abdominal injury (IAI) based on clinical manifestation and examination may decrease unnecessary CT scans, save time, and reduce healthcare costs. This study was designed to provide a new scoring system for better diagnosis of IAI after blunt trauma. This prospective observational study was performed from April 2011 to October 2012 on patients aged above 18 years with suspected blunt abdominal trauma (BAT) admitted to the emergency department (ED) of Imam Hussein Hospital and Shohadaye Hafte Tir Hospital. All patients were assessed and treated based on Advanced Trauma Life Support and ED protocols. Diagnosis was made according to CT scan findings, which were considered the gold standard. Data were gathered on the patient's history, physical examination, ultrasound and CT scan findings by a general practitioner who was not blind to this study. Chi-square tests and logistic regression were performed. Factors significantly related to CT scan findings were entered into multivariate regression models, where a coefficient (β) was assigned based on the contribution of each factor. The scoring system was developed from the total β obtained for each factor. Altogether 261 patients (80.1% male) were enrolled (48 cases of IAI). A 24-point blunt abdominal trauma scoring system (BATSS) was developed. Patients were divided into three groups: low (score <8), moderate (8≤score<12) and high risk (score ≥12). In the high-risk group immediate laparotomy should be performed, the moderate-risk group needs further assessment, and the low-risk group should be kept under observation. Low-risk patients did not show positive CT scans (specificity 100%). Conversely, all high-risk patients had positive CT scan findings (sensitivity 100%). The receiver operating characteristic curve indicated a close relationship between the results of CT scan and BATSS (sensitivity = 99.3%). The present scoring system provides a precise and reproducible diagnostic tool for BAT assessment and has the potential to reduce unnecessary CT scans and cut unnecessary costs.
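The management rule attached to the BATSS bands above maps directly to a lookup:

```python
def batss_triage(score):
    """Map a 24-point blunt abdominal trauma score (BATSS) to the management groups above."""
    if score < 8:
        return "low risk: observe"
    if score < 12:
        return "moderate risk: further assessment"
    return "high risk: immediate laparotomy"

print(batss_triage(9))  # moderate risk: further assessment
```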
Cardiovascular Disease Risk Score: Results from the Filipino-American Women Cardiovascular Study.
Ancheta, Irma B; Battie, Cynthia A; Volgman, Annabelle S; Ancheta, Christine V; Palaniappan, Latha
2017-02-01
Although cardiovascular disease (CVD) is a leading cause of morbidity and mortality of Filipino-Americans, conventional CVD risk calculators may not be accurate for this population. CVD risk scores of a group of Filipino-American women (FAW) were measured using the major risk calculators. Secondly, the sensitivity of the various calculators to obesity was determined. This is a cross-sectional descriptive study that enrolled 40-65-year-old FAW (n = 236) during a community-based health screening. Ten-year CVD risk was calculated using the Framingham Risk Score (FRS), Reynolds Risk Score (RRS), and Atherosclerotic Cardiovascular Disease (ASCVD) calculators. The 30-year risk FRS and the lifetime ASCVD calculators were also determined. Levels of predicted CVD risk varied as a function of the calculator. The 10-year ASCVD calculator classified 12% of participants with ≥10% risk, but the 10-year FRS and RRS calculators classified all participants with ≤10% risk. The 30-year "Hard" Lipid and BMI FRS calculators classified 32% and 43% of participants with high (≥20%) risk, respectively, while 95% of participants were classified with ≥20% risk by the lifetime ASCVD calculator. The percent of participants with elevated CVD risk increased as a function of waist circumference for most risk score calculators. Differences in risk score as a function of the risk score calculator indicate the need for outcome studies in this population. Increased waist circumference was associated with increased CVD risk scores, underscoring the need for obesity control as a primary prevention of CVD in FAW.
Hu, Pei Lin; Koh, Yi Ling Eileen; Tan, Ngiap Chuan
2016-12-01
The prevalence of type 2 diabetes mellitus is rising, with many Asian countries featured in the top 10 countries with the highest numbers of persons with diabetes. Reliable diabetes risk scores enable the identification of individuals at risk of developing diabetes for early intervention. This article aims to identify common risk factors in the risk scores with the highest discrimination and the factors with the most influence on the risk score in Asian populations, and to propose a set of factors translatable to the multi-ethnic Singapore population. A systematic search of PubMed and EMBASE databases was conducted to identify studies published before August 2016 that developed risk prediction models for incident diabetes. Twelve studies were identified. Risk scores that included laboratory measurements had better discrimination. Coefficient analysis showed fasting glucose and HbA1c having the greatest impact on the risk score. A proposed Asian risk score would include: family history of diabetes, age, gender, smoking status, body mass index, waist circumference, hypertension, fasting plasma glucose, HbA1c, HDL-cholesterol and triglycerides. Future research is required on the influence of ethnicity in Singapore. The risk score may potentially be used to stratify individuals for enrolment into diabetes prevention programmes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Antic, Darko; Milic, Natasa; Nikolovski, Srdjan; Todorovic, Milena; Bila, Jelena; Djurdjevic, Predrag; Andjelic, Bosko; Djurasinovic, Vladislava; Sretenovic, Aleksandra; Vukovic, Vojin; Jelicic, Jelena; Hayman, Suzanne; Mihaljevic, Biljana
2016-10-01
Lymphoma patients are at increased risk of thromboembolic events, but thromboprophylaxis in these patients is largely underused. We sought to develop and validate a simple model, based on individual clinical and laboratory patient characteristics, that would designate lymphoma patients at risk for thromboembolic events. The study population included 1,820 lymphoma patients who were treated in the Lymphoma Departments at the Clinics of Hematology, Clinical Center of Serbia and Clinical Center Kragujevac. The model was developed using data from a derivation cohort (n = 1,236), and further assessed in the validation cohort (n = 584). Sixty-five patients (5.3%) in the derivation cohort and 34 (5.8%) patients in the validation cohort developed thromboembolic events. The variables independently associated with risk for thromboembolism were: previous venous and/or arterial events, mediastinal involvement, BMI >30 kg/m², reduced mobility, extranodal localization, development of neutropenia and hemoglobin level <100 g/L. Based on the risk model score, the population was divided into the following risk categories: low (score 0-1), intermediate (score 2-3), and high (score >3). For patients classified at risk (intermediate and high-risk scores), the model produced a negative predictive value of 98.5%, positive predictive value of 25.1%, sensitivity of 75.4%, and specificity of 87.5%. A high-risk score had a positive predictive value of 65.2%. The diagnostic performance measures retained similar values in the validation cohort. The developed prognostic Thrombosis Lymphoma (ThroLy) score is more specific for lymphoma patients than any other available score targeting thrombosis in cancer patients. Am. J. Hematol. 91:1014-1019, 2016. © 2016 Wiley Periodicals, Inc.
Lin, Daniel W; Crawford, E David; Keane, Thomas; Evans, Brent; Reid, Julia; Rajamani, Saradha; Brown, Krystal; Gutin, Alexander; Tward, Jonathan; Scardino, Peter; Brawer, Michael; Stone, Steven; Cuzick, Jack
2018-06-01
A combined clinical cell-cycle risk (CCR) score that incorporates prognostic molecular and clinical information has been recently developed and validated to improve prostate cancer mortality (PCM) risk stratification over clinical features alone. As clinical features are currently used to select men for active surveillance (AS), we developed and validated a CCR score threshold to improve the identification of men with low-risk disease who are appropriate for AS. The score threshold was selected based on the 90th percentile of CCR scores among men who might typically be considered for AS based on NCCN low/favorable-intermediate risk criteria (CCR = 0.8). The threshold was validated using 10-year PCM in an unselected, conservatively managed cohort and in the subset of the same cohort after excluding men with high-risk features. The clinical effect was evaluated in a contemporary clinical cohort. In the unselected validation cohort, men with CCR scores below the threshold had a predicted mean 10-year PCM of 2.7%, and the threshold significantly dichotomized low- and high-risk disease (P = 1.2 × 10⁻⁵). After excluding high-risk men from the validation cohort, men with CCR scores below the threshold had a predicted mean 10-year PCM of 2.3%, and the threshold significantly dichotomized low- and high-risk disease (P = 0.020). There were no prostate cancer-specific deaths in men with CCR scores below the threshold in either analysis. The proportion of men in the clinical testing cohort identified as candidates for AS was substantially higher using the threshold (68.8%) compared to clinicopathologic features alone (42.6%), while mean 10-year predicted PCM risks remained essentially identical (1.9% vs. 2.0%, respectively). The CCR score threshold appropriately dichotomized patients into low- and high-risk groups for 10-year PCM, and may enable more appropriate selection of patients for AS. Copyright © 2018 Elsevier Inc. All rights reserved.
Clinical predictors of risk for atrial fibrillation: implications for diagnosis and monitoring.
Brunner, Kyle J; Bunch, T Jared; Mullin, Christopher M; May, Heidi T; Bair, Tami L; Elliot, David W; Anderson, Jeffrey L; Mahapatra, Srijoy
2014-11-01
To create a risk score using clinical factors to determine whom to screen and monitor for atrial fibrillation (AF). The AF risk score was developed based on the summed odds ratios (ORs) for AF development of 7 accepted clinical risk factors. The AF risk score is intended to assess the risk of AF similarly to how the CHA2DS2-VASc score assesses stroke risk. Seven validated risk factors for AF were used to develop the AF risk score: age, coronary artery disease, diabetes mellitus, sex, heart failure, hypertension, and valvular disease. The AF risk score was tested within a random population sample of the Intermountain Healthcare outpatient database. Outcomes were stratified by AF risk score for OR and Kaplan-Meier analysis. A total of 100,000 patient records with an index follow-up from January 1, 2002, through December 31, 2007, were selected and followed up for the development of AF through the time of this analysis, May 13, 2013, through September 6, 2013. Mean ± SD follow-up time was 3106±819 days. The ORs of subsequent AF diagnosis of patients with AF risk scores of 1, 2, 3, 4, and 5 or higher were 3.05, 12.9, 22.8, 34.0, and 48.0, respectively. The area under the curve statistic for the AF risk score was 0.812 (95% CI, 0.805-0.820). We developed a simple AF risk score made up of common clinical factors that may be useful for selecting patients for long-term monitoring for AF detection. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
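A minimal sketch of the two steps described above — summing points for the clinical risk factors present in a patient and estimating the odds ratio of incident AF for a score stratum against score 0 from a 2×2 table. The factor list, point values and example counts are placeholders, not Intermountain data.

```python
# One point per accepted clinical risk factor (placeholder factor list).
factor_points = {
    "advanced_age": 1, "coronary_artery_disease": 1, "diabetes": 1,
    "male_sex": 1, "heart_failure": 1, "hypertension": 1, "valvular_disease": 1,
}

def af_risk_score(present):
    """Sum points for the risk factors flagged True for one patient."""
    return sum(factor_points[f] for f, has in present.items() if has)

def odds_ratio(events_a, n_a, events_b, n_b):
    """OR of group A vs group B from event counts and group sizes."""
    a, b = events_a, n_a - events_a
    c, d = events_b, n_b - events_b
    return (a * d) / (b * c)

patient = {"advanced_age": True, "coronary_artery_disease": False,
           "diabetes": False, "male_sex": True, "heart_failure": True,
           "hypertension": True, "valvular_disease": False}
print(af_risk_score(patient))                                # 4
# OR of incident AF for a score stratum vs score 0 (made-up counts).
print(round(odds_ratio(events_a=220, n_a=1000, events_b=30, n_b=1000), 2))
```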
Psoriasis and cardiovascular risk. Assessment by different cardiovascular risk scores.
Fernández-Torres, R; Pita-Fernández, S; Fonseca, E
2013-12-01
Psoriasis is an inflammatory disease associated with an increased risk of cardiovascular morbidity and mortality. However, very few studies determine cardiovascular risk by means of the Framingham risk score or other indices more appropriate for countries with a lower prevalence of cardiovascular risk factors. The aims were to determine multiple cardiovascular risk scores in psoriasis patients, to examine the relation between cardiovascular risk and psoriasis features, and to compare our results with those in the literature. We assessed demographic data, smoking status, psoriasis features, blood pressure and analytical data. Cardiovascular risk was determined by means of Framingham, SCORE, DORICA and REGICOR scores. A total of 395 patients (59.7% men and 40.3% women) aged 18-86 years were included. The proportion of patients at intermediate and high risk of suffering a major cardiovascular event in the next 10 years was 30.5% and 11.4%, respectively, based on the Framingham risk score; 26.9% and 2.2% according to DORICA and 6.8% and 0% using the REGICOR score. According to the SCORE index, 22.1% of patients had a high risk of death due to a cardiovascular event over the next 10 years. Cardiovascular risk was not related to psoriasis characteristics, except for the Framingham index, with higher risk in patients with more severe psoriasis (P = 0.032). A considerable proportion of patients had intermediate or high cardiovascular risk, with no relevant relationship to psoriasis characteristics or treatment schedules. Therefore, systematic evaluation of cardiovascular risk scores in all psoriasis patients could be useful to identify those with increased cardiovascular risk who may benefit from lifestyle changes or therapeutic interventions. © 2012 The Authors. Journal of the European Academy of Dermatology and Venereology © 2012 European Academy of Dermatology and Venereology.
Risk assessment of Pakistani individuals for diabetes (RAPID).
Riaz, Musarrat; Basit, Abdul; Hydrie, Muhammad Zafar Iqbal; Shaheen, Fariha; Hussain, Akhtar; Hakeem, Rubina; Shera, Abdus Samad
2012-12-01
To develop and evaluate a risk score to predict people at high risk of developing type 2 diabetes in Pakistan. Cross-sectional data from a primary prevention of diabetes study in Pakistan were used. The diabetes risk score was developed using simple parameters, namely age, waist circumference, and family history of diabetes. Odds ratios from the model were used to assign a score value to each variable, and the diabetes risk score was calculated as the sum of those scores. We externally validated the score using data from 1264 subjects and 856 subjects aged 25 years and above from two separate studies. Validating this score using the first dataset, from the second screening study, gave an area under the receiver operating characteristic curve [AROC] of 0.758. A cut point of 4 had a sensitivity of 47.0% and specificity of 88%; in the second dataset the AROC was 0.7 with 44% sensitivity and 89% specificity. A simple diabetes risk score, based on a small set of variables, can be used for the identification of high-risk individuals for early intervention to delay or prevent type 2 diabetes in the Pakistani population. Copyright © 2012 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
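The validation metrics reported here — an area under the ROC curve (AROC) for the score and sensitivity/specificity at a chosen cut point — can be reproduced on simulated data roughly as follows; the simulated outcome and score below are assumptions for illustration only, not RAPID data.

```python
# Sketch: rank-based AROC plus sensitivity/specificity at a cut point,
# computed on simulated score/outcome data.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n = 1000
diabetes = rng.random(n) < 0.12                    # simulated outcome
score = rng.integers(0, 8, n) + 2 * diabetes       # higher score if diabetic

def auc(y, s):
    """Mann-Whitney AUC: probability a case outranks a non-case."""
    r = rankdata(s)                                # tie-aware ranks
    n_pos, n_neg = y.sum(), (~y).sum()
    return (r[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def sens_spec(y, s, cut):
    pred = s >= cut
    sens = (pred & y).sum() / y.sum()
    spec = (~pred & ~y).sum() / (~y).sum()
    return sens, spec

print(round(auc(diabetes, score), 3))
print([round(x, 2) for x in sens_spec(diabetes, score, cut=4)])
```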
Nykanen, David G; Forbes, Thomas J; Du, Wei; Divekar, Abhay A; Reeves, Jaxk H; Hagler, Donald J; Fagan, Thomas E; Pedra, Carlos A C; Fleming, Gregory A; Khan, Danyal M; Javois, Alexander J; Gruenstein, Daniel H; Qureshi, Shakeel A; Moore, Phillip M; Wax, David H
2016-02-01
We sought to develop a scoring system that predicts the risk of serious adverse events (SAEs) for individual pediatric patients undergoing cardiac catheterization procedures. Systematic assessment of the risk of SAEs in pediatric catheterization can be challenging in view of a wide variation in procedure and patient complexity as well as rapidly evolving technology. A 10-component scoring system was originally developed based on expert consensus and review of the existing literature. Data from an international multi-institutional catheterization registry (CCISC) between 2008 and 2013 were used to validate this scoring system. In addition, we used multivariate methods to further refine the original risk score and improve its power to predict SAEs. Univariate analysis confirmed the strong correlation of each of the 10 components of the original risk score with SAEs attributed to a pediatric cardiac catheterization (P < 0.001 for all variables). Multivariate analysis resulted in a modified risk score (CRISP) that increased the area under the receiver operating characteristic curve (AUC) from 0.715 to 0.741. The CRISP score predicts the risk of occurrence of an SAE for individual patients undergoing pediatric cardiac catheterization procedures. © 2015 Wiley Periodicals, Inc.
Construction of an Exome-Wide Risk Score for Schizophrenia Based on a Weighted Burden Test.
Curtis, David
2018-01-01
Polygenic risk scores obtained as a weighted sum of associated variants can be used to explore association in additional data sets and to assign risk scores to individuals. The methods used to derive polygenic risk scores from common SNPs are not suitable for variants detected in whole exome sequencing studies. Rare variants, which may have major effects, are seen too infrequently to judge whether they are associated and may not be shared between training and test subjects. A method is proposed whereby variants are weighted according to their frequency, their annotations and the genes they affect. A weighted sum across all variants provides an individual risk score. Scores constructed in this way are used in a weighted burden test and are shown to be significantly different between schizophrenia cases and controls using a five-way cross-validation procedure. This approach represents a first attempt to summarise exome sequence variation into a summary risk score, which could be combined with risk scores from common variants and from environmental factors. It is hoped that the method could be developed further. © 2017 John Wiley & Sons Ltd/University College London.
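A rough sketch of the frequency- and annotation-weighted burden score described above: each variant receives a weight that grows as its allele frequency falls and as its predicted impact and gene weight rise, and an individual's score is the weighted sum over the variants they carry. The weighting function, impact categories and gene weights below are illustrative assumptions, not the published method's exact choices.

```python
# Conceptual weighted burden score for rare variants.
import numpy as np

def variant_weight(allele_freq, annotation, gene_weight=1.0):
    impact = {"synonymous": 0.1, "missense": 1.0, "lof": 3.0}[annotation]
    # Rarer variants get larger weights; a simple inverse-frequency form is
    # used here for readability.
    return gene_weight * impact / np.sqrt(allele_freq * (1 - allele_freq))

variants = [  # (allele frequency, annotation, gene weight) -- placeholders
    (0.001, "lof", 1.5), (0.01, "missense", 1.0), (0.05, "synonymous", 1.0),
]
weights = np.array([variant_weight(*v) for v in variants])

# Genotype matrix: rows = individuals, columns = variants, entries = 0/1/2.
genotypes = np.array([[0, 1, 2],
                      [1, 0, 0],
                      [0, 0, 1]])

risk_scores = genotypes @ weights   # one exome-wide burden score per person
print(np.round(risk_scores, 2))
```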
Heidegger, Isabel; Porres, Daniel; Veek, Nica; Heidenreich, Axel; Pfister, David
2017-01-01
Malignancies and cisplatin-based chemotherapy are both known to correlate with a high risk of venous thrombotic events (VTT). In testicular cancer, the incidence of and reasons for VTT in patients undergoing cisplatin-based chemotherapy remain a matter of debate. Moreover, no risk factors for developing a VTT during cisplatin-based chemotherapy have been elucidated so far. We retrospectively analyzed 153 patients with testicular cancer undergoing cisplatin-based chemotherapy at our institution for the development of a VTT during or after chemotherapy. Clinical and pathological parameters for identifying possible risk factors for VTT were analyzed. The Khorana risk score was used to calculate the risk of VTT. Student's t test was used to assess the statistical significance of differences between treatment groups. Twenty-six out of 153 patients (17%) developed a VTT during chemotherapy. When we analyzed the risk factors for developing a VTT, we found that Lugano stage ≥IIc was significantly (p = 0.0006) correlated with the risk of developing a VTT during chemotherapy. On calculating the VTT risk using the Khorana risk score model, we found that only 2 out of 26 patients (7.7%) were in the high-risk Khorana group (≥3). Patients with testicular cancer with a high tumor volume have a significant risk of developing a VTT with cisplatin-based chemotherapy. The Khorana risk score is not an accurate tool for predicting VTT in testicular cancer. © 2017 S. Karger AG, Basel.
Brautbar, Ariel; Pompeii, Lisa A; Dehghan, Abbas; Ngwa, Julius S; Nambi, Vijay; Virani, Salim S; Rivadeneira, Fernando; Uitterlinden, André G; Hofman, Albert; Witteman, Jacqueline C M; Pencina, Michael J; Folsom, Aaron R; Cupples, L Adrienne; Ballantyne, Christie M; Boerwinkle, Eric
2012-08-01
Multiple studies have identified single-nucleotide polymorphisms (SNPs) that are associated with coronary heart disease (CHD). We examined whether SNPs selected based on predefined criteria will improve CHD risk prediction when added to traditional risk factors (TRFs). SNPs were selected from the literature based on association with CHD, lack of association with a known CHD risk factor, and successful replication. A genetic risk score (GRS) was constructed based on these SNPs. Cox proportional hazards model was used to calculate CHD risk based on the Atherosclerosis Risk in Communities (ARIC) and Framingham CHD risk scores with and without the GRS. The GRS was associated with risk for CHD (hazard ratio [HR] = 1.10; 95% confidence interval [CI]: 1.07-1.13). Addition of the GRS to the ARIC risk score significantly improved discrimination, reclassification, and calibration beyond that afforded by TRFs alone in non-Hispanic whites in the ARIC study. The area under the receiver operating characteristic curve (AUC) increased from 0.742 to 0.749 (Δ = 0.007; 95% CI, 0.004-0.013), and the net reclassification index (NRI) was 6.3%. Although the risk estimates for CHD in the Framingham Offspring (HR = 1.12; 95% CI: 1.10-1.14) and Rotterdam (HR = 1.08; 95% CI: 1.02-1.14) Studies were significantly improved by adding the GRS to TRFs, improvements in AUC and NRI were modest. Addition of a GRS based on direct associations with CHD to TRFs significantly improved discrimination and reclassification in white participants of the ARIC Study, with no significant improvement in the Rotterdam and Framingham Offspring Studies. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
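A genetic risk score of this kind is typically a weighted count of risk alleles, with per-SNP weights taken from published effect sizes. A minimal sketch under that assumption follows; the SNP identifiers and odds ratios are placeholders, not the variants used in the study above.

```python
# Hedged sketch of building a multilocus genetic risk score (GRS): per-SNP
# weight = log of its published odds ratio; GRS = weighted risk-allele count.
import numpy as np

snp_or = {"rs0000001": 1.20, "rs0000002": 1.12, "rs0000003": 1.08}
weights = np.log(np.array(list(snp_or.values())))

# Risk-allele counts (0/1/2) per person for the three SNPs above.
alleles = np.array([[2, 1, 0],
                    [0, 1, 1],
                    [1, 2, 2]])

grs = alleles @ weights
grs_per_sd = (grs - grs.mean()) / grs.std()   # effects are often reported per SD
print(np.round(grs, 3), np.round(grs_per_sd, 2))
```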
USDA-ARS's Scientific Manuscript database
Objective: To determine the extent to which the risk for incident coronary heart disease (CHD) increases in relation to a genetic risk score (GRS) that additively integrates the influence of high-risk alleles in nine documented single nucleotide polymorphisms (SNPs) for CHD, and to examine whether t...
Koppad, Anand K; Kaulgud, Ram S; Arun, B S
2017-09-01
It has been observed that metabolic syndrome is a risk factor for Coronary Artery Disease (CAD) and exerts its effects through fat deposition and vascular aging. CAD has been acknowledged as a leading cause of death. In earlier studies, the metabolic risk has been estimated by the Framingham risk score. Recent studies have shown that Neck Circumference (NC) has a good correlation with other traditional anthropometric measurements and can be used as a marker of obesity. It also correlates with the Framingham risk score, which is a slightly more sophisticated measure of CAD risk. To assess the risk of CAD in a subject based on NC and to correlate the NC to the Framingham risk score. The present cross-sectional study, done at Karnataka Institute of Medical Sciences, Hubli, Karnataka, India, includes 100 subjects. The study duration was one year, from 1st January 2015 to 31st December 2015. The anthropometric indices Body Mass Index (BMI) and NC were correlated with 10 year CAD risk as calculated by the Framingham risk score. The correlation between BMI, NC, vascular age and Framingham risk score was calculated using Karl Pearson's correlation method. NC has a strong correlation with 10 year CAD risk (p≤0.001). NC was significantly greater in males as compared to females (p≤0.001). Males had greater risk of cardiovascular disease as reflected by higher 10 year Framingham risk score (p≤0.0035). NC gives a simple and easy prediction of CAD risk and is more reliable than traditional risk markers like BMI. NC correlates positively with the 10 year Framingham risk score.
Validation of the German Diabetes Risk Score within a population-based representative cohort.
Hartwig, S; Kuss, O; Tiller, D; Greiser, K H; Schulze, M B; Dierkes, J; Werdan, K; Haerting, J; Kluttig, A
2013-09-01
To validate the German Diabetes Risk Score within the population-based cohort of the Cardiovascular Disease - Living and Ageing in Halle (CARLA) study. The sample included 582 women and 719 men, aged 45-83 years, who did not have diabetes at baseline. The individual risk of every participant was calculated using the German Diabetes Risk Score, which was modified for 4 years of follow-up. Predicted probabilities and observed outcomes were compared using Hosmer-Lemeshow goodness-of-fit tests and receiver-operator characteristic analyses. Changes in prediction power were investigated by expanding the German Diabetes Risk Score to include metabolic variables and by subgroup analyses. We found 58 cases of incident diabetes. The median 4-year probability of developing diabetes based on the German Diabetes Risk Score was 6.5%. The observed and predicted probabilities of developing diabetes were similar, although estimation was imprecise owing to the small number of cases, and the Hosmer-Lemeshow test indicated poor fit (chi-squared = 55.3; P = 5.8 × 10⁻¹²). The area under the receiver-operator characteristic curve (AUC) was 0.70 (95% CI 0.64-0.77), and after excluding participants ≥66 years old, the AUC increased to 0.77 (95% CI 0.70-0.84). Consideration of glycaemic diagnostic variables, in addition to self-reported diabetes, reduced the AUC to 0.65 (95% CI 0.58-0.71). A new model that included the German Diabetes Risk Score and blood glucose concentration (AUC 0.81; 95% CI 0.76-0.86) or HbA1c concentration (AUC 0.84; 95% CI 0.80-0.91) was found to perform better. Application of the German Diabetes Risk Score in the CARLA cohort did not reproduce the findings in the European Prospective Investigation into Cancer and Nutrition (EPIC) Potsdam study, which may be explained by cohort differences and model overfit in the latter; however, a high score does provide an indication of increased risk of diabetes. © 2013 The Authors. Diabetic Medicine © 2013 Diabetes UK.
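One of the two checks used in this validation, the Hosmer-Lemeshow goodness-of-fit test across risk deciles, can be illustrated on simulated, deliberately miscalibrated data as below; the data, group count and degrees-of-freedom convention are assumptions, not CARLA values.

```python
# Sketch of a Hosmer-Lemeshow test: compare observed and expected event
# counts within deciles of predicted risk, then refer the statistic to a
# chi-squared distribution.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 1300
pred = rng.beta(2, 25, n)                  # predicted 4-year diabetes risk
obs = rng.random(n) < pred * 1.3           # mildly miscalibrated outcomes

def hosmer_lemeshow(p, y, groups=10):
    edges = np.quantile(p, np.linspace(0, 1, groups + 1))
    idx = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, groups - 1)
    stat = 0.0
    for g in range(groups):
        m = idx == g
        o, e, n_g = y[m].sum(), p[m].sum(), m.sum()
        stat += (o - e) ** 2 / (e * (1 - e / n_g))
    return stat, chi2.sf(stat, groups - 2)   # df convention is an assumption

stat, pval = hosmer_lemeshow(pred, obs)
print(round(stat, 1), pval)
```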
2011-01-01
Background It is desirable that those at highest risk of cardiovascular disease should have priority for preventive measures, e.g. treatment with prescription drugs to modify their risk. We wanted to investigate to what extent present use of cardiovascular medication (CVM) correlates with cardiovascular risk estimated by three different risk scores (Framingham, SCORE and NORRISK) ten years ago. Methods Prospective longitudinal observational study of 20,252 participants in The Hordaland Health Study born 1950-57, not using CVM in 1997-99. Prescription data were obtained from The Norwegian Prescription Database in 2008. Results 26% of men and 22% of women aged 51-58 years had started to use some CVM during the previous decade. As a group, persons using CVM scored significantly higher on the risk algorithms Framingham, SCORE and NORRISK compared to those not treated. 16-20% of men and 20-22% of women with risk scores below the high-risk thresholds for the three risk scores were treated with CVM, while 60-65% of men and 25-45% of women with scores above the high-risk thresholds received no treatment. Among women using CVM, only 2.2% (NORRISK), 4.4% (SCORE) and 14.5% (Framingham) had risk scores above the high-risk values. Low education, poor self-reported general health, muscular pains, mental distress (in females only) and a family history of premature cardiovascular disease correlated with use of CVM. Elevated blood pressure was the single factor most strongly predictive of CVM treatment. Conclusion Prescription of CVM to middle-aged individuals by and large seems to occur independently of estimated total cardiovascular risk, and this applies especially to females. PMID:21366925
Alizai, Hamza; Roemer, Frank W; Hayashi, Daichi; Crema, Michel D; Felson, David T; Guermazi, Ali
2015-03-01
Arthroscopy-based semiquantitative scoring systems such as Outerbridge and Noyes' scores were the first to be developed for the purpose of grading cartilage defects. As magnetic resonance imaging (MRI) became available for evaluation of the osteoarthritic knee joint, these systems were adapted for use with MRI. Later on, grading methods such as the Whole Organ Magnetic Resonance Score, the Boston-Leeds Osteoarthritis Knee Score and the MRI Osteoarthritis Knee Score were designed specifically for performing whole-organ assessment of the knee joint structures, including cartilage. Cartilage grades on MRI obtained with these scoring systems represent optimal outcome measures for longitudinal studies, and are designed to enhance understanding of the knee osteoarthritis disease process. The purpose of this narrative review is to describe cartilage assessment in knee osteoarthritis using currently available MRI-based semiquantitative whole-organ scoring systems, and to provide an update on the risk factors for cartilage loss in knee osteoarthritis as assessed with these scoring systems.
Kengkla, K; Charoensuk, N; Chaichana, M; Puangjan, S; Rattanapornsompong, T; Choorassamee, J; Wilairat, P; Saokaew, S
2016-05-01
Extended spectrum β-lactamase-producing Escherichia coli (ESBL-EC) has important implications for infection control and empiric antibiotic prescribing. This study aims to develop a risk scoring system for predicting ESBL-EC infection based on local epidemiology. The study retrospectively included eligible patients with a positive culture for E. coli from 2011 to 2014. The risk scoring system was developed using variables independently associated with ESBL-EC infection through logistic regression-based prediction. The area under the receiver-operator characteristic curve (AuROC) was determined to confirm the predictive power of the model. Predictors for ESBL-EC infection were male gender [odds ratio (OR): 1.53], age ≥55 years (OR: 1.50), healthcare-associated infection (OR: 3.21), hospital-acquired infection (OR: 2.28), sepsis (OR: 1.79), prolonged hospitalization (OR: 1.88), history of ESBL infection within one year (OR: 7.88), prior use of broad-spectrum cephalosporins within three months (OR: 12.92), and prior use of other antibiotics within three months (OR: 2.14). Points scored ranged from 0 to 47, and were divided into three groups based on diagnostic performance parameters: low risk (score: 0-8; 44.57%), moderate risk (score: 9-11; 21.85%) and high risk (score: ≥12; 33.58%). The model displayed moderate predictive power (AuROC: 0.773; 95% confidence interval: 0.742-0.805) and good calibration (Hosmer-Lemeshow χ² = 13.29; P = 0.065). This tool may optimize the prescribing of empirical antibiotic therapy, minimize time to identify patients, and prevent the spread of ESBL-EC. Prior to adoption into routine clinical practice, further validation of the tool is needed. Copyright © 2016 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Predicting risk of substantial weight gain in German adults—a multi-center cohort approach
Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf
2017-01-01
Background A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. Methods We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32,204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score’s discriminatory accuracy. Results The cross-validated c index (95% CI) was 0.71 (0.67–0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. Conclusions The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. PMID:28013243
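Discrimination of a linear Cox-based score of this kind is summarised by the concordance (c) index. A simple sketch of Harrell's c-index with administrative censoring on simulated data follows; the score distribution, follow-up model and 6-year horizon are assumptions, not values from the cohorts above.

```python
# Harrell's c-index: among comparable pairs (the subject with the shorter
# follow-up had the event), count pairs where that subject also has the
# higher risk score; ties in score count 0.5.
import numpy as np

def c_index(score, time, event):
    num = den = 0.0
    n = len(score)
    for i in range(n):
        if not event[i]:
            continue
        for j in range(n):
            if time[j] > time[i]:          # j outlived i -> comparable pair
                den += 1
                if score[i] > score[j]:
                    num += 1
                elif score[i] == score[j]:
                    num += 0.5
    return num / den

rng = np.random.default_rng(2)
n = 300
score = rng.normal(size=n) * 100 + 400                 # linear predictor in "points"
time = rng.exponential(10 * np.exp(-score / 400), n)   # higher score -> earlier gain
event = time < 6                                       # events within the horizon
time = np.minimum(time, 6)                             # administrative censoring
print(round(c_index(score, time, event), 2))
```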
NASA Astrophysics Data System (ADS)
Otieno, George A.; Loosen, Alexander E.
2016-05-01
Concentrated Solar Power projects have impacts on the local environment and social conditions. This research set out to investigate the environmental and social risks in the development of such projects and rank these risks from highest to lowest. The risks were analysed for parabolic trough and tower technologies only. A literature review was undertaken, identifying seventeen risks that were then proposed to six CSP experts for scoring. The risks were scored based on five factors on a five-tier scale. The scores from the experts were compiled to develop an overall rank of the identified risks. The risk of disruption of local water resources was found to represent the highest risk before and after mitigation, with scores of moderate-high and moderate, respectively. This score is linked to the importance of water in water-scarce regions typified by the best regions for CSP. The risks to avian species, to worker health and safety, from noise on the environment, and to visual and recreational resources completed the top five risks after mitigation.
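The compilation step — averaging each risk's factor ratings across experts and sorting from highest to lowest — is simple to express; the risk names and ratings below are simulated stand-ins for the study's expert scores, not its data.

```python
# Aggregate expert ratings (1-5 scale, several factors per risk) into an
# overall ranking of risks.
import numpy as np

risks = ["water resources", "avian species", "worker safety", "noise", "visual"]
rng = np.random.default_rng(3)
# ratings[expert, risk, factor]: 6 experts x 5 risks x 5 factors.
ratings = rng.integers(1, 6, size=(6, len(risks), 5))

overall = ratings.mean(axis=(0, 2))            # mean over experts and factors
ranking = sorted(zip(risks, overall), key=lambda r: -r[1])
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```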
Yahng, Seung-Ah; Jang, Eun-Jung; Choi, Soo-Young; Lee, Sung-Eun; Kim, Soo-Hyun; Kim, Dong-Wook
2014-08-01
Beyond the conventional Sokal and Euro scores, a new prognostic risk classification, based on the European Treatment Outcome Study (EUTOS), has been developed to predict the outcome of treatment with tyrosine kinase inhibitors (TKI) in chronic myeloid leukemia (CML). In the present study, each risk score was validated by various endpoints in 206 Korean patients with early chronic-phase CML treated with up-front standard-dose imatinib. In our analysis, all three scores were found to be valid. The 5-year event-free survival (EFS) was significantly discriminated using Sokal (P = 0.002), Euro (P = 0.003), and EUTOS (P = 0.029), with the worst probability by Euro high-risk (62 vs. 49 vs. 67 %) and better EFS in Sokal low-risk (89 vs. 86 vs. 82 %). Combining all scores identified 6 % of all patients as homogeneously high-risk, with distinctly worse outcomes (5-year EFS of 41 %, cumulative complete cytogenetic response rate of 56 %, and cumulative major molecular response rate of 27 %), whereas the group with discordant risk scores (60 %) had similar results to those of the intermediate-risk groups of the Sokal and Euro scores. Combining all risk scores for baseline risk assessment may be useful in clinical practice for identifying groups of patients who may benefit from treatment initiation with a more potent TKI among the currently available first-line TKIs.
Pang, Hui; Han, Bing; Fu, Qiang; Zong, Zhenkun
2017-07-05
The presence of acute myocardial infarction (AMI) confers a poor prognosis in atrial fibrillation (AF) and is associated with dramatically increased mortality. This study aimed to evaluate the predictive value of CHADS2 and CHA2DS2-VASc scores for AMI in patients with AF. This retrospective study enrolled 5140 consecutive nonvalvular AF patients, 300 patients with AMI and 4840 patients without AMI. We identified the optimal cut-off values of the CHADS2 and CHA2DS2-VASc scores each based on receiver operating characteristic curves to predict the risk of AMI. Both the CHADS2 score and CHA2DS2-VASc score were associated with an increased odds ratio of the prevalence of AMI in patients with AF, after adjustment for hyperlipidaemia, hyperuricemia, hyperthyroidism, hypothyroidism and obstructive sleep apnea. The present results showed that the area under the curve (AUC) for the CHADS2 score was 0.787, with accuracy similar to that of the CHA2DS2-VASc score (AUC 0.750), in predicting "high-risk" AF patients who developed AMI. However, the predictive accuracy of the two clinically based risk scores was fair. The CHA2DS2-VASc score has fair predictive value for identifying high-risk patients with AF and is not significantly superior to CHADS2 in predicting patients who develop AMI.
Simple Scoring System to Predict In-Hospital Mortality After Surgery for Infective Endocarditis.
Gatti, Giuseppe; Perrotti, Andrea; Obadia, Jean-François; Duval, Xavier; Iung, Bernard; Alla, François; Chirouze, Catherine; Selton-Suty, Christine; Hoen, Bruno; Sinagra, Gianfranco; Delahaye, François; Tattevin, Pierre; Le Moing, Vincent; Pappalardo, Aniello; Chocron, Sidney
2017-07-20
Non-disease-specific scoring systems are used to predict the risk of death after surgery in patients with infective endocarditis (IE). The purpose of the present study was both to analyze the risk factors for in-hospital death complicating surgery for IE and to create a mortality risk score based on the results of this analysis. Outcomes of 361 consecutive patients (mean age, 59.1±15.4 years) who had undergone surgery for IE in 8 European centers of cardiac surgery were recorded prospectively, and a risk factor analysis (multivariable logistic regression) for in-hospital death was performed. The discriminatory power of a new predictive scoring system was assessed with receiver operating characteristic curve analysis. Score validation procedures were carried out. Fifty-six (15.5%) patients died after surgery. BMI >27 kg/m² (odds ratio [OR], 1.79; P = 0.049), estimated glomerular filtration rate <50 mL/min (OR, 3.52; P < 0.0001), New York Heart Association class IV (OR, 2.11; P = 0.024), systolic pulmonary artery pressure >55 mm Hg (OR, 1.78; P = 0.032), and critical state (OR, 2.37; P = 0.017) were independent predictors of in-hospital death. A scoring system was devised to predict in-hospital death after surgery for IE (area under the receiver operating characteristic curve, 0.780; 95% CI, 0.734-0.822). The score performed better than 5 of 6 scoring systems for in-hospital death after cardiac surgery that were considered. A simple scoring system based on risk factors for in-hospital death was specifically created to predict mortality risk after surgery in patients with IE. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
Sonne, Michael; Villalta, Dino L; Andrews, David M
2012-01-01
The Rapid Office Strain Assessment (ROSA) was designed to quickly quantify risks associated with computer work and to establish an action level for change based on reports of worker discomfort. Computer use risk factors were identified in previous research and standards on office design for the chair, monitor, telephone, keyboard and mouse. The risk factors were diagrammed and coded as increasing scores from 1 to 3. ROSA final scores ranged in magnitude from 1 to 10, with each successive score representing an increased presence of risk factors. Total body discomfort and ROSA final scores for 72 office workstations were significantly correlated (R = 0.384). ROSA final scores exhibited high inter- and intra-observer reliability (ICCs of 0.88 and 0.91, respectively). Mean discomfort increased with increasing ROSA scores, with a significant difference occurring between scores of 3 and 5 (out of 10). A ROSA final score of 5 might therefore be useful as an action level indicating when immediate change is necessary. ROSA proved to be an effective and reliable method for identifying computer use risk factors related to discomfort. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Enhancing the Value of Population-Based Risk Scores for Institutional-Level Use.
Raza, Sajjad; Sabik, Joseph F; Rajeswaran, Jeevanantham; Idrees, Jay J; Trezzi, Matteo; Riaz, Haris; Javadikasgari, Hoda; Nowicki, Edward R; Svensson, Lars G; Blackstone, Eugene H
2016-07-01
We hypothesized that factors associated with an institution's residual risk unaccounted for by population-based models may be identifiable and used to enhance the value of population-based risk scores for quality improvement. From January 2000 to January 2010, 4,971 patients underwent aortic valve replacement (AVR), either isolated (n = 2,660) or with concomitant coronary artery bypass grafting (AVR+CABG; n = 2,311). Operative mortality and major morbidity and mortality predicted by The Society of Thoracic Surgeons (STS) risk models were compared with observed values. After adjusting for patients' STS score, additional and refined risk factors were sought to explain residual risk. Differences between STS model coefficients (risk-factor strength) and those specific to our institution were calculated. Observed operative mortality was less than predicted for AVR (1.6% [42 of 2,660] vs 2.8%, p < 0.0001) and AVR+CABG (2.6% [59 of 2,311] vs 4.9%, p < 0.0001). Observed major morbidity and mortality was also lower than predicted for isolated AVR (14.6% [389 of 2,660] vs 17.5%, p < 0.0001) and AVR+CABG (20.0% [462 of 2,311] vs 25.8%, p < 0.0001). Shorter height, higher bilirubin, and lower albumin were identified as additional institution-specific risk factors, and body surface area, creatinine, glomerular filtration rate, blood urea nitrogen, and heart failure across all levels of functional class were identified as refined risk-factor variables associated with residual risk. In many instances, risk-factor strength differed substantially from that of STS models. Scores derived from population-based models can be enhanced for institutional level use by adjusting for institution-specific additional and refined risk factors. Identifying these and measuring differences in institution-specific versus population-based risk-factor strength can identify areas to target for quality improvement initiatives. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Raffetti, Elena; Donato, Francesco; Pezzoli, Chiara; Digiambenedetto, Simona; Bandera, Alessandra; Di Pietro, Massimo; Di Filippo, Elisa; Maggiolo, Franco; Sighinolfi, Laura; Fornabaio, Chiara; Castelnuovo, Filippo; Ladisa, Nicoletta; Castelli, Francesco; Quiros Roldan, Eugenia
2015-08-15
Recently, some systemic inflammation-based biomarkers have been demonstrated useful for predicting risk of death in patients with solid cancer independently of tumor characteristics. This study aimed to investigate the prognostic role of systemic inflammation-based biomarkers in HIV-infected patients with solid tumors and to propose a risk score for mortality in these subjects. Clinical and pathological data on solid AIDS-defining cancer (ADC) and non-AIDS-defining cancer (NADC), diagnosed between 1998 and 2012 in an Italian cohort, were analyzed. To evaluate the prognostic role of systemic inflammation- and nutrition-based markers, univariate and multivariable Cox regression models were applied. To compute the risk score equation, the patients were randomly assigned to a derivation and a validation sample. A total of 573 patients (76.3% males) with a mean age of 46.2 years (SD = 10.3) were enrolled. 178 patients died during a median of 3.2 years of follow-up. For solid NADCs, elevated Glasgow Prognostic Score, modified Glasgow Prognostic Score, neutrophil/lymphocyte ratio, platelet/lymphocyte ratio, and Prognostic Nutritional Index were independently associated with risk of death; for solid ADCs, none of these markers was associated with risk of death. For solid NADCs, we computed a mortality risk score on the basis of age at cancer diagnosis, intravenous drug use, and Prognostic Nutritional Index. The areas under the receiver operating characteristic curve were 0.67 (95% confidence interval: 0.58 to 0.75) in the derivation sample and 0.66 (95% confidence interval: 0.54 to 0.79) in the validation sample. Inflammatory biomarkers were associated with risk of death in HIV-infected patients with solid NADCs but not with ADCs.
Joint use of cardio-embolic and bleeding risk scores in elderly patients with atrial fibrillation.
Marcucci, Maura; Nobili, Alessandro; Tettamanti, Mauro; Iorio, Alfonso; Pasina, Luca; Djade, Codjo D; Franchi, Carlotta; Marengoni, Alessandra; Salerno, Francesco; Corrao, Salvatore; Violi, Francesco; Mannucci, Pier Mannuccio
2013-12-01
Scores for cardio-embolic and bleeding risk in patients with atrial fibrillation are described in the literature. However, it is not clear how they co-classify elderly patients with multimorbidity, nor whether and how they affect the physician's decision on thromboprophylaxis. Four scores for cardio-embolic and bleeding risks were retrospectively calculated for patients aged ≥65 years with atrial fibrillation enrolled in the REPOSI registry. The co-classification of patients according to risk categories based on different score combinations was described and the relationship between risk categories tested. The association between the antithrombotic therapy received and the scores was investigated by logistic regressions and CART analyses. At admission, among 543 patients the median scores (range) were: CHADS2, 2 (0-6); CHA2DS2-VASc, 4 (1-9); HEMORR2HAGES, 3 (0-7); HAS-BLED, 2 (1-6). Most of the patients were at high cardio-embolic/high-intermediate bleeding risk (70.5% combining CHADS2 and HEMORR2HAGES, 98.3% combining CHA2DS2-VASc and HAS-BLED). 50-60% of patients were classified in a cardio-embolic risk category higher than the bleeding risk category. In univariate and multivariable analyses, a higher bleeding score was negatively associated with warfarin prescription, and positively associated with aspirin prescription. The cardio-embolic scores were associated with the therapeutic choice only after adjusting for bleeding score or age. REPOSI patients represented a population at high cardio-embolic and bleeding risks, but most of them were classified by the scores as having a higher cardio-embolic than bleeding risk. Yet, prescription and type of antithrombotic therapy appeared to be primarily dictated by the bleeding risk. © 2013.
Predictive accuracy of combined genetic and environmental risk scores.
Dudbridge, Frank; Pashayan, Nora; Yang, Jian
2018-02-01
The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. © 2017 WILEY PERIODICALS, INC.
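The two combination strategies discussed — simple addition of standardized scores versus a weighted sum whose weights reflect each score's (possibly correlated) association with disease liability — can be compared on simulated liability-scale data as sketched below; the effect sizes, correlation and prevalence are arbitrary assumptions, not the paper's parameters.

```python
# Combine a polygenic score G and an environmental score E on simulated data
# and compare discrimination (AUC) of E alone, G+E, and a weighted sum.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(4)
n = 20000
rho = 0.2                                       # correlation between G and E
g = rng.normal(size=n)
e = rho * g + np.sqrt(1 - rho**2) * rng.normal(size=n)
liability = 0.5 * g + 0.4 * e + rng.normal(size=n)
disease = liability > np.quantile(liability, 0.9)   # 10% prevalence

def auc(y, s):
    r = rankdata(s)
    npos, nneg = y.sum(), (~y).sum()
    return (r[y].sum() - npos * (npos + 1) / 2) / (npos * nneg)

simple_sum = g + e
# Weighted sum: weights from regressing liability on (G, E), which accounts
# for their correlation.
w = np.linalg.solve(np.cov(np.vstack([g, e])),
                    [np.cov(g, liability)[0, 1], np.cov(e, liability)[0, 1]])
weighted = w[0] * g + w[1] * e

for name, s in [("E alone", e), ("simple G+E", simple_sum), ("weighted", weighted)]:
    print(name, round(auc(disease, s), 3))
```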
A Simple Model to Rank Shellfish Farming Areas Based on the Risk of Disease Introduction and Spread.
Thrush, M A; Pearce, F M; Gubbins, M J; Oidtmann, B C; Peeler, E J
2017-08-01
The European Union Council Directive 2006/88/EC requires that risk-based surveillance (RBS) for listed aquatic animal diseases is applied to all aquaculture production businesses. The principle behind this is the efficient use of resources directed towards high-risk farm categories, animal types and geographic areas. To achieve this requirement, fish and shellfish farms must be ranked according to their risk of disease introduction and spread. We present a method to risk rank shellfish farming areas based on the risk of disease introduction and spread and demonstrate how the approach was applied in 45 shellfish farming areas in England and Wales. Ten parameters were used to inform the risk model, which were grouped into four risk themes based on related pathways for transmission of pathogens: (i) live animal movement, (ii) transmission via water, (iii) short distance mechanical spread (birds) and (iv) long distance mechanical spread (vessels). Weights (informed by expert knowledge) were applied both to individual parameters and to risk themes for introduction and spread to reflect their relative importance. A spreadsheet model was developed to determine quantitative scores for the risk of pathogen introduction and risk of pathogen spread for each shellfish farming area. These scores were used to independently rank areas for risk of introduction and for risk of spread. Thresholds were set to establish risk categories (low, medium and high) for introduction and spread based on risk scores. Risk categories for introduction and spread for each area were combined to provide overall risk categories to inform a risk-based surveillance programme directed at the area level. Applying the combined risk category designation framework for risk of introduction and spread suggested by European Commission guidance for risk-based surveillance, 4, 10 and 31 areas were classified as high, medium and low risk, respectively. © 2016 Crown copyright.
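A schematic version of the area-level model — parameter scores grouped into weighted themes, separate introduction and spread scores, banding into low/medium/high and a combined surveillance category — might look like the following; the theme weights, thresholds and combining rule are placeholders, not the published values.

```python
# Weighted-theme risk score for a shellfish farming area, banded and combined
# into an overall surveillance category.
theme_weights = {"live_movement": 0.4, "water": 0.3, "birds": 0.15, "vessels": 0.15}

def area_score(param_scores):
    """param_scores: theme -> mean of its parameter scores on a 0-1 scale."""
    return sum(theme_weights[t] * s for t, s in param_scores.items())

def band(score, low=0.33, high=0.66):
    return "low" if score < low else "medium" if score < high else "high"

def overall_category(intro_band, spread_band):
    order = ["low", "medium", "high"]
    # Stand-in rule: take the higher of the two bands (EC guidance combines
    # them in a matrix; this is only an approximation).
    return order[max(order.index(intro_band), order.index(spread_band))]

intro = area_score({"live_movement": 0.8, "water": 0.5, "birds": 0.2, "vessels": 0.1})
spread = area_score({"live_movement": 0.3, "water": 0.6, "birds": 0.4, "vessels": 0.2})
print(band(intro), band(spread), overall_category(band(intro), band(spread)))
```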
Chao, Tze-Fan; Lip, Gregory Y H; Lin, Yenn-Jiang; Chang, Shih-Lin; Lo, Li-Wei; Hu, Yu-Feng; Tuan, Ta-Chuan; Liao, Jo-Nan; Chung, Fa-Po; Chen, Tzeng-Ji; Chen, Shih-Ann
2018-03-01
While modifiable bleeding risks should be addressed in all patients with atrial fibrillation (AF), use of a bleeding risk score enables clinicians to 'flag up' those at risk of bleeding for more regular patient contact reviews. We compared a risk assessment strategy for major bleeding and intracranial hemorrhage (ICH) based on modifiable bleeding risk factors (referred to as an 'MBR factors' score) against established bleeding risk stratification scores (HEMORR2HAGES, HAS-BLED, ATRIA, ORBIT). A nationwide cohort study of 40,450 AF patients who received warfarin for stroke prevention was performed. The clinical endpoints included ICH and major bleeding. Bleeding scores were compared using receiver operating characteristic (ROC) curves (areas under the ROC curves [AUCs], or c-index) and the net reclassification index (NRI). During a follow-up of 4.60±3.62 years, 1581 (3.91%) patients sustained ICH and 6889 (17.03%) patients sustained major bleeding events. All tested bleeding risk scores at baseline were higher in those sustaining major bleeds. When compared to no ICH, patients sustaining ICH had higher baseline HEMORR2HAGES (p=0.003), HAS-BLED (p<0.001) and MBR factors score (p=0.013) but not ATRIA and ORBIT scores. When HAS-BLED was compared to other bleeding scores, c-indexes were significantly higher compared to MBR factors (p<0.001) and ORBIT (p=0.05) scores for major bleeding. The c-index for the MBR factors score was significantly lower compared to all other scores (DeLong test, all p<0.001). When NRI was performed, HAS-BLED outperformed all other bleeding risk scores for major bleeding (all p<0.001). C-indexes for ATRIA and ORBIT scores suggested no significant prediction for ICH. All contemporary bleeding risk scores had modest predictive value for predicting major bleeding, but the best predictive value and NRI was found for the HAS-BLED score. Simply depending on modifiable bleeding risk factors had suboptimal predictive value for the prediction of major bleeding in AF patients, when compared to the HAS-BLED score. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Upper gastrointestinal bleeding risk scores: Who, when and why?
Monteiro, Sara; Gonçalves, Tiago Cúrdia; Magalhães, Joana; Cotter, José
2016-01-01
Upper gastrointestinal bleeding (UGIB) remains a significant cause of hospital admission. In order to stratify patients according to the risk of complications such as rebleeding or death, and to predict the need for clinical intervention, several risk scores have been proposed and their use consistently recommended by international guidelines. The use of risk scoring systems in early assessment of patients suffering from UGIB may be useful to distinguish high-risk patients, who may need clinical intervention and hospitalization, from low-risk patients with a lower chance of developing complications, in whom outpatient management can be considered. Although several scores have been published and validated for predicting different outcomes, the most frequently cited ones are the Rockall score and the Glasgow Blatchford score (GBS). While the Rockall score, which incorporates clinical and endoscopic variables, has been validated to predict mortality, the GBS, which is based on clinical and laboratory parameters, has been studied to predict the need for clinical intervention. Despite the advantages previously reported, their use in clinical decisions is still limited. This review describes the different risk scores used in the UGIB setting, highlights the most important research, explains why and when their use may be helpful, reflects on the problems that remain unresolved and guides future research with practical impact. PMID:26909231
Genetic markers enhance coronary risk prediction in men: the MORGAM prospective cohorts.
Hughes, Maria F; Saarela, Olli; Stritzke, Jan; Kee, Frank; Silander, Kaisa; Klopp, Norman; Kontto, Jukka; Karvanen, Juha; Willenborg, Christina; Salomaa, Veikko; Virtamo, Jarmo; Amouyel, Phillippe; Arveiler, Dominique; Ferrières, Jean; Wiklund, Per-Gunner; Baumert, Jens; Thorand, Barbara; Diemert, Patrick; Trégouët, David-Alexandre; Hengstenberg, Christian; Peters, Annette; Evans, Alun; Koenig, Wolfgang; Erdmann, Jeanette; Samani, Nilesh J; Kuulasmaa, Kari; Schunkert, Heribert
2012-01-01
More accurate coronary heart disease (CHD) prediction, specifically in middle-aged men, is needed to reduce the burden of disease more effectively. We hypothesised that a multilocus genetic risk score could refine CHD prediction beyond classic risk scores and obtain more precise risk estimates using a prospective cohort design. Using data from nine prospective European cohorts, including 26,221 men, we selected in a case-cohort setting 4,818 healthy men at baseline, and used Cox proportional hazards models to examine associations between CHD and risk scores based on genetic variants representing 13 genomic regions. Over follow-up (range: 5-18 years), 1,736 incident CHD events occurred. Genetic risk scores were validated in men with at least 10 years of follow-up (632 cases, 1361 non-cases). Genetic risk score 1 (GRS1) combined 11 SNPs and two haplotypes, with effect estimates from previous genome-wide association studies. GRS2 combined 11 SNPs plus 4 SNPs from the haplotypes with coefficients estimated from these prospective cohorts using 10-fold cross-validation. Scores were added to a model adjusted for classic risk factors comprising the Framingham risk score and 10-year risks were derived. Both scores improved net reclassification (NRI) over the Framingham score (7.5%, p = 0.017 for GRS1, 6.5%, p = 0.044 for GRS2) but GRS2 also improved discrimination (c-index improvement 1.11%, p = 0.048). Subgroup analysis on men aged 50-59 (436 cases, 603 non-cases) improved net reclassification for GRS1 (13.8%) and GRS2 (12.5%). Net reclassification improvement remained significant for both scores when family history of CHD was added to the baseline model for this male subgroup improving prediction of early onset CHD events. Genetic risk scores add precision to risk estimates for CHD and improve prediction beyond classic risk factors, particularly for middle aged men.
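Reclassification gains of this kind are quantified with the categorical net reclassification improvement (NRI): cases moving to a higher risk category and non-cases moving lower count as correct reclassification. A small sketch follows; the simulated risks and the conventional 10%/20% categories are assumptions, not MORGAM data.

```python
# Categorical NRI comparing a baseline risk model with baseline + GRS.
import numpy as np

def risk_category(p, cuts=(0.10, 0.20)):
    return np.digitize(p, cuts)          # 0: <10%, 1: 10-20%, 2: >20%

def nri(p_old, p_new, y):
    old_c, new_c = risk_category(p_old), risk_category(p_new)
    up, down = new_c > old_c, new_c < old_c
    nri_events = up[y].mean() - down[y].mean()
    nri_nonevents = down[~y].mean() - up[~y].mean()
    return nri_events + nri_nonevents

rng = np.random.default_rng(5)
n = 2000
y = rng.random(n) < 0.15                                   # incident CHD
p_old = np.clip(rng.beta(2, 10, n) + 0.05 * y, 0, 1)       # baseline model risks
p_new = np.clip(p_old + rng.normal(0.02 * (2 * y - 1), 0.03, n), 0, 1)  # + GRS
print(round(nri(p_old, p_new, y), 3))
```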
Isma'eel, Hussain A; Almedawar, Mohamad M; Harbieh, Bernard; Alajaji, Wissam; Al-Shaar, Laila; Hourani, Mukbil; El-Merhi, Fadi; Alam, Samir; Abchee, Antoine
2015-10-01
The use of the Coronary Artery Calcium Score (CACS) for risk categorization instead of the Framingham Risk Score (FRS) or European Heart SCORE (EHS) to improve classification of individuals is well documented. However, the impact of reclassifying individuals using CACS on initiating lipid lowering therapy is not well understood. We aimed to determine the percentage of individuals not requiring lipid lowering therapy as per the FRS and EHS models but are found to require it using CACS and vice versa; and to determine the level of agreement between CACS, FRS and EHS based models. Data was collected for 500 consecutive patients who had already undergone CACS. However, only 242 patients met the inclusion criteria and were included in the analysis. Risk stratification comparisons were conducted according to CACS, FRS, and EHS, and the agreement (Kappa) between them was calculated. In accordance with the models, 79.7% to 81.5% of high-risk individuals were down-classified by CACS, while 6.8% to 7.6% of individuals at intermediate risk were up-classified to high risk by CACS, with slight to moderate agreement. Moreover, CACS recommended treatment to 5.7% and 5.8% of subjects untreated according to European and Canadian guidelines, respectively; whereas 75.2% to 81.2% of those treated in line with the guidelines would not be treated based on CACS. In this simulation, using CACS for risk categorization warrants lipid lowering treatment for 5-6% and spares 70-80% from treatment in accordance with the guidelines. Current strong evidence from double randomized clinical trials is in support of guideline recommendations. Our results call for a prospective trial to explore the benefits/risks of a CACS-based approach before any recommendations can be made.
Development of a self-assessment score for metabolic syndrome risk in non-obese Korean adults.
Je, Youjin; Kim, Youngyo; Park, Taeyoung
2017-03-01
There is a need for simple risk scores that identify individuals at high risk for metabolic syndrome (MetS). Therefore, this study was performed to develop and validate a self-assessment score for MetS risk in non-obese Korean adults. Data from the fourth Korea National Health and Nutrition Examination Survey (KNHANES IV), 2007-2009 were used to develop a MetS risk score. We included a total of 5,508 non-obese participants aged 19-64 years who were free of a self-reported diagnosis of diabetes, hyperlipidemia, hypertension, stroke, angina, or cancer. Multivariable logistic regression model coefficients were used to assign each variable category a score. The validity of the score was assessed in an independent population survey performed in 2010 and 2011, KNHANES V (n=3,892). Age, BMI, physical activity, smoking, alcohol consumption, dairy consumption, dietary habit of eating less salty food, and food insecurity were selected as categorical variables. The MetS risk score value varied from 0 to 13, and a cut-point MetS risk score of ≥7 was selected based on the highest Youden index. The cut-point provided a sensitivity of 81%, specificity of 61%, positive predictive value of 14%, and negative predictive value of 98%, with an area under the curve (AUC) of 0.78. Consistent results were obtained in the validation data sets. This simple risk score may be used to identify individuals at high risk for MetS without laboratory tests among non-obese Korean adults. Further studies are needed to verify the usefulness and feasibility of this score in various settings.
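Cut-point selection by the Youden index (sensitivity + specificity - 1), together with the accompanying predictive values, can be sketched as follows on simulated data; the simulated score distribution and MetS prevalence are assumptions, not KNHANES values.

```python
# Pick the score cut-point maximising the Youden index and report sensitivity,
# specificity, PPV and NPV at that cut-point.
import numpy as np

rng = np.random.default_rng(6)
n = 4000
mets = rng.random(n) < 0.15
score = rng.integers(0, 11, n) + 3 * mets      # 0-13 point score, higher if MetS

best = None
for cut in range(int(score.min()), int(score.max()) + 1):
    pred = score >= cut
    sens = (pred & mets).sum() / mets.sum()
    spec = (~pred & ~mets).sum() / (~mets).sum()
    youden = sens + spec - 1
    if best is None or youden > best[0]:
        ppv = (pred & mets).sum() / max(pred.sum(), 1)
        npv = (~pred & ~mets).sum() / max((~pred).sum(), 1)
        best = (youden, cut, sens, spec, ppv, npv)

print("cut >= %d: sens %.2f spec %.2f PPV %.2f NPV %.2f" % best[1:])
```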
Factors predicting high estimated 10-year stroke risk: thai epidemiologic stroke study.
Hanchaiphiboolkul, Suchat; Puthkhao, Pimchanok; Towanabut, Somchai; Tantirittisak, Tasanee; Wangphonphatthanasiri, Khwanrat; Termglinchan, Thanes; Nidhinandana, Samart; Suwanwela, Nijasri Charnnarong; Poungvarin, Niphon
2014-08-01
The purpose of the study was to determine the factors predicting high estimated 10-year stroke risk based on a risk score and, among the risk factors comprising the risk score, which factors had a greater impact on the estimated risk. The Thai Epidemiologic Stroke study was a community-based cohort study, which recruited participants from the general population in 5 regions of Thailand. Cross-sectional baseline data of 16,611 participants aged 45-69 years who had no history of stroke were included in this analysis. Multiple logistic regression analysis was used to identify the predictors of high estimated 10-year stroke risk based on the risk score of the Japan Public Health Center Study, which estimates the projected 10-year risk of incident stroke. Educational level, low personal income, occupation, geographic area, alcohol consumption, and hypercholesterolemia were significantly associated with high estimated 10-year stroke risk. Among these factors, the unemployed/housework class had the highest odds ratio (OR, 3.75; 95% confidence interval [CI], 2.47-5.69), followed by the illiterate class (OR, 2.30; 95% CI, 1.44-3.66). Among the risk factors comprising the risk score, the greatest impact as a stroke risk factor corresponded to age, followed by male sex, diabetes mellitus, systolic blood pressure, and current smoking. Socioeconomic status, in particular unemployed/housework and illiterate class, might be a good proxy to identify individuals at higher risk of stroke. The most powerful risk factors were older age, male sex, diabetes mellitus, systolic blood pressure, and current smoking. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.
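The odds ratios above come from multiple logistic regression; as a reminder of the arithmetic, a small Python sketch converting a coefficient and its standard error into an odds ratio with a 95% confidence interval (the beta and SE below are hypothetical, not values reported by the study):

import math

def odds_ratio_ci(beta, se, z=1.96):
    # OR = exp(beta); confidence limits from exp(beta -/+ z * SE)
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# a hypothetical beta of 1.32 with SE 0.215 gives an OR of roughly 3.7 (95% CI about 2.5-5.7)
print(tuple(round(v, 2) for v in odds_ratio_ci(1.32, 0.215)))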
Yuan, Shaoxin; Gao, Yusong; Ji, Wenqing; Song, Junshuai; Mei, Xue
2018-05-01
The aim of this study was to assess the ability of the acute physiology and chronic health evaluation II (APACHE II) score, the poisoning severity score (PSS) and the sequential organ failure assessment (SOFA) score, combined with lactate (Lac), to predict mortality in Emergency Department (ED) patients poisoned with organophosphate. A retrospective review of 59 standards-compliant patients was carried out. Receiver operating characteristic (ROC) curves were constructed based on the APACHE II score, PSS, and SOFA score, each with or without Lac, and the areas under the ROC curve (AUCs) were determined to assess predictive value. According to the SOFA-Lac (a combination of SOFA and Lac) classification standard, acute organophosphate pesticide poisoning (AOPP) patients were divided into low-risk and high-risk groups. Mortality rates were then compared between risk levels. Between survivors and non-survivors, there were significant differences in the APACHE II score, PSS, SOFA score, and Lac (all P < .05). The AUCs of the APACHE II score, PSS, and SOFA score were 0.876, 0.811, and 0.837, respectively. However, after combining with Lac, the AUCs were 0.922, 0.878, and 0.956, respectively. According to SOFA-Lac, mortality in the high-risk group was significantly higher than in the low-risk group (P < .05), and the patients in the non-survival group were all at high risk. These data suggest that the APACHE II score, PSS, and SOFA score can all predict the prognosis of AOPP patients. For its simplicity and objectivity, the SOFA score is a superior predictor. Lac significantly improved the predictive abilities of the 3 scoring systems, especially the SOFA score. The SOFA-Lac system effectively distinguished the high-risk group from the low-risk group. Therefore, the SOFA-Lac system is significantly better at predicting mortality in AOPP patients.
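The AUC values above summarize how well each score separates non-survivors from survivors. A minimal Python sketch of the rank-based (Mann-Whitney) AUC calculation, using invented composite scores rather than the study's 59 patients:

def auc(scores_positive, scores_negative):
    # Probability that a randomly chosen positive (non-survivor) outranks a randomly chosen negative (survivor).
    pairs = [(p, n) for p in scores_positive for n in scores_negative]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

nonsurvivor_scores = [10, 12, 14, 9]       # hypothetical SOFA-Lac composite values
survivor_scores = [3, 5, 7, 11, 8, 4]      # hypothetical SOFA-Lac composite values
print(round(auc(nonsurvivor_scores, survivor_scores), 3))  # 0.917 for these toy values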
Aoki, Tomonori; Nagata, Naoyoshi; Shimbo, Takuro; Niikura, Ryota; Sakurai, Toshiyuki; Moriyasu, Shiori; Okubo, Hidetaka; Sekine, Katsunori; Watanabe, Kazuhiro; Yokoi, Chizu; Yanase, Mikio; Akiyama, Junichi; Mizokami, Masashi; Uemura, Naomi
2016-11-01
We aimed to develop and validate a risk scoring system to determine the risk of severe lower gastrointestinal bleeding (LGIB) and predict patient outcomes. We first performed a retrospective analysis of data from 439 patients emergently hospitalized for acute LGIB at the National Center for Global Health and Medicine in Japan, from January 2009 through December 2013. We used data on comorbidities, medication, presenting symptoms, vital signs, and laboratory test results to develop a scoring system for severe LGIB (defined as continuous and/or recurrent bleeding). We validated the risk score in a prospective study of 161 patients with acute LGIB admitted to the same center from April 2014 through April 2015. We assessed the system's accuracy in predicting patient outcome using area under the receiver operating characteristic curve (AUC) analysis. All patients underwent colonoscopy. In the first study, 29% of the patients developed severe LGIB. We devised a risk scoring system based on nonsteroidal anti-inflammatory drug use, no diarrhea, no abdominal tenderness, blood pressure of 100 mm Hg or lower, antiplatelet drug use, albumin level less than 3.0 g/dL, disease scores of 2 or higher, and syncope (NOBLADS), all of which were independent correlates of severe LGIB. Severe LGIB developed in 75.7% of patients with scores of 5 or higher compared with 2% of patients without any of the factors correlated with severe LGIB (P < .001). The NOBLADS score determined the severity of LGIB with an AUC value of 0.77. In the validation (second) study, severe LGIB developed in 35% of patients; the NOBLADS score predicted the severity of LGIB with an AUC value of 0.76. Higher NOBLADS scores were associated with a requirement for blood transfusion, longer hospital stay, and intervention (P < .05 for trend). We developed and validated a scoring system for risk of severe LGIB based on 8 factors (NOBLADS score). The system also determined the risk for blood transfusion, longer hospital stay, and intervention. It might be used in decision making regarding intervention and management. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
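The eight NOBLADS factors each contribute one point, giving a 0-8 score. A hedged Python sketch of that tally; the field names are illustrative, and the published score should be consulted for exact definitions:

def noblads(patient):
    factors = [
        patient["nsaid_use"],
        patient["no_diarrhea"],
        patient["no_abdominal_tenderness"],
        patient["systolic_bp_mmHg"] <= 100,
        patient["antiplatelet_use"],
        patient["albumin_g_dl"] < 3.0,
        patient["disease_score"] >= 2,
        patient["syncope"],
    ]
    return sum(bool(f) for f in factors)

example = {"nsaid_use": True, "no_diarrhea": True, "no_abdominal_tenderness": False,
           "systolic_bp_mmHg": 95, "antiplatelet_use": False, "albumin_g_dl": 2.8,
           "disease_score": 2, "syncope": False}
print(noblads(example))  # 5; scores of 5 or higher carried a 75.7% risk of severe LGIB in the derivation cohort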
Baskaran, Lohendran; Danad, Ibrahim; Gransar, Heidi; Ó Hartaigh, Bríain; Schulman-Marcus, Joshua; Lin, Fay Y; Peña, Jessica M; Hunter, Amanda; Newby, David E; Adamson, Philip D; Min, James K
2018-04-13
This study sought to compare the performance of history-based risk scores in predicting obstructive coronary artery disease (CAD) among patients with stable chest pain from the SCOT-HEART study. Risk scores for estimating the pre-test probability of CAD are derived from referral-based populations with a high prevalence of disease. The generalizability of these scores to lower-prevalence populations in the initial patient encounter for chest pain is uncertain. We compared 3 scores among patients with suspected CAD in the coronary computed tomographic angiography (CTA) randomized arm of the SCOT-HEART study for the outcome of obstructive CAD by coronary CTA: the updated Diamond-Forrester score (UDF), CAD Consortium clinical score (CAD2), and CONFIRM risk score (CRS). We tested calibration with goodness-of-fit, discrimination with area under the receiver-operating characteristic curve (AUC), and reclassification with net reclassification improvement (NRI) to identify low-risk patients. In 1,738 patients (58 ± 10 years; 44.0% women), overall calibration was best for UDF, with underestimation by CRS and CAD2. Discrimination by AUC was higher for CAD2, at 0.79 (95% confidence interval [CI]: 0.77 to 0.81), than for UDF (0.77 [95% CI: 0.74 to 0.79]) or CRS (0.75 [95% CI: 0.73 to 0.77]) (p < 0.001 for both comparisons). Reclassification of low-risk patients at the 10% probability threshold was best for CAD2 (NRI 0.31, 95% CI: 0.27 to 0.35) followed by CRS (NRI 0.21, 95% CI: 0.17 to 0.25) compared with UDF (p < 0.001 for all comparisons), with a consistent trend at the 15% threshold. In this multicenter clinic-based cohort of patients with suspected CAD and uniform CAD evaluation by coronary CTA, CAD2 provided the best discrimination and classification, despite overestimation of obstructive CAD as evaluated by coronary CTA. CRS exhibited intermediate performance, followed by UDF, for discrimination and reclassification. Copyright © 2018. Published by Elsevier Inc.
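Net reclassification improvement at a single threshold, as used above, credits events that move up and non-events that move down across the cut-off. A small Python sketch with invented predicted probabilities from an 'old' and a 'new' model:

def nri(old, new, y, threshold=0.10):
    up = [n >= threshold > o for o, n in zip(old, new)]     # crossed the threshold upward
    down = [o >= threshold > n for o, n in zip(old, new)]   # crossed the threshold downward
    events = [i for i, yi in enumerate(y) if yi == 1]
    nonevents = [i for i, yi in enumerate(y) if yi == 0]
    nri_events = (sum(up[i] for i in events) - sum(down[i] for i in events)) / len(events)
    nri_nonevents = (sum(down[i] for i in nonevents) - sum(up[i] for i in nonevents)) / len(nonevents)
    return nri_events + nri_nonevents

old = [0.05, 0.08, 0.12, 0.20, 0.09, 0.15]
new = [0.11, 0.06, 0.09, 0.25, 0.13, 0.18]
y = [1, 0, 0, 1, 1, 0]
print(round(nri(old, new, y, 0.10), 3))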
The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool
Stephen, Cook; Benjamin, Longo-Mbenza
2013-01-01
AIM It is difficult for Optometrists and General Practitioners to know which patients are at risk of glaucoma. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator that has been developed to determine Glaucoma risk at the time of screening. Multiple risk factors that are available in a low-tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational. It is a free web-based service. Data capture is user specific. METHOD The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk for the presence of Glaucoma at the time of screening. Three categories of patient are described: Unlikely to have Glaucoma; Glaucoma Suspect; and Glaucoma. A case review methodology of patients with a known diagnosis is employed to validate the calculator risk assessment. RESULTS Data from the patient records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistical analysis showed: Sensitivity = 88%; Positive predictive value = 97%; Specificity = 75%. CONCLUSION Analysis of the first 400 patients validates the web-based screening tool as a good method of screening the at-risk population. The validation is ongoing. The web-based format will allow more widespread recruitment across different geographic, population and personnel variables. PMID:23550097
Paediatric nutrition risk scores in clinical practice: children with inflammatory bowel disease.
Wiskin, A E; Owens, D R; Cornelius, V R; Wootton, S A; Beattie, R M
2012-08-01
There has been increasing interest in the use of nutrition risk assessment tools in paediatrics to identify those who need nutrition support. Four non-disease specific screening tools have been developed, although there is a paucity of data on their application in clinical practice and the degree of inter-tool agreement. The concurrent validity of four nutrition screening tools [Screening Tool for the Assessment of Malnutrition in Paediatrics (STAMP), Screening Tool for Risk On Nutritional status and Growth (STRONGkids), Paediatric Yorkhill Malnutrition Score (PYMS) and Simple Paediatric Nutrition Risk Score (PNRS)] was examined in 46 children with inflammatory bowel disease. Degree of malnutrition was determined by anthropometry alone using World Health Organization International Classification of Diseases (ICD-10) criteria. There was good agreement between STAMP, STRONGkids and PNRS (kappa > 0.6) but there was only modest agreement between PYMS and the other scores (kappa = 0.3). No children scored low risk with STAMP, STRONGkids or PNRS; however, 23 children scored low risk with PYMS. There was no agreement between the risk tools and the degree of malnutrition based on anthropometric data (kappa < 0.1). Three children had anthropometry consistent with malnutrition and these were all scored high risk. Four children had body mass index SD scores < -2, one of which was scored at low nutrition risk. The relevance of nutrition screening tools for children with chronic disease is unclear. In addition, there is the potential to under recognise nutritional impairment (and therefore nutritional risk) in children with inflammatory bowel disease. © 2012 The Authors. Journal of Human Nutrition and Dietetics © 2012 The British Dietetic Association Ltd.
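The inter-tool agreement above is quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal Python sketch for two binary screening results (ratings invented, not the study's):

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a = sum(rater_a) / n          # proportion rated 'at risk' by tool A
    p_b = sum(rater_b) / n          # proportion rated 'at risk' by tool B
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

tool_a = [1, 1, 1, 0, 1, 1, 0, 1]
tool_b = [1, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(tool_a, tool_b), 2))  # 0.43 for these toy ratings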
Functional Movement Screen: Pain versus composite score and injury risk.
Alemany, Joseph A; Bushman, Timothy T; Grier, Tyson; Anderson, Morgan K; Canham-Chervak, Michelle; North, William J; Jones, Bruce H
2017-11-01
The Functional Movement Screen (FMS™) has been used as a screening tool to determine musculoskeletal injury risk using composite scores based on movement quality and/or pain. However, no direct comparisons between movement quality and pain have been quantified. Retrospective injury data analysis. Male Soldiers (n=2154; 25.0±1.3 years; 26.2±0.7 kg/m²) completed the FMS, which comprises seven movements, each scored from 0 points (pain) to 3 points (no pain and perfect movement quality). Injury data were collected over the six months following FMS completion. Sensitivity, specificity, receiver operating characteristics and positive and negative predictive values were calculated for pain occurrence and low (≤14 points) composite score. Risk, risk ratios (RR) and 95% confidence intervals were calculated for injury risk. Pain was associated with slightly higher injury risk (RR=1.62) than a composite score of ≤14 points (RR=1.58). When comparing injury risk between those who scored a 1, 2 or 3 on each individual movement, no differences were found (except deep squat). However, Soldiers who experienced pain on any movement had a greater injury risk than those who scored 3 points for that movement (p<0.05). Relative risk increased progressively as the number of movements on which pain occurred increased (p<0.01). Pain occurrence may be a stronger indicator of injury risk than a low composite score and provides a simpler method of evaluating injury risk compared to the full FMS. Published by Elsevier Ltd.
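The risk ratios above are ratios of injury incidence between exposed and unexposed groups. A worked Python illustration with invented counts (not the cohort's data):

def relative_risk(injured_exposed, total_exposed, injured_unexposed, total_unexposed):
    # Risk in the exposed group (e.g. pain on any FMS movement) divided by risk in the unexposed group.
    return (injured_exposed / total_exposed) / (injured_unexposed / total_unexposed)

print(round(relative_risk(65, 200, 390, 1954), 2))  # about 1.63 with these made-up counts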
Reversal of Hartmann's procedure: a high-risk operation?
Schmelzer, Thomas M; Mostafa, Gamal; Norton, H James; Newcomb, William L; Hope, William W; Lincourt, Amy E; Kercher, Kent W; Kuwada, Timothy S; Gersin, Keith S; Heniford, B Todd
2007-10-01
Patients who undergo Hartmann's procedure often do not have their colostomy closed based on the perceived risk of the operation. This study evaluated the outcome of reversal of Hartmann's procedure based on preoperative risk factors. We retrospectively reviewed adult patients who underwent reversal of Hartmann's procedure at our tertiary referral institution. Patient outcomes were compared based on identified risk factors (age >60 years, American Society of Anesthesiologists [ASA] score >2, and >2 preoperative comorbidities). One hundred thirteen patients were included. Forty-four patients (39%) had an ASA score of ≥3. The mean hospital length of stay (LOS) was 6.8 days. There were 28 (25%) postoperative complications and no mortality. Patients >60 years old had significantly longer LOS compared with the rest of the group (P = .02). There were no differences in outcomes between groups based on ASA score or the presence of multiple preoperative comorbidities. An albumin level of <3.5 was the only significant predictor of postoperative complications (P = .04). Reversal of Hartmann's procedure appears to be a safe operation with acceptable morbidity rates and can be considered in patients, including those with significant operative risk factors.
Management of heart failure in the new era: the role of scores.
Mantegazza, Valentina; Badagliacca, Roberto; Nodari, Savina; Parati, Gianfranco; Lombardi, Carolina; Di Somma, Salvatore; Carluccio, Erberto; Dini, Frank Lloyd; Correale, Michele; Magrì, Damiano; Agostoni, Piergiuseppe
2016-08-01
Heart failure is a widespread syndrome involving several organs, still characterized by high mortality and morbidity, and its clinical course is heterogeneous and hardly predictable. In this scenario, the assessment of heart failure prognosis represents a fundamental step in clinical practice. A single parameter alone cannot provide a precise prognosis. Therefore, risk scores based on multiple parameters have been introduced, but their clinical utility is still modest. In this review, we evaluated several prognostic models for acute, right, chronic, and end-stage heart failure based on multiple parameters. In particular, for chronic heart failure we considered risk scores essentially based on clinical evaluation, comorbidity analysis, baroreflex sensitivity, heart rate variability, sleep disorders, laboratory tests, echocardiographic imaging, and cardiopulmonary exercise test parameters. What is established at present is that a single parameter is not sufficient for an accurate prediction of prognosis in heart failure because of the complex nature of the disease. However, none of the available scoring systems is widely used, being in some cases complex, not user-friendly, or based on expensive or not easily available parameters. We believe that multiparametric scores for risk assessment in heart failure are promising, but wider experience with their use is still needed.
Chang, Xuling; Salim, Agus; Dorajoo, Rajkumar; Han, Yi; Khor, Chiea-Chuen; van Dam, Rob M; Yuan, Jian-Min; Koh, Woon-Puay; Liu, Jianjun; Goh, Daniel Yt; Wang, Xu; Teo, Yik-Ying; Friedlander, Yechiel; Heng, Chew-Kiat
2017-01-01
Background Although numerous phenotype-based equations for predicting risk of 'hard' coronary heart disease are available, data on the utility of genetic information for such risk prediction is lacking in Chinese populations. Design Case-control study nested within the Singapore Chinese Health Study. Methods A total of 1306 subjects comprising 836 men (267 incident cases and 569 controls) and 470 women (128 incident cases and 342 controls) were included. A Genetic Risk Score comprising 156 single nucleotide polymorphisms that have been robustly associated with coronary heart disease or its risk factors (p < 5 × 10(-8)) in at least two independent cohorts of genome-wide association studies was built. For each gender, three base models were used: recalibrated Adult Treatment Panel III (ATPIII) Model (M1); ATP III model fitted using Singapore Chinese Health Study data (M2); and M3: M2 + C-reactive protein + creatinine. Results The Genetic Risk Score was significantly associated with incident 'hard' coronary heart disease (p for men: 1.70 × 10(-10) to 1.73 × 10(-9); p for women: 0.001). The inclusion of the Genetic Risk Score in the prediction models improved discrimination in both genders (c-statistics: 0.706-0.722 vs. 0.663-0.695 from base models for men; 0.788-0.790 vs. 0.765-0.773 for women). In addition, the inclusion of the Genetic Risk Score also improved risk classification with a net gain of cases being reclassified to higher risk categories (men: 12.4%-16.5%; women: 10.2% (M3)), while not significantly reducing the classification accuracy in controls. Conclusions The Genetic Risk Score is an independent predictor for incident 'hard' coronary heart disease in our ethnic Chinese population. Inclusion of genetic factors into coronary heart disease prediction models could significantly improve risk prediction performance.
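A weighted genetic risk score of this kind is usually the sum, over SNPs, of the risk-allele count multiplied by the per-SNP effect size. A hedged Python sketch with made-up dosages and weights (the actual 156-SNP weights are not given in the abstract):

def genetic_risk_score(risk_allele_counts, weights):
    # Weighted sum of risk-allele dosages (0, 1 or 2 per SNP).
    return sum(count * weight for count, weight in zip(risk_allele_counts, weights))

alleles = [0, 1, 2, 1, 0, 2]                       # dosages at six hypothetical SNPs
weights = [0.10, 0.05, 0.08, 0.12, 0.07, 0.04]     # hypothetical per-SNP log odds ratios
print(round(genetic_risk_score(alleles, weights), 2))  # 0.41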
2013-01-01
Background Methicillin-resistant Staphylococcus aureus (MRSA) represents an important pathogen in healthcare-associated pneumonia (HCAP). The concept of HCAP, though, may not perform well as a screening test for MRSA and can lead to overuse of antibiotics. We developed a risk score to identify patients presenting to the hospital with pneumonia unlikely to have MRSA. Methods We identified patients admitted with pneumonia (Apr 2005 – Mar 2009) at 62 hospitals in the US. We only included patients with lab evidence of bacterial infection (e.g., positive respiratory secretions, blood, or pleural cultures or urinary antigen testing). We determined variables independently associated with the presence of MRSA based on logistic regression (two-thirds of cohort) and developed a risk prediction model based on these factors. We validated the model in the remaining population. Results The cohort included 5975 patients and MRSA was identified in 14%. The final risk score consisted of eight variables and a potential total score of 10. Points were assigned as follows: two for recent hospitalization or ICU admission; one each for age < 30 or > 79 years, prior IV antibiotic exposure, dementia, cerebrovascular disease, female with diabetes, or recent exposure to a nursing home/long term acute care facility/skilled nursing facility. This study shows how the prevalence of MRSA rose with increasing score after stratifying the scores into Low (0 to 1 points), Medium (2 to 5 points) and High (6 or more points) risk. When the score was 0 or 1, the prevalence of MRSA was < 10% while the prevalence of MRSA climbed to > 30% when the score was 6 or greater. Conclusions MRSA represents a cause of pneumonia presenting to the hospital. This simple risk score identifies patients at low risk for MRSA and in whom anti-MRSA therapy might be withheld. PMID:23742753
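The point assignment above is explicit enough to transcribe. In the hedged Python sketch below, recent hospitalization and ICU admission are treated as separate two-point items so that the eight variables sum to the stated maximum of 10 (an assumption, since the abstract's wording is ambiguous on this point); the Low/Medium/High bands are as stated:

def mrsa_risk_score(p):
    score = 0
    score += 2 if p["recent_hospitalization"] else 0
    score += 2 if p["icu_admission"] else 0
    score += 1 if p["age_years"] < 30 or p["age_years"] > 79 else 0
    score += 1 if p["prior_iv_antibiotics"] else 0
    score += 1 if p["dementia"] else 0
    score += 1 if p["cerebrovascular_disease"] else 0
    score += 1 if p["female_with_diabetes"] else 0
    score += 1 if p["recent_nursing_home_ltac_or_snf"] else 0
    return score

def mrsa_risk_band(score):
    # Low (0-1), Medium (2-5), High (6 or more), as stratified in the abstract.
    return "Low" if score <= 1 else "Medium" if score <= 5 else "High"

patient = {"recent_hospitalization": True, "icu_admission": False, "age_years": 82,
           "prior_iv_antibiotics": False, "dementia": True, "cerebrovascular_disease": False,
           "female_with_diabetes": False, "recent_nursing_home_ltac_or_snf": True}
s = mrsa_risk_score(patient)
print(s, mrsa_risk_band(s))  # 5 Medium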
Kim, Seok Jin; Choi, Joon Young; Hyun, Seung Hyup; Ki, Chang-Seok; Oh, Dongryul; Ahn, Yong Chan; Ko, Young Hyeh; Choi, Sunkyu; Jung, Sin-Ho; Khong, Pek-Lan; Tang, Tiffany; Yan, Xuexian; Lim, Soon Thye; Kwong, Yok-Lam; Kim, Won Seog
2015-02-01
Assessment of tumour viability after treatment is essential for prediction of treatment failure in patients with extranodal natural killer/T-cell lymphoma (ENKTL). We aimed to assess the use of the post-treatment Deauville score on PET-CT and Epstein-Barr virus DNA as a predictor of residual tumour, to establish the risk of treatment failure in patients with newly diagnosed ENKTL. In a retrospective analysis of patient data we assessed the prognostic relevance of the Deauville score (five-point scale) on PET-CT and circulating Epstein-Barr virus DNA after completion of treatment in consecutive patients with ENKTL who met eligibility criteria (newly diagnosed and received non-anthracycline-based chemotherapy, concurrent chemoradiotherapy, or both together) diagnosed at the Samsung Medical Center in Seoul, South Korea. The primary aim was to assess the association between progression-free survival and risk stratification based on post-treatment Deauville score and Epstein-Barr virus DNA. With an independent cohort from two different hospitals (Hong Kong and Singapore), we validated the prognostic value of our risk model. We included 102 patients diagnosed with ENKTL between Jan 6, 2005, and Nov 18, 2013, in the study cohort, and 38 patients diagnosed with ENKTL between Jan 7, 2009, and June 27, 2013, in the validation cohort. In the study cohort after a median follow-up of 47·2 months (IQR 30·0-65·5), 45 (44%) patients had treatment failure and 33 (32%) had died. Post-treatment Deauville score and Epstein-Barr virus DNA positivity were independently associated with progression-free and overall survival in the multivariable analysis (for post-treatment Deauville score of 3-4, progression-free survival hazard ratio [HR] 3·607, 95% CI 1·772-7·341, univariable p<0·0001; for post-treatment Epstein-Barr virus DNA positivity, progression-free survival HR 3·595, 95% CI 1·598-8·089, univariable p<0·0001). We stratified patients into three groups based on risk of treatment failure: a low-risk group (post-treatment Epstein-Barr virus negativity and post-treatment Deauville score of 1-2), a high-risk group (post-treatment Epstein-Barr virus negativity with a Deauville score 3-4, or post-treatment Epstein-Barr virus positivity with a Deauville score 1-2), and treatment failure (Deauville score of 5 or post-treatment Epstein-Barr positivity with a Deauville of score 3-4). This risk model showed a significant association with progression-free survival (for low risk vs high risk, HR 7·761, 95% CI 2·592-23·233, p<0·0001; for low risk vs failure, HR 18·546, 95% CI 5·997-57·353, p<0·0001). The validation cohort showed the same associations (for low risk vs high risk, HR 22·909, 95% CI 2·850-184·162, p=0·003; for low risk vs failure, HR 50·652, 95% CI 6·114-419·610, p<0·0001). Post-treatment Deauville score on PET-CT scan and the presence of Epstein-Barr virus DNA can predict the risk of treatment failure in patients with ENKTL. Our results might be able to help guide clinical practice. Samsung Biomedical Research Institute. Copyright © 2015 Elsevier Ltd. All rights reserved.
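The three-group stratification described above can be written as a small decision rule. A Python sketch that is a direct reading of the abstract's definitions (deauville is the 1-5 post-treatment Deauville score, ebv_dna_positive the post-treatment Epstein-Barr virus DNA result):

def enktl_risk_group(deauville, ebv_dna_positive):
    if deauville == 5 or (ebv_dna_positive and deauville >= 3):
        return "treatment failure"
    if (not ebv_dna_positive and deauville >= 3) or (ebv_dna_positive and deauville <= 2):
        return "high risk"
    return "low risk"  # EBV DNA negative with Deauville score 1-2

print(enktl_risk_group(2, False))  # low risk
print(enktl_risk_group(4, True))   # treatment failure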
Bhupathiraju, Shilpa N; Lichtenstein, Alice H; Dawson-Hughes, Bess; Tucker, Katherine L
2011-03-01
In 2006, the AHA released diet and lifestyle recommendations (AHA-DLR) for cardiovascular disease (CVD) risk reduction. The effect of adherence to these recommendations on CVD risk is unknown. Our objective was to develop a unique diet and lifestyle score based on the AHA-DLR and to evaluate this score in relation to available CVD risk factors. In a cross-sectional study of Puerto Rican adults aged 45-75 y living in the greater Boston area, information was available for the following variables: diet (semiquantitative FFQ), blood pressure, waist circumference (WC), 10-y risk of coronary heart disease (CHD) (Framingham risk score), and fasting plasma lipids, serum glucose, insulin, and C-reactive protein (CRP) concentrations. We developed a diet and lifestyle score (AHA-DLS) based on the AHA-DLR. The AHA-DLS had both internal consistency and content validity. It was associated with plasma HDL cholesterol (P = 0.001), serum insulin (P = 0.0003), and CRP concentrations (P = 0.02), WC (P < 0.0001), and 10-y risk of CHD score (P = 0.01 in women). The AHA-DLS was inversely associated with serum glucose among those with a BMI < 25 (P = 0.01). Women and men in the highest quartile of the AHA-DLS had lower serum insulin (P-trend = 0.0003) and CRP concentrations (P-trend = 0.002), WC (P-trend = 0.0003), and higher HDL cholesterol (P-trend = 0.008). The AHA-DLS is a useful tool to measure adherence to the AHA-DLR and may be used to examine associations between diet and lifestyle behaviors and CVD risk.
Nanri, Akiko; Mizoue, Tetsuya; Kurotani, Kayo; Goto, Atsushi; Oba, Shino; Noda, Mitsuhiko; Sawada, Norie; Tsugane, Shoichiro
2015-01-01
Evidence is sparse and contradictory regarding the association between low-carbohydrate diet score and type 2 diabetes risk, and no prospective study has examined the association among Asians, who consume a greater amount of carbohydrate. We prospectively investigated the association of low-carbohydrate diet score with type 2 diabetes risk. Participants were 27,799 men and 36,875 women aged 45-75 years who participated in the second survey of the Japan Public Health Center-Based Prospective Study and who had no history of diabetes. Dietary intake was ascertained by using a validated food-frequency questionnaire, and the low-carbohydrate diet score was calculated from total carbohydrate, fat, and protein intake. The scores for high animal protein and fat or for high plant protein and fat were also calculated. Odds ratios of self-reported, physician-diagnosed type 2 diabetes over 5 years were estimated by using logistic regression. During the 5-year period, 1191 new cases of type 2 diabetes were self-reported. The low-carbohydrate diet score for high total protein and fat was significantly associated with a decreased risk of type 2 diabetes in women (P for trend <0.001); the multivariable-adjusted odds ratio of type 2 diabetes for the highest quintile of the score was 0.63 (95% confidence interval 0.46-0.84), compared with the lowest quintile. Additional adjustment for dietary glycemic load attenuated the association (odds ratio 0.75, 95% confidence interval 0.45-1.25). When the score was separated into animal and plant protein and fat, the score for high animal protein and fat was inversely associated with type 2 diabetes in women, whereas the score for high plant protein and fat was not associated with type 2 diabetes in either men or women. A low-carbohydrate diet was associated with decreased risk of type 2 diabetes in Japanese women, and this association may be partly attributable to a high intake of white rice. The association for animal-based and plant-based low-carbohydrate diets warrants further investigation.
Kleber, M E; Goliasch, G; Grammer, T B; Pilz, S; Tomaschitz, A; Silbernagel, G; Maurer, G; März, W; Niessner, A
2014-08-01
Algorithms to predict the future long-term risk of patients with stable coronary artery disease (CAD) are rare. The VIenna and Ludwigshafen CAD (VILCAD) risk score was one of the first scores specifically tailored for this clinically important patient population. The aim of this study was to refine risk prediction in stable CAD by creating a new prediction model encompassing various pathophysiological pathways. Therefore, we assessed the predictive power of 135 novel biomarkers for long-term mortality in patients with stable CAD. We included 1275 patients with stable CAD from the LUdwigshafen RIsk and Cardiovascular health study with a median follow-up of 9.8 years to investigate whether the predictive power of the VILCAD score could be improved by the addition of novel biomarkers. Additional biomarkers were selected in a bootstrapping procedure based on Cox regression to determine the most informative predictors of mortality. The final multivariable model encompassed nine clinical and biochemical markers: age, sex, left ventricular ejection fraction (LVEF), heart rate, N-terminal pro-brain natriuretic peptide, cystatin C, renin, 25OH-vitamin D3 and haemoglobin A1c. The extended VILCAD biomarker score achieved a significantly improved C-statistic (0.78 vs. 0.73; P = 0.035) and net reclassification index (14.9%; P < 0.001) compared to the original VILCAD score. Omitting LVEF, which might not be readily measurable in clinical practice, slightly reduced the accuracy of the new BIO-VILCAD score but still significantly improved risk classification (net reclassification improvement 12.5%; P < 0.001). The VILCAD biomarker score, based on routine parameters complemented by novel biomarkers, outperforms previous risk algorithms and allows more accurate classification of patients with stable CAD, enabling physicians to choose more personalized treatment regimens for their patients.
Yee, Kimbo Edward
Purpose. To examine the association of the Family Nutrition and Physical Activity (FNPA) screening tool, a behaviorally based screening tool designed to assess the obesogenic family environment and behaviors, with cardiovascular disease (CVD) risk factors in 10-year old children. Methods. One hundred nineteen children were assessed for body mass index (BMI), percent body fat (%BF), waist circumference (WC), total cholesterol, HDL-cholesterol, and resting blood pressure. A continuous CVD risk score was created using total cholesterol to HDL-cholesterol ratio (TC:HDL), mean arterial pressure (MAP), and WC. The FNPA survey was completed by parents. The associations between the FNPA score and individual CVD risk factors and the continuous CVD risk score were examined using correlation analyses. Results. Approximately 35% of the sample were overweight (19%) or obese (16%). The mean FNPA score was 24.6 +/- 2.5 (range 18 to 29). Significant correlations were found between the FNPA score and WC (r = -.35, p<.01), BMI percentile (r = -.38, p<.01), %BF (r = -.43, p<.01), and the continuous CVD risk score (r = -.22, p = .02). No significant association was found between the FNPA score and TC:HDL (r=0.10, p=0.88) or MAP (r=-0.12, p=0.20). Conclusion. Children from a high-risk, obesogenic family environment as indicated with a lower FNPA score have a higher CVD risk factor profile than children from a low-risk family environment.
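The abstract does not give the formula for the continuous CVD risk score; a common construction is to standardize each component and sum the z-scores, which is what the hedged Python sketch below assumes (all values are invented):

def zscores(values):
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return [(v - mean) / sd for v in values]

tc_hdl = [3.1, 4.2, 3.8, 5.0, 2.9]    # hypothetical TC:HDL ratios
map_bp = [78, 92, 85, 99, 74]         # hypothetical mean arterial pressures (mm Hg)
waist = [61, 75, 70, 82, 58]          # hypothetical waist circumferences (cm)
composite = [sum(z) for z in zip(zscores(tc_hdl), zscores(map_bp), zscores(waist))]
print([round(c, 2) for c in composite])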
Wolfson, Julian; Vock, David M; Bandyopadhyay, Sunayan; Kottke, Thomas; Vazquez-Benitez, Gabriela; Johnson, Paul; Adomavicius, Gediminas; O'Connor, Patrick J
2017-04-24
Clinicians who are using the Framingham Risk Score (FRS) or the American College of Cardiology/American Heart Association Pooled Cohort Equations (PCE) to estimate risk for their patients based on electronic health data (EHD) face 4 questions. (1) Do published risk scores applied to EHD yield accurate estimates of cardiovascular risk? (2) Are FRS risk estimates, which are based on data that are up to 45 years old, valid for a contemporary patient population seeking routine care? (3) Do the PCE make the FRS obsolete? (4) Does refitting the risk score using EHD improve the accuracy of risk estimates? Data were extracted from the EHD of 84 116 adults aged 40 to 79 years who received care at a large healthcare delivery and insurance organization between 2001 and 2011. We assessed calibration and discrimination for 4 risk scores: published versions of FRS and PCE and versions obtained by refitting models using a subset of the available EHD. The published FRS was well calibrated (calibration statistic K=9.1, miscalibration ranging from 0% to 17% across risk groups), but the PCE displayed modest evidence of miscalibration (calibration statistic K=43.7, miscalibration from 9% to 31%). Discrimination was similar in both models (C-index=0.740 for FRS, 0.747 for PCE). Refitting the published models using EHD did not substantially improve calibration or discrimination. We conclude that published cardiovascular risk models can be successfully applied to EHD to estimate cardiovascular risk; the FRS remains valid and is not obsolete; and model refitting does not meaningfully improve the accuracy of risk estimates. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
SCORE should be preferred to Framingham to predict cardiovascular death in French population.
Marchant, Ivanny; Boissel, Jean-Pierre; Kassaï, Behrouz; Bejan, Theodora; Massol, Jacques; Vidal, Chrystelle; Amsallem, Emmanuel; Naudin, Florence; Galan, Pilar; Czernichow, Sébastien; Nony, Patrice; Gueyffier, François
2009-10-01
Numerous studies have examined the validity of available scores to predict absolute cardiovascular risk. We developed a virtual population based on data representative of the French population and compared the performances of the two most popular risk equations for predicting cardiovascular death: Framingham and SCORE. A population was built based on official French demographic statistics and summarized data from representative observational studies. The 10-year coronary and cardiovascular death risks and their ratio were computed for each individual with the SCORE and Framingham equations. The resulting rates were compared with those derived from national vital statistics. Framingham overestimated French coronary deaths by a factor of 2.8 in men and 1.9 in women, and cardiovascular deaths by a factor of 1.5 in men and 1.3 in women. SCORE overestimated coronary death by a factor of 1.6 in men and 1.7 in women, and underestimated cardiovascular death (predicted-to-observed ratios of 0.94 in men and 0.85 in women). Our results revealed an exaggerated representation of coronary death among the cardiovascular deaths predicted by Framingham, with coronary death exceeding cardiovascular death in some individual profiles. Sensitivity analyses gave some insights to explain the internal inconsistency of the Framingham equations. The evidence indicates that SCORE should be preferred to Framingham to predict cardiovascular death risk in the French population. This discrepancy between prediction scores is likely to be observed in other populations. To improve the validation of risk equations, specific guidelines should be issued to harmonize outcome definitions across epidemiologic studies. Prediction models should be calibrated for risk differences in the space and time dimensions.
Green, Malcolm; Lander, Harvey; Snyder, Ashley; Hudson, Paul; Churpek, Matthew; Edelson, Dana
2018-02-01
Traditionally, paper-based observation charts have been used to identify deteriorating patients, with the recent emergence of electronic medical records allowing electronic algorithms to risk stratify and help direct the response to deterioration. We sought to compare the Between the Flags (BTF) calling criteria to the Modified Early Warning Score (MEWS), National Early Warning Score (NEWS) and electronic Cardiac Arrest Risk Triage (eCART) score. Multicenter retrospective analysis of electronic health record data from all patients admitted to five US hospitals from November 2008 to August 2013. The outcome of interest was cardiac arrest, ICU transfer or death within 24h of a score. Overall accuracy was highest for eCART, with an AUC of 0.801 (95% CI 0.799-0.802), followed by NEWS, MEWS and BTF respectively (0.718 [0.716-0.720]; 0.698 [0.696-0.700]; 0.663 [0.661-0.664]). BTF criteria had a high-risk (Red Zone) specificity of 95.0% and a moderate-risk (Yellow Zone) specificity of 27.5%, which corresponded to MEWS thresholds of ≥4 and ≥2, NEWS thresholds of ≥5 and ≥2, and eCART thresholds of ≥12 and ≥4, respectively. At those thresholds, eCART caught 22 more adverse events per 10,000 patients than BTF using the moderate-risk criteria and 13 more using the high-risk criteria, while MEWS and NEWS identified the same or fewer. An electronically generated eCART score was more accurate than commonly used paper-based observation tools for predicting the composite outcome of in-hospital cardiac arrest, ICU transfer and death within 24h of observation. The outcomes of this analysis lend weight to a move towards an algorithm-based electronic risk identification tool for deteriorating patients to ensure earlier detection and prevent adverse events in the hospital. Copyright © 2017 Elsevier B.V. All rights reserved.
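Matching score thresholds to the BTF zones, as done above, amounts to finding the lowest cut-off that reaches the target specificity among patients without the outcome. A toy Python sketch of that matching step (scores invented):

def threshold_for_specificity(scores_nonevent, target_specificity):
    # Return the lowest cut-off (score >= cut triggers an alert) whose specificity meets the target.
    for cut in sorted(set(scores_nonevent)):
        specificity = sum(s < cut for s in scores_nonevent) / len(scores_nonevent)
        if specificity >= target_specificity:
            return cut
    return None

nonevent_scores = [0, 1, 1, 2, 2, 2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 10, 2, 1, 0, 3]
print(threshold_for_specificity(nonevent_scores, 0.95))  # 10 for these toy scores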
Chao, Tze-Fan; Lip, Gregory Y H; Lin, Yenn-Jiang; Chang, Shih-Lin; Lo, Li-Wei; Hu, Yu-Feng; Tuan, Ta-Chuan; Liao, Jo-Nan; Chung, Fa-Po; Chen, Tzeng-Ji; Chen, Shih-Ann
2018-04-01
When assessing bleeding risk in patients with atrial fibrillation (AF), risk stratification is often based on the baseline risks. We aimed to investigate changes in bleeding risk factors and alterations in the HAS-BLED score in AF patients. We hypothesized that a follow-up HAS-BLED score and the 'delta HAS-BLED score' (reflecting the change in score between baseline and follow-up) would be more predictive of major bleeding, when compared with baseline HAS-BLED score. A total of 19,566 AF patients receiving warfarin and baseline HAS-BLED score ≤2 were studied. After a follow-up of 93,783 person-years, 3,032 major bleeds were observed. The accuracies of baseline, follow-up, and delta HAS-BLED scores as well as cumulative numbers of baseline modifiable bleeding risk factors, in predicting subsequent major bleeding, were analysed and compared. The mean baseline HAS-BLED score was 1.43 which increased to 2.45 with a mean 'delta HAS-BLED score' of 1.03. The HAS-BLED score remained unchanged in 38.2% of patients. Of those patients experiencing major bleeding, 76.6% had a 'delta HAS-BLED' score ≥1, compared with only 59.0% in patients without major bleeding ( p < 0.001). For prediction of major bleeding, AUC was significantly higher for the follow-up HAS-BLED (0.63) or delta HAS-BLED (0.62) scores, compared with baseline HAS-BLED score (0.54). The number of baseline modifiable risk factors was non-significantly predictive of major bleeding (AUC = 0.49). In this 'real-world' nationwide AF cohort, follow-up HAS-BLED or 'delta HAS-BLED score' was more predictive of major bleeding compared with baseline HAS-BLED or the simple determination of 'modifiable bleeding risk factors'. Bleeding risk in AF is a dynamic process and use of the HAS-BLED score should be to 'flag up' patients potentially at risk for more regular review and follow-up, and to address the modifiable bleeding risk factors during follow-up visits. Schattauer GmbH Stuttgart.
Spangler, Gottfried; Bovenschen, Ina; Globisch, Jutta; Krippl, Martin; Ast-Scheitenberger, Stephanie
2009-01-01
The Child Abuse Potential Inventory (CAPI) is an evidence-based procedure for the assessment of the risk for child abuse in parents. In this study, a German translation of the CAPI was applied to a normal sample of German parents (N = 944). Descriptive analysis of the CAPI scores in the German sample provides findings comparable to the original standardization sample. The subjects' child abuse risk score was associated with demographic characteristics such as education, marital status, occupation and gender. Long-term stability of the child abuse risk score and associations with individual differences in emotional regulation and attachment were investigated in a sub-sample of mothers with high and low child abuse risk scores (N = 69). The findings confirmed long-term stability. Furthermore, associations between the child abuse risk score and anger dispositions were found, which, however, were moderated by attachment differences. The findings suggest attachment security as a protective factor against child abuse.
Ali, Ali; Bailey, Claire; Abdelhafiz, Ahmed H
2012-08-01
With advancing age, the prevalence of both stroke and non-valvular atrial fibrillation (NVAF) is increasing. NVAF in old age has a high embolic potential if not anticoagulated. Oral anticoagulation therapy is cost-effective in older people with NVAF due to their high baseline stroke risk. The current stroke and bleeding risk scoring schemes have been based on complex scoring systems that are difficult to apply in clinical practice. Both scoring schemes include similar risk factors for ischemic and bleeding events, which may lead to confusion in clinical decision making when balancing the risks of bleeding against the risks of stroke, thereby limiting the applicability of such schemes. The difficulty in applying such schemes, combined with physicians' fear of inducing bleeding complications, has resulted in underuse of anticoagulation therapy in older people. As older people (≥75 years) with NVAF are all at high risk of stroke, we suggest a pragmatic approach based on a yes/no decision rather than a risk scoring stratification: an opt-out rather than an opt-in approach unless there is a contraindication to oral anticoagulation. Antiplatelet agents should not be an alternative option for antithrombotic treatment in older people with NVAF due to their lack of efficacy and the potential to be used as an excuse for not prescribing anticoagulation. Bleeding risk should be assessed on an individual basis and the decision to anticoagulate should include patients' views.
Predicting stroke through genetic risk functions: the CHARGE Risk Score Project.
Ibrahim-Verbaas, Carla A; Fornage, Myriam; Bis, Joshua C; Choi, Seung Hoan; Psaty, Bruce M; Meigs, James B; Rao, Madhu; Nalls, Mike; Fontes, Joao D; O'Donnell, Christopher J; Kathiresan, Sekar; Ehret, Georg B; Fox, Caroline S; Malik, Rainer; Dichgans, Martin; Schmidt, Helena; Lahti, Jari; Heckbert, Susan R; Lumley, Thomas; Rice, Kenneth; Rotter, Jerome I; Taylor, Kent D; Folsom, Aaron R; Boerwinkle, Eric; Rosamond, Wayne D; Shahar, Eyal; Gottesman, Rebecca F; Koudstaal, Peter J; Amin, Najaf; Wieberdink, Renske G; Dehghan, Abbas; Hofman, Albert; Uitterlinden, André G; Destefano, Anita L; Debette, Stephanie; Xue, Luting; Beiser, Alexa; Wolf, Philip A; Decarli, Charles; Ikram, M Arfan; Seshadri, Sudha; Mosley, Thomas H; Longstreth, W T; van Duijn, Cornelia M; Launer, Lenore J
2014-02-01
Beyond the Framingham Stroke Risk Score, prediction of future stroke may improve with a genetic risk score (GRS) based on single-nucleotide polymorphisms associated with stroke and its risk factors. The study includes 4 population-based cohorts with 2047 first incident strokes from 22,720 initially stroke-free European origin participants aged ≥55 years, who were followed for up to 20 years. GRSs were constructed with 324 single-nucleotide polymorphisms implicated in stroke and 9 risk factors. The association of the GRS to first incident stroke was tested using Cox regression; the GRS predictive properties were assessed with area under the curve statistics comparing the GRS with age and sex, Framingham Stroke Risk Score models, and reclassification statistics. These analyses were performed per cohort and in a meta-analysis of pooled data. Replication was sought in a case-control study of ischemic stroke. In the meta-analysis, adding the GRS to the Framingham Stroke Risk Score, age and sex model resulted in a significant improvement in discrimination (all stroke: Δjoint area under the curve=0.016, P=2.3×10(-6); ischemic stroke: Δjoint area under the curve=0.021, P=3.7×10(-7)), although the overall area under the curve remained low. In all the studies, there was a highly significantly improved net reclassification index (P<10(-4)). The single-nucleotide polymorphisms associated with stroke and its risk factors result only in a small improvement in prediction of future stroke compared with the classical epidemiological risk factors for stroke.
Karagiozoglou-Lampoudi, Thomais; Daskalou, Efstratia; Lampoudis, Dimitrios; Apostolou, Aggeliki; Agakidis, Charalampos
2015-05-01
The study aimed to test the hypothesis that computer-based calculation of malnutrition risk may enhance the ability to identify pediatric patients at malnutrition-related risk for an unfavorable outcome. The Pediatric Digital Scaled MAlnutrition Risk screening Tool (PeDiSMART), incorporating the World Health Organization (WHO) growth reference data and malnutrition-related parameters, was used. This was a prospective cohort study of 500 pediatric patients aged 1 month to 17 years. Upon admission, the PeDiSMART score was calculated and anthropometry was performed. Pediatric Yorkhill Malnutrition Score (PYMS), Screening Tool Risk on Nutritional Status and Growth (STRONGkids), and Screening Tool for the Assessment of Malnutrition in Pediatrics (STAMP) malnutrition screening tools were also applied. PeDiSMART's association with the clinical outcome measures (weight loss/nutrition support and hospitalization duration) was assessed and compared with the other screening tools. The PeDiSMART score was inversely correlated with anthropometry and bioelectrical impedance phase angle (BIA PhA). The score's grading scale was based on BIA Pha quartiles. Weight loss/nutrition support during hospitalization was significantly independently associated with the malnutrition risk group allocation on admission, after controlling for anthropometric parameters and age. Receiver operating characteristic curve analysis showed a sensitivity of 87% and a specificity of 75% and a significant area under the curve, which differed significantly from that of STRONGkids and STAMP. In the subgroups of patients with PeDiSMART-based risk allocation different from that based on the other tools, PeDiSMART allocation was more closely related to outcome measures. PeDiSMART, applicable to the full age range of patients hospitalized in pediatric departments, graded according to BIA PhA, and embeddable in medical electronic records, enhances efficacy and reproducibility in identifying pediatric patients at malnutrition-related risk for an unfavorable outcome. Patient allocation according to the PeDiSMART score on admission is associated with clinical outcome measures. © 2014 American Society for Parenteral and Enteral Nutrition.
Peng, Jian-Hong; Fang, Yu-Jing; Li, Cai-Xia; Ou, Qing-Jian; Jiang, Wu; Lu, Shi-Xun; Lu, Zhen-Hai; Li, Pei-Xing; Yun, Jing-Ping; Zhang, Rong-Xin; Pan, Zhi-Zhong; Wan, De Sen
2016-04-19
Nearly 20% of patients with stage IIA colon cancer will develop recurrent disease post-operatively. The present study aims to develop a scoring system based on an Artificial Neural Network (ANN) model for predicting 10-year survival outcome. The clinical and molecular data of 117 stage IIA colon cancer patients from Sun Yat-sen University Cancer Center were used as the training and test sets; poor pathological grading (score 49), reduced expression of TGFBR2 (score 33), over-expression of TGF-β (score 45), MAPK (score 32), pin1 (score 100), β-catenin in tumor tissue (score 50) and reduced expression of TGF-β in normal mucosa (score 22) were selected as the prognostic risk predictors. According to the developed scoring system, the patients were divided into 3 subgroups, corresponding to higher, moderate and lower risk levels. As a result, for the 3 subgroups, the 10-year overall survival (OS) rates were 16.7%, 62.9% and 100% (P < 0.001), and the 10-year disease-free survival (DFS) rates were 16.7%, 61.8% and 98.8% (P < 0.001), respectively. This scoring system for stage IIA colon cancer could help to predict long-term survival and screen out high-risk individuals for more vigorous treatment.
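The predictor weights above are explicit, so the score is a weighted sum over the adverse findings present. A Python sketch transcribing those weights (the subgroup cut-offs are not reported in the abstract, so no risk banding is attempted):

WEIGHTS = {
    "poor_pathological_grading": 49,
    "reduced_TGFBR2_expression": 33,
    "TGF_beta_overexpression_tumour": 45,
    "MAPK_overexpression": 32,
    "pin1_overexpression": 100,
    "beta_catenin_in_tumour": 50,
    "reduced_TGF_beta_normal_mucosa": 22,
}

def colon_risk_score(findings):
    # findings: dict mapping predictor name to True/False.
    return sum(weight for name, weight in WEIGHTS.items() if findings.get(name, False))

example = {"poor_pathological_grading": True, "pin1_overexpression": True, "beta_catenin_in_tumour": True}
print(colon_risk_score(example))  # 199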
Chrubasik, Sigrun A; Chrubasik, Cosima A; Piper, Jörg; Schulte-Moenting, Juergen; Erne, Paul
2015-01-01
In models and scores for estimating cardiovascular risk (CVR), the relative weightings given to blood pressure measurements (BPMs), and biometric and laboratory variables are such that even large differences in blood pressure lead to rather low differences in the resulting total risk when compared with other concurrent risk factors. We evaluated this phenomenon based on the PROCAM score, using BPMs made by volunteer subjects at home (HBPMs) and automated ambulatory BPMs (ABPMs) carried out in the same subjects. A total of 153 volunteers provided the data needed to estimate their CVR by means of the PROCAM formula. Differences (deltaCVR) between the risk estimated by entering the ABPM and that estimated with the HBPM were compared with the differences (deltaBPM) between the ABPM and the corresponding HBPM. In addition to the median values (= second quartile), the first and third quartiles of blood pressure profiles were also considered. PROCAM risk values were converted to European Society of Cardiology (ESC) risk values and all participants were assigned to the risk groups low, medium and high. Based on the PROCAM score, 132 participants had a low risk for suffering myocardial infarction, 16 a medium risk and 5 a high risk. The calculated ESC scores classified 125 participants into the low-risk group, 26 into the medium- and 2 into the high-risk group for death from a cardiovascular event. Mean ABPM tended to be higher than mean HBPM. Use of mean systolic ABPM or HBPM in the PROCAM formula had no major impact on the risk level. Our observations are in agreement with the rather low weighting of blood pressure as risk determinant in the PROCAM score. BPMs assessed with different methods had relatively little impact on estimation of cardiovascular risk in the given context of other important determinants. The risk calculations in our unselected population reflect the given classification of Switzerland as a so-called cardiovascular "low risk country".
Marschollek, Michael; Rehwald, Anja; Wolf, Klaus-Hendrik; Gietzelt, Matthias; Nemitz, Gerhard; zu Schwabedissen, Hubertus Meyer; Schulze, Mareike
2011-06-28
Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
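The '+LR' figures above are positive likelihood ratios, i.e. sensitivity divided by (1 - specificity). A quick Python check against the reported CONV model values (sensitivity 68%, specificity 74%):

def positive_lr(sensitivity, specificity):
    return sensitivity / (1.0 - specificity)

print(round(positive_lr(0.68, 0.74), 2))  # about 2.62, close to the reported 2.64 (difference due to rounding of the published inputs)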
Adkin, A; Brouwer, A; Downs, S H; Kelly, L
2016-01-01
The adoption of bovine tuberculosis (bTB) risk-based trading (RBT) schemes has the potential to reduce the risk of bTB spread. However, any scheme will have cost implications that need to be balanced against its likely success in reducing bTB. This paper describes the first stochastic quantitative model assessing the impact of the implementation of a cattle risk-based trading scheme to inform policy makers and contribute to cost-benefit analyses. A risk assessment for England and Wales was developed to estimate the number of infected cattle traded, using historic movement data recorded between July 2010 and June 2011. Three scenarios were implemented: cattle traded with no RBT scheme in place, voluntary provision of the score, and a compulsory, statutory scheme applying a bTB risk score to each farm. For each scenario, changes in trade due to provision of the risk score to potential purchasers were estimated. An estimated mean of 3981 bTB-infected animals were sold to purchasers with no RBT scheme in place in one year, with 90% confidence that the true value was between 2775 and 5288. This result depends on the estimated between-herd prevalence used in the risk assessment, which is uncertain. With voluntary provision of the risk score by farmers, on average 17% of movements were affected (the purchaser did not wish to buy once the risk score was available), with an initial reduction of 23% in infected animals being purchased. Compulsory provision of the risk score in a statutory scheme resulted in an estimated mean change to 26% of movements, with an initial reduction of 37% in infected animals being purchased, increasing to a 53% reduction in infected movements from higher-risk sellers (scores 4 and 5). The estimated mean reduction in infected animals being purchased could be improved to 45% given a 10% reduction in risky purchase behaviour by farmers, which may be achieved through education programmes, or to an estimated mean of 49% if a rule were implemented preventing farmers from purchasing animals of higher risk than their own herd. Given the voluntary trials of a trading scheme currently taking place, recommendations for future work include monitoring initial uptake and changes in the purchase patterns of farmers. Such data could be used to update the risk assessment and reduce the uncertainty associated with model estimates. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin
2017-01-01
The purpose of this study is to evaluate a new method to improve performance of computer-aided detection (CAD) schemes of screening mammograms with two approaches. In the first approach, we developed a new case based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography (FFDM) images of the craniocaudal (CC) and mediolateral oblique (MLO) views by using a modified fast and accurate sequential floating forward selection feature selection algorithm. Selected features were then applied to a “scoring fusion” artificial neural network (ANN) classification scheme to produce a final case based risk score. In the second approach, we combined the case based risk score with the conventional lesion based scores of a conventional lesion based CAD scheme using a new adaptive cueing method that is integrated with the case based risk scores. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC = 0.793±0.015 and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case based detection scores increased. Using the new adaptive cueing method, the region based and case based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on the suspicious mammographic lesions. PMID:27997380
PREDICT-PD: An online approach to prospectively identify risk indicators of Parkinson's disease.
Noyce, Alastair J; R'Bibo, Lea; Peress, Luisa; Bestwick, Jonathan P; Adams-Carr, Kerala L; Mencacci, Niccolo E; Hawkes, Christopher H; Masters, Joseph M; Wood, Nicholas; Hardy, John; Giovannoni, Gavin; Lees, Andrew J; Schrag, Anette
2017-02-01
A number of early features can precede the diagnosis of Parkinson's disease (PD). To test an online, evidence-based algorithm to identify risk indicators of PD in the UK population. Participants aged 60 to 80 years without PD completed an online survey and keyboard-tapping task annually over 3 years, and underwent smell tests and genotyping for glucocerebrosidase (GBA) and leucine-rich repeat kinase 2 (LRRK2) mutations. Risk scores were calculated based on the results of a systematic review of risk factors and early features of PD, and individuals were grouped into higher (above 15th centile), medium, and lower risk groups (below 85th centile). Previously defined indicators of increased risk of PD ("intermediate markers"), including smell loss, rapid eye movement-sleep behavior disorder, and finger-tapping speed, and incident PD were used as outcomes. The correlation of risk scores with intermediate markers and movement of individuals between risk groups was assessed each year and prospectively. Exploratory Cox regression analyses with incident PD as the dependent variable were performed. A total of 1323 participants were recruited at baseline and >79% completed assessments each year. Annual risk scores were correlated with intermediate markers of PD each year and baseline scores were correlated with intermediate markers during follow-up (all P values < 0.001). Incident PD diagnoses during follow-up were significantly associated with baseline risk score (hazard ratio = 4.39, P = .045). GBA variants or G2019S LRRK2 mutations were found in 47 participants, and the predictive power for incident PD was improved by the addition of genetic variants to risk scores. The online PREDICT-PD algorithm is a unique and simple method to identify indicators of PD risk. © 2017 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. © 2016 International Parkinson and Movement Disorder Society.
Silvera, Stephanie A. Navarro; Mayne, Susan T; Risch, Harvey A.; Gammon, Marilie D; Vaughan, Thomas; Chow, Wong-Ho; Dubin, Joel A; Dubrow, Robert; Schoenberg, Janet; Stanford, Janet L; West, A. Brian; Rotterdam, Heidrun; Blot, William J
2011-01-01
Purpose To perform pattern analyses of dietary and lifestyle factors in relation to risk of esophageal and gastric cancers. Methods We evaluated risk factors for esophageal adenocarcinoma (EA), esophageal squamous cell carcinoma (ESCC), gastric cardia adenocarcinoma (GCA), and other gastric cancers (OGA) using data from a population-based case-control study conducted in Connecticut, New Jersey, and western Washington state. Dietary/lifestyle patterns were created using principal component analysis (PCA). Impact of the resultant scores on cancer risk was estimated through logistic regression. Results PCA identified six patterns: meat/nitrite, fruit/vegetable, smoking/alcohol, legume/meat alternate, GERD/BMI, and fish/vitamin C. Risk of each cancer under study increased with rising meat/nitrite score. Risk of EA increased with increasing GERD/BMI score, and risk of ESCC rose with increasing smoking/alcohol score and decreasing GERD/BMI score. Fruit/vegetable scores were inversely associated with EA, ESCC, and GCA. Conclusions PCA may provide a useful approach for summarizing extensive dietary/lifestyle data into fewer interpretable combinations that discriminate between cancer cases and controls. The analyses suggest that meat/nitrite intake is associated with elevated risk of each cancer under study, while fruit/vegetable intake reduces risk of EA, ESCC, and GCA. GERD/obesity were confirmed as risk factors for EA and smoking/alcohol as risk factors for ESCC. PMID:21435900
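As a hedged sketch of the workflow described in this abstract (pattern scores from principal component analysis, then logistic regression on case-control status), the Python fragment below uses simulated dietary/lifestyle variables; the variable names, number of components, and data are assumptions for illustration only.

```python
# Illustrative only: derive dietary/lifestyle pattern scores with PCA and relate
# them to case-control status by logistic regression (simulated data).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
diet = rng.normal(size=(n, 12))            # 12 made-up diet/lifestyle variables
case = rng.integers(0, 2, size=n)          # 1 = cancer case, 0 = control (simulated)

scores = PCA(n_components=6).fit_transform(StandardScaler().fit_transform(diet))
model = LogisticRegression().fit(scores, case)
odds_ratios = np.exp(model.coef_[0])       # OR per 1-unit increase in each pattern score
print({f"pattern_{i + 1}": round(val, 2) for i, val in enumerate(odds_ratios)})
```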
Risk factors for child maltreatment in an Australian population-based birth cohort.
Doidge, James C; Higgins, Daryl J; Delfabbro, Paul; Segal, Leonie
2017-02-01
Child maltreatment and other adverse childhood experiences adversely influence population health and socioeconomic outcomes. Knowledge of the risk factors for child maltreatment can be used to identify children at risk and may represent opportunities for prevention. We examined a range of possible child, parent and family risk factors for child maltreatment in a prospective 27-year population-based birth cohort of 2443 Australians. Physical abuse, sexual abuse, emotional abuse, neglect and witnessing of domestic violence were recorded retrospectively in early adulthood. Potential risk factors were collected prospectively during childhood or reported retrospectively. Associations were estimated using bivariate and multivariate logistic regressions and combined into cumulative risk scores. Higher levels of economic disadvantage, poor parental mental health and substance use, and social instability were strongly associated with increased risk of child maltreatment. Indicators of child health displayed mixed associations and infant temperament was uncorrelated to maltreatment. Some differences were observed across types of maltreatment but risk profiles were generally similar. In multivariate analyses, nine independent risk factors were identified, including some that are potentially modifiable: economic disadvantage and parental substance use problems. Risk of maltreatment increased exponentially with the number of risk factors experienced, with prevalence of maltreatment in the highest risk groups exceeding 80%. A cumulative risk score based on the independent risk factors allowed identification of individuals at very high risk of maltreatment, while a score that incorporated all significant risk and protective factors provided better identification of low-risk individuals. Copyright © 2016 Elsevier Ltd. All rights reserved.
Diesel, J C; Eckhardt, C L; Day, N L; Brooks, M M; Arslanian, S A; Bodnar, L M
2015-09-01
To study the association between gestational weight gain (GWG) and offspring obesity risk at ages chosen to approximate prepuberty (10 years) and postpuberty (16 years). Prospective pregnancy cohort. Pittsburgh, PA, USA. Low-income pregnant women (n = 514) receiving prenatal care at an obstetric residency clinic and their singleton offspring. Gestational weight gain was classified based on maternal GWG-for-gestational-age Z-score charts and was modelled using flexible spline terms in modified multivariable Poisson regression models. Obesity at 10 or 16 years, defined as body mass index (BMI) Z-scores ≥95th centile of the 2000 CDC references, based on measured height and weight. The prevalence of offspring obesity was 20% at 10 years and 22% at 16 years. In the overall sample, the risk of offspring obesity at 10 and 16 years increased when GWG exceeded a GWG Z-score of 0 SD (equivalent to 30 kg at 40 weeks); but for gains below a Z-score of 0 SD there was no relationship with child obesity risk. The association between GWG and offspring obesity varied by prepregnancy BMI. Among mothers with a pregravid BMI <25 kg/m², the risk of offspring obesity increased when GWG Z-score exceeded 0 SD, yet among overweight women (BMI ≥25 kg/m²), there was no association between GWG Z-scores and offspring obesity risk. Among lean women, higher GWG may have lasting effects on offspring obesity risk. © 2015 Royal College of Obstetricians and Gynaecologists.
Park, Yoonyoung; Franklin, Jessica M; Schneeweiss, Sebastian; Levin, Raisa; Crystal, Stephen; Gerhard, Tobias; Huybrechts, Krista F
2015-03-01
To determine whether adjustment for prognostic indices specifically developed for nursing home (NH) populations affect the magnitude of previously observed associations between mortality and conventional and atypical antipsychotics. Cohort study. A merged data set of Medicaid, Medicare, Minimum Data Set (MDS), Online Survey Certification and Reporting system, and National Death Index for 2001 to 2005. Dual-eligible individuals aged 65 and older who initiated antipsychotic treatment in a NH (N=75,445). Three mortality risk scores (Mortality Risk Index Score, Revised MDS Mortality Risk Index, Advanced Dementia Prognostic Tool) were derived for each participant using baseline MDS data, and their performance was assessed using c-statistics and goodness-of-fit tests. The effect of adjusting for these indices in addition to propensity scores (PSs) on the association between antipsychotic medication and mortality was evaluated using Cox models with and without adjustment for risk scores. Each risk score showed moderate discrimination for 6-month mortality, with c-statistics ranging from 0.61 to 0.63. There was no evidence of lack of fit. Imbalances in risk scores between conventional and atypical antipsychotic users, suggesting potential confounding, were much lower within PS deciles than the imbalances in the full cohort. Accounting for each score in the Cox model did not change the relative risk estimates: 2.24 with PS-only adjustment versus 2.20, 2.20, and 2.22 after further adjustment for the three risk scores. Although causality cannot be proven based on nonrandomized studies, this study adds to the body of evidence rejecting explanations other than causality for the greater mortality risk associated with conventional antipsychotics than with atypical antipsychotics. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.
Pawar, Shivshakti D; Naik, Jayashri D; Prabhu, Priya; Jatti, Gajanan M; Jadhav, Sachin B; Radhe, B K
2017-01-01
India is fast becoming the diabetes capital of the world. This rapidly increasing incidence of diabetes places an additional burden on health care in India. Unfortunately, half of diabetic individuals are unaware of their diabetic status. Hence, there is an urgent need for an effective screening instrument to identify individuals at risk of diabetes. The aim was to evaluate and compare the diagnostic accuracy and clinical utility of the Indian Diabetes Risk Score (IDRS) and the Finnish Diabetes Risk Score (FINDRISC). This is a retrospective, record-based study of a diabetes detection camp organized by a teaching hospital. Of the 780 people who attended this camp voluntarily, 763 fulfilled the inclusion criteria of the study. The camp pro forma followed the World Health Organization STEPS guidelines for surveillance of noncommunicable diseases and included primary sociodemographic characteristics, physical measurements, and clinical examination, followed by random blood glucose estimation for each individual. The diagnostic accuracy of IDRS and FINDRISC was compared using receiver operating characteristic (ROC) curves. Sensitivity, specificity, likelihood ratios, and positive and negative predictive values were compared, as was the clinical utility index (CUI) of each score. SPSS version 22, Stata 13, and R 3.2.9 were used. Of the 763 individuals, 38 were newly detected diabetics. IDRS classified 347 people and FINDRISC 96 people into the high-risk category for diabetes. The odds ratio for diabetes among high-risk individuals was 10.70 for FINDRISC and 4.79 for IDRS. The areas under the ROC curves of the two scores did not differ significantly (P = 0.98). Sensitivity and specificity were 78.95% and 56.14% for IDRS, and 55.26% and 89.66% for FINDRISC, respectively. The CUI was excellent (0.86) for FINDRISC, whereas for IDRS it was satisfactory (0.54). A Bland-Altman plot and Cohen's kappa suggested fair agreement between the two scores in measuring diabetes risk. Overall, the diagnostic accuracy and clinical utility of FINDRISC were somewhat better than those of IDRS.
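For readers unfamiliar with the metrics compared in this abstract, the short sketch below computes sensitivity, specificity, predictive values, and a clinical utility index from a 2x2 screening table, using one common definition (CUI+ = sensitivity × PPV, CUI− = specificity × NPV); the counts are invented for illustration and are not the camp data.

```python
# Illustrative only: screening metrics from a 2x2 table, including the
# clinical utility index (CUI+ = sensitivity * PPV, CUI- = specificity * NPV).
def screening_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return {"sensitivity": round(sens, 3), "specificity": round(spec, 3),
            "PPV": round(ppv, 3), "NPV": round(npv, 3),
            "CUI_positive": round(sens * ppv, 3), "CUI_negative": round(spec * npv, 3)}

# Invented counts for a hypothetical risk score against newly detected diabetes
print(screening_metrics(tp=30, fp=310, fn=8, tn=415))
```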
Pereira, T; Maldonado, J; Polónia, J; Silva, J A; Morais, J; Rodrigues, T; Marques, M
2014-04-01
HeartSCORE is a tool for assessing cardiovascular risk, basing its estimates on the relative weight of conventional cardiovascular risk factors. However, new markers of cardiovascular risk have been identified, such as aortic pulse wave velocity (PWV). The purpose of this study was to evaluate to what extent the incorporation of PWV in HeartSCORE increases its discriminative power for major cardiovascular events (MACE). This study is a sub-analysis of the EDIVA project, a prospective, multicenter, observational cohort study involving 2200 individuals of Portuguese nationality (1290 men and 910 women) aged between 18 and 91 years (mean 46.33 ± 13.76 years), with annual measurements of PWV (Complior). Only participants above 35 years of age were included in the present re-analysis, resulting in a population of 1709 participants. All MACE - death, cerebrovascular accident, coronary events (coronary heart disease), peripheral arterial disease and renal failure - were recorded. During a mean follow-up period of 21.42 ± 10.76 months, there were 47 non-fatal MACE (2.1% of the sample). Cardiovascular risk was estimated in all patients based on the HeartSCORE risk factors. For the analysis, the refitted HeartSCORE and PWV were divided into three risk categories. The event-free survival at 2 years was 98.6%, 98.0% and 96.1%, respectively, in the low-, intermediate- and high-risk categories of HeartSCORE (log-rank p < 0.001). The multivariable-adjusted hazard ratio (HR) for MACE per 1 standard deviation (SD) of PWV was 1.86 (95% CI 1.37-2.53, p < 0.001). The risk of MACE by tertiles of PWV and risk categories of the HeartSCORE increased linearly, and the risk was particularly pronounced in the highest tertile of PWV for any category of the HeartSCORE, demonstrating an improvement in the prediction of cardiovascular risk. A high discriminative capacity of PWV was evident even in groups of apparently intermediate cardiovascular risk. Measures of model fit, discrimination and calibration revealed an improvement in risk classification when PWV was added to the risk-factor model. The C statistic improved from 0.69 to 0.78 when PWV was added (p = 0.005). The net reclassification improvement (NRI) and integrated discrimination improvement (IDI) were also determined, and provided further evidence of improved discrimination of the outcome when PWV was included in the risk-factor model (NRI = 0.265; IDI = 0.012). The results clearly illustrate the benefits of integrating PWV into the risk assessment strategy advocated by HeartSCORE, insofar as it contributes to a better discriminative capacity for global cardiovascular risk, particularly in individuals with low or moderate cardiovascular risk.
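The following sketch illustrates the reclassification metrics named above; it uses the category-free (continuous) definitions of NRI and IDI and simulated predicted risks for models with and without PWV, so the numbers and variable names are assumptions rather than EDIVA results.

```python
# Illustrative only: category-free NRI and IDI for a baseline risk-factor model
# versus the same model plus PWV, on simulated outcomes and predicted risks.
import numpy as np

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=400)                                       # 1 = MACE during follow-up
p_old = np.clip(0.30 * y + rng.normal(0.30, 0.15, 400), 0.01, 0.99)    # risk-factor model
p_new = np.clip(0.40 * y + rng.normal(0.28, 0.15, 400), 0.01, 0.99)    # risk-factor model + PWV

events, nonevents = (y == 1), (y == 0)
up, down = (p_new > p_old), (p_new < p_old)
nri = (up[events].mean() - down[events].mean()) + (down[nonevents].mean() - up[nonevents].mean())
idi = (p_new[events].mean() - p_old[events].mean()) - (p_new[nonevents].mean() - p_old[nonevents].mean())
print(f"category-free NRI = {nri:.3f}, IDI = {idi:.3f}")
```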
Warren Andersen, Shaneda; Trentham-Dietz, Amy; Gangnon, Ronald E; Hampton, John M; Figueroa, Jonine D; Skinner, Halcyon G; Engelman, Corinne D; Klein, Barbara E; Titus, Linda J; Newcomb, Polly A
2013-07-01
We evaluated whether 13 single nucleotide polymorphisms (SNPs) identified in genome-wide association studies interact with one another and with reproductive and menstrual risk factors in association with breast cancer risk. DNA samples and information on parity, breastfeeding, age at menarche, age at first birth, and age at menopause were collected through structured interviews from 1,484 breast cancer cases and 1,307 controls who participated in a population-based case-control study conducted in three US states. A polygenic score was created as the sum of risk allele copies multiplied by the corresponding log odds estimate. Logistic regression was used to test the associations between SNPs, the score, reproductive and menstrual factors, and breast cancer risk. Nonlinearity of the score was assessed by the inclusion of a quadratic term for polygenic score. Interactions between the aforementioned variables were tested by including a cross-product term in models. We confirmed associations between rs13387042 (2q35), rs4973768 (SLC4A7), rs10941679 (5p12), rs2981582 (FGFR2), rs3817198 (LSP1), rs3803662 (TOX3), and rs6504950 (STXBP4) with breast cancer. Women in the score's highest quintile had 2.2-fold increased risk when compared to women in the lowest quintile (95 % confidence interval: 1.67-2.88). The quadratic polygenic score term was not significant in the model (p = 0.85), suggesting that the established breast cancer loci are not associated with increased risk more than the sum of risk alleles. Modifications of menstrual and reproductive risk factors associations with breast cancer risk by polygenic score were not observed. Our results suggest that the interactions between breast cancer susceptibility loci and reproductive factors are not strong contributors to breast cancer risk.
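The polygenic score construction described here (sum of risk-allele copies weighted by the per-allele log odds ratio) can be written in a few lines; in the sketch below the SNP identifiers are taken from the abstract but the log-OR weights and genotypes are placeholder values, not the study estimates.

```python
# Illustrative only: polygenic score = sum over SNPs of (risk-allele copies * log OR).
# Weights and genotypes below are placeholders, not estimates from the study.
log_or = {"rs13387042": 0.11, "rs2981582": 0.17, "rs3803662": 0.14}   # assumed per-allele log ORs
genotypes = {"rs13387042": 2, "rs2981582": 1, "rs3803662": 0}         # risk-allele copies for one woman

polygenic_score = sum(copies * log_or[snp] for snp, copies in genotypes.items())
print(f"polygenic score = {polygenic_score:.3f}")
```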
Sparks, Lauren A; Trentacosta, Christopher J; Owusu, Erika; McLear, Caitlin; Smith-Darden, Joanne
2018-08-01
Secure attachment relationships have been linked to social competence in at-risk children. In the current study, we examined the role of parent secure base scripts in predicting at-risk kindergarteners' social competence. Parent representations of secure attachment were hypothesized to mediate the relationship between lower family cumulative risk and children's social competence. Participants included 106 kindergarteners and their primary caregivers recruited from three urban charter schools serving low-income families as a part of a longitudinal study. Lower levels of cumulative risk predicted greater secure attachment representations in parents, and scores on the secure base script assessment predicted children's social competence. An indirect relationship between lower cumulative risk and kindergarteners' social competence via parent secure base script scores was also supported. Parent script-based representations of the attachment relationship appear to be an important link between lower levels of cumulative risk and low-income kindergarteners' social competence. Implications of these findings for future interventions are discussed.
Hausfater, Pierre; Amin, Devendra; Amin, Adina; Canavaggio, Pauline; Sauvin, Gabrielle; Bernard, Maguy; Conca, Antoinette; Haubitz, Sebastian; Struja, Tristan; Huber, Andreas; Mueller, Beat; Schuetz, Philipp
2016-01-01
Introduction The inflammatory biomarker pro-adrenomedullin (ProADM) provides additional prognostic information for the risk stratification of general medical emergency department (ED) patients. The aim of this analysis was to develop a triage algorithm for improved prognostication and later use in an interventional trial. Methods We used data from the multi-national, prospective, observational TRIAGE trial including consecutive medical ED patients from Switzerland, France and the United States. We investigated triage effects when adding ProADM at two established cut-offs to a five-level ED triage score with respect to adverse clinical outcome. Results Mortality in the 6586 ED patients showed a step-wise, 25-fold increase from 0.6% to 4.5% and 15.4%, respectively, at the two ProADM cut-offs (≤0.75nmol/L, >0.75–1.5nmol/L, >1.5nmol/L, p ANOVA <0.0001). Risk stratification by combining ProADM within cut-off groups and the triage score resulted in the identification of 1662 patients (25.2% of the population) at a very low risk of mortality (0.3%, n = 5) and 425 patients (6.5% of the population) at very high risk of mortality (19.3%, n = 82). Risk estimation by using ProADM and the triage score from a logistic regression model allowed for a more accurate risk estimation in the whole population with a classification of 3255 patients (49.4% of the population) in the low risk group (0.3% mortality, n = 9) and 1673 (25.4% of the population) in the high-risk group (15.1% mortality, n = 252). Conclusions Within this large international multicenter study, a combined triage score based on ProADM and established triage scores allowed a more accurate mortality risk discrimination. The TRIAGE-ProADM score improved identification of both patients at the highest risk of mortality who may benefit from early therapeutic interventions (rule in), and low risk patients where deferred treatment without negatively affecting outcome may be possible (rule out). PMID:28005916
Makarem, Nour; Lin, Yong; Bandera, Elisa V; Jacques, Paul F; Parekh, Niyati
2015-02-01
This prospective cohort study evaluates associations between healthful behaviors consistent with WCRF/AICR cancer prevention guidelines and obesity-related cancer risk, as a third of cancers are estimated to be preventable. The study sample consisted of adults from the Framingham Offspring cohort (n = 2,983). From 1991 to 2008, 480 incident doctor-diagnosed obesity-related cancers were identified. Data on diet, measured by a food frequency questionnaire, anthropometric measures, and self-reported physical activity, collected in 1991 was used to construct a 7-component score based on recommendations for body fatness, physical activity, foods that promote weight gain, plant foods, animal foods, alcohol, and food preservation, processing, and preparation. Multivariable Cox regression models were used to estimate associations between the computed score, its components, and subcomponents in relation to obesity-related cancer risk. The overall score was not associated with obesity-related cancer risk after adjusting for age, sex, smoking, energy, and preexisting conditions (HR 0.94, 95 % CI 0.86-1.02). When score components were evaluated separately, for every unit increment in the alcohol score, there was 29 % lower risk of obesity-related cancers (HR 0.71, 95 % CI 0.51-0.99) and 49-71 % reduced risk of breast, prostate, and colorectal cancers. Every unit increment in the subcomponent score for non-starchy plant foods (fruits, vegetables, and legumes) among participants who consume starchy vegetables was associated with 66 % reduced risk of colorectal cancer (HR 0.44, 95 % CI 0.22-0.88). Lower alcohol consumption and a plant-based diet consistent with the cancer prevention guidelines were associated with reduced risk of obesity-related cancers in this population.
Rosswog, Carolina; Schmidt, Rene; Oberthuer, André; Juraeva, Dilafruz; Brors, Benedikt; Engesser, Anne; Kahlert, Yvonne; Volland, Ruth; Bartenhagen, Christoph; Simon, Thorsten; Berthold, Frank; Hero, Barbara; Faldum, Andreas; Fischer, Matthias
2017-12-01
Current risk stratification systems for neuroblastoma patients consider clinical, histopathological, and genetic variables, and additional prognostic markers have been proposed in recent years. We here sought to select highly informative covariates in a multistep strategy based on consecutive Cox regression models, resulting in a risk score that integrates hazard ratios of prognostic variables. A cohort of 695 neuroblastoma patients was divided into a discovery set (n=75) for multigene predictor generation, a training set (n=411) for risk score development, and a validation set (n=209). Relevant prognostic variables were identified by stepwise multivariable L1-penalized least absolute shrinkage and selection operator (LASSO) Cox regression, followed by backward selection in multivariable Cox regression, and then integrated into a novel risk score. The variables stage, age, MYCN status, and two multigene predictors, NB-th24 and NB-th44, were selected as independent prognostic markers by LASSO Cox regression analysis. Following backward selection, only the multigene predictors were retained in the final model. Integration of these classifiers in a risk scoring system distinguished three patient subgroups that differed substantially in their outcome. The scoring system discriminated patients with diverging outcome in the validation cohort (5-year event-free survival, 84.9±3.4 vs 63.6±14.5 vs 31.0±5.4; P<.001), and its prognostic value was validated by multivariable analysis. We here propose a translational strategy for developing risk assessment systems based on hazard ratios of relevant prognostic variables. Our final neuroblastoma risk score comprised two multigene predictors only, supporting the notion that molecular properties of the tumor cells strongly impact clinical courses of neuroblastoma patients. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
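To make the variable-selection step concrete, the hedged sketch below fits an L1-penalized (LASSO) Cox model with the lifelines library on simulated data and prints the resulting hazard ratios, which could then serve as weights in a summary risk score; the covariates, penalty strength, and data are assumptions, not the neuroblastoma cohort or the published multigene predictors.

```python
# Illustrative only: L1-penalized (LASSO) Cox regression on simulated data,
# returning hazard ratios that could be integrated into a summary risk score.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "stage": rng.integers(1, 5, n),             # simulated tumor stage
    "age_years": rng.uniform(0, 15, n),         # simulated age at diagnosis
    "mycn_amplified": rng.integers(0, 2, n),    # simulated MYCN status
    "time": rng.exponential(5, n),              # event-free survival time (years)
    "event": rng.integers(0, 2, n),             # 1 = relapse/progression/death
})

lasso_cox = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # pure L1 penalty
lasso_cox.fit(df, duration_col="time", event_col="event")
print(lasso_cox.hazard_ratios_)                        # exp(beta); values near 1 suggest dropping a covariate
```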
Wang, Yuanjia; Chen, Tianle; Zeng, Donglin
2016-01-01
Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for different at-risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using the kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating the covariate-specific hazard function from the population average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and standard conventional approaches. Finally, we analyze data from two real-world biomedical studies in which we use clinical markers and neuroimaging biomarkers to predict age at onset of a disease, and demonstrate the superiority of SVHM in distinguishing high-risk from low-risk subjects.
Kikuchi, Ken; Shigihara, Takeshi; Hashimoto, Yuko; Miyajima, Masayuki; Haga, Nobuhiro; Kojima, Yoshiyuki; Shishido, Fumio
2017-01-01
AIMS: To evaluate the relationship between the apparent diffusion coefficient (ADC) value for bladder cancer and the recurrence/progression risk after transurethral resection (TUR). METHODS: Forty-one patients with initial, non-muscle-invasive bladder cancer underwent MRI from 2009 to 2012. Two radiologists measured ADC values. A pathologist calculated the recurrence/progression scores, and risk was classified based on the scores. Pearson’s correlation was used to analyze the correlations of ADC value with each score and with each risk group, and the optimal cut-off value was established based on receiver operating characteristic (ROC) curve analysis. Furthermore, the relationship between actual recurrence/progression of cases and ADC values was examined using an unpaired t-test. RESULTS: There were significant correlations between ADC value and the recurrence score as well as the progression score (P<0.01, P<0.01, respectively). There were also significant correlations between ADC value and the recurrence risk group as well as the progression risk group (P=0.042, P<0.01, respectively). The ADC cut-off value on ROC analysis was 1.365 (sensitivity 100%; specificity 97.4%) between the low and intermediate recurrence risk groups, 1.024 (sensitivity 47.4%; specificity 100%) between the intermediate and high recurrence risk groups, 1.252 (sensitivity 83.3%; specificity 81.3%) between the low and intermediate progression risk groups, and 0.955 (sensitivity 87.5%; specificity 63.2%) between the intermediate and high progression risk groups. The difference between the ADC values of the recurrence and non-recurrence groups on the unpaired t-test was significant (P<0.05). CONCLUSION: ADC on MRI in bladder cancer could potentially be a useful, non-invasive measurement for estimating the risks of recurrence and progression. PMID:28680010
Oral Hygiene and Cardiometabolic Disease Risk in the Survey of the Health of Wisconsin
VanWormer, Jeffrey J.; Acharya, Amit; Greenlee, Robert T.; Nieto, F. Javier
2012-01-01
Objectives Poor oral health is an increasingly recognized risk factor for cardiovascular disease (CVD) and type 2 diabetes (T2D), but little is known about the association between toothbrushing or flossing and cardiometabolic disease risk. The purpose of this study was to examine the degree to which an oral hygiene index was associated with CVD and T2D risk scores among disease-free adults in the Survey of the Health of Wisconsin. Methods All variables were measured in 2008–2010 in this cross-sectional design. Based on toothbrushing and flossing frequency, an oral hygiene index (poor, fair, good, excellent) was created as the primary predictor variable. The outcomes, CVD and T2D risk scores, were based on previous estimates from large cohort studies. There were 712 and 296 individuals with complete data available for linear regression analyses in the CVD and T2D samples, respectively. Results After covariate adjustment, the final model indicated that participants in the excellent (β±SE=−0.019±0.008, p=0.020) oral hygiene category had a significantly lower CVD risk score as compared to participants in the poor oral hygiene category. Sensitivity analyses indicated that both toothbrushing and flossing were independently associated with the CVD risk score and various modifiable risk factors. Oral hygiene was not significantly associated with the T2D risk score. Conclusions Regular toothbrushing and flossing are associated with a more favorable CVD risk profile, but more experimental research is needed in this area to precisely determine the effects of various oral self-care maintenance behaviors on the control of individual cardiometabolic risk factors. These findings may inform future joint medical-dental initiatives designed to close gaps in the primary prevention of oral and systemic diseases. PMID:23106415
Validation of an imaging based cardiovascular risk score in a Scottish population.
Kockelkoren, Remko; Jairam, Pushpa M; Murchison, John T; Debray, Thomas P A; Mirsadraee, Saeed; van der Graaf, Yolanda; Jong, Pim A de; van Beek, Edwin J R
2018-01-01
A radiological risk score that determines 5-year cardiovascular disease (CVD) risk using routine care CT and patient information readily available to radiologists was previously developed. External validation in a Scottish population was performed to assess the applicability and validity of the risk score in other populations. 2915 subjects aged ≥40 years who underwent routine clinical chest CT scanning for non-cardiovascular diagnostic indications were followed up until first diagnosis of, or death from, CVD. Using a case-cohort approach, all cases and a random sample of 20% of the participant's CT examinations were visually graded for cardiovascular calcifications and cardiac diameter was measured. The radiological risk score was determined using imaging findings, age, gender, and CT indication. Performance on 5-year CVD risk prediction was assessed. 384 events occurred in 2124 subjects during a mean follow-up of 4.25 years (0-6.4 years). The risk score demonstrated reasonable performance in the studied population. Calibration showed good agreement between actual and 5-year predicted risk of CVD. The c-statistic was 0.71 (95%CI:0.67-0.75). The radiological CVD risk score performed adequately in the Scottish population offering a potential novel strategy for identifying patients at high risk for developing cardiovascular disease using routine care CT data. Copyright © 2017 Elsevier B.V. All rights reserved.
Multilocus genetic risk scores for venous thromboembolism risk assessment.
Soria, José Manuel; Morange, Pierre-Emmanuel; Vila, Joan; Souto, Juan Carlos; Moyano, Manel; Trégouët, David-Alexandre; Mateo, José; Saut, Noémi; Salas, Eduardo; Elosua, Roberto
2014-10-23
Genetics plays an important role in venous thromboembolism (VTE). Factor V Leiden (FVL or rs6025) and prothrombin gene G20210A (PT or rs1799963) are the genetic variants currently tested for VTE risk assessment. We hypothesized that primary VTE risk assessment can be improved by using genetic risk scores with more genetic markers than just FVL-rs6025 and prothrombin gene PT-rs1799963. To this end, we have designed a new genetic risk score called Thrombo inCode (TiC). TiC was evaluated in terms of discrimination (Δ of the area under the receiver operating characteristic curve) and reclassification (integrated discrimination improvement and net reclassification improvement). This evaluation was performed using 2 age- and sex-matched case-control populations: SANTPAU (248 cases, 249 controls) and the Marseille Thrombosis Association study (MARTHA; 477 cases, 477 controls). TiC was compared with other literature-based genetic risk scores. TiC including F5 rs6025/rs118203906/rs118203905, F2 rs1799963, F12 rs1801020, F13 rs5985, SERPINC1 rs121909548, and SERPINA10 rs2232698 plus the A1 blood group (rs8176719, rs7853989, rs8176743, rs8176750) improved the area under the curve compared with a model based only on F5-rs6025 and F2-rs1799963 in SANTPAU (0.677 versus 0.575, P<0.001) and MARTHA (0.605 versus 0.576, P=0.008). TiC showed good integrated discrimination improvement of 5.49 (P<0.001) for SANTPAU and 0.96 (P=0.045) for MARTHA. Among the genetic risk scores evaluated, the proportion of VTE risk variance explained by TiC was the highest. We conclude that TiC greatly improves prediction of VTE risk compared with other genetic risk scores. TiC should improve prevention, diagnosis, and treatment of VTE. © 2014 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Calabria, Bianca; Clifford, Anton; Shakeshaft, Anthony P; Conigrave, Katherine M; Simpson, Lynette; Bliss, Donna; Allan, Julaine
2014-09-01
The Alcohol Use Disorders Identification Test (AUDIT) is a 10-item alcohol screener that has been recommended for use in Aboriginal primary health care settings. The time it takes respondents to complete AUDIT, however, has proven to be a barrier to its routine delivery. Two shorter versions, AUDIT-C and AUDIT-3, have been used as screening instruments in primary health care. This paper aims to identify the AUDIT-C and AUDIT-3 cutoff scores that most closely identify individuals classified as being at-risk drinkers, high-risk drinkers, or likely alcohol dependent by the 10-item AUDIT. Two cross-sectional surveys were conducted from June 2009 to May 2010 and from July 2010 to June 2011. Aboriginal Australian participants (N = 156) were recruited through an Aboriginal Community Controlled Health Service, and a community-based drug and alcohol treatment agency in rural New South Wales (NSW), and through community-based Aboriginal groups in Sydney NSW. Sensitivity, specificity, and positive and negative predictive values of each score on the AUDIT-C and AUDIT-3 were calculated, relative to cutoff scores on the 10-item AUDIT for at-risk, high-risk, and likely dependent drinkers. Receiver operating characteristic (ROC) curve analyses were conducted to measure the detection characteristics of AUDIT-C and AUDIT-3 for the three categories of risk. The areas under the receiver operating characteristic (AUROC) curves were high for drinkers classified as being at-risk, high-risk, and likely dependent. Recommended cutoff scores for Aboriginal Australians are as follows: at-risk drinkers AUDIT-C ≥ 5, AUDIT-3 ≥ 1; high-risk drinkers AUDIT-C ≥ 6, AUDIT-3 ≥ 2; and likely dependent drinkers AUDIT-C ≥ 9, AUDIT-3 ≥ 3. Adequate sensitivity and specificity were achieved for recommended cutoff scores. AUROC curves were above 0.90.
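A minimal sketch of the cutoff-selection idea behind these recommendations is given below: for each candidate AUDIT-C cutoff, compute sensitivity and specificity against the full 10-item AUDIT classification and pick the cutoff with the highest Youden index; the scores are simulated and the Youden criterion is one of several reasonable choices, not necessarily the one used in the study.

```python
# Illustrative only: pick a short-form cutoff against a reference-standard
# classification by maximizing the Youden index (sensitivity + specificity - 1).
import numpy as np

rng = np.random.default_rng(5)
audit_c = rng.integers(0, 13, size=156)                            # simulated AUDIT-C scores (0-12)
at_risk = (audit_c + rng.integers(-2, 3, size=156)) >= 6           # simulated 10-item AUDIT classification

best = None
for cutoff in range(1, 13):
    pos = audit_c >= cutoff
    sens = np.mean(pos[at_risk]) if at_risk.any() else 0.0
    spec = np.mean(~pos[~at_risk]) if (~at_risk).any() else 0.0
    youden = sens + spec - 1
    if best is None or youden > best[0]:
        best = (youden, cutoff, sens, spec)
print(f"best cutoff: AUDIT-C >= {best[1]} (sens={best[2]:.2f}, spec={best[3]:.2f})")
```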
Measuring coding intensity in the Medicare Advantage program.
Kronick, Richard; Welch, W Pete
2014-01-01
In 2004, Medicare implemented a system of paying Medicare Advantage (MA) plans that gave them greater incentive than fee-for-service (FFS) providers to report diagnoses. Risk scores for all Medicare beneficiaries 2004-2013 and Medicare Current Beneficiary Survey (MCBS) data, 2006-2011. Change in average risk score for all enrollees and for stayers (beneficiaries who were in either FFS or MA for two consecutive years). Prevalence rates by Hierarchical Condition Category (HCC). Each year the average MA risk score increased faster than the average FFS score. Using the risk adjustment model in place in 2004, the average MA score as a ratio of the average FFS score would have increased from 90% in 2004 to 109% in 2013. Using the model partially implemented in 2014, the ratio would have increased from 88% to 102%. The increase in relative MA scores appears to largely reflect changes in diagnostic coding, not real increases in the morbidity of MA enrollees. In survey-based data for 2006-2011, the MA-FFS ratio of risk scores remained roughly constant at 96%. Intensity of coding varies widely by contract, with some contracts coding very similarly to FFS and others coding much more intensely than the MA average. Underpinning this relative growth in scores is particularly rapid relative growth in a subset of HCCs. Medicare has taken significant steps to mitigate the effects of coding intensity in MA, including implementing a 3.4% coding intensity adjustment in 2010 and revising the risk adjustment model in 2013 and 2014. Given the continuous relative increase in the average MA risk score, further policy changes will likely be necessary.
Reeh, Matthias; Metze, Johannes; Uzunoglu, Faik G; Nentwich, Michael; Ghadban, Tarik; Wellner, Ullrich; Bockhorn, Maximilian; Kluge, Stefan; Izbicki, Jakob R; Vashist, Yogesh K
2016-02-01
Esophageal resection in patients with esophageal cancer (EC) is still associated with high mortality and morbidity rates. We aimed to develop a simple preoperative risk score for the prediction of short-term and long-term outcomes for patients with EC treated by esophageal resection. In total, 498 patients suffering from esophageal carcinoma, who underwent esophageal resection, were included in this retrospective cohort study. Three preoperative esophagectomy risk (PER) groups were defined based on preoperative functional evaluation of different organ systems by validated tools (revised cardiac risk index, model for end-stage liver disease score, and pulmonary function test). Clinicopathological parameters, morbidity, and mortality as well as disease-free survival (DFS) and overall survival (OS) were correlated to the PER score. The PER score significantly predicted the short-term outcome of patients with EC who underwent esophageal resection. PER 2 and PER 3 patients had at least double the risk of morbidity and mortality compared to PER 1 patients. Furthermore, a higher PER score was associated with shorter DFS (P < 0.001) and OS (P < 0.001). The PER score was identified as an independent predictor of tumor recurrence (hazard ratio [HR] 2.1; P < 0.001) and OS (HR 2.2; P < 0.001). The PER score allows preoperative objective allocation of patients with EC into different risk categories for morbidity, mortality, and long-term outcomes. Thus, multicenter studies are needed for independent validation of the PER score.
Zafar, Farhan; Jaquiss, Robert D; Almond, Christopher S; Lorts, Angela; Chin, Clifford; Rizwan, Raheel; Bryant, Roosevelt; Tweddell, James S; Morales, David L S
2018-03-01
In this study we sought to quantify hazards associated with various donor factors into a cumulative risk scoring system (the Pediatric Heart Donor Assessment Tool, or PH-DAT) to predict 1-year mortality after pediatric heart transplantation (PHT). PHT data with complete donor information (5,732) were randomly divided into a derivation cohort and a validation cohort (3:1). From the derivation cohort, donor-specific variables associated with 1-year mortality (exploratory p-value < 0.2) were incorporated into a multivariate logistic regression model. Scores were assigned to independent predictors (p < 0.05) based on relative odds ratios (ORs). The final model had an acceptable predictive value (c-statistic = 0.62). The significant 5 variables (ischemic time, stroke as the cause of death, donor-to-recipient height ratio, donor left ventricular ejection fraction, glomerular filtration rate) were used for the scoring system. The validation cohort demonstrated a strong correlation between the observed and expected rates of 1-year mortality (r = 0.87). The risk of 1-year mortality increases by 11% (OR 1.11 [1.08 to 1.14]; p < 0.001) in the derivation cohort and 9% (OR 1.09 [1.04 to 1.14]; p = 0.001) in the validation cohort with an increase of 1-point in score. Mortality risk increased 5 times from the lowest to the highest donor score in this cohort. Based on this model, a donor score range of 10 to 28 predicted 1-year recipient mortality of 11% to 31%. This novel pediatric-specific, donor risk scoring system appears capable of predicting post-transplant mortality. Although the PH-DAT may benefit organ allocation and assessment of recipient risk while controlling for donor risk, prospective validation of this model is warranted. Copyright © 2018 International Society for the Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.
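The general recipe described here (assign points to independent predictors in proportion to their regression coefficients, then sum them for each donor) can be sketched as follows; the predictor names, log odds ratios, and points scale are invented for illustration and do not reproduce the published PH-DAT.

```python
# Illustrative only: convert log odds ratios into an additive point score and
# total the points for one hypothetical donor. Weights are invented.
log_or = {                                   # assumed log ORs for 1-year mortality
    "ischemic_time_per_hour": 0.18,
    "donor_stroke_death": 0.35,
    "low_donor_lvef": 0.40,
}
unit = min(abs(v) for v in log_or.values())                 # smallest effect = 1 point
points = {k: int(round(v / unit)) for k, v in log_or.items()}

donor = {"ischemic_time_per_hour": 4, "donor_stroke_death": 1, "low_donor_lvef": 0}
total = sum(points[k] * donor[k] for k in donor)
print(points, "-> total donor score =", total)
```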
Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A
2001-01-01
Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
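For context, the T-score referred to above is conventionally computed as the measured BMD minus the mean of the reference range, divided by the reference standard deviation; the sketch below shows how the choice of reference range shifts the resulting T-score, using invented reference means and SDs rather than the study's values.

```python
# Illustrative only: T-score = (measured BMD - reference mean) / reference SD,
# evaluated against three hypothetical reference ranges.
def t_score(bmd, ref_mean, ref_sd):
    return (bmd - ref_mean) / ref_sd

bmd = 0.85  # hypothetical lumbar spine BMD (g/cm^2)
for label, (mean, sd) in {"population-based": (1.05, 0.11),
                          "volunteer": (1.00, 0.12),
                          "healthy subset": (1.03, 0.11)}.items():
    print(f"{label:16s} T-score = {t_score(bmd, mean, sd):+.2f}")
```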
[Prognostic scores for pulmonary embolism].
Junod, Alain
2016-03-23
Nine prognostic scores for pulmonary embolism (PE), based on retrospective and prospective studies published between 2000 and 2014, have been analyzed and compared. Most of them aim at identifying low-risk PE cases in order to validate their ambulatory care. Important differences exist between these scores in the outcomes considered (overall mortality, PE-specific mortality, other complications) and in the size of the low-risk groups. The most popular score appears to be the PESI and its simplified version. Few good-quality studies have tested the applicability of these scores to outpatient care of PE, although this approach is already becoming widespread in medical practice.
Clinical utility of metabolic syndrome severity scores: considerations for practitioners
DeBoer, Mark D; Gurka, Matthew J
2017-01-01
The metabolic syndrome (MetS) is marked by abnormalities in central obesity, high blood pressure, high triglycerides, low high-density lipoprotein-cholesterol, and high fasting glucose and appears to be produced by underlying processes of inflammation, oxidative stress, and adipocyte dysfunction. MetS has traditionally been classified based on dichotomous criteria that deny that MetS-related risk likely exists as a spectrum. Continuous MetS scores provide a way to track MetS-related risk over time. We generated MetS severity scores that are sex- and race/ethnicity-specific, acknowledging that the way MetS is manifested may be different by sex and racial/ethnic subgroup. These scores are correlated with long-term risk for type 2 diabetes mellitus and cardiovascular disease. Clinical use of scores like these provide a potential opportunity to identify patients at highest risk, motivate patients toward lifestyle change, and follow treatment progress over time. PMID:28255250
Corbitt, Holly; Maslen, Cheryl; Prakash, Siddharth; Morris, Shaine A; Silberbach, Michael
2018-02-01
In Turner syndrome, the potential to form thoracic aortic aneurysms requires routine patient monitoring. However, the short stature that typically occurs complicates the assessment of severity and risk because the relationship of body size to aortic dimensions is different in Turner syndrome compared to the general population. Three allometric formulas have been proposed to adjust aortic dimensions, all employing body surface area: the aortic size index, Turner syndrome-specific Z-scores, and Z-scores based on a general pediatric and young adult population. In order to understand the differences between these formulas, we evaluated the relationship between age and aortic size index and compared Turner syndrome-specific Z-scores and pediatric/young adult-based Z-scores in a group of girls and women with Turner syndrome. Our results suggest that the aortic size index is highly age-dependent for those under 15 years, and that Turner-specific Z-scores are significantly lower than Z-scores referenced to the general population. Higher Z-scores derived from the general reference population could result in stigmatization, inappropriate restriction from sports, and an increased risk of unneeded medical or operative treatments. We propose that when estimating aortic dissection risk, clinicians use the Turner syndrome-specific Z-score for those under 15 years of age. © 2017 Wiley Periodicals, Inc.
Mitu, Ovidiu; Roca, Mihai; Floria, Mariana; Petris, Antoniu Octavian; Graur, Mariana; Mitu, Florin
The aim of this study was to evaluate the relationship between SCORE (Systematic Coronary Risk Evaluation Project) risk and multiple methods for determining subclinical cardiovascular disease (CVD), and its accuracy, in a healthy population. This cross-sectional study included 120 completely asymptomatic subjects aged 35-75 years, randomly selected from the general population. The individuals were evaluated clinically and biochemically, and the SCORE risk was computed. Subclinical atherosclerosis was assessed by various methods: carotid ultrasound for intima-media thickness (cIMT) and plaque detection; aortic pulse wave velocity (aPWV); echocardiography - left ventricular mass index (LVMI) and aortic atheromatosis (AA); ankle-brachial index (ABI). The mean SCORE value was 2.95±2.71, with 76% of subjects having SCORE <5. Sixty-four percent of all subjects had subclinical CVD changes, and the SCORE risk was positively correlated with all markers except ABI. In the multivariate analysis, increased cIMT and aPWV were significantly associated with a high SCORE risk (OR 4.14, 95% CI: 1.42-12.15, p=0.009 and OR 1.41, 95% CI: 1.01-1.96, p=0.039, respectively). A positive linear relationship was observed between subclinical CVD in three territories (cIMT, LVMI, aPWV) and SCORE risk (p<0.0001). There was evidence of subclinical CVD in 60% of subjects with a SCORE value <5. As most subjects with a SCORE value <5 have subclinical CVD abnormalities, a more tailored subclinical CVD primary prevention program should be encouraged. Copyright © 2016 Sociedad Española de Arteriosclerosis. Published by Elsevier España, S.L.U. All rights reserved.
Liu, Dan; Hu, Kai; Schmidt, Marie; Müntze, Jonas; Maniuc, Octavian; Gensler, Daniel; Oder, Daniel; Salinger, Tim; Weidemann, Frank; Ertl, Georg; Frantz, Stefan; Wanner, Christoph; Nordbeck, Peter
2018-05-24
To evaluate potential risk factors for stroke or transient ischemic attacks (TIA) and to test the feasibility and efficacy of a Fabry-specific stroke risk score in Fabry disease (FD) patients without atrial fibrillation (AF). FD patients often experience cerebrovascular events (stroke/TIA) at a young age. 159 genetically confirmed FD patients without AF (aged 40 ± 14 years, 42.1% male) were included, and risk factors for stroke/TIA events were determined. All patients were followed up over a median period of 60 (quartiles 35-90) months. The pre-defined primary outcomes included new-onset or recurrent stroke/TIA and all-cause death. Prior stroke/TIA (HR 19.97, P < .001), angiokeratoma (HR 4.06, P = .010), elevated creatinine (HR 3.74, P = .011), significant left ventricular hypertrophy (HR 4.07, P = .017), and reduced global systolic strain (GLS, HR 5.19, P = .002) remained independent risk predictors of new-onset or recurrent stroke/TIA in FD patients without AF. A Fabry-specific score was established based on the above-defined risk factors, proving somewhat superior to the CHA2DS2-VASc score in predicting new-onset or recurrent stroke/TIA in this cohort (AUC 0.87 vs. 0.75, P = .199). Prior stroke/TIA, angiokeratoma, renal dysfunction, left ventricular hypertrophy, and global systolic dysfunction are independent risk factors for new-onset or recurrent stroke/TIA in FD patients without AF. It is feasible to predict new or recurrent cerebral events with the Fabry-specific score based on the above-defined risk factors. Future studies are warranted to test whether FD patients with a high risk of new-onset or recurrent stroke/TIA, as defined by the Fabry-specific score (≥ 2 points), might benefit from antithrombotic therapy. Clinical trial registration HEAL-FABRY (evaluation of HEArt invoLvement in patients with FABRY disease, NCT03362164).
Using a genetic/clinical risk score to stop smoking (GeTSS): randomised controlled trial.
Nichols, John A A; Grob, Paul; Kite, Wendy; Williams, Peter; de Lusignan, Simon
2017-10-23
As genetic tests become cheaper, the possibility of their widespread availability must be considered. This study involves a risk score for lung cancer in smokers that is based roughly half on genetic and half on clinical criteria. The risk score has been shown to be effective as a smoking cessation motivator in hospital-recruited subjects (not actively seeking cessation services). This was an RCT set in a United Kingdom National Health Service (NHS) smoking cessation clinic. Smokers were identified from medical records. Subjects who wanted to participate were randomised to a test group that was administered a gene-based risk test and given a lung cancer risk score, or a control group in which no risk score was performed. Each group had 8 weeks of weekly smoking cessation sessions involving group therapy and advice on smoking cessation pharmacotherapy, with follow-up at 6 months. The primary endpoint was smoking cessation at 6 months. Secondary outcomes included ranking of the risk score and other motivators. 67 subjects attended the smoking cessation clinic. The 6-month quit rates were 29.4% (10/34; 95% CI 14.1-44.7%) for the test group and 42.9% (12/28; 95% CI 24.6-61.2%) for the controls. The difference was not significant. However, the quit rate for test-group subjects with a "very high" risk score was 89% (8/9; 95% CI 68.4-100%), which was significant when compared with the control group (p = 0.023), and test-group subjects with moderate risk scores had a 9.5% quit rate (2/21; 95% CI 2.7-28.9%), which was significantly lower than the 61.5% quit rate among subjects with above-moderate risk scores (8/13; 95% CI 35.5-82.3%; p = 0.03). Only the sub-group with the highest risk score showed an increased quit rate. Controls and test-group subjects with a moderate risk score were relatively unlikely to have achieved and maintained non-smoker status at 6 months. ClinicalTrials.gov ID NCT01176383 (date of registration: 3 August 2010).
Qureshi, Waqas T; Michos, Erin D; Flueckiger, Peter; Blaha, Michael; Sandfort, Veit; Herrington, David M; Burke, Gregory; Yeboah, Joseph
2016-09-01
The increase in statin eligibility under the new cholesterol guidelines is mostly driven by the Pooled Cohort Equation (PCE) criterion (≥7.5% 10-year PCE risk). The impact of replacing the PCE with either the modified Framingham Risk Score (FRS) or the Systematic Coronary Risk Evaluation (SCORE) on atherosclerotic cardiovascular disease (ASCVD) risk assessment and statin eligibility remains unknown. We assessed the comparative benefits of using the PCE, FRS, and SCORE for ASCVD risk assessment in the Multi-Ethnic Study of Atherosclerosis. Of 6,815 participants, 654 (mean age 61.4 ± 10.3 years; 47.1% men; 37.1% whites; 27.2% blacks; 22.3% Hispanics; 12.0% Chinese-Americans) were included in the analysis. Area under the curve (AUC) and decision curve analysis were used to compare the risk scores. Decision curve analysis plots net benefit against probability thresholds, where net benefit = true positive rate - (false positive rate × weighting factor) and the weighting factor = threshold probability / (1 - threshold probability). After a median of 8.6 years, 342 (6.0%) ASCVD events (myocardial infarction, coronary heart disease death, fatal or nonfatal stroke) occurred. All four risk scores had acceptable discriminative ability for incident ASCVD events (AUC [95% CI]: PCE 0.737 [0.713 to 0.762]; FRS 0.717 [0.691 to 0.743]; SCORE (high risk) 0.722 [0.696 to 0.747]; SCORE (low risk) 0.721 [0.696 to 0.746]). At the ASCVD risk threshold recommended for statin eligibility for primary prevention (≥7.5%), the PCE provides the best net benefit. Replacing the PCE with the SCORE (high), SCORE (low), or FRS results in a 2.9%, 8.9%, and 17.1% further increase in statin eligibility, respectively. The PCE has the best discrimination and net benefit for primary ASCVD risk assessment in a US-based multiethnic cohort compared with the SCORE or the FRS. Copyright © 2016 Elsevier Inc. All rights reserved.
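The net-benefit formula quoted in this abstract can be computed directly from a confusion matrix at a chosen threshold. A minimal sketch in Python, assuming binary outcome labels and predicted risks on a 0-1 scale (function and variable names are illustrative, not from the study):

```python
import numpy as np

def net_benefit(y_true, risk_pred, threshold):
    """Net benefit of treating patients whose predicted risk exceeds `threshold`:
    net benefit = TP/n - (FP/n) * threshold / (1 - threshold)."""
    y_true = np.asarray(y_true, dtype=bool)
    treat = np.asarray(risk_pred) >= threshold
    n = len(y_true)
    tp = np.sum(treat & y_true)       # true positives among those treated
    fp = np.sum(treat & ~y_true)      # false positives among those treated
    weight = threshold / (1.0 - threshold)
    return tp / n - (fp / n) * weight

# Example: net benefit at the 7.5% statin-eligibility threshold (toy data)
y = [1, 0, 0, 1, 0, 0, 0, 1]
p = [0.12, 0.03, 0.09, 0.20, 0.05, 0.08, 0.02, 0.30]
print(net_benefit(y, p, 0.075))
```

Comparing this quantity across thresholds for each score reproduces the decision curves described above.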
Real-Time Risk Prediction on the Wards: A Feasibility Study.
Kang, Michael A; Churpek, Matthew M; Zadravecz, Frank J; Adhikari, Richa; Twu, Nicole M; Edelson, Dana P
2016-08-01
Failure to detect clinical deterioration in the hospital is common and associated with poor patient outcomes and increased healthcare costs. Our objective was to evaluate the feasibility and accuracy of real-time risk stratification using the electronic Cardiac Arrest Risk Triage score, an electronic health record-based early warning score. We conducted a prospective black-box validation study. Data were transmitted via HL7 feed in real time to an integration engine and database server wherein the scores were calculated and stored without visualization for clinical providers. The high-risk threshold was set a priori. Timing and sensitivity of electronic Cardiac Arrest Risk Triage score activation were compared with standard-of-care Rapid Response Team activation for patients who experienced a ward cardiac arrest or ICU transfer. Three general care wards at an academic medical center. A total of 3,889 adult inpatients. The system generated 5,925 segments during 5,751 admissions. The area under the receiver operating characteristic curve for electronic Cardiac Arrest Risk Triage score was 0.88 for cardiac arrest and 0.80 for ICU transfer, consistent with previously published derivation results. During the study period, eight of 10 patients with a cardiac arrest had high-risk electronic Cardiac Arrest Risk Triage scores, whereas the Rapid Response Team was activated on two of these patients (p < 0.05). Furthermore, electronic Cardiac Arrest Risk Triage score identified 52% (n = 201) of the ICU transfers compared with 34% (n = 129) by the current system (p < 0.001). Patients met the high-risk electronic Cardiac Arrest Risk Triage score threshold a median of 30 hours prior to cardiac arrest or ICU transfer versus 1.7 hours for standard Rapid Response Team activation. Electronic Cardiac Arrest Risk Triage score identified significantly more cardiac arrests and ICU transfers than standard Rapid Response Team activation and did so many hours in advance.
Quantifying the relative risk of sex offenders: risk ratios for static-99R.
Hanson, R Karl; Babchishin, Kelly M; Helmus, Leslie; Thornton, David
2013-10-01
Given the widespread use of empirical actuarial risk tools in corrections and forensic mental health, it is important that evaluators and decision makers understand how scores relate to recidivism risk. In the current study, we found strong evidence for a relative risk interpretation of Static-99R scores using 8 samples from Canada, the United Kingdom, and Western Europe (N = 4,037 sex offenders). Each one-point increase in Static-99R score was associated with a stable and consistent increase in relative risk (an odds ratio or hazard ratio of approximately 1.4). Hazard ratios from Cox regression were used to calculate risk ratios that can be reported for Static-99R. We recommend that evaluators consider risk ratios as a useful, nonarbitrary metric for quantifying and communicating risk information. To avoid misinterpretation, however, risk ratios should be presented alongside recidivism base rates.
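A constant per-point hazard ratio of about 1.4 implies that relative risk scales multiplicatively with the score difference. A minimal sketch of that interpretation; the 1.4 figure is taken from the abstract, while the choice of reference score is an assumption for illustration:

```python
def relative_risk(score, reference_score=2, hazard_ratio_per_point=1.4):
    """Relative risk versus a reference Static-99R score, assuming a constant
    per-point hazard ratio (approximately 1.4 in the pooled samples)."""
    return hazard_ratio_per_point ** (score - reference_score)

for s in range(0, 11):
    print(s, round(relative_risk(s), 2))
```

As the abstract cautions, such ratios are only interpretable alongside the recidivism base rate of the comparison group.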
Rah, Jeong-Eun; Manger, Ryan P; Yock, Adam D; Kim, Gwe-Ya
2016-12-01
To examine the abilities of traditional failure mode and effects analysis (FMEA) and modified healthcare FMEA (m-HFMEA) scoring methods by comparing the degree of congruence in identifying high-risk failures. The authors applied the two prospective quality management methods to surface image-guided, linac-based radiosurgery (SIG-RS). For the traditional FMEA, decisions on how to improve an operation were based on the risk priority number (RPN). The RPN is the product of three indices: occurrence, severity, and detectability. The m-HFMEA approach utilized two indices, severity and frequency. A risk inventory matrix was divided into four categories: very low, low, high, and very high. For high-risk events, an additional evaluation was performed; based on the criticality of the process, it was decided whether additional safety measures were needed and what they should comprise. The two methods were independently compared to determine if the results and rated risks matched. The authors' results showed an agreement of 85% between the FMEA and m-HFMEA approaches for the top 20 risks of SIG-RS-specific failure modes. The main differences between the two approaches were the distribution of the values and the observation that failure modes (52, 54, 154) with high m-HFMEA scores do not necessarily have high FMEA-RPN scores. In the m-HFMEA analysis, once the risk score is determined, the failure mode is either handled on the basis of the established HFMEA Decision Tree™ or investigated more thoroughly. m-HFMEA is inductive because it requires the identification of consequences from causes, and semi-quantitative because it allows the prioritization of high risks and mitigation measures. It is therefore a useful tool for prospective risk analysis in radiotherapy.
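The two scoring approaches described here can be sketched side by side. In this illustration the 1-10 index scales for the RPN, the 1-4 scales for severity and frequency, and the matrix cut-offs are assumptions for demonstration; the published analysis may use different scales and category boundaries:

```python
def rpn(occurrence, severity, detectability):
    """Traditional FMEA risk priority number (each index scored, e.g., 1-10)."""
    return occurrence * severity * detectability

def m_hfmea_category(severity, frequency, cutoffs=(4, 8, 12)):
    """Modified healthcare FMEA: map severity x frequency (e.g., 1-4 each)
    to a risk inventory category. Cut-offs here are illustrative only."""
    hazard = severity * frequency
    labels = ("very low", "low", "high", "very high")
    for label, cut in zip(labels, cutoffs + (float("inf"),)):
        if hazard <= cut:
            return label

print(rpn(occurrence=3, severity=8, detectability=5))    # 120
print(m_hfmea_category(severity=4, frequency=3))          # "high"
```

Because the two scores weight information differently, a failure mode can rank high on one and not the other, which is exactly the discordance the study reports.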
Davies, John R; Chang, Yu-mei; Bishop, D Timothy; Armstrong, Bruce K; Bataille, Veronique; Bergman, Wilma; Berwick, Marianne; Bracci, Paige M; Elwood, J Mark; Ernstoff, Marc S; Green, Adele; Gruis, Nelleke A; Holly, Elizabeth A; Ingvar, Christian; Kanetsky, Peter A; Karagas, Margaret R; Lee, Tim K; Le Marchand, Loïc; Mackie, Rona M; Olsson, Håkan; Østerlind, Anne; Rebbeck, Timothy R; Reich, Kristian; Sasieni, Peter; Siskind, Victor; Swerdlow, Anthony J; Titus, Linda; Zens, Michael S; Ziegler, Andreas; Gallagher, Richard P.; Barrett, Jennifer H; Newton-Bishop, Julia
2015-01-01
Background We report the development of a cutaneous melanoma risk algorithm based upon 7 factors: hair colour, skin type, family history, freckling, nevus count, number of large nevi and history of sunburn, intended to form the basis of a self-assessment webtool for the general public. Methods Predicted odds of melanoma were estimated by analysing a pooled dataset from 16 case-control studies using logistic random coefficients models. Risk categories were defined based on the distribution of the predicted odds in the controls from these studies. Imputation was used to estimate missing data in the pooled datasets. The 30th, 60th and 90th centiles were used to distribute individuals into four risk groups for their age, sex and geographic location. Cross-validation was used to test the robustness of the thresholds for each group by leaving out each study one by one. Performance of the model was assessed in an independent UK case-control study dataset. Results Cross-validation confirmed the robustness of the threshold estimates. Cases and controls were well discriminated in the independent dataset (area under the curve 0.75, 95% CI 0.73-0.78). 29% of cases were in the highest risk group compared with 7% of controls, and 43% of controls were in the lowest risk group compared with 13% of cases. Conclusion We have identified a composite score representing an estimate of relative risk and successfully validated this score in an independent dataset. Impact This score may be a useful tool to inform members of the public about their melanoma risk. PMID:25713022
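Grouping individuals by the 30th, 60th and 90th centiles of the predicted odds among controls, as described in the Methods, can be sketched as below. The data and the four group labels are hypothetical; only the percentile-based thresholding is taken from the abstract:

```python
import numpy as np

def risk_group(predicted_odds, control_odds):
    """Assign a risk group using the 30th, 60th and 90th centiles of the
    predicted odds in controls (illustrative group names)."""
    p30, p60, p90 = np.percentile(control_odds, [30, 60, 90])
    if predicted_odds < p30:
        return "lowest"
    elif predicted_odds < p60:
        return "low-intermediate"
    elif predicted_odds < p90:
        return "high-intermediate"
    return "highest"

rng = np.random.default_rng(0)
controls = rng.lognormal(mean=-3.0, sigma=0.8, size=1000)  # hypothetical control odds
print(risk_group(0.15, controls))
```

In the pooled analysis the thresholds were additionally specific to age, sex and geographic location, so one such set of centiles would be computed per stratum.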
Associations of CAIDE Dementia Risk Score with MRI, PIB-PET measures, and cognition.
Stephen, Ruth; Liu, Yawu; Ngandu, Tiia; Rinne, Juha O; Kemppainen, Nina; Parkkola, Riitta; Laatikainen, Tiina; Paajanen, Teemu; Hänninen, Tuomo; Strandberg, Timo; Antikainen, Riitta; Tuomilehto, Jaakko; Keinänen Kiukaanniemi, Sirkka; Vanninen, Ritva; Helisalmi, Seppo; Levälahti, Esko; Kivipelto, Miia; Soininen, Hilkka; Solomon, Alina
2017-01-01
The CAIDE Dementia Risk Score is the first validated tool for estimating dementia risk based on a midlife risk profile. This observational study investigated longitudinal associations of the CAIDE Dementia Risk Score with brain MRI, amyloid burden evaluated with PIB-PET, and detailed cognition measures. FINGER participants were at-risk elderly individuals without dementia. The CAIDE Risk Score was calculated using data from previous national surveys (mean age 52.4 years). In connection with the baseline FINGER visit (on average 17.6 years later, mean age 70.1 years), 132 participants underwent MRI scans, and 48 underwent PIB-PET scans. All 1,260 participants were cognitively assessed (Neuropsychological Test Battery, NTB). Neuroimaging assessments included brain cortical thickness and volumes (Freesurfer 5.0.3), visually rated medial temporal atrophy (MTA), white matter lesions (WML), and amyloid accumulation. Higher CAIDE Dementia Risk Score was related to more pronounced deep WML (OR 1.22, 95% CI 1.05-1.43), lower total gray matter (β-coefficient -0.29, p = 0.001) and hippocampal volume (β-coefficient -0.28, p = 0.003), lower cortical thickness (β-coefficient -0.19, p = 0.042), and poorer cognition (β-coefficients -0.31 for total NTB score, -0.25 for executive functioning, -0.33 for processing speed, and -0.20 for memory, all p < 0.001). Higher CAIDE Dementia Risk Score including APOE genotype was additionally related to more pronounced MTA (OR 1.15, 95% CI 1.00-1.30). No associations were found with periventricular WML or amyloid accumulation. The CAIDE Dementia Risk Score was related to indicators of cerebrovascular changes and neurodegeneration on MRI, and to cognition. The lack of association with brain amyloid accumulation needs to be verified in studies with larger sample sizes.
Shivappa, Nitin; Hebert, James R; Anderson, Lesley A; Shrubsole, Martha J; Murray, Liam J; Getty, Lauren B; Coleman, Helen G
2017-05-01
The dietary inflammatory index (DII™) is a novel composite score based on a range of nutrients and foods known to be associated with inflammation. DII scores have been linked to the risk of a number of cancers, including oesophageal squamous cell cancer and oesophageal adenocarcinoma (OAC). Given that OAC stems from acid reflux and that the oesophageal epithelium undergoes a metaplasia-dysplasia transition from the resulting inflammation, it is plausible that a high DII score (indicating a pro-inflammatory diet) may exacerbate risk of OAC and its precursor conditions. The aim of this analytical study was to explore the association between the energy-adjusted dietary inflammatory index (E-DII™) in relation to risk of reflux oesophagitis, Barrett's oesophagus and OAC. Between 2002 and 2005, reflux oesophagitis (n 219), Barrett's oesophagus (n 220) and OAC (n 224) patients, and population-based controls (n 256), were recruited to the Factors influencing the Barrett's Adenocarcinoma Relationship study in Northern Ireland and the Republic of Ireland. E-DII scores were derived from a 101-item FFQ. Unconditional logistic regression analysis was applied to determine odds of oesophageal lesions according to E-DII intakes, adjusting for potential confounders. High E-DII scores were associated with borderline increase in odds of reflux oesophagitis (OR 1·87; 95 % CI 0·93, 3·73), and significantly increased odds of Barrett's oesophagus (OR 2·05; 95 % CI 1·22, 3·47), and OAC (OR 2·29; 95 % CI 1·32, 3·96), when comparing the highest with the lowest tertiles of E-DII scores. In conclusion, a pro-inflammatory diet may exacerbate the risk of the inflammation-metaplasia-adenocarcinoma pathway in oesophageal carcinogenesis.
Perry, Jeffrey J; Losier, Justin H; Stiell, Ian G; Sharma, Mukul; Abdulaziz, Kasim
2016-01-01
Five percent of transient ischemic attack (TIA) patients have a subsequent stroke within 7 days. The Canadian TIA Score uses clinical findings to calculate the risk of subsequent stroke within 7 days. Our objectives were to assess 1) anticipated use; 2) component face validity; 3) risk strata for stroke within 7 days; and 4) actions required for a given risk of subsequent stroke. After a rigorous development process, a survey questionnaire was administered to a random sample of 300 emergency physicians selected from those registered in a national medical directory. The surveys were distributed using a modified Dillman technique. From a total of 271 eligible surveys, we received 131 (48.3%) completed surveys; 96.2% of emergency physicians would use a validated Canadian TIA Score, and 8 of 13 components comprising the Canadian TIA Score were rated as Very Important or Important by survey respondents. Risk categories for subsequent stroke within 7 days were defined, ranging from minimal risk to high risk. A validated Canadian TIA Score will likely be used by emergency physicians. Most components of the TIA Score have high face validity. Risk strata are definable, which may allow physicians to determine immediate actions, based on subsequent stroke risk, in the emergency department.
Wang, Hai-Qing; Yang, Jian; Yang, Jia-Yin; Wang, Wen-Tao; Yan, Lu-Nan
2015-08-01
Liver resection is a major operation that often requires perioperative blood transfusion. Predicting the need for blood transfusion in patients undergoing liver resection is therefore of great importance. The present study aimed to develop and validate a model for predicting transfusion requirement in HBV-related hepatocellular carcinoma patients undergoing liver resection. A total of 1543 consecutive liver resections were included in the study. A randomly selected sample of 1080 cases (70% of the study cohort) was used to develop a predictive score for transfusion requirement, and the remaining 30% (n=463) was used to validate the score. Based on preoperative and predictable intraoperative parameters, logistic regression was used to identify risk factors and to create an integer score for the prediction of transfusion requirement. Extrahepatic procedure, major liver resection, hemoglobin level, and platelet count were identified as independent predictors of transfusion requirement by logistic regression analysis. A scoring system integrating these 4 factors was stratified into three groups that predicted the risk of transfusion, with transfusion rates of 11.4%, 24.7%, and 57.4% in the low-, moderate-, and high-risk groups, respectively. The prediction model appeared accurate, with good discriminatory ability, generating an area under the receiver operating characteristic curve of 0.736 in the development set and 0.709 in the validation set. We have developed and validated an integer-based risk score to predict perioperative transfusion for patients undergoing liver resection in a high-volume surgical center. This score allows identification of patients at high risk and may alter transfusion practices.
Li, Jiong; Cnattingus, Sven; Gissler, Mika; Vestergaard, Mogens; Obel, Carsten; Ahrensberg, Jette; Olsen, Jørn
2012-01-01
The aetiology of childhood cancer remains largely unknown but recent research indicates that uterine environment plays an important role. We aimed to examine the association between the Apgar score at 5 min after birth and the risk of childhood cancer. Nationwide population-based cohort study. Nationwide register data in Denmark and Sweden. All live-born singletons born in Denmark from 1978 to 2006 (N=1 771 615) and in Sweden from 1973 to 2006 (N=3 319 573). Children were followed up from birth to 14 years of age. Rates and HRs for all childhood cancers and for specific childhood cancers. A total of 8087 children received a cancer diagnosis (1.6 per 1000). Compared to children with a 5-min Apgar score of 9-10, children with a score of 0-5 had a 46% higher risk of cancer (adjusted HR 1.46, 95% CI 1.15 to 1.89). The potential effect of low Apgar score on overall cancer risk was mostly confined to children diagnosed before 6 months of age. Children with an Apgar score of 0-5 had higher risks for several specific childhood cancers including Wilms' tumour (HR 4.33, 95% CI 2.42 to 7.73). A low 5 min Apgar score was associated with a higher risk of childhood cancers diagnosed shortly after birth. Our data suggest that environmental factors operating before or during delivery may play a role in the development of several specific childhood cancers.
Jaja, Blessing N R; Schweizer, Tom A; Claassen, Jan; Le Roux, Peter; Mayer, Stephan A; Macdonald, R Loch
2018-06-01
Seizure is a significant complication in patients during acute admission for aneurysmal SAH and can contribute to poor outcomes. Treatment strategies to optimize management will benefit from methods to better identify at-risk patients. To develop and validate a risk score for convulsive seizure during acute admission for SAH. A risk score was developed in 1500 patients from a single tertiary hospital and externally validated in 852 patients. Candidate predictors were identified by systematic review of the literature and were included in a backward stepwise logistic regression model with in-hospital seizure as a dependent variable. The risk score was assessed for discrimination using the area under the receiver operator characteristics curve (AUC) and for calibration using a goodness-of-fit test. The SAFARI score, based on 4 items (age ≥ 60 yr, seizure occurrence before hospitalization, ruptured aneurysm in the anterior circulation, and hydrocephalus requiring cerebrospinal fluid diversion), had AUC = 0.77, 95% confidence interval (CI): 0.73-0.82 in the development cohort. The validation cohort had AUC = 0.65, 95% CI 0.56-0.73. A calibrated increase in the risk of seizure was noted with increasing SAFARI score points. The SAFARI score is a simple tool that adequately stratified SAH patients according to their risk for seizure using a few readily derived predictor items. It may contribute to a more individualized management of seizure following SAH.
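The abstract lists the four SAFARI items but not their point weights, so the sketch below assumes one point per item purely for illustration; the published score may assign different values to each item:

```python
def safari_score(age, seizure_before_admission, anterior_circulation_aneurysm,
                 hydrocephalus_requiring_csf_diversion):
    """Illustrative SAFARI-style score: one point per item present.
    (The published score may weight the items differently.)"""
    points = 0
    points += 1 if age >= 60 else 0
    points += 1 if seizure_before_admission else 0
    points += 1 if anterior_circulation_aneurysm else 0
    points += 1 if hydrocephalus_requiring_csf_diversion else 0
    return points

print(safari_score(age=67, seizure_before_admission=False,
                   anterior_circulation_aneurysm=True,
                   hydrocephalus_requiring_csf_diversion=True))  # 3
```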
Krug, Utz; Röllig, Christoph; Koschmieder, Anja; Heinecke, Achim; Sauerland, Maria Cristina; Schaich, Markus; Thiede, Christian; Kramer, Michael; Braess, Jan; Spiekermann, Karsten; Haferlach, Torsten; Haferlach, Claudia; Koschmieder, Steffen; Rohde, Christian; Serve, Hubert; Wörmann, Bernhard; Hiddemann, Wolfgang; Ehninger, Gerhard; Berdel, Wolfgang E; Büchner, Thomas; Müller-Tidow, Carsten
2010-12-11
About 50% of patients (age ≥60 years) who have acute myeloid leukaemia and are otherwise medically healthy (ie, able to undergo intensive chemotherapy) achieve a complete remission (CR) after intensive chemotherapy, but with a substantially increased risk of early death (ED) compared with younger patients. We verified the association of standard clinical and laboratory variables with CR and ED and developed a web-based application for risk assessment of intensive chemotherapy in these patients. Multivariate regression analysis was used to develop risk scores with or without knowledge of the cytogenetic and molecular risk profiles for a cohort of 1406 patients (aged ≥60 years) with acute myeloid leukaemia, but otherwise medically healthy, who were treated with two courses of intensive induction chemotherapy (tioguanine, standard-dose cytarabine, and daunorubicin followed by high-dose cytarabine and mitoxantrone; or with high-dose cytarabine and mitoxantrone in the first and second induction courses) in the German Acute Myeloid Leukaemia Cooperative Group 1999 study. Risk prediction was validated in an independent cohort of 801 patients (aged >60 years) with acute myeloid leukaemia who were given two courses of cytarabine and daunorubicin in the Acute Myeloid Leukaemia 1996 study. Body temperature, age, de-novo leukaemia versus leukaemia secondary to cytotoxic treatment or an antecedent haematological disease, haemoglobin, platelet count, fibrinogen, and serum concentration of lactate dehydrogenase were significantly associated with CR or ED. The probability of CR with knowledge of cytogenetic and molecular risk (score 1) ranged from 12% to 91%, and without knowledge (score 2) from 21% to 80%. The predicted risk of ED ranged from 6% to 69% for score 1 and from 7% to 63% for score 2. The predictive power of the risk scores was confirmed in the independent patient cohort (CR score 1, from 10% to 91%; CR score 2, from 16% to 80%; ED score 1, from 6% to 69%; and ED score 2, from 7% to 61%). The scores can be used to predict the probability of CR and the risk of ED in older, but otherwise medically healthy, patients with acute myeloid leukaemia for whom intensive induction chemotherapy is planned. This information can help physicians with difficult treatment decisions for these patients. Deutsche Krebshilfe and Deutsche Forschungsgemeinschaft. Copyright © 2010 Elsevier Ltd. All rights reserved.
Sussman, Jeremy B; Wiitala, Wyndy L; Zawistowski, Matthew; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A
2017-09-01
Accurately estimating cardiovascular risk is fundamental to good decision-making in cardiovascular disease (CVD) prevention, but risk scores developed in one population often perform poorly in dissimilar populations. We sought to examine whether a large integrated health system can use its electronic health data to better predict individual patients' risk of developing CVD. We created a cohort of all patients aged 45-80 who used Department of Veterans Affairs (VA) ambulatory care services in 2006 with no history of CVD, heart failure, or loop diuretic use. Our outcome variable was new-onset CVD in 2007-2011. We then developed a series of recalibrated scores, including a fully refit "VA Risk Score-CVD (VARS-CVD)." We tested the different scores using standard measures of prediction quality. For the 1,512,092 patients in the study, the atherosclerotic cardiovascular disease risk score had discrimination similar to that of the VARS-CVD (c-statistic of 0.66 in men and 0.73 in women), but the atherosclerotic cardiovascular disease model had poor calibration, predicting 63% more events than observed. Calibration was excellent in the fully recalibrated VARS-CVD tool, but the simpler techniques tested proved less reliable. We found that local electronic health record data can be used to estimate CVD risk better than an established risk score derived from research populations. Recalibration improved estimates dramatically, and the type of recalibration was important. Such tools can also be integrated easily into a health system's electronic health record and can be more readily updated.
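One common form of recalibration, consistent with the general approach described here though not necessarily the study's exact method, is to refit the intercept and slope of an existing score's linear predictor against local outcomes. A minimal sketch with scikit-learn on hypothetical data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def recalibrate(linear_predictor, observed_events):
    """Logistic recalibration: refit intercept and slope on the original
    score's linear predictor (log-odds), preserving its patient ranking."""
    lp = np.asarray(linear_predictor).reshape(-1, 1)
    model = LogisticRegression()
    model.fit(lp, observed_events)
    return model  # model.predict_proba(lp)[:, 1] gives recalibrated risks

rng = np.random.default_rng(1)
lp = rng.normal(-2.5, 1.0, size=5000)                      # hypothetical log-odds
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * lp - 0.5))))   # miscalibrated outcomes
recal = recalibrate(lp, y)
print(recal.intercept_, recal.coef_)
```

Recalibration of this kind leaves discrimination (the c-statistic) essentially unchanged while correcting systematic over- or under-prediction, which matches the pattern reported above.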
[Study on the HIV infection risk model among men who have sex with men in Guangzhou].
Hu, Pei; Zhong, Fei; Cheng, Wei-Bin; Xu, Hui-Fang; Ling, Li
2012-07-01
To develop a human immunodeficiency virus (HIV) infection risk appraisal model suitable for men who have sex with men (MSM) in Guangzhou, and to provide a tool for following up the outcomes of health education and behavior intervention. A cross-sectional study was conducted in Guangzhou from 2008 to 2010. Based on HIV surveillance data, the main risk factors for HIV infection among MSM were screened by means of logistic regression. Relative risks were transformed into risk scores using statistical models. Individual risk scores, group risk scores, and individual infection risk relative to the general MSM population could then be calculated from the exposure rates of those risk factors recorded in the surveillance programs. Risk factors related to HIV infection among MSM and a quantitative assessment standard (risk scores and a risk score table for population groups) were established by multiple logistic regression; the factors included age, location of registered residence, monthly income, main venue for finding sexual partners, HIV testing in the past year, age at first sexual intercourse, rate of condom use in the past six months, symptoms related to sexually transmitted diseases (STDs), and syphilis in particular. The average risk score of the population was 6.06, with mean scores of 18.08 for HIV-positive and 3.10 for HIV-negative individuals (P < 0.001). The rates of HIV infection in the different score groups were 0.9%, 2.0%, 7.0%, 14.4% and 33.3%, respectively. The sensitivity and specificity of the score's predictions were 54.4% and 75.4%, respectively, with an accuracy of 74.2%. The HIV infection risk model can quantify and classify an individual's infection risk and related factors among MSM more directly and effectively, helping individuals identify their high-risk behaviors and lifestyles. It could also serve as an important tool for personalized HIV health education and behavior intervention programs.
[Clinical scores for the risk of bleeding with or without anticoagulation].
Junod, Alain
2016-09-14
The assessment of hemorrhagic risk related to therapeutic anticoagulation is made difficult by the variety of existing drugs and the heterogeneity of treatment strategies and their duration. Six prognostic scores were analyzed. For three of them, external validation revealed a marked decrease in discriminative power. One British score, Qbleed, based on data from more than 1 million ambulatory patients, repeatedly satisfied quality criteria. Two scores also addressed the bleeding risk during hospital admission for acute medical disease. The development of new, effective anticoagulants with fewer side effects is more likely to solve this problem than the production of new clinical scores.
Tian, Xiubiao; Liu, Yan; Han, Ying; Shi, Jieli; Zhu, Tiehong
2017-06-11
BACKGROUND Dysglycemia (pre-diabetes or diabetes) in young adults has increased rapidly. However, the risk scores for detecting dysglycemia in oil field staff and workers in China are limited. This study developed a risk score for the early identification of dysglycemia based on epidemiological and health examination data in an oil field working-age population with increased risk of diabetes. MATERIAL AND METHODS Multivariable logistic regression was used to develop the risk score model in a population-based, cross-sectional study. All subjects completed the questionnaires and underwent physical examination and oral glucose tolerance tests. The performance of the risk score models was evaluated using the area under the receiver operating characteristic curve (AUC). RESULTS The study population consisted of 1995 participants, 20-64 years old (49.4% males), with undiagnosed diabetes or pre-diabetes who underwent periodic health examinations from March 2014 to June 2015 in Dagang oil field, Tianjin, China. Age, sex, body mass index, history of high blood glucose, smoking, triglyceride, and fasting plasma glucose (FPG) constituted the Dagang dysglycemia risk score (Dagang DRS) model. The performance of Dagang DRS was superior to m-FINDRISC (AUC: 0.791; 95% confidence interval (CI), 0.773-0.809 vs. 0.633; 95% CI, 0.611-0.654). At the cut-off value of 5.6 mmol/L, the Dagang DRS (AUC: 0.616; 95% CI, 0.592-0.641) was better than the FPG value alone (AUC: 0.571; 95% CI, 0.546-0.596) in participants with FPG <6.1 mmol/L (n=1545, P=0.028). CONCLUSIONS Dagang DRS is a valuable tool for detecting dysglycemia, especially when FPG <6.1 mmol/L, in oil field workers in China.
Helland, Turid; Tjus, Tomas; Hovden, Marit; Ofte, Sonja; Heimann, Mikael
2011-01-01
This longitudinal study focused on the effects of two different principles of intervention in children at risk of developing dyslexia from 5 to 8 years old. The children were selected on the basis of a background questionnaire given to parents and preschool teachers, with cognitive and functional magnetic resonance imaging results substantiating group differences in neuropsychological processes associated with phonology, orthography, and phoneme-grapheme correspondence (i.e., alphabetic principle). The two principles of intervention were bottom-up (BU), "from sound to meaning", and top-down (TD), "from meaning to sound." Thus, four subgroups were established: risk/BU, risk/TD, control/BU, and control/TD. Computer-based training took place for 2 months every spring, and cognitive assessments were performed each fall of the project period. Measures of preliteracy skills for reading and spelling were phonological awareness, working memory, verbal learning, and letter knowledge. Literacy skills were assessed by word reading and spelling. At project end the control group scored significantly above age norm, whereas the risk group scored within the norm. In the at-risk group, training based on the BU principle had the strongest effects on phonological awareness and working memory scores, whereas training based on the TD principle had the strongest effects on verbal learning, letter knowledge, and literacy scores. It was concluded that appropriate, specific, data-based intervention starting in preschool can mitigate literacy impairment and that interventions should contain BU training for preliteracy skills and TD training for literacy training.
Efficacy of functional movement screening for predicting injuries in coast guard cadets.
Knapik, Joseph J; Cosio-Lima, Ludimila M; Reynolds, Katy L; Shumway, Richard S
2015-05-01
Functional movement screening (FMS) examines the ability of individuals to perform highly specific movements with the aim of identifying individuals who have functional limitations or asymmetries. It is assumed that individuals who can more effectively accomplish the required movements have a lower injury risk. This study determined the ability of FMS to predict injuries in United States Coast Guard (USCG) cadets. Seven hundred seventy male and 275 female USCG freshman cadets were administered the 7 FMS tests before the physically intense 8-week Summer Warfare Annual Basic (SWAB) training. Physical training-related injuries were recorded during SWAB training. Cumulative injury incidence was calculated at various FMS cutpoint scores. The ability of the FMS total score to predict injuries was examined by calculating sensitivity and specificity. The FMS cutpoint that maximized sensitivity and specificity was determined from Youden's index (sensitivity + specificity - 1). For men, FMS scores ≤ 12 were associated with higher injury risk than scores >12; for women, FMS scores ≤ 15 were associated with higher injury risk than scores >15. Youden's index indicated that the optimal FMS cutpoint was ≤ 11 for men (22% sensitivity, 87% specificity) and ≤ 14 for women (60% sensitivity, 61% specificity). Functional movement screening demonstrated moderate prognostic accuracy for determining injury risk among female Coast Guard cadets but relatively low accuracy among male cadets. Attempting to predict injury risk based on the FMS test seems to have some limited promise based on the present and past investigations.
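The cutpoint search via Youden's index described above can be computed directly from the screening scores and injury labels. A minimal sketch with hypothetical data (the flagging convention, score <= cutpoint marks higher risk, follows the abstract):

```python
import numpy as np

def best_cutpoint(scores, injured):
    """Find the FMS cutpoint (score <= cut flags 'higher risk') that
    maximizes Youden's index = sensitivity + specificity - 1."""
    scores, injured = np.asarray(scores), np.asarray(injured, dtype=bool)
    best = None
    for cut in np.unique(scores):
        flagged = scores <= cut
        sens = np.mean(flagged[injured])       # flagged among injured
        spec = np.mean(~flagged[~injured])     # not flagged among uninjured
        youden = sens + spec - 1
        if best is None or youden > best[1]:
            best = (cut, youden, sens, spec)
    return best

scores = [10, 12, 13, 14, 15, 16, 17, 18, 11, 12, 19, 20]   # hypothetical FMS totals
injured = [1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
print(best_cutpoint(scores, injured))
```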
Kaplan, David J.; Boorjian, Stephen A.; Ruth, Karen; Egleston, Brian L.; Chen, David Y.T.; Viterbo, Rosalia; Uzzo, Robert G.; Buyyounouski, Mark K.; Raysor, Susan; Giri, Veda N.
2009-01-01
Introduction Clinical factors in addition to PSA have been evaluated to improve risk assessment for prostate cancer. The Prostate Cancer Prevention Trial (PCPT) risk calculator provides an assessment of prostate cancer risk based on age, PSA, race, prior biopsy, and family history. This study evaluated the risk calculator in a screening cohort of young, racially diverse, high-risk men with a low baseline PSA enrolled in the Prostate Cancer Risk Assessment Program (PRAP). Patients and Methods PRAP enrolls men aged 35-69 years who are African-American, have a family history of prostate cancer, or have a known BRCA1/2 mutation. PCPT risk scores were determined for PRAP participants and compared to observed prostate cancer rates. Results 624 participants were evaluated, including 382 (61.2%) African-American men and 375 (60%) men with a family history of prostate cancer. Median age was 49.0 years (range 34.0-69.0), and median PSA was 0.9 (range 0.1-27.2). PCPT risk score correlated with prostate cancer diagnosis: the median baseline risk score in patients diagnosed with prostate cancer was 31.3%, versus 14.2% in patients not diagnosed with prostate cancer (p<0.0001). The PCPT calculator similarly stratified the risk of diagnosis of Gleason score ≥7 disease: the median risk score was 36.2% in patients diagnosed with Gleason ≥7 prostate cancer versus 15.2% in all other participants (p<0.0001). Conclusion The PCPT risk calculator score was found to stratify prostate cancer risk in a cohort of young, primarily African-American men with a low baseline PSA. These results support further evaluation of this predictive tool for prostate cancer risk assessment in high-risk men. PMID:19709072
Hobbs, F D R; Roalfe, A K; Lip, G Y H; Fletcher, K; Fitzmaurice, D A; Mant, J
2011-06-23
To compare the predictive power of the main existing and recently proposed schemes for stratification of risk of stroke in older patients with atrial fibrillation. Comparative cohort study of eight risk stratification scores. Trial of thromboprophylaxis in stroke, the Birmingham Atrial Fibrillation in the Aged (BAFTA) trial. 665 patients aged 75 or over with atrial fibrillation based in the community who were randomised to the BAFTA trial and were not taking warfarin throughout or for part of the study period. Event rates of stroke and thromboembolism. 54 (8%) patients had an ischaemic stroke, four (0.6%) had a systemic embolism, and 13 (2%) had a transient ischaemic attack. The distribution of patients classified into the three risk categories (low, moderate, high) was similar across three of the risk stratification scores (revised CHADS(2), NICE, ACC/AHA/ESC), with most patients categorised as high risk (65-69%, n = 457-460) and the remaining classified as moderate risk. The original CHADS(2) (Congestive heart failure, Hypertension, Age ≥ 75 years, Diabetes, previous Stroke) score identified the lowest number as high risk (27%, n = 180). The incremental risk scores of CHADS(2), Rietbrock modified CHADS(2), and CHA(2)DS(2)-VASc (CHA(2)DS(2)-Vascular disease, Age 65-74 years, Sex) failed to show an increase in risk at the upper range of scores. The predictive accuracy was similar across the tested schemes with C statistic ranging from 0.55 (original CHADS(2)) to 0.62 (Rietbrock modified CHADS(2)), with all except the original CHADS(2) predicting better than chance. Bootstrapped paired comparisons provided no evidence of significant differences between the discriminatory ability of the schemes. Based on this single trial population, current risk stratification schemes in older people with atrial fibrillation have only limited ability to predict the risk of stroke. Given the systematic undertreatment of older people with anticoagulation, and the relative safety of warfarin versus aspirin in those aged over 70, there could be a pragmatic rationale for classifying all patients over 75 as "high risk" until better tools are available.
Gómez-Pardo, Emilia; Fernández-Alvira, Juan Miguel; Vilanova, Marta; Haro, Domingo; Martínez, Ramona; Carvajal, Isabel; Carral, Vanesa; Rodríguez, Carla; de Miguel, Mercedes; Bodega, Patricia; Santos-Beneit, Gloria; Peñalvo, Jose Luis; Marina, Iñaki; Pérez-Farinós, Napoleón; Dal Re, Marian; Villar, Carmen; Robledo, Teresa; Vedanthan, Rajesh; Bansilal, Sameer; Fuster, Valentin
2016-02-09
Cardiovascular diseases largely stem from modifiable risk factors. Peer support is a proven strategy for many chronic illnesses. Randomized trials assessing the efficacy of this strategy for global cardiovascular risk factor modification are lacking. This study assessed the hypothesis that a peer group strategy would help improve healthy behaviors in individuals with cardiovascular risk factors. A total of 543 adults 25 to 50 years of age with at least 1 risk factor were recruited; risk factors included hypertension (20%), overweight (82%), smoking (31%), and physical inactivity (81%). Subjects were randomized 1:1 to a peer group-based intervention group (IG) or a self-management control group (CG) for 12 months. Peer-elected leaders moderated monthly meetings involving role-play, brainstorming, and activities to address emotions, diet, and exercise. The primary outcome was mean change in a composite score related to blood pressure, exercise, weight, alimentation, and tobacco (Fuster-BEWAT score, 0 to 15). Multilevel models with municipality as a cluster variable were applied to assess differences between groups. Participants' mean age was 42 ± 6 years, 71% were female, and they had a mean baseline Fuster-BEWAT score of 8.42 ± 2.35. After 1 year, the mean scores were significantly higher in the IG (n = 277) than in the CG (n = 266) (IG mean score: 8.84; 95% confidence interval (CI): 8.37 to 9.32; CG mean score: 8.17; 95% CI: 7.55 to 8.79; p = 0.02). The increase in the overall score was significantly larger in the IG compared with the CG (difference: 0.75; 95% CI: 0.32 to 1.18; p = 0.02). The mean improvement in the individual components was uniformly greater in the IG, with a significant difference for the tobacco component. The peer group intervention had beneficial effects on cardiovascular risk factors, with significant improvements in the overall score and specifically on tobacco cessation. A follow-up assessment will be performed 1 year after the final assessment reported here to determine long-term sustainability of the improvements associated with peer group intervention. (Peer-Group-Based Intervention Program [Fifty-Fifty]; NCT02367963). Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Busch, Robert; Hobbs, Brian D; Zhou, Jin; Castaldi, Peter J; McGeachie, Michael J; Hardin, Megan E; Hawrylkiewicz, Iwona; Sliwinski, Pawel; Yim, Jae-Joon; Kim, Woo Jin; Kim, Deog K; Agusti, Alvar; Make, Barry J; Crapo, James D; Calverley, Peter M; Donner, Claudio F; Lomas, David A; Wouters, Emiel F; Vestbo, Jørgen; Tal-Singer, Ruth; Bakke, Per; Gulsvik, Amund; Litonjua, Augusto A; Sparrow, David; Paré, Peter D; Levy, Robert D; Rennard, Stephen I; Beaty, Terri H; Hokanson, John; Silverman, Edwin K; Cho, Michael H
2017-07-01
The heritability of chronic obstructive pulmonary disease (COPD) cannot be fully explained by the recognized genetic risk factors identified as achieving genome-wide significance. In addition, the combined contribution of genetic variation to COPD risk has not been fully explored. We sought to determine: (1) whether testing variants from previous studies of COPD or lung function in a larger sample could identify additional associated variants, particularly for severe COPD; and (2) the impact of genetic risk scores on COPD. We genotyped 3,346 single-nucleotide polymorphisms (SNPs) in 2,588 cases (1,803 severe COPD) and 1,782 control subjects from four cohorts, and performed association testing with COPD, combining these results with existing genotyping data from 6,633 cases (3,497 severe COPD) and 5,704 control subjects. In addition, we developed genetic risk scores from SNPs associated with lung function and COPD and tested their discriminatory power for COPD-related measures. We identified significant associations between SNPs near PPIC (P = 1.28 × 10−8) and PPP4R4/SERPINA1 (P = 1.01 × 10−8) and severe COPD; the latter association may be driven by recognized variants in SERPINA1. Genetic risk scores based on SNPs previously associated with COPD and lung function had a modest ability to discriminate COPD (area under the curve, ∼0.6), and accounted for a mean 0.9-1.9% lower forced expiratory volume in 1 second percent predicted for each additional risk allele. In a large genetic association analysis, we identified associations with severe COPD near PPIC and SERPINA1. A risk score based on combining genetic variants had modest, but significant, effects on risk of COPD and lung function.
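A weighted genetic risk score of the kind described here sums each individual's risk-allele counts across SNPs, with each SNP weighted by its published effect size. A minimal sketch; the SNP weights and genotype values below are hypothetical, not taken from the study:

```python
import numpy as np

def genetic_risk_score(genotypes, weights):
    """Weighted genetic risk score: sum over SNPs of
    (risk-allele count, 0/1/2) x (per-allele effect size, e.g. log-odds)."""
    g = np.asarray(genotypes, dtype=float)   # shape (n_individuals, n_snps)
    w = np.asarray(weights, dtype=float)     # shape (n_snps,)
    return g @ w

genotypes = [[0, 1, 2, 1],    # one row per individual
             [2, 2, 0, 1]]
weights = [0.10, 0.05, 0.22, 0.08]   # hypothetical per-allele log-odds weights
print(genetic_risk_score(genotypes, weights))
```

Setting all weights to 1 yields an unweighted risk-allele count, which is how "each additional risk allele" is tallied in the per-allele lung-function estimate quoted above.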
2012-01-01
Background Severe alcohol misuse as measured by the Alcohol Use Disorders Identification Test–Consumption (AUDIT-C) is associated with increased risk of future fractures and trauma-related hospitalizations. This study examined the association between AUDIT-C scores and two-year risk of any type of trauma among US Veterans Health Administration (VHA) patients and assessed whether risk varied by age or gender. Methods Outpatients (215,924 male and 9,168 female) who returned mailed AUDIT-C questionnaires were followed for 24 months in the medical record for any International Statistical Classification of Diseases and Related Health Problems (ICD-9) code related to trauma. The two-year prevalence of trauma was examined as a function of AUDIT-C scores, with low-level drinking (AUDIT-C 1–4) as the reference group. Men and women were examined separately, and age-stratified analyses were performed. Results Having an AUDIT-C score of 9–12 (indicating severe alcohol misuse) was associated with increased risk for trauma. Mean (SD) ages for men and women were 68.2 (11.5) and 57.2 (15.8), respectively. Age-stratified analyses showed that, for men ≤50 years, those with AUDIT-C scores ≥9 had an increased risk for trauma compared with those with AUDIT-C scores in the 1–4 range (adjusted prevalence, 25.7% versus 20.8%, respectively; OR = 1.24; 95% confidence interval [CI], 1.03–1.50). For men ≥65 years with average comorbidity and education, those with AUDIT-C scores of 5–8 (adjusted prevalence, 7.9% versus 7.4%; OR = 1.16; 95% CI, 1.02–1.31) and 9–12 (adjusted prevalence 11.1% versus 7.4%; OR = 1.68; 95% CI, 1.30–2.17) were at significantly increased risk for trauma compared with men ≥65 years in the reference group. Higher AUDIT-C scores were not associated with increased risk of trauma among women. Conclusions Men with severe alcohol misuse (AUDIT-C 9–12) demonstrate an increased risk of trauma. Men ≥65 showed an increased risk for trauma at all levels of alcohol misuse (AUDIT-C 5–8 and 9–12). These findings may be used as part of an evidence-based brief intervention for alcohol use disorders. More research is needed to understand the relationship between AUDIT-C scores and risk of trauma in women. PMID:22966411
Wu, Wei; West, Stephen G.; Hughes, Jan N.
2008-01-01
We investigated the effects of grade retention in first grade on the growth of the Woodcock-Johnson broad mathematics and reading scores over three years using linear growth curve modeling on an academically at-risk sample. A large sample (n = 784) of first grade children who were at risk for retention were initially identified based on low literacy scores. Scores representing propensity for retention were constructed based on 72 variables collected in comprehensive baseline testing in first grade. We closely matched 97 pairs of retained and promoted children based on their propensity scores using optimal matching procedures. This procedure adjusted for baseline differences between the retained and promoted children. We found that grade retention decreased the growth rate of mathematical skills but had no significant effect on reading skills. In addition, several potential moderators of the effect of retention on growth of mathematical and reading skills were identified including limited English language proficiency and children's conduct problems. PMID:19083352
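Propensity-score matching of the kind used in this study, estimating each child's probability of retention from baseline covariates and then pairing retained with promoted children with similar scores, can be sketched as below. This uses greedy nearest-neighbour matching within a caliper as a simplified stand-in for the optimal matching the authors applied, and all data, covariates, and the caliper value are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_match(ps_treated, ps_control, caliper=0.05):
    """Pair each treated unit with the nearest unmatched control whose
    propensity score lies within the caliper. Returns index pairs."""
    available = set(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_treated):
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) <= caliper:
            pairs.append((i, j))
            available.remove(j)
    return pairs

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))                            # hypothetical baseline covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # retention indicator
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
pairs = greedy_match(ps[treated == 1], ps[treated == 0])
print(len(pairs), "matched pairs")
```

The point of the matching step is the same as in the study: comparing retained and promoted children with similar propensity scores adjusts for measured baseline differences before estimating the effect of retention on growth.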
Aagaard, Theis; Roen, Ashley; Daugaard, Gedske; Brown, Peter; Sengeløv, Henrik; Mocroft, Amanda; Lundgren, Jens; Helleberg, Marie
2017-01-01
Background Febrile neutropenia (FN) is a common complication of chemotherapy associated with a high burden of morbidity and mortality. Reliable prediction of individual risk based on pretreatment risk factors allows for stratification of preventive interventions. We aimed to develop such a risk stratification model to predict FN in the 30 days after initiation of chemotherapy. Methods We included consecutive treatment-naïve patients with solid cancers and diffuse large B-cell lymphomas at Copenhagen University Hospital, 2010–2015. Data were obtained from the PERSIMUNE repository of electronic health records. FN was defined as neutrophils ≤0.5 × 10⁹/L at the time of either a blood culture sample or death. Time from initiation of chemotherapy to FN was analyzed using Fine-Gray models with death as a competing event. Risk factors investigated were: age, sex, body surface area, haemoglobin, albumin, neutrophil-to-lymphocyte ratio, Charlson Comorbidity Index (CCI) and chemotherapy drugs. Parameter estimates were scaled and summed to create the risk score. The scores were grouped into four risk categories: low, intermediate, high and very high. Results Among 8,585 patients, 467 experienced FN, an incidence rate of 0.05 per 30 person-days (95% CI, 0.05–0.06). Age (1 point if > 65 years), albumin (1 point if < 39 g/L), CCI (1 point if > 2) and chemotherapy (range -5 to 6 points/drug) predicted FN. The median score at inclusion was 2 points (range –5 to 9). The cumulative incidence, incidence rates and hazard ratios of FN are shown in Figure 1 and Table 1, respectively. Conclusion We developed a risk score to predict FN in the first month after initiation of chemotherapy. The score is easy to use and provides good differentiation between risk groups; the score needs independent validation before routine use. Disclosures All authors: No reported disclosures.
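The point assignments for age, albumin and comorbidity are stated above, but the per-drug chemotherapy points (range -5 to 6) are regimen-specific and not listed, so the drug lookup in this sketch is a hypothetical placeholder:

```python
def fn_risk_score(age, albumin_g_per_l, charlson_index, drugs, drug_points=None):
    """Febrile-neutropenia risk score sketch: 1 point each for age > 65 years,
    albumin < 39 g/L and Charlson Comorbidity Index > 2, plus per-drug points.
    The real per-drug values (range -5 to 6) are not reproduced here."""
    drug_points = drug_points or {"drug_a": 3, "drug_b": -1}  # hypothetical values
    score = 0
    score += 1 if age > 65 else 0
    score += 1 if albumin_g_per_l < 39 else 0
    score += 1 if charlson_index > 2 else 0
    score += sum(drug_points.get(d, 0) for d in drugs)
    return score

print(fn_risk_score(age=70, albumin_g_per_l=35, charlson_index=3,
                    drugs=["drug_a"]))  # 6
```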
Genetic Risk Prediction of Atrial Fibrillation
Lubitz, Steven A.; Yin, Xiaoyan; Lin, Henry J.; Kolek, Matthew; Smith, J. Gustav; Trompet, Stella; Rienstra, Michiel; Rost, Natalia S.; Teixeira, Pedro L.; Almgren, Peter; Anderson, Christopher D.; Chen, Lin Y.; Engström, Gunnar; Ford, Ian; Furie, Karen L.; Guo, Xiuqing; Larson, Martin G.; Lunetta, Kathryn L.; Macfarlane, Peter W.; Psaty, Bruce M.; Soliman, Elsayed Z.; Sotoodehnia, Nona; Stott, David J.; Taylor, Kent D.; Weng, Lu-Chen; Yao, Jie; Geelhoed, Bastiaan; Verweij, Niek; Siland, Joylene E.; Kathiresan, Sekar; Roselli, Carolina; Roden, Dan; van der Harst, Pim; Darbar, Dawood; Jukema, J. Wouter; Melander, Olle; Rosand, Jonathan; Rotter, Jerome I.; Heckbert, Susan R.; Ellinor, Patrick T.; Alonso, Alvaro; Benjamin, Emelia J.
2017-01-01
Background Atrial fibrillation (AF) is common and has a substantial genetic basis. Identification of individuals at greatest AF risk could minimize the incidence of cardioembolic stroke. Methods To determine whether genetic data can stratify risk for development of AF, we examined associations between AF genetic risk scores and incident AF in five prospective studies comprising 18,919 individuals of European ancestry. We examined associations between AF genetic risk scores and ischemic stroke in a separate study of 509 ischemic stroke cases (202 cardioembolic [40%]) and 3,028 controls. Scores were based on 11 to 719 common variants (frequency ≥5%) associated with AF at P-values ranging from <1×10−3 to <1×10−8 in a prior independent genetic association study. Results Incident AF occurred in 1,032 (5.5%) individuals. AF genetic risk scores were associated with new-onset AF after adjusting for clinical risk factors. The pooled hazard ratio for incident AF for the highest versus lowest quartile of genetic risk scores ranged from 1.28 (719 variants; 95%CI, 1.13–1.46; P=1.5×10−4) to 1.67 (25 variants; 95%CI, 1.47–1.90; P=9.3×10−15). Discrimination of combined clinical and genetic risk scores varied across studies and scores (maximum C statistic, 0.629–0.811; maximum ΔC statistic from clinical score alone, 0.009–0.017). AF genetic risk was associated with stroke in age- and sex-adjusted models. For example, individuals in the highest quartile of a 127-variant score had a 2.49-fold increased odds of cardioembolic stroke versus those in the lowest quartile (95%CI, 1.39–4.58; P=2.7×10−3). The effect persisted after excluding individuals (n=70) with known AF (odds ratio, 2.25; 95%CI, 1.20–4.40; P=0.01). Conclusions Comprehensive AF genetic risk scores were associated with incident AF beyond clinical AF risk factors, with magnitudes of risk comparable to other clinical risk factors, though they offered only small improvements in discrimination. AF genetic risk was also associated with cardioembolic stroke in age- and sex-adjusted analyses. Efforts to determine whether AF genetic risk may improve identification of subclinical AF or distinguish stroke mechanisms are warranted. PMID:27793994
Ledesma-Gumba, M A; Danguilan, R A; Casasola, C C; Ona, E T
2008-09-01
To evaluate the efficacy of tailored immunosuppressive regimens prescribed according to a risk stratification scoring system based on the number of HLA mismatches, donor source, panel-reactive antibodies (PRA), and repeat transplant. Patients in a retrospective cohort of 329 kidney transplantations performed from October 2004 to December 2005 were assigned scores of 0, 2, 4, or 6, with higher scores for ≥1 HLA mismatches, PRA > 10%, repeat transplant, and unrelated or deceased donor. Summed scores of ≤4 defined the low-risk group, who received a calcineurin inhibitor (CNI)-based regimen without induction, whereas a score ≥6 denoted high risk and a CNI-based regimen with an interleukin-2 receptor antibody. The efficacy analysis compared the incidences of biopsy-proven acute rejection episodes (BPAR) at 1 year. Only 227 (69%) of 329 patients had a complete data set, and 84 were excluded because they did not follow the prescribed protocol, yielding 113 low- and 30 high-risk patients in the final population. Low-risk patients had a mean PRA of 5.4%, living related donors in 68%, and primary transplants. High-risk patients had a mean PRA of 18.8% (range = 10%-97%), living nonrelated donors in 84%, four deceased donors, and four repeat transplants. The overall 1-year incidence of BPAR was 5.7%. No significant difference (P = .081) was observed in 1-year BPAR between the low- (4.5%) and high-risk (9.8%) groups. Likewise, no significant difference in the 1-year mean serum creatinine was observed according to the CNI: the mean creatinine was 1.12 for cyclosporine and 1.38 for tacrolimus (P = .06) in the low-risk group, and 1.08 for cyclosporine and 1.2 for tacrolimus (P = .61) in the high-risk group. There was no significant difference in acute rejection rates between the immunologically low- and high-risk patients using tailored immunosuppression, which was effective in minimizing rejection with good renal function at 1 year.
Family history and risk of breast cancer: an analysis accounting for family structure.
Brewer, Hannah R; Jones, Michael E; Schoemaker, Minouk J; Ashworth, Alan; Swerdlow, Anthony J
2017-08-01
Family history is an important risk factor for breast cancer incidence, but the parameters conventionally used to categorize it are based solely on numbers and/or ages of breast cancer cases in the family and take no account of the size and age-structure of the woman's family. Using data from the Generations Study, a cohort of over 113,000 women from the general UK population, we analyzed breast cancer risk in relation to first-degree family history using a family history score (FHS) that takes account of the expected number of family cases based on the family's age-structure and national cancer incidence rates. Breast cancer risk increased significantly (P trend < 0.0001) with greater FHS. There was a 3.5-fold (95% CI 2.56-4.79) range of risk between the lowest and highest FHS groups, whereas women who had two or more relatives with breast cancer, the strongest conventional familial risk factor, had a 2.5-fold (95% CI 1.83-3.47) increase in risk. Using likelihood ratio tests, the best model for determining breast cancer risk due to family history was that combining FHS and age of relative at diagnosis. A family history score based on expected as well as observed breast cancers in a family can give greater risk discrimination on breast cancer incidence than conventional parameters based solely on cases in affected relatives. Our modeling suggests that a yet stronger predictor of risk might be a combination of this score and age at diagnosis in relatives.
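A family history score that compares observed breast cancer cases among first-degree relatives with the number expected from the family's size, age structure, and national incidence rates might be sketched as follows. The exact construction used in the Generations Study is not reproduced here; the incidence rates, age bands, and the observed-minus-expected form are illustrative assumptions:

```python
def expected_cases(relatives, incidence_per_100k_by_age_band):
    """Expected number of cases given each relative's years at risk in
    age bands and national incidence rates (per 100,000 per year)."""
    expected = 0.0
    for rel in relatives:                        # rel: {age_band: years_at_risk}
        for band, years in rel.items():
            expected += years * incidence_per_100k_by_age_band[band] / 100_000
    return expected

def family_history_score(observed_cases, relatives, rates):
    """Illustrative FHS: observed minus expected cases. Larger values mean a
    stronger family history than the family's size and ages would predict."""
    return observed_cases - expected_cases(relatives, rates)

rates = {"40-49": 150, "50-59": 250, "60-69": 350}       # hypothetical rates
family = [{"40-49": 10, "50-59": 10}, {"50-59": 10, "60-69": 10}]
print(round(family_history_score(observed_cases=2, relatives=family, rates=rates), 3))
```

The appeal of such a score, as the abstract argues, is that two cases in a small, young family signal more than two cases in a large, elderly one.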
[Clinical scores for the risk of recurrent VTED and for the cancer-VTED relationship].
Junod, Alain
2016-02-17
Clinical scores related to the risk of recurrent venous thromboembolic disease (VTED), to the relationship between cancer and VTED (risk of development of VTED, risk of recurrent VTED, prognosis of pulmonary embolism) and to the risk of cancer following VTED are analysed and commented upon. Although they most often rely on appropriate methodology and are often based on a large number of subjects, they unfortunately provide information that is not necessarily useful for the care of patients. Their use should be considered only when positive impact studies are published.
Ahmed, Emad; El-Menyar, Ayman
2016-03-01
The South Asian (SA) population constitutes one of the largest ethnic groups in the world. Several studies that compared host and migrant populations around the world indicate that SAs have a higher risk of developing cardiovascular disease (CVD) than their native-born counterparts. Herein, we review the literature to address the role of screening tools, scoring systems, and guidelines for primary, secondary, and tertiary prevention in these populations. Management based on screening for CVD risk factors in a high-risk population such as SAs can improve health care outcomes. There are many scoring tools for calculating 10-year CVD risk; however, each scoring system has its limitations in this particular ethnic group. Further work is needed to establish scoring systems and guidelines tailored to SAs. © The Author(s) 2015.
Sisa, Ivan
2018-02-09
Cardiovascular disease (CVD) mortality is predicted to increase in Latin American countries due to their rapidly aging populations. However, there is very little information about CVD risk assessment as a primary preventive measure in this high-risk population. We predicted the national risk of developing CVD in the Ecuadorian elderly population using the Systematic COronary Risk Evaluation in Older Persons (SCORE OP) High and Low models, by risk category and CVD risk region, in 2009. Data on national cardiovascular risk factors were obtained from the Encuesta sobre Salud, Bienestar y Envejecimiento. We computed the predicted 5-year risk of CVD and compared the extent of agreement and reclassification in stratifying high-risk individuals between the SCORE OP High and Low models. Analyses were done by risk category, CVD risk region, and sex. In 2009, based on the SCORE OP Low model, almost 42% of elderly adults living in Ecuador were at high risk of suffering CVD over a 5-year period. The extent of agreement between the SCORE OP High and Low risk prediction models was moderate (Cohen's kappa of 0.5), approximately 34% of individuals were reclassified into different risk categories, and a third of the population would benefit from a pharmacologic intervention to reduce CVD risk. Forty-two percent of elderly Ecuadorians were at high risk of suffering CVD over a 5-year period, indicating an urgent need to tailor primary preventive measures for this vulnerable and high-risk population. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
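Agreement between the risk categories assigned by two models, and the share of individuals reclassified, can be quantified exactly as described with Cohen's kappa. A minimal sketch using scikit-learn; the category labels and assignments below are hypothetical:

```python
from sklearn.metrics import cohen_kappa_score

# Risk category assigned to the same individuals by the two models (hypothetical)
score_op_high = ["low", "moderate", "high", "high", "moderate", "low", "high"]
score_op_low  = ["low", "moderate", "moderate", "high", "low", "low", "high"]

kappa = cohen_kappa_score(score_op_high, score_op_low)
reclassified = sum(a != b for a, b in zip(score_op_high, score_op_low)) / len(score_op_high)
print(f"kappa = {kappa:.2f}, reclassified = {reclassified:.0%}")
```

Kappa corrects the raw percentage agreement for the agreement expected by chance, which is why a value around 0.5 is read as only moderate agreement despite most individuals keeping the same category.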
Reliability of Modern Scores to Predict Long-Term Mortality After Isolated Aortic Valve Operations.
Barili, Fabio; Pacini, Davide; D'Ovidio, Mariangela; Ventura, Martina; Alamanni, Francesco; Di Bartolomeo, Roberto; Grossi, Claudio; Davoli, Marina; Fusco, Danilo; Perucci, Carlo; Parolari, Alessandro
2016-02-01
Contemporary scores for estimating perioperative death have been proposed to also predict long-term death. The aim of the study was to evaluate the performance of the updated European System for Cardiac Operative Risk Evaluation II, The Society of Thoracic Surgeons Predicted Risk of Mortality score, and the Age, Creatinine, Left Ventricular Ejection Fraction score for predicting long-term mortality in a contemporary cohort of isolated aortic valve replacement (AVR). We also sought to develop for each score a simple algorithm based on predicted perioperative risk to predict long-term survival. Complete data on 1,444 patients who underwent isolated AVR in a 7-year period were retrieved from three prospective institutional databases and linked with the Italian Tax Register Information System. Data were evaluated with performance analyses and time-to-event semiparametric regression. Survival was 83.0% ± 1.1% at 5 years and 67.8% ± 1.9% at 8 years. Discrimination and calibration of all three scores worsened for prediction of death at 1 and 5 years. Nonetheless, a significant relationship was found between long-term survival and quartiles of scores (p < 0.0001). The estimated perioperative risk by each model was used to develop an algorithm to predict long-term death. The hazard ratios for death were 1.1 (95% confidence interval, 1.07 to 1.12) for European System for Cardiac Operative Risk Evaluation II, 1.34 (95% CI, 1.28 to 1.40) for the Society of Thoracic Surgeons score, and 1.08 (95% CI, 1.06 to 1.10) for the Age, Creatinine, Left Ventricular Ejection Fraction score. The predicted risk generated by the European System for Cardiac Operative Risk Evaluation II, The Society of Thoracic Surgeons score, and Age, Creatinine, Left Ventricular Ejection Fraction scores cannot be considered a direct estimate of the long-term risk for death. Nonetheless, the three scores can be used to derive an estimate of long-term risk of death in patients who undergo isolated AVR with the use of a simple algorithm. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
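A minimal sketch of the kind of projection such an algorithm implies: treating the perioperative risk score as a covariate in a proportional-hazards relation, so that long-term survival is the baseline survival raised to the power of the per-point hazard ratio taken to the score. The baseline survival value and the per-point interpretation of the hazard ratios are assumptions for illustration, not the fitted model from this study.

# Proportional-hazards style projection: S(t | score) = S0(t) ** (HR_per_point ** score).
# The per-point hazard ratios are those quoted above; S0 is an assumed placeholder.
HR_PER_POINT = {"EuroSCORE II": 1.10, "STS PROM": 1.34, "ACEF": 1.08}
BASELINE_5_YEAR_SURVIVAL = 0.90  # assumed survival for a score of 0

def projected_5_year_survival(model, score, s0=BASELINE_5_YEAR_SURVIVAL):
    relative_hazard = HR_PER_POINT[model] ** score
    return s0 ** relative_hazard

print(round(projected_5_year_survival("EuroSCORE II", score=3), 3))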
Applying machine learning to pattern analysis for automated in-design layout optimization
NASA Astrophysics Data System (ADS)
Cain, Jason P.; Fakhry, Moutaz; Pathak, Piyush; Sweis, Jason; Gennari, Frank; Lai, Ya-Chieh
2018-04-01
Building on previous work for cataloging unique topological patterns in an integrated circuit physical design, a new process is defined in which a risk scoring methodology is used to rank patterns based on manufacturing risk. Patterns with high risk are then mapped to functionally equivalent patterns with lower risk. The higher-risk patterns are then replaced in the design with their lower-risk equivalents. Pattern selection and replacement are fully automated and suitable for use on full-chip designs. Results from 14nm product designs show that the approach can identify and replace risk patterns with quantifiable positive impact on the risk score distribution after replacement.
Apgar Score Is Related to Development of Atopic Dermatitis: Cotwin Control Study
Naeser, Vibeke; Kahr, Niklas; Stensballe, Lone Graff; Kyvik, Kirsten Ohm; Skytthe, Axel; Backer, Vibeke
2013-01-01
Aim. To study the impact of birth characteristics on the risk of atopic dermatitis in a twin population. Methods. In a population-based questionnaire study of 10,809 twins, 3–9 years of age, from the Danish Twin Registry, we identified 907 twin pairs discordant for parent-reported atopic dermatitis. We cross-linked with data from the Danish National Birth Registry and performed cotwin control analysis in order to test the impact of birth characteristics on the risk of atopic dermatitis. Results. Apgar score, OR (per unit) = 1.23 (1.06–1.44), P = 0.008, and female sex, OR = 1.31 (1.06–1.61), P = 0.012, were risk factors for atopic dermatitis in cotwin control analysis, whereas birth anthropometric factors were not significantly related to disease development. Risk estimates in monozygotic and dizygotic twins were not significantly different for the identified risk factors. Conclusions. In this population-based cotwin control study, high Apgar score was a risk factor for atopic dermatitis. This novel finding must be confirmed in subsequent studies. PMID:24222775
Brindle, P; May, M; Gill, P; Cappuccio, F; D'Agostino, R; Fischbacher, C; Ebrahim, S
2006-01-01
Objective To recalibrate an existing Framingham risk score to produce a web‐based tool for estimating the 10‐year risk of coronary heart disease (CHD) and cardiovascular disease (CVD) in seven British black and minority ethnic groups. Design Risk prediction models were recalibrated against survey data on ethnic group risk factors and disease prevalence compared with the general population. Ethnic‐ and sex‐specific 10‐year risks of CHD and CVD, at the means of the risk factors for each ethnic group, were calculated from the product of the incidence rate in the general population and the prevalence ratios for each ethnic group. Setting Two community‐based surveys. Participants 3778 men and 4544 women, aged 35–54, from the Health Surveys for England 1998 and 1999 and the Wandsworth Heart and Stroke Study. Main outcome measures 10‐year risk of CHD and CVD. Results 10‐year risk of CHD and CVD for non‐smoking people aged 50 years with a systolic blood pressure of 130 mm Hg and a total cholesterol to high density lipoprotein cholesterol ratio of 4.2 was highest in men for those of Pakistani and Bangladeshi origin (CVD risk 12.6% and 12.8%, respectively). CHD risk in men with the same risk factor values was lowest in Caribbeans (2.8%) and CVD risk was lowest in Chinese (5.4%). Women of Pakistani origin were at highest risk and Chinese women at lowest risk for both outcomes with CVD risks of 6.6% and 1.2%, respectively. A web‐based risk calculator (ETHRISK) allows 10‐year risks to be estimated in routine primary care settings for relevant risk factor and ethnic group combinations. Conclusions In the absence of cohort studies in the UK that include significant numbers of black and minority ethnic groups, this risk score provides a pragmatic solution to including people from diverse ethnic backgrounds in the primary prevention of CVD. PMID:16762981
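A minimal sketch of the recalibration arithmetic described above, in which the ethnic- and sex-specific 10-year risk at the group's mean risk factor levels is the product of the general-population incidence rate and the group's prevalence ratio. The numbers below are placeholders, not ETHRISK values.

# Recalibration sketch: ethnic-specific 10-year risk
#   = general-population 10-year incidence * prevalence ratio for the ethnic group.
GENERAL_POPULATION_10Y_CHD_RISK = 0.08   # placeholder incidence for the general population
PREVALENCE_RATIO = {"Pakistani": 1.5, "Caribbean": 0.6, "Chinese": 0.7}  # placeholder ratios

def recalibrated_10y_risk(ethnic_group):
    return GENERAL_POPULATION_10Y_CHD_RISK * PREVALENCE_RATIO[ethnic_group]

for group in PREVALENCE_RATIO:
    print(group, round(recalibrated_10y_risk(group), 3))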
Shigehara, Kazuyoshi; Konaka, Hiroyuki; Ijima, Masashi; Nohara, Takahiro; Narimoto, Kazutaka; Izumi, Koji; Kadono, Yoshifumi; Kitagawa, Yasuhide; Mizokami, Atsushi; Namiki, Mikio
2016-12-01
We investigated the correlation between highly sensitive C-reactive protein (hs-CRP) levels and erectile function, and assessed the clinical role of hs-CRP levels in men with late-onset hypogonadism (LOH) syndrome. For 77 participants, we assessed Sexual Health Inventory for men (SHIM) score, Aging Male Symptoms (AMS) score and International Prostate Symptom Score (IPSS). We also evaluated free testosterone (FT), hs-CRP, total cholesterol, triglyceride levels, high density lipoprotein cholesterol, hemoglobin A1c, body mass index, waist size and blood pressure. We attempted to identify parameters correlated with SHIM score and to determine the factors affecting cardiovascular risk based on hs-CRP levels. A Spearman rank correlation test revealed that age, AMS score, IPSS and hs-CRP levels were significantly correlated with SHIM score. Age-adjusted analysis revealed that hs-CRP and IPSS were the independent factors affecting SHIM score (r= -0.304 and -0.322, respectively). Seventeen patients belonged to the moderate to high risk group for cardiovascular disease, whereas the remaining 60 belonged to the low risk group. Age, FT value and SHIM score showed significant differences between the two groups. A multivariate regression analysis demonstrated that SHIM score was an independent factor affecting cardiovascular risk (OR: 0.796; 95%CI: 0.637-0.995).
Alcohol and cancer: risk perception and risk denial beliefs among the French general population.
Bocquier, Aurélie; Fressard, Lisa; Verger, Pierre; Legleye, Stéphane; Peretti-Watel, Patrick
2017-08-01
Worldwide, millions of deaths each year are attributed to alcohol. We sought to examine French people's beliefs about the risks of alcohol, their correlates, and their associations with alcohol use. Data came from the 2010 Baromètre Cancer survey, a random cross-sectional telephone survey of the French general population (n = 3359 individuals aged 15-75 years). Using principal component analysis of seven beliefs about alcohol risks, we built two scores (one assessing risk denial based on self-confidence and the other risk relativization). Two multiple linear regressions explored these scores' socio-demographic and perceived information level correlates. Multiple logistic regressions tested the associations of these scores with daily drinking and with heavy episodic drinking (HED). About 60% of the respondents acknowledged that alcohol increases the risk of cancer, and 89% felt well-informed about the risks of alcohol. Beliefs that may promote risk denial were frequent (e.g. 72% agreed that soda and hamburgers are as bad as alcohol for your health). Both risk denial and risk relativization scores were higher among men, older respondents and those of low socioeconomic status. The probability of daily drinking increased with the risk relativization score and that of HED with both scores. Beliefs that can help people to deny the cancer risks due to alcohol use are common in France and may exist in many other countries where alcoholic beverages have been an integral part of the culture. These results can be used to redesign public information campaigns about the risks of alcohol. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Measuring Biological Age via Metabonomics: The Metabolic Age Score.
Hertel, Johannes; Friedrich, Nele; Wittfeld, Katharina; Pietzner, Maik; Budde, Kathrin; Van der Auwera, Sandra; Lohmann, Tobias; Teumer, Alexander; Völzke, Henry; Nauck, Matthias; Grabe, Hans Jörgen
2016-02-05
Chronological age is one of the most important risk factors for adverse clinical outcome. Still, two individuals at the same chronological age could have different biological aging states, leading to different individual risk profiles. Capturing this individual variance could constitute an even more powerful predictor enhancing prediction in age-related morbidity. Applying a nonlinear regression technique, we constructed a metabonomic measurement for biological age, the metabolic age score, based on urine data measured via (1)H NMR spectroscopy. We validated the score in two large independent population-based samples by revealing its significant associations with chronological age and age-related clinical phenotypes as well as its independent predictive value for survival over approximately 13 years of follow-up. Furthermore, the metabolic age score was prognostic for weight loss in a sample of individuals who underwent bariatric surgery. We conclude that the metabolic age score is an informative measurement of biological age with possible applications in personalized medicine.
Early Cannabis Use, Polygenic Risk Score for Schizophrenia and Brain Maturation in Adolescence.
French, Leon; Gray, Courtney; Leonard, Gabriel; Perron, Michel; Pike, G Bruce; Richer, Louis; Séguin, Jean R; Veillette, Suzanne; Evans, C John; Artiges, Eric; Banaschewski, Tobias; Bokde, Arun W L; Bromberg, Uli; Bruehl, Ruediger; Buchel, Christian; Cattrell, Anna; Conrod, Patricia J; Flor, Herta; Frouin, Vincent; Gallinat, Jurgen; Garavan, Hugh; Gowland, Penny; Heinz, Andreas; Lemaitre, Herve; Martinot, Jean-Luc; Nees, Frauke; Orfanos, Dimitri Papadopoulos; Pangelinan, Melissa Marie; Poustka, Luise; Rietschel, Marcella; Smolka, Michael N; Walter, Henrik; Whelan, Robert; Timpson, Nic J; Schumann, Gunter; Smith, George Davey; Pausova, Zdenka; Paus, Tomáš
2015-10-01
Cannabis use during adolescence is known to increase the risk for schizophrenia in men. Sex differences in the dynamics of brain maturation during adolescence may be of particular importance with regard to vulnerability of the male brain to cannabis exposure. To evaluate whether the association between cannabis use and cortical maturation in adolescents is moderated by a polygenic risk score for schizophrenia. Observation of 3 population-based samples included initial analysis in 1024 adolescents of both sexes from the Canadian Saguenay Youth Study (SYS) and follow-up in 426 adolescents of both sexes from the IMAGEN Study from 8 European cities and 504 male youth from the Avon Longitudinal Study of Parents and Children (ALSPAC) based in England. A total of 1577 participants (aged 12-21 years; 899 [57.0%] male) had (1) information about cannabis use; (2) imaging studies of the brain; and (3) a polygenic risk score for schizophrenia across 108 genetic loci identified by the Psychiatric Genomics Consortium. Data analysis was performed from March 1 through December 31, 2014. Cortical thickness derived from T1-weighted magnetic resonance images. Linear regression tests were used to assess the relationships between cannabis use, cortical thickness, and risk score. Across the 3 samples of 1574 participants, a negative association was observed between cannabis use in early adolescence and cortical thickness in male participants with a high polygenic risk score. This observation was not the case for low-risk male participants or for the low- or high-risk female participants. Thus, in SYS male participants, cannabis use interacted with risk score vis-à-vis cortical thickness (P = .009); higher scores were associated with lower thickness only in males who used cannabis. Similarly, in the IMAGEN male participants, cannabis use interacted with increased risk score vis-à-vis a change in decreasing cortical thickness from 14.5 to 18.5 years of age (t137 = -2.36; P = .02). Finally, in the ALSPAC high-risk group of male participants, those who used cannabis most frequently (≥61 occasions) had lower cortical thickness than those who never used cannabis (difference in cortical thickness, 0.07 [95% CI, 0.01-0.12]; P = .02) and those with light use (<5 occasions) (difference in cortical thickness, 0.11 [95% CI, 0.03-0.18]; P = .004). Cannabis use in early adolescence moderates the association between the genetic risk for schizophrenia and cortical maturation among male individuals. This finding implicates processes underlying cortical maturation in mediating the link between cannabis use and liability to schizophrenia.
Polygenic risk predicts obesity in both white and black young adults.
Domingue, Benjamin W; Belsky, Daniel W; Harris, Kathleen Mullan; Smolen, Andrew; McQueen, Matthew B; Boardman, Jason D
2014-01-01
To test transethnic replication of a genetic risk score for obesity in white and black young adults using a national sample with longitudinal data. A prospective longitudinal study using the National Longitudinal Study of Adolescent Health Sibling Pairs (n = 1,303). Obesity phenotypes were measured from anthropometric assessments when study members were aged 18-26 and again when they were 24-32. Genetic risk scores were computed based on published genome-wide association study discoveries for obesity. Analyses tested genetic associations with body-mass index (BMI), waist-height ratio, obesity, and change in BMI over time. White and black young adults with higher genetic risk scores had higher BMI and waist-height ratio and were more likely to be obese compared to lower genetic risk age-peers. Sibling analyses revealed that the genetic risk score was predictive of BMI net of risk factors shared by siblings. In white young adults only, higher genetic risk predicted increased risk of becoming obese during the study period. In black young adults, genetic risk scores constructed using loci identified in European and African American samples had similar predictive power. Cumulative information across the human genome can be used to characterize individual level risk for obesity. Measured genetic risk accounts for only a small amount of total variation in BMI among white and black young adults. Future research is needed to identify modifiable environmental exposures that amplify or mitigate genetic risk for elevated BMI.
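A minimal sketch of the usual construction of such a genetic risk score: the risk-allele count at each published locus, weighted by its reported effect size and summed over loci. SNP names and weights below are hypothetical, not the actual GWAS discoveries used in the study.

# Weighted polygenic risk score: sum over loci of (risk-allele count * published effect size).
GWAS_WEIGHTS = {"snp_A": 0.30, "snp_B": 0.15, "snp_C": 0.10}  # hypothetical weights

def genetic_risk_score(genotypes):
    # genotypes: dict mapping SNP id -> number of risk alleles carried (0, 1, or 2).
    return sum(weight * genotypes.get(snp, 0) for snp, weight in GWAS_WEIGHTS.items())

print(genetic_risk_score({"snp_A": 2, "snp_B": 1, "snp_C": 0}))  # 0.75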
A Danish diabetes risk score for targeted screening: the Inter99 study.
Glümer, Charlotte; Carstensen, Bendix; Sandbaek, Annelli; Lauritzen, Torsten; Jørgensen, Torben; Borch-Johnsen, Knut
2004-03-01
To develop a simple self-administered questionnaire identifying individuals with undiagnosed diabetes with a sensitivity of 75% and minimizing the high-risk group needing subsequent testing. A population-based sample (Inter99 study) of 6,784 individuals aged 30-60 years completed a questionnaire on diabetes-related symptoms and risk factors. The participants underwent an oral glucose tolerance test. The risk score was derived from the first half and validated on the second half of the study population. External validation was performed based on the Danish Anglo-Danish-Dutch Study of Intensive Treatment in People with Screen Detected Diabetes in Primary Care (ADDITION) pilot study. The risk score was developed by stepwise backward multiple logistic regression. The final risk score included age, sex, BMI, known hypertension, physical activity at leisure time, and family history of diabetes, items independently and significantly (P<0.05) associated with the presence of previously undiagnosed diabetes. The area under the receiver operating curve was 0.804 (95% CI 0.765-0.838) for the first half of the Inter99 population, 0.761 (0.720-0.803) for the second half of the Inter99 population, and 0.803 (0.721-0.876) for the ADDITION pilot study. The sensitivity, specificity, and percentage that needed subsequent testing were 76, 72, and 29%, respectively. The false-negative individuals in the risk score had a lower absolute risk of ischemic heart disease compared with the true-positive individuals (11.3 vs. 20.4%; P<0.0001). We developed a questionnaire to be used in a stepwise screening strategy for type 2 diabetes, decreasing the numbers of subsequent tests and thereby possibly minimizing the economical and personal costs of the screening strategy.
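A minimal sketch of how a questionnaire-based score of this kind is applied once the logistic model has been fitted: each item contributes its coefficient to the linear predictor, and the logistic function gives the predicted probability of undiagnosed diabetes. The coefficients below are invented for illustration and are not the Inter99 estimates.

import math

# Illustrative logistic risk model over the six questionnaire items named above.
COEFS = {"intercept": -8.0, "age_per_year": 0.05, "male": 0.4, "bmi_per_unit": 0.10,
         "known_hypertension": 0.7, "inactive_leisure_time": 0.4, "family_history": 0.8}

def predicted_probability(age, male, bmi, hypertension, inactive, family_history):
    lp = (COEFS["intercept"] + COEFS["age_per_year"] * age + COEFS["male"] * male
          + COEFS["bmi_per_unit"] * bmi + COEFS["known_hypertension"] * hypertension
          + COEFS["inactive_leisure_time"] * inactive + COEFS["family_history"] * family_history)
    return 1.0 / (1.0 + math.exp(-lp))

# Example: a 55-year-old inactive man with BMI 31 and known hypertension.
print(round(predicted_probability(55, 1, 31, 1, 1, 0), 3))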
Kim, Mun Hee; Kim, Young Sang; Oh, Hye Jin; Kwon, Yu Ri; Kim, Hye Won
2018-05-01
We examined the relationship between 10-year predicted atherosclerotic cardiovascular disease (ASCVD) risk score and 25-hydroxyvitamin D in Koreans aged 40-79 years. A population-based, cross-sectional design was used from data based on the Korea National Health and Nutrition Examination Survey 2014. A total of 1,134 healthy Koreans aged 40-79 years were included. A positive relationship between serum 25-hydroxyvitamin D level and ASCVD score was shown in women (β=0.015) after adjusting for central obesity, physical activity, and supplement intake. In women, the odds of being in the moderate-to-high-risk group (ASCVD score ≥5%) were 1.267-fold (95% confidence interval, 1.039-1.595) greater with vitamin D sufficiency (serum 25-hydroxyvitamin D ≥20 ng/mL) than with vitamin D deficiency (serum 25-hydroxyvitamin D <20 ng/mL) after adjustment. Our research indicated a significantly positive association between 25-hydroxyvitamin D and ASCVD score. Further detailed studies to evaluate this correlation are needed.
Lu, Xiangfeng; Huang, Jianfeng; Wang, Laiyuan; Chen, Shufeng; Yang, Xueli; Li, Jianxin; Cao, Jie; Chen, Jichun; Li, Ying; Zhao, Liancheng; Li, Hongfan; Liu, Fangcao; Huang, Chen; Shen, Chong; Shen, Jinjin; Yu, Ling; Xu, Lihua; Mu, Jianjun; Wu, Xianping; Ji, Xu; Guo, Dongshuang; Zhou, Zhengyuan; Yang, Zili; Wang, Renping; Yang, Jun; Yan, Weili; Gu, Dongfeng
2015-10-01
Although multiple genetic markers associated with blood pressure have been identified by genome-wide association studies, their aggregate effect on risk of incident hypertension and cardiovascular disease is uncertain, particularly among East Asians, who may have different genetic and environmental exposures from Europeans. We aimed to examine the association between genetic predisposition to higher blood pressure and risk of incident hypertension and cardiovascular disease in 26 262 individuals in 2 Chinese population-based prospective cohorts. A genetic risk score was calculated based on 22 established variants for blood pressure in East Asians. We found the genetic risk score was significantly and independently associated with linear increases in blood pressure and risk of incident hypertension and cardiovascular disease (P values ranging from 4.57×10(-3) to 3.10×10(-6)). In analyses adjusted for traditional risk factors including blood pressure, individuals carrying the most blood pressure-related risk alleles (top quintile of genetic score distribution) had 40% (95% confidence interval, 18-66) and 26% (6-45) increased risk for incident hypertension and cardiovascular disease, respectively, when compared with individuals in the bottom quintile. The genetic risk score also significantly improved discrimination for incident hypertension and cardiovascular disease and led to modest improvements in risk reclassification for cardiovascular disease (all P<0.05). Our data indicate that genetic predisposition to higher blood pressure is an independent risk factor for blood pressure increase and incident hypertension and cardiovascular disease and provides modest incremental information for cardiovascular disease risk prediction. The potential clinical use of this panel of blood pressure-associated polymorphisms remains to be determined. © 2015 American Heart Association, Inc.
Time-dependent changes in mortality and transformation risk in MDS
Tuechler, Heinz; Sanz, Guillermo; Schanz, Julie; Garcia-Manero, Guillermo; Solé, Francesc; Bennett, John M.; Bowen, David; Fenaux, Pierre; Dreyfus, Francois; Kantarjian, Hagop; Kuendgen, Andrea; Malcovati, Luca; Cazzola, Mario; Cermak, Jaroslav; Fonatsch, Christa; Le Beau, Michelle M.; Slovak, Marilyn L.; Levis, Alessandro; Luebbert, Michael; Maciejewski, Jaroslaw; Machherndl-Spandl, Sigrid; Magalhaes, Silvia M. M.; Miyazaki, Yasushi; Sekeres, Mikkael A.; Sperr, Wolfgang R.; Stauder, Reinhard; Tauro, Sudhir; Valent, Peter; Vallespi, Teresa; van de Loosdrecht, Arjan A.; Germing, Ulrich; Haase, Detlef; Greenberg, Peter L.
2016-01-01
In myelodysplastic syndromes (MDSs), the evolution of risk for disease progression or death has not been systematically investigated despite being crucial for correct interpretation of prognostic risk scores. In a multicenter retrospective study, we described changes in risk over time, the consequences for basal prognostic scores, and their potential clinical implications. Major MDS prognostic risk scoring systems and their constituent individual predictors were analyzed in 7212 primary untreated MDS patients from the International Working Group for Prognosis in MDS database. Changes in risk of mortality and of leukemic transformation over time from diagnosis were described. Hazards regarding mortality and acute myeloid leukemia transformation diminished over time from diagnosis in higher-risk MDS patients, whereas they remained stable in lower-risk patients. After approximately 3.5 years, hazards in the separate risk groups became similar and were essentially equivalent after 5 years. This fact led to loss of prognostic power of different scoring systems considered, which was more pronounced for survival. Inclusion of age resulted in increased initial prognostic power for survival and less attenuation in hazards. If needed for practicability in clinical management, the differing development of risks suggested a reasonable division into lower- and higher-risk MDS based on the IPSS-R at a cutoff of 3.5 points. Our data regarding time-dependent performance of prognostic scores reflect the disparate change of risks in MDS subpopulations. Lower-risk patients at diagnosis remain lower risk whereas initially high-risk patients demonstrate decreasing risk over time. This change of risk should be considered in clinical decision making. PMID:27335276
Fujiyoshi, Akira; Arima, Hisatomi; Tanaka-Mizuno, Sachiko; Hisamatsu, Takahashi; Kadowaki, Sayaka; Kadota, Aya; Zaid, Maryam; Sekikawa, Akira; Yamamoto, Takashi; Horie, Minoru; Miura, Katsuyuki; Ueshima, Hirotsugu
2017-12-05
The clinical significance of coronary artery calcification (CAC) is not fully determined in general East Asian populations where background coronary heart disease (CHD) is less common than in USA/Western countries. We cross-sectionally assessed the association between CAC and estimated CHD risk as well as each major risk factor in general Japanese men. Participants were 996 randomly selected Japanese men aged 40-79 y, free of stroke, myocardial infarction, or revascularization. We examined an independent relationship between each risk factor used in prediction models and CAC score ≥100 by logistic regression. We then divided the participants into quintiles of estimated CHD risk per prediction model to calculate odds ratio of having CAC score ≥100. Receiver operating characteristic curve and c-index were used to examine discriminative ability of prevalent CAC for each prediction model. Age, smoking status, and systolic blood pressure were significantly associated with CAC score ≥100 in the multivariable analysis. The odds of having CAC score ≥100 were higher for those in higher quintiles in all prediction models (p-values for trend across quintiles <0.0001 for all models). All prediction models showed fair and similar discriminative abilities to detect CAC score ≥100, with similar c-statistics (around 0.70). In a community-based sample of Japanese men free of CHD and stroke, CAC score ≥100 was significantly associated with higher estimated CHD risk by prediction models. This finding supports the potential utility of CAC as a biomarker for CHD in a general Japanese male population.
Barroso, Lourdes Cañón; Muro, Eloísa Cruces; Herrera, Natalio Díaz; Ochoa, Gerardo Fernández; Hueros, Juan Ignacio Calvo; Buitrago, Francisco
2010-01-01
Objective To analyse the 10-year performance of the original Framingham coronary risk function and of the SCORE cardiovascular death risk function in a non-diabetic population of 40–65 years of age served by a Spanish healthcare centre. Also, to estimate the percentage of patients who are candidates for antihypertensive and lipid-lowering therapy. Design Longitudinal, observational study of a retrospective cohort followed up for 10 years. Setting Primary care health centre. Patients A total of 608 non-diabetic patients of 40–65 years of age (mean 52.8 years, 56.7% women), without evidence of cardiovascular disease were studied. Main outcome measures Coronary risk at 10 years from the time of their recruitment, using the tables based on the original Framingham function, and of their 10-year risk of fatal cardiovascular disease using the SCORE tables. Results The actual incidence rates of coronary and fatal cardiovascular events were 7.9% and 1.5%, respectively. The original Framingham equation over-predicted risk by 64%, while SCORE function over-predicted risk by 40%, but the SCORE model performed better than the Framingham one for discrimination and calibration statistics. The original Framingham function classified 18.3% of the population as high risk and SCORE 9.2%. The proportions of patients who would be candidates for lipid-lowering therapy were 31.0% and 23.8% according to the original Framingham and SCORE functions, respectively, and 36.8% and 31.2% for antihypertensive therapy. Conclusion The SCORE function showed better values than the original Framingham function for each of the discrimination and calibration statistics. The original Framingham function selected a greater percentage of candidates for antihypertensive and lipid-lowering therapy. PMID:20873973
2014-01-01
Background The Alcohol Use Disorders Identification Test (AUDIT) is a 10-item alcohol screener that has been recommended for use in Aboriginal primary health care settings. The time it takes respondents to complete AUDIT, however, has proven to be a barrier to its routine delivery. Two shorter versions, AUDIT-C and AUDIT-3, have been used as screening instruments in primary health care. This paper aims to identify the AUDIT-C and AUDIT-3 cutoff scores that most closely identify individuals classified as being at-risk drinkers, high-risk drinkers, or likely alcohol dependent by the 10-item AUDIT. Methods Two cross-sectional surveys were conducted from June 2009 to May 2010 and from July 2010 to June 2011. Aboriginal Australian participants (N = 156) were recruited through an Aboriginal Community Controlled Health Service, and a community-based drug and alcohol treatment agency in rural New South Wales (NSW), and through community-based Aboriginal groups in Sydney NSW. Sensitivity, specificity, and positive and negative predictive values of each score on the AUDIT-C and AUDIT-3 were calculated, relative to cutoff scores on the 10-item AUDIT for at-risk, high-risk, and likely dependent drinkers. Receiver operating characteristic (ROC) curve analyses were conducted to measure the detection characteristics of AUDIT-C and AUDIT-3 for the three categories of risk. Results The areas under the receiver operating characteristic (AUROC) curves were high for drinkers classified as being at-risk, high-risk, and likely dependent. Conclusions Recommended cutoff scores for Aboriginal Australians are as follows: at-risk drinkers AUDIT-C ≥ 5, AUDIT-3 ≥ 1; high-risk drinkers AUDIT-C ≥ 6, AUDIT-3 ≥ 2; and likely dependent drinkers AUDIT-C ≥ 9, AUDIT-3 ≥ 3. Adequate sensitivity and specificity were achieved for recommended cutoff scores. AUROC curves were above 0.90. PMID:25179547
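A minimal sketch applying the cutoff scores recommended above to classify respondents; the category labels and thresholds follow the abstract.

# Classification using the recommended Aboriginal Australian cutoffs:
# AUDIT-C >= 5 / AUDIT-3 >= 1 at-risk; >= 6 / >= 2 high-risk; >= 9 / >= 3 likely dependent.
def classify_audit_c(score):
    if score >= 9:
        return "likely dependent"
    if score >= 6:
        return "high-risk drinker"
    if score >= 5:
        return "at-risk drinker"
    return "below at-risk threshold"

def classify_audit_3(score):
    if score >= 3:
        return "likely dependent"
    if score >= 2:
        return "high-risk drinker"
    if score >= 1:
        return "at-risk drinker"
    return "below at-risk threshold"

print(classify_audit_c(7), "|", classify_audit_3(2))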
Saptharishi, L G; Jayashree, Muralidharan; Singhi, Sunit
2016-04-01
Given the high burden of health care-associated infections (HAIs) in resource-limited settings, there is a tendency toward overdiagnosis/treatment. This study was designed to create an easy-to-use, dynamic, bedside risk stratification model for classifying children based on their risk of developing HAIs during their pediatric intensive care unit (PICU) stay, to aid judicious resource utilization. A prospective, observational cohort study was conducted in the 12-bed PICU of a large Indian tertiary care hospital between January and October 2011. A total of 412 consecutive admissions, aged 1 month to 12 years with PICU stay greater than 48 hours were enrolled. Independent predictors for HAIs identified using multivariate regression analysis were combined to create a novel scoring system. Performance and calibration of score were assessed using receiver operating characteristic curves and Hosmer-Lemeshow statistic, respectively. Internal validation was done. Age (<5 years), Pediatric Risk of Mortality III (24 hours) score, presence of indwelling catheters, need for intubation, albumin infusion, immunomodulator, and prior antibiotic use (≥4) were independent predictors of HAIs. This model, with area under the ROC curve of 0.87, at a cutoff of 15, had a negative predictive value of 89.9% with overall accuracy of 79.3%. It reduced classification errors from 29.8% to 20.7%. All 7 predictors retained their statistical significance after bootstrapping, confirming the internal validity of the score. The "Pediatric Risk of Nosocomial Sepsis" score can reliably classify children into high- and low-risk groups, based on their risk of developing HAIs in the PICU of a resource-limited setting. In view of its high sensitivity and specificity, diagnostic and therapeutic interventions may be directed away from the low-risk group, ensuring effective utilization of limited resources. Copyright © 2015 Elsevier Inc. All rights reserved.
Gilman, Robert H.; Sanchez-Abanto, Jose R.; Study Group, CRONICAS Cohort
2016-01-01
Objective. To develop and validate a risk score for detecting cases of undiagnosed diabetes in a resource-constrained country. Methods. Two population-based studies in Peruvian population aged ≥35 years were used in the analysis: the ENINBSC survey (n = 2,472) and the CRONICAS Cohort Study (n = 2,945). Fasting plasma glucose ≥7.0 mmol/L was used to diagnose diabetes in both studies. Coefficients for risk score were derived from the ENINBSC data and then the performance was validated using both baseline and follow-up data of the CRONICAS Cohort Study. Results. The prevalence of undiagnosed diabetes was 2.0% in the ENINBSC survey and 2.9% in the CRONICAS Cohort Study. Predictors of undiagnosed diabetes were age, diabetes in first-degree relatives, and waist circumference. Score values ranged from 0 to 4, with an optimal cutoff ≥2 and had a moderate performance when applied in the CRONICAS baseline data (AUC = 0.68; 95% CI: 0.62–0.73; sensitivity 70%; specificity 59%). When predicting incident cases, the AUC was 0.66 (95% CI: 0.61–0.71), with a sensitivity of 69% and specificity of 59%. Conclusions. A simple nonblood based risk score based on age, diabetes in first-degree relatives, and waist circumference can be used as a simple screening tool for undiagnosed and incident cases of diabetes in Peru. PMID:27689096
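A minimal sketch of a screening rule of this shape. The abstract gives the three predictors, the 0-4 score range, and the cutoff of 2 or more, but not the point allocation, so the thresholds and weights below are hypothetical.

# Hypothetical point allocation for the undiagnosed-diabetes screening score.
def undiagnosed_diabetes_score(age, family_history_dm, waist_cm, male):
    score = 0
    if age >= 55:                       # assumed age threshold
        score += 1
    if family_history_dm:               # diabetes in a first-degree relative
        score += 1
    waist_cutoff = 94 if male else 90   # assumed waist circumference thresholds
    if waist_cm >= waist_cutoff:
        score += 2                      # assumed weighting so the maximum is 4
    advice = "refer for fasting glucose testing" if score >= 2 else "no further testing"
    return score, advice

print(undiagnosed_diabetes_score(age=60, family_history_dm=False, waist_cm=98, male=True))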
Viljoen, Jodi L.; Gray, Andrew L.; Shaffer, Catherine; Latzman, Natasha E.; Scalora, Mario J.; Ullman, Daniel
2018-01-01
Although the Juvenile Sex Offender Assessment Protocol–II (J-SOAP-II) and the Structured Assessment of Violence Risk in Youth (SAVRY) include an emphasis on dynamic, or modifiable factors, there has been little research on dynamic changes on these tools. To help address this gap, we compared admission and discharge scores of 163 adolescents who attended a residential, cognitive-behavioral treatment program for sexual offending. Based on reliable change indices, one half of youth showed a reliable decrease on the J-SOAP-II Dynamic Risk Total Score and one third of youth showed a reliable decrease on the SAVRY Dynamic Risk Total Score. Contrary to expectations, decreases in risk factors and increases in protective factors did not predict reduced sexual, violent nonsexual, or any reoffending. In addition, no associations were found between scores on the Psychopathy Checklist:Youth Version and levels of change. Overall, the J-SOAP-II and the SAVRY hold promise in measuring change, but further research is needed. PMID:26199271
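A minimal sketch of the reliable change index (Jacobson-Truax form) that underlies judgments of a "reliable decrease" between admission and discharge; the standard deviation and reliability values are placeholders rather than J-SOAP-II or SAVRY norms.

import math

# Reliable change index: (discharge - admission) / SEdiff,
# with SEdiff = sqrt(2) * SD * sqrt(1 - reliability); |RCI| > 1.96 indicates reliable change.
def reliable_change_index(admission, discharge, sd, reliability):
    se_diff = math.sqrt(2.0) * sd * math.sqrt(1.0 - reliability)
    return (discharge - admission) / se_diff

rci = reliable_change_index(admission=28.0, discharge=20.0, sd=6.0, reliability=0.85)
print(round(rci, 2), "reliable decrease" if rci < -1.96 else "no reliable change")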
Defining High Risk Patients for Endovascular Aneurysm Repair: A National Analysis
Egorova, Natalia; Giacovelli, Jeannine K.; Gelijns, Annetine; Mureebe, Leila; Greco, Giampaolo; Morrissey, Nicholas; Nowygrod, Roman; Moskowitz, Alan; McKinsey, James; Kent, K. Craig
2011-01-01
Background Endovascular aneurysm repair (EVAR) is commonly used as a minimally invasive technique for repairing infrarenal aortic aneurysms. There have been recent concerns that a subset of high-risk patients experience unfavorable outcomes with this intervention. To determine whether such a high-risk cohort exists and to identify the characteristics of these patients, we analyzed the outcomes of Medicare patients treated with EVAR from 2000–2006. Methods and Results We identified 66,943 patients who underwent EVAR from the Inpatient Medicare database. The overall 30-day mortality was 1.6%. A risk model for perioperative mortality was developed by randomly selecting 44,630 patients; the other 1/3 of the dataset was used to validate the model. The model was deemed reliable (Hosmer-Lemeshow statistic p=0.25 for the development model, p=0.24 for the validation model) and accurate (c=0.735 and c=0.731 for the development and the validation model, respectively). In our scoring system, in which individual factor scores ranged between 1 and 7, the following were identified as significant baseline factors predicting mortality: renal failure with dialysis (score=7), renal failure without dialysis (score=3), clinically significant lower extremity ischemia (score=5), patient age ≥85 (score=3), 75–84 (score=2), 70–74 (score=1), heart failure (score=3), chronic liver disease (score=3), female gender (score=2), neurological disorders (score=2), chronic pulmonary disease (score=2), surgeon experience of <3 EVAR procedures (score=1), and hospital annual EVAR volume of <7 procedures (score=1). The majority of Medicare patients who were treated (96.6%, n=64,651) had a score of 9 or less, which correlated with a mortality < 5%. Only 3.4% of patients had a score that correlated with a mortality ≥ 5%, and 0.8% of patients (n=509) had a score of 13 or higher, which correlated with a mortality >10%. Conclusion We conclude that there is a high-risk cohort of patients that should not be treated with EVAR; however, this cohort is small. Our scoring system, which is based on patient and institutional factors, provides criteria that can be easily used by clinicians to quantify perioperative risk for EVAR candidates. PMID:19782526
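A minimal sketch that sums the factor points listed in the abstract and applies the score thresholds quoted there (a score of 9 or less correlating with mortality below 5%, and 13 or higher with mortality above 10%).

# Point values and thresholds as listed in the abstract above.
EVAR_POINTS = {
    "renal_failure_with_dialysis": 7, "renal_failure_without_dialysis": 3,
    "lower_extremity_ischemia": 5, "age_ge_85": 3, "age_75_to_84": 2, "age_70_to_74": 1,
    "heart_failure": 3, "chronic_liver_disease": 3, "female": 2,
    "neurological_disorder": 2, "chronic_pulmonary_disease": 2,
    "surgeon_fewer_than_3_evars": 1, "hospital_fewer_than_7_evars_per_year": 1,
}

def evar_perioperative_risk(factors):
    score = sum(EVAR_POINTS[f] for f in factors)
    if score >= 13:
        band = "mortality > 10%"
    elif score > 9:
        band = "mortality >= 5%"
    else:
        band = "mortality < 5%"
    return score, band

print(evar_perioperative_risk(["age_75_to_84", "heart_failure", "female"]))  # (7, 'mortality < 5%')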
Harrison, Stephanie L; de Craen, Anton J M; Kerse, Ngaire; Teh, Ruth; Granic, Antoneta; Davies, Karen; Wesnes, Keith A; den Elzen, Wendy P J; Gussekloo, Jacobijn; Kirkwood, Thomas B L; Robinson, Louise; Jagger, Carol; Siervo, Mario; Stephan, Blossom C M
2017-02-01
To examine the Framingham Stroke Risk Profile (FSRP); the Cardiovascular Risk Factors, Aging, and Incidence of Dementia (CAIDE) risk score, and oxi-inflammatory load (cumulative risk score of three blood biomarkers-homocysteine, interleukin-6, C-reactive protein) for associations with cognitive decline using three cohort studies of very old adults and to examine whether incorporating these biomarkers with the risk scores can affect the association with cognitive decline. Three longitudinal, population-based cohort studies. Newcastle-upon-Tyne, United Kingdom; Leiden, the Netherlands; and Lakes and Bay of Plenty District Health Board areas, New Zealand. Newcastle 85+ Study participants (n = 616), Leiden 85-plus Study participants (n = 444), and Life and Living in Advanced Age, a Cohort Study in New Zealand (LiLACS NZ Study) participants (n = 396). FSRP, CAIDE risk score, oxi-inflammatory load, FSRP incorporating oxi-inflammatory load, and CAIDE risk score incorporating oxi-inflammatory load. Oxi-inflammatory load could be calculated only in the Newcastle 85+ and the Leiden 85-plus studies. Measures of global cognitive function were available for all three data sets. Domain-specific measures were available for the Newcastle 85+ and the Leiden 85-plus studies. Meta-analysis of pooled results showed greater risk of incident global cognitive impairment with higher FSRP (hazard ratio (HR) = 1.46, 95% confidence interval (CI) = 1.08-1.98), CAIDE (HR = 1.53, 95% CI = 1.09-2.14), and oxi-inflammatory load (HR = 1.73, 95% CI = 1.04-2.88) scores. Adding oxi-inflammatory load to the risk scores increased the risk of cognitive impairment for the FSRP (HR = 1.65, 95% CI = 1.17-2.33) and the CAIDE model (HR = 1.93, 95% CI = 1.39-2.67). Adding oxi-inflammatory load to cardiovascular risk scores may be useful for determining risk of cognitive impairment in very old adults. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.
Rodríguez-Sánchez, B.; Mata-Cases, M.; Rodríguez-Mañas, L.; Capel, M.; Oliva-Moreno, J.
2017-01-01
Background To analyse and compare the impact of cardiovascular risk factors and disease on health-related quality of life (HRQoL) in people with and without diabetes living in the community. Methods We used data from 1,905 people with diabetes and 19,031 people without diabetes from the last Spanish National Health Survey (years 2011–2012). The HRQoL instrument used was the EuroQol 5D-5L, based on time trade-off scores. Matching methods were used to assess any differences in the HRQoL in people with and without diabetes with the same characteristics (age, gender, education level, and healthy lifestyle), according to cardiovascular risk factors and diseases. Disparities were also analysed for every dimension of HRQoL: mobility, daily activities, personal care, pain/discomfort, and anxiety/depression. Results There were no significant differences in time trade-off scores between people with and without diabetes when cardiovascular risk factors or established cardiovascular disease were not present. However, when cardiovascular risk factors were present, the HRQoL score was significantly lower in people with diabetes than in those without. This difference was indeed greater when cardiovascular diseases were present. More precisely, people with diabetes and any of the cardiovascular risk factors, who have not yet developed any cardiovascular disease, report lower HRQoL than those without diabetes: a difference of 0.046 points on the TTO score (out of 1), or 7.93 points on the 100-point VAS, rising to 0.14 TTO points (14.61 VAS points) when cardiovascular diseases were present. In fact, when all three risk factors were present in people with diabetes, HRQoL was significantly lower (by 0.10 TTO points and 10.86 VAS points), with obesity being the most influential risk factor. Conclusions The presence of established cardiovascular disease and/or cardiovascular risk factors, especially obesity, accounts for impaired quality of life in people with diabetes. PMID:29240836
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-03
... maximum of 15 points, based upon significant risk factors that are not adequately captured in the... severity score could be adjusted, up or down, by a maximum of 15 points, based on significant risk factors... Risk (VaR)/Tier 1 capital--and one additional factor to the ability to withstand funding-related stress...
Risk score for peri-interventional complications of carotid artery stenting.
Hofmann, Robert; Niessner, Alexander; Kypta, Alexander; Steinwender, Clemens; Kammler, Jürgen; Kerschner, Klaus; Grund, Michael; Leisch, Franz; Huber, Kurt
2006-10-01
Routinely available independent risk factors for the peri-interventional outcome of patients undergoing elective carotid artery stenting (CAS) are lacking. The rationale of the study was to create a risk score identifying high-risk patients. We prospectively enrolled 606 consecutive patients assigned to CAS at a secondary care hospital. Various biochemical, clinical, and lesion-related risk factors were prospectively defined. The primary end point reflecting periprocedural complications encompassed minor and major stroke, nonfatal myocardial infarction and all-cause mortality within 30 days. Three percent of patients (n=18) experienced a nonfatal minor (n=13) or major (n=5) stroke. 1.3% of patients (n=8) died from fatal stroke (n=4) or other causes (n=4). No myocardial infarction was observed within 30 days after stenting. Multivariable analysis revealed diabetes mellitus with inadequate glycemic control (HbA1c > 7%), age > or = 80 years, ulceration of the carotid artery stenosis, and a contralateral stenosis > or = 50% as independent risk factors. A risk score formed with these variables showed a superior predictive value (C-statistic = 0.73) compared with single risk factors. The presence of 2 or more of these risk factors identified patients with a risk of 11% for a periprocedural complication compared with 2% in patients with a score of 0 or 1. In patients undergoing elective CAS, a risk score based on routinely accessible variables was able to identify patients at high-risk for atherothrombotic events and all-cause death within 30 days after the intervention.
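A minimal sketch counting the four independent risk factors identified above; the roughly 2% versus 11% 30-day complication rates for scores of 0-1 versus 2 or more are also taken from the abstract.

# Count of the four independent predictors of peri-interventional complications after CAS.
def cas_risk_score(diabetes_with_hba1c_gt_7, age, ulcerated_stenosis, contralateral_stenosis_pct):
    score = 0
    score += 1 if diabetes_with_hba1c_gt_7 else 0
    score += 1 if age >= 80 else 0
    score += 1 if ulcerated_stenosis else 0
    score += 1 if contralateral_stenosis_pct >= 50 else 0
    risk = "about 11% 30-day complication risk" if score >= 2 else "about 2% 30-day complication risk"
    return score, risk

print(cas_risk_score(diabetes_with_hba1c_gt_7=True, age=82, ulcerated_stenosis=False,
                     contralateral_stenosis_pct=30))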
McDevitt, Roland D; Haviland, Amelia M; Lore, Ryan; Laudenberger, Laura; Eisenberg, Matthew; Sood, Neeraj
2014-04-01
To identify the degree of selection into consumer-directed health plans (CDHPs) versus traditional plans over time, and factors that influence choice and temper risk selection. Sixteen large employers offering both CDHP and traditional plans during the 2004–2007 period, more than 200,000 families. We model CDHP choice with logistic regression; predictors include risk scores, in addition to family, choice setting, and plan characteristics. Additional models stratify by account type or single enrollee versus family. Risk scores, family characteristics, and enrollment decisions are derived from medical claims and enrollment files. Interviews with human resources executives provide additional data. CDHP risk scores were 74 percent of traditional plan scores in the first year, and this difference declined over time. Employer contributions to accounts and employee premium savings fostered CDHP enrollment and reduced risk selection. Having to make an active choice of plan increased CDHP enrollment but also increased risk selection. Risk selection was greater for singles than families and did not differ between HRA and HSA-based CDHPs. Risk selection was not severe and it was well managed. Employers have effective methods to encourage CDHP enrollment and temper selection against traditional plans.
Hendriks, Rianne J; van der Leest, Marloes M G; Dijkstra, Siebren; Barentsz, Jelle O; Van Criekinge, Wim; Hulsbergen-van de Kaa, Christina A; Schalken, Jack A; Mulders, Peter F A; van Oort, Inge M
2017-10-01
Prostate cancer (PCa) diagnostics would greatly benefit from more accurate, non-invasive techniques for the detection of clinically significant disease, leading to a reduction of over-diagnosis and over-treatment. The aim of this study was to determine the association between a novel urinary biomarker-based risk score (SelectMDx), multiparametric MRI (mpMRI) outcomes, and biopsy results for PCa detection. This retrospective observational study used data from the validation study of the SelectMDx score, in which urine was collected after digital rectal examination from men undergoing prostate biopsies. A subset of these patients also underwent an mpMRI scan of the prostate. The indications for performing mpMRI were based on persistent clinical suspicion of PCa or local staging after PCa was found upon biopsy. All mpMRI images were centrally reviewed in 2016 by an experienced radiologist blinded to the urine test results and biopsy outcome. PI-RADS version 2 was used. In total, 172 patients were included for analysis. One hundred (58%) patients had PCa detected upon prostate biopsy, of whom 52 (52%) had high-grade disease, which correlated with a significantly higher SelectMDx score (P < 0.01). The median SelectMDx score was significantly higher in patients with a suspicious significant lesion on mpMRI compared to those with no suspicion of significant PCa (P < 0.01). For the prediction of mpMRI outcome, the area under the curve of SelectMDx was 0.83 compared to 0.66 for PSA and 0.65 for PCA3. There was a positive association between SelectMDx score and the final PI-RADS grade. There was a statistically significant difference in SelectMDx score between PI-RADS 3 and 4 (P < 0.01) and between PI-RADS 4 and 5 (P < 0.01). The novel urinary biomarker-based SelectMDx score is a promising tool in PCa detection. This study showed promising results regarding the correlation between the SelectMDx score and mpMRI outcomes, outperforming PCA3. Our results suggest that this risk score could guide clinicians in identifying patients at risk for significant PCa and selecting patients for further radiological diagnostics to reduce unnecessary procedures. © 2017 Wiley Periodicals, Inc.
Han, Yaling; Chen, Jiyan; Qiu, Miaohan; Li, Yi; Li, Jing; Feng, Yingqing; Qiu, Jian; Meng, Liang; Sun, Yihong; Tao, Guizhou; Wu, Zhaohui; Yang, Chunyu; Guo, Jincheng; Pu, Kui; Chen, Shaoliang; Wang, Xiaozeng
2018-06-05
The prognosis of patients with coronary artery disease (CAD) at hospital discharge varies considerably, and post-discharge risk of ischemic events remains a concern. However, risk prediction tools to identify the risk of ischemia for these patients have not yet been reported. We sought to develop a scoring system for predicting long-term ischemic events in CAD patients receiving antiplatelet therapy that would support appropriate personalized decision-making for these patients. In this prospective Optimal antiPlatelet Therapy for Chinese patients with Coronary Artery Disease (OPT-CAD, NCT01735305) registry, a total of 14,032 patients with CAD receiving at least one kind of antiplatelet agent were enrolled from 107 centers across China, from January 2012 to March 2014. The risk scoring system was developed in a derivation cohort (the first 10,000 patients enrolled in the database) using a logistic regression model and was subsequently tested in a validation cohort (the last 4,032 patients). Points in the risk score were assigned based on the multivariable odds ratio of each factor. Ischemic events were defined as the composite of cardiac death, myocardial infarction, or stroke. Ischemic events occurred in 342 (3.4%) patients in the derivation cohort and 160 (4.0%) patients in the validation cohort during 1-year follow-up. The OPT-CAD score, ranging from 0 to 257 points, consists of 10 independent risk factors: age (0-71 points), heart rate (0-36 points), hypertension (0-20 points), prior myocardial infarction (16 points), prior stroke (16 points), renal insufficiency (21 points), anemia (19 points), low ejection fraction (22 points), positive cardiac troponin (23 points), and ST-segment deviation (13 points). In predicting 1-year ischemic events, the areas under the receiver operating characteristic curve were 0.73 and 0.72 in the derivation and validation cohorts, respectively. The incidences of ischemic events in low- (0-90 points), medium- (91-150 points), and high-risk (≥151 points) patients were 1.6%, 5.5%, and 15.0%, respectively. Compared to the GRACE score, the OPT-CAD score had better discrimination in predicting ischemic events and all-cause mortality (ischemic events: 0.72 vs 0.65; all-cause mortality: 0.79 vs 0.72; both P<0.001). Among CAD patients, a risk score based on 10 baseline clinical variables performed better than the GRACE risk score in predicting long-term ischemic events. However, further research is needed to assess the value of the OPT-CAD score in guiding the management of antiplatelet therapy for patients with CAD. This article is protected by copyright. All rights reserved.
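A minimal sketch of how the published strata would be applied once a patient's component points are known. The stratum boundaries and event rates come from the abstract; the individual point values in the example are illustrative and would in practice be read from the published OPT-CAD nomogram.

# Risk strata (0-90 low, 91-150 medium, >= 151 high) and 1-year event rates from the abstract.
def opt_cad_stratum(component_points):
    total = sum(component_points.values())
    if total >= 151:
        return total, "high risk (about 15.0% 1-year ischemic events)"
    if total >= 91:
        return total, "medium risk (about 5.5% 1-year ischemic events)"
    return total, "low risk (about 1.6% 1-year ischemic events)"

# Hypothetical patient; point values below are placeholders, not nomogram-derived.
print(opt_cad_stratum({"age": 45, "heart_rate": 10, "hypertension": 20,
                       "prior_myocardial_infarction": 16, "positive_troponin": 23}))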
Olesen, Jonas Bjerring; Lip, Gregory Y H; Hansen, Morten Lock; Hansen, Peter Riis; Tolstrup, Janne Schurmann; Lindhardsen, Jesper; Selmer, Christian; Ahlehoff, Ole; Olsen, Anne-Marie Schjerning; Gislason, Gunnar Hilmar; Torp-Pedersen, Christian
2011-01-31
To evaluate the individual risk factors composing the CHADS(2) (Congestive heart failure, Hypertension, Age ≥ 75 years, Diabetes, previous Stroke) score and the CHA(2)DS(2)-VASc (CHA(2)DS(2)-Vascular disease, Age 65-74 years, Sex category) score and to calculate the capability of the schemes to predict thromboembolism. Registry based cohort study. Nationwide data on patients admitted to hospital with atrial fibrillation. Population All patients with atrial fibrillation not treated with vitamin K antagonists in Denmark in the period 1997-2006. Stroke and thromboembolism. Of 121,280 patients with non-valvular atrial fibrillation, 73,538 (60.6%) fulfilled the study inclusion criteria. In patients at "low risk" (score = 0), the rate of thromboembolism per 100 person years was 1.67 (95% confidence interval 1.47 to 1.89) with CHADS(2) and 0.78 (0.58 to 1.04) with CHA(2)DS(2)-VASc at one year's follow-up. In patients at "intermediate risk" (score = 1), this rate was 4.75 (4.45 to 5.07) with CHADS(2) and 2.01 (1.70 to 2.36) with CHA(2)DS(2)-VASc. The rate of thromboembolism depended on the individual risk factors composing the scores, and both schemes underestimated the risk associated with previous thromboembolic events. When patients were categorised into low, intermediate, and high risk groups, C statistics at 10 years' follow-up were 0.812 (0.796 to 0.827) with CHADS(2) and 0.888 (0.875 to 0.900) with CHA(2)DS(2)-VASc. The risk associated with a specific risk stratification score depended on the risk factors composing the score. CHA(2)DS(2)-VASc performed better than CHADS(2) in predicting patients at high risk, and those categorised as low risk by CHA(2)DS(2)-VASc were truly at low risk for thromboembolism.
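A minimal sketch of the two schemes compared above, using their standard point assignments (CHADS2: 1 point each for congestive heart failure, hypertension, age 75 or older, and diabetes, and 2 points for prior stroke/TIA; CHA2DS2-VASc adds vascular disease, age 65-74, and female sex, with 2 points for age 75 or older).

# Standard point assignments for CHADS2 and CHA2DS2-VASc.
def chads2(chf, hypertension, age, diabetes, prior_stroke_tia):
    return int(chf) + int(hypertension) + int(age >= 75) + int(diabetes) + 2 * int(prior_stroke_tia)

def cha2ds2_vasc(chf, hypertension, age, diabetes, prior_stroke_tia, vascular_disease, female):
    score = (int(chf) + int(hypertension) + int(diabetes)
             + int(vascular_disease) + int(female) + 2 * int(prior_stroke_tia))
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    return score

# Example: a 72-year-old woman with hypertension and no other risk factors.
print(chads2(False, True, 72, False, False),                      # 1 (intermediate by CHADS2)
      cha2ds2_vasc(False, True, 72, False, False, False, True))   # 3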
Rahman, Mushtaqur; Simmons, Rebecca K; Harding, Anne-Helen; Wareham, Nicholas J; Griffin, Simon J
2008-06-01
Randomized trials have demonstrated that Type 2 diabetes is preventable among high-risk individuals. To date, such individuals have been identified through population screening using the oral glucose tolerance test. To assess whether a risk score comprising only routinely collected non-biochemical parameters was effective in identifying those at risk of developing Type 2 diabetes. Population-based prospective cohort (European Prospective Investigation of Cancer-Norfolk). Participants aged 40-79 recruited from UK general practices attended a health check between 1993 and 1998 (n = 25 639) and were followed for a mean of 5 years for diabetes incidence. The Cambridge Diabetes Risk Score was computed for 24 495 individuals with baseline data on age, sex, prescription of steroids and anti-hypertensive medication, family history of diabetes, body mass index and smoking status. We examined the incidence of diabetes across quintiles of the risk score and plotted a receiver operating characteristic (ROC) curve to assess discrimination. There were 323 new cases of diabetes, a cumulative incidence of 2.76/1000 person-years. Those in the top quintile of risk were 22 times more likely to develop diabetes than those in the bottom quintile (odds ratio 22.3; 95% CI: 11.0-45.4). In all, 54% of all clinically incident cases occurred in individuals in the top quintile of risk (risk score > 0.37). The area under the ROC was 74.5%. The risk score is a simple, effective tool for the identification of those at risk of developing Type 2 diabetes. Such methods may be more feasible than mass population screening with biochemical tests in defining target populations for prevention programmes.
Daswani, Bhavna; Desai, Meena; Mitra, Sumegha; Gavali, Shubhangi; Patil, Anushree; Kukreja, Subhash; Khatkhatay, M Ikram
2016-03-01
Fracture risk assessment tool® calculations can be performed with or without addition of bone mineral density; however, the impact of this addition on fracture risk assessment tool® scores has not been studied in Indian women. Given the limited availability and high cost of bone mineral density testing in India, it is important to know the influence of bone mineral density on fracture risk assessment tool® scores in Indian women. Therefore, our aim was to assess the contribution of bone mineral density to fracture risk assessment tool® outcomes in Indian women. Apparently healthy postmenopausal Indian women (n = 506), aged 40-72 years, without clinical risk factors for bone disease, were retrospectively selected, and their fracture risk assessment tool® scores calculated with and without bone mineral density were compared. Based on WHO criteria, 30% of women were osteoporotic, 42.9% were osteopenic, and 27.1% had normal bone mineral density. Fracture risk assessment tool® scores for risk of both major osteoporotic fracture and hip fracture significantly increased on including bone mineral density (P < 0.0001). When the criteria of the US National Osteoporosis Foundation were applied, the number of participants eligible for medical therapy increased upon inclusion of bone mineral density (for major osteoporotic fracture risk, the number of eligible women was 0 without bone mineral density and 1 with it, P > 0.05, whereas for hip fracture risk it was 2 without bone mineral density and 17 with it, P < 0.0001). Until the establishment of country-specific medication intervention thresholds, bone mineral density should be included while calculating fracture risk assessment tool® scores in Indian women. © The Author(s) 2016.
Hillhouse, Joel; Turrisi, Rob; Cleveland, Michael J.; Scaglione, Nichole M.; Baker, Katie; Florence, L. Carter
2015-01-01
Background Younger indoor tanning initiation leads to greater melanoma risk due to more frequent and persistent behavior. Despite this, there are no published studies exploring the predictors of indoor tanning initiation in teen populations. Purpose This longitudinal study uses latent profile analysis to examine indoor tanning initiation in indoor tanning risk subgroups from a national sample of female adolescents. Methods Latent profile analysis used indoor tanning beliefs and perceptions to identify indoor tanning initiation risk subgroups. The teens in each subgroup were reassessed on indoor tanning initiation after a year. Results Three subgroups were identified: a low-risk, Anti-Tanning subgroup (18.6%) characterized by low scores on positive indoor tanning belief scales and high scores on beliefs about indoor tanning dangers; a moderate-risk Aware Social Tanner subgroup (47.2%) characterized by high scores on positive indoor tanning belief scales but also high scores on beliefs about indoor tanning dangers; and a high-risk Risky Relaxation Tanner subgroup (34.2%) characterized by high scores on positive indoor tanning belief scales and low scores on beliefs about indoor tanning dangers. Teens in the Aware Social Tanner and Risky Relaxation Tanner subgroups were significantly more likely to initiate indoor tanning in the following year. Conclusions These findings highlight the need to identify teens at risk for indoor tanning initiation and develop tailored interventions that will move them to the lowest risk subgroup. Subgroup correlates suggest parent and peer-based interventions may be successful. PMID:26370893
Associations of CAIDE Dementia Risk Score with MRI, PIB-PET measures, and cognition
Stephen, Ruth; Liu, Yawu; Ngandu, Tiia; Rinne, Juha O.; Kemppainen, Nina; Parkkola, Riitta; Laatikainen, Tiina; Paajanen, Teemu; Hänninen, Tuomo; Strandberg, Timo; Antikainen, Riitta; Tuomilehto, Jaakko; Keinänen Kiukaanniemi, Sirkka; Vanninen, Ritva; Helisalmi, Seppo; Levälahti, Esko; Kivipelto, Miia; Soininen, Hilkka; Solomon, Alina
2017-01-01
Background: CAIDE Dementia Risk Score is the first validated tool for estimating dementia risk based on a midlife risk profile. Objectives: This observational study investigated longitudinal associations of CAIDE Dementia Risk Score with brain MRI, amyloid burden evaluated with PIB-PET, and detailed cognition measures. Methods: FINGER participants were at-risk elderly without dementia. CAIDE Risk Score was calculated using data from previous national surveys (mean age 52.4 years). In connection to baseline FINGER visit (on average 17.6 years later, mean age 70.1 years), 132 participants underwent MRI scans, and 48 underwent PIB-PET scans. All 1,260 participants were cognitively assessed (Neuropsychological Test Battery, NTB). Neuroimaging assessments included brain cortical thickness and volumes (Freesurfer 5.0.3), visually rated medial temporal atrophy (MTA), white matter lesions (WML), and amyloid accumulation. Results: Higher CAIDE Dementia Risk Score was related to more pronounced deep WML (OR 1.22, 95% CI 1.05–1.43), lower total gray matter (β-coefficient –0.29, p = 0.001) and hippocampal volume (β-coefficient –0.28, p = 0.003), lower cortical thickness (β-coefficient –0.19, p = 0.042), and poorer cognition (β-coefficients –0.31 for total NTB score, –0.25 for executive functioning, –0.33 for processing speed, and –0.20 for memory, all p < 0.001). Higher CAIDE Dementia Risk Score including APOE genotype was additionally related to more pronounced MTA (OR 1.15, 95% CI 1.00–1.30). No associations were found with periventricular WML or amyloid accumulation. Conclusions: The CAIDE Dementia Risk Score was related to indicators of cerebrovascular changes and neurodegeneration on MRI, and cognition. The lack of association with brain amyloid accumulation needs to be verified in studies with larger sample sizes. PMID:28671114
Identifying Patients With Vesicovaginal Fistula at High Risk of Urinary Incontinence After Surgery
Bengtson, Angela M.; Kopp, Dawn; Tang, Jennifer H.; Chipungu, Ennet; Moyo, Margaret; Wilkinson, Jeffrey
2016-01-01
Objective To develop a risk score to identify women with vesicovaginal fistula at high risk of residual urinary incontinence after surgical repair. Methods We conducted a prospective cohort study among 401 women undergoing their first vesicovaginal fistula repair at a referral fistula repair center in Lilongwe, Malawi, between September 2011 and December 2014, who returned for follow-up within 120 days of surgery. We used logistic regression to develop a risk score to identify women with a high likelihood of residual urinary incontinence, defined as incontinence grade 2-5 within 120 days of vesicovaginal fistula repair, based on preoperative clinical and demographic characteristics (age, number of years with fistula, HIV status, body mass index, previous repair surgery at an outside facility, revised Goh classification, Goh vesicovaginal fistula size, circumferential fistula, vaginal scarring, bladder size, and urethral length). The sensitivity, specificity, and positive and negative predictive values of the risk score at each cut-point were assessed. Results Overall, 11 (3%) women had unsuccessful fistula closure. Of those with successful fistula closure (n=372), 85 (23%) experienced residual incontinence. A risk score cut-point of 20 had sensitivity 82% (95% CI 72%, 89%) and specificity 63% (95% CI 57%, 69%) to potentially identify women with residual incontinence. In our population, the positive predictive value for a risk score of 20 or higher was 43% (95% CI 36%, 51%) and the negative predictive value was 91% (95% CI 86%, 94%). Forty-eight percent of our study population had a risk score ≥20 and therefore would have been identified for further intervention. Conclusions A risk score of 20 or higher was associated with an increased likelihood of residual incontinence, with satisfactory sensitivity and specificity. If validated in other settings, the risk score could be used to refer women with a high likelihood of postoperative incontinence to more experienced surgeons. PMID:27741181
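The operating characteristics quoted above (sensitivity, specificity, PPV and NPV at a cut-point of 20) all derive from a single 2x2 table. A minimal sketch, assuming simulated scores and outcomes rather than the Malawi cohort data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative data only: preoperative risk scores and residual-incontinence outcomes.
score = rng.normal(18, 6, 372)
incontinence = rng.binomial(1, 1 / (1 + np.exp(-(score - 22) / 3)))

def operating_characteristics(score, outcome, cutpoint):
    """2x2-table metrics for classifying 'high risk' as score >= cutpoint."""
    pred = score >= cutpoint
    tp = np.sum(pred & (outcome == 1))
    fp = np.sum(pred & (outcome == 0))
    fn = np.sum(~pred & (outcome == 1))
    tn = np.sum(~pred & (outcome == 0))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "flagged_fraction": pred.mean(),
    }

print(operating_characteristics(score, incontinence, cutpoint=20))
```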
Yen, Jennifer; Van Arendonk, Kyle J.; Streiff, Michael B.; McNamara, LeAnn; Stewart, F. Dylan; Conner G, Kim G; Thompson, Richard E.; Haut, Elliott R.; Takemoto, Clifford M.
2017-01-01
OBJECTIVES Identify risk factors for venous thromboembolism (VTE) and develop a VTE risk assessment model for pediatric trauma patients. DESIGN, SETTING, AND PATIENTS We performed a retrospective review of patients 21 years and younger who were hospitalized following traumatic injuries at the Johns Hopkins level 1 adult and pediatric trauma center (1987-2011). The clinical characteristics of patients with and without VTE were compared, and multivariable logistic regression analysis was used to identify independent risk factors for VTE. Weighted risk assessment scoring systems were developed based on these and previously identified factors from patients in the National Trauma Data Bank (NTDB, 2008-2010); the scoring systems were validated in this cohort from Johns Hopkins as well as a cohort of pediatric admissions from the NTDB (2011-2012). MAIN RESULTS Forty-nine of 17,366 pediatric trauma patients (0.28%) were diagnosed with VTE after admission to our trauma center. After adjusting for potential confounders, VTE was independently associated with older age, surgery, blood transfusion, higher Injury Severity Score (ISS), and lower Glasgow Coma Scale (GCS) score. These and additional factors were identified in 402,329 pediatric patients from the NTDB from 2008-2010; independent risk factors from the logistic regression analysis of this NTDB cohort were selected and incorporated into weighted risk assessment scoring systems. Two models were developed and cross-validated in two separate pediatric trauma cohorts: 1) 282,535 patients in the NTDB from 2011 to 2012 and 2) 17,366 patients from Johns Hopkins. The receiver operating characteristic curves for these models in the validation cohorts had areas under the curve ranging from 90% to 94%. CONCLUSIONS VTE is infrequent after trauma in pediatric patients. We developed weighted scoring systems to stratify pediatric trauma patients at risk for VTE. These systems may have the potential to guide risk-appropriate VTE prophylaxis in children after trauma. PMID:26963757
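One common way to turn logistic-regression coefficients into a weighted point score is to scale each beta against the smallest coefficient and round to integers. The sketch below uses hypothetical coefficients and factor definitions (the age, ISS and GCS thresholds are placeholders, not the published model):

```python
import numpy as np

# Hypothetical coefficients (log-odds) from a fitted logistic model; not the
# published values from the NTDB analysis.
betas = {"age_ge_13": 1.2, "surgery": 0.9, "transfusion": 1.1,
         "iss_ge_25": 1.4, "gcs_le_8": 0.8}

# Convert betas to integer points by scaling against the smallest coefficient.
scale = min(betas.values())
points = {k: int(round(v / scale)) for k, v in betas.items()}

def vte_risk_points(patient):
    """Sum points for the risk factors present in a patient record (0/1 flags)."""
    return sum(points[k] for k, present in patient.items() if present)

example = {"age_ge_13": 1, "surgery": 1, "transfusion": 0, "iss_ge_25": 1, "gcs_le_8": 0}
print(points, vte_risk_points(example))
```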
Andersson, M; Kolodziej, B; Andersson, R E
2017-10-01
The role of imaging in the diagnosis of appendicitis is controversial. This prospective interventional study and nested randomized trial analysed the impact of implementing a risk stratification algorithm based on the Appendicitis Inflammatory Response (AIR) score, and compared routine imaging with selective imaging after clinical reassessment. Patients presenting with suspicion of appendicitis between September 2009 and January 2012 from age 10 years were included at 21 emergency surgical centres and from age 5 years at three university paediatric centres. Registration of clinical characteristics, treatments and outcomes started during the baseline period. The AIR score-based algorithm was implemented during the intervention period. Intermediate-risk patients were randomized to routine imaging or selective imaging after clinical reassessment. The baseline period included 1152 patients, and the intervention period 2639, of whom 1068 intermediate-risk patients were randomized. In low-risk patients, use of the AIR score-based algorithm resulted in less imaging (19·2 versus 34·5 per cent; P < 0·001), fewer admissions (29·5 versus 42·8 per cent; P < 0·001), and fewer negative explorations (1·6 versus 3·2 per cent; P = 0·030) and operations for non-perforated appendicitis (6·8 versus 9·7 per cent; P = 0·034). Intermediate-risk patients randomized to the imaging and observation groups had the same proportion of negative appendicectomies (6·4 versus 6·7 per cent respectively; P = 0·884), number of admissions, number of perforations and length of hospital stay, but routine imaging was associated with an increased proportion of patients treated for appendicitis (53·4 versus 46·3 per cent; P = 0·020). AIR score-based risk classification can safely reduce the use of diagnostic imaging and hospital admissions in patients with suspicion of appendicitis. Registration number: NCT00971438 ( http://www.clinicaltrials.gov). © 2017 BJS Society Ltd Published by John Wiley & Sons Ltd.
A novel ultrasound-based vascular calcification score (CALCS) to detect subclinical atherosclerosis.
Flore, R; Zocco, M A; Ainora, M E; Fonnesu, C; Nesci, A; Gasbarrini, A; Ponziani, F R
2018-02-01
To quantify non-coronary vascular calcifications (VC) in asymptomatic patients at low-intermediate cardiovascular risk by a new color Doppler ultrasound (DUS)-based score (the carotid, aortic, lower limbs calcium score, CALCS), and to correlate this score with classical parameters associated with cardiovascular risk [carotid intima-media thickness (IMT) and arterial stiffness (AS)]. All consecutive asymptomatic patients who underwent a screening DUS of the non-coronary circulation were evaluated, and patients at low-intermediate cardiovascular risk were selected according to the Framingham risk score (FRS). Among them, we enrolled 70 patients with US evidence of VC and 71 age-, sex- and FRS-matched controls. The presence of VC was correlated with classical markers of cardiovascular risk, such as AS and IMT. AS, expressed as pulse wave velocity (PWV) and arterial distensibility, carotid IMT and CALCS were measured for both groups. AS and c-IMT were assessed by a new radio-frequency (RF) DUS-based method. CALCS was generated by our previously described B-mode DUS-based method according to the number/size of VC in 11 non-coronary segments (range 0-33). Patients with VC presented higher AS and IMT values than controls (PWV 8.34±0.98 m/s vs. 6.74±0.68 m/s, p<0.0001; arterial distensibility 267±12 μm vs. 315±65 μm, p=0.001; IMT 687±132 μm vs. 572±91 μm, p<0.0001). The mean CALCS of patients with VC was 8.41±7.78. CALCS was significantly correlated with c-IMT (p<0.0001; r=0.3), PWV (p<0.0001; r=0.4) and arterial distensibility (p=0.002; r=-0.1). The DUS-based CALCS is highly correlated with other validated markers of subclinical atherosclerosis, such as c-IMT and AS. Our results demonstrate the ability of CALCS to identify individual predictive factors beyond the traditional risk factors by quantifying an interesting and novel step of the atherogenic process. Future studies on larger series and with adequate follow-up are necessary to confirm these results and to evaluate the role of this new marker in monitoring the progression of calcific atherosclerosis.
da Cunha, Diogo Thimoteo; de Rosso, Veridiana Vera; Stedefeldt, Elke
2016-03-01
The objective of this study was to verify the characteristics of food safety inspections, considering risk categories and binary scores. A cross-sectional study was performed with 439 restaurants in 43 Brazilian cities. A food safety checklist with 177 items was applied to the food service establishments. These items were classified into four groups (R1 to R4) according to the main factors that can cause foodborne outbreaks: R1, time and temperature aspects; R2, direct contamination; R3, water conditions and raw material; and R4, indirect contamination (i.e., structures and buildings). A score normalized to 100 was calculated for overall violations and for the violations in each risk category. The average violation score (standard deviation) was 18.9% (16.0), with a range of 0.0% to 76.7%. Restaurants with a low overall violation score (approximately 20%) still presented a high number of violations from the R1 and R2 groups, which represent the riskiest violations. Practical solutions to minimize this evaluation bias were discussed. Food safety evaluation should use weighted scores and be risk-based. However, some precautions must be taken by researchers, health inspectors, and health surveillance departments to develop an adequate and reliable instrument.
Sezgin, Duygu; Esin, M Nihal
2018-08-01
To evaluate the effects of a PRECEDE-PROCEED Model based, nurse-delivered Ergonomic Risk Management Program (ERMP) aimed at reducing musculoskeletal symptoms of intensive care unit (ICU) nurses. This pre-test/post-test, non-equivalent control group study comprised 72 ICU nurses from two hospitals. Participants were randomly sampled from the study population. The ERMP was delivered as an intervention that included 26 weeks of follow-up. Data were collected using the "Descriptives of Nurses and Ergonomic Risk Reporting Form", the "Rapid Upper Limb Assessment (RULA) Form", the "ICU Environment Assessment Form" and personal interview forms. There were no differences in sociodemographic characteristics, work conditions or general health between the intervention and control groups. One month after the intervention, nurses showed significant decreases in their total RULA scores during bending down and patient repositioning movements, by 1.40 and 0.82 points, respectively. Six months after the ERMP, the mean total RULA score of nurses during patient repositioning was 4.39±1.49, which corresponds to "immediate further analysis and modifications recommended". Overall, pain intensity scores, medication use due to pain, and RULA ergonomic risk scores decreased significantly, while exercise frequency increased. The ERMP was effective in increasing exercise frequency and decreasing musculoskeletal pain and ergonomic risk levels of ICU nurses. Copyright © 2018 Elsevier Ltd. All rights reserved.
Larson, Mary Jo; Mohr, Beth A; Adams, Rachel Sayko; Wooten, Nikki R; Williams, Thomas V
2014-08-01
We identified to what extent the Department of Defense postdeployment health surveillance program identifies at-risk drinking, alone or in conjunction with psychological comorbidities, and refers service members who screen positive for additional assessment or care. We completed a cross-sectional analysis of 333 803 US Army active duty members returning from Iraq or Afghanistan deployments in fiscal years 2008 to 2011 with a postdeployment health assessment. Alcohol measures included two based on self-report quantity-frequency items, namely at-risk drinking (positive Alcohol Use Disorders Identification Test alcohol consumption questions [AUDIT-C] screen) and severe alcohol problems (AUDIT-C score of 8 or higher), and a third based on the interviewing provider's assessment. Nearly 29% of US Army active duty members screened positive for at-risk drinking, and 5.6% had an AUDIT-C score of 8 or higher. Interviewing providers identified potential alcohol problems among only 61.8% of those screening positive for at-risk drinking and only 74.9% of those with AUDIT-C scores of 8 or higher. They referred for a follow-up visit to primary care or another setting only 29.2% of at-risk drinkers and only 35.9% of those with AUDIT-C scores of 8 or higher. This study identified missed opportunities for early intervention for at-risk drinking. Future research should evaluate the effect of early intervention on long-term outcomes.
Prediction of individual genetic risk to prostate cancer using a polygenic score.
Szulkin, Robert; Whitington, Thomas; Eklund, Martin; Aly, Markus; Eeles, Rosalind A; Easton, Douglas; Kote-Jarai, Z Sofia; Amin Al Olama, Ali; Benlloch, Sara; Muir, Kenneth; Giles, Graham G; Southey, Melissa C; Fitzgerald, Liesel M; Henderson, Brian E; Schumacher, Fredrick; Haiman, Christopher A; Schleutker, Johanna; Wahlfors, Tiina; Tammela, Teuvo L J; Nordestgaard, Børge G; Key, Tim J; Travis, Ruth C; Neal, David E; Donovan, Jenny L; Hamdy, Freddie C; Pharoah, Paul; Pashayan, Nora; Khaw, Kay-Tee; Stanford, Janet L; Thibodeau, Stephen N; McDonnell, Shannon K; Schaid, Daniel J; Maier, Christiane; Vogel, Walther; Luedeke, Manuel; Herkommer, Kathleen; Kibel, Adam S; Cybulski, Cezary; Lubiński, Jan; Kluźniak, Wojciech; Cannon-Albright, Lisa; Brenner, Hermann; Butterbach, Katja; Stegmaier, Christa; Park, Jong Y; Sellers, Thomas; Lin, Hui-Yi; Lim, Hui-Yi; Slavov, Chavdar; Kaneva, Radka; Mitev, Vanio; Batra, Jyotsna; Clements, Judith A; Spurdle, Amanda; Teixeira, Manuel R; Paulo, Paula; Maia, Sofia; Pandha, Hardev; Michael, Agnieszka; Kierzek, Andrzej; Gronberg, Henrik; Wiklund, Fredrik
2015-09-01
Polygenic risk scores comprising established susceptibility variants have been shown to be informative classifiers for several complex diseases, including prostate cancer. For prostate cancer, it is unknown whether the inclusion of genetic markers that have so far not been associated with prostate cancer risk at a genome-wide significance level will improve disease prediction. We built polygenic risk scores in a large training set comprising over 25,000 individuals. Initially, 65 established prostate cancer susceptibility variants were selected. After LD pruning, additional variants were prioritized based on their association with prostate cancer. Six-fold cross-validation was performed to assess genetic risk scores and optimize the number of additional variants to be included. The final model was evaluated in an independent study population including 1,370 cases and 1,239 controls. The polygenic risk score with 65 established susceptibility variants provided an area under the curve (AUC) of 0.67. Adding an additional 68 novel variants significantly increased the AUC to 0.68 (P = 0.0012) and the net reclassification index by 0.21 (P = 8.5E-08). All novel variants were located in genomic regions established as associated with prostate cancer risk. Inclusion of additional genetic variants from established prostate cancer susceptibility regions improves disease prediction. © 2015 Wiley Periodicals, Inc.
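A polygenic risk score of this kind is essentially a weighted sum of risk-allele dosages, with per-allele log odds ratios as weights. A minimal sketch with simulated genotypes and invented weights (not the 65 published variants):

```python
import numpy as np

rng = np.random.default_rng(2)
n_individuals, n_snps = 1000, 65          # 65 variants, mirroring the abstract
dosages = rng.integers(0, 3, size=(n_individuals, n_snps))   # 0/1/2 risk alleles
log_or = rng.normal(0.05, 0.03, n_snps)   # hypothetical per-allele log odds ratios

# Polygenic risk score: weighted sum of risk-allele counts.
prs = dosages @ log_or

# Standardize so effects can be reported per standard deviation of the score.
prs_z = (prs - prs.mean()) / prs.std()
print(prs_z[:5])
```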
León-Justel, Antonio; Madrazo-Atutxa, Ainara; Alvarez-Rios, Ana I; Infantes-Fontán, Rocio; Garcia-Arnés, Juan A; Lillo-Muñoz, Juan A; Aulinas, Anna; Urgell-Rull, Eulàlia; Boronat, Mauro; Sánchez-de-Abajo, Ana; Fajardo-Montañana, Carmen; Ortuño-Alonso, Mario; Salinas-Vert, Isabel; Granada, Maria L; Cano, David A; Leal-Cerro, Alfonso
2016-10-01
Cushing's syndrome (CS) is challenging to diagnose. Increased prevalence of CS in specific patient populations has been reported, but routine screening for CS remains questionable. To decrease the diagnostic delay and improve disease outcomes, simple new screening methods for CS in at-risk populations are needed. To develop and validate a simple scoring system to predict CS based on clinical signs and an easy-to-use biochemical test. Observational, prospective, multicenter. Referral hospital. A cohort of 353 patients attending endocrinology units for outpatient visits. All patients were evaluated with late-night salivary cortisol (LNSC) and a low-dose dexamethasone suppression test for CS. Diagnosis or exclusion of CS. Twenty-six cases of CS were diagnosed in the cohort. A risk scoring system was developed by logistic regression analysis, and cutoff values were derived from a receiver operating characteristic curve. This risk score included clinical signs and symptoms (muscular atrophy, osteoporosis, and dorsocervical fat pad) and LNSC levels. The estimated area under the receiver operating characteristic curve was 0.93, with a sensitivity of 96.2% and specificity of 82.9%. We developed a risk score to predict CS in an at-risk population. This score may help to identify at-risk patients in non-endocrinological settings such as primary care, but external validation is warranted.
Brasil, Albert Vincent Berthier; Teles, Alisson R; Roxo, Marcelo Ricardo; Schuster, Marcelo Neutzling; Zauk, Eduardo Ballverdu; Barcellos, Gabriel da Costa; Costa, Pablo Ramon Fruett da; Ferreira, Nelson Pires; Kraemer, Jorge Luiz; Ferreira, Marcelo Paglioli; Gobbato, Pedro Luis; Worm, Paulo Valdeci
2016-10-01
To analyze the cumulative effect of risk factors associated with early major complications in postoperative spine surgery. Retrospective analysis of 583 surgically-treated patients. Early "major" complications were defined as those that may lead to permanent detrimental effects or require further significant intervention. A balanced risk score was built using multiple logistic regression. Ninety-two early major complications occurred in 76 patients (13%). Age > 60 years and surgery of three or more levels proved to be significant independent risk factors in the multivariate analysis. The balanced scoring system was defined as: 0 points (no risk factor), 2 points (1 factor) or 4 points (2 factors). The incidence of early major complications in each category was 7% (0 points), 15% (2 points) and 29% (4 points) respectively. This balanced scoring system, based on two risk factors, represents an important tool for both surgical indication and for patient counseling before surgery.
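The 0/2/4-point scheme described above reduces to a tiny scoring function plus per-category incidence, as sketched below with simulated patients; the incidence figures will not reproduce the reported 7%/15%/29%.

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative cohort: two binary risk factors per patient.
age_gt_60 = rng.binomial(1, 0.4, 583)
three_plus_levels = rng.binomial(1, 0.3, 583)
complication = rng.binomial(1, 0.07 + 0.08 * age_gt_60 + 0.08 * three_plus_levels)

# 0 points for no risk factor, 2 points for one, 4 points for two.
score = 2 * (age_gt_60 + three_plus_levels)

for s in (0, 2, 4):
    mask = score == s
    print(f"score {s}: incidence {complication[mask].mean():.0%} (n={mask.sum()})")
```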
Baena-Díez, José Miguel; Subirana, Isaac; Ramos, Rafael; Gómez de la Cámara, Agustín; Elosua, Roberto; Vila, Joan; Marín-Ibáñez, Alejandro; Guembe, María Jesús; Rigo, Fernando; Tormo-Díaz, María José; Moreno-Iribas, Conchi; Cabré, Joan Josep; Segura, Antonio; Lapetra, José; Quesada, Miquel; Medrano, María José; González-Diego, Paulino; Frontera, Guillem; Gavrila, Diana; Ardanaz, Eva; Basora, Josep; García, José María; García-Lareo, Manel; Gutiérrez-Fuentes, José Antonio; Mayoral, Eduardo; Sala, Joan; Dégano, Irene R; Francès, Albert; Castell, Conxa; Grau, María; Marrugat, Jaume
2018-04-01
To assess the validity of the original low-risk SCORE function, without and with high-density lipoprotein cholesterol, and of SCORE calibrated to the Spanish population. Pooled analysis with individual data from 12 Spanish population-based cohort studies. We included 30 919 individuals aged 40 to 64 years with no history of cardiovascular disease at baseline, who were followed up for 10 years for the causes of death included in the SCORE project. The validity of the risk functions was analyzed with the area under the ROC curve (discrimination) and the Hosmer-Lemeshow test (calibration), respectively. Follow-up comprised 286 105 person-years. Ten-year cardiovascular mortality was 0.6%. The ratio of estimated to observed cases was 9.1, 6.5, and 9.1 in men and 3.3, 1.3, and 1.9 in women for the original low-risk SCORE function without and with high-density lipoprotein cholesterol and for the calibrated SCORE, respectively; differences between predicted and observed mortality were statistically significant with the Hosmer-Lemeshow test (P < .001 in both sexes and with all functions). The area under the ROC curve with the original SCORE was 0.68 in men and 0.69 in women. All versions of the SCORE functions available in Spain significantly overestimate the cardiovascular mortality observed in the Spanish population. Despite the acceptable discrimination capacity, prediction of the number of fatal cardiovascular events (calibration) was significantly inaccurate. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
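The two calibration checks used here (the estimated-to-observed ratio and the Hosmer-Lemeshow test over deciles of predicted risk) can be computed as in the sketch below; the predicted risks and outcomes are simulated to mimic an over-predicting score and are not the SCORE data.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
predicted = rng.uniform(0.001, 0.15, 30000)               # 10-year predicted CVD mortality
observed = rng.binomial(1, np.clip(predicted / 6, 0, 1))  # an over-predicting score

# Estimated/observed ratio (values well above 1 indicate overestimation).
eo_ratio = predicted.sum() / observed.sum()

# Hosmer-Lemeshow statistic over deciles of predicted risk.
deciles = np.digitize(predicted, np.quantile(predicted, np.arange(0.1, 1.0, 0.1)))
hl_stat = 0.0
for g in range(10):
    m = deciles == g
    exp, obs, n = predicted[m].sum(), observed[m].sum(), m.sum()
    hl_stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
p_value = chi2.sf(hl_stat, df=8)
print(f"E/O = {eo_ratio:.1f}, HL chi2 = {hl_stat:.1f}, p = {p_value:.3g}")
```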
Development of a claims-based risk score to identify obese individuals.
Clark, Jeanne M; Chang, Hsien-Yen; Bolen, Shari D; Shore, Andrew D; Goodwin, Suzanne M; Weiner, Jonathan P
2010-08-01
Obesity is underdiagnosed, hampering system-based health promotion and research. Our objective was to develop and validate a claims-based risk model to identify obese persons using medical diagnosis and prescription records. We conducted a cross-sectional analysis of de-identified claims data from enrollees of 3 Blue Cross Blue Shield plans who completed a health risk assessment capturing height and weight. The final sample of 71,057 enrollees was randomly split into 2 subsamples for development and validation of the obesity risk model. Using the Johns Hopkins Adjusted Clinical Groups case-mix/predictive risk methodology, we categorized study members' diagnosis (ICD) codes. Logistic regression was used to determine which claims-based risk markers were associated with a body mass index (BMI) ≥35 kg/m². The sensitivities of scores at or above the 90th percentile for detecting obesity were 26% to 33%, while the specificities were >90%. The areas under the receiver operating characteristic curve ranged from 0.67 to 0.73. In contrast, a diagnosis of obesity or an obesity medication alone had very poor sensitivity (10% and 1%, respectively); the obesity risk model identified an additional 22% of obese members. Varying the percentile cut-point from the 70th to the 99th percentile resulted in positive predictive values ranging from 15.5% to 59.2%. An obesity risk score was highly specific for detecting a BMI ≥35 kg/m² and substantially increased the detection of obese members beyond a provider-coded obesity diagnosis or medication claim. This model could be used for obesity care management and health promotion or for obesity-related research.
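The trade-off reported above between percentile cut-points and predictive values can be reproduced by sweeping thresholds over a score, as in this sketch with simulated scores and obesity labels (the prevalence and effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
risk_score = rng.normal(0, 1, 35000)
# Simulated obesity labels loosely tied to the score.
obese = rng.binomial(1, 1 / (1 + np.exp(-(risk_score - 1.5))))

# Sensitivity falls and PPV rises as the percentile cut-point moves up.
for pct in (70, 80, 90, 95, 99):
    cut = np.percentile(risk_score, pct)
    flagged = risk_score >= cut
    ppv = obese[flagged].mean()
    sens = obese[flagged].sum() / obese.sum()
    print(f"{pct}th percentile: sensitivity {sens:.2f}, PPV {ppv:.2f}")
```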
Petersen, Japke F; Stuiver, Martijn M; Timmermans, Adriana J; Chen, Amy; Zhang, Hongzhen; O'Neill, James P; Deady, Sandra; Vander Poorten, Vincent; Meulemans, Jeroen; Wennerberg, Johan; Skroder, Carl; Day, Andrew T; Koch, Wayne; van den Brekel, Michiel W M
2018-05-01
TNM classification inadequately estimates patient-specific overall survival (OS). We aimed to improve this by developing a risk-prediction model for patients with advanced larynx cancer. Cohort study. We developed a risk prediction model to estimate the 5-year OS rate based on a cohort of 3,442 patients with T3T4N0N+M0 larynx cancer. The model was internally validated using bootstrapping samples and externally validated on patient data from five external centers (n = 770). The main outcome was performance of the model as tested by discrimination, calibration, and the ability to distinguish risk groups based on tertiles from the derivation dataset. The model performance was compared to a model based on T and N classification only. We included age, gender, T and N classification, and subsite as prognostic variables in the standard model. After external validation, the standard model had a significantly better fit than a model based on T and N classification alone (C statistic, 0.59 vs. 0.55, P < .001). The model was able to distinguish well among three risk groups based on tertiles of the risk score. Adding treatment modality to the model did not decrease its predictive power. As a post hoc analysis, we tested the added value of comorbidity, scored by the American Society of Anesthesiologists classification, in a subsample, which increased the C statistic to 0.68. A risk prediction model for patients with advanced larynx cancer, consisting of readily available clinical variables, gives more accurate estimations of the 5-year survival rate than a model based on T and N classification alone. Level of evidence: 2c. Laryngoscope, 128:1140-1145, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Karayannis, Nicholas V; Jull, Gwendolen A; Nicholas, Michael K; Hodges, Paul W
2018-01-01
To determine the distribution of higher psychological risk features within movement-based subgroups for people with low back pain (LBP). Cross-sectional observational study. Participants were recruited from physiotherapy clinics and community advertisements. Measures were collected at a university outpatient-based physiotherapy clinic. People (N=102) seeking treatment for LBP. Participants were subgrouped according to 3 classification schemes: Mechanical Diagnosis and Treatment (MDT), Treatment-Based Classification (TBC), and O'Sullivan Classification (OSC). Questionnaires were used to categorize low-, medium-, and high-risk features based on depression, anxiety, and stress (Depression, Anxiety, and Stress Scale-21 Items); fear avoidance (Fear-Avoidance Beliefs Questionnaire); catastrophizing and coping (Pain-Related Self-Symptoms Scale); and self-efficacy (Pain Self-Efficacy Questionnaire). Psychological risk profiles were compared between movement-based subgroups within each scheme. Scores across all questionnaires revealed that most patients had low psychological risk profiles, but there were instances of higher (range, 1%-25%) risk profiles within questionnaire components. The small proportion of individuals with higher psychological risk scores were distributed between subgroups across TBC, MDT, and OSC schemes. Movement-based subgrouping alone cannot inform on individuals with higher psychological risk features. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
SIMulation of Medication Error induced by Clinical Trial drug labeling: the SIMME-CT study.
Dollinger, Cecile; Schwiertz, Vérane; Sarfati, Laura; Gourc-Berthod, Chloé; Guédat, Marie-Gabrielle; Alloux, Céline; Vantard, Nicolas; Gauthier, Noémie; He, Sophie; Kiouris, Elena; Caffin, Anne-Gaelle; Bernard, Delphine; Ranchon, Florence; Rioufol, Catherine
2016-06-01
To assess the impact of investigational drug labels on the risk of medication error in drug dispensing. A simulation-based learning program focusing on investigational drug dispensing was conducted. The study was undertaken in the Investigational Drugs Dispensing Unit of a University Hospital in Lyon, France. Sixty-three pharmacy workers (pharmacists, residents, technicians or students) were enrolled. Ten risk factors were selected concerning label information or the risk of confusion with another clinical trial. Each risk factor was scored independently out of 5: the higher the score, the greater the risk of error. From 400 labels analyzed, two groups were selected for the dispensing simulation: 27 labels with high risk (score ≥3) and 27 with low risk (score ≤2). Each question in the learning program was displayed as a simulated clinical trial prescription. Medication error was defined as at least one erroneous answer (i.e. error in drug dispensing). For each question, response times were collected. High-risk investigational drug labels were associated with medication error and slower response times. Error rates were significantly higher (5.5-fold) for the high-risk series. Error frequency was not significantly affected by occupational category or experience in clinical trials. SIMME-CT is the first simulation-based learning tool to focus on investigational drug labels as a risk factor for medication error. SIMME-CT was also used as a training tool for staff involved in clinical research, to develop awareness of medication error risk and to validate competence in continuing medical education. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Sandström, A; Cnattingius, S; Wikström, A K; Stephansson, O
2012-12-01
To investigate risk of recurrence of labour dystocia and mode of delivery in second labour after taking first labour and fetal and maternal characteristics into account. A population-based cohort study. The Swedish Medical Birth Register from 1992 to 2006. A total of 239 953 women who gave birth to their first and second singleton infants in cephalic presentation at ≥ 37 weeks of gestation with spontaneous onset of labour. We used logistic regression analysis to estimate crude and adjusted odds ratios. Labour dystocia and mode of delivery in second labour. Overall labour dystocia affected only 12% of women with previous dystocia. Regardless of mode of first delivery, rates of dystocia in the second labour were higher in women with than without previous dystocia, but were more pronounced in women with previous caesarean section (34%). Analyses with risk score groups for dystocia (risk factors were long interpregnancy interval, maternal age ≥ 35 years, obesity, short maternal stature, not cohabiting and post-term pregnancy) showed that risk of instrumental delivery in second labour increased with previous dystocia and increasing risk score. Among women with trial of labour after caesarean section with previous dystocia and a risk score of 3 or more, 66% had a vaginal instrumental or caesarean delivery (17 and 49%, respectively). In women with trial of labour after caesarean section without previous dystocia and a risk score of 0, corresponding risk was 32% (14 and 18%, respectively). Previous labour dystocia increases the risk of dystocia in subsequent delivery. Taking first labour and fetal and maternal characteristics into account is important in the risk assessments for dystocia and instrumental delivery in second labour. © 2012 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2012 RCOG.
Healthy eating index and breast cancer risk among Malaysian women.
Shahril, Mohd Razif; Sulaiman, Suhaina; Shaharudin, Soraya Hanie; Akmal, Sharifah Noor
2013-07-01
Healthy Eating Index-2005 (HEI-2005), an index-based dietary pattern, has been shown to predict the risk of chronic diseases among Americans. This study aims to examine the ability of HEI-2005 to predict the risk of premenopausal and postmenopausal breast cancer among Malaysian women. Data from a case-control nutritional epidemiology study among 764 participants, including 382 breast cancer cases and 382 healthy women, were extracted and scored. Multivariate odds ratios (OR) with 95% confidence intervals (CI) were used to evaluate the relationship between the risk of breast cancer and quartiles (Q) of the HEI-2005 total score and its components, whereas the risk prediction ability of HEI-2005 was investigated using diagnostic analysis. The results of this study showed a significant reduction in the risk of breast cancer with a higher HEI-2005 total score among premenopausal women (OR Q1 vs. Q4=0.34, 95% CI; 0.15-0.76) and postmenopausal women (OR Q1 vs. Q4=0.20, 95% CI; 0.06-0.63). However, HEI-2005 has a sensitivity of 56-60%, a specificity of 55-60%, and positive and negative predictive values of 57-58%, which indicates a moderate ability to predict the risk of breast cancer according to menopausal status. The observed breast cancer incidence agreed poorly with the risk predicted from HEI-2005, as shown by a low κ statistic (κ=0.15). In conclusion, although the total HEI-2005 score was associated with the risk of breast cancer among Malaysian women, the ability of HEI-2005 to predict risk is poor, as indicated by the diagnostic analysis. A local index-based dietary pattern that is disease specific is required to predict the risk of breast cancer among Malaysian women for early prevention.
Schlegel, Andrea; Kalisvaart, Marit; Scalera, Irene; Laing, Richard W; Mergental, Hynek; Mirza, Darius F; Perera, Thamara; Isaac, John; Dutkowski, Philipp; Muiesan, Paolo
2018-03-01
Primary non-function and ischaemic cholangiopathy are the most feared complications following donation-after-circulatory-death (DCD) liver transplantation. The aim of this study was to design a new risk assessment score for DCD liver transplantation based on donor and recipient parameters. Using the UK national DCD database, a risk analysis was performed in adult recipients of DCD liver grafts in the UK between 2000 and 2015 (n = 1,153). A new risk score was calculated (UK DCD Risk Score) on the basis of a regression analysis. It was validated using the United Network for Organ Sharing database (n = 1,617) and our own DCD liver-transplant database (n = 315). Finally, the new score was compared with two other available prediction systems: the DCD risk scores from the University of California, Los Angeles and King's College Hospital, London. The following seven strongest predictors of DCD graft survival were identified: functional donor warm ischaemia, cold ischaemia, recipient model for end-stage liver disease, recipient age, donor age, previous orthotopic liver transplantation, and donor body mass index. A combination of these risk factors (UK DCD risk model) provided the best stratification of graft survival in the entire UK DCD database, as well as in the United Network for Organ Sharing database and in our own DCD population. Importantly, the UK DCD Risk Score significantly predicted graft loss caused by primary non-function or ischaemic cholangiopathy in the futile group (>10 score points). The new prediction model demonstrated a better C statistic of 0.79 compared with the two other available systems (0.71 and 0.64, respectively). The UK DCD Risk Score is a reliable tool to detect high-risk and futile combinations of donor and recipient factors in DCD liver transplantation. It is simple to use and offers great potential for making better decisions on which DCD grafts should be declined or may benefit from functional assessment and further optimization by machine perfusion. In this study, we provide a new prediction model for graft loss in donation-after-circulatory-death (DCD) liver transplantation. Based on UK national data, the new UK DCD Risk Score involves the following seven clinically relevant risk factors: donor age, donor body mass index, functional donor warm ischaemia, cold storage, recipient age, recipient laboratory model for end-stage liver disease, and retransplantation. Three risk classes were defined: low risk (0-5 points), high risk (6-10 points), and futile (>10 points). This new model stratified graft survival better than other available models. Futile combinations (>10 points) achieved only very limited 1- and 5-year graft survival of 37% and less than 20%, respectively. In contrast, excellent graft survival was seen in low-risk combinations (≤5 points). The new model is easy to calculate at the time of liver acceptance. It may help to decide which risk combinations will benefit from additional graft treatment, or which DCD livers should be declined for a certain recipient. Copyright © 2017 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
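A points-and-classes score of this type reduces to summing per-factor points and mapping the total onto the published bands (low ≤5, high 6-10, futile >10). The per-factor point assignments below are placeholders for illustration only and are not the published UK DCD weights:

```python
def uk_dcd_points(donor_warm_ischaemia_min, cold_ischaemia_h, recipient_meld,
                  recipient_age, donor_age, donor_bmi, retransplant):
    """Illustrative point assignments only; the published weights differ."""
    pts = 0
    pts += 2 if donor_warm_ischaemia_min > 30 else 0
    pts += 2 if cold_ischaemia_h > 6 else 0
    pts += 2 if recipient_meld > 25 else 0
    pts += 3 if recipient_age > 60 else 0
    pts += 2 if donor_age > 60 else 0
    pts += 2 if donor_bmi > 25 else 0
    pts += 9 if retransplant else 0
    return pts

def risk_class(points):
    # Bands taken from the abstract: low <=5, high 6-10, futile >10.
    if points <= 5:
        return "low risk"
    if points <= 10:
        return "high risk"
    return "futile"

p = uk_dcd_points(35, 7, 28, 62, 55, 24, False)
print(p, risk_class(p))
```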
Ay, Hakan; Arsava, E Murat; Johnston, S Claiborne; Vangel, Mark; Schwamm, Lee H; Furie, Karen L; Koroshetz, Walter J; Sorensen, A Gregory
2009-01-01
Predictive instruments based on clinical features for early stroke risk after transient ischemic attack suffer from limited specificity. We sought to combine imaging and clinical features to improve predictions of 7-day stroke risk after transient ischemic attack. We studied 601 consecutive patients with transient ischemic attack who had MRI within 24 hours of symptom onset. A logistic regression model was developed using stroke within 7 days as the response criterion and diffusion-weighted imaging findings and dichotomized ABCD2 score (ABCD2 ≥4) as covariates. Subsequent stroke occurred in 25 patients (5.2%). Dichotomized ABCD2 score and acute infarct on diffusion-weighted imaging were each independent predictors of stroke risk. The 7-day risk was 0.0% with no predictor, 2.0% with ABCD2 score ≥4 alone, 4.9% with acute infarct on diffusion-weighted imaging alone, and 14.9% with both predictors (an automated calculator is available at http://cip.martinos.org). Adding imaging increased the area under the receiver operating characteristic curve from 0.66 (95% CI, 0.57 to 0.76) using the ABCD2 score to 0.81 (95% CI, 0.74 to 0.88; P=0.003). A sensitivity of 80% on the receiver operating characteristic curve corresponded to a specificity of 73% for the CIP model and 47% for the ABCD2 score. Combining acute imaging findings with clinical transient ischemic attack features substantially improves the accuracy of early stroke risk predictions compared with clinical features alone. If validated in relevant clinical settings, risk stratification by the CIP model may assist in early implementation of therapeutic measures and effective use of hospital resources.
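With two dichotomous predictors, a fitted logistic model yields exactly four predicted risks, one per predictor combination, which is how a risk table like the 0.0%/2.0%/4.9%/14.9% breakdown above arises. A sketch with simulated data (the coefficients are invented, not those of the CIP model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 601
abcd2_high = rng.binomial(1, 0.5, n)          # ABCD2 score >= 4
dwi_infarct = rng.binomial(1, 0.4, n)         # acute infarct on DWI
logit = -4.5 + 1.3 * abcd2_high + 1.6 * dwi_infarct
stroke_7d = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([abcd2_high, dwi_infarct])
model = LogisticRegression().fit(X, stroke_7d)

# Predicted 7-day risk for each of the four predictor combinations.
combos = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
for combo, risk in zip(combos, model.predict_proba(combos)[:, 1]):
    print(f"ABCD2>=4: {combo[0]}, DWI infarct: {combo[1]} -> risk {risk:.1%}")
```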
Hu, Chenggong; Zhou, Yongfang; Liu, Chang; Kang, Yan
2018-01-01
Gastric cancer (GC) is the fifth most common cancer and the third leading cause of cancer-associated mortality worldwide. In the current study, comprehensive bioinformatic analyses were performed to develop a novel scoring system for GC risk assessment based on the DNA methylation status of CAP-Gly domain containing linker protein family member 4 (CLIP4). Two GC datasets with methylation sequencing information and mRNA expression profiling were downloaded from The Cancer Genome Atlas and Gene Expression Omnibus databases. Differentially expressed genes (DEGs) between the CLIP4 hypermethylation and CLIP4 hypomethylation groups were screened using the limma package in R 3.3.1, and survival analysis of these DEGs was performed using the survival package. A risk scoring system was established as a linear combination of gene expression values weighted by their regression coefficients, in order to identify the genes most strongly associated with CLIP4 methylation and prognosis. Genes associated with high/low risk values were selected using the limma package. Functional enrichment analysis of the top 500 DEGs that were positively and negatively associated with risk values was performed using DAVID 6.8 online and the gene set enrichment analysis (GSEA) software. In total, 35 genes were identified as significantly associated with prognosis and CLIP4 DNA methylation, and three prognostic signature genes, claudin-11 (CLDN11), apolipoprotein D (APOD), and chordin like 1 (CHRDL1), were used to establish a risk assessment system. The prognostic scoring system was efficient in classifying patients with different prognoses: the low-risk groups had significantly longer overall survival times than the high-risk groups. CLDN11, APOD and CHRDL1 exhibited reduced expression in the hypermethylation and low-risk groups compared with the hypomethylation and high-risk groups, respectively. Multivariate Cox analysis indicated that the risk value could be used as an independent prognostic factor. In functional analysis, six gene ontology terms and five GSEA pathways were associated with CLDN11, APOD and CHRDL1. These results establish the credibility of the scoring system developed in this study. Additionally, these three genes, which were significantly associated with CLIP4 DNA methylation and GC risk assessment, were identified as potential prognostic biomarkers. PMID:29901187
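The risk value described here is a linear combination of expression levels weighted by regression coefficients, typically followed by a split into high- and low-risk groups. A sketch using the three signature gene names with invented coefficients and simulated expression values:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
genes = ["CLDN11", "APOD", "CHRDL1"]
# Hypothetical regression coefficients for the three signature genes.
coef = pd.Series({"CLDN11": -0.42, "APOD": -0.31, "CHRDL1": -0.28})

# Simulated, normalized expression matrix (patients x genes).
expr = pd.DataFrame(rng.normal(0, 1, (200, 3)), columns=genes)

# Risk value: linear combination of expression weighted by the coefficients.
risk_value = expr.mul(coef, axis=1).sum(axis=1)

# Median split into high- and low-risk groups, as is common for such signatures.
group = np.where(risk_value > risk_value.median(), "high risk", "low risk")
print(pd.Series(group).value_counts())
```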
Sutradhar, Rinku; Atzema, Clare; Seow, Hsien; Earle, Craig; Porter, Joan; Barbera, Lisa
2014-12-01
Although prior studies show the importance of self-reported symptom scores as predictors of cancer survival, most are based on scores recorded at a single point in time. To show that information on repeated assessments of symptom severity improves predictions for risk of death and to use updated symptom information for determining whether worsening of symptom scores is associated with a higher hazard of death. This was a province-based longitudinal study of adult outpatients who had a cancer diagnosis and had assessments of symptom severity. We implemented a time-to-death Cox model with a time-varying covariate for each symptom to account for changing symptom scores over time. This model was compared with that using only a time-fixed (baseline) covariate for each symptom. The regression coefficients of each model were derived based on a randomly selected 60% of patients, and then, the predictive performance of each model was assessed via concordance probabilities when applied to the remaining 40% of patients. This study had 66,112 patients diagnosed with cancer and more than 310,000 assessments of symptoms. The use of repeated assessments of symptom scores improved predictions for risk of death compared with using only baseline symptom scores. Increased pain and fatigue and reduced appetite were the strongest predictors for death. If available, researchers should consider including changing information on symptom scores, as opposed to only baseline information on symptom scores, when examining hazard of death among patients with cancer. Worsening of pain, fatigue, and appetite may be a flag for impending death. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
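In a counting-process (start, stop] layout, each patient contributes one row per interval between symptom assessments, and the symptom score is allowed to change across rows. The sketch below builds such a long-format table from simulated records and, assuming the lifelines package is available, fits a Cox model with a time-varying pain score:

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(8)
rows = []
for pid in range(300):
    t, score = 0.0, int(rng.integers(0, 5))
    died = False
    while t < 24 and not died:
        stop = t + rng.uniform(1, 3)                 # time of next symptom assessment
        died = rng.random() < 0.01 * (1 + score)     # hazard rises with the symptom score
        rows.append({"id": pid, "start": t, "stop": stop,
                     "pain_score": score, "death": int(died)})
        t, score = stop, int(min(10, max(0, score + rng.integers(-1, 2))))

long_df = pd.DataFrame(rows)
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="death", start_col="start", stop_col="stop")
ctv.print_summary()
```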
Predictive power of the GRACE score in a population with diabetes.
Baeza-Román, Anna; de Miguel-Balsa, Eva; Latour-Pérez, Jaime; Carrillo-López, Andrés
2017-12-01
Current clinical practice guidelines recommend risk stratification in patients with acute coronary syndrome (ACS) upon admission to hospital. Diabetes mellitus (DM) is widely recognized as an independent predictor of mortality in these patients, although it is not included in the GRACE risk score. The objective of this study is to validate the GRACE risk score in a contemporary population and particularly in the subgroup of patients with diabetes, and to test the effects of including the DM variable in the model. Retrospective cohort study in patients included in the ARIAM-SEMICYUC registry, with a diagnosis of ACS and with available in-hospital mortality data. We tested the predictive power of the GRACE score, calculating the area under the ROC curve. We assessed the calibration of the score and the predictive ability based on type of ACS and the presence of DM. Finally, we evaluated the effect of including the DM variable in the model by calculating the net reclassification improvement. The GRACE score shows good predictive power for hospital mortality in the study population, with a moderate degree of calibration and no significant differences based on ACS type or the presence of DM. Including DM as a variable did not add any predictive value to the GRACE model. The GRACE score has an appropriate predictive power, with good calibration and clinical applicability in the subgroup of diabetic patients. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Genetic Predisposition to Ischemic Stroke
Kamatani, Yoichiro; Takahashi, Atsushi; Hata, Jun; Furukawa, Ryohei; Shiwa, Yuh; Yamaji, Taiki; Hara, Megumi; Tanno, Kozo; Ohmomo, Hideki; Ono, Kanako; Takashima, Naoyuki; Matsuda, Koichi; Wakai, Kenji; Sawada, Norie; Iwasaki, Motoki; Yamagishi, Kazumasa; Ago, Tetsuro; Ninomiya, Toshiharu; Fukushima, Akimune; Hozawa, Atsushi; Minegishi, Naoko; Satoh, Mamoru; Endo, Ryujin; Sasaki, Makoto; Sakata, Kiyomi; Kobayashi, Seiichiro; Ogasawara, Kuniaki; Nakamura, Motoyuki; Hitomi, Jiro; Kita, Yoshikuni; Tanaka, Keitaro; Iso, Hiroyasu; Kitazono, Takanari; Kubo, Michiaki; Tanaka, Hideo; Tsugane, Shoichiro; Kiyohara, Yutaka; Yamamoto, Masayuki; Sobue, Kenji; Shimizu, Atsushi
2017-01-01
Background and Purpose— The prediction of genetic predispositions to ischemic stroke (IS) may allow the identification of individuals at elevated risk and thereby prevent IS in clinical practice. Previously developed weighted multilocus genetic risk scores showed limited predictive ability for IS. Here, we investigated the predictive ability of a newer method, polygenic risk score (polyGRS), based on the idea that a few strong signals, as well as several weaker signals, can be collectively informative to determine IS risk. Methods— We genotyped 13 214 Japanese individuals with IS and 26 470 controls (derivation samples) and generated both multilocus genetic risk scores and polyGRS, using the same derivation data set. The predictive abilities of each scoring system were then assessed using 2 independent sets of Japanese samples (KyushuU and JPJM data sets). Results— In both validation data sets, polyGRS was shown to be significantly associated with IS, but weighted multilocus genetic risk scores was not. Comparing the highest with the lowest polyGRS quintile, the odds ratios for IS were 1.75 (95% confidence interval, 1.33–2.31) and 1.99 (95% confidence interval, 1.19–3.33) in the KyushuU and JPJM samples, respectively. Using the KyushuU samples, the addition of polyGRS to a nongenetic risk model resulted in a significant improvement of the predictive ability (net reclassification improvement=0.151; P<0.001). Conclusions— The polyGRS was shown to be superior to weighted multilocus genetic risk scores as an IS prediction model. Thus, together with the nongenetic risk factors, polyGRS will provide valuable information for individual risk assessment and management of modifiable risk factors. PMID:28034966
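The categorical net reclassification improvement used above to compare a baseline model with a polyGRS-augmented model can be computed from the up- and down-classification of events and non-events, as sketched here with simulated risks and hypothetical risk categories:

```python
import numpy as np

def nri(risk_old, risk_new, event, cuts=(0.05, 0.20)):
    """Categorical net reclassification improvement over the given risk cut-points."""
    cat_old = np.digitize(risk_old, cuts)
    cat_new = np.digitize(risk_new, cuts)
    up, down = cat_new > cat_old, cat_new < cat_old
    events, non_events = event == 1, event == 0
    nri_events = ((up & events).sum() - (down & events).sum()) / events.sum()
    nri_non_events = ((down & non_events).sum() - (up & non_events).sum()) / non_events.sum()
    return nri_events + nri_non_events

rng = np.random.default_rng(9)
true_risk = rng.uniform(0, 0.4, 5000)
event = rng.binomial(1, true_risk)
risk_old = np.clip(true_risk + rng.normal(0, 0.10, 5000), 0, 1)   # non-genetic model
risk_new = np.clip(true_risk + rng.normal(0, 0.05, 5000), 0, 1)   # model with polygenic score
print(f"NRI = {nri(risk_old, risk_new, event):.3f}")
```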
Prognostic Value of Risk Score and Urinary Markers in Idiopathic Membranous Nephropathy
Hofstra, Julia M.; Wetzels, Jack F.M.
2012-01-01
Summary Background and objectives Accurate prediction of prognosis may improve management of patients with idiopathic membranous nephropathy. This study compared the Toronto Risk Score and urinary low-molecular weight proteins. Design, setting, participants, & measurements One hundred four patients with biopsy-proven idiopathic membranous nephropathy who presented between 1995 and 2008 with a well-preserved kidney function and nephrotic range proteinuria were included. Urinary β2-microglobulin and α1-microglobulin measurements were obtained by timed standardized measurements, and the Toronto Risk Score was calculated using data obtained from medical records. The endpoint was progression, which was defined as an increase in serum creatinine>50% or >25% with a concentration>135 μmol/L. Results Forty-nine patients showed progression. The area under the receiver-operating characteristics curve was 0.78 (95% confidence interval=0.69–0.88) for the risk score versus 0.80 (0.71–0.89) and 0.79 (0.71–0.88) for urinary β2- and α1-microglobulin, respectively. Differences were not significant. Persistent proteinuria did not add accuracy to the Toronto Risk Score. Conversely, its accuracy was not reduced when data from the first 6 months of follow-up were used. Furthermore, a score based on GFR estimated with the six-variable Modification of Diet in Renal Disease equation, calculated in the first 6 months of follow-up, gave an area under the receiver-operating characteristics curve of 0.83 (0.74–0.92), which was not statistically different from other markers. Conclusions The prognostic accuracies of the Toronto Risk Score and urinary low-molecular weight proteins were not significantly different. The risk score can be calculated within 6 months of diagnosis, and a simplified risk score using estimated GFR–Modification of Diet in Renal Disease may be sufficient. PMID:22595828
Scope Complexity Options Risks Excursions (SCORE) Factor Mathematical Description.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Samberson, Jonell Nicole; Shettigar, Subhasini
The purpose of the Scope, Complexity, Options, Risks, Excursions (SCORE) model is to estimate the relative complexity of design variants of future warhead options, resulting in scores. SCORE factors extend this capability by providing estimates of complexity relative to a base system (i.e., all design options are normalized to one weapon system). First, a clearly defined set of scope elements for a warhead option is established. The complexity of each scope element is estimated by Subject Matter Experts (SMEs), including a level of uncertainty, relative to a specific reference system. When determining factors, complexity estimates for a scope element can be directly tied to the base system or chained together via comparable scope elements in a string of reference systems that ends with the base system. The SCORE analysis process is a growing multi-organizational Nuclear Security Enterprise (NSE) effort, under the management of the NA-12 led Enterprise Modeling and Analysis Consortium (EMAC). Historically, it has provided the data elicitation, integration, and computation needed to support the out-year Life Extension Program (LEP) cost estimates included in the Stockpile Stewardship Management Plan (SSMP).
76 FR 41602 - Fair Credit Reporting Risk-Based Pricing Regulations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-15
... on and disclose the key factors provided with the scores purchased from consumer reporting agencies... making the credit decision, the creditor must disclose that score and certain information relating to the... following disclosures: (1) the credit score used by the person in making the credit decision; (2) the...
Rodríguez-Mañero, Moisés; Abu Assi, Emad; Sánchez-Gómez, Juan Miguel; Fernández-Armenta, Juan; Díaz-Infante, Ernesto; García-Bolao, Ignacio; Benezet-Mazuecos, Juan; Andrés Lahuerta, Ana; Expósito-García, Víctor; Bertomeu-González, Vicente; Arce-León, Álvaro; Barrio-López, María Teresa; Peinado, Rafael; Martínez-Sande, Luis; Arias, Miguel A
2016-11-01
Several clinical risk scores have been developed to identify patients at high risk of all-cause mortality despite implantation of an implantable cardioverter-defibrillator. We aimed to examine and compare the predictive capacity of 4 simple scoring systems (MADIT-II, FADES, PACE and SHOCKED) for predicting mortality after defibrillator implantation for primary prevention of sudden cardiac death in a Mediterranean country. A multicenter retrospective study was performed in 15 Spanish hospitals. Consecutive patients referred for defibrillator implantation between January 2010 and December 2011 were included. A total of 916 patients with ischemic and nonischemic heart disease were included (mean age, 62 ± 11 years, 81.4% male). Over 33.4 ± 12.9 months, 113 (12.3%) patients died (cardiovascular origin in 86 [9.4%] patients). At 12, 24, 36, and 48 months, mortality rates were 4.5%, 7.6%, 10.8%, and 12.3% respectively. All the risk scores showed a stepwise increase in the risk of death throughout the scoring system of each of the scores and all 4 scores identified patients at greater risk of mortality. The scores were significantly associated with all-cause mortality throughout the follow-up period. PACE displayed the lowest c-index value regardless of whether the population had heart disease of ischemic (c-statistic = 0.61) or nonischemic origin (c-statistic = 0.61), whereas MADIT-II (c-statistic = 0.67 and 0.65 in ischemic and nonischemic cardiomyopathy, respectively), SHOCKED (c-statistic = 0.68 and 0.66, respectively), and FADES (c-statistic = 0.66 and 0.60) provided similar c-statistic values (P ≥ .09). In this nontrial-based cohort of Mediterranean patients, the 4 evaluated risk scores showed a significant stepwise increase in the risk of death. Among the currently available risk scores, MADIT-II, FADES, and SHOCKED provide slightly better performance than PACE. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
External validation of a prehospital risk score for critical illness.
Kievlan, Daniel R; Martin-Gill, Christian; Kahn, Jeremy M; Callaway, Clifton W; Yealy, Donald M; Angus, Derek C; Seymour, Christopher W
2016-08-11
Identification of critically ill patients during prehospital care could facilitate early treatment and aid in the regionalization of critical care. Tools to consistently identify those in the field with or at higher risk of developing critical illness do not exist. We sought to validate a prehospital critical illness risk score that uses objective clinical variables in a contemporary cohort of geographically and temporally distinct prehospital encounters. We linked prehospital encounters at 21 emergency medical services (EMS) agencies to inpatient electronic health records at nine hospitals in southwestern Pennsylvania from 2010 to 2012. The primary outcome was critical illness during hospitalization, defined as an intensive care unit stay with delivery of organ support (mechanical ventilation or vasopressor use). We calculated the prehospital risk score using demographics and first vital signs from eligible EMS encounters, and we tested the association between score variables and critical illness using multivariable logistic regression. Discrimination was assessed using the AUROC curve, and calibration was determined by plotting observed versus expected events across score values. Operating characteristics were calculated at score thresholds. Among 42,550 nontrauma, non-cardiac arrest adult EMS patients, 1926 (4.5 %) developed critical illness during hospitalization. We observed moderate discrimination of the prehospital critical illness risk score (AUROC 0.73, 95 % CI 0.72-0.74) and adequate calibration based on observed versus expected plots. At a score threshold of 2, sensitivity was 0.63 (95 % CI 0.61-0.75), specificity was 0.73 (95 % CI 0.72-0.73), negative predictive value was 0.98 (95 % CI 0.98-0.98), and positive predictive value was 0.10 (95 % CI 0.09-0.10). The risk score performance was greater with alternative definitions of critical illness, including in-hospital mortality (AUROC 0.77, 95 % CI 0.7 -0.78). In an external validation cohort, a prehospital risk score using objective clinical data had moderate discrimination for critical illness during hospitalization.
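The operating characteristics reported above follow directly from a 2x2 table at the chosen cutoff. The sketch below (Python; the data and variable names are hypothetical) shows how sensitivity, specificity, PPV and NPV would be computed at a score threshold of 2.

    # Minimal sketch: operating characteristics of a risk score at a cutoff.
    def operating_characteristics(scores, outcomes, threshold=2):
        tp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y)
        fp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and not y)
        fn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y)
        tn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and not y)
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }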
Maenner, Matthew J; Greenberg, Jan S; Mailick, Marsha R
2015-05-01
Lower (versus higher) IQ scores have been shown to increase the risk of early mortality; however, the underlying mechanisms are poorly understood, and previous studies underrepresent individuals with intellectual disability (ID) and women. This study followed one third of all senior-year students (aged approximately 17) attending public high school in Wisconsin, U.S. in 1957 (n = 10,317) until 2011. Men and women with the lowest IQ test scores (i.e., IQ scores ≤ 85) had increased rates of mortality compared to people with the highest IQ test scores, particularly for cardiovascular disease. Importantly, when educational attainment was held constant, people with lower IQ test scores did not have higher mortality by age 70 than people with higher IQ test scores. Individuals with lower IQ test scores likely experience multiple disadvantages throughout life that contribute to increased risk of early mortality.
Robinson, C L; Jouni, H; Kruisselbrink, T M; Austin, E E; Christensen, K D; Green, R C; Kullo, I J
2016-02-01
We investigated whether disclosure of coronary heart disease (CHD) genetic risk influences perceived personal control (PPC) and genetic counseling satisfaction (GCS). Participants (n = 207, age: 45-65 years) were randomized to receive estimated 10-year risk of CHD based on a conventional risk score (CRS) with or without a genetic risk score (GRS). Risk estimates were disclosed by a genetic counselor who also reviewed how GRS altered risk in those randomized to CRS+GRS. Each participant subsequently met with a physician and then completed surveys to assess PPC and GCS. Participants who received CRS+GRS had higher PPC than those who received CRS alone although the absolute difference was small (25.2 ± 2.7 vs 24.1 ± 3.8, p = 0.04). A greater proportion of CRS+GRS participants had higher GCS scores (17.3 ± 5.3 vs 15.9 ± 6.3, p = 0.06). In the CRS+GRS group, PPC and GCS scores were not correlated with GRS. Within both groups, PPC and GCS scores were similar in patients with or without family history (p = NS). In conclusion, patients who received their genetic risk of CHD had higher PPC and tended to have higher GCS. Our findings suggest that disclosure of genetic risk of CHD together with conventional risk estimates is appreciated by patients. Whether this results in improved outcomes needs additional investigation. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Risk of Stroke in Patients With Short-Run Atrial Tachyarrhythmia.
Yamada, Shinya; Lin, Chin-Yu; Chang, Shih-Lin; Chao, Tze-Fan; Lin, Yenn-Jiang; Lo, Li-Wei; Chung, Fa-Po; Hu, Yu-Feng; Tuan, Ta-Chuan; Liao, Jo-Nan; Te, Abigail Louise D; Chang, Yao-Ting; Chang, Ting-Yung; Wu, Cheng-I; Higa, Satoshi; Chen, Shih-Ann
2017-12-01
The risk of stroke in patients with short-run atrial tachyarrhythmia (AT) remains unclear. This study aimed to investigate the relationship between short-run AT and stroke, and the use of the CHA2DS2-VASc score for risk stratification. From the registry of 24-hour Holter monitoring, 5342 subjects without known atrial fibrillation or stroke were enrolled. Short-run AT was defined as episodes of supraventricular ectopic beats <5 seconds. There were 1595 subjects (29.8%) with short-run AT. During the median follow-up period of 9.0 years, 494 subjects developed new-onset stroke. Patients with short-run AT had significantly higher stroke rates compared with patients without short-run AT (11.4% versus 8.3%; P <0.001). In patients with short-run AT, the number of strokes per 100 person-years for patients with CHA2DS2-VASc scores of 0 and 1 was 0.23 and 0.67, respectively, whereas for patients with CHA2DS2-VASc scores of 2, 3, 4, and ≥5 it was 1.62, 1.89, 1.30, and 2.91, respectively. In patients with a CHA2DS2-VASc score of 0 or 1, age (>61 years) and burden of premature atrial contractions (>25 beats/d) independently predicted the risk of stroke. In subgroup analyses, short-run AT patients were divided into 3 groups based on their CHA2DS2-VASc scores: low score (score of 0 [men] or 1 [women]; n=324), intermediate score (score of 1 [men] or 2 [women]; n=275), and high score (score of ≥2 [men] or ≥3 [women]; n=996). When compared with the low-score group, intermediate and high scores were independent predictors of stroke (hazard ratio, 6.165; P <0.001 and hazard ratio, 8.577; P <0.001, respectively). Short-run AT increases the risk of stroke; the CHA2DS2-VASc score could therefore be used for risk stratification. Age and burden of premature atrial contractions were independent predictors of stroke in patients with a CHA2DS2-VASc score of 0 or 1. © 2017 American Heart Association, Inc.
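For readers unfamiliar with the score, the sketch below (Python) applies the conventional CHA2DS2-VASc point assignment together with the sex-specific low/intermediate/high grouping used in this study; the point values are the standard ones, and the grouping follows the definitions quoted above.

    # Standard CHA2DS2-VASc points plus the study's sex-specific grouping.
    def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                     stroke_tia, vascular_disease):
        score = 2 if age >= 75 else (1 if age >= 65 else 0)
        score += 1 if female else 0
        score += 1 if chf else 0
        score += 1 if hypertension else 0
        score += 1 if diabetes else 0
        score += 2 if stroke_tia else 0
        score += 1 if vascular_disease else 0
        return score

    def risk_group(score, female):
        # low: 0 (men) / 1 (women); intermediate: 1 / 2; high: >=2 / >=3
        offset = 1 if female else 0
        if score <= 0 + offset:
            return "low"
        if score == 1 + offset:
            return "intermediate"
        return "high"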
Lichtenberg, Peter A; Gross, Evan; Ficker, Lisa J
2018-06-08
This work examines the clinical utility of the scoring system for the Lichtenberg Financial Decision-making Rating Scale (LFDRS) and its usefulness for assessing decision-making capacity and financial exploitation. Objective 1 was to examine the clinical utility of a person-centered, empirically supported financial decision-making scale. Objective 2 was to determine whether the risk-scoring system created for this rating scale is sufficiently accurate for the use of cutoff scores in cases of decisional capacity and cases of suspected financial exploitation. Objective 3 was to examine whether cognitive decline and decisional impairment predicted suspected financial exploitation. Two hundred independently living, non-demented community-dwelling older adults comprised the sample. Participants completed the rating scale and other cognitive measures. Receiver operating characteristic curves were in the good to excellent range for decisional capacity scoring, and in the fair to good range for financial exploitation. Analyses supported the conceptual link between decision making deficits and risk for exploitation, and supported the use of the risk-scoring system in a community-based population. This study adds to the empirical evidence supporting the use of the rating scale as a clinical tool assessing risk for financial decisional impairment and/or financial exploitation.
Admission glucose does not improve GRACE score at 6 months and 5 years after myocardial infarction.
de Mulder, Maarten; van der Ploeg, Tjeerd; de Waard, Guus A; Boersma, Eric; Umans, Victor A
2011-01-01
Admission plasma glucose (APG) is a biomarker that predicts mortality in myocardial infarction (MI) patients. Therefore, APG may improve risk stratification based on the GRACE risk score. We collected data on baseline characteristics and long-term (median 55 months) outcome of 550 MI patients who entered our hospital in 2003 and 2006. We determined the GRACE risk score at admission for each patient, which was entered in a logistic regression model, together with APG, to evaluate their prognostic value for 6-month and 5-year mortality. Patients with APG ≥7.8 mmol/l had a higher mortality than those with APG levels <7.8 mmol/l; 6 months: 13.7 versus 3.6%, p value <0.001; 5 years: 20.4 versus 11.1%, p value 0.003. After adjustment for the GRACE risk score variables, APG appeared a significant predictor of 6-month and 5-year mortality, adjusted OR 1.17 (1.06-1.29) and 1.12 (1.03-1.22). The combination of the GRACE risk score and APG increased the model's performance (discrimination C-index 0.87 vs. 0.85), although the difference was not significant (p = 0.095). Combining the GRACE risk score and APG reclassified 12.9% of the patients, but the net reclassification improvement was nonsignificant (p = 0.146). APG is a predictor of 6-month and 5-year mortality, each mmol/l increase in APG being associated with a mortality increase of 17 and 12%, respectively, independent of the GRACE risk score. However, adding APG to the GRACE model did not result in significantly improved clinical risk stratification. Copyright © 2012 S. Karger AG, Basel.
Lip, Gregory Y H; Hansen, Morten Lock; Hansen, Peter Riis; Tolstrup, Janne Schurmann; Lindhardsen, Jesper; Selmer, Christian; Ahlehoff, Ole; Olsen, Anne-Marie Schjerning; Gislason, Gunnar Hilmar; Torp-Pedersen, Christian
2011-01-01
Objectives To evaluate the individual risk factors composing the CHADS2 (Congestive heart failure, Hypertension, Age≥75 years, Diabetes, previous Stroke) score and the CHA2DS2-VASc (CHA2DS2-Vascular disease, Age 65-74 years, Sex category) score and to calculate the capability of the schemes to predict thromboembolism. Design Registry based cohort study. Setting Nationwide data on patients admitted to hospital with atrial fibrillation. Population All patients with atrial fibrillation not treated with vitamin K antagonists in Denmark in the period 1997-2006. Main outcome measures Stroke and thromboembolism. Results Of 121 280 patients with non-valvular atrial fibrillation, 73 538 (60.6%) fulfilled the study inclusion criteria. In patients at “low risk” (score=0), the rate of thromboembolism per 100 person years was 1.67 (95% confidence interval 1.47 to 1.89) with CHADS2 and 0.78 (0.58 to 1.04) with CHA2DS2-VASc at one year’s follow-up. In patients at “intermediate risk” (score=1), this rate was 4.75 (4.45 to 5.07) with CHADS2 and 2.01 (1.70 to 2.36) with CHA2DS2-VASc. The rate of thromboembolism depended on the individual risk factors composing the scores, and both schemes underestimated the risk associated with previous thromboembolic events. When patients were categorised into low, intermediate, and high risk groups, C statistics at 10 years’ follow-up were 0.812 (0.796 to 0.827) with CHADS2 and 0.888 (0.875 to 0.900) with CHA2DS2-VASc. Conclusions The risk associated with a specific risk stratification score depended on the risk factors composing the score. CHA2DS2-VASc performed better than CHADS2 in predicting patients at high risk, and those categorised as low risk by CHA2DS2-VASc were truly at low risk for thromboembolism. PMID:21282258
Sorting Out the Health Risk in California's State-Based Marketplace.
Bindman, Andrew B; Hulett, Denis; Gilmer, Todd P; Bertko, John
2016-02-01
To characterize the health risk of enrollees in California's state-based insurance marketplace (Covered California) by metal tier, region, month of enrollment, and plan. 2014 Open-enrollment data from Covered California linked with 2012 hospitalization and emergency department (ED) visit records from statewide all-payer administrative databases. Chronic Illness and Disability Payment System (CDPS) health risk scores derived from an individual's age and sex from the enrollment file and the diagnoses captured in the hospitalization and ED records. CDPS scores were standardized by setting the average to 1.00. Among the 1,286,089 enrollees, 120,573 (9.4 percent) had at least one ED visit and/or a hospitalization in 2012. Higher risk enrollees chose plans with greater actuarial value. The standardized CDPS health risk score was 11 percent higher in the first month of enrollment (1.08; 99 percent CI: 1.07-1.09) than the last month (0.97; 99 percent CI: 0.97-0.97). Four of the 12 plans enrolled 91 percent of individuals; their average health risk scores were each within 3 percent of the marketplace's statewide average. Providing health plans with a means to assess the health risk of their year 1 enrollees allowed them to anticipate whether they would receive or contribute payments to a risk-adjustment pool. After receiving these findings as a part of their negotiations with Covered California, health plans covering the majority of enrollees decreased their initially proposed 2015 rates, saving consumers tens of millions of dollars in potential premiums. © Health Research and Educational Trust.
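The standardization step described above is simply a division by the marketplace-wide mean so that the average score equals 1.00; a minimal sketch follows (Python, hypothetical values).

    # Standardize raw CDPS risk scores so the population average is 1.00.
    def standardize(raw_scores):
        mean = sum(raw_scores) / len(raw_scores)
        return [s / mean for s in raw_scores]

    print(standardize([0.8, 1.0, 1.4]))  # mean of the result is 1.0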
Kim, Bia Z; Patel, Dipika V; McKelvie, James; Sherwin, Trevor; McGhee, Charles N J
2017-09-01
To assess the effect of preoperative risk stratification for phacoemulsification surgery on intraoperative complications in a teaching hospital. Prospective cohort study. Prospective assessment of consecutive phacoemulsification cases (N = 500) enabled calculation of a risk score (M-score of 0-8) using a risk stratification system. Cases with M-scores of >3 were allocated to senior surgeons. All surgeries were performed in a public teaching hospital setting, Auckland, New Zealand, in early 2016. Postoperatively, data were reviewed for complications and corrected distance visual acuity (CDVA). Results were compared to a prospective study (N = 500, phase 1) performed prior to formal introduction of risk stratification. Intraoperative complications increased with increasing M-scores (P = .044). The median M-score for complicated cases was higher (P = .022). The odds ratio (OR) for a complication increased by 1.269 per unit increase in M-score (95% confidence interval [CI] 1.007-1.599, P = .043). The overall rate of any intraoperative complication was 5.0%. Intraoperative complication rates decreased from 8.4% to 5.0% (OR = 0.576, P = .043) comparing phase 1 and phase 2 (formal introduction of risk stratification). The severity of complications also decreased. A significant decrease in complications for M = 0 (ie, minimal risk cases) was also identified comparing the current study (3.1%) to phase 1 (7.2%), P = .034. There was no change in postoperative complication risks (OR 0.812, P = .434) or in mean postoperative CDVA (20/30, P = .484) comparing current with phase 1 outcomes. A simple preoperative risk stratification system, based on standard patient information gathered at preoperative consultation, appears to reduce intraoperative complications and support safer surgical training by appropriate allocation of higher-risk cases. Copyright © 2017 Elsevier Inc. All rights reserved.
Early Cannabis Use, Polygenic Risk Score for Schizophrenia, and Brain Maturation in Adolescence
French, Leon; Gray, Courtney; Leonard, Gabriel; Perron, Michel; Pike, G. Bruce; Richer, Louis; Séguin, Jean R.; Veillette, Suzanne; Evans, C. John; Artiges, Eric; Banaschewski, Tobias; Bokde, Arun W. L.; Bromberg, Uli; Bruehl, Ruediger; Buchel, Christian; Cattrell, Anna; Conrod, Patricia J.; Flor, Herta; Frouin, Vincent; Gallinat, Jurgen; Garavan, Hugh; Gowland, Penny; Heinz, Andreas; Lemaitre, Herve; Martinot, Jean-Luc; Nees, Frauke; Orfanos, Dimitri Papadopoulos; Pangelinan, Melissa Marie; Poustka, Luise; Rietschel, Marcella; Smolka, Michael N.; Walter, Henrik; Whelan, Robert; Timpson, Nic J.; Schumann, Gunter; Smith, George Davey; Pausova, Zdenka; Paus, Tomáš
2016-01-01
IMPORTANCE Cannabis use during adolescence is known to increase the risk for schizophrenia in men. Sex differences in the dynamics of brain maturation during adolescence may be of particular importance with regard to vulnerability of the male brain to cannabis exposure. OBJECTIVE To evaluate whether the association between cannabis use and cortical maturation in adolescents is moderated by a polygenic risk score for schizophrenia. DESIGN, SETTING, AND PARTICIPANTS Observation of 3 population-based samples included initial analysis in 1024 adolescents of both sexes from the Canadian Saguenay Youth Study (SYS) and follow-up in 426 adolescents of both sexes from the IMAGEN Study from 8 European cities and 504 male youth from the Avon Longitudinal Study of Parents and Children (ALSPAC) based in England. A total of 1577 participants (aged 12–21 years; 899 [57.0%] male) had (1) information about cannabis use; (2) imaging studies of the brain; and (3) a polygenic risk score for schizophrenia across 108 genetic loci identified by the Psychiatric Genomics Consortium. Data analysis was performed from March 1 through December 31, 2014. MAIN OUTCOMES AND MEASURES Cortical thickness derived from T1-weighted magnetic resonance images. Linear regression tests were used to assess the relationships between cannabis use, cortical thickness, and risk score. RESULTS Across the 3 samples of 1574 participants, a negative association was observed between cannabis use in early adolescence and cortical thickness in male participants with a high polygenic risk score. This observation was not the case for low-risk male participants or for the low- or high-risk female participants. Thus, in SYS male participants, cannabis use interacted with risk score vis-à-vis cortical thickness (P = .009); higher scores were associated with lower thickness only in males who used cannabis. Similarly, in the IMAGEN male participants, cannabis use interacted with increased risk score vis-à-vis a change in decreasing cortical thickness from 14.5 to 18.5 years of age (t137 = −2.36; P = .02). Finally, in the ALSPAC high-risk group of male participants, those who used cannabis most frequently (≥61 occasions) had lower cortical thickness than those who never used cannabis (difference in cortical thickness, 0.07 [95% CI, 0.01–0.12]; P = .02) and those with light use (<5 occasions) (difference in cortical thickness, 0.11 [95% CI, 0.03–0.18]; P = .004). CONCLUSIONS AND RELEVANCE Cannabis use in early adolescence moderates the association between the genetic risk for schizophrenia and cortical maturation among male individuals. This finding implicates processes underlying cortical maturation in mediating the link between cannabis use and liability to schizophrenia. PMID:26308966
Thørrisen, Mikkel Magnus; Skogen, Jens Christoffer; Aas, Randi Wågø
2018-06-14
Harmful alcohol consumption is a major risk factor for ill-health on an individual level, a global public health challenge, and associated with workplace productivity loss. This study aimed to explore the proportion of risky drinkers in a sample of employees, investigate sociodemographic associations with risky drinking, and examine implications for intervention needs, according to recommendations from the World Health Organization (WHO). In a cross-sectional design, sociodemographic data were collected from Norwegian employees in 14 companies (n = 3571) across sectors and branches. Risky drinking was measured with the Alcohol Use Disorders Identification Test (AUDIT). The threshold for risky drinking was set at an AUDIT score of ≥8. Based on WHO guidelines, risky drinkers were divided into three risk categories (moderate risk: scores 8-15, high risk: scores 16-19, and dependence likely: scores 20-40). The association between sociodemographic variables and risky drinking was explored with chi-square tests for independence and adjusted logistic regression. The risk groups were then examined according to the WHO intervention recommendations. 11.0% of the total sample reported risky drinking. Risky drinking was associated with male gender (OR = 2.97, p < .001), younger age (OR = 1.03, p < .001), low education (OR = 1.17, p < .05), being unmarried (OR = 1.38, p < .05) and not having children (OR = 1.62, p < .05). Risky drinking was most common among males without children (33.5%), males living alone (31.4%) and males aged ≤39 (26.5%). 94.6% of risky drinkers scored within the lowest risk category. Based on WHO guidelines, approximately one out of ten employees needs simple advice targeting risky drinking. In high-risk groups, one out of three employees needs interventions. A considerable proportion of employees (one to three out of ten), particularly young, unmarried males without children and higher education, may be characterised as risky drinkers. This group may benefit from low-cost interventions, based on recommendations from the WHO guidelines.
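A minimal sketch of the WHO-based categorization used above (Python); the zone cut-offs are taken directly from the abstract, while the intervention labels paraphrase the WHO recommendations and should be checked against the guideline text.

    # Map an AUDIT total score to the risk zones described above.
    def audit_risk_category(audit_score):
        if audit_score < 8:
            return "lower risk"
        if audit_score <= 15:
            return "moderate risk (simple advice)"
        if audit_score <= 19:
            return "high risk (brief counselling and continued monitoring)"
        return "dependence likely (referral for diagnostic evaluation)"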
Habel, Laurel A; Shak, Steven; Jacobs, Marlena K; Capra, Angela; Alexander, Claire; Pho, Mylan; Baker, Joffre; Walker, Michael; Watson, Drew; Hackett, James; Blick, Noelle T; Greenberg, Deborah; Fehrenbacher, Louis; Langholz, Bryan; Quesenberry, Charles P
2006-01-01
Introduction The Oncotype DX assay was recently reported to predict risk for distant recurrence among a clinical trial population of tamoxifen-treated patients with lymph node-negative, estrogen receptor (ER)-positive breast cancer. To confirm and extend these findings, we evaluated the performance of this 21-gene assay among node-negative patients from a community hospital setting. Methods A case-control study was conducted among 4,964 Kaiser Permanente patients diagnosed with node-negative invasive breast cancer from 1985 to 1994 and not treated with adjuvant chemotherapy. Cases (n = 220) were patients who died from breast cancer. Controls (n = 570) were breast cancer patients who were individually matched to cases with respect to age, race, adjuvant tamoxifen, medical facility and diagnosis year, and were alive at the date of death of their matched case. Using an RT-PCR assay, archived tumor tissues were analyzed for expression levels of 16 cancer-related and five reference genes, and a summary risk score (the Recurrence Score) was calculated for each patient. Conditional logistic regression methods were used to estimate the association between risk of breast cancer death and Recurrence Score. Results After adjusting for tumor size and grade, the Recurrence Score was associated with risk of breast cancer death in ER-positive, tamoxifen-treated and -untreated patients (P = 0.003 and P = 0.03, respectively). At 10 years, the risks for breast cancer death in ER-positive, tamoxifen-treated patients were 2.8% (95% confidence interval [CI] 1.7–3.9%), 10.7% (95% CI 6.3–14.9%), and 15.5% (95% CI 7.6–22.8%) for those in the low, intermediate and high risk Recurrence Score groups, respectively. They were 6.2% (95% CI 4.5–7.9%), 17.8% (95% CI 11.8–23.3%), and 19.9% (95% CI 14.2–25.2%) for ER-positive patients not treated with tamoxifen. In both the tamoxifen-treated and -untreated groups, approximately 50% of patients had low risk Recurrence Score values. Conclusion In this large, population-based study of lymph node-negative patients not treated with chemotherapy, the Recurrence Score was strongly associated with risk of breast cancer death among ER-positive, tamoxifen-treated and -untreated patients. PMID:16737553
Foster, Bethany J; Gao, Tao; Mackie, Andrew S; Zemel, Babette S; Ali, Huma; Platt, Robert W; Colan, Steven D
2013-04-01
Left ventricular (LV) mass varies in proportion to lean body mass (LBM) but is usually expressed relative to height or body surface area (BSA), each of which functions as a surrogate for LBM. The aims of this study were to characterize the adiposity-related biases associated with each of these scaling variables and to determine the impact of these biases on the diagnosis of LV hypertrophy (LVH) in a group of children at risk for LVH. In a retrospective study, LV mass was estimated using M-mode echocardiography in 222 healthy nonoverweight reference children and 112 children "at risk" for LVH (48 healthy overweight children and 64 children with hypertension). LBM was estimated for all children using validated predictive equations and was considered the criterion scaling variable. Z scores for LV mass for LBM, LV mass for height, and LV mass for BSA were calculated for each child relative to the reference group. The performance of height-based and BSA-based Z scores were compared with that of LBM-based Z scores at different levels of adiposity (estimated by the Z score for body mass index for age [BMIz]). Among healthy normotensive children, LV mass-for-height Z scores were greater than LV mass-for-LBM Z scores at higher values of BMIz and lower than LV mass-for-LBM Z scores at lower values of BMIz (R(2) = 0.52, P < .0001). LV mass-for-BSA Z scores for agreed well with LBM-based Z scores at BMIz < 0.7 but were lower than LV mass-for-LBM Z scores for at BMIz > 0.7 (R(2) = 0.31, P < .0001). Compared with 13% of at-risk children classified as having LVH on the basis of LV mass for LBM > 95th percentile, 30% and 11% had LVH when LV mass was scaled to height and BSA, respectively. Scaling LV mass to BSA in children results in less misclassification with respect to LVH than does scaling to height. Copyright © 2013 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.
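A hedged sketch of the Z-score approach described above (Python): fit the reference relationship between LV mass and the scaling variable (LBM, height or BSA) in the healthy, non-overweight children, then express each at-risk child's LV mass as a Z score against that prediction. The log-log (allometric) regression form used here is an assumption for illustration; the study's actual reference equations may differ.

    import numpy as np

    # Fit the reference relationship log(LV mass) ~ log(scaling variable).
    def fit_reference(lv_mass, scaler):
        x, y = np.log(scaler), np.log(lv_mass)
        slope, intercept = np.polyfit(x, y, 1)
        resid_sd = np.std(y - (slope * x + intercept), ddof=2)
        return slope, intercept, resid_sd

    # Z score of an individual child's LV mass relative to that reference.
    def z_score(lv_mass, scaler, slope, intercept, resid_sd):
        predicted = slope * np.log(scaler) + intercept
        return (np.log(lv_mass) - predicted) / resid_sd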
Dominguez, Ligia J.; Bes-Rastrollo, Maira; Basterra-Gortari, Francisco Javier; Gea, Alfredo; Barbagallo, Mario; Martínez-González, Miguel A.
2015-01-01
Background Strong evidence supports that dietary modifications may decrease incident type 2 diabetes mellitus (T2DM). Numerous diabetes risk models/scores have been developed, but most do not rely specifically on dietary variables or do not fully capture the overall dietary pattern. We prospectively assessed the association of a dietary-based diabetes-risk score (DDS), which integrates optimal food patterns, with the risk of developing T2DM in the SUN (“Seguimiento Universidad de Navarra”) longitudinal study. Methods We assessed 17,292 participants initially free of diabetes, followed-up for a mean of 9.2 years. A validated 136-item FFQ was administered at baseline. Taking into account previous literature, the DDS positively weighted vegetables, fruit, whole cereals, nuts, coffee, low-fat dairy, fiber, PUFA, and alcohol in moderate amounts; while it negatively weighted red meat, processed meats and sugar-sweetened beverages. Energy-adjusted quintiles of each item (with exception of moderate alcohol consumption that received either 0 or 5 points) were used to build the DDS (maximum: 60 points). Incident T2DM was confirmed through additional detailed questionnaires and review of medical records of participants. We used Cox proportional hazards models adjusted for socio-demographic and anthropometric parameters, health-related habits, and clinical variables to estimate hazard ratios (HR) of T2DM. Results We observed 143 T2DM confirmed cases during follow-up. Better baseline conformity with the DDS was associated with lower incidence of T2DM (multivariable-adjusted HR for intermediate (25–39 points) vs. low (11–24) category 0.43 [95% confidence interval (CI) 0.21, 0.89]; and for high (40–60) vs. low category 0.32 [95% CI: 0.14, 0.69]; p for linear trend: 0.019). Conclusions The DDS, a simple score exclusively based on dietary components, showed a strong inverse association with incident T2DM. This score may be applicable in clinical practice to improve dietary habits of subjects at high risk of T2DM and also as an educational tool for laypeople to help them in self-assessing their future risk for developing diabetes. PMID:26544985
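A hedged sketch of how a participant's DDS could be assembled from the description above (Python): eight positively weighted food items score 1-5 by ascending energy-adjusted quintile, three negatively weighted items are reverse-scored, and moderate alcohol contributes 0 or 5 points, giving the 60-point maximum. The item names and quintile inputs are illustrative, not the authors' exact coding.

    POSITIVE = ["vegetables", "fruit", "whole_cereals", "nuts", "coffee",
                "low_fat_dairy", "fiber", "pufa"]
    NEGATIVE = ["red_meat", "processed_meat", "sugar_sweetened_beverages"]

    def dds(quintiles, moderate_alcohol):
        """quintiles: dict item -> energy-adjusted quintile (1..5)."""
        score = sum(quintiles[i] for i in POSITIVE)       # 1-5 points each
        score += sum(6 - quintiles[i] for i in NEGATIVE)  # reverse-scored
        score += 5 if moderate_alcohol else 0             # 0 or 5 points
        return score                                       # maximum 60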
Mayer, Lukas; Ferrari, Julia; Krebs, Stefan; Boehme, Christian; Toell, Thomas; Matosevic, Benjamin; Tinchon, Alexander; Brainin, Michael; Gattringer, Thomas; Sommer, Peter; Thun, Peter; Willeit, Johann; Lang, Wilfried; Kiechl, Stefan; Knoflach, Michael
2018-03-01
The change in the definition of TIA from a time basis to a tissue basis calls into question the validity of the well-established ABCD3-I risk score for recurrent ischemic cerebrovascular events. In the prospective multi-center Austrian Stroke Unit Registry, we analyzed patients with ischemic stroke with mild neurological symptoms arriving <24 h after symptom onset, at a stage when it is not yet clear whether the event will turn out to be a TIA or a minor stroke. Patients were retrospectively categorized according to a time-based (symptom duration below/above 24 h) and a tissue-based (without/with corresponding brain lesion on CT or MRI) definition of TIA or minor stroke. Outcome parameters were early stroke during the stroke unit stay and 3-month ischemic stroke. Of the 5237 TIA and minor stroke patients with a prospectively documented ABCD3-I score, 2755 (52.6%) had a TIA by the time-based and 2183 (41.7%) by the tissue-based definition. Of the 2457 (46.9%) patients with complete 3-month follow-up, the corresponding numbers were 1195 (48.3%) for the time-based and 971 (39.5%) for the tissue-based definition of TIA. Early and 3-month ischemic stroke occurred in 1.1 and 2.5% of time-based TIA, 3.8 and 5.9% of time-based minor stroke, 1.2 and 2.3% of tissue-based TIA, and 3.1 and 5.5% of tissue-based minor stroke patients. Irrespective of the definition of TIA and minor stroke, the risk of early and 3-month ischemic stroke increased steadily with increasing ABCD3-I score points. The ABCD3-I score performs equally well in TIA patients under the tissue-based and the time-based definitions, and the same is true for minor stroke patients.
Moisseiev, Elad; Sela, Tzahi; Minkev, Liza; Varssano, David
2013-01-01
Purpose To evaluate the trends in corneal refractive procedure selection for the correction of myopia, focusing on the relative proportions of laser in situ keratomileusis (LASIK) and surface ablation procedures. Methods Only eyes that underwent LASIK or surface ablation for the correction of myopia between 2008–2011 were included in this retrospective study. Additional recorded parameters included patient age, preoperative manifest refraction, corneal thickness, and calculated residual corneal bed thickness. A risk score was given to each eye, based on these parameters, according to the Ectasia Risk Factor Score System (ERFSS), without the preoperative corneal topography. Results This study included 16,163 eyes, of which 38.4% underwent LASIK and 61.6% underwent surface ablation. The risk score correlated with procedure selection, with LASIK being preferred in eyes with a score of 0 and surface ablation in eyes with a score of 2 or higher. When controlling for age, preoperative manifest refraction, corneal thickness, and all parameters, the relative proportion of surface ablation compared with LASIK was found to have grown significantly during the study period. Conclusions Our results indicate that with time, surface ablation tended to be performed more often than LASIK for the correction of myopia in our cohort. Increased awareness of risk factors and preoperative risk assessment tools, such as the ERFSS, have shifted the current practice of refractive surgery from LASIK towards surface ablation despite the former’s advantages, especially in cases in which the risk for ectasia is more than minimal (risk score 2 and higher). PMID:23345963
Pharmacogenomic Approaches for Automated Medication Risk Assessment in People with Polypharmacy
Liu, Jiazhen; Friedman, Carol; Finkelstein, Joseph
2018-01-01
Medication regimens may be optimized based on individual drug efficacy identified by pharmacogenomic testing. However, the majority of current pharmacogenomic decision support tools assess only single drug-gene interactions, without taking into account the complex drug-drug and drug-drug-gene interactions that are prevalent in people with polypharmacy and can result in adverse drug events or insufficient drug efficacy. The main objective of this project was to develop comprehensive pharmacogenomic decision support for medication risk assessment in people with polypharmacy that simultaneously accounts for multiple drug and gene effects. To achieve this goal, the project addressed two aims: (1) development of a comprehensive knowledge repository of actionable pharmacogenes; (2) introduction of scoring approaches reflecting the potential adverse effect risk levels of complex medication regimens, accounting for pharmacogenomic polymorphisms and multiple drug metabolizing pathways. After the pharmacogenomic knowledge repository was introduced, a scoring algorithm was built and pilot-tested using a limited data set. The resulting total risk score for frequently hospitalized older adults with polypharmacy (72.04±17.84) was statistically significantly different (p<0.05) from the total risk score for older adults with polypharmacy and a low hospitalization rate (8.98±2.37). An initial prototype assessment demonstrated the feasibility of our approach and identified steps for improving the risk scoring algorithms.
McDevitt, Roland D; Haviland, Amelia M; Lore, Ryan; Laudenberger, Laura; Eisenberg, Matthew; Sood, Neeraj
2014-01-01
Objective To identify the degree of selection into consumer-directed health plans (CDHPs) versus traditional plans over time, and factors that influence choice and temper risk selection. Data Sources/Study Setting Sixteen large employers offering both CDHP and traditional plans during the 2004–2007 period, more than 200,000 families. Study Design We model CDHP choice with logistic regression; predictors include risk scores, in addition to family, choice setting, and plan characteristics. Additional models stratify by account type or single enrollee versus family. Data Collection/Extraction Methods Risk scores, family characteristics, and enrollment decisions are derived from medical claims and enrollment files. Interviews with human resources executives provide additional data. Principal Findings CDHP risk scores were 74 percent of traditional plan scores in the first year, and this difference declined over time. Employer contributions to accounts and employee premium savings fostered CDHP enrollment and reduced risk selection. Having to make an active choice of plan increased CDHP enrollment but also increased risk selection. Risk selection was greater for singles than families and did not differ between HRA and HSA-based CDHPs. Conclusions Risk selection was not severe and it was well managed. Employers have effective methods to encourage CDHP enrollment and temper selection against traditional plans. PMID:24800305
Long Term Effects of First Grade Multi-Tier Intervention
Otaiba, Stephanie Al; Kim, Young-Suk; Wanzek, Jeanne; Petscher, Yaacov; Wagner, Richard K.
2014-01-01
The purpose of this study was to compare the long-term effects of two first grade RTI models (Dynamic and Typical RTI) on the reading performance of students in second and third grade. Participants included 419 first grade students (352 in second grade and 278 in third grade after attrition). Students were classified based on first grade screeners as at-risk or not at-risk and then based on their response to intervention (no risk [NR], relatively easy to remediate [ER], and requiring sustained remediation [SR]). Students in the Dynamic RTI condition had higher reading comprehension scores at the end of third grade. At the end of second grade, ER and SR students had lower reading scores than NR students. At the end of third grade, there were no differences in reading skills between ER and NR students, but SR students had lower scores than NR students. ER students in the Dynamic RTI condition had higher reading scores at the end of second grade than those in the Typical RTI condition. Limitations and directions for future research are discussed. PMID:25346781
Flore, R; Ponziani, F R; Tinelli, G; Arena, V; Fonnesu, C; Nesci, A; Santoro, L; Tondi, P; Santoliquido, A
2015-04-01
Carotid intima-media thickness (c-IMT), arterial stiffness (AS) and vascular calcification (VC) are now considered important new markers of atherosclerosis and have been associated with increased prevalence of cardiovascular events. Accurate, reproducible and easy detection of these parameters could increase the prognostic value of the traditional cardiovascular risk factors in many subjects at low and intermediate risk. Today, c-IMT and AS can be measured by ultrasound, while cardiac computed tomography is the gold standard to quantify coronary VC, although concerns about the reproducibility of the former and the safety of the latter have been raised. Nevertheless, a safe and reliable method to quantify non-coronary (i.e., peripheral) VC is not yet available. The aims of this paper are to review the most innovative and accurate ultrasound-based modalities for c-IMT and AS detection, to describe a novel UltraSound-Based Carotid, Aortic and Lower limbs Calcification Score (USB-CALCs, simply named CALC), which allows quantification of peripheral calcifications, and finally to propose a system for cardiovascular risk reclassification derived from the global evaluation of "Quality Intima-Media Thickness", "Quality Arterial Stiffness", and the "CALC score" in addition to the Framingham score.
Stridsman, Caroline; Backman, Helena; Eklund, Britt-Marie; Rönmark, Eva; Hedman, Linnea
2017-07-01
Population-based studies investigating health-related quality of life (HRQoL) among asthmatic adolescents are rare. Further, among subjects with asthma, HRQoL may be affected by asthma control and severity. To investigate HRQoL in relation to asthma control and asthma severity among adolescents. As a part of the population-based OLIN pediatric study, 266 adolescents with current asthma (14-15 yr) were identified. N = 247 completed the DISABKIDS HRQoL asthma module, including the domains impact and worry. The Asthma Control Test (ACT) was used and a disease severity score was calculated based on symptoms and medicine use. The prevalence of current asthma was 11%. Well-controlled asthma was reported by 15% of the adolescents, and 53% had partly controlled asthma. The prevalence of uncontrolled asthma was significantly higher among girls than boys (38% vs 25%), and girls also reported lower HRQoL scores. There was a fairly strong correlation between the ACT and DISABKIDS scores. Independent risk factors for low HRQoL impact (a score <67) were female sex (OR 4.66, 95%CI 1.82-9.54) and decreased ACT scores (1.38, 1.18-1.62). Risk factors for low HRQoL worry (a score <70) were female sex (3.33, 1.41-7.86), decreased ACT scores (1.35, 1.16-1.57), severe asthma (6.23, 1.46-16.50), and having current eczema (2.68, 1.00-7.24). Only a minority of the asthmatic adolescents reported well-controlled asthma, and poor asthma control and female sex were risk factors for low HRQoL. Our results demonstrate that evaluation of asthma control is of great importance for asthma management. © 2017 Wiley Periodicals, Inc.
Arends, Pauline; Sonneveld, Milan J; Zoutendijk, Roeland; Carey, Ivana; Brown, Ashley; Fasano, Massimo; Mutimer, David; Deterding, Katja; Reijnders, Jurriën G P; Oo, Ye; Petersen, Jörg; van Bömmel, Florian; de Knegt, Robert J; Santantonio, Teresa; Berg, Thomas; Welzel, Tania M; Wedemeyer, Heiner; Buti, Maria; Pradat, Pierre; Zoulim, Fabien; Hansen, Bettina; Janssen, Harry L A
2015-08-01
Hepatocellular carcinoma (HCC) risk-scores may predict HCC in Asian entecavir (ETV)-treated patients. We aimed to study risk factors and performance of risk scores during ETV treatment in an ethnically diverse Western population. We studied all HBV monoinfected patients treated with ETV from 11 European referral centres within the VIRGIL Network. A total of 744 patients were included; 42% Caucasian, 29% Asian, 19% other, 10% unknown. At baseline, 164 patients (22%) had cirrhosis. During a median follow-up of 167 (IQR 82-212) weeks, 14 patients developed HCC of whom nine (64%) had cirrhosis at baseline. The 5-year cumulative incidence rate of HCC was 2.1% for non-cirrhotic and 10.9% for cirrhotic patients (p<0.001). HCC incidence was higher in older patients (p<0.001) and patients with lower baseline platelet counts (p=0.02). Twelve patients who developed HCC achieved virologic response (HBV DNA <80 IU/mL) before HCC. At baseline, higher CU-HCC and GAG-HCC, but not REACH-B scores were associated with development of HCC. Discriminatory performance of HCC risk scores was low, with sensitivity ranging from 18% to 73%, and c-statistics from 0.71 to 0.85. Performance was further reduced in Caucasians with c-statistics from 0.54 to 0.74. Predicted risk of HCC based on risk-scores declined during ETV therapy (all p<0.001), but predictive performances after 1 year were comparable to those at baseline. Cumulative incidence of HCC is low in patients treated with ETV, but ETV does not eliminate the risk of HCC. Discriminatory performance of HCC risk scores was limited, particularly in Caucasians, at baseline and during therapy. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Brodowska, Katarzyna; Stryjewski, Tomasz P; Papavasileiou, Evangelia; Chee, Yewlin E; Eliott, Dean
2017-05-01
The Retinal Detachment after Open Globe Injury (RD-OGI) Score is a clinical prediction model that was developed at the Massachusetts Eye and Ear Infirmary to predict the risk of retinal detachment (RD) after open globe injury (OGI). This study sought to validate the RD-OGI Score in an independent cohort of patients. Retrospective cohort study. The predictive value of the RD-OGI Score was evaluated by comparing the original RD-OGI Scores of 893 eyes with OGI that presented between 1999 and 2011 (the derivation cohort) with 184 eyes with OGI that presented from January 1, 2012, to January 31, 2014 (the validation cohort). Three risk classes (low, moderate, and high) were created and logistic regression was undertaken to evaluate the optimal predictive value of the RD-OGI Score. A Kaplan-Meier survival analysis evaluated survival experience between the risk classes. Time to RD. At 1 year after OGI, 255 eyes (29%) in the derivation cohort and 66 eyes (36%) in the validation cohort were diagnosed with an RD. At 1 year, the low risk class (RD-OGI Scores 0-2) had a 3% detachment rate in the derivation cohort and a 0% detachment rate in the validation cohort, the moderate risk class (RD-OGI Scores 2.5-4.5) had a 29% detachment rate in the derivation cohort and a 35% detachment rate in the validation cohort, and the high risk class (RD-OGI scores 5-7.5) had a 73% detachment rate in the derivation cohort and an 86% detachment rate in the validation cohort. Regression modeling revealed the RD-OGI to be highly discriminative, especially 30 days after injury, with an area under the receiver operating characteristic curve of 0.939 in the validation cohort. Survival experience was significantly different depending upon the risk class (P < 0.0001, log-rank chi-square). The RD-OGI Score can reliably predict the future risk of developing an RD based on clinical variables that are present at the time of the initial evaluation after OGI. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Rakovitch, Eileen; Nofech-Mozes, Sharon; Hanna, Wedad; Baehner, Frederick L; Saskin, Refik; Butler, Steven M; Tuck, Alan; Sengupta, Sandip; Elavathil, Leela; Jani, Prashant A; Bonin, Michel; Chang, Martin C; Robertson, Susan J; Slodkowska, Elzbieta; Fong, Cindy; Anderson, Joseph M; Jamshidian, Farid; Miller, Dave P; Cherbavaz, Diana B; Shak, Steven; Paszat, Lawrence
2015-07-01
Validated biomarkers are needed to improve risk assessment and treatment decision-making for women with ductal carcinoma in situ (DCIS) of the breast. The Oncotype DX DCIS Score (DS) was shown to predict the risk of local recurrence (LR) in individuals with low-risk DCIS treated by breast-conserving surgery (BCS) alone. Our objective was to confirm these results in a larger population-based cohort of individuals. We used an established population-based cohort of individuals diagnosed with DCIS treated with BCS alone from 1994 to 2003 with validation of treatment and outcomes. Central pathology assessment excluded cases with invasive cancer, DCIS < 2 mm or positive margins. A Cox model was used to determine the relationship between independent covariates, the DS (hazard ratio (HR)/50 Cp units (U)) and LR. Tumor blocks were collected for 828 patients. The final evaluable population included 718 cases, of whom 571 had negative margins. Median follow-up was 9.6 years. 100 cases developed LR following BCS alone (DCIS, N = 44; invasive, N = 57). In the primary pre-specified analysis, the DS was associated with any LR (DCIS or invasive) in ER+ patients (HR 2.26; P < 0.001) and in all patients regardless of ER status (HR 2.15; P < 0.001). The DCIS Score provided independent information on LR risk beyond clinical and pathologic variables including size, age, grade, necrosis, multifocality, and subtype (adjusted HR 1.68; P = 0.02). The DS was associated with invasive LR (HR 1.78; P = 0.04) and DCIS LR (HR 2.43; P = 0.005). The DCIS Score independently predicts and quantifies individualized recurrence risk in a population of patients with pure DCIS treated by BCS alone.
Can complications in febrile neutropenia be predicted? Report from a developing country.
Oberoi, Sapna; Das, Anirban; Trehan, Amita; Ray, Pallab; Bansal, Deepak
2017-11-01
Febrile neutropenia (FN) is an important cause of morbidity and mortality in children with acute lymphoblastic leukemia (ALL). We aimed to look at complications in febrile neutropenia and to derive a risk model for complications from the predictive variables. Children on treatment for ALL, presenting with FN, were prospectively enrolled over a period of 1 year. Their clinical presentation, course during hospital stay, and outcomes were recorded. Complications recorded included septic shock, pneumonia requiring invasive or non-invasive ventilation, renal failure, neutropenic enterocolitis, encephalopathy, congestive heart failure, and bleeding manifestations. There were 320 episodes of FN among 176 patients. Complications occurred during 73 (22.8%) episodes. Time since last chemotherapy ≤7 days [OR 2.2 (1-4.5)], clinical focus of infection [OR 2.7 (1.3-5.5)], undernutrition [OR 2.5 (1.1-5.5)], absolute neutrophil count (ANC) ≤ 100/μL [OR 2.8 (1.3-5.9)], and C-reactive protein (CRP) > 60 mg/L at admission [OR 13.3 (5.2-33.8)] were independent predictors of complications. A risk model (total score = 13) was developed based on these predictors. Children with a score of ≥7 had 17.2 (7.7-38.6) times the odds of developing complications compared to those with a score of <7. A score of <7 identified children at lower risk of complications [sensitivity 88% (78.2-93.8%), specificity 72.5% (65.7-78.4%), PPV 53.6% (44.3-62.6%), NPV 94.4% (89.3-97.1%)]. The complication rate during febrile neutropenia is high in a developing-country setting. A risk score model based on identified risk factors can possibly help in recognizing low-risk febrile neutropenic children at admission.
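The additive model above assigns integer points to five admission predictors and flags scores of ≥7 as higher risk. The sketch below (Python) shows the mechanics only; the point values are hypothetical placeholders chosen to sum to the reported 13-point maximum and are not the authors' published weights.

    # Hypothetical point assignments (placeholders, not the published score).
    HYPOTHETICAL_WEIGHTS = {
        "chemo_within_7_days": 2,
        "clinical_focus_of_infection": 2,
        "undernutrition": 2,
        "anc_below_100": 2,
        "crp_above_60": 5,
    }

    def fn_risk_score(features):
        """features: dict predictor -> bool; returns (score, risk label)."""
        score = sum(w for k, w in HYPOTHETICAL_WEIGHTS.items() if features.get(k))
        return score, ("higher risk" if score >= 7 else "lower risk")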
Pneumococcal pneumonia - Are the new severity scores more accurate in predicting adverse outcomes?
Ribeiro, C; Ladeira, I; Gaio, A R; Brito, M C
2013-01-01
The site-of-care decision is one of the most important factors in the management of patients with community-acquired pneumonia. Severity scores are validated prognostic tools for community-acquired pneumonia mortality and treatment site decisions. The aim of this paper was to compare the discriminatory power of four scores - the classic PSI and CURB65 and the more recent SCAP and SMART-COP - in predicting major adverse events: death, ICU admission, and need for invasive mechanical ventilation or vasopressor support in patients admitted with pneumococcal pneumonia. We conducted a five-year retrospective study of patients admitted for pneumococcal pneumonia. Patients were stratified based on admission data and assigned to low-, intermediate-, and high-risk classes for each score. Results were obtained comparing low versus non-low risk classes. We studied 142 episodes of hospitalization with 2 deaths and 10 patients needing mechanical ventilation and vasopressor support. The majority of patients were classified as low risk by all scores - we found high negative predictive values for all adverse events studied, the highest corresponding to the SCAP score. The more recent scores showed better accuracy for predicting ICU admission and need for ventilation or vasopressor support (mostly the SCAP score, with higher AUC values for all adverse events). The rate of all adverse outcomes increased directly with increasing risk class in all scores. The newer severity scores appear to have higher discriminatory power for all adverse events in our study, particularly the SCAP score. Copyright © 2012 Sociedade Portuguesa de Pneumologia. Published by Elsevier España. All rights reserved.
Mohr, Beth A.; Adams, Rachel Sayko; Wooten, Nikki R.; Williams, Thomas V.
2014-01-01
Objectives. We identified to what extent the Department of Defense postdeployment health surveillance program identifies at-risk drinking, alone or in conjunction with psychological comorbidities, and refers service members who screen positive for additional assessment or care. Methods. We completed a cross-sectional analysis of 333 803 US Army active duty members returning from Iraq or Afghanistan deployments in fiscal years 2008 to 2011 with a postdeployment health assessment. Alcohol measures included 2 based on self-report quantity-frequency items—at-risk drinking (positive Alcohol Use Disorders Identification Test alcohol consumption questions [AUDIT-C] screen) and severe alcohol problems (AUDIT-C score of 8 or higher)—and another based on the interviewing provider’s assessment. Results. Nearly 29% of US Army active duty members screened positive for at-risk drinking, and 5.6% had an AUDIT-C score of 8 or higher. Interviewing providers identified potential alcohol problems among only 61.8% of those screening positive for at-risk drinking and only 74.9% of those with AUDIT-C scores of 8 or higher. They referred for a follow-up visit to primary care or another setting only 29.2% of at-risk drinkers and only 35.9% of those with AUDIT-C scores of 8 or higher. Conclusions. This study identified missed opportunities for early intervention for at-risk drinking. Future research should evaluate the effect of early intervention on long-term outcomes. PMID:24922163
Two risk score models for predicting incident Type 2 diabetes in Japan.
Doi, Y; Ninomiya, T; Hata, J; Hirakawa, Y; Mukai, N; Iwase, M; Kiyohara, Y
2012-01-01
Risk scoring methods are effective for identifying persons at high risk of Type 2 diabetes mellitus, but such approaches have not yet been established in Japan. A total of 1935 subjects of a derivation cohort were followed up for 14 years from 1988 and 1147 subjects of a validation cohort independent of the derivation cohort were followed up for 5 years from 2002. Risk scores were estimated based on the coefficients (β) of Cox proportional hazards model in the derivation cohort and were verified in the validation cohort. In the derivation cohort, the non-invasive risk model was established using significant risk factors; namely, age, sex, family history of diabetes, abdominal circumference, body mass index, hypertension, regular exercise and current smoking. We also created another scoring risk model by adding fasting plasma glucose levels to the non-invasive model (plus-fasting plasma glucose model). The area under the curve of the non-invasive model was 0.700 and it increased significantly to 0.772 (P < 0.001) in the plus-fasting plasma glucose model. The ability of the non-invasive model to predict Type 2 diabetes was comparable with that of impaired glucose tolerance, and the plus-fasting plasma glucose model was superior to it. The cumulative incidence of Type 2 diabetes was significantly increased with elevating quintiles of the sum scores of both models in the validation cohort (P for trend < 0.001). We developed two practical risk score models for easily identifying individuals at high risk of incident Type 2 diabetes without an oral glucose tolerance test in the Japanese population. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
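Risk points derived "based on the coefficients (β) of Cox proportional hazards model", as described above, are usually obtained with a Sullivan/Framingham-style scheme: divide each β by a reference constant and round to an integer. Whether this exact constant was used in the study is an assumption; the sketch below (Python, placeholder βs) only illustrates the general mechanics.

    # Convert Cox coefficients to integer risk points by scaling and rounding.
    def betas_to_points(betas, reference_beta=None):
        """betas: dict factor -> Cox coefficient; reference defaults to the
        smallest non-zero |beta|."""
        ref = reference_beta or min(abs(b) for b in betas.values() if b != 0)
        return {k: round(b / ref) for k, b in betas.items()}

    # Placeholder coefficients, not the study's fitted values.
    print(betas_to_points({"age_per_10y": 0.35,
                           "family_history": 0.52,
                           "current_smoking": 0.28}))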
Irungu, Elizabeth M; Heffron, Renee; Mugo, Nelly; Ngure, Kenneth; Katabira, Elly; Bulya, Nulu; Bukusi, Elizabeth; Odoyo, Josephine; Asiimwe, Stephen; Tindimwebwa, Edna; Celum, Connie; Baeten, Jared M
2016-10-17
Antiretroviral therapy (ART) and pre-exposure prophylaxis (PrEP) reduce HIV-1 transmission within heterosexual HIV-1 serodiscordant couples. Prioritizing couples at highest HIV-1 transmission risk for ART and PrEP would maximize impact and minimize costs. The Partners Demonstration Project is an open-label, delivery study of integrated PrEP and ART for HIV-1 prevention among high risk HIV-1 serodiscordant couples in Kenya and Uganda. We evaluated the feasibility of using a validated risk score that weighs a combination of easily measurable factors (age, children, marital status, male circumcision status, condom use, plasma HIV-1 levels) to identify couples at highest risk for HIV-1 transmission for enrollment. Couples scoring ≥5 met the risk score eligibility criteria. We screened 1694 HIV-1 serodiscordant couples and enrolled 1013. Of the screened couples, 1331 (78.6 %) scored ≥5 (with an expected incidence >3 % per year) and 76 % of these entered the study. The median age of the HIV-1 uninfected partner was 29 years [IQR 26, 36] and 20 % were <25 years of age. The HIV-1 uninfected partner was male in 67 % of partnerships, 33 % of whom were uncircumcised, 57 % of couples had no children, and 65 % reported unprotected sex in the month prior to enrollment. Among HIV-1 infected partners, 41 % had plasma viral load >50,000 copies/ml. A risk scoring tool identified HIV-1 serodiscordant couples for a demonstration project of PrEP and ART with high HIV-1 risk. The tool may be feasible for research and public health settings to maximize efficiency and minimize HIV-1 prevention costs.
Namazi, Nazli; Larijani, Bagher; Azadbakht, Leila
2017-08-01
The association between a low-carbohydrate diet (LCD) score and the risk of diabetes mellitus (DM) is contradictory. This study is a systematic review of cohort studies that have focused on the association between the LCD score and DM. We searched PubMed/Medline, Scopus, Embase, ISI Web of Science, and Google Scholar for papers published through January 2017 with no language restrictions. Cohort studies that reported relative risks (RRs) with 95% confidence intervals (CI) for DM were included. Finally, 4 studies were considered for our meta-analysis. The total number of participants ranged from 479 to 85,059. Among the 4 cohort studies, 8,081 cases with DM were observed over follow-up durations ranging from 3.6 to 20 years. A marginally significant association was observed between the highest LCD score and the risk of DM (RR=1.17; 95% CI: 0.90, 1.51). Moreover, the RRs for studies with energy adjustments showed a significant association (RR=1.32; 95% CI: 1.17, 1.49; I²: 0%). Based on our findings, studies with a quality score of 7 or less had a significant influence on the pooled effect size (RR=1.31; 95% CI: 1.15, 1.49; I²: 0%), whereas the overall RR in studies with a quality score greater than 7 was 1.09 (95% CI: 0.73, 1.63). In conclusion, we found that the highest LCD score was marginally associated with the risk of DM. However, more prospective cohort studies are needed to clarify the effects of the LCD score on the risk of DM. © Georg Thieme Verlag KG Stuttgart · New York.
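The pooled relative risks and I² values reported here come from standard meta-analytic formulas. As a rough illustration, the sketch below pools study-level RRs on the log scale with inverse-variance weights and computes Cochran's Q and I²; the input numbers are hypothetical, not the four included cohorts.

```python
import math

# Hypothetical study-level relative risks with 95% CIs (not the actual four cohorts).
studies = [(1.25, 1.05, 1.49), (1.40, 1.10, 1.78), (1.10, 0.85, 1.42), (1.35, 1.08, 1.69)]

log_rr, weights = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE recovered from the CI width
    log_rr.append(math.log(rr))
    weights.append(1.0 / se ** 2)                      # inverse-variance weight

pooled = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_rr))   # Cochran's Q
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0

print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{math.exp(pooled + 1.96 * se_pooled):.2f}), I^2 = {i2:.0f}%")
```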
Automatic detection of cardiovascular risk in CT attenuation correction maps in Rb-82 PET/CTs
NASA Astrophysics Data System (ADS)
Išgum, Ivana; de Vos, Bob D.; Wolterink, Jelmer M.; Dey, Damini; Berman, Daniel S.; Rubeaux, Mathieu; Leiner, Tim; Slomka, Piotr J.
2016-03-01
CT attenuation correction (CTAC) images acquired with PET/CT visualize coronary artery calcium (CAC) and enable CAC quantification. CAC scores acquired with CTAC have been suggested as a marker of cardiovascular disease (CVD). In this work, an algorithm previously developed for automatic CAC scoring in dedicated cardiac CT was applied to automatic CAC detection in CTAC. The study included 134 consecutive patients undergoing 82-Rb PET/CT. Low-dose rest CTAC scans were acquired (100 kV, 11 mAs, 1.4 mm × 1.4 mm × 3 mm voxel size). An experienced observer defined the reference standard with the clinically used intensity level threshold for calcium identification (130 HU). Five scans were removed from analysis due to artifacts. The algorithm extracted potential CAC by intensity-based thresholding and 3D connected component labeling. Each candidate was described by location, size, shape and intensity features. An ensemble of extremely randomized decision trees was used to identify CAC. The data set was randomly divided into training and test sets. Automatically identified CAC was quantified using volume and Agatston scores. In 33 test scans, the system detected on average 469 mm³ of 730 mm³ (64%) of CAC with 36 mm³ false positive volume per scan. The intraclass correlation coefficient for volume scores was 0.84. Each patient was assigned to one of four CVD risk categories based on the Agatston score (0-10, 11-100, 101-400, >400). The correct CVD category was assigned to 85% of patients (Cohen's linearly weighted κ = 0.82). Automatic detection of CVD risk based on CAC scoring in rest CTAC images is feasible. This may enable large-scale studies evaluating the clinical value of CAC scoring in CTAC data.
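The candidate-extraction and risk-category steps described here (130 HU thresholding, 3D connected-component labeling, Agatston-based categories) can be sketched roughly as below. This is a simplified illustration on a synthetic volume; it omits the feature extraction and extremely randomized trees classifier the authors actually used, and only the voxel size and category bounds are taken from the abstract.

```python
import numpy as np
from scipy import ndimage

VOXEL_MM3 = 1.4 * 1.4 * 3.0   # voxel size reported in the abstract
HU_THRESHOLD = 130            # clinical calcium threshold

def cac_candidates(ct_hu: np.ndarray):
    """Threshold a CT volume (in HU) and return labeled calcium candidates and their volumes."""
    mask = ct_hu >= HU_THRESHOLD
    labels, n = ndimage.label(mask)                                  # 3D connected components
    volumes = ndimage.sum(mask, labels, index=range(1, n + 1)) * VOXEL_MM3
    return labels, list(volumes)

def cvd_risk_category(agatston: float) -> str:
    """Agatston-score risk categories used in the study (0-10, 11-100, 101-400, >400)."""
    if agatston <= 10:
        return "category 1 (0-10)"
    if agatston <= 100:
        return "category 2 (11-100)"
    if agatston <= 400:
        return "category 3 (101-400)"
    return "category 4 (>400)"

# Tiny synthetic example: one bright lesion in an otherwise soft-tissue volume.
vol = np.full((10, 10, 10), 40.0)
vol[4:6, 4:6, 4:6] = 300.0
labels, vols = cac_candidates(vol)
print("candidate volumes (mm^3):", vols, "->", cvd_risk_category(450))
```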
Two-Step Approach for the Prediction of Future Type 2 Diabetes Risk
Abdul-Ghani, Muhammad A.; Abdul-Ghani, Tamam; Stern, Michael P.; Karavic, Jasmina; Tuomi, Tiinamaija; Isomaa, Bo; DeFronzo, Ralph A.; Groop, Leif
2011-01-01
OBJECTIVE To develop a model for the prediction of type 2 diabetes mellitus (T2DM) risk on the basis of a multivariate logistic model and 1-h plasma glucose concentration (1-h PG). RESEARCH DESIGN AND METHODS The model was developed in a cohort of 1,562 nondiabetic subjects from the San Antonio Heart Study (SAHS) and validated in 2,395 nondiabetic subjects in the Botnia Study. A risk score on the basis of anthropometric parameters, plasma glucose and lipid profile, and blood pressure was computed for each subject. Subjects with a risk score above a certain cut point were considered to represent high-risk individuals, and their 1-h PG concentration during the oral glucose tolerance test was used to further refine their future T2DM risk. RESULTS We used the San Antonio Diabetes Prediction Model (SADPM) to generate the initial risk score. A risk-score value of 0.065 was found to be an optimal cut point for initial screening and selection of high-risk individuals. A 1-h PG concentration >140 mg/dL in high-risk individuals (whose risk score was >0.065) was the optimal cut point for identification of subjects at increased risk. The two cut points had a sensitivity, specificity, and positive predictive value of 77.8, 77.4, and 44.8% in the SAHS and 75.8, 71.6, and 11.9% in the Botnia Study, respectively. CONCLUSIONS A two-step model, based on the combination of the SADPM and 1-h PG, is a useful tool for the identification of high-risk Mexican-American and Caucasian individuals. PMID:21788628
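The two-step rule itself is simple to express in code. The sketch below applies the cut points reported in the abstract (SADPM risk score >0.065, then 1-h plasma glucose >140 mg/dL); it assumes the SADPM score has already been computed elsewhere, since its coefficients are not given here, and the wording of the returned labels is illustrative.

```python
from typing import Optional

def two_step_t2dm_risk(sadpm_score: float, one_hour_pg_mg_dl: Optional[float] = None) -> str:
    """Two-step screening: SADPM score first, then 1-h plasma glucose for high-risk subjects."""
    if sadpm_score <= 0.065:
        return "low risk (no 1-h PG follow-up needed)"
    if one_hour_pg_mg_dl is None:
        return "high risk by SADPM: measure 1-h plasma glucose"
    if one_hour_pg_mg_dl > 140:
        return "increased risk (SADPM > 0.065 and 1-h PG > 140 mg/dL)"
    return "high SADPM but 1-h PG <= 140 mg/dL: lower residual risk"

print(two_step_t2dm_risk(0.04))
print(two_step_t2dm_risk(0.09))
print(two_step_t2dm_risk(0.09, 155))
```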
Piazza, Nicolo; Wenaweser, Peter; van Gameren, Menno; Pilgrim, Thomas; Tzikas, Apostolos; Otten, Amber; Nuis, Rutger; Onuma, Yoshinobu; Cheng, Jin Ming; Kappetein, A Pieter; Boersma, Eric; Juni, Peter; de Jaegere, Peter; Windecker, Stephan; Serruys, Patrick W
2010-02-01
Surgical risk scores, such as the logistic EuroSCORE (LES) and Society of Thoracic Surgeons Predicted Risk of Mortality (STS) score, are commonly used to identify high-risk or "inoperable" patients for transcatheter aortic valve implantation (TAVI). In Europe, the LES plays an important role in selecting patients for implantation with the Medtronic CoreValve System. What is less clear, however, is the role of the STS score in these patients and the relationship between the LES and STS. The purpose of this study was to examine the correlation between LES and STS scores and their performance characteristics in high-risk surgical patients implanted with the Medtronic CoreValve System. All consecutive patients (n = 168) in whom a CoreValve bioprosthesis was implanted between November 2005 and June 2009 at 2 centers (Bern University Hospital, Bern, Switzerland, and Erasmus Medical Center, Rotterdam, The Netherlands) were included for analysis. Patient demographics were recorded in a prospective database. Logistic EuroSCORE and STS scores were calculated on a prospective and retrospective basis, respectively. Observed mortality was 11.1%. The mean LES was 3 times higher than the mean STS score (LES 20.2% +/- 13.9% vs STS 6.7% +/- 5.8%). Based on the various LES and STS cutoff values used in previous and ongoing TAVI trials, 53% of patients had an LES ≥15%, 16% had an STS ≥10%, and 40% had an LES ≥20% or STS ≥10%. The Pearson correlation coefficient revealed a reasonable (moderate) linear relationship between the LES and STS scores, r = 0.58, P < .001. Although the STS score outperformed the LES, both models had suboptimal discriminatory power (c-statistic, 0.49 for LES and 0.69 for STS) and calibration. Clinical judgment and the Heart Team concept should play a key role in selecting patients for TAVI, whereas currently available surgical risk score algorithms should be used to guide clinical decision making. Copyright (c) 2010 Mosby, Inc. All rights reserved.
Schievink, Bauke; de Zeeuw, Dick; Smink, Paul A; Andress, Dennis; Brennan, John J; Coll, Blai; Correa-Rotter, Ricardo; Hou, Fan Fan; Kohan, Donald; Kitzman, Dalane W; Makino, Hirofumi; Parving, Hans-Henrik; Perkovic, Vlado; Remuzzi, Giuseppe; Tobe, Sheldon; Toto, Robert; Hoekman, Jarno; Lambers Heerspink, Hiddo J
2016-05-01
A recent phase II clinical trial (the Reducing Residual Albuminuria in Subjects with Diabetes and Nephropathy with AtRasentan (RADAR) trial and an identical trial in Japan, together RADAR/JAPAN) showed that the endothelin A receptor antagonist atrasentan lowers albuminuria, blood pressure, cholesterol, and hemoglobin, and increases body weight in patients with type 2 diabetes and nephropathy. We previously developed an algorithm, the Parameter Response Efficacy (PRE) score, which translates short-term drug effects into predictions of long-term effects on clinical outcomes. We used the PRE score on data from the RADAR/JAPAN study to predict the effect of atrasentan on renal and heart failure outcomes. We performed a post-hoc analysis of the RADAR/JAPAN randomized clinical trials in which 211 patients with type 2 diabetes and nephropathy were randomly assigned to atrasentan 0.75 mg/day, 1.25 mg/day, or placebo. A PRE score was developed in a background set of completed clinical trials using multivariate Cox models. The score was applied to baseline and week-12 risk marker levels of RADAR/JAPAN participants to predict atrasentan effects on clinical outcomes. Outcomes were defined as doubling of serum creatinine or end-stage renal disease, and hospitalization for heart failure. The PRE score predicted renal risk changes of -23% and -30% for atrasentan 0.75 and 1.25 mg/day, respectively. PRE scores also predicted a small non-significant increase in heart failure risk for atrasentan 0.75 and 1.25 mg/day (+2% and +7%, respectively). Selecting patients with >30% albuminuria reduction from baseline (responders) improved the predicted renal outcome to almost 50% risk reduction, whereas non-responders showed no renal benefit. Based on the RADAR/JAPAN study, with short-term changes in risk markers, atrasentan is expected to decrease renal risk without an increased risk of heart failure. Within this population, albuminuria responders appear to contribute to the predicted improvements, whereas non-responders showed no benefit. The ongoing hard outcome trial (SONAR) in patients with type 2 diabetes who achieve >30% albuminuria reduction on atrasentan will allow us to assess the validity of these predictions. © The European Society of Cardiology 2015.
Du, Juan; Ruan, Xiangyan; Gu, Muqing; Bitzer, Johannes; Mueck, Alfred O
2016-06-01
Female sexual dysfunction (FSD) is a very common sexual health problem worldwide. The prevalence of FSD in Chinese women is, however, unknown. This is the first study to investigate a large number of young women throughout China via the internet, to determine the prevalence and types of FSD and to identify the risk factors for FSD. The primary endpoint was the Female Sexual Function Index (FSFI) score, with additional questions on contraception, sexual activity, relationship stability, pregnancy and other factors which may influence sexual function. The online questionnaire was completed by women from 31 of the 34 Chinese provinces. A total of 1618 completed questionnaires were received, and 1010 were included in the analyses after screening (62.4%). The mean age of the respondents was 25.1 ± 4.5 years. The mean total FSFI score was 24.99 ± 4.60. According to FSFI definitions (cut-off score 26.55), 60.2% of women were at risk of FSD. Based on domain scores, 52 were considered at high risk of dysfunction for pain (5.1%), 35 for orgasm (3.5%), 33 for desire (3.3%), 20 for arousal (2.0%), 6 for satisfaction (0.6%) and 2 for lubrication (0.2%). The prevalence of FSFI scores indicating risk of sexual dysfunction was about 60% in Chinese women. An unstable relationship, pressure to become pregnant, non-use of contraception, negative self-evaluation of appearance and increasing age were significantly associated with FSD in young Chinese women.
Bell, Jill A; daCosta DiBonaventura, Marco; Witt, Edward A; Ben-Joseph, Rami; Reeve, Bryce B
2017-02-01
To assess the feasibility of using the SF-36v2 mental health (MH) and mental component summary (MCS) scores for classification of risk for major depressive disorder (MDD), and to determine cut-off scores based on sensitivity and specificity in a general US representative sample and a chronic pain subpopulation. Data were analyzed from the 2013 US National Health and Wellness Survey (adults aged 18 years and older; N=75,000) and from a chronic pain subpopulation (n=6679). Risk of MDD was defined as a score ≥10 on the Patient Health Questionnaire (PHQ-9). Logistic regression modeling was used to predict being at risk for MDD, and receiver operating characteristic curves were produced. The total sample had MH scores of 48.8 and MCS scores of 48.9, similar to the normative US population mean. The percentages of respondents with a PHQ-9 score ≥10 were 15.0% and 29.1% for the total sample and the chronic pain subpopulation, respectively. Cut-off scores (for PHQ-9 ≥10) in the total sample for the MH and MCS were 43.0 and 46.0, respectively. Specificities for the MH and MCS were 77.8% and 76.1%; sensitivities were 84.9% and 88.1%, respectively. Among the subpopulation with chronic pain, cut-off scores for the MH and MCS were 40.4 and 43.1, respectively. Corresponding specificities for the MH and MCS were 77.9% and 73.9%; sensitivities were 78.3% and 83.4%, respectively. The SF-36v2 was found to have sufficient specificity and sensitivity to categorize participants at risk for MDD. If no depression questionnaire is available, it is feasible to use the SF-36v2 to characterize the MH of populations.
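Cut-off scores of this kind are typically read off the ROC curve, for example at the point maximizing Youden's J (sensitivity + specificity - 1). The small sketch below illustrates that procedure on simulated data; because lower SF-36 mental health scores indicate worse mental health, a respondent is flagged as at risk when the score falls at or below the candidate threshold. The data and resulting threshold are illustrative only, not the survey's.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated SF-36 MH scores: at-risk respondents (PHQ-9 >= 10) tend to score lower.
at_risk = rng.normal(38, 8, 300)       # label 1
not_at_risk = rng.normal(52, 8, 1200)  # label 0
scores = np.concatenate([at_risk, not_at_risk])
labels = np.concatenate([np.ones(300), np.zeros(1200)])

best = None
for thr in np.unique(scores):
    pred = scores <= thr                     # lower score => classified as at risk
    sens = np.mean(pred[labels == 1])
    spec = np.mean(~pred[labels == 0])
    j = sens + spec - 1                      # Youden's J statistic
    if best is None or j > best[0]:
        best = (j, thr, sens, spec)

j, thr, sens, spec = best
print(f"cut-off {thr:.1f}: sensitivity {sens:.1%}, specificity {spec:.1%}, J {j:.2f}")
```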
Practical, transparent prospective risk analysis for the clinical laboratory.
Janssens, Pim Mw
2014-11-01
Prospective risk analysis (PRA) is an essential element in quality assurance for clinical laboratories. Practical approaches to conducting PRA in laboratories, however, are scarce. On the basis of the classical Failure Mode and Effect Analysis method, an approach to PRA was developed for application to key laboratory processes. First, the separate, major steps of the process under investigation are identified. Scores are then given for the Probability (P) and Consequence (C) of predefined types of failures and the chances of Detecting (D) these failures. Based on the P and C scores (on a 10-point scale), an overall Risk score (R) is calculated. The scores for each process were recorded in a matrix table. Based on predetermined criteria for R and D, it was determined whether a more detailed analysis was required for potential failures and, ultimately, where risk-reducing measures were necessary, if any. As an illustration, this paper presents the results of the application of PRA to our pre-analytical and analytical activities. The highest R scores were obtained in the stat processes, the most common failure type in the collective process steps was 'delayed processing or analysis', the failure type with the highest mean R score was 'inappropriate analysis' and the failure type most frequently rated as suboptimal was 'identification error'. The PRA designed is a useful semi-objective tool to identify process steps with potential failures rated as risky. Its systematic design and convenient output in matrix tables makes it easy to perform, practical and transparent. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
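The abstract does not spell out how R is combined from P and C, so the sketch below assumes the conventional product R = P × C on the 10-point scales, with hypothetical thresholds for R and D deciding whether a failure mode needs detailed analysis. It illustrates the matrix-style bookkeeping rather than the laboratory's actual criteria.

```python
from dataclasses import dataclass

R_THRESHOLD = 30   # hypothetical criterion for flagging a failure mode as risky
D_THRESHOLD = 4    # hypothetical minimum acceptable detectability (1-10 scale)

@dataclass
class FailureMode:
    step: str
    failure: str
    p: int   # probability of occurrence (1-10)
    c: int   # consequence/severity (1-10)
    d: int   # chance of detecting the failure (1-10, higher = easier to detect)

    @property
    def r(self) -> int:
        return self.p * self.c          # assumed combination rule for the overall risk score

    def needs_detailed_analysis(self) -> bool:
        return self.r >= R_THRESHOLD or self.d < D_THRESHOLD

modes = [
    FailureMode("sample reception", "identification error", p=3, c=9, d=5),
    FailureMode("stat analysis", "delayed processing or analysis", p=6, c=6, d=7),
    FailureMode("analysis", "inappropriate analysis", p=2, c=8, d=3),
]
for m in modes:
    print(f"{m.step:18s} {m.failure:32s} R={m.r:3d} "
          f"detailed analysis: {m.needs_detailed_analysis()}")
```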
Jung, Seung-Hyun; Cho, Sung-Min; Yim, Seon-Hee; Kim, So-Hee; Park, Hyeon-Chun; Cho, Mi-La; Shim, Seung-Cheol; Kim, Tae-Hwan; Park, Sung-Hwan; Chung, Yeun-Jun
2016-12-01
To develop a genotype-based ankylosing spondylitis (AS) risk prediction model that is more sensitive and specific than HLA-B27 typing. To develop the AS genetic risk scoring (AS-GRS) model, 648 individuals (285 cases and 363 controls) were examined for 5 copy number variants (CNV), 7 single-nucleotide polymorphisms (SNP), and an HLA-B27 marker by TaqMan assays. The AS-GRS model was developed using logistic regression and validated with a larger independent set (576 cases and 680 controls). Through logistic regression, we built the AS-GRS model consisting of 5 genetic components: HLA-B27, 3 CNV (1q32.2, 13q13.1, and 16p13.3), and 1 SNP (rs10865331). All significant associations of genetic factors in the model were replicated in the independent validation set. The discriminative ability of the AS-GRS model measured by the area under the curve was excellent: 0.976 (95% CI 0.96-0.99) in the model construction set and 0.951 (95% CI 0.94-0.96) in the validation set. The AS-GRS model showed higher specificity and accuracy than the HLA-B27-only model when the sensitivity was set to over 94%. When we categorized the individuals into quartiles based on the AS-GRS scores, the ORs of the 4 groups (low, intermediate-1, intermediate-2, and high risk) showed an increasing trend with the AS-GRS scores (r² = 0.950), and the highest risk group showed a 494× higher risk of AS than the lowest risk group (95% CI 237.3-1029.1). Our AS-GRS could be used to identify individuals at high risk for AS before major symptoms appear, which may improve the prognosis for them through early treatment.
Horne, Benjamin D; Budge, Deborah; Masica, Andrew L; Savitz, Lucy A; Benuzillo, José; Cantu, Gabriela; Bradshaw, Alejandra; McCubrey, Raymond O; Bair, Tami L; Roberts, Colleen A; Rasmusson, Kismet D; Alharethi, Rami; Kfoury, Abdallah G; James, Brent C; Lappé, Donald L
2017-03-01
Reducing 30-day readmissions continues to be problematic for most hospitals. This study reports the creation and validation of sex-specific inpatient (i) heart failure (HF) risk scores using electronic data from the beginning of inpatient care for effective and efficient prediction of 30-day readmission risk. HF patients hospitalized at Intermountain Healthcare from 2005 to 2012 (derivation: n=6079; validation: n=2663) and Baylor Scott & White Health (North Region) from 2005 to 2013 (validation: n=5162) were studied. Sex-specific iHF scores were derived to predict post-hospitalization 30-day readmission using common HF laboratory measures and age. Risk scores adding social, morbidity, and treatment factors were also evaluated. The iHF model for females utilized potassium, bicarbonate, blood urea nitrogen, red blood cell count, white blood cell count, and mean corpuscular hemoglobin concentration; for males, components were B-type natriuretic peptide, sodium, creatinine, hematocrit, red cell distribution width, and mean platelet volume. Among females, odds ratios (OR) were OR=1.99 for iHF tertile 3 vs. 1 (95% confidence interval [CI]=1.28, 3.08) for Intermountain validation (P-trend across tertiles=0.002) and OR=1.29 (CI=1.01, 1.66) for Baylor patients (P-trend=0.049). Among males, iHF had OR=1.95 (CI=1.33, 2.85) for tertile 3 vs. 1 in Intermountain (P-trend <0.001) and OR=2.03 (CI=1.52, 2.71) in Baylor (P-trend < 0.001). Expanded models using 182-183 variables had predictive abilities similar to iHF. Sex-specific laboratory-based electronic health record-delivered iHF risk scores effectively predicted 30-day readmission among HF patients. The scores are efficient to calculate and deliver to clinicians, and their recent clinical implementation suggests they are useful and usable for more precise clinical HF treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
Diaz-Beveridge, R; Bruixola, G; Lorente, D; Caballero, J; Rodrigo, E; Segura, Á; Akhoundova, D; Giménez, A; Aparicio, J
2018-03-01
Sorafenib is a standard treatment for patients (pts) with advanced hepatocellular carcinoma (aHCC), although the clinical benefit is heterogeneous between different pts groups. Among novel prognostic factors, a low baseline neutrophil-to-lymphocyte ratio (bNLR) and early-onset diarrhoea have been linked with a better prognosis. To identify prognostic factors in pts with aHCC treated with 1st-line sorafenib and to develop a new prognostic score to guide management. Retrospective review of 145 pts; bNLR, overall toxicity, early toxicity rates and overall survival (OS) were assessed. Univariate and multivariate analysis of prognostic factors for OS was performed. The prognostic score was calculated from the coefficients found in the Cox analysis. ROC curves and the pseudo-R² index were used for internal validation. Discrimination ability and calibration were tested by Harrell's c-index (HCI) and the Akaike criterion (AIC). The optimal bNLR cut-off for the prediction of OS was 4 (AUC 0.62). Independent prognostic factors in multivariate analysis for OS were performance status (PS) (p < .0001), Child-Pugh (C-P) score (p = 0.005), early-onset diarrhoea (p = 0.006) and bNLR (p = 0.011). The prognostic score based on these four variables was found to be efficient (HCI = 0.659; AIC = 1.180). Four risk groups for OS could be identified: a very low-risk (median OS = 48.6 months), a low-risk (median OS = 11.6 months), an intermediate-risk (median OS = 8.3 months) and a high-risk group (median OS = 4.4 months). PS and C-P score were the main prognostic factors for OS, followed by early-onset diarrhoea and bNLR. We identified four risk groups for OS depending on these parameters. This prognostic model could be useful for patient stratification, but external validation is needed.
Panella, L; La Porta, F; Caselli, S; Marchisio, S; Tennant, A
2012-09-01
Effective discharge planning is increasingly recognised as a critical component of hospital-based rehabilitation. The BRASS index is a risk screening tool for identification, shortly after hospital admission, of patients who are at risk of post-discharge problems. To evaluate the internal construct validity and reliability of the Blaylock Risk Assessment Screening Score (BRASS) within the rehabilitation setting. Observational prospective study. Rehabilitation ward of an Italian district hospital. One hundred and four consecutively admitted patients. Using classical psychometric methods and Rasch analysis (RA), the internal construct validity and reliability of the BRASS were examined. Also, the external and predictive validity of the Rasch-modified BRASS (RMB) score were determined. Reliability of the original BRASS was low (Cronbach's alpha=0.595) and factor analyses showed that it was clearly multidimensional. An RA based on a reduced 7-item BRASS set (the RMB) satisfied the model's expectations. Reliability was 0.777. The RMB scores strongly correlated with the original BRASS (rho=0.952; P<0.001) and with FIM™ admission scores (rho=-0.853; P<0.001). An RMB score of 12 was associated with an increased risk of nursing home admission (RR=2.1, 95%CI=1.7-2.5), whereas a score of 17 was associated with a higher risk of length of stay >28 days (RR=7.6, 95%CI=1.8-31.9). This study demonstrated that the original BRASS was multidimensional and unreliable. However, the RMB holds adequate internal construct validity and is sufficiently reliable as a predictor of discharge problems for group, but not individual, use. The application of tools and methods (such as the BRASS Index) developed under the biomedical paradigm in a Physical and Rehabilitation Medicine setting may have limitations. Further research is needed to develop, within the rehabilitation setting, a valid measuring tool of risk of post-discharge problems at the individual level.
Puente, Javier; López-Tarruella, Sara; Ruiz, Amparo; Lluch, Ana; Pastor, Miguel; Alba, Emilio; de la Haba, Juan; Ramos, Manuel; Cirera, Luis; Antón, Antonio; Llombart, Antoni; Plazaola, Arrate; Fernández-Aramburo, Antonio; Sastre, Javier; Díaz-Rubio, Eduardo; Martin, Miguel
2010-07-01
Women with recurrent metastatic breast cancer from a Spanish hospital registry (El Alamo, GEICAM) were analyzed in order to identify the most helpful prognostic factors to predict survival and to ultimately construct a practical prognostic index. The inclusion criteria covered women diagnosed with operable invasive breast cancer who had metastatic recurrence between 1990 and 1997 in GEICAM hospitals. Patients with stage IV breast cancer at initial diagnosis or with isolated loco-regional recurrence were excluded from this analysis. Data from 2,322 patients with recurrent breast cancer after primary treatment (surgery, radiation and systemic adjuvant treatment) were used to construct the prognostic index. The prognostic index score for each individual patient was calculated by totalling up the scores of each independent variable. The maximum score obtainable was 26.1. Nine hundred and sixty-two patients who had complete data for all the variables were used in the computation of the prognostic index score. We were able to stratify them into three prognostic groups based on the prognostic index score: 322 patients in the good risk group (score ≤13.5), 308 patients in the intermediate risk group (score 13.51-15.60) and 332 patients in the poor risk group (score ≥15.61). The median survivals for these groups were 3.69, 2.27 and 1.02 years, respectively (P < 0.0001). In conclusion, risk scores are extraordinarily valuable tools, highly recommendable in clinical practice.
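The stratification step is straightforward once the per-variable scores have been summed: the cut points reported in the abstract (≤13.5 good, 13.51-15.60 intermediate, ≥15.61 poor) map a patient's total index to a risk group. The individual variable scores are not listed in the abstract, so the example totals below are hypothetical.

```python
def risk_group(prognostic_index: float) -> str:
    """Map the summed prognostic index (maximum 26.1) to the three reported risk groups."""
    if prognostic_index <= 13.5:
        return "good risk (median OS 3.69 years)"
    if prognostic_index <= 15.60:
        return "intermediate risk (median OS 2.27 years)"
    return "poor risk (median OS 1.02 years)"

for total in (12.8, 14.9, 17.3):   # hypothetical patient totals
    print(total, "->", risk_group(total))
```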
Romaguera, Dora; Gracia-Lavedan, Esther; Molinuevo, Amaia; de Batlle, Jordi; Mendez, Michelle; Moreno, Victor; Vidal, Carmen; Castelló, Adela; Pérez-Gómez, Beatriz; Martín, Vicente; Molina, Antonio J; Dávila-Batista, Verónica; Dierssen-Sotos, Trinidad; Gómez-Acebo, Inés; Llorca, Javier; Guevara, Marcela; Castilla, Jesús; Urtiaga, Carmen; Llorens-Ivorra, Cristóbal; Fernández-Tardón, Guillermo; Tardón, Adonina; Lorca, José Andrés; Marcos-Gragera, Rafael; Huerta, José María; Olmedo-Requena, Rocío; Jimenez-Moleon, José Juan; Altzibar, Jone; de Sanjosé, Silvia; Pollán, Marina; Aragonés, Núria; Castaño-Vinyals, Gemma; Kogevinas, Manolis; Amiano, Pilar
2017-07-01
Prostate, breast and colorectal cancer are the most common tumours in Spain. The aim of the present study was to evaluate the association between adherence to nutrition-based guidelines for cancer prevention and prostate, breast and colorectal cancer, in the MCC-Spain case-control study. A total of 1,718 colorectal, 1,343 breast and 864 prostate cancer cases and 3,431 population-based controls recruited between 2007 and 2012 were included in the present study. The World Cancer Research Fund/American Institute for Cancer Research (WCRF/AICR) score based on six recommendations for cancer prevention (on body fatness, physical activity, foods and drinks that promote weight gain, plant foods, animal foods and alcoholic drinks; score range 0-6) was constructed. We used unconditional logistic regression analysis adjusting for potential confounders. A one-point increment in the WCRF/AICR score was associated with 25% (95% CI 19-30%) lower risk of colorectal, and 15% (95% CI 7-22%) lower risk of breast cancer; no association with prostate cancer was detected, except for cases with a Gleason score ≥7 (poorly differentiated/undifferentiated tumours) (OR 0.87, 95% CI 0.76-0.99). These results add to the wealth of evidence indicating that a great proportion of common cancer cases could be avoided by adopting healthy lifestyle habits. © 2017 UICC.
Spreckelsen, C; Juenger, J
2017-09-26
Adequate estimation and communication of risks is a critical competence of physicians. Due to an evident lack of these competences, effective training addressing risk competence during medical education is needed. Test-enhanced learning has been shown to produce marked effects on achievements. This study aimed to investigate the effect of repeated tests implemented on top of a blended learning program for risk competence. We introduced a blended-learning curriculum for risk estimation and risk communication based on a set of operationalized learning objectives, which was integrated into a mandatory course "Evidence-based Medicine" for third-year students. A randomized controlled trial addressed the effect of repeated testing on achievement as measured by the students' pre- and post-training score (nine multiple-choice items). Basic numeracy and statistical literacy were assessed at baseline. Analysis relied on descriptive statistics (histograms, box plots, scatter plots, and summary of descriptive measures), bootstrapped confidence intervals, analysis of covariance (ANCOVA), and effect sizes (Cohen's d, r) based on adjusted means and standard deviations. All of the 114 students enrolled in the course consented to take part in the study and were assigned to either the intervention or control group (both: n = 57) by balanced randomization. Five participants dropped out due to non-compliance (control: 4, intervention: 1). Both groups profited considerably from the program in general (Cohen's d for overall pre vs. post scores: 2.61). Repeated testing yielded an additional positive effect: while the covariate (baseline score) exhibited no relation to the post-intervention score, F(1, 106) = 2.88, p > .05, there was a significant effect of the intervention (repeated tests scenario) on learning achievement, F(1, 106) = 12.72, p < .05, d = .94, r = .42 (95% CI: [.26, .57]). However, in the subgroup of participants with a high initial numeracy score no similar effect could be observed. Dedicated training can improve relevant components of risk competence of medical students. An already promising overall effect of the blended learning approach can be improved significantly by implementing a test-enhanced learning design, namely repeated testing. As students with a high initial numeracy score did not profit equally from repeated testing, a target-group-specific opt-out may be offered.
Maternal and Child Characteristics Associated With Mother-Child Interaction in One-Year-Olds.
Graff, J Carolyn; Bush, Andrew J; Palmer, Frederick B; Murphy, Laura E; Whitaker, Toni M; Tylavsky, Frances A
2017-08-01
Mothers' interactions with their young children have predicted later child development, behavior, and health, but evidence has been developed mainly in at-risk clinical samples. An economically and racially diverse sample of pregnant women who were not experiencing a high-risk pregnancy was recruited to participate in a community-based, longitudinal study of factors associated with child cognitive and social-emotional development during the first 3 years. The purpose of the present analysis was to identify associations between the characteristics of 1125 mothers and their 1-year-olds and the mothers' and children's scores on the Nursing Child Assessment Teaching Scale (NCATS). A multivariable approach was used to identify maternal and child characteristics associated with NCATS scores and to develop prediction models for NCATS total and subscale scores of mothers and children. Child expressive and receptive communication and maternal IQ, marital status, age, and insurance predicted the NCATS Mother total score, accounting for 28% of the score variance. Child expressive communication and birth weight predicted the NCATS Child total score, accounting for 4% of variance. Child's expressive communication and mother's IQ and marital status predicted NCATS mother-child total scores. While these findings were similar to reports of NCATS scores in at-risk populations, no previous team has examined all of the mother and child characteristics included in this analysis. These findings support the utility of the NCATS for assessing mother-child interaction and predicting child outcomes in community-based, non-clinical populations. © 2017 Wiley Periodicals, Inc.
Coronary risk stratification of patients undergoing surgery for valvular heart disease.
Hasselbalch, Rasmus Bo; Engstrøm, Thomas; Pries-Heje, Mia; Heitmann, Merete; Pedersen, Frants; Schou, Morten; Mickley, Hans; Elming, Hanne; Steffensen, Rolf; Køber, Lars; Iversen, Kasper
2017-01-15
Multislice computed tomography (MSCT) is a non-invasive, less expensive, low-radiation alternative to coronary angiography (CAG) prior to valvular heart surgery. MSCT has a high negative predictive value for coronary artery disease (CAD), but previous studies of patients with valvular disease have shown that MSCT, as the primary evaluation technique, led to re-evaluation with CAG in about a third of cases, and it is therefore not recommended. If a subgroup of patients with low to intermediate risk of CAD could be identified and examined with MSCT, it could be cost-effective and reduce radiation and the risk of complications associated with CAG. The study cohort was derived from a national registry of patients undergoing CAG prior to valvular heart surgery. Using logistic regression, we identified significant risk factors for CAD and developed a risk score (CT-valve score). The score was validated on a similar cohort of patients from another registry. The study cohort consisted of 2221 patients, 521 (23.5%) of whom had CAD. The validation cohort consisted of 2575 patients, 771 (29.9%) of whom had CAD. The identified risk factors were male sex, age, smoking, hyperlipidemia, hypertension, aortic valve disease, extracardiac arteriopathy, ejection fraction <30% and diabetes mellitus. The CT-valve score could identify a third of the population with a risk of CAD of about 10%. A score based on risk factors for CAD can identify patients who might benefit from using MSCT as a gatekeeper to CAG prior to heart valve surgery. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Turusheva, Anna; Frolova, Elena; Vaes, Bert; Hegendoerfer, Eralda; Degryse, Jean-Marie
2017-07-01
Prediction models help to make decisions about further management in clinical practice. This study aimed to develop a mortality risk score based on previously identified risk predictors and to perform internal and external validations. In a population-based prospective cohort study of 611 community-dwelling individuals aged 65+ in St. Petersburg (Russia), all-cause mortality risks over 2.5 years of follow-up were determined based on the results obtained from anthropometry, medical history, physical performance tests, spirometry and laboratory tests. C-statistic, risk reclassification analysis, integrated discrimination improvement analysis, decision curve analysis, internal validation and external validation were performed. Older adults were at higher risk for mortality [HR (95%CI)=4.54 (3.73-5.52)] when two or more of the following components were present: poor physical performance, low muscle mass, poor lung function, and anemia. When high C-reactive protein (CRP) and high B-type natriuretic peptide (BNP) were added to anemia, the HR (95%CI) was slightly higher [5.81 (4.73-7.14)], even after adjusting for age, sex and comorbidities. Our models were validated in an external population of adults 80+. The extended model had a better predictive capacity for cardiovascular mortality [HR (95%CI)=5.05 (2.23-11.44)] compared to the baseline model [HR (95%CI)=2.17 (1.18-4.00)] in the external population. We developed and validated a new risk prediction score that may be used to identify older adults at higher risk for mortality in Russia. Additional studies are needed to determine which targeted interventions improve the outcomes of these at-risk individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
Komulainen, K; Pulkki-Råback, L; Jokela, M; Lyytikäinen, L-P; Pitkänen, N; Laitinen, T; Hintsanen, M; Elovainio, M; Hintsa, T; Jula, A; Juonala, M; Pahkala, K; Viikari, J; Lehtimäki, T; Raitakari, O; Keltikangas-Järvinen, L
2018-04-01
The life-course development of body mass index (BMI) may be driven by interactions between genes and obesity-inducing social environments. We examined whether lower parental or own education accentuates the genetic risk for higher BMI over the life course, and whether diet and physical activity account for the educational differences in genetic associations with BMI. The study comprised 2441 participants (1319 women, 3-18 years at baseline) from the prospective, population-based Cardiovascular Risk in Young Finns Study. BMI (kg/m²) trajectories were calculated from 18 to 49 years, using data from six time points spanning 31 years. A polygenic risk score for BMI was calculated as a weighted sum of risk alleles in 97 single-nucleotide polymorphisms. Education was assessed via self-reports, measured prospectively from participants in adulthood and from parents when participants were children. Diet and physical activity were self-reported in adulthood. Mean BMI increased from 22.6 to 26.6 kg/m² during the follow-up. In growth curve analyses, the genetic risk score was associated with faster BMI increase over time (b=0.02 (95% CI, 0.01-0.02), P<0.001). The association between the genetic risk score and BMI was more pronounced among those with a lower educational level in adulthood (b=-0.12 (95% CI, -0.23 to -0.01); P=0.036). No interaction effect was observed between the genetic risk score and parental education (b=0.05 (95% CI, -0.09 to 0.18); P=0.51). Diet and physical activity explained little of the interaction effect between the genetic risk score and adulthood education. In this prospective study, the association of a risk score of 97 genetic variants with BMI was stronger among those with low compared with high education. This suggests that lower education in adulthood accentuates the risk of higher BMI in people at genetic risk.
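A weighted polygenic risk score of this kind is simply the dot product of per-SNP effect weights and risk-allele counts. The miniature sketch below shows that calculation for a handful of hypothetical SNPs; the study itself used 97 BMI-associated variants with published GWAS weights.

```python
import numpy as np

# Hypothetical per-allele effect weights and genotypes for 5 SNPs;
# the study used 97 BMI-associated variants.
weights = np.array([0.08, 0.03, 0.05, 0.02, 0.06])   # e.g. GWAS betas
genotypes = np.array([
    [2, 1, 0, 1, 2],   # person 1: risk-allele counts (0, 1 or 2 per SNP)
    [0, 0, 1, 2, 1],   # person 2
])

prs = genotypes @ weights   # weighted sum of risk alleles per person
print("polygenic risk scores:", np.round(prs, 3))
```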
Predictors of heart failure in patients with stable coronary artery disease: a PEACE study.
Lewis, Eldrin F; Solomon, Scott D; Jablonski, Kathleen A; Rice, Madeline Murguia; Clemenza, Francesco; Hsia, Judith; Maggioni, Aldo P; Zabalgoitia, Miguel; Huynh, Thao; Cuddy, Thomas E; Gersh, Bernard J; Rouleau, Jean; Braunwald, Eugene; Pfeffer, Marc A
2009-05-01
Heart failure (HF) is a disease commonly associated with coronary artery disease. Most risk models for HF development have focused on patients with acute myocardial infarction. The Prevention of Events with Angiotensin-Converting Enzyme Inhibition (PEACE) population enabled the development of a risk model to predict HF in patients with stable coronary artery disease and preserved ejection fraction. In the 8,290 PEACE patients without preexisting HF, new-onset HF hospitalizations and fatal HF were assessed over a median follow-up of 4.8 years. Covariates were evaluated and maintained in the multivariable Cox regression model using backward selection if P<0.05. A risk score was developed and converted to an integer-based scoring system. Among the PEACE population (age, 64+/-8 years; female, 18%; prior myocardial infarction, 55%), there were 268 cases of fatal and nonfatal HF. Twelve characteristics were associated with increased risk of HF, along with several baseline medications, including older age, history of hypertension, and diabetes. Randomization to trandolapril independently reduced the risk of HF. There was no interaction between trandolapril treatment and other risk factors for HF. The risk score (range, 0 to 21) demonstrated excellent discriminatory power (c-statistic 0.80). Risk of HF ranged from 1.75% in patients with a risk score of 0 to 33% in patients with a risk score ≥16. Among patients with stable coronary artery disease and preserved ejection fraction, traditional and newer factors were independently associated with increased risk of HF. Trandolapril decreased the risk of HF in these patients with preserved ejection fraction.
Liu, Li; Tabung, Fred K; Zhang, Xuehong; Nowak, Jonathan A; Qian, Zhi Rong; Hamada, Tsuyoshi; Nevo, Daniel; Bullman, Susan; Mima, Kosuke; Kosumi, Keisuke; da Silva, Annacarolina; Song, Mingyang; Cao, Yin; Twombly, Tyler S; Shi, Yan; Liu, Hongli; Gu, Mancang; Koh, Hideo; Li, Wanwan; Du, Chunxia; Chen, Yang; Li, Chenxi; Li, Wenbin; Mehta, Raaj S; Wu, Kana; Wang, Molin; Kostic, Aleksander D; Giannakis, Marios; Garrett, Wendy S; Huttenhower, Curtis; Chan, Andrew T; Fuchs, Charles S; Nishihara, Reiko; Ogino, Shuji; Giovannucci, Edward L
2018-04-24
Specific nutritional components are likely to induce intestinal inflammation, which is characterized by increased levels of interleukin 6 (IL6), C-reactive protein (CRP), and tumor necrosis factor-receptor superfamily member 1B (TNFRSF1B) in the circulation and promotes colorectal carcinogenesis. The inflammatory effects of a diet can be estimated based on an empiric dietary inflammatory pattern (EDIP) score, calculated based on intake of 18 foods associated with plasma levels of IL6, CRP, and TNFRSF1B. An inflammatory environment in the colon (based on increased levels of IL6, CRP, and TNFRSF1B in peripheral blood) contributes to impairment of the mucosal barrier and altered immune cell responses, affecting the composition of the intestinal microbiota. Colonization by Fusobacterium nucleatum has been associated with the presence and features of colorectal adenocarcinoma. We investigated the association between diets that promote inflammation (based on EDIP score) and colorectal cancer subtypes classified by level of F nucleatum in the tumor microenvironment. We calculated EDIP scores based on answers to questionnaires collected from participants in the Nurses' Health Study (through June 1, 2012) and the Health Professionals Follow-up Study (through January 31, 2012). Participants in both cohorts reported diagnoses of rectal or colon cancer in biennial questionnaires; deaths from unreported colorectal cancer cases were identified through the National Death Index and next of kin. Colorectal tumor tissues were collected from hospitals where the patients underwent tumor resection and F nucleatum DNA was quantified by a polymerase chain reaction assay. We used multivariable duplication-method Cox proportional hazard regression to assess the associations of EDIP scores with risks of colorectal cancer subclassified by F nucleatum status. During 28 years of follow-up evaluation of 124,433 participants, we documented 951 incident cases of colorectal carcinoma with tissue F nucleatum data. Higher EDIP scores were associated with increased risk of F nucleatum-positive colorectal tumors (P trend = .03); for subjects in the highest vs lowest EDIP score tertiles, the hazard ratio for F nucleatum-positive colorectal tumors was 1.63 (95% CI, 1.03-2.58). EDIP scores did not associate with F nucleatum-negative tumors (P trend = .44). High EDIP scores associated with proximal F nucleatum-positive colorectal tumors but not with proximal F nucleatum-negative colorectal tumors (P heterogeneity = .003). Diets that promote intestinal inflammation, based on EDIP score, are associated with increased risk of F nucleatum-positive colorectal carcinomas, but not carcinomas that do not contain these bacteria. These findings indicate that diet-induced intestinal inflammation alters the gut microbiome to contribute to colorectal carcinogenesis; nutritional interventions might be used in precision medicine and cancer prevention. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
Massie, Allan B; Luo, Xun; Alejo, Jennifer L; Poon, Anna K; Cameron, Andrew M; Segev, Dorry L
2015-05-01
Liver allocation is based on current Model for End-Stage Liver Disease (MELD) scores, with priority in the case of a tie being given to those waiting the longest with a given MELD score. We hypothesized that this priority might not reflect risk: registrants whose MELD score has recently increased receive lower priority but might have higher wait-list mortality. We studied wait-list and posttransplant mortality in 69,643 adult registrants from 2002 to 2013. By likelihood maximization, we empirically defined a MELD spike as a MELD increase ≥ 30% over the previous 7 days. At any given time, only 0.6% of wait-list patients experienced a spike; however, these patients accounted for 25% of all wait-list deaths. Registrants who reached a given MELD score after a spike had higher wait-list mortality in the ensuing 7 days than those with the same resulting MELD score who did not spike, but they had no difference in posttransplant mortality. The spike-associated wait-list mortality increase was highest for registrants with medium MELD scores: specifically, 2.3-fold higher (spike versus no spike) for a MELD score of 10, 4.0-fold higher for a MELD score of 20, and 2.5-fold higher for a MELD score of 30. A model incorporating the MELD score and spikes predicted wait-list mortality risk much better than a model incorporating only the MELD score. Registrants with a sudden MELD increase have a higher risk of short-term wait-list mortality than is indicated by their current MELD score but have no increased risk of posttransplant mortality; allocation policy should be adjusted accordingly. © 2015 American Association for the Study of Liver Diseases.
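The spike definition is operational enough to code directly: a registrant spikes when the current MELD score is at least 30% above the score from 7 days earlier. The sketch below applies that rule to a hypothetical daily score series; the handling of irregularly spaced laboratory dates in the registry data is simplified away here.

```python
def meld_spikes(daily_meld, window_days=7, rise=0.30):
    """Return indices of days where MELD is >= 30% above the value `window_days` earlier."""
    spikes = []
    for day in range(window_days, len(daily_meld)):
        earlier = daily_meld[day - window_days]
        if earlier > 0 and daily_meld[day] >= earlier * (1 + rise):
            spikes.append(day)
    return spikes

# Hypothetical daily MELD trajectory for one registrant.
trajectory = [14, 14, 15, 15, 14, 15, 15, 16, 17, 21, 22, 20, 19, 19]
print("spike on days:", meld_spikes(trajectory))
```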
Guenther, Kilian; Vach, Werner; Kachel, Walter; Bruder, Ingo; Hentschel, Roland
2015-01-01
Comparing outcomes at different neonatal intensive care units (NICUs) requires adjustment for intrinsic risk. The Clinical Risk Index for Babies (CRIB) is a widely used risk model, but it has been criticized for being affected by therapeutic decisions. The Prematurity Risk Evaluation Measure (PREM) is not supposed to be prone to treatment bias, but has not yet been validated. We aimed to validate the PREM, compare its accuracy to that of the original and modified versions of the CRIB and CRIB-II, and examine the congruence of risk categorization. Very-low-birth-weight (VLBW) infants with a gestational age (GA) <33 weeks, who were admitted to NICUs in Baden-Württemberg from 2003 to 2008, were identified from the German neonatal quality assurance program. CRIB, CRIB-II and PREM scores were calculated and modified. Omitting variables that directly reflected therapeutic decisions [the applied fraction of inspired oxygen (FiO2)] or that may have been prone to early-treatment bias (base excess and temperature), non-NICU-therapy-influenced scores were obtained. Score performance was assessed by the area under their ROC curve (AUC). The CRIB showed the largest AUC (0.89), which dropped significantly (to 0.85) after omitting the FiO2. The PREM birth condition model, PREM(bcm) (AUC 0.86), and the PREM birth model, PREM(bm) (AUC 0.82), also demonstrated good discrimination. PREM(bm) was superior to other non-therapy-affected scores and to GA, particularly in infants with <750 g birth weight. Congruence of risk categorization was low, especially among higher-risk cases. The CRIB score had the largest AUC, resulting from its inclusion of FiO2. PREM(bm), as the most accurate score among those unaffected by early treatment, seems to be a good alternative for strict risk adjustment in NICU auditing. It could be useful to combine scores. © 2015 S. Karger AG, Basel.
Influence of household rat infestation on leptospira transmission in the urban slum environment.
Costa, Federico; Ribeiro, Guilherme S; Felzemburgh, Ridalva D M; Santos, Norlan; Reis, Renato Barbosa; Santos, Andreia C; Fraga, Deborah Bittencourt Mothe; Araujo, Wildo N; Santana, Carlos; Childs, James E; Reis, Mitermayer G; Ko, Albert I
2014-12-01
The Norway rat (Rattus norvegicus) is the principal reservoir for leptospirosis in many urban settings. Few studies have identified markers for rat infestation in slum environments while none have evaluated the association between household rat infestation and Leptospira infection in humans or the use of infestation markers as a predictive model to stratify risk for leptospirosis. We enrolled a cohort of 2,003 urban slum residents from Salvador, Brazil in 2004, and followed the cohort during four annual serosurveys to identify serologic evidence for Leptospira infection. In 2007, we performed rodent infestation and environmental surveys of 80 case households, in which resided at least one individual with Leptospira infection, and 109 control households. In the case-control study, signs of rodent infestation were identified in 78% and 42% of the households, respectively. Regression modeling identified the presence of R. norvegicus feces (OR, 4.95; 95% CI, 2.13-11.47), rodent burrows (2.80; 1.06-7.36), access to water (2.79; 1.28-6.09), and un-plastered walls (2.71; 1.21-6.04) as independent risk factors associated with Leptospira infection in a household. We developed a predictive model for infection, based on assigning scores to each of the rodent infestation risk factors. Receiver operating characteristic curve analysis found that the prediction score produced a good/excellent fit based on an area under the curve of 0.78 (0.71-0.84). Our study found that a high proportion of slum households were infested with R. norvegicus and that rat infestation was significantly associated with the risk of Leptospira infection, indicating that high level transmission occurs among slum households. We developed an easily applicable prediction score based on rat infestation markers, which identified households with highest infection risk. The use of the prediction score in community-based screening may therefore be an effective risk stratification strategy for targeting control measures in slum settings of high leptospirosis transmission.
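The prediction score assigns points to each infestation or environmental marker and sums them per household. The abstract does not report the point values, so the sketch below uses hypothetical weights (loosely proportional to the reported odds ratios) and a hypothetical cut-off, purely to illustrate the scoring and risk-flagging step.

```python
# Hypothetical points per marker (not the published weights), loosely scaled to the ORs.
POINTS = {
    "r_norvegicus_feces": 5,   # OR ~4.95
    "rodent_burrows": 3,       # OR ~2.80
    "access_to_water": 3,      # OR ~2.79
    "unplastered_walls": 3,    # OR ~2.71
}
HIGH_RISK_CUTOFF = 8           # hypothetical threshold for "highest infection risk"

def household_score(markers: dict) -> int:
    """Sum the points for every marker observed in the household survey."""
    return sum(POINTS[m] for m, present in markers.items() if present)

household = {"r_norvegicus_feces": True, "rodent_burrows": False,
             "access_to_water": True, "unplastered_walls": True}
score = household_score(household)
print(f"score = {score}, high infection risk: {score >= HIGH_RISK_CUTOFF}")
```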
Joint Associations of Diet, Lifestyle, and Genes with Age-Related Macular Degeneration.
Meyers, Kristin J; Liu, Zhe; Millen, Amy E; Iyengar, Sudha K; Blodi, Barbara A; Johnson, Elizabeth; Snodderly, D Max; Klein, Michael L; Gehrs, Karen M; Tinker, Lesley; Sarto, Gloria E; Robinson, Jennifer; Wallace, Robert B; Mares, Julie A
2015-11-01
Unhealthy lifestyles have been associated with increased odds for age-related macular degeneration (AMD). Whether this association is modified by genetic risk for AMD is unknown and was investigated. Interactions between healthy lifestyles and AMD risk genotypes were studied in relation to the prevalence of AMD, assessed 6 years later. Women 50 to 79 years of age in the Carotenoids in Age-Related Eye Disease Study with exposure and AMD data (n=1663). Healthy lifestyle scores (0-6 points) were assigned based on Healthy Eating Index scores, physical activity (metabolic equivalent of task hours/week), and smoking pack years assessed in 1994 and 1998. Genetic risk was based on Y402H in complement factor H (CFH) and A69S in age-related maculopathy susceptibility locus 2 (ARMS2). Additive and multiplicative interactions in odds ratios were assessed using the synergy index and a multiplicative interaction term, respectively. AMD presence and severity were assessed from grading of stereoscopic fundus photographs taken in 2001-2004. AMD was present in 337 women, 91% of whom had early AMD. The odds of AMD were 3.3 times greater (95% confidence interval [CI], 1.8-6.1) in women with both a low healthy lifestyle score (0-2) and the high-risk CFH genotype (CC), relative to those who had low genetic risk (TT) and high healthy lifestyle scores (4-6). There were no significant additive (synergy index [SI], 1.08; 95% CI, 0.70-1.67) or multiplicative (P interaction=0.94) interactions in the full sample. However, when limiting the sample to women with stable diets before AMD assessment (n=728), the odds for AMD associated with low healthy lifestyle scores and the high-risk CFH genotype were strengthened (odds ratio, 4.6; 95% CI, 1.8-11.6) and the synergy index was significant (SI, 1.34; 95% CI, 1.05-1.70). Adjusting for dietary lutein and zeaxanthin attenuated, and therefore partially explained, the joint association. There were no significant additive or multiplicative interactions for ARMS2 and lifestyle score. Having unhealthy lifestyles and 2 CFH risk alleles increased AMD risk (primarily in the early stages) in an additive or greater (synergistic) manner. However, unhealthy lifestyles increased AMD risk regardless of AMD risk genotype. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
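The synergy index used to test additive interaction is Rothman's formula, SI = (OR_both − 1) / [(OR_A − 1) + (OR_B − 1)], with SI > 1 indicating a greater-than-additive joint effect. The sketch below simply evaluates that formula on hypothetical odds ratios; it is not a re-analysis of the CAREDS data.

```python
def synergy_index(or_both: float, or_a_only: float, or_b_only: float) -> float:
    """Rothman's synergy index for additive interaction on the odds-ratio scale."""
    return (or_both - 1) / ((or_a_only - 1) + (or_b_only - 1))

# Hypothetical ORs relative to the doubly unexposed group
# (low-risk genotype and healthy lifestyle).
si = synergy_index(or_both=3.3, or_a_only=1.6, or_b_only=1.9)
print(f"synergy index = {si:.2f} (>1 suggests a synergistic joint effect)")
```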
Nuotio, Joel; Pitkänen, Niina; Magnussen, Costan G; Buscot, Marie-Jeanne; Venäläinen, Mikko S; Elo, Laura L; Jokinen, Eero; Laitinen, Tomi; Taittonen, Leena; Hutri-Kähönen, Nina; Lyytikäinen, Leo-Pekka; Lehtimäki, Terho; Viikari, Jorma S; Juonala, Markus; Raitakari, Olli T
2017-06-01
Dyslipidemia is a major modifiable risk factor for cardiovascular disease. We examined whether the addition of novel single-nucleotide polymorphisms for blood lipid levels enhances the prediction of adult dyslipidemia in comparison to childhood lipid measures. Two thousand four hundred and twenty-two participants of the Cardiovascular Risk in Young Finns Study who had participated in 2 surveys held during childhood (in 1980 when aged 3-18 years and in 1986) and at least once in a follow-up study in adulthood (2001, 2007, and 2011) were included. We examined whether inclusion of a lipid-specific weighted genetic risk score based on 58 single-nucleotide polymorphisms for low-density lipoprotein cholesterol, 71 single-nucleotide polymorphisms for high-density lipoprotein cholesterol, and 40 single-nucleotide polymorphisms for triglycerides improved the prediction of adult dyslipidemia compared with clinical childhood risk factors. Adjusting for age, sex, body mass index, physical activity, and smoking in childhood, childhood lipid levels and weighted genetic risk scores were associated with an increased risk of adult dyslipidemia for all lipids. Risk assessment based on 2 childhood lipid measures and the lipid-specific weighted genetic risk scores improved the accuracy of predicting adult dyslipidemia compared with the approach using only childhood lipid measures for low-density lipoprotein cholesterol (area under the receiver-operating characteristic curve 0.806 versus 0.811; P=0.01) and triglycerides (area under the receiver-operating characteristic curve 0.740 versus 0.758; P<0.01). The overall net reclassification improvement and integrated discrimination improvement were significant for all outcomes. The inclusion of weighted genetic risk scores in lipid-screening programs in childhood could modestly improve the identification of those at highest risk of dyslipidemia in adulthood. © 2017 American Heart Association, Inc.
Aqueduct: a methodology to measure and communicate global water risks
NASA Astrophysics Data System (ADS)
Gassert, Francis; Reig, Paul
2013-04-01
The Aqueduct Water Risk Atlas (Aqueduct) is a publicly available, global database and interactive tool that maps indicators of water related risks for decision makers worldwide. Aqueduct makes use of the latest geo-statistical modeling techniques to compute a composite index and translate the most recently available hydrological data into practical information on water related risks for companies, investors, and governments alike. Twelve global indicators are grouped into a Water Risk Framework designed in response to the growing concerns from private sector actors around water scarcity, water quality, climate change, and increasing demand for freshwater. The Aqueduct framework organizes indicators into three categories of risk that bring together multiple dimensions of water related risk into comprehensive aggregated scores and includes indicators of water stress, variability in supply, storage, flood, drought, groundwater, water quality and social conflict, addressing both spatial and temporal variation in water hazards. Indicators are selected based on relevance to water users, availability and robustness of global data sources, and expert consultation, and are collected from existing datasets or derived from a Global Land Data Assimilation System (GLDAS) based integrated water balance model. Indicators are normalized using a threshold approach, and composite scores are computed using a linear aggregation scheme that allows for dynamic weighting to capture users' unique exposure to water hazards. By providing consistent scores across the globe, the Aqueduct Water Risk Atlas enables rapid comparison across diverse aspects of water risk. Companies can use this information to prioritize actions, investors to leverage financial interest to improve water management, and governments to engage with the private sector to seek solutions for more equitable and sustainable water governance. The Aqueduct Water Risk Atlas enables practical applications of scientific data, helping non-expert audiences better understand and evaluate risks facing water users. This presentation will discuss the methodology used to combine the indicator values into aggregated risk scores and lessons learned from working with diverse audiences in academia, development institutions, and the public and private sectors.
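The aggregation scheme described (threshold-based normalization of each indicator followed by a weighted linear combination) can be sketched as below. The indicator names, threshold ladders, and weights are placeholders, not the published Aqueduct parameters.

```python
import bisect

# Hypothetical threshold ladders mapping raw indicator values to 0-5 risk scores.
THRESHOLDS = {
    "baseline_water_stress": [0.1, 0.2, 0.4, 0.8, 1.0],    # withdrawal / available supply
    "interannual_variability": [0.25, 0.5, 0.75, 1.0, 1.5],
    "flood_occurrence": [1, 3, 9, 27, 80],                  # events on record
}

def normalize(indicator: str, raw_value: float) -> int:
    """Convert a raw indicator value to a 0-5 score via its threshold ladder."""
    return bisect.bisect_left(THRESHOLDS[indicator], raw_value)

def composite_score(raw: dict, weights: dict) -> float:
    """Weighted linear aggregation of normalized indicator scores."""
    total_w = sum(weights.values())
    return sum(weights[k] * normalize(k, v) for k, v in raw.items()) / total_w

raw = {"baseline_water_stress": 0.55, "interannual_variability": 0.6, "flood_occurrence": 12}
weights = {"baseline_water_stress": 2.0, "interannual_variability": 1.0, "flood_occurrence": 1.0}
print(f"composite water risk score: {composite_score(raw, weights):.2f} (0 = low, 5 = high)")
```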
Performance of polygenic scores for predicting phobic anxiety.
Walter, Stefan; Glymour, M Maria; Koenen, Karestan; Liang, Liming; Tchetgen Tchetgen, Eric J; Cornelis, Marilyn; Chang, Shun-Chiao; Rimm, Eric; Kawachi, Ichiro; Kubzansky, Laura D
2013-01-01
Anxiety disorders are common, with a lifetime prevalence of 20% in the U.S., and are responsible for substantial burdens of disability, missed work days and health care utilization. To date, no causal genetic variants have been identified for anxiety, anxiety disorders, or related traits. To investigate whether a phobic anxiety symptom score was associated with 3 alternative polygenic risk scores, derived from external genome-wide association studies of anxiety, an internally estimated agnostic polygenic score, or previously identified candidate genes. Longitudinal follow-up study. Using linear and logistic regression we investigated whether phobic anxiety was associated with polygenic risk scores derived from internal, leave-one out genome-wide association studies, from 31 candidate genes, and from out-of-sample genome-wide association weights previously shown to predict depression and anxiety in another cohort. Study participants (n = 11,127) were individuals from the Nurses' Health Study and Health Professionals Follow-up Study. Anxiety symptoms were assessed via the 8-item phobic anxiety scale of the Crown Crisp Index at two time points, from which a continuous phenotype score was derived. We found no genome-wide significant associations with phobic anxiety. Phobic anxiety was also not associated with a polygenic risk score derived from the genome-wide association study beta weights using liberal p-value thresholds; with a previously published genome-wide polygenic score; or with a candidate gene risk score based on 31 genes previously hypothesized to predict anxiety. There is a substantial gap between twin-study heritability estimates of anxiety disorders ranging between 20-40% and heritability explained by genome-wide association results. New approaches such as improved genome imputations, application of gene expression and biological pathways information, and incorporating social or environmental modifiers of genetic risks may be necessary to identify significant genetic predictors of anxiety.
Antithrombotic Therapy for Atrial Fibrillation
You, John J.; Singer, Daniel E.; Howard, Patricia A.; Lane, Deirdre A.; Eckman, Mark H.; Fang, Margaret C.; Hylek, Elaine M.; Schulman, Sam; Go, Alan S.; Hughes, Michael; Spencer, Frederick A.; Manning, Warren J.; Halperin, Jonathan L.
2012-01-01
Background: The risk of stroke varies considerably across different groups of patients with atrial fibrillation (AF). Antithrombotic prophylaxis for stroke is associated with an increased risk of bleeding. We provide recommendations for antithrombotic treatment based on net clinical benefit for patients with AF at varying levels of stroke risk and in a number of common clinical scenarios. Methods: We used the methods described in the Methodology for the Development of Antithrombotic Therapy and Prevention of Thrombosis Guidelines: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines article of this supplement. Results: For patients with nonrheumatic AF, including those with paroxysmal AF, who are (1) at low risk of stroke (eg, CHADS2 [congestive heart failure, hypertension, age ≥ 75 years, diabetes mellitus, prior stroke or transient ischemic attack] score of 0), we suggest no therapy rather than antithrombotic therapy, and for patients choosing antithrombotic therapy, we suggest aspirin rather than oral anticoagulation or combination therapy with aspirin and clopidogrel; (2) at intermediate risk of stroke (eg, CHADS2 score of 1), we recommend oral anticoagulation rather than no therapy, and we suggest oral anticoagulation rather than aspirin or combination therapy with aspirin and clopidogrel; and (3) at high risk of stroke (eg, CHADS2 score of ≥ 2), we recommend oral anticoagulation rather than no therapy, aspirin, or combination therapy with aspirin and clopidogrel. Where we recommend or suggest in favor of oral anticoagulation, we suggest dabigatran 150 mg bid rather than adjusted-dose vitamin K antagonist therapy. Conclusions: Oral anticoagulation is the optimal choice of antithrombotic therapy for patients with AF at high risk of stroke (CHADS2 score of ≥ 2). At lower levels of stroke risk, antithrombotic treatment decisions will require a more individualized approach. PMID:22315271
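The CHADS2 tally used for the risk strata above assigns 1 point each for congestive heart failure, hypertension, age of 75 years or more, and diabetes, and 2 points for prior stroke or transient ischemic attack. A minimal sketch of that arithmetic:

```python
def chads2(chf, hypertension, age, diabetes, prior_stroke_or_tia):
    """CHADS2: 1 point each for CHF, hypertension, age >= 75, and diabetes; 2 points for prior stroke/TIA."""
    score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
    score += 2 * int(prior_stroke_or_tia)
    return score

# Example: a 68-year-old with hypertension and no other risk factors scores 1
# (intermediate risk under the thresholds discussed above).
print(chads2(chf=False, hypertension=True, age=68, diabetes=False, prior_stroke_or_tia=False))
```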
Peñas-Lledó, Eva; Bulik, Cynthia M; Lichtenstein, Paul; Larsson, Henrik; Baker, Jessica H
2015-09-01
This study explored the cross-sectional and predictive effect of drive for thinness and/or negative affect scores on the development of self-reported anorexia nervosa (AN) and bulimia nervosa (BN). K-means clustering was used to group the Eating Disorder Inventory-Drive for Thinness (DT) and Child Behavior Checklist Anxious/Depressed (A/D) scores from 615 unrelated female twins at age 16-17. Logistic regressions were used to assess the effect of these clusters on self-reported eating disorder diagnosis at ages 16-17 (n = 565) and 19-20 (n = 451). DT and A/D scores were grouped into four clusters: Mild (scores lower than the 90th percentile on both scales), DT (higher scores only on DT), A/D (higher scores only on A/D), and DT-A/D (higher scores on both the DT and A/D scales). DT and DT-A/D clusters at age 16-17 were associated cross-sectionally with AN and both cross-sectionally and longitudinally with BN. The DT-A/D cluster had the highest prevalence of AN at follow-up compared with all other clusters. Similarly, an interaction was observed between DT and A/D that predicted risk for AN. Having elevated DT and A/D scores may increase risk for eating disorder symptomatology above and beyond a high score on either alone. Findings suggest that cluster modeling based on DT and A/D may be useful to inform novel intervention strategies for AN and BN in adolescents. © 2015 Wiley Periodicals, Inc.
Peñas-Lledó, Eva; Bulik, Cynthia M.; Lichtenstein, Paul; Larsson, Henrik; Baker, Jessica H.
2015-01-01
Objective The present study explored the cross-sectional and predictive effect of drive for thinness and/or negative affect scores on the development of self-reported anorexia nervosa (AN) and bulimia nervosa (BN). Method K-means clustering was used to group the Eating Disorder Inventory-Drive for Thinness (DT) and Child Behavior Checklist Anxious/Depressed (A/D) scores from 615 unrelated female twins at age 16–17. Logistic regressions were used to assess the effect of these clusters on self-reported eating disorder diagnosis at ages 16–17 (n=565) and 19–20 (n=451). Results DT and A/D scores were grouped into four clusters: Mild (scores lower than the 90th percentile on both scales), DT (higher scores only on DT), A/D (higher scores only on A/D), and DT-A/D (higher scores on both the DT and A/D scales). DT and DT-A/D clusters at age 16–17 were associated cross-sectionally with AN and both cross-sectionally and longitudinally with BN. The DT-A/D cluster had the highest prevalence of AN at follow-up compared with all other clusters. Similarly, an interaction was observed between DT and A/D that predicted risk for AN. Discussion Having elevated DT and A/D scores may increase risk for eating disorder symptomatology above and beyond a high score on either alone. Findings suggest that cluster modeling based on DT and A/D may be useful to inform novel intervention strategies for AN and BN in adolescents. PMID:26013185
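A hedged sketch of the clustering step described in the two records above, assuming two score columns and four clusters, with the 90th-percentile labelling applied afterwards; the data are simulated, not the twin cohort's.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Simulated Drive for Thinness (DT) and Anxious/Depressed (A/D) scores for 615 twins:
# a low-scoring majority plus small elevated-DT, elevated-A/D, and elevated-both subgroups.
groups = [
    rng.normal([4, 3], 1.5, size=(495, 2)),   # mild
    rng.normal([15, 4], 1.5, size=(40, 2)),   # high DT only
    rng.normal([4, 12], 1.5, size=(40, 2)),   # high A/D only
    rng.normal([16, 13], 1.5, size=(40, 2)),  # high on both
]
scores = np.clip(np.vstack(groups), 0, None)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)

# Label each cluster by whether its centre exceeds the cohort's 90th percentile on each scale
p90 = np.percentile(scores, 90, axis=0)
for k, centre in enumerate(km.cluster_centers_):
    label = ("DT-A/D" if (centre >= p90).all()
             else "DT" if centre[0] >= p90[0]
             else "A/D" if centre[1] >= p90[1]
             else "Mild")
    print(f"cluster {k}: centre={centre.round(1)}, n={np.sum(km.labels_ == k)}, label={label}")
```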
Grove, Erik L; Hansen, Peter Riis; Olesen, Jonas B; Ahlehoff, Ole; Selmer, Christian; Lindhardsen, Jesper; Madsen, Jan Kyst; Køber, Lars; Torp-Pedersen, Christian; Gislason, Gunnar H
2011-01-01
Objective To examine the effect of proton pump inhibitors on adverse cardiovascular events in aspirin treated patients with first time myocardial infarction. Design Retrospective nationwide propensity score matched study based on administrative data. Setting All hospitals in Denmark. Participants All aspirin treated patients surviving 30 days after a first myocardial infarction from 1997 to 2006, with follow-up for one year. Patients treated with clopidogrel were excluded. Main outcome measures The risk of the combined end point of cardiovascular death, myocardial infarction, or stroke associated with use of proton pump inhibitors was analysed using Kaplan-Meier analysis, Cox proportional hazard models, and propensity score matched Cox proportional hazard models. Results 3366 of 19 925 (16.9%) aspirin treated patients experienced recurrent myocardial infarction, stroke, or cardiovascular death. The hazard ratio for the combined end point in patients receiving proton pump inhibitors based on the time dependent Cox proportional hazard model was 1.46 (1.33 to 1.61; P<0.001) and for the propensity score matched model based on 8318 patients it was 1.61 (1.45 to 1.79; P<0.001). A sensitivity analysis showed no increase in risk related to use of H2 receptor blockers (1.04, 0.79 to 1.38; P=0.78). Conclusion In aspirin treated patients with first time myocardial infarction, treatment with proton pump inhibitors was associated with an increased risk of adverse cardiovascular events. PMID:21562004
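A minimal sketch of the propensity-score matching idea underlying the analysis above (not the authors' implementation): model the probability of receiving a proton pump inhibitor from baseline covariates, then pair treated and untreated patients with similar estimated probabilities before comparing outcomes. The covariates, caliper, and 1:1 nearest-neighbour scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_match(X, treated, caliper=0.05):
    """1:1 nearest-neighbour matching on the estimated propensity score (with replacement)."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    dist, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    keep = dist.ravel() <= caliper
    return t_idx[keep], c_idx[match.ravel()[keep]]

# Toy covariates (age, comorbidity count) and treatment indicator
rng = np.random.default_rng(2)
X = np.column_stack([rng.normal(65, 10, 500), rng.poisson(2, 500)])
treated = rng.binomial(1, 0.3, 500)
t_matched, c_matched = propensity_match(X, treated)
print(f"matched pairs: {len(t_matched)}")
```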
Tomita, Hirofumi; Okumura, Ken; Inoue, Hiroshi; Atarashi, Hirotsugu; Yamashita, Takeshi; Origasa, Hideki; Tsushima, Eiki
2015-01-01
Because the current Japanese guideline recommends CHADS2 score-based risk stratification in nonvalvular atrial fibrillation (NVAF) patients and does not list female sex as a risk factor for thromboembolic events, we designed the present study to compare the CHA2DS2-VASc and CHA2DS2-VA scores in the J-RHYTHM Registry. We prospectively assessed the incidence of thromboembolic events for 2 years in 997 NVAF patients without warfarin treatment (age 68±12 years, 294 females). The predictive value of the CHA2DS2-VASc and CHA2DS2-VA scores for thromboembolic events was evaluated by c-statistic difference and net reclassification improvement (NRI). Thromboembolic events occurred in 7/294 females (1.2%/year) and 23/703 males (1.6%/year) (odds ratio 0.72 for females versus males, 95% confidence interval (CI) 0.28-1.62, P=0.44). No sex difference was found in patient groups stratified by CHA2DS2-VASc and CHA2DS2-VA scores. Both the c-statistic difference (0.029, Z=2.3, P=0.02) and the NRI (0.11, 95% CI 0.01-0.20, P=0.02) were significant, with the CHA2DS2-VA score being superior to the CHA2DS2-VASc score. In patients with CHA2DS2-VASc scores 0 and 1 (n=374), the c-statistic difference (0.053, Z=6.6, P<0.0001) and NRI (0.11, 95% CI 0.07-0.14, P<0.0001) were markedly significant, again supporting the superiority of the CHA2DS2-VA score over the CHA2DS2-VASc score. In Japanese NVAF patients, the CHA2DS2-VA score, a risk scoring system excluding female sex from CHA2DS2-VASc, may be more useful for risk stratification of thromboembolic events than the CHA2DS2-VASc score, especially in identifying truly low-risk patients.
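The two scores compared above differ only in whether female sex contributes a point; a hedged sketch of both tallies using the standard point assignments (worth checking against the guideline text before any clinical use):

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia, vascular_disease, female):
    """CHA2DS2-VASc: age 65-74 = 1 point, age >= 75 = 2; prior stroke/TIA = 2; other items 1 point each."""
    score = int(chf) + int(hypertension) + int(diabetes) + int(vascular_disease)
    score += 2 if age >= 75 else 1 if age >= 65 else 0
    score += 2 * int(stroke_tia)
    score += int(female)          # the sex-category point
    return score

def cha2ds2_va(chf, hypertension, age, diabetes, stroke_tia, vascular_disease):
    """CHA2DS2-VA: identical, but female sex is not counted."""
    return cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia, vascular_disease, female=False)

# Example: 70-year-old woman with hypertension only
print(cha2ds2_vasc(False, True, 70, False, False, False, female=True))  # 3
print(cha2ds2_va(False, True, 70, False, False, False))                 # 2
```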
[Prediction of postoperative nausea and vomiting using an artificial neural network].
Traeger, M; Eberhart, A; Geldner, G; Morin, A M; Putzke, C; Wulf, H; Eberhart, L H J
2003-12-01
Postoperative nausea and vomiting (PONV) are still frequent side-effects after general anaesthesia. These unpleasant symptoms can be substantially reduced using a multimodal antiemetic approach. However, such efforts should be restricted to patients at risk for PONV. Thus, predictive models are required to identify these patients before surgery. So far, all risk scores to predict PONV have been based on results of logistic regression analysis. Artificial neural networks (ANN) can also be used for prediction since they can take into account complex and non-linear relationships between predictive variables and the dependent item. This study presents the development of an ANN to predict PONV and compares its performance with two established simplified risk scores (Apfel's and Koivuranta's scores). The development of the ANN was based on data from 1,764 patients undergoing elective surgical procedures under balanced anaesthesia. The ANN was trained with 1,364 datasets and a further 400 were used for supervising the learning process. The best-performing of the 49 ANNs developed was compared with the established risk scores with respect to practicability, discrimination (by means of the area under a receiver operating characteristics curve) and calibration properties (by means of a weighted linear regression between the predicted and the actual incidences of PONV). The ANN tested showed a statistically significant (p<0.0001) and clinically relevant higher discriminating power (0.74; 95% confidence interval: 0.70-0.78) than the Apfel score (0.66; 95% CI: 0.61-0.71) or Koivuranta's score (0.69; 95% CI: 0.65-0.74). Furthermore, the agreement between the actual incidences of PONV and those predicted by the ANN was also better and near to an ideal fit, represented by the equation y=1.0x+0. The equations for the calibration curves were: ANN y=1.11x+0, Apfel y=0.71x+1, Koivuranta y=0.86x-5. The improved predictive accuracy achieved by the ANN is clinically relevant. However, the disadvantages of this system prevail because a computer is required for risk calculation. Thus, we still recommend the use of one of the simplified risk scores for clinical practice.
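As a rough, non-authoritative illustration of the approach described above, the sketch below fits a small feed-forward network to simulated binary PONV outcomes and reports the area under the ROC curve used for the comparison with the simplified scores. The predictors, effect sizes, and network size are assumptions, not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Simulated binary predictors: female sex, non-smoker, history of PONV/motion sickness, postoperative opioids
rng = np.random.default_rng(3)
X = rng.binomial(1, 0.5, size=(1764, 4))
logit = -2.0 + X @ np.array([0.9, 0.8, 1.0, 0.7])        # assumed effect sizes, illustrative only
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))             # simulated PONV outcomes

# Hold out 400 cases for evaluation, mirroring the train/supervision split sizes above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=400, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, ann.predict_proba(X_test)[:, 1])
print(f"ANN discrimination (AUC): {auc:.2f}")
```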
Predicting outcome of status epilepticus.
Leitinger, M; Kalss, G; Rohracher, A; Pilz, G; Novak, H; Höfler, J; Deak, I; Kuchukhidze, G; Dobesberger, J; Wakonig, A; Trinka, E
2015-08-01
Status epilepticus (SE) is a frequent neurological emergency complicated by high mortality and often poor functional outcome in survivors. The aim of this study was to review available clinical scores to predict outcome. Literature review: a PubMed search was conducted with the terms "score", "outcome", and "status epilepticus" (April 9th, 2015). Publications with abstracts available in English were included, with no other language restrictions and no restrictions concerning the patients investigated. Two scores were identified: the Status Epilepticus Severity Score (STESS) and the Epidemiology-based Mortality score in SE (EMSE). A comprehensive comparison of test parameters concerning performance, options, and limitations was performed. EMSE allows detailed individualization of risk factors and was significantly superior to STESS in a retrospective explorative study. In particular, EMSE is very good at detecting both good and bad outcome, whereas the ability of STESS to detect bad outcome is limited by a ceiling effect and uncertainty about the correct cutoff value. EMSE can be adapted to different regions of the world and to advances in medicine, as new data emerge. In addition, we designed a reporting standard for status epilepticus to enhance acquisition and communication of outcome-relevant data. A data acquisition sheet, for use from patient admission in the emergency room through the EEG lab to the intensive care unit, is provided for optimized data collection. STESS is easy to perform and predicts bad outcome, but has a low predictive value for good outcomes. EMSE is superior to STESS in predicting good or bad outcome but needs marginally more time to perform. EMSE may prove very useful for risk stratification in interventional studies and is recommended for individual outcome prediction. Prospective validation in different cohorts is needed for EMSE, whereas STESS needs further validation in cohorts with a wider range of etiologies. This article is part of a Special Issue entitled "Status Epilepticus". Copyright © 2015. Published by Elsevier Inc.
A Simple Symptom Score for Acute HIV Infection in a San Diego Community Based Screening Program.
Lin, Timothy C; Gianella, Sara; Tenenbaum, Tara; Little, Susan J; Hoenigl, Martin
2017-12-25
Treatment of acute HIV infection (AHI) decreases transmission and preserves immune function, but AHI diagnosis remains resource-intensive. Risk-based scores predictive for AHI have been described for high-risk groups; however, symptom-based scores could be more generalizable across populations. Adults who tested either positive for AHI (antibody-negative, HIV nucleic acid test [NAT]-positive) or HIV NAT-negative with the community-based Early Test HIV screening program in San Diego were retrospectively randomized 2:1 into derivation and validation sets. In the derivation set, symptoms significant for AHI in a multivariate logistic regression model were each assigned a score value (the odds ratio rounded to the nearest integer). The score was assessed in the validation set using receiver operating characteristics and areas under the curve (AUC). An optimal cut-off score was found using Youden's index. Of 998 participants (including 737 men who have sex with men (MSM), 149 non-MSM men, 109 ciswomen, and 3 trans women), 113 had AHI (including 109 MSM). Compared to HIV-negative cases, AHI cases reported more symptoms (median 4 vs 0, p<0.01). Fever, myalgia, and weight loss were significantly associated with AHI in the multivariate model and corresponded to 11, 8, and 4 score points, respectively. The summed score yielded an AUC of 0.85 (95%CI 0.77-0.93). A score of ≥11 was 72% sensitive and 96% specific, with a diagnostic odds ratio of 70.27 (95%CI 28.14-175.93). A 3-symptom score accurately predicted AHI in a community-based screening program and may inform allocation of resources in settings that do not routinely screen for AHI. © The Author(s) 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.
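A minimal sketch of the cutoff-selection step named above: evaluate sensitivity and specificity at each candidate score and keep the cutoff that maximizes Youden's J (sensitivity + specificity - 1). The symptom scores below are toy values that only borrow the point weights quoted in the abstract.

```python
import numpy as np

def youden_optimal_cutoff(scores, labels):
    """Return the score cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    best_cutoff, best_j = None, -1.0
    for cutoff in np.unique(scores):
        pred = scores >= cutoff
        sens = np.mean(pred[labels == 1])    # true-positive rate among cases
        spec = np.mean(~pred[labels == 0])   # true-negative rate among non-cases
        j = sens + spec - 1
        if j > best_j:
            best_cutoff, best_j = cutoff, j
    return best_cutoff, best_j

# Toy summed symptom scores (fever=11, myalgia=8, weight loss=4) for non-cases and AHI cases
neg = np.array([0, 0, 0, 4, 8, 0, 11, 0, 4, 0])
pos = np.array([11, 19, 23, 12, 8, 15, 11, 4])
scores = np.concatenate([neg, pos])
labels = np.concatenate([np.zeros(len(neg), dtype=int), np.ones(len(pos), dtype=int)])
print(youden_optimal_cutoff(scores, labels))
```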
Hibbard, Judith H; Greene, Jessica; Sacks, Rebecca; Overton, Valerie; Parrotta, Carmen D
2016-03-01
We explored whether supplementing a clinical risk score with a behavioral measure could improve targeting of the patients most in need of supports that reduce their risk of costly service utilization. Using data from a large health system that determines patient self-management capability using the Patient Activation Measure, we examined utilization of hospital and emergency department care by the 15 percent of patients with the highest clinical risk scores. After controlling for risk scores and placing patients within segments based on their level of activation in 2011, we found that the lower the activation level, the higher the utilization and cost of hospital services in each of the following three years. These findings demonstrate that adding a measure of patient self-management capability to a risk assessment can improve prediction of high care costs and inform actions to better meet patient needs. Project HOPE—The People-to-People Health Foundation, Inc.
Carbone, Marco; Sharp, Stephen J; Flack, Steve; Paximadas, Dimitrios; Spiess, Kelly; Adgey, Carolyn; Griffiths, Laura; Lim, Reyna; Trembling, Paul; Williamson, Kate; Wareham, Nick J; Aldersley, Mark; Bathgate, Andrew; Burroughs, Andrew K; Heneghan, Michael A; Neuberger, James M; Thorburn, Douglas; Hirschfield, Gideon M; Cordell, Heather J; Alexander, Graeme J; Jones, David E J; Sandford, Richard N; Mells, George F
2016-03-01
The biochemical response to ursodeoxycholic acid (UDCA)--so-called "treatment response"--strongly predicts long-term outcome in primary biliary cholangitis (PBC). Several long-term prognostic models based solely on the treatment response have been developed that are widely used to risk stratify PBC patients and guide their management. However, they do not take other prognostic variables into account, such as the stage of the liver disease. We sought to improve existing long-term prognostic models of PBC using data from the UK-PBC Research Cohort. We performed Cox's proportional hazards regression analysis of diverse explanatory variables in a derivation cohort of 1,916 UDCA-treated participants. We used nonautomatic backward selection to derive the best-fitting Cox model, from which we derived a multivariable fractional polynomial model. We combined linear predictors and baseline survivor functions in equations to score the risk of a liver transplant or liver-related death occurring within 5, 10, or 15 years. We validated these risk scores in an independent cohort of 1,249 UDCA-treated participants. The best-fitting model consisted of the baseline albumin and platelet count, as well as the bilirubin, transaminases, and alkaline phosphatase, after 12 months of UDCA. In the validation cohort, the 5-, 10-, and 15-year risk scores were highly accurate (areas under the curve: >0.90). The prognosis of PBC patients can be accurately evaluated using the UK-PBC risk scores. They may be used to identify high-risk patients for closer monitoring and second-line therapies, as well as low-risk patients who could potentially be followed up in primary care. © 2015 by the American Association for the Study of Liver Diseases.
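The risk equations described above combine a linear predictor from the Cox model with a baseline survivor function at the chosen horizon. A hedged sketch of that final step follows; the covariates, coefficients, mean linear predictor, and baseline survival are made up for illustration and are not the UK-PBC values.

```python
import math

def cox_risk(covariates, coefficients, baseline_survival, mean_lp=0.0):
    """
    Absolute event risk at a fixed horizon from a Cox model:
    risk = 1 - S0(t) ** exp(lp - mean_lp), where lp is the linear predictor.
    All numbers below are illustrative placeholders, not the published UK-PBC values.
    """
    lp = sum(coefficients[name] * value for name, value in covariates.items())
    return 1.0 - baseline_survival ** math.exp(lp - mean_lp)

# Hypothetical coefficients and one hypothetical patient (values on the log scale where named so)
coefficients = {"log_bilirubin_12m": 1.1, "log_alk_phos_12m": 0.6, "baseline_albumin": -0.9}
patient = {"log_bilirubin_12m": math.log(1.8), "log_alk_phos_12m": math.log(2.1), "baseline_albumin": 0.9}
print(f"10-year risk: {cox_risk(patient, coefficients, baseline_survival=0.95, mean_lp=0.4):.1%}")
```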
Willaert, Willem I M; Cheshire, Nicholas J; Aggarwal, Rajesh; Van Herzeele, Isabelle; Stansby, Gerard; Macdonald, Sumaira; Vermassen, Frank E
2012-12-01
Carotid artery stenting (CAS) is a technically demanding procedure with a risk of periprocedural stroke. A scoring system based on anatomic criteria has been developed to facilitate patient selection for CAS. Advancements in simulation science also enable case evaluation through patient-specific virtual reality (VR) rehearsal on an endovascular simulator. This study aimed to validate the anatomic scoring system for CAS using the patient-specific VR technology. Three patients were selected and graded according to the CAS scoring system (maximum score, 9): one easy (score, <4.9), one intermediate (score, 5.0-5.9), and one difficult (score, >7.0). The three cases were performed on the simulator in random order by 20 novice interventionalists pretrained in CAS. Technical performances were assessed using simulator-based metrics and expert-based ratings. The interventionalists took significantly longer to perform the difficult CAS case (median, 31.6 vs 19.7 vs 14.6 minutes; P<.0001) compared with the intermediate and easy cases; similarly, more fluoroscopy time (20.7 vs 12.1 vs 8.2 minutes; P<.0001), contrast volume (56.5 vs 51.5 vs 50.0 mL; P=.0060), and roadmaps (10 vs 9 vs 9; P=.0040) were used. The quality of performance declined significantly as the cases became more challenging (score, 24 vs 22 vs 19; P<.0001). The anatomic scoring system for CAS can predict the difficulty of a CAS procedure as measured by patient-specific VR. This scoring system, with or without the additional use of patient-specific VR, can guide novice interventionalists in selecting appropriate patients for CAS. This may reduce the perioperative stroke risk and enhance patient safety. Copyright © 2012 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
Healthy Plant-Based Diets Are Associated with Lower Risk of All-Cause Mortality in US Adults.
Kim, Hyunju; Caulfield, Laura E; Rebholz, Casey M
2018-04-01
Plant-based diets, often referred to as vegetarian diets, are associated with health benefits. However, the association with mortality is less clear. We investigated associations between plant-based diet indexes and all-cause and cardiovascular disease mortality in a nationally representative sample of US adults. Analyses were based on 11,879 participants (20-80 y of age) from NHANES III (1988-1994) linked to data on all-cause and cardiovascular disease mortality through 2011. We constructed an overall plant-based diet index (PDI), which assigns positive scores for plant foods and negative scores for animal foods, on the basis of a food-frequency questionnaire administered at baseline. We also constructed a healthful PDI (hPDI), in which only healthy plant foods received positive scores, and a less-healthful (unhealthy) PDI (uPDI), in which only less-healthful plant foods received positive scores. Cox proportional hazards models were used to estimate the association between plant-based diet consumption in 1988-1994 and subsequent mortality. We tested for effect modification by sex. In the overall sample, PDI and uPDI were not associated with all-cause or cardiovascular disease mortality after controlling for demographic characteristics, socioeconomic factors, and health behaviors. However, among those with an hPDI score above the median, a 10-unit increase in hPDI was associated with a 5% lower risk in all-cause mortality in the overall study population (HR: 0.95; 95% CI: 0.91, 0.98) and among women (HR: 0.94; 95% CI: 0.88, 0.99), but not among men (HR: 0.95; 95% CI: 0.90, 1.01). There was no effect modification by sex (P-interaction > 0.10). A nonlinear association between hPDI and all-cause mortality was observed. Healthy plant-based diet scores above the median were associated with a lower risk of all-cause mortality in US adults. Future research exploring the impact of quality of plant-based diets on long-term health outcomes is necessary.
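A hedged sketch of how a plant-based diet index of this kind can be assembled: cohort intakes of each food group are ranked into quintiles, and a group contributes its quintile rank (positive scoring) or the reverse (negative scoring) depending on the index variant. The food groups and the healthy/unhealthy split below are abbreviated stand-ins, not the full NHANES food groupings.

```python
import numpy as np

HEALTHY_PLANT = ["whole_grains", "fruits", "vegetables", "nuts", "legumes"]
UNHEALTHY_PLANT = ["refined_grains", "sugary_drinks", "sweets"]
ANIMAL = ["meat", "dairy", "fish", "eggs"]

def quintile_rank(intakes):
    """Rank intakes of one food group into quintiles 1-5 across the cohort."""
    cuts = np.percentile(intakes, [20, 40, 60, 80])
    return np.searchsorted(cuts, intakes, side="right") + 1

def diet_index(intake_table, positive_groups, reverse_groups):
    """Sum quintile ranks, counting reverse_groups as 6 - rank (i.e., negative scoring)."""
    total = np.zeros(len(next(iter(intake_table.values()))))
    for group, intakes in intake_table.items():
        ranks = quintile_rank(np.asarray(intakes, dtype=float))
        total += ranks if group in positive_groups else (6 - ranks) if group in reverse_groups else 0
    return total

# Simulated intakes for 200 participants; overall PDI scores all plant foods positively,
# hPDI scores only healthy plant foods positively and reverse-scores the rest.
rng = np.random.default_rng(4)
table = {g: rng.gamma(2.0, 1.0, 200) for g in HEALTHY_PLANT + UNHEALTHY_PLANT + ANIMAL}
pdi = diet_index(table, positive_groups=set(HEALTHY_PLANT + UNHEALTHY_PLANT), reverse_groups=set(ANIMAL))
hpdi = diet_index(table, positive_groups=set(HEALTHY_PLANT), reverse_groups=set(UNHEALTHY_PLANT + ANIMAL))
print(pdi[:5].astype(int), hpdi[:5].astype(int))
```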
Mitsui, Nobuyuki; Asakura, Satoshi; Shimizu, Yusuke; Fujii, Yutaka; Toyomaki, Atsuhito; Kako, Yuki; Tanaka, Teruaki; Kitagawa, Nobuki; Inoue, Takeshi; Kusumi, Ichiro
2014-01-01
The suicide risk among young adults is related to multiple factors; therefore, it is difficult to predict and prevent suicidal behavior. We conducted the present study to reveal the most important factors relating to suicidal ideation in Japanese university students with major depressive episodes (MDEs) of major depressive disorder (MDD). The subjects were 30 Japanese university students who had MDEs of MDD and were aged between 18 and 26 years old. They were divided into two groups based on the results of the Mini-International Neuropsychiatric Interview: a group without suicide risk (n=15) and a group with suicide risk (n=15). Additionally, healthy controls were recruited from the same population (n=15). All subjects completed self-assessment scales including the Beck Depression Inventory 2nd edition (BDI-II), the Beck Hopelessness Scale (BHS), Rosenberg's Self-Esteem Scale (RSES), and SF-36v2™ (The Medical Outcomes Study 36-item short-form health survey version 2), and all were administered a battery of neuropsychological tests. The RSES score of the group with suicide risk was significantly lower than that of the group without suicide risk, whereas the BDI-II score and the BHS score were not significantly different between the two groups. The mean social functioning score on the SF-36v2 in the group with suicide risk was significantly lower than in the group without suicide risk. An individual's self-esteem and social functioning may play an important role in suicide risk among young adults with MDEs of MDD.
Miceli, Antonio; Duggan, Simon M J; Capoun, Radek; Romeo, Francesco; Caputo, Massimo; Angelini, Gianni D
2010-08-01
There is no accepted consensus on the definition of high-risk patients who may benefit from the use of intraaortic balloon pump (IABP) in coronary artery bypass grafting (CABG). The aim of this study was to develop a risk model to identify high-risk patients and predict the need for IABP insertion during CABG. From April 1996 to December 2006, 8,872 consecutive patients underwent isolated CABG; of these 182 patients (2.1%) received intraoperative or postoperative IABP. The scoring risk model was developed in 4,575 patients (derivation dataset) and validated on the remaining patients (validation dataset). Predictive accuracy was evaluated by the area under the receiver operating characteristic curve. Mortality was 1% in the entire cohort and 18.7% (22 patients) in the group which received IABP. Multivariable analysis showed that age greater than 70 years, moderate and poor left ventricular dysfunction, previous cardiac surgery, emergency operation, left main disease, Canadian Cardiovascular Society 3-4 class, and recent myocardial infarction were independent risk factors for the need of IABP insertion. Three risk groups were identified. The observed probability of receiving IABP and mortality in the validation dataset was 36.4% and 10% in the high-risk group (score >14), 10.9% and 2.8% in the medium-risk group (score 7 to 13), and 1.7% and 0.7% in the low-risk group (score 0 to 6). This simple clinical risk model based on preoperative clinical data can be used to identify high-risk patients who may benefit from elective insertion of IABP during CABG. Copyright 2010 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
McCaffrey, Ruth; Bishop, Mary; Adonis-Rizzo, Marie; Williamson, Ellen; McPherson, Melanie; Cruikshank, Alice; Carrier, Vicki Jo; Sands, Simone; Pigano, Diane; Girard, Patricia; Lauzon, Cathy
2007-01-01
Hospital-acquired deep vein thrombosis (DVT) and pulmonary embolism (PE) are preventable problems that can increase mortality. Early assessment and recognition of risk as well as initiating appropriate prevention measures can prevent DVT or PE. The purpose of this research project was to develop a DVT risk assessment tool and test the tool for validity and reliability. Three phases were undertaken in developing and testing the JFK Medical Center DVT risk assessment tool. First, investigation and clarification of risk and predisposing factors for DVT were identified from the literature, expert nursing knowledge, and medical staff input. Second, item development and weighting were undertaken. Third, parametric testing for content validity measured the differences in mean assessment tool scores between a group of patients who developed DVT in the hospital and a demographically similar group who did not develop DVT. Interrater reliability was measured by having three different nurses score each patient and comparing the differences in scores among the three. The DVT group had significantly higher scores on the JFK DVT assessment scale than did those who did not experience DVT. Interrater reliability showed a strong correlation among the scores of the three nurses (.98). Providing a valid and reliable tool for measuring the risk for DVT or PE in hospitalized patients will enable nurses to intervene early in patients at risk. Basing DVT risk assessment on the evidence provided in this study will assist nurses in becoming more confident in recognizing the necessity for interventions in hospitalized patients and decreasing risk. Nurses can now evaluate patients at risk for DVT or PE using the JFK Medical Center's risk assessment tool.
Bramley, E; Costa, N D; Fulkerson, W J; Lean, I J
2013-11-01
To investigate associations between ruminal acidosis and body condition score (BCS), prevalence of poor rumen fill, diarrhoea and lameness in dairy cows in New South Wales and Victoria, Australia. This was a cross-sectional study conducted in 100 dairy herds in five regions of Australia. Feeding practices, diets and management practices of herds were assessed. Lactating cows within herds were sampled for rumen biochemistry (n = 8 per herd) and scored for body condition, rumen fill and locomotion (n = 15 per herd). The consistency of faecal pats (n = 20 per herd) from the lactating herd was also scored. A perineal faecal staining score was given to each herd. Herds were classified as subclinically acidotic (ACID), suboptimal (SO) and non-acidotic (Normal) when ≥3/8 cows per herd were allocated to previously defined categories based on rumen biochemical measures. Multivariate logistic regression models were used to examine associations between the prevalence of conditions within a herd and explanatory variables. Median BCS and perineal staining score were not associated with herd category (p >0.05). In the multivariate models, herds with a high prevalence of low rumen fill scores (≤2/5) were more likely to be categorised Normal than SO with an associated increased risk of 69% (p = 0.05). Herds that had a greater prevalence of lame cows (locomotion scores ≥3/5), had 103% higher risk of being categorised as ACID than SO (p = 0.034). In a multivariate logistic regression model, with herd modelled as a random effect, an increase of 1% of pasture in the diet was associated with a 5.5% increase in risk of high faecal scores (≥4/5) indicating diarrhoea (p = 0.001). This study confirmed that herd categories based on rumen function are associated with biological outcomes consistent with acidosis. Herds that had a higher risk of lameness also had a much higher risk of being categorised ACID than SO. Herds with a high prevalence of low rumen scores were more likely to be categorised Normal than SO. The findings indicate that differences in rumen metabolism identified for herd categories ACID, SO and Normal were associated with differences in disease risk and physiology. The study also identified an association between pasture feeding and higher faecal scores. This study suggests that there is a challenge for farmers seeking to increase milk production of cows on pasture to maintain the health of cattle.
Hamad, Rita; Modrek, Sepideh; Kubo, Jessica; Goldstein, Benjamin A; Cullen, Mark R
2015-01-01
Investigators across many fields often struggle with how best to capture an individual's overall health status, with options including both subjective and objective measures. With the increasing availability of "big data," researchers can now take advantage of novel metrics of health status. These predictive algorithms were initially developed to forecast and manage expenditures, yet they represent an underutilized tool that could contribute significantly to health research. In this paper, we describe the properties and possible applications of one such "health risk score," the DxCG Intelligence tool. We link claims and administrative datasets on a cohort of U.S. workers during the period 1996-2011 (N = 14,161). We examine the risk score's association with incident diagnoses of five disease conditions, and we link employee data with the National Death Index to characterize its relationship with mortality. We review prior studies documenting the risk score's association with other health and non-health outcomes, including healthcare utilization, early retirement, and occupational injury. We find that the risk score is associated with outcomes across a variety of health and non-health domains. These examples demonstrate the broad applicability of this tool in multiple fields of research and illustrate its utility as a measure of overall health status for epidemiologists and other health researchers.
Huang, Tao; Qi, Qibin; Zheng, Yan; Ley, Sylvia H.; Manson, JoAnn E.; Hu, Frank B.
2015-01-01
OBJECTIVE Abdominal obesity is a major risk factor for type 2 diabetes (T2D). We aimed to examine the association between the genetic predisposition to central obesity, assessed by the waist-to-hip ratio (WHR) genetic score, and T2D risk. RESEARCH DESIGN AND METHODS The current study included 2,591 participants with T2D and 3,052 participants without T2D of European ancestry from the Nurses’ Health Study (NHS) and the Health Professionals Follow-up Study (HPFS). Genetic predisposition to central obesity was estimated using a genetic score based on 14 established loci for the WHR. RESULTS We found that the central obesity genetic score was linearly related to higher T2D risk. Results were similar in the NHS (women) and HPFS (men). In combined results, each point of the central obesity genetic score was associated with an odds ratio (OR) of 1.04 (95% CI 1.01–1.07) for developing T2D, and the OR was 1.24 (1.03–1.45) when comparing extreme quartiles of the genetic score after multivariate adjustment. CONCLUSIONS The data indicate that genetic predisposition to central obesity is associated with higher T2D risk. This association is mediated by central obesity. PMID:25852209
Christiansen, Erik; Agerbo, Esben; Bilenberg, Niels; Stenager, Elsebeth
2016-01-01
SSRIs are widely used in the treatment of mental illness for both children and adults. Studies have found a slightly increased risk of suicidal thoughts and suicide attempts in young people using SSRIs but SSRIs' impact on risk for suicides in youth is not well-established. Is there indication that SSRIs might raise risk for suicide attempts in young people? We used an observational register-based historical cohort design, a large cohort of all Danish individuals born in 1983-1989 (n = 392,458) and a propensity score approach to analyse the impact from SSRIs on risk for suicide attempts. Every suicide attempt and redeemed prescription of SSRIs was analysed by Cox regression. We found a significant overlap between redeeming a prescription on SSRIs and subsequent suicide attempt. The risk for suicide attempt was highest in the first 3 months after redeeming the first prescription. The hazard ratio for suicide attempts after redeeming a prescription was estimated to 5.23, 95% CI 4.82-5.68. We conclude that the risk of suicide attempt is higher for young people in the first months after redeeming their first prescription for SSRIs, compared to non-users. For SSRI users with lower propensity score (fewer risk factors for SSRIs) the risk of suicide attempt is estimated to be highest. Although the design may miss some explicit reason for prescription of SSRIs and SSRIs might be a marker for those in high risk rather than a causal risk factor, we would recommend systematic risk assessment in the period after redeeming the first prescription.
McClelland, Robyn L; Jorgensen, Neal W; Budoff, Matthew; Blaha, Michael J; Post, Wendy S; Kronmal, Richard A; Bild, Diane E; Shea, Steven; Liu, Kiang; Watson, Karol E; Folsom, Aaron R; Khera, Amit; Ayers, Colby; Mahabadi, Amir-Abbas; Lehmann, Nils; Jöckel, Karl-Heinz; Moebus, Susanne; Carr, J Jeffrey; Erbel, Raimund; Burke, Gregory L
2015-10-13
Several studies have demonstrated the tremendous potential of using coronary artery calcium (CAC) in addition to traditional risk factors for coronary heart disease (CHD) risk prediction. However, to date, no risk score incorporating CAC has been developed. The goal of this study was to derive and validate a novel risk score to estimate 10-year CHD risk using CAC and traditional risk factors. Algorithm development was conducted in the MESA (Multi-Ethnic Study of Atherosclerosis), a prospective community-based cohort study of 6,814 participants age 45 to 84 years, who were free of clinical heart disease at baseline and followed for 10 years. MESA is sex balanced and included 39% non-Hispanic whites, 12% Chinese Americans, 28% African Americans, and 22% Hispanic Americans. External validation was conducted in the HNR (Heinz Nixdorf Recall Study) and the DHS (Dallas Heart Study). Inclusion of CAC in the MESA risk score offered significant improvements in risk prediction (C-statistic 0.80 vs. 0.75; p < 0.0001). External validation in both the HNR and DHS studies provided evidence of very good discrimination and calibration. Harrell's C-statistic was 0.779 in HNR and 0.816 in DHS. Additionally, the difference in estimated 10-year risk between events and nonevents was approximately 8% to 9%, indicating excellent discrimination. Mean calibration, or calibration-in-the-large, was excellent for both studies, with average predicted 10-year risk within one-half of a percent of the observed event rate. An accurate estimate of 10-year CHD risk can be obtained using traditional risk factors and CAC. The MESA risk score, which is available online on the MESA web site for easy use, can be used to aid clinicians when communicating risk to patients and when determining risk-based treatment strategies. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Oosting, Ellen; Hoogeboom, Thomas J; Appelman-de Vries, Suzan A; Swets, Adam; Dronkers, Jaap J; van Meeteren, Nico L U
2016-01-01
The aim of this study was to evaluate the value of conventional factors, the Risk Assessment and Predictor Tool (RAPT) and performance-based functional tests as predictors of delayed recovery after total hip arthroplasty (THA). A prospective cohort study was conducted in a regional hospital in the Netherlands with 315 patients attending for THA in 2012. The dependent variable, recovery of function, was assessed with the Modified Iowa Levels of Assistance scale. Delayed recovery was defined as taking more than 3 days to walk independently. Independent variables were age, sex, BMI, Charnley score, RAPT score and scores for four performance-based tests [2-minute walk test, timed up and go test (TUG), 10-meter walking test (10 mW) and hand grip strength]. Regression analysis with all variables identified older age (>70 years), Charnley score C, slow walking speed (10 mW >10.0 s) and poor functional mobility (TUG >10.5 s) as the best predictors of delayed recovery of function. This model (AUC 0.85, 95% CI 0.79-0.91) performed better than a model with conventional factors and RAPT scores, and significantly better (p = 0.04) than a model with only conventional factors (AUC 0.81, 95% CI 0.74-0.87). The combination of performance-based tests and conventional factors predicted inpatient functional recovery after THA. Two simple functional performance-based tests have significant added value over a more conventional screening with age and comorbidities for predicting recovery of functioning immediately after total hip surgery. Patients over 70 years old, with comorbidities, a TUG score >10.5 s and a slow walking speed (10 mW >10.0 s, i.e. below 1.0 m/s) are at risk for delayed recovery of functioning. Those high-risk patients need an accurate discharge plan and could benefit from targeted pre- and postoperative therapeutic exercise programs.
Model to Determine Risk of Pancreatic Cancer in Patients with New-onset Diabetes.
Sharma, Ayush; Kandlakunta, Harika; Singh Nagpal, Sajan Jiv; Ziding, Feng; Hoos, William; Petersen, Gloria M; Chari, Suresh T
2018-05-15
Of subjects with new-onset diabetes (based on glycemia) over the age of 50 years, approximately 1% are diagnosed with pancreatic cancer within 3 years. We aimed to develop and validate a model to determine risk of pancreatic cancer in individuals with new-onset diabetes. We retrospectively collected data from 4 independent, non-overlapping cohorts of patients (n=1561) with new-onset diabetes (based on glycemia; data collected at date of diagnosis and 12 months before) in the Rochester Epidemiology Project, from January 1, 2000 through December 31, 2015, to create our model. The model assigned weighted scores to the 3 factors identified in the discovery cohort as most strongly associated with pancreatic cancer (64 patients with pancreatic cancer and 192 with type-2 diabetes): change in weight, change in blood glucose, and age at onset of diabetes. We called our model enriching new-onset diabetes for pancreatic cancer (END-PAC). We validated the locked-down model and cutoff score in an independent population-based cohort of 1096 patients with diabetes; of these, 9 patients (0.82%) developed pancreatic cancer within 3 years of meeting the criteria for new-onset diabetes. In the discovery cohort, the END-PAC model identified patients who developed pancreatic cancer within 3 years of onset of diabetes with an area under the receiver operating characteristic curve value of 0.87; a score of >3 identified patients who developed pancreatic cancer with 80% sensitivity and specificity. In the validation cohort, a score of >3 identified 7/9 patients with pancreatic cancer (78%), with 85% specificity; the prevalence of pancreatic cancer in subjects with a score of >3 (3.6%) was 4.4-fold that in patients with new-onset diabetes overall. A high END-PAC score in subjects who did not have pancreatic cancer (false positives) was often due to such factors as recent steroid use or a different malignancy. An END-PAC score <0 (in 49% of subjects) meant that patients had an extremely low risk of pancreatic cancer. An END-PAC score >3 identified 75% of subjects in the discovery cohort >6 months before a diagnosis of pancreatic cancer. Based on change in weight, change in blood glucose, and age at onset of diabetes, we developed and validated a model to determine risk of pancreatic cancer in patients with new-onset diabetes based on glycemia (the END-PAC model). An independent, prospective study is needed to further validate this model, which could contribute to early detection of pancreatic cancer. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
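The abstract reports the structure of the END-PAC score but not its point assignments, so the sketch below shows the shape of such a score only; every threshold and point value is a hypothetical placeholder, not the published model.

```python
def end_pac_like_score(age_at_onset, weight_change_kg, glucose_change_mg_dl):
    """
    Structure of an END-PAC-style score: points from age at diabetes onset,
    weight change, and change in blood glucose over the preceding year.
    All thresholds and point values are hypothetical placeholders.
    """
    score = 0
    score += 1 if age_at_onset >= 65 else 0
    score += 2 if weight_change_kg <= -2 else 0        # weight loss despite new hyperglycaemia
    score += 2 if glucose_change_mg_dl >= 50 else 0     # large rise in glucose
    return score

# A score above the study's cutoff (>3) flagged patients for pancreatic cancer work-up
patient_score = end_pac_like_score(age_at_onset=70, weight_change_kg=-4, glucose_change_mg_dl=60)
print(patient_score, "-> high risk" if patient_score > 3 else "-> below cutoff")
```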
Abdelbary, B E; Garcia-Viveros, M; Ramirez-Oropesa, H; Rahbar, M H; Restrepo, B I
2017-10-01
The purpose of this study was to develop a method for identifying newly diagnosed tuberculosis (TB) patients at risk for TB adverse events in Tamaulipas, Mexico. Surveillance data between 2006 and 2013 (8431 subjects) was used to develop risk scores based on predictive modelling. The final models revealed that TB patients failing their treatment regimen were more likely to have at most a primary school education, multi-drug resistance (MDR)-TB, and few to moderate bacilli on acid-fast bacilli smear. TB patients who died were more likely to be older males with MDR-TB, HIV, malnutrition, and reporting excessive alcohol use. Modified risk scores were developed with strong predictability for treatment failure and death (c-statistic 0·65 and 0·70, respectively), and moderate predictability for drug resistance (c-statistic 0·57). Among TB patients with diabetes, risk scores showed moderate predictability for death (c-statistic 0·68). Our findings suggest that in the clinical setting, the use of our risk scores for TB treatment failure or death will help identify these individuals for tailored management to prevent these adverse events. In contrast, the available variables in the TB surveillance dataset are not robust predictors of drug resistance, indicating the need for prompt testing at time of diagnosis.
Amano, Toshiyasu; Earle, Carolyn; Imao, Tetsuya; Takemae, Katsuro
2016-01-01
Several studies have indicated that erectile dysfunction (ED) patients also suffer from lower urinary tract symptoms (LUTS). We investigated a group of men with LUTS and assessed their sexual function with the aim of being able to predict ED risk factors and introduce ED treatments earlier for this patient group. The International Prostate Symptom Score (IPSS), Overactive Bladder Symptom Score (OABSS) and Sexual Health Inventory for Men (SHIM) score were obtained from 236 men with LUTS at their first outpatient visit. Clinical parameters such as body mass index, prostate volume, residual urine volume and prostate specific antigen were also evaluated. The relationship between the SHIM score and other clinical data was analyzed. According to the SHIM score, ED in men with LUTS was severe in 15%, moderate in 19%, moderate to mild in 28%, and mild in 17%; 7% were normal, and data were incomplete in 14%. Based on the results of a multivariate analysis, aging (p < 0.001) and OAB severity (p = 0.024) were significantly correlated with severe and moderate ED. Furthermore, among the OAB symptom score items, urge urinary incontinence was a risk factor for severe and moderate ED (p = 0.005). Aging and OAB (notably urinary urge incontinence) are risk factors for severe and moderate ED in men with LUTS.
Kelly, Peter J; Albers, Gregory W; Chatzikonstantinou, Anastasios; De Marchis, Gian Marco; Ferrari, Julia; George, Paul; Katan, Mira; Knoflach, Michael; Kim, Jong S; Li, Linxin; Lee, Eun-Jae; Olivot, Jean-Marc; Purroy, Francisco; Raposo, Nicolas; Rothwell, Peter M; Sharma, Vijay K; Song, Bo; Tsivgoulis, Georgios; Walsh, Cathal; Xu, Yuming; Merwick, Aine
2016-11-01
Identification of patients at highest risk of early stroke after transient ischaemic attack has been improved with imaging-based scores. We aimed to compare the validity and prognostic utility of imaging-based stroke risk scores in patients after transient ischaemic attack. We did a pooled analysis of published and unpublished individual-patient data from 16 cohort studies of transient ischaemic attack done in Asia, Europe, and the USA, with early brain and vascular imaging and follow-up. All patients were assessed by stroke specialists in hospital settings as inpatients, in emergency departments, or in transient ischaemic attack clinics. Inclusion criteria were stroke-specialist confirmed transient ischaemic attack, age of 18 years or older, and MRI done within 7 days of index transient ischaemic attack and before stroke recurrence. Multivariable logistic regression was done to analyse the predictive utility of abnormal diffusion-weighted MRI, carotid stenosis, and transient ischaemic attack within 1 week of index transient ischaemic attack (dual transient ischaemic attack) after adjusting for ABCD2 score. We compared the prognostic utility of the ABCD2, ABCD2-I, and ABCD3-I scores using discrimination, calibration, and risk reclassification. In 2176 patients from 16 cohort studies done between 2005 and 2015, after adjusting for ABCD2 score, positive diffusion-weighted imaging (odds ratio [OR] 3·8, 95% CI 2·1-7·0), dual transient ischaemic attack (OR 3·3, 95% CI 1·8-5·8), and ipsilateral carotid stenosis (OR 4·7, 95% CI 2·6-8·6) were associated with 7 day stroke after index transient ischaemic attack (p<0·001 for all). 7 day stroke risk increased with increasing ABCD2-I and ABCD3-I scores (both p<0·001). Discrimination to identify early stroke risk was improved for ABCD2-I versus ABCD2 (2 day c statistic 0·74 vs 0·64; p=0·006). However, discrimination was further improved by ABCD3-I compared with ABCD2 (2 day c statistic 0·84 vs 0·64; p<0·001) and ABCD2-I (c statistic 0·84 vs 0·74; p<0·001). Early stroke risk reclassification was improved by ABCD3-I compared with ABCD2-I score (clinical net reclassification improvement 33% at 2 days). Although ABCD2-I and ABCD3-I showed validity, the ABCD3-I score reliably identified the patients at highest risk of stroke after transient ischaemic attack, with improved risk prediction compared with ABCD2-I. Transient ischaemic attack management guided by ABCD3-I with immediate stroke-specialist assessment, urgent MRI, and vascular imaging should now be considered, with monitoring of safety and cost-effectiveness. Health Research Board of Ireland, Irish Heart Foundation, Irish Health Service Executive, Irish National Lottery, National Medical Research Council of Singapore, Swiss National Science Foundation, Bangerter-Rhyner Foundation, Swiss National Science Foundation, Swisslife Jubiläumsstiftung for Medical Research, Swiss Neurological Society, Fondazione Dr Ettore Balli (Switzerland), Clinical Trial Unit of University of Bern, South Korea's Ministry for Health, Welfare, and Family Affairs, UK Wellcome Trust, Wolfson Foundation, UK Stroke Association, British Heart Foundation, Dunhill Medical Trust, National Institute of Health Research (NIHR), Medical Research Council, and the NIHR Oxford Biomedical Research Centre. Copyright © 2016 Elsevier Ltd. All rights reserved.
Does the Surgical Apgar Score Measure Intraoperative Performance?
Regenbogen, Scott E.; Lancaster, R. Todd; Lipsitz, Stuart R.; Greenberg, Caprice C.; Hutter, Matthew M.; Gawande, Atul A.
2008-01-01
Objective To evaluate whether Surgical Apgar Scores measure the relationship between intraoperative care and surgical outcomes. Summary Background Data With preoperative risk-adjustment now well-developed, the role of intraoperative performance in surgical outcomes may be considered. We previously derived and validated a ten-point Surgical Apgar Score—based on intraoperative blood loss, heart rate, and blood pressure—that effectively predicts major postoperative complications within 30 days of general and vascular surgery. This study evaluates whether the predictive value of this score comes solely from patients’ preoperative risk, or also measures care in the operating room. Methods Among a systematic sample of 4,119 general and vascular surgery patients at a major academic hospital, we constructed a detailed risk-prediction model including 27 patient-comorbidity and procedure-complexity variables, and computed patients’ propensity to suffer a major postoperative complication. We evaluated the prognostic value of patients’ Surgical Apgar Scores before and after adjustment for this preoperative risk. Results After risk-adjustment, the Surgical Apgar Score remained strongly correlated with postoperative outcomes (p<0.0001). Odds of major complications among average-scoring patients (scores 7–8) were equivalent to preoperative predictions (likelihood ratio (LR) 1.05, 95%CI 0.78–1.41), significantly decreased for those who achieved the best scores of 9–10 (LR 0.52, 95%CI 0.35–0.78), and were significantly poorer for those with low scores—LRs 1.60 (1.12–2.28) for scores 5–6, and 2.80 (1.50–5.21) for scores 0–4. Conclusions Even after accounting for fixed preoperative risk—due to patients’ acute condition, comorbidities and/or operative complexity—the Surgical Apgar Score appears to detect differences in intraoperative management that reduce odds of major complications by half, or increase them by nearly three-fold. PMID:18650644
Predicting mortality in sick African children: the FEAST Paediatric Emergency Triage (PET) Score.
George, Elizabeth C; Walker, A Sarah; Kiguli, Sarah; Olupot-Olupot, Peter; Opoka, Robert O; Engoru, Charles; Akech, Samuel O; Nyeko, Richard; Mtove, George; Reyburn, Hugh; Berkley, James A; Mpoya, Ayub; Levin, Michael; Crawley, Jane; Gibb, Diana M; Maitland, Kathryn; Babiker, Abdel G
2015-07-31
Mortality in paediatric emergency care units in Africa often occurs within the first 24 h of admission and remains high. Alongside effective triage systems, a practical clinical bedside risk score to identify those at greatest risk could contribute to reducing mortality. Data collected during the Fluid Expansion As Supportive Therapy (FEAST) trial, a multi-centre trial involving 3,170 severely ill African children, were analysed to identify clinical and laboratory prognostic factors for mortality. Multivariable Cox regression was used to build a model in this derivation dataset based on clinical parameters that could be quickly and easily assessed at the bedside. A score developed from the model coefficients was externally validated in two admissions datasets from Kilifi District Hospital, Kenya, and compared to published risk scores using the area under the receiver operating characteristic curve (AUROC) and Hosmer-Lemeshow tests. The Net Reclassification Index (NRI) was used to identify additional laboratory prognostic factors. A risk score using 8 clinical variables (temperature, heart rate, capillary refill time, conscious level, severe pallor, respiratory distress, lung crepitations, and weak pulse volume) was developed. The score ranged from 0 to 10 and had an AUROC of 0.82 (95 % CI, 0.77-0.87) in the FEAST trial derivation set. In the independent validation datasets, the score had an AUROC of 0.77 (95 % CI, 0.72-0.82) amongst admissions to a paediatric high dependency ward and 0.86 (95 % CI, 0.82-0.89) amongst general paediatric admissions. This discriminative ability was similar to, or better than, other risk scores in the validation datasets. NRI identified lactate, blood urea nitrogen, and pH to be important prognostic laboratory variables that could add information to the clinical score. Eight clinical prognostic factors that could be rapidly assessed by healthcare staff for triage were combined to create the FEAST Paediatric Emergency Triage (PET) score and externally validated. The score discriminated those at highest risk of fatal outcome at the point of hospital admission and compared well to other published risk scores. Further laboratory tests were also identified as prognostic factors which could be added if resources were available or as indices of severity for comparison between centres in future research studies.
Mossahebi, S; Feigenberg, S; Nichols, E
Purpose: GammaPod™, the first stereotactic radiotherapy device for early stage breast cancer treatment, has been recently installed and commissioned at our institution. A multidisciplinary working group applied the failure mode and effects analysis (FMEA) approach to perform a risk analysis. Methods: FMEA was applied to the GammaPod™ treatment process by: 1) generating process maps for each stage of treatment; 2) identifying potential failure modes and outlining their causes and effects; 3) scoring the potential failure modes using the risk priority number (RPN) system based on the product of severity, frequency of occurrence, and detectability (each ranging 1-10). An RPN higher than 150 was set as the threshold for minimal concern of risk. For these high-risk failure modes, potential quality assurance procedures and risk control techniques have been proposed. A new set of severity, occurrence, and detectability values was re-assessed in the presence of the suggested mitigation strategies. Results: In the single-day image-and-treat workflow, 19, 22, and 27 sub-processes were identified for the stages of simulation, treatment planning, and delivery processes, respectively. During the simulation stage, 38 potential failure modes were found and scored, in terms of RPN, in the range of 9-392. 34 potential failure modes were analyzed in treatment planning with a score range of 16-200. For the treatment delivery stage, 47 potential failure modes were found with an RPN score range of 16-392. The most critical failure modes consisted of breast-cup pressure loss and incorrect target localization due to patient upper-body alignment inaccuracies. The final RPN scores of these failure modes, based on the recommended actions, were assessed to be below 150. Conclusion: The FMEA risk analysis technique was applied to the treatment process of GammaPod™, a new stereotactic radiotherapy technology. Application of systematic risk analysis methods is projected to lead to improved quality of GammaPod™ treatments. Ying Niu and Cedric Yu are affiliated with Xcision Medical Systems.
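A minimal sketch of the RPN arithmetic used in the analysis above: each failure mode is rated 1-10 for severity, occurrence, and detectability, the three ratings are multiplied, and modes whose product exceeds the 150 threshold are flagged for mitigation. The first two failure modes are named in the abstract; the third and all numeric ratings are invented.

```python
# RPN = severity x occurrence x detectability, each rated 1-10; threshold of 150 from the abstract.
failure_modes = {
    "breast-cup pressure loss":        {"severity": 8, "occurrence": 5, "detectability": 7},
    "incorrect target localization":   {"severity": 9, "occurrence": 4, "detectability": 8},
    "wrong plan exported to delivery": {"severity": 9, "occurrence": 2, "detectability": 3},
}

THRESHOLD = 150
for mode, s in failure_modes.items():
    rpn = s["severity"] * s["occurrence"] * s["detectability"]
    flag = "mitigate" if rpn > THRESHOLD else "acceptable"
    print(f"{mode}: RPN={rpn} ({flag})")
```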
van Walraven, Carl; Wong, Jenna; Forster, Alan J
2012-01-01
Between 5% and 10% of patients die or are urgently readmitted within 30 days of discharge from hospital. Readmission risk indexes have either excluded acute diagnoses or modelled them as multiple distinct variables. In this study, we derived and validated a score summarizing the influence of acute hospital diagnoses and procedures on death or urgent readmission within 30 days. From population-based hospital abstracts in Ontario, we randomly sampled 200 000 discharges between April 2003 and March 2009 and determined who had been readmitted urgently or died within 30 days of discharge. We used generalized estimating equation modelling, with a sample of 100 000 patients, to measure the adjusted association of various case-mix groups (CMGs; homogeneous groups of acute care inpatients with similar clinical and resource-utilization characteristics) with 30-day death or urgent readmission. This final model was transformed into a scoring system that was validated in the remaining 100 000 patients. Patients in the derivation set belonged to 1 of 506 CMGs and had a 6.8% risk of 30-day death or urgent readmission. Forty-seven CMG codes (more than half of which were directly related to chronic diseases) were independently associated with this outcome, which led to a CMG score that ranged from -6 to 7 points. The CMG score was significantly associated with 30-day death or urgent readmission (unadjusted odds ratio for a 1-point increase in CMG score 1.52, 95% confidence interval [CI] 1.49-1.56). Alone, the CMG score was only moderately discriminative (C statistic 0.650, 95% CI 0.644-0.656). However, when the CMG score was added to a validated risk index for death or readmission, the C statistic increased to 0.759 (95% CI 0.753-0.765). The CMG score was well calibrated for 30-day death or readmission. In this study, we developed a scoring system for acute hospital diagnoses and procedures that could be used as part of a risk-adjustment methodology for analyses of postdischarge outcomes.
Bai, Ying; Shantsila, Alena; Lip, Gregory Y H
2017-02-01
The use of anticoagulation for stroke prevention in patients with atrial fibrillation (AF) and a CHA₂DS₂-VASc score of 1 has been debated, partially due to limited data on ischemic stroke risk and specific clinical trials in these patients. East Asian patients have a different stroke risk profile compared to non-East Asians. We performed a systematic review and meta-analysis of ischemic stroke risk in AF patients with a CHA₂DS₂-VASc score of 1 in East Asian countries. A comprehensive literature search for studies evaluating ischemic stroke risk associated with AF and a CHA₂DS₂-VASc score of 1 was conducted by two reviewers. We used a fixed-effect model first, then a random-effect model if heterogeneity (assessed with I²) was present. After pooling 6 studies, the annual rate of ischemic stroke in East Asian patients with AF and a CHA₂DS₂-VASc score of 1 was 1.66% (95% CI: 0.71%-2.61%, I² = 98.4%). There was a wide range in reported pooled rates between countries, from 0.59% to 3.13%. Significant differences existed not only in the community-based studies (Chinese: 2.10% vs. Japanese: 0.60%), but also in the hospital-based studies (Chinese: 3.55% vs. Japanese: 0.42%). Confining the analysis to those on no antithrombotic treatment had limited effect on the summary estimate (e.g. Chinese: 4.28% vs. Japanese: 0.6%). In Chinese studies, the ischemic stroke rate was lower in females than males (female: 1.40% vs. male: 1.79%). However, the low event rate in Japanese studies may reflect unrecorded anticoagulation status at follow-up. Some regional differences between East Asian countries were observed for ischemic stroke risk in patients with a CHA₂DS₂-VASc score of 1. This may reflect methodological differences in studies and unrecorded anticoagulation use at follow-up, but further prospective studies are required to ascertain ischemic stroke risks, as well as the differences and reasons for this between East Asians and non-East Asians.
Goh, Louise G H; Dhaliwal, Satvinder S; Welborn, Timothy A; Lee, Andy H; Della, Phillip R
2014-01-01
Objectives It is important to ascertain which anthropometric measurements of obesity, general or central, are better predictors of cardiovascular disease (CVD) risk in women. 10-year CVD risk was calculated from the Framingham risk score model, SCORE risk chart for high-risk regions, general CVD and simplified general CVD risk score models. Increase in CVD risk associated with 1 SD increment in each anthropometric measurement above the mean was calculated, and the diagnostic utility of obesity measures in identifying participants with increased likelihood of being above the treatment threshold was assessed. Design Cross-sectional data from the National Heart Foundation Risk Factor Prevalence Study. Setting Population-based survey in Australia. Participants 4487 women aged 20–69 years without heart disease, diabetes or stroke. Outcome measures Anthropometric obesity measures that demonstrated the greatest increase in CVD risk as a result of incremental change, 1 SD above the mean, and obesity measures that had the greatest diagnostic utility in identifying participants above the respective treatment thresholds of various risk score models. Results Waist circumference (WC), waist-to-hip ratio (WHR) and waist-to-stature ratio had larger effects on increased CVD risk compared with body mass index (BMI). These central obesity measures also had higher sensitivity and specificity in identifying women above and below the 20% treatment threshold than BMI. Central obesity measures also recorded better correlations with CVD risk compared with general obesity measures. WC and WHR were found to be significant and independent predictors of CVD risk, as indicated by the high area under the receiver operating characteristic curves (>0.76), after controlling for BMI in the simplified general CVD risk score model. Conclusions Central obesity measures are better predictors of CVD risk compared with general obesity measures in women. It is equally important to maintain a healthy weight and to prevent central obesity concurrently. PMID:24503301
Rossi, A P; Micciolo, R; Rubele, S; Fantin, F; Caliari, C; Zoico, E; Mazzali, G; Ferrari, E; Volpato, S; Zamboni, M
2017-01-01
To validate the MSRA questionnaire, proposed as a prescreening tool for sarcopenia, in a population of community-dwelling elderly subjects. Observational study. Community-dwelling elderly subjects. 274 community-dwelling elderly subjects, 177 women and 97 men, aged 66-78 years. Based on EWGSOP diagnostic criteria, subjects were classified as sarcopenic or non-sarcopenic. The Mini Sarcopenia Risk Assessment (MSRA) questionnaire is composed of seven questions and investigates anamnestic and nutritional characteristics related to the risk of sarcopenia onset (age, protein and dairy product consumption, number of meals per day, physical activity level, number of hospitalizations and weight loss in the last year). 33.5% of the study population was classified as sarcopenic. With the 7-item MSRA score, subjects with a score of 30 or less had a 4-fold greater risk of being sarcopenic than subjects with a score higher than 30 (OR: 4.20; 95% CI: 2.26-8.06); the area under the ROC curve was 0.786 (95% CI: 0.725-0.847). In a logistic regression, considering the probability of being sarcopenic as the dependent variable and the 7 items of the questionnaire as independent variables, two items (number of meals and milk and dairy product consumption) showed non-significant diagnostic power. A 5-item score was then derived, and the area under the ROC curve was 0.789 (95% CI: 0.728-0.851). Taking into account the costs of false positives and false negatives and the prevalence of sarcopenia, the "optimal" threshold of the original MSRA score (based on 7 items) is 30, with a sensitivity of 0.804 and a specificity of 0.505, while the "optimal" threshold of the MSRA score based on 5 items is 45, with a sensitivity of 0.804 and a specificity of 0.604. This preliminary study shows that the MSRA questionnaire is predictive of sarcopenia and can be suggested as a prescreening instrument to detect this condition. The use of a short form of the MSRA questionnaire improves the capacity to identify sarcopenic subjects.
Ronald, Lisa A; Campbell, Jonathon R; Balshaw, Robert F; Roth, David Z; Romanowski, Kamila; Marra, Fawziah; Cook, Victoria J; Johnston, James C
2016-01-01
Introduction Improved understanding of risk factors for developing active tuberculosis (TB) will better inform decisions about diagnostic testing and treatment for latent TB infection (LTBI) in migrant populations in low-incidence regions. We aim to examine TB risk factors among the foreign-born population in British Columbia (BC), Canada, and to create and validate a clinically relevant multivariate risk score to predict active TB. Methods and analysis This retrospective population-based cohort study will include all foreign-born individuals who acquired permanent resident status in Canada between 1 January 1985 and 31 December 2013 and acquired healthcare coverage in BC at any point during this period. Multiple administrative databases and disease registries will be linked, including a National Immigration Database, BC Provincial Health Insurance Registration, physician billings, hospitalisations, drugs dispensed from community pharmacies, vital statistics, HIV testing and notifications, cancer, chronic kidney disease and dialysis treatment, and all TB and LTBI testing and treatment data in BC. Extended proportional hazards regression will be used to estimate risk factors for TB and to create a prognostic TB risk score. Ethics and dissemination Ethical approval for this study has been obtained from the University of British Columbia Clinical Ethics Review Board. Once completed, study findings will be presented at conferences and published in peer-reviewed journals. An online TB risk score calculator will also be created. PMID:27888179
Impact of maternal education level on risk of low Apgar score.
Almeida, N K O; Pedreira, C E; Almeida, R M V R
2016-11-01
To investigate the association between the 5-min Apgar score and socio-economic characteristics of pregnant women, particularly education level. Population-based cross-sectional study. This study used hospital records of live term singleton births in Brazil from 2004 to 2009, obtained from the Ministry of Health National Information System. Crude and adjusted odds ratios (ORs) were used to estimate the risk of a low 5-min Apgar score (≤6) associated with maternal education level, maternal age, marital status, primiparity, number of prenatal visits and mode of delivery (vaginal/caesarean section). Nearly 12 million records were analysed. Births from mothers with 0, 1-3, 4-7 and 8-11 years of education resulted in crude ORs for low 5-min Apgar score of 3.1, 2.2, 1.8 and 1.3, respectively (reference: ≥12 years of education). The crude OR for mothers aged ≥41 years (reference 21-34 years) was 1.4, but no risk was detected for those with ≥12 years of education and those who gave birth by caesarean section (OR 1.0 [95% confidence interval 0.9-1.2]). Generally, the risk of a low 5-min Apgar score was found to increase as maternal age moved away from the 21-34 years reference range (OR 1.1-1.7), and for mothers with the same characteristics, the risk of a low 5-min Apgar score was found to decrease markedly as education level increased (adjusted OR decreased from 2.6 to 1.2). Maternal education level is clearly associated with the risk of a low 5-min Apgar score. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Increased CAIDE dementia risk, cognition, CSF biomarkers, and vascular burden in healthy adults.
Ecay-Torres, Mirian; Estanga, Ainara; Tainta, Mikel; Izagirre, Andrea; Garcia-Sebastian, Maite; Villanua, Jorge; Clerigue, Montserrat; Iriondo, Ane; Urreta, Iratxe; Arrospide, Arantzazu; Díaz-Mardomingo, Carmen; Kivipelto, Miia; Martinez-Lage, Pablo
2018-06-13
To investigate the cognitive profile of healthy individuals with increased Cardiovascular Risk Factors, Aging and Dementia (CAIDE) dementia risk score and to explore whether this association is related to vascular burden and CSF biomarkers of amyloidosis and neurodegeneration. Cognitively normal participants (mean age 57.6 years) from the Gipuzkoa Alzheimer Project study were classified as having high risk (HR; n = 82) or low risk (LR; n = 293) for dementia according to a CAIDE score cutoff of 9. Cognitive composites were compared between groups. We explored using generalized linear models the role of APOE genotype, MRI white matter hyperintensities (WMH), and CSF (n = 218) levels of β-amyloid 1-42 (Aβ1-42), total tau (t-tau), and phosphorylated tau (p-tau) in the association between CAIDE score and cognition. HR participants obtained lower scores on executive function (EF) (p = 0.001) and visual perception and construction (VPC) (p < 0.001) composites. EF composite was associated with CAIDE score × p-tau (p = 0.001), CAIDE score × t-tau (p = 0.001), and WMH (p = 0.003). VPC composite was associated with APOE (p = 0.001), Aβ1-42 (p = 0.004), the interaction APOE × Aβ1-42 (p = 0.003), and WMH (p = 0.004). Performance on global memory was associated with Aβ1-42 (p = 0.006), APOE (p = 0.008), and their interaction (p = 0.006). Analyses were adjusted for age, education, sex, premorbid intelligence, and stress. Healthy participants at increased dementia risk based on CAIDE scores show lower performance in EF and VPC. This difference is related to APOE, WMH, and Alzheimer biomarkers. © 2018 American Academy of Neurology.
Faxén, Jonas; Hall, Marlous; Gale, Chris P; Sundström, Johan; Lindahl, Bertil; Jernberg, Tomas; Szummer, Karolina
2017-12-01
To develop a simple risk-score model for predicting in-hospital cardiac arrest (CA) among patients hospitalized with suspected non-ST elevation acute coronary syndrome (NSTE-ACS). Using the Swedish Web-system for Enhancement and Development of Evidence-based care in Heart disease Evaluated According to Recommended Therapies (SWEDEHEART), we identified patients (n=242 303) admitted with suspected NSTE-ACS between 2008 and 2014. Logistic regression was used to assess the association between 26 candidate variables and in-hospital CA. A risk-score model was developed and validated using a temporal cohort (n=126 073) comprising patients from SWEDEHEART between 2005 and 2007 and an external cohort (n=276 109) comprising patients from the Myocardial Ischaemia National Audit Project (MINAP) between 2008 and 2013. The incidence of in-hospital CA for NSTE-ACS and non-ACS was lower in the SWEDEHEART-derivation cohort than in MINAP (1.3% and 0.5% vs. 2.3% and 2.3%). A seven-point, five-variable risk score (age ≥60 years (1 point), ST-T abnormalities (2 points), Killip class >1 (1 point), heart rate <50 or ≥100 bpm (1 point), and systolic blood pressure <100 mmHg (2 points)) was developed. Model discrimination was good in the derivation cohort (c-statistic 0.72) and temporal validation cohort (c-statistic 0.74), and calibration was reasonable, with a tendency towards overestimation of risk at a higher sum of score points. External validation showed moderate discrimination (c-statistic 0.65), and calibration showed a general underestimation of predicted risk. A simple points score containing five variables readily available on admission predicts in-hospital CA for patients with suspected NSTE-ACS. Copyright © 2017 Elsevier B.V. All rights reserved.
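The point assignments quoted above are explicit enough to express directly in code. The Python sketch below sums the five admission variables into the 0-7 point total exactly as listed; how that total maps onto an absolute probability of in-hospital cardiac arrest is not given in the abstract, so the function returns only the point sum.

```python
def nste_acs_ca_points(age: int, st_t_abnormal: bool, killip_class: int,
                       heart_rate: float, systolic_bp: float) -> int:
    """Sum of the five admission variables (0-7 points) as listed in the abstract."""
    points = 0
    points += 1 if age >= 60 else 0                              # age >= 60 years: 1 point
    points += 2 if st_t_abnormal else 0                          # ST-T abnormalities: 2 points
    points += 1 if killip_class > 1 else 0                       # Killip class > 1: 1 point
    points += 1 if heart_rate < 50 or heart_rate >= 100 else 0   # HR < 50 or >= 100 bpm: 1 point
    points += 2 if systolic_bp < 100 else 0                      # systolic BP < 100 mmHg: 2 points
    return points

# Example: 72-year-old with ST-T changes, Killip class I, HR 104 bpm, SBP 95 mmHg
print(nste_acs_ca_points(72, True, 1, 104, 95))  # -> 6
```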
Risk Driven Outcome-Based Command and Control (C2) Assessment
2000-01-01
shaping the risk ranking scores into more interpretable and statistically sound risk measures. Regression analysis was applied to determine what...
Romero, Daniela C; Sauris, Aileen; Rodriguez, Fátima; Delgado, Daniela; Reddy, Ankita; Foody, JoAnne M
2016-03-01
Hispanic women suffer from high rates of cardiometabolic risk factors and an increasingly disproportionate burden of cardiovascular disease (CVD). Particularly, Hispanic women with limited English proficiency suffer from low levels of CVD knowledge associated with adverse CVD health outcomes. Thirty-two predominantly Spanish-speaking Hispanic women completed Vivir Con un Corazón Saludable (VCUCS), a culturally tailored Spanish language-based 6-week intensive community program targeting CVD health knowledge through weekly interactive health sessions. A 30-question CVD knowledge questionnaire was used to assess mean changes in CVD knowledge at baseline and postintervention across five major knowledge domains including CVD epidemiology, dietary knowledge, medical information, risk factors, and heart attack symptoms. Completion of the program was associated with a statistically significant (p < 0.001) increase in total mean CVD knowledge scores from 39% (mean 11.7/30.0) to 66% (mean 19.8/30.0) postintervention, consistent with a 68% increase in overall mean CVD scores. There was a statistically significant (p < 0.001) increase in mean knowledge scores across all five CVD domains. A culturally tailored Spanish language-based health program is effective in increasing CVD awareness among high CVD risk Hispanic women with low English proficiency and low baseline CVD knowledge.
Lim, Seong-Rin; Lam, Carl W; Schoenung, Julie M
2011-09-01
Life Cycle Impact Assessment (LCIA) and Risk Assessment (RA) employ different approaches to evaluate toxic impact potential for their own general applications. LCIA is often used to evaluate toxicity potentials for corporate environmental management and RA is often used to evaluate a risk score for environmental policy in government. This study evaluates the cancer, non-cancer, and ecotoxicity potentials and risk scores of chemicals and industry sectors in the United States on the basis of the LCIA- and RA-based tools developed by U.S. EPA, and compares the priority screening of toxic chemicals and industry sectors identified with each method to examine whether the LCIA- and RA-based results lead to the same prioritization schemes. The Tool for the Reduction and Assessment of Chemical and other environmental Impacts (TRACI) is applied as an LCIA-based screening approach with a focus on air and water emissions, and the Risk-Screening Environmental Indicator (RSEI) is applied in equivalent fashion as an RA-based screening approach. The U.S. Toxic Release Inventory is used as the dataset for this analysis, because of its general applicability to a comprehensive list of chemical substances and industry sectors. Overall, the TRACI and RSEI results do not agree with each other in part due to the unavailability of characterization factors and toxic scores for select substances, but primarily because of their different evaluation approaches. Therefore, TRACI and RSEI should be used together both to support a more comprehensive and robust approach to screening of chemicals for environmental management and policy and to highlight substances that are found to be of concern from both perspectives. Copyright © 2011 Elsevier Ltd. All rights reserved.
Toemen, L; Gishti, O; Vogelezang, S; Gaillard, R; Hofman, A; Franco, O H; Felix, J F; Jaddoe, V W V
2015-07-01
High body mass index is associated with increased C-reactive protein levels in childhood and adulthood. Little is known about the associations of detailed adiposity measures with C-reactive protein levels in childhood. We examined the associations of general and abdominal adiposity measures with C-reactive protein levels at school age. To gain insight into the direction of causality, we used genetic risk scores based on known genetic variants in adults as proxies for child adiposity measures and C-reactive protein levels. Within a population-based cohort study among 4338 children at the median age of 6.2 years, we measured body mass index, fat mass percentage, android/gynoid fat mass ratio and preperitoneal abdominal fat mass. We also measured C-reactive protein blood levels and defined increased levels as ≥3.0 mg/L. Single-nucleotide polymorphisms (SNPs) for the weighted genetic risk scores were extracted from large genome-wide association studies on adult body mass index, waist-hip ratio and C-reactive protein levels. All fat mass measures were associated with increased C-reactive protein levels, even after adjusting for multiple confounders. Fat mass percentage was most strongly associated with increased C-reactive protein levels (odds ratio 1.46 (95% confidence interval 1.30-1.65) per standard deviation score increase in fat mass percentage). The association was independent of body mass index. The genetic risk score based on adult body mass index SNPs, but not adult waist-hip ratio SNPs, tended to be associated with increased C-reactive protein levels at school age. The genetic risk score based on adult C-reactive protein level SNPs was not associated with adiposity measures at school age. Our results suggest that higher general and abdominal fat mass may lead to increased C-reactive protein levels at school age. Further studies are needed to replicate these results and explore the causality and long-term consequences.
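The weighted genetic risk scores mentioned above follow the usual construction of summing effect-allele dosages weighted by published GWAS effect sizes. The abstract does not report the SNPs or the weights, so the Python sketch below uses toy numbers purely to illustrate the calculation.

```python
import numpy as np

def weighted_grs(dosages: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted genetic risk score: for each child, sum the effect-allele dosages
    (0, 1, or 2 per SNP) multiplied by the published GWAS effect sizes (betas)
    for the trait of interest (e.g. adult BMI, waist-hip ratio, or CRP)."""
    return dosages @ weights

# Hypothetical toy data: 3 children x 4 SNPs
dosages = np.array([[0, 1, 2, 1],
                    [2, 2, 1, 0],
                    [1, 0, 0, 1]], dtype=float)
betas = np.array([0.05, 0.12, 0.03, 0.08])  # illustrative effect sizes, not from the study
print(weighted_grs(dosages, betas))
```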
Hanif, M W; Valsamakis, G; Dixon, A; Boutsiadis, A; Jones, A F; Barnett, A H; Kumar, S
2008-09-01
We tested a stepwise, community-based screening strategy for glucose intolerance in South Asians using a health questionnaire in conjunction with body mass index (BMI). Anthropometric measurements (waist and hip circumference, sagittal diameter and percentage body fat) were then conducted in a hospital setting followed by an oral glucose tolerance test (OGTT) to identify subjects at the highest risk and analyse the factors predicting that risk. A health questionnaire was administered to 435 subjects in a community setting and BMI was measured. Subjects were graded by a risk score based on the health questionnaire as high, medium and low. Subjects with high and medium risk scores and a representative sample of those with low scores had anthropometric measurements in hospital followed by an OGTT. In total, 205 (47%) of the subjects had an OGTT performed. In total, 48.7% of the subjects tested with an OGTT had evidence of glucose dysregulation: 20% had diabetes and 28.7% had impaired glucose tolerance (IGT). The logistic regression model explained 49.1% of the total variability. The significant predictors of diabetes and IGT were BMI, random blood glucose (BM), a sibling with diabetes and the presence of diagnosed hypertension or ischaemic disease. Most of these predictors, along with other hereditary diabetes factors, form a composite score with high predictability, as receiver operating characteristic curve analysis shows. We describe a simple, stepwise strategy in a community setting, based on a health questionnaire and anthropometric measurements, to explain about 50% of cases with IGT and diabetes and diagnose about 50% of cases from the population screened. We have also identified factors that predict the risk.
USDA-ARS's Scientific Manuscript database
Background: The effect of adherence to the American Heart Association (AHA) 2006 Diet and Lifestyle recommendations is unknown. Objective: To develop a unique diet and lifestyle score based on the AHA 2006 Diet and Lifestyle (AHA DL) recommendations. We evaluated this score in relation to available ...
Health-Based Capitation Risk Adjustment in Minnesota Public Health Care Programs
Gifford, Gregory A.; Edwards, Kevan R.; Knutson, David J.
2004-01-01
This article documents the history and implementation of health-based capitation risk adjustment in Minnesota public health care programs, and identifies key implementation issues. Capitation payments in these programs are risk adjusted using a historical health plan risk score based on concurrent risk assessment. Phased implementation of capitation risk adjustment for these programs began January 1, 2000. Minnesota's experience with capitation risk adjustment suggests that: (1) implementation can accelerate encounter data submission, (2) administrative decisions made during implementation can create issues that impact payment model performance, and (3) changes in diagnosis data management during implementation may require changes to the payment model. PMID:25372356
Briggs, Matthew S; Spees, Colleen; Bout-Tabaku, Sharon; Taylor, Christopher A; Eneli, Ihuoma; Schmitt, Laura C
2015-04-01
Obese youth demonstrate the same obesity-associated morbidities observed in obese adults, including poor cardiorespiratory fitness, poor quality of life, and reports of musculoskeletal pain. The purposes of this study were to compare the prevalence of cardiovascular risk factors and evaluate the odds of metabolic syndrome in obese youth based on measures of cardiorespiratory fitness, quality of life, and pain. A medical chart review of 183 obese youth in a medical weight management program was conducted. Measures of cardiovascular risk and metabolic syndrome were recorded. Groups were categorized based on Progressive Aerobic Cardiovascular Endurance Run (PACER) score, Pediatric Quality of Life (PedsQL)-Physical Function score, PedsQL-Psychosocial Health score, and reports of musculoskeletal pain. Statistical analysis included independent t-tests, Mann-Whitney U-test, chi-squared test, and logistic regression. Thirty-three percent of the entire sample had C-reactive protein (CRP) levels >3.0 mg/dL and 30% were categorized as having metabolic syndrome. Patients with lower PACER scores demonstrated a greater prevalence of CRP levels >3.0 mg/dL versus those with higher PACER scores (45% vs. 12%; P=0.01). There were no other differences in the prevalence of cardiovascular risk factors or metabolic syndrome when categorized by PACER, PedsQL, or pain. Those with CRP levels >3.0 mg/dL demonstrated increased odds of metabolic syndrome [odds ratio (95% confidence interval, CI): 4.93 (1.24-19.61); P=0.02]. Overall, results do not show differences in cardiovascular risk in obese youth when categorized by PACER, PedsQL, or reports of musculoskeletal pain. Elevated CRP may be a useful predictor of metabolic syndrome in obese youth and warrants further investigation.
siMS Score: Simple Method for Quantifying Metabolic Syndrome.
Soldatovic, Ivan; Vukovic, Rade; Culafic, Djordje; Gajic, Milan; Dimitrijevic-Sreckovic, Vesna
2016-01-01
To evaluate siMS score and siMS risk score, novel continuous metabolic syndrome scores, as methods for quantification of metabolic status and risk. The siMS score was calculated using the formula: siMS score = 2*Waist/Height + Gly/5.6 + Tg/1.7 + TAsystolic/130 - HDL/1.02 (for male subjects) or HDL/1.28 (for female subjects). The siMS risk score was calculated using the formula: siMS risk score = siMS score * age/45 (for males) or age/50 (for females) * family history of cardio/cerebro-vascular events (event = 1.2, no event = 1). A sample of 528 obese and non-obese participants was used to validate the siMS score and siMS risk score. Scores calculated as the sum of z-scores (each component of metabolic syndrome regressed with age and gender) and the sum of scores derived from principal component analysis (PCA) were used for evaluation of the siMS score. Variants were made by replacing glucose with HOMA in the calculations. The Framingham score was used for evaluation of the siMS risk score. Correlation of the siMS score with the sum of z-scores and with the weighted sum of PCA factors was high (r = 0.866 and r = 0.822, respectively). Correlation between the siMS risk score and the log-transformed Framingham score was medium to high for age groups 18+, 30+ and 35+ (0.835, 0.707 and 0.667, respectively). siMS score and siMS risk score showed high correlation with more complex scores. Demonstrated accuracy, together with superior simplicity and the ability to evaluate and follow up individual patients, makes the siMS and siMS risk scores very convenient for use in clinical practice and research.
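The two formulas quoted above translate directly into code. The Python sketch below implements them as written; the units are an assumption on our part (waist and height in the same units, glucose, triglycerides and HDL in mmol/L, systolic blood pressure in mmHg), chosen to be consistent with the reference denominators in the formulas.

```python
def sims_score(waist, height, glycemia, triglycerides, systolic_bp, hdl, male: bool) -> float:
    """siMS score = 2*Waist/Height + Gly/5.6 + Tg/1.7 + SBP/130 - HDL/1.02 (male) or /1.28 (female)."""
    hdl_ref = 1.02 if male else 1.28
    return (2 * waist / height + glycemia / 5.6 + triglycerides / 1.7
            + systolic_bp / 130 - hdl / hdl_ref)

def sims_risk_score(sims: float, age: float, male: bool, family_history: bool) -> float:
    """siMS risk score = siMS score * age/45 (male) or age/50 (female) * 1.2 if family history, else 1.0."""
    age_ref = 45 if male else 50
    return sims * (age / age_ref) * (1.2 if family_history else 1.0)

# Illustrative values only: waist 102 cm, height 178 cm, glucose 6.1 mmol/L, TG 2.0 mmol/L,
# SBP 140 mmHg, HDL 1.0 mmol/L, 52-year-old man with a positive family history
s = sims_score(102, 178, 6.1, 2.0, 140, 1.0, male=True)
print(round(s, 2), round(sims_risk_score(s, 52, male=True, family_history=True), 2))
```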
Healthful and Unhealthful Plant-Based Diets and the Risk of Coronary Heart Disease in U.S. Adults.
Satija, Ambika; Bhupathiraju, Shilpa N; Spiegelman, Donna; Chiuve, Stephanie E; Manson, JoAnn E; Willett, Walter; Rexrode, Kathryn M; Rimm, Eric B; Hu, Frank B
2017-07-25
Plant-based diets are recommended for coronary heart disease (CHD) prevention. However, not all plant foods are necessarily beneficial for health. This study sought to examine associations between plant-based diet indices and CHD incidence. We included 73,710 women in NHS (Nurses' Health Study) (1984 to 2012), 92,329 women in NHS2 (1991 to 2013), and 43,259 men in the Health Professionals Follow-up Study (1986 to 2012), free of chronic diseases at baseline. We created an overall plant-based diet index (PDI) from repeated semiquantitative food-frequency questionnaire data, by assigning positive scores to plant foods and reverse scores to animal foods. We also created a healthful plant-based diet index (hPDI) where healthy plant foods (whole grains, fruits/vegetables, nuts/legumes, oils, tea/coffee) received positive scores, whereas less-healthy plant foods (juices/sweetened beverages, refined grains, potatoes/fries, sweets) and animal foods received reverse scores. To create an unhealthful PDI (uPDI), we gave positive scores to less-healthy plant foods and reverse scores to animal and healthy plant foods. Over 4,833,042 person-years of follow-up, we documented 8,631 incident CHD cases. In pooled multivariable analysis, higher adherence to PDI was independently inversely associated with CHD (hazard ratio [HR] comparing extreme deciles: 0.92; 95% confidence interval [CI]: 0.83 to 1.01; p trend = 0.003). This inverse association was stronger for hPDI (HR: 0.75; 95% CI: 0.68 to 0.83; p trend <0.001). Conversely, uPDI was positively associated with CHD (HR: 1.32; 95% CI: 1.20 to 1.46; p trend <0.001). Higher intake of a plant-based diet index rich in healthier plant foods is associated with substantially lower CHD risk, whereas a plant-based diet index that emphasizes less-healthy plant foods is associated with higher CHD risk. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Transcriptional risk scores link GWAS to eQTLs and predict complications in Crohn's disease.
Marigorta, Urko M; Denson, Lee A; Hyams, Jeffrey S; Mondal, Kajari; Prince, Jarod; Walters, Thomas D; Griffiths, Anne; Noe, Joshua D; Crandall, Wallace V; Rosh, Joel R; Mack, David R; Kellermayer, Richard; Heyman, Melvin B; Baker, Susan S; Stephens, Michael C; Baldassano, Robert N; Markowitz, James F; Kim, Mi-Ok; Dubinsky, Marla C; Cho, Judy; Aronow, Bruce J; Kugathasan, Subra; Gibson, Greg
2017-10-01
Gene expression profiling can be used to uncover the mechanisms by which loci identified through genome-wide association studies (GWAS) contribute to pathology. Given that most GWAS hits are in putative regulatory regions and transcript abundance is physiologically closer to the phenotype of interest, we hypothesized that summation of risk-allele-associated gene expression, namely a transcriptional risk score (TRS), should provide accurate estimates of disease risk. We integrate summary-level GWAS and expression quantitative trait locus (eQTL) data with RNA-seq data from the RISK study, an inception cohort of pediatric Crohn's disease. We show that TRSs based on genes regulated by variants linked to inflammatory bowel disease (IBD) not only outperform genetic risk scores (GRSs) in distinguishing Crohn's disease from healthy samples, but also serve to identify patients who in time will progress to complicated disease. Our dissection of eQTL effects may be used to distinguish genes whose association with disease is through promotion versus protection, thereby linking statistical association to biological mechanism. The TRS approach constitutes a potential strategy for personalized medicine that enhances inference from static genotypic risk assessment.
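The transcriptional risk score described above is, in essence, a sum of expression values aligned to the direction of the risk alleles. The gene set, the eQTL effect directions and any weighting used in the study are not given in the abstract, so the Python sketch below is only a minimal illustration of that summation idea under assumed toy data.

```python
import numpy as np

def transcriptional_risk_score(expr_z: np.ndarray, risk_direction: np.ndarray) -> np.ndarray:
    """expr_z: samples x genes matrix of z-scored expression for genes under
    disease-associated eQTLs; risk_direction: +1 if the risk allele increases
    expression of the gene, -1 if it decreases it. The score is the sum of
    expression aligned to the risk direction for each sample."""
    return (expr_z * risk_direction).sum(axis=1)

rng = np.random.default_rng(0)
expr_z = rng.standard_normal((5, 8))             # 5 samples, 8 eQTL-linked genes (toy data)
direction = np.array([1, -1, 1, 1, -1, 1, -1, 1])  # illustrative effect directions
print(transcriptional_risk_score(expr_z, direction))
```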
NASA Astrophysics Data System (ADS)
Mardi Safitri, Dian; Arfi Nabila, Zahra; Azmi, Nora
2018-03-01
Musculoskeletal disorders (MSD) are among the ergonomic risks arising from manual activity, non-neutral posture and repetitive motion. The purpose of this study is to measure risk and implement ergonomic interventions to reduce the risk of MSD at the paper pallet assembly work station. Work posture was measured with the Ovako Working Posture Analysis (OWAS) method and the Rapid Entire Body Assessment (REBA) method, while work repetitiveness was measured using the Strain Index (SI) method. Assembly process operators were identified as having the highest risk level, with OWAS, Strain Index, and REBA scores of 4, 20.25, and 11, respectively. Ergonomic improvements are needed to reduce that level of risk. Proposed improvements were developed using the Quality Function Deployment (QFD) method applied with the Axiomatic House of Quality (AHOQ) and a morphological chart. As a result, the risk levels based on the OWAS and REBA scores decreased from 4 and 11 to 1 and 2, respectively. Biomechanical analysis of the operator also shows decreasing values for the L4-L5 moment, compression, joint shear, and joint moment strength.
Lo Re, Vincent; Haynes, Kevin; Forde, Kimberly A; Goldberg, David S; Lewis, James D; Carbonari, Dena M; Leidl, Kimberly B F; Reddy, K Rajender; Nezamzadeh, Melissa S; Roy, Jason; Sha, Daohang; Marks, Amy R; De Boer, Jolanda; Schneider, Jennifer L; Strom, Brian L; Corley, Douglas A
2015-12-01
Few studies have evaluated the ability of laboratory tests to predict risk of acute liver failure (ALF) among patients with drug-induced liver injury (DILI). We aimed to develop a highly sensitive model to identify DILI patients at increased risk of ALF. We compared its performance with that of Hy's Law, which predicts severity of DILI based on levels of alanine aminotransferase or aspartate aminotransferase and total bilirubin, and validated the model in a separate sample. We conducted a retrospective cohort study of 15,353 Kaiser Permanente Northern California members diagnosed with DILI from 2004 through 2010, with liver aminotransferase levels above the upper limit of normal and no pre-existing liver disease. Thirty ALF events were confirmed by medical record review. Logistic regression was used to develop prognostic models for ALF based on laboratory results measured at DILI diagnosis. External validation was performed in a sample of 76 patients with DILI at the University of Pennsylvania. Hy's Law identified patients who developed ALF with a high level of specificity (0.92) and negative predictive value (0.99), but a low level of sensitivity (0.68) and positive predictive value (0.02). The model we developed, comprising data on platelet count and total bilirubin level, identified patients with ALF with a C statistic of 0.87 (95% confidence interval [CI], 0.76-0.96) and enabled calculation of a risk score (Drug-Induced Liver Toxicity ALF Score). We found a cut-off score that identified patients at high risk for ALF with a sensitivity value of 0.91 (95% CI, 0.71-0.99) and a specificity value of 0.76 (95% CI, 0.75-0.77). This cut-off score identified patients at high risk for ALF with a high level of sensitivity (0.89; 95% CI, 0.52-1.00) in the validation analysis. Hy's Law identifies patients with DILI at high risk for ALF with low sensitivity but high specificity. We developed a model (the Drug-Induced Liver Toxicity ALF Score) based on platelet count and total bilirubin level that identifies patients at increased risk for ALF with high sensitivity. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
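For readers unfamiliar with the comparator used above, a Hy's Law check as it is commonly stated (aminotransferase at least 3 times the upper limit of normal together with total bilirubin at least 2 times the upper limit of normal) might be sketched as below. These thresholds are a general convention rather than values taken from this study, and the study's own platelet/bilirubin model coefficients are not reported in the abstract.

```python
def meets_hys_law(alt_x_uln: float, ast_x_uln: float, bilirubin_x_uln: float) -> bool:
    """Commonly cited Hy's Law laboratory criteria (an assumption here, not drawn
    from the abstract): ALT or AST >= 3x the upper limit of normal (ULN)
    together with total bilirubin >= 2x ULN. Inputs are multiples of ULN."""
    return max(alt_x_uln, ast_x_uln) >= 3.0 and bilirubin_x_uln >= 2.0

# Example: ALT 5.2x ULN, AST 2.1x ULN, total bilirubin 2.4x ULN -> True
print(meets_hys_law(5.2, 2.1, 2.4))
```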
Federico, Massimo; Bellei, Monica; Marcheselli, Luigi; Schwartz, Marc; Manni, Martina; Tarantino, Vittoria; Pileri, Stefano; Ko, Young-Hyeh; Cabrera, Maria E; Horwitz, Steven; Kim, Won S; Shustov, Andrei; Foss, Francine M; Nagler, Arnon; Carson, Kenneth; Pinter-Brown, Lauren C; Montoto, Silvia; Spina, Michele; Feldman, Tatyana A; Lechowicz, Mary J; Smith, Sonali M; Lansigan, Frederick; Gabus, Raul; Vose, Julie M; Advani, Ranjana H
2018-06-01
Different models to investigate the prognosis of peripheral T cell lymphoma not otherwise specified (PTCL-NOS) have been developed by means of retrospective analyses. Here we report on a new model designed on data from the prospective T Cell Project. Twelve covariates collected by the T Cell Project were analysed and a new model (T cell score), based on four covariates (serum albumin, performance status, stage and absolute neutrophil count) that maintained their prognostic value in multiple Cox proportional hazards regression analysis was proposed. Among patients registered in the T Cell Project, 311 PTCL-NOS were retained for study. At a median follow-up of 46 months, the median overall survival (OS) and progression-free survival (PFS) was 20 and 10 months, respectively. Three groups were identified at low risk (LR, 48 patients, 15%, score 0), intermediate risk (IR, 189 patients, 61%, score 1-2), and high risk (HiR, 74 patients, 24%, score 3-4), having a 3-year OS of 76% [95% confidence interval 61-88], 43% [35-51], and 11% [4-21], respectively (P < 0·001). Comparing the performance of the T cell score on OS to that of each of the previously developed models, it emerged that the new score had the best discriminant power. The new T cell score, based on clinical variables, identifies a group with very unfavourable outcomes. © 2018 The Authors. British Journal of Haematology published by John Wiley & Sons Ltd.
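The T cell score above combines four covariates into a 0-4 point total that defines the three reported risk groups. The exact cut-offs applied to each covariate are not given in the abstract, so the Python sketch below simply assumes one point per adverse factor and reproduces only the grouping logic.

```python
def t_cell_score(low_albumin: bool, poor_performance_status: bool,
                 advanced_stage: bool, high_neutrophil_count: bool) -> int:
    # Assumption for illustration: one point per adverse covariate (range 0-4).
    return sum([low_albumin, poor_performance_status, advanced_stage, high_neutrophil_count])

def risk_group(score: int) -> str:
    """Map the point total to the risk groups reported in the abstract."""
    if score == 0:
        return "low risk"
    if score <= 2:
        return "intermediate risk"
    return "high risk"

print(risk_group(t_cell_score(True, False, True, True)))  # -> "high risk"
```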
Bliden, Kevin P; Chaudhary, Rahul; Navarese, Eliano P; Sharma, Tushar; Kaza, Himabindu; Tantry, Udaya S; Gurbel, Paul A
2018-01-01
Conventional cardiovascular risk estimators based on clinical demographics have limited ability to predict coronary events. Markers of thrombogenicity and vascular function have not been explored in risk estimation of high-risk patients with coronary artery disease. We aimed to develop a clinical and biomarker score to predict 3-year adverse cardiovascular events. Four hundred eleven patients with an ejection fraction ≥40% who underwent coronary angiography and were found to have a luminal diameter stenosis ≥50% were included in the analysis. Thrombelastography indices and central pulse pressure (CPP) were determined at the time of catheterization. We identified predictors of death, myocardial infarction (MI) or stroke and developed a numerical ischemia risk score. The primary endpoint of cardiovascular death, MI or stroke occurred in 22 patients (5.4%). The factors associated with events were age, prior PCI or CABG, diabetes, CPP, and thrombin-induced platelet-fibrin clot strength, and these were included in the MAGMA-ischemia score. The MAGMA-ischemia score showed a c-statistic of 0.85 (95% confidence interval [CI] 0.80-0.87; p<0.001) for the primary endpoint. In the subset of patients who underwent revascularization, the c-statistic was 0.90 (p<0.001). Patients with a MAGMA-ischemia score greater than 5 had the highest risk of developing clinical events (hazard ratio for the primary endpoint: 13.9 (95% CI 5.8-33.1, p<0.001) and for the secondary endpoint: 4.8 (95% CI 2.3-9.6, p<0.001)). When compared to previous models, the MAGMA-ischemia score yielded higher discrimination. Inclusion of CPP and assessment of thrombogenicity in a novel score for patients with documented CAD enhanced the prediction of events. Copyright © 2017 Elsevier B.V. All rights reserved.
Ahn, Hye-Ran; Shin, Min-Ho; Yun, Woo-Jun; Kim, Hye-Yeon; Lee, Young-Hoon; Kweon, Sun-Seog; Rhee, Jung-Ae; Choi, Jin-Su; Choi, Seong-Woo
2011-03-01
To compare the predictability of the Framingham Risk Score (FRS), the United Kingdom Prospective Diabetes Study (UKPDS) risk engine, and the Systematic Coronary Risk Evaluation (SCORE) for carotid atherosclerosis and peripheral arterial disease in Korean type 2 diabetic patients. Among 1,275 registered type 2 diabetes patients in the health center, 621 subjects with type 2 diabetes participated in the study. Well-trained examiners measured the carotid intima-media thickness (IMT), carotid plaque, and ankle brachial index (ABI). Each subject's 10-year risk of coronary heart disease was calculated according to the FRS, UKPDS, and SCORE risk scores. These three risk scores were compared using the areas under the curve (AUC). The odds ratios (ORs) of all risk scores increased as the quartiles increased for plaque, IMT, and ABI. For plaque and IMT, the UKPDS risk score provided the highest OR (95% confidence interval) at 3.82 (2.36, 6.17) and at 6.21 (3.37, 11.45). For ABI, the SCORE risk estimation provided the highest OR at 7.41 (3.20, 17.18). However, no significant difference was detected for plaque, IMT, or ABI (P = 0.839, 0.313, and 0.113, respectively) when the AUCs of the three risk scores were compared. When we graphed the kernel density distributions of these three risk scores, UKPDS had a higher distribution than FRS and SCORE. No significant difference was observed when comparing the predictability of the FRS, UKPDS risk engine, and SCORE risk estimation for carotid atherosclerosis and peripheral arterial disease in Korean type 2 diabetic patients.
Laursen, Stig B; Dalton, Harry R; Murray, Iain A; Michell, Nick; Johnston, Matt R; Schultz, Michael; Hansen, Jane M; Schaffalitzky de Muckadell, Ove B; Blatchford, Oliver; Stanley, Adrian J
2015-01-01
Upper gastrointestinal hemorrhage (UGIH) is a common cause of hospital admission. The Glasgow Blatchford score (GBS) is an accurate determinant of patients' risk for hospital-based intervention or death. Patients with a GBS of 0 are at low risk for poor outcome and could be managed as outpatients. Some investigators therefore have proposed extending the definition of low-risk patients by using a higher GBS cut-off value, possibly with an age adjustment. We compared 3 thresholds of the GBS and 2 age-adjusted modifications to identify the optimal cut-off value or modification. We performed an observational study of 2305 consecutive patients presenting with UGIH at 4 centers (Scotland, England, Denmark, and New Zealand). The performance of each threshold and modification was evaluated based on sensitivity and specificity analyses, the proportion of low-risk patients identified, and outcomes of patients classified as low risk. There were differences in age (P = .0001), need for intervention (P < .0001), mortality (P < .015), and GBS (P = .0001) among sites. All systems identified low-risk patients with high levels of sensitivity (>97%). The GBS at cut-off values of ≤1 and ≤2, and both modifications, identified low-risk patients with higher levels of specificity (40%-49%) than the GBS with a cut-off value of 0 (22% specificity; P < .001). The GBS at a cut-off value of ≤2 had the highest specificity, but 3% of patients classified as low-risk patients had adverse outcomes. All GBS cut-off values, and score modifications, had low levels of specificity when tested in New Zealand (2.5%-11%). A GBS cut-off value of ≤1 and both GBS modifications identify almost twice as many low-risk patients with UGIH as a GBS at a cut-off value of 0. Implementing a protocol for outpatient management, based on one of these scores, could reduce hospital admissions by 15% to 20%. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
Sato, Masaya; Tateishi, Ryosuke; Yasunaga, Hideo; Horiguchi, Hiromasa; Matsui, Hiroki; Yoshida, Haruhiko; Fushimi, Kiyohide; Koike, Kazuhiko
2017-03-01
We aimed to develop a model for predicting in-hospital mortality of cirrhotic patients following major surgical procedures using a large sample of patients derived from a Japanese nationwide administrative database. We enrolled 2197 cirrhotic patients who underwent elective (n = 1973) or emergency (n = 224) surgery. We analyzed the risk factors for postoperative mortality and established a scoring system for predicting postoperative mortality in cirrhotic patients using a split-sample method. In-hospital mortality rates following elective or emergency surgery were 4.7% and 20.5%, respectively. In multivariate analysis, patient age, Child-Pugh (CP) class, Charlson Comorbidity Index (CCI), and duration of anesthesia in elective surgery were significantly associated with in-hospital mortality. In emergency surgery, CP class and duration of anesthesia were significant factors. Based on multivariate analysis in the training set (n = 987), the Adequate Operative Treatment for Liver Cirrhosis (ADOPT-LC) score that used patient age, CP class, CCI, and duration of anesthesia to predict in-hospital mortality following elective surgery was developed. This scoring system was validated in the testing set (n = 986) and produced an area under the curve of 0.881. We also developed iOS/Android apps to calculate ADOPT-LC scores to allow easy access to the current evidence in daily clinical practice. Patient age, CP class, CCI, and duration of anesthesia were identified as important risk factors for predicting postoperative mortality in cirrhotic patients. The ADOPT-LC score effectively predicts in-hospital mortality following elective surgery and may assist decisions regarding surgical procedures in cirrhotic patients based on a quantitative risk assessment. © 2016 The Authors Hepatology Research published by John Wiley & Sons Australia, Ltd on behalf of Japan Society of Hepatology.
OLGA- and OLGIM-based staging of gastritis using narrow-band imaging magnifying endoscopy.
Saka, Akiko; Yagi, Kazuyoshi; Nimura, Satoshi
2015-11-01
As atrophic gastritis and intestinal metaplasia as a result of Helicobacter pylori are considered risk factors for gastric cancer, it is important to assess their severity. In the West, the operative link for gastritis assessment (OLGA) and operative link for gastric intestinal metaplasia assessment (OLGIM) staging systems based on biopsy have been widely adopted. In Japan, however, narrow-band imaging (NBI)-magnifying endoscopic diagnosis of gastric mucosal inflammation, atrophy, and intestinal metaplasia has been reported to be fairly accurate. Therefore, we investigated the practicality of NBI-magnifying endoscopy (NBI-ME) for gastritis staging. We enrolled 55 patients, in whom NBI-ME was used to score the lesser curvature of the antrum (antrum) and the lesser curvature of the lower body (corpus). The NBI-ME score classification was established from images obtained beforehand, and then biopsy specimens taken from the observed areas were scored according to histological findings. The NBI-ME and histology scores were then compared. Furthermore, we assessed the NBI-ME and histology stages using a combination of scores for the antrum and corpus, and divided the stages into two risk groups: low and high. The degree to which the stage assessed by NBI-ME approximated that assessed by histology was then ascertained. Degree of correspondence between the NBI-ME and histology scores was 69.1% for the antrum and 72.7% for the corpus, and that between the high- and low-risk groups was 89.1%. Staging of gastritis using NBI-ME approximates that based on histology, and would be a practical alternative to the latter. © 2015 The Authors. Digestive Endoscopy © 2015 Japan Gastroenterological Endoscopy Society.
Lee, Chang-Hoon; Lee, Jinwoo; Park, Young Sik; Lee, Sang-Min; Yim, Jae-Joon; Kim, Young Whan; Han, Sung Koo; Yoo, Chul-Gyu
2015-09-01
In assigning patients with chronic obstructive pulmonary disease (COPD) to subgroups according to the updated guidelines of the Global Initiative for Chronic Obstructive Lung Disease, discrepancies have been noted between the COPD assessment test (CAT) criteria and modified Medical Research Council (mMRC) criteria. We investigated the determinants of symptom and risk groups and sought to identify a better CAT criterion. This retrospective study included COPD patients seen between June 20, 2012, and December 5, 2012. The CAT score that can accurately predict an mMRC grade ≥ 2 versus < 2 was evaluated by comparing the area under the receiver operating curve (AUROC) and by classification and regression tree (CART) analysis. Among 428 COPD patients, the percentages of patients classified into subgroups A, B, C, and D were 24.5%, 47.2%, 4.2%, and 24.1% based on CAT criteria and 49.3%, 22.4%, 8.9%, and 19.4% based on mMRC criteria, respectively. More than 90% of the patients who met the mMRC criteria for the 'more symptoms group' also met the CAT criteria. AUROC and CART analyses suggested that a CAT score ≥ 15 predicted an mMRC grade ≥ 2 more accurately than the current CAT score criterion. During follow-up, patients with CAT scores of 10 to 14 did not have a different risk of exacerbation versus those with CAT scores < 10, but they did have a lower exacerbation risk compared to those with CAT scores of 15 to 19. A CAT score ≥ 15 is a better indicator for the 'more symptoms group' in the management of COPD patients.
Perrotti, Andrea; Gatti, Giuseppe; Dorigo, Enrica; Sinagra, Gianfranco; Pappalardo, Aniello; Chocron, Sidney
The Gatti score is a weighted scoring system based on risk factors for deep sternal wound infection (DSWI) that was created in an Italian center to predict DSWI risk after bilateral internal thoracic artery (BITA) grafting. No external evaluation based on validation samples derived from other surgical centers has been performed. The aim of this study is to perform this validation. During 2015, BITA grafts were used as skeletonized conduits in all 255 consecutive patients with multi-vessel coronary disease who underwent isolated coronary bypass surgery at the Department of Thoracic and Cardio-Vascular Surgery, University Hospital Jean Minjoz, Besançon, France. Baseline characteristics, operative data, and immediate outcomes of every patient were collected prospectively. A DSWI risk score was assigned to each patient pre-operatively. The discrimination power of both models of the Gatti score, pre-operative and combined, was assessed by calculating the area under the receiver operating characteristic curve. Fourteen (5.5%) patients had DSWI. Major differences in both the baseline characteristics of patients and the surgical techniques were found between this series and the original series from which the Gatti score was derived. The area under the receiver operating characteristic curve was 0.78 (95% confidence interval: 0.64-0.92) for the pre-operative model and 0.84 (95% confidence interval: 0.69-0.98) for the combined model. The Gatti score proved to be effective even in a cohort of French patients despite major differences from the original Italian series. Multi-center validation studies must be performed before introducing the score into clinical practice.
Stauder, Adrienne; Nistor, Katalin; Zakor, Tünde; Szabó, Anita; Nistor, Anikó; Ádám, Szilvia; Konkolÿ Thege, Barna
2017-12-01
To determine national reference values for the Copenhagen Psychosocial Questionnaire (COPSOQ II) across occupational sectors and develop a composite score to estimate the cumulative effect of multiple work-related stressors, in order to facilitate the implementation of occupational health directives on psychosocial risk assessment. Cross-sectional data was collected via an online questionnaire. The sample included 13,104 individuals and was representative of the general Hungarian adult working population in terms of gender, age, education, and occupation. Mean scores were calculated for 18 scales on work environment and for 5 outcome scales of the COPSOQ II across 18 occupational sectors. We analyzed the association between a composite psychosocial risk score (CPRS), reflecting severity of exposure to multiple risk factors, and high stress, burnout, sleep troubles, and poor self-rated health. We found occupation-related differences in the mean scores on all COPSOQ II scales. Scores on the "Stress" scale ranged from 47.9 to 56.2, with the highest mean score in accommodation and food services sector. Variability was greatest with respect to emotional demands (range 40.3-67.6) and smallest with respect to role clarity (range 70.3-75.7). The prevalence of negative health outcomes increased with the CPRS. Five risk categories were formed, for which the odds ratio of negative outcomes ranged from 1.6 to 56.5. The sector-specific psychosocial risk profiles covering 18 work environmental factors can be used as a reference in organizational surveys and international comparisons. The CPRS proved to be a powerful predictor of self-reported negative health outcomes.
Braulke, Friederike; Platzbecker, Uwe; Müller-Thomas, Catharina; Götze, Katharina; Germing, Ulrich; Brümmendorf, Tim H.; Nolte, Florian; Hofmann, Wolf-Karsten; Giagounidis, Aristoteles A. N.; Lübbert, Michael; Greenberg, Peter L.; Bennett, John M.; Solé, Francesc; Mallo, Mar; Slovak, Marilyn L.; Ohyashiki, Kazuma; Le Beau, Michelle M.; Tüchler, Heinz; Pfeilstöcker, Michael; Nösslinger, Thomas; Hildebrandt, Barbara; Shirneshan, Katayoon; Aul, Carlo; Stauder, Reinhard; Sperr, Wolfgang R.; Valent, Peter; Fonatsch, Christa; Trümper, Lorenz; Haase, Detlef; Schanz, Julie
2015-01-01
International Prognostic Scoring Systems are used to determine the individual risk profile of myelodysplastic syndrome patients. For the assessment of International Prognostic Scoring Systems, an adequate chromosome banding analysis of the bone marrow is essential. Cytogenetic information is not available for a substantial number of patients (5%–20%) with dry marrow or an insufficient number of metaphase cells. For these patients, a valid risk classification is impossible. In the study presented here, the International Prognostic Scoring Systems were validated based on fluorescence in situ hybridization analyses using extended probe panels applied to cluster of differentiation 34 positive (CD34+) peripheral blood cells of 328 MDS patients of our prospective multicenter German diagnostic study and compared to chromosome banding results of 2902 previously published patients with myelodysplastic syndromes. For cytogenetic risk classification by fluorescence in situ hybridization analyses of CD34+ peripheral blood cells, the groups differed significantly for overall and leukemia-free survival by uni- and multivariate analyses without discrepancies between treated and untreated patients. Including cytogenetic data of fluorescence in situ hybridization analyses of peripheral CD34+ blood cells (instead of bone marrow banding analysis) into the complete International Prognostic Scoring System assessment, the prognostic risk groups separated significantly for overall and leukemia-free survival. Our data show that a reliable stratification to the risk groups of the International Prognostic Scoring Systems is possible from peripheral blood in patients with missing chromosome banding analysis by using a comprehensive probe panel (clinicaltrials.gov identifier:01355913). PMID:25344522
Chughtai, Abrar Ahmad; MacIntyre, C. Raina
2017-01-01
The 2014 Ebola virus disease (EVD) outbreak affected several countries worldwide, including six West African countries. It was the largest Ebola epidemic in history and the first to affect multiple countries simultaneously. Significant national and international delay in response to the epidemic resulted in 28,652 cases and 11,325 deaths. The aim of this study was to develop a risk analysis framework to prioritize rapid response for situations of high risk. Based on findings from the literature, sociodemographic features of the affected countries, and documented epidemic data, a risk scoring framework using 18 criteria was developed. The framework includes measures of socioeconomics, health systems, geographical factors, cultural beliefs, and traditional practices. The three worst affected West African countries (Guinea, Sierra Leone, and Liberia) had the highest risk scores. The scores were much lower in developed countries that experienced Ebola compared to West African countries. A more complex risk analysis framework using 18 measures was compared with a simpler one with 10 measures, and both predicted risk equally well. A simple risk scoring system can incorporate measures of hazard and impact that may otherwise be neglected in prioritizing outbreak response. This framework can be used by public health personnel as a tool to prioritize outbreak investigation and flag outbreaks with potentially catastrophic outcomes for urgent response. Such a tool could mitigate costly delays in epidemic response. PMID:28810081
Phung, Dung; Talukder, Mohammad Radwanur Rahman; Rutherford, Shannon; Chu, Cordia
2016-10-01
To develop a prediction score scheme useful for prevention practitioners and authorities to implement dengue preparedness and controls in the Mekong Delta region (MDR). We applied a spatial scan statistic to identify high-risk dengue clusters in the MDR and used generalised linear-distributed lag models to examine climate-dengue associations using dengue case records and meteorological data from 2003 to 2013. The significant predictors were collapsed into categorical scales, and the β-coefficients of predictors were converted to prediction scores. The score scheme was validated for predicting dengue outbreaks using ROC analysis. The north-eastern MDR was identified as the high-risk cluster. A 1 °C increase in temperature at lag 1-4 and 5-8 weeks increased the dengue risk 11% (95% CI, 9-13) and 7% (95% CI, 6-8), respectively. A 1% rise in humidity increased dengue risk 0.9% (95% CI, 0.2-1.4) at lag 1-4 and 0.8% (95% CI, 0.2-1.4) at lag 5-8 weeks. Similarly, a 1-mm increase in rainfall increased dengue risk 0.1% (95% CI, 0.05-0.16) at lag 1-4 and 0.11% (95% CI, 0.07-0.16) at lag 5-8 weeks. The predicted scores performed with high accuracy in diagnosing the dengue outbreaks (96.3%). This study demonstrates the potential usefulness of a dengue prediction score scheme derived from complex statistical models for high-risk dengue clusters. We recommend a further study to examine the possibility of incorporating such a score scheme into the dengue early warning system in similar climate settings. © 2016 John Wiley & Sons Ltd.
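The abstract notes that the β-coefficients of the significant climate predictors were converted into prediction scores but does not spell out the conversion. One common approach, sketched in Python below with purely illustrative coefficients and a hypothetical alert threshold, is to scale the coefficients and round them to integer points that are then summed.

```python
def betas_to_scores(betas: dict, points_per_unit: float = 10.0) -> dict:
    """Convert regression coefficients for categorical climate predictors into
    integer prediction scores by scaling and rounding (one common approach;
    the abstract does not spell out the exact conversion used in the study)."""
    return {name: round(beta * points_per_unit) for name, beta in betas.items()}

# Illustrative coefficients only (roughly matching the reported percent increases)
betas = {"temp_lag_1_4_high": 0.11, "temp_lag_5_8_high": 0.07,
         "humidity_lag_1_4_high": 0.09, "rain_lag_1_4_high": 0.05}
scores = betas_to_scores(betas)
alert_threshold = 3  # hypothetical cut-off on the summed score for an outbreak alert
print(scores, sum(scores.values()) >= alert_threshold)
```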
Population Survey of Knowledge about Oral Cancer and Related Factors in the Capital of Iran.
Azimi, Somayyeh; Ghorbani, Zahra; Tennant, Marc; Kruger, Estie; Safiaghdam, Hannaneh; Rafieian, Nasrin
2017-08-24
Knowledge about oral cancer risk factors and signs is thought to improve prevention and early diagnosis and, in turn, increase survival. In this population-based survey, knowledge about oral cancer was assessed in Iran. A total of 1800 self-administered questionnaires (collecting sociodemographic data and questions regarding oral cancer risk factors and signs) were distributed through random sampling. Final scores ranged from 0 to 15 for the risk factors and from 0 to 11 for the signs. Scores below the median indicated a low level of knowledge, scores in the third quartile of correct answers indicated a moderate level of knowledge, and scores in the upper quartile indicated a high level of knowledge. Statistical tests were used to analyse knowledge levels across sociodemographic categories. A total of 1312 participants completed the questionnaires. The average knowledge score was 5.3 ± 3.0 for risk factors and 4.5 ± 2.9 for signs. Overall, 75% and 56%, respectively, were able to identify the major risk factors (smoking and alcohol); 23.5% could not name any related signs and symptoms. Dividing scores into quartiles indicated that three out of four people had "low" knowledge about risk factors and 58% had "low" knowledge about signs and symptoms. Females and highly educated people had more knowledge of oral cancer. A significant association was found between occupation and level of knowledge (P = 0.001). This survey revealed that public knowledge of oral cancer is not satisfactory in Iran. Efforts should be made to inform and educate people about risk factors, initial clinical presentation, and symptoms, in order to improve prevention and promote early diagnosis.
Cardiovascular risk scores for coronary atherosclerosis.
Yalcin, Murat; Kardesoglu, Ejder; Aparci, Mustafa; Isilak, Zafer; Uz, Omer; Yiginer, Omer; Ozmen, Namik; Cingozbay, Bekir Yilmaz; Uzun, Mehmet; Cebeci, Bekir Sitki
2012-10-01
The objective of this study was to compare frequently used cardiovascular risk scores in predicting the presence of coronary artery disease (CAD) and 3-vessel disease. In 350 consecutive patients (218 men and 132 women) who underwent coronary angiography, the cardiovascular risk level was determined using the Framingham Risk Score (FRS), the Modified Framingham Risk Score (MFRS), the Prospective Cardiovascular Münster (PROCAM) score, and the Systematic Coronary Risk Evaluation (SCORE). The area under the curve for receiver operating characteristic curves showed that the FRS had more predictive value than the other scores for CAD (area under curve, 0.76; P ≤ 0.001), but all scores had good specificity and positive predictive value. For 3-vessel disease, the FRS also had better predictive value than the other scores (area under curve, 0.74; P ≤ 0.001), but all scores had good specificity and negative predictive value. The risk scores (FRS, MFRS, PROCAM, and SCORE) may predict the presence and severity of coronary atherosclerosis. The FRS had better predictive value than the other scores.
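As an illustration of the comparison described above, here is a minimal sketch of computing the area under the ROC curve for several risk scores against an angiographic CAD outcome; the scores and outcome are simulated, not study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 350
cad = rng.random(n) < 0.5                        # hypothetical angiographic CAD status

def simulated_score(strength):
    # A risk score that separates CAD from non-CAD with the given signal strength.
    return cad * strength + rng.normal(size=n)

scores = {"FRS": simulated_score(1.2), "MFRS": simulated_score(1.0),
          "PROCAM": simulated_score(0.9), "SCORE": simulated_score(0.8)}

for name, s in scores.items():
    print(f"{name}: AUC = {roc_auc_score(cad, s):.2f}")
```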
Surka, Sam; Edirippulige, Sisira; Steyn, Krisela; Gaziano, Thomas; Puoane, Thandi; Levitt, Naomi
2014-09-01
Primary prevention of cardiovascular disease (CVD) by identifying individuals at risk is a well-established but costly strategy when based on measurements that depend on laboratory analyses. A non-laboratory, paper-based CVD risk assessment chart tool has previously been developed to make screening more affordable in developing countries. Task shifting to community health workers (CHWs) is being investigated to further scale CVD risk screening. This study aimed to develop a mobile phone CVD risk assessment application and to evaluate its impact on CHW training and the duration of screening for CVD in the community by CHWs. A feature phone application was developed using the open source online platform CommCare©. CHWs (n=24) were trained to use both paper-based and mobile phone CVD risk assessment tools. They were randomly allocated to use one of the risk tools to screen 10-20 community members and then crossed over to screen the same number using the alternate risk tool. The impact on CHW training time, screening time and margin of error in calculating risk scores was recorded. A focus group discussion evaluated the experiences of CHWs using the two tools. The training time was 12.3 h for the paper-based chart tool and 3 h for the mobile phone application. 537 people were screened. The mean screening time was 36 min (SD=12.6) using the paper-based chart tool and 21 min (SD=8.71) using the mobile phone application, p<0.0001. Incorrect calculations (4.3% of average systolic BP measurements, 10.4% of BMI and 3.8% of CVD risk scores) were found when using the paper-based chart tool, while all the mobile phone calculations were correct. Qualitative findings from the focus group discussion corresponded with the findings of the pilot study. The reduction in CHW training time and CVD risk screening time, the absence of errors in calculation of the CVD risk score, and end-user satisfaction when using the mobile phone application have implications for the adoption and sustainability of this primary prevention strategy to identify people with high CVD risk who can be referred for appropriate diagnosis and treatment. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Surka, Sam; Edirippulige, Sisira; Steyn, Krisela; Gaziano, Thomas; Puoane, Thandi; Levitt, Naomi
2014-01-01
Background Primary prevention of cardiovascular disease (CVD) by identifying individuals at risk is a well-established but costly strategy when based on measurements that depend on laboratory analyses. A non-laboratory, paper-based CVD risk assessment chart tool has previously been developed to make screening more affordable in developing countries. Task shifting to community health workers (CHWs) is being investigated to further scale CVD risk screening. This study aimed to develop a mobile phone CVD risk assessment application and to evaluate its impact on CHW training and the duration of screening for CVD in the community by CHWs. Methods A feature phone application was developed using the open source online platform, CommCare©. CHWs (n=24) were trained to use both paper-based and mobile phone CVD risk assessment tools. They were randomly allocated to use one of the risk tools to screen 10-20 community members and then crossed over to screen the same number using the alternate risk tool. The impact on CHW training time, screening time and margin of error in calculating risk scores was recorded. A focus group discussion evaluated the experiences of CHWs using the two tools. Results The training time was 12.3 hours for the paper-based chart tool and 3 hours for the mobile phone application. 537 people were screened. The mean screening time was 36 minutes (SD=12.6) using the paper-based chart tool and 21 minutes (SD=8.71) using the mobile phone application, p<0.0001. Incorrect calculations (4.3% of average systolic BP measurements, 10.4% of BMI and 3.8% of CVD risk scores) were found when using the paper-based chart tool, while all the mobile phone calculations were correct. Qualitative findings from the focus group discussion corresponded with the findings of the pilot study. Conclusion The reduction in CHW training time and CVD risk screening time, the absence of errors in calculation of the CVD risk score, and end-user satisfaction when using the mobile phone application have implications for the adoption and sustainability of this primary prevention strategy to identify people with high CVD risk who can be referred for appropriate diagnosis and treatment. PMID:25002305
Ajisegiri, Whenayon Simeon; Chughtai, Abrar Ahmad; MacIntyre, C Raina
2018-03-01
The 2014 Ebola virus disease (EVD) outbreak affected several countries worldwide, including six West African countries. It was the largest Ebola epidemic in history and the first to affect multiple countries simultaneously. Significant national and international delays in response to the epidemic resulted in 28,652 cases and 11,325 deaths. The aim of this study was to develop a risk analysis framework to prioritize rapid response for situations of high risk. Based on findings from the literature, sociodemographic features of the affected countries, and documented epidemic data, a risk scoring framework using 18 criteria was developed. The framework includes measures of socioeconomics, health systems, geographical factors, cultural beliefs, and traditional practices. The three worst affected West African countries (Guinea, Sierra Leone, and Liberia) had the highest risk scores. The scores were much lower in developed countries that experienced Ebola compared to West African countries. A more complex risk analysis framework using 18 measures was compared with a simpler one with 10 measures, and both predicted risk equally well. A simple risk scoring system can incorporate measures of hazard and impact that may otherwise be neglected in prioritizing outbreak response. This framework can be used by public health personnel as a tool to prioritize outbreak investigation and flag outbreaks with potentially catastrophic outcomes for urgent response. Such a tool could mitigate costly delays in epidemic response. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
Esteve-Pastor, María Asunción; Rivera-Caravaca, José Miguel; Roldan, Vanessa; Vicente, Vicente; Valdés, Mariano; Marín, Francisco; Lip, Gregory Y H
2017-10-05
Risk scores in patients with atrial fibrillation (AF) based on clinical factors alone generally have only modest value for identifying high-risk patients who sustain events. Biomarkers might be an attractive prognostic tool to improve bleeding risk prediction. The new ABC-Bleeding score performed better than the HAS-BLED score in a clinical trial cohort but has not been externally validated. The aim of this study was to analyze the predictive performance of the ABC-Bleeding score compared to the HAS-BLED score in an independent "real-world" cohort of anticoagulated AF patients with long-term follow-up. We enrolled 1,120 patients stable on vitamin K antagonist treatment. The HAS-BLED and ABC-Bleeding scores were quantified. Predictive values were compared by c-indexes, integrated discrimination improvement (IDI), net reclassification improvement (NRI), and decision curve analysis (DCA). Median HAS-BLED score was 2 (IQR 2-3) and median ABC-Bleeding was 16.5 (IQR 14.3-18.6). After 6.5 years of follow-up, 207 (2.84%/year) patients had major bleeding events, of which 65 (0.89%/year) had intracranial haemorrhage (ICH) and 85 (1.17%/year) had gastrointestinal bleeding events (GIB). The c-index of HAS-BLED was significantly higher than that of ABC-Bleeding for major bleeding (0.583 vs 0.518; p=0.025), GIB (0.596 vs 0.519; p=0.017) and the composite of ICH-GIB (0.593 vs 0.527; p=0.030). NRI showed a significant negative reclassification for major bleeding and for the composite of ICH-GIB with the ABC-Bleeding score compared to HAS-BLED. Using DCAs, the HAS-BLED score gave an approximate net benefit of 4% over the ABC-Bleeding score. In conclusion, in the first "real-world" validation of the ABC-Bleeding score, HAS-BLED performed significantly better than the ABC-Bleeding score in predicting major bleeding, GIB and the composite of GIB and ICH.
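For readers unfamiliar with the "%/year" notation, the annualised rates quoted above follow directly from events per 100 patient-years; the short sketch below reproduces the abstract's figures approximately by treating the median 6.5-year follow-up as if it applied to every patient (an assumption, since individual follow-up times vary).

```python
# Annualised event rate = events per 100 patient-years.
n_patients = 1120
median_followup_years = 6.5          # approximation: median applied to all patients
events = {"major bleeding": 207, "intracranial haemorrhage": 65, "gastrointestinal bleeding": 85}

patient_years = n_patients * median_followup_years
for name, k in events.items():
    print(f"{name}: {100 * k / patient_years:.2f} events per 100 patient-years")
```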
2011-01-01
Introduction To develop a scoring method for quantifying nutrition risk in the intensive care unit (ICU). Methods A prospective, observational study of patients expected to stay > 24 hours. We collected data on key candidate variables considered for inclusion in the score: age, baseline APACHE II score, baseline SOFA score, number of comorbidities, days from hospital admission to ICU admission, body mass index (BMI) < 20, estimated % oral intake in the week prior, weight loss in the last 3 months, and serum interleukin-6 (IL-6), procalcitonin (PCT), and C-reactive protein (CRP) levels. Approximate quintiles of each variable were assigned points based on the strength of their association with 28-day mortality. Results A total of 597 patients were enrolled in this study. Based on statistical significance in the multivariable model, the final score used all candidate variables except BMI, CRP, PCT, estimated percentage oral intake and weight loss. As the score increased, so did mortality rate and duration of mechanical ventilation. Logistic regression demonstrated that nutritional adequacy modifies the association between the score and 28-day mortality (p = 0.01). Conclusions This scoring algorithm may be helpful in identifying critically ill patients most likely to benefit from aggressive nutrition therapy. PMID:22085763
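A minimal sketch of the general approach described (approximate quintiles of a candidate variable, with points awarded in proportion to the strength of the association with 28-day mortality) is shown below; the variable, the simulated data, and the point-scaling rule are hypothetical, not the published scoring algorithm.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 597
apache = rng.normal(22, 7, n)                                # hypothetical APACHE II values
died_28d = rng.random(n) < 1 / (1 + np.exp(-(apache - 22) / 7))

quintile = pd.qcut(apache, 5, labels=False)                  # approximate quintiles, coded 0..4
mortality_by_q = pd.Series(died_28d).groupby(quintile).mean()

# One simple (hypothetical) rule for turning association strength into points:
# scale each quintile's excess mortality over the lowest quintile into small integers.
points = ((mortality_by_q - mortality_by_q.iloc[0]) / 0.1).round().astype(int).clip(lower=0)
print(pd.DataFrame({"observed 28-day mortality": mortality_by_q.round(2), "points": points}))
```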
Butler, Robert J; Lehr, Michael E; Fink, Michael L; Kiesel, Kyle B; Plisky, Phillip J
2013-09-01
Field-expedient screening tools that can identify individuals at an elevated risk for injury are needed to minimize time loss in American football players. Previous research has suggested that poor dynamic balance may be associated with an elevated risk for injury in athletes; however, this has yet to be examined in college football players. To determine if dynamic balance deficits are associated with an elevated risk of injury in collegiate football players. It was hypothesized that football players with lower performance and increased asymmetry in dynamic balance would be at an elevated risk for sustaining a noncontact lower extremity injury. Prospective cohort study. Fifty-nine collegiate American football players volunteered for this study. Demographic information, injury history, and dynamic balance testing performance were collected, and noncontact lower extremity injuries were recorded over the course of the season. Receiver operating characteristic curves were calculated based on performance on the Star Excursion Balance Test (SEBT), including composite score and asymmetry, to determine the population-specific risk cut-off point. Relative risk was then calculated based on these variables, as well as previous injury. A cut-off point of 89.6% composite score on the SEBT optimized the sensitivity (100%) and specificity (71.7%). A college football player who scored below 89.6% was 3.5 times more likely to sustain an injury. Poor performance on the SEBT may be related to an increased risk for sustaining a noncontact lower extremity injury over the course of a competitive American football season. College football players should be screened preseason using the SEBT to identify those at an elevated risk for injury based on dynamic balance performance, so that injury mitigation strategies can be implemented for this specific subgroup of athletes.
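A population-specific cut-off such as the 89.6% composite score is typically read off an ROC curve; the sketch below shows one common way to do this (maximising Youden's J) and how the relative risk is then computed, using simulated data rather than the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
n = 59
composite = rng.normal(92, 4, n)                   # hypothetical SEBT composite scores (%)
# Hypothetical injuries, more likely at lower composite scores.
injured = rng.random(n) < 1 / (1 + np.exp((composite - 89) / 2))

# Lower scores mean higher risk, so negate the score for the ROC analysis.
fpr, tpr, thr = roc_curve(injured, -composite)
best = 1 + np.argmax((tpr - fpr)[1:])              # maximise Youden's J (skip the trivial point)
cutoff = -thr[best]

below = composite <= cutoff
risk_below, risk_above = injured[below].mean(), injured[~below].mean()
rr = risk_below / risk_above if risk_above > 0 else float("inf")
print(f"cut-off {cutoff:.1f}%: sensitivity {tpr[best]:.2f}, "
      f"specificity {1 - fpr[best]:.2f}, relative risk {rr:.1f}")
```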
Irvine, Katharine M; Wockner, Leesa F; Shanker, Mihir; Fagan, Kevin J; Horsfall, Leigh U; Fletcher, Linda M; Ungerer, Jacobus P J; Pretorius, Carel J; Miller, Gregory C; Clouston, Andrew D; Lampe, Guy; Powell, Elizabeth E
2016-03-01
Current tools for risk stratification of chronic liver disease subjects are limited. We aimed to determine whether the serum-based ELF (Enhanced Liver Fibrosis) test predicted liver-related clinical outcomes, or progression to advanced liver disease, and to compare the performance of ELF to liver biopsy and non-invasive algorithms. Three hundred patients with ELF scores assayed at the time of liver biopsy were followed up (median 6.1 years) for liver-related clinical outcomes (n = 16) and clear evidence of progression to advanced fibrosis (n = 18), by review of medical records and clinical data. Fourteen of 73 (19.2%) patients with ELF score indicative of advanced fibrosis (≥9.8, the manufacturer's cut-off) had a liver-related clinical outcome, compared to only two of 227 (<1%) patients with ELF score <9.8. In contrast, the simple scores APRI and FIB-4 would only have predicted subsequent decompensation in six and four patients respectively. A unit increase in ELF score was associated with a 2.53-fold increased risk of a liver-related event (adjusted for age and stage of fibrosis). In patients without advanced fibrosis on biopsy at recruitment, 55% (10/18) with an ELF score ≥9.8 showed clear evidence of progression to advanced fibrosis (after an average 6 years), whereas only 3.5% of those with an ELF score <9.8 (8/207) progressed (average 14 years). In these subjects, a unit increase in ELF score was associated with a 4.34-fold increased risk of progression. The ELF score is a valuable tool for risk stratification of patients with chronic liver disease. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Sekiguchi, Masau; Kakugawa, Yasuo; Matsumoto, Minori; Matsuda, Takahisa
2018-01-22
Risk stratification of screened populations could help improve colorectal cancer (CRC) screening. Use of the modified Asia-Pacific Colorectal Screening (APCS) score has been proposed in the Asia-Pacific region. This study was performed to build a new useful scoring model for CRC screening. Data were reviewed from 5218 asymptomatic Japanese individuals who underwent their first screening colonoscopy. Multivariate logistic regression was used to investigate risk factors for advanced colorectal neoplasia (ACN), and a new scoring model for the prediction of ACN was developed based on the results. The discriminatory capability of the new model and the modified APCS score were assessed and compared. Internal validation was also performed. ACN was detected in 225 participants. An 8-point scoring model for the prediction of ACN was developed using five independent risk factors for ACN (male sex, older age, presence of two or more first-degree relatives with CRC, body mass index of > 22.5 kg/m², and smoking history of > 18.5 pack-years). The prevalence of ACN was 1.6% (34/2172), 5.3% (127/2419), and 10.2% (64/627) in participants with scores of < 3, ≥ 3 to < 5, and ≥ 5, respectively. The c-statistic of the scoring model was 0.70 (95% confidence interval, 0.67-0.73) in both the development and internal validation sets, and this value was higher than that of the modified APCS score [0.68 (95% confidence interval, 0.65-0.71), P = 0.03]. We built a new simple scoring model for prediction of ACN in a Japanese population that could stratify the screened population into low-, moderate-, and high-risk groups.
Joundi, Raed A; Cipriano, Lauren E; Sposato, Luciano A; Saposnik, Gustavo
2016-05-01
The CHA2DS2-VASc score aims to improve risk stratification of ischemic stroke among patients with atrial fibrillation to identify those who can safely forego oral anticoagulation. Oral anticoagulation treatment guidelines remain uncertain for CHA2DS2-VASc score of 1. We conducted a systematic review and meta-analysis of the risk of ischemic stroke for patients with atrial fibrillation and CHA2DS2-VASc score of 0, 1, or 2 not treated with oral anticoagulation. We searched MEDLINE, Embase, PubMed, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Web of Science from the start of the database up until April 15, 2015. We included studies that stratified the risk of ischemic stroke by CHA2DS2-VASc score for patients with nonvalvular atrial fibrillation. We estimated the summary annual rate of ischemic stroke using random effects meta-analyses and compared the estimated stroke rates with published net-benefit thresholds for initiating anticoagulants. 1162 abstracts were retrieved, of which 10 met all inclusion criteria for the study. There was substantial heterogeneity among studies. The summary estimate for the annual risk of ischemic stroke was 1.61% (95% confidence interval 0%-3.23%) for CHA2DS2-VASc score of 1, meeting the theoretical threshold for using novel oral anticoagulants (0.9%), but below the threshold for warfarin (1.7%). The summary incident risk of ischemic stroke was 0.68% (95% confidence interval 0.12%-1.23%) for CHA2DS2-VASc score of 0 and 2.49% (95% confidence interval 1.16%-3.83%) for CHA2DS2-VASc score of 2. Our meta-analysis of ischemic stroke risk in atrial fibrillation patients suggests that those with CHA2DS2-VASc score of 1 may be considered for a novel oral anticoagulant, but because of high heterogeneity, the decision should be based on individual patient characteristics. © 2016 American Heart Association, Inc.
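The pooled annual rates above come from random-effects meta-analysis; the following is a minimal sketch of DerSimonian-Laird pooling of log annual stroke rates, with hypothetical per-study event counts and person-years rather than the included studies' data.

```python
import numpy as np

events = np.array([12, 30, 8, 21, 15])           # hypothetical events per study
person_years = np.array([900, 2500, 400, 1800, 1200])

y = np.log(events / person_years)                # log annual rate per study
v = 1.0 / events                                 # approximate variance of a log rate

# DerSimonian-Laird estimate of between-study variance tau^2.
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights and pooled estimate with a 95% confidence interval.
w_star = 1.0 / (v + tau2)
y_pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
lo, hi = y_pooled - 1.96 * se, y_pooled + 1.96 * se
print(f"pooled annual rate: {100 * np.exp(y_pooled):.2f}% "
      f"(95% CI {100 * np.exp(lo):.2f}%-{100 * np.exp(hi):.2f}%), tau^2 = {tau2:.3f}")
```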
Diagnostic performance of an acoustic-based system for coronary artery disease risk stratification.
Winther, Simon; Nissen, Louise; Schmidt, Samuel Emil; Westra, Jelmer Sybren; Rasmussen, Laust Dupont; Knudsen, Lars Lyhne; Madsen, Lene Helleskov; Kirk Johansen, Jane; Larsen, Bjarke Skogstad; Struijk, Johannes Jan; Frost, Lars; Holm, Niels Ramsing; Christiansen, Evald Høj; Botker, Hans Erik; Bøttcher, Morten
2018-06-01
Diagnosing coronary artery disease (CAD) continues to require substantial healthcare resources. Acoustic analysis of transcutaneous heart sounds, reflecting cardiac movement and intracoronary turbulence due to obstructive coronary disease, could potentially change this. The aim of this study was thus to test the diagnostic accuracy of a new portable acoustic device for detection of CAD. We consecutively included 1675 patients with low to intermediate likelihood of CAD who had been referred for cardiac CT angiography. If significant obstruction was suspected in any coronary segment, patients were referred to invasive angiography and fractional flow reserve (FFR) assessment. Heart sound analysis was performed in all patients. A predefined acoustic CAD-score algorithm was evaluated; subsequently, we developed and validated an updated CAD-score algorithm that included both acoustic features and clinical risk factors. Low risk is indicated by a CAD-score value ≤20. Haemodynamically significant CAD assessed from FFR was present in 145 (10.0%) patients. In the entire cohort, the predefined CAD-score had a sensitivity of 63% and a specificity of 44%. In total, 50% had an updated CAD-score value ≤20. At this cut-off, sensitivity was 81% (95% CI 73% to 87%), specificity 53% (95% CI 50% to 56%), positive predictive value 16% (95% CI 13% to 18%) and negative predictive value 96% (95% CI 95% to 98%) for diagnosing haemodynamically significant CAD. Sound-based detection of CAD enables risk stratification superior to clinical risk scores. With a negative predictive value of 96%, this new acoustic rule-out system could potentially supplement clinical assessment to guide decisions on the need for further diagnostic investigation. ClinicalTrials.gov identifier NCT02264717; Results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Modern risk stratification in coronary heart disease.
Ginghina, C; Bejan, I; Ceck, C D
2011-11-14
The prevalence and impact of cardiovascular diseases in the world are growing. There are 2 million deaths due to cardiovascular disease each year in the European Union; the main cause of death is coronary heart disease, responsible for 16% of deaths in men and 15% in women. The prevalence of cardiovascular disease in Romania is estimated at 7 million people, of whom 2.8 million have ischemic heart disease. In this epidemiological context, risk stratification is required to individualize therapeutic strategies for each patient. The continuing evolution of diagnostic and treatment techniques combines personalized medicine with the trend toward standardized therapeutic management based on guidelines and consensus documents, which are in constant update. The guidelines used in clinical practice have involved risk stratification and identification of patient groups in whom the risk-benefit ratio of using new diagnostic and therapeutic techniques is positive. The presence of several risk factors may indicate a greater total risk than a single, markedly elevated risk factor. Modern trends in risk stratification of patients with coronary heart disease are polarized between the use of simple data versus complex scores, traditional data versus new risk factors, and generally valid scores versus personalized scores, depending on patient characteristics and type of coronary artery disease, with impact on the suggested therapy. All known information and techniques can be integrated into a complex system of risk assessment. The current trend in risk assessment is to identify coronary artery disease in its early forms, before clinical manifestation, and to guide therapy, particularly in patients with intermediate risk, who can be reclassified into another risk class based on newly obtained information.
Venous thromboembolism prevention guidelines for medical inpatients: mind the (implementation) gap.
Maynard, Greg; Jenkins, Ian H; Merli, Geno J
2013-10-01
Hospital-associated nonsurgical venous thromboembolism (VTE) is an important problem addressed by new guidelines from the American College of Physicians (ACP) and American College of Chest Physicians (AT9). Narrative review and critique. Both guidelines discount asymptomatic VTE outcomes and caution against overprophylaxis, but have different methodologies and estimates of risk/benefit. Guideline complexity and lack of consensus on VTE risk assessment contribute to an implementation gap. Methods to estimate prophylaxis benefit have significant limitations because major trials included mostly screening-detected events. AT9 relies on a single Italian cohort study to conclude that those with a Padua score ≥4 have a very high VTE risk, whereas patients with a score <4 (60% of patients) have a very small risk. However, the cohort population has less comorbidity than US inpatients, and over 1% of patients with a score of 3 suffered pulmonary emboli. The ACP guideline does not endorse any risk-assessment model. AT9 includes the Padua model and Caprini point-based system for nonsurgical inpatients and surgical inpatients, respectively, but there is no evidence they are more effective than simpler risk-assessment models. New VTE prevention guidelines provide varied guidance on important issues including risk assessment. If Padua is used, a threshold of 3, as well as 4, should be considered. Simpler VTE risk-assessment models may be superior to complicated point-based models in environments without sophisticated clinical decision support. © 2013 Society of Hospital Medicine.
Lee, Kyung-Ann; Ryu, Se-Ri; Park, Seong-Jun; Kim, Hae-Rim; Lee, Sang-Heon
2018-05-01
Hyperuricemia and gout are associated with increased risk of cardiovascular disease and metabolic syndrome. The aim of this study was to evaluate the correlation of total tophus volumes, measured using dual-energy computed tomography, with cardiovascular risk and the presence of metabolic syndrome. Dual-energy computed tomography datasets from 91 patients with a diagnosis of gout were analyzed retrospectively. Patients who received urate-lowering therapy were excluded to avoid its effect on tophus volume. The total volumes of tophaceous deposition were quantified using automated volume assessment software. The 10-year cardiovascular risk using the Framingham Risk Score and metabolic syndrome based on the Third Adult Treatment Panel criteria were estimated. Fifty-five and 36 patients with positive and negative dual-energy computed tomography results, respectively, were assessed. Patients with positive dual-energy computed tomography results showed significantly higher systolic blood pressure, diastolic blood pressure, and fasting glucose, and a higher prevalence of chronic kidney disease, compared with those with negative dual-energy computed tomography results. The total tophus volumes were significantly correlated with the Framingham Risk Score and with the number of metabolic syndrome components (r = 0.22, p = 0.036 and r = 0.373, p < 0.001, respectively). The total tophus volume was one of the independent prognostic factors for the Framingham Risk Score in a multivariate analysis. This study showed the correlation of total tophus volumes with cardiovascular risk and metabolic syndrome-related comorbidities. A high urate burden could be associated with an unfavorable cardiovascular profile.
Wong, Carlos K H; Siu, Shing-Chung; Wan, Eric Y F; Jiao, Fang-Fang; Yu, Esther Y T; Fung, Colman S C; Wong, Ka-Wai; Leung, Angela Y M; Lam, Cindy L K
2016-05-01
The aim of the present study was to develop a simple nomogram that can be used to predict the risk of diabetes mellitus (DM) in the asymptomatic non-diabetic subjects based on non-laboratory- and laboratory-based risk algorithms. Anthropometric data, plasma fasting glucose, full lipid profile, exercise habits, and family history of DM were collected from Chinese non-diabetic subjects aged 18-70 years. Logistic regression analysis was performed on a random sample of 2518 subjects to construct non-laboratory- and laboratory-based risk assessment algorithms for detection of undiagnosed DM; both algorithms were validated on data of the remaining sample (n = 839). The Hosmer-Lemeshow test and area under the receiver operating characteristic (ROC) curve (AUC) were used to assess the calibration and discrimination of the DM risk algorithms. Of 3357 subjects recruited, 271 (8.1%) had undiagnosed DM defined by fasting glucose ≥7.0 mmol/L or 2-h post-load plasma glucose ≥11.1 mmol/L after an oral glucose tolerance test. The non-laboratory-based risk algorithm, with scores ranging from 0 to 33, included age, body mass index, family history of DM, regular exercise, and uncontrolled blood pressure; the laboratory-based risk algorithm, with scores ranging from 0 to 37, added triglyceride level to the risk factors. Both algorithms demonstrated acceptable calibration (Hosmer-Lemeshow test: P = 0.229 and P = 0.483) and discrimination (AUC 0.709 and 0.711) for detection of undiagnosed DM. A simple-to-use nomogram for detecting undiagnosed DM has been developed using validated non-laboratory-based and laboratory-based risk algorithms. © 2015 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
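The Hosmer-Lemeshow statistic used above to assess calibration can be computed as shown below; this is a minimal sketch with a simulated cohort and fitted logistic model, not the authors' risk algorithm.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2518
X = rng.normal(size=(n, 3))                                   # hypothetical risk factors
p_true = 1 / (1 + np.exp(-(X @ np.array([0.8, 0.5, -0.4]) - 2.2)))
y = rng.random(n) < p_true                                    # simulated undiagnosed DM

model = LogisticRegression().fit(X, y)
p_hat = model.predict_proba(X)[:, 1]

# Hosmer-Lemeshow test over deciles of predicted risk.
groups = pd.qcut(p_hat, 10, labels=False, duplicates="drop")
df = pd.DataFrame({"y": y.astype(int), "p": p_hat, "g": groups})
obs = df.groupby("g")["y"].sum()
exp = df.groupby("g")["p"].sum()
n_g = df.groupby("g").size()
hl = np.sum((obs - exp) ** 2 / (exp * (1 - exp / n_g)))
g = len(obs)
p_value = chi2.sf(hl, g - 2)
print(f"Hosmer-Lemeshow chi2 = {hl:.2f}, df = {g - 2}, p = {p_value:.3f}")
```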
Unraveling Exercise Addiction: The Role of Narcissism and Self-Esteem
Cicciarelli, Claudio; Romeo, Vincenzo Maria; Pandolfo, Gianluca
2014-01-01
The aim of this study was to assess the risk of exercise addiction (EA) in fitness clubs and to identify possible factors in the development of the disorder. The Exercise Addiction Inventory (EAI), the Narcissistic Personality Inventory (NPI), and the Coopersmith Self-Esteem Inventory (SEI) were administered to a sample of 150 consecutive gym attenders recruited in fitness centers. Based on the EAI total score, a high EA risk group (HEA, n = 51) and a low EA risk group (LEA, n = 69) were identified. The HEA group reported a significantly higher total score (mean = 20.2 versus 14.6) on the NPI scale and a lower total score (mean = 32.2 versus 36.4) on the SEI scale than the LEA group. A stepwise regression analysis indicated that only the narcissism and self-esteem total scores (F = 5.66; df = 2; P = 0.006) were good predictors of days of exercise per week. The present study confirms the direct and combined role of both labile self-esteem and high narcissism as predictive factors in the development of exercise addiction. Multidisciplinary trained health care providers (physiatrists, psychologists, and psychiatrists) should carefully identify potential overexercise conditions in order to prevent the potential risk of exercise addiction. PMID:25405056
Wu, Chueh-Hung; Chen, Li-Sheng; Yen, Ming-Fang; Chiu, Yueh-Hsia; Fann, Ching-Yuan; Chen, Hsiu-Hsi; Pan, Shin-Liang
2014-01-01
Previous studies on the association between tuberculosis and the risk of developing ischemic stroke have generated inconsistent results. We therefore performed a population-based, propensity score-matched longitudinal follow-up study to investigate whether contracting non-central nervous system (CNS) tuberculosis leads to an increased risk of ischemic stroke. We used a logistic regression model that includes age, sex, pre-existing comorbidities and socioeconomic status as covariates to compute the propensity score. A total of 5804 persons with at least three ambulatory visits in 2001 with the principal diagnosis of non-CNS tuberculosis were enrolled in the tuberculosis group. The non-tuberculosis group consisted of 5804, propensity score-matched subjects without tuberculosis. The three-year ischemic stroke-free survival rates for these 2 groups were estimated using the Kaplan-Meier method. The stratified Cox proportional hazards regression was used to estimate the effect of tuberculosis on the occurrence of ischemic stroke. During three-year follow-up, 176 subjects in the tuberculosis group (3.0%) and 207 in the non-tuberculosis group (3.6%) had ischemic stroke. The hazard ratio for developing ischemic stroke in the tuberculosis group was 0.92 compared to the non-tuberculosis group (95% confidence interval: 0.73-1.14, P = 0.4299). Non-CNS tuberculosis does not increase the risk of subsequent ischemic stroke.
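A minimal sketch of propensity-score matching of the kind described (logistic regression for the propensity score, then nearest-neighbour matching of exposed to unexposed subjects) is given below; the covariates, exposure model, and data are simulated for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
n = 4000
age = rng.normal(55, 12, n)
male = rng.random(n) < 0.5
comorb = rng.poisson(1.0, n)
lin = 0.03 * (age - 55) + 0.2 * male + 0.3 * comorb - 1.5
tb = rng.random(n) < 1 / (1 + np.exp(-lin))           # hypothetical exposure (tuberculosis)

# Propensity score from baseline covariates.
X = np.column_stack([age, male, comorb])
ps = LogisticRegression().fit(X, tb).predict_proba(X)[:, 1]

# 1:1 nearest-neighbour matching on the propensity score (with replacement, simplified).
treated, control = np.where(tb)[0], np.where(~tb)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = control[idx.ravel()]
print(f"{len(treated)} exposed matched; mean PS difference = "
      f"{np.abs(ps[treated] - ps[matched_controls]).mean():.4f}")
```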
Delcourt, Cécile; Souied, Eric; Sanchez, Alice; Bandello, Francesco
2017-12-01
To develop and validate a risk score for AMD based on a simple self-administered questionnaire. Risk factors having shown the most consistent associations with AMD were included in the STARS (Simplified Théa AMD Risk-Assessment Scale) questionnaire. Two studies were conducted, one in Italy (127 participating ophthalmologists) and one in France (80 participating ophthalmologists). During 1 week, participating ophthalmologists invited all their patients aged 55 years or older to fill in the STARS questionnaire. Based on fundus examination, early AMD was defined by the presence of soft drusen and/or pigmentary abnormalities and late AMD by the presence of geographic atrophy and/or neovascular AMD. The Italian and French samples consisted of 12,639 and 6897 patients, respectively. All 13 risk factors included in the STARS questionnaire showed significant associations with AMD in the Italian sample. The area under the receiver operating characteristic curve for the STARS risk score, derived from the multivariate logistic regression in the Italian sample, was 0.78 in the Italian sample and 0.72 in the French sample. In both samples, less than 10% of patients without AMD were classified as high risk, and less than 13% of late AMD cases were classified as low risk, with a more intermediate situation in early AMD cases. STARS is a new, simple self-assessed questionnaire showing good discrimination of risk for AMD in two large European samples. It might be used by ophthalmologists in routine clinical practice or as a self-assessment for risk of AMD in the general population.
Fang, Lin; Chuang, Deng-Min; Lee, Yookyong
2016-12-01
Recent HIV research has suggested assessing adverse childhood experiences (ACEs) as contributing factors to HIV risk behaviors. However, studies have often focused on a single type of adverse experience, and very few have utilized population-based data. This population study examined the associations between ACEs (individual and cumulative ACE score) and HIV risk behaviors. We analyzed the 2012 Behavioral Risk Factor Surveillance Survey (BRFSS) from 5 states. The sample consisted of 39,434 adults. Eight types of ACEs, including different types of child abuse and household dysfunction before the age of 18, were measured. A cumulative score of ACEs was also computed. Logistic regression estimated the association between ACEs and HIV risk behaviors using odds ratios (ORs) with 95% confidence intervals (CIs) for males and females separately. We found that ACEs were positively associated with HIV risk behaviors overall, but the associations differed between males and females in a few instances. While the cumulative ACE score was associated with HIV risk behaviors in a stepwise manner, the pattern varied by gender. For males, the odds of HIV risk increased at a significant level as long as they experienced one ACE, whereas for females, the odds did not increase until they experienced three or more ACEs. Future research should further investigate the gender-specific associations between ACEs and HIV risk behaviors. As childhood adversities are prevalent in the general population, and such experiences are associated with increased risk behaviors for HIV transmission, service providers can benefit from the principles of trauma-informed practice.
Massey, Scott; Stallman, John; Lee, Louise; Klingaman, Kathy; Holmerud, David
2011-01-01
This paper describes how a systematic analysis of students at risk of failing the Physician Assistant National Certifying Examination (PANCE) may be used to identify which students may benefit from intervention prior to taking the PANCE and thus increase the likelihood of successful completion of the PANCE. The intervention developed and implemented uses various formative and summative examinations to predict students' PANCE scores with a high degree of accuracy. Eight end-of-rotation exams (EOREs) based upon discipline-specific diseases and averaging 100 questions each, a 360-question PANCE simulation (SUMM I), the PACKRAT, and a 700-question summative cognitive examination based upon the NCCPA blueprint (SUMM II) were administered to all students enrolled in the program during the clinical year starting in January 2010 and concluding in December 2010. When the PACKRAT, SUMM I, SUMM II, and the surgery, women's health, and pediatrics EOREs were combined in a regression model, an R value of 0.87 and an R² of 0.75 were obtained. A predicted score was generated for the class of 2009. The predicted PANCE score based upon this model had a final correlation of 0.790 with the actual PANCE score. This pilot study demonstrated that valid predicted scores could be generated from formative and summative examinations to provide valuable feedback and to identify students at risk of failing the PANCE.
MetaCompare: A computational pipeline for prioritizing environmental resistome risk.
Oh, Min; Pruden, Amy; Chen, Chaoqi; Heath, Lenwood S; Xia, Kang; Zhang, Liqing
2018-04-26
The spread of antibiotic resistance is a growing public health concern. While numerous studies have highlighted the importance of environmental sources and pathways of the spread of antibiotic resistance, a systematic means of comparing and prioritizing risks represented by various environmental compartments is lacking. Here we introduce MetaCompare, a publicly-available tool for ranking 'resistome risk,' which we define as the potential for antibiotic resistance genes (ARGs) to be associated with mobile genetic elements (MGEs) and mobilize to pathogens based on metagenomic data. A computational pipeline was developed in which each ARG is evaluated based on relative abundance, mobility, and presence within a pathogen. This is determined through assembly of shotgun sequencing data and analysis of contigs containing ARGs to determine if they contain sequence similarity to MGEs or human pathogens. Based on the assembled metagenomes, samples are projected into a 3-D hazard space and assigned resistome risk scores. To validate, we tested previously published metagenomic data derived from distinct aquatic environments. Based on unsupervised machine learning, the test samples clustered in the hazard space in a manner consistent with their origin. The derived scores produced a well-resolved ascending resistome risk ranking of: wastewater treatment plant effluent, dairy lagoon, hospital sewage.
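The sketch below illustrates the kind of three-axis "hazard space" scoring the abstract describes, combining ARG abundance, co-occurrence with mobile genetic elements, and co-occurrence with pathogen-like sequences into a single sample-level score; the combination rule and the sample values are hypothetical and are not MetaCompare's actual formula.

```python
import numpy as np

samples = {
    # (ARG abundance, fraction of ARG contigs with MGE hits, fraction with pathogen hits)
    "hospital_sewage": (120.0, 0.40, 0.25),
    "dairy_lagoon":    (60.0, 0.20, 0.10),
    "wwtp_effluent":   (15.0, 0.10, 0.05),
}

def risk_score(abundance, mge_frac, path_frac, abundance_scale=150.0):
    # Normalise each axis to [0, 1] and take the geometric mean as a toy composite.
    axes = np.array([min(abundance / abundance_scale, 1.0), mge_frac, path_frac])
    return float(np.prod(axes) ** (1 / 3))

# Print samples in ascending order of the toy resistome risk score.
for name, coords in sorted(samples.items(), key=lambda kv: risk_score(*kv[1])):
    print(f"{name}: hazard-space coords = {coords}, toy risk score = {risk_score(*coords):.3f}")
```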
DeVeney, Shari L; Hoffman, Lesa; Cress, Cynthia J
2012-06-01
In this study, the authors compared a multiple-domain strategy for assessing developmental age of young children with developmental disabilities who were at risk for long-term reliance on augmentative and alternative communication (AAC) with a communication-based strategy composed of receptive language and communication indices that may be less affected by physically challenging tasks than traditional developmental age scores. Participants were 42 children (age 9-27 months) with developmental disabilities and who were at risk for long-term reliance on AAC. Children were assessed longitudinally in their homes at 3 occasions over 18 months using multiple-domain and communication-based measures. Confirmatory factor analysis examined dimensionality across the measures, and age-equivalence scores under each strategy were compared, where possible. The communication-based latent factor of developmental age demonstrated good reliability and was almost perfectly correlated with the multiple-domain latent factor. However, the mean age-equivalence score of the communication-based assessment significantly exceeded that of the multiple-domain assessment by 5.3 months across ages. Clinicians working with young children with developmental disabilities should consider a communication-based approach as an alternative developmental age assessment strategy for characterizing children's capabilities, identifying challenges, and developing interventions. A communication-based developmental age estimation is sufficiently reliable and may result in more valid inferences about developmental age for children whose developmental or cognitive age scores may otherwise be limited by their physical capabilities.
Le, Hai-Ha; Subtil, Fabien; Cerou, Marc; Marchant, Ivanny; Al-Gobari, Muaamar; Fall, Mor; Mimouni, Yanis; Kassaï, Behrouz; Lindholm, Lars; Thijs, Lutgarde; Gueyffier, François
2017-11-01
To construct a sudden death risk score for hypertensive patients (HYSUD) with or without a history of cardiovascular disease. Data were collected from six randomized controlled trials of antihypertensive treatments, including 8044 women and 17,604 men, differing in age ranges and blood pressure eligibility criteria. In total, 345 sudden deaths (1.35%) occurred during a mean follow-up of 5.16 years. Risk factors for sudden death were examined using a multivariable Cox proportional hazards model adjusted for trial. The model was transformed into an integer point system, with points added for each factor according to its association with sudden death risk. Antihypertensive treatment was not associated with a reduction in sudden death risk and had no interaction with other factors, allowing model development on both treatment and placebo groups. A risk score for sudden death within 5 years was built from seven significant risk factors: age, sex, SBP, serum total cholesterol, cigarette smoking, diabetes, and history of myocardial infarction. In terms of discrimination, the HYSUD model was adequate, with areas under the receiver operating characteristic curve of 77.74% (95% confidence interval, 74.13-81.35) for the derivation set, 77.46% (74.09-80.83) for the validation set, and 79.17% (75.94-82.40) for the whole population. Our work provides a simple risk-scoring system for sudden death prediction in hypertension, using individual data from six randomized controlled trials of antihypertensive treatments. The HYSUD score could help assess a hypertensive individual's risk of sudden death and optimize preventive therapeutic strategies for these patients.
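Converting a multivariable Cox model into an integer point system, as described above, usually amounts to scaling each coefficient by a chosen unit and rounding; the sketch below shows this with hypothetical coefficients, not the published HYSUD weights.

```python
betas = {                    # hypothetical log hazard ratios for the seven factors
    "age_per_10y": 0.45,
    "male_sex": 0.60,
    "sbp_per_20mmHg": 0.25,
    "total_chol_high": 0.20,
    "current_smoker": 0.55,
    "diabetes": 0.50,
    "prior_MI": 0.70,
}

unit = min(betas.values())                       # smallest effect defines 1 point
points = {k: int(round(b / unit)) for k, b in betas.items()}

def score(profile):
    """Sum points for the factors present in a patient profile (dict of 0/1 flags or counts)."""
    return sum(points[k] * profile.get(k, 0) for k in points)

example = {"age_per_10y": 2, "male_sex": 1, "current_smoker": 1, "prior_MI": 1}
print("points per factor:", points)
print("example patient score:", score(example))
```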
O'Reilly, Kathleen M; Lamoureux, Christine; Molodecky, Natalie A; Lyons, Hil; Grassly, Nicholas C; Tallis, Graham
2017-05-26
The international spread of wild poliomyelitis outbreaks continues to threaten eradication of poliomyelitis and in 2014 a public health emergency of international concern was declared. Here we describe a risk scoring system that has been used to assess country-level risks of wild poliomyelitis outbreaks, to inform prioritisation of mass vaccination planning, and describe the change in risk from 2014 to 2016. The methods were also used to assess the risk of emergence of vaccine-derived poliomyelitis outbreaks. Potential explanatory variables were tested against the reported outbreaks of wild poliomyelitis since 2003 using multivariable regression analysis. The regression analysis was translated to a risk score and used to classify countries as Low, Medium, Medium High and High risk, based on the predictive ability of the score. Indicators of population immunity, population displacement and diarrhoeal disease were associated with an increased risk of both wild and vaccine-derived outbreaks. High migration from countries with wild cases was associated with wild outbreaks. High birth numbers were associated with an increased risk of vaccine-derived outbreaks. Use of the scoring system is a transparent and rapid approach to assess country risk of wild and vaccine-derived poliomyelitis outbreaks. Since 2008 there has been a steep reduction in the number of wild poliomyelitis outbreaks and the reduction in countries classified as High and Medium High risk has reflected this. The risk of vaccine-derived poliomyelitis outbreaks has varied geographically. These findings highlight that many countries remain susceptible to poliomyelitis outbreaks and maintenance or improvement in routine immunisation is vital.
Williams, Pamela A; O'Donoghue, Amie C; Sullivan, Helen W; Willoughby, Jessica Fitts; Squire, Claudia; Parvanta, Sarah; Betts, Kevin R
2016-04-01
Drug efficacy can be measured by composite scores, which consist of two or more symptoms or other clinical components of a disease. We evaluated how individuals interpret composite scores in direct-to-consumer (DTC) prescription drug advertising. We conducted an experimental study of seasonal allergy sufferers (n=1967) who viewed a fictitious print DTC ad that varied by the type of information featured (general indication, list of symptoms, or definition of composite scores) and the presence or absence of an educational intervention about composite scores. We measured composite score recognition and comprehension, and perceived drug efficacy and risk. Ads that featured either (1) the composite score definition alone or (2) the list of symptoms or general indication information along with the educational intervention improved composite score comprehension. Ads that included the composite score definition or the educational intervention led to lower confidence in the drug's benefits. The composite score definition improved composite score recognition and lowered drug risk perceptions. Adding composite score information to DTC print ads may improve individuals' comprehension of composite scores and affect their perceptions of the drug. Providing composite score information may lead to more informed patient-provider prescription drug decisions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Hinske, Ludwig Christian; Hoechter, Dominik Johannes; Schröeer, Eva; Kneidinger, Nikolaus; Schramm, René; Preissler, Gerhard; Tomasi, Roland; Sisic, Alma; Frey, Lorenz; von Dossow, Vera; Scheiermann, Patrick
2017-06-01
The factors leading to the implementation of unplanned extracorporeal circulation during lung transplantation are poorly defined. Consequently, the authors aimed to identify patients at risk for unplanned extracorporeal circulation during lung transplantation. Retrospective data analysis. Single-center university hospital. A development data set of 170 consecutive patients and an independent validation cohort of 52 patients undergoing lung transplantation. The authors investigated a cohort of 170 consecutive patients undergoing single or sequential bilateral lung transplantation without a priori indication for extracorporeal circulation and evaluated the predictive capability of distinct preoperative and intraoperative variables by using automated model building techniques at three clinically relevant time points (preoperatively, after endotracheal intubation, and after establishing single-lung ventilation). Preoperative mean pulmonary arterial pressure was the strongest predictor for unplanned extracorporeal circulation. A logistic regression model based on preoperative mean pulmonary arterial pressure and lung allocation score achieved an area under the receiver operating characteristic curve of 0.85. Consequently, the authors developed a novel 3-point scoring system based on preoperative mean pulmonary arterial pressure and lung allocation score, which identified patients at risk for unplanned extracorporeal circulation and validated this score in an independent cohort of 52 patients undergoing lung transplantation. The authors showed that patients at risk for unplanned extracorporeal circulation during lung transplantation could be identified by their novel 3-point score. Copyright © 2017 Elsevier Inc. All rights reserved.
Lee, Bora; Lee, Sang Wook; Kang, Hye Rim; Kim, Dae In; Sun, Hwa Yeon; Kim, Jae Heon
2018-01-01
This study attempted to investigate the association between lower urinary tract symptoms (LUTS) and cardiovascular disease (CVD) risk using the International Prostate Symptom Score (IPSS) and CVD risk scores, and to overcome the limitations of previous relevant studies. A total of 2994 ostensibly healthy males, who participated in a voluntary health check at a health promotion center from January 2010 to December 2014, were reviewed. CVD risk scores were calculated using the Framingham risk score and the American College of Cardiology (ACC)/American Heart Association (AHA) score. Correlation and multivariate logistic regression analyses to predict CVD risk severity were performed. Correlation of total IPSS with CVD risk scores demonstrated significant positive associations, which were stronger for the ACC/AHA score than for the Framingham score (r = 0.18 vs 0.09, respectively). For the ACC/AHA score, the partial correlation after adjustment for body mass index (BMI) showed significant positive correlations between all LUTS parameters and PSA. For the Framingham score, all variables except IPSS Q2 and IPSS Q6 showed significant positive correlations. After adjustment for BMI, prostate volume and PSA, only the severe LUTS group showed a significant relationship with intermediate-high CVD risk severity compared with the normal LUTS group (OR = 2.97, 95% CI 1.35-6.99). Using two validated CVD risk calculators, we observed that LUTS is closely associated with future CVD risk. In predicting intermediate-high CVD risk severity, severe LUTS was a sentinel sign whose presence underscores the importance of earlier screening for CVD. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Wiemker, Rafael; Bülow, Thomas; Blaffert, Thomas; Dharaiya, Ekta
2009-02-01
Presence of emphysema is recognized to be one of the single most significant risk factors in risk models for the prediction of lung cancer. Therefore, an automatically computed emphysema score would be a prime candidate as an additional numerical feature for computer-aided diagnosis (CADx) of indeterminate pulmonary nodules. We applied several histogram-based emphysema scores to 460 thoracic CT scans from the IDRI CT lung image database and analyzed the emphysema scores in conjunction with 3000 nodule malignancy ratings of 1232 pulmonary nodules made by expert observers. Although emphysema is a known risk factor, we found no impact of a patient's emphysema score on the readers' malignancy ratings of nodules found in that patient. We also found no correlation between the number of expert-detected nodules in a patient and that patient's emphysema score, or between the relative craniocaudal location of the nodules and their malignancy rating. The inter-observer agreement of the expert ratings was excellent for nodule diameter (as derived from manual delineations), good for calcification, and only modest for malignancy and shape descriptors such as spiculation, lobulation, margin, etc.
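The abstract does not specify which histogram-based emphysema scores were used; one widely used example is the low-attenuation area percentage (the share of lung voxels below a threshold such as -950 HU), sketched below on a synthetic set of lung-voxel HU values.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic lung-voxel HU values: mostly normal parenchyma around -850 HU,
# with a low-density (emphysematous) component around -960 HU.
normal = rng.normal(-850, 40, 90_000)
emphysematous = rng.normal(-960, 15, 10_000)
lung_voxels = np.concatenate([normal, emphysematous])

def emphysema_score(hu_values, threshold=-950):
    """Fraction (in %) of lung voxels below the attenuation threshold (LAA%)."""
    return 100.0 * np.mean(hu_values < threshold)

print(f"emphysema score (LAA% at -950 HU): {emphysema_score(lung_voxels):.1f}%")
```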
Diagnosis and Treatment of Atrial Fibrillation.
Gutierrez, Cecilia; Blanchard, Daniel G
2016-09-15
Atrial fibrillation is a supraventricular arrhythmia that adversely affects cardiac function and increases the risk of stroke. It is the most common arrhythmia and a major source of morbidity and mortality; its prevalence increases with age. Pulse rate is sensitive, but not specific, for diagnosis, and suspected atrial fibrillation should be confirmed with 12-lead electrocardiography. Because normal electrocardiographic findings do not rule out atrial fibrillation, home monitoring is recommended if there is clinical suspicion of arrhythmia despite normal test results. Treatment is based on decisions made regarding when to convert to normal sinus rhythm vs. when to treat with rate control, and, in either case, how to best reduce the risk of stroke. For most patients, rate control is preferred to rhythm control. Ablation therapy is used to destroy abnormal foci responsible for atrial fibrillation. Anticoagulation reduces the risk of stroke while increasing the risk of bleeding. The CHA2DS2-VASc scoring system assesses the risk of stroke, with a score of 2 or greater indicating a need for anticoagulation. The HAS-BLED score estimates the risk of bleeding. Scores of 3 or greater indicate high risk. Warfarin, dabigatran, factor Xa inhibitors (e.g., rivaroxaban, apixaban, edoxaban), and aspirin are options for stroke prevention. Selection of therapy should be individualized based on risks and potential benefits, cost, and patient preference. Left atrial appendage obliteration is an option for reducing stroke risk. Two implantable devices used to occlude the appendage, the Watchman and the Amplatzer Cardiac Plug, appear to be as effective as warfarin in preventing stroke, but they are invasive. Another percutaneous approach to occlusion, wherein the left atrium is closed off using the Lariat, is also available, but data on its long-term effectiveness and safety are still limited. Surgical treatments for atrial fibrillation are reserved for patients who are undergoing cardiac surgery for other reasons.
AIMS baby movement scale application in high-risk infants early intervention analysis.
Wang, Y; Shi, J-P; Li, Y-H; Yang, W-H; Tian, Y-J; Gao, J; Li, S-J
2016-05-01
We investigated the application of the Alberta Infant Motor Scale (AIMS) in screening for motor development delay during the follow-up of high-risk infants discharged from the NICU, to describe the infants' motor development and to propose early individualized intervention. The study was a randomized, single-blind trial of 77 high-risk infants recruited between April 2015 and November 2015 from our hospital's pediatric neurological rehabilitation clinics. Patients were randomly divided into an observation group (39 cases) and a control group (38 cases). All infants were evaluated with the AIMS; the observation group received an individualized rehabilitation training plan and early intervention based on the results of the first evaluation, whereas parents in the control group were guided to carry out home training according to growth and development milestones, with evaluations at 3-month intervals. Before the intervention, the two groups of high-risk infants did not differ significantly in age in months, gender, risk factors, total AIMS scores, AIMS scores in each position, or percentile (p>0.05). There was also no significant difference between the two groups in sitting and standing AIMS scores before and after the intervention (p>0.05). After the intervention, however, supine and total AIMS scores were significantly higher in the observation group than in the control group (p<0.05), and prone AIMS scores were also significantly higher in the observation group (p<0.01). The proportion of infants whose AIMS percentile remained below 10% after the intervention was significantly lower in the observation group (p<0.01). AIMS can predict developmental delay in high-risk infants, supporting earlier diagnosis and intervention.
USDA-ARS's Scientific Manuscript database
In 2006, the AHA released diet and lifestyle recommendations (AHA-DLR) for cardiovascular disease (CVD) risk reduction. The effect of adherence to these recommendations on CVD risk is unknown. Our objective was to develop a unique diet and lifestyle score based on the AHA-DLR and to evaluate this sc...
Schievink, Bauke; Mol, Peter G M; Lambers Heerspink, Hiddo J
2015-11-01
There is increased interest in developing surrogate endpoints for clinical trials of chronic kidney disease progression, as the established clinically meaningful endpoint end-stage renal disease requires large and lengthy trials to assess drug efficacy. We describe recent developments in the search for novel surrogate endpoints. Declines in estimated glomerular filtration rate (eGFR) of 30% or 40% and albuminuria have been proposed as surrogates for end-stage renal disease. However, changes in eGFR or albuminuria may not be valid under all circumstances as drugs always have effects on multiple renal risk markers. Changes in each of these other 'off-target' risk markers can alter renal risk (either beneficially or adversely), and can thereby confound the relationship between surrogates that are based on single risk markers and renal outcome. Risk algorithms that integrate the short-term drug effects on multiple risk markers to predict drug effects on hard renal outcomes may therefore be more accurate. The validity of these risk algorithms is currently investigated. Given that drugs affect multiple renal risk markers, risk scores that integrate these effects are a promising alternative to using eGFR decline or albuminuria. Proper validation is required before these risk scores can be implemented.
Association of Alzheimer's disease GWAS loci with MRI markers of brain aging.
Chauhan, Ganesh; Adams, Hieab H H; Bis, Joshua C; Weinstein, Galit; Yu, Lei; Töglhofer, Anna Maria; Smith, Albert Vernon; van der Lee, Sven J; Gottesman, Rebecca F; Thomson, Russell; Wang, Jing; Yang, Qiong; Niessen, Wiro J; Lopez, Oscar L; Becker, James T; Phan, Thanh G; Beare, Richard J; Arfanakis, Konstantinos; Fleischman, Debra; Vernooij, Meike W; Mazoyer, Bernard; Schmidt, Helena; Srikanth, Velandai; Knopman, David S; Jack, Clifford R; Amouyel, Philippe; Hofman, Albert; DeCarli, Charles; Tzourio, Christophe; van Duijn, Cornelia M; Bennett, David A; Schmidt, Reinhold; Longstreth, William T; Mosley, Thomas H; Fornage, Myriam; Launer, Lenore J; Seshadri, Sudha; Ikram, M Arfan; Debette, Stephanie
2015-04-01
Whether novel risk variants of Alzheimer's disease (AD) identified through genome-wide association studies also influence magnetic resonance imaging-based intermediate phenotypes of AD in the general population is unclear. We studied the association of 24 AD risk loci with intracranial volume, total brain volume, hippocampal volume (HV), white matter hyperintensity burden, and brain infarcts in a meta-analysis of genetic association studies from large population-based samples (N = 8175-11,550). In single-SNP-based tests, the AD risk allele of APOE (rs2075650) was associated with smaller HV (p = 0.0054) and that of CD33 (rs3865444) with smaller intracranial volume (p = 0.0058). In gene-based tests, there were associations of HLA-DRB1 with total brain volume (p = 0.0006) and of BIN1 with HV (p = 0.00089). A weighted AD genetic risk score was associated with smaller HV (beta ± SE = -0.047 ± 0.013, p = 0.00041), even after excluding the APOE locus (p = 0.029). However, only the association of the AD genetic risk score with HV, including APOE, remained significant after multiple testing correction (accounting for the number of independent phenotypes tested). These results suggest that novel AD genetic risk variants may contribute to structural brain aging in nondemented older community persons. Copyright © 2015 Elsevier Inc. All rights reserved.
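The weighted genetic risk score described above is, in essence, a sum of risk-allele dosages weighted by per-variant effect sizes. A minimal sketch of that computation follows; the SNP identifiers and log odds-ratio weights are illustrative placeholders, not values taken from the study.

```python
import math

# Hypothetical per-SNP weights (log odds ratios); neither the SNP list nor the
# effect sizes are taken from the abstract above.
LOG_OR_WEIGHTS = {"rs2075650": math.log(2.5), "rs3865444": math.log(1.1), "rs6733839": math.log(1.2)}

def weighted_genetic_risk_score(dosages, weights=LOG_OR_WEIGHTS):
    """Sum risk-allele dosages (0, 1, or 2 copies) weighted by log odds ratios."""
    return sum(weights[snp] * dosages.get(snp, 0) for snp in weights)

# Example genotype: risk-allele dosage at each SNP for one person.
person = {"rs2075650": 1, "rs3865444": 0, "rs6733839": 2}
print(round(weighted_genetic_risk_score(person), 3))  # 1.281
```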
2010-01-01
Background The purpose of the work reported here is to test reliable molecular profiles using routinely processed formalin-fixed paraffin-embedded (FFPE) tissues from participants of the clinical trial BIG 1-98 with a median follow-up of 60 months. Methods RNA from fresh frozen (FF) and FFPE tumor samples of 82 patients were used for quality control, and independent FFPE tissues of 342 postmenopausal participants of BIG 1-98 with ER-positive cancer were analyzed by measuring prospectively selected genes and computing scores representing the functions of the estrogen receptor (eight genes, ER_8), the progesterone receptor (five genes, PGR_5), Her2 (two genes, HER2_2), and proliferation (ten genes, PRO_10) by quantitative reverse transcription PCR (qRT-PCR) on TaqMan Low Density Arrays. Molecular scores were computed for each category and ER_8, PGR_5, HER2_2, and PRO_10 scores were combined into a RISK_25 score. Results Pearson correlation coefficients between FF- and FFPE-derived scores were at least 0.94 and high concordance was observed between molecular scores and immunohistochemical data. The HER2_2, PGR_5, PRO_10 and RISK_25 scores were significant predictors of disease free-survival (DFS) in univariate Cox proportional hazard regression. PRO_10 and RISK_25 scores predicted DFS in patients with histological grade II breast cancer and in lymph node positive disease. The PRO_10 and PGR_5 scores were independent predictors of DFS in multivariate Cox regression models incorporating clinical risk indicators; PRO_10 outperformed Ki-67 labeling index in multivariate Cox proportional hazard analyses. Conclusions Scores representing the endocrine responsiveness and proliferation status of breast cancers were developed from gene expression analyses based on RNA derived from FFPE tissues. The validation of the molecular scores with tumor samples of participants of the BIG 1-98 trial demonstrates that such scores can serve as independent prognostic factors to estimate disease free survival (DFS) in postmenopausal patients with estrogen receptor positive breast cancer. Trial Registration Current Controlled Trials: NCT00004205 PMID:20144231
Hasford, Joerg; Baccarani, Michele; Hoffmann, Verena; Guilhot, Joelle; Saussele, Susanne; Rosti, Gianantonio; Guilhot, François; Porkka, Kimmo; Ossenkoppele, Gert; Lindoerfer, Doris; Simonsson, Bengt; Pfirrmann, Markus; Hehlmann, Rudiger
2011-07-21
The outcome of chronic myeloid leukemia (CML) has been profoundly changed by the introduction of tyrosine kinase inhibitors into therapy, but the prognosis of patients with CML is still evaluated using prognostic scores developed in the chemotherapy and interferon era. The present work describes a new prognostic score that is superior to the Sokal and Euro scores both in its prognostic ability and in its simplicity. The predictive power of the score was developed and tested on a group of patients selected from a registry of 2060 patients enrolled in studies of first-line treatment with imatinib-based regimes. The EUTOS score using the percentage of basophils and spleen size best discriminated between high-risk and low-risk groups of patients, with a positive predictive value of not reaching a CCgR of 34%. Five-year progression-free survival was significantly better in the low- than in the high-risk group (90% vs 82%, P = .006). These results were confirmed in the validation sample. The score can be used to identify CML patients with significantly lower probabilities of responding to therapy and survival, thus alerting physicians to those patients who require closer observation and early intervention.
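As a worked illustration of how such a two-variable score is applied, the sketch below assumes the published EUTOS formula (7 x basophil percentage plus 4 x spleen size in cm below the costal margin, with scores above 87 classed as high risk); the abstract does not restate these coefficients, so they are quoted from the original report rather than derived here.

```python
def eutos_score(basophil_percent, spleen_cm_below_costal_margin):
    """EUTOS score: 7 x basophils (%) + 4 x spleen size (cm below costal margin),
    per the published formula (assumed here, not restated in the abstract)."""
    return 7 * basophil_percent + 4 * spleen_cm_below_costal_margin

def eutos_risk_group(score, threshold=87):
    # Scores above the published cut-off of 87 indicate high risk.
    return "high" if score > threshold else "low"

print(eutos_risk_group(eutos_score(basophil_percent=8, spleen_cm_below_costal_margin=10)))  # high
```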
Ma, Yucheng; Wang, Qing; Yang, Jiayin; Yan, Lunan
2015-01-01
In order to provide a good match between donor and recipient in liver transplantation, four scoring systems [the product of donor age and Model for End-stage Liver Disease score (D-MELD), the score to predict survival outcomes following liver transplantation (SOFT), the balance of risk score (BAR), and the transplant risk index (TRI)] based on both donor and recipient parameters were designed. This study was conducted to evaluate the performance of the four scores in living donor liver transplantation (LDLT) and compare them with the MELD score. The clinical data of 249 adult patients undergoing LDLT in our center were retrospectively evaluated. The area under the receiver operating characteristic curves (AUCs) of each score were calculated and compared at 1-, 3-, 6-month and 1-year after LDLT. The BAR at 1-, 3-, 6-month and 1-year after LDLT and the D-MELD and TRI at 1-, 3- and 6-month after LDLT showed acceptable performances in the prediction of survival (AUC>0.6), while the SOFT showed poor discrimination at 6-month after LDLT (AUC = 0.569). In addition, the D-MELD and BAR displayed positive correlations with the length of ICU stay (D-MELD, p = 0.025; BAR, p = 0.022). The SOFT was correlated with the time of mechanical ventilation (p = 0.022). The D-MELD, BAR and TRI provided acceptable performance in predicting survival after LDLT. However, even though these scoring systems were based on both donor and recipient parameters, only the BAR provided better performance than the MELD in predicting 1-year survival after LDLT.
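A sketch of how a donor-recipient score such as D-MELD (defined in the abstract as donor age multiplied by the recipient's MELD score) can be compared against an outcome using the area under the ROC curve; the patient data below are invented for illustration.

```python
def d_meld(donor_age, recipient_meld):
    # D-MELD is the product of donor age and the recipient's MELD score.
    return donor_age * recipient_meld

def auc(y_true, scores):
    """AUC as a concordance probability: chance that a randomly chosen case
    outranks a randomly chosen control on the score (ties count as 0.5)."""
    cases = [s for y, s in zip(y_true, scores) if y == 1]
    controls = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

# Toy data, not from the study: 1 = died within one year of LDLT, 0 = survived.
donor_ages = [25, 60, 45, 70, 35, 55]
meld = [12, 30, 18, 25, 28, 10]
died = [0, 1, 0, 1, 0, 1]
scores = [d_meld(a, m) for a, m in zip(donor_ages, meld)]
print(round(auc(died, scores), 2))  # 0.78
```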
Influences on Adaptive Planning to Reduce Flood Risks among Parishes in South Louisiana.
Paille, Mary; Reams, Margaret; Argote, Jennifer; Lam, Nina S-N; Kirby, Ryan
2016-02-01
Residents of south Louisiana face a range of increasing, climate-related flood exposure risks that could be reduced through local floodplain management and hazard mitigation planning. A major incentive for community planning to reduce exposure to flood risks is offered by the Community Rating System (CRS) of the National Flood Insurance Program (NFIP). The NFIP encourages local collective action by offering reduced flood insurance premiums for individual policy holders of communities where suggested risk-reducing measures have been implemented. This preliminary analysis examines the extent to which parishes (counties) in southern Louisiana have implemented the suggested policy actions and identifies key factors that account for variation in the implementation of the measures. More measures implemented results in higher CRS scores. Potential influences on scores include socioeconomic attributes of residents, government capacity, average elevation and past flood events. The results of multiple regression analysis indicate that higher CRS scores are associated most closely with higher median housing values. Furthermore, higher scores are found in parishes with more local municipalities that participate in the CRS program. The number of floods in the last five years and the revenue base of the parish does not appear to influence CRS scores. The results shed light on the conditions under which local adaptive planning to mitigate increasing flood risks is more likely to be implemented and offer insights for program administrators, researchers and community stakeholders.
Golive, Anjani; May, Heidi T; Bair, Tami L; Jacobs, Victoria; Crandall, Brian G; Cutler, Michael J; Day, John D; Mallender, Charles; Osborn, Jeffrey S; Stevens, Scott M; Weiss, J Peter; Woller, Scott C; Bunch, T Jared
2017-07-01
Among patients with atrial fibrillation (AF), the risk of stroke is a significant concern. CHADS2 and CHA2DS2-VASc scoring have been used to stratify patients into categories of risk. Without randomized, prospective data, the need for and type of long-term antithrombotic medication for thromboembolism prevention in lower-risk AF patients remain controversial. We sought to define the long-term impact of anticoagulant and antiplatelet therapy use in AF patients at low risk of stroke. A total of 56,764 patients diagnosed with AF and a CHADS2 score of 0 or 1, or CHA2DS2-VASc score of 0, 1, or 2, were studied. Antithrombotic therapy was defined as aspirin, clopidogrel (antiplatelet therapy), or warfarin monotherapy (anticoagulation) initiated within 6 months of AF diagnosis. End points included all-cause mortality, cerebrovascular accident, transient ischemic attack (TIA), and major bleed. The average age of the population was 67.0 ± 14.1 years and 56.6% were male. In total, 9,682 received aspirin, 1,802 received clopidogrel, 1,164 received warfarin, and 46,042 did not receive any antithrombotic therapy. Event rates differed between patients with a CHADS2 score of 0 and 1; 18.5% and 37.8% had died, 1.7% and 3.4% had a stroke, 2.2% and 3.2% had a TIA, and 14% and 12.5% had a major bleed, respectively (p <0.0001 for all). The rates of stroke, TIA, and major bleeding increased as antithrombotic therapy intensity increased from no therapy, to aspirin, to clopidogrel, and to warfarin (all p <0.0001). Similar outcomes were observed for low-risk CHA2DS2-VASc scores (0 to 2). In low-risk AF patients with a CHADS2 score of 0 to 1 or CHA2DS2-VASc score of 0 to 2, the use of aspirin, clopidogrel, and warfarin was not associated with lower stroke rates at 5 years compared with no therapy. However, the use of antithrombotic agents was associated with a significant risk of bleed. Copyright © 2017 Elsevier Inc. All rights reserved.
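For reference, the sketch below implements the standard CHADS2 and CHA2DS2-VASc point assignments (congestive heart failure, hypertension, age, diabetes, prior stroke/TIA, and, for the latter, vascular disease and female sex); these definitions come from the published scoring rules rather than from this abstract.

```python
def chads2(chf, hypertension, age, diabetes, prior_stroke_tia):
    """CHADS2: 1 point each for CHF, hypertension, age >= 75, diabetes; 2 for prior stroke/TIA."""
    return chf + hypertension + (age >= 75) + diabetes + 2 * prior_stroke_tia

def cha2ds2_vasc(chf, hypertension, age, diabetes, prior_stroke_tia, vascular_disease, female):
    """CHA2DS2-VASc: adds vascular disease, age 65-74 (1 point), and female sex; age >= 75 scores 2."""
    age_points = 2 if age >= 75 else (1 if age >= 65 else 0)
    return (chf + hypertension + age_points + diabetes + 2 * prior_stroke_tia
            + vascular_disease + female)

# A 68-year-old woman with hypertension and no other risk factors:
print(chads2(0, 1, 68, 0, 0), cha2ds2_vasc(0, 1, 68, 0, 0, 0, 1))  # 1 3
```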
Likelihood ratio-based integrated personal risk assessment of type 2 diabetes.
Sato, Noriko; Htun, Nay Chi; Daimon, Makoto; Tamiya, Gen; Kato, Takeo; Kubota, Isao; Ueno, Yoshiyuki; Yamashita, Hidetoshi; Fukao, Akira; Kayama, Takamasa; Muramatsu, Masaaki
2014-01-01
To facilitate personalized health care for multifactorial diseases, risks of genetic and clinical/environmental factors should be assessed together for each individual in an integrated fashion. This approach is possible with the likelihood ratio (LR)-based risk assessment system, as this system can incorporate manifold tests. We examined the usefulness of this system for assessing type 2 diabetes (T2D). Our system employed 29 genetic susceptibility variants, body mass index (BMI), and hypertension as risk factors whose LRs can be estimated from openly available T2D association data for the Japanese population. The pretest probability was set at a sex- and age-appropriate population average of diabetes prevalence. The classification performance of our LR-based risk assessment was compared to that of a non-invasive screening test for diabetes called TOPICS (with score based on age, sex, family history, smoking, BMI, and hypertension) using receiver operating characteristic analysis with a community cohort (n = 1263). The area under the receiver operating characteristic curve (AUC) for the LR-based assessment and TOPICS was 0.707 (95% CI 0.665-0.750) and 0.719 (0.675-0.762), respectively. These AUCs were much higher than that of a genetic risk score constructed using the same genetic susceptibility variants, 0.624 (0.574-0.674). The use of ethnically matched LRs is necessary for proper personal risk assessment. In conclusion, although LR-based integrated risk assessment for T2D still requires additional tests that evaluate other factors, such as risks involved in missing heritability, our results indicate the potential usability of LR-based assessment system and stress the importance of stratified epidemiological investigations in personalized medicine.
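The likelihood ratio-based system described above follows the usual Bayesian updating rule: convert the pretest probability to odds, multiply by the likelihood ratio of each test result, and convert back to a probability. A minimal sketch, with a made-up pretest probability and likelihood ratios:

```python
def posttest_probability(pretest_prob, likelihood_ratios):
    """Combine a pretest probability with likelihood ratios from independent tests:
    posttest odds = pretest odds x product of LRs, then convert back to a probability."""
    odds = pretest_prob / (1 - pretest_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical example: a population-average T2D prevalence for the person's age and sex
# as the pretest probability, with LRs for genotype, BMI category, and hypertension status.
print(round(posttest_probability(0.08, [1.4, 2.1, 1.3]), 3))
```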
Cognitive ability in young adulthood predicts risk of early-onset dementia in Finnish men.
Rantalainen, Ville; Lahti, Jari; Henriksson, Markus; Kajantie, Eero; Eriksson, Johan G; Räikkönen, Katri
2018-06-06
To test if the Finnish Defence Forces Basic Intellectual Ability Test scores at 20.1 years predicted risk of organic dementia or Alzheimer disease (AD). Dementia was defined as inpatient or outpatient diagnosis of organic dementia or AD risk derived from Hospital Discharge or Causes of Death Registers in 2,785 men from the Helsinki Birth Cohort Study, divided based on age at first diagnosis into early onset (<65 years) or late onset (≥65 years). The Finnish Defence Forces Basic Intellectual Ability Test comprises verbal, arithmetic, and visuospatial subtests and a total score (scores transformed into a mean of 100 and SD of 15). We used Cox proportional hazard models and adjusted for age at testing, childhood socioeconomic status, mother's age at delivery, parity, participant's birthweight, education, and stroke or coronary heart disease diagnosis. Lower cognitive ability total and verbal ability (hazard ratio [HR] per 1 SD disadvantage >1.69, 95% confidence interval [CI] 1.01-2.63) scores predicted higher early-onset any dementia risk across the statistical models; arithmetic and visuospatial ability scores were similarly associated with early-onset any dementia risk, but these associations weakened after covariate adjustments (HR per 1 SD disadvantage >1.57, 95% CI 0.96-2.57). All associations were rendered nonsignificant when we adjusted for participant's education. Cognitive ability did not predict late-onset dementia risk. These findings reinforce previous suggestions that lower cognitive ability in early life is a risk factor for early-onset dementia. © 2018 American Academy of Neurology.
An instrument for broadened risk assessment in antenatal health care including non-medical issues
Vos, Amber A.; van Veen, Mieke J.; Birnie, Erwin; Denktaş, Semiha; Steegers, Eric A.P.; Bonsel, Gouke J.
2015-01-01
Introduction Growing evidence on the risk-contributing role of non-medical factors in pregnancy outcomes has urged a new approach to early antenatal risk selection. The evidence invites more integration, in particular between the clinical working area and the public health domain. We developed a non-invasive, standardized instrument for comprehensive antenatal risk assessment. The current study presents the application-oriented development of a risk screening instrument for early antenatal detection of risk factors and tailored prevention in an integrated care setting. Methods A review of published instruments was complemented with evidence from cohort studies. Risk factors associated with small for gestational age, preterm birth, congenital anomalies and perinatal mortality were selected and standardized. Risk factors were weighted to obtain a cumulative risk score. Responses were then connected to corresponding care pathways. A cumulative risk threshold was defined, which can be adapted to the population and the availability of preventive facilities. A score above the threshold implies multidisciplinary consultation between caregivers. Results The resulting digital score card consisted of 70 items, subdivided into four non-medical and two medical domains. Weighting of risk factors was based on existing evidence. Pilot evidence from a cohort of 218 pregnancies in a multi-practice urban setting showed that a cut-off of 16 points would imply that 20% of all pregnant women would be assessed in a multidisciplinary setting. A total of 28 care pathways were defined. Conclusion The resulting score card is a universal risk screening instrument which incorporates recent evidence on non-medical risk factors for adverse pregnancy outcomes and enables systematic risk management in an integrated antenatal health care setting. PMID:25780351
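A minimal sketch of the cumulative-score-plus-threshold logic described above; the item names and weights are hypothetical stand-ins (the published score card has 70 items, which the abstract does not list), and only the 16-point pilot cut-off is taken from the abstract.

```python
# Hypothetical items and weights for illustration only.
ITEM_WEIGHTS = {"smoking": 4, "no_partner_support": 3, "previous_preterm_birth": 6,
                "financial_problems": 3, "chronic_hypertension": 5}

def antenatal_risk_score(responses, weights=ITEM_WEIGHTS, threshold=16):
    """Sum the weighted risk items that are present; a score above the threshold
    implies multidisciplinary consultation (the pilot used a 16-point cut-off)."""
    score = sum(weights[item] for item, present in responses.items() if present)
    return score, score > threshold

print(antenatal_risk_score({"smoking": True, "previous_preterm_birth": True,
                            "financial_problems": True, "chronic_hypertension": True}))  # (18, True)
```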
Engberg, Elina; Stach-Lempinen, Beata; Sahrakorpi, Niina; Rönö, Kristiina; Roine, Risto P; Kautiainen, Hannu; Eriksson, Johan G; Koivusalo, Saila B
2015-12-01
To examine differences in antenatal depressive symptoms between women at high risk for gestational diabetes mellitus (GDM) and pregnant women in the general population. We recruited pregnant women at high risk for GDM, based on a history of GDM and/or prepregnancy BMI ≥ 30 kg/m(2), (n = 482) and pregnant women in the general population (n = 358) before 20 weeks of gestation. Depressive symptoms were assessed by the Edinburgh Postnatal Depression Scale (EPDS). Of the women at high risk for GDM, 17% had an EPDS score ≥ 10 (indicating risk for depression) compared to 11% of the pregnant women in the general population (p = .025). The mean EPDS score was also higher in the women at risk for GDM (5.5, SD 4.5 vs. 4.6, SD 3.9, p = .004, effect size 0.21 [95% CI: 0.07 to 0.34]). After adjusting for age, prepregnancy BMI and income, the difference between the groups was no longer significant either in the proportion of women having an EPDS score ≥ 10 (p = .59) or in the mean EPDS score (p=.39). After controlling for age, prepregnancy BMI and income, women at high risk for GDM did not have greater depressive symptoms compared to pregnant women in the general population in early pregnancy. Copyright © 2015 Elsevier Inc. All rights reserved.
von Rosen, P; Frohm, A; Kottorp, A; Fridén, C; Heijne, A
2017-12-01
Many risk factors for injury are presented in the literature; few of these are consistent, however, and the majority are associated with adult rather than adolescent elite athletes. The aim was to identify risk factors for injury in adolescent elite athletes by applying a biopsychosocial approach. A total of 496 adolescent elite athletes (age range 15-19), participating in 16 different sports, were monitored repeatedly over 52 weeks using a valid questionnaire about injuries, training exposure, sleep, stress, nutrition, and competence-based self-esteem. Univariate and multiple Cox regression analyses were used to calculate hazard ratios (HR) for risk factors for first reported injury. The main finding was that increasing training load and training intensity while at the same time decreasing sleep volume resulted in a higher risk of injury compared with no change in these variables (HR 2.25, 95% CI 1.46-3.45, P<.01); this was the strongest risk factor identified. In addition, a one-point increase in competence-based self-esteem score increased the hazard of injury by a factor of 1.02 (95% CI 1.00-1.04, P=.01). Based on the multiple Cox regression analysis, an athlete with the identified risk factors (Risk Index, competence-based self-esteem) and an average competence-based self-esteem score had more than a threefold increased risk of injury (HR 3.35) compared with an athlete with low competence-based self-esteem and no change in sleep or training volume. Our findings confirm injury occurrence as a result of multiple risk factors interacting in complex ways. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
2013-01-01
Background Recent research has used cardiovascular risk scores intended to estimate "total cardiovascular disease (CVD) risk" in individuals to assess the distribution of risk within populations. The research suggested that the adoption of the total risk approach, in comparison to treatment decisions being based on the level of a single risk factor, could lead to reductions in expenditure on preventive cardiovascular drug treatment in low- and middle-income countries, so that the patient benefit associated with these savings is highlighted. Methods This study used data on men and women aged 40–64 years from national STEPS surveys (STEPwise Approach to Surveillance) conducted between 2005 and 2010 in Cambodia, Malaysia and Mongolia. The study compared the differences and implications of various approaches to risk estimation at a population level using the World Health Organization/International Society of Hypertension (WHO/ISH) risk score charts. To aid interpretation and adjustment of scores and inform treatment in individuals, the charts are accompanied by practice notes about risk factors not included in the risk score calculations. Total risk was calculated amongst the populations using the charts alone and also adjusted according to these notes. Prevalence of traditional single risk factors was also calculated. Results The prevalence of WHO/ISH "high CVD risk" (≥20% chance of developing a cardiovascular event over 10 years) of 6%, 2.3% and 1.3% in Mongolia, Malaysia and Cambodia, respectively, is in line with recent research when the charts alone are used. However, these proportions rise to 33.3%, 20.8% and 10.4%, respectively, when individuals with blood pressure ≥160/100 mmHg and/or on hypertension medication are attributed to "high risk". Of those at "moderate risk" (10% to <20% chance of developing a cardiovascular event over 10 years), 100%, 94.3% and 30.1%, respectively, are affected by at least one risk-increasing factor. Of all individuals, 44.6%, 29.0% and 15.0% are affected by hypertension as a single risk factor (systolic ≥ 140 mmHg or diastolic ≥ 90 mmHg or medication). Conclusions Used on a population level, cardiovascular risk scores may offer useful insights that can assist health service delivery planning. An approach based on overall risk without adjustment for specific risk factors, however, may underestimate treatment needs. At the individual level, the total risk approach offers important clinical benefits. However, countries need to develop appropriate clinical guidelines and operational guidance for detection and management of CVD risk using the total CVD-risk approach at different levels of the health system. Operational research is needed to assess implementation issues. PMID:23734670
A Risk Score for Predicting Multiple Sclerosis.
Dobson, Ruth; Ramagopalan, Sreeram; Topping, Joanne; Smith, Paul; Solanky, Bhavana; Schmierer, Klaus; Chard, Declan; Giovannoni, Gavin
2016-01-01
Multiple sclerosis (MS) develops as a result of environmental influences on the genetically susceptible. Siblings of people with MS have an increased risk both of MS and of demonstrating asymptomatic changes in keeping with MS. We set out to develop an MS risk score integrating both genetic and environmental risk factors. We used this score to identify siblings at extremes of MS risk and attempted to validate the score using brain MRI. 78 probands with MS, 121 of their unaffected siblings and 103 healthy controls were studied. Personal history was taken, and serological and genetic analysis using the Illumina Immunochip was performed. Odds ratios for MS associated with each risk factor were derived from existing literature, and the log values of the odds ratios for each of the risk factors were combined in an additive model to provide an overall score. Scores were calculated first using the log odds ratio for the HLA-DRB1*1501 allele only, and second using data from all MS-associated SNPs identified in the 2011 GWAS. Subjects with extreme risk scores underwent validation studies. MRI was performed on selected individuals. There was a significant difference in both risk scores between people with MS, their unaffected siblings and healthy controls (p<0.0005). Unaffected siblings had a risk score intermediate between people with MS and controls (p<0.0005). The best-performing risk score generated an AUC of 0.82 (95% CI 0.75-0.88). The risk score demonstrates an AUC on the threshold for clinical utility. Our score enables the identification of a high-risk sibling group to inform pre-symptomatic longitudinal studies.
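The additive model described above sums the log odds ratios of the risk factors an individual carries. A small sketch, using invented odds ratios purely for illustration:

```python
import math

# Hypothetical odds ratios; the study took ORs for each genetic and environmental
# factor from the existing literature, which the abstract does not enumerate.
ODDS_RATIOS = {"HLA-DRB1*1501_carrier": 3.1, "smoker": 1.5,
               "infectious_mononucleosis": 2.2, "low_vitamin_D": 1.4}

def ms_risk_score(factors, odds_ratios=ODDS_RATIOS):
    """Additive model: sum the log odds ratios of the risk factors that are present."""
    return sum(math.log(odds_ratios[f]) for f, present in factors.items() if present)

print(round(ms_risk_score({"HLA-DRB1*1501_carrier": True, "smoker": True,
                           "infectious_mononucleosis": False, "low_vitamin_D": True}), 3))  # 1.873
```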
Performance of Polygenic Scores for Predicting Phobic Anxiety
Walter, Stefan; Glymour, M. Maria; Koenen, Karestan; Liang, Liming; Tchetgen Tchetgen, Eric J.; Cornelis, Marilyn; Chang, Shun-Chiao; Rimm, Eric; Kawachi, Ichiro; Kubzansky, Laura D.
2013-01-01
Context Anxiety disorders are common, with a lifetime prevalence of 20% in the U.S., and are responsible for substantial burdens of disability, missed work days and health care utilization. To date, no causal genetic variants have been identified for anxiety, anxiety disorders, or related traits. Objective To investigate whether a phobic anxiety symptom score was associated with 3 alternative polygenic risk scores, derived from external genome-wide association studies of anxiety, an internally estimated agnostic polygenic score, or previously identified candidate genes. Design Longitudinal follow-up study. Using linear and logistic regression we investigated whether phobic anxiety was associated with polygenic risk scores derived from internal, leave-one out genome-wide association studies, from 31 candidate genes, and from out-of-sample genome-wide association weights previously shown to predict depression and anxiety in another cohort. Setting and Participants Study participants (n = 11,127) were individuals from the Nurses' Health Study and Health Professionals Follow-up Study. Main Outcome Measure Anxiety symptoms were assessed via the 8-item phobic anxiety scale of the Crown Crisp Index at two time points, from which a continuous phenotype score was derived. Results We found no genome-wide significant associations with phobic anxiety. Phobic anxiety was also not associated with a polygenic risk score derived from the genome-wide association study beta weights using liberal p-value thresholds; with a previously published genome-wide polygenic score; or with a candidate gene risk score based on 31 genes previously hypothesized to predict anxiety. Conclusion There is a substantial gap between twin-study heritability estimates of anxiety disorders ranging between 20–40% and heritability explained by genome-wide association results. New approaches such as improved genome imputations, application of gene expression and biological pathways information, and incorporating social or environmental modifiers of genetic risks may be necessary to identify significant genetic predictors of anxiety. PMID:24278274
Bodapati, Rohan K; Kizer, Jorge R; Kop, Willem J; Kamel, Hooman; Stein, Phyllis K
2017-07-21
Heart rate variability (HRV) characterizes cardiac autonomic functioning. The association of HRV with stroke is uncertain. We examined whether 24-hour HRV added predictive value to the Cardiovascular Health Study clinical stroke risk score (CHS-SCORE), previously developed at the baseline examination. N=884 stroke-free CHS participants (age 75.3±4.6), with 24-hour Holters adequate for HRV analysis at the 1994-1995 examination, had 68 strokes over ≤8-year follow-up (median 7.3 [interquartile range 7.1-7.6] years). The value of adding HRV to the CHS-SCORE was assessed with stepwise Cox regression analysis. The CHS-SCORE predicted incident stroke (HR=1.06 per unit increment, P=0.005). Two HRV parameters, decreased coefficient of variance of NN intervals (CV%, P=0.031) and decreased power law slope (SLOPE, P=0.033) also entered the model, but these did not significantly improve the c-statistic (P=0.47). In a secondary analysis, dichotomization of CV% (LOWCV% ≤12.8%) was found to maximally stratify higher-risk participants after adjustment for CHS-SCORE. Similarly, dichotomizing SLOPE (LOWSLOPE <-1.4) maximally stratified higher-risk participants. When these HRV categories were combined (eg, HIGHCV% with HIGHSLOPE), the c-statistic for the model with the CHS-SCORE and combined HRV categories was 0.68, significantly higher than 0.61 for the CHS-SCORE alone (P=0.02). In this sample of older adults, 2 HRV parameters, CV% and power law slope, emerged as significantly associated with incident stroke when added to a validated clinical risk score. After each parameter was dichotomized based on its optimal cut point in this sample, their composite significantly improved prediction of incident stroke during ≤8-year follow-up. These findings will require validation in separate, larger cohorts. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
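For readers unfamiliar with the HRV metric used, the coefficient of variance of NN intervals is the standard deviation of the normal-to-normal intervals divided by their mean, expressed as a percentage. The sketch below computes it and applies the study's 12.8% cut point to toy data; whether the study used the population or sample standard deviation is not stated, so the choice here is an assumption.

```python
import statistics

def cv_percent(nn_intervals_ms):
    """Coefficient of variance of NN intervals: 100 x SD / mean."""
    return 100 * statistics.pstdev(nn_intervals_ms) / statistics.mean(nn_intervals_ms)

def low_cv_flag(nn_intervals_ms, cutoff=12.8):
    # The study's higher-risk stratum (LOWCV%) was CV% <= 12.8%.
    return cv_percent(nn_intervals_ms) <= cutoff

nn = [812, 795, 840, 778, 830, 805, 790, 825]  # toy excerpt of NN intervals, in ms
print(round(cv_percent(nn), 1), low_cv_flag(nn))  # 2.5 True
```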
Rockall score in predicting outcomes of elderly patients with acute upper gastrointestinal bleeding
Wang, Chang-Yuan; Qin, Jian; Wang, Jing; Sun, Chang-Yi; Cao, Tao; Zhu, Dan-Dan
2013-01-01
AIM: To validate the clinical Rockall score in predicting outcomes (rebleeding, surgery and mortality) in elderly patients with acute upper gastrointestinal bleeding (AUGIB). METHODS: A retrospective analysis was undertaken in 341 patients admitted to the emergency room and Intensive Care Unit of Xuanwu Hospital of Capital Medical University with non-variceal upper gastrointestinal bleeding. The Rockall scores were calculated, and the association between clinical Rockall scores and patient outcomes (rebleeding, surgery and mortality) was assessed. Based on the Rockall scores, patients were divided into three risk categories: low risk ≤ 3, moderate risk 3-4, high risk ≥ 4, and the percentages of rebleeding/death/surgery in each risk category were compared. The area under the receiver operating characteristic (ROC) curve was calculated to assess the validity of the Rockall system in predicting rebleeding, surgery and mortality of patients with AUGIB. RESULTS: A positive linear correlation between clinical Rockall scores and patient outcomes in terms of rebleeding, surgery and mortality was observed (r = 0.962, 0.955 and 0.946, respectively, P = 0.001). High clinical Rockall scores > 3 were associated with adverse outcomes (rebleeding, surgery and death). There was a significant correlation between high Rockall scores and the occurrence of rebleeding, surgery and mortality in the entire patient population (χ2 = 49.29, 23.10 and 27.64, respectively, P = 0.001). For rebleeding, the area under the ROC curve was 0.788 (95%CI: 0.726-0.849, P = 0.001); For surgery, the area under the ROC curve was 0.752 (95%CI: 0.679-0.825, P = 0.001) and for mortality, the area under the ROC curve was 0.787 (95%CI: 0.716-0.859, P = 0.001). CONCLUSION: The Rockall score is clinically useful, rapid and accurate in predicting rebleeding, surgery and mortality outcomes in elderly patients with AUGIB. PMID:23801840
Van Belleghem, Griet; Devos, Stefanie; De Wit, Liesbet; Hubloue, Ives; Lauwaert, Door; Pien, Karen; Putman, Koen
2016-01-01
Injury severity scores are important in the context of developing European and national goals on traffic safety, health-care benchmarking and improving patient communication. Various severity scores are available and are mostly based on Abbreviated Injury Scale (AIS) or International Classification of Diseases (ICD). The aim of this paper is to compare the predictive value for in-hospital mortality between the various severity scores if only International Classification of Diseases, 9th revision, Clinical Modification ICD-9-CM is reported. To estimate severity scores based on the AIS lexicon, ICD-9-CM codes were converted with ICD Programmes for Injury Categorization (ICDPIC) and four AIS-based severity scores were derived: Maximum AIS (MaxAIS), Injury Severity Score (ISS), New Injury Severity Score (NISS) and Exponential Injury Severity Score (EISS). Based on ICD-9-CM, six severity scores were calculated. Determined by the number of injuries taken into account and the means by which survival risk ratios (SRRs) were calculated, four different approaches were used to calculate the ICD-9-based Injury Severity Scores (ICISS). The Trauma Mortality Prediction Model (TMPM) was calculated with the ICD-9-CM-based model averaged regression coefficients (MARC) for both the single worst injury and multiple injuries. Severity scores were compared via model discrimination and calibration. Model comparisons were performed separately for the severity scores based on the single worst injury and multiple injuries. For ICD-9-based scales, estimation of area under the receiver operating characteristic curve (AUROC) ranges between 0.94 and 0.96, while AIS-based scales range between 0.72 and 0.76, respectively. The intercept in the calibration plots is not significantly different from 0 for MaxAIS, ICISS and TMPM. When only ICD-9-CM codes are reported, ICD-9-CM-based severity scores perform better than severity scores based on the conversion to AIS. Copyright © 2015 Elsevier Ltd. All rights reserved.
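ICISS-type scores are conventionally computed as the product of the survival risk ratios (SRRs) attached to each of a patient's ICD-coded injuries, with a single-worst-injury variant using only the lowest SRR. A sketch under that convention, with made-up SRRs and ICD-9-CM codes:

```python
from functools import reduce

# Hypothetical survival risk ratios keyed by ICD-9-CM injury code; real SRRs
# are estimated from large trauma registries, not invented as here.
SRR = {"807.03": 0.95, "860.0": 0.90, "864.04": 0.80}

def iciss(injury_codes, srr=SRR):
    """Multiple-injury ICISS: the product of each injury's SRR,
    interpreted as an estimate of the probability of surviving all injuries."""
    return reduce(lambda acc, code: acc * srr[code], injury_codes, 1.0)

def iciss_worst_injury(injury_codes, srr=SRR):
    # Single-worst-injury variant: the lowest SRR among the recorded injuries.
    return min(srr[code] for code in injury_codes)

codes = ["807.03", "860.0", "864.04"]
print(round(iciss(codes), 3), iciss_worst_injury(codes))  # 0.684 0.8
```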
Zhang, Dong; Li, Yiping; Yin, Dong; He, Yuan; Chen, Changzhe; Song, Chenxi; Yan, Ruohua; Zhu, Chen'gang; Xu, Bo; Dou, Kefei
2017-03-01
To investigate the predictors of and generate a risk prediction method for periprocedural myocardial infarction (PMI) after percutaneous coronary intervention (PCI) using the new PMI definition proposed by the Society for Cardiovascular Angiography and Interventions (SCAI). The SCAI-defined PMI was found to be associated with worse prognosis than the PMI diagnosed by other definitions. However, few large-sample studies have attempted to predict the risk of SCAI-defined PMI. A total of 3,371 patients (3,516 selective PCIs) were included in this single-center retrospective analysis. The diagnostic criteria for PMI were set according to the SCAI definition. All clinical characteristics, coronary angiography findings and PCI procedural factors were collected. Multivariate logistic regression analysis was performed to identify independent predictors of PMI. To evaluate the risk of PMI, a multivariable risk score (PMI score) was constructed with incremental weights attributed to each component variable according to their estimated coefficients. PMI occurred in 108 (3.1%) of all patients. Age, multivessel treatment, at least one bifurcation treatment and total treated lesion length were independent predictors of SCAI-defined PMI. PMI scores ranged from 0 to 20. The C-statistic of PMI score was 0.71 (95% confidence interval: 0.66-0.76). PMI rates increased significantly from 1.96% in the non-high-risk group (PMI score < 10) to 6.26% in the high-risk group (PMI score ≥ 10) (P < 0.001). Age, multivessel treatment, at least one bifurcation treatment, and total treated lesion length are predictive of PMI. The PMI score could help identify patients at high risk of PMI after PCI. © 2017 Wiley Periodicals, Inc.
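The abstract describes assigning incremental weights to each predictor according to its estimated regression coefficient. One common way to do this is to scale the coefficients so the smallest maps to one point and round to integers; the sketch below uses invented coefficients, not the published PMI score weights.

```python
# Hypothetical logistic regression coefficients for the predictors named in the
# abstract; the published point assignments are not restated there.
BETAS = {"age_per_10yr": 0.18, "multivessel_treatment": 0.55,
         "bifurcation_treatment": 0.62, "treated_length_per_10mm": 0.25}

def beta_to_points(betas):
    """Convert model coefficients to integer points, scaling so the smallest
    coefficient maps to one point (a common way to build bedside risk scores)."""
    smallest = min(abs(b) for b in betas.values())
    return {name: round(abs(b) / smallest) for name, b in betas.items()}

def risk_score(patient, points):
    # Multiply each predictor's points by the patient's value and sum.
    return sum(points[name] * patient.get(name, 0) for name in points)

points = beta_to_points(BETAS)
patient = {"age_per_10yr": 7, "multivessel_treatment": 1,
           "bifurcation_treatment": 1, "treated_length_per_10mm": 4}
print(points, risk_score(patient, points))  # points per predictor, then the total score
```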
Gardener, Hannah; Wright, Clinton B; Gu, Yian; Demmer, Ryan T; Boden-Albala, Bernadette; Elkind, Mitchell S V; Sacco, Ralph L; Scarmeas, Nikolaos
2011-12-01
A dietary pattern common in regions near the Mediterranean appears to reduce risk of all-cause mortality and ischemic heart disease. Data on blacks and Hispanics in the United States are lacking, and to our knowledge only one study has examined a Mediterranean-style diet (MeDi) in relation to stroke. In this study, we examined an MeDi in relation to vascular events. The Northern Manhattan Study is a population-based cohort to determine stroke incidence and risk factors (mean ± SD age of participants: 69 ± 10 y; 64% women; 55% Hispanic, 21% white, and 24% black). Diet was assessed at baseline by using a food-frequency questionnaire in 2568 participants. A higher score on a 0-9 scale represented increased adherence to an MeDi. The relation between the MeDi score and risk of ischemic stroke, myocardial infarction (MI), and vascular death was assessed with Cox models, with control for sociodemographic and vascular risk factors. The MeDi-score distribution was as follows: 0-2 (14%), 3 (17%), 4 (22%), 5 (22%), and 6-9 (25%). Over a mean follow-up of 9 y, 518 vascular events accrued (171 ischemic strokes, 133 MIs, and 314 vascular deaths). The MeDi score was inversely associated with risk of the composite outcome of ischemic stroke, MI, or vascular death (P-trend = 0.04) and with vascular death specifically (P-trend = 0.02). Moderate and high MeDi scores were marginally associated with decreased risk of MI. There was no association with ischemic stroke. Higher consumption of an MeDi was associated with decreased risk of vascular events. Results support the role of a diet rich in fruit, vegetables, whole grains, fish, and olive oil in the promotion of ideal cardiovascular health.
Xu, Cheng; Chen, Yu-Pei; Liu, Xu; Tang, Ling-Long; Chen, Lei; Mao, Yan-Ping; Zhang, Yuan; Guo, Rui; Zhou, Guan-Qun; Li, Wen-Fei; Lin, Ai-Hua; Sun, Ying; Ma, Jun
2017-06-01
The effect of socioeconomic factors on receipt of definitive treatment and survival outcomes in non-metastatic head and neck squamous cell carcinoma (HNSCC) remains unclear. Eligible patients (n = 37 995) were identified from the United States Surveillance, Epidemiology and End Results (SEER) database between 2007 and 2012. Socioeconomic factors (i.e., median household income, education level, unemployment rate, insurance status, marital status and residence) were included in univariate/multivariate Cox regression analysis; validated factors were used to generate nomograms for cause-specific survival (CSS) and overall survival (OS), and a prognostic score model for risk stratification. Low- and high-risk groups were compared for all cancer subsites. Impact of race/ethnicity on survival was investigated in each risk group. Marital status, median household income and insurance status were included in the nomograms for CSS and OS, which had higher c-indexes than the 6th edition TNM staging system (all P < 0.001). Based on three disadvantageous socioeconomic factors (i.e., unmarried status, uninsured status, median household income
Prospective evaluation of the Sunshine Appendicitis Grading System score.
Reid, Fiona; Choi, Julian; Williams, Marli; Chan, Steven
2017-05-01
Although there is a wealth of information predicting risk of post-operative intra-abdominal collection and guiding antibiotic therapy following appendicectomy, confusion remains because of lack of consensus on the clinical severity and definition of 'complicated' appendicitis. This study aimed to develop a standardized intra-operative grading system, the Sunshine Appendicitis Grading System (SAGS), for acute appendicitis that correlates independently with the risk of intra-abdominal collections. Two hundred and forty-six patients undergoing emergency laparoscopy for suspected appendicitis were prospectively scored according to the severity of appendicitis and followed up for complications including intra-abdominal collection. After termination of the study, the SAGS score was repeated by an independent surgeon based on operation notes and intra-operative photography to determine inter-rater agreement. The primary outcome measure was incidence of intra-abdominal collection; secondary outcome measures were all complications and length of stay. The SAGS score demonstrated good inter-rater agreement (weighted kappa Kw 0.869; 95% CI 0.796-0.941; P < 0.001). A risk ratio of 2.594 (95% CI 0.655-4.065; P < 0.001) for intra-abdominal collection was found using the SAGS score as a predictor. The discriminative ability of the SAGS score was supported by an area under the curve value of 0.850 (95% CI 0.799-0.892; P < 0.001). The SAGS score can be used to simply and accurately classify the severity of appendicitis and to independently predict the risk of intra-abdominal collection. It can therefore be used to stratify risk, guide antibiotic therapy and follow-up, and standardize the definitions of appendicitis severity for future research. © 2015 Royal Australasian College of Surgeons.
Sicard, Mélanie; Nusinovici, Simon; Hanf, Matthieu; Muller, Jean-Baptiste; Guellec, Isabelle; Ancel, Pierre-Yves; Gascoin, Géraldine; Rozé, Jean-Christophe; Flamant, Cyril
2017-01-01
Preterm infants present higher risk of non-optimal neurodevelopmental outcome. Fetal and postnatal growth, in particular head circumference (HC), is associated with neurodevelopmental outcome. We aimed to calculate the relationship between HC at birth, HC delta Z-score (between birth and hospital discharge), and non-optimal neurodevelopmental outcome at 2 years of corrected age in preterm infants. Surviving infants born ≤34 weeks of gestation were included in the analysis. The relationship between the risk of being non-optimal at 2 years and both HC at birth and HC growth was assessed. The 2 Z-scores were considered first independently and then simultaneously to investigate their effect on the risk of non-optimality using a generalized additive model. A total of 4,046 infants with both HC measures at birth and hospital discharge were included. Infants with small HC at birth (Z-score <-2 SD), or presenting suboptimal HC growth (dZ-score <-2 SD), are at higher risk of non-optimal neurodevelopmental outcome at 2 years (respectively OR 1.7 [95% CI 1.4-2] and OR 1.4 [95% CI 1.2-1.8]). Interestingly, patients cumulating small HC Z-score at birth (-2 SD) and presenting catch-down growth (HC dZ-score [-2 SD]) have a significantly increased risk for neurocognitive impairment (OR >2) while adjusting for gestational age, twin status, sex, and socioeconomic information. HC at birth and HC dZ-score between birth and hospital discharge are synergistically associated to neurodevelopmental outcome at 2 years of corrected age, in a population-based prospective cohort of preterm infants born ≤34 weeks of gestation. © 2017 S. Karger AG, Basel.
FiGHTS: a preliminary screening tool for adolescent firearms-carrying.
Hayes, D Neil; Sege, Robert
2003-12-01
Adolescent firearms-carrying is a risk factor for serious injury and death. Clinical screening tools for firearms-carrying have not yet been developed. We present the development of a preliminary screening test for adolescent firearms-carrying based on the growing body of knowledge of firearms-related risk factors. A convenience sample of 15,000 high school students from the 1999 National Youth Risk Behavior Survey was analyzed for the purpose of model building. Known risk factors for firearms-carrying were candidates for 2 models predicting recent firearms-carrying. The "brief FiGHTS score" screening tool excluded terms related to sexual behavior, significant substance abuse, or criminal behavior (Fi=fighting, G=gender, H=hurt while fighting, T=threatened, S=smoker). An "extended FiGHTS score," which included 13 items, was developed for more precise estimates. The brief FiGHTS score had a sensitivity of 82%, a specificity of 71%, and an area under the receiver operating characteristic (ROC) curve of 0.84. The extended FiGHTS score had an area under the ROC curve of 0.90. Both models performed well in a validation data set of 55,000 students. The brief and extended FiGHTS scores have high sensitivity and specificity for predicting firearms-carrying and may be appropriate for clinical testing.
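Sensitivity and specificity of a screening cut-off, as reported for the brief FiGHTS score, come directly from the 2x2 table of predictions against observed firearms-carrying. A small sketch with toy data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: 1 = carried a firearm; a prediction of 1 means the brief score exceeded a chosen cut-off.
carried = [1, 1, 1, 0, 0, 0, 0, 1]
flagged = [1, 1, 0, 0, 1, 0, 0, 1]
print(sensitivity_specificity(carried, flagged))  # (0.75, 0.75)
```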
Risk Factors for Malnutrition Among Children With Cerebral Palsy in Botswana.
Johnson, Allison; Gambrah-Sampaney, Claudia; Khurana, Esha; Baier, James; Baranov, Esther; Monokwane, Baphaleng; Bearden, David R
2017-05-01
Children with cerebral palsy in low-resource settings are at high risk of malnutrition, which further increases their risk of poor health outcomes. However, there are few available data on specific risk factors for malnutrition among children with cerebral palsy in the developing world. We performed a case-control study among children with cerebral palsy receiving care at a tertiary care hospital in Gaborone, Botswana. Children with cerebral palsy and malnutrition were identified according to World Health Organization growth curves and compared with subjects with cerebral palsy without malnutrition. Risk factors for malnutrition were identified using multivariable logistic regression models. These risk factors were then used to generate a Malnutrition Risk Score, and Receiver Operating Characteristic curves were used to identify optimal cutoffs to identify subjects at high risk of malnutrition. We identified 61 children with cerebral palsy, 26 of whom (43%) met criteria for malnutrition. Nonambulatory status (odds ratio 13.8, 95% confidence interval [CI] 3.8-50.1, P < 0.001) and a composite measure of socioeconomic status (odds ratio 1.6, 95% CI 1.0-2.5, P = 0.03) were the strongest risk factors for malnutrition. A Malnutrition Risk Score was constructed based on these risk factors, and receiver operating characteristic curve analysis demonstrated excellent performance characteristics of this score (area under the curve 0.92, 95% CI 0.89-0.94). Malnutrition is common among children with cerebral palsy in Botswana, and a simple risk score may help identify children with the highest risk. Further studies are needed to validate this screening tool and to determine optimal nutritional interventions in this population. Copyright © 2017 Elsevier Inc. All rights reserved.
Braulke, Friederike; Platzbecker, Uwe; Müller-Thomas, Catharina; Götze, Katharina; Germing, Ulrich; Brümmendorf, Tim H; Nolte, Florian; Hofmann, Wolf-Karsten; Giagounidis, Aristoteles A N; Lübbert, Michael; Greenberg, Peter L; Bennett, John M; Solé, Francesc; Mallo, Mar; Slovak, Marilyn L; Ohyashiki, Kazuma; Le Beau, Michelle M; Tüchler, Heinz; Pfeilstöcker, Michael; Nösslinger, Thomas; Hildebrandt, Barbara; Shirneshan, Katayoon; Aul, Carlo; Stauder, Reinhard; Sperr, Wolfgang R; Valent, Peter; Fonatsch, Christa; Trümper, Lorenz; Haase, Detlef; Schanz, Julie
2015-02-01
International Prognostic Scoring Systems are used to determine the individual risk profile of myelodysplastic syndrome patients. For the assessment of International Prognostic Scoring Systems, an adequate chromosome banding analysis of the bone marrow is essential. Cytogenetic information is not available for a substantial number of patients (5%-20%) with dry marrow or an insufficient number of metaphase cells. For these patients, a valid risk classification is impossible. In the study presented here, the International Prognostic Scoring Systems were validated based on fluorescence in situ hybridization analyses using extended probe panels applied to cluster of differentiation 34 positive (CD34(+)) peripheral blood cells of 328 MDS patients of our prospective multicenter German diagnostic study and compared to chromosome banding results of 2902 previously published patients with myelodysplastic syndromes. For cytogenetic risk classification by fluorescence in situ hybridization analyses of CD34(+) peripheral blood cells, the groups differed significantly for overall and leukemia-free survival by uni- and multivariate analyses without discrepancies between treated and untreated patients. Including cytogenetic data of fluorescence in situ hybridization analyses of peripheral CD34(+) blood cells (instead of bone marrow banding analysis) into the complete International Prognostic Scoring System assessment, the prognostic risk groups separated significantly for overall and leukemia-free survival. Our data show that a reliable stratification to the risk groups of the International Prognostic Scoring Systems is possible from peripheral blood in patients with missing chromosome banding analysis by using a comprehensive probe panel (clinicaltrials.gov identifier:01355913). Copyright© Ferrata Storti Foundation.
Bodilsen, Jacob; Dalager-Pedersen, Michael; Schønheyder, Henrik Carl; Nielsen, Henrik
2014-06-01
The morbidity and mortality in community-acquired bacterial meningitis (CABM) remain substantial and treatment outcomes and predictors of a poor prognosis must be assessed regularly. We aimed to describe the outcome of patients with CABM treated with dexamethasone and to assess the performance of the Dutch Meningitis Risk Score (DMRS). We retrospectively evaluated all adults with CABM in North Denmark Region, 1998-2012. Outcomes included in-hospital mortality and Glasgow Outcome Scale (GOS) score. A GOS score of 5 was categorized as a favourable outcome and scores of 1-4 as unfavourable. We used logistic analysis to compute relative risks (RRs) with 95% confidence intervals (CIs) for an unfavourable outcome adjusted for age, sex, and comorbidity. We identified a total of 172 cases of CABM. In-hospital mortality was unaffected by the implementation of dexamethasone in 2003 (19% vs 20%). Dexamethasone treatment was associated with a prompt diagnosis of meningitis and a statistically insignificant decrease in the risk of an unfavourable outcome (33% vs 53%; adjusted RR 0.64, 95% CI 0.41-1.01) and in-hospital mortality (15% vs 24%; adjusted RR 0.72, 95% CI 0.35-1.48). Of the risk factors included in the DMRS, we found age and tachycardia to be significantly associated with an unfavourable outcome in the multivariate analyses. Patients treated with dexamethasone were more likely to have a favourable outcome, although statistical significance was not reached. Several parameters included in the Dutch risk score were also negative predictors in our cohort, although the entire risk score could not be validated due to a lack of data.
Spittal, Matthew J; Bismark, Marie M; Studdert, David M
2015-01-01
Background Medicolegal agencies—such as malpractice insurers, medical boards and complaints bodies—are mostly passive regulators; they react to episodes of substandard care, rather than intervening to prevent them. At least part of the explanation for this reactive role lies in the widely recognised difficulty of making robust predictions about medicolegal risk at the individual clinician level. We aimed to develop a simple, reliable scoring system for predicting Australian doctors’ risks of becoming the subject of repeated patient complaints. Methods Using routinely collected administrative data, we constructed a national sample of 13 849 formal complaints against 8424 doctors. The complaints were lodged by patients with state health service commissions in Australia over a 12-year period. We used multivariate logistic regression analysis to identify predictors of subsequent complaints, defined as another complaint occurring within 2 years of an index complaint. Model estimates were then used to derive a simple predictive algorithm, designed for application at the doctor level. Results The PRONE (Predicted Risk Of New Event) score is a 22-point scoring system that indicates a doctor's future complaint risk based on four variables: a doctor's specialty and sex, the number of previous complaints and the time since the last complaint. The PRONE score performed well in predicting subsequent complaints, exhibiting strong validity and reliability and reasonable goodness of fit (c-statistic=0.70). Conclusions The PRONE score appears to be a valid method for assessing individual doctors’ risks of attracting recurrent complaints. Regulators could harness such information to target quality improvement interventions, and prevent substandard care and patient dissatisfaction. The approach we describe should be replicable in other agencies that handle large numbers of patient complaints or malpractice claims. PMID:25855664
Sjoberg, Daniel D; Vickers, Andrew J; Assel, Melissa; Dahlin, Anders; Poon, Bing Ying; Ulmert, David; Lilja, Hans
2018-06-01
Prostate-specific antigen (PSA) screening reduces prostate cancer deaths but leads to harm from overdiagnosis and overtreatment. To determine the long-term risk of prostate cancer mortality using kallikrein blood markers measured at baseline in a large population of healthy men to identify men with low risk for prostate cancer death. Study based on the Malmö Diet and Cancer cohort enrolling 11 506 unscreened men aged 45-73 yr during 1991-1996, providing cryopreserved blood at enrollment and followed without PSA screening to December 31, 2014. We measured four kallikrein markers in the blood of 1223 prostate cancer cases and 3028 controls. Prostate cancer death (n=317) by PSA and a prespecified statistical model based on the levels of four kallikrein markers. Baseline PSA predicted prostate cancer death with a concordance index of 0.86. In men with elevated PSA (≥2.0ng/ml), predictive accuracy was enhanced by the four-kallikrein panel compared with PSA (0.80 vs 0.73; improvement 0.07; 95% confidence interval 0.04, 0.10). Nearly half of men aged 60+ yr with elevated PSA had a four-kallikrein panel score of <7.5%, translating into 1.7% risk of prostate cancer death at 15 yr-a similar estimate to that of a man with a PSA of 1.6ng/ml. Men with a four-kallikrein panel score of ≥7.5% had a 13% risk of prostate cancer death at 15 yr. A prespecified statistical model based on four kallikrein markers (commercially available as the 4Kscore) reclassified many men with modestly elevated PSA, to have a low long-term risk of prostate cancer death. Men with elevated PSA but low scores from the four-kallikrein panel can be monitored rather than being subject to biopsy. Men with elevated prostate-specific antigen (PSA) are often referred for prostate biopsy. However, men with elevated PSA but low scores from the four-kallikrein panel can be monitored rather than being subject to biopsy. Copyright © 2018 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Lim, Jiyeon; Lee, Yunhee; Shin, Sangah; Lee, Hwi-Won; Kim, Claire E; Lee, Jong-Koo; Lee, Sang-Ah; Kang, Daehee
2018-06-01
Diet quality scores or indices, based on dietary guidelines, are used to summarize dietary intake into a single numeric variable. The aim of this study was to examine the association between the modified diet quality index for Koreans (DQI-K) and mortality among Health Examinees-Gem (HEXA-G) study participants. The DQI-K was modified from the original diet quality index. A total of 134,547 participants (45,207 men and 89,340 women) from the HEXA-G study (2004 and 2013) were included. The DQI-K is based on eight components: 1) daily protein intake, 2) percent of energy from fat, 3) percent of energy from saturated fat, 4) daily cholesterol intake, 5) daily whole-grain intake, 6) daily fruit intake, 7) daily vegetable intake, and 8) daily sodium intake. The association between all-cause mortality and the DQI-K was examined using Cox proportional hazard regression models. Hazard ratios and confidence intervals were estimated after adjusting for age, gender, income, smoking status, alcohol drinking, body mass index, and total energy intake. The total DQI-K score was calculated by summing the scores of the eight components (range 0-9). In the multivariable adjusted models, with good diet quality (score 0-4) as a reference, poor diet quality (score 5-9) was associated with an increased risk of all-cause mortality (hazard ratios = 1.23, 95% confidence intervals = 1.06-1.43). Moreover, a one-unit increase in DQI-K score resulted in a 6% higher mortality risk. A poor diet quality DQI-K score was associated with an increased risk of mortality. The DQI-K in the present study may be used to assess the diet quality of Korean adults.
The Pediatric Risk of Mortality Score: Update 2015
Pollack, Murray M.; Holubkov, Richard; Funai, Tomohiko; Dean, J. Michael; Berger, John T.; Wessel, David L.; Meert, Kathleen; Berg, Robert A.; Newth, Christopher J. L.; Harrison, Rick E.; Carcillo, Joseph; Dalton, Heidi; Shanley, Thomas; Jenkins, Tammara L.; Tamburro, Robert
2016-01-01
Objectives Severity of illness measures have long been used in pediatric critical care. The Pediatric Risk of Mortality is a physiologically based score used to quantify physiologic status, and when combined with other independent variables, it can compute expected mortality risk and expected morbidity risk. Although the physiologic ranges for the Pediatric Risk of Mortality variables have not changed, recent Pediatric Risk of Mortality data collection improvements have been made to adapt to new practice patterns, minimize bias, and reduce potential sources of error. These include changing the outcome to hospital survival/death for the first PICU admission only, shortening the data collection period and altering the Pediatric Risk of Mortality data collection period for patients admitted for “optimizing” care before cardiac surgery or interventional catheterization. This analysis incorporates those changes, assesses the potential for Pediatric Risk of Mortality physiologic variable subcategories to improve score performance, and recalibrates the Pediatric Risk of Mortality score, placing the algorithms (Pediatric Risk of Mortality IV) in the public domain. Design Prospective cohort study from December 4, 2011, to April 7, 2013. Measurements and Main Results Among 10,078 admissions, the unadjusted mortality rate was 2.7% (site range, 1.3–5.0%). Data were divided into derivation (75%) and validation (25%) sets. The new Pediatric Risk of Mortality prediction algorithm (Pediatric Risk of Mortality IV) includes the same Pediatric Risk of Mortality physiologic variable ranges with the subcategories of neurologic and nonneurologic Pediatric Risk of Mortality scores, age, admission source, cardiopulmonary arrest within 24 hours before admission, cancer, and low-risk systems of primary dysfunction. The area under the receiver operating characteristic curve for the development and validation sets was 0.88 ± 0.013 and 0.90 ± 0.018, respectively. The Hosmer-Lemeshow goodness of fit statistics indicated adequate model fit for both the development (p = 0.39) and validation (p = 0.50) sets. Conclusions The new Pediatric Risk of Mortality data collection methods include significant improvements that minimize the potential for bias and errors, and the new Pediatric Risk of Mortality IV algorithm for survival and death has excellent prediction performance. PMID:26492059
Stuart, Elizabeth A.; Lee, Brian K.; Leacy, Finbarr P.
2013-01-01
Objective Examining covariate balance is the prescribed method for determining when propensity score methods are successful at reducing bias. This study assessed the performance of various balance measures, including a proposed balance measure based on the prognostic score (also known as the disease-risk score), to determine which balance measures best correlate with bias in the treatment effect estimate. Study Design and Setting The correlations of multiple common balance measures with bias in the treatment effect estimate produced by weighting by the odds, subclassification on the propensity score, and full matching on the propensity score were calculated. Simulated data were used, based on realistic data settings. Settings included both continuous and binary covariates and continuous covariates only. Results The standardized mean difference in prognostic scores, the mean standardized mean difference, and the mean t-statistic all had high correlations with bias in the effect estimate. Overall, prognostic scores displayed the highest correlations of all the balance measures considered. Prognostic score measure performance was generally not affected by model misspecification and performed well under a variety of scenarios. Conclusion Researchers should consider using prognostic score–based balance measures for assessing the performance of propensity score methods for reducing bias in non-experimental studies. PMID:23849158
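As an illustration of the prognostic-score-based balance check described above, the following is a minimal Python sketch of a weighted standardized mean difference, here applied to prognostic scores under weighting by the odds. The propensity scores, prognostic scores and weighting scheme in the example are hypothetical and only indicate the general idea, not the simulation settings used in the study.

```python
import numpy as np

def standardized_mean_difference(x, treated, weights=None):
    """Weighted standardized mean difference of a covariate or prognostic
    score between treated and control groups; values near 0 suggest balance."""
    x = np.asarray(x, float)
    treated = np.asarray(treated, bool)
    w = np.ones_like(x) if weights is None else np.asarray(weights, float)
    m1 = np.average(x[treated], weights=w[treated])
    m0 = np.average(x[~treated], weights=w[~treated])
    v1 = np.average((x[treated] - m1) ** 2, weights=w[treated])
    v0 = np.average((x[~treated] - m0) ** 2, weights=w[~treated])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2.0)

# Hypothetical example: prognostic scores checked after weighting by the odds
# (ATT weights: 1 for treated subjects, PS/(1-PS) for controls).
rng = np.random.default_rng(0)
ps = rng.uniform(0.1, 0.9, 500)                # hypothetical propensity scores
treated = rng.random(500) < ps
prognostic = rng.normal(0, 1, 500) + 0.5 * ps  # hypothetical prognostic scores
w = np.where(treated, 1.0, ps / (1 - ps))
print(standardized_mean_difference(prognostic, treated, w))
```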
Lantelme, Pierre; Eltchaninoff, Hélène; Rabilloud, Muriel; Souteyrand, Géraud; Dupré, Marion; Spaziano, Marco; Bonnet, Marc; Becle, Clément; Riche, Benjamin; Durand, Eric; Bouvier, Erik; Dacher, Jean-Nicolas; Courand, Pierre-Yves; Cassagnes, Lucie; Dávila Serrano, Eduardo E; Motreff, Pascal; Boussel, Loic; Lefèvre, Thierry; Harbaoui, Brahim
2018-05-11
The aim of this study was to develop a new scoring system based on thoracic aortic calcification (TAC) to predict 1-year cardiovascular and all-cause mortality. A calcified aorta is often associated with poor prognosis after transcatheter aortic valve replacement (TAVR). A risk score encompassing aortic calcification may be valuable in identifying poor TAVR responders. The C4CAPRI (4 Cities for Assessing CAlcification PRognostic Impact) multicenter study included a training cohort (1,425 patients treated using TAVR between 2010 and 2014) and a contemporary test cohort (311 patients treated in 2015). TAC was measured by computed tomography pre-TAVR. CAPRI risk scores were based on the linear predictors of Cox models including TAC in addition to comorbidities and demographic, atherosclerotic disease and cardiac function factors. CAPRI scores were constructed and tested in 2 independent cohorts. Cardiovascular and all-cause mortality at 1 year was 13.0% and 17.9%, respectively, in the training cohort and 8.2% and 11.8% in the test cohort. The inclusion of TAC in the model improved prediction: a 1-cm³ increase in TAC was associated with a 6% increase in cardiovascular mortality and a 4% increase in all-cause mortality. The predicted and observed survival probabilities were highly correlated (slopes >0.9 for both cardiovascular and all-cause mortality). The model's predictive power was fair (AUC 68%; 95% confidence interval [CI]: 64-72) for both cardiovascular and all-cause mortality. The model performed similarly in the training and test cohorts. The CAPRI score, which combines the TAC variable with classical prognostic factors, is predictive of 1-year cardiovascular and all-cause mortality. Its predictive performance was confirmed in an independent contemporary cohort. The CAPRI score is highly relevant to current practice and strengthens the evidence base for decision making in valvular interventions. Its routine use may help prevent futile procedures. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Gysan, Detlef Bernd; Millentrup, Stefanie; Albus, Christian; Bjarnason-Wehrens, Birna; Latsch, Joachim; Gohlke, Helmut; Herold, Gerd; Wegscheider, Karl; Heming, Christian; Seyfarth, Melchior; Predel, Hans-Georg
2017-09-01
Trial design Prospective randomized multicentre interventional study. Methods Individual cardiovascular risk assessment in employees of the Ford Company, Germany (n = 4,196), using the European Society of Cardiology-Systematic Coronary Risk Evaluation (ESC-SCORE) for classification into three risk groups. Subjects assigned to the ESC high-risk group (ESC-SCORE ≥ 5%) without a history of cardiovascular disease were eligible for randomization to a multimodal 15-week intervention programme (INT) or to usual care and followed up for 36 months. Objectives Evaluation of the long-term effects of a risk-adjusted multimodal intervention in high-risk subjects. Primary endpoint: reduction of ESC-SCORE in INT versus usual care. Secondary endpoints: composite of fatal and non-fatal cardiovascular events and time to first cardiovascular event. Analyses: intention-to-treat and per-protocol. Results Four hundred and forty-seven subjects were randomized to INT (n = 224) or to usual care (n = 223). After 36 months ESC-SCORE development favouring INT was observed (INT: 8.70% to 10.03% vs. usual care: 8.49% to 12.09%; p = 0.005; net difference: 18.50%). Moreover, a significant reduction in the composite of cardiovascular events was observed (INT: n = 11 vs. usual care: n = 27). The hazard ratio of intervention versus control was 0.51 (95% confidence interval 0.25-1.03; p = 0.062) in the intention-to-treat analysis and 0.41 (95% confidence interval 0.18-0.90; p = 0.026) in the per-protocol analysis, respectively. No intervention-related adverse events or side-effects were observed. Conclusions Our results demonstrate the efficiency of identifying cardiovascular high-risk subjects by the ESC-SCORE in order to enrol them in a risk-adjusted primary prevention programme. This strategy resulted in a significant improvement of ESC-SCORE, as well as a reduction in predefined cardiovascular endpoints in the INT group within 36 months. (ISRCTN 23536103).
Wang, Hao; Li, Zhong; Yin, Mei; Chen, Xiao-Mei; Ding, Shi-Fang; Li, Chen; Zhai, Qian; Li, Yuan; Liu, Han; Wu, Da-Wei
2015-04-01
Given the high mortality rates in elderly patients with septic shock, the early recognition of patients at greatest risk of death is crucial for the implementation of early intervention strategies. Serum lactate and N-terminal prohormone of brain natriuretic peptide (NT-proBNP) levels are often elevated in elderly patients with septic shock and are therefore important biomarkers of metabolic and cardiac dysfunction. We hypothesized that a risk stratification system that incorporates the Acute Physiology and Chronic Health Evaluation (APACHE) II score and lactate and NT-proBNP biomarkers would better predict mortality in geriatric patients with septic shock than the APACHE II score alone. A single-center prospective study was conducted from January 2012 to December 2013 in a 30-bed intensive care unit of a triservice hospital. The lactate area score was defined as the sum of the area under the curve of serial lactate levels measured during the 24 hours following admission divided by 24. The NT-proBNP score was assigned based on NT-proBNP levels measured at admission. The combined score was calculated by adding the lactate area and NT-proBNP scores to the APACHE II score. Multivariate logistic regression analyses and receiver operating characteristic curves were used to evaluate which variables and scoring systems served as the best predictors of mortality in elderly septic patients. A total of 115 patients with septic shock were included in the study. The overall 28-day mortality rate was 67.0%. When compared to survivors, nonsurvivors had significantly higher lactate area scores, NT-proBNP scores, APACHE II scores, and combined scores. In the multivariate regression model, the combined score, lactate area score, and mechanical ventilation were independent risk factors associated with death. Receiver operating characteristic curves indicated that the combined score had significantly greater predictive power when compared to the APACHE II score or the NT-proBNP score (P < .05). A combined score that incorporates the APACHE II score with early lactate area and NT-proBNP levels is a useful method for risk stratification in geriatric patients with septic shock. Copyright © 2014 Elsevier Inc. All rights reserved.
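A minimal Python sketch of the combined score as described above: the lactate area score is the area under the serial-lactate curve over the first 24 hours divided by 24, and the combined score adds the lactate area and NT-proBNP scores to the APACHE II score. The NT-proBNP cut-offs and point values below are hypothetical, since the abstract does not report them.

```python
import numpy as np

def lactate_area_score(times_h, lactate_mmol_l):
    """AUC of serial lactate over the first 24 h after admission, divided by 24
    (i.e., the time-weighted mean lactate), as defined in the abstract."""
    t = np.asarray(times_h, float)
    y = np.asarray(lactate_mmol_l, float)
    keep = t <= 24
    t, y = t[keep], y[keep]
    auc = np.sum((t[1:] - t[:-1]) * (y[1:] + y[:-1]) / 2.0)  # trapezoid rule
    return auc / 24.0

def nt_probnp_score(nt_probnp_pg_ml):
    """Hypothetical banded point assignment; the study's cut-offs are not given."""
    for cutoff, points in [(450, 0), (1800, 1), (9000, 2)]:
        if nt_probnp_pg_ml < cutoff:
            return points
    return 3

def combined_score(apache_ii, times_h, lactates, nt_probnp):
    return apache_ii + lactate_area_score(times_h, lactates) + nt_probnp_score(nt_probnp)

print(combined_score(24, [0, 6, 12, 18, 24], [4.2, 3.5, 3.0, 2.4, 2.0], 5200))
```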
Pocock, Stuart J; Huo, Yong; Van de Werf, Frans; Newsome, Simon; Chin, Chee Tang; Vega, Ana Maria; Medina, Jesús; Bueno, Héctor
2017-08-01
Long-term risk of post-discharge mortality associated with acute coronary syndrome remains a concern. The development of a model to reliably estimate two-year mortality risk from hospital discharge post-acute coronary syndrome will help guide treatment strategies. EPICOR (long-tErm follow uP of antithrombotic management patterns In acute CORonary syndrome patients, NCT01171404) and EPICOR Asia (NCT01361386) are prospective observational studies of 23,489 patients hospitalized for an acute coronary syndrome event, who survived to discharge and were then followed up for two years. Patients were enrolled from 28 countries across Europe, Latin America and Asia. Risk scoring for two-year all-cause mortality was developed using identified predictive variables and forward stepwise Cox regression. Goodness-of-fit and discriminatory power were estimated. Within two years of discharge 5.5% of patients died. We identified 17 independent mortality predictors: age, low ejection fraction, no coronary revascularization/thrombolysis, elevated serum creatinine, poor EQ-5D score, low haemoglobin, previous cardiac or chronic obstructive pulmonary disease, elevated blood glucose, on diuretics or an aldosterone inhibitor at discharge, male sex, low educational level, in-hospital cardiac complications, low body mass index, ST-segment elevation myocardial infarction diagnosis, and Killip class. Geographic variation in mortality risk was seen following adjustment for other predictive variables. The developed risk-scoring system provided excellent discrimination (c-statistic = 0.80, 95% confidence interval = 0.79-0.82) with a steep gradient in two-year mortality risk: >25% (top decile) vs. ~1% (bottom quintile). A simplified risk model with 11 predictors gave only slightly weaker discrimination (c-statistic = 0.79, 95% confidence interval = 0.78-0.81). This risk score for two-year post-discharge mortality in acute coronary syndrome patients (www.acsrisk.org) can facilitate identification of high-risk patients and help guide tailored secondary prevention measures.
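To make the mechanics concrete, here is a hedged Python sketch of how a Cox-regression risk score of this kind is typically turned into an absolute two-year mortality probability via the baseline survival. The coefficients, baseline survival and mean linear predictor below are invented for illustration; they are not the EPICOR model.

```python
import math

# Hypothetical log hazard ratios for a few of the predictors and a hypothetical
# baseline two-year survival evaluated at the mean linear predictor.
COEFS = {"age_per_10yr": 0.45, "low_ejection_fraction": 0.60,
         "no_revascularization": 0.40, "male_sex": 0.15, "low_haemoglobin": 0.35}
BASELINE_SURVIVAL_2Y = 0.97
MEAN_LINEAR_PREDICTOR = 3.2

def two_year_mortality_risk(covariates):
    """Risk = 1 - S0(2y) ** exp(lp - mean_lp): the usual translation of a Cox
    linear predictor (lp) into an individual absolute probability."""
    lp = sum(COEFS[name] * value for name, value in covariates.items())
    return 1.0 - BASELINE_SURVIVAL_2Y ** math.exp(lp - MEAN_LINEAR_PREDICTOR)

patient = {"age_per_10yr": 7.5, "low_ejection_fraction": 1,
           "no_revascularization": 1, "male_sex": 1, "low_haemoglobin": 0}
print(round(two_year_mortality_risk(patient), 3))  # about 0.11 with these inputs
```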
Tracking Success in Large Introductory Classes using Technology
NASA Astrophysics Data System (ADS)
Robertson, Thomas H.
2011-01-01
A common problem frequently encountered in large introductory classes is the anonymity experienced by students. An effort is underway at Ball State University to explore the impact of technology on reducing this anonymity and improving student performance and success. In preparation for this study, performance and success measures for students in a previous class have been examined to provide background for construction of a model for formal testing and a control group for comparison of future results. Student performance measures obtained early in the course and final course grades were examined to identify potential early warning indicators that might be used to plan interventions much earlier than the traditional midterm course reports used to alert freshmen at academic risk. Class participation scores were based on data obtained with a personal response system (i>clicker). The scores were scaled to reflect about 80% comprehension and 20% attendance. Homework scores were obtained using the LON-CAPA Course Management System and instructional materials created by the author. Substantial linear correlations exist between 1) Exam 1 Scores after Four Weeks and 2) Raw Class Participation Scores for the First Six Weeks and the Final Course Score. A more modest linear correlation was found between 3) Homework Scores for First Six Weeks and Final Course Score. Of these three measures, only Class Participation Scores identified all students who ultimately received course grades lower than C. Several students scored in the danger zone according to Homework and Class Participation Scores but earned course grades of C or better. It appears that an early warning plan based on Class Participation Scores would permit effective identification of at-risk students early in the course.
Automated coronary artery calcification detection on low-dose chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.
2014-03-01
Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. The mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180HU within the mask region are considered as CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method risk category agreed with manual markings of gated scans for 24 cases while 15 cases were 1 category off. For low-dose scans, the automatic method agreed with 33 cases while 7 cases were 1 category off.
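For readers unfamiliar with the Agatston score (AS) mentioned above, the sketch below shows the standard per-lesion computation: each calcified focus contributes its area multiplied by a density weight derived from its peak attenuation. The 130 HU weighting bands are the conventional ones for gated scans; the paper's own pipeline uses a 180 HU candidate threshold on un-gated low-dose scans, and the lesion values below are hypothetical.

```python
def agatston_weight(peak_hu):
    """Conventional Agatston density weight from a lesion's peak attenuation."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0

def agatston_score(lesions):
    """lesions: iterable of (area_mm2, peak_hu) per slice for each calcified
    focus inside the coronary region mask; score = sum of area x weight."""
    return sum(area * agatston_weight(peak_hu) for area, peak_hu in lesions)

# Hypothetical example: three calcified foci detected on one scan.
print(agatston_score([(12.0, 460), (5.5, 210), (3.0, 150)]))  # 48 + 11 + 3 = 62
```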
2010-01-01
Background Although diabetic patients have an increased rate of cardio-vascular events, there is considerable heterogeneity with respect to cardiovascular risk, requiring new approaches to individual cardiovascular risk factor assessment. In this study we used whole body-MR-angiography (WB-MRA) to assess the degree of atherosclerosis in patients with long-standing diabetes and to determine the association between metabolic syndrome (MetS) and atherosclerotic burden. Methods Long standing (≥10 years) type 1 and type 2 diabetic patients (n = 59; 31 males; 63.3 ± 1.7 years) were examined by WB-MRA. Based on the findings in each vessel, we developed an overall score representing the patient's vascular atherosclerotic burden (MRI-score). The score's association with components of the MetS was assessed. Results The median MRI-score was 1.18 [range: 1.00-2.41] and MetS was present in 58% of the cohort (type 2 diabetics: 73%; type 1 diabetics: 26%). Age (p = 0.0002), HDL-cholesterol (p = 0.016), hypertension (p = 0.0008), nephropathy (p = 0.0093), CHD (p = 0.001) and MetS (p = 0.0011) were significantly associated with the score. Adjusted for age and sex, the score was significantly (p = 0.02) higher in diabetics with MetS (1.450 [1.328-1.572]) compared to those without MetS (1.108 [0.966-1.50]). The number of MetS components was associated with a linear increase in the MRI-score (increase in score: 0.09/MetS component; r2 = 0.24, p = 0.038). Finally, using an established risk algorithm, we found a significant association between MRI-score and 10-year risk for CHD, fatal CHD and stroke. Conclusion In this high-risk diabetic population, WB-MRA revealed large heterogeneity in the degree of systemic atherosclerosis. Presence and number of traits of the MetS are associated with the extent of atherosclerotic burden. These results support the perspective that diabetic patients are a heterogeneous population with increased but varying prevalence of atherosclerosis and risk. PMID:20804545
Findeisen, Hannes M; Weckbach, Sabine; Stark, Renée G; Reiser, Maximilian F; Schoenberg, Stefan O; Parhofer, Klaus G
2010-08-30
Although diabetic patients have an increased rate of cardio-vascular events, there is considerable heterogeneity with respect to cardiovascular risk, requiring new approaches to individual cardiovascular risk factor assessment. In this study we used whole body-MR-angiography (WB-MRA) to assess the degree of atherosclerosis in patients with long-standing diabetes and to determine the association between metabolic syndrome (MetS) and atherosclerotic burden. Long standing (> or = 10 years) type 1 and type 2 diabetic patients (n = 59; 31 males; 63.3 +/- 1.7 years) were examined by WB-MRA. Based on the findings in each vessel, we developed an overall score representing the patient's vascular atherosclerotic burden (MRI-score). The score's association with components of the MetS was assessed. The median MRI-score was 1.18 [range: 1.00-2.41] and MetS was present in 58% of the cohort (type 2 diabetics: 73%; type 1 diabetics: 26%). Age (p = 0.0002), HDL-cholesterol (p = 0.016), hypertension (p = 0.0008), nephropathy (p = 0.0093), CHD (p = 0.001) and MetS (p = 0.0011) were significantly associated with the score. Adjusted for age and sex, the score was significantly (p = 0.02) higher in diabetics with MetS (1.450 [1.328-1.572]) compared to those without MetS (1.108 [0.966-1.50]). The number of MetS components was associated with a linear increase in the MRI-score (increase in score: 0.09/MetS component; r2 = 0.24, p = 0.038). Finally, using an established risk algorithm, we found a significant association between MRI-score and 10-year risk for CHD, fatal CHD and stroke. In this high-risk diabetic population, WB-MRA revealed large heterogeneity in the degree of systemic atherosclerosis. Presence and number of traits of the MetS are associated with the extent of atherosclerotic burden. These results support the perspective that diabetic patients are a heterogeneous population with increased but varying prevalence of atherosclerosis and risk.
Prenatal High Risk Scoring: How Family Doctors Do It
Shea, Philip
1978-01-01
Assessment of risk factors is an integral part of family medicine and of prenatal care. A strong positive relationship has been demonstrated between a high risk score and higher incidence of maternal or perinatal morbidity and mortality. The family physician, because of his previous knowledge of the patient, and his familiarity with a broad range of normals, is in a good position to use his clinical judgement in high risk scoring in pregnancy. We must also be cautious that high risk scoring does not become a self fulfilling prophecy. Risk scoring is simply risk scoring, not a plan of management and intervention. PMID:21301562
Morii, Takeshi; Kishino, Tomonori; Shimamori, Naoko; Motohashi, Mitsue; Ohnishi, Hiroaki; Honya, Keita; Aoyagi, Takayuki; Tajima, Takashi; Ichimura, Shoichi
2018-01-01
Preoperative discrimination between benign and malignant soft tissue tumors is critical for the prevention of excess application of magnetic resonance imaging and biopsy as well as unplanned resection. Although ultrasound, including power Doppler imaging, is an easy, noninvasive, and cost-effective modality for screening soft tissue tumors, few studies have investigated reliable discrimination between benign and malignant soft tissue tumors. To establish a modality for discrimination between benign and malignant soft tissue tumors using ultrasound, we extracted the significant risk factors for malignancy based on ultrasound information from 40 malignant and 56 benign pathologically diagnosed soft tissue tumors and established a scoring system based on these risk factors. The maximum size, tumor margin, and vascularity evaluated using ultrasound were extracted as significant risk factors. Using the odds ratio from a multivariate regression model, a scoring system was established. Receiver operating characteristic analyses revealed a high area under the curve value (0.85), confirming the accuracy of the scoring system. Ultrasound is a useful modality for establishing the differential diagnosis between benign and malignant soft tissue tumors.
Marzocchini, Manrico; Tatàno, Fabio; Moretti, Michela Simona; Antinori, Caterina; Orilisi, Stefano
2018-06-05
A possible approach for determining soil and groundwater quality criteria for contaminated sites is the comparative risk assessment. Originating from but not limited to Italian interest in a decentralised (regional) implementation of comparative risk assessment, this paper first addresses the proposal of an original methodology called CORIAN REG-M, which was created with initial attention to the context of potentially contaminated sites in the Marche Region (Central Italy). To deepen the technical-scientific knowledge and applicability of the comparative risk assessment, the following characteristics of the CORIAN REG-M methodology appear to be relevant: the simplified but logical assumption of three categories of factors (source and transfer/transport of potential contamination, and impacted receptors) within each exposure pathway; the adaptation to quality and quantity of data that are available or derivable at the given scale of concern; the attention to a reliable but unsophisticated modelling; the achievement of a conceptual linkage to the absolute risk assessment approach; and the potential for easy updating and/or refining of the methodology. Further, the application of the CORIAN REG-M methodology to some case-study sites located in the Marche Region indicated the following: a positive correlation can be expected between air and direct contact pathway scores, as well as between individual pathway scores and the overall site scores based on a root-mean-square algorithm; the exposure pathway, which presents the highest variability of scores, tends to be dominant at sites with the highest computed overall site scores; and the adoption of a root-mean-square algorithm can be expected to emphasise the overall site scoring.
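A minimal sketch of the root-mean-square aggregation of exposure-pathway scores into an overall site score mentioned above; the pathway names and the 0-100 scale are assumptions for illustration, not values from the CORIAN REG-M methodology.

```python
import math

def overall_site_score(pathway_scores):
    """Root-mean-square aggregation of individual exposure-pathway scores into
    one site score, so a single high-scoring pathway dominates the ranking more
    than it would under a simple arithmetic mean."""
    values = list(pathway_scores.values())
    return math.sqrt(sum(v ** 2 for v in values) / len(values))

# Hypothetical pathway scores for one site on an assumed 0-100 scale.
site = {"groundwater": 62.0, "air": 18.0, "direct_contact": 25.0}
print(round(overall_site_score(site), 1))  # about 40.0 for these inputs
```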
Zweiker, David; Zweiker, Robert; Winkler, Elisabeth; Roesch, Konstantina; Schumacher, Martin; Stepan, Vinzenz; Krippl, Peter; Bauer, Norbert; Heine, Martin; Reicht, Gerhard; Zweiker, Gudrun; Sprenger, Martin; Watzinger, Norbert
2017-09-25
Oral anticoagulation (OAC) is state-of-the-art therapy for atrial fibrillation (AF), the most common arrhythmia worldwide. However, little is known about the perception of patients with AF and how it correlates with risk scores used by their physicians. Therefore, we correlated patients' estimates of their own stroke and bleeding risk with the objectively predicted individual risk using CHA2DS2-VASc and HAS-BLED scores. Cross-sectional prevalence study using convenience sampling and telephone follow-up. Eight hospital departments and one general practitioner in Austria. Patients' perception of stroke and bleeding risk was compared with commonly used risk scores. Patients with newly diagnosed AF and an indication for anticoagulation. Comparison of subjective risk perception with CHA2DS2-VASc and HAS-BLED scores showing possible discrepancies between subjective and objective risk estimation. Patients' judgement of their own knowledge on AF and education were also correlated with accuracy of subjective risk appraisal. Ninety-one patients (age 73±11 years, 45% female) were included in this study. Subjective stroke and bleeding risk estimation did not correlate with risk scores (ρ=0.08 and ρ=0.17). The majority of patients (57%) underestimated the individual stroke risk. Patients feared stroke more than bleeding (67% vs 10%). There was no relationship between accurate perception of stroke and bleeding risks and education level. However, we found a correlation between the patients' judgement of their own knowledge of AF and correct assessment of individual stroke risk (ρ=0.24, p=0.02). During follow-up, patients experienced the following events: death (n=5), stroke (n=2), bleeding (n=1). The OAC discontinuation rate despite indication was 3%. In this cross-sectional analysis of OAC-naive patients with AF, we found major differences between patients' perceptions and physicians' assessments of risks and benefits of OAC. To ensure shared decision-making and informed consent, more attention should be given to evidence-based and useful communication strategies. NCT03061123. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A Risk Score Model for Evaluation and Management of Patients with Thyroid Nodules.
Zhang, Yongwen; Meng, Fanrong; Hong, Lianqing; Chu, Lanfang
2018-06-12
This study aimed to establish a simplified and practical tool for analyzing thyroid nodules. A novel risk score model was designed: risk factors including patient history, patient characteristics, physical examination, symptoms of compression, thyroid function, and ultrasonography (US) of the thyroid and cervical lymph nodes were evaluated and classified into high, intermediate, and low risk factors. A total of 243 thyroid nodules in 162 patients were assessed with the risk score system and the Thyroid Imaging-Reporting and Data System (TI-RADS). The diagnostic performance of the risk score system and TI-RADS was compared. The accuracy in the diagnosis of thyroid nodules was 89.3% for the risk score system and 74.9% for TI-RADS, respectively. The specificity, accuracy and positive predictive value (PPV) of the risk score system were significantly higher than those of the TI-RADS system (χ² = 26.287, 17.151, 11.983; p < 0.05); statistically significant differences were not observed in the sensitivity and negative predictive value (NPV) between the risk score system and TI-RADS (χ² = 1.276, 0.290; p > 0.05). The area under the curve (AUC) for the risk score diagnosis system was 0.963 (standard error 0.014, 95% confidence interval (CI) = 0.934-0.991); the AUC for the TI-RADS diagnosis system was 0.912 (standard error 0.021, 95% CI = 0.871-0.953); the AUC for the risk score system was significantly different from that of TI-RADS (Z = 2.02; p < 0.05). The risk score model is a reliable, simplified and cost-effective tool for the diagnosis of thyroid cancer. The higher the score, the higher the risk of malignancy. © Georg Thieme Verlag KG Stuttgart · New York.
Security risk assessment: applying the concepts of fuzzy logic.
Bajpai, Shailendra; Sachdeva, Anish; Gupta, J P
2010-01-15
Chemical process industries (CPI) handling hazardous chemicals in bulk can be attractive targets for deliberate adversarial actions by terrorists, criminals and disgruntled employees. It is therefore imperative to have a comprehensive security risk management programme, including effective security risk assessment techniques. In an earlier work, it has been shown that security risk assessment can be done by conducting threat and vulnerability analysis or by developing a Security Risk Factor Table (SRFT). HAZOP-type vulnerability assessment sheets can be developed that are scenario based. In the SRFT model, important security risk bearing factors such as location, ownership, visibility, inventory, etc., have been used. In this paper, the earlier developed SRFT model has been modified using the concepts of fuzzy logic. In the modified SRFT model, two linguistic fuzzy scales (three-point and four-point) are devised based on trapezoidal fuzzy numbers. Human subjectivity of different experts associated with the previous SRFT model is tackled by mapping their scores to the newly devised fuzzy scale. Finally, the fuzzy score thus obtained is defuzzified to get the results. A test case of a refinery is used to explain the method and compared with the earlier work.
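As a sketch of the fuzzy-logic modification described above, the code below maps linguistic expert ratings to trapezoidal fuzzy numbers, aggregates them by pointwise maximum, and defuzzifies by the centroid method. The 0-10 scale and the membership parameters are illustrative assumptions, not the paper's actual three- and four-point scales.

```python
import numpy as np

def trapezoidal_membership(x, a, b, c, d):
    """Membership of x in a trapezoidal fuzzy number with support [a, d]
    and core [b, c] (a <= b <= c <= d)."""
    x = np.asarray(x, float)
    rise = np.clip((x - a) / (b - a), 0, 1) if b > a else (x >= a).astype(float)
    fall = np.clip((d - x) / (d - c), 0, 1) if d > c else (x <= d).astype(float)
    return np.minimum(rise, fall)

def centroid_defuzzify(fuzzy_numbers, grid=None):
    """Aggregate trapezoidal fuzzy scores by pointwise maximum and return the
    centroid (crisp score) of the aggregated membership function."""
    grid = np.linspace(0, 10, 1001) if grid is None else grid
    mu = np.max([trapezoidal_membership(grid, *fn) for fn in fuzzy_numbers], axis=0)
    return float(np.sum(grid * mu) / np.sum(mu))

# Hypothetical linguistic terms on a 0-10 risk scale given by two experts.
moderate = (2, 4, 5, 7)
high = (5, 7, 8, 10)
print(round(centroid_defuzzify([moderate, high]), 2))
```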
O'Shaughnessy, Fergal; Donnelly, Jennifer C; Cooley, Sharon M; Deering, Mary; Raman, Ajita; Gannon, Geraldine; Hickey, Jane; Holland, Alan; Hayes, Niamh; Bennett, Kathleen; Ní Áinle, Fionnuala; Cleary, Brian J
2017-11-01
Venous thromboembolism risk assessment (VTERA) is recommended in all pregnant and postpartum women. Our objective was to develop, pilot and implement a user-friendly electronic VTERA tool. We developed "Thrombocalc", an electronic VTERA tool using Microsoft Excel™. Thrombocalc was designed as a score-based tool to facilitate rapid assessment of all women after childbirth. Calculation of a total score estimated risk of venous thromboembolism in line with consensus guidelines. Recommendations for thromboprophylaxis were included in the VTERA output. Implementation was phased. Uptake of the VTERA tool was assessed prospectively by monitoring the proportion of women who gave birth in our institution and had a completed risk assessment. Factors affecting completion and accuracy of risk assessments were also assessed. Thrombocalc was used prospectively to risk-assess 8380 women between September 2014 and December 2015. Compliance with this tool increased dramatically throughout the study period; over 92% of women were risk-assessed in the last quarter of data collection. Compliance was not adversely affected if delivery took place out of working hours [adjusted odds ratio (aOR) 1.03, 95% confidence interval (CI) 0.93-1.14]. Risk assessment was less likely in the case of cesarean deliveries (aOR 0.66, 95% CI 0.60-0.73) and stillborn infants (aOR 0.48, 95% CI 0.29-0.79). Misclassification of risk factors led to approximately 207 (2.5%) inaccurate thromboprophylaxis recommendations. Our electronic, score-based VTERA tool provides a highly effective mechanism for rapid assessment of individual postpartum venous thromboembolism risk in a high-throughput environment. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.
A Risk Prediction Index for Advanced Colorectal Neoplasia at Screening Colonoscopy.
Schroy, Paul C; Wong, John B; O'Brien, Michael J; Chen, Clara A; Griffith, John L
2015-07-01
Eliciting patient preferences within the context of shared decision making has been advocated for colorectal cancer screening. Risk stratification for advanced colorectal neoplasia (ACN) might facilitate more effective shared decision making when selecting an appropriate screening option. Our objective was to develop and validate a clinical index for estimating the probability of ACN at screening colonoscopy. We conducted a cross-sectional analysis of 3,543 asymptomatic, mostly average-risk patients 50-79 years of age undergoing screening colonoscopy at two urban safety net hospitals. Predictors of ACN were identified using multiple logistic regression. Model performance was internally validated using bootstrapping methods. The final index consisted of five independent predictors of risk (age, smoking, alcohol intake, height, and a combined sex/race/ethnicity variable). Smoking was the strongest predictor (net reclassification improvement (NRI), 8.4%) and height the weakest (NRI, 1.5%). Using a simplified weighted scoring system based on 0.5 increments of the adjusted odds ratio, the risk of ACN ranged from 3.2% (95% confidence interval (CI), 2.6-3.9) for the low-risk group (score ≤2) to 8.6% (95% CI, 7.4-9.7) for the intermediate/high-risk group (score 3-11). The model had moderate to good overall discrimination (C-statistic, 0.69; 95% CI, 0.66-0.72) and good calibration (P=0.73-0.93). A simple 5-item risk index based on readily available clinical data accurately stratifies average-risk patients into low- and intermediate/high-risk categories for ACN at screening colonoscopy. Uptake into clinical practice could facilitate more effective shared decision-making for CRC screening, particularly in situations where patient and provider test preferences differ.
den Ruijter, H M; Peters, S A E; Groenewegen, K A; Anderson, T J; Britton, A R; Dekker, J M; Engström, G; Eijkemans, M J; Evans, G W; de Graaf, J; Grobbee, D E; Hedblad, B; Hofman, A; Holewijn, S; Ikeda, A; Kavousi, M; Kitagawa, K; Kitamura, A; Koffijberg, H; Ikram, M A; Lonn, E M; Lorenz, M W; Mathiesen, E B; Nijpels, G; Okazaki, S; O'Leary, D H; Polak, J F; Price, J F; Robertson, C; Rembold, C M; Rosvall, M; Rundek, T; Salonen, J T; Sitzer, M; Stehouwer, C D A; Witteman, J C; Moons, K G; Bots, M L
2013-07-01
The aim of this work was to investigate whether measurement of the mean common carotid intima-media thickness (CIMT) improves cardiovascular risk prediction in individuals with diabetes. We performed a subanalysis among 4,220 individuals with diabetes in a large ongoing individual participant data meta-analysis involving 56,194 subjects from 17 population-based cohorts worldwide. We first refitted the risk factors of the Framingham heart risk score on the individuals without previous cardiovascular disease (baseline model) and then expanded this model with the mean common CIMT (CIMT model). The absolute 10 year risk for developing a myocardial infarction or stroke was estimated from both models. In individuals with diabetes we compared discrimination and calibration of the two models. Reclassification of individuals with diabetes was based on allocation to another cardiovascular risk category when mean common CIMT was added. During a median follow-up of 8.7 years, 684 first-time cardiovascular events occurred among the population with diabetes. The C statistic was 0.67 for the Framingham model and 0.68 for the CIMT model. The absolute 10 year risk for developing a myocardial infarction or stroke was 16% in both models. There was no net reclassification improvement with the addition of mean common CIMT (1.7%; 95% CI -1.8, 3.8). There were no differences in the results between men and women. There is no improvement in risk prediction in individuals with diabetes when measurement of the mean common CIMT is added to the Framingham risk score. Therefore, this measurement is not recommended for improving individual cardiovascular risk stratification in individuals with diabetes.
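A minimal sketch of the categorical net reclassification improvement used above to compare the Framingham and CIMT models; the 10%/20% risk thresholds and the simulated risks are illustrative assumptions, not the study's data.

```python
import numpy as np

def net_reclassification_improvement(risk_old, risk_new, event, cuts=(0.10, 0.20)):
    """Categorical NRI: upward category moves count as correct for subjects with
    events, downward moves as correct for subjects without events."""
    cat_old = np.digitize(risk_old, cuts)
    cat_new = np.digitize(risk_new, cuts)
    event = np.asarray(event, bool)
    up = cat_new > cat_old
    down = cat_new < cat_old
    nri_events = up[event].mean() - down[event].mean()
    nri_nonevents = down[~event].mean() - up[~event].mean()
    return nri_events + nri_nonevents

# Hypothetical 10-year risks from a baseline model and an expanded model.
rng = np.random.default_rng(1)
risk_old = rng.uniform(0.02, 0.40, 1000)
risk_new = np.clip(risk_old + rng.normal(0, 0.02, 1000), 0, 1)
events = rng.random(1000) < risk_old
print(round(net_reclassification_improvement(risk_old, risk_new, events), 4))
```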
Predicting stroke through genetic risk functions: The CHARGE risk score project
Ibrahim-Verbaas, Carla A; Fornage, Myriam; Bis, Joshua C; Choi, Seung Hoan; Psaty, Bruce M; Meigs, James B; Rao, Madhu; Nalls, Mike; Fontes, Joao D; O’Donnell, Christopher J.; Kathiresan, Sekar; Ehret, Georg B.; Fox, Caroline S; Malik, Rainer; Dichgans, Martin; Schmidt, Helena; Lahti, Jari; Heckbert, Susan R; Lumley, Thomas; Rice, Kenneth; Rotter, Jerome I; Taylor, Kent D; Folsom, Aaron R; Boerwinkle, Eric; Rosamond, Wayne D; Shahar, Eyal; Gottesman, Rebecca F.; Koudstaal, Peter J; Amin, Najaf; Wieberdink, Renske G.; Dehghan, Abbas; Hofman, Albert; Uitterlinden, André G; DeStefano, Anita L.; Debette, Stephanie; Xue, Luting; Beiser, Alexa; Wolf, Philip A.; DeCarli, Charles; Ikram, M. Arfan; Seshadri, Sudha; Mosley, Thomas H; Longstreth, WT; van Duijn, Cornelia M; Launer, Lenore J
2014-01-01
Background and Purpose Beyond the Framingham Stroke Risk Score (FSRS), prediction of future stroke may improve with a genetic risk score (GRS) based on single nucleotide polymorphisms (SNPs) associated with stroke and its risk factors. Methods The study includes four population-based cohorts with 2,047 first incident strokes from 22,720 initially stroke-free European origin participants aged 55 years and older, who were followed for up to 20 years. GRS were constructed with 324 SNPs implicated in stroke and 9 risk factors. The association of the GRS to first incident stroke was tested using Cox regression; the GRS predictive properties were assessed with area under the curve (AUC) statistics comparing the GRS to age, sex, and FSRS models, and with reclassification statistics. These analyses were performed per cohort and in a meta-analysis of pooled data. Replication was sought in a case-control study of ischemic stroke (IS). Results In the meta-analysis, adding the GRS to the FSRS, age and sex model resulted in a significant improvement in discrimination (all stroke: Δ joint AUC = 0.016, p-value = 2.3×10−6; IS: Δ joint AUC = 0.021, p-value = 3.7×10−7), although the overall AUC remained low. In all studies there was a highly significantly improved net reclassification index (p-values <10−4). Conclusions The SNPs associated with stroke and its risk factors result in only a small improvement in prediction of future stroke compared to the classical epidemiological risk factors for stroke. PMID:24436238
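The genetic risk score itself is just a weighted sum of risk-allele counts; below is a hedged Python sketch. The SNP count, genotypes and per-allele weights are hypothetical and stand in for the 324 SNPs and GWAS-derived effect sizes used in the study.

```python
import numpy as np

def genetic_risk_score(genotypes, weights):
    """Weighted GRS: for each subject, sum over SNPs of the risk-allele count
    (0, 1 or 2) multiplied by that SNP's effect size (e.g. log odds ratio)."""
    return np.asarray(genotypes, float) @ np.asarray(weights, float)

# Hypothetical data: 5 subjects, 4 stroke-associated SNPs.
geno = np.array([[0, 1, 2, 1],
                 [1, 1, 0, 2],
                 [2, 0, 1, 1],
                 [0, 0, 0, 0],
                 [2, 2, 1, 2]])
log_or = np.array([0.08, 0.12, 0.05, 0.10])  # hypothetical per-allele effects
print(genetic_risk_score(geno, log_or))
```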
Kim, Bia Z; Patel, Dipika V; Sherwin, Trevor; McGhee, Charles N J
2016-11-01
To evaluate 2 preoperative risk stratification systems for assessing the risk of complications in phacoemulsification cataract surgery, performed by residents, fellows, and attending physicians in a public teaching hospital. Cohort study. One observer assessed the clinical data of 500 consecutive cases, prior to phacoemulsification cataract surgery performed between April and June 2015 at Greenlane Clinical Centre, Auckland, New Zealand. Preoperatively 2 risk scores were calculated for each case using the Muhtaseb and Buckinghamshire risk stratification systems. Complications, intraoperative and postoperative, and visual outcomes were analyzed in relation to these risk scores. Intraoperative complication rates increased with higher risk scores using the Muhtaseb or Buckinghamshire stratification system (P = .001 and P = .003, respectively, n = 500). The odds ratios for residents and fellows were not significantly different from attending physicians after case-mix adjustment according to risk scores (P > .05). Postoperative complication rates increased with higher Buckinghamshire risk scores but not with Muhtaseb scores (P = .014 and P = .094, respectively, n = 476). Postoperative corrected-distance visual acuity was poorer with higher risk scores (P < .001 for both, n = 476). This study confirms that the risk of intraoperative complications increases with higher preoperative risk scores. Furthermore, higher risk scores correlate with poorer postoperative visual acuity and the Buckinghamshire risk score also correlates with postoperative complications. Therefore, preoperative assessment using such risk stratification systems could assist individual informed consent, preoperative surgical planning, safe allocation of cases to trainees, and more meaningful analyses of outcomes for individual surgeons and institutions. Copyright © 2016 Elsevier Inc. All rights reserved.
Siren, Reijo; Eriksson, Johan G; Vanhanen, Hannu
2016-12-01
To examine the long-term impact of health counselling among middle-aged men at high risk of CVD. An observational study with a 5-year follow-up. All men aged 40 years in Helsinki have been invited to a visit to evaluate CVD risk from 2006 onwards. A modified version of the North Karelia project risk tool (CVD risk score) served to assess the risk. High-risk men received lifestyle counselling based on their individual risk profile in 2006 and were invited to a follow-up visit in 2011. Of the 389 originally high-risk men, 159 participated in the follow-up visits in 2011. Based on their follow-up in relation to further risk communication, we divided the participants into three groups: primary health care, occupational health care and no control visits. Outcome measures were lifestyle and CVD risk score change. All groups showed improvements in lifestyles. The CVD risk score decreased the most in the group that continued the risk communication visits in their primary health care centre (6.1 to 4.8 [95% CI -1.6 to -0.6]) compared to those who continued risk communication visits in their occupational health care (6.0 to 5.4 [95% CI -1.3 to 0.3]), and to those with no risk communication visits (6.0 to 5.9 [95% CI -0.5 to 0.4]). These findings indicate that individualized lifestyle counselling improves health behaviour and reduces total CVD risk among middle-aged men at high risk of CVD. Sustained improvement in risk factor status requires ongoing risk communication with health care providers. KEY POINTS Studies of short duration have shown that lifestyle changes reduce the risk of cardiovascular disease among high-risk individuals. Sustaining these lifestyle changes and maintaining the lower disease risk attained can prove challenging. Cardiovascular disease (CVD) risk assessment and individualized health counselling for high-risk men, when implemented in primary health care, have the potential to initiate lifestyle changes that support risk reduction. Attaining a sustainable reduction in CVD risk requires a willingness to engage in risk-related communication from both health care providers and the individual at high risk.
Jonnagaddala, Jitendra; Liaw, Siaw-Teng; Ray, Pradeep; Kumar, Manish; Chang, Nai-Wen; Dai, Hong-Jie
2015-12-01
Coronary artery disease (CAD) often leads to myocardial infarction, which may be fatal. Risk factors can be used to predict CAD, which may subsequently lead to prevention or early intervention. Patient data such as co-morbidities, medication history, social history and family history are required to determine the risk factors for a disease. However, risk factor data are usually embedded in unstructured clinical narratives if the data is not collected specifically for risk assessment purposes. Clinical text mining can be used to extract data related to risk factors from unstructured clinical notes. This study presents methods to extract Framingham risk factors from unstructured electronic health records using clinical text mining and to calculate 10-year coronary artery disease risk scores in a cohort of diabetic patients. We developed a rule-based system to extract risk factors: age, gender, total cholesterol, HDL-C, blood pressure, diabetes history and smoking history. The results showed that the output from the text mining system was reliable, but there was a significant amount of missing data to calculate the Framingham risk score. A systematic approach for understanding missing data was followed by implementation of imputation strategies. An analysis of the 10-year Framingham risk scores for coronary artery disease in this cohort has shown that the majority of the diabetic patients are at moderate risk of CAD. Copyright © 2015 Elsevier Inc. All rights reserved.
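As a toy illustration of the rule-based extraction step described above, the snippet below pulls a few Framingham inputs (blood pressure, total cholesterol, smoking status) from free text with regular expressions. The patterns are deliberately simplified and are not the study's actual rules; a real system needs negation handling, section logic and unit normalization.

```python
import re

def extract_risk_factors(note):
    """Very simplified rule-based extraction of a few Framingham inputs."""
    factors = {}
    bp = re.search(r"\b(\d{2,3})\s*/\s*(\d{2,3})\s*mm ?hg\b", note, re.I)
    if bp:
        factors["systolic_bp"] = int(bp.group(1))
        factors["diastolic_bp"] = int(bp.group(2))
    chol = re.search(r"total cholesterol\D{0,15}(\d{2,3})", note, re.I)
    if chol:
        factors["total_cholesterol"] = int(chol.group(1))
    if re.search(r"\bnon[- ]?smoker\b|\bdenies smoking\b", note, re.I):
        factors["smoker"] = False
    elif re.search(r"\bsmok(er|ing)\b", note, re.I):
        factors["smoker"] = True
    return factors

note = "BP 142/88 mmHg today. Total cholesterol 212 mg/dL. Patient is a current smoker."
print(extract_risk_factors(note))
```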
Large unbalanced credit scoring using Lasso-logistic regression ensemble.
Wang, Hong; Xu, Qingsong; Zhou, Lifeng
2015-01-01
Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data.
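A hedged scikit-learn sketch of the general recipe described above: balance the unbalanced classes with cluster-guided undersampling of the majority class, then bag L1-penalized (Lasso) logistic regressions and average their predicted probabilities. The cluster count, penalty strength and simulated data are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fit_lasso_logistic_ensemble(X, y, n_members=10, random_state=0):
    """Bagged L1-penalized logistic regressions on balanced resamples: each
    member trains on all minority-class cases plus an equal-sized,
    cluster-guided sample of majority-class cases."""
    rng = np.random.default_rng(random_state)
    X, y = np.asarray(X, float), np.asarray(y, int)
    minority = np.where(y == 1)[0]
    majority = np.where(y == 0)[0]
    clusters = KMeans(n_clusters=n_members, n_init=10,
                      random_state=random_state).fit_predict(X[majority])
    members = []
    for m in range(n_members):
        pool = majority[clusters == m]
        if len(pool) < len(minority):
            pool = majority                      # fall back to all majority cases
        take = rng.choice(pool, size=len(minority), replace=True)
        idx = np.concatenate([minority, take])
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        members.append(clf.fit(X[idx], y[idx]))
    return members

def predict_proba_ensemble(members, X):
    return np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)

# Hypothetical unbalanced credit data: a few percent defaults among 5000 applicants.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 8))
y = (rng.random(5000) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 4)))).astype(int)
members = fit_lasso_logistic_ensemble(X, y)
print(predict_proba_ensemble(members, X[:5]).round(3))
```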
Bronner, Shaw; Bauer, Naomi G
2018-05-01
To examine risk factors for injury in pre-professional modern dancers. With prospectively designed screening and injury surveillance, we evaluated four risk factors as categorical predictors of injury: i) hypermobility; ii) dance technique motor-control; iii) muscle tightness; iv) previous injury. Screening and injury data of 180 students enrolled in a university modern dance program were reviewed over 4 years of training. Dancers were divided into three groups based on predictor scores. Dance exposure was based on hours of technique classes per week. Negative binomial log-linear analyses were conducted with the four predictors (significance level p < 0.05). Dancers with low and high Beighton scores were 1.43 and 1.22 times more likely to sustain injury than dancers with mid-range scores (p ≤ 0.03). Dancers with better technique (low or medium scores) were 0.86 and 0.63 times as likely to sustain injury (p = 0.013 and p < 0.001) compared to those with poor technique. Dancers with one or 2-4 tight muscles were 2.7 and 4.0 times more likely to sustain injury (p ≤ 0.046). Dancers who sustained 2-4 injuries in the previous year were 1.38 times more likely to sustain subsequent injury (p < 0.001). This study contributes new information on the value of preseason screening. Dancers with these risk factors may benefit from prevention programs. Copyright © 2018 Elsevier Ltd. All rights reserved.
Use of Chronic Kidney Disease to Enhance Prediction of Cardiovascular Risk in Those at Medium Risk.
Chia, Yook Chin; Lim, Hooi Min; Ching, Siew Mooi
2015-01-01
Based on global cardiovascular (CV) risk assessment, for example using the Framingham risk score, it is recommended that those with high risk should be treated and those with low risk should not be treated. The recommendation for those at medium risk is less clear and uncertain. We aimed to determine whether factoring in chronic kidney disease (CKD) will improve CV risk prediction in those with medium risk. This is a 10-year retrospective cohort study of 905 subjects in a primary care clinic setting. Baseline CV risk profile and serum creatinine in 1998 were captured from patient records. The Framingham general cardiovascular disease risk score (FRS) for each patient was computed. All cardiovascular disease (CVD) events from 1998-2007 were captured. Overall, patients with CKD had a higher FRS risk score (25.9% vs 20%, p = 0.001) and more CVD events (22.3% vs 11.9%, p = 0.002) over a 10-year period compared to patients without CKD. In patients with medium CV risk, there was no significant difference in the FRS score among those with and without CKD (14.4% vs 14.6%, p = 0.84). However, in this same medium risk group, patients with CKD had more CV events compared to those without CKD (26.7% vs 6.6%, p = 0.005). This is in contrast to patients in the low and high risk groups, where there was no difference in CVD events whether these patients had or did not have CKD. There were more CV events in the Framingham medium risk group when they also had CKD compared with those in the same risk group without CKD. Hence factoring in CKD for those with medium risk helps to further stratify and identify those who are actually at greater risk, when treatment may be more likely to be indicated.
Use of Chronic Kidney Disease to Enhance Prediction of Cardiovascular Risk in Those at Medium Risk
Chia, Yook Chin; Lim, Hooi Min; Ching, Siew Mooi
2015-01-01
Based on global cardiovascular (CV) risk assessment, for example using the Framingham risk score, it is recommended that those with high risk should be treated and those with low risk should not be treated. The recommendation for those at medium risk is less clear and uncertain. We aimed to determine whether factoring in chronic kidney disease (CKD) will improve CV risk prediction in those with medium risk. This is a 10-year retrospective cohort study of 905 subjects in a primary care clinic setting. Baseline CV risk profile and serum creatinine in 1998 were captured from patient records. The Framingham general cardiovascular disease risk score (FRS) for each patient was computed. All cardiovascular disease (CVD) events from 1998–2007 were captured. Overall, patients with CKD had a higher FRS risk score (25.9% vs 20%, p = 0.001) and more CVD events (22.3% vs 11.9%, p = 0.002) over a 10-year period compared to patients without CKD. In patients with medium CV risk, there was no significant difference in the FRS score among those with and without CKD (14.4% vs 14.6%, p = 0.84). However, in this same medium risk group, patients with CKD had more CV events compared to those without CKD (26.7% vs 6.6%, p = 0.005). This is in contrast to patients in the low and high risk groups, where there was no difference in CVD events whether these patients had or did not have CKD. There were more CV events in the Framingham medium risk group when they also had CKD compared with those in the same risk group without CKD. Hence factoring in CKD for those with medium risk helps to further stratify and identify those who are actually at greater risk, when treatment may be more likely to be indicated. PMID:26496190
Glance, Laurent G; Lustik, Stewart J; Hannan, Edward L; Osler, Turner M; Mukamel, Dana B; Qian, Feng; Dick, Andrew W
2012-04-01
To develop a 30-day mortality risk index for noncardiac surgery that can be used to communicate risk information to patients and guide clinical management at the "point-of-care," and that can be used by surgeons and hospitals to internally audit their quality of care. Clinicians rely on the Revised Cardiac Risk Index to quantify the risk of cardiac complications in patients undergoing noncardiac surgery. Because mortality from noncardiac causes accounts for many perioperative deaths, there is also a need for a simple bedside risk index to predict 30-day all-cause mortality after noncardiac surgery. Retrospective cohort study of 298,772 patients undergoing noncardiac surgery during 2005 to 2007 using the American College of Surgeons National Surgical Quality Improvement Program database. The 9-point S-MPM (Surgical Mortality Probability Model) 30-day mortality risk index was derived empirically and includes three risk factors: ASA (American Society of Anesthesiologists) physical status, emergency status, and surgery risk class. Patients with ASA physical status I, II, III, IV or V were assigned either 0, 2, 4, 5, or 6 points, respectively; intermediate- or high-risk procedures were assigned 1 or 2 points, respectively; and emergency procedures were assigned 1 point. Patients with risk scores less than 5 had a predicted risk of mortality less than 0.50%, whereas patients with a risk score of 5 to 6 had a risk of mortality between 1.5% and 4.0%. Patients with a risk score greater than 6 had risk of mortality more than 10%. S-MPM exhibited excellent discrimination (C statistic, 0.897) and acceptable calibration (Hosmer-Lemeshow statistic 13.0, P = 0.023) in the validation data set. Thirty-day mortality after noncardiac surgery can be accurately predicted using a simple and accurate risk score based on information readily available at the bedside. This risk index may play a useful role in facilitating shared decision making, developing and implementing risk-reduction strategies, and guiding quality improvement efforts.
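Because the abstract spells out the point assignments, the S-MPM can be sketched directly; the only assumption below is that low-risk procedures score 0 points (the abstract lists points only for intermediate- and high-risk procedures), and the risk bands echo the figures reported above rather than the published calibration tables.

```python
ASA_POINTS = {1: 0, 2: 2, 3: 4, 4: 5, 5: 6}            # ASA physical status I-V
PROCEDURE_POINTS = {"low": 0, "intermediate": 1, "high": 2}

def smpm_score(asa_class, procedure_risk, emergency):
    """9-point S-MPM index: ASA status + procedure risk class + emergency status."""
    return ASA_POINTS[asa_class] + PROCEDURE_POINTS[procedure_risk] + (1 if emergency else 0)

def smpm_mortality_band(score):
    """Predicted 30-day mortality band as reported in the abstract."""
    if score < 5:
        return "<0.50%"
    if score <= 6:
        return "1.5% to 4.0%"
    return ">10%"

score = smpm_score(asa_class=4, procedure_risk="high", emergency=True)  # 5 + 2 + 1 = 8
print(score, smpm_mortality_band(score))
```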
Hansen, Camilla Plambeck; Overvad, Kim; Tetens, Inge; Tjønneland, Anne; Parner, Erik Thorlund; Jakobsen, Marianne Uhre; Dahm, Christina Catherine
2018-05-01
A direct way to evaluate food-based dietary guidelines is to assess if adherence is associated with development of non-communicable diseases. Thus, the objective was to develop an index to assess adherence to the 2013 Danish food-based dietary guidelines and to investigate the association between adherence to the index and risk of myocardial infarction (MI). Population-based cohort study with recruitment of participants in 1993-1997. Information on dietary intake was collected at baseline using an FFQ and an index ranging from 0 to 6 points was created to assess adherence to the 2013 Danish food-based dietary guidelines. MI cases were identified by record linkage to the Danish National Patient Register and the Causes of Death Register. Cox proportional hazards models were used to estimate hazard ratios (HR) of MI. Greater areas of Aarhus and Copenhagen, Denmark. Men and women aged 50-64 years (n 55 021) from the Diet, Cancer and Health study. A total of 3046 participants were diagnosed with first-time MI during a median follow-up of 16·9 years. A higher Danish Dietary Guidelines Index score was associated with a lower risk of MI. After adjustment for potential confounders, the hazard of MI was 13 % lower among men with a score of 3-<4 (HR=0·87; 95 % CI 0·78, 0·96) compared with men with a score of <3. The corresponding HR among women was 0·76 (95 % CI 0·63, 0·93). Adherence to the 2013 Danish food-based dietary guidelines was inversely associated with risk of MI.
Identification of two heritable cross-disorder endophenotypes for Tourette Syndrome
Darrow, Sabrina M.; Hirschtritt, Matthew E.; Davis, Lea K.; Illmann, Cornelia; Osiecki, Lisa; Grados, Marco; Sandor, Paul; Dion, Yves; King, Robert; Pauls, David; Budman, Cathy L.; Cath, Danielle C.; Greenberg, Erica; Lyon, Gholson J.; Yu, Dongmei; McGrath, Lauren M.; McMahon, William M.; Lee, Paul C.; Delucchi, Kevin L.; Scharf, Jeremiah M.; Mathews, Carol A.
2016-01-01
Objective Phenotypic heterogeneity in Tourette syndrome (TS) is partly due to complex genetic relationships between TS, obsessive-compulsive disorder (OCD) and attention deficit/hyperactivity disorder (ADHD). Identifying symptom-based endophenotypes across diagnoses may aid gene-finding efforts. Method 3494 individuals recruited for genetic studies were assessed for TS, OCD, and ADHD symptoms. Symptom-level factor and latent class analyses were conducted in TS families and replicated in an independent sample. Classes were characterized by comorbidity rates and proportion of parents. Heritability and TS-, OCD-, and ADHD-associated polygenic load were estimated. Results We identified two cross-disorder symptom-based phenotypes across analyses: symmetry (symmetry, evening up, checking obsessions; ordering, arranging, counting, writing-rewriting compulsions, repetitive writing tics) and disinhibition (uttering syllables/words, echolalia/palilalia, coprolalia/copropraxia and obsessive urges to offend/mutilate/be destructive). Heritability estimates for both endophenotypes were high (disinhibition factor = 0.35, SE = 0.03, p = 4.2 × 10^-34; symmetry factor = 0.39, SE = 0.03, p = 7.2 × 10^-31; symmetry class = 0.38, SE = 0.10, p = 0.001). Mothers of TS probands had high rates of symmetry (49%) but not disinhibition (5%). Polygenic risk scores derived from a TS genome-wide association study (GWAS) were associated with symmetry (p = 0.02), while risk scores derived from an OCD GWAS were not. OCD polygenic risk scores were associated with disinhibition (p = 0.03), while TS and ADHD risk scores were not. Conclusions We identified two heritable TS-related endophenotypes that cross traditional diagnostic boundaries. The symmetry phenotype correlated with TS polygenic load, and was present in otherwise “TS-unaffected” mothers, suggesting that this phenotype may reflect additional TS (rather than OCD) genetic liability that is not captured by traditional DSM-based diagnoses. PMID:27809572
Identification of Two Heritable Cross-Disorder Endophenotypes for Tourette Syndrome.
Darrow, Sabrina M; Hirschtritt, Matthew E; Davis, Lea K; Illmann, Cornelia; Osiecki, Lisa; Grados, Marco; Sandor, Paul; Dion, Yves; King, Robert; Pauls, David; Budman, Cathy L; Cath, Danielle C; Greenberg, Erica; Lyon, Gholson J; Yu, Dongmei; McGrath, Lauren M; McMahon, William M; Lee, Paul C; Delucchi, Kevin L; Scharf, Jeremiah M; Mathews, Carol A
2017-04-01
Phenotypic heterogeneity in Tourette syndrome is partly due to complex genetic relationships among Tourette syndrome, obsessive-compulsive disorder (OCD), and attention deficit hyperactivity disorder (ADHD). Identifying symptom-based endophenotypes across diagnoses may aid gene-finding efforts. Assessments for Tourette syndrome, OCD, and ADHD symptoms were conducted in a discovery sample of 3,494 individuals recruited for genetic studies. Symptom-level factor and latent class analyses were conducted in Tourette syndrome families and replicated in an independent sample of 882 individuals. Classes were characterized by comorbidity rates and proportion of parents included. Heritability and polygenic load associated with Tourette syndrome, OCD, and ADHD were estimated. The authors identified two cross-disorder symptom-based phenotypes across analyses: symmetry (symmetry, evening up, checking obsessions; ordering, arranging, counting, writing-rewriting compulsions, repetitive writing tics) and disinhibition (uttering syllables/words, echolalia/palilalia, coprolalia/copropraxia, and obsessive urges to offend/mutilate/be destructive). Heritability estimates for both endophenotypes were high and statistically significant (disinhibition factor=0.35, SE=0.03; symmetry factor=0.39, SE=0.03; symmetry class=0.38, SE=0.10). Mothers of Tourette syndrome probands had high rates of symmetry (49%) but not disinhibition (5%). Polygenic risk scores derived from a Tourette syndrome genome-wide association study (GWAS) were significantly associated with symmetry, while risk scores derived from an OCD GWAS were not. OCD polygenic risk scores were significantly associated with disinhibition, while Tourette syndrome and ADHD risk scores were not. The analyses identified two heritable endophenotypes related to Tourette syndrome that cross traditional diagnostic boundaries. The symmetry phenotype correlated with Tourette syndrome polygenic load and was present in otherwise Tourette-unaffected mothers, suggesting that this phenotype may reflect additional Tourette syndrome (rather than OCD) genetic liability that is not captured by traditional DSM-based diagnoses.
Carmichael, Owen; Schwarz, Christopher; Drucker, David; Fletcher, Evan; Harvey, Danielle; Beckett, Laurel; Jack, Clifford R; Weiner, Michael; DeCarli, Charles
2010-11-01
To evaluate relationships between magnetic resonance imaging (MRI)-based measures of white matter hyperintensities (WMHs), measured at baseline and longitudinally, and 1-year cognitive decline using a large convenience sample in a clinical trial design with a relatively mild profile of cardiovascular risk factors. Convenience sample in a clinical trial design. A total of 804 participants in the Alzheimer Disease Neuroimaging Initiative who received MRI scans, cognitive testing, and clinical evaluations at baseline, 6-month follow-up, and 12-month follow-up visits. For each scan, WMHs were detected automatically on coregistered sets of T1, proton density, and T2 MRI images using a validated method. Mixed-effects regression models evaluated relationships between risk factors for WMHs, WMH volume, and change in outcome measures including Mini-Mental State Examination (MMSE), Alzheimer Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Scale sum of boxes scores. Covariates in these models included race, sex, years of education, age, apolipoprotein E genotype, baseline clinical diagnosis (cognitively normal, mild cognitive impairment, or Alzheimer disease), cardiovascular risk score, and MRI-based hippocampal and brain volumes. Higher baseline WMH volume was associated with greater subsequent 1-year increase in ADAS-Cog and decrease in MMSE scores. Greater WMH volume at follow-up was associated with greater ADAS-Cog and lower MMSE scores at follow-up. Higher baseline age and cardiovascular risk score and more impaired baseline clinical diagnosis were associated with higher baseline WMH volume. White matter hyperintensity volume predicts 1-year cognitive decline in a relatively healthy convenience sample that was similar to clinical trial samples, and therefore should be considered as a covariate of interest at baseline and longitudinally in future AD treatment trials.
Kulshreshtha, Ambar; Vaccarino, Viola; Judd, Suzanne; Howard, Virginia J.; McClellan, William; Muntner, Paul; Hong, Yuling; Safford, Monika; Goyal, Abhinav; Cushman, Mary
2013-01-01
Background and Purpose The American Heart Association developed Life’s Simple 7 (LS7) as a metric defining cardiovascular health. We investigated the association between LS7 and incident stroke in black and white Americans. Methods REGARDS is a national population-based cohort of 30,239 blacks and whites, aged ≥45 years, sampled from the US population in 2003–2007. Data were collected by telephone, self-administered questionnaires and an in-home exam. Incident strokes were identified through bi-annual participant contact followed by adjudication of medical records. Levels of the LS7 components (blood pressure, cholesterol, glucose, body mass index, smoking, physical activity, and diet) were each coded as poor (0 points), intermediate (1 point) or ideal (2 points) health. An overall LS7 score was categorized as inadequate (0–4), average (5–9) or optimum (10–14) cardiovascular health. Results Among 22,914 subjects with LS7 data and no previous cardiovascular disease, there were 432 incident strokes over 4.9 years of follow-up. After adjusting for demographics, socioeconomic status, and region of residence, each better health category of the LS7 score was associated with a 25% lower risk of stroke (HR=0.75, 95% CI = 0.63, 0.90). The association was similar for blacks and whites (interaction p-value = 0.55). A one point higher LS7 score was associated with an 8% lower risk of stroke (HR=0.92, 95% CI=0.88, 0.95). Conclusion In both blacks and whites, better cardiovascular health, based on the LS7 score, is associated with lower risk of stroke, and a small difference in scores was an important stroke determinant. PMID:23743971
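The LS7 scoring rule above (0/1/2 points per component, summed to a 0–14 score and banded) is simple enough to sketch directly. A minimal Python illustration follows; the component-level definitions of poor/intermediate/ideal follow AHA criteria that are not reproduced here, so the example takes those labels as given.

    # Sketch of the Life's Simple 7 scoring used above: seven components coded
    # poor/intermediate/ideal (0/1/2 points), summed to 0-14 and banded.
    LS7_COMPONENTS = ("blood_pressure", "cholesterol", "glucose", "body_mass_index",
                      "smoking", "physical_activity", "diet")
    LEVEL_POINTS = {"poor": 0, "intermediate": 1, "ideal": 2}

    def ls7_score(levels: dict) -> int:
        """Sum the 0/1/2 points over the seven LS7 components."""
        return sum(LEVEL_POINTS[levels[c]] for c in LS7_COMPONENTS)

    def ls7_category(score: int) -> str:
        if score <= 4:
            return "inadequate (0-4)"
        if score <= 9:
            return "average (5-9)"
        return "optimum (10-14)"

    example = {c: "intermediate" for c in LS7_COMPONENTS}
    example["smoking"] = "ideal"
    print(ls7_score(example), ls7_category(ls7_score(example)))  # 8 average (5-9)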
Karam, Nicole; Bataille, Sophie; Marijon, Eloi; Giovannetti, Olivier; Tafflet, Muriel; Savary, Dominique; Benamer, Hakim; Caussin, Christophe; Garot, Philippe; Juliard, Jean-Michel; Pires, Virginie; Boche, Thévy; Dupas, François; Le Bail, Gaelle; Lamhaut, Lionel; Laborne, François; Lefort, Hugues; Mapouata, Mireille; Lapostolle, Frederic; Spaulding, Christian; Empana, Jean-Philippe; Jouven, Xavier; Lambert, Yves
2016-12-20
In-hospital mortality of ST-segment-elevation myocardial infarction (STEMI) has decreased drastically. In contrast, prehospital mortality from sudden cardiac arrest (SCA) remains high and difficult to reduce. Identification of patients with STEMI at higher risk for prehospital SCA could facilitate rapid triage and intervention in the field. Using a prospective, population-based study evaluating all patients with STEMI managed by emergency medical services in the greater Paris area (11.7 million inhabitants) between 2006 and 2010, we identified characteristics associated with an increased risk of prehospital SCA and used these variables to build an SCA prediction score, which we validated internally and externally. In the overall STEMI population (n=8112; median age, 60 years; 78% male), SCA occurred in 452 patients (5.6%). In multivariate analysis, younger age, absence of obesity, absence of diabetes mellitus, shortness of breath, and a short delay between pain onset and call to emergency medical services were the main predictors of SCA. A score built from these variables predicted SCA, with the risk increasing 2-fold in patients with a score between 10 and 19, 4-fold in those with a score between 20 and 29, and >18-fold in patients with a score ≥30 compared with those with scores <10. The SCA rate was 28.9% in patients with a score ≥30 compared with 1.6% in patients with a score ≤9 (P for trend <0.001). The area under the curve values were 0.7033 in the internal validation sample and 0.6031 in the external validation sample. Sensitivity and specificity were 96.9% and 10.5%, respectively, for scores ≥10, and 18.0% and 97.6% for scores ≥30, with scores between 20 and 29 achieving the best balance of sensitivity and specificity (65.4% and 62.6%, respectively). At the early phase of STEMI, the risk of prehospital SCA can be determined through a simple score of 5 routinely assessed predictors. This score might help optimize the dispatching and management of patients with STEMI by emergency medical services.
Pletcher, Mark J; Tice, Jeffrey A; Pignone, Michael; McCulloch, Charles; Callister, Tracy Q; Browner, Warren S
2004-01-01
Background The coronary artery calcium (CAC) score is an independent predictor of coronary heart disease. We sought to combine information from the CAC score with information from conventional cardiac risk factors to produce post-test risk estimates, and to determine whether the score may add clinically useful information. Methods We measured the independent cross-sectional associations between conventional cardiac risk factors and the CAC score among asymptomatic persons referred for non-contrast electron beam computed tomography. Using the resulting multivariable models and published CAC score-specific relative risk estimates, we estimated post-test coronary heart disease risk in a number of different scenarios. Results Among 9341 asymptomatic study participants (age 35–88 years, 40% female), we found that conventional coronary heart disease risk factors including age, male sex, self-reported hypertension, diabetes and high cholesterol were independent predictors of the CAC score, and we used the resulting multivariable models for predicting post-test risk in a variety of scenarios. Our models predicted, for example, that a 60-year-old non-smoking non-diabetic woman with hypertension and high cholesterol would have a 47% chance of having a CAC score of zero, reducing her 10-year risk estimate from 15% (per Framingham) to 6–9%; if her score were over 100, however (a 17% chance), her risk estimate would be markedly higher (25–51% in 10 years). In low risk scenarios, the CAC score is very likely to be zero or low, and unlikely to change management. Conclusion Combining information from the CAC score with information from conventional risk factors can change assessment of coronary heart disease risk to an extent that may be clinically important, especially when the pre-test 10-year risk estimate is intermediate. The attached spreadsheet makes these calculations easy. PMID:15327691
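The pre-test-to-post-test arithmetic sketched above can be illustrated with a generic odds-form update. This is not the paper's model (which combines multivariable regression with published CAC score-specific relative risks); the likelihood-ratio values below are placeholders chosen only to show the mechanics of moving an intermediate pre-test risk up or down once a CAC result is known.

    # Generic post-test risk update: pre-test risk -> odds -> apply a
    # stratum-specific likelihood ratio -> back to risk.  LR values are
    # illustrative placeholders, not estimates from the paper.
    PLACEHOLDER_LR = {"CAC = 0": 0.4, "CAC 1-100": 1.0, "CAC > 100": 3.0}

    def posttest_risk(pretest_risk: float, cac_stratum: str) -> float:
        pre_odds = pretest_risk / (1.0 - pretest_risk)
        post_odds = pre_odds * PLACEHOLDER_LR[cac_stratum]
        return post_odds / (1.0 + post_odds)

    # e.g. a 15% pre-test 10-year risk moved down or up by the CAC result
    for stratum in PLACEHOLDER_LR:
        print(stratum, round(posttest_risk(0.15, stratum), 3))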
Barnes, Geoffrey D; Gu, Xiaokui; Haymart, Brian; Kline-Rogers, Eva; Almany, Steve; Kozlowski, Jay; Besley, Dennis; Krol, Gregory D; Froehlich, James B; Kaatz, Scott
2014-08-01
Guidelines recommend the assessment of stroke and bleeding risk before initiating warfarin anticoagulation in patients with atrial fibrillation. Many of the elements used to predict stroke also overlap with bleeding risk in atrial fibrillation patients and it is tempting to use stroke risk scores to efficiently estimate bleeding risk. Comparison of stroke risk scores to bleeding risk scores to predict bleeding has not been thoroughly assessed. 2600 patients at seven anticoagulation clinics were followed from October 2009 to May 2013. Five risk models (CHADS2, CHA2DS2-VASc, HEMORR2HAGES, HAS-BLED and ATRIA) were retrospectively applied to each patient. The primary outcome was the first major bleeding event. Areas under the ROC curves were compared using the C statistic, and net reclassification improvement (NRI) analysis was performed. 110 patients experienced a major bleeding event in 2581.6 patient-years (4.5%/year). Mean follow-up was 1.0 ± 0.8 years. All of the formal bleeding risk scores had a modest predictive value for first major bleeding events (C statistic 0.66-0.69), performing better than the CHADS2 and CHA2DS2-VASc scores (C statistic difference 0.10-0.16). NRI analysis demonstrated a 52-69% and 47-64% improvement of the formal bleeding risk scores over the CHADS2 score and CHA2DS2-VASc score, respectively. The CHADS2 and CHA2DS2-VASc scores did not perform as well as formal bleeding risk scores for prediction of major bleeding in non-valvular atrial fibrillation patients treated with warfarin. All three bleeding risk scores (HAS-BLED, ATRIA and HEMORR2HAGES) performed moderately well.
Gander, Philippa; Briar, Celia; Garden, Alexander; Purnell, Heather; Woodward, Alistair
2010-09-01
To document fatigue in New Zealand junior doctors in hospital-based clinical training positions and identify work patterns associated with work/life balance difficulties. This workforce has had a duty limitation of 72 hours/week since 1985. The authors chose a gender-based analytical approach because of the increasing proportion of female medical graduates. The authors mailed a confidential questionnaire to all 2,154 eligible junior doctors in 2003. The 1,412 respondents were working ≥40 hours/week (complete questionnaires from 1,366: response rate: 63%; 49% women). For each participant, the authors calculated a multidimensional fatigue risk score based on sleep and work patterns. Women were more likely to report never/rarely getting enough sleep (P < .05), never/rarely waking refreshed (P < .001), and excessive sleepiness (P < .05) and were less likely to live with children up to 12 years old (P < .001). Fatigue risk scores differed by specialty but not by gender. Fatigue risk scores in the highest tertile were an independent risk factor for reporting problems in social life (odds ratio: 3.83; 95% CI: 2.79-5.28), home life (3.37; 2.43-4.67), personal relationships (2.12; 1.57-2.86), and other commitments (3.06; 2.23-4.19). Qualitative analyses indicated a common desire among men and women for better work/life balance and for part-time work, particularly in relation to parenthood. Limitation of duty hours alone is insufficient to manage fatigue risk and difficulties in maintaining work/life balance. These findings have implications for schedule design, professional training, and workforce planning.
Evaluation of polygenic risk scores for predicting breast and prostate cancer risk.
Machiela, Mitchell J; Chen, Chia-Yen; Chen, Constance; Chanock, Stephen J; Hunter, David J; Kraft, Peter
2011-09-01
Recently, polygenic risk scores (PRS) have been shown to be associated with certain complex diseases. The approach is based on counting multiple risk alleles associated with disease across independent loci, without requiring that every locus has achieved definitive genome-wide statistical significance. Whether PRS assist in the prediction of risk of common cancers is unknown. We built PRS from lists of genetic markers prioritized by their association with breast cancer (BCa) or prostate cancer (PCa) in a training data set and evaluated whether these scores could improve current genetic prediction of these specific cancers in independent test samples. We used genome-wide association data on 1,145 BCa cases and 1,142 controls from the Nurses' Health Study and 1,164 PCa cases and 1,113 controls from the Prostate Lung Colorectal and Ovarian Cancer Screening Trial. Ten-fold cross validation was used to build and evaluate PRS with 10 to 60,000 independent single nucleotide polymorphisms (SNPs). For both BCa and PCa, the models that included only published risk alleles maximized the cross-validation estimate of the area under the ROC curve (0.53 for breast and 0.57 for prostate). We found no significant evidence that PRS using common variants improved risk prediction for BCa and PCa over replicated SNP scores.
Safari, Saeed; Yousefifard, Mahmoud; Hashemi, Behrooz; Baratloo, Alireza; Forouzanfar, Mohammad Mehdi; Rahmati, Farhad; Motamedi, Maryam; Najafi, Iraj
2016-05-01
During the past decade, using serum biomarkers and clinical decision rules for early prediction of rhabdomyolysis-induced acute kidney injury (AKI) has received much attention from researchers. This study aimed to broadly review the value of scoring systems and urine dipstick in prediction of rhabdomyolysis-induced AKI. The study was designed based on the guidelines of the Meta-analysis of Observational Studies in Epidemiology statement. Searches of the electronic databases MEDLINE, EMBASE, the Cochrane Library, Scopus, and Google Scholar were performed by 2 independent reviewers. Studies evaluating AKI risk factors in rhabdomyolysis patients with the aim of developing a scoring model, as well as those assessing the role of urine dipstick in these patients, were included. Of the 5997 articles found, 143 were potentially relevant studies. After studying their full texts, 6 articles were entered into the systematic review. Two studies had developed or validated scoring systems: the "rule of thumb," the AKI index, and the Mangled Extremity Severity Score. Four studies assessed the predictive value of urine dipstick in risk prediction of rhabdomyolysis-induced AKI, with favorable results. The findings of this systematic review showed that, based on the available resources, the prediction rules and urine dipstick could be considered valuable screening tools for detection of patients at risk for AKI following rhabdomyolysis. Yet, the external validity of the mentioned tools should be assessed before their general application in routine practice.
PROgnosticating COeliac patieNts SUrvivaL: the PROCONSUL score.
Biagi, Federico; Schiepatti, Annalisa; Malamut, Georgia; Marchese, Alessandra; Cellier, Christophe; Bakker, Sjoerd F; Mulder, Chris J J; Volta, Umberto; Zingone, Fabiana; Ciacci, Carolina; D'Odorico, Anna; Andrealli, Alida; Astegiano, Marco; Klersy, Catherine; Corazza, Gino R
2014-01-01
It has been shown that mortality rates of coeliac patients correlate with age at diagnosis of coeliac disease, diagnostic delay for coeliac disease, pattern of clinical presentation and HLA typing. Our aim was to create a tool that identifies coeliac patients at higher risk of developing complications. To identify predictors of complications in patients with coeliac disease, we organised an observational multicenter case-control study based on a retrospective collection of clinical data. Clinical data from 116 cases (patients with complicated coeliac disease) and 181 controls (coeliac patients without any complications) were collected from seven European centres. For each case, one or two controls, matched to cases according to the year of assessment, gender and age, were selected. Diagnostic delay, pattern of clinical presentation, HLA typing and age at diagnosis were used as predictors. Differences between cases and controls were detected for diagnostic delay and classical presentation. Conditional logistic models based on these statistically different predictors allowed the development of a score system. Tertiles analysis showed a relationship between score and risk of developing complications. A score that shows the risk of a newly diagnosed coeliac patient developing complications was devised for the first time. This will make it possible to set up the follow-up of coeliac patients with great benefits not only for their health but also for management of economic resources. We think that our results are very encouraging and represent the first attempt to build a prognostic score for coeliac patients.
PROgnosticating COeliac patieNts SUrvivaL: The PROCONSUL Score
Biagi, Federico; Schiepatti, Annalisa; Malamut, Georgia; Marchese, Alessandra; Cellier, Christophe; Bakker, Sjoerd F.; Mulder, Chris J. J.; Volta, Umberto; Zingone, Fabiana; Ciacci, Carolina; D’Odorico, Anna; Andrealli, Alida; Astegiano, Marco; Klersy, Catherine; Corazza, Gino R.
2014-01-01
Introduction It has been shown that mortality rates of coeliac patients correlate with age at diagnosis of coeliac disease, diagnostic delay for coeliac disease, pattern of clinical presentation and HLA typing. Our aim was to create a tool that identifies coeliac patients at higher risk of developing complications. Methods To identify predictors of complications in patients with coeliac disease, we organised an observational multicenter case-control study based on a retrospective collection of clinical data. Clinical data from 116 cases (patients with complicated coeliac disease) and 181 controls (coeliac patients without any complications) were collected from seven European centres. For each case, one or two controls, matched to cases according to the year of assessment, gender and age, were selected. Diagnostic delay, pattern of clinical presentation, HLA typing and age at diagnosis were used as predictors. Results Differences between cases and controls were detected for diagnostic delay and classical presentation. Conditional logistic models based on these statistically different predictors allowed the development of a score system. Tertiles analysis showed a relationship between score and risk of developing complications. Discussion A score that shows the risk of a newly diagnosed coeliac patient developing complications was devised for the first time. This will make it possible to set up the follow-up of coeliac patients with great benefits not only for their health but also for management of economic resources. Conclusions We think that our results are very encouraging and represent the first attempt to build a prognostic score for coeliac patients. PMID:24392112
Álvarez-García, Jesús; Ferrero-Gregori, Andreu; Puig, Teresa; Vázquez, Rafael; Delgado, Juan; Pascual-Figal, Domingo; Alonso-Pulpón, Luis; González-Juanatey, José R; Rivera, Miguel; Worner, Fernando; Bardají, Alfredo; Cinca, Juan
2015-08-01
Prevention of hospital readmissions is one of the main objectives in the management of patients with heart failure (HF). Most of the models predicting readmissions are based on data extracted from hospitalized patients rather than from outpatients. Our objective was to develop a validated score predicting 1-month and 1-year risk of readmission for worsening of HF in ambulatory patients. A cohort of 2507 ambulatory patients with chronic HF was prospectively followed for a median of 3.3 years. Clinical, echocardiographic, ECG, and biochemical variables were used in a competing risk regression analysis to construct a risk score for readmissions due to worsening of HF. Thereafter, the score was externally validated using a different cohort of 992 patients with chronic HF (MUSIC registry). Predictors of 1-month readmission were the presence of elevated natriuretic peptides, left ventricular (LV) HF signs, and estimated glomerular filtration rate (eGFR) <60 mL/min/m². Predictors of 1-year readmission were elevated natriuretic peptides, anaemia, left atrial size >26 mm/m², heart rate >70 b.p.m., LV HF signs, and eGFR <60 mL/min/m². The C-statistics for the models were 0.72 and 0.66, respectively. The cumulative incidence function distinguished low-risk (<1% event rate) and high-risk groups (>5% event rate) for 1-month HF readmission. Likewise, low-risk (7.8%), intermediate-risk (15.6%) and high-risk groups (26.1%) were identified for 1-year HF readmission risk. The C-statistics remained consistent after the external validation (<5% loss of discrimination). The Redin-SCORE predicts early and late readmission for worsening of HF using proven prognostic variables that are routinely collected in outpatient management of chronic HF.
Credit scoring analysis using weighted k nearest neighbor
NASA Astrophysics Data System (ADS)
Mukid, M. A.; Widiharih, T.; Rusgiyono, A.; Prahutama, A.
2018-05-01
Credit scoring is a quantitative method to evaluate the credit risk of loan applications. Both statistical methods and artificial intelligence are often used by credit analysts to help them decide whether the applicants are worthy of credit. These methods aim to predict future behavior in terms of credit risk based on past experience of customers with similar characteristics. This paper reviews the weighted k nearest neighbor (WKNN) method for credit assessment by considering the use of some kernels. We use credit data from a private bank in Indonesia. The results show that the Gaussian and rectangular kernels perform best, each achieving a percentage of correctly classified cases of 82.4%.
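As an illustration of the WKNN approach reviewed here, the sketch below uses scikit-learn's KNeighborsClassifier with a Gaussian kernel as the neighbor-weighting function. The data are synthetic placeholders; the Indonesian bank data used in the paper are not public, and the reported 82.4% figure is not reproduced by this example.

    # Weighted k-nearest-neighbour credit scoring with a Gaussian kernel (sketch).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    def gaussian_kernel(distances, bandwidth=1.0):
        # Closer neighbours get exponentially larger weights.
        return np.exp(-0.5 * (distances / bandwidth) ** 2)

    # Stand-in for applicant features and a default/no-default flag.
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    wknn = KNeighborsClassifier(n_neighbors=15, weights=gaussian_kernel)
    wknn.fit(X_tr, y_tr)
    print("percentage correctly classified:", round(100 * wknn.score(X_te, y_te), 1))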
Predicting Blunt Cerebrovascular Injury in Pediatric Trauma: Validation of the “Utah Score”
Ravindra, Vijay M.; Bollo, Robert J.; Sivakumar, Walavan; Akbari, Hassan; Naftel, Robert P.; Limbrick, David D.; Jea, Andrew; Gannon, Stephen; Shannon, Chevis; Birkas, Yekaterina; Yang, George L.; Prather, Colin T.; Kestle, John R.
2017-01-01
Risk factors for blunt cerebrovascular injury (BCVI) may differ between children and adults, suggesting that children at low risk for BCVI after trauma receive unnecessary computed tomography angiography (CTA) and high-dose radiation. We previously developed a score for predicting pediatric BCVI based on retrospective cohort analysis. Our objective is to externally validate this prediction score with a retrospective multi-institutional cohort. We included patients who underwent CTA for traumatic cranial injury at four pediatric Level I trauma centers. Each patient in the validation cohort was scored using the “Utah Score” and classified as high or low risk. Before analysis, we defined a misclassification rate <25% as validating the Utah Score. Six hundred forty-five patients (mean age 8.6 ± 5.4 years; 63.4% males) underwent screening for BCVI via CTA. The validation cohort was 411 patients from three sites compared with the training cohort of 234 patients. Twenty-two BCVIs (5.4%) were identified in the validation cohort. The Utah Score was significantly associated with BCVIs in the validation cohort (odds ratio 8.1 [3.3, 19.8], p < 0.001) and discriminated well in the validation cohort (area under the curve 72%). When the Utah Score was applied to the validation cohort, the sensitivity was 59%, specificity was 85%, positive predictive value was 18%, and negative predictive value was 97%. The Utah Score misclassified 16.6% of patients in the validation cohort. The Utah Score for predicting BCVI in pediatric trauma patients was validated with a low misclassification rate using a large, independent, multicenter cohort. Its implementation in the clinical setting may reduce the use of CTA in low-risk patients. PMID:27297774
Kassam, Zain; Fabersunne, Camila Cribb; Smith, Mark B.; Alm, Eric J.; Kaplan, Gilaad G.; Nguyen, Geoffrey C.; Ananthakrishnan, Ashwin N.
2016-01-01
Background Clostridium difficile infection (CDI) is a public health threat associated with significant mortality. However, there is a paucity of objectively derived CDI severity scoring systems to predict mortality. Aims To develop a novel CDI risk score to predict mortality, the Clostridium difficile Associated Risk of Death Score (CARDS). Methods We obtained data from the United States 2011 Nationwide Inpatient Sample (NIS) database. All CDI-associated hospitalizations were identified using discharge codes (ICD-9-CM, 008.45). Multivariate logistic regression was utilized to identify independent predictors of mortality. CARDS was calculated by assigning a numeric weight to each parameter based on their odds ratio in the final logistic model. Predictive properties of model discrimination were assessed using the c-statistic and validated in an independent sample using the 2010 NIS database. Results We identified 77,776 hospitalizations, yielding an estimate of 374,747 cases with an associated diagnosis of CDI in the United States, 8% of whom died in the hospital. Eight severity score predictors were identified on multivariate analysis: age, cardiopulmonary disease, malignancy, diabetes, inflammatory bowel disease, acute renal failure, liver disease and ICU admission, with weights ranging from −1 (for diabetes) to 5 (for ICU admission). The overall risk score in the cohort ranged from 0 to 18. Mortality increased significantly as CARDS increased. CDI-associated mortality was 1.2% with a CARDS of 0 compared to 100% with CARDS of 18. The model performed equally well in our validation cohort. Conclusion CARDS is a promising simple severity score to predict mortality among those hospitalized with CDI. PMID:26849527
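The score-construction step described above (turning logistic-regression odds ratios into integer point weights and summing them per patient) can be sketched generically. The predictors, data, and the 0.25 log-odds-per-point scaling below are illustrative assumptions, not the NIS variables or the published CARDS weights.

    # Generic sketch: fit a logistic model, convert coefficients (log odds ratios)
    # to small integer point weights, sum the weights per patient.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    predictors = ["age_over_70", "icu_admission", "acute_renal_failure", "diabetes"]
    df = pd.DataFrame(rng.integers(0, 2, size=(5000, len(predictors))), columns=predictors)
    # Simulate mortality from an assumed logistic relationship (placeholder only).
    logit = (-3 + 1.2 * df["icu_admission"] + 0.8 * df["acute_renal_failure"]
             + 0.5 * df["age_over_70"] - 0.2 * df["diabetes"])
    df["died"] = rng.random(len(df)) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression().fit(df[predictors], df["died"])

    # Scale log odds ratios to integer points (here: one point per ~0.25 on the log-OR scale).
    weights = {p: int(round(coef / 0.25)) for p, coef in zip(predictors, model.coef_[0])}
    df["score"] = sum(df[p] * w for p, w in weights.items())
    print(weights)
    print(df.groupby("score")["died"].mean().round(3))  # observed mortality by score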
Yamanouchi, Masayuki; Hoshino, Junichi; Ubara, Yoshifumi; Takaichi, Kenmei; Kinowaki, Keiichi; Fujii, Takeshi; Ohashi, Kenichi; Mise, Koki; Toyama, Tadashi; Hara, Akinori; Kitagawa, Kiyoki; Shimizu, Miho; Furuichi, Kengo; Wada, Takashi
2018-01-01
There have been a limited number of biopsy-based studies on diabetic nephropathy, and therefore the clinical importance of renal biopsy in patients with diabetes in late-stage chronic kidney disease (CKD) is still debated. We aimed to clarify the renal prognostic value of adding pathological information to clinical information in patients with diabetes and advanced CKD. We retrospectively assessed 493 type 2 diabetics with biopsy-proven diabetic nephropathy in four centers in Japan. 296 patients with stage 3-5 CKD at the time of biopsy were identified and assigned two risk prediction scores for end-stage renal disease (ESRD): the Kidney Failure Risk Equation (KFRE, a score composed of clinical parameters) and the Diabetic Nephropathy Score (D-score, a score integrating pathological parameters of the Diabetic Nephropathy Classification by the Renal Pathology Society (RPS DN Classification)). They were randomized 2:1 to development and validation cohorts. Hazard ratios (HR) of incident ESRD were reported with 95% confidence intervals (CI) for the KFRE, D-score and KFRE+D-score in Cox regression models. Improvement of risk prediction with the addition of the D-score to the KFRE was assessed using c-statistics, continuous net reclassification improvement (NRI), and integrated discrimination improvement (IDI). During a median follow-up of 1.9 years, 194 patients developed ESRD. The Cox regression analysis showed that the KFRE, D-score and KFRE+D-score were significant predictors of ESRD both in the development cohort and in the validation cohort. The c-statistic of the D-score was 0.67. The c-statistic of the KFRE was good, but its predictive value was weaker than that in the miscellaneous CKD cohort originally reported (c-statistics, 0.78 vs. 0.90) and was not significantly improved by adding the D-score (0.78 vs. 0.79, p = 0.83). Only continuous NRI was positive after adding the D-score to the KFRE (0.4%; CI: 0.0-0.8%). We found that the predictive values of the KFRE and the D-score were not as good as previously reported, and combining the D-score with the KFRE did not significantly improve prediction of the risk of ESRD in advanced diabetic nephropathy. Improving prediction of renal prognosis in advanced diabetic nephropathy may require different approaches, combining clinical and pathological parameters not captured in the KFRE and the RPS DN Classification.
Ruminative subtypes and impulsivity in risk for suicidal behavior
Valderrama, Jorge; Miranda, Regina; Jeglic, Elizabeth
2016-01-01
Rumination has been previously linked to negative psychological outcomes, including depression and suicidal behavior. However, there has been conflicting research on whether or not two different subtypes of rumination – brooding and reflection – are more or less maladaptive. The present research sought to (1) examine whether individuals high in brooding but lower in reflection would show higher trait and behavioral impulsivity, relative to individuals low in brooding and low in reflection; and (2) examine impulsivity as a mediator of the relation between ruminative subtypes and suicidal ideation. In Study 1, participants (N = 78) were recruited based on high, average, and low scores on a measure of brooding and reflective rumination. Individuals who scored high in brooding and average in reflection scored significantly higher in negative urgency, that is, in the tendency to act rashly in an attempt to reduce negative affect, than did those who scored low in brooding and low in reflection. Study 2 (N = 1638) examined the relationship between ruminative subtypes, impulsivity, and suicide risk. We found an indirect relationship between brooding and suicide risk through lack of premeditation and lack of perseverance, independently of reflection. These findings are discussed in relation to cognitive risk for suicide. PMID:26791398
Oliveira, Alane Cabral Menezes de; Ferreira, Raphaela Costa; Santos, Arianne Albuquerque
2016-04-01
To analyze the relation of abdominal obesity to cardiovascular risk in individuals seen at a nutrition teaching clinic, classifying them based on the Framingham score. Cross-sectional study, conducted at the nutrition clinic of a private college in the city of Maceió, Alagoas. We included randomly selected adults and elderly individuals with abdominal obesity, of both sexes, treated from August to December of 2009, with no history of cardiomyopathy or cardiovascular events. To determine cardiovascular risk, the Framingham score was calculated. All analyses were performed with SPSS software version 20.0, with p < 0.05 considered significant. We studied 54 subjects (83% female; mean age 48 years, range 31 to 73 years). Waist circumference showed only a negligible correlation with cardiovascular risk in the subjects studied (r = 0.065, p = 0.048), indicating no meaningful relationship between these parameters. Abdominal fat distribution was only weakly related to cardiovascular risk in patients seen at a nutrition teaching clinic.
A scoring system for ascertainment of incident stroke; the Risk Index Score (RISc).
Kass-Hout, T A; Moyé, L A; Smith, M A; Morgenstern, L B
2006-01-01
The main objective of this study was to develop and validate a computer-based statistical algorithm that could be translated into a simple scoring system in order to ascertain incident stroke cases using hospital admission medical records data. The Risk Index Score (RISc) algorithm was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project, 2000. The validity of RISc was evaluated by estimating the concordance of scoring system stroke ascertainment to stroke ascertainment by physician and/or abstractor review of hospital admission records. RISc was developed on 1718 randomly selected patients (training set) and then statistically validated on an independent sample of 858 patients (validation set). A multivariable logistic model was used to develop RISc and subsequently evaluated by goodness-of-fit and receiver operating characteristic (ROC) analyses. The higher the value of RISc, the higher the patient's risk of potential stroke. The study showed RISc was well calibrated and discriminated those who had potential stroke from those that did not on initial screening. In this study we developed and validated a rapid, easy, efficient, and accurate method to ascertain incident stroke cases from routine hospital admission records for epidemiologic investigations. Validation of this scoring system was achieved statistically; however, clinical validation in a community hospital setting is warranted.
Hsieh, Yu-Hsiang; Haukoos, Jason S; Rothman, Richard E
2014-07-01
We sought to evaluate the performance of an abbreviated version of the Denver HIV Risk Score in 2 urban emergency departments (ED) with known high undiagnosed HIV prevalence. We performed a secondary analysis of data collected prospectively between November 2005 and December 2009 as part of an ED-based nontargeted rapid HIV testing program from 2 sites. Demographics; HIV testing history; injection drug use; and select high-risk sexual behaviors, including men who have sex with men, were collected by standardized interview. Information regarding receptive anal intercourse and vaginal intercourse was either not collected or collected inconsistently and was thus omitted from the model to create its abbreviated version. The study cohort included 15,184 patients with 114 (0.75%) newly diagnosed with HIV infection. HIV prevalence was 0.41% (95% confidence interval [CI], 0.21%-0.71%) for those with a score less than 20, 0.29% (95% CI, 0.14%-0.52%) for those with a score of 20 to 29, 0.65% (95% CI, 0.48%-0.87%) for those with a score of 30 to 39, 2.38% (95% CI, 1.68%-3.28%) for those with a score of 40 to 49, and 4.57% (95% CI, 2.09%-8.67%) for those with a score of 50 or higher. External validation resulted in good discrimination (area under the receiver operating characteristic curve, 0.75; 95% CI, 0.71-0.79). The calibration regression slope was 0.92 and its R² was 0.78. An abbreviated version of the Denver HIV Risk Score had comparable performance to that reported previously, offering a promising alternative strategy for HIV screening in the ED where limited sexual risk behavior information may be obtainable.
Morcillo, César; Valderas, José M; Roca, Joan M; Oliveró, Ruperto; Núñez, Cristina; Sánchez, Mónica; Bechich, Siraj
2007-03-01
Measurement of coronary artery calcification (CAC) is used in the evaluation of cardiovascular risk. We investigated its usefulness by comparing CAC assessment with that of various risk charts. We determined cardiovascular risk in patients without known atherosclerosis using the 1998 European Task Force (ETF), REGICOR (Registre Gironí del Corazón) and SCORE (Systematic Coronary Risk Evaluation) charts. CAC was assessed by computerized tomography and measurements were classified as low risk (i.e., score <1), intermediate risk (i.e., score 1-100), or high risk (i.e., score >100). The study included 331 patients (mean age 54 [8.5] years, 89% male). In 44.1%, CAC was detected (mean score 96 [278]). The degree of agreement between the cardiovascular risk derived from the CAC score and that derived from the SCORE and ETF charts was acceptable: kappa=.33 (P<.05) and kappa=.28 (P<.05), respectively, but agreement was poor with the REGICOR chart: kappa=.02 (P=.32). The SCORE and ETF charts, respectively, classified 45.0% and 38.3% of patients with a CAC score >100 as high risk, whereas the REGICOR chart did not classify any of these patients as high risk. Male sex, older age, smoking history, and a family history of coronary heart disease were all associated with the detection of CAC. Measurement of CAC demonstrated calcification in 44.1% of patients without known atherosclerosis. By regarding those with a CAC score > 100 as high-risk, 10.4% of patients evaluated using the SCORE chart would be reclassified as high risk, as would 11.6% of those evaluated using the ETF chart, and 18.9% of those evaluated using the REGICOR chart. Consequently, more patients would be eligible for preventative treatment.
Arts, E E A; Popa, C D; Den Broeder, A A; Donders, R; Sandoo, A; Toms, T; Rollefstad, S; Ikdahl, E; Semb, A G; Kitas, G D; Van Riel, P L C M; Fransen, J
2016-04-01
Predictive performance of cardiovascular disease (CVD) risk calculators appears suboptimal in rheumatoid arthritis (RA). A disease-specific CVD risk algorithm may improve CVD risk prediction in RA. The objectives of this study are to adapt the Systematic COronary Risk Evaluation (SCORE) algorithm with determinants of CVD risk in RA and to assess the accuracy of CVD risk prediction calculated with the adapted SCORE algorithm. Data from the Nijmegen early RA inception cohort were used. The primary outcome was first CVD events. The SCORE algorithm was recalibrated by reweighing included traditional CVD risk factors and adapted by adding other potential predictors of CVD. Predictive performance of the recalibrated and adapted SCORE algorithms was assessed and the adapted SCORE was externally validated. Of the 1016 included patients with RA, 103 patients experienced a CVD event. Discriminatory ability was comparable across the original, recalibrated and adapted SCORE algorithms. The Hosmer-Lemeshow test results indicated that all three algorithms provided poor model fit (p<0.05) for the Nijmegen and external validation cohort. The adapted SCORE algorithm mainly improves CVD risk estimation in non-event cases and does not show a clear advantage in reclassifying patients with RA who develop CVD (event cases) into more appropriate risk groups. This study demonstrates for the first time that adaptations of the SCORE algorithm do not provide sufficient improvement in risk prediction of future CVD in RA to serve as an appropriate alternative to the original SCORE. Risk assessment using the original SCORE algorithm may underestimate CVD risk in patients with RA.
Barriers to colorectal cancer screening: inadequate knowledge by physicians.
Gennarelli, Melissa; Jandorf, Lina; Cromwell, Caroline; Valdimarsdottir, Heiddis; Redd, William; Itzkowitz, Steven
2005-01-01
The rate of colorectal cancer (CRC) screening remains relatively low. One potential barrier to higher rates is the lack of physician knowledge regarding CRC screening. The purpose of this study was to assess physicians' knowledge of (a) American Cancer Society (ACS) CRC screening guidelines for average-risk and high-risk patients, and (b) general colorectal cancer facts which support these guidelines. We administered a questionnaire to internal medicine residents, internal medicine attendings and medical students who provide care to patients in a low-income, predominantly minority community, to compare their levels of knowledge regarding CRC screening. Mean knowledge scores were calculated based on the number of correct responses. Knowledge of ACS guidelines for average-risk patients was low, although it did increase directly with level of training: medical students obtained a mean score of 32%, residents 49%, and attendings 56% (p<0.001). Knowledge scores for high-risk patients were even lower, with fewer than half of the respondents offering correct answers. Mean knowledge scores of general CRC screening facts increased with level of training: medical students scored 31%, residents 38% and attendings 42% (p<0.001). Knowledge of CRC screening guidelines for both average- and high-risk patients was suboptimal among the medical students, residents and attendings studied. Lack of knowledge about CRC is one barrier to screening that may contribute to underutilization of screening for minority populations. Further educational efforts should be targeted to these health care professionals.
USDA-ARS's Scientific Manuscript database
An additive genetic risk score (GRS) for coronary heart disease (CHD) has previously been associated with incident CHD in the population-based Greek European Prospective Investigation into Cancer and nutrition (EPIC) cohort. In this study, we explore GRS-‘environment’ joint actions on CHD for severa...
Boulouis, Gregoire; Charidimou, Andreas; Pasi, Marco; Roongpiboonsopit, Duangnapa; Xiong, Li; Auriel, Eitan; van Etten, Ellis S; Martinez-Ramirez, Sergi; Ayres, Alison; Vashkevich, Anastasia; Schwab, Kristin M; Rosand, Jonathan; Goldstein, Joshua N; Gurol, M Edip; Greenberg, Steven M; Viswanathan, Anand
2017-09-15
An MRI-based score of total small vessel disease burden (CAA-SVD-Score) in cerebral amyloid angiopathy (CAA) has been demonstrated to correlate with severity of pathologic changes. Evidence suggests that CAA-related intracerebral hemorrhage (ICH) recurrence risk is associated with specific disease imaging manifestations rather than overall severity. We compared the correlation between the CAA-SVD-Score with the risk of recurrent CAA-related lobar ICH versus the predictive role of each of its components. Consecutive patients with CAA-related ICH from a single-center prospective cohort were analyzed. Radiological markers of CAA-related SVD damage were quantified and categorized according to the CAA-SVD-Score (0-6 points). Subjects were followed prospectively for recurrent symptomatic ICH. Adjusted Cox proportional hazards models were used to investigate associations between the CAA-SVD-Score as well as each of the individual MRI signatures of CAA and the risk of recurrent ICH. In 229 CAA patients with ICH, a total of 56 recurrent ICH events occurred during a median follow-up of 2.8 years (IQR 0.9-5.4 years; 781 person-years). Higher CAA-SVD-Score (HR=1.26 per additional point, 95% CI [1.04-1.52], p=0.015) and older age were independently associated with higher ICH recurrence risk. Analysis of individual markers of CAA showed that the CAA-SVD-Score findings were due to the independent effect of disseminated superficial siderosis (HR for disseminated cSS vs none: 2.89, 95% CI [1.47-5.5], p=0.002) and a high degree of perivascular space enlargement (RR=3.50, 95% CI [1.04-21], p=0.042). In lobar CAA-ICH patients, higher CAA-SVD-Score does predict recurrent ICH. Amongst individual elements of the score, superficial siderosis and dilated perivascular spaces are the only markers independently associated with ICH recurrence, contributing to the evidence for distinct CAA phenotypes singled out by neuro-imaging manifestations.
Metabolomic determinants of metabolic risk in Mexican adolescents
Perng, Wei; Hector, Emily C.; Song, Peter X.K.; Rojo, Martha Maria Tellez; Raskind, Sasha; Kachman, Maureen; Cantoral, Alejandra; Burant, Charles F.; Peterson, Karen E.
2017-01-01
Objective To identify metabolites associated with metabolic risk, separately by sex, in Mexican adolescents. Methods We carried out untargeted metabolomic profiling on fasting serum of 238 youths aged 8–14 years, and identified metabolites associated with a metabolic syndrome risk z-score (MetRisk z-score), separately for boys and girls, using the simulation and extrapolation (SIMEX) algorithm. We examined associations of each metabolite with MetRisk z-score using linear regression models that accounted for maternal education, child’s age, and pubertal status. Results Of the 938 features identified in metabolomics analysis, 7 named compounds (of 27 identified metabolites) were associated with MetRisk z-score in girls, and 3 named compounds (of 14 identified) were associated with MetRisk z-score in boys. In girls, diacylglycerol (DG) 16:0/16:0, 1,3-dielaidin, myo-inositol, and urate corresponded with higher MetRisk z-score, whereas N-acetylglycine, thymine, and dodecenedioic acid were associated with lower MetRisk z-score. For example, each z-score increment in DG 16:0/16:0 corresponded with a 0.60 (95% CI: 0.47, 0.74) higher MetRisk z-score. In boys, we found positive associations of DG 16:0/16:0, tyrosine, and 5′-methylthioadenosine with MetRisk z-score. Conclusions Metabolites on lipid, amino acid, and carbohydrate metabolism pathways are associated with metabolic risk in girls. Compounds on lipid and DNA pathways correspond with metabolic risk in boys. PMID:28758362
Kok, Victor C; Zhang, Han-Wei; Lin, Chin-Teng; Huang, Shih-Chung; Wu, Ming-Feng
2018-06-18
We hypothesized that hypertensive patients harbor a higher risk of urinary bladder (UB) cancer. We performed a population-based cohort study on adults using a National Health Insurance Research Database (NHIRD) dataset. Hypertension and comparison non-hypertensive (COMP) groups comprising 39,618 patients each were propensity score-matched by age, sex, index date, and medical comorbidities. The outcome was incident UB cancer validated using procedure codes. We constructed multivariable Cox models to derive adjusted hazard ratios (aHRs) and 95% confidence intervals (CIs). Cumulative incidence was compared using a log-rank test. During a total follow-up duration of 380,525 and 372,020 person-years in the hypertension and COMP groups, 248 and 186 patients developed UB cancer, respectively, representing a 32% increase in the risk (aHR, 1.32; 95% CI, 1.09-1.60). Hypertensive women harbored a significantly increased risk of UB cancer (aHR, 1.55; 95% CI, 1.12-2.13) compared with non-hypertensive women, whereas men with hypertension had a statistically non-significant increased risk (aHR, 1.22; 95% CI, 0.96-1.55). The sensitivity analysis demonstrated that the increased risk was sustained throughout different follow-up durations for the entire cohort; a statistical increase in the risk was also noted among hypertensive men. This nationwide population-based propensity score-matched cohort study supports a positive association between hypertension and subsequent UB cancer development.
A Bayesian framework for early risk prediction in traumatic brain injury
NASA Astrophysics Data System (ADS)
Chaganti, Shikha; Plassard, Andrew J.; Wilson, Laura; Smith, Miya A.; Patel, Mayur B.; Landman, Bennett A.
2016-03-01
Early detection of risk is critical in determining the course of treatment in traumatic brain injury (TBI). Computed tomography (CT) acquired at admission has shown latent prognostic value in prior studies; however, no robust clinical risk predictions have been achieved based on the imaging data in large-scale TBI analysis. The major challenge lies in the lack of consistent and complete medical records for patients, and an inherent bias associated with the limited number of patient samples with high-risk outcomes in available TBI datasets. Herein, we propose a Bayesian framework with mutual information-based forward feature selection to handle this type of data. Using multi-atlas segmentation, 154 image-based features (capturing intensity, volume and texture) were computed over 22 ROIs in 1791 CT scans. These features were combined with 14 clinical parameters and converted into risk likelihood scores using Bayes modeling. We explore the prediction power of the image features versus the clinical measures for various risk outcomes. The imaging data alone were more predictive of outcomes than the clinical data (including Marshall CT classification) for discharge disposition, with an area under the curve of 0.81 vs. 0.67, but less predictive than clinical data for discharge Glasgow Coma Scale (GCS) score, with an area under the curve of 0.65 vs. 0.85. However, in both cases, combining imaging and clinical data increased the area under the curve, to 0.86 for discharge disposition and 0.88 for discharge GCS score. In conclusion, CT data have meaningful prognostic value for TBI patients beyond what is captured in clinical measures and the Marshall CT classification.
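A simplified sketch of the kind of pipeline described above: rank candidate features by mutual information with the outcome, add them greedily, and convert to risk scores with a Gaussian naive Bayes model evaluated by area under the ROC curve. The data are synthetic placeholders, and a full implementation would tune the selection on a separate validation split rather than the test set.

    # Mutual-information-ranked forward feature selection with a naive Bayes risk model (sketch).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import roc_auc_score

    X, y = make_classification(n_samples=1500, n_features=40, n_informative=8,
                               weights=[0.85], random_state=1)  # imbalanced outcome
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

    mi = mutual_info_classif(X_tr, y_tr, random_state=1)
    order = np.argsort(mi)[::-1]                       # features, most informative first

    best_auc, best_subset = 0.0, order[:1]
    for k in range(1, 15):                             # forward pass over the MI ranking
        subset = order[:k]
        model = GaussianNB().fit(X_tr[:, subset], y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te[:, subset])[:, 1])
        if auc > best_auc:
            best_auc, best_subset = auc, subset
    print("selected features:", best_subset.tolist(), "AUC:", round(best_auc, 3))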
An exploration of the relationship between youth assets and engagement in risky sexual behaviors.
Evans, Alexandra E; Sanderson, Maureen; Griffin, Sarah F; Reininger, Belinda; Vincent, Murray L; Parra-Medina, Debra; Valois, Robert F; Taylor, Doug
2004-11-01
To examine the relationship between specific youth assets and adolescents' engagement in risky sexual behaviors, as measured by an Aggregate Sexual Risk score, and to specifically explore which youth assets and demographic variables were predictive of youth engagement in risky sexual intercourse. A total of 2108 sexually active high school students attending public high schools in a southern state completed a self-report questionnaire that measured youth assets. Based upon responses to items measuring risk behaviors, an Aggregate Sexual Risk score was calculated for each student. Unconditional logistic regression and multivariate logistic regression analyses were conducted to examine the relationships between the assets and the Aggregate Risk Score. Four separate analyses (white females, white males, black females, and black males) were conducted. In general, the patterns in all four groups indicated that students who had an Aggregate Risk Score of ≥3 (high risk) possessed fewer of the measured youth assets. The assets that were most significantly associated with engagement in risky sexual behaviors included self/peer values regarding risky behaviors, quantity of other adult support, and youths' empathetic relationships. Thus, students who reported not having these assets were significantly more likely to engage in the risky sexual behaviors. Results underscore the relationship of specific youth assets to sexual risk behaviors. Health researchers and practitioners who work to prevent teen pregnancy and sexually transmitted infections among teenagers need to understand and acknowledge these factors within this population so that the assets can be built or strengthened.
Clark, D G; Kapur, P; Geldmacher, D S; Brockington, J C; Harrell, L; DeRamus, T P; Blanton, P D; Lokken, K; Nicholas, A P; Marson, D C
2014-06-01
We constructed random forest classifiers employing either the traditional method of scoring semantic fluency word lists or new methods. These classifiers were then compared in terms of their ability to diagnose Alzheimer disease (AD) or to prognosticate among individuals along the continuum from cognitively normal (CN) through mild cognitive impairment (MCI) to AD. Semantic fluency lists from 44 cognitively normal elderly individuals, 80 MCI patients, and 41 AD patients were transcribed into electronic text files and scored by four methods: traditional raw scores, clustering and switching scores, "generalized" versions of clustering and switching, and a method based on independent components analysis (ICA). Random forest classifiers based on raw scores were compared to "augmented" classifiers that incorporated newer scoring methods. Outcome variables included AD diagnosis at baseline, MCI conversion, increase in Clinical Dementia Rating-Sum of Boxes (CDR-SOB) score, or decrease in Financial Capacity Instrument (FCI) score. Receiver operating characteristic (ROC) curves were constructed for each classifier and the area under the curve (AUC) was calculated. We compared AUC between raw and augmented classifiers using DeLong's test and assessed validity and reliability of the augmented classifier. Augmented classifiers outperformed classifiers based on raw scores for the outcome measures AD diagnosis (AUC .97 vs. .95), MCI conversion (AUC .91 vs. .77), CDR-SOB increase (AUC .90 vs. .79), and FCI decrease (AUC .89 vs. .72). Measures of validity and stability over time support the use of the method. Latent information in semantic fluency word lists is useful for predicting cognitive and functional decline among elderly individuals at increased risk for developing AD. Modern machine learning methods may incorporate latent information to enhance the diagnostic value of semantic fluency raw scores. These methods could yield information valuable for patient care and clinical trial design with a relatively small investment of time and money.
Deng, Fang-Ming; Donin, Nicholas M; Pe Benito, Ruth; Melamed, Jonathan; Le Nobin, Julien; Zhou, Ming; Ma, Sisi; Wang, Jinhua; Lepor, Herbert
2016-08-01
The risk of biochemical recurrence (BCR) following radical prostatectomy for pathologic Gleason 7 prostate cancer varies according to the proportion of the Gleason 4 component. We sought to explore the value of several novel quantitative metrics of Gleason 4 disease for the prediction of BCR in men with Gleason 7 disease. We analyzed a cohort of 2630 radical prostatectomy cases from 1990 to 2007. All pathologic Gleason 7 cases were identified and assessed for quantity of Gleason pattern 4. Three methods were used to quantify the extent of Gleason 4: a quantitative Gleason score (qGS) based on the proportion of tumor composed of Gleason pattern 4, a size-weighted score (swGS) incorporating the overall quantity of Gleason 4, and a size index (siGS) incorporating the quantity of Gleason 4 based on the index lesion. Associations between the above metrics and BCR were evaluated using Cox proportional hazards regression analysis. qGS, swGS, and siGS were significantly associated with BCR in multivariable analysis adjusted for traditional Gleason score, age, prostate-specific antigen, surgical margin, and stage. Using Harrell's c-index to compare the scoring systems, qGS (0.83), swGS (0.84), and siGS (0.84) all performed better than the traditional Gleason score (0.82). Quantitative measures of Gleason pattern 4 predict BCR better than the traditional Gleason score. In men with Gleason 7 prostate cancer, quantitative analysis of the proportion of Gleason pattern 4 (quantitative Gleason score), size-weighted measurement of Gleason 4 (size-weighted Gleason score), and size-weighted measurement of Gleason 4 based on the largest tumor nodule all significantly improve prediction of the risk of biochemical recurrence compared with the traditional Gleason score. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.
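A hedged sketch of the modelling step (a Cox proportional hazards model for time to biochemical recurrence, summarised with Harrell's c-index) using the lifelines library; the column names (qgs, psa, margin) and the simulated data below are illustrative assumptions, not the study cohort.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "qgs":    rng.uniform(3, 5, n),          # quantitative Gleason score (assumed scale)
    "psa":    rng.lognormal(1.5, 0.5, n),    # prostate-specific antigen
    "age":    rng.normal(62, 7, n),
    "margin": rng.integers(0, 2, n),         # positive surgical margin (0/1)
})
risk = 0.8 * df["qgs"] + 0.02 * df["psa"]
df["time"] = rng.exponential(np.exp(-0.3 * (risk - risk.mean())) * 60)  # months to BCR (simulated)
df["event"] = rng.integers(0, 2, n)                                     # BCR observed (simulated)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])        # hazard ratios per predictor
print("Harrell's c-index:", round(cph.concordance_index_, 3))
```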
Villar, Jesús; Ambrós, Alfonso; Soler, Juan Alfonso; Martínez, Domingo; Ferrando, Carlos; Solano, Rosario; Mosteiro, Fernando; Blanco, Jesús; Martín-Rodríguez, Carmen; Fernández, María Del Mar; López, Julia; Díaz-Domínguez, Francisco J; Andaluz-Ojeda, David; Merayo, Eleuterio; Pérez-Méndez, Lina; Fernández, Rosa Lidia; Kacmarek, Robert M
2016-07-01
Although there is general agreement on the characteristic features of the acute respiratory distress syndrome, we lack a scoring system that predicts acute respiratory distress syndrome outcome with high probability. Our objective was to develop an outcome score that clinicians could easily calculate at the bedside to predict the risk of death of acute respiratory distress syndrome patients 24 hours after diagnosis. A prospective, multicenter, observational, descriptive, and validation study. A network of multidisciplinary ICUs. Six hundred patients meeting Berlin criteria for moderate and severe acute respiratory distress syndrome, enrolled in two independent cohorts and treated with lung-protective ventilation. None. Using individual demographic, pulmonary, and systemic data at 24 hours after acute respiratory distress syndrome diagnosis, we derived our prediction score in 300 acute respiratory distress syndrome patients based on stratification of variable values into tertiles, and validated it in an independent cohort of 300 acute respiratory distress syndrome patients. The primary outcome was in-hospital mortality. We found that a 9-point score based on the patient's age, PaO2/FIO2 ratio, and plateau pressure at 24 hours after acute respiratory distress syndrome diagnosis was associated with death. Patients with a score greater than 7 had a mortality of 83.3% (relative risk, 5.7; 95% CI, 3.0-11.0), whereas patients with scores less than 5 had a mortality of 14.5% (p < 0.0000001). We confirmed the predictive validity of the score in the validation cohort. A simple 9-point score based on the values of age, PaO2/FIO2 ratio, and plateau pressure calculated at 24 hours on protective ventilation after acute respiratory distress syndrome diagnosis could be used in real time to rate the prognosis of acute respiratory distress syndrome patients with high probability.
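The abstract describes a bedside score built by stratifying three variables into tertiles and summing points. The sketch below only illustrates the general mechanics; the cut-points and point assignments are placeholders assumed for illustration, not the values derived in the study.

```python
def ards_outcome_score(age, pf_ratio, plateau_pressure):
    """Assign 1-3 points per variable (worst tertile = 3) and sum to a 3-9 score."""
    def points(value, cuts, higher_is_worse=True):
        low, high = cuts
        tertile = 1 if value < low else 2 if value < high else 3
        return tertile if higher_is_worse else 4 - tertile

    score = 0
    score += points(age, (50, 70), higher_is_worse=True)               # years (assumed cuts)
    score += points(pf_ratio, (120, 180), higher_is_worse=False)       # PaO2/FIO2 (assumed cuts)
    score += points(plateau_pressure, (27, 30), higher_is_worse=True)  # cmH2O (assumed cuts)
    return score

# Example: an older patient with a low PaO2/FIO2 ratio and a high plateau pressure
print(ards_outcome_score(age=75, pf_ratio=100, plateau_pressure=32))   # -> 9 (highest risk)
```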
Hijazi, Ziad; Oldgren, Jonas; Lindbäck, Johan; Alexander, John H; Connolly, Stuart J; Eikelboom, John W; Ezekowitz, Michael D; Held, Claes; Hylek, Elaine M; Lopes, Renato D; Siegbahn, Agneta; Yusuf, Salim; Granger, Christopher B; Wallentin, Lars
2016-06-04
The benefit of oral anticoagulation in atrial fibrillation is based on a balance between reduction in ischaemic stroke and increase in major bleeding. We aimed to develop and validate a new biomarker-based risk score to improve the prognostication of major bleeding in patients with atrial fibrillation. We developed and internally validated a new biomarker-based risk score for major bleeding in 14,537 patients with atrial fibrillation randomised to apixaban versus warfarin in the ARISTOTLE trial and externally validated it in 8468 patients with atrial fibrillation randomised to dabigatran versus warfarin in the RE-LY trial. Plasma samples for determination of candidate biomarker concentrations were obtained at randomisation. Major bleeding events were centrally adjudicated. The predictive values of biomarkers and clinical variables were assessed with Cox regression models. The most important variables were included in the score with weights proportional to the model coefficients. The ARISTOTLE and RE-LY trials are registered with ClinicalTrials.gov, numbers NCT00412984 and NCT00262600, respectively. The most important predictors for major bleeding were the concentrations of the biomarkers growth differentiation factor-15 (GDF-15), high-sensitivity cardiac troponin T (cTnT-hs) and haemoglobin, age, and previous bleeding. The ABC-bleeding score (age, biomarkers [GDF-15, cTnT-hs, and haemoglobin], and clinical history [previous bleeding]) yielded a higher c-index than the conventional HAS-BLED and the newer ORBIT scores for major bleeding in the derivation cohort (0·68 [95% CI 0·66-0·70] vs 0·61 [0·59-0·63] vs 0·65 [0·62-0·67], respectively; ABC-bleeding vs HAS-BLED p<0·0001 and ABC-bleeding vs ORBIT p=0·0008). The ABC-bleeding score also yielded a higher c-index in the external validation cohort (0·71 [95% CI 0·68-0·73] vs 0·62 [0·59-0·64] for HAS-BLED and 0·68 [0·65-0·70] for ORBIT; ABC-bleeding vs HAS-BLED p<0·0001 and ABC-bleeding vs ORBIT p=0·0016). A modified ABC-bleeding score using alternative biomarkers (haematocrit, cTnI-hs, cystatin C, or creatinine clearance) also outperformed the HAS-BLED and ORBIT scores. The ABC-bleeding score, using age, history of bleeding, and three biomarkers (haemoglobin, cTn-hs, and GDF-15 or cystatin C/CKD-EPI), was internally and externally validated and calibrated in large cohorts of patients with atrial fibrillation receiving anticoagulation therapy. The ABC-bleeding score performed better than the HAS-BLED and ORBIT scores and should be useful as decision support for anticoagulation treatment in patients with atrial fibrillation. BMS, Pfizer, Boehringer Ingelheim, Roche Diagnostics. Copyright © 2016 Elsevier Ltd. All rights reserved.
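The score-construction step described here (selecting the most important predictors and assigning weights proportional to the model coefficients) can be sketched as follows. The predictors, the simulated data, and the rescaling convention are illustrative assumptions, not the ARISTOTLE-derived coefficients.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "age":          rng.normal(70, 9, n),
    "gdf15_log":    rng.normal(7.0, 0.5, n),    # log GDF-15 (assumed transform)
    "troponin_log": rng.normal(2.0, 0.6, n),    # log cTnT-hs (assumed transform)
    "haemoglobin":  rng.normal(140, 15, n),
    "prior_bleed":  rng.integers(0, 2, n),
})
lin = 0.03 * df["age"] + 0.5 * df["gdf15_log"] - 0.01 * df["haemoglobin"] + 0.4 * df["prior_bleed"]
df["time"] = rng.exponential(np.exp(-(lin - lin.mean())) * 3)   # years to major bleeding (simulated)
df["event"] = rng.integers(0, 2, n)                             # bleeding observed (simulated)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
coefs = cph.params_                                   # beta coefficients per predictor
weights = (coefs / coefs.abs().max() * 10).round(1)   # weights proportional to the coefficients,
print(weights)                                        # rescaled so the strongest predictor gets 10 points
```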
Sebastião, Emerson; Learmonth, Yvonne C; Motl, Robert W
2017-01-01
Falls are of great concern among persons with multiple sclerosis (MS). To examine differences in metrics of mobility, postural control, and cognition in persons with MS with distinct fall risk status, and to investigate predictors of fall risk group membership using discriminant analysis. Forty-seven persons with MS completed the Activities-specific Balance Confidence (ABC) Scale and underwent a battery of assessments of mobility, balance, and cognition. Participants further wore an accelerometer for 7 days to assess steps/day. Participants were allocated into fall risk groups based on ABC scale scores: increased fall risk (IFR) and normal fall risk (NFR). We examined univariate differences between groups using ANOVA, and discriminant function analysis (DFA) identified the significant multivariate predictors of fall risk status. After controlling for disability level, the IFR group had significantly (p < 0.05) worse scores on measures of mobility (i.e., MSWS-12, 6 MW, and steps/day) compared to the NFR group. DFA identified MSWS-12 and 6 MW scores as significant (p < 0.05) predictors of fall risk group membership. These two variables collectively explained 55% of the variance in fall risk grouping. The findings suggest that mobility should be a focus of rehabilitation programs in persons with MS, especially for those at IFR.
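A minimal sketch of a discriminant analysis of fall-risk group membership, assuming scikit-learn's linear discriminant analysis as the implementation; the variable names only mirror the abstract and the data are simulated assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n = 47
X = pd.DataFrame({
    "msws12":    rng.normal(40, 20, n),    # MS Walking Scale-12 score
    "six_mw":    rng.normal(400, 120, n),  # six-minute walk distance (m)
    "steps_day": rng.normal(4500, 2000, n),
})
group = (X["msws12"] + rng.normal(0, 10, n) > 45).astype(int)  # 1 = increased fall risk (simulated)

lda = LinearDiscriminantAnalysis()
lda.fit(X, group)
print("discriminant function coefficients:", dict(zip(X.columns, lda.coef_[0].round(3))))
print("classification accuracy on the same data:", round(lda.score(X, group), 2))
```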
Can subsyndromal manifestations of major depression be identified in children at risk?
Uchida, M; Fitzgerald, M; Lin, K; Carrellas, N; Woodworth, H; Biederman, J
2017-02-01
Children of parents with major depression are at significantly increased risk for developing major depression themselves; however, not all children at genetic risk will develop major depressive disorder (MDD). We investigated the utility of subsyndromal scores on the Child Behavior Checklist (CBCL) Anxiety/Depression scale in identifying children at the highest risk for pediatric MDD from among the pool of children of parents with MDD or bipolar disorder. The sample was derived from two previously conducted longitudinal case-control family studies of psychiatrically and pediatrically referred youth and their families. For this study, probands were stratified based on the presence or absence of a parental mood disorder. Subsyndromal scores on the CBCL Anxiety/Depression scale significantly separated the children at high risk for pediatric MDD from those at low risk in a variety of functional areas, including social and academic functioning. Additionally, children at genetic risk without elevated CBCL Anxiety/Depression scale scores were largely indistinguishable from controls. These results suggest that the CBCL Anxiety/Depression scale can help identify children at highest risk for pediatric MDD. If implemented clinically, this scale would cost-effectively screen children and identify those most in need of early intervention resources to impede the progression of depression. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Pitanga, Francisco José Gondim; Matos, Sheila M A; Almeida, Maria da Conceição; Barreto, Sandhi Maria; Aquino, Estela M L
2018-01-01
Despite reports in the literature that both leisure-time physical activity (LTPA) and commuting physical activity (CPA) can promote health benefits, the literature lacks studies comparing the associations of these domains of physical activity with cardiovascular risk scores. To investigate the association of LTPA and CPA with different cardiovascular risk scores in the cohort of the Longitudinal Study of Adult Health (ELSA-Brasil). Cross-sectional study with data from 13,721 participants of both genders, aged 35-74 years, free of cardiovascular disease, from ELSA-Brasil. Physical activity was measured using the International Physical Activity Questionnaire (IPAQ). Five cardiovascular risk scores were used: Framingham score - coronary heart disease (cholesterol); Framingham score - coronary heart disease (LDL-C); Framingham score - cardiovascular disease (cholesterol); Framingham score - cardiovascular disease (body mass index, BMI); and the pooled cohort equations for atherosclerotic cardiovascular disease (ASCVD). Associations between physical activity and the different cardiovascular risk scores, adjusted for confounding variables, were analyzed by logistic regression with 95% confidence intervals (95%CI). LTPA was inversely associated with almost all of the cardiovascular risk scores analyzed, whereas CPA showed no statistically significant association with any of them. A dose-response effect in the association between LTPA and the cardiovascular risk scores was also found, especially in men. LTPA was associated with the cardiovascular risk scores analyzed, but CPA was not. The amount of physical activity (duration and intensity) was more strongly associated with the cardiovascular risk scores, especially in men, in ELSA-Brasil.
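A hedged example of the analytic step described (logistic regression of a binary "high cardiovascular risk" outcome on physical activity, adjusted for confounders, reported as odds ratios with 95% confidence intervals). The variable names and data are invented for illustration and are not ELSA-Brasil fields.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5000
df = pd.DataFrame({
    "ltpa_active": rng.integers(0, 2, n),   # meets leisure-time activity recommendation (IPAQ)
    "cpa_active":  rng.integers(0, 2, n),   # meets commuting activity recommendation
    "age":         rng.normal(52, 9, n),
    "male":        rng.integers(0, 2, n),
})
logit_p = -2 + 0.05 * (df["age"] - 52) + 0.3 * df["male"] - 0.4 * df["ltpa_active"]
df["high_risk"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # simulated high-risk indicator

model = smf.logit("high_risk ~ ltpa_active + cpa_active + age + male", data=df).fit(disp=0)
print(np.exp(model.params).round(2))       # odds ratios; OR < 1 for ltpa_active = inverse association
print(np.exp(model.conf_int()).round(2))   # 95% confidence intervals on the OR scale
```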
Tammaro, Leonardo; Buda, Andrea; Di Paolo, Maria Carla; Zullo, Angelo; Hassan, Cesare; Riccio, Elisabetta; Vassallo, Roberto; Caserta, Luigi; Anderloni, Andrea; Natali, Alessandro
2014-09-01
Pre-endoscopic triage of patients who require an early upper endoscopy can improve the management of patients with non-variceal upper gastrointestinal bleeding. To validate a new simplified clinical score (T-score) to assess the need for early upper endoscopy in non-variceal bleeding patients. Secondary outcomes were re-bleeding rate and 30-day bleeding-related mortality. In this prospective, multicentre study, patients with bleeding who underwent upper endoscopy were enrolled. The accuracy of the T-score for high risk endoscopic stigmata was compared with that of the Glasgow Blatchford risk score. Overall, 602 patients underwent early upper endoscopy, and 472 presented with non-variceal bleeding. High risk endoscopic stigmata were detected in 145 (30.7%) cases. T-score sensitivity and specificity for high risk endoscopic stigmata and for bleeding-related mortality were 96% and 30%, and 80% and 71%, respectively. No statistically significant difference in predicting high risk endoscopic stigmata between the T-score and the Glasgow Blatchford risk score was observed (ROC curve: 0.72 vs. 0.69, p=0.11). The two scores were also similar in predicting re-bleeding (ROC curve: 0.64 vs. 0.63, p=0.4) and 30-day bleeding-related mortality (ROC curve: 0.78 vs. 0.76, p=0.3). The T-score appeared to predict high risk endoscopic stigmata, re-bleeding and mortality with accuracy similar to that of the Glasgow Blatchford risk score. Such a score may be helpful for identifying high-risk patients who need very early therapeutic endoscopy. Copyright © 2014 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
Application of multi-criteria decision-making to risk prioritisation in tidal energy developments
NASA Astrophysics Data System (ADS)
Kolios, Athanasios; Read, George; Ioannou, Anastasia
2016-01-01
This paper presents an analytical multi-criteria approach for the prioritisation of risks in the development of tidal energy projects. After a basic identification of risks throughout the project and of relevant stakeholders in the UK, classified through a political, economic, social, technological, legal and environmental (PESTLE) analysis, relevant questionnaires provided scores for each risk and corresponding weights for each of the different sectors. Employing an extended technique for order of preference by similarity to ideal solution (TOPSIS) as well as the weighted sum method on the data obtained, the identified risks are ranked by criticality, drawing the industry's attention to mitigating those that score highest. Both methods were modified to take averages at different stages of the analysis in order to observe the effects on the final risk ranking. A sensitivity analysis of the results was also carried out with regard to the weighting factors assigned to the perceived expertise of participants, with different results obtained depending on whether a linear, squared or square-root regression is used. Results of the study show that academics and industry hold conflicting opinions regarding which risks are most critical.
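A compact TOPSIS sketch for ranking risks by criticality; the decision matrix (rows = risks, columns = criteria), the weights, and the treatment of all criteria as "higher score = more critical" are made-up placeholders, and the paper's extensions (stage-wise averaging, weighted sum method, expertise weighting) are not reproduced.

```python
import numpy as np

scores = np.array([            # questionnaire scores per risk (rows) and criterion (columns)
    [7, 5, 6, 4, 3, 5],        # risk A
    [4, 8, 5, 6, 7, 4],        # risk B
    [6, 6, 7, 7, 5, 6],        # risk C
], dtype=float)
weights = np.array([0.25, 0.20, 0.20, 0.15, 0.10, 0.10])   # sector weights (sum to 1)

norm = scores / np.sqrt((scores ** 2).sum(axis=0))   # vector normalisation per criterion
v = norm * weights                                   # weighted normalised decision matrix
ideal, anti_ideal = v.max(axis=0), v.min(axis=0)     # "most critical" and "least critical" reference points
d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((v - anti_ideal) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)             # higher = closer to the most critical point
print("criticality ranking (most critical first):", np.argsort(-closeness), closeness.round(3))
```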
Hsieh, Cheng-Yang; Lee, Cheng-Han; Wu, Darren Philbert; Sung, Sheng-Feng
2018-05-01
Early detection of atrial fibrillation after stroke is important for secondary prevention in stroke patients without known atrial fibrillation (AF). We aimed to compare the performance of the CHADS2, CHA2DS2-VASc and HATCH scores in predicting AF detected after stroke (AFDAS) and to test whether adding stroke severity to the risk scores improves predictive performance. Adult patients with a first ischemic stroke event but without a prior history of AF were retrieved from a nationwide population-based database. We compared C-statistics of the CHADS2, CHA2DS2-VASc and HATCH scores for predicting the occurrence of AFDAS during stroke admission (cohort I) and during follow-up after hospital discharge (cohort II). The added value of stroke severity to the prediction models was evaluated using C-statistics, net reclassification improvement, and integrated discrimination improvement. Cohort I comprised 13,878 patients and cohort II comprised 12,567 patients. Among them, 806 (5.8%) and 657 (5.2%) were diagnosed with AF, respectively. The CHADS2 score had the lowest C-statistics (0.558 in cohort I and 0.597 in cohort II), whereas the CHA2DS2-VASc score had C-statistics (0.603 and 0.644) comparable to those of the HATCH score (0.612 and 0.653) in predicting AFDAS. Adding stroke severity to each of the three risk scores significantly increased model performance. In stroke patients without known AF, all three risk scores predicted AFDAS during admission and during follow-up, but with suboptimal discrimination. Adding stroke severity improved their predictive abilities. These risk scores, when combined with stroke severity, may help prioritize patients for continuous cardiac monitoring in daily practice. Copyright © 2018 Elsevier B.V. All rights reserved.
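The added value of stroke severity can be quantified roughly as sketched below: C-statistics before and after adding a severity term, plus a simple two-category net reclassification improvement. The simulated data, the 10% risk threshold, and the use of logistic regression are assumptions for illustration, not the study's models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 12000
chads2 = rng.integers(0, 7, n)                    # baseline clinical score (assumed)
nihss = rng.integers(0, 30, n)                    # stroke severity (assumed)
p_true = 1 / (1 + np.exp(-(-4 + 0.3 * chads2 + 0.08 * nihss)))
af = rng.binomial(1, p_true)                      # AF detected after stroke (simulated)

base = LogisticRegression().fit(chads2.reshape(-1, 1), af)
full = LogisticRegression().fit(np.c_[chads2, nihss], af)
p_base = base.predict_proba(chads2.reshape(-1, 1))[:, 1]
p_full = full.predict_proba(np.c_[chads2, nihss])[:, 1]
print("C-statistic:", round(roc_auc_score(af, p_base), 3), "->", round(roc_auc_score(af, p_full), 3))

def category_nri(p_old, p_new, y, threshold=0.10):
    """Two-category NRI: upward/downward reclassification among events and nonevents."""
    old_hi, new_hi = p_old >= threshold, p_new >= threshold
    up, down = new_hi & ~old_hi, old_hi & ~new_hi
    nri_events = up[y == 1].mean() - down[y == 1].mean()
    nri_nonevents = down[y == 0].mean() - up[y == 0].mean()
    return nri_events + nri_nonevents

print("two-category NRI:", round(category_nri(p_base, p_full, af), 3))
```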
van Giessen, A; Moons, K G M; de Wit, G A; Verschuren, W M M; Boer, J M A; Koffijberg, H
2015-01-01
The value of new biomarkers or imaging tests, when added to a prediction model, is currently evaluated using reclassification measures, such as the net reclassification improvement (NRI). However, these measures only provide an estimate of improved reclassification at the population level. We present a straightforward approach to characterize subgroups of reclassified individuals in order to tailor implementation of a new prediction model to the individuals expected to benefit from it. In a large Dutch population cohort (n = 21,992) we classified individuals as low (<5%) or high (≥5%) fatal cardiovascular disease risk by the Framingham risk score (FRS) and reclassified them based on the systematic coronary risk evaluation (SCORE). Subsequently, we characterized the reclassified individuals and, in case of heterogeneity, applied cluster analysis to identify and characterize subgroups. These characterizations were used to select individuals expected to benefit from implementation of SCORE. Reclassification after applying SCORE in all individuals resulted in an NRI of 5.00% (95% CI [-0.53%; 11.50%]) within the events, 0.06% (95% CI [-0.08%; 0.22%]) within the nonevents, and a total NRI of 0.051 (95% CI [-0.004; 0.116]). Among the correctly downward reclassified individuals, cluster analysis identified three subgroups. Using the characterizations of the typically correctly reclassified individuals, implementing SCORE only in individuals expected to benefit (n = 2,707; 12.3%) improved the NRI to 5.32% (95% CI [-0.13%; 12.06%]) within the events, 0.24% (95% CI [0.10%; 0.36%]) within the nonevents, and a total NRI of 0.055 (95% CI [0.001; 0.123]). Overall, the risk levels for individuals reclassified by tailored implementation of SCORE were more accurate. In our empirical example, the presented approach successfully characterized subgroups of reclassified individuals that could be used to improve reclassification and reduce implementation burden. In particular, when newly added biomarkers or imaging tests are costly or burdensome, such a tailored implementation strategy may save resources and improve (cost-)effectiveness.
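A minimal sketch of the subgroup-characterisation step: cluster the reclassified individuals on their characteristics and profile each cluster. The characteristics, the use of k-means, and the number of clusters are assumptions for illustration, not the study's specification.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
reclassified = pd.DataFrame({            # e.g. individuals correctly moved from high to low risk
    "age":         rng.normal(55, 8, 300),
    "sbp":         rng.normal(135, 15, 300),   # systolic blood pressure
    "cholesterol": rng.normal(5.5, 1.0, 300),
    "smoker":      rng.integers(0, 2, 300),
})

X = StandardScaler().fit_transform(reclassified)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
reclassified["cluster"] = km.labels_
print(reclassified.groupby("cluster").mean().round(2))   # characteristic profile of each subgroup
```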
Park, Song-Yi; Boushey, Carol J; Wilkens, Lynne R; Haiman, Christopher A; Le Marchand, Loïc
2017-08-01
Healthy eating patterns assessed by diet quality indexes (DQIs) have been related to lower risk of colorectal cancer, mostly among whites. We investigated the associations between 4 DQI scores (the Healthy Eating Index 2010 [HEI-2010], the Alternative Healthy Eating Index 2010 [AHEI-2010], the alternate Mediterranean diet score [aMED], and the Dietary Approaches to Stop Hypertension score) and colorectal cancer risk in the Multiethnic Cohort. We analyzed data from 190,949 African American, Native Hawaiian, Japanese American, Latino, and white individuals, 45 to 75 years old, who entered the Multiethnic Cohort study from 1993 through 1996. During an average 16 years of follow-up, 4770 invasive colorectal cancer cases were identified. Scores from all 4 DQIs were inversely associated with colorectal cancer risk; higher scores were associated with decreasing colorectal cancer risk (all P for trend ≤ .003). Associations were not significant for the AHEI-2010 and aMED scores in women after adjustment for covariates: for the highest vs lowest quintiles, the hazard ratio for the HEI-2010 score was 0.69 in men (95% confidence interval [CI], 0.59-0.80) and 0.82 in women (95% CI, 0.70-0.96); for the AHEI-2010 score, 0.75 in men (95% CI, 0.65-0.85) and 0.90 in women (95% CI, 0.78-1.04); for the aMED score, 0.84 in men (95% CI, 0.73-0.97) and 0.96 in women (95% CI, 0.82-1.13); and for the Dietary Approaches to Stop Hypertension score, 0.75 in men (95% CI, 0.66-0.86) and 0.86 in women (95% CI, 0.75-1.00). Associations were limited to the left colon and rectum for all indexes. The inverse associations were less strong in African American individuals than in the other 4 racial/ethnic groups. Based on an analysis of data from the Multiethnic Cohort Study, high-quality diets are associated with a lower risk of colorectal cancer in most racial/ethnic subgroups. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
Kim, Yeon-Jin; Jang, Hye Min; Lee, Youngjo; Lee, Donghwan; Kim, Dai-Jin
2018-04-25
The associations of Internet addiction (IA) and smartphone addiction (SA) with mental health problems have been widely studied. We investigated the effects of IA and SA on depression and anxiety while adjusting for sociodemographic variables. In this study, 4854 participants completed a cross-sectional web-based survey including socio-demographic items, the Korean Scale for Internet Addiction, the Smartphone Addiction Proneness Scale, and the subscales of the Symptom Checklist 90 Items-Revised. The participants were classified into IA, SA, and normal use (NU) groups. To reduce sampling bias, we applied the propensity score matching method based on genetic matching. The IA group showed an increased risk of depression (relative risk 1.207; p < 0.001) and anxiety (relative risk 1.264; p < 0.001) compared to the NU group. The SA group also showed an increased risk of depression (relative risk 1.337; p < 0.001) and anxiety (relative risk 1.402; p < 0.001) compared to the NU group. These findings show that both IA and SA exerted significant effects on depression and anxiety. Moreover, SA had a stronger relationship with depression and anxiety than IA, emphasizing the need for prevention and management policies for excessive smartphone use.
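A simplified sketch of propensity score matching. The study used genetic matching, which requires dedicated software and is not reproduced here; plain nearest-neighbour matching (with replacement) on the logit of the propensity score is shown instead, and all variables and data are simulated assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n = 4854
df = pd.DataFrame({
    "age":    rng.integers(18, 40, n),
    "female": rng.integers(0, 2, n),
    "sa":     rng.binomial(1, 0.2, n),     # smartphone addiction group indicator (simulated)
})
covars = df[["age", "female"]].to_numpy()
ps = LogisticRegression().fit(covars, df["sa"]).predict_proba(covars)[:, 1]
df["logit_ps"] = np.log(ps / (1 - ps))

treated, control = df[df["sa"] == 1], df[df["sa"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["logit_ps"]])
_, idx = nn.kneighbors(treated[["logit_ps"]])            # nearest control for each treated participant
matched_control = control.iloc[idx[:, 0]]                # matching with replacement
print(len(treated), "treated matched to", len(matched_control), "controls")
# Outcomes (e.g. depression/anxiety scores) would then be compared between the matched groups.
```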
Bagheri, Nasser; Gilmour, Bridget; McRae, Ian; Konings, Paul; Dawda, Paresh; Del Fante, Peter; van Weel, Chris
2015-02-26
Cardiovascular disease (CVD) continues to be a leading cause of illness and death among adults worldwide. The objective of this study was to calculate a CVD risk score from general practice (GP) clinical records and assess spatial variations in CVD risk across communities. We used GP clinical data for 4,740 men and women aged 30 to 74 years with no history of CVD. A 10-year absolute CVD risk score was calculated based on the Framingham risk equation. The individual risk scores were aggregated within each Statistical Area Level One (SA1) to estimate the level of CVD risk in that area. Finally, the pattern of CVD risk was visualized to highlight communities with high and low risk of CVD. The overall 10-year risk of CVD in our sample population was 14.6% (95% confidence interval [CI], 14.3%-14.9%). Of the 4,740 patients in our study, 26.7% were at high risk, 29.8% were at moderate risk, and 43.5% were at low risk for CVD over 10 years. The proportion of patients at high risk for CVD was significantly higher in communities of low socioeconomic status. This study illustrates methods to further explore the prevalence, location, and correlates of CVD, to identify communities with high levels of unmet need for cardiovascular care, and to enable geographic targeting of effective interventions for enhancing early and timely detection and management of CVD in those communities.
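The aggregation step (individual 10-year risks averaged within each SA1 and categorised for mapping) can be sketched as below; the area identifiers, the simulated risks, and the category cut-offs are illustrative assumptions rather than the study's data or thresholds.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
patients = pd.DataFrame({
    "sa1":       rng.integers(0, 50, 4740),                          # statistical area identifier (assumed)
    "risk_10yr": np.clip(rng.normal(0.146, 0.10, 4740), 0, 1),       # simulated Framingham 10-year risk
})

area = patients.groupby("sa1")["risk_10yr"].agg(["mean", "count"])   # area-level average risk and n
area["category"] = pd.cut(area["mean"], [0, 0.10, 0.20, 1.0],
                          labels=["low", "moderate", "high"])
print(area.sort_values("mean", ascending=False).head())
# The area-level table could then be joined to SA1 boundary geometries for mapping.
```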