Baig, Sheharyar S; Strong, Mark; Rosser, Elisabeth; Taverner, Nicola V; Glew, Ruth; Miedzybrodzka, Zosia; Clarke, Angus; Craufurd, David; Quarrell, Oliver W
2016-01-01
Huntington's disease (HD) is a progressive neurodegenerative condition. At-risk individuals have accessed predictive testing via direct mutation testing since 1993. The UK Huntington's Prediction Consortium has collected anonymised data on UK predictive tests, annually, from 1993 to 2014: 9407 predictive tests were performed across 23 UK centres. Where gender was recorded, 4077 participants were male (44.3%) and 5122 were female (55.7%). The median age of participants was 37 years. The most common reason for predictive testing was to reduce uncertainty (70.5%). Of the 8441 predictive tests on individuals at 50% prior risk, 4629 (54.8%) were reported as mutation negative and 3790 (44.9%) were mutation positive, with 22 (0.3%) in the database being uninterpretable. Using a prevalence figure of 12.3 × 10−5, the cumulative uptake of predictive testing in the 50% at-risk UK population from 1994 to 2014 was estimated at 17.4% (95% CI: 16.9–18.0%). We present the largest study conducted on predictive testing in HD. Our findings indicate that the vast majority of individuals at risk of HD (>80%) have not undergone predictive testing. Future therapies in HD will likely target presymptomatic individuals; therefore, identifying the at-risk population whose gene status is unknown is of significant public health value. PMID:27165004
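As a rough illustration of the cumulative-uptake arithmetic reported above, here is a minimal Python sketch. The size of the UK 50%-at-risk pool is a hypothetical placeholder (chosen so the figure lands near the reported 17.4%), not a number taken from the paper, and the naive binomial interval will not reproduce the published 16.9-18.0% interval, which evidently reflects more than simple binomial sampling error.

```python
from math import sqrt

def binomial_ci(k, n, z=1.96):
    """Simple Wald 95% CI for a proportion k/n."""
    p = k / n
    half = z * sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# 8441 predictive tests were taken by individuals at 50% prior risk (1994-2014).
tests_50pct = 8441
# HYPOTHETICAL size of the UK 50%-at-risk pool; roughly what an uptake of 17.4%
# would imply, but not a figure taken from the paper itself.
at_risk_pool = 48_500

uptake, lo, hi = binomial_ci(tests_50pct, at_risk_pool)
print(f"cumulative uptake ~ {uptake:.1%} (naive binomial CI {lo:.1%}-{hi:.1%})")
```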
The Theory of Planned Behavior as a Predictor of HIV Testing Intention.
Ayodele, Olabode
2017-03-01
This investigation tests the theory of planned behavior (TPB) as a predictor of HIV testing intention among Nigerian university undergraduate students. A cross-sectional study of 392 students was conducted using a self-administered structured questionnaire that measured socio-demographics, perceived risk of human immunodeficiency virus (HIV) infection, and TPB constructs. Analysis was based on 273 students who had never been tested for HIV. Hierarchical multiple regression analysis assessed the applicability of the TPB in predicting HIV testing intention and the additional predictive value of perceived risk of HIV infection. The prediction model containing the TPB constructs explained 35% of the variance in HIV testing intention, with attitude and perceived behavioral control making significant and unique contributions to intention. Perceived risk of HIV infection contributed marginally (2%) but significantly to the final prediction model. The findings supported the TPB in predicting HIV testing intention. Although future studies must determine the generalizability of these results, the findings highlight the importance of perceived behavioral control, attitude, and perceived risk of HIV infection in predicting HIV testing intention among students who have not previously been tested for HIV.
Romanens, Michel; Ackermann, Franz; Spence, John David; Darioli, Roger; Rodondi, Nicolas; Corti, Roberto; Noll, Georg; Schwenkglenks, Matthias; Pencina, Michael
2010-02-01
Cardiovascular risk assessment might be improved by adding emerging tests derived from atherosclerosis imaging, laboratory assays or functional testing. This article reviews relative risk, odds ratios, receiver operating characteristic curves, post-test risk calculations based on likelihood ratios, net reclassification improvement, and integrated discrimination improvement. These tools serve to determine whether a new test has added clinical value on top of conventional risk testing and how this can be verified statistically. Two clinically meaningful examples illustrate the novel approaches. This work serves as a review and as groundwork for the development of future guidelines on cardiovascular risk prediction that take emerging tests into account, to be proposed by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology.
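To make the post-test risk calculation based on likelihood ratios concrete, here is a minimal Python sketch; the pre-test risk and likelihood ratios are illustrative values, not figures from the article.

```python
def post_test_risk(pre_test_risk, likelihood_ratio):
    """Convert a pre-test risk to a post-test risk via the likelihood ratio:
    odds = p / (1 - p); post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_risk / (1 - pre_test_risk)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative (made-up) numbers: a patient with 10% 10-year risk, a positive
# imaging test with LR+ = 3.0, or a negative test with LR- = 0.4.
print(post_test_risk(0.10, 3.0))   # ~0.25
print(post_test_risk(0.10, 0.4))   # ~0.04
```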
Testing the Predictive Validity of the Hendrich II Fall Risk Model.
Jung, Hyesil; Park, Hyeoun-Ae
2018-03-01
Cumulative data on patient fall risk have been compiled in electronic medical records systems, making it possible to test the validity of fall-risk assessment tools using data recorded between admission and the occurrence of a fall. Hendrich II Fall Risk Model scores assessed at three time points during the hospital stay were extracted and used to test predictive validity: (a) at admission, (b) the maximum fall-risk score recorded between admission and the fall or discharge, and (c) immediately before the fall or discharge. Predictive validity was examined using seven predictive indicators. In addition, logistic regression analysis was used to identify factors that significantly affect the occurrence of a fall. Among the three time points, the maximum fall-risk score assessed between admission and the fall or discharge showed the best predictive performance. Confusion or disorientation and a poor ability to rise from a sitting position were significant risk factors for a fall.
Blood test could predict risk of heart attack and subsequent death.
2017-01-18
A high-sensitivity blood test, known as a troponin test, could predict the risk of heart attack and death and patients' response to statins, say researchers from the Universities of Edinburgh and Glasgow.
A prediction model for colon cancer surveillance data.
Good, Norm M; Suresh, Krithika; Young, Graeme P; Lockett, Trevor J; Macrae, Finlay A; Taylor, Jeremy M G
2015-08-15
Dynamic prediction models make use of patient-specific longitudinal data to update individualized survival probability predictions based on current and past information. Colonoscopy (COL) and fecal occult blood test (FOBT) results were collected from two Australian surveillance studies on individuals characterized as high risk based on a personal or family history of colorectal cancer. Motivated by a Poisson process, this paper proposes a generalized nonlinear model with a complementary log-log link as a dynamic prediction tool that produces individualized probabilities for the risk of developing advanced adenoma or colorectal cancer (AAC). This model allows predicted risk to depend on a patient's baseline characteristics and time-dependent covariates. Information on the dates and results of COLs and FOBTs was incorporated using time-dependent covariates that contributed to a patient's risk of AAC for a specified period following the test result. These covariates serve to update a person's risk as additional COL and FOBT information becomes available. Model selection was conducted systematically through comparison of Akaike information criterion (AIC) values. Goodness-of-fit was assessed using calibration plots to compare the predicted probability of event occurrence with the proportion of events observed. Abnormal COL results were found to significantly increase the risk of AAC for 1 year following the test. Positive FOBTs were found to significantly increase the risk of AAC for 3 months following the result. The covariates that incorporated the updated test results were of greater significance and had a larger effect on risk than the baseline variables. Copyright © 2015 John Wiley & Sons, Ltd.
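The complementary log-log structure described above can be illustrated with a short sketch. The coefficients, covariates and risk windows below are hypothetical placeholders, not the fitted values from this study; the sketch only shows how time-dependent indicators for a recent abnormal colonoscopy (1-year window) or a recent positive FOBT (3-month window) update the predicted per-interval risk.

```python
import math

def aac_risk(x_baseline, months_since_abnormal_col, months_since_pos_fobt, beta):
    """Per-interval risk of advanced adenoma / colorectal cancer (AAC) under a
    complementary log-log model: P(event) = 1 - exp(-exp(eta)).
    Time-dependent indicators switch on for a fixed window after each test result."""
    recent_col = 1.0 if (months_since_abnormal_col is not None
                         and months_since_abnormal_col <= 12) else 0.0   # 1-year window
    recent_fobt = 1.0 if (months_since_pos_fobt is not None
                          and months_since_pos_fobt <= 3) else 0.0       # 3-month window
    eta = (beta["intercept"]
           + beta["age"] * x_baseline["age"]
           + beta["family_history"] * x_baseline["family_history"]
           + beta["recent_abnormal_col"] * recent_col
           + beta["recent_pos_fobt"] * recent_fobt)
    return 1.0 - math.exp(-math.exp(eta))

# Hypothetical coefficients, for illustration only; not the paper's estimates.
beta = {"intercept": -6.0, "age": 0.03, "family_history": 0.5,
        "recent_abnormal_col": 1.2, "recent_pos_fobt": 1.5}
patient = {"age": 62, "family_history": 1}
print(aac_risk(patient, months_since_abnormal_col=6, months_since_pos_fobt=None, beta=beta))
print(aac_risk(patient, months_since_abnormal_col=24, months_since_pos_fobt=None, beta=beta))
```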
Crundall, David; Kroll, Victoria
2018-05-18
Can hazard perception testing be useful for the emergency services? Previous research has found emergency response drivers (ERDs) to perform better than controls; however, these studies used clips of normal driving. In contrast, the current study filmed footage from a fire appliance on blue-light training runs through Nottinghamshire and endeavoured to discriminate between different groups of ERDs based on experience and collision risk. Thirty clips were selected to create two variants of the hazard perception test: a traditional push-button test requiring speeded responses to hazards, and a prediction test that occludes at hazard onset and provides four possible outcomes for participants to choose between. Three groups of fire-appliance drivers (novices, low-risk experienced and high-risk experienced), and age-matched controls, undertook both tests. The hazard perception test only discriminated between controls and all fire-appliance drivers, whereas the hazard prediction test was more sensitive, discriminating between high- and low-risk experienced fire-appliance drivers. Eye movement analyses suggest that the low-risk drivers were better at prioritising the hazardous precursors, leading to better predictive accuracy. These results pave the way for future assessment and training tools to supplement emergency response driver training, while supporting the growing literature that identifies hazard prediction as a more robust measure of driver safety than traditional hazard perception tests. Copyright © 2018 Elsevier Ltd. All rights reserved.
Gamma Interferon Release Assays for Detection of Mycobacterium tuberculosis Infection
Denkinger, Claudia M.; Kik, Sandra V.; Rangaka, Molebogeng X.; Zwerling, Alice; Oxlade, Olivia; Metcalfe, John Z.; Cattamanchi, Adithya; Dowdy, David W.; Dheda, Keertan; Banaei, Niaz
2014-01-01
Identification and treatment of latent tuberculosis infection (LTBI) can substantially reduce the risk of developing active disease. However, there is no diagnostic gold standard for LTBI. Two tests are available for identification of LTBI: the tuberculin skin test (TST) and the gamma interferon (IFN-γ) release assay (IGRA). Evidence suggests that both TST and IGRA are acceptable but imperfect tests. They represent indirect markers of Mycobacterium tuberculosis exposure and indicate a cellular immune response to M. tuberculosis. Neither test can accurately differentiate between LTBI and active TB, distinguish reactivation from reinfection, or resolve the various stages within the spectrum of M. tuberculosis infection. Both TST and IGRA have reduced sensitivity in immunocompromised patients and have low predictive value for progression to active TB. To maximize the positive predictive value of existing tests, LTBI screening should be reserved for those who are at sufficiently high risk of progressing to disease. Such high-risk individuals may be identifiable by using multivariable risk prediction models that incorporate test results with risk factors and using serial testing to resolve underlying phenotypes. In the longer term, basic research is necessary to identify highly predictive biomarkers. PMID:24396134
Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter
2017-01-01
The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models. PMID:29062288
New directions in diagnostic evaluation of insect allergy.
Golden, David B K
2014-08-01
Diagnosis of insect sting allergy and prediction of risk of sting anaphylaxis are often difficult because tests for venom-specific IgE antibodies have a limited positive predictive value and do not reliably predict the severity of sting reactions. Component-resolved diagnosis using recombinant venom allergens has shown promise in improving the specificity of diagnostic testing for insect sting allergy. Basophil activation tests have been explored as more sensitive assays for identification of patients with insect allergy and for prediction of clinical outcomes. Measurement of mast cell mediators reflects the underlying risk for more severe reactions and limited clinical response to treatment. Measurement of IgE to recombinant venom allergens can distinguish cross-sensitization from dual sensitization to honeybee and vespid venoms, thus helping to limit venom immunotherapy to a single venom instead of multiple venoms in many patients. Basophil activation tests can detect venom allergy in patients who show no detectable venom-specific IgE in standard diagnostic tests and can predict increased risk of systemic reactions to venom immunotherapy, and to stings during and after stopping venom immunotherapy. The risk of severe or fatal anaphylaxis to stings can also be predicted by measurement of baseline serum tryptase or other mast cell mediators.
Fulks, Michael; Stout, Robert L; Dolan, Vera F
2012-01-01
To evaluate the degree of medium- to longer-term mortality prediction possible from a scoring system covering all laboratory testing used for life insurance applicants, as well as blood pressure and build measurements. Using the results of testing for life insurance applicants who reported a Social Security number, in conjunction with the Social Security Death Master File, the mortality associated with each test result was defined by age and sex. The individual mortality scores for each test were then combined into a composite mortality risk score for each applicant. This score was tested against the insurance applicant dataset to evaluate its ability to discriminate risk across age and sex. The composite risk score was highly predictive of all-cause mortality risk in a linear manner from the best to the worst quintile of scores, in a nearly identical fashion for each sex and decade of age. Laboratory studies, blood pressure and build from life insurance applicants can be used to create scoring that predicts all-cause mortality across age and sex. Such an approach may hold promise for preventive health screening as well.
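The composite-score idea can be sketched as follows; the per-test relative-mortality inputs and the log-sum aggregation are assumptions for illustration, not the authors' scoring algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-test relative-mortality scores (1.0 = average risk for the age/sex band),
# standing in for lab values, blood pressure and build mapped through mortality tables.
per_test_scores = rng.lognormal(mean=0.0, sigma=0.3, size=(n, 6))

# Composite score: sum of log relative risks (the log of their product).
composite = np.log(per_test_scores).sum(axis=1)

# Rank applicants into quintiles of the composite score.
quintile = np.digitize(composite, np.quantile(composite, [0.2, 0.4, 0.6, 0.8]))
for q in range(5):
    print(f"quintile {q + 1}: mean composite {composite[quintile == q].mean():+.2f}")
```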
Ethical Issues of Predictive Genetic Testing for Diabetes
Haga, Susanne B.
2009-01-01
With the rising number of individuals affected with diabetes and the significant health care costs of treatment, an emphasis on prevention is key to controlling the health burden of this disease. Several genetic and genomic studies have identified genetic variants associated with increased risk of diabetes. As a result, commercial testing is available to predict an individual's genetic risk. Although the clinical benefits of testing have not yet been demonstrated, it is worth considering some of the ethical implications of testing for this common chronic disease. In this article, I discuss several issues that should be considered during the translation of predictive testing for diabetes, including familial implications, improvement of risk communication, implications for behavioral change and health outcomes, the Genetic Information Nondiscrimination Act, direct-to-consumer testing, and the appropriate age of testing. PMID:20144329
BRCA1/2 Test Results Impact Risk Management Attitudes, Intentions and Uptake
O’Neill, Suzanne C.; Valdimarsdottir, Heiddis B.; DeMarco, Tiffani A.; Peshkin, Beth N.; Graves, Kristi D.; Brown, Karen; Hurley, Karen E.; Isaacs, Claudine; Hecker, Sharon; Schwartz, Marc D.
2011-01-01
BACKGROUND Women who receive positive or uninformative BRCA1/2 test results face a number of decisions about how to manage their cancer risk. The purpose of this study was to prospectively examine the effect of receiving a positive vs. uninformative BRCA1/2 genetic test result on the perceived pros and cons of risk-reducing mastectomy (RRM) and risk-reducing oophorectomy (RRO) and breast cancer screening. We further examined how perceived pros and cons of surgery predict intention for and uptake of surgery. METHODS 308 women (146 positive, 162 uninformative) were included in RRM and breast cancer screening analyses. 276 women were included in RRO analyses. Participants completed questionnaires at pre-disclosure baseline and 1-, 6-, and 12-months post-disclosure. We used linear multiple regression to assess whether test result contributed to change in pros and cons, and logistic regression to predict intentions and surgery uptake. RESULTS Receipt of a positive BRCA1/2 test result predicted stronger pros for RRM and RRO (Ps < .001), but not perceived cons of RRM and RRO. Pros of surgery predicted RRM and RRO intentions in carriers and RRO intentions in uninformatives. Cons predicted RRM intentions in carriers. Pros and cons predicted carriers' RRO uptake in the year after testing (Ps < .001). CONCLUSIONS Receipt of BRCA1/2 mutation test results impacts how carriers see the positive aspects of RRO and RRM and their surgical intentions. Both the positive and negative aspects predict uptake of surgery. PMID:20383578
Improving coeliac disease risk prediction by testing non-HLA variants additional to HLA variants.
Romanos, Jihane; Rosén, Anna; Kumar, Vinod; Trynka, Gosia; Franke, Lude; Szperl, Agata; Gutierrez-Achury, Javier; van Diemen, Cleo C; Kanninga, Roan; Jankipersadsing, Soesma A; Steck, Andrea; Eisenbarth, Georges; van Heel, David A; Cukrowska, Bozena; Bruno, Valentina; Mazzilli, Maria Cristina; Núñez, Concepcion; Bilbao, Jose Ramon; Mearin, M Luisa; Barisani, Donatella; Rewers, Marian; Norris, Jill M; Ivarsson, Anneli; Boezen, H Marieke; Liu, Edwin; Wijmenga, Cisca
2014-03-01
The majority of coeliac disease (CD) patients are not being properly diagnosed and therefore remain untreated, leading to a greater risk of developing CD-associated complications. The major genetic risk heterodimer, HLA-DQ2 and DQ8, is already used clinically to help exclude disease. However, approximately 40% of the population carry these alleles and the majority never develop CD. We explored whether CD risk prediction can be improved by adding non-HLA susceptibility variants to common HLA testing. We developed an average weighted genetic risk score with 10, 26 and 57 single nucleotide polymorphisms (SNPs) in 2675 cases and 2815 controls and assessed the improvement in risk prediction provided by the non-HLA SNPs. Moreover, we assessed the transferability of the genetic risk model with 26 non-HLA variants to a nested case-control population (n=1709) and a prospective cohort (n=1245) and then tested how well this model predicted CD outcome for 985 independent individuals. Adding 57 non-HLA variants to HLA testing showed a statistically significant improvement compared to scores from models based on HLA only, HLA plus 10 SNPs and HLA plus 26 SNPs. With 57 non-HLA variants, the area under the receiver operating characteristic curve reached 0.854 compared to 0.823 for HLA only, and 11.1% of individuals were reclassified to a more accurate risk group. We show that the risk model with HLA plus 26 SNPs is useful in independent populations. Predicting risk with 57 additional non-HLA variants improved the identification of potential CD patients. This demonstrates a possible role for combined HLA and non-HLA genetic testing in the diagnostic work-up for CD.
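A weighted genetic risk score of the kind described can be sketched in a few lines; the SNP weights, dosages and case status below are simulated, and the simple weighted sum stands in for the paper's average weighted score, so the AUC values are illustrative only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_people, n_snps = 2000, 26
weights = rng.normal(0.0, 0.2, n_snps)                 # hypothetical per-SNP log odds ratios
dosages = rng.integers(0, 3, size=(n_people, n_snps))  # 0/1/2 risk-allele counts
hla_score = rng.normal(0.0, 1.0, n_people)             # stand-in for the HLA-DQ2/DQ8 component

# Genetic risk score: HLA component plus the weighted sum of non-HLA risk alleles.
grs_hla_only = hla_score
grs_combined = hla_score + dosages @ weights

# Simulated case status, for illustration only.
p = 1 / (1 + np.exp(-(grs_combined - 1.0)))
cases = rng.random(n_people) < p

print("AUC, HLA only:     ", round(roc_auc_score(cases, grs_hla_only), 3))
print("AUC, HLA + 26 SNPs:", round(roc_auc_score(cases, grs_combined), 3))
```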
Koeda, Yorihiko; Tanaka, Fumitaka; Segawa, Toshie; Ohta, Mutsuko; Ohsawa, Masaki; Tanno, Kozo; Makita, Shinji; Ishibashi, Yasuhiro; Itai, Kazuyoshi; Omama, Shin-Ichi; Onoda, Toshiyuki; Sakata, Kiyomi; Ogasawara, Kuniaki; Okayama, Akira; Nakamura, Motoyuki
2016-05-12
This study compared the combination of estimated glomerular filtration rate (eGFR) and urine albumin-to-creatinine ratio (UACR) vs. eGFR and urine protein reagent strip testing to determine chronic kidney disease (CKD) prevalence, and each method's ability to predict the risk for cardiovascular events in the general Japanese population. Baseline data including eGFR, UACR, and urine dipstick tests were obtained from the general population (n = 22 975). Dipstick test results (negative, trace, positive) were allocated to three levels of UACR (<30, 30-300, and >300 mg/g Cr), respectively. In accordance with Kidney Disease Improving Global Outcomes CKD prognosis heat mapping, the cohort was classified into four risk grades (green: grade 1; yellow: grade 2; orange: grade 3; red: grade 4) based on baseline eGFR and UACR levels or dipstick tests. During the mean follow-up period of 5.6 years, 708 new-onset cardiovascular events were recorded. For CKD identified by eGFR and dipstick testing (dipstick test ≥ trace and eGFR <60 mL/min/1.73 m²), the prevalence of CKD was found to be 9 % in the general population. In comparison to non-CKD (grade 1), cardiovascular risk was significantly higher in risk grades ≥3 (relative risk (RR) = 1.70; 95 % CI: 1.28-2.26), but risk predictive ability was not significant in risk grade 2 (RR = 1.20; 95 % CI: 0.95-1.52). When CKD was defined by eGFR and UACR (UACR ≥30 mg/g Cr and eGFR <60 mL/min/1.73 m²), prevalence was found to be 29 %. Predictive ability in risk grade 2 (RR = 1.41; 95 % CI: 1.19-1.66) and risk grade ≥3 (RR = 1.76; 95 % CI: 1.37-2.28) was in both cases significantly greater than for non-CKD. Reclassification analysis showed a significant improvement in risk predictive ability when CKD risk grading was based on UACR rather than on dipstick testing in this population (p < 0.001). Although the prevalence of CKD was higher when detected by UACR rather than urine dipstick testing, the predictive ability for cardiovascular events of UACR-based risk grading was superior to that of dipstick-based risk grading in the general population.
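The relative risks quoted above can be reproduced in form (though not in value, since the study's estimates are adjusted and based on its own counts) with a crude 2 × 2 calculation; the event counts below are hypothetical.

```python
from math import exp, log, sqrt

def relative_risk(events_exposed, n_exposed, events_ref, n_ref, z=1.96):
    """Relative risk of an event in an exposed group vs a reference group,
    with a 95% CI computed on the log scale."""
    r1, r0 = events_exposed / n_exposed, events_ref / n_ref
    rr = r1 / r0
    se = sqrt(1 / events_exposed - 1 / n_exposed + 1 / events_ref - 1 / n_ref)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Hypothetical counts for a CKD risk grade vs the non-CKD reference group.
print(relative_risk(events_exposed=120, n_exposed=2000, events_ref=400, n_ref=18000))
```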
ERIC Educational Resources Information Center
Pouls, Claudia; Jeandarme, Inge
2018-01-01
Background: One of the most extensively tested risk assessment instruments in offenders with an intellectual disability (OIDs) is the Violence Risk Appraisal Guide (VRAG). The purpose of this prospective study was to test the ability of this instrument to predict institutional aggression in OIDs. Method: VRAG scores were collected for 52 OIDs, and…
The development and testing of a skin tear risk assessment tool.
Newall, Nelly; Lewin, Gill F; Bulsara, Max K; Carville, Keryln J; Leslie, Gavin D; Roberts, Pam A
2017-02-01
The aim of the present study was to develop a reliable and valid skin tear risk assessment tool. The six characteristics identified in a previous case-control study as constituting the best risk model for skin tear development were used to construct a risk assessment tool. The ability of the tool to predict skin tear development was then tested in a prospective study. Between August 2012 and September 2013, 1466 tertiary hospital patients were assessed at admission and followed up for 10 days to see if they developed a skin tear. The predictive validity of the tool was assessed using receiver operating characteristic (ROC) analysis. When the tool was found not to have performed as well as hoped, secondary analyses were performed to determine whether a potentially better performing risk model could be identified. The tool was found to have high sensitivity but low specificity, and therefore inadequate predictive validity. Secondary analysis of the combined data from this and the previous case-control study identified an alternative, better performing risk model. The tool developed and tested in this study was found to have inadequate predictive validity. The predictive validity of an alternative, more parsimonious model now needs to be tested. © 2015 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
Ingram, Emily R; Robertson, Iain K; Ogden, Kathryn J; Dennis, Amanda E; Campbell, Joanne E; Corbould, Anne M
2017-06-01
Gestational diabetes mellitus (GDM) is associated with a life-long increased risk of type 2 diabetes: affected women are advised to undergo oral glucose tolerance testing (OGTT) at 6-12 weeks postpartum, then glucose screening every 1-3 years. We investigated whether, in women with GDM, antenatal clinical factors predicted postpartum abnormal glucose tolerance and compliance with screening. In women with GDM delivering from 2007 to mid-2009 in a single hospital, antenatal/obstetric data and glucose tests at 6-12 weeks postpartum and during 5.5 years post-pregnancy were retrospectively collected. Predictors of return for testing and abnormal glucose tolerance were identified using multivariate analysis. Of 165 women, 117 (70.9%) returned for the 6-12 week postpartum OGTT: 23 (19.6%) had abnormal results. Smoking and parity, independent of socioeconomic status, were associated with non-return for testing. Fasting glucose ≥5.4 mmol/L on the pregnancy OGTT predicted both non-return for testing and an abnormal OGTT. During 5.5 years post-pregnancy, 148 (89.7%) women accessed glucose screening: nine (6.1%) developed diabetes and 33 (22.3%) had impaired fasting glucose/impaired glucose tolerance. Predictors of abnormal glucose tolerance were fasting glucose ≥5.4 mmol/L and 2-h glucose ≥9.3 mmol/L on the pregnancy OGTT (~2.5-fold increased risk), and polycystic ovary syndrome (~3.4-fold increased risk). Risk score calculation, based on combined antenatal factors, did not improve predictions. Antenatal clinical factors were modestly predictive of return for testing and abnormal glucose tolerance post-pregnancy in women with GDM. Risk score calculations were ineffective in predicting outcomes: risk scores developed in other populations require validation. Ongoing glucose screening is indicated for all women with GDM. © 2016 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
Hirase, Tatsuya; Inokuchi, Shigeru; Matsusaka, Nobuou; Nakahara, Kazumi; Okita, Minoru
2014-01-01
Developing a practical fall risk assessment tool to predict the occurrence of falls in the primary care setting is important because investigators have reported deterioration of physical function associated with falls. Researchers have used many performance tests to predict the occurrence of falls. These performance tests predict falls and also assess physical function and determine exercise interventions. However, the need for such specialists as physical therapists to accurately conduct these tests limits their use in the primary care setting. Questionnaires for fall prediction offer an easy way to identify high-risk fallers without requiring specialists. Using an existing fall assessment questionnaire, this study aimed to identify items specific to physical function and determine whether those items were able to predict falls and estimate physical function of high-risk fallers. The analysis consisted of both retrospective and prospective studies and used 2 different samples (retrospective, n = 1871; prospective, n = 292). The retrospective study and 3-month prospective study comprised community-dwelling individuals aged 65 years or older and older adults using community day centers. The number of falls, risk factors for falls (15 risk factors on the questionnaire), and physical function determined by chair standing test (CST) and Timed Up and Go Test (TUGT) were assessed. The retrospective study selected fall risk factors related to physical function. The prospective study investigated whether the number of selected risk factors could predict falls. The predictive power was determined using the area under the receiver operating characteristic curve. Seven of the 15 risk factors were related to physical function. The area under the receiver operating characteristic curve for the sum of the selected risk factors of previous falls plus the other risk factors was 0.82 (P = .00). The best cutoff point was 4 risk factors, with sensitivity and specificity of 84% and 68%, respectively. The mean values for the CST and TUGT at the best cutoff point were 12.9 and 12.5 seconds, respectively. In the retrospective study, the values for the CST and TUGT corresponding to the best cutoff point from the prospective study were 13.2 and 11.4 seconds, respectively. This study confirms that a screening tool comprising 7 fall risk factors can be used to predict falls. The values for the CST and TUGT corresponding to the best cutoff point for the selected 7 risk factors determined in our prospective study were similar to the cutoff points for the CST and TUGT in previous studies for fall prediction. We propose that the sum of the selected risk factors of previous falls plus the other risk factors may be identified as the estimated value for physical function. These findings may contribute to earlier identification of high-risk fallers and intervention for fall prevention.
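Choosing a best cutoff from a ROC curve, as done for the 7-item score here, is commonly based on Youden's index; the sketch below uses synthetic item counts and assumes that criterion, which the abstract does not state explicitly.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
# Synthetic data: number of positive fall-risk items (0-7) for fallers and non-fallers.
fallers = rng.binomial(7, 0.65, 80)
non_fallers = rng.binomial(7, 0.40, 400)
y = np.r_[np.ones_like(fallers), np.zeros_like(non_fallers)]
score = np.r_[fallers, non_fallers]

fpr, tpr, thresholds = roc_curve(y, score)
auc = roc_auc_score(y, score)
best = np.argmax(tpr - fpr)            # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.2f}")
print(f"best cutoff >= {thresholds[best]:.0f} items: "
      f"sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%}")
```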
Can we improve clinical prediction of at-risk older drivers?
Bowers, Alex R.; Anastasio, R. Julius; Sheldon, Sarah S.; O’Connor, Margaret G.; Hollis, Ann M.; Howe, Piers D.; Horowitz, Todd S.
2013-01-01
Objectives To conduct a pilot study to evaluate the predictive value of the Montreal Cognitive Assessment (MoCA) and a brief test of multiple object tracking (MOT), relative to other tests of cognition and attention, in identifying at-risk older drivers, and to determine which combination of tests provided the best overall prediction. Methods Forty-seven currently licensed drivers (58 to 95 years), primarily from a clinical driving evaluation program, participated. Their performance was measured on: (1) a screening test battery comprising the MoCA, MOT, Mini-Mental State Examination (MMSE), Trail-Making Test, visual acuity, contrast sensitivity, and Useful Field of View (UFOV); and (2) a standardized road test. Results Eighteen participants were rated at-risk on the road test. UFOV subtest 2 was the best single predictor, with an area under the curve (AUC) of .84. Neither MoCA nor MOT was a better predictor of the at-risk outcome than MMSE or UFOV, respectively. The best four-test combination (MMSE, UFOV subtest 2, visual acuity and contrast sensitivity) was able to identify at-risk drivers with 95% specificity and 80% sensitivity (.91 AUC). Conclusions Although the best four-test combination was much better than any single test in identifying at-risk drivers, there is still much work to do in this field to establish test batteries that have both high sensitivity and specificity. PMID:23954688
Prediction of Lateral Ankle Sprains in Football Players Based on Clinical Tests and Body Mass Index.
Gribble, Phillip A; Terada, Masafumi; Beard, Megan Q; Kosik, Kyle B; Lepley, Adam S; McCann, Ryan S; Pietrosimone, Brian G; Thomas, Abbey C
2016-02-01
The lateral ankle sprain (LAS) is the most common injury suffered in sports, especially in football. While suggested in some studies, a predictive role of clinical tests for LAS has not been established. To determine which clinical tests, focused on potentially modifiable factors of movement patterns and body mass index (BMI), could best demonstrate risk of LAS among high school and collegiate football players. Case-control study; Level of evidence, 3. A total of 539 high school and collegiate football players were evaluated during the preseason with the Star Excursion Balance Test (SEBT) and Functional Movement Screen as well as BMI. Results were compared between players who did and did not suffer an LAS during the season. Logistic regression analyses and calculated odds ratios were used to determine which measures predicted risk of LAS. The LAS group performed worse on the SEBT-anterior reaching direction (SEBT-ANT) and had higher BMI as compared with the noninjured group (P < .001). The strongest prediction models corresponded with the SEBT-ANT. Low performance on the SEBT-ANT predicted a risk of LAS in football players. BMI was also significantly higher in football players who sustained an LAS. Identifying clinical tools for successful LAS injury risk prediction will be a critical step toward the creation of effective prevention programs to reduce risk of sustaining an LAS during participation in football. © 2015 The Author(s).
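Odds ratios from a logistic model of injury on preseason measures, as used in this study, can be estimated as sketched below; the data are simulated and the coefficients are invented, so the output does not correspond to the paper's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 539
# Synthetic stand-ins for the preseason measures: SEBT anterior reach (% leg length) and BMI.
df = pd.DataFrame({
    "sebt_ant": rng.normal(65, 6, n),
    "bmi": rng.normal(27, 4, n),
})
logit_p = -3.0 - 0.08 * (df["sebt_ant"] - 65) + 0.10 * (df["bmi"] - 27)
df["las"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["sebt_ant", "bmi"]])
fit = sm.Logit(df["las"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)       # OR per one-unit change in each predictor
or_ci = np.exp(fit.conf_int())
or_ci.columns = ["2.5%", "97.5%"]
print(pd.concat([odds_ratios.rename("OR"), or_ci], axis=1))
```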
Liu, Chen; Liu, Teli; Zhang, Ning; Liu, Yiqiang; Li, Nan; Du, Peng; Yang, Yong; Liu, Ming; Gong, Kan; Yang, Xing; Zhu, Hua; Yan, Kun; Yang, Zhi
2018-05-02
The purpose of this study was to investigate the performance of 68Ga-PSMA-617 PET/CT in predicting risk stratification and metastatic risk of prostate cancer. Fifty newly diagnosed patients with prostate cancer confirmed by needle biopsy were consecutively included, 40 in a training set and ten in a test set. 68Ga-PSMA-617 PET/CT and clinical data of all patients were retrospectively analyzed. Semi-quantitative analysis of PET images provided the maximum standardized uptake value (SUVmax) of the primary prostate cancer and volumetric parameters including intraprostatic PSMA-derived tumor volume (iPSMA-TV) and intraprostatic total lesion PSMA (iTL-PSMA). According to the prostate cancer risk stratification criteria of the NCCN Guideline, all patients were classified into a low-to-intermediate-risk group or a high-risk group. The semi-quantitative parameters of 68Ga-PSMA-617 PET/CT were used to establish univariate logistic regression models for high-risk prostate cancer and its metastatic risk, and the diagnostic efficacy of each predictive model was evaluated. In the training set, 30/40 (75%) patients had high-risk prostate cancer and 10/40 (25%) patients had low-to-intermediate-risk prostate cancer; in the test set, 8/10 (80%) patients had high-risk prostate cancer while 2/10 (20%) had low-to-intermediate-risk prostate cancer. The univariate logistic regression models established with SUVmax, iPSMA-TV and iTL-PSMA could all effectively predict high-risk prostate cancer; the areas under the ROC curve (AUC) were 0.843, 0.802 and 0.900, respectively. Based on the test set, the sensitivity and specificity of each model were 87.5% and 50% for SUVmax, 62.5% and 100% for iPSMA-TV, and 87.5% and 100% for iTL-PSMA, respectively. The iPSMA-TV- and iTL-PSMA-based predictive models could predict the metastatic risk of prostate cancer, with AUCs of 0.863 and 0.848, respectively, but the SUVmax-based prediction model could not predict metastatic risk. Semi-quantitative analysis indexes of 68Ga-PSMA-617 PET/CT imaging can be used as "imaging biomarkers" to predict risk stratification and metastatic risk of prostate cancer.
Yamaki, Takashi; Nozaki, Motohiro; Sakurai, Hiroyuki; Takeuchi, Masaki; Soejima, Kazutaka; Kono, Taro
2005-11-01
Clinical signs and symptoms such as swelling, pain, and redness are unreliable markers of deep vein thrombosis (DVT). Because of this, venous duplex scanning (VDS) has been widely used for DVT detection. The purpose of this study was to determine whether a combination of D-dimer testing and a pretest clinical score could reduce the use of VDS in symptomatic patients with suspected DVT. One hundred seventy-four consecutive patients with suspected DVT were prospectively evaluated using a pretest clinical probability (PCP) score and D-dimer testing before VDS. After calculating clinical probability scores developed by Wells and associates, patients were divided into low risk (
Factors Motivating Individuals to Consider Genetic Testing for Type 2 Diabetes Risk Prediction
Wessel, Jennifer; Gupta, Jyoti; de Groot, Mary
2016-01-01
The purpose of this study was to identify attitudes and perceptions of willingness to participate in genetic testing for type 2 diabetes (T2D) risk prediction in the general population. Adults (n = 598) were surveyed on attitudes about utilizing genetic testing to predict future risk of T2D. Participants were recruited from public libraries (53%), online registry (37%) and a safety net hospital emergency department (10%). Respondents were 37±11 years old, primarily White (54%), female (69%), college educated (46%), with an annual income ≥$25,000 (56%). Half of participants were interested in genetic testing for T2D (52%) and 81% agreed/strongly agreed genetic testing should be available to the public. Only 57% of individuals knew T2D is preventable. A multivariate model to predict interest in genetic testing was adjusted for age, gender, recruitment location and BMI; significant predictors were motivation (high perceived personal risk of T2D [OR = 4.38 (1.76, 10.9)]; family history [OR = 2.56 (1.46, 4.48)]; desire to know risk prior to disease onset [OR = 3.25 (1.94, 5.42)]; and knowing T2D is preventable [OR = 2.11 (1.24, 3.60)], intention (if the cost is free [OR = 10.2 (4.27, 24.6)]; and learning T2D is preventable [OR = 5.18 (1.95, 13.7)]) and trust of genetic testing results [OR = 0.03 (0.003, 0.30)]. Individuals are interested in genetic testing for T2D risk which offers unique information that is personalized. Financial accessibility, validity of the test and availability of diabetes prevention programs were identified as predictors of interest in T2D testing. PMID:26789839
Orucevic, Amila; Bell, John L; McNabb, Alison P; Heidel, Robert E
2017-05-01
The Oncotype DX (ODX) recurrence score (RS) breast cancer (BC) assay is costly and performed in only ~1/3 of estrogen receptor (ER)-positive BC patients in the USA. We have now developed a user-friendly nomogram surrogate prediction model for ODX, based on a large dataset from the National Cancer Data Base (NCDB), to assist in selecting patients for whom further ODX testing may not be necessary and as a surrogate for patients for whom ODX testing is not affordable or available. Six clinicopathologic variables from 27,719 ODX-tested ER+/HER2-/lymph node-negative patients with 6-50 mm tumor size captured by the NCDB from 2010 to 2012 were assessed with logistic regression to predict high-risk or low-risk ODX RS test results with TAILORx-trial and commercial cut-off values; 12,763 ODX-tested patients in 2013 were used for external validation. The predictive accuracy of the regression model was assessed using receiver operating characteristic (ROC) analysis. Model fit was analyzed by plotting the predicted probabilities against the actual probabilities. A user-friendly calculator version of the nomograms is available online at the University of Tennessee Medical Center website (Knoxville, TN). Grade and progesterone receptor status were the strongest predictors of both low-risk and high-risk ODX RS, followed by age, tumor size, histologic tumor type and lymph-vascular invasion (C-indexes 0.85 vs. 0.88 for TAILORx-trial vs. commercial cut-off values, respectively). This is the first study of this scale showing confidently that clinicopathologic variables can be used to predict low-risk or high-risk ODX RS using our nomogram models. These novel nomograms will be useful tools to help physicians and patients decide whether further ODX testing is necessary and are excellent surrogates for patients for whom ODX testing is not affordable or available.
Kene, Mamata V; Ballard, Dustin W; Vinson, David R; Rauchwerger, Adina S; Iskin, Hilary R; Kim, Anthony S
2015-09-01
We evaluated emergency physicians' (EP) current perceptions, practice, and attitudes towards evaluating stroke as a cause of dizziness among emergency department patients. We administered a survey to all EPs in a large integrated healthcare delivery system. The survey included clinical vignettes, perceived utility of historical and exam elements, attitudes about the value of and requisite post-test probability of a clinical prediction rule for dizziness. We calculated descriptive statistics and post-test probabilities for such a clinical prediction rule. The response rate was 68% (366/535). Respondents' median practice tenure was eight years (37% female, 92% emergency medicine board certified). Symptom quality and typical vascular risk factors increased suspicion for stroke as a cause of dizziness. Most respondents reported obtaining head computed tomography (CT) (74%). Nearly all respondents used and felt confident using cranial nerve and limb strength testing. A substantial minority of EPs used the Epley maneuver (49%) and HINTS (head-thrust test, gaze-evoked nystagmus, and skew deviation) testing (30%); however, few EPs reported confidence in these tests' bedside application (35% and 16%, respectively). Respondents favorably viewed applying a properly validated clinical prediction rule for assessment of immediate and 30-day stroke risk, but indicated it would have to reduce stroke risk to <0.5% to be clinically useful. EPs report relying on symptom quality, vascular risk factors, simple physical exam elements, and head CT to diagnose stroke as the cause of dizziness, but would find a validated clinical prediction rule for dizziness helpful. A clinical prediction rule would have to achieve a 0.5% post-test stroke probability for acceptability.
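The respondents' 0.5% post-test threshold implies how strong a negative rule result would have to be at various pre-test probabilities; the short sketch below works this out with the odds form of Bayes' theorem, using illustrative pre-test values rather than figures from the survey.

```python
def required_lr_negative(pre_test_risk, target_post_test_risk=0.005):
    """Likelihood ratio a negative rule result must achieve so that the
    post-test stroke probability falls below the target (odds form of Bayes)."""
    pre_odds = pre_test_risk / (1 - pre_test_risk)
    target_odds = target_post_test_risk / (1 - target_post_test_risk)
    return target_odds / pre_odds

# Illustrative pre-test probabilities of stroke among ED patients with dizziness.
for pre in (0.02, 0.05, 0.10):
    print(f"pre-test {pre:.0%}: rule's LR- must be <= {required_lr_negative(pre):.2f}")
```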
Kharroubi, Akram; Saba, Elias; Ghannam, Ibrahim; Darwish, Hisham
2017-12-01
Simple self-assessment tools are needed to identify women at high risk of developing osteoporosis. In this study, tools such as the IOF One Minute Test, the Fracture Risk Assessment Tool (FRAX), and the Simple Calculated Osteoporosis Risk Estimation (SCORE) were found to be valid for Palestinian women, and the threshold for predicting women at risk was estimated for each tool. The purpose of this study is to evaluate the validity of the updated IOF (International Osteoporosis Foundation) One Minute Osteoporosis Risk Assessment Test, FRAX, SCORE, as well as age alone, to detect the risk of developing osteoporosis in postmenopausal Palestinian women. Three hundred eighty-two women 45 years and older were recruited, including 131 women with osteoporosis and 251 controls following bone mineral density (BMD) measurement; 287 completed questionnaires for the different risk assessment tools. Receiver operating characteristic (ROC) curves were evaluated for each tool using BMD as the gold standard for osteoporosis. The area under the ROC curve (AUC) was highest for FRAX calculated with BMD for predicting hip fractures (0.897), followed by FRAX for major fractures (0.826), with cut-off values >1.5 and >7.8%, respectively. The IOF One Minute Test AUC (0.629) was the lowest of the tested tools but had sufficient accuracy for predicting the risk of developing osteoporosis, with a cut-off value of >4 total yes answers out of 18 questions. The SCORE test and age alone were also good predictors of the risk of developing osteoporosis. According to the ROC curve for age, women ≥64 years had a higher risk of developing osteoporosis. A higher percentage of women with low BMD (T-score ≤-1.5) or osteoporosis (T-score ≤-2.5) was found among women who were not exposed to the sun, who had menopause before the age of 45 years, or who had a lower body mass index (BMI) compared to controls. Women who often fall had lower BMI, and approximately 27% of the recruited postmenopausal Palestinian women had had accidents that caused fractures. Simple self-assessment tools like FRAX without BMD, SCORE, and the IOF One Minute Test were valid for predicting which postmenopausal Palestinian women are at high risk of developing osteoporosis.
Wijdenes-Pijl, Miranda; Dondorp, Wybo J; Timmermans, Danielle Rm; Cornel, Martina C; Henneman, Lidewij
2011-07-05
This study assessed lay perceptions of issues related to predictive genetic testing for multifactorial diseases. These perceived issues may differ from the "classic" issues, e.g. autonomy, discrimination, and psychological harm that are considered important in predictive testing for monogenic disorders. In this study, type 2 diabetes was used as an example, and perceptions with regard to predictive testing based on DNA test results and family history assessment were compared. Eight focus group interviews were held with 45 individuals aged 35-70 years with (n = 3) and without (n = 1) a family history of diabetes, mixed groups of these two (n = 2), and diabetes patients (n = 2). All interviews were transcribed and analysed using Atlas-ti. Most participants believed in the ability of a predictive test to identify people at risk for diabetes and to motivate preventive behaviour. Different reasons underlying motivation were considered when comparing DNA test results and a family history risk assessment. A perceived drawback of DNA testing was that diabetes was considered not severe enough for this type of risk assessment. In addition, diabetes family history assessment was not considered useful by some participants, since there are also other risk factors involved, not everyone has a diabetes family history or knows their family history, and it might have a negative influence on family relations. Respect for autonomy of individuals was emphasized more with regard to DNA testing than family history assessment. Other issues such as psychological harm, discrimination, and privacy were only briefly mentioned for both tests. The results suggest that most participants believe a predictive genetic test could be used in the prevention of multifactorial disorders, such as diabetes, but indicate points to consider before both these tests are applied. These considerations differ with regard to the method of assessment (DNA test or obtaining family history) and also differ from monogenic disorders.
78 FR 43838 - Airworthiness Directives; Hamilton Sundstrand Corporation Propellers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-22
... qualitative risk assessment. The data gathered were then used for a more representative quantitative risk analysis. The results from the bond strength tests predict a significantly lower fleet risk than the prior... predict a significantly lower fleet risk than the prior qualitative analysis. Accordingly, we withdraw the...
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Wang, Yunzhi; Yan, Shiju; Tan, Maxine; Cheng, Samuel; Liu, Hong; Zheng, Bin
2016-03-01
In order to establish a new personalized breast cancer screening paradigm, it is critically important to accurately predict the short-term risk of a woman having image-detectable cancer after a negative mammographic screening. In this study, we developed and tested a novel short-term risk assessment model based on a deep learning method. For the experiment, a set of 270 "prior" negative screening cases was assembled. In the next sequential ("current") screening mammography, 135 cases were positive and 135 cases remained negative. These cases were randomly divided into a training set of 200 cases and a testing set of 70 cases. A deep learning based computer-aided diagnosis (CAD) scheme was then developed for the risk assessment, consisting of two modules: an adaptive feature identification module and a risk prediction module. The adaptive feature identification module is composed of three pairs of convolution and max-pooling layers, which contain 20, 10, and 5 feature maps, respectively. The risk prediction module is implemented by a multilayer perceptron (MLP) classifier, which produces a risk score predicting the likelihood of the woman developing short-term mammography-detectable cancer. The results show that the new CAD-based risk model yielded a positive predictive value of 69.2% and a negative predictive value of 74.2%, with a total prediction accuracy of 71.4%. This study demonstrates that applying a new deep learning technology may have significant potential for developing a new short-term risk prediction scheme with improved performance in detecting early abnormalities on negative mammograms.
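A minimal PyTorch sketch of the two-module architecture described above is given below. The input patch size, kernel sizes and MLP width are assumptions; only the 20/10/5 feature-map counts and the convolution + max-pooling/MLP split come from the abstract.

```python
import torch
import torch.nn as nn

class ShortTermRiskCNN(nn.Module):
    """Sketch of the two-module scheme described in the abstract: three
    convolution + max-pooling pairs (20, 10, 5 feature maps) for adaptive
    feature identification, followed by an MLP that outputs a risk score.
    Input size (1 x 64 x 64) and layer details are assumptions, not the
    authors' exact architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(20, 10, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(10, 5, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.risk_mlp = nn.Sequential(
            nn.Flatten(), nn.Linear(5 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.risk_mlp(self.features(x)))

model = ShortTermRiskCNN()
dummy_batch = torch.randn(4, 1, 64, 64)   # four synthetic 64x64 mammographic patches
print(model(dummy_batch).shape)           # torch.Size([4, 1]): one risk score per case
```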
Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.
Lee, Wen-Chung; Wu, Yun-Chun
2016-01-01
The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
Incorporating Truncating Variants in PALB2, CHEK2 and ATM into the BOADICEA Breast Cancer Risk Model
Lee, Andrew J.; Cunningham, Alex P.; Tischkowitz, Marc; Simard, Jacques; Pharoah, Paul D.; Easton, Douglas F.; Antoniou, Antonis C.
2016-01-01
Purpose The proliferation of gene-panel testing precipitates the need for a breast cancer (BC) risk model that incorporates the effects of mutations in several genes and family history (FH). We extended the BOADICEA model to incorporate the effects of truncating variants in PALB2, CHEK2 and ATM. Methods The BC incidence was modelled via the explicit effects of truncating variants in BRCA1/2, PALB2, CHEK2 and ATM and other unobserved genetic effects using segregation analysis methods. Results The predicted average BC risk by age 80 for an ATM mutation carrier is 28%, 30% for CHEK2, 50% for PALB2, 74% for BRCA1 and BRCA2. However, the BC risks are predicted to increase with FH-burden. In families with mutations, predicted risks for mutation-negative members depend on both FH and the specific mutation. The reduction in BC risk after negative predictive testing is greatest when a BRCA1 mutation is identified in the family, but for women whose relatives carry a CHEK2 or ATM mutation, the risks decrease slightly. Conclusions The model may be a valuable tool for counselling women who have undergone gene-panel testing for providing consistent risks and harmonizing their clinical management. A web-application can be used to obtain BC risks in clinical practice (http://ccge.medschl.cam.ac.uk/boadicea/). PMID:27464310
Lee, Andrew J; Cunningham, Alex P; Tischkowitz, Marc; Simard, Jacques; Pharoah, Paul D; Easton, Douglas F; Antoniou, Antonis C
2016-12-01
The proliferation of gene panel testing precipitates the need for a breast cancer (BC) risk model that incorporates the effects of mutations in several genes and family history (FH). We extended the BOADICEA model to incorporate the effects of truncating variants in PALB2, CHEK2, and ATM. The BC incidence was modeled via the explicit effects of truncating variants in BRCA1/2, PALB2, CHEK2, and ATM and other unobserved genetic effects using segregation analysis methods. The predicted average BC risk by age 80 for an ATM mutation carrier is 28%, 30% for CHEK2, 50% for PALB2, and 74% for BRCA1 and BRCA2. However, the BC risks are predicted to increase with FH burden. In families with mutations, predicted risks for mutation-negative members depend on both FH and the specific mutation. The reduction in BC risk after negative predictive testing is greatest when a BRCA1 mutation is identified in the family, but for women whose relatives carry a CHEK2 or ATM mutation, the risks decrease slightly. The model may be a valuable tool for counseling women who have undergone gene panel testing for providing consistent risks and harmonizing their clinical management. A Web application can be used to obtain BC risks in clinical practice (http://ccge.medschl.cam.ac.uk/boadicea/). Genet Med 2016;18(12):1190-1198.
Revolutionizing Toxicity Testing For Predicting Developmental Outcomes (DNT4)
Characterizing risk from environmental chemical exposure currently requires extensive animal testing; however, alternative approaches are being researched to increase throughput of chemicals screened, decrease reliance on animal testing, and improve accuracy in predicting adverse...
Varan, Hacer Dogan; Bolayir, Basak; Kara, Ozgur; Arik, Gunes; Kizilarslanoglu, Muhammet Cemal; Kilic, Mustafa Kemal; Sumer, Fatih; Kuyumcu, Mehmet Emin; Yesil, Yusuf; Yavuz, Burcu Balam; Halil, Meltem; Cankurtaran, Mustafa
2016-12-01
Phase angle (PhA) value determined by bioelectrical impedance analysis (BIA) is an indicator of cell membrane damage and body cell mass. Recent studies have shown that a low PhA value is associated with increased nutritional risk in various groups of patients. However, only a few studies worldwide have assessed the relationship between nutritional risk and PhA in hospitalized geriatric patients. The aim of this study was to evaluate the predictive value of the PhA for malnutrition risk in hospitalized geriatric patients. One hundred and twenty-two hospitalized geriatric patients were included in this cross-sectional study. Comprehensive geriatric assessment tests and BIA measurements were performed within the first 48 h after admission. Nutritional risk status of the patients was determined with the NRS-2002. Phase angle values of the patients with malnutrition risk were compared with those of the patients without malnutrition risk. The independent variables for predicting malnutrition risk were determined. SPSS version 15 was used for the statistical analyses. The patients with malnutrition risk had significantly lower phase angle values than the patients without malnutrition risk (p = 0.003). ROC curve analysis suggested that the optimum PhA cut-off point for malnutrition risk was 4.7° with 79.6 % sensitivity, 64.6 % specificity, 73.9 % positive predictive value, and 73.9 % negative predictive value. BMI, prealbumin, PhA, and Mini Mental State Examination Test scores were the independent variables for predicting malnutrition risk. PhA can be a useful, independent indicator for predicting malnutrition risk in hospitalized geriatric patients.
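For readers unfamiliar with how such a cut-off is derived, the sketch below picks the threshold that maximizes Youden's index (sensitivity + specificity − 1) on an ROC curve, a common (though not the only) criterion; the simulated phase angle values are hypothetical stand-ins, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
# Hypothetical phase angles: lower on average in patients at malnutrition risk.
phase_angle = np.concatenate([rng.normal(4.2, 0.8, 60), rng.normal(5.2, 0.9, 62)])
at_risk = np.concatenate([np.ones(60), np.zeros(62)])

# roc_curve treats higher scores as more positive, so negate the phase angle.
fpr, tpr, thresholds = roc_curve(at_risk, -phase_angle)
best = np.argmax(tpr - fpr)                # Youden's index
cutoff = -thresholds[best]
print(f"Cut-off ~ {cutoff:.1f} deg, sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```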
Yoshida, Kunihiro; Tamai, Mariko; Kubota, Takeo; Kawame, Hiroshi; Amano, Naoji; Ikeda, Shu-ichi; Fukushima, Yoshimitsu
2002-02-01
Predictive genetic testing for hereditary neuromuscular diseases is a delicate issue for individuals at risk and their families, as well as for medical staff, because these diseases are often late-onset and intractable. Therefore, careful pre- and post-test genetic counseling and psychosocial support should be provided along with such genetic testing. The Division of Clinical and Molecular Genetics was established at our hospital in May 1996 to provide skilled professional genetic counseling. Since its establishment, 14 individuals have visited our clinic to request predictive genetic testing for hereditary neuromuscular diseases (4 for myotonic dystrophy, 6 for spinocerebellar ataxia, 3 for Huntington's disease, and 1 for Alzheimer's disease). The main reasons for considering testing were to remove uncertainty about the genetic status and to plan for the future. Nine of the 14 individuals requested testing for making decisions about a forthcoming marriage or pregnancy (family planning). Other reasons raised by the individuals included career or financial planning, planning for their own health care, and knowing the risk for their children. At the first genetic counseling session, all of the individuals expressed hopes of not being a gene carrier and of escaping from fear of the disease, and seemed not to be mentally well prepared for an increased-risk result. To date, 7 of the 14 individuals have received genetic testing and only one, who underwent predictive genetic testing for spinocerebellar ataxia, was given an increased-risk result. The seven individuals, including the one with an increased-risk result, have coped well with their new knowledge about their genetic status after the testing results were disclosed. None of them has expressed regret. In pre-test genetic counseling sessions, we consider it quite important not only to determine the psychological status of the individual, but also to help the individual anticipate the changes in his/her life upon receiving an increased-risk or a decreased-risk result. Sufficient time should be taken to build a good relationship between the individual, his/her family, and the medical staff during pre-test counseling sessions. This will help the individuals feel satisfied with their own decisions for the future, whether they receive genetic testing or not.
Arning, Larissa; Witt, Constantin N; Epplen, Jörg T; Stemmler, Susanne
2015-01-01
The discovery of the mutation causing Huntington's disease (HD) in 1993 allowed direct mutation analysis and predictive testing to identify currently unaffected carriers with a sensitivity and specificity of virtually 100%. The present study was designed to comprehensively profile the participants who sought predictive testing for HD between 1993 and 2009 in our Huntington centre. Using a retrospective design, we analysed the written documentation of the counselling sessions for all referrals for predictive mutation testing in this time span. Six hundred sixty-three individuals at risk for HD requested predictive testing. Roughly half (n = 333) completed the protocol and received their test result. In general, our findings are in accordance with other reports: most participants share an a priori risk of 50% (91.1%); more females request testing (58.5%); and those who ask for the result are mostly in their 30s (mean = 35.1 years). Of those at 50% or 25% prior risk, 47.4% and 22.7%, respectively, tested positive, in accordance with the respective risk of inheriting HD. Generally, more participants with an affected mother than father sought genetic testing (52.5% versus 47.5%). Interestingly, this difference was especially evident in the group of females who finally withdrew from testing (59.1%, p = 0.040). Men, in particular those who decided in favour of the test, were more often accompanied by their partner in the pre-test counselling session than vice versa (67.9% versus 44.7%, p = 0.003). On the other hand, significantly more men who were being tested did not have a companion in the pre-test session as compared with men who decided against the test (40.0% versus 25.7%, p = 0.012). During the first four years of predictive testing (1993–1996), more participants completed the protocol and received their test result than in later years. Yet, in this early time span significantly fewer females finally decided in favour of the test (48.4%, p = 0.005). These findings are discussed longitudinally and in the context of the experience in other centres. We present new gender-specific aspects of decision-making for predictive HD tests.
Köhler, M; Ziegler, A G; Beyerlein, A
2016-06-01
Women with gestational diabetes mellitus (GDM) have an increased risk of diabetes postpartum. We developed a score to predict the long-term risk of postpartum diabetes using clinical and anamnestic variables recorded during or shortly after delivery. Data from 257 GDM women who were prospectively followed for diabetes outcome over 20 years of follow-up were used to develop and validate the risk score. Participants were divided into training and test sets. The risk score was calculated using Lasso Cox regression and divided into four risk categories, and its prediction performance was assessed in the test set. Postpartum diabetes developed in 110 women. The computed training set risk score of 5 × body mass index in early pregnancy (per kg/m²) + 132 if GDM was treated with insulin (otherwise 0) + 44 if the woman had a family history of diabetes (otherwise 0) - 35 if the woman lactated (otherwise 0) had R² values of 0.23, 0.25, and 0.33 at 5, 10, and 15 years postpartum, respectively, and a C-index of 0.75. Application of the risk score in the test set resulted in observed risk of postpartum diabetes at 5 years of 11 % for low risk scores ≤140, 29 % for scores 141-220, 64 % for scores 221-300, and 80 % for scores >300. The derived risk score is easy to calculate, allows accurate prediction of GDM-related postpartum diabetes, and may thus be a useful prediction tool for clinicians and general practitioners.
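Because the score is stated explicitly in the abstract, it can be coded directly. The sketch below encodes the published formula and the four score bands with their observed 5-year risks; the function and argument names are ours, chosen for illustration.

```python
def gdm_postpartum_risk_score(bmi_early_pregnancy, insulin_treated, family_history, lactated):
    # Score from the abstract: 5 x BMI (kg/m^2) + 132 if insulin-treated
    # + 44 if family history of diabetes - 35 if the woman lactated.
    score = 5 * bmi_early_pregnancy
    score += 132 if insulin_treated else 0
    score += 44 if family_history else 0
    score -= 35 if lactated else 0
    return score

def observed_5_year_risk(score):
    # Observed 5-year postpartum diabetes risk in the test set, by score band.
    if score <= 140:
        return "low risk band (~11%)"
    if score <= 220:
        return "middle band (~29%)"
    if score <= 300:
        return "upper band (~64%)"
    return "highest band (~80%)"

score = gdm_postpartum_risk_score(32, insulin_treated=True, family_history=False, lactated=True)
print(score, observed_5_year_risk(score))   # 257 -> upper band (~64%)
```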
Arthur, Michael W; Brown, Eric C; Briney, John S; Hawkins, J David; Abbott, Robert D; Catalano, Richard F; Becker, Linda; Langer, Michael; Mueller, Martin T
2015-08-01
School administrators and teachers face difficult decisions about how best to use school resources to meet academic achievement goals. Many are hesitant to adopt prevention curricula that are not focused directly on academic achievement. Yet, some have hypothesized that prevention curricula can remove barriers to learning and, thus, promote achievement. We examined relationships among school levels of student substance use, risk and protective factors that predict adolescent problem behaviors, and achievement test performance. Hierarchical generalized linear models were used to predict associations involving school-averaged levels of substance use and risk and protective factors and students' likelihood of meeting achievement test standards on the Washington Assessment of Student Learning, statistically controlling for demographic and economic factors known to be associated with achievement. Levels of substance use and risk/protective factors predicted the academic test score performance of students. Many of these effects remained significant even after controlling for model covariates. Implementing prevention programs that target empirically identified risk and protective factors has the potential to have a favorable effect on students' academic achievement. © 2015, American School Health Association.
Perotte, Adler; Ranganath, Rajesh; Hirsch, Jamie S; Blei, David; Elhadad, Noémie
2015-07-01
As adoption of electronic health records continues to increase, there is an opportunity to incorporate clinical documentation as well as laboratory values and demographics into risk prediction modeling. The authors develop a risk prediction model for chronic kidney disease (CKD) progression from stage III to stage IV that includes longitudinal data and features drawn from clinical documentation. The study cohort consisted of 2908 primary-care clinic patients who had at least three visits prior to January 1, 2013 and developed CKD stage III during their documented history. Development and validation cohorts were randomly selected from this cohort and the study datasets included longitudinal inpatient and outpatient data from these populations. Time series analysis (Kalman filter) and survival analysis (Cox proportional hazards) were combined to produce a range of risk models. These models were evaluated using concordance, a discriminatory statistic. A risk model incorporating longitudinal data on clinical documentation and laboratory test results (concordance 0.849) predicts progression from stage III CKD to stage IV CKD more accurately when compared to a similar model without laboratory test results (concordance 0.733, P < .001), a model that only considers the most recent laboratory test results (concordance 0.819, P < .031) and a model based on estimated glomerular filtration rate (concordance 0.779, P < .001). A risk prediction model that takes longitudinal laboratory test results and clinical documentation into consideration can predict CKD progression from stage III to stage IV more accurately than three models that do not take all of these variables into consideration. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
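A minimal sketch of the survival-analysis half of such a pipeline is shown below, fitting a Cox proportional hazards model with lifelines and reporting its concordance; the features are hypothetical stand-ins for smoothed laboratory trends and documentation-derived variables, the data are simulated, and the Kalman-filter step is omitted.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "egfr_slope": rng.normal(-2.0, 1.0, n),       # hypothetical smoothed lab trend
    "last_egfr": rng.normal(45, 8, n),            # hypothetical most recent lab value
    "note_topic_score": rng.normal(0.0, 1.0, n),  # hypothetical documentation feature
})
# Simulated months to stage IV CKD, with faster progression for lower eGFR.
rate = np.exp(-0.05 * df["last_egfr"] - 0.3 * df["egfr_slope"])
df["months_to_stage4"] = rng.exponential(60 / rate)
df["progressed"] = rng.integers(0, 2, n)          # crude censoring indicator

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_stage4", event_col="progressed")
print(f"Concordance: {cph.concordance_index_:.3f}")
```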
Kene, Mamata V.; Ballard, Dustin W.; Vinson, David R.; Rauchwerger, Adina S.; Iskin, Hilary R.; Kim, Anthony S.
2015-01-01
Introduction We evaluated emergency physicians' (EPs) current perceptions, practice, and attitudes towards evaluating stroke as a cause of dizziness among emergency department patients. Methods We administered a survey to all EPs in a large integrated healthcare delivery system. The survey included clinical vignettes, the perceived utility of historical and exam elements, and attitudes about the value of, and the requisite post-test probability for, a clinical prediction rule for dizziness. We calculated descriptive statistics and post-test probabilities for such a clinical prediction rule. Results The response rate was 68% (366/535). Respondents' median practice tenure was eight years (37% female, 92% emergency medicine board certified). Symptom quality and typical vascular risk factors increased suspicion for stroke as a cause of dizziness. Most respondents reported obtaining head computed tomography (CT) (74%). Nearly all respondents used and felt confident using cranial nerve and limb strength testing. A substantial minority of EPs used the Epley maneuver (49%) and HINTS (head-thrust test, gaze-evoked nystagmus, and skew deviation) testing (30%); however, few EPs reported confidence in these tests' bedside application (35% and 16%, respectively). Respondents favorably viewed applying a properly validated clinical prediction rule for assessment of immediate and 30-day stroke risk, but indicated it would have to reduce stroke risk to <0.5% to be clinically useful. Conclusion EPs report relying on symptom quality, vascular risk factors, simple physical exam elements, and head CT to diagnose stroke as the cause of dizziness, but would find a validated clinical prediction rule for dizziness helpful. A clinical prediction rule would have to achieve a 0.5% post-test stroke probability for acceptability. PMID:26587108
Luque, M J; Tapia, J L; Villarroel, L; Marshall, G; Musante, G; Carlo, W; Kattan, J
2014-01-01
To develop a risk prediction model for severe intraventricular hemorrhage (IVH) in very low birth weight infants (VLBWI). Prospectively collected data on infants with birth weight 500 to 1249 g born between 2001 and 2010 in centers from the Neocosur Network were used. A forward stepwise logistic regression model was employed. The model was tested in the 2011 cohort and then applied to the population of VLBWI that received prophylactic indomethacin to analyze its effect on the risk of severe IVH. Data from 6538 VLBWI were analyzed. The area under the ROC curve for the model was 0.79, and 0.76 when tested in the 2011 cohort. The prophylactic indomethacin group had a lower incidence of severe IVH, especially in the highest-risk groups. A model for early severe IVH prediction was developed and tested in our population. Prophylactic indomethacin was associated with a lower risk-adjusted incidence of severe IVH.
AMINI, Payam; AHMADINIA, Hasan; POOROLAJAL, Jalal; MOQADDASI AMIRI, Mohammad
2016-01-01
Background: We aimed to assess the high-risk group for suicide using different classification methods, including logistic regression (LR), decision tree (DT), artificial neural network (ANN), and support vector machine (SVM). Methods: We used the dataset of a study conducted to predict risk factors of completed suicide in Hamadan Province, the west of Iran, in 2010. To evaluate the high-risk groups for suicide, LR, SVM, DT and ANN were performed. The applied methods were compared using sensitivity, specificity, positive predictive value, negative predictive value, accuracy and the area under the curve. The Cochran Q test was applied to check differences in proportions among methods. To assess the association between the observed and predicted values, the phi (φ) coefficient, contingency coefficient, and Kendall tau-b were calculated. Results: Gender, age, and job were the most important risk factors for fatal suicide attempts common to all four methods. The SVM method showed the highest accuracy (0.68 and 0.67 for the training and testing samples, respectively). This method also yielded the highest specificity (0.67 for the training and 0.68 for the testing sample) and the highest sensitivity for the training sample (0.85), but the lowest sensitivity for the testing sample (0.53). The Cochran Q test indicated differences in proportions among methods (P<0.001). For the association between SVM predictions and observed values, the phi coefficient, contingency coefficient, and Kendall tau-b were 0.239, 0.232 and 0.239, respectively. Conclusion: SVM had the best performance in classifying fatal suicide attempts compared to DT, LR and ANN. PMID:27957463
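A compact sketch of this kind of head-to-head comparison is shown below, using scikit-learn and a synthetic dataset in place of the Hamadan data (the features, class balance, and tuning are invented for illustration).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, roc_auc_score

# Synthetic stand-in for the suicide-attempt dataset; real predictors such as
# gender, age, and job are not reproduced here.
X, y = make_classification(n_samples=600, n_features=8, weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(max_depth=4, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f} AUC={auc:.2f}")
```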
A Tissue Systems Pathology Assay for High-Risk Barrett's Esophagus.
Critchley-Thorne, Rebecca J; Duits, Lucas C; Prichard, Jeffrey W; Davison, Jon M; Jobe, Blair A; Campbell, Bruce B; Zhang, Yi; Repa, Kathleen A; Reese, Lia M; Li, Jinhong; Diehl, David L; Jhala, Nirag C; Ginsberg, Gregory; DeMarshall, Maureen; Foxwell, Tyler; Zaidi, Ali H; Lansing Taylor, D; Rustgi, Anil K; Bergman, Jacques J G H M; Falk, Gary W
2016-06-01
Better methods are needed to predict risk of progression for Barrett's esophagus. We aimed to determine whether a tissue systems pathology approach could predict progression in patients with nondysplastic Barrett's esophagus, indefinite for dysplasia, or low-grade dysplasia. We performed a nested case-control study to develop and validate a test that predicts progression of Barrett's esophagus to high-grade dysplasia (HGD) or esophageal adenocarcinoma (EAC), based upon quantification of epithelial and stromal variables in baseline biopsies. Data were collected from Barrett's esophagus patients at four institutions. Patients who progressed to HGD or EAC in ≥1 year (n = 79) were matched with patients who did not progress (n = 287). Biopsies were assigned randomly to training or validation sets. Immunofluorescence analyses were performed for 14 biomarkers and quantitative biomarker and morphometric features were analyzed. Prognostic features were selected in the training set and combined into classifiers. The top-performing classifier was assessed in the validation set. A 3-tier, 15-feature classifier was selected in the training set and tested in the validation set. The classifier stratified patients into low-, intermediate-, and high-risk classes [HR, 9.42; 95% confidence interval, 4.6-19.24 (high-risk vs. low-risk); P < 0.0001]. It also provided independent prognostic information that outperformed predictions based on pathology analysis, segment length, age, sex, or p53 overexpression. We developed a tissue systems pathology test that better predicts risk of progression in Barrett's esophagus than clinicopathologic variables. The test has the potential to improve upon histologic analysis as an objective method to risk stratify Barrett's esophagus patients. Cancer Epidemiol Biomarkers Prev; 25(6); 958-68. ©2016 AACR. ©2016 American Association for Cancer Research.
Predictive value of cervical length measurement and fibronectin testing in threatened preterm labor.
van Baaren, Gert-Jan; Vis, Jolande Y; Wilms, Femke F; Oudijk, Martijn A; Kwee, Anneke; Porath, Martina M; Oei, Guid; Scheepers, Hubertina C J; Spaanderman, Marc E A; Bloemenkamp, Kitty W M; Haak, Monique C; Bolte, Antoinette C; Bax, Caroline J; Cornette, Jérôme M J; Duvekot, Johannes J; Nij Bijvanck, Bas W A; van Eyck, Jim; Franssen, Maureen T M; Sollie, Krystyna M; Vandenbussche, Frank P H A; Woiski, Mallory; Grobman, William A; van der Post, Joris A M; Bossuyt, Patrick M M; Opmeer, Brent C; Mol, Ben W J
2014-06-01
To estimate the performance of combining cervical length measurement with fetal fibronectin testing in predicting delivery in women with symptoms of preterm labor. We conducted a prospective nationwide cohort study in all 10 perinatal centers in The Netherlands. Women with symptoms of preterm labor between 24 and 34 weeks of gestation with intact membranes were included. In all women, qualitative fibronectin testing (0.050-microgram/mL cutoff) and cervical length measurement were performed. Logistic regression was used to predict spontaneous preterm delivery within 7 days after testing. A risk of less than 5%, corresponding to the risk for women with a cervical length of at least 25 mm, was considered low risk. Between December 2009 and August 2012, 714 women were enrolled. Fibronectin results and cervical length were available for 665 women, of whom 80 (12%) delivered within 7 days. Women with a cervical length of at least 30 mm or with a cervical length between 15 and 30 mm and a negative fibronectin result were at low risk (less than 5%) of spontaneous delivery within 7 days. Fibronectin testing in case of a cervical length between 15 and 30 mm additionally classified 103 women (15% of the cohort) as low risk and 36 women (5% of the cohort) as high risk. Cervical length measurement, combined with fetal fibronectin testing in case of a cervical length between 15 and 30 mm, improves the identification of women at low risk of delivering spontaneously within 7 days. II.
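The triage logic described above can be written down in a few lines. In the sketch below, the treatment of cervical lengths under 15 mm as high risk is our assumption for illustration; the abstract only specifies the 15-30 mm and ≥30 mm strata.

```python
def preterm_delivery_triage(cervical_length_mm, fibronectin_positive=None):
    # Low risk (<5% of spontaneous delivery within 7 days): CL >= 30 mm,
    # or CL 15-30 mm with a negative fetal fibronectin test.
    if cervical_length_mm >= 30:
        return "low risk (<5%)"
    if 15 <= cervical_length_mm < 30:
        if fibronectin_positive is None:
            return "fibronectin test needed"
        return "high risk" if fibronectin_positive else "low risk (<5%)"
    return "high risk (assumed for CL < 15 mm)"

print(preterm_delivery_triage(22, fibronectin_positive=False))  # low risk (<5%)
print(preterm_delivery_triage(40))                              # low risk (<5%)
```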
Correlation of in vitro challenge testing with consumer use testing for cosmetic products.
Brannan, D K; Dille, J C; Kaufman, D J
1987-01-01
An in vitro microbial challenge test has been developed to predict the likelihood of consumer contamination of cosmetic products. The challenge test involved inoculating product at four concentrations (30, 50, 70, and 100%) with microorganisms known to contaminate cosmetics. Elimination of these microorganisms at each concentration was followed over a 28-day period. The test was used to classify products as poorly preserved, marginally preserved, or well preserved. Consumer use testing was then used to determine whether the test predicted the risk of actual consumer contamination. Products classified by the challenge test as poorly preserved returned 46 to 90% contaminated after use. Products classified by the challenge test as well preserved returned with no contamination. Marginally preserved products returned with 0 to 21% of the used units contaminated. As a result, the challenge test described can be used to accurately predict the risk of consumer contamination of cosmetic products. PMID:3662517
2013-01-01
Background Cardiovascular disease is associated with major morbidity and mortality in women in the Western world. Prediction of an individual cardiovascular disease risk in young women is difficult. It is known that women with hypertensive pregnancy complications have an increased risk for developing cardiovascular disease in later life, and pregnancy might be used as a cardiovascular stress test to identify women who are at high risk for cardiovascular disease. In this study we assess the possibility of long-term cardiovascular risk prediction in women with a history of hypertensive pregnancy disorders at term. Methods In a longitudinal follow-up study, between June 2008 and November 2010, 300 women with a history of hypertensive pregnancy disorders at term (HTP cohort) and 94 women with a history of normotensive pregnancies at term (NTP cohort) were included. From the cardiovascular risk status that was known two years after the index pregnancy we calculated individual (extrapolated) 10- and 30-year cardiovascular event risks using four different risk prediction models, including the Framingham risk score, the SCORE score and the Reynolds risk score. Continuous data were analyzed using the Student's t test and Mann–Whitney U test and categorical data by the chi-squared test. A Poisson regression analysis was performed to calculate the incidence risk ratios and corresponding 95% confidence intervals for the different cardiovascular risk estimation categories. Results After a mean follow-up of 2.5 years, HTP women had significantly higher mean (SD) extrapolated 10-year cardiovascular event risks (HTP 7.2% (3.7); NTP 4.4% (1.9) (p<.001, IRR 5.8, 95% CI 1.9 to 19)) and 30-year cardiovascular event risks (HTP 11% (7.6); NTP 7.3% (3.5) (p<.001, IRR 2.7, 95% CI 1.6 to 4.5)) as compared to NTP women calculated by the Framingham risk scores. The SCORE score and the Reynolds risk score showed similar significant results. Conclusions Women with a history of gestational hypertension or preeclampsia at term have higher predicted (extrapolated) 10-year and 30-year cardiovascular event risks as compared to women with a history of uncomplicated pregnancies. Further large prospective studies have to evaluate whether hypertensive pregnancy disorders should be included as an independent variable in cardiovascular risk prediction models for women. PMID:23734952
Li, Zhigang; Liu, Weiguo; Zhang, Jinhuan; Hu, Jingwen
2015-09-01
Skull fracture is one of the most common pediatric traumas. However, injury assessment tools for predicting pediatric skull fracture risk are not well established, mainly due to the lack of cadaver tests. Weber conducted 50 pediatric cadaver drop tests for forensic research on child abuse in the mid-1980s (Experimental studies of skull fractures in infants, Z Rechtsmed. 92: 87-94, 1984; Biomechanical fragility of the infant skull, Z Rechtsmed. 94: 93-101, 1985). To our knowledge, these studies contained the largest sample size among pediatric cadaver tests in the literature. However, the lack of injury measurements limited their direct application in investigating pediatric skull fracture risks. In this study, the 50 pediatric cadaver tests from Weber's studies were reconstructed using a parametric pediatric head finite element (FE) model that was morphed into subjects with the ages, head sizes/shapes, and skull thickness values reported in the tests. Skull fracture risk curves for infants from 0 to 9 months old were developed based on the model-predicted head injury measures through logistic regression analysis. It was found that the model-predicted stress responses in the skull (maximal von Mises stress, maximal shear stress, and maximal first principal stress) were better predictors of pediatric skull fracture than global kinematic-based injury measures (peak head acceleration and the head injury criterion (HIC)). This study demonstrated the feasibility of using age- and size/shape-appropriate head FE models to predict pediatric head injuries. Such models can account for the morphological variations among subjects, which cannot be considered by a single FE human model.
Klein, A A; Collier, T; Yeates, J; Miles, L F; Fletcher, S N; Evans, C; Richards, T
2017-09-01
A simple and accurate scoring system to predict risk of transfusion for patients undergoing cardiac surgery is lacking. We identified independent risk factors associated with transfusion by performing univariate analysis, followed by logistic regression. We then simplified the score to an integer-based system and tested it using the area under the receiver operating characteristic curve (AUC) statistic with a Hosmer-Lemeshow goodness-of-fit test. Finally, the scoring system was applied to the external validation dataset and the same statistical methods were applied to test the accuracy of the ACTA-PORT score. Several factors were independently associated with risk of transfusion, including age, sex, body surface area, logistic EuroSCORE, preoperative haemoglobin and creatinine, and type of surgery. In our primary dataset, the score accurately predicted risk of perioperative transfusion in cardiac surgery patients with an AUC of 0.76. The external validation confirmed the accuracy of the scoring method with an AUC of 0.84 and good agreement across all scores, with a minor tendency to under-estimate transfusion risk in very high-risk patients. The ACTA-PORT score is a reliable, validated tool for predicting risk of transfusion for patients undergoing cardiac surgery. This and other scores can be used in research studies for risk adjustment when assessing outcomes, and might also be incorporated into a Patient Blood Management programme. © The Author 2017. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com
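For readers unfamiliar with the Hosmer-Lemeshow goodness-of-fit test mentioned above, the sketch below implements the standard decile-of-risk version and applies it to simulated, well-calibrated predictions; it is a generic illustration, not the ACTA-PORT score or its data.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    # Group subjects by deciles of predicted risk and compare observed vs
    # expected event counts; the statistic is approximately chi-squared
    # with (n_groups - 2) degrees of freedom.
    order = np.argsort(y_prob)
    y_true = np.asarray(y_true)[order]
    y_prob = np.asarray(y_prob)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y_prob)), n_groups):
        observed = y_true[idx].sum()
        expected = y_prob[idx].sum()
        n = len(idx)
        stat += (observed - expected) ** 2 / (expected * (1 - expected / n) + 1e-12)
    return stat, chi2.sf(stat, df=n_groups - 2)

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.9, 500)
y = rng.binomial(1, p)                 # outcomes generated from the predictions themselves
print(hosmer_lemeshow(y, p))           # large p-value: no evidence of poor calibration
```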
Comparing toxicologic and epidemiologic studies: methylene chloride--a case study.
Stayner, L T; Bailer, A J
1993-12-01
Exposure to methylene chloride induces lung and liver cancers in mice. The mouse bioassay data have been used as the basis for several cancer risk assessments. The results from epidemiologic studies of workers exposed to methylene chloride have been mixed with respect to demonstrating an increased cancer risk. The results from a negative epidemiologic study of Kodak workers have been used by two groups of investigators to test the predictions from the EPA risk assessment models. These two groups used very different approaches to this problem, which resulted in opposite conclusions regarding the consistency between the animal model predictions and the Kodak study results. The results from the Kodak study are used to test the predictions from OSHA's multistage models of liver and lung cancer risk. Confidence intervals for the standardized mortality ratios (SMRs) from the Kodak study are compared with the predicted confidence intervals derived from OSHA's risk assessment models. Adjustments for the "healthy worker effect," differences in length of follow-up, and dosimetry between animals and humans were incorporated into these comparisons. Based on these comparisons, we conclude that the negative results from the Kodak study are not inconsistent with the predictions from OSHA's risk assessment model.
Decruyenaere, M; Evers-Kiebooms, G; Boogaerts, A; Cloostermans, T; Cassiman, J J; Demyttenaere, K; Dom, R; Fryns, J P; Van den Berghe, H
1997-01-01
Subjective risk perception, perceived impact of Huntington's disease (HD), perceived benefits and barriers of predictive testing, and personality characteristics of persons withdrawing from the predictive test programme for HD and of siblings of test applicants were studied in a mailed survey. The belief that important decisions do not need to depend on a test result and the anticipated inability to cope with a bad result played an important role in the decision not to be tested. Nevertheless, half of the group who had ever considered testing still planned to undergo a test in the future. A comparison of tested and untested persons revealed that the first group is more likely to overestimate the risk than the second group, but that the two groups did not differ significantly from each other regarding anxiety, ego strength and coping strategies. An intrafamilial analysis of tested and untested siblings confirmed these findings. The problems during data collection and the reasons for the dropout are an illustration of the avoidant behaviour regarding HD and the predictive test in many individuals and families.
Madigan, Michael L; Aviles, Jessica; Allin, Leigh J; Nussbaum, Maury A; Alexander, Neil B
2018-04-16
A growing number of studies are using modified treadmills to train reactive balance after trip-like perturbations that require multiple steps to recover balance. The goal of this study was thus to develop and validate a low-tech reactive balance rating method in the context of trip-like treadmill perturbations to facilitate the implementation of this training outside the research setting. Thirty-five residents of five senior congregate housing facilities participated in the study. Subjects completed a series of reactive balance tests on a modified treadmill from which the reactive balance rating was determined, along with a battery of standard clinical balance and mobility tests that predict fall risk. We investigated the strength of correlation between the reactive balance rating and reactive balance kinematics. We compared the strength of correlation between the reactive balance rating and clinical tests predictive of fall risk, with the strength of correlation between reactive balance kinematics and the same clinical tests. We also compared the reactive balance rating between subjects predicted to be at a high or low risk of falling. The reactive balance rating was correlated with reactive balance kinematics (Spearman's rho squared = .04 - .30), exhibited stronger correlations with clinical tests than most kinematic measures (Spearman's rho squared = .00 - .23), and was 42-60% lower among subjects predicted to be at a high risk for falling. The reactive balance rating method may provide a low-tech, valid measure of reactive balance kinematics, and an indicator of fall risk, after trip-like postural perturbations.
Ferrer, Rebecca A; Klein, William M P; Avishai, Aya; Jones, Katelyn; Villegas, Megan; Sheeran, Paschal
2018-01-01
Although risk perception is a key concept in many health behavior theories, little research has explicitly tested when risk perception predicts motivation to take protective action against a health threat (protection motivation). The present study tackled this question by (a) adopting a multidimensional model of risk perception that comprises deliberative, affective, and experiential components (the TRIRISK model), and (b) taking a person-by-situation approach. We leveraged a highly intensive within-subjects paradigm to test features of the health threat (i.e., perceived severity) and individual differences (e.g., emotion reappraisal) as moderators of the relationship between the three types of risk perception and protection motivation in a within-subjects design. Multi-level modeling of 2968 observations (32 health threats across 94 participants) showed interactions among the TRIRISK components and moderation both by person-level and situational factors. For instance, affective risk perception better predicted protection motivation when deliberative risk perception was high, when the threat was less severe, and among participants who engage less in emotional reappraisal. These findings support the TRIRISK model and offer new insights into when risk perceptions predict protection motivation.
Klein, William M. P.; Avishai, Aya; Jones, Katelyn; Villegas, Megan; Sheeran, Paschal
2018-01-01
Although risk perception is a key concept in many health behavior theories, little research has explicitly tested when risk perception predicts motivation to take protective action against a health threat (protection motivation). The present study tackled this question by (a) adopting a multidimensional model of risk perception that comprises deliberative, affective, and experiential components (the TRIRISK model), and (b) taking a person-by-situation approach. We leveraged a highly intensive within-subjects paradigm to test features of the health threat (i.e., perceived severity) and individual differences (e.g., emotion reappraisal) as moderators of the relationship between the three types of risk perception and protection motivation in a within-subjects design. Multi-level modeling of 2968 observations (32 health threats across 94 participants) showed interactions among the TRIRISK components and moderation both by person-level and situational factors. For instance, affective risk perception better predicted protection motivation when deliberative risk perception was high, when the threat was less severe, and among participants who engage less in emotional reappraisal. These findings support the TRIRISK model and offer new insights into when risk perceptions predict protection motivation. PMID:29494705
Pollock, Benjamin D; Hu, Tian; Chen, Wei; Harville, Emily W; Li, Shengxu; Webber, Larry S; Fonseca, Vivian; Bazzano, Lydia A
2017-01-01
To evaluate several adult diabetes risk calculation tools for predicting the development of incident diabetes and pre-diabetes in a bi-racial, young adult population. Surveys beginning in young adulthood (baseline age ≥18) and continuing across multiple decades for 2122 participants of the Bogalusa Heart Study were used to test the associations of five well-known adult diabetes risk scores with incident diabetes and pre-diabetes using separate Cox models for each risk score. Racial differences were tested within each model. Predictive utility and discrimination were determined for each risk score using the Net Reclassification Index (NRI) and Harrell's c-statistic. All risk scores were strongly associated (p<.0001) with incident diabetes and pre-diabetes. The Wilson model indicated greater risk of diabetes for blacks versus whites with equivalent risk scores (HR=1.59; 95% CI 1.11-2.28; p=.01). C-statistics for the diabetes risk models ranged from 0.79 to 0.83. Non-event NRIs indicated high specificity (non-event NRIs: 76%-88%), but poor sensitivity (event NRIs: -23% to -3%). Five diabetes risk scores established in middle-aged, racially homogenous adult populations are generally applicable to younger adults with good specificity but poor sensitivity. The addition of race to these models did not result in greater predictive capabilities. A more sensitive risk score to predict diabetes in younger adults is needed. Copyright © 2017 Elsevier Inc. All rights reserved.
Kumar, Kanta; Peters, Sarah; Barton, Anne
2016-11-08
Rheumatoid arthritis (RA) is a long-term condition that requires early treatment to control symptoms and improve long-term outcomes. Lack of response to RA treatments not only wastes healthcare resources but also causes disability and distress to patients. Identifying biomarkers predictive of treatment response offers an opportunity to improve clinical decisions about which treatment to recommend to patients and could ultimately lead to better patient outcomes. The aim of this study was to explore the understanding of, and factors affecting, RA patients' decisions around predictive treatment testing. A qualitative study was conducted with a purposive sample of 16 patients with RA from three major UK cities. Four focus groups explored patient perceptions of the use of biomarker tests to predict response to treatments. Interviews were audio-recorded, transcribed verbatim and analysed using thematic analysis by three researchers. Data were organised within three interlinking themes: [1] perceptions of predictive tests and patient preferences for tests; [2] utility of the test to manage expectations; [3] the influence of disease duration on take-up of predictive testing. During consultations for predictive testing, patients felt they would need, first, careful explanations detailing the consequences of untreated RA and delayed treatment response and, second, support to balance the risks of tests, which might be invasive and/or only moderately accurate, with the potential benefits of better management of symptoms. This study provides important insights into predictive testing. Besides supporting clinical decision making, the development of predictive testing in RA is largely supported by patients. Developing strategies that communicate risk information about predictive testing effectively while reducing the psychological burden associated with this information will be essential to maximise uptake.
Hua, Fangyuan; Fletcher, Robert J.; Sieving, Kathryn E.; Dorazio, Robert M.
2013-01-01
Predation risk is widely hypothesized as an important force structuring communities, but this potential force is rarely tested experimentally, particularly in terrestrial vertebrate communities. How animals respond to predation risk is generally considered predictable from species life-history and natural-history traits, but rigorous tests of these predictions remain scarce. We report on a large-scale playback experiment with a forest bird community that addresses two questions: (i) does perceived predation risk shape the richness and composition of a breeding bird community? And (ii) can species life-history and natural-history traits predict prey community responses to different types of predation risk? On 9 ha plots, we manipulated cues of three avian predators that preferentially prey on either adult birds or offspring, or both, throughout the breeding season. We found that increased perception of predation risk led to generally negative responses in the abundance, occurrence and/or detection probability of most prey species, which in turn reduced the species richness and shifted the composition of the breeding bird community. Species-level responses were largely predicted from the key natural-history trait of body size, but we did not find support for the life-history theory prediction of the relationship between species' slow/fast life-history strategy and their response to predation risk.
Clinical prediction of fall risk and white matter abnormalities: a diffusion tensor imaging study
USDA-ARS?s Scientific Manuscript database
The Tinetti scale is a simple clinical tool designed to predict risk of falling by focusing on gait and stance impairment in elderly persons. Gait impairment is also associated with white matter (WM) abnormalities. Objective: To test the hypothesis that elderly subjects at risk for falling, as deter...
Negative HPV screening test predicts low cervical cancer risk better than negative Pap test
Based on a study that included more than 1 million women, investigators at NCI have determined that a negative test for HPV infection compared to a negative Pap test provides greater safety, or assurance, against future risk of cervical cancer.
Prediction of preterm birth in twin gestations using biophysical and biochemical tests
Conde-Agudelo, Agustin; Romero, Roberto
2018-01-01
The objective of this study was to determine the performance of biophysical and biochemical tests for the prediction of preterm birth in both asymptomatic and symptomatic women with twin gestations. We identified a total of 19 tests proposed to predict preterm birth, mainly in asymptomatic women. In these women, a single measurement of cervical length with transvaginal ultrasound before 25 weeks of gestation appears to be a good test to predict preterm birth. Its clinical potential is enhanced by the evidence that vaginal progesterone administration in asymptomatic women with twin gestations and a short cervix reduces neonatal morbidity and mortality associated with spontaneous preterm delivery. Other tests proposed for the early identification of asymptomatic women at increased risk of preterm birth showed minimal to moderate predictive accuracy. None of the tests evaluated in this review meet the criteria to be considered clinically useful to predict preterm birth among patients with an episode of preterm labor. However, a negative cervicovaginal fetal fibronectin test could be useful in identifying women who are not at risk for delivering within the next week, which could avoid unnecessary hospitalization and treatment. This review underscores the need to develop accurate tests for predicting preterm birth in twin gestations. Moreover, the use of interventions in these patients based on test results should be associated with the improvement of perinatal outcomes. PMID:25072736
Prediction of preterm birth in twin gestations using biophysical and biochemical tests.
Conde-Agudelo, Agustin; Romero, Roberto
2014-12-01
The objective of this study was to determine the performance of biophysical and biochemical tests for the prediction of preterm birth in both asymptomatic and symptomatic women with twin gestations. We identified a total of 19 tests proposed to predict preterm birth, mainly in asymptomatic women. In these women, a single measurement of cervical length with transvaginal ultrasound before 25 weeks of gestation appears to be a good test to predict preterm birth. Its clinical potential is enhanced by the evidence that vaginal progesterone administration in asymptomatic women with twin gestations and a short cervix reduces neonatal morbidity and mortality associated with spontaneous preterm delivery. Other tests proposed for the early identification of asymptomatic women at increased risk of preterm birth showed minimal to moderate predictive accuracy. None of the tests evaluated in this review meet the criteria to be considered clinically useful to predict preterm birth among patients with an episode of preterm labor. However, a negative cervicovaginal fetal fibronectin test could be useful in identifying women who are not at risk for delivering within the next week, which could avoid unnecessary hospitalization and treatment. This review underscores the need to develop accurate tests for predicting preterm birth in twin gestations. Moreover, the use of interventions in these patients based on test results should be associated with the improvement of perinatal outcomes. Copyright © 2014. Published by Elsevier Inc.
Yehya, Nadir; Wong, Hector R
2018-01-01
The original Pediatric Sepsis Biomarker Risk Model and revised (Pediatric Sepsis Biomarker Risk Model-II) biomarker-based risk prediction models have demonstrated utility for estimating baseline 28-day mortality risk in pediatric sepsis. Given the paucity of prediction tools in pediatric acute respiratory distress syndrome, and given the overlapping pathophysiology between sepsis and acute respiratory distress syndrome, we tested the utility of Pediatric Sepsis Biomarker Risk Model and Pediatric Sepsis Biomarker Risk Model-II for mortality prediction in a cohort of pediatric acute respiratory distress syndrome, with an a priori plan to revise the model if these existing models performed poorly. Prospective observational cohort study. University affiliated PICU. Mechanically ventilated children with acute respiratory distress syndrome. Blood collection within 24 hours of acute respiratory distress syndrome onset and biomarker measurements. In 152 children with acute respiratory distress syndrome, Pediatric Sepsis Biomarker Risk Model performed poorly and Pediatric Sepsis Biomarker Risk Model-II performed modestly (areas under receiver operating characteristic curve of 0.61 and 0.76, respectively). Therefore, we randomly selected 80% of the cohort (n = 122) to rederive a risk prediction model for pediatric acute respiratory distress syndrome. We used classification and regression tree methodology, considering the Pediatric Sepsis Biomarker Risk Model biomarkers in addition to variables relevant to acute respiratory distress syndrome. The final model was comprised of three biomarkers and age, and more accurately estimated baseline mortality risk (area under receiver operating characteristic curve 0.85, p < 0.001 and p = 0.053 compared with Pediatric Sepsis Biomarker Risk Model and Pediatric Sepsis Biomarker Risk Model-II, respectively). The model was tested in the remaining 20% of subjects (n = 30) and demonstrated similar test characteristics. A validated, biomarker-based risk stratification tool designed for pediatric sepsis was adapted for use in pediatric acute respiratory distress syndrome. The newly derived Pediatric Acute Respiratory Distress Syndrome Biomarker Risk Model demonstrates good test characteristics internally and requires external validation in a larger cohort. Tools such as Pediatric Acute Respiratory Distress Syndrome Biomarker Risk Model have the potential to provide improved risk stratification and prognostic enrichment for future trials in pediatric acute respiratory distress syndrome.
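As a sketch of the classification-and-regression-tree step described above, the code below fits a shallow decision tree to synthetic stand-ins for three biomarkers plus age and checks discrimination on a 20% held-out split; the feature names, data, and tree settings are illustrative assumptions, not the published PARDS model.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import roc_auc_score

# Synthetic cohort of 152 "patients" with three biomarker-like features plus age.
X, y = make_classification(n_samples=152, n_features=4, n_informative=3,
                           n_redundant=0, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10, random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=["biomarker_1", "biomarker_2", "biomarker_3", "age"]))
print("Held-out AUC:", round(roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]), 2))
```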
Cognitive ability in young adulthood predicts risk of early-onset dementia in Finnish men.
Rantalainen, Ville; Lahti, Jari; Henriksson, Markus; Kajantie, Eero; Eriksson, Johan G; Räikkönen, Katri
2018-06-06
To test whether Finnish Defence Forces Basic Intellectual Ability Test scores at age 20.1 years predicted the risk of organic dementia or Alzheimer disease (AD). Dementia was defined as an inpatient or outpatient diagnosis of organic dementia or AD derived from the Hospital Discharge or Causes of Death Registers in 2,785 men from the Helsinki Birth Cohort Study, divided based on age at first diagnosis into early onset (<65 years) or late onset (≥65 years). The Finnish Defence Forces Basic Intellectual Ability Test comprises verbal, arithmetic, and visuospatial subtests and a total score (scores transformed to a mean of 100 and SD of 15). We used Cox proportional hazards models and adjusted for age at testing, childhood socioeconomic status, mother's age at delivery, parity, participant's birthweight, education, and stroke or coronary heart disease diagnosis. Lower total cognitive ability and verbal ability scores (hazard ratio [HR] per 1-SD disadvantage >1.69, 95% confidence interval [CI] 1.01-2.63) predicted higher risk of early-onset any dementia across the statistical models; arithmetic and visuospatial ability scores were similarly associated with early-onset any dementia risk, but these associations weakened after covariate adjustments (HR per 1-SD disadvantage >1.57, 95% CI 0.96-2.57). All associations were rendered nonsignificant when we adjusted for participant's education. Cognitive ability did not predict late-onset dementia risk. These findings reinforce previous suggestions that lower cognitive ability in early life is a risk factor for early-onset dementia. © 2018 American Academy of Neurology.
Predicting drug-induced liver injury in human with Naïve Bayes classifier approach.
Zhang, Hui; Ding, Lan; Zou, Yi; Hu, Shui-Qing; Huang, Hai-Guo; Kong, Wei-Bao; Zhang, Ji
2016-10-01
Drug-induced liver injury (DILI) is one of the major safety concerns in drug development. Although various toxicological studies assessing DILI risk have been developed, these methods are not sufficient for predicting DILI in humans. Thus, developing new tools and approaches to better predict DILI risk in humans has become an important and urgent task. In this study, we aimed to develop a computational model for assessment of DILI risk using a larger-scale human dataset and a Naïve Bayes classifier. The established Naïve Bayes prediction model was evaluated by 5-fold cross validation and an external test set. For the training set, the overall prediction accuracy of the 5-fold cross validation was 94.0 %. The sensitivity, specificity, positive predictive value and negative predictive value were 97.1, 89.2, 93.5 and 95.1 %, respectively. For the test set, the concordance was 72.6 %, with a sensitivity of 72.5 %, specificity of 72.7 %, positive predictive value of 80.4 %, and negative predictive value of 63.2 %. Furthermore, some important molecular descriptors related to DILI risk and some toxic/non-toxic fragments were identified. Thus, we hope the prediction model established here will be employed for the assessment of human DILI risk, and the identified molecular descriptors and substructures should be taken into consideration in the design of new candidate compounds to help medicinal chemists rationally select the chemicals with the best prospects of being effective and safe.
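A minimal sketch of this kind of workflow with scikit-learn is shown below: a Gaussian Naïve Bayes classifier evaluated by 5-fold cross-validation on a training set and then on a held-out "external" set. The synthetic features stand in for the molecular descriptors used in the paper, and all numbers are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for molecular-descriptor features (not the paper's dataset).
X, y = make_classification(n_samples=800, n_features=20, n_informative=8, random_state=0)
X_train, y_train, X_test, y_test = X[:600], y[:600], X[600:], y[600:]

def report(y_true, y_pred, label):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"{label}: accuracy={(tp + tn) / len(y_true):.2f} "
          f"sensitivity={tp / (tp + fn):.2f} specificity={tn / (tn + fp):.2f} "
          f"PPV={tp / (tp + fp):.2f} NPV={tn / (tn + fn):.2f}")

report(y_train, cross_val_predict(GaussianNB(), X_train, y_train, cv=5), "5-fold CV")
report(y_test, GaussianNB().fit(X_train, y_train).predict(X_test), "external test")
```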
Arthur, Michael W.; Brown, Eric C.; Briney, John S.; Hawkins, J. David; Abbott, Robert D.; Catalano, Richard F.; Becker, Linda; Langer, Michael; Mueller, Martin T.
2016-01-01
BACKGROUND School administrators and teachers face difficult decisions about how best to use school resources in order to meet academic achievement goals. Many are hesitant to adopt prevention curricula that are not focused directly on academic achievement. Yet, some have hypothesized that prevention curricula can remove barriers to learning and, thus, promote achievement. This study examined relationships between school levels of student substance use and risk and protective factors that predict adolescent problem behaviors and achievement test performance in Washington State. METHODS Hierarchical Generalized Linear Models were used to examine predictive associations between school-averaged levels of substance use and risk and protective factors and Washington State students’ likelihood of meeting achievement test standards on the Washington Assessment of Student Learning, statistically controlling for demographic and economic factors known to be associated with achievement. RESULTS Results indicate that levels of substance use and risk/protective factors predicted the academic test score performance of students. Many of these effects remained significant even after controlling for model covariates. CONCLUSIONS The findings suggest that implementing prevention programs that target empirically identified risk and protective factors have the potential to positively affect students’ academic achievement. PMID:26149305
A systematic review and meta-analysis of tests to predict wound healing in diabetic foot.
Wang, Zhen; Hasan, Rim; Firwana, Belal; Elraiyah, Tarig; Tsapas, Apostolos; Prokop, Larry; Mills, Joseph L; Murad, Mohammad Hassan
2016-02-01
This systematic review summarized the evidence on noninvasive screening tests for the prediction of wound healing and the risk of amputation in diabetic foot ulcers. We searched MEDLINE In-Process & Other Non-Indexed Citations, MEDLINE, Embase, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, and Scopus from database inception to October 2011. We pooled sensitivity, specificity, and diagnostic odds ratio (DOR) and compared test performance. Thirty-seven studies met the inclusion criteria. Eight tests were used to predict wound healing in this setting, including ankle-brachial index (ABI), ankle peak systolic velocity, transcutaneous oxygen measurement (TcPo2), toe-brachial index, toe systolic blood pressure, microvascular oxygen saturation, skin perfusion pressure, and hyperspectral imaging. For the TcPo2 test, the pooled DOR was 15.81 (95% confidence interval [CI], 3.36-74.45) for wound healing and 4.14 (95% CI, 2.98-5.76) for the risk of amputation. ABI was also predictive but to a lesser degree of the risk of amputations (DOR, 2.89; 95% CI, 1.65-5.05) but not of wound healing (DOR, 1.02; 95% CI, 0.40-2.64). It was not feasible to perform meta-analysis comparing the remaining tests. The overall quality of evidence was limited by the risk of bias and imprecision (wide CIs due to small sample size). Several tests may predict wound healing in the setting of diabetic foot ulcer; however, most of the available evidence evaluates only TcPo2 and ABI. The overall quality of the evidence is low, and further research is needed to provide higher quality comparative effectiveness evidence. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
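As a reminder of what the pooled diagnostic odds ratio summarizes, the sketch below computes a DOR from a single 2×2 table and then pools several tables by inverse-variance weighting on the log scale. The counts are invented, and fixed-effect pooling is shown as one common choice; the review's actual meta-analytic model may differ.

```python
import numpy as np

def diagnostic_odds_ratio(tp, fp, fn, tn):
    # DOR = (TP/FN) / (FP/TN) = (sens/(1-sens)) / ((1-spec)/spec)
    return (tp / fn) / (fp / tn)

def pooled_dor(tables):
    # Fixed-effect inverse-variance pooling on the log-DOR scale.
    log_dors, weights = [], []
    for tp, fp, fn, tn in tables:
        log_dors.append(np.log(diagnostic_odds_ratio(tp, fp, fn, tn)))
        weights.append(1 / (1 / tp + 1 / fp + 1 / fn + 1 / tn))  # 1 / var(log DOR)
    return float(np.exp(np.average(log_dors, weights=weights)))

studies = [(40, 10, 8, 42), (25, 12, 6, 30), (18, 9, 5, 22)]   # invented 2x2 tables
print(round(diagnostic_odds_ratio(*studies[0]), 1))            # 21.0 for the first table
print(round(pooled_dor(studies), 1))
```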
Peláez-García, Alberto; Yébenes, Laura; Berjón, Alberto; Angulo, Antonia; Zamora, Pilar; Sánchez-Méndez, José Ignacio; Espinosa, Enrique; Redondo, Andrés; Heredia-Soto, Victoria; Mendiola, Marta; Feliú, Jaime
2017-01-01
Purpose To compare the concordance in risk classification between the EndoPredict and the MammaPrint scores obtained for the same cancer samples on 40 estrogen-receptor-positive/HER2-negative breast carcinomas. Methods Formalin-fixed, paraffin-embedded invasive breast carcinoma tissues that were previously analyzed with MammaPrint as part of routine care of the patients, and were classified as high-risk (20 patients) and low-risk (20 patients), were selected to be analyzed by the EndoPredict assay, a second-generation gene expression test that combines the expression of 8 genes (EP score) with two clinicopathological variables (tumor size and nodal status, EPclin score). Results The EP score classified 15 patients as low-risk and 25 patients as high-risk. EPclin re-classified 5 of the 25 EP high-risk patients as low-risk, resulting in a total of 20 high-risk and 20 low-risk tumors. The EP score and the MammaPrint score were significantly correlated (p = 0.008). Twelve of 20 samples classified as low-risk by MammaPrint were also low-risk by EP score (60%). Seventeen of 20 MammaPrint high-risk tumors were also high-risk by EP score. The overall concordance between the EP score and MammaPrint was 72.5% (κ = 0.45; 95% CI, 0.182 to 0.718). The EPclin score also correlated with MammaPrint results (p = 0.004). Discrepancies between the two tests occurred in 10 cases: 5 MammaPrint low-risk patients were classified as high-risk by EPclin and 5 MammaPrint high-risk patients were classified as low-risk by EPclin, for an overall concordance of 75% (κ = 0.5; 95% CI, 0.232 to 0.768). Conclusions This pilot study demonstrates limited concordance between MammaPrint and EndoPredict. Differences in results could be explained by the inclusion of different gene sets in each platform, the use of different methodology, and the inclusion of clinicopathological parameters, such as tumor size and nodal status, in the EndoPredict test. PMID:28886093
Myers, J E; Kenny, L C; McCowan, L M E; Chan, E H Y; Dekker, G A; Poston, L; Simpson, N A B; North, R A
2013-09-01
To assess the performance of clinical risk factors, uterine artery Doppler and angiogenic markers in predicting preterm pre-eclampsia in nulliparous women. Predictive test accuracy study within the prospective multicentre cohort study Screening for Pregnancy Endpoints (SCOPE). Low-risk nulliparous women with a singleton pregnancy were recruited. Clinical risk factor data were obtained, and plasma placental growth factor (PlGF), soluble endoglin and soluble fms-like tyrosine kinase-1 (sFlt-1) were measured at 14-16 weeks of gestation. Prediction models were developed using multivariable stepwise logistic regression. The outcome was preterm pre-eclampsia (delivery before 37+0 weeks of gestation). Of the 3529 women recruited, 187 (5.3%) developed pre-eclampsia, of whom 47 (1.3%) delivered preterm. Controls (n = 188) were randomly selected from women without preterm pre-eclampsia and included women who developed other pregnancy complications. An area under a receiver operating characteristic curve (AUC) of 0.76 (95% CI 0.67-0.84) was observed using previously reported clinical risk variables. The AUC improved following the addition of PlGF measured at 14-16 weeks (0.84; 95% CI 0.77-0.91), but no further improvement was observed with the addition of uterine artery Doppler or the other angiogenic markers. A sensitivity of 45% (95% CI 0.31-0.59) at a 5% false-positive rate and a post-test probability of 11% (95% CI 9-13) were observed using clinical risk variables and PlGF measurement. Addition of plasma PlGF at 14-16 weeks of gestation to clinical risk assessment improved the identification of nulliparous women at increased risk of developing preterm pre-eclampsia, but the performance is not sufficient to warrant introduction as a clinical screening test. These findings are marker dependent, not assay dependent; additional markers are needed to achieve clinical utility. © 2013 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2013 RCOG.
Using Growth Rate of Reading Fluency to Predict Performance on Statewide Achievement Tests
ERIC Educational Resources Information Center
Hinkle, Rachelle Whittaker
2011-01-01
Federal legislation has prescribed the increased use of statewide achievement tests as the culmination of a student's knowledge and ability at the end of a grade level; however, schools need to be able to predict those who are at risk of performing poorly on these high-stakes tests. Three studies served to identify a means of predicting statewide…
Katki, Hormuzd A; Schiffman, Mark
2018-05-01
Our work involves assessing whether new biomarkers might be useful for cervical-cancer screening across populations with different disease prevalences and biomarker distributions. When comparing across populations, we show that standard diagnostic accuracy statistics (predictive values, risk-differences, Youden's index and Area Under the Curve (AUC)) can easily be misinterpreted. We introduce an intuitively simple statistic for a 2 × 2 table, Mean Risk Stratification (MRS): the average change in risk (pre-test vs. post-test) revealed for tested individuals. High MRS implies better risk separation achieved by testing. MRS has 3 key advantages for comparing test performance across populations with different disease prevalences and biomarker distributions. First, MRS demonstrates that conventional predictive values and the risk-difference do not measure risk-stratification because they do not account for test-positivity rates. Second, Youden's index and AUC measure only multiplicative relative gains in risk-stratification: AUC = 0.6 achieves only 20% of maximum risk-stratification (AUC = 0.9 achieves 80%). Third, large relative gains in risk-stratification might not imply large absolute gains if disease is rare, demonstrating a "high-bar" to justify population-based screening for rare diseases such as cancer. We illustrate MRS by our experience comparing the performance of cervical-cancer screening tests in China vs. the USA. The test with the worst AUC = 0.72 in China (visual inspection with acetic acid) provides twice the risk-stratification (i.e. MRS) of the test with best AUC = 0.83 in the USA (human papillomavirus and Pap cotesting) because China has three times more cervical precancer/cancer. MRS could be routinely calculated to better understand the clinical/public-health implications of standard diagnostic accuracy statistics. Published by Elsevier Inc.
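A minimal sketch of the MRS idea, following the definition given in this abstract (the average pre-test vs. post-test change in risk across tested individuals); the exact published formula may differ in detail, and the 2 × 2 counts below are invented for illustration.

```python
# Mean Risk Stratification (MRS) from a 2x2 table of test result vs. disease,
# read directly from the abstract's definition: the average absolute shift in
# risk produced by learning the test result.
def mean_risk_stratification(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    prevalence = (tp + fn) / n            # pre-test risk for everyone
    p_pos = (tp + fp) / n                 # fraction testing positive
    ppv = tp / (tp + fp)                  # post-test risk if positive
    cnpv = fn / (fn + tn)                 # post-test risk if negative (1 - NPV)
    return p_pos * abs(ppv - prevalence) + (1 - p_pos) * abs(cnpv - prevalence)

# Toy example: the same test stratifies far more absolute risk where disease is
# common (prevalence 4%) than where it is rare (prevalence 0.4%).
common = mean_risk_stratification(tp=30, fp=70, fn=10, tn=890)   # ~0.052
rare = mean_risk_stratification(tp=3, fp=97, fn=1, tn=899)       # ~0.005
print(common, rare)
```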
Maguire, Tessa; Daffern, Michael; Bowe, Steven J; McKenna, Brian
2017-10-01
In the present study, we explored the predictive validity of the Dynamic Appraisal of Situational Aggression (DASA) assessment tool in male (n = 30) and female (n = 30) patients admitted to the acute units of a forensic mental health hospital. We also tested the psychometric properties of the original DASA bands and novel risk bands. The first 60 days of each patient's file were reviewed to identify daily DASA scores and subsequent risk-related nursing interventions and aggressive behaviour within the following 24 hours. Risk assessments, followed by documented nursing interventions, were removed to preserve the integrity of the risk-assessment analysis. Receiver operating characteristic analyses were used to test the predictive accuracy of the DASA, and generalized estimating equations (GEE) were used to account for the repeated risk assessments that arise when analysing short-term risk-assessment data. The results revealed modest predictive validity for males and females. GEE analyses suggested the need to adjust the DASA risk bands to the following (with associated odds ratios (OR) for aggressive behaviour): 0 = low risk; 1, 2, 3 = moderate risk, OR 4.70 (95% confidence interval (CI): 2.84-7.80); and 4, 5, 6, 7 = high risk, OR 16.13 (95% CI: 9.71-26.78). The adjusted DASA risk bands could assist nurses by prompting violence-prevention interventions when the level of risk is elevated. © 2017 Australian College of Mental Health Nurses Inc.
Fedoroff, J Paul; Richards, Deborah; Ranger, Rebekah; Curry, Susan
2016-10-01
This CIHR-funded study examined whether certain current risk assessment tools were effective in appraising risk of recidivism in a sample of sex offenders with intellectual disabilities (ID). Fifty men with ID who had engaged in problematic sexual behavior (PSB) were followed for an average of 2.5 years. Recidivism was defined and measured as any illegal or problematic behavior, as well as any problematic but not necessarily illegal behavior. At the beginning of the study, each participant was rated on two risk assessment tools: the Violence Risk Appraisal Guide (VRAG) and the Sex Offender Risk Appraisal Guide (SORAG). During each month of follow-up, participants were also rated on the Short-Dynamic Risk Scale (SDRS), an assessment tool intended to measure the risk of future problematic behaviors. Data were analyzed using t-tests, Cohen's d and area under the curve (AUC) to test the predictive validity of the assessment tools. Using the AUC, results showed that the VRAG was predictive of sexual (AUC=0.74), sexual and/or violent (AUC=0.71) and of any criminally chargeable event (AUC=0.69). The SORAG was only significantly predictive of sexual events (AUC=0.70) and the SDRS was predictive of violent events (AUC=0.71). The t-test and Cohen's d analyses, which are less robust than AUC to deviations from the assumptions of normal and continuous distributions, did not yield significant results in every category. Therefore, while the results of this study suggest that the VRAG and the SORAG may be effective tools for measuring the short-term risk of sexual recidivism, and that the VRAG and SDRS may be effective tools for appraising the long-term risk of sexual and/or violent recidivism in this population, these tools should be used with caution. Regardless of the assessment tool used, risk assessments should take into account the differences between sex offenders with and without ID to ensure effective measurement. Copyright © 2016. Published by Elsevier Ltd.
Castle, Philip E.; Glass, Andrew G.; Rush, Brenda B.; Scott, David R.; Wentzensen, Nicolas; Gage, Julia C.; Buckland, Julie; Rydzak, Greg; Lorincz, Attila T.; Wacholder, Sholom
2012-01-01
Purpose To describe the long-term (≥ 10 years) benefits of clinical human papillomavirus (HPV) DNA testing for cervical precancer and cancer risk prediction. Methods Cervicovaginal lavages collected from 19,512 women attending a health maintenance program were retrospectively tested for HPV using a clinical test. HPV positives were tested for HPV16 and HPV18 individually using a research test. A Papanicolaou (Pap) result classified as atypical squamous cells of undetermined significance (ASC-US) or more severe was considered abnormal. Women underwent follow-up prospectively with routine annual Pap testing up to 18 years. Cumulative incidence rates (CIRs) of ≥ grade 3 cervical intraepithelial neoplasia (CIN3+) or cancer for enrollment test results were calculated. Results A baseline negative HPV test provided greater reassurance against CIN3+ over the 18-year follow-up than a normal Pap (CIR, 0.90% v 1.27%). Although both baseline Pap and HPV tests predicted who would develop CIN3+ within the first 2 years of follow-up, only HPV testing predicted who would develop CIN3+ 10 to 18 years later (P = .004). HPV16- and HPV18-positive women with normal Pap were at elevated risk of CIN3+ compared with other HPV-positive women with normal Pap and were at similar risk of CIN3+ compared with women with a low-grade squamous intraepithelial Pap. Conclusion HPV testing to rule out cervical disease followed by Pap testing and possibly combined with the detection of HPV16 and HPV18 among HPV positives to identify those at immediate risk of CIN3+ would be an efficient algorithm for cervical cancer screening, especially in women age 30 years or older. PMID:22851570
Wihlborg, A; Englund, M; Åkesson, K; Gerdhem, P
2015-08-01
In a large cohort of elderly women followed for 10 years, we found that balance, gait speed, and self-reported history of falls independently predicted fracture. These clinical risk factors are easily evaluated and therefore advantageous in a clinical setting. They would improve fracture risk assessment and thereby also fracture prevention. The aim of this study was to identify additional risk factors for osteoporosis-related fracture by investigating the fracture predictive ability of physical performance tests and self-reported history of falls. In the population-based Osteoporosis Prospective Risk Assessment study (OPRA), 1044 women were recruited at the age of 75 and followed for 10 years. At inclusion, knee extension force, standing balance, gait speed, and bone mineral density (BMD) were examined. Falls during the year before the investigation were assessed by questionnaire. Cox proportional hazards regression analysis was used to determine fracture hazard ratios (HR) with BMD, history of fracture, BMI, smoking habits, bisphosphonate, vitamin D, glucocorticoid, and alcohol use as covariates. Continuous variables were standardized and HR shown for each standard deviation change. Of all women, 427 (41%) sustained at least one fracture during the 10-year follow-up. Failing the balance test had an HR of 1.98 (1.18-3.32) for hip fracture. Each standard deviation decrease in gait speed was associated with an HR of 1.37 (1.14-1.64) for hip fracture. A previous fall had an HR of 1.30 (1.03-1.65) for any fracture; 1.39 (1.08-1.79) for any osteoporosis-related fracture; and 1.60 (1.03-2.48) for distal forearm fracture. Knee extension force did not predict fracture. The balance test, the gait speed test, and self-reported history of falls each hold independent fracture predictability. Consideration of these clinical risk factors would improve fracture risk assessment and subsequently also fracture prevention.
Pretreatment data is highly predictive of liver chemistry signals in clinical trials.
Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T
2012-01-01
The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy's law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones.
Persson, Carina U; Hansson, Per-Olof; Sunnerhagen, Katharina S
2011-03-01
To assess the ability of clinical tests of postural balance, walking and motor skills, performed during the first week after stroke, to identify the risk of falling. This was a prospective study of patients with a first stroke. Assessments were carried out during the first week, and the occurrence of falls was recorded 3, 6 and 12 months after stroke onset. The tests used were: 10-Metre Walking Test (10MWT), Timed Up & Go, Swedish Postural Assessment Scale for Stroke Patients, Berg Balance Scale and Modified Motor Assessment Scale. Cut-off levels were obtained from receiver operating characteristic curves, and odds ratios were calculated at these cut-off levels to assess the risk of falling. The analyses were based on 96 patients. Forty-eight percent had at least one fall during the first year. All tests were associated with the risk of falling. The highest predictive values were found for the 10MWT (positive predictive value 64%, negative predictive value 76%). Those subjects who were unable to perform the 10MWT had the highest odds ratio for falling, 6.06 (95% confidence interval 2.66-13.84, p < 0.001). Clinical tests used during the first week after stroke onset can, to some extent, identify those patients at risk of falling during the first year after stroke.
Risk Prediction Models in Psychiatry: Toward a New Frontier for the Prevention of Mental Illnesses.
Bernardini, Francesco; Attademo, Luigi; Cleary, Sean D; Luther, Charles; Shim, Ruth S; Quartesan, Roberto; Compton, Michael T
2017-05-01
We conducted a systematic, qualitative review of risk prediction models designed and tested for depression, bipolar disorder, generalized anxiety disorder, posttraumatic stress disorder, and psychotic disorders. Our aim was to understand the current state of research on risk prediction models for these 5 disorders and thus future directions as our field moves toward embracing prediction and prevention. Systematic searches of the entire MEDLINE electronic database were conducted independently by 2 of the authors (from 1960 through 2013) in July 2014 using defined search criteria. Search terms included risk prediction, predictive model, or prediction model combined with depression, bipolar, manic depressive, generalized anxiety, posttraumatic, PTSD, schizophrenia, or psychosis. We identified 268 articles based on the search terms and 3 criteria: published in English, provided empirical data (as opposed to review articles), and presented results pertaining to developing or validating a risk prediction model in which the outcome was the diagnosis of 1 of the 5 aforementioned mental illnesses. We selected 43 original research reports as a final set of articles to be qualitatively reviewed. The 2 independent reviewers abstracted 3 types of data (sample characteristics, variables included in the model, and reported model statistics) and reached consensus regarding any discrepant abstracted information. Twelve reports described models developed for prediction of major depressive disorder, 1 for bipolar disorder, 2 for generalized anxiety disorder, 4 for posttraumatic stress disorder, and 24 for psychotic disorders. Most studies reported on sensitivity, specificity, positive predictive value, negative predictive value, and area under the (receiver operating characteristic) curve. Recent studies demonstrate the feasibility of developing risk prediction models for psychiatric disorders (especially psychotic disorders). The field must now advance by (1) conducting more large-scale, longitudinal studies pertaining to depression, bipolar disorder, anxiety disorders, and other psychiatric illnesses; (2) replicating and carrying out external validations of proposed models; (3) further testing potential selective and indicated preventive interventions; and (4) evaluating effectiveness of such interventions in the context of risk stratification using risk prediction models. © Copyright 2017 Physicians Postgraduate Press, Inc.
Colorectal Cancer Risk Assessment Tool
Polygenic risk predicts obesity in both white and black young adults.
Domingue, Benjamin W; Belsky, Daniel W; Harris, Kathleen Mullan; Smolen, Andrew; McQueen, Matthew B; Boardman, Jason D
2014-01-01
To test transethnic replication of a genetic risk score for obesity in white and black young adults using a national sample with longitudinal data. A prospective longitudinal study using the National Longitudinal Study of Adolescent Health Sibling Pairs (n = 1,303). Obesity phenotypes were measured from anthropometric assessments when study members were aged 18-26 and again when they were 24-32. Genetic risk scores were computed based on published genome-wide association study discoveries for obesity. Analyses tested genetic associations with body-mass index (BMI), waist-height ratio, obesity, and change in BMI over time. White and black young adults with higher genetic risk scores had higher BMI and waist-height ratio and were more likely to be obese compared to lower genetic risk age-peers. Sibling analyses revealed that the genetic risk score was predictive of BMI net of risk factors shared by siblings. In white young adults only, higher genetic risk predicted increased risk of becoming obese during the study period. In black young adults, genetic risk scores constructed using loci identified in European and African American samples had similar predictive power. Cumulative information across the human genome can be used to characterize individual level risk for obesity. Measured genetic risk accounts for only a small amount of total variation in BMI among white and black young adults. Future research is needed to identify modifiable environmental exposures that amplify or mitigate genetic risk for elevated BMI.
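A hedged sketch of the kind of additive genetic risk score described here: a weighted sum of risk-allele counts using published per-allele effect sizes. The SNP identifiers and weights below are placeholders, not the loci or weights used in the study.

```python
# Hypothetical per-allele log-odds weights from published GWAS results and
# hypothetical risk-allele counts (0/1/2) for one individual.
published_log_odds = {"rs_A": 0.12, "rs_B": 0.08, "rs_C": 0.05}
genotypes = {"rs_A": 2, "rs_B": 1, "rs_C": 0}

def genetic_risk_score(genotypes, weights):
    """Sum of risk-allele counts weighted by published per-allele effects."""
    return sum(weights[snp] * genotypes[snp] for snp in weights)

print(f"genetic risk score = {genetic_risk_score(genotypes, published_log_odds):.3f}")
```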
Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R
2017-03-01
Purpose To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method People with MS (n = 210, aged 21-74 years) performed the CSRT, sensorimotor, balance and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26, p < 0.001). In regression analysis, CSRT was best explained by sway, time to complete the 9-Hole Peg test, knee extension strength of the weaker leg, proprioception and the time to complete the Trails B test (multiple R² = 0.449, p < 0.001). Conclusions A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.
Ethical principles and pitfalls of genetic testing for dementia.
Hedera, P
2001-01-01
Progress in the genetics of dementing disorders and the availability of clinical tests for practicing physicians increase the need for a better understanding of multifaceted issues associated with genetic testing. The genetics of dementia is complex, and genetic testing is fraught with many ethical concerns. Genetic testing can be considered for patients with a family history suggestive of a single gene disorder as a cause of dementia. Testing of affected patients should be accompanied by competent genetic counseling that focuses on probabilistic implications for at-risk first-degree relatives. Predictive testing of at-risk asymptomatic patients should be modeled after presymptomatic testing for Huntington's disease. Testing using susceptibility genes has only a limited diagnostic value at present because potential improvement in diagnostic accuracy does not justify potentially negative consequences for first-degree relatives. Predictive testing of unaffected subjects using susceptibility genes is currently not recommended because individual risk cannot be quantified and there are no therapeutic interventions for dementia in presymptomatic patients.
Tarbox-Berry, S I; Perkins, D O; Woods, S W; Addington, J
2018-04-01
Attenuated positive symptom syndrome (APSS), characterized by 'putatively prodromal' attenuated psychotic-like pathology, indicates increased risk for psychosis. Poor premorbid social adjustment predicts severity of APSS symptoms and predicts subsequent psychosis in APSS-diagnosed individuals, suggesting application for improving detection of 'true' prodromal youth who will transition to psychosis. However, these predictive associations have not been tested in controls and therefore may be independent of the APSS diagnosis, negating utility for improving prediction in APSS-diagnosed individuals. Association between premorbid social maladjustment and severity of positive, negative, disorganized, and general APSS symptoms was tested in 156 individuals diagnosed with APSS and 76 help-seeking (non-APSS) controls enrolled in the Enhancing the Prospective Prediction of Psychosis (PREDICT) study using prediction analysis. Premorbid social maladjustment was associated with social anhedonia, reduced expression of emotion, restricted ideational richness, and deficits in occupational functioning, independent of the APSS diagnosis. Associations between social maladjustment and suspiciousness, unusual thought content, avolition, dysphoric mood, and impaired tolerance to normal stress were uniquely present in participants meeting APSS criteria. Social maladjustment was associated with odd behavior/appearance and diminished experience of emotions and self only in participants who did not meet APSS criteria. Predictive associations between poor premorbid social adjustment and attenuated psychotic-like pathology were identified, a subset of which were indicative of high risk for psychosis. This study offers a method for improving risk identification while ruling out low-risk individuals.
NASA Technical Reports Server (NTRS)
Beck, L. R.; Rodriguez, M. H.; Dister, S. W.; Rodriguez, A. D.; Washino, R. K.; Roberts, D. R.; Spanner, M. A.
1997-01-01
A blind test of two remote sensing-based models for predicting adult populations of Anopheles albimanus in villages, an indicator of malaria transmission risk, was conducted in southern Chiapas, Mexico. One model was developed using a discriminant analysis approach, while the other was based on regression analysis. The models were developed in 1992 for an area around Tapachula, Chiapas, using Landsat Thematic Mapper (TM) satellite data and geographic information system functions. Using two remotely sensed landscape elements, the discriminant model was able to successfully distinguish between villages with high and low An. albimanus abundance with an overall accuracy of 90%. To test the predictive capability of the models, multitemporal TM data were used to generate a landscape map of the Huixtla area, northwest of Tapachula, where the models were used to predict risk for 40 villages. The resulting predictions were not disclosed until the end of the test. Independently, An. albimanus abundance data were collected in the 40 randomly selected villages for which the predictions had been made. These data were subsequently used to assess the models' accuracies. The discriminant model accurately predicted 79% of the high-abundance villages and 50% of the low-abundance villages, for an overall accuracy of 70%. The regression model correctly identified seven of the 10 villages with the highest mosquito abundance. This test demonstrated that remote sensing-based models generated for one area can be used successfully in another, comparable area.
Taylor, S
2011-01-01
Community attitudes research regarding genetic issues is important when contemplating the potential value and utilisation of predictive testing for common diseases in mainstream health services. This article aims to report population-based attitudes and discuss their relevance to integrating genetic services in primary health contexts. Men's and women's attitudes were investigated via population-based omnibus telephone survey in Queensland, Australia. Randomly selected adults (n = 1,230) with a mean age of 48.8 years were interviewed regarding perceptions of genetic determinants of health; benefits of genetic testing that predict 'certain' versus 'probable' future illness; and concern, if any, regarding potential misuse of genetic test information. Most (75%) respondents believed genetic factors significantly influenced health status; 85% regarded genetic testing positively although attitudes varied with age. Risk-based information was less valued than certainty-based information, but women valued risk information significantly more highly than men. Respondents reported 'concern' (44%) and 'no concern' (47%) regarding potential misuse of genetic information. This study contributes important population-based data as most research has involved selected individuals closely impacted by genetic disorders. While community attitudes were positive regarding genetic testing, genetic literacy is important to establish. The nature of gender differences regarding risk perception merits further study and has policy and service implications. Community concern about potential genetic discrimination must be addressed if health benefits of testing are to be maximised. Larger questions remain in scientific, policy, service delivery, and professional practice domains before predictive testing for common disorders is efficacious in mainstream health care. Copyright © 2011 S. Karger AG, Basel.
Confidence Testing for Knowledge-Based Global Communities
ERIC Educational Resources Information Center
Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Shymansky, James A.
2009-01-01
This proposal advocates the position that the use of confidence wagering (CW) during testing can predict the accuracy of a student's test answer selection during between-subject assessments. Data revealed female students were more favorable to taking risks when making CW and less inclined toward risk aversion than their male counterparts. Student…
Techniques for predicting high-risk drivers for alcohol countermeasures. Volume 1, Technical report
DOT National Transportation Integrated Search
1979-05-01
This technical report, a companion to the Volume II User Manual by the same name, describes the development and testing of predictive models for identifying individuals with a high risk of alcohol-related (A/R) crash involvement. From a literature revi...
Prediction of adolescent and adult adiposity outcomes from early life anthropometrics.
Graversen, Lise; Sørensen, Thorkild I A; Gerds, Thomas A; Petersen, Liselotte; Sovio, Ulla; Kaakinen, Marika; Sandbaek, Annelli; Laitinen, Jaana; Taanila, Anja; Pouta, Anneli; Järvelin, Marjo-Riitta; Obel, Carsten
2015-01-01
Maternal body mass index (BMI), birth weight, and preschool BMI may help identify children at high risk of overweight as they are (1) similarly linked to adolescent overweight at different stages of the obesity epidemic, (2) linked to adult obesity and metabolic alterations, and (3) easily obtainable in health examinations in young children. The aim was to develop early childhood prediction models of adolescent overweight, adult overweight, and adult obesity. Prediction models at various ages in the Northern Finland Birth Cohort born in 1966 (NFBC1966) were developed. Internal validation was tested using a bootstrap design, and external validation was tested for the model predicting adolescent overweight using the Northern Finland Birth Cohort born in 1986 (NFBC1986). A prediction model developed in the NFBC1966 to predict adolescent overweight, applied to the NFBC1986, and aimed at labelling 10% as "at risk" on the basis of anthropometric information collected until 5 years of age showed that half of those at risk in fact did become overweight. This group constituted one-third of all who became overweight. Our prediction model identified a subgroup of children at very high risk of becoming overweight, which may be valuable in public health settings dealing with obesity prevention. © 2014 The Obesity Society.
Ahmed, Haitham M; Al-Mallah, Mouaz H; McEvoy, John W; Nasir, Khurram; Blumenthal, Roger S; Jones, Steven R; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J
2015-03-01
To determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival. This was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A "FIT Treadmill Score" was then derived from the β coefficients of the model with the highest survival discrimination. The median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811). The FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
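Because this abstract states the FIT Treadmill Score formula explicitly, it can be evaluated directly; the helper below does only that (the variable names are ours, not the paper's).

```python
def fit_treadmill_score(pct_max_predicted_hr, mets, age, female):
    """FIT Treadmill Score = %max predicted HR + 12*METs - 4*age (+43 if female)."""
    return pct_max_predicted_hr + 12 * mets - 4 * age + (43 if female else 0)

# Example: a 53-year-old woman reaching 95% of predicted heart rate at 10 METs.
print(fit_treadmill_score(pct_max_predicted_hr=95, mets=10, age=53, female=True))
# -> 95 + 120 - 212 + 43 = 46 (cohort scores ranged from -200 to 200)
```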
Hachiya, Mizuki; Murata, Shin; Otao, Hiroshi; Ihara, Takehiko; Mizota, Katsuhiko; Asami, Toyoko
2015-01-01
[Purpose] This study aimed to verify the usefulness of a 50-m round walking test developed as an assessment method for walking ability in the elderly. [Subjects] The subjects were 166 elderly individuals requiring long-term care (mean age, 80.5 years). [Methods] In order to evaluate the factors associated with falls in the subjects during the previous year, we performed the 50-m round walking test, functional reach test, one-leg standing test, and 5-m walking test and measured grip strength and quadriceps strength. [Results] The 50-m round walking test was selected as a variable indicating fall risk based on the results of multiple logistic regression analysis. The cutoff value of the 50-m round walking test for determining fall risk was 0.66 m/sec. The area under the receiver operating characteristic curve was 0.64. The sensitivity of the cutoff value was 65.7%, the specificity was 63.6%, the positive predictive value was 55.0%, the negative predictive value was 73.3%, and the accuracy was 64.5%. [Conclusion] These results suggest that the 50-m round walking test is a potentially useful parameter for the determination of fall risk in the elderly requiring long-term care. PMID:26834327
Hachiya, Mizuki; Murata, Shin; Otao, Hiroshi; Ihara, Takehiko; Mizota, Katsuhiko; Asami, Toyoko
2015-12-01
[Purpose] This study aimed to verify the usefulness of a 50-m round walking test developed as an assessment method for walking ability in the elderly. [Subjects] The subjects were 166 elderly individuals requiring long-term care (mean age, 80.5 years). [Methods] In order to evaluate the factors associated with falls in the subjects during the previous year, we performed the 50-m round walking test, functional reach test, one-leg standing test, and 5-m walking test and measured grip strength and quadriceps strength. [Results] The 50-m round walking test was selected as a variable indicating fall risk based on the results of multiple logistic regression analysis. The cutoff value of the 50-m round walking test for determining fall risk was 0.66 m/sec. The area under the receiver operating characteristic curve was 0.64. The sensitivity of the cutoff value was 65.7%, the specificity was 63.6%, the positive predictive value was 55.0%, the negative predictive value was 73.3%, and the accuracy was 64.5%. [Conclusion] These results suggest that the 50-m round walking test is a potentially useful parameter for the determination of fall risk in the elderly requiring long-term care.
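For readers who want to see how cutoff-based accuracy measures of this kind arise, the sketch below (Python, invented walking speeds and fall outcomes, not the study data) builds the 2 × 2 table at the reported 0.66 m/sec cutoff and computes sensitivity, specificity, PPV, NPV, and accuracy.

```python
import numpy as np

# Hypothetical walking speeds (m/sec) and fall history (1 = faller).
speed = np.array([0.45, 0.70, 0.60, 0.90, 0.55, 0.80, 0.62, 0.75])
fell = np.array([1, 0, 1, 0, 1, 0, 0, 1])

cutoff = 0.66                        # cutoff reported in the abstract
test_pos = speed < cutoff            # slower than the cutoff -> classified "at risk"

tp = np.sum(test_pos & (fell == 1))
fp = np.sum(test_pos & (fell == 0))
fn = np.sum(~test_pos & (fell == 1))
tn = np.sum(~test_pos & (fell == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(sensitivity, specificity, ppv, npv, accuracy)
```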
Clinical history and biologic age predicted falls better than objective functional tests.
Gerdhem, Paul; Ringsberg, Karin A M; Akesson, Kristina; Obrant, Karl J
2005-03-01
Fall risk assessment is important because the consequences, such as a fracture, may be devastating. The objective of this study was to find the test or tests that best predicted falls in a population-based sample of elderly women. The fall-predictive ability of a questionnaire, a subjective estimate of biologic age and objective functional tests (gait, balance [Romberg and sway test], thigh muscle strength, and visual acuity) was compared in 984 randomly selected women, all 75 years of age. A recalled fall was the most important predictor of future falls. Only recalled falls and intake of psychoactive drugs independently predicted future falls. Women with at least five of the most important fall predictors (previous falls, conditions affecting balance, tendency to fall, intake of psychoactive medication, inability to stand on one leg, high biologic age) had an odds ratio of 11.27 (95% confidence interval 4.61-27.60) for a fall (sensitivity 70%, specificity 79%). The more time-consuming objective functional tests were of limited importance for fall prediction. A simple clinical history, the inability to stand on one leg, and a subjective estimate of biologic age were more important as part of the fall risk assessment.
ERIC Educational Resources Information Center
Sullivan, Helen W.; Beckjord, Ellen Burke; Finney Rutten, Lila J.; Hesse, Bradford W.
2008-01-01
This study tested whether the risk perception attitude framework predicted nutrition-related cancer prevention cognitions and behavioral intentions. Data from the 2003 Health Information National Trends Survey were analyzed to assess respondents' reported likelihood of developing cancer (risk) and perceptions of whether they could lower their…
Low, Yee Syuen; Blöcker, Christopher; McPherson, John R; Tang, See Aik; Cheng, Ying Ying; Wong, Joyner Y S; Chua, Clarinda; Lim, Tony K H; Tang, Choong Leong; Chew, Min Hoe; Tan, Patrick; Tan, Iain B; Rozen, Steven G; Cheah, Peh Yean
2017-09-10
Approximately 20% of early-stage (I/II) colorectal cancer (CRC) patients develop metastases despite curative surgery. We aimed to develop a formalin-fixed, paraffin-embedded (FFPE) tissue-based predictor of metastases in early-stage, clinically defined low-risk, microsatellite-stable (MSS) CRC patients. We considered genome-wide mRNA and miRNA expression and the mutation status of 20 genes assayed in 150 fresh-frozen tumours with known metastasis status. We selected 193 genes for further analysis using NanoString nCounter arrays on corresponding FFPE tumours. Neither mutation status nor miRNA expression improved the estimated prediction. The final predictor, ColoMet19, based on the mRNA levels of the top 19 genes and trained with a Random Forest machine-learning strategy, had an estimated positive predictive value (PPV) of 0.66. We tested ColoMet19 on an independent test set of 131 tumours and obtained a population-adjusted PPV of 0.67, indicating that early-stage CRC patients who test positive have a 67% risk of developing metastases, substantially higher than the metastasis risk of 40% for node-positive (Stage III) patients, who are generally treated with chemotherapy. Predicted-positive patients also had poorer metastasis-free survival (hazard ratios [HR] = 1.92, design set; HR = 2.05, test set). Thus, early-stage CRC patients who test positive may be considered for adjuvant therapy after surgery. Copyright © 2017 Elsevier B.V. All rights reserved.
Multiplex proteomics for prediction of major cardiovascular events in type 2 diabetes.
Nowak, Christoph; Carlsson, Axel C; Östgren, Carl Johan; Nyström, Fredrik H; Alam, Moudud; Feldreich, Tobias; Sundström, Johan; Carrero, Juan-Jesus; Leppert, Jerzy; Hedberg, Pär; Henriksen, Egil; Cordeiro, Antonio C; Giedraitis, Vilmantas; Lind, Lars; Ingelsson, Erik; Fall, Tove; Ärnlöv, Johan
2018-05-24
Multiplex proteomics could improve understanding and risk prediction of major adverse cardiovascular events (MACE) in type 2 diabetes. This study assessed 80 cardiovascular and inflammatory proteins for biomarker discovery and prediction of MACE in type 2 diabetes. We combined data from six prospective epidemiological studies of 30-77-year-old individuals with type 2 diabetes in whom 80 circulating proteins were measured by proximity extension assay. Multivariable-adjusted Cox regression was used in a discovery/replication design to identify biomarkers for incident MACE. We used gradient-boosted machine learning and lasso regularised Cox regression in a random 75% training subsample to assess whether adding proteins to risk factors included in the Swedish National Diabetes Register risk model would improve the prediction of MACE in the separate 25% test subsample. Of 1211 adults with type 2 diabetes (32% women), 211 experienced a MACE over a mean (±SD) of 6.4 ± 2.3 years. We replicated associations (<5% false discovery rate) between risk of MACE and eight proteins: matrix metalloproteinase (MMP)-12, IL-27 subunit α (IL-27a), kidney injury molecule (KIM)-1, fibroblast growth factor (FGF)-23, protein S100-A12, TNF receptor (TNFR)-1, TNFR-2 and TNF-related apoptosis-inducing ligand receptor (TRAIL-R)2. Addition of the 80-protein assay to established risk factors improved discrimination in the separate test sample from 0.686 (95% CI 0.682, 0.689) to 0.748 (95% CI 0.746, 0.751). A sparse model of 20 added proteins achieved a C statistic of 0.747 (95% CI 0.653, 0.842) in the test sample. We identified eight protein biomarkers, four of which are novel, for risk of MACE in community residents with type 2 diabetes, and found improved risk prediction by combining multiplex proteomics with an established risk model. Multiprotein arrays could be useful in identifying individuals with type 2 diabetes who are at highest risk of a cardiovascular event.
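A rough sketch of the train/test evaluation pattern described above, using synthetic data and the lifelines library: fit a Cox model with and without additional "protein" covariates and compare Harrell's C statistic on a held-out test split. The covariate names are placeholders (not the Swedish National Diabetes Register risk-model variables), and the boosted/lasso variable-selection step is simplified to a fixed ridge penalty.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 1200
df = pd.DataFrame({
    "age": rng.normal(60, 8, n),
    "sbp": rng.normal(140, 15, n),
    "protein1": rng.normal(0, 1, n),       # hypothetical circulating-protein levels
    "protein2": rng.normal(0, 1, n),
})
risk = 0.03 * df["age"] + 0.01 * df["sbp"] + 0.5 * df["protein1"]
df["time"] = rng.exponential(scale=np.exp(-risk) * 10)
df["event"] = rng.binomial(1, 0.7, n)      # crude stand-in for censoring

train, test = df.iloc[:900], df.iloc[900:]

base = CoxPHFitter(penalizer=0.1).fit(train[["age", "sbp", "time", "event"]],
                                      duration_col="time", event_col="event")
full = CoxPHFitter(penalizer=0.1).fit(train, duration_col="time", event_col="event")

for name, model, cols in [("base", base, ["age", "sbp"]),
                          ("with proteins", full, ["age", "sbp", "protein1", "protein2"])]:
    score = model.predict_partial_hazard(test[cols])
    # concordance_index expects higher scores = longer survival, so negate the risk.
    c = concordance_index(test["time"], -score, test["event"])
    print(name, round(c, 3))
```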
2011-01-01
Background Genetic risk models could potentially be useful in identifying high-risk groups for the prevention of complex diseases. We investigated the performance of this risk stratification strategy by examining epidemiological parameters that impact the predictive ability of risk models. Methods We assessed sensitivity, specificity, and positive and negative predictive value for all possible risk thresholds that can define high-risk groups and investigated how these measures depend on the frequency of disease in the population, the frequency of the high-risk group, and the discriminative accuracy of the risk model, as assessed by the area under the receiver-operating characteristic curve (AUC). In a simulation study, we modeled genetic risk scores of 50 genes with equal odds ratios and genotype frequencies, and varied the odds ratios and the disease frequency across scenarios. We also performed a simulation of age-related macular degeneration risk prediction based on published odds ratios and frequencies for six genetic risk variants. Results We show that when the frequency of the high-risk group was lower than the disease frequency, positive predictive value increased with the AUC but sensitivity remained low. When the frequency of the high-risk group was higher than the disease frequency, sensitivity was high but positive predictive value remained low. When both frequencies were equal, both positive predictive value and sensitivity increased with increasing AUC, but higher AUC was needed to maximize both measures. Conclusions The performance of risk stratification is strongly determined by the frequency of the high-risk group relative to the frequency of disease in the population. The identification of high-risk groups with appreciable combinations of sensitivity and positive predictive value requires higher AUC. PMID:21797996
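The dependence of sensitivity and positive predictive value on the size of the high-risk group relative to the disease frequency can be illustrated with a small simulation. The sketch below is not the paper's 50-gene model; it simply draws a continuous risk score whose case/non-case separation corresponds to a chosen AUC and varies the size of the high-risk group.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, disease_freq, auc = 200_000, 0.10, 0.80
d = np.sqrt(2) * norm.ppf(auc)             # mean shift that yields the target AUC

disease = rng.random(n) < disease_freq
score = rng.normal(0, 1, n) + d * disease  # cases score higher on average

for high_risk_freq in (0.05, 0.10, 0.25):  # below, equal to, and above disease_freq
    threshold = np.quantile(score, 1 - high_risk_freq)
    high_risk = score >= threshold
    sensitivity = disease[high_risk].sum() / disease.sum()
    ppv = disease[high_risk].mean()
    print(f"high-risk group {high_risk_freq:.0%}: "
          f"sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")
```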
A simple risk scoring system for prediction of relapse after inpatient alcohol treatment.
Pedersen, Mads Uffe; Hesse, Morten
2009-01-01
Predicting relapse after alcoholism treatment can be useful in targeting patients for aftercare services. However, a valid and practical instrument for predicting relapse risk does not exist. Based on a prospective study of alcoholism treatment, we developed the Risk of Alcoholic Relapse Scale (RARS) using items taken from the Addiction Severity Index and some basic demographic information. The RARS was cross-validated using two non-overlapping samples, and tested for its ability to predict relapse across different models of treatment. The RARS predicted relapse to drinking within 6 months after alcoholism treatment in both the original and the validation sample, and in a second validation sample it predicted admission to new treatment 3 years after treatment. The RARS can identify patients at high risk of relapse who need extra aftercare and support after treatment.
Ronda, Jocelyn; Gaydos, Charlotte A; Perin, Jamie; Tabacco, Lisa; Coleman, Jenell; Trent, Maria
2018-06-04
Mycoplasma genitalium (MG) is a common sexually transmitted infection (STI) but there are limited strategies to identify individuals at risk of MG. Previously a sex risk quiz was used to predict STIs including Chlamydia trachomatis (CT), Neisseria gonorrhoeae (GC), and/or Trichomonas vaginalis (TV). The original quiz categorized individuals ≤25 years old as at risk of STIs, but the Centers for Disease Control identifies females <25 years old as at risk of STIs. In this study, the quiz was changed to categorize females <25 years old as high risk. The objective was to determine if the age-modified risk quiz predicted MG infection. A cross-sectional analysis of a prospective longitudinal study was performed including female adolescents and young adults (AYA) evaluated in multiple outpatient clinics. Participants completed an age-modified risk quiz about sexual practices. Scores ranged from 0 to 10 and were categorized as low-risk (0-3), medium-risk (4-7), and high-risk (8-10) based upon the STI prevalence for each score. Vaginal and/or endocervical specimens were tested for MG, TV, CT, and GC using the Aptima Gen-Probe nucleic amplification test. There were 693 participants. Most participants reported having 0-1 sexual partners in the last 90 days (91%) and inconsistent condom use (84%). Multivariable logistic regression analysis controlling for race, education, and symptom status demonstrated that a medium-risk score predicted MG infection among AYA <25 years old (adjusted OR 2.56 [95% CI 1.06-6.18]). A risk quiz may be useful during clinical encounters to identify AYA at risk of MG.
Calibration plots for risk prediction models in the presence of competing risks.
Gerds, Thomas A; Andersen, Per K; Kattan, Michael W
2014-08-15
A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.
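A hedged sketch of the pseudo-value approach described here: jackknife pseudo-observations of the Aalen-Johansen cumulative incidence at a fixed time horizon, smoothed over predicted risk with a nearest-neighbour average. The cross-validation step used when no independent validation data exist is omitted, and the function names are ours.

```python
import numpy as np

def aj_cif(time, event, t_eval, cause=1):
    """Aalen-Johansen cumulative incidence of `cause` at t_eval.
    event coding: 0 = censored, 1 = cause of interest, 2 = competing event."""
    surv, cif = 1.0, 0.0
    for t in np.unique(time[(event != 0) & (time <= t_eval)]):
        at_risk = np.sum(time >= t)
        cif += surv * np.sum((time == t) & (event == cause)) / at_risk
        surv *= 1.0 - np.sum((time == t) & (event != 0)) / at_risk
    return cif

def pseudo_values(time, event, t_eval, cause=1):
    """Leave-one-out jackknife pseudo-observations of the cumulative incidence."""
    n, full = len(time), aj_cif(time, event, t_eval, cause)
    idx = np.arange(n)
    return np.array([n * full - (n - 1) * aj_cif(time[idx != i], event[idx != i], t_eval, cause)
                     for i in range(n)])

def calibration_curve(pred_risk, time, event, t_eval, n_neighbours=50):
    """Observed (pseudo-value) risk, smoothed by a nearest-neighbour moving average
    over subjects ordered by predicted risk; a calibrated model hugs the diagonal."""
    pv = pseudo_values(time, event, t_eval)
    order = np.argsort(pred_risk)
    pred, pv = np.asarray(pred_risk)[order], pv[order]
    half = n_neighbours // 2
    observed = np.array([pv[max(0, i - half): i + half + 1].mean() for i in range(len(pv))])
    return pred, observed
```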
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component level risk. While consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
Bruijn, Merel M C; Kamphuis, Esme I; Hoesli, Irene M; Martinez de Tejada, Begoña; Loccufier, Anne R; Kühnert, Maritta; Helmer, Hanns; Franz, Marie; Porath, Martina M; Oudijk, Martijn A; Jacquemyn, Yves; Schulzke, Sven M; Vetter, Grit; Hoste, Griet; Vis, Jolande Y; Kok, Marjolein; Mol, Ben W J; van Baaren, Gert-Jan
2016-12-01
The combination of the qualitative fetal fibronectin test and cervical length measurement has a high negative predictive value for preterm birth within 7 days; however, positive prediction is poor. A new bedside quantitative fetal fibronectin test showed potential additional value over the conventional qualitative test, but there is limited evidence on the combination with cervical length measurement. The purpose of this study was to compare quantitative fetal fibronectin and qualitative fetal fibronectin testing in the prediction of spontaneous preterm birth within 7 days in symptomatic women who undergo cervical length measurement. We performed a European multicenter cohort study in 10 perinatal centers in 5 countries. Women between 24 and 34 weeks of gestation with signs of active labor and intact membranes underwent quantitative fibronectin testing and cervical length measurement. We assessed the risk of preterm birth within 7 days in predefined strata based on fibronectin concentration and cervical length. Of 455 women who were included in the study, 48 women (11%) delivered within 7 days. A combination of cervical length and qualitative fibronectin resulted in the identification of 246 women who were at low risk: 164 women with a cervix between 15 and 30 mm and a negative fibronectin test (<50 ng/mL; preterm birth rate, 2%) and 82 women with a cervix at >30 mm (preterm birth rate, 2%). Use of quantitative fibronectin alone resulted in a predicted risk of preterm birth within 7 days that ranged from 2% in the group with the lowest fibronectin level (<10 ng/mL) to 38% in the group with the highest fibronectin level (>500 ng/mL), with similar accuracy as that of the combination of cervical length and qualitative fibronectin. Combining cervical length and quantitative fibronectin resulted in the identification of an additional 19 women at low risk (preterm birth rate, 5%), using a threshold of 10 ng/mL in women with a cervix at <15 mm, and 6 women at high risk (preterm birth rate, 33%) using a threshold of >500 ng/mL in women with a cervix at >30 mm. In women with threatened preterm birth, quantitative fibronectin testing alone performs equal to the combination of cervical length and qualitative fibronectin. Possibly, the combination of quantitative fibronectin testing and cervical length increases this predictive capacity. Cost-effectiveness analysis and the availability of these tests in a local setting should determine the final choice. Copyright © 2016 Elsevier Inc. All rights reserved.
Posturography and risk of recurrent falls in healthy non-institutionalized persons aged over 65.
Buatois, Séverine; Gueguen, René; Gauchard, Gérome C; Benetos, Athanase; Perrin, Philippe P
2006-01-01
A poor postural stability in older people is associated with an increased risk of falling. The posturographic tool has widely been used to assess balance control; however, its value in predicting falls remains unclear. The purpose of this prospective study was to determine the predictive value of posturography in the estimation of the risk of recurrent falls, including a comparison with standard clinical balance tests, in healthy non-institutionalized persons aged over 65. Two hundred and six healthy non-institutionalized volunteers aged over 65 were tested. Postural control was evaluated by posturographic tests, performed on static, dynamic and dynamized platforms (static test, slow dynamic test and Sensory Organization Test [SOT]) and clinical balance tests (Timed 'Up & Go' test, One-Leg Balance, Sit-to-Stand-test). Subsequent falls were monitored prospectively with self-questionnaire sent every 4 months for a period of 16 months after the balance testing. Subjects were classified prospectively in three groups of Non-Fallers (0 fall), Single-Fallers (1 fall) and Multi-Fallers (more than 2 falls). Loss of balance during the last trial of the SOT sensory conflicting condition, when visual and somatosensory inputs were distorted, was the best factor to predict the risk of recurrent falls (OR = 3.6, 95% CI = 1.3-10.11). Multi-Fallers showed no postural adaptation during the repetitive trials of this sensory condition, contrary to Non-Fallers and Single-Fallers. The Multi-Fallers showed significantly more sway when visual inputs were occluded. The clinical balance tests, the static test and the slow dynamic test revealed no significant differences between the groups. In a sample of non-institutionalized older persons aged over 65, posturographic evaluation by the SOT, especially with repetition of the same task in sensory conflicting condition, compared to the clinical tests and the static and dynamic posturographic test, appears to be a more sensitive tool to identify those at high-risk of recurrent falls. Copyright (c) 2006 S. Karger AG, Basel.
Ramirez-Valles, J; Zimmerman, M A; Newcomb, M D
1998-09-01
Sexual activity among high-school-aged youths has steadily increased since the 1970s, emerging as a significant public health concern. Yet, patterns of youth sexual risk behavior are shaped by social class, race, and gender. Based on sociological theories of financial deprivation and collective socialization, we develop and test a model of the relationships among neighborhood poverty; family structure and social class position; parental involvement; prosocial activities; race; and gender as they predict youth sexual risk behavior. We employ structural equation modeling to test this model on a cross-sectional sample of 370 sexually active high-school students from a midwestern city; 57 percent (n = 209) are males and 86 percent are African American. We find that family structure indirectly predicts sexual risk behavior through neighborhood poverty, parental involvement, and prosocial activities. In addition, family class position indirectly predicts sexual risk behavior through neighborhood poverty and prosocial activities. We address implications for theory and health promotion.
Walker, Meghan J; Mirea, Lucia; Glendon, Gord; Ritvo, Paul; Andrulis, Irene L; Knight, Julia A; Chiarelli, Anna M
2014-08-01
While the relationship between perceived risk and breast cancer screening use has been studied extensively, most studies are cross-sectional. We prospectively examined this relationship among 913 women, aged 25-72 with varying levels of familial breast cancer risk from the Ontario site of the Breast Cancer Family Registry. Associations between perceived lifetime breast cancer risk and subsequent use of mammography, clinical breast examination (CBE) and genetic testing were assessed using logistic regression. Overall, perceived risk did not predict subsequent use of mammography, CBE or genetic testing. Among women at moderate/high familial risk, those reporting a perceived risk greater than 50% were significantly less likely to have a CBE (odds ratio (OR) = 0.52, 95% confidence interval (CI): 0.30-0.91, p = 0.04), and non-significantly less likely to have a mammogram (OR = 0.70, 95% CI: 0.40-1.20, p = 0.70) or genetic test (OR = 0.61, 95% CI: 0.34-1.10, p = 0.09) compared to women reporting a perceived risk of 50%. In contrast, among women at low familial risk, those reporting a perceived risk greater than 50% were non-significantly more likely to have a mammogram (OR = 1.13, 95% CI: 0.59-2.16, p = 0.78), CBE (OR = 1.11, 95% CI: 0.63-1.95, p = 0.74) or genetic test (OR = 1.29, 95% CI: 0.50-3.33, p = 0.35) compared to women reporting a perceived risk of 50%. Perceived risk did not significantly predict screening use overall, however this relationship may be moderated by level of familial risk. Results may inform risk education and management strategies for women with varying levels of familial breast cancer risk. Copyright © 2014 Elsevier Ltd. All rights reserved.
Konrad, Sarah K; Miller, Scott N
2012-11-01
A geographical information systems model that identifies regions of the United States of America (USA) susceptible to West Nile virus (WNV) transmission risk is presented. This system has previously been calibrated and tested in the western USA; in this paper we use datasets of WNV-killed birds from South Carolina and Connecticut to test the model in the eastern USA. Because their response to WNV infection is highly predictable, American crows were chosen as the primary source for model calibration and testing. Where crow data are absent, other birds are shown to be an effective substitute. Model results show that the same calibrated model demonstrated to work in the western USA has the same predictive ability in the eastern USA, allowing for a continental-scale evaluation of the transmission risk of WNV at a daily time step. The calibrated model is independent of mosquito species and requires inputs of only local maximum and minimum temperatures. Of benefit to the general public and vector control districts, the model predicts the onset of seasonal transmission risk, although it is less effective at identifying the end of the transmission risk season.
Fulks, Michael; Kaufman, Valerie; Clark, Michael; Stout, Robert L
2017-01-01
To further refine the independent value of NT-proBNP, accounting for the impact of other test results, in predicting all-cause mortality for individual life insurance applicants with and without heart disease. Using the Social Security Death Master File and multivariate analysis, relative mortality was determined for 245,322 life insurance applicants ages 50 to 89 tested for NT-proBNP (almost all based on age and policy amount) along with other laboratory tests and measurement of blood pressure and BMI. NT-proBNP values ≤75 pg/mL included the majority of applicants denying heart disease and had the lowest risk, while values >500 pg/mL for females and >300 pg/mL for males had very high relative risk. Those admitting to heart disease had a higher mortality risk for each band of NT-proBNP relative to those denying heart disease, but had a similar and equally predictive risk curve. NT-proBNP is a strong independent predictor of all-cause mortality in the absence or presence of known heart disease, but the range of values associated with increased risk varies by sex.
Distress among women receiving uninformative BRCA1/2 results: 12-month outcomes.
O'Neill, Suzanne C; Rini, Christine; Goldsmith, Rachel E; Valdimarsdottir, Heiddis; Cohen, Lawrence H; Schwartz, Marc D
2009-10-01
Few data are available regarding the long-term psychological impact of uninformative BRCA1/2 test results. This study examines change in distress from pretesting to 12 months post-disclosure, with medical, family history, and psychological variables, such as perceived risk of carrying a deleterious mutation prior to testing and primary and secondary appraisals, as predictors. Two hundred and nine women with uninformative BRCA1/2 test results completed questionnaires at pretesting and at 1, 6, and 12 months post-disclosure, including measures of anxiety and depression, cancer-specific distress, and genetic testing distress. We used a mixed models approach to predict change in post-disclosure distress. Distress declined from pretesting to 1 month post-disclosure, but remained stable thereafter. Primary appraisals predicted all types of distress at 1 month post-disclosure. Primary and secondary appraisals predicted genetic testing distress at 1 month as well as change over time. Receiving a variant of uncertain clinical significance and entering testing with a high expectation of carrying a deleterious mutation predicted genetic testing distress that persisted through the year after testing. As a whole, women receiving uninformative BRCA1/2 test results are a resilient group. For some women, distress experienced in the month after testing does not dissipate. Variables such as heightened pretesting perceived risk and cognitive appraisals predict greater likelihood of sustained distress in this group and could be amenable to intervention.
Combining Gene Signatures Improves Prediction of Breast Cancer Survival
Zhao, Xi; Naume, Bjørn; Langerød, Anita; Frigessi, Arnoldo; Kristensen, Vessela N.; Børresen-Dale, Anne-Lise; Lingjærde, Ole Christian
2011-01-01
Background: Several gene sets for prediction of breast cancer survival have been derived from whole-genome mRNA expression profiles. Here, we develop a statistical framework to explore whether combination of the information from such sets may improve prediction of recurrence and breast cancer specific death in early-stage breast cancers. Microarray data from two clinically similar cohorts of breast cancer patients are used as training (n = 123) and test set (n = 81), respectively. Gene sets from eleven previously published gene signatures are included in the study. Principal Findings: To investigate the relationship between breast cancer survival and gene expression on a particular gene set, a Cox proportional hazards model is applied using partial likelihood regression with an L2 penalty to avoid overfitting and using cross-validation to determine the penalty weight. The fitted models are applied to an independent test set to obtain a predicted risk for each individual and each gene set. Hierarchical clustering of the test individuals on the basis of the vector of predicted risks results in two clusters with distinct clinical characteristics in terms of the distribution of molecular subtypes, ER, PR status, TP53 mutation status and histological grade category, and associated with significantly different survival probabilities (recurrence: p = 0.005; breast cancer death: p = 0.014). Finally, principal components analysis of the gene signatures is used to derive combined predictors used to fit a new Cox model. This model classifies test individuals into two risk groups with distinct survival characteristics (recurrence: p = 0.003; breast cancer death: p = 0.001). The latter classifier outperforms all the individual gene signatures, as well as Cox models based on traditional clinical parameters and Adjuvant! Online for survival prediction. Conclusion: Combining the predictive strength of multiple gene signatures improves prediction of breast cancer survival. The presented methodology is broadly applicable to breast cancer risk assessment using any newly identified gene set. PMID:21423775
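An illustrative sketch of an L2-penalized Cox model fitted to gene-set covariates and scored on an independent test set, in the spirit of the framework described above; the column names, penalty weight, and simulated data are assumptions, not the authors' code:

```python
# Fits a ridge-penalized Cox model on a training cohort and predicts a risk
# score (log partial hazard) for each individual in a test cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
genes = [f"g{i}" for i in range(20)]

def simulate(n):
    df = pd.DataFrame(rng.normal(size=(n, len(genes))), columns=genes)
    df["time"] = rng.exponential(5, n)    # follow-up time (toy)
    df["event"] = rng.integers(0, 2, n)   # recurrence / death indicator (toy)
    return df

train, test = simulate(123), simulate(81)

# l1_ratio=0 gives a pure L2 (ridge) penalty; in practice the penalty weight
# would be chosen by cross-validation rather than fixed as here.
cph = CoxPHFitter(penalizer=0.5, l1_ratio=0.0)
cph.fit(train, duration_col="time", event_col="event")

test_risk = cph.predict_partial_hazard(test[genes])  # predicted risk per person
print(test_risk.head())
```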
Pretreatment data is highly predictive of liver chemistry signals in clinical trials
Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T
2012-01-01
Purpose: The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Patients and methods: Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Results: Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy’s law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. Conclusion: It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones. PMID:23226004
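A rough sketch, with invented baseline features and a toy outcome, of training and cross-validating a classifier on pretreatment data as described above; it is not the authors' model:

```python
# Cross-validated accuracy of a classifier that flags patients at risk of an
# on-trial liver chemistry signal from baseline laboratory values (toy data).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
baseline = pd.DataFrame({
    "alt": rng.lognormal(3.0, 0.4, n),         # baseline ALT
    "ast": rng.lognormal(3.0, 0.4, n),         # baseline AST
    "bilirubin": rng.lognormal(-0.3, 0.5, n),  # baseline total bilirubin
    "age": rng.uniform(18, 80, n),
})
# Hypothetical outcome loosely tied to baseline ALT.
signal = (baseline["alt"] + rng.normal(0, 10, n)
          > np.percentile(baseline["alt"], 85)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, baseline, signal, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")
```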
Decruyenaere, M; Evers-Kiebooms, G; Boogaerts, A; Cassiman, J J; Cloostermans, T; Demyttenaere, K; Dom, R; Fryns, J P; Van den Berghe, H
1996-01-01
For people at risk for Huntington's disease, the anxiety and uncertainty about the future may be very burdensome and may be an obstacle to personal decision making about important life issues, for example, procreation. For some at-risk persons, this situation is the reason for requesting predictive DNA testing. The aim of this paper is two-fold. First, we want to evaluate whether knowing one's carrier status reduces anxiety and uncertainty and whether it facilitates decision making about procreation. Second, we endeavour to identify pretest predictors of psychological adaptation one year after the predictive test (psychometric evaluation of general anxiety, depression level, and ego strength). The impact of the predictive test result was assessed in 53 subjects tested, using pre- and post-test psychometric measurement and self-report data from follow-up interviews. Mean anxiety and depression levels were significantly decreased one year after a good test result; there was no significant change in the case of a bad test result. The mean personality profile, including ego strength, remained unchanged one year after the test. The study further shows that the test result had a definite impact on reproductive decision making. Stepwise multiple regression analyses were used to select the best predictors of the subject's post-test reactions. The results indicate that a careful evaluation of pretest ego strength, depression level, and coping strategies may be helpful in predicting post-test reactions, independently of the carrier status. Test result (carrier/non-carrier), gender, and age did not significantly contribute to the prediction. About one third of the variance of post-test anxiety and depression level and more than half of the variance of ego strength was explained, implying that other psychological or social aspects should also be taken into account when predicting individual post-test reactions. PMID:8880572
Developing a java android application of KMV-Merton default rate model
NASA Astrophysics Data System (ADS)
Yusof, Norliza Muhamad; Anuar, Aini Hayati; Isa, Norsyaheeda Natasha; Zulkafli, Sharifah Nursyuhada Syed; Sapini, Muhamad Luqman
2017-11-01
This paper presents a developed Java Android application for the KMV-Merton model in predicting the default rate of a firm. Predicting the default rate is essential in the risk management area, as default risk can be immediately transmitted from one entity to another entity. This is the reason default risk is known as a global risk. Although there are several efforts, instruments and methods used to manage the risk, they are said to be insufficient. To the best of our knowledge, there has been limited innovation in developing the default risk mathematical model into a mobile application. Therefore, through this study, default risk is predicted quantitatively using the KMV-Merton model. The KMV-Merton model has been integrated in the form of a Java program using the Android Studio software. The developed Java Android application is tested by predicting the levels of default risk of three differently rated companies. It is found that the levels of default risk are equivalent to the ratings of the respective companies. This shows that the default rate predicted by the KMV-Merton model using the developed Java Android application can be a significant tool for the risk management field. The developed Java Android application grants users an alternative to predict the level of default risk with fewer steps.
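For orientation, a compact sketch of the Merton-style calculation behind the KMV-Merton model named above: it solves the two standard Merton equations for asset value and asset volatility, then reports a distance to default and the implied default probability. The inputs are illustrative, and this is not the authors' Android implementation:

```python
# Merton/KMV-style distance to default from equity value, equity volatility,
# debt face value, and a risk-free rate, over a one-year horizon.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def kmv_merton(E, sigma_E, D, r, T=1.0):
    """E: equity value, sigma_E: equity volatility, D: debt face value,
    r: risk-free rate, T: horizon in years."""
    def equations(x):
        V, sigma_V = x
        d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
        d2 = d1 - sigma_V * np.sqrt(T)
        eq1 = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E
        eq2 = norm.cdf(d1) * sigma_V * V - sigma_E * E
        return [eq1, eq2]

    V, sigma_V = fsolve(equations, x0=[E + D, sigma_E * E / (E + D)])
    d2 = (np.log(V / D) + (r - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return d2, norm.cdf(-d2)   # distance to default, default probability

dd, prob_default = kmv_merton(E=3_000, sigma_E=0.45, D=10_000, r=0.03)
print(f"distance to default = {dd:.2f}, default probability = {prob_default:.4f}")
```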
Prediction of Dementia in Primary Care Patients
Jessen, Frank; Wiese, Birgitt; Bickel, Horst; Eiffländer-Gorfer, Sandra; Fuchs, Angela; Kaduszkiewicz, Hanna; Köhler, Mirjam; Luck, Tobias; Mösch, Edelgard; Pentzek, Michael; Riedel-Heller, Steffi G.; Wagner, Michael; Weyerer, Siegfried; Maier, Wolfgang; van den Bussche, Hendrik
2011-01-01
Background: Current approaches for AD prediction are based on biomarkers, which are however of restricted availability in primary care. AD prediction tools for primary care are therefore needed. We present a prediction score based on information that can be obtained in the primary care setting. Methodology/Principal Findings: We performed a longitudinal cohort study in 3,055 non-demented individuals above 75 years recruited via primary care chart registries (Study on Aging, Cognition and Dementia, AgeCoDe). After the baseline investigation we performed three follow-up investigations at 18-month intervals with incident dementia as the primary outcome. The best set of predictors was extracted from the baseline variables in one randomly selected half of the sample. This set included age, subjective memory impairment, performance on delayed verbal recall and verbal fluency, on the Mini-Mental State Examination, and on an instrumental activities of daily living scale. These variables were aggregated to a prediction score, which achieved a prediction accuracy of 0.84 for AD. The score was applied to the second half of the sample (test cohort). Here, the prediction accuracy was 0.79. With a cut-off of at least 80% sensitivity in the first cohort, 79.6% sensitivity, 66.4% specificity, 14.7% positive predictive value (PPV) and 97.8% negative predictive value (NPV) for AD were achieved in the test cohort. At a cut-off for a high-risk population (5% of individuals with the highest risk score in the first cohort) the PPV for AD was 39.1% (52% for any dementia) in the test cohort. Conclusions: The prediction score has useful prediction accuracy. It can define individuals (1) sensitively for low-cost, low-risk interventions, or (2) more specifically and with increased PPV for measures of prevention with greater costs or risks. As it is independent of technical aids, it may be used within large-scale prevention programs. PMID:21364746
Prediction of dementia in primary care patients.
Jessen, Frank; Wiese, Birgitt; Bickel, Horst; Eiffländer-Gorfer, Sandra; Fuchs, Angela; Kaduszkiewicz, Hanna; Köhler, Mirjam; Luck, Tobias; Mösch, Edelgard; Pentzek, Michael; Riedel-Heller, Steffi G; Wagner, Michael; Weyerer, Siegfried; Maier, Wolfgang; van den Bussche, Hendrik
2011-02-18
Current approaches for AD prediction are based on biomarkers, which are however of restricted availability in primary care. AD prediction tools for primary care are therefore needed. We present a prediction score based on information that can be obtained in the primary care setting. We performed a longitudinal cohort study in 3,055 non-demented individuals above 75 years recruited via primary care chart registries (Study on Aging, Cognition and Dementia, AgeCoDe). After the baseline investigation we performed three follow-up investigations at 18-month intervals with incident dementia as the primary outcome. The best set of predictors was extracted from the baseline variables in one randomly selected half of the sample. This set included age, subjective memory impairment, performance on delayed verbal recall and verbal fluency, on the Mini-Mental State Examination, and on an instrumental activities of daily living scale. These variables were aggregated to a prediction score, which achieved a prediction accuracy of 0.84 for AD. The score was applied to the second half of the sample (test cohort). Here, the prediction accuracy was 0.79. With a cut-off of at least 80% sensitivity in the first cohort, 79.6% sensitivity, 66.4% specificity, 14.7% positive predictive value (PPV) and 97.8% negative predictive value (NPV) for AD were achieved in the test cohort. At a cut-off for a high-risk population (5% of individuals with the highest risk score in the first cohort) the PPV for AD was 39.1% (52% for any dementia) in the test cohort. The prediction score has useful prediction accuracy. It can define individuals (1) sensitively for low-cost, low-risk interventions, or (2) more specifically and with increased PPV for measures of prevention with greater costs or risks. As it is independent of technical aids, it may be used within large-scale prevention programs.
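A small worked example (not from the paper) of how sensitivity, specificity, and an assumed incidence combine into predictive values via Bayes' rule; with an illustrative incidence near 7%, the reported 79.6% sensitivity and 66.4% specificity give values close to the quoted 14.7% PPV and 97.8% NPV:

```python
# Predictive values from sensitivity, specificity, and prevalence (illustrative).
def predictive_values(sensitivity, specificity, prevalence):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Assumed incidence of ~7% is an illustration, not a figure from the study.
ppv, npv = predictive_values(0.796, 0.664, 0.07)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```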
Koscik, Rebecca L; Berman, Sara E; Clark, Lindsay R; Mueller, Kimberly D; Okonkwo, Ozioma C; Gleason, Carey E; Hermann, Bruce P; Sager, Mark A; Johnson, Sterling C
2016-11-01
Intraindividual cognitive variability (IICV) has been shown to differentiate between groups with normal cognition, mild cognitive impairment (MCI), and dementia. This study examined whether baseline IICV predicted subsequent mild to moderate cognitive impairment in a cognitively normal baseline sample. Participants with 4 waves of cognitive assessment were drawn from the Wisconsin Registry for Alzheimer's Prevention (WRAP; n=684; mean (SD) baseline age 53.6 (6.6) years; 9.1 (1.0) years of follow-up; 70% female; 74.6% parental history of Alzheimer's disease). The primary outcome was Wave 4 cognitive status ("cognitively normal" vs. "impaired") determined by consensus conference; "impaired" included early MCI (n=109), clinical MCI (n=11), or dementia (n=1). Primary predictors included two IICV variables, each based on the standard deviation of a set of scores: "6 Factor IICV" and "4 Test IICV". Each IICV variable was tested in a series of logistic regression models to determine whether IICV predicted cognitive status. In exploratory analyses, distribution-based cutoffs incorporating memory, executive function, and IICV patterns were used to create and test an MCI risk variable. Results were similar for the IICV variables: higher IICV was associated with greater risk of subsequent impairment after covariate adjustment. After adjusting for the memory and executive functioning scores contributing to IICV, IICV was not significant. The MCI risk variable also predicted risk of impairment. While IICV in middle age predicts subsequent impairment, it is a weaker risk indicator than the memory and executive function scores contributing to its calculation. Exploratory analyses suggest potential to incorporate IICV patterns into risk assessment in clinical settings. (JINS, 2016, 22, 1016-1025).
The interpersonal theory of suicide and adolescent suicidal behavior.
Barzilay, S; Feldman, D; Snir, A; Apter, A; Carli, V; Hoven, C W; Wasserman, C; Sarchiapone, M; Wasserman, D
2015-09-01
Joiner's interpersonal theory of suicide (IPTS) proposes that suicide results from the combination of a perception of burdening others, social alienation, and the capability for self-harm. The theory has gained some empirical support; however, the overall model has yet to be tested. This study aimed to test the main predictions of IPTS in a large community sample of Israeli adolescents. 1196 Israeli Jewish and Arab high-school pupils participating in the SEYLE project completed a self-report questionnaire measuring perceived burdensomeness, thwarted belongingness, health risk behaviors, and non-suicidal self-injury (risk variables), and suicidal ideation and suicide attempts (outcome measures). The data were tested in cross-sectional regression models. Consistent with IPTS, perceived burdensomeness was found to interact with thwarted belongingness in predicting suicidal ideation. Depression mediated most of the effect of thwarted belongingness and perceived burdensomeness on suicidal ideation. Acquired capability for self-harm, as measured by health risk behaviors and direct non-suicidal self-injurious behaviors, predicted suicide attempt. However, this mechanism operated independently from ideation rather than in interaction with it, at variance with IPTS-based predictions. The cross-sectional design precludes conclusions about causality and directionality. Proxy measures were used to test the interpersonal theory constructs. The findings support some of the IPTS predictions but not all, and imply two separate pathways for suicidal behavior in adolescents: one related to internalizing psychopathology and the other to self-harm behaviors. This conceptualization has clinical implications for the differential identification of adolescents at risk for suicidal behavior and for the development of prevention strategies. Copyright © 2015 Elsevier B.V. All rights reserved.
Nambi, Vijay; Chambless, Lloyd; He, Max; Folsom, Aaron R; Mosley, Tom; Boerwinkle, Eric; Ballantyne, Christie M
2012-01-01
Carotid intima-media thickness (CIMT) and plaque information can improve coronary heart disease (CHD) risk prediction when added to traditional risk factors (TRF). However, obtaining adequate images of all carotid artery segments (A-CIMT) may be difficult. Of A-CIMT, the common carotid artery intima-media thickness (CCA-IMT) is relatively more reliable and easier to measure. We evaluated whether CCA-IMT is comparable to A-CIMT when added to TRF and plaque information in improving CHD risk prediction in the Atherosclerosis Risk in Communities (ARIC) study. Ten-year CHD risk prediction models using TRF alone, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque were developed for the overall cohort, men, and women. The area under the receiver operator characteristic curve (AUC), per cent of individuals reclassified, net reclassification index (NRI), and model calibration by the Grønnesby-Borgan test were estimated. There were 1722 incident CHD events in 12 576 individuals over a mean follow-up of 15.2 years. The AUCs for the TRF-only, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque models were 0.741, 0.754, and 0.753, respectively. Although there was some discordance when the CCA-IMT + plaque- and A-CIMT + plaque-based risk estimations were compared, the NRI and clinical NRI (NRI in the intermediate-risk group) when comparing the CIMT models with the TRF-only model, per cent reclassified, and test for model calibration were not significantly different. Coronary heart disease risk prediction can be improved by adding A-CIMT + plaque or CCA-IMT + plaque information to TRF. Therefore, evaluating the carotid artery for plaque presence and measuring CCA-IMT, which is easier and more reliable than measuring A-CIMT, provide a good alternative to measuring A-CIMT for CHD risk prediction.
Bluett, E J; Lee, E B; Simone, M; Lockhart, G; Twohig, M P; Lensegrav-Benson, Tera; Quakenbush-Roberts, Benita
2016-12-01
The purpose of this study was to test whether pre-treatment levels of psychological flexibility would longitudinally predict quality of life and eating disorder risk in patients at a residential treatment facility for eating disorders. Data on body image psychological flexibility, quality of life, and eating disorder risk were collected from 63 adolescent and 50 adult female residential patients (N=113) diagnosed with an eating disorder. These same measures were again collected at post-treatment. Sequential multiple regression analyses were performed to test whether pre-treatment levels of psychological flexibility longitudinally predicted quality of life and eating disorder risk after controlling for age and baseline effects. Pre-treatment psychological flexibility significantly predicted post-treatment quality of life, with approximately 19% of the variation being attributable to age and pre-treatment psychological flexibility. Pre-treatment psychological flexibility also significantly predicted post-treatment eating disorder risk, with nearly 30% of the variation attributed to age and pre-treatment psychological flexibility. This study suggests that levels of psychological flexibility upon entering treatment for an eating disorder longitudinally predict eating disorder outcome and quality of life. Copyright © 2016 Elsevier Ltd. All rights reserved.
Critchley-Thorne, Rebecca J; Davison, Jon M; Prichard, Jeffrey W; Reese, Lia M; Zhang, Yi; Repa, Kathleen; Li, Jinhong; Diehl, David L; Jhala, Nirag C; Ginsberg, Gregory G; DeMarshall, Maureen; Foxwell, Tyler; Jobe, Blair A; Zaidi, Ali H; Duits, Lucas C; Bergman, Jacques J G H M; Rustgi, Anil; Falk, Gary W
2017-02-01
There is a need for improved tools to detect high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) in patients with Barrett's esophagus. In previous work, we demonstrated that a 3-tier classifier predicted risk of incident progression in Barrett's esophagus. Our aim was to determine whether this risk classifier could detect a field effect in nondysplastic (ND), indefinite for dysplasia (IND), or low-grade dysplasia (LGD) biopsies from Barrett's esophagus patients with prevalent HGD/EAC. We performed a multi-institutional case-control study to evaluate a previously developed risk classifier that is based upon quantitative image features derived from 9 biomarkers and morphology, and predicts risk for HGD/EAC in Barrett's esophagus patients. The risk classifier was evaluated in ND, IND, and LGD biopsies from Barrett's esophagus patients diagnosed with HGD/EAC on repeat endoscopy (prevalent cases, n = 30, median time to HGD/EAC diagnosis 140.5 days) and nonprogressors (controls, n = 145, median HGD/EAC-free surveillance time 2,015 days). The risk classifier stratified prevalent cases and non-progressor patients into low-, intermediate-, and high-risk classes [OR, 46.0; 95% confidence interval, 14.86-169 (high-risk vs. low-risk); P < 0.0001]. The classifier also provided independent prognostic information that outperformed the subspecialist and generalist diagnosis. A tissue systems pathology test better predicts prevalent HGD/EAC in Barrett's esophagus patients than pathologic variables. The results indicate that molecular and cellular changes associated with malignant transformation in Barrett's esophagus may be detectable as a field effect using the test. A tissue systems pathology test may provide an objective method to facilitate earlier identification of Barrett's esophagus patients requiring therapeutic intervention. Cancer Epidemiol Biomarkers Prev; 26(2); 240-8. ©2016 AACR. ©2016 American Association for Cancer Research.
Henderson, Steven; Woods-Fry, Heather; Collin, Charles A; Gagnon, Sylvain; Voloaca, Misha; Grant, John; Rosenthal, Ted; Allen, Wade
2015-05-01
Our research group has previously demonstrated that the peripheral motion contrast threshold (PMCT) test predicts older drivers' self-reported accident risk, as well as simulated driving performance. However, the PMCT is too lengthy to be part of a battery of tests to assess fitness to drive. Therefore, we have developed a new version of this test, which takes under two minutes to administer. We assessed the motion contrast thresholds of 24 younger drivers (aged 19-32) and 25 older drivers (aged 65-83) with both the PMCT-10min and the PMCT-2min test and investigated whether thresholds were associated with measures of simulated driving performance. Younger participants had significantly lower motion contrast thresholds than older participants, and there were no significant correlations between younger participants' thresholds and any measures of driving performance. The PMCT-10min and PMCT-2min thresholds of older drivers predicted simulated crash risk, as well as the minimum distance of approach to all hazards. This suggests that our tests of motion processing can help predict the risk of collision or near collision in older drivers. Thresholds were also correlated with the total lane deviation time, suggesting a deficiency in processing of peripheral flow and delayed detection of adjacent cars. The PMCT-2min is an improved version of a previously validated test, and it has the potential to help assess older drivers' fitness to drive. Copyright © 2015 Elsevier Ltd. All rights reserved.
Deckel, A W; Hesselbrock, V; Bauer, L
1995-04-01
This experiment examined the relationship between anterior brain functioning and alcohol-related expectancies. Ninety-one young men at risk for developing alcoholism were assessed on the Alcohol Expectancy Questionnaire (AEQ) and administered neuropsychological and EEG tests. Three of the scales on the AEQ, including the "Enhanced Sexual Functioning" scale, the "Increased Social Assertiveness" scale, and items from the "Global/Positive Change" scale, were used, because each of these scales has been found to discriminate alcohol-based expectancies adequately by at least two separate sets of investigators. Regression analysis found that anterior neuropsychological tests (including the Wisconsin Card Sorting test, the Porteus Maze test, the Controlled Oral Word Fluency test, and the Luria-Nebraska motor functioning tests) were predictive of the AEQ scale scores. One of the AEQ scales, "Enhanced Sexual Functioning," was also predicted by the WAIS-R Verbal scales, whereas the "Global/Positive" AEQ scale was predicted by the WAIS-R Performance scales. Regression analysis using EEG power as predictors found that left versus right hemisphere "difference" scores obtained from frontal EEG leads were predictive of the three AEQ scales. Conversely, parietal EEG power did not significantly predict any of the expectancy scales. It is concluded that anterior brain functioning is associated with alcohol-related expectancies. These findings suggest that alcohol-related expectancy may be, in part, biologically determined by frontal/prefrontal systems, and that dysfunctioning in these systems may serve as a risk factor for the development of alcohol-related behaviors.
Mariño, Tania Cruz; Armiñán, Rubén Reynaldo; Cedeño, Humberto Jorge; Mesa, José Miguel Laffita; Zaldivar, Yanetza González; Rodríguez, Raúl Aguilera; Santos, Miguel Velázquez; Mederos, Luis Enrique Almaguer; Herrera, Milena Paneque; Pérez, Luis Velázquez
2011-06-01
Predictive testing protocols are intended to help patients affected with hereditary conditions understand their condition and make informed reproductive choices. However, predictive protocols may expose clinicians and patients to ethical dilemmas that interfere with genetic counseling and the decision making process. This paper describes ethical dilemmas in a series of five cases involving predictive testing for hereditary ataxias in Cuba. The examples herein present evidence of the deeply controversial situations faced by both individuals at risk and professionals in charge of these predictive studies, suggesting a need for expanded guidelines to address such complexities.
Gim, Jungsoo; Kim, Wonji; Kwak, Soo Heon; Choi, Hosik; Park, Changyi; Park, Kyong Soo; Kwon, Sunghoon; Park, Taesung; Won, Sungho
2017-11-01
Despite the many successes of genome-wide association studies (GWAS), the known susceptibility variants identified by GWAS have modest effect sizes, leading to notable skepticism about the effectiveness of building a risk prediction model from large-scale genetic data. However, in contrast to genetic variants, the family history of diseases has been largely accepted as an important risk factor in clinical diagnosis and risk prediction. Nevertheless, the complicated structures of the family history of diseases have limited their application in clinical practice. Here, we developed a new method that enables incorporation of the general family history of diseases with a liability threshold model, and propose a new analysis strategy for risk prediction with penalized regression analysis that incorporates both large numbers of genetic variants and clinical risk factors. Application of our model to type 2 diabetes in the Korean population (1846 cases and 1846 controls) demonstrated that single-nucleotide polymorphisms accounted for 32.5% of the variation explained by the predicted risk scores in the test data set, and incorporation of family history led to an additional 6.3% improvement in prediction. Our results illustrate that family medical history provides valuable information on the variation of complex diseases and improves prediction performance. Copyright © 2017 by the Genetics Society of America.
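A hedged sketch of the general strategy described above: a penalized logistic model combining SNP genotypes with a simple family-history covariate. The data are simulated and the family-history term is a plain indicator, not the authors' liability-threshold construction:

```python
# Penalized logistic regression on SNP allele counts plus a family-history flag,
# evaluated by AUC on a held-out split (toy labels, illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n, p = 3692, 200
snps = rng.integers(0, 3, size=(n, p))            # 0/1/2 minor-allele counts
family_history = rng.integers(0, 2, size=(n, 1))  # affected first-degree relative
X = np.hstack([snps, family_history])
y = rng.integers(0, 2, n)                         # case/control labels (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegressionCV(Cs=10, penalty="l2", cv=5, max_iter=2000)
model.fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```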
We applied a generic approach to estimate and test predictions of population risks of mercury (Hg) exposure and habitat alteration on common loons (Gavia immer) breeding in New Hampshire (NH), USA. We developed a publicly accessible data system, integrating environmental data ...
Pepper, Gillian V; Nettle, Daniel
2014-09-01
Socioeconomic gradients in health behavior are pervasive and well documented. Yet, there is little consensus on their causes. Behavioral ecological theory predicts that, if people of lower socioeconomic position (SEP) perceive greater personal extrinsic mortality risk than those of higher SEP, they should disinvest in their future health. We surveyed North American adults for reported effort in looking after health, perceived extrinsic and intrinsic mortality risks, and measures of SEP. We examined the relationships between these variables and found that lower subjective SEP predicted lower reported health effort. Lower subjective SEP was also associated with higher perceived extrinsic mortality risk, which in turn predicted lower reported health effort. The effect of subjective SEP on reported health effort was completely mediated by perceived extrinsic mortality risk. Our findings indicate that perceived extrinsic mortality risk may be a key factor underlying SEP gradients in motivation to invest in future health.
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or the probability of failure. While the consequence of failure is often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
Risk factors for Apgar score using artificial neural networks.
Ibrahim, Doaa; Frize, Monique; Walker, Robin C
2006-01-01
Artificial Neural Networks (ANNs) have been used in identifying the risk factors for many medical outcomes. In this paper, the risk factors for a low Apgar score are introduced. This is the first time, to our knowledge, that ANNs have been used for Apgar score prediction. The medical domain of interest is the perinatal database provided by the Perinatal Partnership Program of Eastern and Southeastern Ontario (PPPESO). The ability of feed-forward back-propagation ANNs to generate a strong predictive model with the most influential variables is tested. Finally, minimal sets of variables (risk factors) that are important in predicting the Apgar score outcome without degrading the ANN performance are identified.
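A minimal sketch, with invented perinatal features and a toy outcome, of a feed-forward back-propagation network of the kind described above; it is not the authors' network or data:

```python
# Trains a small multilayer perceptron to predict a binary low-Apgar outcome
# from a few perinatal variables and reports held-out accuracy.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.uniform(22, 43, n),      # gestational age (weeks)
    rng.normal(3300, 500, n),    # birth weight (g)
    rng.integers(0, 2, n),       # emergency caesarean indicator
])
low_apgar = (X[:, 0] + rng.normal(0, 3, n) < 30).astype(int)  # toy outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, low_apgar, test_size=0.25, random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0))
net.fit(X_tr, y_tr)
print("held-out accuracy:", round(net.score(X_te, y_te), 3))
```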
Sex similarities and differences in risk factors for recurrence of major depression.
van Loo, Hanna M; Aggen, Steven H; Gardner, Charles O; Kendler, Kenneth S
2017-11-27
Major depression (MD) occurs about twice as often in women as in men, but it is unclear whether sex differences subsist after disease onset. This study aims to elucidate potential sex differences in rates and risk factors for MD recurrence, in order to improve prediction of course of illness and understanding of its underlying mechanisms. We used prospective data from a general population sample (n = 653) that experienced a recent episode of MD. A diverse set of potential risk factors for recurrence of MD was analyzed using Cox models subject to elastic net regularization for males and females separately. Accuracy of the prediction models was tested in same-sex and opposite-sex test data. Additionally, interactions between sex and each of the risk factors were investigated to identify potential sex differences. Recurrence rates and the impact of most risk factors were similar for men and women. For both sexes, prediction models were highly multifactorial including risk factors such as comorbid anxiety, early traumas, and family history. Some subtle sex differences were detected: for men, prediction models included more risk factors concerning characteristics of the depressive episode and family history of MD and generalized anxiety, whereas for women, models included more risk factors concerning early and recent adverse life events and socioeconomic problems. No prominent sex differences in risk factors for recurrence of MD were found, potentially indicating similar disease maintaining mechanisms for both sexes. Course of MD is a multifactorial phenomenon for both males and females.
Li, Wen; Zhao, Li-Zhong; Ma, Dong-Wang; Wang, De-Zheng; Shi, Lei; Wang, Hong-Lei; Dong, Mo; Zhang, Shu-Yi; Cao, Lei; Zhang, Wei-Hua; Zhang, Xi-Peng; Zhang, Qing-Huai; Yu, Lin; Qin, Hai; Wang, Xi-Mo; Chen, Sam Li-Sheng
2018-05-01
We aimed to predict colorectal cancer (CRC) based on the demographic features and clinical correlates of personal symptoms and signs from Tianjin community-based CRC screening data. A total of 891,199 residents who were aged 60 to 74 and were screened in 2012 were enrolled. The Lasso logistic regression model was used to identify the predictors for CRC. Predictive validity was assessed by the receiver operating characteristic (ROC) curve. The bootstrapping method was also performed to validate this prediction model. CRC was best predicted by a model that included age, sex, education level, occupations, diarrhea, constipation, colon mucosa and bleeding, gallbladder disease, a stressful life event, family history of CRC, and a positive fecal immunochemical test (FIT). The area under the curve (AUC) for the questionnaire with a FIT was 84% (95% CI: 82%-86%), followed by 76% (95% CI: 74%-79%) for a FIT alone, and 73% (95% CI: 71%-76%) for the questionnaire alone. With 500 bootstrap replications, the estimated optimism (<0.005) showed good discrimination in validation of the prediction model. A risk prediction model for CRC based on a series of symptoms and signs related to enteric diseases in combination with a FIT was developed from the first round of screening. The results of the current study are useful for increasing the awareness of high-risk subjects and for individual-risk-guided invitations or strategies to achieve mass screening for CRC.
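Illustrative only: an L1-penalized (Lasso) logistic regression combining hypothetical questionnaire items with a FIT result, evaluated by ROC AUC on simulated data, mirroring the modelling strategy described above:

```python
# Lasso-penalized logistic regression for a rare binary outcome, scored by AUC.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
data = pd.DataFrame({
    "age": rng.uniform(60, 74, n),
    "male": rng.integers(0, 2, n),
    "diarrhea": rng.integers(0, 2, n),
    "constipation": rng.integers(0, 2, n),
    "family_history_crc": rng.integers(0, 2, n),
    "fit_positive": rng.integers(0, 2, n),
})
# Toy outcome loosely driven by a positive FIT and family history.
p = 0.002 + 0.02 * data["fit_positive"] + 0.005 * data["family_history_crc"]
crc = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(data, crc, test_size=0.3, random_state=0)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
lasso.fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, lasso.predict_proba(X_te)[:, 1]), 2))
```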
Breast Cancer Risk Prediction and Mammography Biopsy Decisions
Armstrong, Katrina; Handorf, Elizabeth A.; Chen, Jinbo; Demeter, Mirar N. Bristol
2012-01-01
Background: Controversy continues about screening mammography, in part because of the risk of false-negative and false-positive mammograms. Pre-test breast cancer risk factors may improve the positive and negative predictive value of screening. Purpose: To create a model that estimates the potential impact of pre-test risk prediction using clinical and genomic information on the reclassification of women with abnormal mammograms (BI-RADS3 and BI-RADS4 [Breast Imaging-Reporting and Data System]) above and below the threshold for breast biopsy. Methods: The current study modeled 1-year breast cancer risk in women with abnormal screening mammograms using existing data on breast cancer risk factors, 12 validated breast cancer single nucleotide polymorphisms (SNPs), and probability of cancer given the BI-RADS category. Examination was made of reclassification of women above and below biopsy thresholds of 1%, 2%, and 3% risk. The Breast Cancer Surveillance Consortium data were collected from 1996 to 2002. Data analysis was conducted in 2010 and 2011. Results: Using a biopsy risk threshold of 2% and the standard risk factor model, 5% of women with a BI-RADS3 mammogram had a risk above the threshold, and 3% of women with BI-RADS4A mammograms had a risk below the threshold. The addition of 12 SNPs in the model resulted in 8% of women with a BI-RADS3 mammogram above the threshold for biopsy and 7% of women with BI-RADS4A mammograms below the threshold. Conclusions: The incorporation of pre-test breast cancer risk factors could change biopsy decisions for a small proportion of women with abnormal mammograms. The greatest impact comes from standard breast cancer risk factors. PMID:23253645
Foreman, K. Bo; Addison, Odessa; Kim, Han S.; Dibble, Leland E.
2010-01-01
Introduction: Despite clear deficits in postural control, most clinical examination tools lack accuracy in identifying persons with Parkinson disease (PD) who have fallen or are at risk for falls. We assert that this is in part due to the lack of ecological validity of the testing. Methods: To test this assertion, we examined the responsiveness and predictive validity of the Functional Gait Assessment (FGA), the Pull test, and the Timed Up and Go (TUG) during clinically defined ON and OFF medication states. To address responsiveness, ON/OFF medication performance was compared. To address predictive validity, areas under the curve (AUC) of receiver operating characteristic (ROC) curves were compared. Comparisons were made using separate non-parametric tests. Results: Thirty-six persons (24 male, 12 female) with PD (22 fallers, 14 non-fallers) participated. Only the FGA was able to detect differences between fallers and non-fallers for both ON and OFF medication testing. The predictive validity of the FGA and the TUG for fall identification was higher during OFF medication compared to ON medication testing. The predictive validity of the FGA was higher than that of the TUG and the Pull test during ON and OFF medication testing. Discussion: In order to most accurately identify fallers, clinicians should test persons with PD in ecologically relevant conditions and tasks. In this study, interpretation of the OFF medication performance and use of the FGA provided more accurate prediction of those who would fall. PMID:21215674
Cohen, Jérémie F.; Cohen, Robert; Levy, Corinne; Thollot, Franck; Benani, Mohamed; Bidet, Philippe; Chalumeau, Martin
2015-01-01
Background: Several clinical prediction rules for diagnosing group A streptococcal infection in children with pharyngitis are available. We aimed to compare the diagnostic accuracy of rules-based selective testing strategies in a prospective cohort of children with pharyngitis. Methods: We identified clinical prediction rules through a systematic search of MEDLINE and Embase (1975–2014), which we then validated in a prospective cohort involving French children who presented with pharyngitis during a 1-year period (2010–2011). We diagnosed infection with group A streptococcus using two throat swabs: one obtained for a rapid antigen detection test (StreptAtest, Dectrapharm) and one obtained for culture (reference standard). We validated rules-based selective testing strategies as follows: low risk of group A streptococcal infection, no further testing or antibiotic therapy needed; intermediate risk of infection, rapid antigen detection for all patients and antibiotic therapy for those with a positive test result; and high risk of infection, empiric antibiotic treatment. Results: We identified 8 clinical prediction rules, 6 of which could be prospectively validated. Sensitivity and specificity of rules-based selective testing strategies ranged from 66% (95% confidence interval [CI] 61–72) to 94% (95% CI 92–97) and from 40% (95% CI 35–45) to 88% (95% CI 85–91), respectively. Use of rapid antigen detection testing following the clinical prediction rule ranged from 24% (95% CI 21–27) to 86% (95% CI 84–89). None of the rules-based selective testing strategies achieved our diagnostic accuracy target (sensitivity and specificity > 85%). Interpretation: Rules-based selective testing strategies did not show sufficient diagnostic accuracy in this study population. The relevance of clinical prediction rules for determining which children with pharyngitis should undergo a rapid antigen detection test remains questionable. PMID:25487666
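A small sketch (not the study's code) showing how sensitivity and specificity with 95% confidence intervals can be computed for one rules-based strategy against the culture reference standard, using hypothetical 2x2 counts:

```python
# Sensitivity and specificity with Wilson 95% confidence intervals.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 188, 40   # strategy positive / negative among culture-positive children (hypothetical)
tn, fp = 350, 100  # strategy negative / positive among culture-negative children (hypothetical)

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"sensitivity {sens:.0%} (95% CI {sens_ci[0]:.0%}-{sens_ci[1]:.0%})")
print(f"specificity {spec:.0%} (95% CI {spec_ci[0]:.0%}-{spec_ci[1]:.0%})")
```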
Translation and validation of the Canadian diabetes risk assessment questionnaire in China.
Guo, Jia; Shi, Zhengkun; Chen, Jyu-Lin; Dixon, Jane K; Wiley, James; Parry, Monica
2018-01-01
To adapt the Canadian Diabetes Risk Assessment Questionnaire for the Chinese population and to evaluate its psychometric properties. A cross-sectional study was conducted with a convenience sample of 194 individuals aged 35-74 years from October 2014 to April 2015. The Canadian Diabetes Risk Assessment Questionnaire was adapted and translated for the Chinese population. Test-retest reliability was conducted to measure stability. Criterion and convergent validity of the adapted questionnaire were assessed using 2-hr 75 g oral glucose tolerance tests and the Finnish Diabetes Risk Scores, respectively. Sensitivity and specificity were evaluated to establish its predictive validity. The test-retest reliability was 0.988. Adequate validity of the adapted questionnaire was demonstrated by positive correlations found between the scores and 2-hr 75 g oral glucose tolerance tests (r = .343, p < .001) and with the Finnish Diabetes Risk Scores (r = .738, p < .001). The area under receiver operating characteristic curve was 0.705 (95% CI .632, .778), demonstrating moderate diagnostic value at a cutoff score of 30. The sensitivity was 73%, with a positive predictive value of 57% and negative predictive value of 78%. Our results provided evidence supporting the translation consistency, content validity, convergent validity, criterion validity, sensitivity, and specificity of the translated Canadian Diabetes Risk Assessment Questionnaire with minor modifications. This paper provides clinical, practical, and methodological information on how to adapt a diabetes risk calculator between cultures for public health nurses. © 2017 Wiley Periodicals, Inc.
Handels, Ron L H; Vos, Stephanie J B; Kramberger, Milica G; Jelic, Vesna; Blennow, Kaj; van Buchem, Mark; van der Flier, Wiesje; Freund-Levi, Yvonne; Hampel, Harald; Olde Rikkert, Marcel; Oleksik, Ania; Pirtosek, Zvezdan; Scheltens, Philip; Soininen, Hilkka; Teunissen, Charlotte; Tsolaki, Magda; Wallin, Asa K; Winblad, Bengt; Verhey, Frans R J; Visser, Pieter Jelle
2017-08-01
We aimed to determine the added value of cerebrospinal fluid (CSF) biomarkers over clinical and imaging tests in predicting progression from mild cognitive impairment (MCI) to any type of dementia. The risk of progression to dementia was estimated using two logistic regression models based on 250 MCI participants: the first included standard clinical measures (demographic, clinical, and imaging test information) without CSF biomarkers, and the second included standard clinical measures with CSF biomarkers. Adding CSF biomarkers improved predictive accuracy by 0.11 (on a scale from 0 to 1). Of all participants, 136 (54%) had a change in risk score of 0.10 or higher (which was considered clinically relevant); in 101 of these, the change was in agreement with their dementia status at follow-up. The prediction of an individual's risk of progression from MCI to dementia can be improved by relying on CSF biomarkers in addition to recommended clinical and imaging tests for usual care. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Eslami, Mohammad H; Rybin, Denis V; Doros, Gheorghe; Siracuse, Jeffrey J; Farber, Alik
2018-01-01
The purpose of this study is to externally validate a recently reported Vascular Study Group of New England (VSGNE) risk predictive model of postoperative mortality after elective abdominal aortic aneurysm (AAA) repair and to compare its predictive ability across different patient risk categories and against established risk predictive models using the Vascular Quality Initiative (VQI) AAA sample. The VQI AAA database (2010-2015) was queried for patients who underwent elective AAA repair. The VSGNE cases were excluded from the VQI sample. The external validation of a recently published VSGNE AAA risk predictive model, which includes only preoperative variables (age, gender, history of coronary artery disease, chronic obstructive pulmonary disease, cerebrovascular disease, creatinine levels, and aneurysm size) and planned type of repair, was performed using the VQI elective AAA repair sample. The predictive value of the model was assessed via the C-statistic. The Hosmer-Lemeshow method was used to assess calibration and goodness of fit. This model was then compared with the Medicare model, the Vascular Governance Northwest model, and the Glasgow Aneurysm Score for predicting mortality in the VQI sample. The Vuong test was performed to compare the model fit between the models. Model discrimination was assessed in different risk group VQI quintiles. Data from 4431 cases from the VSGNE sample with an overall mortality rate of 1.4% were used to develop the model. The internally validated VSGNE model showed a very high discriminating ability in predicting mortality (C = 0.822) and good model fit (Hosmer-Lemeshow P = .309) in the VSGNE elective AAA repair sample. External validation on 16,989 VQI cases with an overall 0.9% mortality rate showed very robust predictive ability for mortality (C = 0.802). Vuong tests yielded a significant fit difference favoring the VSGNE model over the Medicare model (C = 0.780), the Vascular Governance Northwest model (0.774), and the Glasgow Aneurysm Score (0.639). Across the 5 risk quintiles, the VSGNE model predicted observed mortality with great accuracy. This simple VSGNE AAA risk predictive model showed very high discriminative ability in predicting mortality after elective AAA repair in a large external independent sample of AAA cases performed by a diverse array of physicians nationwide. The risk score based on this simple VSGNE model can reliably stratify patients according to their risk of mortality after elective AAA repair better than other established models. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
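A generic sketch of the two validation summaries used above, the C-statistic for discrimination and a decile-based Hosmer-Lemeshow chi-square for calibration, computed here on simulated predictions rather than the VSGNE or VQI data:

```python
# C-statistic (ROC AUC) and a Hosmer-Lemeshow calibration test by risk decile.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
predicted = rng.beta(1, 60, 5000)        # toy predicted mortality risks
died = rng.binomial(1, predicted)        # toy observed outcomes

print("C-statistic:", round(roc_auc_score(died, predicted), 3))

# Hosmer-Lemeshow: compare observed vs expected events within risk deciles.
deciles = np.quantile(predicted, np.linspace(0, 1, 11))
groups = np.clip(np.digitize(predicted, deciles[1:-1]), 0, 9)
hl = 0.0
for g in range(10):
    mask = groups == g
    obs, exp, n = died[mask].sum(), predicted[mask].sum(), mask.sum()
    hl += (obs - exp) ** 2 / (exp * (1 - exp / n))
p_value = chi2.sf(hl, df=8)              # g - 2 degrees of freedom
print("Hosmer-Lemeshow p-value:", round(p_value, 3))
```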
Net Reclassification Indices for Evaluating Risk-Prediction Instruments: A Critical Review
Kerr, Kathleen F.; Wang, Zheyu; Janes, Holly; McClelland, Robyn L.; Psaty, Bruce M.; Pepe, Margaret S.
2014-01-01
Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For pre-defined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true-positive and false-positive rates. We advocate use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid pre-defined risk categories. However, it suffers from many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap methods rather than published variance formulas. The preferred single-number summary of the prediction increment is the improvement in net benefit. PMID:24240655
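A short illustration of the event and nonevent components of the two-category NRI, which, as the review notes, reduce to the changes in true- and false-positive rates; the risk classifications below are simulated:

```python
# Two-category NRI components for events and nonevents.
import numpy as np

def two_category_nri(old_class, new_class, event):
    """old_class/new_class: 0 = low risk, 1 = high risk; event: 0/1 outcome."""
    old_class, new_class, event = map(np.asarray, (old_class, new_class, event))
    up, down = new_class > old_class, new_class < old_class
    nri_events = up[event == 1].mean() - down[event == 1].mean()      # = change in TPR
    nri_nonevents = down[event == 0].mean() - up[event == 0].mean()   # = -(change in FPR)
    return nri_events, nri_nonevents

rng = np.random.default_rng(6)
event = rng.integers(0, 2, 200)
old = rng.integers(0, 2, 200)   # classification under the old model (toy)
new = rng.integers(0, 2, 200)   # classification under the new model (toy)
ev, nonev = two_category_nri(old, new, event)
print(f"NRI(events) = {ev:.3f}, NRI(nonevents) = {nonev:.3f}")
```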
Evaluation of the field relevance of several injury risk functions.
Prasad, Priya; Mertz, Harold J; Dalmotas, Danius J; Augenstein, Jeffrey S; Diggs, Kennerly
2010-11-01
An evaluation of the four injury risk curves proposed in the NHTSA NCAP for estimating the risk of AIS>=3 injuries to the head, neck, and chest and AIS>=2 injury to the Knee-Thigh-Hip (KTH) complex has been conducted. The predicted injury risks to the four body regions based on driver dummy responses in over 300 frontal NCAP tests were compared against those to drivers involved in real-world crashes of similar severity as represented in the NASS. The results of the study show that the predicted injury risks to the head and chest were slightly below those in NASS, and the predicted risk for the knee-thigh-hip complex was substantially below that observed in the NASS. The predicted risk for the neck by the Nij curve was greater than the observed risk in NASS by an order of magnitude due to the Nij risk curve predicting a non-zero risk when Nij = 0. An alternative, published Nte risk curve produced a risk estimate consistent with the NASS estimate of neck injury. Similarly, an alternative, published chest injury risk curve produced a risk estimate that was within the bounds of the NASS estimates. No published risk curve for femur compressive load could be found that would give risk estimates consistent with the range of the NASS estimates. Additional work on developing a femur compressive load risk curve is recommended.
Mahmood, Eitezaz; Matyal, Robina; Mueller, Ariel; Mahmood, Feroze; Tung, Avery; Montealegre-Gallegos, Mario; Schermerhorn, Marc; Shahul, Sajid
2018-03-01
In some institutions, the current blood ordering practice does not discriminate minimally invasive endovascular aneurysm repair (EVAR) from open procedures, with consequent increasing costs and likelihood of blood product wastage for EVARs. This limitation in practice can possibly be addressed with the development of a reliable prediction model for transfusion risk in EVAR patients. We used the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database to create a model for prediction of intraoperative blood transfusion occurrence in patients undergoing EVAR. Afterward, we tested our predictive model on the Vascular Study Group of New England (VSGNE) database. We used the ACS NSQIP database for patients who underwent EVAR from 2011 to 2013 (N = 4709) as our derivation set for identifying a risk index for predicting intraoperative blood transfusion. We then developed a clinical risk score and validated this model using patients who underwent EVAR from 2003 to 2014 in the VSGNE database (N = 4478). The transfusion rates were 8.4% and 6.1% for the ACS NSQIP (derivation set) and VSGNE (validation) databases, respectively. Hemoglobin concentration, American Society of Anesthesiologists class, age, and aneurysm diameter predicted blood transfusion in the derivation set. When it was applied on the validation set, our risk index demonstrated good discrimination in both the derivation and validation set (C statistic = 0.73 and 0.70, respectively) and calibration using the Hosmer-Lemeshow test (P = .27 and 0.31) for both data sets. We developed and validated a risk index for predicting the likelihood of intraoperative blood transfusion in EVAR patients. Implementation of this index may facilitate the blood management strategies specific for EVAR. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Prediction of chronic post-operative pain: pre-operative DNIC testing identifies patients at risk.
Yarnitsky, David; Crispel, Yonathan; Eisenberg, Elon; Granovsky, Yelena; Ben-Nun, Alon; Sprecher, Elliot; Best, Lael-Anson; Granot, Michal
2008-08-15
Surgical and medical procedures, mainly those associated with nerve injuries, may lead to chronic persistent pain. Currently, one cannot predict which patients undergoing such procedures are 'at risk' to develop chronic pain. We hypothesized that the endogenous analgesia system is key to determining the pattern of handling noxious events, and therefore testing diffuse noxious inhibitory control (DNIC) will predict susceptibility to develop chronic post-thoracotomy pain (CPTP). Pre-operative psychophysical tests, including DNIC assessment (pain reduction during exposure to another noxious stimulus at a remote body area), were conducted in 62 patients, who were followed 29.0 ± 16.9 weeks after thoracotomy. Logistic regression revealed that pre-operatively assessed DNIC efficiency and acute post-operative pain intensity were two independent predictors for CPTP. Efficient DNIC predicted lower risk of CPTP, with OR 0.52 (95% CI 0.33-0.77, p=0.0024), i.e., a 10-point numerical pain scale (NPS) reduction halves the chance to develop chronic pain. Higher acute pain intensity indicated an OR of 1.80 (95% CI 1.28-2.77, p=0.0024), predicting nearly a double chance to develop chronic pain for each 10-point increase. The other psychophysical measures, pain thresholds and supra-threshold pain magnitudes, did not predict CPTP. For prediction of acute post-operative pain intensity, DNIC efficiency was not found significant. Effectiveness of the endogenous analgesia system obtained at a pain-free state, therefore, seems to reflect the individual's ability to tackle noxious events, identifying patients 'at risk' to develop post-intervention chronic pain. Applying this diagnostic approach before procedures that might generate pain may allow individually tailored pain prevention and management, which may substantially reduce suffering.
Jonas, Susanna; Wild, Claudia; Schamberger, Chantal
2003-02-01
The aim of this health technology assessment was to analyse the current scientific evidence and genetic counselling practice on predictive genetic testing for hereditary breast and colorectal cancer. Predictive genetic testing will be available for several common diseases in the future, and questions related to financial issues and quality standards will be raised. This report is based on a systematic/nonsystematic literature search in several databases (e.g. EmBase, Medline, Cochrane Library) and on a specific health technology assessment report (CCOHTA) and review (American Gastroenterological Ass.), respectively. Laboratory test methods, early detection methods and the benefit from prophylactic interventions were analysed and social consequences interpreted. Breast and colorectal cancer are among the most frequent cancer diseases. Most cases are based on a random accumulation of risk factors; 5-10% show a familial determination. A hereditary modified gene is responsible for the increased cancer risk. In these families, high tumour frequency, young age at diagnosis and multiple primary tumours are remarkable. GENETIC DIAGNOSIS: Sequence analysis is the gold standard. Denaturing high performance liquid chromatography is a quick alternative method. The identification of the responsible gene defect in an affected family member is important. If the test result is positive, there is uncertainty whether the disease will develop or not, when, and to what degree, which is founded in the genotype-phenotype correlation. The individual risk estimation is based upon empirical evidence. The test results affect the whole family. Currently, primary prevention is possible for familial adenomatous polyposis (celecoxib, prophylactic colectomy) and for hereditary breast carcinoma (prophylactic mastectomy). The so-called preventive medical check-ups are early detection examinations. The evidence about early detection methods for colorectal cancer is better than for breast cancer. Prophylactic mastectomy (PM) reduces the relative breast cancer risk by approximately 90%. The question is whether PM has an impact on mortality. The acceptance of PM is culture-dependent. Colectomy can be used as a prophylactic (FAP) and therapeutic method. After surgery, the cancer risk remains high, so early detection examinations are still necessary. EVIDENCE-BASED STATEMENTS: The evidence is often fragmentary and of limited quality. For objective presentation of test results, information about sensitivity, specificity, positive predictive value, and number needed to screen and treat, respectively, is necessary. Newly identified mutations and demand will lead to an increase in predictive genetic counselling and testing. There is a gap between predictive genetic diagnosis and prediction, prevention, early detection and surgical interventions. These circumstances require a basic strategy. Since predictive genetic diagnosis is a very sensitive issue, it is important to deal with it carefully in order to avoid inappropriate hopes. Thus, media, experts and politicians need to consider opportunities and limitations in their daily decision-making processes.
Petersen, Japke F; Stuiver, Martijn M; Timmermans, Adriana J; Chen, Amy; Zhang, Hongzhen; O'Neill, James P; Deady, Sandra; Vander Poorten, Vincent; Meulemans, Jeroen; Wennerberg, Johan; Skroder, Carl; Day, Andrew T; Koch, Wayne; van den Brekel, Michiel W M
2018-05-01
TNM-classification inadequately estimates patient-specific overall survival (OS). We aimed to improve this by developing a risk-prediction model for patients with advanced larynx cancer. Cohort study. We developed a risk prediction model to estimate the 5-year OS rate based on a cohort of 3,442 patients with T3T4N0N+M0 larynx cancer. The model was internally validated using bootstrapping samples and externally validated on patient data from five external centers (n = 770). The main outcome was performance of the model as tested by discrimination, calibration, and the ability to distinguish risk groups based on tertiles from the derivation dataset. The model performance was compared to a model based on T and N classification only. We included age, gender, T and N classification, and subsite as prognostic variables in the standard model. After external validation, the standard model had a significantly better fit than a model based on T and N classification alone (C statistic, 0.59 vs. 0.55, P < .001). The model was able to distinguish well among three risk groups based on tertiles of the risk score. Adding treatment modality to the model did not decrease the predictive power. As a post hoc analysis, we tested the added value of comorbidity as scored by American Society of Anesthesiologists score in a subsample, which increased the C statistic to 0.68. A risk prediction model for patients with advanced larynx cancer, consisting of readily available clinical variables, gives more accurate estimations of the estimated 5-year survival rate when compared to a model based on T and N classification alone. 2c. Laryngoscope, 128:1140-1145, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Stark, Zornitza; Wallace, Jane; Gillam, Lynn; Burgess, Matthew; Delatycki, Martin B
2016-10-01
Predictive genetic testing for a neurodegenerative condition in one individual in a family may have implications for other family members, in that it can reveal their genetic status. Herein a complex clinical case is explored where the testing wish of one family member was in direct conflict with that of another. The son of a person at 50% risk of an autosomal dominant neurodegenerative condition requested testing to reveal his genetic status. The main reason for the request was that, if he had the familial mutation, he and his partner planned to utilise preimplantation genetic diagnosis to prevent his offspring having the condition. His at-risk parent was clear that if they found out they had the mutation, they would commit suicide. We assess the potential benefits and harms of acceding to or denying such a request and present an approach to balancing competing rights of individuals within families at risk of late-onset genetic conditions, where family members have irreconcilable differences with respect to predictive testing. We argue that while it may not be possible to completely avoid harm in these situations, it is important to consider the magnitude of risks, and to make every effort to limit the potential for adverse outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Rini, Christine; O'Neill, Suzanne C; Valdimarsdottir, Heiddis; Goldsmith, Rachel E; Jandorf, Lina; Brown, Karen; DeMarco, Tiffani A; Peshkin, Beth N; Schwartz, Marc D
2009-09-01
To investigate high-risk breast cancer survivors' risk reduction decision making and decisional conflict after an uninformative BRCA1/2 test. Prospective, longitudinal study of 182 probands undergoing BRCA1/2 testing, with assessments 1-, 6-, and 12-months postdisclosure. Primary predictors were health beliefs and emotional responses to testing assessed 1-month postdisclosure. Main outcomes included women's perception of whether they had made a final risk management decision (decision status) and decisional conflict related to this issue. There were four patterns of decision making, depending on how long it took women to make a final decision and the stability of their decision status across assessments. Late decision makers and nondecision makers reported the highest decisional conflict; however, substantial numbers of women--even early and intermediate decision makers--reported elevated decisional conflict. Analyses predicting decisional conflict 1- and 12-months postdisclosure found that, after accounting for control variables and decision status, health beliefs and emotional factors predicted decisional conflict at different timepoints, with health beliefs more important 1 month after test disclosure and emotional factors more important 1 year later. Many of these women may benefit from decision making assistance. Copyright 2009 APA, all rights reserved.
Rini, Christine; O’Neill, Suzanne C.; Valdimarsdottir, Heiddis; Goldsmith, Rachel E.; DeMarco, Tiffani A.; Peshkin, Beth N.; Schwartz, Marc D.
2012-01-01
Objective To investigate high-risk breast cancer survivors' risk reduction decision making and decisional conflict after an uninformative BRCA1/2 test. Design Prospective, longitudinal study of 182 probands undergoing BRCA1/2 testing, with assessments 1-, 6-, and 12-months post-disclosure. Measures Primary predictors were health beliefs and emotional responses to testing assessed 1-month post-disclosure. Main outcomes included women's perception of whether they had made a final risk management decision (decision status) and decisional conflict related to this issue. Results There were four patterns of decision making, depending on how long it took women to make a final decision and the stability of their decision status across assessments. Late decision makers and non-decision makers reported the highest decisional conflict; however, substantial numbers of women, even early and intermediate decision makers, reported elevated decisional conflict. Analyses predicting decisional conflict 1- and 12-months post-disclosure found that, after accounting for control variables and decision status, health beliefs and emotional factors predicted decisional conflict at different timepoints, with health beliefs more important one month after test disclosure and emotional factors more important one year later. Conclusion Many of these women may benefit from decision making assistance. PMID:19751083
Christian, Susan; Atallah, Joseph; Clegg, Robin; Giuffre, Michael; Huculak, Cathleen; Dzwiniel, Tara; Parboosingh, Jillian; Taylor, Sherryl; Somerville, Martin
2018-02-01
Predictive genetic testing in minors should be considered when clinical intervention is available. Children who carry a pathogenic variant for an inherited arrhythmia or cardiomyopathy require regular cardiac screening and may be prescribed medication and/or be told to modify their physical activity. Medical genetics and pediatric cardiology charts were reviewed to identify factors associated with uptake of genetic testing and cardiac evaluation for children at risk for long QT syndrome, hypertrophic cardiomyopathy or arrhythmogenic right ventricular cardiomyopathy. The data collected included genetic diagnosis, clinical symptoms in the carrier parent, number of children under 18 years of age, age of children, family history of sudden cardiac arrest/death, uptake of cardiac evaluation and if evaluated, phenotype for each child. We identified 97 at risk children from 58 families found to carry a pathogenic variant for one of these conditions. Sixty six percent of the families pursued genetic testing and 73% underwent cardiac screening when it was recommended. Declining predictive genetic testing was significantly associated with genetic specialist recommendation (p < 0.001) and having an asymptomatic carrier father (p = 0.006). Cardiac evaluation was significantly associated with uptake of genetic testing (p = 0.007). This study provides a greater understanding of factors associated with uptake of genetic testing and cardiac evaluation in children at risk of an inherited arrhythmia or cardiomyopathy. It also identifies a need to educate families about the importance of cardiac evaluation even in the absence of genetic testing.
Zain, Maryam; Awan, Fazli Rabbi; Cooper, Jackie A; Li, Ka Wah; Palmen, Jutta; Acharya, Jay; Howard, Philip; Baig, Shahid M; Elkeles, Robert S; Stephens, Jeffrey W; Ireland, Helen; Humphries, Steve E
2014-09-01
To genotype a sequence variant of the TLL1 gene (rs1503298, T > C) in three British cohorts (PREDICT, UDACS and ED) of patients with type 2 diabetes mellitus (T2DM) in order to assess its association with coronary heart disease (CHD). Analytical study. UCL, London, UK. Participants were genotyped for the TLL1 SNP in 2011-2012. Samples and related information had been collected previously, in 2001-2003 for PREDICT and in 2001-2002 for the UDACS and ED groups. Patients included in PREDICT (n=600), UDACS (n=1020) and ED (n=1240) had diabetes. The TLL1 SNP (rs1503298, T > C) was genotyped using TaqMan technology. Allele frequencies were compared using the chi-square test and tested for Hardy-Weinberg equilibrium. The risk of disease was assessed from odds ratios (OR) with 95% confidence intervals (95% CI). In addition, for the PREDICT cohort, the SNP association was tested against coronary artery calcification (CAC) scores. No significant association was found for this SNP with CHD or CAC scores in these cohorts. This SNP could not be confirmed as a risk factor for CHD in T2DM patients. However, the small sample size limits the power to detect a modest effect on risk. Further studies in larger samples would be useful.
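For readers unfamiliar with the analysis pipeline summarised in this abstract (allelic chi-square test, Hardy-Weinberg check, odds ratio with 95% CI), the following Python sketch illustrates the computations on hypothetical genotype counts; the counts are placeholders, not the PREDICT/UDACS/ED data.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Hypothetical genotype counts (TT, TC, CC); the per-cohort counts are not
# reported in the abstract, so these are placeholders for illustration.
cases    = np.array([180, 90, 12])   # patients with CHD
controls = np.array([400, 190, 28])  # patients without CHD

def allele_counts(genotypes):
    tt, tc, cc = genotypes
    return np.array([2 * tt + tc, 2 * cc + tc])   # [T alleles, C alleles]

# Allelic chi-square test (2 x 2 table of T/C counts in cases vs controls).
table = np.vstack([allele_counts(cases), allele_counts(controls)])
stat, p_allelic, _, _ = chi2_contingency(table)

# Hardy-Weinberg equilibrium check in the controls.
n = controls.sum()
p_t = (2 * controls[0] + controls[1]) / (2 * n)             # T allele frequency
expected = n * np.array([p_t**2, 2 * p_t * (1 - p_t), (1 - p_t)**2])
hwe_stat = ((controls - expected) ** 2 / expected).sum()
p_hwe = chi2.sf(hwe_stat, df=1)

# Allelic odds ratio for the C allele with a Woolf (log) 95% CI.
case_t, case_c = table[0]
ctrl_t, ctrl_c = table[1]
or_c = (case_c * ctrl_t) / (case_t * ctrl_c)
se = np.sqrt(1 / case_c + 1 / ctrl_t + 1 / case_t + 1 / ctrl_c)
ci = np.exp(np.log(or_c) + np.array([-1.96, 1.96]) * se)

print(f"allelic test p = {p_allelic:.3f}, HWE p = {p_hwe:.3f}")
print(f"OR (C allele) = {or_c:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```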
ERIC Educational Resources Information Center
Jelicic, Helena; Bobek, Deborah L.; Phelps, Erin; Lerner, Richard M.; Lerner, Jacqueline V.
2007-01-01
Theories of positive youth development (PYD) regard such development as bases of both community contributions and lessened likelihood of risk/problem behaviors. Using data from the 4-H Study of PYD, we tested these expectations by examining if PYD in Grade 5 predicted both youth contributions and risk behaviors and depression in Grade 6. Results…
Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène
2015-03-01
Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
Gene panel testing for hereditary breast cancer.
Winship, Ingrid; Southey, Melissa C
2016-03-21
Inherited predisposition to breast cancer is explained only in part by mutations in the BRCA1 and BRCA2 genes. Most families with an apparent familial clustering of breast cancer who are investigated through Australia's network of genetic services and familial cancer centres do not have mutations in either of these genes. More recently, additional breast cancer predisposition genes, such as PALB2, have been identified. New genetic technology allows a panel of multiple genes to be tested for mutations in a single test. This enables more women and their families to have risk assessment and risk management, in a preventive approach to predictable breast cancer. Predictive testing for a known family-specific mutation in a breast cancer predisposition gene provides personalised risk assessment and evidence-based risk management. Breast cancer predisposition gene panel tests have a greater diagnostic yield than conventional testing of only the BRCA1 and BRCA2 genes. The clinical validity and utility of some of the putative breast cancer predisposition genes is not yet clear. Ethical issues warrant consideration, as multiple gene panel testing has the potential to identify secondary findings not originally sought by the test requested. Multiple gene panel tests may provide an affordable and effective way to investigate the heritability of breast cancer.
Mura, Thibault; Baramova, Marieta; Gabelle, Audrey; Artero, Sylvaine; Dartigues, Jean-François; Amieva, Hélène; Berr, Claudine
2017-03-23
Our study aimed to determine whether the consideration of socio-demographic features improves the prediction of Alzheimer's dementia (AD) at 5 years when using the Free and Cued Selective Reminding Test (FCSRT) in the general older population. Our analyses focused on 2558 subjects from the prospective Three-City Study, a cohort of community-dwelling individuals aged 65 years and over, with FCSRT scores. Four "residual scores" and "risk scores" were built that included the FCSRT scores and socio-demographic variables. The predictive performance of crude, residual and risk scores was analyzed by comparing the areas under the ROC curve (AUC). In total, 1750 subjects were seen 5 years after completing the FCSRT. AD was diagnosed in 116 of them. Compared with the crude free-recall score, the predictive performances of the residual score and of the risk score were not significantly improved (AUC: 0.83 vs 0.82 and 0.88 vs 0.89 respectively). Using socio-demographic features in addition to the FCSRT does not improve its predictive performance for dementia or AD.
Predicting adolescent's cyberbullying behavior: A longitudinal risk analysis.
Barlett, Christopher P
2015-06-01
The current study used the risk factor approach to test the unique and combined influence of several possible risk factors for cyberbullying attitudes and behavior using a four-wave longitudinal design with an adolescent US sample. Participants (N = 96; average age = 15.50 years) completed measures of cyberbullying attitudes, perceptions of anonymity, cyberbullying behavior, and demographics four times throughout the academic school year. Several logistic regression equations were used to test the contribution of these possible risk factors. Results showed that (a) cyberbullying attitudes and previous cyberbullying behavior were important unique risk factors for later cyberbullying behavior, (b) anonymity and previous cyberbullying behavior were valid risk factors for later cyberbullying attitudes, and (c) the likelihood of engaging in later cyberbullying behavior increased with the addition of risk factors. Overall, results show the unique and combined influence of such risk factors for predicting later cyberbullying behavior. Results are discussed in terms of theory. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Prediction of breast cancer risk with volatile biomarkers in breath.
Phillips, Michael; Cataneo, Renee N; Cruz-Ramos, Jose Alfonso; Huston, Jan; Ornelas, Omar; Pappas, Nadine; Pathak, Sonali
2018-03-23
Human breath contains volatile organic compounds (VOCs) that are biomarkers of breast cancer. We investigated the positive and negative predictive values (PPV and NPV) of breath VOC biomarkers as indicators of breast cancer risk. We employed ultra-clean breath collection balloons to collect breath samples from 54 women with biopsy-proven breast cancer and 124 cancer-free controls. Breath VOCs were analyzed with gas chromatography (GC) combined with either mass spectrometry (GC MS) or surface acoustic wave detection (GC SAW). Chromatograms were randomly assigned to a training set or a validation set. Monte Carlo analysis identified significant breath VOC biomarkers of breast cancer in the training set, and these biomarkers were incorporated into a multivariate algorithm to predict disease in the validation set. In the unsplit dataset, the predictive algorithms generated discriminant function (DF) values that varied with sensitivity, specificity, PPV and NPV. Using GC MS, test accuracy = 90% (area under curve of receiver operating characteristic in unsplit dataset) and cross-validated accuracy = 77%. Using GC SAW, test accuracy = 86% and cross-validated accuracy = 74%. With both assays, a low DF value was associated with a low risk of breast cancer (NPV > 99.9%). A high DF value was associated with a high risk of breast cancer and PPV rising to 100%. Analysis of breath VOC samples collected with ultra-clean balloons detected biomarkers that accurately predicted risk of breast cancer.
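The abstract's use of a discriminant-function (DF) cutoff to trade sensitivity and NPV against specificity and PPV can be illustrated with a short sketch; the simulated scores below are stand-ins, not the study's breath VOC data.

```python
import numpy as np

def diagnostic_metrics(scores, labels, threshold):
    """Sensitivity, specificity, PPV and NPV when calling 'high risk'
    for discriminant-function values at or above the threshold."""
    scores, labels = np.asarray(scores), np.asarray(labels).astype(bool)
    pred = scores >= threshold
    tp = np.sum(pred & labels)
    fp = np.sum(pred & ~labels)
    fn = np.sum(~pred & labels)
    tn = np.sum(~pred & ~labels)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
    }

# Simulated DF values for illustration only (not the study data):
# cancers tend to score higher than controls, mirroring the 54/124 split.
rng = np.random.default_rng(0)
labels = np.r_[np.ones(54), np.zeros(124)]
scores = np.r_[rng.normal(1.0, 1.0, 54), rng.normal(-0.5, 1.0, 124)]

# Sweeping the threshold trades sensitivity/NPV against specificity/PPV,
# which is how a low DF can rule out and a high DF can rule in disease.
for thr in (-1.0, 0.0, 1.0, 2.0):
    print(thr, diagnostic_metrics(scores, labels, thr))
```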
Campos, Rui C; Holden, Ronald R; Costa, Fátima; Oliveira, Ana Rita; Abreu, Marta; Fresca, Natália
2017-02-01
Background and aims(s): The study evaluated the contribution of coping strategies, based on the Toulousiane conceptualization of coping, to the prediction of suicide risk and tested the moderating effect of gender, controlling for depressive symptoms. A two-time data collection design was used. A community sample of 195 adults (91 men and 104 women) ranging in age from 19 to 65 years and living in several Portuguese regions, mostly in Alentejo, participated in this research. Gender, depressive symptoms, control, and withdrawal and conversion significantly predicted suicide risk and gender interacted with control, withdrawal and conversion, and social distraction in the prediction of suicide risk. Coping predicted suicide risk only for women. Results have important implications for assessment and intervention with suicide at-risk individuals. In particular,the evaluation and development of coping skills is indicated as a goal for therapists having suicide at-risk women as clients.
Neuroanatomy Predicts Individual Risk Attitudes
Gilaie-Dotan, Sharon; Tymula, Agnieszka; Cooper, Nicole; Kable, Joseph W.; Glimcher, Paul W.
2014-01-01
Over the course of the last decade a multitude of studies have investigated the relationship between neural activations and individual human decision-making. Here we asked whether the anatomical features of individual human brains could be used to predict the fundamental preferences of human choosers. To that end, we quantified the risk attitudes of human decision-makers using standard economic tools and quantified the gray matter cortical volume in all brain areas using standard neurobiological tools. Our whole-brain analysis revealed that the gray matter volume of a region in the right posterior parietal cortex was significantly predictive of individual risk attitudes. Participants with higher gray matter volume in this region exhibited less risk aversion. To test the robustness of this finding we examined a second group of participants and used econometric tools to test the ex ante hypothesis that gray matter volume in this area predicts individual risk attitudes. Our finding was confirmed in this second group. Our results, while being silent about causal relationships, identify what might be considered the first stable biomarker for financial risk-attitude. If these results, gathered in a population of midlife northeast American adults, hold in the general population, they will provide constraints on the possible neural mechanisms underlying risk attitudes. The results will also provide a simple measurement of risk attitudes that could be easily extracted from abundance of existing medical brain scans, and could potentially provide a characteristic distribution of these attitudes for policy makers. PMID:25209279
USDA-ARS?s Scientific Manuscript database
Quarantine host range tests accurately predict direct risk of biological control agents to non-target species. However, a well-known indirect effect of biological control of weeds releases is spillover damage to non-target species. Spillover damage may occur when the population of agents achieves ou...
Predictive gene testing for Huntington disease and other neurodegenerative disorders.
Wedderburn, S; Panegyres, P K; Andrew, S; Goldblatt, J; Liebeck, T; McGrath, F; Wiltshire, M; Pestell, C; Lee, J; Beilby, J
2013-12-01
Controversies exist around predictive testing (PT) programmes in neurodegenerative disorders. This study sets out to answer the following questions relating to Huntington disease (HD) and other neurodegenerative disorders: differences between these patients in their PT journeys, why and when individuals withdraw from PT, and decision-making processes regarding reproductive genetic testing. A case series analysis of patients having PT from the multidisciplinary Western Australian centre for PT over the past 20 years was performed using internationally recognised guidelines for predictive gene testing in neurodegenerative disorders. Of 740 at-risk patients, 518 applied for PT: 466 at risk of HD, 52 at risk of other neurodegenerative disorders - spinocerebellar ataxias, hereditary prion disease and familial Alzheimer disease. Thirteen percent withdrew from PT - 80.32% of withdrawals occurred during counselling stages. Major withdrawal reasons related to timing in the patients' lives or unknown as the patient did not disclose the reason. Thirty-eight HD individuals had reproductive genetic testing: 34 initiated prenatal testing (of which eight withdrew from the process) and four initiated pre-implantation genetic diagnosis. There was no recorded or other evidence of major psychological reactions or suicides during PT. People withdrew from PT in relation to life stages and reasons that are unknown. Our findings emphasise the importance of: (i) adherence to internationally recommended guidelines for PT; (ii) the role of the multidisciplinary team in risk minimisation; and (iii) patient selection. © 2013 The Authors; Internal Medicine Journal © 2013 Royal Australasian College of Physicians.
The Acquired Preparedness Model of Risk for Bulimic Symptom Development
Combs, Jessica L.; Smith, Gregory T.; Flory, Kate; Simmons, Jean R.; Hill, Kelly K.
2010-01-01
The authors applied person-environment transaction theory to test the acquired preparedness model of eating disorder risk. The model holds that (a) middle school girls high in the trait of ineffectiveness are differentially prepared to acquire high risk expectancies for reinforcement from dieting/thinness; (b) those expectancies predict subsequent binge eating and purging; and (c) the influence of the disposition of ineffectiveness on binge eating and purging is mediated by dieting/thinness expectancies. In a three-wave longitudinal study of 394 middle school girls, they found support for the model. Seventh grade girls’ scores on ineffectiveness predicted their subsequent endorsement of high risk dieting/thinness expectancies, which in turn predicted subsequent increases in binge eating and purging. Statistical tests of mediation supported the hypothesis that the prospective relation between ineffectiveness and binge eating was mediated by dieting/thinness expectancies, as was the prospective relation between ineffectiveness and purging. This application of a basic science theory to eating disorder risk appears fruitful, and the findings suggest the importance of early interventions that address both disposition and learning. PMID:20853933
Lehr, M E; Plisky, P J; Butler, R J; Fink, M L; Kiesel, K B; Underwood, F B
2013-08-01
In athletics, efficient screening tools are sought to curb the rising number of noncontact injuries and associated health care costs. The authors hypothesized that an injury prediction algorithm that incorporates movement screening performance, demographic information, and injury history can accurately categorize risk of noncontact lower extremity (LE) injury. One hundred eighty-three collegiate athletes were screened during the preseason. The test scores and demographic information were entered into an injury prediction algorithm that weighted the evidence-based risk factors. Athletes were then prospectively followed for noncontact LE injury. Subsequent analysis collapsed the groupings into two risk categories: Low (normal and slight) and High (moderate and substantial). Using these groups and noncontact LE injuries, relative risk (RR), sensitivity, specificity, and likelihood ratios were calculated. Forty-two subjects sustained a noncontact LE injury over the course of the study. Athletes identified as High Risk (n = 63) were at a greater risk of noncontact LE injury (27/63) during the season [RR: 3.4 95% confidence interval 2.0 to 6.0]. These results suggest that an injury prediction algorithm composed of performance on efficient, low-cost, field-ready tests can help identify individuals at elevated risk of noncontact LE injury. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
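A minimal sketch of how the relative risk and screening statistics follow from the counts quoted above (63 high-risk athletes, 27 of them injured, out of 183 athletes and 42 injuries); the derived sensitivity, specificity and likelihood ratios are computed here for illustration and are not quoted from the paper.

```python
# Counts reported in the abstract.
high_risk, high_risk_injured = 63, 27
total, total_injured = 183, 42

low_risk = total - high_risk                           # 120 athletes
low_risk_injured = total_injured - high_risk_injured   # 15 injuries

# Relative risk of noncontact LE injury in the High Risk group.
rr = (high_risk_injured / high_risk) / (low_risk_injured / low_risk)

# Screening properties of the High Risk category.
sens = high_risk_injured / total_injured
spec = (low_risk - low_risk_injured) / (total - total_injured)
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec

print(f"RR ~= {rr:.1f}, sensitivity {sens:.2f}, specificity {spec:.2f}")
print(f"LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}")
```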
Chao, Tze-Fan; Lip, Gregory Y H; Lin, Yenn-Jiang; Chang, Shih-Lin; Lo, Li-Wei; Hu, Yu-Feng; Tuan, Ta-Chuan; Liao, Jo-Nan; Chung, Fa-Po; Chen, Tzeng-Ji; Chen, Shih-Ann
2018-03-01
While modifiable bleeding risks should be addressed in all patients with atrial fibrillation (AF), use of a bleeding risk score enables clinicians to 'flag up' those at risk of bleeding for more regular patient contact reviews. We compared a risk assessment strategy for major bleeding and intracranial hemorrhage (ICH) based on modifiable bleeding risk factors (referred to as a 'MBR factors' score) against established bleeding risk stratification scores (HEMORR2HAGES, HAS-BLED, ATRIA, ORBIT). A nationwide cohort study of 40,450 AF patients who received warfarin for stroke prevention was performed. The clinical endpoints included ICH and major bleeding. Bleeding scores were compared using receiver operating characteristic (ROC) curves (areas under the ROC curves [AUCs], or c-index) and the net reclassification index (NRI). During a follow-up of 4.60±3.62 years, 1581 (3.91%) patients sustained ICH and 6889 (17.03%) patients sustained major bleeding events. All tested bleeding risk scores at baseline were higher in those sustaining major bleeds. When compared to no ICH, patients sustaining ICH had higher baseline HEMORR2HAGES (p=0.003), HAS-BLED (p<0.001) and MBR factors scores (p=0.013) but not ATRIA and ORBIT scores. When HAS-BLED was compared to the other bleeding scores, its c-indexes were significantly higher than those of the MBR factors (p<0.001) and ORBIT (p=0.05) scores for major bleeding. C-indexes for the MBR factors score were significantly lower compared to all other scores (DeLong test, all p<0.001). When NRI was performed, HAS-BLED outperformed all other bleeding risk scores for major bleeding (all p<0.001). C-indexes for ATRIA and ORBIT scores suggested no significant prediction for ICH. All contemporary bleeding risk scores had modest predictive value for predicting major bleeding, but the best predictive value and NRI were found for the HAS-BLED score. Simply depending on modifiable bleeding risk factors had suboptimal predictive value for the prediction of major bleeding in AF patients, when compared to the HAS-BLED score. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
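For readers unfamiliar with the net reclassification index used above, the sketch below shows the standard categorical NRI calculation on toy data; the category codes and outcomes are invented, not drawn from the cohort.

```python
import numpy as np

def categorical_nri(old_risk_cat, new_risk_cat, event):
    """Net reclassification improvement for moving from one risk
    categorisation to another (categories coded as ordered integers)."""
    old_c, new_c, event = map(np.asarray, (old_risk_cat, new_risk_cat, event))
    event = event.astype(bool)

    up, down = new_c > old_c, new_c < old_c
    nri_events = np.mean(up[event]) - np.mean(down[event])
    nri_nonevents = np.mean(down[~event]) - np.mean(up[~event])
    return nri_events + nri_nonevents, nri_events, nri_nonevents

# Toy example: 0 = low, 1 = moderate, 2 = high bleeding-risk category
# under a comparator score vs. an alternative score (labels are illustrative).
old = np.array([0, 1, 1, 2, 0, 1, 2, 0, 1, 2])
new = np.array([1, 1, 2, 2, 0, 0, 2, 0, 2, 1])
bled = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])

total, ev, nonev = categorical_nri(old, new, bled)
print(f"NRI = {total:.2f} (events {ev:.2f}, non-events {nonev:.2f})")
```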
Laksmiastuti, Sri Ratna; Budiardjo, Sarworini Bagio; Sutadi, Heriandi
2017-06-01
Predicting caries risk in children can be done by identifying caries risk factors; this is an important step towards understanding the cariogenic profile of the patient. Identification can be done by clinical examination and by questionnaire. We conducted this study to validate a questionnaire for predicting caries risk in children. The study included 62 pairs of mothers and their children, aged between 3 and 5 years. The questionnaire consists of 10 questions concerning mothers' attitudes towards and knowledge about oral health. Reliability and validity were assessed using Cronbach's alpha and the corrected item-total correlation. All questions were reliable (Cronbach's alpha = 0.873) and valid (corrected item-total correlation >0.4). Five questions on mothers' attitudes towards oral health and five questions on mothers' knowledge about oral health are reliable and valid for predicting caries risk in children.
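Cronbach's alpha and the corrected item-total correlation used as retention criteria above can be computed directly from an item-response matrix; the sketch below does so on simulated answers, since the real questionnaire responses are not available from the abstract.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questions matrix of scored answers."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Simulated 10-item questionnaire for 62 respondents (0-3 Likert-type scoring);
# the real item data are not reported in the abstract.
rng = np.random.default_rng(1)
latent = rng.normal(size=(62, 1))
items = np.clip(np.round(1.5 + latent + rng.normal(0, 0.7, (62, 10))), 0, 3)

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
print("corrected item-total r:", np.round(corrected_item_total(items), 2))
# Items with corrected item-total correlation > 0.4 would be retained
# under the criterion described in the abstract.
```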
Andrews, Donald A; Guzzo, Lina; Raynor, Peter; Rowe, Robert C; Rettinger, L Jill; Brews, Albert; Wormith, J Stephen
2012-02-01
The Level of Service/Case Management Inventory (LS/CMI) and the Youth version (YLS/CMI) generate an assessment of risk/need across eight domains that are considered to be relevant for girls and boys and for women and men. Aggregated across five data sets, the predictive validity of each of the eight domains was gender-neutral. The composite total score (LS/CMI total risk/need) was strongly associated with the recidivism of males (mean r = .39, mean AUC = .746) and very strongly associated with the recidivism of females (mean r = .53, mean AUC = .827). The enhanced validity of LS total risk/need with females was traced to the exceptional validity of Substance Abuse with females. The intra-data set conclusions survived the introduction of two very large samples composed of female offenders exclusively. Finally, the mean incremental contributions of gender and the gender-by-risk level interactions in the prediction of criminal recidivism were minimal compared to the relatively strong validity of the LS/CMI risk level. Although the variance explained by gender was minimal and although high-risk cases were high-risk cases regardless of gender, the recidivism rates of lower risk females were lower than the recidivism rates of lower risk males, suggesting possible implications for test interpretation and policy.
Memory Resilience to Alzheimer's Genetic Risk: Sex Effects in Predictor Profiles.
McDermott, Kirstie L; McFall, G Peggy; Andrews, Shea J; Anstey, Kaarin J; Dixon, Roger A
2017-10-01
Apolipoprotein E (APOE) ɛ4 and Clusterin (CLU) C alleles are risk factors for Alzheimer's disease (AD) and episodic memory (EM) decline. Memory resilience occurs when genetically at-risk adults perform at high and sustained levels. We investigated whether (a) memory resilience to AD genetic risk is predicted by biological and other risk markers and (b) the prediction profiles vary by sex and AD risk variant. Using a longitudinal sample of nondemented adults (n = 642, aged 53-95) we focused on memory resilience (over 9 years) to 2 AD risk variants (APOE, CLU). Growth mixture models classified resilience. Random forest analysis, stratified by sex, tested the predictive importance of 22 nongenetic risk factors from 5 domains (n = 24-112). For both sexes, younger age, higher education, stronger grip, and everyday novel cognitive activity predicted memory resilience. For women, 9 factors from functional, health, mobility, and lifestyle domains were also predictive. For men, only fewer depressive symptoms was an additional important predictor. The prediction profiles were similar for APOE and CLU. Although several factors predicted resilience in both sexes, a greater number applied only to women. Sex-specific mechanisms and intervention targets are implied. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Subbian, Vignesh; Meunier, Jason M; Korfhagen, Joseph J; Ratcliff, Jonathan J; Shaw, George J; Beyette, Fred R
2014-01-01
Post-concussion syndrome (PCS) is a common sequela of mild traumatic brain injury (mTBI). Currently, there is no reliable test to determine which patients will develop PCS following an mTBI. As a result, clinicians are challenged to identify patients at high risk for subsequent PCS. Hence, there is a need for an objective test that can guide clinical risk stratification and predict the likelihood of PCS at the initial point of care in an emergency department (ED). This paper presents the results of robotic-assisted neurologic testing completed on mTBI patients in the ED and its ability to predict PCS at 3 weeks post-injury. Preliminary results show that abnormal proprioception, as measured using robotic testing, is associated with a higher risk of developing PCS following mTBI. In this pilot study, proprioceptive measures obtained through robotic testing had 77% specificity (95% CI: 46%-94%) and 64% sensitivity (95% CI: 41%-82%).
Cameron, Linda D; Sherman, Kerry A; Marteau, Theresa M; Brown, Paul M
2009-05-01
Genetic tests vary in their prediction of disease occurrence, with some mutations conferring relatively low risk and others indicating near certainty. The authors assessed how increments in absolute risk of disease influence risk perceptions, interest, and expected consequences of genetic tests for diseases of varying severity. Adults (N = 752), recruited from New Zealand, Australia, and the United Kingdom for an online analogue study, were randomly assigned to receive information about a test of genetic risk for diabetes, heart disease, colon cancer, or lung cancer. The lifetime risk varied across conditions by 10% increments, from 20% to 100%. Participants completed measures of perceived likelihood of disease for individuals with mutations, risk-related affect, interest, and testing consequences. Analyses revealed two increment clusters yielding differences in likelihood perceptions: A "moderate-risk" cluster (20%-70%), and a "high-risk" cluster (80%-100%). Risk increment influenced anticipated worry, feelings of risk, testing-induced distress, and family obligations, with nonlinear patterns including disproportionately high responses for the 50% increment. Risk increment did not alter testing interest or perceived benefits. These patterns of effects held across the four diseases. Magnitude of risk from genetic testing has a nonlinear influence on risk-related appraisals and affect but is unrelated to test interest.
Collins, J; Ryan, L; Truby, H
2014-10-01
In the future, it may be possible for individuals to take a genetic test to determine their genetic predisposition towards developing lifestyle-related chronic diseases. A systematic review of the literature was undertaken to identify the factors associated with an interest in having predictive genetic testing for obesity, type II diabetes and heart disease amongst unaffected adults. Ovid Medline, PsycINFO and EMBASE online databases were searched using predefined search terms. Publications meeting the inclusion criteria (English language, free-living adult population not selected as a result of their disease diagnosis, reporting interest as an outcome, not related to a single gene inherited disease) were assessed for quality and content. Narrative synthesis of the results was undertaken. From the 2329 publications retrieved, eight studies met the inclusion criteria and were included in the review. Overall, the evidence base was small but of positive quality. Interest was associated with personal attitudes towards disease risk and the provision of information about genetic testing, shaped by perceived risk of disease and expected outcomes of testing. The role of demographic factors was investigated with largely inconclusive findings. Interest in predictive genetic testing for obesity, type II diabetes or heart disease was greatest amongst those who perceived the risk of disease to be high and/or the outcomes of testing to be beneficial. © 2013 The British Dietetic Association Ltd.
Family Conflict Interacts with Genetic Liability in Predicting Childhood and Adolescent Depression
ERIC Educational Resources Information Center
Rice, Frances; Harold, Gordon T.; Shelton, Katherine H.; Thapar, Anita
2006-01-01
Objective: To test for gene-environment interaction with depressive symptoms and family conflict. Specifically, to first examine whether the influence of family conflict in predicting depressive symptoms is increased in individuals at genetic risk of depression. Second, to test whether the genetic component of variance in depressive symptoms…
Wilde, Alex; Meiser, Bettina; Mitchell, Philip B; Schofield, Peter R
2010-01-01
The past decade has seen rapid advances in the identification of associations between candidate genes and a range of common multifactorial disorders. This paper evaluates public attitudes towards the complexity of genetic risk prediction in psychiatry involving susceptibility genes, uncertain penetrance and gene-environment interactions on which successful molecular-based mental health interventions will depend. A qualitative approach was taken to enable the exploration of the views of the public. Four structured focus groups were conducted with a total of 36 participants. The majority of participants indicated interest in having a genetic test for susceptibility to major depression, if it was available. Having a family history of mental illness was cited as a major reason. After discussion of perceived positive and negative implications of predictive genetic testing, nine of 24 participants initially interested in having such a test changed their mind. Fear of genetic discrimination and privacy issues predominantly influenced change of attitude. All participants still interested in having a predictive genetic test for risk for depression reported they would only do so through trusted medical professionals. Participants were unanimously against direct-to-consumer genetic testing marketed through the Internet, although some would consider it if there was suitable protection against discrimination. The study highlights the importance of general practitioner and public education about psychiatric genetics, and the availability of appropriate treatment and support services prior to implementation of future predictive genetic testing services.
NASA Technical Reports Server (NTRS)
Hohnloser, S. H.; Klingenheben, T.; Li, Y. G.; Zabel, M.; Peetermans, J.; Cohen, R. J.
1998-01-01
INTRODUCTION: The current standard for arrhythmic risk stratification is electrophysiologic (EP) testing, which, due to its invasive nature, is limited to patients already known to be at high risk. A number of noninvasive tests, such as determination of left ventricular ejection fraction (LVEF) or heart rate variability, have been evaluated as additional risk stratifiers. Microvolt T wave alternans (TWA) is a promising new risk marker. Prospective evaluation of noninvasive risk markers in low- or moderate-risk populations requires studies involving very large numbers of patients, and in such studies, documentation of the occurrence of ventricular tachyarrhythmias is difficult. In the present study, we identified a high-risk population, recipients of an implantable cardioverter defibrillator (ICD), and prospectively compared microvolt TWA with invasive EP testing and other risk markers with respect to their ability to predict recurrence of ventricular tachyarrhythmias as documented by ICD electrograms. METHODS AND RESULTS: Ninety-five patients with a history of ventricular tachyarrhythmias undergoing implantation of an ICD underwent EP testing, assessment of TWA, as well as determination of LVEF, baroreflex sensitivity, signal-averaged ECG, analysis of 24-hour Holter monitoring, and QT dispersion from the 12-lead surface ECG. The endpoint of the study was first appropriate ICD therapy for electrogram-documented ventricular fibrillation or tachycardia during follow-up. Kaplan-Meier survival analysis revealed that TWA (P < 0.006) and LVEF (P < 0.04) were the only significant univariate risk stratifiers. EP testing was not statistically significant (P < 0.2). Multivariate Cox regression analysis revealed that TWA was the only statistically significant independent risk factor. CONCLUSIONS: Measurement of microvolt TWA compared favorably with both invasive EP testing and other currently used noninvasive risk assessment methods in predicting recurrence of ventricular tachyarrhythmias in ICD recipients. This study suggests that TWA might also be a powerful tool for risk stratification in low- or moderate-risk patients, and needs to be prospectively evaluated in such populations.
NASA Astrophysics Data System (ADS)
Love, D. M.; Venturas, M.; Sperry, J.; Wang, Y.; Anderegg, W.
2017-12-01
Modeling approaches for tree stomatal control often rely on empirical fitting to provide accurate estimates of whole tree transpiration (E) and assimilation (A), which are limited in their predictive power by the data envelope used to calibrate model parameters. Optimization based models hold promise as a means to predict stomatal behavior under novel climate conditions. We designed an experiment to test a hydraulic trait based optimization model, which predicts stomatal conductance from a gain/risk approach. Optimal stomatal conductance is expected to maximize the potential carbon gain by photosynthesis, and minimize the risk to hydraulic transport imposed by cavitation. The modeled risk to the hydraulic network is assessed from cavitation vulnerability curves, a commonly measured physiological trait in woody plant species. Over a growing season garden grown plots of aspen (Populus tremuloides, Michx.) and ponderosa pine (Pinus ponderosa, Douglas) were subjected to three distinct drought treatments (moderate, severe, severe with rehydration) relative to a control plot to test model predictions. Model outputs of predicted E, A, and xylem pressure can be directly compared to both continuous data (whole tree sapflux, soil moisture) and point measurements (leaf level E, A, xylem pressure). The model also predicts levels of whole tree hydraulic impairment expected to increase mortality risk. This threshold is used to estimate survivorship in the drought treatment plots. The model can be run at two scales, either entirely from climate (meteorological inputs, irrigation) or using the physiological measurements as a starting point. These data will be used to study model performance and utility, and aid in developing the model for larger scale applications.
Construction and evaluation of FiND, a fall risk prediction model of inpatients from nursing data.
Yokota, Shinichiroh; Ohe, Kazuhiko
2016-04-01
To construct and evaluate an easy-to-use fall risk prediction model based on the daily condition of inpatients from secondary use electronic medical record system data. The present authors scrutinized electronic medical record system data and created a dataset for analysis by including inpatient fall report data and Intensity of Nursing Care Needs data. The authors divided the analysis dataset into training data and testing data, then constructed the fall risk prediction model FiND from the training data, and tested the model using the testing data. The dataset for analysis contained 1,230,604 records from 46,241 patients. The sensitivity of the model constructed from the training data was 71.3% and the specificity was 66.0%. The verification result from the testing dataset was almost equivalent to the theoretical value. Although the model's accuracy did not surpass that of models developed in previous research, the authors believe FiND will be useful in medical institutions all over Japan because it is composed of few variables (only age, sex, and the Intensity of Nursing Care Needs items), and the accuracy for unknown data was clear. © 2016 Japan Academy of Nursing Science.
Bruijn, Mmc; Vis, J Y; Wilms, F F; Oudijk, M A; Kwee, A; Porath, M M; Oei, G; Scheepers, Hcj; Spaanderman, Mea; Bloemenkamp, Kwm; Haak, M C; Bolte, A C; Vandenbussche, Fpha; Woiski, M D; Bax, C J; Cornette, Jmj; Duvekot, J J; Nij Bijvanck, Bwa; van Eyck, J; Franssen, Mtm; Sollie, K M; van der Post, Jam; Bossuyt, Pmm; Opmeer, B C; Kok, M; Mol, Bwj; van Baaren, G-J
2016-11-01
To evaluate whether in symptomatic women, the combination of quantitative fetal fibronectin (fFN) testing and cervical length (CL) improves the prediction of preterm delivery (PTD) within 7 days compared with qualitative fFN and CL. Post hoc analysis of frozen fFN samples of a nationwide cohort study. Ten perinatal centres in the Netherlands. Symptomatic women between 24 and 34 weeks of gestation. The risk of PTD <7 days was estimated in predefined CL and fFN strata. We used logistic regression to develop a model including quantitative fFN and CL, and one including qualitative fFN (threshold 50 ng/ml) and CL. We compared the models' capacity to identify women at low risk (<5%) for delivery within 7 days using a reclassification table. Spontaneous delivery within 7 days after study entry. We studied 350 women, of whom 69 (20%) delivered within 7 days. The risk of PTD in <7 days ranged from 2% in the lowest fFN group (<10 ng/ml) to 71% in the highest group (>500 ng/ml). Multivariable logistic regression showed an increasing risk of PTD in <7 days with rising fFN concentration [10-49 ng/ml: odds ratio (OR) 1.3, 95% confidence interval (95% CI) 0.23-7.0; 50-199 ng/ml: OR 3.2, 95% CI 0.79-13; 200-499 ng/ml: OR 9.0, 95% CI 2.3-35; >500 ng/ml: OR 39, 95% CI 9.4-164] and shortening of the CL (OR 0.86 per mm, 95% CI 0.82-0.90). Use of quantitative fFN instead of qualitative fFN resulted in reclassification of 18 (5%) women from high to low risk, of whom one (6%) woman delivered within 7 days. In symptomatic women, quantitative fFN testing does not improve the prediction of PTD within 7 days compared with qualitative fFN testing in combination with CL measurement in terms of reclassification from high to low (<5%) risk, but it adds value across the risk range. Quantitative fFN testing adds value to qualitative fFN testing with CL measurement in the prediction of PTD. © 2015 Royal College of Obstetricians and Gynaecologists.
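A hedged sketch of the kind of multivariable logistic model described above, with quantitative fFN collapsed into the predefined strata and cervical length in millimetres; the data frame is simulated and the variable names (ffn_stratum, cl_mm, ptd_7d) are placeholders, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame standing in for the cohort (n = 350).
rng = np.random.default_rng(2)
n = 350
df = pd.DataFrame({
    # quantitative fFN collapsed into the predefined concentration strata
    "ffn_stratum": pd.Categorical(
        rng.choice(["<10", "10-49", "50-199", "200-499", ">500"], size=n),
        categories=["<10", "10-49", "50-199", "200-499", ">500"],
        ordered=True),
    "cl_mm": rng.normal(28, 8, n).round(),   # cervical length in mm
    "ptd_7d": rng.integers(0, 2, n),         # spontaneous delivery within 7 days
})

# Multivariable logistic model: fFN strata (reference "<10") plus CL in mm,
# mirroring the model structure described in the abstract.
model = smf.logit("ptd_7d ~ C(ffn_stratum) + cl_mm", data=df).fit(disp=0)

# Exponentiated coefficients give odds ratios with 95% confidence intervals.
summary = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
print(summary.set_axis(["OR", "2.5%", "97.5%"], axis=1))
```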
Prediction of near-term breast cancer risk using a Bayesian belief network
NASA Astrophysics Data System (ADS)
Zheng, Bin; Ramalingam, Pandiyarajan; Hariharan, Harishwaran; Leader, Joseph K.; Gur, David
2013-03-01
Accurately predicting near-term breast cancer risk is an important prerequisite for establishing an optimal personalized breast cancer screening paradigm. In previous studies, we investigated and tested the feasibility of developing a unique near-term breast cancer risk prediction model based on a new risk factor associated with bilateral mammographic density asymmetry between the left and right breasts of a woman using a single feature. In this study we developed a multi-feature based Bayesian belief network (BBN) that combines bilateral mammographic density asymmetry with three other popular risk factors, namely (1) age, (2) family history, and (3) average breast density, to further increase the discriminatory power of our cancer risk model. A dataset involving "prior" negative mammography examinations of 348 women was used in the study. Among these women, 174 had breast cancer detected and verified in the next sequential screening examinations, and 174 remained negative (cancer-free). A BBN was applied to predict the risk of each woman having cancer detected six to 18 months later following the negative screening mammography. The prediction results were compared with those using single features. The prediction accuracy was significantly increased when using the BBN. The area under the ROC curve increased from an AUC=0.70 to 0.84 (p<0.01), while the positive predictive value (PPV) and negative predictive value (NPV) also increased from a PPV=0.61 to 0.78 and an NPV=0.65 to 0.75, respectively. This study demonstrates that a multi-feature based BBN can more accurately predict the near-term breast cancer risk than with a single feature.
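As a simplified stand-in for the Bayesian belief network described above, the sketch below combines several risk factors by multiplying likelihood ratios onto prior odds (a naive-Bayes approximation that, unlike a BBN, assumes the factors are conditionally independent); the likelihood-ratio values are invented for illustration, not taken from the paper.

```python
import numpy as np

def posterior_risk(prior: float, likelihood_ratios) -> float:
    """Naive-Bayes style update: convert the prior to odds, multiply in the
    likelihood ratio contributed by each risk factor, convert back."""
    odds = prior / (1 - prior)
    odds *= np.prod(likelihood_ratios)
    return odds / (1 + odds)

# Illustrative likelihood ratios for the four factors named in the abstract
# (age band, family history, average density, bilateral density asymmetry);
# the values are made up for the sketch.
factors = {"age_60s": 1.4, "family_history": 1.8,
           "high_avg_density": 1.5, "high_asymmetry": 2.5}

prior = 0.5  # the study design used a 1:1 case-control mix
print(f"posterior risk = {posterior_risk(prior, list(factors.values())):.2f}")
```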
Campbell, Suzann K; Kolobe, Thubi H A; Wright, Benjamin D; Linacre, John Michael
2002-04-01
The Test of Infant Motor Performance (TIMP) is a test of functional movement in infants from 32 weeks' post-conceptional age to 4 months postterm. The purpose of this study was to assess in 96 infants (44 females, 52 males) with varying risk, the relation between measures on the TIMP at 7, 30, 60, and 90 days after term age and percentile ranks (PR) on the Alberta Infant Motor Scale (AIMS). Correlation between scores on the TIMP and the AIMS was highest for TIMP tests at 90 days and AIMS testing at 6 months (r=0.67, p=0.0001), but all comparisons were statistically significant except those between the TIMP at 7 days and AIMS PR at 9 months. In a multiple regression analysis combining a perinatal risk score and 7-day TIMP measures to predict 12-month AIMS PR, risk, but not TIMP, predicted outcome (21% of variance explained). At older ages TIMP measures made increasing contributions to prediction of 12-month AIMS PR (30% of variance explained by 90-day TIMP). The best TIMP score to maximize specificity and correctly identify 84% of the infants above versus below the 10th PR at 6 months was a cut-off point of 1 SD below the mean. The same cut-off point correctly identified 88% of the infants at 12 months. A cut-off of -0.5 SD, however, maximized sensitivity at 92%. A negative test result, i.e. score above -0.5 SD at 3 months, carried only a 2% probability of a poor 12-month outcome. We conclude that TIMP scores significantly predict AIMS PR 6 to 12 months later, but the TIMP at 3 months of age has the greatest degree of validity for predicting motor performance on the AIMS at 12 months and can be used clinically to identify infants likely to benefit from intervention.
McDowell, Michelle E; Occhipinti, Stefano; Chambers, Suzanne K
2013-11-01
To examine how family history of prostate cancer, risk perceptions, and heuristic decision strategies influence prostate cancer screening behavior. Men with a first-degree family history of prostate cancer (FDRs; n = 207) and men without a family history (PM; n = 239) completed a Computer Assisted Telephone Interview (CATI) examining prostate cancer risk perceptions, PSA testing behaviors, perceptions of similarity to the typical man who gets prostate cancer (representativeness heuristic), and availability of information about prostate cancer (availability heuristic). A path model explored family history as influencing the availability of information about prostate cancer (number of acquaintances with prostate cancer and number of recent discussions about prostate cancer) to mediate judgments of risk and to predict PSA testing behaviors and family history as a moderator of the relationship between representativeness (perceived similarity) and risk perceptions. FDRs reported greater risk perceptions and a greater number of PSA tests than did PM. Risk perceptions predicted increased PSA testing only in path models and was significant only for PM in multi-Group SEM analyses. Family history moderated the relationship between similarity perceptions and risk perceptions such that the relationship between these variables was significant only for FDRs. Recent discussions about prostate cancer mediated the relationships between family history and risk perceptions, and the number of acquaintances men knew with prostate cancer mediated the relationship between family history and PSA testing behavior. Family history interacts with the individuals' broader social environment to influence risk perceptions and screening behavior. Research into how risk perceptions develop and what primes behavior change is crucial to underpin psychological or public health intervention that seeks to influence health decision making.
Hann, Katie E J; Fraser, Lindsay; Side, Lucy; Gessler, Sue; Waller, Jo; Sanderson, Saskia C; Freeman, Madeleine; Jacobs, Ian; Lanceley, Anne
2017-12-16
Ovarian cancer is usually diagnosed at a late stage when outcomes are poor. Personalised ovarian cancer risk prediction, based on genetic and epidemiological information and risk stratified management in adult women could improve outcomes. Examining health care professionals' (HCP) attitudes to ovarian cancer risk stratified management, willingness to support women, self-efficacy (belief in one's own ability to successfully complete a task), and knowledge about ovarian cancer will help identify training needs in anticipation of personalised ovarian cancer risk prediction being introduced. An anonymous survey was distributed online to HCPs via relevant professional organisations in the UK. Kruskal-Wallis tests and pairwise comparisons were used to compare knowledge and self-efficacy scores between different types of HCPs, and attitudes toward population-based genetic testing and risk stratified management were described. Content analysis was undertaken of free text responses concerning HCPs willingness to discuss risk management options with women. One hundred forty-six eligible HCPs completed the survey: oncologists (31%); genetics clinicians (30%); general practitioners (22%); gynaecologists (10%); nurses (4%); and 'others'. Scores for knowledge of ovarian cancer and genetics, and self-efficacy in conducting a cancer risk consultation were generally high but significantly lower for general practitioners compared to genetics clinicians, oncologists, and gynaecologists. Support for population-based genetic testing was not high (<50%). Attitudes towards ovarian cancer risk stratification were mixed, although the majority of participants indicated a willingness to discuss management options with patients. Larger samples are required to investigate attitudes to population-based genetic testing for ovarian cancer risk and to establish why some HCPs are hesitant to offer testing to all adult female patients. If ovarian cancer risk assessment using genetic testing and non-genetic information including epidemiological information is rolled out on a population basis, training will be needed for HCPs in primary care to enable them to provide appropriate support to women at each stage of the process.
Forman, Jason L.; Kent, Richard W.; Mroz, Krystoffer; Pipkorn, Bengt; Bostrom, Ola; Segui-Gomez, Maria
2012-01-01
This study sought to develop a strain-based probabilistic method to predict rib fracture risk with whole-body finite element (FE) models, and to describe a method to combine the results with collision exposure information to predict injury risk and potential intervention effectiveness in the field. An age-adjusted ultimate strain distribution was used to estimate local rib fracture probabilities within an FE model. These local probabilities were combined to predict injury risk and severity within the whole ribcage. The ultimate strain distribution was developed from a literature dataset of 133 tests. Frontal collision simulations were performed with the THUMS (Total HUman Model for Safety) model with four levels of delta-V and two restraints: a standard 3-point belt and a progressive 3.5–7 kN force-limited, pretensioned (FL+PT) belt. The results of three simulations (29 km/h standard, 48 km/h standard, and 48 km/h FL+PT) were compared to matched cadaver sled tests. The numbers of fractures predicted for the comparison cases were consistent with those observed experimentally. Combining these results with field exposure information (ΔV, NASS-CDS 1992–2002) suggests an 8.9% probability of incurring AIS3+ rib fractures for a 60-year-old restrained by a standard belt in a tow-away frontal collision with this restraint, vehicle, and occupant configuration, compared to 4.6% for the FL+PT belt. This is the first study to describe a probabilistic framework to predict rib fracture risk based on strains observed in human-body FE models. Using this analytical framework, future efforts may incorporate additional subject or collision factors for multi-variable probabilistic injury prediction. PMID:23169122
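A rough sketch of one way local, per-location fracture probabilities from an FE model can be rolled up into whole-ribcage risk figures, assuming independence between locations; this is an illustrative simplification, not the probabilistic framework actually used in the paper, and the input probabilities are hypothetical.

```python
import numpy as np

def ribcage_fracture_risk(local_probs, n_fracture_threshold=3):
    """Combine per-location fracture probabilities into whole-ribcage outcome
    probabilities, assuming independence between locations."""
    p = np.asarray(local_probs)
    p_any = 1.0 - np.prod(1.0 - p)   # probability of at least one fracture
    expected_fractures = p.sum()     # expected number of fractures

    # Monte Carlo estimate of P(at least k fractures), a crude AIS3+ proxy.
    rng = np.random.default_rng(0)
    draws = rng.random((20000, p.size)) < p
    p_k_or_more = (draws.sum(axis=1) >= n_fracture_threshold).mean()
    return p_any, expected_fractures, p_k_or_more

# Hypothetical strain-derived probabilities at 12 rib locations.
local = [0.02, 0.05, 0.10, 0.20, 0.15, 0.08, 0.03, 0.01, 0.12, 0.06, 0.04, 0.02]
print(ribcage_fracture_risk(local))
```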
Turusheva, Anna; Frolova, Elena; Bert, Vaes; Hegendoerfer, Eralda; Degryse, Jean-Marie
2017-07-01
Prediction models help to inform decisions about further management in clinical practice. This study aims to develop a mortality risk score based on previously identified risk predictors and to perform internal and external validation. In a population-based prospective cohort study of 611 community-dwelling individuals aged 65+ in St. Petersburg (Russia), all-cause mortality risks over 2.5 years of follow-up were determined based on the results obtained from anthropometry, medical history, physical performance tests, spirometry and laboratory tests. C-statistics, risk reclassification analysis, integrated discrimination improvement analysis, decision curve analysis, internal validation and external validation were performed. Older adults were at higher risk for mortality [HR (95% CI)=4.54 (3.73-5.52)] when two or more of the following components were present: poor physical performance, low muscle mass, poor lung function, and anemia. When anemia combined with high C-reactive protein (CRP) was used and high B-type natriuretic peptide (BNP) was added, the HR (95% CI) was slightly higher [5.81 (4.73-7.14)], even after adjusting for age, sex and comorbidities. Our models were validated in an external population of adults aged 80+. The extended model had better predictive capacity for cardiovascular mortality [HR (95% CI)=5.05 (2.23-11.44)] compared to the baseline model [HR (95% CI)=2.17 (1.18-4.00)] in the external population. We developed and validated a new risk prediction score that may be used to identify older adults at higher risk for mortality in Russia. Additional studies are needed to determine which targeted interventions improve the outcomes of these at-risk individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Eisenhart, T.; Josset, L.; Rising, J. A.; Devineni, N.; Lall, U.
2017-12-01
In the wake of recent water crises, the need to understand and predict the risk of water stress in urban and rural areas has grown. This understanding has the potential to improve decision making in public resource management, policy making, risk management and investment decisions. Assuming an underlying relationship between urban and rural water stress and observable features, we apply Deep Learning and Supervised Learning models to uncover hidden nonlinear patterns from spatiotemporal datasets. Results of interest include prediction accuracy on extreme categories (i.e., urban areas highly prone to water stress), not solely the average risk for urban or rural areas, which adds complexity to the tuning of model parameters. We first label urban water-stressed counties using annual water quality violations and compile a comprehensive spatiotemporal dataset that captures the yearly evolution of climatic, demographic and economic factors of more than 3,000 US counties over the 1980-2010 period. As county-level data reporting is not done on a yearly basis, we test multiple imputation methods to address missing data. Using the Python libraries TensorFlow and scikit-learn, we apply and compare the ability of, among other methods, Recurrent Neural Networks (testing both LSTM and GRU cells), Convolutional Neural Networks and Support Vector Machines to predict urban water stress. We evaluate the performance of those models over multiple time spans and combine methods to diminish the risk of overfitting and increase prediction power on test sets. This methodology seeks to identify hidden nonlinear patterns to assess the predominant data features that influence urban and rural water stress. Results from this application at the national scale will assess the performance of deep learning models in predicting water stress risk areas across all US counties and will highlight a predominant Machine Learning method for modeling water stress risk using spatiotemporal data.
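As a sketch of what one of the listed model families might look like in TensorFlow, the snippet below trains a small LSTM classifier on a synthetic stand-in for the county-year panel. The feature count, window length, label prevalence, and class weights are all invented for illustration; a GRU layer could be swapped in the same way.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for the panel: 3000 counties, 10-year windows,
# 8 climatic/demographic/economic features, and a rare binary stress label.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10, 8)).astype("float32")
y = (rng.random(3000) < 0.15).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 8)),
    tf.keras.layers.LSTM(32),                        # a GRU cell could replace this
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Class weights push the model toward the rare "stressed" category rather than
# optimizing only average accuracy.
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2,
          class_weight={0: 1.0, 1: 5.0}, verbose=0)
```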
Castellanos-Ryan, Natalie; O'Leary-Barrett, Maeve; Sully, Laura; Conrod, Patricia
2013-01-01
This study assessed the validity, sensitivity, and specificity of the Substance Use Risk Profile Scale (SURPS), a measure of personality risk factors for substance use and other behavioral problems in adolescence. The concurrent and predictive validity of the SURPS was tested in a sample of 1,162 adolescents (mean age: 13.7 years) using linear and logistic regressions, while its sensitivity and specificity were examined using receiver operating characteristic curve analyses. Concurrent and predictive validity tests showed that all 4 brief scales (hopelessness [H], anxiety sensitivity [AS], impulsivity [IMP], and sensation seeking [SS]) were related, in theoretically expected ways, to measures of substance use and other behavioral and emotional problems. Results also showed that when using the 4 SURPS subscales to identify adolescents "at risk," one can identify a high number of those who developed problems (high sensitivity scores ranging from 72 to 91%). As predicted, because each scale is related to specific substance and mental health problems, good specificity was obtained when using the individual personality subscales (e.g., most adolescents identified at high risk by the IMP scale developed conduct or drug use problems within the next 18 months [a high specificity score of 70 to 80%]). The SURPS is a valuable tool for identifying adolescents at high risk for substance misuse and other emotional and behavioral problems. Implications of findings for the use of this measure in future research and prevention interventions are discussed. Copyright © 2012 by the Research Society on Alcoholism.
A risk prediction model for xerostomia: a retrospective cohort study.
Villa, Alessandro; Nordio, Francesco; Gohel, Anita
2016-12-01
We investigated the prevalence of xerostomia in dental patients and built a xerostomia risk prediction model by incorporating a wide range of risk factors. Socio-demographic data, past medical history, self-reported dry mouth and related symptoms were collected retrospectively from January 2010 to September 2013 for all new dental patients. A logistic regression framework was used to build a risk prediction model for xerostomia. External validation was performed using an independent data set to test the prediction power. A total of 12 682 patients were included in this analysis (54.3% female). Xerostomia was reported by 12.2% of patients. The proportion of people reporting xerostomia was higher among those who were taking more medications (OR = 1.11, 95% CI = 1.08-1.13) or who used recreational drugs (OR = 1.4, 95% CI = 1.1-1.9). Rheumatic diseases (OR = 2.17, 95% CI = 1.88-2.51), psychiatric diseases (OR = 2.34, 95% CI = 2.05-2.68), eating disorders (OR = 2.28, 95% CI = 1.55-3.36) and radiotherapy (OR = 2.00, 95% CI = 1.43-2.80) were good predictors of xerostomia. For model performance in the test set, the ROC-AUC was 0.816, and in the external validation sample the ROC-AUC was 0.799. The xerostomia risk prediction model had high accuracy and discriminated between high- and low-risk individuals. Clinicians could use this model to identify the classes of medications and systemic diseases associated with xerostomia. © 2015 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.
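A hedged scikit-learn sketch of the modelling and external-validation workflow described above; the predictor and outcome column names and the file names are placeholders rather than the study's actual variables.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder column names for the kinds of predictors named in the abstract.
predictors = ["n_medications", "recreational_drug_use", "rheumatic_disease",
              "psychiatric_disease", "eating_disorder", "radiotherapy", "age", "sex"]

clinic = pd.read_csv("clinic_cohort.csv")          # development data (hypothetical file)
external = pd.read_csv("external_cohort.csv")      # independent validation data

train, test = train_test_split(clinic, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(train[predictors], train["xerostomia"])

for name, data in [("held-out test", test), ("external validation", external)]:
    auc = roc_auc_score(data["xerostomia"],
                        model.predict_proba(data[predictors])[:, 1])
    print(f"{name}: ROC-AUC = {auc:.3f}")          # abstract reports 0.816 and 0.799
```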
IL-8 predicts pediatric oncology patients with febrile neutropenia at low risk for bacteremia.
Cost, Carrye R; Stegner, Martha M; Leonard, David; Leavey, Patrick
2013-04-01
Despite a low bacteremia rate, pediatric oncology patients are frequently admitted for febrile neutropenia. A pediatric risk prediction model with high sensitivity to identify patients at low risk for bacteremia is not available. We performed a single-institution prospective cohort study of pediatric oncology patients with febrile neutropenia to create a risk prediction model using clinical factors, respiratory viral infection, and cytokine expression. Pediatric oncology patients with febrile neutropenia were enrolled between March 30, 2010 and April 1, 2011 and managed per institutional protocol. Blood samples for C-reactive protein and cytokine expression and nasopharyngeal swabs for respiratory viral testing were obtained. Medical records were reviewed for clinical data. Statistical analysis utilized mixed multiple logistic regression modeling. During the 12-month period, 195 febrile neutropenia episodes were enrolled. There were 24 (12%) episodes of bacteremia. Univariate analysis revealed several factors predictive for bacteremia, and interleukin (IL)-8 was the most predictive variable in the multivariate stepwise logistic regression. Low serum IL-8 predicted patients at low risk for bacteremia with a sensitivity of 0.9 and negative predictive value of 0.98. IL-8 is a highly sensitive predictor for patients at low risk for bacteremia. IL-8 should be utilized in a multi-institution prospective trial to assign risk stratification to pediatric patients admitted with febrile neutropenia.
A Community-based Cross-sectional Study of Cardiovascular Risk in a Rural Community of Puducherry.
Shrivastava, Saurabh R; Ghorpade, Arun G; Shrivastava, Prateek S
2015-01-01
The World Health Organization (WHO) / International Society of Hypertension (ISH) risk prediction chart can predict the risk of cardiovascular events in any population. To assess the prevalence of cardiovascular risk factors and to estimate the cardiovascular risk using the WHO/ISH risk charts, a cross-sectional study was done from November 2011 to January 2012 in a rural area of Puducherry. The sampling method was single-stage cluster random sampling, and subjects were enrolled according to the inclusion and exclusion criteria. The data collection tool was a piloted and semi-structured questionnaire, while the WHO/ISH cardiovascular risk prediction charts for the South-East Asian region were used to predict cardiovascular risk. Institutional ethics committee approval was obtained before the start of the study. Statistical analysis was done using SPSS version 16 and appropriate statistical tests were applied. The mean age was 54.2 (±11.1) years, with 46.7% of the participants being male. On application of the WHO/ISH risk prediction charts, almost 17% of the study subjects had moderate or high risk for a cardiovascular event. Additionally, a high-salt diet, alcohol use and low HDL levels were identified as the major CVD risk factors. To conclude, stratification of people on the basis of the risk prediction chart is a major step toward clarifying the magnitude of the problem. The findings of the current study revealed that there is a high burden of CVD risk in rural Puducherry.
Taksler, Glen B; Perzynski, Adam T; Kattan, Michael W
2017-04-01
Recommendations for colorectal cancer screening encourage patients to choose among various screening methods based on individual preferences for benefits, risks, screening frequency, and discomfort. We devised a model to illustrate how individuals with varying tolerance for screening complications risk might decide on their preferred screening strategy. We developed a discrete-time Markov mathematical model that allowed hypothetical individuals to maximize expected lifetime utility by selecting screening method, start age, stop age, and frequency. Individuals could choose from stool-based testing every 1 to 3 years, flexible sigmoidoscopy every 1 to 20 years with annual stool-based testing, colonoscopy every 1 to 20 years, or no screening. We compared the life expectancy gained from the chosen strategy with the life expectancy available from a benchmark strategy of decennial colonoscopy. For an individual at average risk of colorectal cancer who was risk neutral with respect to screening complications (and therefore was willing to undergo screening if it would actuarially increase life expectancy), the model predicted that he or she would choose colonoscopy every 10 years, from age 53 to 73 years, consistent with national guidelines. For a similar individual who was moderately averse to screening complications risk (and therefore required a greater increase in life expectancy to accept potential risks of colonoscopy), the model predicted that he or she would prefer flexible sigmoidoscopy every 12 years with annual stool-based testing, with 93% of the life expectancy benefit of decennial colonoscopy. For an individual with higher risk aversion, the model predicted that he or she would prefer 2 lifetime flexible sigmoidoscopies, 20 years apart, with 70% of the life expectancy benefit of decennial colonoscopy. Mathematical models may formalize how individuals with different risk attitudes choose between various guideline-recommended colorectal cancer screening strategies.
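To make the decision mechanics concrete, here is a deliberately toy discrete-time Markov sketch in which a hypothetical individual picks the screening strategy that maximizes expected life-years minus a risk-aversion penalty on complication probability. Every transition probability, detection rate, complication rate, and aversion level below is invented for illustration and is not a calibrated input of the model described above.

```python
import numpy as np

# Toy discrete-time Markov chain over four states:
# 0 = healthy, 1 = preclinical CRC, 2 = clinical CRC, 3 = dead.
def expected_life_years(p_detect, years=30):
    P = np.array([
        [0.986, 0.004, 0.000, 0.010],   # healthy
        [0.000, 0.930, 0.060, 0.010],   # preclinical disease may progress
        [0.000, 0.000, 0.900, 0.100],   # clinical disease
        [0.000, 0.000, 0.000, 1.000],   # dead (absorbing)
    ])
    # Screening returns a fraction of preclinical cases to "healthy" each year.
    P[1, 0] += p_detect * P[1, 1]
    P[1, 1] *= (1 - p_detect)
    state = np.array([1.0, 0.0, 0.0, 0.0])
    life_years = 0.0
    for _ in range(years):
        life_years += state[:3].sum()   # alive states contribute one life-year
        state = state @ P
    return life_years

strategies = {                          # (per-year detection, per-year complication)
    "no screening":          (0.00, 0.0000),
    "annual stool testing":  (0.15, 0.0000),
    "decennial colonoscopy": (0.30, 0.0010),   # crude per-year equivalents
}
for aversion in (0.0, 50.0, 200.0):     # life-years forgone per unit complication risk
    best = max(strategies,
               key=lambda s: expected_life_years(strategies[s][0])
                             - aversion * strategies[s][1])
    print(f"risk aversion {aversion:>5}: prefers {best}")
```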
First trimester prediction of maternal glycemic status.
Gabbay-Benziv, Rinat; Doyle, Lauren E; Blitzer, Miriam; Baschat, Ahmet A
2015-05-01
To predict gestational diabetes mellitus (GDM) or normoglycemic status using first trimester maternal characteristics. We used data from a prospective cohort study. First trimester maternal characteristics were compared between women with and without GDM. The association of these variables with glucose values at the glucose challenge test (GCT) and subsequent GDM was tested to identify key parameters. A predictive algorithm for GDM was developed, and receiver operating characteristic (ROC) statistics were used to derive the optimal risk score. We defined the normoglycemic state as a normal GCT and, whenever obtained, normal values for all four glucose measurements at the oral glucose tolerance test. Using the same statistical approach, we developed an algorithm to predict the normoglycemic state. Maternal age, race, prior GDM, first trimester BMI, and systolic blood pressure (SBP) were all significantly associated with GDM. Age, BMI, and SBP were also associated with GCT values. The equation constructed by logistic regression analysis and the calculated risk score yielded sensitivity, specificity, positive predictive value, and negative predictive value of 85%, 62%, 13.8%, and 98.3%, respectively, for a cut-off value of 0.042 (area under the ROC curve [ROC-AUC] 0.819, 95% confidence interval [CI] 0.769-0.868). The model constructed for normoglycemia prediction demonstrated lower performance (ROC-AUC 0.707, CI 0.668-0.746). GDM prediction can be achieved during the first trimester encounter by integrating maternal characteristics and basic measurements, while prediction of normoglycemic status is less effective.
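A small scikit-learn sketch of deriving such a risk score and choosing a screening cut-off from the ROC curve; the data file and column names are placeholders, and categorical predictors such as race would need encoding before use.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

df = pd.read_csv("first_trimester.csv")             # hypothetical data file
X_cols = ["age", "bmi", "systolic_bp", "prior_gdm"]  # placeholder predictor names

model = LogisticRegression(max_iter=1000).fit(df[X_cols], df["gdm"])
risk = model.predict_proba(df[X_cols])[:, 1]
print("ROC-AUC:", roc_auc_score(df["gdm"], risk))

# Choose the most stringent threshold that still reaches 85% sensitivity
# (the abstract quotes 85% sensitivity at a risk score cut-off of 0.042).
fpr, tpr, thresholds = roc_curve(df["gdm"], risk)
cutoff = thresholds[np.argmax(tpr >= 0.85)]
print("chosen cut-off:", cutoff)
```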
Child-related cognitions and affective functioning of physically abusive and comparison parents.
Haskett, Mary E; Smith Scott, Susan; Grant, Raven; Ward, Caryn Sabourin; Robinson, Canby
2003-06-01
The goal of this research was to utilize the cognitive behavioral model of abusive parenting to select and examine risk factors to illuminate the unique and combined influences of social cognitive and affective variables in predicting abuse group membership. Participants included physically abusive parents (n=56) and a closely-matched group of comparison parents (n=62). Social cognitive risk variables measured were (a) parent's expectations for children's abilities and maturity, (b) parental attributions of intentionality of child misbehavior, and (c) parents' perceptions of their children's adjustment. Affective risk variables included (a) psychopathology and (b) parenting stress. A series of logistic regression models were constructed to test the individual, combined, and interactive effects of risk variables on abuse group membership. The full set of five risk variables was predictive of abuse status; however, not all variables were predictive when considered individually and interactions did not contribute significantly to prediction. A risk composite score computed for each parent based on the five risk variables significantly predicted abuse status. Wide individual differences in risk across the five variables were apparent within the sample of abusive parents. Findings were generally consistent with a cognitive behavioral model of abuse, with cognitive variables being more salient in predicting abuse status than affective factors. Results point to the importance of considering diversity in characteristics of abusive parents.
Are Genetic Tests for Atherosclerosis Ready for Routine Clinical Use?
Paynter, Nina P; Ridker, Paul M; Chasman, Daniel I
2016-02-19
In this review, we lay out 3 areas currently being evaluated for incorporation of genetic information into clinical practice related to atherosclerosis. The first, familial hypercholesterolemia, is the clearest case for utility of genetic testing in diagnosis and potentially guiding treatment. Already in use for confirmatory testing of familial hypercholesterolemia and for cascade screening of relatives, genetic testing is likely to expand to help establish diagnoses and facilitate research related to most effective therapies, including new agents, such as PCSK9 inhibitors. The second area, adding genetic information to cardiovascular risk prediction for primary prevention, is not currently recommended. Although identification of additional variants may add substantially to prediction in the future, combining known variants has not yet demonstrated sufficient improvement in prediction for incorporation into commonly used risk scores. The third area, pharmacogenetics, has utility for some therapies today. Future utility for pharmacogenetics will wax or wane depending on the nature of available drugs and therapeutic strategies. © 2016 American Heart Association, Inc.
Feasibility of Predicting MCI/AD Using Neuropsychological Tests and Serum β-Amyloid
Luis, Cheryl A.; Abdullah, Laila; Ait-Ghezala, Ghania; Mouzon, Benoit; Keegan, Andrew P.; Crawford, Fiona; Mullan, Michael
2011-01-01
We examined the usefulness of brief neuropsychological tests and serum Aβ as a predictive test for detecting MCI/AD in older adults. Serum Aβ levels were measured in 208 subjects who were cognitively normal at enrollment and blood draw. Twenty-eight of the subjects subsequently developed MCI (n = 18) or AD (n = 10) over the follow-up period. Baseline measures of global cognition, memory, language fluency, serum Aβ1–42 and the ratio of serum Aβ1–42/Aβ1–40 were significant predictors of future MCI/AD using Cox regression with demographic variables, APOE ε4, vascular risk factors, and specific medication as covariates. An optimal sensitivity of 85.2% and specificity of 86.5% for predicting MCI/AD were achieved using ROC analyses. Brief neuropsychological tests and blood-based measurements of Aβ1–42 warrant further study as a practical and cost-effective method for wide-scale screening to identify older adults who may be at risk for pathological cognitive decline. PMID:21660215
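A hedged sketch of a Cox model of time to MCI/AD conversion with baseline cognitive and serum Aβ predictors, using the lifelines library; the file and column names are placeholders, not the study's variable names.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: baseline cognition, serum Abeta measures, covariates,
# follow-up time in years, and a conversion (MCI/AD) indicator.
df = pd.read_csv("cohort.csv")
df["abeta_ratio"] = df["abeta42"] / df["abeta40"]

cph = CoxPHFitter()
cph.fit(df[["mmse", "memory_score", "fluency", "abeta42", "abeta_ratio",
            "age", "education", "apoe_e4", "followup_years", "converted"]],
        duration_col="followup_years", event_col="converted")
cph.print_summary()   # hazard ratios for each baseline predictor
```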
ERIC Educational Resources Information Center
Peters, S. Colby; Woolley, Michael E.
2015-01-01
Data from the School Success Profile generated by 19,228 middle and high school students were organized into three broad categories of risk and protective factors--control, support, and challenge--to examine the relative and combined power of aggregate scale scores in each category so as to predict academic success. It was hypothesized that higher…
An Empirical Test of Ecodevelopmental Theory in Predicting HIV Risk Behaviors among Hispanic Youth
ERIC Educational Resources Information Center
Prado, Guillermo; Huang, Shi; Maldonado-Molina, Mildred; Bandiera, Frank; Schwartz, Seth J.; de la Vega, Pura; Brown, C. Hendricks; Pantin, Hilda
2010-01-01
Ecodevelopmental theory is a theoretical framework used to explain the interplay among risk and protective processes associated with HIV risk behaviors among adolescents. Although ecodevelopmentally based interventions have been found to be efficacious in preventing HIV risk behaviors among Hispanic youth, this theory has not yet been directly…
Early detection of Alzheimer disease: methods, markers, and misgivings.
Green, R C; Clarke, V C; Thompson, N J; Woodard, J L; Letz, R
1997-01-01
There is at present no reliable predictive test for most forms of Alzheimer disease (AD). Although some information about future risk for disease is available in theory through ApoE genotyping, it is of limited accuracy and utility. Once neuroprotective treatments are available for AD, reliable early detection will become a key component of the treatment strategy. We recently conducted a pilot survey eliciting attitudes and beliefs toward an unspecified and hypothetical predictive test for AD. The survey was completed by a convenience sample of 176 individuals, aged 22-77, which was 75% female, 30% African-American, and of which 33% had a family member with AD. The survey revealed that 69% of this sample would elect to obtain predictive testing for AD if the test were 100% accurate. Individuals were more likely to desire predictive testing if they had an a priori belief that they would develop AD (p = 0.0001), had a lower educational level (p = 0.003), were worried that they would develop AD (p = 0.02), had a self-defined history of depression (p = 0.04), and had a family member with AD (p = 0.04). However, the desire for predictive testing was not significantly associated with age, gender, ethnicity, or income. The desire to obtain predictive testing for AD decreased as the assumed accuracy of the hypothetical test decreased. A better short-term strategy for early detection of AD may be computer-based neuropsychological screening of at-risk (older aged) individuals to identify very early cognitive impairment. Individuals identified in this manner could be referred for diagnostic evaluation and early cases of AD could be identified and treated. A new self-administered, touch-screen, computer-based, neuropsychological screening instrument called Neurobehavioral Evaluation System-3 is described, which may facilitate this type of screening.
Bogani, Giorgio; Taverna, Francesca; Lombardo, Claudia; Ditto, Antonino; Martinelli, Fabio; Signorelli, Mauro; Chiappa, Valentina; Leone Roberti Maggiore, U; Mosca, Lavinia; Sabatucci, Ilaria; Scaffa, Cono; Lorusso, Domenica; Raspagliesi, Francesco
2018-01-01
To assess the risk of developing high-grade cervical dysplasia among women with low-grade cervical cytology and a nonvisible squamocolumnar junction (SCJ) at colposcopic examination. Data from consecutive women with low-grade intraepithelial lesions (≤LSIL) undergoing colposcopic examination that was unsatisfactory (owing to lack of visualization of the entire SCJ) were retrospectively reviewed. The risk of developing high-grade cervical intraepithelial neoplasia (CIN2+) was assessed using Kaplan-Meier and Cox models. Data from 86 women were retrieved. Mean (standard deviation [SD]) age was 36.3 (13.4) years. A total of 71 (82.5%) patients had high-risk human papillomavirus (HR-HPV) at the time of diagnosis. Among the 63 patients undergoing repeat HPV testing, 15 (24%) and 48 (76%) women had positive and negative tests for HR-HPV at 12 months, respectively. We observed that 5 (33%) of the 15 patients with HPV persistence developed CIN2+, while only 1 (2%) of the 48 patients without HPV persistence developed CIN2+ (odds ratio [OR]: 23.5; 95% confidence interval [CI]: 2.46-223.7; P < .001). The length of HR-HPV persistence correlated with an increased risk of developing CIN2+ (P < .001 for trend). High-risk HPV persistence was the only factor predicting CIN2+ (hazard ratio: 3.19; 95% CI: 1.55-6.57; P = .002). High-risk HPV persistence predicts the risk of developing CIN2+ in patients with an unsatisfactory colposcopic examination. Further studies are warranted in order to implement the use of HPV testing in patients with unsatisfactory colposcopy.
Predictive Modeling of Developmental Toxicity
The use of alternative methods in conjunction with traditional in vivo developmental toxicity testing has the potential to (1) reduce cost and increase throughput of testing the chemical universe, (2) prioritize chemicals for further targeted toxicity testing and risk assessment,...
King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I.; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin
2011-01-01
Background Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. Results 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). Conclusions The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse. PMID:21853028
King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin
2011-01-01
Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.
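A simplified scikit-learn sketch of fitting the final predictAL-style predictor set in the development sample and checking discrimination in an external sample; for a binary outcome the c-index equals the ROC AUC. File and column names are assumptions, the predictors are assumed to be numerically coded, and no stepwise selection or shrinkage correction is shown.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Final predictAL predictors as named in the abstract, coded numerically here.
predictors = ["sex", "age", "country_code", "baseline_audit",
              "panic_syndrome", "lifetime_alcohol_problem"]

europe = pd.read_csv("europe_safe_drinkers.csv")   # development sample (hypothetical)
chile = pd.read_csv("chile_safe_drinkers.csv")     # external validation sample

model = LogisticRegression(max_iter=1000)
model.fit(europe[predictors], europe["hazardous_at_6m"])

auc_ext = roc_auc_score(chile["hazardous_at_6m"],
                        model.predict_proba(chile[predictors])[:, 1])
print(f"external c-index ~= {auc_ext:.3f}")        # abstract reports 0.781
```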
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freund, D; Zhang, R; Sanders, M
Purpose: Post-irradiation cerebral necrosis (PICN) is a severe late effect that can result from radiation therapy for brain cancers. The purpose of this study was to compare the treatment plans and predicted risk of PICN after volumetric modulated arc therapy (VMAT) to the risk after passively scattered proton therapy (PSPT) and intensity modulated proton therapy (IMPT) in a cohort of pediatric patients. Methods: Thirteen pediatric patients of varying age and sex were selected for this study. A clinical treatment volume (CTV) was constructed for 8 glioma patients and 5 ependymoma patients. The prescribed dose was 54 Gy over 30 fractions to the planning volume. Dosimetric endpoints were compared between VMAT and proton plans. The normal tissue complication probability (NTCP) following VMAT and proton therapy planning was also calculated using PICN as the biological endpoint. Sensitivity tests were performed to determine whether the predicted risk of PICN was sensitive to positional errors, proton range errors and selection of risk models. Results: Both PSPT and IMPT plans resulted in a significant increase in the maximum dose and a reduction in the total brain volume irradiated to low doses compared with the VMAT plans. The average ratios of NTCP between PSPT and VMAT were 0.56 and 0.38 for glioma and ependymoma patients, respectively, and the average ratios of NTCP between IMPT and VMAT were 0.67 and 0.68 for glioma and ependymoma plans, respectively. Sensitivity tests revealed that predicted ratios of risk were insensitive to range and positional errors but varied with risk model selection. Conclusion: Both PSPT and IMPT plans resulted in a decrease in the predicted risk of necrosis for the pediatric plans studied in this work. Sensitivity analysis upheld the qualitative findings of the risk models used in this study; however, more accurate models that take into account dose and volume are needed.
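The abstract does not state which NTCP formulation was used; as general background, the snippet below evaluates the widely used Lyman-Kutcher-Burman (LKB) NTCP model from a differential dose-volume histogram. The DVH bins and the TD50, m, and n parameter values are illustrative only and are not the study's fitted values.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_gy, frac_volume, td50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    dose_gy, frac_volume: dose bins and the fraction of organ volume in each bin.
    td50: dose giving 50% complication probability; m: slope; n: volume effect."""
    geud = np.sum(frac_volume * dose_gy ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)

# Illustrative brain DVH and parameter values (not the study's inputs).
dose = np.array([5.0, 15.0, 30.0, 45.0, 54.0])
vol = np.array([0.40, 0.25, 0.20, 0.10, 0.05])
print(f"NTCP = {lkb_ntcp(dose, vol, td50=60.0, m=0.15, n=0.25):.3f}")
```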
Risk Preferences and Predictions about Others: No Association with 2D:4D Ratio
Lima de Miranda, Katharina; Neyse, Levent; Schmidt, Ulrich
2018-01-01
Prenatal androgen exposure affects the brain development of the fetus, which may facilitate certain behaviors and decision patterns in later life. The ratio between the lengths of the second and fourth fingers (2D:4D) is a negative biomarker of the ratio between prenatal androgen and estrogen exposure, and men typically have lower ratios than women. In line with the typical findings suggesting that women are more risk averse than men, several studies have also shown negative relationships between 2D:4D and risk taking, although the evidence is not conclusive. Previous studies have also reported that both men and women believe women are more risk averse than men. In the current study, we re-test the relationship between 2D:4D and risk preferences in a German student sample and also investigate whether the 2D:4D ratio is associated with people's perceptions about others' risk preferences. Following an incentivized risk elicitation task, we asked all participants their predictions about (i) others' responses (without sex specification), (ii) men's responses, and (iii) women's responses; we then measured their 2D:4D ratios. In line with previous findings, female participants in our sample were more risk averse. While both men and women underestimated other participants' (non-sex-specific) and women's risky decisions on average, their predictions about men were accurate. We also found evidence for the false consensus effect, as risky choices are positively correlated with predictions about other participants' risky choices. The 2D:4D ratio was not directly associated with either risk preferences or predictions of other participants' choices. An unexpected finding was that women with mid-range levels of 2D:4D estimated significantly larger sex differences in participants' decisions. This finding needs further testing in future studies. PMID:29472846
Zaqout, M; Michels, N; Bammann, K; Ahrens, W; Sprengeler, O; Molnar, D; Hadjigeorgiou, C; Eiben, G; Konstabel, K; Russo, P; Jiménez-Pavón, D; Moreno, L A; De Henauw, S
2016-07-01
The aim of the study was to assess the associations of individual and combined physical fitness components with single and clustered cardio-metabolic risk factors in children. This 2-year longitudinal study included a total of 1635 European children aged 6-11 years. The test battery included cardio-respiratory fitness (20-m shuttle run test), upper-limb strength (handgrip test), lower-limb strength (standing long jump test), balance (flamingo test), flexibility (back-saver sit-and-reach) and speed (40-m sprint test). Metabolic risk was assessed through z-score standardization using four components: waist circumference, blood pressure (systolic and diastolic), blood lipids (triglycerides and high-density lipoprotein) and insulin resistance (homeostasis model assessment). Mixed model regression analyses were adjusted for sex, age, parental education, sugar and fat intake, and body mass index. Physical fitness was inversely associated with clustered metabolic risk (P<0.001). All coefficients showed a higher clustered metabolic risk with lower physical fitness, except for upper-limb strength (β=0.057; P=0.002), where the opposite association was found. Cardio-respiratory fitness (β=-0.124; P<0.001) and lower-limb strength (β=-0.076; P=0.002) were the most important longitudinal determinants. The effects of cardio-respiratory fitness were even independent of the amount of vigorous-to-moderate activity (β=-0.059; P=0.029). Among all the metabolic risk components, blood pressure was not well predicted by physical fitness, while waist circumference, blood lipids and insulin resistance were all significantly predicted by physical fitness. Poor physical fitness in children is associated with the development of cardio-metabolic risk factors. Based on our results, this risk might be modified by improving mainly cardio-respiratory fitness and lower-limb muscular strength.
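A brief sketch of constructing a clustered metabolic risk z-score of the kind described and relating it to fitness in a mixed model with statsmodels; the file, column names, and grouping variable are placeholders, and the exact score construction in the study may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import zscore

df = pd.read_csv("children_cohort.csv")            # hypothetical file and columns
df["bp_z"] = zscore(df[["systolic_bp", "diastolic_bp"]]).mean(axis=1)
df["lipid_z"] = (zscore(df["triglycerides"]) - zscore(df["hdl"])) / 2   # HDL inverted
df["clustered_risk"] = (zscore(df["waist"]) + df["bp_z"]
                        + df["lipid_z"] + zscore(df["homa_ir"]))

# Mixed model with a random intercept per study centre, adjusting roughly
# as described in the abstract.
fit = smf.mixedlm("clustered_risk ~ shuttle_run + handgrip + long_jump"
                  " + age + sex + parental_education + bmi",
                  data=df, groups=df["centre"]).fit()
print(fit.summary())
```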
Looman, Jan; Abracen, Jeffrey
2011-03-01
There has been relatively little research on the degree to which measures of lifetime history of substance abuse add to the prediction of risk based on actuarial measures alone among sexual offenders. This issue is of relevance in that a history of substance abuse is related to relapse to substance-using behavior. Furthermore, substance use has been found to be related to recidivism among sexual offenders. To investigate whether lifetime history of substance abuse adds to prediction over and above actuarial instruments alone, several measures of substance abuse were administered in conjunction with the Sex Offender Risk Appraisal Guide (SORAG). The SORAG was found to be the most accurate actuarial instrument for the prediction of serious recidivism (i.e., sexual or violent) among the sample included in the present investigation. Complete information, including follow-up data, was available for 250 offenders who attended the Regional Treatment Centre Sex Offender Treatment Program (RTCSOTP). The Michigan Alcohol Screening Test (MAST) and the Drug Abuse Screening Test (DAST) were used to assess lifetime history of substance abuse. The results of logistic regression procedures indicated that both the SORAG and the MAST independently added to the prediction of serious recidivism. The DAST did not add to prediction over the use of the SORAG alone. Implications for both the assessment and treatment of sexual offenders are discussed.
Lauer, Michael S; Pothier, Claire E; Magid, David J; Smith, S Scott; Kattan, Michael W
2007-12-18
The exercise treadmill test is recommended for risk stratification among patients with intermediate to high pretest probability of coronary artery disease. Posttest risk stratification is based on the Duke treadmill score, which includes only functional capacity and measures of ischemia. To develop and externally validate a post-treadmill-test multivariable mortality prediction rule for adults with suspected coronary artery disease and normal electrocardiograms, a prospective cohort study was conducted from September 1990 to May 2004 in the exercise treadmill laboratories of a major medical center (derivation set) and a separate HMO (validation set), including 33,268 patients in the derivation set and 5821 in the validation set. All patients had normal electrocardiograms and were referred for evaluation of suspected coronary artery disease. The derivation set patients were followed for a median of 6.2 years. A nomogram-illustrated model was derived on the basis of variables easily obtained in the stress laboratory, including age; sex; history of smoking, hypertension, diabetes, or typical angina; and exercise findings of functional capacity, ST-segment changes, symptoms, heart rate recovery, and frequent ventricular ectopy in recovery. The derivation data set included 1619 deaths. Although both the Duke treadmill score and our nomogram-illustrated model were significantly associated with death (P < 0.001), the nomogram was better at discrimination (concordance index for right-censored data, 0.83 vs. 0.73) and calibration. We reclassified many patients with intermediate- to high-risk Duke treadmill scores as low risk on the basis of the nomogram. The model also predicted 3-year mortality rates well in the validation set: based on an optimal cut-point for a negative predictive value of 0.97, derivation and validation rates were, respectively, 1.7% and 2.5% below the cut-point and 25% and 29% above the cut-point. Blood test-based measures and left ventricular ejection fraction were not included, the nomogram can be applied only to patients with a normal electrocardiogram, and clinical utility remains to be tested. A simple nomogram based on easily obtained pretest and exercise test variables predicted all-cause mortality in adults with suspected coronary artery disease and normal electrocardiograms.
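A short sketch of comparing the discrimination of two risk scores with the concordance index for censored follow-up, using lifelines; the file and column names are placeholders, and the risk scores are negated because lifelines expects higher scores to indicate longer survival.

```python
import pandas as pd
from lifelines.utils import concordance_index

# df holds follow-up time, a death indicator, and two risk predictions per patient
# (all column and file names here are assumptions for illustration).
df = pd.read_csv("treadmill_cohort.csv")

c_nomogram = concordance_index(df["years_followup"], -df["nomogram_risk"], df["died"])
c_duke = concordance_index(df["years_followup"], -df["duke_score_risk"], df["died"])
print(f"nomogram c = {c_nomogram:.2f}, Duke c = {c_duke:.2f}")   # abstract: 0.83 vs 0.73
```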
Sosenko, Jay M; Skyler, Jay S; Palmer, Jerry P; Krischer, Jeffrey P; Yu, Liping; Mahon, Jeffrey; Beam, Craig A; Boulware, David C; Rafkin, Lisa; Schatz, Desmond; Eisenbarth, George
2013-09-01
We assessed whether a risk score that incorporates levels of multiple islet autoantibodies could enhance the prediction of type 1 diabetes (T1D). TrialNet Natural History Study participants (n = 784) were tested for three autoantibodies (GADA, IA-2A, and mIAA) at their initial screening. Samples from those positive for at least one autoantibody were subsequently tested for ICA and ZnT8A. An autoantibody risk score (ABRS) was developed from a proportional hazards model that combined autoantibody levels from each autoantibody along with their designations of positivity and negativity. The ABRS was strongly predictive of T1D (hazard ratio [with 95% CI] 2.72 [2.23-3.31], P < 0.001). Receiver operating characteristic curve areas (with 95% CI) for the ABRS revealed good predictability (0.84 [0.78-0.90] at 2 years, 0.81 [0.74-0.89] at 3 years, P < 0.001 for both). The composite of levels from the five autoantibodies was predictive of T1D before and after an adjustment for the positivity or negativity of autoantibodies (P < 0.001). The findings were almost identical when ICA was excluded from the risk score model. The combination of the ABRS and the previously validated Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) predicted T1D more accurately (0.93 [0.88-0.98] at 2 years, 0.91 [0.83-0.99] at 3 years) than either the DPTRS or the ABRS alone (P ≤ 0.01 for all comparisons). These findings show the importance of considering autoantibody levels in assessing the risk of T1D. Moreover, levels of multiple autoantibodies can be incorporated into an ABRS that accurately predicts T1D.
Sosenko, Jay M.; Skyler, Jay S.; Palmer, Jerry P.; Krischer, Jeffrey P.; Yu, Liping; Mahon, Jeffrey; Beam, Craig A.; Boulware, David C.; Rafkin, Lisa; Schatz, Desmond; Eisenbarth, George
2013-01-01
OBJECTIVE We assessed whether a risk score that incorporates levels of multiple islet autoantibodies could enhance the prediction of type 1 diabetes (T1D). RESEARCH DESIGN AND METHODS TrialNet Natural History Study participants (n = 784) were tested for three autoantibodies (GADA, IA-2A, and mIAA) at their initial screening. Samples from those positive for at least one autoantibody were subsequently tested for ICA and ZnT8A. An autoantibody risk score (ABRS) was developed from a proportional hazards model that combined autoantibody levels from each autoantibody along with their designations of positivity and negativity. RESULTS The ABRS was strongly predictive of T1D (hazard ratio [with 95% CI] 2.72 [2.23–3.31], P < 0.001). Receiver operating characteristic curve areas (with 95% CI) for the ABRS revealed good predictability (0.84 [0.78–0.90] at 2 years, 0.81 [0.74–0.89] at 3 years, P < 0.001 for both). The composite of levels from the five autoantibodies was predictive of T1D before and after an adjustment for the positivity or negativity of autoantibodies (P < 0.001). The findings were almost identical when ICA was excluded from the risk score model. The combination of the ABRS and the previously validated Diabetes Prevention Trial–Type 1 Risk Score (DPTRS) predicted T1D more accurately (0.93 [0.88–0.98] at 2 years, 0.91 [0.83–0.99] at 3 years) than either the DPTRS or the ABRS alone (P ≤ 0.01 for all comparisons). CONCLUSIONS These findings show the importance of considering autoantibody levels in assessing the risk of T1D. Moreover, levels of multiple autoantibodies can be incorporated into an ABRS that accurately predicts T1D. PMID:23818528
O’Bryant, Sid E.; Xiao, Guanghua; Barber, Robert; Cullum, C. Munro; Weiner, Myron; Hall, James; Edwards, Melissa; Grammas, Paula; Wilhelmsen, Kirk; Doody, Rachelle; Diaz-Arrastia, Ramon
2015-01-01
Background Prior work on the link between blood-based biomarkers and cognitive status has largely been based on dichotomous classifications rather than detailed neuropsychological functioning. The current project was designed to create serum-based biomarker algorithms that predict neuropsychological test performance. Methods A battery of neuropsychological measures was administered. Random forest analyses were utilized to create neuropsychological test-specific biomarker risk scores in a training set that were entered into linear regression models predicting the respective test scores in the test set. Serum multiplex biomarker data were analyzed on 108 proteins from 395 participants (197 AD cases and 198 controls) from the Texas Alzheimer’s Research and Care Consortium. Results The biomarker risk scores were significant predictors (p<0.05) of scores on all neuropsychological tests. With the exception of premorbid intellectual status (6.6%), the biomarker risk scores alone accounted for a minimum of 12.9% of the variance in neuropsychological scores. Biomarker algorithms (biomarker risk scores + demographics) accounted for substantially more variance in scores. Review of the variable importance plots indicated differential patterns of biomarker significance for each test, suggesting the possibility of domain-specific biomarker algorithms. Conclusions Our findings provide proof-of-concept for a novel area of scientific discovery, which we term “molecular neuropsychology.” PMID:24107792
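A rough scikit-learn sketch of the two-stage approach described: a random forest builds a biomarker risk score in a training split, and a linear regression then predicts the observed neuropsychological score from that risk score plus demographics in the test split. The protein and outcome column names and the file are placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# df: serum protein columns plus demographics and one neuropsychological score
# (column and file names are hypothetical).
df = pd.read_csv("serum_panel.csv")
proteins = [c for c in df.columns if c.startswith("protein_")]
train, test = train_test_split(df, test_size=0.5, random_state=0)

# Stage 1: random forest turns the multiplex panel into a single biomarker risk score.
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(train[proteins], train["memory_score"])
test = test.assign(biomarker_score=rf.predict(test[proteins]))

# Stage 2: linear regression of the observed test score on the biomarker score
# plus demographics in the held-out set.
features = ["biomarker_score", "age", "education", "sex"]
lm = LinearRegression().fit(test[features], test["memory_score"])
print("R^2 in test set:", lm.score(test[features], test["memory_score"]))
```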
ERIC Educational Resources Information Center
Duell, Natasha; Steinberg, Laurence; Chein, Jason; Al-Hassan, Suha M.; Bacchini, Dario; Lei, Chang; Chaudhary, Nandita; Di Giunta, Laura; Dodge, Kenneth A.; Fanti, Kostas A.; Lansford, Jennifer E.; Malone, Patrick S.; Oburu, Paul; Pastorelli, Concetta; Skinner, Ann T.; Sorbring, Emma; Tapanya, Sombat; Uribe Tirado, Liliana Maria; Alampay, Liane Peña
2016-01-01
In the present analysis, we test the dual systems model of adolescent risk taking in a cross-national sample of over 5,200 individuals aged 10 through 30 (M = 17.05 years, SD = 5.91) from 11 countries. We examine whether reward seeking and self-regulation make independent, additive, or interactive contributions to risk taking, and ask whether…
Pneumococcal vaccine targeting strategy for older adults: customized risk profiling.
Balicer, Ran D; Cohen, Chandra J; Leibowitz, Morton; Feldman, Becca S; Brufman, Ilan; Roberts, Craig; Hoshen, Moshe
2014-02-12
Current pneumococcal vaccine campaigns take a broad, primarily age-based approach to immunization targeting, overlooking many clinical and administrative considerations necessary in disease prevention and resource planning for specific patient populations. We aim to demonstrate the utility of a population-specific predictive model for hospital-treated pneumonia to direct effective vaccine targeting. Data were extracted for 1,053,435 members of an Israeli HMO, age 50 and older, during the study period 2008-2010. We developed and validated a logistic regression model to predict hospital-treated pneumonia using training and test samples, including a set of standard and population-specific risk factors. The model's predictive value was tested for prospectively identifying cases of pneumonia and invasive pneumococcal disease (IPD), and was compared to the existing international paradigm for patient immunization targeting. In a multivariate regression, age, co-morbidity burden and previous pneumonia events were most strongly positively associated with hospital-treated pneumonia. The model predicting hospital-treated pneumonia yielded a c-statistic of 0.80. Utilizing the predictive model, the top 17% highest-risk members within the study validation population were targeted, detecting 54% of the members who were subsequently hospitalized for pneumonia in the follow-up period. The high-risk population identified through this model included 46% of the follow-up year's IPD cases and 27% of community-treated pneumonia cases. These outcomes were compared with international guidelines for pneumococcal disease risk, which identified only 35% of hospitalized pneumonia cases, 41% of IPD cases and 21% of community-treated pneumonia cases. We demonstrate that a customized model for vaccine targeting performs better than international guidelines; therefore, risk modeling may allow for more precise vaccine targeting and resource allocation than current national and international guidelines. Health care managers and policy-makers may consider the strategic potential of utilizing clinical and administrative databases for creating population-specific risk prediction models to inform vaccination campaigns. Copyright © 2013 Elsevier Ltd. All rights reserved.
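A tiny pandas sketch of the targeting calculation, i.e., what fraction of subsequent hospital-treated pneumonia cases falls inside the top 17% of predicted risk; the file and column names are placeholders.

```python
import pandas as pd

# Hypothetical columns: a predicted risk per member and a pneumonia outcome flag
# observed in the follow-up year.
df = pd.read_csv("hmo_members.csv")
cutoff = df["risk"].quantile(1 - 0.17)         # threshold defining the top 17%
targeted = df["risk"] >= cutoff
capture = df.loc[targeted, "pneumonia"].sum() / df["pneumonia"].sum()
print(f"targeting {targeted.mean():.0%} of members captures {capture:.0%} of cases")
```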
Zhao, Lue Ping; Carlsson, Annelie; Larsson, Helena Elding; Forsander, Gun; Ivarsson, Sten A; Kockum, Ingrid; Ludvigsson, Johnny; Marcus, Claude; Persson, Martina; Samuelsson, Ulf; Örtqvist, Eva; Pyo, Chul-Woo; Bolouri, Hamid; Zhao, Michael; Nelson, Wyatt C; Geraghty, Daniel E; Lernmark, Åke
2017-11-01
It is of interest to predict possible lifetime risk of type 1 diabetes (T1D) in young children for recruiting high-risk subjects into longitudinal studies of effective prevention strategies. Utilizing a case-control study in Sweden, we applied a recently developed next generation targeted sequencing technology to genotype class II genes and applied an object-oriented regression to build and validate a prediction model for T1D. In the training set, estimated risk scores were significantly different between patients and controls (P = 8.12 × 10−92), and the area under the curve (AUC) from the receiver operating characteristic (ROC) analysis was 0.917. Using the validation data set, we validated the result with AUC of 0.886. Combining both training and validation data resulted in a predictive model with AUC of 0.903. Further, we performed a "biological validation" by correlating risk scores with 6 islet autoantibodies, and found that the risk score was significantly correlated with IA-2A (Z-score = 3.628, P < 0.001). When applying this prediction model to the Swedish population, where the lifetime T1D risk ranges from 0.5% to 2%, we anticipate identifying approximately 20 000 high-risk subjects after testing all newborns, and this calculation would identify approximately 80% of all patients expected to develop T1D in their lifetime. Through both empirical and biological validation, we have established a prediction model for estimating lifetime T1D risk, using class II HLA. This prediction model should prove useful for future investigations to identify high-risk subjects for prevention research in high-risk populations. Copyright © 2017 John Wiley & Sons, Ltd.
Spyropoulos, Evangelos; Kotsiris, Dimitrios; Spyropoulos, Katherine; Panagopoulos, Aggelos; Galanakis, Ioannis; Mavrikos, Stamatios
2017-02-01
We developed a mathematical "prostate cancer (PCa) conditions simulating" predictive model (PCP-SMART), from which we derived a novel PCa predictor (prostate cancer risk determinator [PCRD] index) and a PCa risk equation. We used these to estimate the probability of finding PCa on prostate biopsy, on an individual basis. A total of 371 men who had undergone transrectal ultrasound-guided prostate biopsy were enrolled in the present study. Given that PCa risk relates to the total prostate-specific antigen (tPSA) level, age, prostate volume, free PSA (fPSA), fPSA/tPSA ratio, and PSA density and that tPSA ≥ 50 ng/mL has a 98.5% positive predictive value for a PCa diagnosis, we hypothesized that correlating 2 variables composed of 3 ratios (1, tPSA/age; 2, tPSA/prostate volume; and 3, fPSA/tPSA; 1 variable including the patient's tPSA and the other, a tPSA value of 50 ng/mL) could operate as a PCa conditions imitating/simulating model. Linear regression analysis was used to derive the coefficient of determination (R²), termed the PCRD index. To estimate the PCRD index's predictive validity, we used the χ² test, multiple logistic regression analysis with PCa risk equation formation, calculation of test performance characteristics, and area under the receiver operating characteristic curve analysis using SPSS, version 22 (P < .05). The biopsy findings were positive for PCa in 167 patients (45.1%) and negative in 164 (44.2%). The PCRD index was positively signed in 89.82% of positive PCa cases and negative in 91.46% of negative PCa cases (χ² test; P < .001; relative risk, 8.98). The sensitivity was 89.8%, specificity was 91.5%, positive predictive value was 91.5%, negative predictive value was 89.8%, positive likelihood ratio was 10.5, negative likelihood ratio was 0.11, and accuracy was 90.6%. Multiple logistic regression revealed the PCRD index as an independent PCa predictor, and the formulated risk equation was 91% accurate in predicting the probability of finding PCa. On the receiver operating characteristic analysis, the PCRD index (area under the curve, 0.926) significantly (P < .001) outperformed other, established PCa predictors. The PCRD index effectively predicted the prostate biopsy outcome, correctly identifying 9 of 10 men who were eventually diagnosed with PCa and correctly ruling out PCa for 9 of 10 men who did not have PCa. Its predictive power significantly outperformed established PCa predictors, and the formulated risk equation accurately calculated the probability of finding cancer on biopsy, on an individual patient basis. Copyright © 2016 Elsevier Inc. All rights reserved.
Identification of Patients at Risk for Hereditary Colorectal Cancer
Mishra, Nitin; Hall, Jason
2012-01-01
Diagnosis of hereditary colorectal cancer syndromes requires clinical suspicion and knowledge of such syndromes. Lynch syndrome is the most common cause of hereditary colorectal cancer. Other less common causes include familial adenomatous polyposis (FAP), Peutz-Jeghers syndrome (PJS), juvenile polyposis syndrome, and others. A growing number of clinical and molecular tools have been used to screen and test at-risk individuals. Screening tools include diagnostic clinical criteria, family history, genetic prediction models, and tumor testing. Patients who are at high risk based on screening should be referred for genetic testing. PMID:23730221
Mand, Cara; Gillam, Lynn; Delatycki, Martin B; Duncan, Rony E
2012-09-01
Predictive genetic testing is now routinely offered to asymptomatic adults at risk for genetic disease. However, testing of minors at risk for adult-onset conditions, where no treatment or preventive intervention exists, has evoked greater controversy and inspired a debate spanning two decades. This review aims to provide a detailed longitudinal analysis and concludes by examining the debate's current status and prospects for the future. Fifty-three relevant theoretical papers published between 1990 and December 2010 were identified, and interpretative content analysis was employed to catalogue discrete arguments within these papers. Novel conclusions were drawn from this review. While the debate's first voices were raised in opposition to testing and their arguments have retained currency over many years, arguments in favour of testing, which appeared sporadically at first, have gained momentum more recently. Most arguments on both sides are testable empirical claims, so far untested, rather than abstract ethical or philosophical positions. The dispute therefore lies not so much in whether minors should be permitted to access predictive genetic testing as in whether these empirical claims about the relative benefits or harms of testing should be assessed.
Gaziano, Thomas A; Young, Cynthia R; Fitzmaurice, Garrett; Atwood, Sidney; Gaziano, J Michael
2008-01-01
Background Around 80% of all cardiovascular deaths occur in developing countries. Assessment of those patients at high risk is an important strategy for prevention. Since developing countries have limited resources for prevention strategies that require laboratory testing, we assessed whether a risk prediction method that did not require any laboratory tests could be as accurate as one requiring laboratory information. Methods The National Health and Nutrition Examination Survey (NHANES) was a prospective cohort study of 14 407 US participants aged 25–74 years at the time they were first examined (between 1971 and 1975). Our follow-up study population included participants with complete information on these surveys who did not report a history of cardiovascular disease (myocardial infarction, heart failure, stroke, angina) or cancer, yielding an analysis dataset of N=6186. We compared how well either method could predict first-time fatal and non-fatal cardiovascular disease events in this cohort. For the laboratory-based model, which required blood testing, we used standard risk factors to assess risk of cardiovascular disease: age, systolic blood pressure, smoking status, total cholesterol, reported diabetes status, and current treatment for hypertension. For the non-laboratory-based model, we substituted body-mass index for cholesterol. Findings In the cohort of 6186, there were 1529 first-time cardiovascular events and 578 (38%) deaths due to cardiovascular disease over 21 years. In women, the laboratory-based model was useful for predicting events, with a c statistic of 0.829. The c statistic of the non-laboratory-based model was 0.831. In men, the results were similar (0.784 for the laboratory-based model and 0.783 for the non-laboratory-based model). Results were similar between the laboratory-based and non-laboratory-based models in both men and women when restricted to fatal events only. Interpretation A method that uses non-laboratory-based risk factors predicted cardiovascular events as accurately as one that relied on laboratory-based values. This approach could simplify risk assessment in situations where laboratory testing is inconvenient or unavailable. PMID:18342687
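A simplified scikit-learn sketch of the comparison, ignoring censoring and survival timing: fit one logistic model with total cholesterol and one with BMI substituted in, then compare cross-validated c statistics (equal to the ROC AUC for a binary outcome). The file and column names are placeholders.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

df = pd.read_csv("nhanes_followup.csv")        # hypothetical column names
base = ["age", "systolic_bp", "smoker", "diabetes", "bp_treatment"]
lab_model = base + ["total_cholesterol"]       # laboratory-based
office_model = base + ["bmi"]                  # non-laboratory-based (BMI swapped in)

for name, cols in [("laboratory", lab_model), ("non-laboratory", office_model)]:
    pred = cross_val_predict(LogisticRegression(max_iter=1000), df[cols],
                             df["cvd_event"], cv=5, method="predict_proba")[:, 1]
    print(f"{name} model c statistic: {roc_auc_score(df['cvd_event'], pred):.3f}")
```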
[Cardiorespiratory fitness and cardiometabolic risk in young adults].
Secchi, Jeremías D; García, Gastón C
2013-01-01
The assessment of VO₂max allows subjects to be classified according to health risk. However, the factors that may affect these classifications have been little studied. The main purpose was to determine whether the type of VO₂max prediction equation and the Fitnessgram criterion-referenced standards modified the proportion of young adults classified with a level of aerobic capacity indicative of cardiometabolic risk. The study design was observational, cross-sectional and relational. Young adults (n=240) participated voluntarily. VO₂max was estimated from the 20-m shuttle run test applying 9 predictive equations. Differences in the classifications were analyzed with the Cochran Q and McNemar tests. The proportion with a level of aerobic capacity indicative of cardiometabolic risk ranged between 7.1% and 70.4% depending on the criterion-referenced standards and predictive equation used (p<0.001). A higher percentage of women were classified with an unhealthy level in all equations (women: 29.4% to 85.3% vs 4.8% to 51% in men), regardless of the criterion-referenced standards (p<0.001). In both sexes, and irrespective of the equation applied, the old criterion-referenced standards classified a lower proportion of subjects (men: 4.8% to 48.1%; women: 39.4% to 68.4%) as having unhealthy aerobic capacity (p ≤ 0.004). The type of VO₂max prediction equation and the Fitnessgram criterion-referenced standards changed the classification of young adults with a level of aerobic capacity indicative of cardiometabolic risk.
Banks, Siobhan; Catcheside, Peter; Lack, Leon; Grunstein, Ron R; McEvoy, R Doug
2004-09-15
Partial sleep deprivation and alcohol consumption are a common combination, particularly among young drivers. We hypothesized that while a low blood alcohol concentration (<0.05 g/dL) may not significantly increase crash risk, the combination of partial sleep deprivation and low blood alcohol concentration would cause significant performance impairment. The study was an experimental protocol conducted in a sleep disorders unit laboratory with 20 healthy volunteers (mean age 22.8 years; 9 men). Subjects underwent driving simulator testing at 1 am on 2 nights a week apart. On the night preceding simulator testing, subjects were partially sleep deprived (5 hours in bed). Alcohol consumption (2-3 standard alcohol drinks over 2 hours) was randomized to 1 of the 2 test nights, and blood alcohol concentrations were estimated using a calibrated Breathalyzer. During the driving task subjects were monitored continuously with electroencephalography for sleep episodes and were prompted every 4.5 minutes for answers to 2 perception scales (performance and crash risk). Mean blood alcohol concentration on the alcohol night was 0.035 +/- 0.015 g/dL. Compared with partial sleep deprivation alone, the combined partial sleep deprivation and alcohol condition produced more microsleeps, impaired driving simulator performance, and poorer ability to predict crash risk. Women predicted crash risk more accurately than did men in the partial sleep deprivation condition, but neither men nor women predicted the risk accurately in the sleep deprivation plus alcohol condition. Alcohol at legal blood alcohol concentrations appears to increase sleepiness and impair performance and the detection of crash risk following partial sleep deprivation. When partially sleep deprived, women appear to be either more perceptive of increased crash risk or more willing to admit to their driving limitations than are men. Alcohol eliminated this behavioral difference.
Ergul, Yakup; Ozturk, Erkut; Ozyilmaz, Isa; Unsal, Serkan; Carus, Hayat; Tola, Hasan Tahsin; Tanidir, Ibrahim Cansaran; Guzeltas, Alper
2015-01-01
We aimed to determine the correlation between noninvasive testing (exercise stress testing [EST] and adenosine responsiveness of the accessory pathway [AP]) and invasive electrophysiology study (EPS) for the assessment of antegrade conduction of the AP in Wolff-Parkinson-White syndrome. This prospective, observational study enrolled 40 children (58% male, median age 13 years, median weight 47.5 kg) with Wolff-Parkinson-White syndrome. Conduction through the AP at a cycle length of ≤250 ms was considered rapid (high-risk); otherwise, patients were classified as nonrapid (low-risk). The sudden disappearance of the delta wave was seen in 10 cases (25%) during EST. The accessory pathway was found to be high-risk in 13 cases (13/40, 32.5%), while it was identified as low-risk in 27 cases; six patients (15%) had blocked AP conduction with adenosine during EPS. Low-risk classification by EST alone to identify patients with nonrapid conduction in baseline EPS had a specificity of 93% and a positive predictive value of 90% (accuracy 54%). Blocked AP conduction with adenosine as a marker of nonrapid baseline AP conduction had a specificity of 93% and a positive predictive value of 84%. Finally, the AP was adenosine-nonresponsive in the majority of patients with persistent delta waves (28/30, 93%), whereas 40% of those who had a sudden disappearance of delta waves had an adenosine-responsive AP (P = .028). Abrupt loss of preexcitation during EST and blocked AP conduction with adenosine had high specificity and positive predictive value for nonrapid, low-risk antegrade conduction during baseline invasive EPS. Successful risk stratification of pediatric patients with Wolff-Parkinson-White syndrome is possible through the use of EST and the adenosine responsiveness of the AP. © 2015 Wiley Periodicals, Inc.
Ferrer, Rebecca A; Klein, William M P; Persoskie, Alexander; Avishai-Yitshak, Aya; Sheeran, Paschal
2016-10-01
Although risk perception is a key predictor in health behavior theories, current conceptions of risk comprise only one (deliberative) or two (deliberative vs. affective/experiential) dimensions. This research tested a tripartite model that distinguishes among deliberative, affective, and experiential components of risk perception. In two studies, and in relation to three common diseases (cancer, heart disease, diabetes), we used confirmatory factor analyses to examine the factor structure of the tripartite risk perception (TRIRISK) model and compared the fit of the TRIRISK model to dual-factor and single-factor models. In a third study, we assessed concurrent validity by examining the impact of cancer diagnosis on (a) levels of deliberative, affective, and experiential risk perception, and (b) the strength of relations among risk components, and tested predictive validity by assessing relations with behavioral intentions to prevent cancer. The tripartite factor structure was supported, producing better model fit across diseases (studies 1 and 2). Inter-correlations among the components were significantly smaller among participants who had been diagnosed with cancer, suggesting that affected populations make finer-grained distinctions among risk perceptions (study 3). Moreover, all three risk perception components predicted unique variance in intentions to engage in preventive behavior (study 3). The TRIRISK model offers both a novel conceptualization of health-related risk perceptions, and new measures that enhance predictive validity beyond that engendered by unidimensional and bidimensional models. The present findings have implications for the ways in which risk perceptions are targeted in health behavior change interventions, health communications, and decision aids.
The Role of Social Novelty in Risk Seeking and Exploratory Behavior: Implications for Addictions.
Mitchell, Simon; Gao, Jennifer; Hallett, Mark; Voon, Valerie
2016-01-01
Novelty preference or sensation seeking is associated with disorders of addiction and predicts compulsive drug use in rodents and adolescent binge drinking in humans. Novelty has also been shown to influence choice in the context of uncertainty and reward processing. Here we introduced novel or familiar neutral face stimuli and investigated their influence on risk-taking choices in healthy volunteers. We focused on behavioural outcomes and on imaging correlates of the prime that might predict risk seeking. We hypothesized that subjects would be more risk seeking following a novel relative to a familiar stimulus. We adapted a risk-taking task involving acceptance or rejection of a 50:50 choice of gain or loss that was preceded by a familiar (pre-test familiarization) or novel face prime. Neutral-expression faces of males and females were used as primes. Twenty-four subjects were first tested behaviourally, and 18 were then scanned using a different variant of the same task under functional MRI. We show enhanced risk taking for both gain and loss anticipation following novel relative to familiar images, particularly in the low-gain condition. Greater risk-taking behaviour and self-reported exploratory behaviours were predicted by greater right ventral putaminal activity to novel versus familiar contexts. Social novelty appears to have a contextually enhancing effect, augmenting risky choices, possibly mediated via ventral putaminal dopaminergic activity. Our findings accord with the observation that novelty preference and sensation seeking are important traits predicting the initiation and maintenance of risky behaviours, including substance and behavioural addictions.
Disrupted latent inhibition in individuals at ultra high-risk for developing psychosis.
Kraus, Michael; Rapisarda, Attilio; Lam, Max; Thong, Jamie Y J; Lee, Jimmy; Subramaniam, Mythily; Collinson, Simon L; Chong, Siow Ann; Keefe, Richard S E
2016-12-01
The addition of off-the-shelf cognitive measures to established prodromal criteria has resulted in limited improvement in the prediction of conversion to psychosis. Tests that assess cognitive processes central to schizophrenia might better identify those at highest risk. The latent inhibition paradigm assesses a subject's tendency to ignore irrelevant stimuli, a process integral to healthy perceptual and cognitive function that has been hypothesized to be a key deficit underlying the development of schizophrenia. In this study, 142 young people at ultra high-risk for developing psychosis and 105 controls were tested on a within-subject latent inhibition paradigm. Additionally, we later inquired about the strategy that each subject employed to complete the test, and further investigated the relationship between reported strategy and the extent of latent inhibition exhibited. Unlike controls, ultra high-risk subjects did not demonstrate a significant latent inhibition effect. This difference between groups became greater when controlling for strategy. The lack of latent inhibition effect in our ultra high-risk sample suggests that individuals at ultra high-risk for psychosis are impaired in their allocation of attentional resources based on past predictive value of repeated stimuli. This fundamental deficit in the allocation of attention may contribute to the broader array of cognitive impairments and clinical symptoms displayed by individuals at ultra high-risk for psychosis.
Hengartner, M P; Heekeren, K; Dvorsky, D; Walitza, S; Rössler, W; Theodoridou, A
2017-09-01
The aim of this study was to critically examine the prognostic validity of various clinical high-risk (CHR) criteria alone and in combination with additional clinical characteristics. A total of 188 CHR positive persons from the region of Zurich, Switzerland (mean age 20.5 years; 60.2% male), meeting ultra high-risk (UHR) and/or basic symptoms (BS) criteria, were followed over three years. The test battery included the Structured Interview for Prodromal Syndromes (SIPS), verbal IQ and many other screening tools. Conversion to psychosis was defined according to ICD-10 criteria for schizophrenia (F20) or brief psychotic disorder (F23). Altogether n=24 persons developed manifest psychosis within three years and according to Kaplan-Meier survival analysis, the projected conversion rate was 17.5%. The predictive accuracy of UHR was statistically significant but poor (area under the curve [AUC]=0.65, P<.05), whereas BS did not predict psychosis beyond mere chance (AUC=0.52, P=.730). Sensitivity and specificity were 0.83 and 0.47 for UHR, and 0.96 and 0.09 for BS. UHR plus BS achieved an AUC=0.66, with sensitivity and specificity of 0.75 and 0.56. In comparison, baseline antipsychotic medication yielded a predictive accuracy of AUC=0.62 (sensitivity=0.42; specificity=0.82). A multivariable prediction model comprising continuous measures of positive symptoms and verbal IQ achieved a substantially improved prognostic accuracy (AUC=0.85; sensitivity=0.86; specificity=0.85; positive predictive value=0.54; negative predictive value=0.97). We showed that BS have no predictive accuracy beyond chance, while UHR criteria poorly predict conversion to psychosis. Combining BS with UHR criteria did not improve the predictive accuracy of UHR alone. In contrast, dimensional measures of both positive symptoms and verbal IQ showed excellent prognostic validity. A critical re-thinking of binary at-risk criteria is necessary in order to improve the prognosis of psychotic disorders. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
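Two of the quantities reported above lend themselves to a short illustration: the Kaplan-Meier projected conversion rate and the AUC of a binary at-risk criterion. The sketch below uses simulated data (not the Zurich cohort); prevalences and associations are invented.

```python
# Minimal sketch: Kaplan-Meier projected 3-year conversion rate and AUC of a binary
# criterion (e.g., UHR status). All data are simulated for illustration only.
import numpy as np
from lifelines import KaplanMeierFitter
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 188
converted = rng.random(n) < 0.13                                  # conversion to psychosis (simulated)
time_months = np.where(converted, rng.uniform(1, 36, n), 36.0)    # event time or censoring at 36 months

kmf = KaplanMeierFitter().fit(durations=time_months, event_observed=converted)
print(f"projected 3-year conversion rate: {1 - kmf.predict(36.0):.1%}")

uhr_positive = rng.random(n) < np.where(converted, 0.8, 0.5)      # binary criterion, weakly related
print("AUC of binary criterion:", roc_auc_score(converted.astype(int), uhr_positive.astype(int)))
```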
Arts, E E A; Popa, C D; Den Broeder, A A; Donders, R; Sandoo, A; Toms, T; Rollefstad, S; Ikdahl, E; Semb, A G; Kitas, G D; Van Riel, P L C M; Fransen, J
2016-04-01
Predictive performance of cardiovascular disease (CVD) risk calculators appears suboptimal in rheumatoid arthritis (RA). A disease-specific CVD risk algorithm may improve CVD risk prediction in RA. The objectives of this study were to adapt the Systematic COronary Risk Evaluation (SCORE) algorithm with determinants of CVD risk in RA and to assess the accuracy of CVD risk prediction calculated with the adapted SCORE algorithm. Data from the Nijmegen early RA inception cohort were used. The primary outcome was first CVD events. The SCORE algorithm was recalibrated by reweighting the included traditional CVD risk factors and adapted by adding other potential predictors of CVD. Predictive performance of the recalibrated and adapted SCORE algorithms was assessed, and the adapted SCORE was externally validated. Of the 1016 included patients with RA, 103 patients experienced a CVD event. Discriminatory ability was comparable across the original, recalibrated and adapted SCORE algorithms. The Hosmer-Lemeshow test results indicated that all three algorithms provided poor model fit (p<0.05) for both the Nijmegen and the external validation cohort. The adapted SCORE algorithm mainly improves CVD risk estimation in non-event cases and does not show a clear advantage in reclassifying patients with RA who develop CVD (event cases) into more appropriate risk groups. This study demonstrates for the first time that adaptations of the SCORE algorithm do not provide sufficient improvement in risk prediction of future CVD in RA to serve as an appropriate alternative to the original SCORE. Risk assessment using the original SCORE algorithm may underestimate CVD risk in patients with RA. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
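The calibration check referred to above can be illustrated with a hand-rolled Hosmer-Lemeshow test over deciles of predicted risk. The sketch below uses simulated, deliberately miscalibrated predictions; it is not the SCORE algorithm itself.

```python
# Minimal sketch of a Hosmer-Lemeshow goodness-of-fit test: compare observed and expected
# event counts within deciles of predicted risk and refer the statistic to a chi-square
# distribution. Data are simulated for illustration.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    """Chi-square statistic and p-value over risk deciles."""
    order = np.argsort(y_prob)
    stat = 0.0
    for g in np.array_split(order, n_groups):
        obs, exp, n = y_true[g].sum(), y_prob[g].sum(), len(g)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, df=n_groups - 2)

rng = np.random.default_rng(2)
risk = rng.uniform(0.01, 0.4, 1016)                 # predicted 10-year CVD risk (simulated)
events = (rng.random(1016) < risk * 0.8).astype(int)  # outcomes deliberately miscalibrated
print(hosmer_lemeshow(events, risk))
```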
James, Katherine M; Cowl, Clayton T; Tilburt, Jon C; Sinicrope, Pamela S; Robinson, Marguerite E; Frimannsdottir, Katrin R; Tiedje, Kristina; Koenig, Barbara A
2011-10-01
To assess the impact of direct-to-consumer (DTC) predictive genomic risk information on perceived risk and worry in the context of routine clinical care. Patients attending a preventive medicine clinic between June 1 and December 18, 2009, were randomly assigned to receive either genomic risk information from a DTC product plus usual care (n=74) or usual care alone (n=76). At intervals of 1 week and 1 year after their clinic visit, participants completed surveys containing validated measures of risk perception and levels of worry associated with the 12 conditions assessed by the DTC product. Of 345 patients approached, 150 (43%) agreed to participate, 64 (19%) refused, and 131 (38%) did not respond. Compared with those receiving usual care, participants who received genomic risk information initially rated their risk as higher for 4 conditions (abdominal aneurysm [P=.001], Graves disease [P=.04], obesity [P=.01], and osteoarthritis [P=.04]) and lower for one (prostate cancer [P=.02]). Although differences were not significant, they also reported higher levels of worry for 7 conditions and lower levels for 5 others. At 1 year, there were no significant differences between groups. Predictive genomic risk information modestly influences risk perception and worry. The extent and direction of this influence may depend on the condition being tested and its baseline prominence in preventive health care and may attenuate with time.
Ho, Gloria Y F; Einstein, Mark H; Romney, Seymour L; Kadish, Anna S; Abadi, Maria; Mikhail, Magdy; Basu, Jayasri; Thysen, Benjamin; Reimers, Laura; Palan, Prabhudas R; Trim, Shelly; Soroudi, Nafisseh; Burk, Robert D
2011-10-01
This study examines risk factors for persistent cervical intraepithelial neoplasia (CIN) and examines whether human papillomavirus (HPV) testing predicts persistent lesions. Women with histologically diagnosed CIN 1 or CIN 2 (n = 206) were followed up every 3 months without treatment. Human papillomavirus genotyping, plasma levels of ascorbic acid, and red blood cell folate levels were obtained. Cervical biopsy at 12 months determined the presence of CIN. Relative risk (RR) was estimated by log-linked binomial regression models. At 12 months, 70% of CIN 1 versus 54% of CIN 2 lesions spontaneously regressed (p < .001). Levels of folate or ascorbic acid were not associated with persistent CIN at 12 months. Compared with HPV-negative women, those with multiple HPV types (RRs ranged from 1.68 to 2.17 at each follow-up visit) or high-risk types (RR range = 1.74-2.09) were at increased risk for persistent CIN; women with HPV-16/18 had the highest risk (RR range = 1.91-2.21). Persistent infection with a high-risk type was also associated with persistent CIN (RR range = 1.50-2.35). Typing for high-risk HPVs at 6 months had a sensitivity of only 46% in predicting persistence of any lesions at 12 months. Spontaneous regression of CIN 1 and 2 occurs frequently within 12 months. Human papillomavirus infection is the major risk factor for persistent CIN. However, HPV testing cannot reliably predict persistence of any lesion.
Hommen, Udo; Schmitt, Walter; Heine, Simon; Brock, Theo Cm; Duquesne, Sabine; Manson, Phil; Meregalli, Giovanna; Ochoa-Acuña, Hugo; van Vliet, Peter; Arts, Gertie
2016-01-01
This case study of the Society of Environmental Toxicology and Chemistry (SETAC) workshop MODELINK demonstrates the potential use of mechanistic effects models for macrophytes to extrapolate from effects of a plant protection product observed in laboratory tests to effects resulting from dynamic exposure on macrophyte populations in edge-of-field water bodies. A standard European Union (EU) risk assessment for an example herbicide based on macrophyte laboratory tests indicated risks for several exposure scenarios. Three of these scenarios are further analyzed using effect models for 2 aquatic macrophytes, the free-floating standard test species Lemna sp., and the sediment-rooted submerged additional standard test species Myriophyllum spicatum. Both models include a toxicokinetic (TK) part, describing uptake and elimination of the toxicant, a toxicodynamic (TD) part, describing the internal concentration-response function for growth inhibition, and a description of biomass growth as a function of environmental factors to allow simulating seasonal dynamics. The TK-TD models are calibrated and tested using laboratory tests, whereas the growth models were assumed to be fit for purpose based on comparisons of predictions with typical growth patterns observed in the field. For the risk assessment, biomass dynamics are predicted for the control situation and for several exposure levels. Based on specific protection goals for macrophytes, preliminary example decision criteria are suggested for evaluating the model outputs. The models refined the risk indicated by lower tier testing for 2 exposure scenarios, while confirming the risk associated for the third. Uncertainties related to the experimental and the modeling approaches and their application in the risk assessment are discussed. Based on this case study and the assumption that the models prove suitable for risk assessment once fully evaluated, we recommend that 1) ecological scenarios be developed that are also linked to the exposure scenarios, and 2) quantitative protection goals be set to facilitate the interpretation of model results for risk assessment. © 2015 SETAC.
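A minimal sketch of the model structure described above may help: one-compartment toxicokinetics (uptake and elimination) coupled to logistic biomass growth whose rate is reduced by a log-logistic concentration-response function. All parameter values and the exposure pulse are illustrative assumptions, not the calibrated Lemna or Myriophyllum models of the case study.

```python
# Minimal TK-TD sketch: pulsed external exposure drives an internal concentration, which in
# turn inhibits macrophyte biomass growth. Parameters are invented placeholders.
import numpy as np
from scipy.integrate import odeint

k_up, k_el = 0.5, 0.2        # uptake / elimination rate constants (1/day)
r_max, K = 0.25, 100.0       # maximum relative growth rate (1/day), carrying capacity (g)
EC50, slope = 5.0, 2.0       # internal concentration giving 50% growth inhibition, Hill slope

def tktd(y, t, c_ext_pulse):
    c_int, biomass = y
    c_ext = c_ext_pulse if t < 7.0 else 0.0                  # 7-day exposure pulse, then clean water
    dc_int = k_up * c_ext - k_el * c_int                     # toxicokinetics
    inhibition = 1.0 / (1.0 + (c_int / EC50) ** slope)       # toxicodynamics: growth inhibition
    dbiomass = r_max * inhibition * biomass * (1.0 - biomass / K)
    return [dc_int, dbiomass]

t = np.linspace(0.0, 60.0, 601)
control = odeint(tktd, [0.0, 1.0], t, args=(0.0,))
exposed = odeint(tktd, [0.0, 1.0], t, args=(10.0,))
print("biomass after 60 days, control vs exposed:", control[-1, 1], exposed[-1, 1])
```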
Luo, Wei; Tran, Truyen; Berk, Michael; Venkatesh, Svetha
2016-01-01
Background: Although physical illnesses, routinely documented in electronic medical records (EMR), have been found to be a contributing factor to suicides, no automated systems use this information to predict suicide risk. Objective: The aim of this study is to quantify the impact of physical illnesses on suicide risk, and to develop a predictive model that captures this relationship using EMR data. Methods: We used history of physical illnesses (except chapter V: Mental and behavioral disorders) from EMR data over different time-periods to build a lookup table that contains the probability of suicide risk for each chapter of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) codes. The lookup table was then used to predict the probability of suicide risk for any new assessment. Based on the different lengths of history of physical illnesses, we developed six different models to predict suicide risk. We tested the performance of the developed models to predict 90-day risk using historical data over differing time-periods ranging from 3 to 48 months. A total of 16,858 assessments from 7399 mental health patients with at least one risk assessment were used for the validation of the developed model. Performance was measured using the area under the receiver operating characteristic curve (AUC). Results: The best predictive results were derived (AUC=0.71) using combined data across all time-periods, which significantly outperformed the clinical baseline derived from routine risk assessment (AUC=0.56). The proposed approach thus shows potential to be incorporated in the broader risk assessment processes used by clinicians. Conclusions: This study provides a novel approach that exploits the history of physical illnesses extracted from EMR (ICD-10 codes without chapter V, mental and behavioral disorders) to predict suicide risk, and this model outperforms existing clinical assessments of suicide risk. PMID:27400764
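The lookup-table idea can be sketched in a few lines. The toy data, chapter labels, and the rule for combining chapter-level probabilities below are illustrative assumptions, not the authors' actual scoring rule.

```python
# Minimal sketch: estimate P(90-day risk outcome | ICD-10 chapter present) from historical
# assessments, then score a new assessment from the chapters in its history.
import numpy as np

history = [                    # (ICD-10 chapters in prior EMR history, 90-day outcome)
    ({"IX", "XI"}, 0), ({"VI"}, 1), ({"IX"}, 0), ({"VI", "XIX"}, 1),
    ({"XI"}, 0), ({"XIX"}, 1), ({"IX", "VI"}, 1), ({"XI", "IX"}, 0),
]
base_rate = np.mean([outcome for _, outcome in history])

# Lookup table: P(outcome | chapter present); chapter V is excluded by design.
chapters = sorted({c for chaps, _ in history for c in chaps})
lookup = {c: np.mean([outcome for chaps, outcome in history if c in chaps]) for c in chapters}

def predict_risk(chapter_set):
    """Score a new assessment as the mean of its chapters' probabilities (one simple choice)."""
    probs = [lookup.get(c, base_rate) for c in chapter_set]
    return float(np.mean(probs)) if probs else float(base_rate)

print(lookup)
print("predicted risk for a new assessment with chapters {VI, XI}:", predict_risk({"VI", "XI"}))
```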
ERIC Educational Resources Information Center
Serry, Tanya Anne; Castles, Anne; Mensah, Fiona K.; Bavin, Edith L.; Eadie, Patricia; Pezic, Angela; Prior, Margot; Bretherton, Lesley; Reilly, Sheena
2015-01-01
The paper reports on a study designed to develop a risk model, based on measures taken when children were aged 4 and 5, that best predicts single-word spelling at age seven. Test measures, personal characteristics, and environmental influences were all considered as variables in a community sample of 971 children. Strong concurrent correlations were found…
Predicting high-risk preterm birth using artificial neural networks.
Catley, Christina; Frize, Monique; Walker, C Robin; Petriu, Dorina C
2006-07-01
A reengineered approach to the early prediction of preterm birth is presented as a complementary technique to the current procedure of using costly and invasive clinical testing on high-risk maternal populations. Artificial neural networks (ANNs) are employed as a screening tool for preterm birth in a heterogeneous maternal population; risk estimations use obstetrical variables available to physicians before 23 weeks' gestation. The objective was to assess whether ANNs have a potential use in obstetrical outcome estimation in low-risk maternal populations. The back-propagation feedforward ANN was trained and tested on cases with eight input variables describing the patient's obstetrical history; the output variables were: 1) preterm birth; 2) high-risk preterm birth; and 3) a refined high-risk preterm birth outcome excluding all cases where resuscitation was delivered in the form of free-flow oxygen. Artificial training sets were created to increase the distribution of the underrepresented class to 20%. Training on the refined high-risk preterm birth model increased the network's sensitivity to 54.8%, compared with just over 20% for the non-artificially distributed preterm birth model.
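The two ingredients described above, oversampling the under-represented outcome to roughly 20% and training a feed-forward back-propagation network on eight inputs, can be sketched as follows. Data, network size, and the sampling details are illustrative, not the study's actual variables or architecture.

```python
# Minimal sketch: oversample positives to ~20% of the training set, then train a small
# feed-forward network and report sensitivity on held-out data. Data are simulated.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 8))                       # eight obstetrical history variables (simulated)
y = (rng.random(5000) < 0.05).astype(int)            # rare outcome (~5% prevalence)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Duplicate positives until they make up ~20% of the training set.
pos = np.flatnonzero(y_tr == 1)
n_extra = int((0.20 * len(y_tr) - len(pos)) / 0.80)
extra = rng.choice(pos, size=max(n_extra, 0), replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0).fit(X_bal, y_bal)
print("sensitivity on held-out data:", recall_score(y_te, clf.predict(X_te)))
```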
Recent ecological responses to climate change support predictions of high extinction risk
Maclean, Ilya M. D.; Wilson, Robert J.
2011-01-01
Predicted effects of climate change include high extinction risk for many species, but confidence in these predictions is undermined by a perceived lack of empirical support. Many studies have now documented ecological responses to recent climate change, providing the opportunity to test whether the magnitude and nature of recent responses match predictions. Here, we perform a global and multitaxon metaanalysis to show that empirical evidence for the realized effects of climate change supports predictions of future extinction risk. We use International Union for Conservation of Nature (IUCN) Red List criteria as a common scale to estimate extinction risks from a wide range of climate impacts, ecological responses, and methods of analysis, and we compare predictions with observations. Mean extinction probability across studies making predictions of the future effects of climate change was 7% by 2100 compared with 15% based on observed responses. After taking account of possible bias in the type of climate change impact analyzed and the parts of the world and taxa studied, there was less discrepancy between the two approaches: predictions suggested a mean extinction probability of 10% across taxa and regions, whereas empirical evidence gave a mean probability of 14%. As well as mean overall extinction probability, observations also supported predictions in terms of variability in extinction risk and the relative risk associated with broad taxonomic groups and geographic regions. These results suggest that predictions are robust to methodological assumptions and provide strong empirical support for the assertion that anthropogenic climate change is now a major threat to global biodiversity. PMID:21746924
Atashi, Alireza; Amini, Shahram; Tashnizi, Mohammad Abbasi; Moeinipour, Ali Asghar; Aazami, Mathias Hossain; Tohidnezhad, Fariba; Ghasemi, Erfan; Eslami, Saeid
2018-01-01
Introduction: The European System for Cardiac Operative Risk Evaluation II (EuroSCORE II) is a prediction model which maps 18 predictors to a 30-day post-operative risk of death, concentrating on accurate stratification of candidate patients for cardiac surgery. Objective: The objective of this study was to determine the performance of EuroSCORE II risk predictions among patients who underwent heart surgery in one region of Iran. Methods: A retrospective cohort study was conducted to collect the required variables for all consecutive patients who underwent heart surgery at Emam Reza hospital, Northeast Iran, between 2014 and 2015. Univariate and multivariate analyses were performed to identify covariates which significantly contribute to higher EuroSCORE II in our population. External validation was performed by comparing observed and expected mortality, using the area under the receiver operating characteristic curve (AUC) for discrimination assessment. The Brier score and the Hosmer-Lemeshow goodness-of-fit test were used to assess overall performance and calibration, respectively. Results: Two thousand five hundred and eighty-one patients (59.6% male) were included. The observed mortality rate was 3.3%, whereas EuroSCORE II predicted 4.7%. Although overall performance was acceptable (Brier score=0.047), the model showed poor discriminatory power (AUC=0.667; sensitivity=61.90, specificity=66.24) and poor calibration (Hosmer-Lemeshow test, P<0.01). Conclusion: Our study showed that the discriminatory power of EuroSCORE II is less than optimal for outcome prediction and less accurate for resource allocation programs. It highlights the need for recalibration of this risk stratification tool, aiming to improve post-cardiac surgery outcome predictions in Iran. PMID:29617500
Baker, Stuart G; Schuit, Ewoud; Steyerberg, Ewout W; Pencina, Michael J; Vickers, Andrew; Moons, Karel G M; Mol, Ben W J; Lindeman, Karen S
2014-09-28
An important question in the evaluation of an additional risk prediction marker is how to interpret a small increase in the area under the receiver operating characteristic curve (AUC). Many researchers believe that a change in AUC is a poor metric because it increases only slightly with the addition of a marker with a large odds ratio. Because it is not possible on purely statistical grounds to choose between the odds ratio and AUC, we invoke decision analysis, which incorporates costs and benefits. For example, a timely estimate of the risk of later non-elective operative delivery can help a woman in labor decide if she wants an early elective cesarean section to avoid greater complications from possible later non-elective operative delivery. A basic risk prediction model for later non-elective operative delivery involves only antepartum markers. Because adding intrapartum markers to this risk prediction model increases AUC by 0.02, we questioned whether this small improvement is worthwhile. A key decision-analytic quantity is the risk threshold, here the risk of later non-elective operative delivery at which a patient would be indifferent between an early elective cesarean section and usual care. For a range of risk thresholds, we found that an increase in the net benefit of risk prediction requires collecting intrapartum marker data on 68 to 124 women for every correct prediction of later non-elective operative delivery. Because data collection is non-invasive, this test tradeoff of 68 to 124 is clinically acceptable, indicating the value of adding intrapartum markers to the risk prediction model. Copyright © 2014 John Wiley & Sons, Ltd.
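The decision-analytic quantities discussed above can be illustrated directly: the net benefit of a risk model at a chosen risk threshold, and the "test tradeoff" (number of women on whom the extra intrapartum markers must be measured per additional true prediction). The sketch below uses simulated risks and outcomes, not the obstetric data.

```python
# Minimal sketch: net benefit at a risk threshold t and the test tradeoff 1/(increase in
# net benefit) for adding a marker. All numbers are simulated illustrations.
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit = TP/n - (t / (1 - t)) * FP/n at risk threshold t."""
    y_true = np.asarray(y_true)
    pred_pos = np.asarray(y_prob) >= threshold
    n = len(y_true)
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    return tp / n - (threshold / (1 - threshold)) * fp / n

rng = np.random.default_rng(4)
y = (rng.random(5000) < 0.15).astype(int)                            # later operative delivery
basic = np.clip(0.15 + 0.08 * (y - 0.15) + rng.normal(0, 0.08, 5000), 0.01, 0.99)
enhanced = np.clip(basic + 0.05 * (y - 0.15) + rng.normal(0, 0.03, 5000), 0.01, 0.99)

t = 0.20                                                             # illustrative risk threshold
delta_nb = net_benefit(y, enhanced, t) - net_benefit(y, basic, t)
print("increase in net benefit:", delta_nb)
if delta_nb > 0:
    print("test tradeoff (markers measured per extra true prediction):", round(1 / delta_nb))
```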
Fazel, Seena; Chang, Zheng; Fanshawe, Thomas; Långström, Niklas; Lichtenstein, Paul; Larsson, Henrik; Mallett, Susan
2016-06-01
More than 30 million people are released from prison worldwide every year, who include a group at high risk of perpetrating interpersonal violence. Because there is considerable inconsistency and inefficiency in identifying those who would benefit from interventions to reduce this risk, we developed and validated a clinical prediction rule to determine the risk of violent offending in released prisoners. We did a cohort study of a population of released prisoners in Sweden. Through linkage of population-based registers, we developed predictive models for violent reoffending for the cohort. First, we developed a derivation model to determine the strength of prespecified, routinely obtained criminal history, sociodemographic, and clinical risk factors using multivariable Cox proportional hazard regression, and then tested them in an external validation. We measured discrimination and calibration for prediction of our primary outcome of violent reoffending at 1 and 2 years using cutoffs of 10% for 1-year risk and 20% for 2-year risk. We identified a cohort of 47 326 prisoners released in Sweden between 2001 and 2009, with 11 263 incidents of violent reoffending during this period. We developed a 14-item derivation model to predict violent reoffending and tested it in an external validation (assigning 37 100 individuals to the derivation sample and 10 226 to the validation sample). The model showed good measures of discrimination (Harrell's c-index 0·74) and calibration. For risk of violent reoffending at 1 year, sensitivity was 76% (95% CI 73-79) and specificity was 61% (95% CI 60-62). Positive and negative predictive values were 21% (95% CI 19-22) and 95% (95% CI 94-96), respectively. At 2 years, sensitivity was 67% (95% CI 64-69) and specificity was 70% (95% CI 69-72). Positive and negative predictive values were 37% (95% CI 35-39) and 89% (95% CI 88-90), respectively. Of individuals with a predicted risk of violent reoffending of 50% or more, 88% had drug and alcohol use disorders. We used the model to generate a simple, web-based, risk calculator (OxRec) that is free to use. We have developed a prediction model in a Swedish prison population that can assist with decision making on release by identifying those who are at low risk of future violent offending, and those at high risk of violent reoffending who might benefit from drug and alcohol treatment. Further assessments in other populations and countries are needed. Wellcome Trust, the Swedish Research Council, and the Swedish Research Council for Health, Working Life and Welfare. Copyright © 2016 Fazel et al. Open Access article distributed under the terms of CC BY. Published by Elsevier Ltd.. All rights reserved.
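The core modelling step described above, a multivariable Cox proportional hazards model summarised by Harrell's c-index, can be sketched as follows. The data frame and predictors are simulated stand-ins, not the Swedish register variables or the OxRec model.

```python
# Minimal sketch: fit a Cox model to time-to-violent-reoffending data, report the c-index,
# and derive predicted 1-year risks. Covariates and hazards are invented.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 5000
df = pd.DataFrame({
    "age_at_release": rng.integers(18, 65, n),
    "previous_violent_crime": rng.integers(0, 2, n),
    "drug_use_disorder": rng.integers(0, 2, n),
})
hazard = 0.05 * np.exp(0.8 * df["previous_violent_crime"] + 0.5 * df["drug_use_disorder"]
                       - 0.02 * (df["age_at_release"] - 18))
time_to_event = rng.exponential(1 / hazard)
df["duration"] = np.minimum(time_to_event, 2.0)           # administrative censoring at 2 years
df["reoffended"] = (time_to_event <= 2.0).astype(int)

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="reoffended")
print("Harrell's c-index:", cph.concordance_index_)
print(1 - cph.predict_survival_function(df.iloc[:5], times=[1.0]))   # predicted 1-year risks
```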
Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.
Schoene, Daniel; Delbaere, Kim; Lord, Stephen R
2017-08-01
Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Schaefer, Jonathan D; Scult, Matthew A; Caspi, Avshalom; Arseneault, Louise; Belsky, Daniel W; Hariri, Ahmad R; Harrington, Honalee; Houts, Renate; Ramrakha, Sandhya; Poulton, Richie; Moffitt, Terrie E
2017-11-16
Cognitive impairment has been identified as an important aspect of major depressive disorder (MDD). We tested two theories regarding the association between MDD and cognitive functioning using data from longitudinal cohort studies. One theory, the cognitive reserve hypothesis, suggests that higher cognitive ability in childhood decreases risk of later MDD. The second, the scarring hypothesis, instead suggests that MDD leads to persistent cognitive deficits following disorder onset. We tested both theories in the Dunedin Study, a population-representative cohort followed from birth to midlife and assessed repeatedly for both cognitive functioning and psychopathology. We also used data from the Environmental Risk Longitudinal Twin Study to test whether childhood cognitive functioning predicts future MDD risk independent of family-wide and genetic risk using a discordant twin design. Contrary to both hypotheses, we found that childhood cognitive functioning did not predict future risk of MDD, nor did study members with a past history of MDD show evidence of greater cognitive decline unless MDD was accompanied by other comorbid psychiatric conditions. Our results thus suggest that low cognitive functioning is related to comorbidity, but is neither an antecedent nor an enduring consequence of MDD. Future research may benefit from considering cognitive deficits that occur during depressive episodes from a transdiagnostic perspective.
Discrimination of health risk by combined body mass index and waist circumference.
Ardern, Christopher I; Katzmarzyk, Peter T; Janssen, Ian; Ross, Robert
2003-01-01
NIH Clinical Guidelines (1998) recommend the measurement of waist circumference (WC, centimeters) within body mass index (BMI, kilograms per square meter) categories as a screening tool for increased health risk. The Canada Heart Health Surveys (1986 through 1992) were used to describe the prevalence of the metabolic syndrome in Canada and to test the use of the NIH guidelines for predicting metabolic risk factors. The sample included 7981 participants ages 20 to 74 years who had complete data for WC, BMI, high-density lipoprotein-cholesterol, triglycerides, diabetic status, and systolic and diastolic blood pressures. National Cholesterol Education Program Adult Treatment Panel III risk categories were used to identify the metabolic syndrome and associated risk factors. Logistic regression was used to test the hypothesis that WC improves the prediction of the metabolic syndrome, within overweight (25 to 29.9 kg/m²) and obese I (30 to 34.9 kg/m²) BMI categories. The prevalence of the metabolic syndrome was 17.0% in men and 13.2% in women. The odds ratios (OR) for the prediction of the metabolic syndrome were elevated in overweight [OR, 1.85; 95% confidence interval (95%CI), 1.02 to 3.35] and obese (OR, 2.35; 95%CI, 1.25 to 4.42) women with a high WC compared with overweight and obese women with a low WC, respectively. On the other hand, WC was not predictive of the metabolic syndrome or component risk factors in men, within BMI categories. In women already at increased health risk because of an elevated BMI, the additional measurement of WC may help identify cardiovascular risk.
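The within-category logistic regression described above can be illustrated briefly. The sketch below fits a logistic model of metabolic syndrome on a binary high-WC indicator within one BMI category and reports the odds ratio with its 95% CI; the data and effect size are simulated, not the survey data.

```python
# Minimal sketch: odds ratio for high waist circumference within a BMI category,
# estimated by logistic regression on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1200                                           # e.g., overweight women
high_wc = rng.integers(0, 2, n)                    # 1 = waist circumference above cut-point
logit_p = -2.0 + 0.6 * high_wc                     # true log-odds (invented)
met_syn = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(high_wc)
fit = sm.Logit(met_syn, X).fit(disp=0)
or_est = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR for high WC: {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```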
Adegboyega, Titilayo O; Borgert, Andrew J; Lambert, Pamela J; Jarman, Benjamin T
2017-01-01
Discussing potential morbidity and mortality is essential to informed decision-making and consent. The American College of Surgery National Surgical Quality Improvement Program developed an online risk calculator (RC) using patient-specific information to determine operative risk. Colorectal procedures at our independent academic medical center from 2010 to 2011 were evaluated. The RC's predicted outcomes were compared with observed outcomes. Statistical analysis included Brier score, Wilcoxon sign rank test, and standardized event ratio. There were 324 patients included. The RC's Brier score was .24 (.015-.219) for predicting mortality and morbidity, respectively. The observed event rate for surgical site infection and any complication was higher than the RC predicted (standardized event ratio 1.9 CI [1.49 to 2.39] and 1.39 CI [1.14 to 1.68], respectively). The observed length of stay was longer than predicted (5.6 vs 6.6 days, P < .001). The RC underestimated the surgical site infection and overall complication rates. The RC is a valuable tool in predicting risk for adverse outcomes; however, institution-specific trends may influence actual risk. Surgeons and institutions must recognize areas where they are outliers from estimated risks and tailor risk discussions accordingly. Copyright © 2016 Elsevier Inc. All rights reserved.
Urinary lithogenesis risk tests: comparison of a commercial kit and a laboratory prototype test.
Grases, Félix; Costa-Bauzá, Antonia; Prieto, Rafel M; Arrabal, Miguel; De Haro, Tomás; Lancina, Juan A; Barbuzano, Carmen; Colom, Sergi; Riera, Joaquín; Perelló, Joan; Isern, Bernat; Sanchis, Pilar; Conte, Antonio; Barragan, Fernando; Gomila, Isabel
2011-11-01
Renal stone formation is a multifactorial process depending in part on urine composition; other parameters relate to structural or pathological features of the kidney. To date, routine laboratory estimation of urolithiasis risk has been based on determination of urinary composition. This process requires collection of at least two 24-h urine samples, which is tedious for patients. The most important feature of urinary lithogenic risk is the balance between various urinary parameters, although unknown factors may also be involved. The objective of this study was to compare data obtained using a commercial kit with those of a laboratory prototype, using a multicentre approach, to validate the utility of these methods in routine clinical practice. A simple new commercial test (NefroPlus®; Sarstedt AG & Co., Nümbrecht, Germany), which evaluates the capacity of urine to crystallize calcium salts and thus permits detection of patients at risk for stone development, was compared with a prototype test previously described by this group. Overnight urine samples from 64 volunteers were used in these comparisons. The commercial test was also used to evaluate urine samples of 83 subjects in one of three hospitals. The two methods were in almost complete agreement (98%) with respect to test results. The multicentre data were: sensitivity 94.7%; specificity 76.9%; positive predictive value (lithogenic urine) 90.0%; negative predictive value (non-lithogenic urine) 87.0%; test efficacy 89.2%. The new commercial NefroPlus test offers fast and cheap evaluation of the overall risk of development of urinary calcium-containing calculi.
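The metrics quoted above all derive from a 2x2 table of test result versus lithogenic status; a compact sketch is given below. The counts are invented, not the multicentre data.

```python
# Minimal sketch: sensitivity, specificity, PPV, NPV, and overall efficacy (accuracy)
# from a 2x2 diagnostic table. Counts are illustrative only.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "efficacy": (tp + tn) / (tp + fp + fn + tn),   # overall accuracy
    }

print(diagnostic_metrics(tp=36, fp=4, fn=2, tn=20))
```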
Rosellini, Anthony J.; Monahan, John; Street, Amy E.; Hill, Eric D.; Petukhova, Maria; Reis, Ben Y.; Sampson, Nancy A.; Benedek, David M.; Bliese, Paul; Stein, Murray B.; Ursano, Robert J.; Kessler, Ronald C.
2016-01-01
Growing concerns exist about violent crimes perpetrated by U.S. military personnel. Although interventions exist to reduce violent crimes in high-risk populations, optimal implementation requires evidence-based targeting. The goal of the current study was to use machine learning methods (stepwise and penalized regression; random forests) to develop models to predict minor violent crime perpetration among U.S. Army soldiers. Predictors were abstracted from administrative data available for all 975,057 soldiers in the U.S. Army 2004–2009, among whom 25,966 men and 2,728 women committed a first founded minor violent crime (simple assault, blackmail-extortion-intimidation, rioting, harassment). Temporally prior administrative records measuring socio-demographic, Army career, criminal justice, medical/pharmacy, and contextual variables were used to build separate male and female prediction models that were then tested in an independent 2011–2013 sample. Final model predictors included young age, low education, early career stage, prior crime involvement, and outpatient treatment for diverse emotional and substance use problems. Area under the receiver operating characteristic curve was 0.79 (for men and women) in the 2004–2009 training sample and 0.74–0.82 (men-women) in the 2011–2013 test sample. 30.5–28.9% (men-women) of all administratively-recorded crimes in 2004–2009 were committed by the 5% of soldiers having highest predicted risk, with similar proportions (28.5–29.0%) when the 2004–2009 coefficients were applied to the 2011–2013 test sample. These results suggest that it may be possible to target soldiers at high-risk of violence perpetration for preventive interventions, although final decisions about such interventions would require weighing predicted effectiveness against intervention costs and competing risks. PMID:27741501
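The "concentration of risk" summary reported above (the share of all recorded crimes accounted for by the 5% of soldiers with the highest predicted risk) is simple to compute once predicted probabilities are in hand. The sketch below uses simulated risks and outcomes, not the Army administrative data.

```python
# Minimal sketch: fraction of all offences committed by the top 5% of predicted risk.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
predicted_risk = rng.beta(1, 30, n)                       # model-derived probabilities (simulated)
offended = rng.random(n) < predicted_risk                 # outcomes consistent with those risks

cutoff = np.quantile(predicted_risk, 0.95)                # threshold for the top 5%
top5 = predicted_risk >= cutoff
share = offended[top5].sum() / offended.sum()
print(f"share of offences committed by the top 5% highest-risk individuals: {share:.1%}")
```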
Sparse feature selection for classification and prediction of metastasis in endometrial cancer.
Ahsen, Mehmet Eren; Boren, Todd P; Singh, Nitin K; Misganaw, Burook; Mutch, David G; Moore, Kathleen N; Backes, Floor J; McCourt, Carolyn K; Lea, Jayanthi S; Miller, David S; White, Michael A; Vidyasagar, Mathukumalli
2017-03-27
Metastasis via pelvic and/or para-aortic lymph nodes is a major risk factor for endometrial cancer. Lymph-node resection ameliorates risk but is associated with significant co-morbidities. Incidence in patients with stage I disease is 4-22% but no mechanism exists to accurately predict it. Therefore, national guidelines for primary staging surgery include pelvic and para-aortic lymph node dissection for all patients whose tumor exceeds 2cm in diameter. We sought to identify a robust molecular signature that can accurately classify risk of lymph node metastasis in endometrial cancer patients. 86 tumors matched for age and race, and evenly distributed between lymph node-positive and lymph node-negative cases, were selected as a training cohort. Genomic micro-RNA expression was profiled for each sample to serve as the predictive feature matrix. An independent set of 28 tumor samples was collected and similarly characterized to serve as a test cohort. A feature selection algorithm was designed for applications where the number of samples is far smaller than the number of measured features per sample. A predictive miRNA expression signature was developed using this algorithm, which was then used to predict the metastatic status of the independent test cohort. A weighted classifier, using 18 micro-RNAs, achieved 100% accuracy on the training cohort. When applied to the testing cohort, the classifier correctly predicted 90% of node-positive cases, and 80% of node-negative cases (FDR = 6.25%). Results indicate that the evaluation of the quantitative sparse-feature classifier proposed here in clinical trials may lead to significant improvement in the prediction of lymphatic metastases in endometrial cancer patients.
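The p >> n setting described above (far more micro-RNA features than samples) is commonly handled with a sparse penalised classifier. The sketch below uses an ordinary lasso-penalised logistic regression on simulated data as a stand-in for the study's custom feature-selection algorithm; it is not the authors' method.

```python
# Minimal sketch: L1-penalised logistic regression selects a small micro-RNA signature on a
# training cohort and is evaluated on an independent test cohort. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(8)
n_train, n_test, n_mirna = 86, 28, 1000
X_train = rng.normal(size=(n_train, n_mirna))
X_test = rng.normal(size=(n_test, n_mirna))
true_w = np.zeros(n_mirna)
true_w[:18] = 1.0                                          # 18 informative micro-RNAs (invented)
y_train = (X_train @ true_w + rng.normal(0, 1, n_train) > 0).astype(int)
y_test = (X_test @ true_w + rng.normal(0, 1, n_test) > 0).astype(int)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_train, y_train)
print("micro-RNAs selected:", len(np.flatnonzero(clf.coef_[0])))
print("accuracy on independent test cohort:", accuracy_score(y_test, clf.predict(X_test)))
```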
Whitney, Jon; Corredor, German; Janowczyk, Andrew; Ganesan, Shridar; Doyle, Scott; Tomaszewski, John; Feldman, Michael; Gilmore, Hannah; Madabhushi, Anant
2018-05-30
Gene-expression companion diagnostic tests, such as the Oncotype DX test, assess the risk of early-stage estrogen receptor (ER)-positive breast cancers and guide clinicians in the decision of whether or not to use chemotherapy. However, these tests are typically expensive, time consuming, and tissue-destructive. In this paper, we evaluate the ability of computer-extracted nuclear morphology features from routine hematoxylin and eosin (H&E) stained images of 178 early-stage ER+ breast cancer patients to predict the corresponding risk categories derived using the Oncotype DX test. A total of 216 features corresponding to the nuclear shape and architecture categories were extracted from each of the pathologic images, and four feature selection schemes: Ranksum, Principal Component Analysis with Variable Importance on Projection (PCA-VIP), Maximum-Relevance Minimum-Redundancy Mutual Information Difference (MRMR MID), and Maximum-Relevance Minimum-Redundancy Mutual Information Quotient (MRMR MIQ), were employed to identify the most discriminating features. These features were used to train 4 machine learning classifiers: Random Forest, Neural Network, Support Vector Machine, and Linear Discriminant Analysis, via 3-fold cross-validation. The four sets of risk categories, and the top Area Under the receiver operating characteristic Curve (AUC) machine classifier performances, were: 1) Low ODx and Low mBR grade vs. High ODx and High mBR grade (Low-Low vs. High-High) (AUC = 0.83), 2) Low ODx vs. High ODx (AUC = 0.72), 3) Low ODx vs. Intermediate and High ODx (AUC = 0.58), and 4) Low and Intermediate ODx vs. High ODx (AUC = 0.65). Trained models were tested on an independent validation set of 53 cases comprising Low and High ODx risk, and demonstrated per-patient accuracies ranging from 75% to 86%. Our results suggest that computerized image analysis of digitized H&E pathology images of early-stage ER+ breast cancer might be able to predict the corresponding Oncotype DX risk categories.
Simard, Jacques; Dumont, Martine; Moisan, Anne‐Marie; Gaborieau, Valérie; Vézina, Hélène; Durocher, Francine; Chiquette, Jocelyne; Plante, Marie; Avard, Denise; Bessette, Paul; Brousseau, Claire; Dorval, Michel; Godard, Béatrice; Houde, Louis; Joly, Yann; Lajoie, Marie‐Andrée; Leblanc, Gilles; Lépine, Jean; Lespérance, Bernard; Malouin, Hélène; Parboosingh, Jillian; Pichette, Roxane; Provencher, Louise; Rhéaume, Josée; Sinnett, Daniel; Samson, Carolle; Simard, Jean‐Claude; Tranchant, Martine; Voyer, Patricia; BRCAs, INHERIT; Easton, Douglas; Tavtigian, Sean V; Knoppers, Bartha‐Maria; Laframboise, Rachel; Bridge, Peter; Goldgar, David
2007-01-01
Background and objective: In clinical settings with fixed resources allocated to predictive genetic testing for high‐risk cancer predisposition genes, optimal strategies for mutation screening programmes are critically important. These depend on the mutation spectrum found in the population under consideration and the frequency of mutations detected as a function of the personal and family history of cancer, which are both affected by the presence of founder mutations and demographic characteristics of the underlying population. The results of multistep genetic testing for mutations in BRCA1 or BRCA2 in a large series of families with breast cancer in the French‐Canadian population of Quebec, Canada are reported. Methods: A total of 256 high‐risk families were ascertained from regional familial cancer clinics throughout the province of Quebec. Initially, families were tested for a panel of specific mutations known to occur in this population. Families in which no mutation was identified were then comprehensively tested. Three algorithms to predict the presence of mutations were evaluated, including the prevalence tables provided by Myriad Genetics Laboratories, the Manchester Scoring System and a logistic regression approach based on the data from this study. Results: 8 of the 15 distinct mutations found in 62 BRCA1/BRCA2‐positive families had never been previously reported in this population, whereas 82% carried 1 of the 4 mutations currently observed in ⩾2 families. In the subset of 191 families in which at least 1 affected individual was tested, 29% carried a mutation. Of these 27 BRCA1‐positive and 29 BRCA2‐positive families, 48 (86%) were found to harbour a mutation detected by the initial test. Among the remaining 143 inconclusive families, all 8 families found to have a mutation after complete sequencing had Manchester Scores ⩾18. The logistic regression and Manchester Scores provided equal predictive power, and both were significantly better than the Myriad Genetics Laboratories prevalence tables (p<0.001). A threshold of Manchester Score ⩾18 provided an overall sensitivity of 86% and a specificity of 82%, with a positive predictive value of 66% in this population. Conclusion: In this population, a testing strategy with an initial test using a panel of reported recurrent mutations, followed by full sequencing in families with Manchester Scores ⩾18, represents an efficient test in terms of overall cost and sensitivity. PMID:16905680
2011-01-01
Background: Endothelial function has been shown to be a highly sensitive marker for the overall cardiovascular risk of an individual. Furthermore, there is evidence of important sex differences in endothelial function that may underlie the differential presentation of cardiovascular disease (CVD) in women relative to men. As such, measuring endothelial function may have sex-specific prognostic value for the prediction of CVD events, thus improving risk stratification for the overall prediction of CVD in both men and women. The primary objective of this study is to assess the clinical utility of the forearm hyperaemic reactivity (FHR) test (a proxy measure of endothelial function) for the prediction of CVD events in men vs. women using a novel, noninvasive nuclear-medicine-based approach. It is hypothesised that: 1) endothelial dysfunction will be a significant predictor of 5-year CVD events independent of baseline stress test results, clinical, demographic, and psychological variables in both men and women; and 2) endothelial dysfunction will be a better predictor of 5-year CVD events in women compared to men. Methods/Design: A total of 1972 patients (812 men and 1160 women) undergoing dipyridamole stress testing were recruited. Medical history, CVD risk factors, health behaviours, psychological status, and gender identity were assessed via structured interview or self-report questionnaires at baseline. In addition, FHR was assessed, as well as levels of sex hormones via blood draw. Patients will be followed for 5 years to assess major CVD events (cardiac mortality, non-fatal MI, revascularization procedures, and cerebrovascular events). Discussion: This is the first study to determine the extent and nature of any sex differences in the ability of endothelial function to predict CVD events. We believe the results of this study will provide data that will better inform the choice of diagnostic tests in men and women and bring the quality of risk stratification in women on par with that of men. PMID:21831309
Behavioral Risk Assessment From Newborn to Preschool: The Value of Older Siblings.
Rodrigues, Michelle; Binnoon-Erez, Noam; Plamondon, Andre; Jenkins, Jennifer M
2017-08-01
The aim of this study was to examine the plausibility of a risk prediction tool in infancy for school-entry emotional and behavioral problems. Familial aggregation has been operationalized previously as maternal psychopathology. The hypothesis was tested that older sibling (OS) psychopathology, as an indicator of familial aggregation, would enable a fair level of risk prediction compared with previous research, when combined with traditional risk factors. By using a longitudinal design, data on child and family risk factors were collected on 323 infants (M = 2.00 months), all of whom had OSs. Infants were followed up 4.5 years later when both parents provided ratings of emotional and behavioral problems. Multiple regression and receiver operating characteristic curve analyses were conducted for emotional, conduct, and attention problems separately. The emotional and behavioral problems of OSs at infancy were the strongest predictors of the same problems in target children 4.5 years later. Other risk factors, including maternal depression and socioeconomic status, provided extra, but weak, significant prediction. The area under the receiver operating characteristic curve for emotional and conduct problems yielded a fair prediction. This study is the first to offer a fair degree of prediction from risk factors at birth to school-entry emotional and behavioral problems. This degree of prediction was achieved with the inclusion of the emotional and behavioral problems of OSs (thus limiting generalizability to children with OSs). The inclusion of OS psychopathology raises risk prediction to a fair level. Copyright © 2017 by the American Academy of Pediatrics.
Predicting Math Outcomes from a Reading Screening Assessment in Grades 3-8. REL 2016-180
ERIC Educational Resources Information Center
Truckenmiller, Adrea J.; Petscher, Yaacov; Gaughan, Linda; Dwyer, Ted
2016-01-01
District and state education leaders frequently use screening assessments to identify students who are at risk of performing poorly on end-of-year achievement tests. This study examines the use of a universal screening assessment of reading skills for early identification of students at risk of low achievement on nationally normed tests of reading…
Using liver enzymes as screening tests to predict mortality risk.
Fulks, Michael; Stout, Robert L; Dolan, Vera F
2008-01-01
Determine the relationship between liver function test results (GGT, alkaline phosphatase, AST, and ALT) and all-cause mortality in life insurance applicants. By use of the Social Security Master Death File, mortality was examined in 1,905,664 insurance applicants for whom blood samples were submitted to the Clinical Reference Laboratory. There were 50,174 deaths observed in this study population. Results were stratified by 3 age/sex groups: females, age <60; males, age <60; and all, age 60+. Liver function test values were grouped using percentiles of their distribution in these 3 age/sex groups, as well as ranges of actual values. Using the risk of the middle 50% of the population by distribution as a reference, relative mortality observed for GGT and alkaline phosphatase was linear with a steep slope from very low to relatively high values. Relative mortality was increased at lower values for both AST and ALT. ALT did not predict mortality for values above the middle 50% of its distribution. GGT and alkaline phosphatase are significant predictors of mortality risk for all values. ALT is still useful for triggering further testing for hepatitis, but AST should be used instead to assess mortality risk linked with transaminases.
Teyhen, Deydre S; Shaffer, Scott W; Butler, Robert J; Goffar, Stephen L; Kiesel, Kyle B; Rhon, Daniel I; Boyles, Robert E; McMillian, Daniel J; Williamson, Jared N; Plisky, Phillip J
2016-10-01
Performance on movement tests helps to predict injury risk in a variety of physically active populations. Understanding baseline measures of normal performance is an important first step. The aims were to determine differences in physical performance assessments and to describe normative values for these tests based on military unit type. Power, balance, mobility, motor control, and performance on the Army Physical Fitness Test were assessed in a cohort of 1,466 soldiers. Analysis of variance was performed to compare the results based on military unit type (Rangers, Combat, Combat Service, and Combat Service Support) and analysis of covariance was performed to determine the influence of age and gender. Rangers performed the best on all performance and fitness measures (p < 0.05). Combat soldiers performed better than Combat Service and Service Support soldiers on several physical performance tests and the Army Physical Fitness Test (p < 0.05). Performance in Combat Service and Service Support soldiers was equivalent on most measures (p > 0.05). Functional performance and level of fitness varied significantly by military unit type. Understanding these differences will provide a foundation for future injury prediction and prevention strategies. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
Nicolau-Raducu, Ramona; Gitman, Marina; Ganier, Donald; Loss, George E; Cohen, Ari J; Patel, Hamang; Girgrah, Nigel; Sekar, Krish; Nossaman, Bobby
2015-01-01
Current American College of Cardiology/American Heart Association guidelines caution that preoperative noninvasive cardiac tests may have poor predictive value for detecting coronary artery disease in liver transplant candidates. The purpose of our study was to evaluate the role of clinical predictor variables for early and late cardiac morbidity and mortality and the predictive values of noninvasive cardiac tests for perioperative cardiac events in a high-risk liver transplant population. In all, 389 adult recipients were retrospectively analyzed for a median follow-up time of 3.4 years (range = 2.3-4.4 years). Overall survival was 83%. During the first year after transplantation, cardiovascular morbidity and mortality rates were 15.2% and 2.8%. In patients who survived the first year, cardiovascular morbidity and mortality rates were 3.9% and 2%, with cardiovascular etiology as the third leading cause of death. Dobutamine stress echocardiography (DSE) and single-photon emission computed tomography had respective sensitivities of 9% and 57%, specificities of 98% and 75%, positive predictive values of 33% and 28%, and negative predictive values of 89% and 91% for predicting early cardiac events. A rate blood pressure product less than 12,000 with DSE was associated with an increased risk for postoperative atrial fibrillation. Correspondence analysis identified a statistical association between nonalcoholic steatohepatitis/cryptogenic cirrhosis and postoperative myocardial ischemia. Logistic regression identified 3 risk factors for postoperative acute coronary syndrome: age, history of coronary artery disease, and pretransplant requirement for vasopressors. Multivariable analysis showed statistical associations of the Model for End-Stage Liver Disease score and the development of acute kidney injury as risk factors for overall cardiac-related mortality. These findings may help in identifying high-risk patients and may lead to the development of better cardiac tests. © 2014 American Association for the Study of Liver Diseases.
Predictors of Home Radon Testing and Implications for Testing Promotion Programs.
ERIC Educational Resources Information Center
Sandman, Peter M.; Weinstein, Neil D.
1993-01-01
Analysis of 4 New Jersey studies of 3,329 homeowners found that (1) thinking about radon testing is predicted by general radon knowledge; (2) decision to test is related to perceived likelihood of risk; and (3) actual testing is influenced by situational factors such as locating and choosing test kits. (SK)
Wertz, J.; Caspi, A.; Belsky, D. W.; Beckley, A. L.; Arseneault, L.; Barnes, J. C.; Corcoran, D. L.; Hogan, S.; Houts, R. M.; Morgan, N.; Odgers, C. L.; Prinz, J. A.; Sugden, K.; Williams, B. S.; Poulton, R.; Moffitt, T. E.
2018-01-01
Drawing on psychological and sociological theories of crime causation, we tested the hypothesis that genetic risk for low educational attainment (assessed via a genome-wide polygenic score) is associated with offending. We further tested hypotheses of how polygenic risk relates to the development of antisocial behavior from childhood through adulthood. Across the Dunedin and E-Risk birth cohorts of individuals growing up 20 years and 20,000 kilometres apart, education polygenic scores predicted risk of a criminal record, with modest effects. Polygenic risk manifested during primary schooling, in lower cognitive abilities, lower self-control, academic difficulties, and truancy, and predicted a life-course persistent pattern of antisocial behavior that onsets in childhood and persists into adulthood. Crime is central in the nature/nurture debate, and findings reported here demonstrate how molecular-genetic discoveries can be incorporated into established theories of antisocial behavior. They also suggest the hypothesis that improving school experiences might prevent genetic influences on crime from unfolding. PMID:29513605
A decision model to predict the risk of the first fall onset.
Deschamps, Thibault; Le Goff, Camille G; Berrut, Gilles; Cornu, Christophe; Mignardot, Jean-Baptiste
2016-08-01
Miscellaneous features from various domains are accepted to be associated with the risk of falling in the elderly. However, only a few studies have focused on establishing clinical tools to predict the risk of the first fall onset. A model that would objectively and easily evaluate the risk of a first fall occurrence in the coming year still needs to be built. We developed a model based on machine learning, which might help medical staff predict the risk of the first fall onset in a one-year time window. Overall, 426 older adults who had never fallen were assessed on 73 variables, comprising medical, social and physical outcomes, at t0. Each fall was recorded at a prospective 1-year follow-up. A decision tree was built on a randomly selected training subset of the cohort (80% of the full set) and validated on an independent test set. In total, 82 participants experienced a first fall during the follow-up. The machine learning process independently extracted 13 powerful parameters and built a model showing 89% accuracy for the overall classification, with 83%/82% true positive fallers and 96%/61% true negative non-fallers (training set vs. independent test set). This study provides a pilot tool that could easily help gerontologists refine the evaluation of the risk of the first fall onset and prioritize effective prevention strategies. The study also offers a transparent framework for future related investigations that would validate the clinical relevance of the established model by independently testing its accuracy on a larger cohort. Copyright © 2016 Elsevier Inc. All rights reserved.
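As a rough illustration of the modelling pattern described in the record above (a decision tree trained on a random 80% subset and validated on the held-out 20%), the following Python sketch uses simulated placeholder data; the cohort size and number of variables mirror the abstract, but the features, tree depth and resulting figures are assumptions, not the authors' software or results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n, p = 426, 73                                   # cohort size and number of baseline variables
X = rng.normal(size=(n, p))                      # placeholder medical/social/physical variables
risk = X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n)
y = (risk > np.quantile(risk, 1 - 82 / 426)).astype(int)   # roughly 82 simulated "first fallers"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

for name, X_, y_ in [("training", X_tr, y_tr), ("test", X_te, y_te)]:
    tn, fp, fn, tp = confusion_matrix(y_, tree.predict(X_)).ravel()
    print(f"{name}: sensitivity {tp / (tp + fn):.2f}, specificity {tn / (tn + fp):.2f}")
```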
Irvine, Michael A; Konrad, Bernhard P; Michelow, Warren; Balshaw, Robert; Gilbert, Mark; Coombs, Daniel
2018-03-01
Increasing HIV testing rates among high-risk groups should lead to increased numbers of cases being detected. Coupled with effective treatment and behavioural change among individuals with detected infection, increased testing should also reduce onward incidence of HIV in the population. However, it can be difficult to predict the strengths of these effects and thus the overall impact of testing. We construct a mathematical model of an ongoing HIV epidemic in a population of gay, bisexual and other men who have sex with men. The model incorporates different levels of infection risk, testing habits and awareness of HIV status among members of the population. We introduce a novel Bayesian analysis that is able to incorporate potentially unreliable sexual health survey data along with firm clinical diagnosis data. We parameterize the model using survey and diagnostic data drawn from a population of men in Vancouver, Canada. We predict that increasing testing frequency will yield a small-scale but long-term impact on the epidemic in terms of new infections averted, as well as a large short-term impact on numbers of detected cases. These effects are predicted to occur even when a testing intervention is short-lived. We show that a short-lived but intensive testing campaign can potentially produce many of the same benefits as a campaign that is less intensive but of longer duration. © 2018 The Author(s).
Shah, Jai L.; Tandon, Neeraj; Keshavan, Matcheri S.
2016-01-01
Aim: Accurate prediction of which individuals will go on to develop psychosis would assist early intervention and prevention paradigms. We sought to review investigations of prospective psychosis prediction based on markers and variables examined in longitudinal familial high-risk (FHR) studies. Methods: We performed literature searches in MedLine, PubMed and PsycINFO for articles assessing performance characteristics of predictive clinical tests in FHR studies of psychosis. Studies were included if they reported one or more predictive variables in subjects at FHR for psychosis. We complemented this search strategy with references drawn from articles, reviews, book chapters and monographs. Results: Across generations of familial high-risk projects, predictive studies have investigated behavioral, cognitive, psychometric, clinical, neuroimaging, and other markers. Recent analyses have incorporated multivariate and multi-domain approaches to risk ascertainment, although with still generally modest results. Conclusions: While a broad range of risk factors has been identified, no individual marker or combination of markers can at this time enable accurate prospective prediction of emerging psychosis for individuals at FHR. We outline the complex and multi-level nature of psychotic illness, the myriad of factors influencing its development, and methodological hurdles to accurate and reliable prediction. Prospects and challenges for future generations of FHR studies are discussed in the context of early detection and intervention strategies. PMID:23693118
The ACS NSQIP Risk Calculator Is a Fair Predictor of Acute Periprosthetic Joint Infection.
Wingert, Nathaniel C; Gotoff, James; Parrilla, Edgardo; Gotoff, Robert; Hou, Laura; Ghanem, Elie
2016-07-01
Periprosthetic joint infection (PJI) is a severe complication from the patient's perspective and an expensive one in a value-driven healthcare model. Risk stratification can help identify those patients who may have risk factors for complications that can be mitigated in advance of elective surgery. Although numerous surgical risk calculators have been created, their accuracy in predicting outcomes, specifically PJI, has not been tested. (1) How accurate is the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) Surgical Site Infection Calculator in predicting 30-day postoperative infection? (2) How accurate is the calculator in predicting 90-day postoperative infection? We isolated 1536 patients who underwent 1620 primary THAs and TKAs at our institution during 2011 to 2013. Minimum followup was 90 days. The ACS NSQIP Surgical Risk Calculator was assessed in its ability to predict acute PJI within 30 and 90 days postoperatively. Patients who underwent a repeat surgical procedure within 90 days of the index arthroplasty and in whom at least one positive intraoperative culture was obtained at time of reoperation were considered to have PJI. A total of 19 cases of PJI were identified, including 11 at 30 days and an additional eight instances by 90 days postoperatively. Patient-specific risk probabilities for PJI based on demographics and comorbidities were recorded from the ACS NSQIP Surgical Risk Calculator website. The area under the curve (AUC) for receiver operating characteristic (ROC) curves was calculated to determine the predictability of the risk probability for PJI. The AUC is an effective method for quantifying the discriminatory capacity of a diagnostic test to correctly classify patients with and without infection; discrimination is graded as excellent (AUC 0.9-1), good (AUC 0.8-0.89), fair (AUC 0.7-0.79), poor (AUC 0.6-0.69), or fail/no discriminatory capacity (AUC 0.5-0.59). A p value of < 0.05 was considered to be statistically significant. The ACS NSQIP Surgical Risk Calculator showed only fair accuracy in predicting 30-day PJI (AUC: 74.3% [confidence interval {CI}, 59.6%-89.0%]). For 90-day PJI, the risk calculator was also only fair in accuracy (AUC: 71.3% [CI, 59.9%-82.6%]). Conclusions: The ACS NSQIP Surgical Risk Calculator is a fair predictor of acute PJI at the 30- and 90-day intervals after primary THA and TKA. Practitioners should exercise caution in using this tool as a predictive aid for PJI, because it demonstrates only fair value in this application. Existing predictive tools for PJI could potentially be made more robust by incorporating preoperative risk factors and including operative and early postoperative variables. Level III, diagnostic study.
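The central computation in records like the one above is an ROC analysis of calculator-predicted probabilities against observed infections. The sketch below, on simulated placeholder data, shows one common way to obtain an AUC with a bootstrap confidence interval; it is not the study's code, and the event rates are invented.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
risk_prob = rng.uniform(0.005, 0.05, 1620)          # calculator output per case (placeholder)
pji = rng.binomial(1, risk_prob * 2)                # observed infection flags (placeholder)

auc = roc_auc_score(pji, risk_prob)
boot = [roc_auc_score(pji[idx], risk_prob[idx])
        for idx in (rng.integers(0, len(pji), len(pji)) for _ in range(2000))
        if len(set(pji[idx])) == 2]                 # skip resamples containing only one class
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```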
He, Zihuai; Xu, Bin; Lee, Seunggeun; Ionita-Laza, Iuliana
2017-09-07
Substantial progress has been made in the functional annotation of genetic variation in the human genome. Integrative analysis that incorporates such functional annotations into sequencing studies can aid the discovery of disease-associated genetic variants, especially those with unknown function and located outside protein-coding regions. Direct incorporation of one functional annotation as weight in existing dispersion and burden tests can suffer substantial loss of power when the functional annotation is not predictive of the risk status of a variant. Here, we have developed unified tests that can utilize multiple functional annotations simultaneously for integrative association analysis with efficient computational techniques. We show that the proposed tests significantly improve power when variant risk status can be predicted by functional annotations. Importantly, when functional annotations are not predictive of risk status, the proposed tests incur only minimal loss of power in relation to existing dispersion and burden tests, and under certain circumstances they can even have improved power by learning a weight that better approximates the underlying disease model in a data-adaptive manner. The tests can be constructed with summary statistics of existing dispersion and burden tests for sequencing data, therefore allowing meta-analysis of multiple studies without sharing individual-level data. We applied the proposed tests to a meta-analysis of noncoding rare variants in Metabochip data on 12,281 individuals from eight studies for lipid traits. By incorporating the Eigen functional score, we detected significant associations between noncoding rare variants in SLC22A3 and low-density lipoprotein and total cholesterol, associations that are missed by standard dispersion and burden tests. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Construction and Validation of SRA-FV Need Assessment.
Thornton, David; Knight, Raymond A
2015-08-01
This article describes the construction and testing of a newly designed instrument to assess psychological factors associated with increased rates of sexual recidivism. The new instrument (Structured Risk Assessment-Forensic Version or SRA-FV) was based on previous research using the SRA framework. This article describes the results of testing SRA-FV with a large sample (N = 566) of sexual offenders being evaluated for an early civil commitment program. SRA-FV was found to significantly predict sexual recidivism for both child molesters and rapists and to have incremental predictive value relative to two widely used static actuarial instruments (Static-99R; Risk Matrix 2000/S). © The Author(s) 2013.
Toward a clinically useful method of predicting early breast-feeding attrition.
Lewallen, Lynne Porter; Dick, Margaret J; Wall, Yolanda; Zickefoose, Kimberly Taylor; Hannah, Susan Hensley; Flowers, Janet; Powell, Wanda
2006-08-01
The overall purpose of this study was to revise and test an instrument to identify, during the early postpartum period, women at risk for early breast-feeding attrition. This study was completed in two phases: the first phase tested a revision of the Breast-Feeding Attrition Prediction Tool (BAPT); the second tested a new instrument, the Breast-Feeding Attitude Scale (BrAS), which was adapted from the BAPT. The two phases of this study involved 415 pregnant and postpartum women. Women answered questions either by phone (pregnant women) or in their hospital rooms after delivery (postpartum women). Data were analyzed using t tests and reliability analysis. The BAPT did not predict early breast-feeding attrition; however, the BrAS did differentiate between the attitudes of breast-feeding women and those of formula-feeding women and had adequate reliability. Women at risk for early breast-feeding attrition should be identified early so nursing interventions can be directed toward preventing early unintended weaning. Although the BrAS did not reliably identify women at risk in this sample, it did highlight important differences between breast-feeding and formula-feeding women that can be used in designing preconceptional or prenatal educational assessments and interventions.
Bruwer, Zandrè; Algar, Ursula; Vorster, Alvera; Fieggen, Karen; Davidson, Alan; Goldberg, Paul; Wainwright, Helen; Ramesar, Rajkumar
2014-04-01
Biallelic germline mutations in mismatch repair genes predispose to constitutional mismatch repair deficiency syndrome (CMMR-D). The condition is characterized by a broad spectrum of early-onset tumors, including hematological, brain and bowel and is frequently associated with features of Neurofibromatosis type 1. Few definitive screening recommendations have been suggested and no published reports have described predictive testing. We report on the first case of predictive testing for CMMR-D following the identification of two non-consanguineous parents, with the same heterozygous mutation in MLH1: c.1528C > T. The genetic counseling offered to the family, for their two at-risk daughters, is discussed with a focus on the ethical considerations of testing children for known cancer-causing variants. The challenges that are encountered when reporting on heterozygosity in a child younger than 18 years (disclosure of carrier status and risk for Lynch syndrome), when discovered during testing for homozygosity, are addressed. In addition, the identification of CMMR-D in a three year old, and the recommended clinical surveillance that was proposed for this individual is discussed. Despite predictive testing and presymptomatic screening, the sudden death of the child with CMMR-D syndrome occurred 6 months after her last surveillance MRI. This report further highlights the difficulty of developing guidelines, as a result of the rarity of cases and diversity of presentation.
Daniels, Molly S.; Babb, Sheri A.; King, Robin H.; Urbauer, Diana L.; Batte, Brittany A.L.; Brandt, Amanda C.; Amos, Christopher I.; Buchanan, Adam H.; Mutch, David G.; Lu, Karen H.
2014-01-01
Purpose: Identification of the 10% to 15% of patients with ovarian cancer who have germline BRCA1 or BRCA2 mutations is important for management of both patients and relatives. The BRCAPRO model, which estimates mutation likelihood based on personal and family cancer history, can inform genetic testing decisions. This study's purpose was to assess the accuracy of BRCAPRO in women with ovarian cancer. Methods: BRCAPRO scores were calculated for 589 patients with ovarian cancer referred for genetic counseling at three institutions. Observed mutations were compared with those predicted by BRCAPRO. Analysis of variance was used to assess factors impacting BRCAPRO accuracy. Results: One hundred eighty (31%) of 589 patients with ovarian cancer tested positive. At BRCAPRO scores less than 40%, more mutations were observed than expected (93 mutations observed v 34.1 mutations expected; P < .001). If patients with BRCAPRO scores less than 10% had not been tested, 51 (28%) of 180 mutations would have been missed. BRCAPRO underestimated the risk for high-grade serous ovarian cancers but overestimated the risk for other histologies (P < .001), underestimation increased as age at diagnosis decreased (P = .02), and model performance varied by institution (P = .02). Conclusion: Patients with ovarian cancer classified as low risk by BRCAPRO are more likely to test positive than predicted. The risk of a mutation in patients with low BRCAPRO scores is high enough to warrant genetic testing. This study demonstrates that assessment of family history by a validated model cannot effectively target testing to a high-risk ovarian cancer patient population, which strongly supports the recommendation to offer BRCA1/BRCA2 genetic testing to all patients with high-grade serous ovarian cancer regardless of family history. PMID:24638001
Effective use of outcomes data in cardiovascular surgery
NASA Astrophysics Data System (ADS)
Yasnoff, William A.; Page, U. S.
1994-12-01
We have established the Merged Cardiac Registry (MCR) containing over 100,000 cardiovascular surgery cases from 47 sites in the U.S. and Europe. MCR outcomes data are used by the contributors for clinical quality improvement. A tool for prospective prediction of mortality and stroke for coronary artery bypass graft surgery (83% of the cases), known as RiskMaster, has been developed using a Bayesian model based on 40,819 patients who had their surgery from 1988-92, and tested on 4,244 patients from 1993. In patients with mortality risks of 10% or less (92% of cases), the average risk prediction is identical to the actual 30- day mortality (p > 0.37), while risk is overestimated in higher risk patients. The receiver operating characteristic curve area for mortality prediction is 0.76 +/- 0.02. The RiskMaster prediction tool is now available online or as a standalone software package. MCR data also shows that average mortality risk is identical for a given body surface area regardless of gender. Outcomes data measure the benefits of health care, and are therefore an essential element in cost/benefit analysis. We believe their cost is justified by their use for the rational assessment of treatment alternatives.
Mendoza-Vazquez, Manuel; Davidsson, Johan; Brolin, Karin
2015-12-01
There is a need to improve the protection of the thorax of occupants in frontal car crashes. Finite element human body models are a more detailed representation of humans than anthropomorphic test devices (ATDs). On the other hand, there is no clear consensus on the injury criteria and the thresholds to use with finite element human body models to predict rib fractures. The objective of this study was to establish a set of injury risk curves to predict rib fractures using a modified Total HUman Model for Safety (THUMS). Injury criteria at the global, structural and material levels were computed with a modified THUMS in matched Post Mortem Human Subjects (PMHSs) tests. Finally, the quality of each injury risk curve was determined. For the included PMHS tests and the modified THUMS, DcTHOR and shear stress were the criteria at the global and material levels that reached an acceptable quality. The injury risk curves at the structural level did not reach an acceptable quality. Copyright © 2015 Elsevier Ltd. All rights reserved.
Makizako, Hyuma; Shimada, Hiroyuki; Doi, Takehiko; Tsutsumimoto, Kota; Nakakubo, Sho; Hotta, Ryo; Suzuki, Takao
2017-04-01
Lower extremity functioning is important for maintaining activity in elderly people. Optimal cutoff points for standard measurements of lower extremity functioning would help identify elderly people who are not disabled but have a high risk of developing disability. The purposes of this study were: (1) to determine the optimal cutoff points of the Five-Times Sit-to-Stand Test and the Timed "Up & Go" Test for predicting the development of disability and (2) to examine the impact of poor performance on both tests on the prediction of the risk of disability in elderly people dwelling in the community. This was a prospective cohort study. A population of 4,335 elderly people dwelling in the community (mean age = 71.7 years; 51.6% women) participated in baseline assessments. Participants were monitored for 2 years for the development of disability. During the 2-year follow-up period, 161 participants (3.7%) developed disability. The optimal cutoff points of the Five-Times Sit-to-Stand Test and the Timed "Up & Go" Test for predicting the development of disability were greater than or equal to 10 seconds and greater than or equal to 9 seconds, respectively. Participants with poor performance on the Five-Times Sit-to-Stand Test (hazard ratio = 1.88; 95% CI = 1.11-3.20), the Timed "Up & Go" Test (hazard ratio = 2.24; 95% CI = 1.42-3.53), or both tests (hazard ratio = 2.78; 95% CI = 1.78-4.33) at the baseline assessment had a significantly higher risk of developing disability than participants who had better lower extremity functioning. All participants had good initial functioning and participated in assessments on their own. Causes of disability were not assessed. Assessments of lower extremity functioning with the Five-Times Sit-to-Stand Test and the Timed "Up & Go" Test, especially poor performance on both tests, were good predictors of future disability in elderly people dwelling in the community. © 2017 American Physical Therapy Association
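For reference, determining an "optimal" cut-off for a timed test, as in the record above, is often done by maximising Youden's J over an ROC curve. The sketch below does this on simulated placeholder data; the cut-off it prints is not the study's value.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
sit_to_stand_s = rng.normal(9.0, 2.5, 4335).clip(4, 25)        # test times in seconds (simulated)
p_event = 1 / (1 + np.exp(-(0.4 * sit_to_stand_s - 6)))        # slower times imply higher risk
disability = rng.binomial(1, 0.1 * p_event)                    # low 2-year event rate

fpr, tpr, thresholds = roc_curve(disability, sit_to_stand_s)
j = tpr - fpr                                                  # Youden's J at each threshold
best_cutoff = thresholds[np.argmax(j)]
print(f"optimal cut-off ~ {best_cutoff:.1f} s (Youden J = {j.max():.2f})")
```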
Psychological opportunities and hazards in predictive genetic testing for cancer risk.
Codori, A M
1997-03-01
Although the availability of genetic tests seems like an unequivocally favorable turn of events, they are, in fact, not without controversy. At the center of the controversy is a question regarding the risks and benefits of genetic testing. Many geneticists, ethicists, psychologists, and persons at risk for cancer are concerned about the potentially adverse psychological effects of genetic testing on tested persons and their families. In addition, the screening and interventions that are useful in the general population remain to be shown effective in those with high genetic cancer risk. Consequently, there have been calls for caution in moving genetic testing out of research laboratories and into commercial laboratories until their impact and the effectiveness of cancer prevention strategies can be studied. This article examines the arguments and data for and against this caution, citing examples related to hereditary nonpolyposis colon cancer and drawing upon literature on testing for other genetic diseases.
Paziewska, Agnieszka; Cukrowska, Bozena; Dabrowska, Michalina; Goryca, Krzysztof; Piatkowska, Magdalena; Kluska, Anna; Mikula, Michal; Karczmarski, Jakub; Oralewska, Beata; Rybak, Anna; Socha, Jerzy; Balabas, Aneta; Zeber-Lubecka, Natalia; Ambrozkiewicz, Filip; Konopka, Ewa; Trojanowska, Ilona; Zagroba, Malgorzata; Szperl, Malgorzata; Ostrowski, Jerzy
2015-01-01
Assessment of non-HLA variants alongside standard HLA testing was previously shown to improve the identification of potential coeliac disease (CD) patients. We intended to identify new genetic variants associated with CD in the Polish population that would improve CD risk prediction when used alongside HLA haplotype analysis. DNA samples of 336 CD and 264 unrelated healthy controls were used to create DNA pools for a genome wide association study (GWAS). GWAS findings were validated with individual HLA tag single nucleotide polymorphism (SNP) typing of 473 patients and 714 healthy controls. Association analysis using four HLA-tagging SNPs showed that, as was found in other populations, positive predicting genotypes (HLA-DQ2.5/DQ2.5, HLA-DQ2.5/DQ2.2, and HLA-DQ2.5/DQ8) were found at higher frequencies in CD patients than in healthy control individuals in the Polish population. Both CD-associated SNPs discovered by GWAS were found in the CD susceptibility region, confirming the previously-determined association of the major histocompatibility (MHC) region with CD pathogenesis. The two most significant SNPs from the GWAS were rs9272346 (HLA-dependent; localized within 1 Kb of DQA1) and rs3130484 (HLA-independent; mapped to MSH5). Specificity of CD prediction using the four HLA-tagging SNPs achieved 92.9%, but sensitivity was only 45.5%. However, when a testing combination of the HLA-tagging SNPs and the MSH5 SNP was used, specificity decreased to 80%, and sensitivity increased to 74%. This study confirmed that improvement of CD risk prediction sensitivity could be achieved by including non-HLA SNPs alongside HLA SNPs in genetic testing.
Kolobe, Thubi H A; Bulanda, Michelle; Susman, Louisa
2004-12-01
Accurate diagnostic measures are central to early identification and intervention with infants who are at risk for developmental delays or disabilities. The purpose of this study was to examine (1) the ability of infants' Test of Infant Motor Performance (TIMP) scores at 7, 30, 60 and 90 days after term age to predict motor development at preschool age and (2) the contribution of the home environment and medical risk to the prediction. Sixty-one children from an original cohort of 90 infants who were assessed weekly with the TIMP, between 34 weeks gestational age and 4 months after term age, participated in this follow-up study. The Peabody Developmental Motor Scales, 2nd edition (PDMS-2), were administered to the children at the mean age of 57 months (SD=4.8 months). The quality and quantity of the home environment also were assessed at this age using the Early Childhood Home Observation for Measurement of the Environment (EC-HOME). Pearson product moment correlation coefficients, multiple regression, sensitivity and specificity, and positive and negative predictive values were used to assess the relationship among the TIMP, HOME, medical risk, and PDMS-2 scores. The correlation coefficients between the TIMP and PDMS-2 scores were statistically significant for all ages except at 7 days. The highest correlation coefficient was at 90 days (r=.69, P=.001). The TIMP scores at 30, 60, and 90 days after term; medical risk scores; and EC-HOME scores explained 24%, 23%, and 52% of the variance in the PDMS-2 scores, respectively. The TIMP score at 90 days after term was the most significant contributor to the prediction. The TIMP cutoff score of -0.5 standard deviation below the mean correctly classified 80%, 79%, and 87% of the children using a cutoff score of -2 standard deviations on the PDMS-2 at 30, 60, and 90 days, respectively. The results compare favorably with those of developmental tests administered to infants at 6 months of age or older. These findings underscore the need for age-specific test values and developmental surveillance of infants before making referrals.
Laceulle, Odilia M; Ormel, Johan; Vollebergh, Wilma A M; van Aken, Marcel A G; Nederhof, Esther
2014-03-01
This study aimed to test the vulnerability model of the relationship between temperament and mental disorders using a large sample of adolescents from the TRacking Adolescents Individual Lives' Survey (TRAILS). The vulnerability model argues that particular temperaments can place individuals at risk for the development of mental health problems. Importantly, the model may imply that not only baseline temperament predicts mental health problems prospectively, but additionally, that changes in temperament predict corresponding changes in risk for mental health problems. Data were used from 1195 TRAILS participants. Adolescent temperament was assessed both at age 11 and at age 16. Onset of mental disorders between age 16 and 19 was assessed at age 19, by means of the World Health Organization Composite International Diagnostic Interview (WHO CIDI). Results showed that temperament at age 11 predicted future mental disorders, thereby providing support for the vulnerability model. Moreover, temperament change predicted future mental disorders above and beyond the effect of basal temperament. For example, an increase in frustration increased the risk of mental disorders proportionally. This study confirms, and extends, the vulnerability model. Consequences of both temperament and temperament change were general (e.g., changes in frustration predicted both internalizing and externalizing disorders) as well as dimension specific (e.g., changes in fear predicted internalizing but not externalizing disorders). These findings confirm previous studies, which showed that mental disorders have both unique and shared underlying temperamental risk factors. © 2013 The Authors. Journal of Child Psychology and Psychiatry © 2013 Association for Child and Adolescent Mental Health.
Como, F; Carnesecchi, E; Volani, S; Dorne, J L; Richardson, J; Bassan, A; Pavan, M; Benfenati, E
2017-01-01
Ecological risk assessment of plant protection products (PPPs) requires an understanding of both the toxicity and the extent of exposure to assess risks for a range of taxa of ecological importance including target and non-target species. Non-target species such as honey bees (Apis mellifera), solitary bees and bumble bees are of utmost importance because of their vital ecological services as pollinators of wild plants and crops. To improve risk assessment of PPPs in bee species, computational models predicting the acute and chronic toxicity of a range of PPPs and contaminants can play a major role in providing structural and physico-chemical properties for the prioritisation of compounds of concern and future risk assessments. Over the last three decades, scientific advisory bodies and the research community have developed toxicological databases and quantitative structure-activity relationship (QSAR) models that are proving invaluable to predict toxicity using historical data and reduce animal testing. This paper describes the development and validation of a k-Nearest Neighbor (k-NN) model using in-house software for the prediction of acute contact toxicity of pesticides on honey bees. Acute contact toxicity data were collected from different sources for 256 pesticides, which were divided into training and test sets. The k-NN models were validated with good prediction, with an accuracy of 70% for all compounds and of 65% for highly toxic compounds, suggesting that they might reliably predict the toxicity of structurally diverse pesticides and could be used to screen and prioritise new pesticides. Copyright © 2016 Elsevier Ltd. All rights reserved.
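A k-NN classifier of the kind described above can be prototyped in a few lines; the sketch below uses simulated descriptors and scikit-learn rather than the authors' in-house software, so the descriptor set, neighbour count and accuracy are illustrative assumptions only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 256                                        # same order of magnitude as the dataset above
X = rng.normal(size=(n, 6))                    # placeholder physico-chemical descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)  # toxic / non-toxic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)).fit(X_tr, y_tr)
print("test-set accuracy:", round(accuracy_score(y_te, knn.predict(X_te)), 2))
```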
Virag, Nathalie; Erickson, Mark; Taraborrelli, Patricia; Vetter, Rolf; Lim, Phang Boon; Sutton, Richard
2018-04-28
We developed a vasovagal syncope (VVS) prediction algorithm for use during head-up tilt with simultaneous analysis of heart rate (HR) and systolic blood pressure (SBP). We previously tested this algorithm retrospectively in 1155 subjects, showing sensitivity of 95%, specificity of 93% and a median prediction time of 59 s. This prospective, single-center study of 140 subjects evaluated the VVS prediction algorithm and assessed whether the retrospective results were reproduced and clinically relevant. The primary endpoint was VVS prediction with sensitivity and specificity >80%. In subjects referred for 60° head-up tilt (Italian protocol), non-invasive HR and SBP were supplied to the VVS prediction algorithm: simultaneous analysis of RR intervals, SBP trends and their variability, represented by low-frequency power, generated a cumulative risk which was compared with a predetermined VVS risk threshold. When the cumulative risk exceeded the threshold, an alert was generated. Prediction time was the duration between the first alert and syncope. Of 140 subjects enrolled, data were usable for 134. Of 83 tilt+ve subjects (61.9%), 81 VVS events were correctly predicted, and of 51 tilt-ve subjects (38.1%), 45 were correctly identified as negative by the algorithm. The resulting algorithm performance was sensitivity 97.6% and specificity 88.2%, meeting the primary endpoint. Mean VVS prediction time was 2 min 26 s ± 3 min 16 s, with a median of 1 min 25 s. Using only HR and HR variability (without SBP), the mean prediction time was reduced to 1 min 34 s ± 1 min 45 s, with a median of 1 min 13 s. The VVS prediction algorithm is a clinically relevant tool and could offer applications including providing a patient alarm, shortening tilt-test time, or triggering pacing intervention in implantable devices. Copyright © 2018. Published by Elsevier Inc.
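The alerting logic described above (a cumulative risk accumulated beat by beat and compared with a preset threshold) can be mimicked with a toy loop like the one below. The risk-increment rule here is a placeholder assumption and deliberately much simpler than the published algorithm, which also uses low-frequency variability.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Sample:
    rr_ms: float       # RR interval in milliseconds
    sbp_mmhg: float    # systolic blood pressure in mmHg

def risk_increment(prev: Sample, cur: Sample) -> float:
    # placeholder rule: rising HR (shortening RR) plus falling SBP adds risk
    d_rr = prev.rr_ms - cur.rr_ms
    d_sbp = prev.sbp_mmhg - cur.sbp_mmhg
    return max(d_rr, 0.0) / 100.0 + max(d_sbp, 0.0) / 10.0

def first_alert(samples: list[Sample], threshold: float = 5.0) -> int | None:
    risk = 0.0
    for i in range(1, len(samples)):
        risk += risk_increment(samples[i - 1], samples[i])
        if risk >= threshold:
            return i            # index of the beat at which the alert fires
    return None

# synthetic tilt trace with progressively shorter RR intervals and falling SBP
beats = [Sample(900 - 5 * i, 120 - 0.8 * i) for i in range(120)]
print("alert at beat:", first_alert(beats))
```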
NASA Astrophysics Data System (ADS)
Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Patel, Bhavika; Heidari, Morteza; Liu, Hong; Zheng, Bin
2018-05-01
This study aims to investigate the feasibility of identifying a new quantitative imaging marker based on false-positives generated by a computer-aided detection (CAD) scheme to help predict short-term breast cancer risk. An image dataset including four-view mammograms acquired from 1044 women was retrospectively assembled. All mammograms were originally interpreted as negative by radiologists. In the next subsequent mammography screening, 402 women were diagnosed with breast cancer and 642 remained negative. An existing CAD scheme was applied ‘as is’ to process each image. From the CAD-generated results, four detection features were computed from each image: the total number of (1) initial detection seeds and (2) final detected false-positive regions, and the (3) average and (4) sum of detection scores. Then, by combining the features computed from the two bilateral images of the left and right breasts in either the craniocaudal or the mediolateral oblique view, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method to predict the likelihood of each testing case being positive in the next subsequent screening. The new prediction model yielded a maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 ± 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of (2.95, 6.83). The results also showed an increasing trend in the adjusted odds ratio and risk prediction scores (p < 0.01). Thus, this study demonstrated that CAD-generated false-positives might include valuable information, which needs to be further explored for identifying and/or developing more effective imaging markers for predicting short-term breast cancer risk.
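The evaluation pattern in the record above (a logistic regression scored by leave-one-case-out cross-validation and an ROC AUC) can be sketched as follows on simulated placeholder features; the case count is reduced for speed and the AUC printed is not the study's result.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 300                                   # far smaller than the study, for speed
X = rng.normal(size=(n, 4))               # four CAD-derived detection features per case
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.4, 0.3, 0.2, 0.1]))))

# each case is predicted by a model fit on all remaining cases
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("leave-one-out AUC:", round(roc_auc_score(y, probs), 3))
```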
Reading Disabilities Prevention in Five Year Olds: A Case of Development X Treatment Interaction.
ERIC Educational Resources Information Center
Farnsworth, Linda L.; And Others
A total of 211 kindergarten children, aged 63 to 81 months, were classified into two groups according to the risk of failure in first grade predicted for them on the basis of their performance on the Wide Range Achievement Test (WRAT) and the Draw a Person (DAP) test. According to prediction, Group I children without intervention would probably…
Predictive power of the grace score in population with diabetes.
Baeza-Román, Anna; de Miguel-Balsa, Eva; Latour-Pérez, Jaime; Carrillo-López, Andrés
2017-12-01
Current clinical practice guidelines recommend risk stratification in patients with acute coronary syndrome (ACS) upon admission to hospital. Diabetes mellitus (DM) is widely recognized as an independent predictor of mortality in these patients, although it is not included in the GRACE risk score. The objective of this study is to validate the GRACE risk score in a contemporary population and particularly in the subgroup of patients with diabetes, and to test the effects of including the DM variable in the model. Retrospective cohort study in patients included in the ARIAM-SEMICYUC registry, with a diagnosis of ACS and with available in-hospital mortality data. We tested the predictive power of the GRACE score, calculating the area under the ROC curve. We assessed the calibration of the score and the predictive ability based on type of ACS and the presence of DM. Finally, we evaluated the effect of including the DM variable in the model by calculating the net reclassification improvement. The GRACE score shows good predictive power for hospital mortality in the study population, with a moderate degree of calibration and no significant differences based on ACS type or the presence of DM. Including DM as a variable did not add any predictive value to the GRACE model. The GRACE score has an appropriate predictive power, with good calibration and clinical applicability in the subgroup of diabetic patients. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
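A common way to test whether a candidate variable such as diabetes adds predictive value to an existing score, as examined above, is to compare the discrimination of models with and without it. The sketch below does this on simulated data in which mortality depends on the score only, so the added variable should not help; the column names and coefficients are assumptions, not the GRACE model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "grace_score": rng.normal(140, 30, n),     # placeholder risk score
    "dm": rng.binomial(1, 0.3, n),             # diabetes flag
})
lp = 0.03 * (df["grace_score"] - 140)          # simulated mortality driven by the score only
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-(lp - 3))))

base = LogisticRegression(max_iter=1000).fit(df[["grace_score"]], df["died"])
full = LogisticRegression(max_iter=1000).fit(df[["grace_score", "dm"]], df["died"])

auc_base = roc_auc_score(df["died"], base.predict_proba(df[["grace_score"]])[:, 1])
auc_full = roc_auc_score(df["died"], full.predict_proba(df[["grace_score", "dm"]])[:, 1])
print(f"AUC score only {auc_base:.3f} vs score + DM {auc_full:.3f}")
```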
Harris, Susan R; Backman, Catherine L; Mayson, Tanja A
2010-05-01
We compared abilities of the Alberta Infant Motor Scale (AIMS) and the Harris Infant Neuromotor Test (HINT), during the infant's first year, in predicting scores on the Bayley Scales of Infant Development (BSID) at age 2 and 3 years. This prospective study involved 144 infants (71 females, 73 males), assessed with the HINT and AIMS at 4 to 6.5 and 10 to 12.5 months and with the BSID at 2 and 3 years. Inclusion criteria for typical infants (n=58) were the following: 38 to 42 weeks' gestation, birthweight at least 2500g, and no congenital anomaly, postnatal health concern, nor major prenatal or perinatal maternal risk factor. For at-risk infants (n=86), inclusion criteria were any of the following: less than 38 weeks' gestation, birthweight less than 2500g, maternal age older than 35 years or younger than 19 years at infant birth, maternal psychiatric/mental health concerns, prenatal drug/alcohol exposure, multiple births, or use of reproductive technology. For the overall sample, the early (4-6.5mo) HINT had higher predictive correlations than the AIMS for 2-year BSID-II motor outcomes (r=-0.36 vs 0.26), and 3-year BSID-III gross motor outcomes (r=-0.45 vs 0.31), as did the 10- to 12.5-month HINT (r=-0.55 vs 0.47). Correlations were identical for 10- to 12.5-month HINT and AIMS scores and 3-year BSID-III gross motor (r=-0.58 and 0.58) and fine motor (r=-0.35 and 0.35) subscales. When the sample was divided into typical and at-risk groups, predictive correlations were consistently stronger for the at-risk infants. Categorical predictive analyses were reasonably similar across both tests. Results suggest that the HINT has comparable predictive validity to the AIMS and should be considered for use in clinical and research settings.
Patient compliance based on genetic medicine: a literature review.
Schneider, Kai Insa; Schmidtke, Jörg
2014-01-01
For this literature review, medical literature data bases were searched for studies on patient compliance after genetic risk assessment. The review focused on conditions where secondary or tertiary preventive options exist, namely cancer syndromes (BRCA-related cancer, HNPCC/colon cancer), hemochromatosis, thrombophilia, smoking cessation, and obesity. As a counterpart, patient compliance was assessed regarding medication adherence and medical advice in some of the most epidemiologically important conditions (including high blood pressure, metabolic syndrome, and coronary heart disease) after receiving medical advice based on nongenetic risk information or a combination of genetic and nongenetic risk information. In the majority of studies based on genetic risk assessments, patients were confronted with predictive rather than diagnostic genetic profiles. Most of the studies started from a knowledge base around 10 years ago when DNA testing was at an early stage, limited in scope and specificity, and costly. The major result is that overall compliance of patients after receiving a high-risk estimate from genetic testing for a given condition is high. However, significant behavior change does not take place just because the analyte is "genetic." Many more factors play a role in the complex process of behavioral tuning. Without adequate counseling and guidance, patients may interpret risk estimates of predictive genetic testing with an increase in fear and anxiety.
Risk prediction score for severe high altitude illness: a cohort study.
Canouï-Poitrine, Florence; Veerabudun, Kalaivani; Larmignat, Philippe; Letournel, Murielle; Bastuji-Garin, Sylvie; Richalet, Jean-Paul
2014-01-01
Risk prediction of acute mountain sickness, high altitude (HA) pulmonary or cerebral edema is currently based on clinical assessment. Our objective was to develop a risk prediction score of Severe High Altitude Illness (SHAI) combining clinical and physiological factors. The study population comprised 1017 sea-level subjects who performed a hypoxia exercise test before a stay at HA. The outcome was the occurrence of SHAI during HA exposure. Two scores were built, according to the presence (PRE, n = 537) or absence (ABS, n = 480) of previous experience at HA, using multivariate logistic regression. Calibration was evaluated by the Hosmer-Lemeshow chi-square test and discrimination by the Area Under the ROC Curve (AUC) and Net Reclassification Index (NRI). The score was a linear combination of history of SHAI, ventilatory and cardiac response to hypoxia at exercise, speed of ascent, desaturation during hypoxic exercise, history of migraine, geographical location, female sex, age under 46 and regular physical activity. In the PRE/ABS groups, the score ranged from 0 to 12/10; a cut-off of 5/5.5 gave a sensitivity of 87%/87% and a specificity of 82%/73%. Adding physiological variables via the hypoxic exercise test improved the discrimination ability of the models: AUC increased by 7% to 0.91 (95%CI: 0.87-0.93) and 17% to 0.89 (95%CI: 0.85-0.91), NRI was 30% and 54% in the PRE and ABS groups respectively. A score computed with ten clinical, environmental and physiological factors accurately predicted the risk of SHAI in a large cohort of sea-level residents visiting HA regions.
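For illustration, a point-based score like the one above is just a weighted sum of items dichotomised at a cut-off, from which sensitivity and specificity can be read off. The items, weights, prevalences and cut-off in this sketch are invented placeholders, not the published SHAI score.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
items = {                                    # illustrative binary items: (prevalence, points)
    "history_of_shai": (0.15, 3),
    "fast_ascent": (0.30, 2),
    "marked_desaturation": (0.25, 2),
    "history_of_migraine": (0.20, 1),
    "female_sex": (0.50, 1),
    "age_under_46": (0.60, 1),
}
X = {k: rng.binomial(1, p, n) for k, (p, _) in items.items()}
score = sum(w * X[k] for k, (_, w) in items.items())
shai = rng.binomial(1, 1 / (1 + np.exp(-(score - 5))))     # simulated outcome tied to the score

pred_high = score >= 5                                      # dichotomise at an assumed cut-off
tp = np.sum(pred_high & (shai == 1)); fn = np.sum(~pred_high & (shai == 1))
tn = np.sum(~pred_high & (shai == 0)); fp = np.sum(pred_high & (shai == 0))
print("sensitivity %.2f  specificity %.2f" % (tp / (tp + fn), tn / (tn + fp)))
```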
Chen, Hong-Lin; Cao, Ying-Juan; Wang, Jing; Huai, Bao-Sha
2015-09-01
The Braden Scale is the most widely used pressure ulcer risk assessment in the world, but the currently used 5 risk classification groups do not accurately discriminate among their risk categories. To optimize risk classification based on Braden Scale scores, a retrospective analysis of all consecutively admitted patients in an acute care facility who were at risk for pressure ulcer development was performed between January 2013 and December 2013. Predicted pressure ulcer incidence first was calculated by a logistic regression model based on the original Braden score. Risk classification then was modified based on the predicted pressure ulcer incidence and compared between different risk categories in the modified (3-group) classification and the traditional (5-group) classification using the chi-square test. Two thousand six hundred twenty-five (2,625) patients (mean age 59.8 ± 16.5, range 1 month to 98 years, 1,601 of whom were men) were included in the study; 81 patients (3.1%) developed a pressure ulcer. The predicted pressure ulcer incidence ranged from 0.1% to 49.7%. When the predicted pressure ulcer incidence was greater than 10.0% (high risk), the corresponding Braden scores were less than 11; when the predicted incidence ranged from 1.0% to 10.0% (moderate risk), the corresponding Braden scores ranged from 12 to 16; and when the predicted incidence was less than 1.0% (mild risk), the corresponding Braden scores were greater than 17. In the modified classification, observed pressure ulcer incidence was significantly different between each of the 3 risk categories (P less than 0.05). However, in the traditional classification, the observed incidence was not significantly different between the high-risk category and moderate-risk category (P greater than 0.05) or between the mild-risk category and no-risk category (P greater than 0.05). If future studies confirm the validity of these findings, pressure ulcer prevention protocols of care based on Braden Scale scores can be simplified.
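The regrouping logic described above can be reproduced schematically: fit a logistic model of ulcer occurrence on the Braden score, convert each score to a predicted incidence, and collapse the predictions into three bands at the 1% and 10% thresholds. The data below are simulated placeholders for the cohort, with incidence deliberately exaggerated at low scores for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

rng = np.random.default_rng(5)
braden = rng.integers(6, 24, 2625)                                   # simulated Braden scores
ulcer = rng.binomial(1, 1 / (1 + np.exp(0.45 * (braden - 11))))      # lower score, higher risk

model = sm.Logit(ulcer, sm.add_constant(braden)).fit(disp=False)
pred = model.predict(sm.add_constant(braden))                        # predicted incidence per patient

band = pd.cut(pred, [0, 0.01, 0.10, 1.0], labels=["mild", "moderate", "high"])
table = pd.crosstab(band, ulcer)
chi2, p, _, _ = chi2_contingency(table)
print(table, f"\nchi-square p = {p:.3g}")
```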
Exploring a new bilateral focal density asymmetry based image marker to predict breast cancer risk
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Wang, Yunzhi; Qiu, Yuchen; Liu, Hong; Zheng, Bin
2017-03-01
Although breast density has been widely considered an important breast cancer risk factor, it is not very effective for predicting the risk of developing breast cancer in the short term or of harboring cancer in mammograms. Building on our recent studies of short-term breast cancer risk stratification models based on bilateral mammographic density asymmetry, in this study we explored a new quantitative image marker based on bilateral focal density asymmetry to predict the risk of harboring cancers in mammograms. For this purpose, we assembled a testing dataset involving 100 positive and 100 negative cases. In each positive case, no solid masses were visible on the mammograms. We developed a computer-aided detection (CAD) scheme to automatically detect focal dense regions depicted on the two bilateral mammograms of the left and right breasts. CAD selects the focal dense region with the maximum size on each image and computes its asymmetry ratio. We used this focal density asymmetry as a new imaging marker to divide the testing cases into two groups of higher and lower focal density asymmetry. The first group included 70 cases, of which 62.9% were positive, while the second group included 130 cases, of which 43.1% were positive. The odds ratio is 2.24. As a result, this preliminary study supported the feasibility of applying a new focal density asymmetry based imaging marker to predict the risk of having mammography-occult cancers. The goal is to assist radiologists in detecting subtle early cancers more effectively and accurately using mammography and/or other adjunctive imaging modalities in the future.
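The reported odds ratio can be checked directly from the group sizes and positivity rates given in the abstract (70 cases at 62.9% positive versus 130 cases at 43.1% positive); the short script below reconstructs the 2 × 2 table by rounding and adds a conventional log-odds-ratio confidence interval, which the abstract itself does not report.

```python
import math

a, b = round(0.629 * 70), 70 - round(0.629 * 70)      # high-asymmetry group: positive, negative
c, d = round(0.431 * 130), 130 - round(0.431 * 130)   # low-asymmetry group: positive, negative

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = (math.exp(math.log(odds_ratio) + s * 1.96 * se_log_or) for s in (-1, 1))
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # OR is about 2.24, as reported
```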
Contemporary model for cardiovascular risk prediction in people with type 2 diabetes.
Kengne, Andre Pascal; Patel, Anushka; Marre, Michel; Travert, Florence; Lievre, Michel; Zoungas, Sophia; Chalmers, John; Colagiuri, Stephen; Grobbee, Diederick E; Hamet, Pavel; Heller, Simon; Neal, Bruce; Woodward, Mark
2011-06-01
Existing cardiovascular risk prediction equations perform non-optimally in different populations with diabetes. Thus, there is a continuing need to develop new equations that will reliably estimate cardiovascular disease (CVD) risk and offer flexibility for adaptation in various settings. This report presents a contemporary model for predicting cardiovascular risk in people with type 2 diabetes mellitus. A 4.5-year follow-up of the Action in Diabetes and Vascular disease: preterax and diamicron-MR controlled evaluation (ADVANCE) cohort was used to estimate coefficients for significant predictors of CVD using Cox models. Similar Cox models were used to fit the 4-year risk of CVD in 7168 participants without previous CVD. The model's applicability was tested on the same sample and another dataset. A total of 473 major cardiovascular events were recorded during follow-up. Age at diagnosis, known duration of diabetes, sex, pulse pressure, treated hypertension, atrial fibrillation, retinopathy, HbA1c, urinary albumin/creatinine ratio and non-HDL cholesterol at baseline were significant predictors of cardiovascular events. The model developed using these predictors displayed an acceptable discrimination (c-statistic: 0.70) and good calibration during internal validation. The external applicability of the model was tested on an independent cohort of individuals with type 2 diabetes, where similar discrimination was demonstrated (c-statistic: 0.69). Major cardiovascular events in contemporary populations with type 2 diabetes can be predicted on the basis of routinely measured clinical and biological variables. The model presented here can be used to quantify risk and guide the intensity of treatment in people with diabetes.
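The model class used above is a Cox proportional hazards regression whose discrimination is summarised by a c-statistic. A minimal sketch with the lifelines package is shown below, using that package's bundled Rossi recidivism data purely as a stand-in cohort; it is not the ADVANCE dataset or the published equation.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                       # stand-in cohort: duration column 'week', event column 'arrest'
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")

print(cph.summary[["coef", "exp(coef)", "p"]])          # hazard ratios for each predictor
print("c-statistic:", round(cph.concordance_index_, 2)) # discrimination of the fitted model
```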
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-01-01
Background: Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods: In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results: Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion: Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
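The quantity plotted in a decision curve is the net benefit, NB = TP/n - FP/n * pt/(1 - pt), evaluated over a range of threshold probabilities pt and compared with "treat all" and "treat none" strategies. The sketch below computes it on simulated predictions; it is a bare-bones illustration rather than the authors' released software.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
p_hat = rng.beta(2, 5, n)                    # a model's predicted probabilities (simulated)
y = rng.binomial(1, p_hat)                   # observed outcomes consistent with those predictions

for pt in np.arange(0.05, 0.60, 0.05):
    treat = p_hat >= pt                      # cases the model recommends treating at threshold pt
    tp = np.sum(treat & (y == 1)) / n
    fp = np.sum(treat & (y == 0)) / n
    nb_model = tp - fp * pt / (1 - pt)
    nb_all = y.mean() - (1 - y.mean()) * pt / (1 - pt)   # "treat all" reference; "treat none" is 0
    print(f"pt={pt:.2f}  net benefit: model={nb_model:.3f}  treat-all={nb_all:.3f}")
```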
The prospect of predictive testing for personal risk: attitudes and decision making.
Wroe, A L; Salkovskis, P M; Rimes, K A
1998-06-01
As predictive tests for medical problems such as genetic disorders become more widely available, it becomes increasingly important to understand the processes involved in the decision whether or not to seek testing. This study investigates the decision to pursue the possibility of testing. Individuals (one group who had already contemplated the possibility of predictive testing and one group who had not) were asked to consider predictive testing for several diseases. They rated the likelihood of opting for testing and specified the reasons which they believed had affected their decision. The ratio of the numbers of reasons stated for testing and the numbers of reasons stated against testing was a good predictor of the stated likelihood of testing, particularly when the reasons were weighted by utility (importance). Those who had previously contemplated testing specified more emotional reasons. It is proposed that the decision process is internally logical although it may seem illogical to others due to there being idiosyncratic premises (or reasons) upon which the decision is based. It is concluded that the Utility Theory is a useful basis for describing how people make decisions related to predictive testing; modifications of the theory are proposed.
Ramos Olazagasti, Maria A.; Klein, Rachel G.; Mannuzza, Salvatore; Belsky, Erica Roizen; Hutchison, Jesse A.; Lashua-Shriftman, Erin C.; Castellanos, F. Xavier
2013-01-01
Objective: To test whether children with attention-deficit/hyperactivity disorder (ADHD), free of conduct disorder (CD) in childhood (mean = 8 years), have elevated risk-taking, accidents, and medical illnesses in adulthood (mean = 41 years); whether development of CD influences risk-taking during adulthood; and whether exposure to…
Tee, Jason C; Klingbiel, Jannie F G; Collins, Robert; Lambert, Mike I; Coopoo, Yoga
2016-11-01
Tee, JC, Klingbiel, JFG, Collins, R, Lambert, MI, and Coopoo, Y. Preseason Functional Movement Screen component tests predict severe contact injuries in professional rugby union players. J Strength Cond Res 30(11): 3194-3203, 2016-Rugby union is a collision sport with a relatively high risk of injury. The ability of the Functional Movement Screen (FMS) or its component tests to predict the occurrence of severe (≥28 days) injuries in professional players was assessed. Ninety FMS test observations from 62 players across 4 different time periods were compared with severe injuries sustained during 6 months after FMS testing. Mean composite FMS scores were significantly lower in players who sustained severe injury (injured 13.2 ± 1.5 vs. noninjured 14.5 ± 1.4, Effect Size = 0.83, large) because of differences in in-line lunge (ILL) and active straight leg raise (ASLR) scores. Receiver operating characteristic curves and 2 × 2 contingency tables were used to determine that ASLR (cut-off 2/3) was the injury predictor with the greatest sensitivity (0.96, 95% confidence interval [CI] = 0.79-1.0). Adding the ILL in combination with ASLR (ILL + ASLR) improved the specificity of the injury prediction model (ASLR specificity = 0.29, 95% CI = 0.18-0.43 vs. ASLR + ILL specificity = 0.53, 95% CI = 0.39-0.66, p ≤ 0.05). Further analysis was performed to determine whether FMS tests could predict contact and noncontact injuries. The FMS composite score and various combinations of component tests (deep squat [DS] + ILL, ILL + ASLR, and DS + ILL + ASLR) were all significant predictors of contact injury. The FMS composite score also predicted noncontact injury, but no component test or combination thereof produced a similar result. These findings indicate that low scores on various FMS component tests are risk factors for injury in professional rugby players.
The Role of Cognitive Factors in Predicting Balance and Fall Risk in a Neuro-Rehabilitation Setting.
Saverino, A; Waller, D; Rantell, K; Parry, R; Moriarty, A; Playford, E D
2016-01-01
There is a consistent body of evidence supporting the role of cognitive functions, particularly executive function, in the elderly and in neurological conditions which become more frequent with ageing. The aim of our study was to assess the role of different domains of cognitive functions to predict balance and fall risk in a sample of adults with various neurological conditions in a rehabilitation setting. This was a prospective, cohort study conducted in a single centre in the UK. 114 participants consecutively admitted to a Neuro-Rehabilitation Unit were prospectively assessed for fall accidents. Baseline assessment included a measure of balance (Berg Balance Scale) and a battery of standard cognitive tests measuring executive function, speed of information processing, verbal and visual memory, visual perception and intellectual function. The outcomes of interest were the risk of becoming a faller, balance and fall rate. Two tests of executive function were significantly associated with fall risk, the Stroop Colour Word Test (IRR 1.01, 95% CI 1.00-1.03) and the number of errors on part B of the Trail Making Test (IRR 1.23, 95% CI 1.03-1.49). Composite scores of executive function, speed of information processing and visual memory domains resulted in 2 to 3 times increased likelihood of having better balance (OR 2.74 95% CI 1.08 to 6.94, OR 2.72 95% CI 1.16 to 6.36 and OR 2.44 95% CI 1.11 to 5.35 respectively). Our results show that specific subcomponents of executive functions are able to predict fall risk, while a more global cognitive dysfunction is associated with poorer balance.
The Role of Cognitive Factors in Predicting Balance and Fall Risk in a Neuro-Rehabilitation Setting
Saverino, A.; Waller, D.; Rantell, K.; Parry, R.; Moriarty, A.; Playford, E. D.
2016-01-01
Introduction There is a consistent body of evidence supporting the role of cognitive functions, particularly executive function, in the elderly and in neurological conditions which become more frequent with ageing. The aim of our study was to assess the role of different domains of cognitive functions to predict balance and fall risk in a sample of adults with various neurological conditions in a rehabilitation setting. Methods This was a prospective, cohort study conducted in a single centre in the UK. 114 participants consecutively admitted to a Neuro-Rehabilitation Unit were prospectively assessed for fall accidents. Baseline assessment included a measure of balance (Berg Balance Scale) and a battery of standard cognitive tests measuring executive function, speed of information processing, verbal and visual memory, visual perception and intellectual function. The outcomes of interest were the risk of becoming a faller, balance and fall rate. Results Two tests of executive function were significantly associated with fall risk, the Stroop Colour Word Test (IRR 1.01, 95% CI 1.00–1.03) and the number of errors on part B of the Trail Making Test (IRR 1.23, 95% CI 1.03–1.49). Composite scores of executive function, speed of information processing and visual memory domains resulted in 2 to 3 times increased likelihood of having better balance (OR 2.74 95% CI 1.08 to 6.94, OR 2.72 95% CI 1.16 to 6.36 and OR 2.44 95% CI 1.11 to 5.35 respectively). Conclusions Our results show that specific subcomponents of executive functions are able to predict fall risk, while a more global cognitive dysfunction is associated with poorer balance. PMID:27115880
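To make the incidence rate ratios concrete: an IRR acts multiplicatively on the fall rate, so the reported 1.23 per error on part B of the Trail Making Test compounds with each additional error. A small illustration (the per-error IRR is taken from the abstract; the error counts are arbitrary):

```python
irr_per_error = 1.23   # reported IRR per error on Trail Making Test part B
for errors in (1, 3, 5):
    print(f"{errors} errors -> fall rate multiplied by {irr_per_error ** errors:.2f}")
```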
Development and validation of an all-cause mortality risk score in type 2 diabetes.
Yang, Xilin; So, Wing Yee; Tong, Peter C Y; Ma, Ronald C W; Kong, Alice P S; Lam, Christopher W K; Ho, Chung Shun; Cockram, Clive S; Ko, Gary T C; Chow, Chun-Chung; Wong, Vivian C W; Chan, Juliana C N
2008-03-10
Diabetes reduces life expectancy by 10 to 12 years, but whether death can be predicted in type 2 diabetes mellitus remains uncertain. A prospective cohort of 7583 type 2 diabetic patients enrolled since 1995 was censored on July 30, 2005, or after 6 years of follow-up, whichever came first. A restricted cubic spline model was used to check data linearity and to develop linear-transforming formulas. Data were randomly assigned to a training data set and to a test data set. A Cox model was used to develop risk scores in the training data set. Calibration and discrimination were assessed in the test data set. A total of 619 patients died during a median follow-up period of 5.51 years, resulting in a mortality rate of 18.69 per 1000 person-years. Age, sex, peripheral arterial disease, cancer history, insulin use, blood hemoglobin levels, linear-transformed body mass index, random spot urinary albumin-creatinine ratio, and estimated glomerular filtration rate at enrollment were predictors of all-cause death. A risk score for all-cause mortality was developed using these predictors. The predicted and observed death rates in the test data set were similar (P > .70). The area under the receiver operating characteristic curve was 0.85 for 5 years of follow-up. Using the risk score in ranking cause-specific deaths, the area under the receiver operating characteristic curve was 0.95 for genitourinary death, 0.85 for circulatory death, 0.85 for respiratory death, and 0.71 for neoplasm death. Death in type 2 diabetes mellitus can be predicted using a risk score consisting of commonly measured clinical and biochemical variables. Further validation is needed before clinical use.
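The general pattern of such a score, a weighted sum of predictors (the Cox linear predictor) rescaled to an integer, with discrimination checked by the area under the ROC curve, can be sketched as below. The weights and the outcome are placeholders, not the published coefficients.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical log-hazard-ratio weights for a handful of the predictors named in the
# abstract; the numbers are placeholders, not the published coefficients.
WEIGHTS = {"age_per_10y": 0.45, "male": 0.30, "pad": 0.55,
           "cancer_history": 0.70, "insulin_use": 0.40}

def risk_score(patient):
    """Weighted sum of predictors (the Cox linear predictor), scaled to an integer score."""
    return round(10 * sum(WEIGHTS[k] * patient[k] for k in WEIGHTS))

# Toy check of discrimination: AUC of the score against a synthetic death indicator.
rng = np.random.default_rng(0)
cohort = [{"age_per_10y": rng.uniform(4, 9), "male": rng.integers(0, 2),
           "pad": rng.integers(0, 2), "cancer_history": rng.integers(0, 2),
           "insulin_use": rng.integers(0, 2)} for _ in range(300)]
scores = np.array([risk_score(p) for p in cohort])
died = scores + rng.normal(0, 10, 300) > np.percentile(scores, 80)   # synthetic outcome
print("AUC:", round(roc_auc_score(died, scores), 2))
```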
Lamb, Sarah E; McCabe, Chris; Becker, Clemens; Fried, Linda P; Guralnik, Jack M
2008-10-01
Falls are a major cause of disability, dependence, and death in older people. Brief screening algorithms may be helpful in identifying risk and leading to more detailed assessment. Our aim was to determine the most effective sequence of falls screening test items from a wide selection of recommended items including self-report and performance tests, and to compare performance with other published guidelines. Data were from a prospective, age-stratified, cohort study. Participants were 1002 community-dwelling women aged 65 years old or older, experiencing at least some mild disability. Assessments of fall risk factors were conducted in participants' homes. Fall outcomes were collected at 6 monthly intervals. Algorithms were built for prediction of any fall over a 12-month period using tree classification with cross-set validation. Algorithms using performance tests provided the best prediction of fall events, and achieved moderate to strong performance when compared to commonly accepted benchmarks. The items selected by the best performing algorithm were the number of falls in the last year and, in selected subpopulations, frequency of difficulty balancing while walking, a 4 m walking speed test, body mass index, and a test of knee extensor strength. The algorithm performed better than that from the American Geriatric Society/British Geriatric Society/American Academy of Orthopaedic Surgeons and other guidance, although these findings should be treated with caution. Suggestions are made on the type, number, and sequence of tests that could be used to maximize estimation of the probability of falling in older disabled women.
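A tree-classification screening algorithm of the kind described above can be sketched with a generic decision tree and cross-validation; the features below stand in for the items named in the abstract, and the data are synthetic placeholders rather than the study cohort.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the screening items named in the abstract: falls last year,
# difficulty balancing while walking, 4 m walk speed, BMI and knee extensor strength.
rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.poisson(0.6, n),            # falls in the last year
    rng.integers(0, 4, n),          # frequency of difficulty balancing (0-3)
    rng.normal(0.9, 0.2, n),        # 4 m walking speed, m/s
    rng.normal(27, 4, n),           # body mass index
    rng.normal(15, 4, n),           # knee extensor strength, kg
])
y = (X[:, 0] + (X[:, 2] < 0.8) + rng.normal(0, 0.8, n) > 1.2).astype(int)  # synthetic fall outcome

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
auc = cross_val_score(tree, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUC: %.2f ± %.2f" % (auc.mean(), auc.std()))
```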
Jang, Eun Jin; Park, ByeongJu; Kim, Tae-Young; Shin, Soon-Ae
2016-01-01
Background Asian-specific prediction models for estimating individual risk of osteoporotic fractures are rare. We developed a Korean fracture risk prediction model using clinical risk factors and assessed validity of the final model. Methods A total of 718,306 Korean men and women aged 50–90 years were followed for 7 years in a national system-based cohort study. In total, 50% of the subjects were assigned randomly to the development dataset and 50% were assigned to the validation dataset. Clinical risk factors for osteoporotic fracture were assessed at the biennial health check. Data on osteoporotic fractures during the follow-up period were identified by ICD-10 codes and the nationwide database of the National Health Insurance Service (NHIS). Results During the follow-up period, 19,840 osteoporotic fractures were reported (4,889 in men and 14,951 in women) in the development dataset. The assessment tool called the Korean Fracture Risk Score (KFRS) is comprised of a set of nine variables, including age, body mass index, recent fragility fracture, current smoking, high alcohol intake, lack of regular exercise, recent use of oral glucocorticoid, rheumatoid arthritis, and other causes of secondary osteoporosis. The KFRS predicted osteoporotic fractures over the 7 years. This score was validated using an independent dataset. A close relationship with overall fracture rate was observed when we compared the mean predicted scores after applying the KFRS with the observed risks after 7 years within each 10th of predicted risk. Conclusion We developed a Korean specific prediction model for osteoporotic fractures. The KFRS was able to predict risk of fracture in the primary population without bone mineral density testing and is therefore suitable for use in both clinical setting and self-assessment. The website is available at http://www.nhis.or.kr. PMID:27399597
Kim, Ha Young; Jang, Eun Jin; Park, ByeongJu; Kim, Tae-Young; Shin, Soon-Ae; Ha, Yong-Chan; Jang, Sunmee
2016-01-01
Asian-specific prediction models for estimating individual risk of osteoporotic fractures are rare. We developed a Korean fracture risk prediction model using clinical risk factors and assessed validity of the final model. A total of 718,306 Korean men and women aged 50-90 years were followed for 7 years in a national system-based cohort study. In total, 50% of the subjects were assigned randomly to the development dataset and 50% were assigned to the validation dataset. Clinical risk factors for osteoporotic fracture were assessed at the biennial health check. Data on osteoporotic fractures during the follow-up period were identified by ICD-10 codes and the nationwide database of the National Health Insurance Service (NHIS). During the follow-up period, 19,840 osteoporotic fractures were reported (4,889 in men and 14,951 in women) in the development dataset. The assessment tool called the Korean Fracture Risk Score (KFRS) is comprised of a set of nine variables, including age, body mass index, recent fragility fracture, current smoking, high alcohol intake, lack of regular exercise, recent use of oral glucocorticoid, rheumatoid arthritis, and other causes of secondary osteoporosis. The KFRS predicted osteoporotic fractures over the 7 years. This score was validated using an independent dataset. A close relationship with overall fracture rate was observed when we compared the mean predicted scores after applying the KFRS with the observed risks after 7 years within each 10th of predicted risk. We developed a Korean specific prediction model for osteoporotic fractures. The KFRS was able to predict risk of fracture in the primary population without bone mineral density testing and is therefore suitable for use in both clinical setting and self-assessment. The website is available at http://www.nhis.or.kr.
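The validation step described here, comparing mean predicted risk with the observed event rate within each tenth of predicted risk, can be sketched in a few lines (an illustrative helper, not the authors' code):

```python
import numpy as np

def calibration_by_decile(predicted_risk, observed_event):
    """Pairs of (mean predicted risk, observed event rate) within each tenth of
    predicted risk, ordered from lowest- to highest-risk decile."""
    predicted_risk = np.asarray(predicted_risk, dtype=float)
    observed_event = np.asarray(observed_event, dtype=float)
    order = np.argsort(predicted_risk)
    return [(predicted_risk[idx].mean(), observed_event[idx].mean())
            for idx in np.array_split(order, 10)]
```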
Pletcher, Mark J; Tice, Jeffrey A; Pignone, Michael; McCulloch, Charles; Callister, Tracy Q; Browner, Warren S
2004-01-01
Background The coronary artery calcium (CAC) score is an independent predictor of coronary heart disease. We sought to combine information from the CAC score with information from conventional cardiac risk factors to produce post-test risk estimates, and to determine whether the score may add clinically useful information. Methods We measured the independent cross-sectional associations between conventional cardiac risk factors and the CAC score among asymptomatic persons referred for non-contrast electron beam computed tomography. Using the resulting multivariable models and published CAC score-specific relative risk estimates, we estimated post-test coronary heart disease risk in a number of different scenarios. Results Among 9341 asymptomatic study participants (age 35–88 years, 40% female), we found that conventional coronary heart disease risk factors including age, male sex, self-reported hypertension, diabetes and high cholesterol were independent predictors of the CAC score, and we used the resulting multivariable models for predicting post-test risk in a variety of scenarios. Our models predicted, for example, that a 60-year-old non-smoking non-diabetic woman with hypertension and high cholesterol would have a 47% chance of having a CAC score of zero, reducing her 10-year risk estimate from 15% (per Framingham) to 6–9%; if her score were over 100, however (a 17% chance), her risk estimate would be markedly higher (25–51% in 10 years). In low risk scenarios, the CAC score is very likely to be zero or low, and unlikely to change management. Conclusion Combining information from the CAC score with information from conventional risk factors can change assessment of coronary heart disease risk to an extent that may be clinically important, especially when the pre-test 10-year risk estimate is intermediate. The attached spreadsheet makes these calculations easy. PMID:15327691
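A simplified sketch of the arithmetic involved: rescale a pre-test (Framingham-type) risk by a CAC-category-specific relative risk, normalising so that the category-probability-weighted post-test risk equals the pre-test risk. The category probabilities and relative risks below are placeholders rather than the published estimates.

```python
pretest_risk = 0.15                                    # 10-year risk before the CAC result
cac_prob = {"0": 0.47, "1-100": 0.36, ">100": 0.17}    # P(CAC category | risk factors), placeholder
cac_rr   = {"0": 0.4,  "1-100": 1.3,  ">100": 3.0}     # relative risk per category, placeholder

expected_rr = sum(cac_prob[c] * cac_rr[c] for c in cac_prob)
for category in cac_prob:
    posttest = pretest_risk * cac_rr[category] / expected_rr
    print(f"CAC {category}: post-test 10-year risk ≈ {posttest:.1%}")
```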
Hemal, Kshipra; Pagidipati, Neha J.; Coles, Adrian; Dolor, Rowena J.; Mark, Daniel B.; Pellikka, Patricia A.; Hoffmann, Udo; Litwin, Sheldon E.; Daubert, Melissa A.; Shah, Svati H.; Ariani, Kevin; Bullock-Palmer, Renee; Martinez, Beth; Lee, Kerry L.; Douglas, Pamela S.
2016-01-01
Objectives To determine whether presentation, risk assessment, testing choices, and results differ by sex in stable symptomatic outpatients with suspected coronary artery disease (CAD). Background Although established CAD presentations differ by sex, little is known about stable, suspected CAD. Methods Characteristics of 10,003 men and women in the Prospective Multicenter Imaging Study for Evaluation of Chest Pain (PROMISE) trial were compared using chi-square and Wilcoxon rank sum tests. Sex differences in test selection and predictors of test positivity were examined using logistic regression. Results Women were older (62.4 vs. 59.0 years) and more likely to be hypertensive (66.6% vs. 63.2%), dyslipidemic (68.9% vs. 66.3%), and to have a family history of premature CAD (34.6% vs. 29.3%) (all p-values<0.005). Women were less likely to smoke (45.6% vs. 57.0%; p<0.001), while diabetes prevalence was similar (21.8% vs. 21.0%; p=0.30). Chest pain was the primary symptom in 73.2% of women vs. 72.3% of men (p=0.30) and was characterized as “crushing/pressure/squeezing/tightness” in 52.5% of women vs. 46.2% of men (p<0.001). Compared to men, all risk scores characterized women as lower risk, and providers were more likely to characterize women as having low (<30%) pre-test probability for CAD (40.7% vs. 34.1%; p<0.001). Compared with men, women were more often referred to imaging tests (adjusted OR 1.21; 95% CI 1.01–1.44) than non-imaging tests. Women were less likely to have a positive test (9.7% vs. 15.1%; p<0.001). Although univariate predictors of test positivity were similar, in multivariable models, age, BMI, and Framingham risk score were predictive of a positive test in women, while Framingham and Diamond and Forrester risk scores were predictive in men. Conclusion Patient sex influences the entire diagnostic pathway for possible CAD, from baseline risk factors and presentation to noninvasive test outcomes. These differences highlight the need for sex-specific approaches to CAD evaluation. PMID:27017234
Torino, Claudia; Manfredini, Fabio; Bolignano, Davide; Aucella, Filippo; Baggetta, Rossella; Barillà, Antonio; Battaglia, Yuri; Bertoli, Silvio; Bonanno, Graziella; Castellino, Pietro; Ciurlino, Daniele; Cupisti, Adamasco; D'Arrigo, Graziella; De Paola, Luciano; Fabrizi, Fabrizio; Fatuzzo, Pasquale; Fuiano, Giorgio; Lombardi, Luigi; Lucisano, Gaetano; Messa, Piergiorgio; Rapanà, Renato; Rapisarda, Francesco; Rastelli, Stefania; Rocca-Rey, Lisa; Summaria, Chiara; Zuccalà, Alessandro; Tripepi, Giovanni; Catizone, Luigi; Zoccali, Carmine; Mallamaci, Francesca
2014-01-01
Scarce physical activity predicts shorter survival in dialysis patients. However, the relationship between physical (motor) fitness and clinical outcomes has never been tested in these patients. We tested the predictive power of an established metric of motor fitness, the Six-Minute Walking Test (6MWT), for death, cardiovascular events and hospitalization in 296 dialysis patients who took part in the trial EXCITE (ClinicalTrials.gov Identifier: NCT01255969). During follow-up, 69 patients died, 90 had fatal and non-fatal cardiovascular events, 159 were hospitalized and 182 patients had the composite outcome. In multivariate Cox models - including the study allocation arm and classical and non-classical risk factors - an increase of 20 metres walked during the 6MWT was associated with a 6% reduction in the risk of the composite end-point (P=0.001), and a similar relationship existed between the 6MWT and mortality (P<0.001) and hospitalizations (P=0.03). A similar trend was observed for cardiovascular events but this relationship did not reach statistical significance (P=0.09). Poor physical performance predicts a high risk of mortality, cardiovascular events and hospitalizations in dialysis patients. Future studies, including phase-2 EXCITE, will assess whether improving motor fitness may translate into better clinical outcomes in this high risk population. © 2014 S. Karger AG, Basel.
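Because a Cox hazard ratio is multiplicative in the linear predictor, the reported effect per 20 m walked rescales to other distances by exponentiating; a small illustration, taking HR per 20 m ≈ 0.94 (the 6% reduction quoted above):

```python
import math

hr_per_20m = 0.94                       # ~6% lower risk per additional 20 m on the 6MWT
beta_per_metre = math.log(hr_per_20m) / 20
for metres in (50, 100):
    print(f"HR per extra {metres} m: {math.exp(metres * beta_per_metre):.2f}")
```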
Cole, J H; Ritchie, S J; Bastin, M E; Valdés Hernández, M C; Muñoz Maniega, S; Royle, N; Corley, J; Pattie, A; Harris, S E; Zhang, Q; Wray, N R; Redmond, P; Marioni, R E; Starr, J M; Cox, S R; Wardlaw, J M; Sharp, D J; Deary, I J
2018-01-01
Age-associated disease and disability are placing a growing burden on society. However, ageing does not affect people uniformly. Hence, markers of the underlying biological ageing process are needed to help identify people at increased risk of age-associated physical and cognitive impairments and ultimately, death. Here, we present such a biomarker, ‘brain-predicted age’, derived using structural neuroimaging. Brain-predicted age was calculated using machine-learning analysis, trained on neuroimaging data from a large healthy reference sample (N=2001), then tested in the Lothian Birth Cohort 1936 (N=669), to determine relationships with age-associated functional measures and mortality. Having a brain-predicted age indicative of an older-appearing brain was associated with: weaker grip strength, poorer lung function, slower walking speed, lower fluid intelligence, higher allostatic load and increased mortality risk. Furthermore, while combining brain-predicted age with grey matter and cerebrospinal fluid volumes (themselves strong predictors) did not improve mortality risk prediction, the combination of brain-predicted age and DNA-methylation-predicted age did. This indicates that neuroimaging and epigenetics measures of ageing can provide complementary data regarding health outcomes. Our study introduces a clinically-relevant neuroimaging ageing biomarker and demonstrates that combining distinct measurements of biological ageing further helps to determine risk of age-related deterioration and death. PMID:28439103
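The general shape of a brain-predicted age analysis is: train a regression from imaging features to chronological age in a reference sample, apply it to a new cohort, and treat the predicted-minus-chronological difference as the ageing marker. The sketch below uses a generic ridge regression on synthetic features purely for illustration; the original work used machine-learning models trained on structural neuroimaging data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
features = rng.normal(size=(2000, 50))                      # synthetic imaging features
age = 40 + 3 * features[:, :5].sum(axis=1) + rng.normal(0, 5, 2000)

X_ref, X_new, age_ref, age_new = train_test_split(features, age, random_state=0)
model = Ridge(alpha=1.0).fit(X_ref, age_ref)                # train on the reference sample
brain_age_gap = model.predict(X_new) - age_new              # positive = older-appearing brain
print("Mean absolute error (years):", round(float(np.abs(brain_age_gap).mean()), 1))
```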
Efficacy of functional movement screening for predicting injuries in coast guard cadets.
Knapik, Joseph J; Cosio-Lima, Ludimila M; Reynolds, Katy L; Shumway, Richard S
2015-05-01
Functional movement screening (FMS) examines the ability of individuals to perform highly specific movements with the aim of identifying individuals who have functional limitations or asymmetries. It is assumed that individuals who can more effectively accomplish the required movements have a lower injury risk. This study determined the ability of FMS to predict injuries in United States Coast Guard (USCG) cadets. Seven hundred seventy male and 275 female USCG freshman cadets were administered the 7 FMS tests before the physically intense 8-week Summer Warfare Annual Basic (SWAB) training. Physical training-related injuries were recorded during SWAB training. Cumulative injury incidence was calculated at various FMS cutpoint scores. The ability of the FMS total score to predict injuries was examined by calculating sensitivity and specificity. The FMS cutpoint that maximized sensitivity and specificity was determined from Youden's index (sensitivity + specificity - 1). For men, FMS scores ≤ 12 were associated with higher injury risk than scores >12; for women, FMS scores ≤ 15 were associated with higher injury risk than scores >15. Youden's index indicated that the optimal FMS cutpoint was ≤ 11 for men (22% sensitivity, 87% specificity) and ≤ 14 for women (60% sensitivity, 61% specificity). Functional movement screening demonstrated moderate prognostic accuracy for determining injury risk among female Coast Guard cadets but relatively low accuracy among male cadets. Attempting to predict injury risk based on the FMS test seems to have some limited promise based on the present and past investigations.
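The cutpoint selection described here follows directly from the definition of Youden's index; a minimal sketch, assuming lower FMS scores flag higher risk (helper name and inputs are illustrative):

```python
import numpy as np

def best_fms_cutpoint(scores, injured):
    """Return the cutpoint (score <= cut flags 'at risk') that maximises
    Youden's index = sensitivity + specificity - 1."""
    scores = np.asarray(scores)
    injured = np.asarray(injured).astype(bool)
    best = None
    for cut in np.unique(scores):
        flagged = scores <= cut
        sens = (flagged & injured).sum() / injured.sum()
        spec = (~flagged & ~injured).sum() / (~injured).sum()
        youden = sens + spec - 1
        if best is None or youden > best[0]:
            best = (youden, cut, sens, spec)
    return best   # (Youden's index, cutpoint, sensitivity, specificity)
```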
Nielsen, Mette L; Pareek, Manan; Leósdóttir, Margrét; Eriksson, Karl-Fredrik; Nilsson, Peter M; Olsen, Michael H
2018-03-01
To examine the predictive capability of a 1-h vs 2-h postload glucose value for cardiovascular morbidity and mortality. Prospective, population-based cohort study (Malmö Preventive Project) with subject inclusion 1974-1992. 4934 men without known diabetes and cardiovascular disease, who had blood glucose (BG) measured at 0, 20, 40, 60, 90 and 120 min during an OGTT (30 g glucose per m² body surface area), were followed for 27 years. Data on cardiovascular events and death were obtained through national and local registries. Predictive capabilities of fasting BG (FBG) and glucose values obtained during OGTT alone and added to a clinical prediction model comprising traditional cardiovascular risk factors were assessed using Harrell's concordance index (C-index) and integrated discrimination improvement (IDI). Median age was 48 (25th-75th percentile: 48-49) years and mean FBG 4.6 ± 0.6 mmol/L. FBG and 2-h postload BG did not independently predict cardiovascular events or death. Conversely, 1-h postload BG predicted cardiovascular morbidity and mortality and remained an independent predictor of cardiovascular death (HR: 1.09, 95% CI: 1.01-1.17, P = 0.02) and all-cause mortality (HR: 1.10, 95% CI: 1.05-1.16, P < 0.0001) after adjusting for various traditional risk factors. Clinical risk factors with added 1-h postload BG performed better than clinical risk factors alone, in predicting cardiovascular death (likelihood-ratio test, P = 0.02) and all-cause mortality (likelihood-ratio test, P = 0.0001; significant IDI, P = 0.0003). Among men without known diabetes, addition of 1-h BG, but not FBG or 2-h BG, to clinical risk factors provided incremental prognostic yield for prediction of cardiovascular death and all-cause mortality. © 2018 European Society of Endocrinology.
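Harrell's C-index used here generalises the AUC to censored follow-up: among pairs where one subject is known to have had the event first, it is the proportion in which that subject also carries the higher predicted risk. A naive O(n²) sketch, not the study's code:

```python
import numpy as np

def harrells_c(risk, time, event):
    """Naive concordance index for right-censored data; ties in risk count 0.5."""
    risk, time, event = map(np.asarray, (risk, time, event))
    concordant = tied = usable = 0
    for i in range(len(risk)):
        if not event[i]:
            continue                      # subject i must have an observed event
        for j in range(len(risk)):
            if time[i] < time[j]:         # i's event precedes j's follow-up time
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / usable

print(harrells_c(risk=[0.9, 0.5, 0.2, 0.7], time=[2, 5, 8, 3], event=[1, 0, 0, 1]))
```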
Prognostic value of liver fibrosis and steatosis biomarkers in type-2 diabetes and dyslipidaemia.
Perazzo, H; Munteanu, M; Ngo, Y; Lebray, P; Seurat, N; Rutka, F; Couteau, M; Jacqueminet, S; Giral, P; Monneret, D; Imbert-Bismut, F; Ratziu, V; Hartemann-Huertier, A; Housset, C; Poynard, T
2014-11-01
In cardiometabolic disorders, non-alcoholic fatty liver disease is frequent and presumably associated with increased mortality and cardiovascular risk. To evaluate the prognostic value of non-invasive biomarkers of liver fibrosis (FibroTest) and steatosis (SteatoTest) in patients with type-2 diabetes and/or dyslipidaemia. A total of 2312 patients with type-2 diabetes and/or dyslipidaemia were included and prospectively followed up for 5-15 years. The cardiovascular Framingham-risk score was calculated; advanced fibrosis and severe steatosis were defined by FibroTest >0.48 and SteatoTest >0.69, respectively, as previously established. During a median follow-up of 12 years, 172 patients (7.4%) died. The leading causes of mortality were cancer (31%) and cardiovascular-related death (20%). The presence of advanced fibrosis [HR 2.98 (95% CI 1.78-4.99); P < 0.0001] or severe steatosis [HR 1.86 (95% CI 1.34-2.58); P = 0.0002] was associated with an increased risk of mortality. In a multivariate Cox model adjusted for confounders: the presence of advanced fibrosis was associated with overall mortality [1.95 (1.12-3.41); P = 0.02]; advanced fibrosis at baseline [n = 50/677; 1.92 (1.04-3.55); P = 0.04] and progression to advanced fibrosis during follow-up [n = 16/127; 4.8 (1.5-14.9); P = 0.007] were predictors of cardiovascular events in patients with type-2 diabetes. In patients with a Framingham-risk score ≥20%, the presence of advanced fibrosis was predictive of cardiovascular events [2.24 (1.16-4.33); P < 0.05]. Liver biomarkers, such as FibroTest and SteatoTest, have prognostic values in patients with metabolic disorders. FibroTest has prognostic value for predicting overall survival in patients with type-2 diabetes and/or dyslipidaemia. In type-2 diabetes, FibroTest predicted cardiovascular events and improved the Framingham-risk score. © 2014 John Wiley & Sons Ltd.
Depression and Delinquency Covariation in an Accelerated Longitudinal Sample of Adolescents
Kofler, Michael J.; McCart, Michael R.; Zajac, Kristyn; Ruggiero, Kenneth J.; Saunders, Benjamin E.; Kilpatrick, Dean G.
2015-01-01
Objectives The current study tested opposing predictions stemming from the failure and acting out theories of depression-delinquency covariation. Methods Participants included a nationwide longitudinal sample of adolescents (N = 3,604) ages 12 to 17. Competing models were tested using cohort-sequential latent growth curve modeling to determine whether depressive symptoms at age 12 (baseline) predicted concurrent and age-related changes in delinquent behavior, whether the opposite pattern was apparent (delinquency predicting depression), and whether initial levels of depression predict changes in delinquency significantly better than vice versa. Results Early depressive symptoms predicted age-related changes in delinquent behavior significantly better than early delinquency predicted changes in depressive symptoms. In addition, the impact of gender on age-related changes in delinquent symptoms was mediated by gender differences in depressive symptom changes, indicating that depressive symptoms are a particularly salient risk factor for delinquent behavior in girls. Conclusion Early depressive symptoms represent a significant risk factor for later delinquent behavior – especially for girls – and appear to be a better predictor of later delinquency than early delinquency is of later depression. These findings provide support for the acting out theory and contradict failure theory predictions. PMID:21787049
Sisa, Ivan
2018-02-09
Cardiovascular disease (CVD) mortality is predicted to increase in Latin American countries due to their rapidly aging populations. However, there is very little information about CVD risk assessment as a primary preventive measure in this high-risk population. We predicted the national risk of developing CVD in the Ecuadorian elderly population using the Systematic COronary Risk Evaluation in Older Persons (SCORE OP) High and Low models by risk categories/CVD risk region in 2009. Data on national cardiovascular risk factors were obtained from the Encuesta sobre Salud, Bienestar y Envejecimiento. We computed the predicted 5-year risk of CVD and compared the extent of agreement and reclassification in stratifying high-risk individuals between the SCORE OP High and Low models. Analyses were done by risk categories, CVD risk region, and sex. In 2009, based on the SCORE OP Low model, almost 42% of elderly adults living in Ecuador were at high risk of suffering CVD over a 5-year period. The extent of agreement between the SCORE OP High and Low risk prediction models was moderate (Cohen's kappa test of 0.5), approximately 34% of individuals were reclassified into different risk categories, and a third of the population would benefit from a pharmacologic intervention to reduce CVD risk. Forty-two percent of elderly Ecuadorians were at high risk of suffering CVD over a 5-year period, indicating an urgent need to tailor primary preventive measures for this vulnerable and high-risk population. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
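Agreement and reclassification between two risk categorisations are simple to compute; the sketch below uses made-up category labels and a generic kappa implementation, not the SCORE OP data.

```python
from sklearn.metrics import cohen_kappa_score

high_model = ["low", "high", "moderate", "high", "low", "moderate", "high", "low"]
low_model  = ["low", "high", "high",     "high", "low", "low",      "high", "low"]

kappa = cohen_kappa_score(high_model, low_model)
reclassified = sum(a != b for a, b in zip(high_model, low_model)) / len(high_model)
print(f"Cohen's kappa: {kappa:.2f}; proportion reclassified: {reclassified:.0%}")
```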
Claes, E; Evers-Kiebooms, G; Denayer, L; Decruyenaere, M; Boogaerts, A; Philippe, K; Legius, E
2005-10-01
This prospective study evaluates emotional functioning and illness representations in 68 unaffected women (34 carriers/34 noncarriers) 1 year after predictive testing for BRCA1/2 mutations when offered within a multidisciplinary approach. Carriers had higher subjective risk perception of breast cancer than noncarriers. Carriers who did not have prophylactic oophorectomy had the highest risk perception of ovarian cancer. No differences were found between carriers and noncarriers regarding perceived seriousness and perceived control of breast and ovarian cancer. Mean levels of distress were within normal ranges. Only a few women showed an overall pattern of clinically elevated distress. Cancer-specific distress and state-anxiety significantly decreased in noncarriers from pre- to posttest, while general distress remained about the same. There were no significant changes in distress in the group of carriers except for ovarian cancer distress, which significantly decreased from pre- to posttest. Our study did not reveal adverse effects of predictive testing when offered in the context of a multidisciplinary approach.
Bioclinical Test to Predict Nephropathia Epidemica Severity at Hospital Admission.
Hentzien, Maxime; Mestrallet, Stéphanie; Halin, Pascale; Pannet, Laure-Anne; Lebrun, Delphine; Dramé, Moustapha; Bani-Sadr, Firouzé; Galempoix, Jean-Marc; Strady, Christophe; Reynes, Jean-Marc; Penalba, Christian; Servettaz, Amélie
2018-06-01
We conducted a multicenter, retrospective cohort study of hospitalized patients with serologically proven nephropathia epidemica (NE) living in Ardennes Department, France, during 2000-2014 to develop a bioclinical test predictive of severe disease. Among 205 patients, 45 (22.0%) had severe NE. We found the following factors predictive of severe NE: nephrotoxic drug exposure (p = 0.005, point value 10); visual disorders (p = 0.02, point value 8); microscopic or macroscopic hematuria (p = 0.04, point value 7); leukocyte count >10 × 10⁹ cells/L (p = 0.01, point value 9); and thrombocytopenia <90 × 10⁹/L (p = 0.003, point value 11). When point values for each factor were summed, we found a score of <10 identified low-risk patients (3.3% had severe disease), and a score >20 identified high-risk patients (45.3% had severe disease). If validated in future studies, this test could be used to stratify patients by severity in research studies and in clinical practice.
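The scoring rule itself is a sum of the point values listed above, with <10 read as low risk and >20 as high risk; a small sketch (the "intermediate" label for scores in between is shorthand of mine, not defined in the abstract):

```python
POINTS = {"nephrotoxic_drug_exposure": 10, "visual_disorders": 8, "hematuria": 7,
          "leukocytes_over_10e9_per_L": 9, "platelets_under_90e9_per_L": 11}

def ne_score(findings):
    """Sum the point values of the findings present."""
    return sum(points for factor, points in POINTS.items() if findings.get(factor))

patient = {"nephrotoxic_drug_exposure": True, "hematuria": True,
           "platelets_under_90e9_per_L": True}
score = ne_score(patient)
band = "low risk" if score < 10 else "high risk" if score > 20 else "intermediate"
print(score, band)        # 28 -> high risk
```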
Lau, Anna F; Kabir, Masrura; Chen, Sharon C-A; Playford, E Geoffrey; Marriott, Deborah J; Jones, Michael; Lipman, Jeffrey; McBryde, Emma; Gottlieb, Thomas; Cheung, Winston; Seppelt, Ian; Iredell, Jonathan; Sorrell, Tania C
2015-04-01
Colonization with Candida species is an independent risk factor for invasive candidiasis (IC), but the minimum and most practicable parameters for prediction of IC have not been optimized. We evaluated Candida colonization in a prospective cohort of 6,015 nonneutropenic, critically ill patients. Throat, perineum, and urine were sampled 72 h post-intensive care unit (ICU) admission and twice weekly until discharge or death. Specimens were cultured onto chromogenic agar, and a subset underwent molecular characterization. Sixty-three (86%) patients who developed IC were colonized prior to infection; 61 (97%) tested positive within the first two time points. The median time from colonization to IC was 7 days (range, 0 to 35). Colonization at any site was predictive of IC, with the risk of infection highest for urine colonization (relative risk [RR]=2.25) but with the sensitivity highest (98%) for throat and/or perineum colonization. Colonization of ≥2 sites and heavy colonization of ≥1 site were significant independent risk factors for IC (RR=2.25 and RR=3.7, respectively), increasing specificity to 71% to 74% but decreasing sensitivity to 48% to 58%. Molecular testing would have prompted a resistance-driven decision to switch from fluconazole treatment in only 11% of patients infected with C. glabrata, based upon species-level identification alone. Positive predictive values (PPVs) were low (2% to 4%) and negative predictive values (NPVs) high (99% to 100%) regardless of which parameters were applied. In the Australian ICU setting, culture of throat and perineum within the first two time points after ICU admission captures 84% (61/73 patients) of subsequent IC cases. These optimized parameters, in combination with clinical risk factors, should strengthen development of a setting-specific risk-predictive model for IC. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
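The low positive and high negative predictive values follow directly from Bayes' rule at a low incidence of invasive candidiasis; a quick sketch using sensitivity and specificity in the reported range and an incidence of roughly 73/6,015 inferred from the abstract's counts:

```python
def ppv_npv(sens, spec, prevalence):
    """Positive and negative predictive values from sensitivity, specificity and prevalence."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

ppv, npv = ppv_npv(sens=0.98, spec=0.29, prevalence=73 / 6015)
print(f"PPV ≈ {ppv:.1%}, NPV ≈ {npv:.1%}")     # roughly 2% and nearly 100%
```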
Novel naïve Bayes classification models for predicting the chemical Ames mutagenicity.
Zhang, Hui; Kang, Yan-Li; Zhu, Yuan-Yuan; Zhao, Kai-Xia; Liang, Jun-Yu; Ding, Lan; Zhang, Teng-Guo; Zhang, Ji
2017-06-01
Prediction of drug candidates for mutagenicity is a regulatory requirement since mutagenic compounds could pose a toxic risk to humans. The aim of this investigation was to develop a novel prediction model of mutagenicity by using a naïve Bayes classifier. The established model was validated by the internal 5-fold cross validation and external test sets. For comparison, the recursive partitioning classifier prediction model was also established and other various reported prediction models of mutagenicity were collected. Among these methods, the prediction performance of the naïve Bayes classifier established here was good and stable, yielding average overall prediction accuracies of 89.1±0.4% for the internal 5-fold cross validation of the training set and 77.3±1.5% for external test set I. The concordance of the external test set II with 446 marketed drugs was 90.9±0.3%. In addition, four simple molecular descriptors (e.g., Apol, No. of H donors, Num-Rings and Wiener) related to mutagenicity and five representative substructures of mutagens (e.g., aromatic nitro, hydroxyl amine, nitroso, aromatic amine and N-methyl-N-methylenemethanaminum) produced by ECFP_14 fingerprints were identified. We hope the established naïve Bayes prediction model can be applied to risk assessment processes, and the obtained important information of mutagenic chemicals can guide the design of chemical libraries for hit and lead optimization. Copyright © 2017 Elsevier B.V. All rights reserved.
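A naïve Bayes mutagenicity classifier of this kind can be sketched on binary fingerprint bits; the data below are synthetic stand-ins (real work would compute ECFP-type fingerprints from structures, for example with RDKit), and the model is a generic scikit-learn Bernoulli naïve Bayes rather than the authors' implementation.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(1000, 256))                              # 256-bit fingerprints (synthetic)
y = (X[:, :8].sum(axis=1) + rng.normal(0, 1, 1000) > 4).astype(int)   # synthetic Ames label

acc = cross_val_score(BernoulliNB(), X, y, cv=5, scoring="accuracy")
print("5-fold cross-validated accuracy: %.1f%% ± %.1f%%" % (100 * acc.mean(), 100 * acc.std()))
```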
James, Katherine M.; Cowl, Clayton T.; Tilburt, Jon C.; Sinicrope, Pamela S.; Robinson, Marguerite E.; Frimannsdottir, Katrin R.; Tiedje, Kristina; Koenig, Barbara A.
2011-01-01
OBJECTIVE: To assess the impact of direct-to-consumer (DTC) predictive genomic risk information on perceived risk and worry in the context of routine clinical care. PATIENTS AND METHODS: Patients attending a preventive medicine clinic between June 1 and December 18, 2009, were randomly assigned to receive either genomic risk information from a DTC product plus usual care (n=74) or usual care alone (n=76). At intervals of 1 week and 1 year after their clinic visit, participants completed surveys containing validated measures of risk perception and levels of worry associated with the 12 conditions assessed by the DTC product. RESULTS: Of 345 patients approached, 150 (43%) agreed to participate, 64 (19%) refused, and 131 (38%) did not respond. Compared with those receiving usual care, participants who received genomic risk information initially rated their risk as higher for 4 conditions (abdominal aneurysm [P=.001], Graves disease [P=.04], obesity [P=.01], and osteoarthritis [P=.04]) and lower for one (prostate cancer [P=.02]). Although differences were not significant, they also reported higher levels of worry for 7 conditions and lower levels for 5 others. At 1 year, there were no significant differences between groups. CONCLUSION: Predictive genomic risk information modestly influences risk perception and worry. The extent and direction of this influence may depend on the condition being tested and its baseline prominence in preventive health care and may attenuate with time. Trial Registration: clinicaltrials.gov identifier: NCT00782366 PMID:21964170
INTERSPECIES CORRELATION ESTIMATES PREDICT PROTECTIVE ENVIRONMENTAL CONCENTRATIONS
Environmental risk assessments often use multiple single species toxicity test results and species sensitivity distributions (SSDs) to derive a predicted no-effect concentration in the environment, typically the 5th percentile of the SSD, termed the HC5. The shape and location of...
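A minimal HC5 calculation, assuming a log-normal species sensitivity distribution fitted to single-species toxicity values; the numbers are invented for illustration only.

```python
import numpy as np
from scipy import stats

toxicity_ug_per_L = np.array([8.5, 12.0, 15.0, 22.0, 35.0, 41.0, 60.0, 95.0])  # hypothetical LC50s
mu = np.mean(np.log(toxicity_ug_per_L))
sigma = np.std(np.log(toxicity_ug_per_L), ddof=1)
hc5 = np.exp(stats.norm.ppf(0.05, loc=mu, scale=sigma))        # 5th percentile of the SSD
print(f"HC5 ≈ {hc5:.1f} ug/L")
```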
Predictive Medicine: Recombinant DNA Technology and Adult-Onset Genetic Disorders
Hayden, Michael
1988-01-01
Genetic factors are of great importance in common adult-onset disorders such as atherosclerosis, cancer, and neuro-degenerative diseases. Advances in DNA technology now allow identification of persons at high risk of developing some of these diseases. This advance is leading to predictive medicine. In some genetic disorders, such as those leading to atherosclerosis and cancer, identification of high-risk individuals allows intervention which alters the natural history of the disorder. In other diseases, for which there is no treatment, such as Huntington's disease, the application of this technology provides information that relieves uncertainty and may affect quality of life, but does not alter the course of the illness. General implementation of predictive testing programs awaits the results of pilot projects, which will demonstrate the needs, appropriate levels of support, and guidelines for delivery of such testing. PMID:21253100
Performance of diagnosis-based risk adjustment measures in a population of sick Australians.
Duckett, S J; Agius, P A
2002-12-01
Australia is beginning to explore 'managed competition' as an organising framework for the health care system. This requires setting fair capitation rates, i.e. rates that adjust for the risk profile of covered lives. This paper tests two US-developed risk adjustment approaches using Australian data. Data from the 'co-ordinated care' dataset (which incorporates all service costs of 16,538 participants in a large health service research project conducted in 1996-99) were grouped into homogeneous risk categories using risk adjustment 'grouper software'. The grouper products yielded sets of homogeneous categories, including Diagnostic Groups and Diagnostic Cost Groups. A two-stage analysis of predictive power was used: probability of any service use in the concurrent year, next year and the year after (logistic regression) and, for service users, a regression of logged cost of service use. The independent variables were diagnosis, gender, a SES variable and age. Age-, gender- and diagnosis-based risk adjustment measures explain around 40-45% of variation in costs of service use in the current year for untrimmed data (compared with around 15% for age and gender alone). Prediction of subsequent use is much poorer (around 20%). Using more information to assign people to risk categories generally improves prediction. Predictive power of diagnosis-based risk adjusters on this Australian dataset is similar to that found in US studies. Low predictive power carries policy risks of cream skimming rather than managing population health and care. Competitive funding models with risk adjustment on prior year experience could reduce system efficiency if implemented with current risk adjustment technology.
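The two-stage analysis described above, a logistic model for any service use followed by a linear regression of logged cost among users, can be sketched with generic tools; the features and data here are synthetic placeholders rather than the co-ordinated care dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))                                  # stand-ins for age, gender, SES, diagnosis groups
used = (X[:, 0] + rng.normal(0, 1, 2000) > 0).astype(int)       # any service use this year
cost = np.where(used == 1, np.exp(6 + X[:, 1] + rng.normal(0, 0.5, 2000)), 0.0)

stage1 = LogisticRegression().fit(X, used)                      # stage 1: probability of any use
users = used == 1
stage2 = LinearRegression().fit(X[users], np.log(cost[users]))  # stage 2: log cost among users
print("Stage 2 R^2 (log cost among users):", round(stage2.score(X[users], np.log(cost[users])), 2))
```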
Lin, Kai-Yang; Zheng, Wei-Ping; Bei, Wei-Jie; Chen, Shi-Qun; Islam, Sheikh Mohammed Shariful; Liu, Yong; Xue, Lin; Tan, Ning; Chen, Ji-Yan
2017-03-01
Only a few studies have developed simple risk models for predicting CIN, which is associated with a poor prognosis after emergent PCI. The study aimed to develop and validate a novel tool for predicting the risk of contrast-induced nephropathy (CIN) in patients undergoing emergent percutaneous coronary intervention (PCI). 692 consecutive patients undergoing emergent PCI between January 2010 and December 2013 were randomly (2:1) assigned to a development dataset (n=461) and a validation dataset (n=231). Multivariate logistic regression was applied to identify independent predictors of CIN and to establish a CIN prediction model, whose prognostic accuracy was assessed using the c-statistic for discrimination and the Hosmer-Lemeshow test for calibration. The overall incidence of CIN was 55 (7.9%). A total of 11 variables were analyzed; age >75 years, baseline serum creatinine (SCr) >1.5 mg/dl, hypotension and the use of an intra-aortic balloon pump (IABP) were identified and entered the risk score model (Chen). The incidence of CIN was 32 (6.9%) in the development dataset (low risk (score=0), 1.0%; moderate risk (score 1-2), 13.4%; high risk (score≥3), 90.0%). Compared to the classical Mehran and ACEF CIN risk score models, the risk score (Chen) across the subgroups of the study population exhibited similar discrimination and predictive ability for CIN (c-statistic: 0.828, 0.776 and 0.853, respectively) and for in-hospital mortality and 2- and 3-year mortality (c-statistic: 0.738, 0.750 and 0.845, respectively) in the validation population. Our data showed that this simple risk model exhibited good discrimination and predictive ability for CIN, similar to the Mehran and ACEF scores, and even for long-term mortality after emergent PCI. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Investigating Married Adults' Communal Coping with Genetic Health Risk and Perceived Discrimination
Smith, Rachel A.; Sillars, Alan; Chesnut, Ryan P.; Zhu, Xun
2017-01-01
Increased genetic testing in personalized medicine presents unique challenges for couples, including managing disease risk and potential discrimination as a couple. This study investigated couples' conflicts and support gaps as they coped with perceived genetic discrimination. We also explored the degree to which communal coping was beneficial in reducing support gaps, and ultimately stress. Dyadic analysis of married adults (N = 266, 133 couples), in which one person had the genetic risk for serious illness, showed that perceived discrimination predicted more frequent conflicts about AATD-related treatment, privacy boundaries, and finances, which, in turn, predicted wider gaps in emotion and esteem support, and greater stress for both spouses. Communal coping predicted lower support gaps for both partners and marginally lower stress. PMID:29731540
Investigating Married Adults' Communal Coping with Genetic Health Risk and Perceived Discrimination.
Smith, Rachel A; Sillars, Alan; Chesnut, Ryan P; Zhu, Xun
2018-01-01
Increased genetic testing in personalized medicine presents unique challenges for couples, including managing disease risk and potential discrimination as a couple. This study investigated couples' conflicts and support gaps as they coped with perceived genetic discrimination. We also explored the degree to which communal coping was beneficial in reducing support gaps, and ultimately stress. Dyadic analysis of married adults ( N = 266, 133 couples), in which one person had the genetic risk for serious illness, showed that perceived discrimination predicted more frequent conflicts about AATD-related treatment, privacy boundaries, and finances, which, in turn, predicted wider gaps in emotion and esteem support, and greater stress for both spouses. Communal coping predicted lower support gaps for both partners and marginally lower stress.
Ronald, Lisa A; Campbell, Jonathon R; Balshaw, Robert F; Roth, David Z; Romanowski, Kamila; Marra, Fawziah; Cook, Victoria J; Johnston, James C
2016-01-01
Introduction Improved understanding of risk factors for developing active tuberculosis (TB) will better inform decisions about diagnostic testing and treatment for latent TB infection (LTBI) in migrant populations in low-incidence regions. We aim to examine TB risk factors among the foreign-born population in British Columbia (BC), Canada, and to create and validate a clinically relevant multivariate risk score to predict active TB. Methods and analysis This retrospective population-based cohort study will include all foreign-born individuals who acquired permanent resident status in Canada between 1 January 1985 and 31 December 2013 and acquired healthcare coverage in BC at any point during this period. Multiple administrative databases and disease registries will be linked, including a National Immigration Database, BC Provincial Health Insurance Registration, physician billings, hospitalisations, drugs dispensed from community pharmacies, vital statistics, HIV testing and notifications, cancer, chronic kidney disease and dialysis treatment, and all TB and LTBI testing and treatment data in BC. Extended proportional hazards regression will be used to estimate risk factors for TB and to create a prognostic TB risk score. Ethics and dissemination Ethical approval for this study has been obtained from the University of British Columbia Clinical Ethics Review Board. Once completed, study findings will be presented at conferences and published in peer-reviewed journals. An online TB risk score calculator will also be created. PMID:27888179
The role of high-risk HPV-DNA testing in the male sexual partners of women with HPV-induced lesions.
Giraldo, Paulo C; Eleutério, Jose; Cavalcante, Diane Isabelle M; Gonçalves, Ana Katherine S; Romão, Juliana A A; Eleutério, Renata M N
2008-03-01
The objectives were to assess the prevalence of high-risk HPV in the male sexual partners of women with HPV-induced lesions, and correlate it with biopsies guided by peniscopy. Fifty-four asymptomatic male sexual partners of women with low-grade squamous intra-epithelial lesions (LSIL) associated with high-risk HPV were examined between April 2003 and June 2005. The DNA-HPV was tested using a second-generation hybrid capture technique in scraped penile samples. Peniscopy identified acetowhite lesions leading to biopsy. High-risk HPV was present in 25.9% (14 out of 54) of the cases. Peniscopy led to 13 biopsies (24.07%), which resulted in two cases of condyloma, two cases of intra-epithelial neoplasia (PIN) I, one case of PIN II, and eight cases of normal tissue. The high-risk HPV test demonstrated 80% sensitivity, 100% specificity, 100% positive predictive value, and 88.9% negative predictive value for the identification of penile lesions. There was a greater chance of finding HPV lesions in the biopsy in the positive cases of high-risk HPV with abnormal peniscopy (p=0.007); OR=51 (CI 1.7-1527.1). Among asymptomatic male sexual partners of women with low-grade intra-epithelial squamous lesions, those infected by high-risk HPV have a higher chance of having abnormal penile tissue compared with male partners without that infection.
Hernández, Domingo; Sánchez-Fructuoso, Ana; González-Posada, José Manuel; Arias, Manuel; Campistol, Josep María; Rufino, Margarita; Morales, José María; Moreso, Francesc; Pérez, Germán; Torres, Armando; Serón, Daniel
2009-09-27
All-cause mortality is high after kidney transplantation (KT), but no prognostic index has focused on predicting mortality in KT using baseline and emergent comorbidity after KT. A total of 4928 KT recipients were used to derive a risk score predicting mortality. Patients were randomly assigned to two groups: a modeling population (n=2452), used to create a new index, and a testing population (n=2476), used to test this index. Multivariate Cox regression model coefficients of baseline (age, weight, time on dialysis, diabetes, hepatitis C, and delayed graft function) and emergent comorbidity within the first posttransplant year (diabetes, proteinuria, renal function, and immunosuppressants) were used to weigh each variable in the calculation of the score and allocated into risk quartiles. The probability of death at 3 years, estimated by the baseline cumulative hazard function from the Cox model [P(death) = 1 - 0.993592764^exp(score/100)], increased from 0.9% in the lowest-risk quartile (score=40) to 4.7% in the highest-risk quartile (score=200). The observed incidence of death increased with increasing risk quartiles in the testing population (log-rank analysis, P<0.0001). The overall C-index was 0.75 (95% confidence interval: 0.72-0.78) and 0.74 (95% confidence interval: 0.70-0.77) in the two populations, respectively. This new index is an accurate tool to identify high-risk patients for mortality after KT.
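Reading the published formula literally, the 3-year death probability is the baseline survival 0.993592764 raised to exp(score/100), subtracted from 1; evaluating it at the quartile scores quoted above gives values close to the 0.9% and 4.7% reported.

```python
import math

def prob_death_3yr(score):
    """P(death at 3 years) = 1 - 0.993592764 ** exp(score / 100)."""
    return 1 - 0.993592764 ** math.exp(score / 100)

for score in (40, 200):                            # lowest- and highest-risk quartile examples
    print(score, f"{prob_death_3yr(score):.2%}")   # prints 0.95% and 4.64%
```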
The risk of coronary heart disease of seafarers on vessels sailing under a German flag.
Oldenburg, Marcus; Jensen, Hans-Joachim; Latza, Ute; Baur, Xaver
2010-01-01
This study aimed to predict the risk of coronary heart disease (CHD) among seafarers on German-flagged vessels and to assess the association of shipboard job duration at sea with the risk of CHD. During the legally required medical fitness test for nautical service, 161 seafarers in Hamburg participated in a cross-sectional study which included an interview, blood sampling, and blood pressure measurements (response 84.9%). The predicted 10-year risk of an acute coronary event of the examined German seafarers aged 35 to 64 years (n = 46) was assessed in comparison with a sample of male German employees of the same age working ashore (PROCAM study). The number of independent CHD risk factors (according to the PROCAM study) was compared in the groups with 'shorter' and 'longer' median shipboard job duration at sea (15.0 years). The examined German seafarers had a similar age-standardized predicted 10-year CHD risk as the German reference population. Nearly all independent CHD risk factors were significantly more frequent in seamen with job duration at sea of ≥ 15 years than in those with < 15 years. After adjusting for age, the number of CHD risk factors was associated with job duration (OR 1.08 [95% CI 1.02-1.14] per year). Seafarers on German-flagged ships have to attend a medical fitness test for nautical service every 2 years. Thus, it can be assumed that seafarers present a healthier population than employees ashore. In this study, however, CHD risk of seafarers was similar to that of the reference population. This may indicate that working onboard implies a high coronary risk. Furthermore, the study results suggest a tendency of increased risk of CHD among seafarers with longer job duration at sea.
Pedophilia: an evaluation of diagnostic and risk prediction methods.
Wilson, Robin J; Abracen, Jeffrey; Looman, Jan; Picheca, Janice E; Ferguson, Meaghan
2011-06-01
One hundred thirty child sexual abusers were diagnosed using each of the following four methods: (a) phallometric testing, (b) strict application of Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision [DSM-IV-TR]) criteria, (c) Rapid Risk Assessment of Sex Offender Recidivism (RRASOR) scores, and (d) "expert" diagnoses rendered by a seasoned clinician. Comparative utility and intermethod consistency of these methods are reported, along with recidivism data indicating predictive validity for risk management. Results suggest that inconsistency exists in diagnosing pedophilia, leading to diminished accuracy in risk assessment. Although the RRASOR and DSM-IV-TR methods were significantly correlated with expert ratings, RRASOR and DSM-IV-TR were unrelated to each other. Deviant arousal was not associated with any of the other methods. Only the expert ratings and RRASOR scores were predictive of sexual recidivism. Logistic regression analyses showed that expert diagnosis did not add to prediction of sexual offence recidivism over and above RRASOR alone. Findings are discussed within a context of encouragement of clinical consistency and evidence-based practice regarding treatment and risk management of those who sexually abuse children.
Predicting MCI outcome with clinically available MRI and CSF biomarkers
Heister, D.; Brewer, J.B.; Magda, S.; Blennow, K.
2011-01-01
Objective: To determine the ability of clinically available volumetric MRI (vMRI) and CSF biomarkers, alone or in combination with a quantitative learning measure, to predict conversion to Alzheimer disease (AD) in patients with mild cognitive impairment (MCI). Methods: We stratified 192 MCI participants into positive and negative risk groups on the basis of 1) degree of learning impairment on the Rey Auditory Verbal Learning Test; 2) medial temporal atrophy, quantified from Food and Drug Administration–approved software for automated vMRI analysis; and 3) CSF biomarker levels. We also stratified participants based on combinations of risk factors. We computed Cox proportional hazards models, controlling for age, to assess 3-year risk of converting to AD as a function of risk group and used Kaplan-Meier analyses to determine median survival times. Results: When risk factors were examined separately, individuals testing positive showed significantly higher risk of converting to AD than individuals testing negative (hazard ratios [HR] 1.8–4.1). The joint presence of any 2 risk factors substantially increased risk, with the combination of greater learning impairment and increased atrophy associated with highest risk (HR 29.0): 85% of patients with both risk factors converted to AD within 3 years, vs 5% of those with neither. The presence of medial temporal atrophy was associated with shortest median dementia-free survival (15 months). Conclusions: Incorporating quantitative assessment of learning ability along with vMRI or CSF biomarkers in the clinical workup of MCI can provide critical information on risk of imminent conversion to AD. PMID:21998317
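Median dementia-free survival is read off a Kaplan-Meier (product-limit) curve as the first time the estimated survival falls to 0.5; a compact sketch with toy follow-up data (months to conversion, 1 = converted to AD, 0 = censored), not the ADNI-style data used in the study:

```python
import numpy as np

def kaplan_meier_median(time, event):
    """First time at which the Kaplan-Meier survival estimate drops to 0.5 or below."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk, survival = len(time), 1.0
    for t, e in zip(time, event):
        if e:
            survival *= (at_risk - 1) / at_risk
            if survival <= 0.5:
                return t
        at_risk -= 1
    return float("inf")            # median not reached within follow-up

months    = [6, 9, 12, 15, 15, 18, 24, 30, 36, 36]
converted = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
print(kaplan_meier_median(months, converted))   # 18 (months)
```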
Mortality determinants and prediction of outcome in high risk newborns.
Dalvi, R; Dalvi, B V; Birewar, N; Chari, G; Fernandez, A R
1990-06-01
The aim of this study was to determine independent patient-related predictors of mortality in high-risk newborns admitted to our centre. The study population comprised 100 consecutive newborns each from the premature unit (PU) and the sick baby care unit (SBCU). Thirteen high-risk factors (variables) for each of the two units were entered into a multivariate regression analysis. Variables with independent predictive value for poor outcome (i.e., death) in the PU were weight less than 1 kg, hyaline membrane disease, neurologic problems, and intravenous therapy. High-risk factors in the SBCU included blood gas abnormality, bleeding phenomena, recurrent convulsions, apnea, and congenital anomalies. Identification of these factors guided us in defining priority areas for improvement in our system of neonatal care. Also, based on these variables, a simple predictive score for outcome was constructed. The prediction equation and the score were cross-validated by applying them to a 'test-set' of 100 newborns each for the PU and SBCU. Results showed a comparable sensitivity, specificity and error rate.
Wilson, Richard; Goodacre, Steve W; Klingbajl, Marcin; Kelly, Anne-Maree; Rainer, Tim; Coats, Tim; Holloway, Vikki; Townend, Will; Crane, Steve
2014-01-01
Background and objective: Risk-adjusted mortality rates can be used as a quality indicator if it is assumed that the discrepancy between predicted and actual mortality can be attributed to the quality of healthcare (ie, the model has attributional validity). The Development And Validation of Risk-adjusted Outcomes for Systems of emergency care (DAVROS) model predicts 7-day mortality in emergency medical admissions. We aimed to test this assumption by evaluating the attributional validity of the DAVROS risk-adjustment model. Methods: We selected cases that had the greatest discrepancy between observed mortality and predicted probability of mortality from seven hospitals involved in validation of the DAVROS risk-adjustment model. Reviewers at each hospital assessed hospital records to determine whether the discrepancy between predicted and actual mortality could be explained by the healthcare provided. Results: We received 232/280 (83%) completed review forms relating to 179 unexpected deaths and 53 unexpected survivors. The healthcare system was judged to have potentially contributed to 10/179 (8%) of the unexpected deaths and 26/53 (49%) of the unexpected survivors. Failure of the model to appropriately predict risk was judged to be responsible for 135/179 (75%) of the unexpected deaths and 2/53 (4%) of the unexpected survivors. Some 10/53 (19%) of the unexpected survivors died within a few months of the 7-day period of model prediction. Conclusions: We found little evidence that deaths occurring in patients with a low predicted mortality from risk-adjustment could be attributed to the quality of healthcare provided. PMID:23605036
Suicide risk assessment: Trust an implicit probe or listen to the patient?
Harrison, Dominique P; Stritzke, Werner G K; Fay, Nicolas; Hudaib, Abdul-Rahman
2018-05-21
Previous research suggests implicit cognition can predict suicidal behavior. This study examined the utility of the death/suicide implicit association test (d/s-IAT) in acute and prospective assessment of suicide risk and protective factors, relative to clinician and patient estimates of future suicide risk. Patients (N = 128; 79 female; 111 Caucasian) presenting to an emergency department were recruited if they reported current suicidal ideation or had been admitted because of an acute suicide attempt. Patients completed the d/s-IAT and self-report measures assessing three death-promoting (e.g., suicide ideation) and two life-sustaining (e.g., zest for life) factors, with self-report measures completed again at 3- and 6-month follow-ups. The clinician and patient each provided risk estimates of that patient making a suicide attempt within the next 6 months. Results showed that among current attempters, the d/s-IAT differentiated between first-time and multiple attempters, with multiple attempters having significantly weaker self-associations with life relative to death. The d/s-IAT was associated with concurrent suicidal ideation and zest for life, but only predicted the desire to die prospectively at 3 months. By contrast, clinician and patient estimates predicted suicide risk at 3- and 6-month follow-up, with clinician estimates predicting death-promoting factors, and only patient estimates predicting life-sustaining factors. The utility of the d/s-IAT was more pronounced in the assessment of concurrent risk. Prospectively, clinician and patient predictions complemented each other in predicting suicide risk and resilience, respectively. Our findings indicate that collaborative rather than implicit approaches add greater value to the management of risk and recovery in suicidal patients. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Vance, J Eric; Bowen, Natasha K; Fernandez, Gustavo; Thompson, Shealy
2002-01-01
To identify predictors of behavioral outcomes in high-risk adolescents with aggression and serious emotional disturbance (SED). Three hundred thirty-seven adolescents from a statewide North Carolina treatment program for aggressive youths with SED were followed between July 1995 and June 1999 from program entry (T1) to approximately 1 year later (T2). Historical and current psychosocial risk and protective factors as well as psychiatric symptom severity at T1 were tested as predictors of high and low behavioral functioning at T2. Behavioral functioning was a composite based on the frequency of risk-taking, self-injurious, threatening, and assaultive behavior. Eleven risk and protective factors were predictive of T2 behavioral functioning, while none of the measured T1 psychiatric symptoms was predictive. A history of aggression and negative parent-child relationships in childhood was predictive of worse T2 behavior, as was lower IQ. Better T2 behavioral outcomes were predicted by a history of consistent parental employment and positive parent-child relations, higher levels of current family support, contact with prosocial peers, higher reading level, good problem-solving abilities, and superior interpersonal skills. Among high-risk adolescents with aggression and SED, psychiatric symptom severity may be a less important predictor of behavioral outcomes than certain risk and protective factors. Several factors predictive of good behavioral functioning represent feasible intervention targets.
Spannenkrebs, M; Crispin, A; Krämer, D
2013-12-01
The new examination before primary school enrollment in Baden-Wuerttemberg aims to detect problems in early childhood development relevant to later school success in time to initiate supportive measures, especially to improve the language skills of children whose native language is not German. Through a 2-level process, composed of a screening of language skills (HASE and KVS) and an additional test (SETK 3-5) for children who did not pass the screening, school physicians attested a special need for language promotion in kindergarten. This study examined risks associated with a special need for language promotion and determined the test quality of the 2-level process for identifying that need. This cross-sectional analysis used findings on n=80,781 children from the Baden-Wuerttemberg data set of the new examination before primary school enrollment (children enrolled in school in 2011). 56,352 children (69.8%) spoke German at home and 24,429 children (30.2%) had other family languages. 20,461 children (25.3%) had a special need for language promotion in kindergarten. A logistic regression model was developed to determine the main risk factors for a special need for language promotion. The main effects were other native languages (OR 5.1 [4.8; 5.2]), problems in subitising (OR 2.8 [2.7; 3.0]) and language development delays reported in the nursery school teachers' questionnaire (OR 3.5 [3.3; 3.7]). Protective effects were higher educational attainment of the mother (OR 0.7 [0.7; 0.7]) or the father (OR 0.8 [0.7; 0.8]). Risk scores based on these effects were defined, and the predictive probability corresponding to different risk score levels was calculated. With respect to a special need for language promotion, the true positive rate of the language screening (HASE/KVS) was 0.95, the true negative rate was 0.72 and the positive predictive value was 0.53; the school physician's finding of a special need for language promotion served as the gold standard. With the additional test (SETK 3-5), the positive predictive value improved to 0.9 if at least one of the SETK 3-5 subtests was not passed. The risk score level corresponded to the pretest probability and hence to the positive predictive value of the language screening. This study showed an adequate test quality of the 2-level process in the new examination before primary school enrollment in Baden-Wuerttemberg (screening of language skills plus additional testing if the screening is not passed). In addition, children with a special need for language promotion had identifiable associated risks. The risk scores defined here offer school physicians an informative tool for gauging the positive predictive value of the language screening without additional testing. © Georg Thieme Verlag KG Stuttgart · New York.
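The reported screening PPV of 0.53 follows directly from the stated sensitivity, specificity and pretest probability via Bayes' rule; a minimal sketch:

```python
# Minimal sketch: PPV of the language screening from the sensitivity, specificity
# and pretest probability reported above, via Bayes' rule.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Values from the abstract: sensitivity 0.95, specificity 0.72, prevalence 25.3%
print(round(ppv(0.95, 0.72, 0.253), 2))  # -> 0.53, matching the reported value
```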
Sussman, Jeremy B; Wiitala, Wyndy L; Zawistowski, Matthew; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A
2017-09-01
Accurately estimating cardiovascular risk is fundamental to good decision-making in cardiovascular disease (CVD) prevention, but risk scores developed in one population often perform poorly in dissimilar populations. We sought to examine whether a large integrated health system can use its electronic health data to better predict individual patients' risk of developing CVD. We created a cohort of all patients aged 45-80 who used Department of Veterans Affairs (VA) ambulatory care services in 2006 with no history of CVD, heart failure, or loop diuretic use. Our outcome variable was new-onset CVD in 2007-2011. We then developed a series of recalibrated scores, including a fully refit "VA Risk Score-CVD (VARS-CVD)." We tested the different scores using standard measures of prediction quality. For the 1,512,092 patients in the study, the atherosclerotic cardiovascular disease (ASCVD) risk score had similar discrimination to the VARS-CVD (c-statistic of 0.66 in men and 0.73 in women), but the ASCVD model had poor calibration, predicting 63% more events than observed. Calibration was excellent in the fully recalibrated VARS-CVD tool, but the simpler techniques tested proved less reliable. We found that local electronic health record data can be used to estimate CVD risk better than an established risk score based on research populations. Recalibration improved estimates dramatically, and the type of recalibration was important. Such tools can also easily be integrated into a health system's electronic health record and can be more readily updated.
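The abstract contrasts a fully refit score with simpler recalibration techniques. One common simple option, sketched below purely as an illustration (not necessarily the authors' procedure), is logistic recalibration: keep the published score's linear predictor and refit only its intercept and slope on local data. The file and column names (ascvd_prob, cvd_event) are hypothetical.

```python
# Hedged sketch of logistic recalibration of an external risk score on local data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("va_cohort.csv")  # hypothetical local cohort
eps = 1e-6
p = df["ascvd_prob"].clip(eps, 1 - eps)
lp = np.log(p / (1 - p))  # linear predictor implied by the original score

recal = LogisticRegression().fit(lp.to_numpy().reshape(-1, 1), df["cvd_event"])
df["recalibrated_prob"] = recal.predict_proba(lp.to_numpy().reshape(-1, 1))[:, 1]

# Quick calibration-in-the-large check: mean predicted vs observed event rate
print(df["recalibrated_prob"].mean(), df["cvd_event"].mean())
```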
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculation of the more informative PPV and NPV requires an a priori estimate of the probability (R). The present document discusses the challenges of estimating R.
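A minimal sketch of the quantities discussed above, treating R as the a priori probability that the tested effect is real (some formulations use prior odds instead); the alpha and power values are illustrative.

```python
# Minimal sketch: PPV and NPV of a significance test given an a priori probability R
# that the effect exists, the test's alpha level, and its statistical power.
def ppv_npv(R: float, alpha: float = 0.05, power: float = 0.8) -> tuple[float, float]:
    ppv = (power * R) / (power * R + alpha * (1 - R))
    npv = ((1 - alpha) * (1 - R)) / ((1 - alpha) * (1 - R) + (1 - power) * R)
    return ppv, npv

for R in (0.1, 0.5, 0.9):
    ppv, npv = ppv_npv(R)
    print(f"R={R:.1f}  PPV={ppv:.2f}  NPV={npv:.2f}")
```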
Human papillomavirus DNA testing as an adjunct to cytology in cervical screening programs.
Lörincz, Attila T; Richart, Ralph M
2003-08-01
Our objective was to review current large studies of human papillomavirus (HPV) DNA testing as an adjunct to the Papanicolaou test for cervical cancer screening programs. We analyzed 10 large screening studies that used the Hybrid Capture 2 test and 3 studies that used the polymerase chain reaction test in a manner that enabled reliable estimates of accuracy for detecting or predicting high-grade cervical intraepithelial neoplasia (CIN). Most studies allowed comparison of HPV DNA and Papanicolaou testing and estimates of the performance of Papanicolaou and HPV DNA as combined tests. The studies were selected on the basis of a sufficient number of cases of high-grade CIN and cancer to provide meaningful statistical values. Investigators had to demonstrate the ability to generate reasonably reliable Hybrid Capture 2 or polymerase chain reaction data that were either minimally biased by nature of study design or that permitted analytical techniques for addressing issues of study bias to be applied. Studies had to provide data for the calculation of test sensitivity, specificity, predictive values, odds ratios, relative risks, confidence intervals, and other relevant measures. Final data were abstracted directly from published articles or estimated from descriptive statistics presented in the articles. In some studies, new analyses were performed from raw data supplied by the principal investigators. We concluded that HPV DNA testing was a more sensitive indicator for prevalent high-grade CIN than either conventional or liquid cytology. A combination of HPV DNA and Papanicolaou testing had almost 100% sensitivity and negative predictive value. The specificity of the combined tests was slightly lower than the specificity of the Papanicolaou test alone, but this decrease could potentially be offset by greater protection from neoplastic progression and cost savings available from extended screening intervals. One "double-negative" HPV DNA and Papanicolaou test indicated better prognostic assurance against risk of future CIN 3 than 3 subsequent negative conventional Papanicolaou tests and may safely allow 3-year screening intervals for such low-risk women.
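For intuition about why co-testing raises sensitivity and negative predictive value while lowering specificity, the sketch below combines two tests under an "either positive" rule assuming conditional independence; the input values are illustrative and are not taken from the reviewed studies.

```python
# Hedged sketch: parallel ("either positive") combination of two screening tests,
# assuming conditional independence. An idealisation, not estimates from the review.
def combine_parallel(sens1, spec1, sens2, spec2):
    sens = 1 - (1 - sens1) * (1 - sens2)  # positive if either test is positive
    spec = spec1 * spec2                  # negative only if both tests are negative
    return sens, spec

def npv(sens, spec, prevalence):
    tn = spec * (1 - prevalence)
    fn = (1 - sens) * prevalence
    return tn / (tn + fn)

sens, spec = combine_parallel(0.90, 0.90, 0.75, 0.95)  # illustrative values only
print(sens, spec, npv(sens, spec, prevalence=0.05))
```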
Applying a new mammographic imaging marker to predict breast cancer risk
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Danala, Gopichandh; Hollingsworth, Alan B.; Stoug, Rebecca G.; Pearce, Melanie; Liu, Hong; Zheng, Bin
2018-02-01
Identifying and developing new mammographic imaging markers to assist in the prediction of breast cancer risk has attracted extensive research interest recently. Although mammographic density is considered an important breast cancer risk factor, its discriminatory power is lower for predicting short-term breast cancer risk, which is a prerequisite for establishing a more effective personalized breast cancer screening paradigm. In this study, we present a new interactive computer-aided detection (CAD) scheme to generate a quantitative mammographic imaging marker, based on bilateral mammographic tissue density asymmetry, to predict the risk of cancer detection at the next subsequent mammography screening. An image database involving 1,397 women was retrospectively assembled and tested. Each woman had two digital mammography screenings, namely a "current" and a "prior" screening, with a time interval of 365 to 600 days. All "prior" images were originally interpreted as negative. Based on the "current" screenings, the cases were divided into 3 groups of 402 positive, 643 negative, and 352 biopsy-proven benign cases. There was no significant difference in BIRADS-based mammographic density ratings between the 3 case groups (p < 0.6). When applying the CAD-generated imaging marker or risk model to classify between the 402 positive and 643 negative cases using "prior" negative mammograms, the area under the ROC curve was 0.70 ± 0.02 and the adjusted odds ratios showed an increasing trend from 1.0 to 8.13 for predicting the risk of cancer detection at the "current" screening. The study demonstrated that this new imaging marker has the potential to yield significantly higher discriminatory power for predicting short-term breast cancer risk.
Developing and validating risk prediction models in an individual participant data meta-analysis
2014-01-01
Background: Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods: A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results: The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model's applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions: An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
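A minimal sketch of the internal-external cross-validation idea described in the conclusions: develop the model on all IPD studies but one, test it on the omitted study, and rotate. The dataset, outcome and predictor names are hypothetical; allowing a separate intercept per study would be a further refinement.

```python
# Hedged sketch of internal-external cross-validation across IPD studies.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

ipd = pd.read_csv("ipd.csv")              # hypothetical: one row per participant, 'study' column
predictors = ["age", "sex", "biomarker"]  # hypothetical predictors

for held_out in ipd["study"].unique():
    train = ipd[ipd["study"] != held_out]
    test = ipd[ipd["study"] == held_out]
    model = LogisticRegression(max_iter=1000).fit(train[predictors], train["outcome"])
    auc = roc_auc_score(test["outcome"], model.predict_proba(test[predictors])[:, 1])
    print(f"held-out study {held_out}: AUC = {auc:.2f}")
```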
Cai, Tommaso; Mazzoli, Sandra; Migno, Serena; Malossini, Gianni; Lanzafame, Paolo; Mereu, Liliana; Tateo, Saverio; Wagenlehner, Florian M E; Pickard, Robert S; Bartoletti, Riccardo
2014-09-01
To develop and externally validate a novel nomogram predicting recurrence risk probability at 12 months in women after an episode of urinary tract infection. The study included 768 women from Santa Maria Annunziata Hospital, Florence, Italy, affected by urinary tract infections from January 2005 to December 2009. Another 373 women with the same criteria enrolled at Santa Chiara Hospital, Trento, Italy, from January 2010 to June 2012 were used to externally validate and calibrate the nomogram. Univariate and multivariate Cox regression models tested the relationship between urinary tract infection recurrence risk, and patient clinical and laboratory characteristics. The nomogram was evaluated by calculating concordance probabilities, as well as testing calibration of predicted urinary tract infection recurrence with observed urinary tract infections. Nomogram variables included: number of partners, bowel function, type of pathogens isolated (Gram-positive/negative), hormonal status, number of previous urinary tract infection recurrences and previous treatment of asymptomatic bacteriuria. Of the original development data, 261 out of 768 women presented at least one episode of recurrence of urinary tract infection (33.9%). The nomogram had a concordance index of 0.85. The nomogram predictions were well calibrated. This model showed high discrimination accuracy and favorable calibration characteristics. In the validation group (373 women), the overall c-index was 0.83 (P = 0.003, 95% confidence interval 0.51-0.99), whereas the area under the receiver operating characteristic curve was 0.85 (95% confidence interval 0.79-0.91). The present nomogram accurately predicts the recurrence risk of urinary tract infection at 12 months, and can assist in identifying women at high risk of symptomatic recurrence that can be suitable candidates for a prophylactic strategy. © 2014 The Japanese Urological Association.
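A minimal sketch of the concordance index used to evaluate the nomogram, using lifelines; the arrays are hypothetical placeholders, and the predicted risks are negated because the c-index expects higher scores to indicate longer event-free survival.

```python
# Minimal sketch: concordance index (c-index) for recurrence predictions.
import numpy as np
from lifelines.utils import concordance_index

months_to_recurrence = np.array([12.0, 3.5, 24.0, 7.0, 18.0])  # hypothetical follow-up times
recurred = np.array([1, 1, 0, 1, 0])                           # 1 = recurrence observed
predicted_risk = np.array([0.4, 0.8, 0.1, 0.6, 0.2])           # hypothetical nomogram probabilities

# Negate the risk so that larger scores correspond to longer recurrence-free time.
print(concordance_index(months_to_recurrence, -predicted_risk, recurred))
```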
McLernon, David J; Donnan, Peter T; Sullivan, Frank M; Roderick, Paul; Rosenberg, William M; Ryder, Steve D; Dillon, John F
2014-06-02
To derive and validate a clinical prediction model to estimate the risk of liver disease diagnosis following liver function tests (LFTs) and to convert the model to a simplified scoring tool for use in primary care. Population-based observational cohort study of patients in Tayside Scotland identified as having their LFTs performed in primary care and followed for 2 years. Biochemistry data were linked to secondary care, prescriptions and mortality data to ascertain baseline characteristics of the derivation cohort. A separate validation cohort was obtained from 19 general practices across the rest of Scotland to externally validate the final model. Primary care, Tayside, Scotland. Derivation cohort: LFT results from 310 511 patients. After exclusions (including: patients under 16 years, patients having initial LFTs measured in secondary care, bilirubin >35 μmol/L, liver complications within 6 weeks and history of a liver condition), the derivation cohort contained 95 977 patients with no clinically apparent liver condition. Validation cohort: after exclusions, this cohort contained 11 653 patients. Diagnosis of a liver condition within 2 years. From the derivation cohort (n=95 977), 481 (0.5%) were diagnosed with a liver disease. The model showed good discrimination (C-statistic=0.78). Given the low prevalence of liver disease, the negative predictive values were high. Positive predictive values were low but rose to 20-30% for high-risk patients. This study successfully developed and validated a clinical prediction model and subsequent scoring tool, the Algorithm for Liver Function Investigations (ALFI), which can predict liver disease risk in patients with no clinically obvious liver disease who had their initial LFTs taken in primary care. ALFI can help general practitioners focus referral on a small subset of patients with higher predicted risk while continuing to address modifiable liver disease risk factors in those at lower risk. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Omnibus Risk Assessment via Accelerated Failure Time Kernel Machine Modeling
Sinnott, Jennifer A.; Cai, Tianxi
2013-01-01
Summary Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai et al., 2011). In this paper, we derive testing and prediction methods for KM regression under the accelerated failure time model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. PMID:24328713
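The sketch below illustrates only one ingredient of the approach, choosing among candidate kernels by cross-validation, using ordinary kernel ridge regression on an uncensored outcome for brevity; it is not the authors' accelerated failure time kernel machine, Omnibus Test, or resampling procedure.

```python
# Hedged sketch: selecting a kernel by cross-validation with kernel ridge regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # hypothetical gene-expression features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=200)

search = GridSearchCV(
    KernelRidge(),
    param_grid={"kernel": ["linear", "rbf", "polynomial"],
                "alpha": [0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)  # kernel (and penalty) selected by cross-validation
```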
Darvishi, Ebrahim; Khotanlou, Hassan; Khoubi, Jamshid; Giahi, Omid; Mahdavi, Neda
2017-09-01
This study aimed to provide an empirical model for predicting low back pain (LBP) that considers the interactions of occupational, personal, and psychological risk factors in a population of workers employed in industrial units, using an artificial neural network approach. A total of 92 workers with LBP as the case group and 68 healthy workers as a control group were selected from various industrial units with similar occupational conditions. The demographic information and personal, occupational, and psychosocial factors of the participants were collected via interviews, related questionnaires, consultation with occupational medicine specialists, and the Rapid Entire Body Assessment worksheet and National Aeronautics and Space Administration Task Load Index software. Sixteen risk factors for LBP were then used as input variables to develop the prediction model. Networks with various multilayered structures were developed using MATLAB. The developed neural network with 1 hidden layer and 26 neurons had the lowest classification error in both the training and testing phases. The mean classification accuracy of the developed neural network was about 88% for the testing data and 96% for the training data. In addition, the mean classification accuracy across both training and testing data was 92%, indicating much better results compared with other methods. The prediction model using the neural network approach thus appears to be more accurate than the other applied methods. Because occupational LBP is usually untreatable, the results of prediction may be suitable for developing preventive strategies and corrective interventions. Copyright © 2017. Published by Elsevier Inc.
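A hedged sketch of a comparable network, one hidden layer of 26 neurons over 16 risk-factor inputs, in Python (the original work used MATLAB); the data here are random placeholders, so the printed accuracies are not meaningful.

```python
# Hedged sketch: single-hidden-layer network (26 neurons) on 16 risk-factor inputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(160, 16))   # placeholder occupational/personal/psychosocial factors
y = rng.integers(0, 2, size=160) # placeholder: 1 = low back pain case, 0 = control

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(26,), max_iter=2000, random_state=1))
model.fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))
print("testing accuracy: ", model.score(X_test, y_test))
```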
Vize, Colin E.; Lynam, Donald R.; Lamkin, Joanna; Miller, Joshua D; Pardini, Dustin
2015-01-01
Despite years of research, and the inclusion of psychopathy in DSM-5, there remains debate over the fundamental components of psychopathy. Although there is agreement about traits related to Agreeableness and Conscientiousness, there is less agreement about traits related to Fearless Dominance (FD) or Boldness. The present paper uses proxies of FD and Self-centered Impulsivity (SCI) to examine the contribution of FD-related traits to the predictive utility of psychopathy in a large, longitudinal sample of boys, testing four possibilities: that FD (1) assessed earlier is a risk factor, (2) interacts with other risk-related variables to predict later psychopathy, (3) interacts with SCI to predict outcomes, and (4) bears curvilinear relations to outcomes. SCI received excellent support as a measure of psychopathy in adolescence; however, FD was unrelated to the criteria in all tests. It is suggested that FD be dropped from psychopathy and that future research focus on Agreeableness and Conscientiousness. PMID:27347448
Cesari, Matteo; Kritchevsky, Stephen B; Newman, Anne B; Simonsick, Eleanor M; Harris, Tamara B; Penninx, Brenda W; Brach, Jennifer S; Tylavsky, Frances A; Satterfield, Suzanne; Bauer, Doug C; Rubin, Susan M; Visser, Marjolein; Pahor, Marco
2009-01-01
Objectives: To determine how three different physical performance measures (PPM) combine for added utility in predicting adverse health events in elders. Design: Prospective cohort study. Setting: Health, Aging, and Body Composition Study. Participants: 3,024 well-functioning older persons (mean age 73.6 years). Measurements: Timed gait, repeated chair stands and balance (semi- and full-tandem, and single leg stands, each held for 30 seconds) tests were administered at baseline. Usual gait speed was categorized to distinguish high and low risk participants using the previously established 1 m/sec cut-point. The same population percentile (21.3%) was used to identify cut-points for the repeated chair stands (17.05 sec) and balance (53 sec) tests. Cox proportional hazards analyses were performed to evaluate the added value of the PPM in predicting mortality, hospitalization, and (severe) mobility limitation events over 6.9 years of follow-up. Results: Risk estimates for developing adverse health-related events were similarly large for each of the three high risk groups considered separately. A greater number of PPM scores at the high risk level was associated with a greater risk of developing adverse health-related events. When all three PPMs were considered, having even one poor performance was sufficient to indicate a significantly higher risk of (severe) lower extremity limitation and mortality events. Conclusion: Although gait speed is considered the most important predictor of adverse health events, these findings demonstrate that poor performance on other tests of lower extremity function is equally prognostic. This suggests that chair stand and standing balance performance may be adequate substitutes when gait speed is unavailable. PMID:19207142
Samy, Abdallah M; Annajar, Badereddin B; Dokhan, Mostafa Ramadhan; Boussaa, Samia; Peterson, A Townsend
2016-02-01
Cutaneous leishmaniasis ranks among the tropical diseases least known and most neglected in Libya. World Health Organization reports recognized associations of Phlebotomus papatasi, Psammomys obesus, and Meriones spp., with transmission of zoonotic cutaneous leishmaniasis (ZCL; caused by Leishmania major) across Libya. Here, we map risk of ZCL infection based on occurrence records of L. major, P. papatasi, and four potential animal reservoirs (Meriones libycus, Meriones shawi, Psammomys obesus, and Gerbillus gerbillus). Ecological niche models identified limited risk areas for ZCL across the northern coast of the country; most species associated with ZCL transmission were confined to this same region, but some had ranges extending to central Libya. All ENM predictions were significant based on partial ROC tests. As a further evaluation of L. major ENM predictions, we compared predictions with 98 additional independent records provided by the Libyan National Centre for Disease Control (NCDC); all of these records fell inside the belt predicted as suitable for ZCL. We tested ecological niche similarity among vector, parasite, and reservoir species and could not reject any null hypotheses of niche similarity. Finally, we tested among possible combinations of vector and reservoir that could predict all recent human ZCL cases reported by NCDC; only three combinations could anticipate the distribution of human cases across the country.
Samy, Abdallah M.; Annajar, Badereddin B.; Dokhan, Mostafa Ramadhan; Boussaa, Samia; Peterson, A. Townsend
2016-01-01
Abstract Cutaneous leishmaniasis ranks among the tropical diseases least known and most neglected in Libya. World Health Organization reports recognized associations of Phlebotomus papatasi, Psammomys obesus, and Meriones spp., with transmission of zoonotic cutaneous leishmaniasis (ZCL; caused by Leishmania major) across Libya. Here, we map risk of ZCL infection based on occurrence records of L. major, P. papatasi, and four potential animal reservoirs (Meriones libycus, Meriones shawi, Psammomys obesus, and Gerbillus gerbillus). Ecological niche models identified limited risk areas for ZCL across the northern coast of the country; most species associated with ZCL transmission were confined to this same region, but some had ranges extending to central Libya. All ENM predictions were significant based on partial ROC tests. As a further evaluation of L. major ENM predictions, we compared predictions with 98 additional independent records provided by the Libyan National Centre for Disease Control (NCDC); all of these records fell inside the belt predicted as suitable for ZCL. We tested ecological niche similarity among vector, parasite, and reservoir species and could not reject any null hypotheses of niche similarity. Finally, we tested among possible combinations of vector and reservoir that could predict all recent human ZCL cases reported by NCDC; only three combinations could anticipate the distribution of human cases across the country. PMID:26863317
Lahey, Benjamin B; Class, Quetzal A; Zald, David H; Rathouz, Paul J; Applegate, Brooks; Waldman, Irwin D
2018-06-01
The developmental propensity model of antisocial behavior posits that several dispositional characteristics of children transact with the environment to influence the likelihood of learning antisocial behavior across development. Specifically, greater dispositional negative emotionality, greater daring, lower prosociality (operationally, the inverse of callousness), and lower cognitive abilities are each predicted to increase the risk of developing antisocial behavior. Prospective tests of key predictions derived from the model were conducted in a high-risk sample of 499 twins who were assessed on these dispositions at 10-17 years of age and assessed for antisocial personality disorder (APD) symptoms at 22-31 years of age. Predictions were tested separately for parent and youth informants on the dispositions using multiple regressions that adjusted for oversampling, nonresponse, and clustering within twin pairs, controlling for demographic factors and time since the first assessment. Consistent with predictions, greater numbers of APD symptoms in adulthood were independently predicted over a 10-15 year span by higher youth ratings on negative emotionality and daring and lower youth ratings on prosociality, and by parent ratings of greater negative emotionality and lower prosociality. A measure of working memory did not predict APD symptoms. These findings support future research on the role of these dispositions in the development of antisocial behavior. © 2017 Association for Child and Adolescent Mental Health.
How does genetic risk information for Lynch syndrome translate to risk management behaviours?
Steel, Emma; Robbins, Andrew; Jenkins, Mark; Flander, Louisa; Gaff, Clara; Keogh, Louise
2017-01-01
There is limited research on why some individuals who have undergone predictive genetic testing for Lynch syndrome do not adhere to screening recommendations. This study aimed to explore qualitatively how Lynch syndrome non-carriers and carriers translate genetic risk information and advice into decisions about risk management behaviours in the Australian healthcare system. Participants of the Australasian Colorectal Cancer Family Registry who had undergone predictive genetic testing for Lynch syndrome were interviewed about their risk management behaviours. Transcripts were analysed thematically using a comparative coding analysis. Thirty-three people were interviewed. Of the non-carriers (n = 16), 2 reported having apparently unnecessary colonoscopies, and 6 were unsure about what population-based colorectal cancer screening entails. Of the carriers (n = 17), 2 reported that they had not had regular colonoscopies, and spoke about their discomfort with the screening process and a lack of faith in the procedure's ability to reduce their risk of developing colorectal cancer. Of the female carriers (n = 9), 2 could not recall being informed about the associated risk of gynaecological cancers. Non-carriers and female carriers of Lynch syndrome could benefit from further clarity and advice about appropriate risk management options. For those carriers who did not adhere to colonoscopy screening, a lack of faith in both genetic test results and screening was evident. It is essential that consistent advice is offered to both carriers and non-carriers of Lynch syndrome.
Parental distress in response to childhood medical trauma: A mediation model.
Currie, Roseanne; Anderson, Vicki A; McCarthy, Maria C; Burke, Kylie; Hearps, Stephen Jc; Muscara, Frank
2018-04-01
This study explored the relationship between individual and family-level risk in predicting longer-term parental distress following their child's unexpected diagnosis of serious illness. A mediation model was tested, whereby parents' pre-existing psychosocial risk predicts longer-term posttraumatic stress symptoms, indirectly through parents' acute stress response. One hundred and thirty-two parents of 104 children participated. Acute stress response partially mediated the relationship between psychosocial risk and posttraumatic stress symptoms, with a moderate indirect effect (r² = .20, PM = .56, p < .001). Findings demonstrated that cumulative psychosocial risk factors predispose parents to acute stress and longer-term posttraumatic stress symptoms, highlighting the need for psychosocial screening in this population.
Accessing Autonomic Function Can Early Screen Metabolic Syndrome
Dai, Meng; Li, Mian; Yang, Zhi; Xu, Min; Xu, Yu; Lu, Jieli; Chen, Yuhong; Liu, Jianmin; Ning, Guang; Bi, Yufang
2012-01-01
Background: Clinical diagnosis of the metabolic syndrome is time-consuming and invasive. Convenient instruments that do not require laboratory or physical investigation would be useful for early screening of individuals at high risk of metabolic syndrome. Examination of autonomic function can serve as a direct reference and screening indicator for predicting metabolic syndrome. Methodology and Principal Findings: The EZSCAN test, an efficient and noninvasive technology, can assess autonomic function by measuring electrochemical skin conductance. In this study, we used the EZSCAN value to evaluate autonomic function and to detect metabolic syndrome in 5,887 participants aged 40 years or older. The diagnostic accuracy of the EZSCAN test was analyzed by receiver operating characteristic curves. Among the 5,815 participants in the final analysis, 2,541 were diagnosed with metabolic syndrome, an overall prevalence of 43.7%. Prevalence of the metabolic syndrome increased with the EZSCAN risk level (p for trend <0.0001). Moreover, the EZSCAN value was associated with an increase in the number of metabolic syndrome components (p for trend <0.0001). Compared with the no risk group (EZSCAN value 0–24), participants in the high risk group (EZSCAN value 50–100) had a 2.35-fold increased risk of prevalent metabolic syndrome after multiple adjustments. The area under the curve of the EZSCAN test was 0.62 (95% confidence interval [CI], 0.61–0.64) for predicting metabolic syndrome. The optimal operating point of the EZSCAN value for detecting a high risk of prevalent metabolic syndrome was 30 in this study, with a sensitivity of 71.2% and a specificity of 46.7%. Conclusions and Significance: In conclusion, although less sensitive and accurate than the clinical definition of metabolic syndrome, the EZSCAN test is a good and simple screening technique for early prediction of metabolic syndrome. PMID:22916265
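A minimal sketch of the ROC analysis and cutoff selection described above; the data are simulated placeholders, and Youden's J is shown as one common rule for picking an operating point (the study's choice of 30 may have been made differently).

```python
# Hedged sketch: ROC analysis and operating-point selection for a continuous score.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
has_mets = rng.integers(0, 2, size=500)               # placeholder: 1 = metabolic syndrome
score = rng.normal(loc=40 + 10 * has_mets, scale=15)  # placeholder screening values

fpr, tpr, thresholds = roc_curve(has_mets, score)
print("AUC:", roc_auc_score(has_mets, score))

# One common rule: maximise Youden's J = sensitivity + specificity - 1
best = np.argmax(tpr - fpr)
print("cutoff:", thresholds[best], "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```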
Denman, Antony; Groves-Kirkby, Christopher; Coskeran, Thomas; Parkinson, Steven; Phillips, Paul; Tornberg, Roges
2005-08-01
Although previous analysis of the health benefits and cost-effectiveness of radon remediation in a series of houses in Northamptonshire suggested that testing and remediation were justified, recent results predict fewer affected houses than previously assumed. Despite numerous awareness campaigns, limited numbers of householders have tested their homes, only a minority of affected householders have remediated, and those most at risk generally fail to remediate. Moreover, a recent survey shows a wide range of public perceptions of radon risk, not significantly influenced by public health campaigns. These observations affect our previous analysis, which we have reviewed in their light. Following the declaration of Northamptonshire, UK, as a radon Affected Area in 1992, a series of public awareness campaigns encouraged householders to assess domestic radon levels and, if appropriate, to take action to reduce them. Despite these awareness campaigns, however, only moderate numbers of householders have taken remediatory action. The costs of such remedial work in a series of domestic properties in Northamptonshire, the radon level reduction achieved, and the resultant health benefit to the residents have been the subject of study by our group for some years. Previous analysis, based on estimates of the total number of affected houses derived from the National Radiological Protection Board (NRPB) test data for the area, suggested that a programme of testing and remediation in Northamptonshire could be justified. The NRPB has continued to initiate and to collate radon testing, and published further results in 2003. These results include revised predictions of the numbers of affected houses, now considered to be lower than the numbers previously assumed. More recently, the availability of the European Community Radon Software (ECRS) has permitted calculation of individual, rather than population-average, risk, demonstrating that those most at risk are generally those who do not take action. In addition, a recent survey of risk perception shows an extremely wide range of public perceptions of radon risk, perceptions that have not been significantly altered by public health campaigns. These findings have profound effects both on our previous analysis, particularly since only limited numbers of householders test their homes and even fewer remediate if they discover raised levels, and on public health strategies for this risk.
Bartsch, Georg; Mitra, Anirban P; Mitra, Sheetal A; Almal, Arpit A; Steven, Kenneth E; Skinner, Donald G; Fry, David W; Lenehan, Peter F; Worzel, William P; Cote, Richard J
2016-02-01
Due to the high recurrence risk of nonmuscle invasive urothelial carcinoma, it is crucial to distinguish patients at high risk from those with indolent disease. In this study we used a machine learning algorithm to identify the genes in patients with nonmuscle invasive urothelial carcinoma at initial presentation that were most predictive of recurrence. We used the genes in a molecular signature to predict recurrence risk within 5 years after transurethral resection of bladder tumor. Whole genome profiling was performed on 112 frozen nonmuscle invasive urothelial carcinoma specimens obtained at first presentation on Human WG-6 BeadChips (Illumina®). A genetic programming algorithm was applied to evolve classifier mathematical models for outcome prediction. Cross-validation based resampling and gene use frequencies were used to identify the most prognostic genes, which were combined into rules used in a voting algorithm to predict the sample target class. Key genes were validated by quantitative polymerase chain reaction. The classifier set included 21 genes that predicted recurrence. Quantitative polymerase chain reaction was done for these genes in a subset of 100 patients. A 5-gene combined rule incorporating a voting algorithm yielded 77% sensitivity and 85% specificity to predict recurrence in the training set, and 69% and 62%, respectively, in the test set. A singular 3-gene rule was constructed that predicted recurrence with 80% sensitivity and 90% specificity in the training set, and 71% and 67%, respectively, in the test set. Using primary nonmuscle invasive urothelial carcinoma specimens from initial occurrences, genetic programming reproducibly identified transcripts that were predictive of recurrence. These findings could potentially impact nonmuscle invasive urothelial carcinoma management. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
The clinical application of genetic testing in type 2 diabetes: a patient and physician survey.
Grant, R W; Hivert, M; Pandiscio, J C; Florez, J C; Nathan, D M; Meigs, J B
2009-11-01
Advances in type 2 diabetes genetics have raised hopes that genetic testing will improve disease prediction, prevention and treatment. Little is known about current physician and patient views regarding type 2 diabetes genetic testing. We hypothesised that physician and patient views would differ regarding the impact of genetic testing on motivation and adherence. We surveyed a nationally representative sample of US primary care physicians and endocrinologists (n = 304), a random sample of non-diabetic primary care patients (n = 152) and patients enrolled in a diabetes pharmacogenetics study (n = 89). Physicians and patients favoured genetic testing for diabetes risk prediction (79% of physicians vs 80% of non-diabetic patients would be somewhat/very likely to order/request testing, p = 0.7). More patients than physicians (71% vs 23%, p < 0.01) indicated that a 'high risk' result would be very likely to improve motivation to adopt preventive lifestyle changes. Patients favoured genetic testing to guide therapy (78% of patients vs 48% of physicians very likely to request/recommend testing, p < 0.01) and reported that genetic testing would make them 'much more motivated' to adhere to medications (72% vs 18% of physicians, p < 0.01). Many physicians (39%) would be somewhat/very likely to order genetic testing before published evidence of clinical efficacy. Despite the paucity of current data, physicians and patients reported high expectations that genetic testing would improve patient motivation to adopt key behaviours for the prevention or control of type 2 diabetes. This suggests the testable hypothesis that 'genetic' risk information might have greater value to motivate behaviour change compared with standard risk information.
Subramanian, Vijaya; Venkat, Janani; Dhanapal, Mohana
2016-10-01
To analyze which is superior, Doppler velocimetry, the non-stress test, or both, by categorizing cases into four groups and comparing the prediction of perinatal outcome in high-risk pregnancies such as anemia and hypertensive disorders of pregnancy. This was a prospective study conducted at the Department of Obstetrics and Gynaecology, ISO KGH, Madras Medical College, Chennai, in 2014. Two hundred high-risk pregnancies, such as anemia and hypertensive disorders of pregnancy, were included in the study. The women were examined systematically, and Doppler velocimetry and the non-stress test were performed. The main vessels studied by Doppler were the umbilical artery and the middle cerebral artery, and the indices were calculated. The results of the non-stress test were interpreted as reactive or non-reactive. Based on the results of the Doppler and non-stress tests, the 200 cases were categorized into four groups and the results were analyzed. Among the 200 high-risk pregnancies, those with a normal Doppler study and a reactive non-stress test had good perinatal outcomes. When both were abnormal, there was a higher percentage of adverse outcomes than when either the Doppler alone was abnormal or the non-stress test alone was non-reactive. It was also found that an abnormal Doppler with a reactive non-stress test allowed prolongation of the pregnancy and a better outcome, indicating that the non-stress test is a good test of fetal well-being. When the Doppler was normal but the non-stress test was non-reactive, there was an increase in the rate of cesarean section. Each method of fetal surveillance reflects a different aspect of maternal and fetal pathophysiology. Hence, combining them will help to achieve better perinatal outcomes.
Marschollek, M; Nemitz, G; Gietzelt, M; Wolf, K H; Meyer Zu Schwabedissen, H; Haux, R
2009-08-01
Falls are among the predominant causes of morbidity and mortality in elderly persons and occur most often in geriatric clinics. Despite several studies that have identified parameters associated with elderly patients' fall risk, prediction models (e.g., based on geriatric assessment data) are currently not used on a regular basis. Furthermore, technical aids to objectively assess mobility-associated parameters are currently not used. To assess group differences in clinical and common geriatric assessment data as well as sensor-based gait measurements between fallers and non-fallers in a geriatric sample, and to derive and compare two prediction models based on assessment data alone (model #1) and with added sensor measurement data (model #2). For a sample of n=110 geriatric in-patients (81 women, 29 men) the following fall risk-associated assessments were performed: the Timed 'Up & Go' (TUG) test, the STRATIFY score and the Barthel index. During the TUG test the subjects wore a triaxial accelerometer, and sensor-based gait parameters were extracted from the recorded data. Group differences between fallers (n=26) and non-fallers (n=84) were compared using Student's t-test. Two classification tree prediction models were computed and compared. Significant differences between the two groups were found for the following parameters: time to complete the TUG test, the transfer item (Barthel), recent falls (STRATIFY), pelvic sway while walking and step length. Prediction model #1 (using common assessment data only) showed a sensitivity of 38.5% and a specificity of 97.6%; prediction model #2 (assessment data plus sensor-based gait parameters) performed at 57.7% and 100%, respectively. Significant differences between fallers and non-fallers among geriatric in-patients can be detected for several assessment subscores as well as for parameters recorded by simple accelerometric measurements during a common mobility test. Existing geriatric assessment data may be used for falls prediction on a regular basis. Adding sensor data markedly improves the specificity of our test.
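A hedged sketch of the two classification-tree models compared above, one on assessment data only and one with added accelerometer features; the file, column names and tree depth are hypothetical, and the printed sensitivity/specificity are apparent (resubstitution) values rather than cross-validated ones.

```python
# Hedged sketch: classification trees for fall risk with and without sensor features.
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("geriatric_cohort.csv")  # hypothetical: one row per in-patient
assessment = ["tug_time", "barthel_transfer", "stratify_recent_fall"]
sensor = ["pelvic_sway", "step_length"]

def sens_spec(features):
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(df[features], df["faller"])
    tn, fp, fn, tp = confusion_matrix(df["faller"], tree.predict(df[features])).ravel()
    return tp / (tp + fn), tn / (tn + fp)

print("model 1 (assessment only):     ", sens_spec(assessment))
print("model 2 (assessment + sensor): ", sens_spec(assessment + sensor))
```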
Hirota, Morihiko; Ashikaga, Takao; Kouzuki, Hirokazu
2018-04-01
It is important to predict the potential of cosmetic ingredients to cause skin sensitization, and in accordance with the European Union cosmetic directive for the replacement of animal tests, several in vitro tests based on the adverse outcome pathway have been developed for hazard identification, such as the direct peptide reactivity assay, KeratinoSens™ and the human cell line activation test. Here, we describe the development of an artificial neural network (ANN) prediction model for skin sensitization risk assessment based on the integrated testing strategy concept, using direct peptide reactivity assay, KeratinoSens™, human cell line activation test and an in silico or structure alert parameter. We first investigated the relationship between published murine local lymph node assay EC3 values, which represent skin sensitization potency, and in vitro test results using a panel of about 134 chemicals for which all the required data were available. Predictions based on ANN analysis using combinations of parameters from all three in vitro tests showed a good correlation with local lymph node assay EC3 values. However, when the ANN model was applied to a testing set of 28 chemicals that had not been included in the training set, predicted EC3s were overestimated for some chemicals. Incorporation of an additional in silico or structure alert descriptor (obtained with TIMES-M or Toxtree software) in the ANN model improved the results. Our findings suggest that the ANN model based on the integrated testing strategy concept could be useful for evaluating the skin sensitization potential. Copyright © 2017 John Wiley & Sons, Ltd.
Lunevicius, Raimundas; Morkevicius, Matas
2005-09-01
Clear patient selection criteria and indications for laparoscopic repair of perforated duodenal ulcers are necessary. The aims of our study were to report the early outcome results after operation and to define the predictive value of risk factors influencing the conversion rate and the genesis of suture leakage. Sixty nonrandomly selected patients operated on laparoscopically in a tertiary care academic center between October 1996 and May 2004 for perforated duodenal ulcers were retrospectively analyzed. The primary outcome measures included the duration of symptoms, shock, underlying medical illness, ulcer size, age, Boey score, and the collective predictive value of these variables for conversion and suture leakage rates. Laparoscopic repair was completed in 46 patients (76.7%). Fourteen patients (23.3%) underwent conversion to open repair. Eight patients (13.3%) had postoperative complications. Suture leakage was confirmed in four patients (6.7%). Hospital stay was 7.8 ± 5.3 days. There was no mortality. Patients with an ulcer perforation size of >8 mm had a significantly increased risk of conversion to open repair (p<0.05): positive predictive value (PPV) 75%, sensitivity 27%, specificity 98%, and negative predictive value (NPV) 85%. The significance of ulcer perforation size was confirmed by a stepwise logistic regression test (p=0.0201). All patients who developed suture leakage had had acute symptoms for >9 h preoperatively (p<0.001): PPV 31%, specificity 84%, sensitivity 100%, and NPV 100%. Conversions occurred with surgeons whose previous experience involved 1.8 ± 2.3 cases, compared with 3.9 ± 2.9 cases in successful laparoscopic repair (p=0.039, t test). An ulcer perforation size of >8 mm is a significant risk factor influencing the conversion rate. An increase in the suture leakage rate is predicted by delayed presentation of >9 h.
Leslie, William D; Lix, Lisa M
2011-03-01
The World Health Organization (WHO) Fracture Risk Assessment Tool (FRAX) computes 10-year probability of major osteoporotic fracture from multiple risk factors, including femoral neck (FN) T-scores. Lumbar spine (LS) measurements are not currently part of the FRAX formulation but are used widely in clinical practice, and this creates confusion when there is spine-hip discordance. Our objective was to develop a hybrid 10-year absolute fracture risk assessment system in which nonvertebral (NV) fracture risk was assessed from the FN and clinical vertebral (V) fracture risk was assessed from the LS. We identified 37,032 women age 45 years and older undergoing baseline FN and LS dual-energy X-ray absorptiometry (DXA; 1990-2005) from a population database that contains all clinical DXA results for the Province of Manitoba, Canada. Results were linked to longitudinal health service records for physician billings and hospitalizations to identify nontrauma vertebral and nonvertebral fracture codes after bone mineral density (BMD) testing. The population was randomly divided into equal-sized derivation and validation cohorts. Using the derivation cohort, three fracture risk prediction systems were created from Cox proportional hazards models (adjusted for age and multiple FRAX risk factors): FN to predict combined all fractures, FN to predict nonvertebral fractures, and LS to predict vertebral (without nonvertebral) fractures. The hybrid system was the sum of nonvertebral risk from the FN model and vertebral risk from the LS model. The FN and hybrid systems were both strongly predictive of overall fracture risk (p < .001). In the validation cohort, ROC analysis showed marginally better performance of the hybrid system versus the FN system for overall fracture prediction (p = .24) and significantly better performance for vertebral fracture prediction (p < .001). In a discordance subgroup with FN and LS T-score differences greater than 1 SD, there was a significant improvement in overall fracture prediction with the hybrid method (p = .025). Risk reclassification under the hybrid system showed better alignment with observed fracture risk, with 6.4% of the women reclassified to a different risk category. In conclusion, a hybrid 10-year absolute fracture risk assessment system based on combining FN and LS information is feasible. The improvement in fracture risk prediction is small but supports clinical interest in a system that integrates LS in fracture risk assessment. Copyright © 2011 American Society for Bone and Mineral Research.
Hypomagnesemia predicts postoperative biochemical hypocalcemia after thyroidectomy.
Luo, Han; Yang, Hongliu; Zhao, Wanjun; Wei, Tao; Su, Anping; Wang, Bin; Zhu, Jingqiang
2017-05-25
To investigate the role of magnesium in biochemical and symptomatic hypocalcemia, a retrospective study was conducted. Patients with less-than-total thyroidectomy were excluded from the final analysis. We identified the risk factors for biochemical and symptomatic hypocalcemia, and investigated the correlations by logistic regression and correlation tests, respectively. A total of 304 patients were included in the final analysis. The overall incidence of hypomagnesemia was 23.36%. Logistic regression showed that female gender (OR = 2.238, p = 0.015) and postoperative hypomagnesemia (OR = 2.010, p = 0.017) were independent risk factors for biochemical hypocalcemia. Both Pearson and partial correlation tests indicated a significant relation between calcium and magnesium. However, a relative decrease of iPTH (>70%) (6.691, p < 0.001) and hypocalcemia (2.222, p = 0.046) were identified as risk factors for symptomatic hypocalcemia. The difference remained significant even in patients with normoparathyroidism. Postoperative hypomagnesemia was an independent risk factor for biochemical hypocalcemia. The relative decline of iPTH was the predominant predictor of symptomatic hypocalcemia.
Dawson, Benjamin K; Fereshtehnejad, Seyed-Mohammad; Anang, Julius B M; Nomura, Takashi; Rios-Romenets, Silvia; Nakashima, Kenji; Gagnon, Jean-François; Postuma, Ronald B
2018-06-01
Parkinson disease dementia dramatically increases mortality rates, patient expenditures, hospitalization risk, and caregiver burden. Currently, predicting Parkinson disease dementia risk is difficult, particularly in an office-based setting, without extensive biomarker testing. To appraise the predictive validity of the Montreal Parkinson Risk of Dementia Scale, an office-based screening tool consisting of 8 items that are simply assessed. This multicenter study (Montreal, Canada; Tottori, Japan; and Parkinson Progression Markers Initiative sites) used 4 diverse Parkinson disease cohorts with a prospective 4.4-year follow-up. A total of 717 patients with Parkinson disease were recruited between May 2005 and June 2016. Of these, 607 were dementia-free at baseline and followed up for 1 year or more and so were included. The association of individual baseline scale variables with eventual dementia risk was calculated. Participants were then randomly split into cohorts to investigate weighting and determine the scale's optimal cutoff point. Receiver operating characteristic curves were calculated and correlations with selected biomarkers were investigated. The main outcome was dementia, as defined by Movement Disorder Society level I criteria. Of the 607 patients (mean [SD] age, 63.4 [10.1]; 376 men [62%]), 70 (11.5%) converted to dementia. All 8 items of the Montreal Parkinson Risk of Dementia Scale independently predicted dementia development at the 5% significance level. The annual conversion rate to dementia in the high-risk group (score, >5) was 14.9% compared with 5.8% in the intermediate group (score, 4-5) and 0.6% in the low-risk group (score, 0-3). The weighting procedure conferred no significant advantage. Overall predictive validity by the area under the receiver operating characteristic curve was 0.877 (95% CI, 0.829-0.924) across all cohorts. A cutoff of 4 or greater yielded a sensitivity of 77.1% (95% CI, 65.6-86.3) and a specificity of 87.2% (95% CI, 84.1-89.9), with a positive predictive value (as of 4.4 years) of 43.90% (95% CI, 37.76-50.24) and a negative predictive value of 96.70% (95% CI, 95.01-97.85). Positive and negative likelihood ratios were 5.94 (95% CI, 4.08-8.65) and 0.26 (95% CI, 0.17-0.40), respectively. Scale results correlated with markers of Alzheimer pathology and neuropsychological test results. Despite its simplicity, the Montreal Parkinson Risk of Dementia Scale demonstrated predictive validity equal to or greater than that of previously described algorithms using biomarker assessments. Future studies using head-to-head comparisons or refinement of weighting would be of interest.
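The reported positive predictive value can be approximated from the scale's sensitivity, specificity and the cohort's baseline conversion rate via likelihood ratios; a minimal sketch:

```python
# Minimal sketch: post-test probability from sensitivity, specificity and the
# pre-test (baseline) conversion rate, via the positive likelihood ratio.
def post_test_probability(pretest: float, sensitivity: float, specificity: float) -> float:
    lr_pos = sensitivity / (1 - specificity)
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr_pos
    return post_odds / (1 + post_odds)

pretest = 70 / 607  # 11.5% of included patients converted to dementia
print(round(post_test_probability(pretest, 0.771, 0.872), 3))  # approx. 0.44, close to the reported 43.9% PPV
```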
Lorber, Michael F.; Egeland, Byron
2011-01-01
The prediction of conduct problems (CPs) from infant difficulty and parenting measured in the first 6 months of life was studied in a sample of 267 high-risk mother-child dyads. Stable, cross-situational CPs at school entry (5-6 years) were predicted by negative infancy parenting, mediated by mutually angry and hostile mother-toddler interactions…
An early-biomarker algorithm predicts lethal graft-versus-host disease and survival
Hartwell, Matthew J.; Özbek, Umut; Holler, Ernst; Major-Monfried, Hannah; Reddy, Pavan; Aziz, Mina; Hogan, William J.; Ayuk, Francis; Efebera, Yvonne A.; Hexner, Elizabeth O.; Bunworasate, Udomsak; Qayed, Muna; Ordemann, Rainer; Wölfl, Matthias; Mielke, Stephan; Chen, Yi-Bin; Devine, Steven; Jagasia, Madan; Kitko, Carrie L.; Litzow, Mark R.; Kröger, Nicolaus; Locatelli, Franco; Morales, George; Nakamura, Ryotaro; Reshef, Ran; Rösler, Wolf; Weber, Daniela; Yanik, Gregory A.; Levine, John E.; Ferrara, James L.M.
2017-01-01
BACKGROUND. No laboratory test can predict the risk of nonrelapse mortality (NRM) or severe graft-versus-host disease (GVHD) after hematopoietic cellular transplantation (HCT) prior to the onset of GVHD symptoms. METHODS. Patient blood samples on day 7 after HCT were obtained from a multicenter set of 1,287 patients, and 620 samples were assigned to a training set. We measured the concentrations of 4 GVHD biomarkers (ST2, REG3α, TNFR1, and IL-2Rα) and used them to model 6-month NRM using rigorous cross-validation strategies to identify the best algorithm that defined 2 distinct risk groups. We then applied the final algorithm in an independent test set (n = 309) and validation set (n = 358). RESULTS. A 2-biomarker model using ST2 and REG3α concentrations identified patients with a cumulative incidence of 6-month NRM of 28% in the high-risk group and 7% in the low-risk group (P < 0.001). The algorithm performed equally well in the test set (33% vs. 7%, P < 0.001) and the multicenter validation set (26% vs. 10%, P < 0.001). Sixteen percent, 17%, and 20% of patients were at high risk in the training, test, and validation sets, respectively. GVHD-related mortality was greater in high-risk patients (18% vs. 4%, P < 0.001), as was severe gastrointestinal GVHD (17% vs. 8%, P < 0.001). The same algorithm can be successfully adapted to define 3 distinct risk groups at GVHD onset. CONCLUSION. A biomarker algorithm based on a blood sample taken 7 days after HCT can consistently identify a group of patients at high risk for lethal GVHD and NRM. FUNDING. The National Cancer Institute, American Cancer Society, and the Doris Duke Charitable Foundation. PMID:28194439
Hollett, Ross C.; Stritzke, Werner G. K.; Edgeworth, Phoebe; Weinborn, Michael
2017-01-01
According to the ambivalence model of craving, alcohol craving involves the dynamic interplay of separate approach and avoidance inclinations. Cue-elicited increases in approach inclinations are posited to be more likely to result in alcohol consumption and risky drinking behaviors only if unimpeded by restraint inclinations. The aims of the current study were (1) to test whether changes in the net balance between approach and avoidance inclinations following alcohol cue exposure differentiate between low- and high-risk drinkers, and (2) to test whether this balance is associated with alcohol consumption on a subsequent taste test. In two experiments (N = 60; N = 79), low- and high-risk social drinkers were exposed to alcohol cues, and approach and avoidance inclinations were measured before and after exposure. An ad libitum alcohol consumption paradigm and a non-alcohol exposure condition were also included in Study 2. Cue-elicited craving was characterized by a predominant approach inclination only in the high-risk drinkers. Conversely, approach inclinations were adaptively balanced by equally strong avoidance inclinations when cue-elicited craving was induced in low-risk drinkers. For these low-risk drinkers with the balanced craving profile, neither approach nor avoidance inclinations predicted subsequent alcohol consumption levels during the taste test. Conversely, for high-risk drinkers, where the approach inclination predominated, each inclination synergistically predicted subsequent drinking levels during the taste test. In conclusion, the results support the importance of assessing both approach and avoidance inclinations, and their relative balance, following alcohol cue exposure. Specifically, this more comprehensive assessment reveals changes in craving profiles that are not apparent from examining changes in approach inclinations alone, and it is this shift in the net balance that distinguishes high- from low-risk drinkers. PMID:28533759
The ethics of disclosing genetic diagnosis for Alzheimer's disease: do we need a new paradigm?
Arribas-Ayllon, Michael
2011-01-01
Genetic testing for rare Mendelian disorders represents the dominant ethical paradigm in clinical and professional practice. Predictive testing for Huntington's disease is the model against which other kinds of genetic testing are evaluated, including testing for Alzheimer's disease. This paper retraces the historical development of ethical reasoning in relation to predictive genetic testing and reviews a range of ethical, sociological and psychological literature from the 1970s to the present. In the past, ethical reasoning has embodied a distinct style whereby normative principles are developed from a dominant disease exemplar. This reductionist approach to formulating ethical frameworks breaks down in the case of disease susceptibility. Recent developments in the genetics of Alzheimer's disease present a significant case for reconsidering the ethics of disclosing risk for common complex diseases. Disclosing the results of susceptibility testing for Alzheimer's disease has different social, psychological and behavioural consequences. Furthermore, what genetic susceptibility means to individuals and their families is diffuse and often mitigated by other factors and concerns. The ethics of disclosing a genetic diagnosis of susceptibility is contingent on whether professionals accept that probabilistic risk information is in fact 'diagnostic' and it will rely substantially on empirical evidence of how people actually perceive, recall and communicate complex risk information.
Biological risk factors for suicidal behaviors: a meta-analysis
Chang, B P; Franklin, J C; Ribeiro, J D; Fox, K R; Bentley, K H; Kleiman, E M; Nock, M K
2016-01-01
Prior studies have proposed a wide range of potential biological risk factors for future suicidal behaviors. Although strong evidence exists for biological correlates of suicidal behaviors, it remains unclear if these correlates are also risk factors for suicidal behaviors. We performed a meta-analysis to integrate the existing literature on biological risk factors for suicidal behaviors and to determine their statistical significance. We conducted a systematic search of PubMed, PsycInfo and Google Scholar for studies that used a biological factor to predict either suicide attempt or death by suicide. Inclusion criteria included studies with at least one longitudinal analysis using a biological factor to predict either of these outcomes in any population through 2015. From an initial screen of 2541 studies we identified 94 cases. Random effects models were used for both meta-analyses and meta-regression. The combined effect of biological factors produced statistically significant but relatively weak prediction of suicide attempts (weighted mean odds ratio (wOR)=1.41; CI: 1.09–1.81) and suicide death (wOR=1.28; CI: 1.13–1.45). After accounting for publication bias, prediction was nonsignificant for both suicide attempts and suicide death. Only two factors remained significant after accounting for publication bias—cytokines (wOR=2.87; CI: 1.40–5.93) and low levels of fish oil nutrients (wOR=1.09; CI: 1.01–1.19). Our meta-analysis revealed that currently known biological factors are weak predictors of future suicidal behaviors. This conclusion should be interpreted within the context of the limitations of the existing literature, including long follow-up intervals and a lack of tests of interactions with other risk factors. Future studies addressing these limitations may more effectively test for potential biological risk factors. PMID:27622931
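The weighted mean odds ratios reported above come from random-effects meta-analysis. The sketch below implements DerSimonian-Laird pooling of log odds ratios on invented study-level values, purely to illustrate how such a weighted mean odds ratio and its confidence interval are obtained; it does not use the studies in this meta-analysis.

```python
import numpy as np

def dersimonian_laird(log_or: np.ndarray, var: np.ndarray):
    """Pool log odds ratios with a DerSimonian-Laird random-effects model."""
    w_fixed = 1.0 / var
    theta_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)
    q = np.sum(w_fixed * (log_or - theta_fixed) ** 2)            # Cochran's Q
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c)                  # between-study variance
    w = 1.0 / (var + tau2)
    theta = np.sum(w * log_or) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = (np.exp(theta - 1.96 * se), np.exp(theta + 1.96 * se))
    return np.exp(theta), ci

# Illustrative per-study odds ratios and standard errors of the log odds ratio.
ors = np.array([1.2, 1.6, 0.9, 2.1, 1.3])
se_log = np.array([0.25, 0.30, 0.20, 0.40, 0.35])
pooled, ci = dersimonian_laird(np.log(ors), se_log ** 2)
print(f"weighted mean OR = {pooled:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```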
Comber, Mike H I; Walker, John D; Watts, Chris; Hermens, Joop
2003-08-01
The use of quantitative structure-activity relationships (QSARs) for deriving the predicted no-effect concentration of discrete organic chemicals for the purposes of conducting a regulatory risk assessment in Europe and the United States is described. In the United States, under the Toxic Substances Control Act (TSCA), the TSCA Interagency Testing Committee and the U.S. Environmental Protection Agency (U.S. EPA) use SARs to estimate the hazards of existing and new chemicals. Within the Existing Substances Regulation in Europe, QSARs may be used for data evaluation, test strategy indications, and the identification and filling of data gaps. To illustrate where and when QSARs may be useful and when their use is more problematic, an example, methyl tertiary-butyl ether (MTBE), is given and the predicted and experimental data are compared. Improvements needed for new QSARs and tools for developing and using QSARs are discussed.
Fan, X-J; Wan, X-B; Huang, Y; Cai, H-M; Fu, X-H; Yang, Z-L; Chen, D-K; Song, S-X; Wu, P-H; Liu, Q; Wang, L; Wang, J-P
2012-01-01
Background: Current imaging modalities are inadequate for preoperatively predicting regional lymph node metastasis (RLNM) status in rectal cancer (RC). Here, we designed a support vector machine (SVM) model to address this issue by integrating epithelial–mesenchymal-transition (EMT)-related biomarkers along with clinicopathological variables. Methods: Using tissue microarrays and immunohistochemistry, the expression of EMT-related biomarkers was measured in 193 RC patients. Of these, 74 patients were assigned to the training set to select robust variables for designing the SVM model. The predictive value of the SVM model was validated in the testing set (119 patients). Results: In the training set, eight variables, including six EMT-related biomarkers and two clinicopathological variables, were selected to devise the SVM model. In the testing set, we identified 63 patients at high risk of RLNM and 56 patients at low risk. The sensitivity, specificity and overall accuracy of the SVM in predicting RLNM were 68.3%, 81.1% and 72.3%, respectively. Importantly, multivariate logistic regression analysis showed that the SVM model was indeed an independent predictor of RLNM status (odds ratio, 11.536; 95% confidence interval, 4.113–32.361; P<0.0001). Conclusion: Our SVM-based model displayed moderately strong predictive power in defining RLNM status in RC patients, providing an important approach for selecting the RLNM high-risk subgroup for neoadjuvant chemoradiotherapy. PMID:22538975
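A rough illustration of the general workflow (train an SVM on a small set of candidate variables in a training set, then report sensitivity, specificity, and accuracy in a held-out testing set) is given below. The data are synthetic and the pipeline is a generic scikit-learn sketch, not the authors' model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

# Synthetic stand-in for 8 selected variables (6 biomarkers + 2 clinicopathological).
X, y = make_classification(n_samples=193, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=74, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("overall accuracy:", accuracy_score(y_test, pred))
print("sensitivity:", recall_score(y_test, pred))              # recall for the positive class
print("specificity:", recall_score(y_test, pred, pos_label=0))  # recall for the negative class
```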
How implicit motives and everyday self-regulatory abilities shape cardiovascular risk in youth.
Ewart, Craig K; Elder, Gavin J; Smyth, Joshua M
2012-06-01
Tested hypotheses from social action theory that (a) implicit and explicit measures of agonistic (social control) motives and transcendence (self-control) motives differentially predict cardiovascular risk; and (b) implicit motives interact with everyday self-regulation behaviors to magnify risk. Implicit/explicit agonistic/transcendence motives were assessed in a multi-ethnic sample of 64 high school students with the Social Competence Interview (SCI). Everyday self-regulation was assessed with teacher ratings of internalizing, externalizing, and self-control behaviors. Ambulatory blood pressure and daily activities were measured over 48 h. Study hypotheses were supported: implicit goals predicted blood pressure levels but explicit self-reported coping goals did not; self-regulation indices did not predict blood pressure directly but interacted with implicit agonistic/transcendence motives to identify individuals at greatest risk (all p ≤ 0.05). Assessment of implicit motives by SCI, and everyday self-regulation by teachers may improve identification of youth at risk for cardiovascular disease.
Risk prediction with triglycerides in patients with stable coronary disease on statin treatment.
Werner, Christian; Filmer, Anja; Fritsch, Marco; Groenewold, Stephanie; Gräber, Stefan; Böhm, Michael; Laufs, Ulrich
2014-12-01
The aim of the prospective Homburg Cream and Sugar study was to analyze the role of fasting and postprandial serum triglycerides (TG) as risk modifiers in patients with coronary artery disease (CAD). A sequential oral triglyceride and glucose tolerance test was developed to obtain standardized measurements of postprandial TG kinetics and glucose in 514 consecutive patients with stable CAD confirmed by angiography (95% were treated with a statin). Fasting and postprandial TG predicted the primary outcome measure of cardiovascular death and hospitalizations after 48 months follow-up (fasting TG >150 vs. <106 mg/dl: Hazard ratio (HR) 1.79, 95% confidence interval (CI) 1.31-2.45, p = 0.0001; area under the curve >1120 vs. <750 mg/dl/5 hr: HR 1.78, 95% CI 1.29-2.45, p = 0.0003). Parameters of the postprandial TG increase did not improve risk prediction compared to fasting TG. The number of cardiovascular deaths and myocardial infarctions was higher in the upper tertile of fasting TG (HR 1.79, 95%-CI 1.04-3.09, p = 0.03). Risk prediction by TG was independent of traditional risk factors, medication, glucose metabolism, LDL- and HDL-cholesterol. Total cholesterol, LDL- and HDL-cholesterol concentrations were not associated with the primary outcome. Fasting serum triglycerides >150 mg/dl independently predict cardiovascular events in patients with coronary artery disease on guideline-recommended medication. Assessment of postprandial TG does not improve risk prediction compared to fasting TG in these patients.
Chadès, Iadine
2017-01-01
Environmental impact assessment (EIA) is used globally to manage the impacts of development projects on the environment, so there is an imperative to demonstrate that it can effectively identify risky projects. However, despite the widespread use of quantitative predictive risk models in areas such as toxicology, ecosystem modelling and water quality, the use of predictive risk tools to assess the overall expected environmental impacts of major construction and development proposals is comparatively rare. A risk-based approach has many potential advantages, including improved prediction and attribution of cause and effect; sensitivity analysis; continual learning; and optimal resource allocation. In this paper we investigate the feasibility of using a Bayesian belief network (BBN) to quantify the likelihood and consequence of non-compliance of new projects based on the occurrence probabilities of a set of expert-defined features. The BBN incorporates expert knowledge and continually improves its predictions based on new data as it is collected. We use simulation to explore the trade-off between the number of data points and the prediction accuracy of the BBN, and find that the BBN could predict risk with 90% accuracy using approximately 1000 data points. Although a further pilot test with real project data is required, our results suggest that a BBN is a promising method to monitor overall risks posed by development within an existing EIA process given a modest investment in data collection. PMID:28686651
Nicol, Sam; Chadès, Iadine
2017-01-01
Environmental impact assessment (EIA) is used globally to manage the impacts of development projects on the environment, so there is an imperative to demonstrate that it can effectively identify risky projects. However, despite the widespread use of quantitative predictive risk models in areas such as toxicology, ecosystem modelling and water quality, the use of predictive risk tools to assess the overall expected environmental impacts of major construction and development proposals is comparatively rare. A risk-based approach has many potential advantages, including improved prediction and attribution of cause and effect; sensitivity analysis; continual learning; and optimal resource allocation. In this paper we investigate the feasibility of using a Bayesian belief network (BBN) to quantify the likelihood and consequence of non-compliance of new projects based on the occurrence probabilities of a set of expert-defined features. The BBN incorporates expert knowledge and continually improves its predictions based on new data as it is collected. We use simulation to explore the trade-off between the number of data points and the prediction accuracy of the BBN, and find that the BBN could predict risk with 90% accuracy using approximately 1000 data points. Although a further pilot test with real project data is required, our results suggest that a BBN is a promising method to monitor overall risks posed by development within an existing EIA process given a modest investment in data collection.
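A Bayesian belief network of the kind described reduces, in its simplest form, to priors over expert-defined project features plus a conditional probability table for non-compliance, which can then be queried given whatever features have been observed. The toy sketch below uses two invented features and invented probabilities and computes the marginal by direct enumeration; a real application would use a BBN package, expert-elicited tables, and updating from monitoring data.

```python
import itertools

# Prior probabilities that each expert-defined project feature is present.
p_f = {"poor_baseline_data": 0.3, "sensitive_habitat": 0.2}

# Conditional probability table: P(non-compliance | feature states).
# Keys are (poor_baseline_data, sensitive_habitat) states; values are invented.
cpt = {
    (0, 0): 0.05, (0, 1): 0.20,
    (1, 0): 0.25, (1, 1): 0.60,
}

def p_noncompliance(evidence: dict) -> float:
    """Marginal P(non-compliance = 1) given partial evidence on the features."""
    names = list(p_f)
    num = 0.0
    den = 0.0
    for states in itertools.product([0, 1], repeat=len(names)):
        w = 1.0
        for name, s in zip(names, states):
            if name in evidence and evidence[name] != s:
                w = 0.0
                break
            w *= p_f[name] if s == 1 else 1.0 - p_f[name]
        num += w * cpt[states]
        den += w
    return num / den

print(p_noncompliance({}))                           # overall non-compliance risk
print(p_noncompliance({"sensitive_habitat": 1}))     # risk given one observed feature
```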
Utility of different cardiovascular disease prediction models in rheumatoid arthritis.
Purcarea, A; Sovaila, S; Udrea, G; Rezus, E; Gheorghe, A; Tiu, C; Stoica, V
2014-01-01
Rheumatoid arthritis comes with a 30% higher probability of cardiovascular disease than in the general population. Current guidelines advocate early and aggressive primary prevention and treatment of risk factors in high-risk populations, but this excess risk is under-addressed in RA in real life. This is mainly due to difficulties in correct risk evaluation. This study aims to underline the differences in results of the main cardiovascular risk screening models in a real-life rheumatoid arthritis population. In a cross-sectional study, patients referred to a tertiary care center in Romania for a biannual follow-up of rheumatoid arthritis who were considered free of any cardiovascular disease were assessed for subclinical atherosclerosis. Clinical, biological and carotid ultrasound evaluations were performed. A number of cardiovascular disease prediction scores were calculated and differences between tests were noted in regard to subclinical atherosclerosis, as defined by a carotid intima-media thickness over 0.9 mm or carotid plaque. In a population of 29 Romanian rheumatoid arthritis patients free of cardiovascular disease, the performance of the Framingham Risk Score, HeartSCORE, ARIC cardiovascular disease prediction score, Reynolds Risk Score, PROCAM risk score and Qrisk2 score was compared. All the scores under-diagnosed subclinical atherosclerosis. With an AUROC of 0.792, the SCORE model was the only one that could partially stratify patients into low, intermediate and high-risk categories. The use of the EULAR-recommended modifier did not help to reclassify patients. The only score that showed a statistically significant prediction capacity for subclinical atherosclerosis in a Romanian rheumatoid arthritis population was SCORE. Additional calibration or the use of imaging techniques in CVD risk prediction for the intermediate-risk category might be warranted.
Utility of different cardiovascular disease prediction models in rheumatoid arthritis
Purcarea, A; Sovaila, S; Udrea, G; Rezus, E; Gheorghe, A; Tiu, C; Stoica, V
2014-01-01
Background. Rheumatoid arthritis comes with a 30% higher probability of cardiovascular disease than in the general population. Current guidelines advocate early and aggressive primary prevention and treatment of risk factors in high-risk populations, but this excess risk is under-addressed in RA in real life. This is mainly due to difficulties in correct risk evaluation. This study aims to underline the differences in results of the main cardiovascular risk screening models in a real-life rheumatoid arthritis population. Methods. In a cross-sectional study, patients referred to a tertiary care center in Romania for a biannual follow-up of rheumatoid arthritis who were considered free of any cardiovascular disease were assessed for subclinical atherosclerosis. Clinical, biological and carotid ultrasound evaluations were performed. A number of cardiovascular disease prediction scores were calculated and differences between tests were noted in regard to subclinical atherosclerosis, as defined by a carotid intima-media thickness over 0.9 mm or carotid plaque. Results. In a population of 29 Romanian rheumatoid arthritis patients free of cardiovascular disease, the performance of the Framingham Risk Score, HeartSCORE, ARIC cardiovascular disease prediction score, Reynolds Risk Score, PROCAM risk score and Qrisk2 score was compared. All the scores under-diagnosed subclinical atherosclerosis. With an AUROC of 0.792, the SCORE model was the only one that could partially stratify patients into low, intermediate and high-risk categories. The use of the EULAR-recommended modifier did not help to reclassify patients. Conclusion. The only score that showed a statistically significant prediction capacity for subclinical atherosclerosis in a Romanian rheumatoid arthritis population was SCORE. Additional calibration or the use of imaging techniques in CVD risk prediction for the intermediate-risk category might be warranted. PMID:25713628
Al-Khatib, Sana M; Sanders, Gillian D; Bigger, J Thomas; Buxton, Alfred E; Califf, Robert M; Carlson, Mark; Curtis, Anne; Curtis, Jeptha; Fain, Eric; Gersh, Bernard J; Gold, Michael R; Haghighi-Mood, Ali; Hammill, Stephen C; Healey, Jeff; Hlatky, Mark; Hohnloser, Stefan; Kim, Raymond J; Lee, Kerry; Mark, Daniel; Mianulli, Marcus; Mitchell, Brent; Prystowsky, Eric N; Smith, Joseph; Steinhaus, David; Zareba, Wojciech
2007-06-01
Accurate and timely prediction of sudden cardiac death (SCD) is a necessary prerequisite for effective prevention and therapy. Although the largest number of SCD events occurs in patients without overt heart disease, there are currently no tests that are of proven predictive value in this population. Efforts in risk stratification for SCD have focused primarily on predicting SCD in patients with known structural heart disease. Despite the ubiquity of tests that have been purported to predict SCD vulnerability in such patients, there is little consensus on which test, in addition to the left ventricular ejection fraction, should be used to determine which patients will benefit from an implantable cardioverter defibrillator. On July 20 and 21, 2006, a group of experts representing clinical cardiology, cardiac electrophysiology, biostatistics, economics, and health policy were joined by representatives of the US Food and Drug Administration, the Centers for Medicare and Medicaid Services, the Agency for Healthcare Research and Quality, the Heart Rhythm Society, and the device and pharmaceutical industry for a round table meeting to review current data on strategies of risk stratification for SCD, to explore methods to translate these strategies into practice and policy, and to identify areas that need to be addressed by future research studies. The meeting was organized by the Duke Center for the Prevention of SCD at the Duke Clinical Research Institute and was funded by industry participants. This article summarizes the presentations and discussions that occurred at that meeting.
Exercise blood pressure and the risk of future hypertension.
Holmqvist, L; Mortensen, L; Kanckos, C; Ljungman, C; Mehlig, K; Manhem, K
2012-12-01
The aim of this prospective cohort study was to identify which blood pressure measurement during exercise is the best predictor of future hypertension. Further, we aimed to create a risk chart to facilitate the evaluation of the blood pressure reaction during exercise testing. A total of 1047 exercise tests by bicycle ergometry, performed in 1996 and 1997, were analysed. In 2007-2008, 606 patients without hypertension at the time of the exercise test were sent a questionnaire aimed at identifying current hypertension. The response rate was 58% (n=352). During the 10-12 years between exercise test and questionnaire, 23% developed hypertension. The strongest predictors of future hypertension were systolic blood pressure (SBP) before exercise (odds ratio (OR) 1.63 (1.31-2.01) per 10 mm Hg difference) in combination with the increase of SBP over time during exercise testing (OR 1.12 (1.01-1.24) for every 1 mm Hg min(-1) steeper increase). A high SBP before exercise and a steep rise in SBP over time represented a higher risk of developing hypertension. A risk chart based on SBP before exercise, increase of SBP over time and body mass index was created. SBP before exercise, maximal SBP during exercise and SBP at 100 W were significant single predictors of future hypertension, and the prediction by maximal SBP was improved by adjusting for the time/power at which maximal SBP was reached during exercise testing. The recovery ratio (maximal SBP/SBP 4 min after exercise) was not predictive of future hypertension.
Sharp, Madeleine E.; Viswanathan, Jayalakshmi; Lanyon, Linda J.; Barton, Jason J. S.
2012-01-01
Background: There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights in conditions with anomalous reward-related behaviour. Objective: We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. Design/Methods: Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. Results: Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a ‘risk premium’ of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. Conclusions: This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia. PMID:22493669
Obrist, Seraina; Rogan, Slavko; Hilfiker, Roger
2016-01-01
Introduction. Falls are frequent in older adults and may have serious consequences, but awareness of fall-risk is often low. A questionnaire might raise awareness of fall-risk; therefore, we set out to construct and test such a questionnaire. Methods. Fall-risk factors and their odds ratios were extracted from meta-analyses and a questionnaire was devised to cover these risk factors. A formula to estimate the probability of future falls was set up using the extracted odds ratios. The understandability of the questionnaire and the discrimination and calibration of the prediction formula were tested in a cohort study with a six-month follow-up. Community-dwelling persons over 60 years were recruited by an e-mail snowball-sampling method. Results and Discussion. We included 134 persons. Response rates for the monthly fall-related follow-up varied between the months, ranging from a low of 38% to a high of 90%. The proportion of present risk factors was low. Twenty-five participants reported falls. Discrimination was moderate (AUC: 0.67, 95% CI 0.54 to 0.81). The understandability, with the exception of five questions, was good. The wording of the questions needs to be improved and measures to increase the monthly response rates are needed before test-retest reliability and final predictive value can be assessed. PMID:27247571
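The kind of prediction formula described above, built from odds ratios extracted from meta-analyses, can be assembled by multiplying a baseline odds by the odds ratios of the risk factors a respondent reports and converting back to a probability. The factor names, odds ratios, and baseline below are illustrative assumptions, not the authors' published formula, and the multiplication assumes the risk factors act independently on the odds scale.

```python
def fall_probability(baseline_prob: float, odds_ratios: dict, present: dict) -> float:
    """Combine a baseline probability with the odds ratios of reported risk factors."""
    odds = baseline_prob / (1.0 - baseline_prob)
    for factor, or_value in odds_ratios.items():
        if present.get(factor, False):
            odds *= or_value
    return odds / (1.0 + odds)

# Illustrative odds ratios (not the values extracted by the authors).
ors = {"previous_fall": 2.8, "gait_problem": 2.1, "psychotropic_medication": 1.7}
print(fall_probability(0.30, ors, {"previous_fall": True, "gait_problem": True}))
```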
Sharp, Madeleine E; Viswanathan, Jayalakshmi; Lanyon, Linda J; Barton, Jason J S
2012-01-01
There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights in conditions with anomalous reward-related behaviour. We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a 'risk premium' of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia.
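The prospect theory analysis referred to above rests on a concave value function and an inverse-S-shaped probability weighting function. The sketch below uses the standard Tversky-Kahneman functional forms with their commonly cited parameter estimates (alpha = 0.88, gamma = 0.61); these parameters are assumptions for illustration, not values fitted to this study's data.

```python
def value(x: float, alpha: float = 0.88) -> float:
    """Concave subjective value of a gain x."""
    return x ** alpha

def weight(p: float, gamma: float = 0.61) -> float:
    """Inverse-S probability weighting: small p overweighted, moderate-to-large p underweighted."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def prospect_utility(amount: float, prob: float) -> float:
    return weight(prob) * value(amount)

# Two prospects: higher probability / lower reward vs. lower probability / higher reward.
safe = prospect_utility(20.0, 0.95)    # expected value 19
risky = prospect_utility(40.0, 0.50)   # expected value 20 (about 5% higher)
print(safe, risky)
# The weighted value favours the high-probability prospect despite its lower expected
# value, which is the kind of 'risk premium' observed in the study.
```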
Rolison, Jonathan J; Hanoch, Yaniv; Miron-Shatz, Talya
2012-07-01
Genetic testing for gene mutations associated with specific cancers provides an opportunity for early detection, surveillance, and intervention (Smith, Cokkinides, & Brawley, 2008). Lifetime risk estimates provided by genetic testing refer to the risk of developing a specific disease within one's lifetime, and evidence suggests that this is important for the medical choices people make, as well as their future family and financial plans. The present studies tested whether adult men understand the lifetime risks of prostate cancer informed by genetic testing. In 2 experiments, adult men were asked to interpret the lifetime risk information provided in statements about risks of prostate cancer. Statement format was manipulated such that the most appropriate interpretation of risk statements referred to an absolute risk of cancer in experiment 1 and a relative risk in experiment 2. Experiment 1 revealed that few men correctly interpreted the lifetime risks of cancer when these refer to an absolute risk of cancer, and numeracy levels positively predicted correct responding. The proportion of correct responses was greatly improved in experiment 2 when the most appropriate interpretation of risk statements referred instead to a relative rather than an absolute risk, and numeracy levels were less involved. Understanding of lifetime risk information is often poor because individuals incorrectly believe that these refer to relative rather than absolute risks of cancer.
Accurate and robust genomic prediction of celiac disease using statistical learning.
Abraham, Gad; Tye-Din, Jason A; Bhalala, Oneil G; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael
2014-02-01
Practical application of genomic-based risk stratification to clinical diagnosis is appealing, yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87-0.89) and in independent replication across cohorts (AUC of 0.86-0.9), despite differences in ethnicity. The models explained 30-35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those who would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases.
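The workflow sketched below, penalized logistic regression over all SNPs at once, evaluation by AUC, and a threshold chosen to favour negative predictive value for rule-out use, mirrors the general approach described above but on synthetic genotypes; the penalty, cut-off, and performance figures are illustrative, not the published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_samples, n_snps = 2000, 500
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)   # allele counts 0/1/2
beta = np.zeros(n_snps)
beta[:20] = rng.normal(0.0, 0.5, 20)                              # a handful of causal SNPs
logits = X @ beta
logits -= logits.mean()
y = (rng.random(n_samples) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# L2-penalized logistic regression over all SNPs simultaneously acts as a simple GRS.
grs_model = LogisticRegression(penalty="l2", C=0.1, max_iter=2000)
grs_model.fit(X_tr, y_tr)
score = grs_model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, score), 3))

# Move the cut-off to favour negative predictive value (rule-out use).
threshold = np.quantile(score, 0.2)          # illustrative choice, not a validated cut-off
predicted_negative = score < threshold
npv = np.mean(y_te[predicted_negative] == 0)
print("NPV at this cut-off:", round(float(npv), 3))
```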
Lantelme, Pierre; Eltchaninoff, Hélène; Rabilloud, Muriel; Souteyrand, Géraud; Dupré, Marion; Spaziano, Marco; Bonnet, Marc; Becle, Clément; Riche, Benjamin; Durand, Eric; Bouvier, Erik; Dacher, Jean-Nicolas; Courand, Pierre-Yves; Cassagnes, Lucie; Dávila Serrano, Eduardo E; Motreff, Pascal; Boussel, Loic; Lefèvre, Thierry; Harbaoui, Brahim
2018-05-11
The aim of this study was to develop a new scoring system based on thoracic aortic calcification (TAC) to predict 1-year cardiovascular and all-cause mortality. A calcified aorta is often associated with poor prognosis after transcatheter aortic valve replacement (TAVR). A risk score encompassing aortic calcification may be valuable in identifying poor TAVR responders. The C4CAPRI (4 Cities for Assessing CAlcification PRognostic Impact) multicenter study included a training cohort (1,425 patients treated using TAVR between 2010 and 2014) and a contemporary test cohort (311 patients treated in 2015). TAC was measured by computed tomography pre-TAVR. CAPRI risk scores were based on the linear predictors of Cox models including TAC in addition to comorbidities and demographic, atherosclerotic disease and cardiac function factors. CAPRI scores were constructed and tested in 2 independent cohorts. Cardiovascular and all-cause mortality at 1 year was 13.0% and 17.9%, respectively, in the training cohort and 8.2% and 11.8% in the test cohort. The inclusion of TAC in the model improved prediction: a 1-cm³ increase in TAC was associated with a 6% increase in cardiovascular mortality and a 4% increase in all-cause mortality. The predicted and observed survival probabilities were highly correlated (slopes >0.9 for both cardiovascular and all-cause mortality). The model's predictive power was fair (AUC 68% [95% confidence interval [CI]: 64-72]) for both cardiovascular and all-cause mortality. The model performed similarly in the training and test cohorts. The CAPRI score, which combines the TAC variable with classical prognostic factors, is predictive of 1-year cardiovascular and all-cause mortality. Its predictive performance was confirmed in an independent contemporary cohort. CAPRI scores are highly relevant to current practice and strengthen the evidence base for decision making in valvular interventions. Its routine use may help prevent futile procedures. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
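A score built from the linear predictor of a Cox model, as described for CAPRI, can be sketched with the lifelines package; the covariates, the simulated survival times, and the resulting coefficients below are placeholders, not the CAPRI variables or weights.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
covariates = pd.DataFrame({
    "aortic_calcification_cm3": rng.gamma(2.0, 2.0, n),   # stand-in for the TAC volume
    "age": rng.normal(82.0, 6.0, n),
    "lvef": rng.normal(55.0, 10.0, n),
})
# Synthetic follow-up loosely driven by the covariates, censored at 12 months.
linpred = (0.06 * covariates["aortic_calcification_cm3"]
           + 0.03 * (covariates["age"] - 82.0)
           - 0.02 * (covariates["lvef"] - 55.0))
time = rng.exponential((24.0 * np.exp(-linpred)).to_numpy())
df = covariates.copy()
df["event"] = (time < 12.0).astype(int)
df["time"] = np.minimum(time, 12.0)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

# The risk score is the Cox linear predictor: higher score = higher predicted 1-year hazard.
score = cph.predict_log_partial_hazard(covariates)
print(cph.summary[["coef", "exp(coef)"]])
print(score.describe())
```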
Gajic, Ognjen; Dabbagh, Ousama; Park, Pauline K; Adesanya, Adebola; Chang, Steven Y; Hou, Peter; Anderson, Harry; Hoth, J Jason; Mikkelsen, Mark E; Gentile, Nina T; Gong, Michelle N; Talmor, Daniel; Bajwa, Ednan; Watkins, Timothy R; Festic, Emir; Yilmaz, Murat; Iscimen, Remzi; Kaufman, David A; Esper, Annette M; Sadikot, Ruxana; Douglas, Ivor; Sevransky, Jonathan; Malinchoc, Michael
2011-02-15
Accurate, early identification of patients at risk for developing acute lung injury (ALI) provides the opportunity to test and implement secondary prevention strategies. To determine the frequency and outcome of ALI development in patients at risk and validate a lung injury prediction score (LIPS). In this prospective multicenter observational cohort study, predisposing conditions and risk modifiers predictive of ALI development were identified from routine clinical data available during initial evaluation. The discrimination of the model was assessed with area under receiver operating curve (AUC). The risk of death from ALI was determined after adjustment for severity of illness and predisposing conditions. Twenty-two hospitals enrolled 5,584 patients at risk. ALI developed a median of 2 (interquartile range 1-4) days after initial evaluation in 377 (6.8%; 148 ALI-only, 229 adult respiratory distress syndrome) patients. The frequency of ALI varied according to predisposing conditions (from 3% in pancreatitis to 26% after smoke inhalation). LIPS discriminated patients who developed ALI from those who did not with an AUC of 0.80 (95% confidence interval, 0.78-0.82). When adjusted for severity of illness and predisposing conditions, development of ALI increased the risk of in-hospital death (odds ratio, 4.1; 95% confidence interval, 2.9-5.7). ALI occurrence varies according to predisposing conditions and carries an independently poor prognosis. Using routinely available clinical data, LIPS identifies patients at high risk for ALI early in the course of their illness. This model will alert clinicians about the risk of ALI and facilitate testing and implementation of ALI prevention strategies. Clinical trial registered with www.clinicaltrials.gov (NCT00889772).
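Scores such as LIPS are essentially sums of points attached to predisposing conditions and risk modifiers, with discrimination summarized by the AUC. The point values and the tiny cohort below are invented for illustration and are not the published LIPS weights.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical point assignments for predisposing conditions / risk modifiers.
POINTS = {"shock": 2.0, "sepsis": 1.0, "pneumonia": 1.5, "smoke_inhalation": 2.0,
          "high_risk_surgery": 1.5, "alcohol_abuse": 1.0}

def lips_like_score(patient: dict) -> float:
    """Sum the points for every condition recorded for the patient."""
    return sum(points for name, points in POINTS.items() if patient.get(name, False))

# Tiny illustrative cohort: conditions present at initial evaluation and ALI outcome (1 = ALI).
cohort = [
    ({"sepsis": True, "pneumonia": True}, 1),
    ({"high_risk_surgery": True}, 0),
    ({"shock": True, "sepsis": True, "alcohol_abuse": True}, 1),
    ({"pneumonia": True}, 0),
    ({"smoke_inhalation": True}, 1),
    ({}, 0),
]
scores = np.array([lips_like_score(p) for p, _ in cohort])
outcomes = np.array([y for _, y in cohort])
print("AUC:", roc_auc_score(outcomes, scores))
```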
Nutritional Risk in Emergency-2017: A New Simplified Proposal for a Nutrition Screening Tool.
Marcadenti, Aline; Mendes, Larissa Loures; Rabito, Estela Iraci; Fink, Jaqueline da Silva; Silva, Flávia Moraes
2018-03-13
There are many nutrition screening tools currently being applied in hospitals to identify risk of malnutrition. However, multivariate statistical models are not usually employed to take into account the importance of each variable included in the instrument's development. To develop and evaluate the concurrent and predictive validities of a new screening tool of nutrition risk. A prospective cohort study was developed, in which 4 nutrition screening tools were applied to all patients. Length of stay in hospital and mortality were considered to test the predictive validity, and the concurrent validity was tested by comparing the Nutritional Risk in Emergency (NRE)-2017 to the other tools. A total of 748 patients were included. The final NRE-2017 score was composed of 6 questions (advanced age, metabolic stress of the disease, decreased appetite, changing of food consistency, unintentional weight loss, and muscle mass loss) with answers yes or no. The prevalence of nutrition risk was 50.7% and 38.8% considering the cutoff points 1.0 and 1.5, respectively. The NRE-2017 showed a satisfactory power to identify risk of malnutrition (area under the curve >0.790 for all analyses). According to the NRE-2017, patients at risk of malnutrition have a twofold higher relative risk of a very long hospital stay. The hazard ratio for mortality was 2.78 (1.03-7.49) when the cutoff adopted by the NRE-2017 was 1.5 points. NRE-2017 is a new, easy-to-apply nutrition screening tool which uses 6 dichotomous (yes/no) features to detect the risk of malnutrition, and it presented good concurrent and predictive validity. © 2018 American Society for Parenteral and Enteral Nutrition.
Iden, S. C.; Durner, W.; Delay, M.; Frimmel, F. H.
2009-04-01
Contaminated porous materials, like soils, dredged sediments or waste materials must be tested before they can be used as filling materials in order to minimize the risk of groundwater pollution. We applied a multiple batch extraction test at varying liquid-to-solid (L/S) ratios to a demolition waste material and a municipal waste incineration product and investigated the release of chloride, sulphate, sodium, copper, chromium and dissolved organic carbon from both waste materials. The liquid phase test concentrations were used to estimate parameters of a relatively simple mass balance model accounting for equilibrium partitioning. The model parameters were estimated within a Bayesian framework by applying an efficient MCMC sampler and the uncertainties of the model parameters and model predictions were quantified. We tested isotherms of the linear, Freundlich and Langmuir type and selected the optimal isotherm model by use of the Deviance Information Criterion (DIC). Both the excellent fit to the experimental data and a comparison between the model-predicted and independently measured concentrations at the L/S ratios of 0.25 and 0.5 L/kg demonstrate the applicability of the model for almost all studied substances and both waste materials. We conclude that batch extraction tests at varying L/S ratios provide, at moderate experimental cost, a powerful complement to established test designs like column leaching or single batch extraction tests. The method constitutes an important tool in risk assessments, because concentrations at soil water contents representative for the field situation can be predicted from easier-to-obtain test concentrations at larger L/S ratios. This helps to circumvent the experimental difficulties of the soil saturation extract and eliminates the need to apply statistical approaches to predict such representative concentrations which have been shown to suffer dramatically from poor correlations.
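The equilibrium-partitioning core of such a mass-balance model comes down to fitting an isotherm to concentrations measured at different L/S ratios and then extrapolating to field-relevant conditions. The sketch below fits a Freundlich isotherm by non-linear least squares on made-up data; the authors' Bayesian MCMC estimation and model selection by DIC are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, k_f, n):
    """Freundlich isotherm: sorbed concentration as a function of liquid concentration."""
    return k_f * c ** (1.0 / n)

# Illustrative liquid-phase concentrations (mg/L) and sorbed amounts (mg/kg)
# from batch extractions at different liquid-to-solid ratios.
c_liquid = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
q_sorbed = np.array([3.1, 4.4, 6.0, 8.6, 12.1, 17.3])

params, cov = curve_fit(freundlich, c_liquid, q_sorbed, p0=[1.0, 1.0])
k_f, n = params
print(f"K_F = {k_f:.2f}, n = {n:.2f}")

# Predict the sorbed amount at a lower, field-relevant liquid concentration.
print("predicted q at c = 0.25 mg/L:", freundlich(0.25, k_f, n))
```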
Bellini, F; Ricci, G; Remondini, D; Pession, A
2014-05-01
Oral food challenge (OFC) is still considered the gold standard for the diagnosis of food allergy (FA). Skin prick tests (SPT) and specific IgE (sIgE) tests are very useful but limited in their predictive accuracy. The end point test (EPT) has recently been considered for determining the starting dose to induce oral desensitization. Allergometric tests in combination may discriminate children at higher risk of reactions during OFC. We considered 94 children with cow's milk allergy (CMA) referred to our Allergy and Immunology Pediatric Department between January 2009 and December 2011. Cutaneous allergometric skin tests (SPT and EPT) were periodically performed on all 94 children with CMA; sIgE levels against the cow's milk proteins (CMP) α-lactalbumin, β-lactoglobulin and casein were evaluated through blood samples every 6-12 months. During the study period, 26/94 (27.6%) children underwent OFC more than once. We collected 135 OFCs and compared them with the clinical presentation: 49/135 (36.2%) OFCs were performed shortly after the onset of symptoms directly related to spontaneous intake of milk, to confirm the suspicion of FA; 86/135 (63.7%) OFCs were performed to evaluate the acquisition of tolerance. Of these, 52/86 (60.4%) OFCs were positive and 34/86 (39.5%) were negative. The 3D EPT had the best balance of sensitivity (SE) and positive predictive value (PPV): SE 83%, specificity (SP) 58.3%, PPV 89.3%, negative predictive value (NPV) 45.1%. The 6D and 7D EPTs had the best PPV (100%) but a low NPV (22.2% and 21.2%, respectively). A mean fresh-milk wheal diameter ≥ 12 mm was predictive of the OFC result in 97% of cases, but only 32/101 (31.6%) allergic children presented this value. The tests with a wheal diameter ≤ 5 mm were performed on younger children, all of whom were less than 9 months old; only 5 other tests performed on children younger than 9 months fell into the other subgroups (1 in the ≥ 12 mm group and 4 in the 6-11 mm group). We also found that 95% of children with a 4D EPT wheal diameter < 6 mm were tolerant. This cut-off could be useful for deciding which children have a lower risk of reactions during the OFC. EPT is more useful than SPT, especially for children < 1 year of age, being a less operator-dependent test, and it could help discriminate between children at the highest risk of developing anaphylaxis following an OFC (≥ 5D positive EPT) and children at the lowest risk (> 2D positive EPT), but it cannot replace OFC, which currently remains the gold standard in the diagnosis of FA. We also underline that in allergic children younger than 9 months, the fresh-milk SPT wheal values are much lower than in older children, so it is better to consider this age group separately when trying to predict the outcome of OFC through evaluation with EPT. Validation of these results in a prospective study would be useful to confirm our findings on the predictive value for OFC.
Maier, Sabine; Chung, Christine H; Morse, Michael; Platts-Mills, Thomas; Townes, Leigh; Mukhopadhyay, Pralay; Bhagavatheeswaran, Prabhu; Racenberg, Jan; Trifan, Ovidiu C
2015-01-01
Severe infusion reactions (SIRs) at rates of 5% or less are known side effects of biological agents, including mAbs such as cetuximab. There are currently no prospectively validated risk factors to aid physicians in identifying patients who may be at risk of experiencing an SIR following administration of any of these drugs. A retrospective analysis of 545 banked serum or plasma samples from cancer patients participating in clinical trials of cetuximab was designed to evaluate whether the presence of pretreatment IgE antibodies against cetuximab, as determined by a commercially available assay system, is associated with SIRs during the initial cetuximab infusion. Patients with a positive test indicating the presence of pretreatment antibodies had a higher risk of experiencing an SIR; however, at the prespecified cutoff utilized in this analysis, the test has a relatively low-positive predictive value (0.577 [0.369-0.766]) and a negative predictive value of 0.961 (0.912-0.987) in an unselected patient population. Data collected in this large retrospective validation study support prior observations of an association between the presence of pretreatment IgE antibodies cross-reactive with cetuximab and SIRs. Further analysis of the test's ability to predict patients at risk of an SIR would be required before this assay could be used reliably in this patient population. © 2014 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
The relationship between population adaptive potential and extinction risk in a changing environment is not well understood. Although the expectation is that genetic diversity is directly related to the capacity of populations to adapt, the statistical and predictive aspects of ...
A 21st Century Roadmap for Human Health Risk Assessment
For decades human health risk assessment has depended primarily on animal testing to predict adverse effects in humans, but that paradigm has come under question because of calls for more accurate information, less use of animals, and more efficient use of resources. Moreover, t...
Morris, Rob; Harwood, Rowan H; Baker, Ros; Sahota, Opinder; Armstrong, Sarah; Masud, Tahir
2007-01-01
People with vertebral fractures are at high risk of developing hip fractures. Falls risk is important in the pathogenesis of hip fractures. To investigate if balance tests, in conjunction with a falls history, can predict falls in older women with vertebral fractures. A cohort study of community-dwelling women aged over 60 years, with vertebral fractures. Balance tests investigated were: 5 m-timed-up-and-go test (5 m-TUG), timed 10 m walk, TURN180 test (number of steps to turn 180 degrees), tandem walk, and ability to stand from a chair with arms folded. Leg extensor power was also measured. Fallers (at least one fall in a 12-month follow-up period) versus non-fallers. One hundred and four women aged 63-91 years [mean=78 +/- 7] were recruited. Eighty-six (83%) completed the study. Four variables were significantly associated with fallers: previous recurrent faller (2+ falls) [OR=6.52; 95% CI=1.69-25.22], 5 m-TUG test [OR=1.03; 1.00-1.06], timed 10 m walk [OR=1.07; 1.01-1.13] and the TURN180 test [OR=1.22; 1.00-1.49] [P <0.05]. Multi-variable analysis showed that only two variables, previous recurrent faller [OR=5.60; 1.40-22.45] and the 5 m-TUG test [OR=1.04; 1.00-1.08], were independently significantly associated with fallers. The optimal cut-off time for performing the 5 m-TUG test in predicting fallers was 30 s (area under ROC=60%). Combining previous recurrent faller with the 5 m-TUG improved prediction of fallers [OR=16.79, specificity=100%, sensitivity=13%]. A previous history of recurrent falls and the inability to perform the 5 m-TUG test within 30 s predicted falls in older women with vertebral fractures. Combining these two measures can predict fallers with a high degree of specificity (although a low sensitivity), allowing the identification of a group of patients suitable for fall and fracture prevention measures.
Mehdi, Ahmed M; Hamilton-Williams, Emma E; Cristino, Alexandre; Ziegler, Anette; Bonifacio, Ezio; Le Cao, Kim-Anh; Harris, Mark; Thomas, Ranjeny
2018-03-08
Autoimmune-mediated destruction of pancreatic islet β cells results in type 1 diabetes (T1D). Serum islet autoantibodies usually develop in genetically susceptible individuals in early childhood before T1D onset, with multiple islet autoantibodies predicting diabetes development. However, most at-risk children remain islet-antibody negative, and no test currently identifies those likely to seroconvert. We sought a genomic signature predicting seroconversion risk by integrating longitudinal peripheral blood gene expression profiles collected in high-risk children included in the BABYDIET and DIPP cohorts, of whom 50 seroconverted. Subjects were followed for 10 years to determine time of seroconversion. Any cohort effect and the time of seroconversion were corrected to uncover genes differentially expressed (DE) in seroconverting children. Gene expression signatures associated with seroconversion were evident during the first year of life, with 67 DE genes identified in seroconverting children relative to those remaining antibody negative. These genes contribute to T cell-, DC-, and B cell-related immune responses. Near-birth expression of ADCY9, PTCH1, MEX3B, IL15RA, ZNF714, TENM1, and PLEKHA5, along with HLA risk score predicted seroconversion (AUC 0.85). The ubiquitin-proteasome pathway linked DE genes and T1D susceptibility genes. Therefore, a gene expression signature in infancy predicts risk of seroconversion. Ubiquitination may play a mechanistic role in diabetes progression.
Mehdi, Ahmed M.; Hamilton-Williams, Emma E.; Cristino, Alexandre; Ziegler, Anette; Harris, Mark
2018-01-01
Autoimmune-mediated destruction of pancreatic islet β cells results in type 1 diabetes (T1D). Serum islet autoantibodies usually develop in genetically susceptible individuals in early childhood before T1D onset, with multiple islet autoantibodies predicting diabetes development. However, most at-risk children remain islet-antibody negative, and no test currently identifies those likely to seroconvert. We sought a genomic signature predicting seroconversion risk by integrating longitudinal peripheral blood gene expression profiles collected in high-risk children included in the BABYDIET and DIPP cohorts, of whom 50 seroconverted. Subjects were followed for 10 years to determine time of seroconversion. Any cohort effect and the time of seroconversion were corrected to uncover genes differentially expressed (DE) in seroconverting children. Gene expression signatures associated with seroconversion were evident during the first year of life, with 67 DE genes identified in seroconverting children relative to those remaining antibody negative. These genes contribute to T cell–, DC-, and B cell–related immune responses. Near-birth expression of ADCY9, PTCH1, MEX3B, IL15RA, ZNF714, TENM1, and PLEKHA5, along with HLA risk score predicted seroconversion (AUC 0.85). The ubiquitin-proteasome pathway linked DE genes and T1D susceptibility genes. Therefore, a gene expression signature in infancy predicts risk of seroconversion. Ubiquitination may play a mechanistic role in diabetes progression. PMID:29515040
Abdelbary, B E; Garcia-Viveros, M; Ramirez-Oropesa, H; Rahbar, M H; Restrepo, B I
2017-10-01
The purpose of this study was to develop a method for identifying newly diagnosed tuberculosis (TB) patients at risk for TB adverse events in Tamaulipas, Mexico. Surveillance data between 2006 and 2013 (8431 subjects) was used to develop risk scores based on predictive modelling. The final models revealed that TB patients failing their treatment regimen were more likely to have at most a primary school education, multi-drug resistance (MDR)-TB, and few to moderate bacilli on acid-fast bacilli smear. TB patients who died were more likely to be older males with MDR-TB, HIV, malnutrition, and reporting excessive alcohol use. Modified risk scores were developed with strong predictability for treatment failure and death (c-statistic 0·65 and 0·70, respectively), and moderate predictability for drug resistance (c-statistic 0·57). Among TB patients with diabetes, risk scores showed moderate predictability for death (c-statistic 0·68). Our findings suggest that in the clinical setting, the use of our risk scores for TB treatment failure or death will help identify these individuals for tailored management to prevent these adverse events. In contrast, the available variables in the TB surveillance dataset are not robust predictors of drug resistance, indicating the need for prompt testing at time of diagnosis.
Myer, Gregory D.; Ford, Kevin R.; Khoury, Jane; Succop, Paul; Hewett, Timothy E.
2012-01-01
Background: Prospective measures of high knee abduction moment (KAM) during landing identify female athletes at high risk for anterior cruciate ligament injury. Laboratory-based measurements demonstrate 90% accuracy in prediction of high KAM. Clinic-based prediction algorithms that employ correlates derived from laboratory-based measurements also demonstrate high accuracy for prediction of high KAM mechanics during landing. Hypotheses: Clinic-based measures derived from highly predictive laboratory-based models are valid for the accurate prediction of high KAM status, and simultaneous measurements using laboratory-based and clinic-based techniques highly correlate. Study Design: Cohort study (diagnosis); Level of evidence, 2. Methods: One hundred female athletes (basketball, soccer, volleyball players) were tested using laboratory-based measures to confirm the validity of identified laboratory-based correlate variables to clinic-based measures included in a prediction algorithm to determine high KAM status. To analyze selected clinic-based surrogate predictors, another cohort of 20 female athletes was simultaneously tested with both clinic-based and laboratory-based measures. Results: The prediction model (odds ratio: 95% confidence interval), derived from laboratory-based surrogates including (1) knee valgus motion (1.59: 1.17-2.16 cm), (2) knee flexion range of motion (0.94: 0.89°-1.00°), (3) body mass (0.98: 0.94-1.03 kg), (4) tibia length (1.55: 1.20-2.07 cm), and (5) quadriceps-to-hamstrings ratio (1.70: 0.48%-6.0%), predicted high KAM status with 84% sensitivity and 67% specificity (P < .001). Clinic-based techniques that used a calibrated physician’s scale, a standard measuring tape, standard camcorder, ImageJ software, and an isokinetic dynamometer showed high correlation (knee valgus motion, r = .87; knee flexion range of motion, r = .95; and tibia length, r = .98) to simultaneous laboratory-based measurements. Body mass and quadriceps-to-hamstrings ratio were included in both methodologies and therefore had r values of 1.0. Conclusion: Clinically obtainable measures of increased knee valgus, knee flexion range of motion, body mass, tibia length, and quadriceps-to-hamstrings ratio predict high KAM status in female athletes with high sensitivity and specificity. Female athletes who demonstrate high KAM landing mechanics are at increased risk for anterior cruciate ligament injury and are more likely to benefit from neuromuscular training targeted to this risk factor. Use of the developed clinic-based assessment tool may facilitate high-risk athletes’ entry into appropriate interventions that will have greater potential to reduce their injury risk. PMID:20595554
Predictive value of late decelerations for fetal acidemia in unselective low-risk pregnancies.
Sameshima, Hiroshi; Ikenoue, Tsuyomu
2005-01-01
We evaluated the clinical significance of late decelerations (LD) on intrapartum fetal heart rate (FHR) monitoring for detecting low pH (< 7.1) in low-risk pregnancies. We selected two secondary and two tertiary-level institutions where 10,030 women delivered. Among them, 5522 were low-risk pregnancies. The last 2 hours of FHR patterns before delivery were interpreted according to the guidelines of the National Institute of Child Health and Human Development. The relationship of both the incidence of LD (occasional, < 50%; recurrent, ≥ 50%) and the severity of LD (reduced baseline FHR accelerations and variability) with low pH (< 7.1) was evaluated. Statistical analyses included a contingency table with the chi-square and Fisher tests, and one-way analysis of variance with the Bonferroni/Dunn test. In the 5522 low-risk pregnancies, 301 showed occasional LD and 99 showed recurrent LD. Blood gases and pH values deteriorated as the incidence of LD increased and as baseline accelerations or variability decreased. The positive predictive value for low pH (< 7.1) increased exponentially, from 0% with no decelerations to 1% with occasional LD and more than 50% with recurrent LD accompanied by absent baseline FHR accelerations and reduced variability. In low-risk pregnancies, information on LD combined with accelerations and baseline variability enables us to predict the potential incidence of fetal acidemia.
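The predictive values quoted above are simple functions of a 2x2 cross-tabulation of tracing category against low cord pH. The counts below are invented solely to show the arithmetic; they are not the study's data.

```python
def predictive_values(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Return (positive predictive value, negative predictive value) from a 2x2 table."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Invented counts: test positive = recurrent LD with absent accelerations and reduced
# variability; disease positive = umbilical artery pH < 7.1.
ppv, npv = predictive_values(tp=6, fp=5, fn=10, tn=5501)
print(f"PPV = {ppv:.0%}, NPV = {npv:.1%}")
```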
Shimada, Hiroyuki; Suzukawa, Megumi; Tiedemann, Anne; Kobayashi, Kumiko; Yoshida, Hideyo; Suzuki, Takao
2009-01-01
The use of falls risk screening tools may aid in targeting fall prevention interventions in older individuals most likely to benefit. The aim was to determine the optimal physical or cognitive test with which to screen for falls risk in frail older people. This prospective cohort study involved recruitment from 213 day-care centers in Japan. The feasibility study included 3,340 ambulatory individuals aged 65 years or older enrolled in the Tsukui Ordered Useful Care for Health (TOUCH) program. The external validation study included a subsample of 455 individuals who completed all tests. Physical tests included grip strength (GS), chair stand test (CST), one-leg standing test (OLS), functional reach test (FRT), tandem walking test (TWT), 6-meter walking speed at a comfortable pace (CWS) and at maximum pace (MWS), and timed up-and-go test (TUG). The mental status questionnaire (MSQ) was used to measure cognitive function. The incidence of falls during 1 year was investigated by self-report or an interview with the participant's family and care staff. The most practicable tests were the GS and MSQ, which could be administered to more than 90% of the participants regardless of the activities of daily living status. The FRT and TWT had lower feasibility than other lower limb function tests. During the 1-year retrospective analysis of falls, 99 (21.8%) of the 455 validation study participants had fallen at least once. Fallers showed significantly poorer performance than non-fallers in the OLS (p = 0.003), TWT (p = 0.001), CWS (p = 0.013), MWS (p = 0.007), and TUG (p = 0.011). The OLS, CWS, and MWS remained significantly associated with falls when performance cut-points were determined. Logistic regression analysis revealed that the TWT was a significant and independent, yet weak predictor of falls. A weighting system that considered feasibility and validity scored the CWS (at a cut-point of 0.7 m/s) as the best test to predict risk of falls. Clinical tests of neuromuscular function can predict risk of falls in frail older people. When feasibility and validity were considered, the CWS was the best test for use as a screening tool in frail older people; however, these preliminary results require confirmation in further research. Copyright 2009 S. Karger AG, Basel.
Making predictions of mangrove deforestation: a comparison of two methods in Kenya.
Rideout, Alasdair J R; Joshi, Neha P; Viergever, Karin M; Huxham, Mark; Briers, Robert A
2013-11-01
Deforestation of mangroves is of global concern given their importance for carbon storage, biogeochemical cycling and the provision of other ecosystem services, but the links between rates of loss and potential drivers or risk factors are rarely evaluated. Here, we identified key drivers of mangrove loss in Kenya and compared two different approaches to predicting risk. Risk factors tested included various possible predictors of anthropogenic deforestation, related to population, suitability for land use change and accessibility. Two approaches were taken to modelling risk: a quantitative statistical approach and a qualitative categorical ranking approach. A quantitative model linking rates of loss to risk factors was constructed based on generalized least squares regression and using mangrove loss data from 1992 to 2000. Population density, soil type and proximity to roads were the most important predictors. To validate this model, it was used to generate a map of losses of Kenyan mangroves predicted to have occurred between 2000 and 2010. The qualitative categorical model was constructed using data from the same selection of variables, with the coincidence of different risk factors in particular mangrove areas used in an additive manner to create a relative risk index, which was then mapped. Quantitative predictions of loss were significantly correlated with the actual loss of mangroves between 2000 and 2010, and the categorical risk index values were also highly correlated with the quantitative predictions. Hence, in this case the relatively simple categorical modelling approach was of similar predictive value to the more complex quantitative model of mangrove deforestation. The advantages and disadvantages of each approach are discussed, and the implications for mangroves are outlined. © 2013 Blackwell Publishing Ltd.
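The categorical ranking approach lends itself to a very small computation: each risk factor present in a mangrove area adds a weight, and the summed index is mapped. A minimal sketch, with factor names and unit weights assumed for illustration rather than taken from the paper:

```python
# Additive relative risk index, in the spirit of the qualitative approach above.
# Factor names and weights are illustrative assumptions.
RISK_FACTORS = {"high_population_density": 1, "suitable_soil_type": 1,
                "near_road": 1, "near_settlement": 1}

def risk_index(area: dict) -> int:
    """Sum the weights of the risk factors flagged as present for one area."""
    return sum(w for factor, w in RISK_FACTORS.items() if area.get(factor))

areas = [
    {"id": "A", "high_population_density": True, "near_road": True},
    {"id": "B", "suitable_soil_type": True},
]
for a in areas:
    print(a["id"], risk_index(a))
```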
The 12-lead electrocardiogram and risk of sudden death: current utility and future prospects.
Narayanan, Kumar; Chugh, Sumeet S
2015-10-01
More than 100 years after it was first invented, the 12-lead electrocardiogram (ECG) continues to occupy an important place in the diagnostic armamentarium of the practicing clinician. With the recognition of relatively rare but important clinical entities such as Wolff-Parkinson-White and the long QT syndrome, this clinical tool was firmly established as a test for assessing risk of sudden cardiac death (SCD). However, over the past two decades the role of the ECG in risk prediction for common forms of SCD, for example in patients with coronary artery disease, has been the focus of considerable investigation. Especially in light of the limitations of current risk stratification approaches, there is a renewed focus on this broadly available and relatively inexpensive test. Various abnormalities of depolarization and repolarization on the ECG have been linked to SCD risk; however, more focused work is needed before they can be deployed in the clinical arena. The present review summarizes the current knowledge on various ECG risk markers for prediction of SCD and discusses some future directions in this field. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Prediction of human population responses to toxic compounds by a collaborative competition.
Eduati, Federica; Mangravite, Lara M; Wang, Tao; Tang, Hao; Bare, J Christopher; Huang, Ruili; Norman, Thea; Kellen, Mike; Menden, Michael P; Yang, Jichen; Zhan, Xiaowei; Zhong, Rui; Xiao, Guanghua; Xia, Menghang; Abdo, Nour; Kosyk, Oksana; Friend, Stephen; Dearry, Allen; Simeonov, Anton; Tice, Raymond R; Rusyn, Ivan; Wright, Fred A; Stolovitzky, Gustavo; Xie, Yang; Saez-Rodriguez, Julio
2015-09-01
The ability to computationally predict the effects of toxic compounds on humans could help address the deficiencies of current chemical safety testing. Here, we report the results from a community-based DREAM challenge to predict toxicities of environmental compounds with potential adverse health effects for human populations. We measured the cytotoxicity of 156 compounds in 884 lymphoblastoid cell lines for which genotype and transcriptional data are available as part of the Tox21 1000 Genomes Project. The challenge participants developed algorithms to predict interindividual variability of toxic response from genomic profiles and population-level cytotoxicity data from structural attributes of the compounds. A total of 179 submitted predictions were evaluated against an experimental data set to which participants were blinded. Individual cytotoxicity predictions were better than random, with modest correlations (Pearson's r < 0.28), consistent with complex trait genomic prediction. In contrast, correlations for predictions of population-level response to different compounds were higher (r < 0.66). The results highlight the possibility of predicting health risks associated with unknown compounds, although risk estimation accuracy remains suboptimal.
Kingston, Drew A; Fedoroff, Paul; Firestone, Philip; Curry, Susan; Bradford, John M
2008-01-01
In this study, we examined the unique contribution of pornography consumption to the longitudinal prediction of criminal recidivism in a sample of 341 child molesters. We specifically tested the hypothesis, based on predictions informed by the confluence model of sexual aggression that pornography will be a risk factor for recidivism only for those individuals classified as relatively high risk for re-offending. Pornography use (frequency and type) was assessed through self-report and recidivism was measured using data from a national database from the Royal Canadian Mounted Police. Indices of recidivism, which were assessed up to 15 years after release, included an overall criminal recidivism index, as well as subcategories focusing on violent (including sexual) recidivism and sexual recidivism alone. Results for both frequency and type of pornography use were generally consistent with our predictions. Most importantly, after controlling for general and specific risk factors for sexual aggression, pornography added significantly to the prediction of recidivism. Statistical interactions indicated that frequency of pornography use was primarily a risk factor for higher-risk offenders, when compared with lower-risk offenders, and that content of pornography (i.e., pornography containing deviant content) was a risk factor for all groups. The importance of conceptualizing particular risk factors (e.g., pornography), within the context of other individual characteristics is discussed.
In Infants' Hands: Identification of Preverbal Infants at Risk for Primary Language Delay
ERIC Educational Resources Information Center
Lüke, Carina; Grimminger, Angela; Rohlfing, Katharina J.; Liszkowski, Ulf; Ritterfeld, Ute
2017-01-01
Early identification of primary language delay is crucial to implement effective prevention programs. Available screening instruments are based on parents' reports and have only insufficient predictive validity. This study employed observational measures of preverbal infants' gestural communication to test its predictive validity for identifying…
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumu...
Factors Influencing Physical Activity among Postpartum Iranian Women
ERIC Educational Resources Information Center
Roozbahani, Nasrin; Ghofranipour, Fazlollah; Eftekhar Ardabili, Hassan; Hajizadeh, Ebrahim
2014-01-01
Background: Postpartum women are a population at risk for sedentary living. Physical activity (PA) prior to pregnancy may be effective in predicting similar behaviour in the postpartum period. Objective: To test a composite version of the extended transtheoretical model (TTM) by adding "past behaviour" in order to predict PA behaviour…
Nohuz, Erdogan; De Simone, Luisa; Chêne, Gautier
2018-04-28
The IOTA (International Ovarian Tumor Analysis) group has developed the ADNEX (Assessment of Different NEoplasias in the adneXa) model to predict the risk that an ovarian mass is benign, borderline or malignant. This study aimed to test the reliability of these risk prediction models in improving the performance of pelvic ultrasound and discriminating between benign and malignant cysts. Postmenopausal women with an adnexal mass (including ovarian, para-ovarian and tubal) who underwent a standardized ultrasound examination before surgery were included. Prospectively and retrospectively collected data and the ultrasound appearances of the tumors were described using the terms and definitions of the IOTA group, tested in accordance with the ADNEX model, and compared with the final histological diagnosis. Of the 107 menopausal patients recruited between 2011 and 2016, 14 were excluded (incomplete inclusion criteria). The remaining 93 patients constituted the cohort: 89 had benign cysts (83 ovarian and 6 tubal or para-ovarian), 1 had a borderline tumor and 3 had invasive ovarian cancers (1 early stage, 1 advanced stage and 1 metastatic tumor in the ovary). The overall prevalence of malignancy was 4.3%. Every benign ovarian cyst was classified as probably benign by the IOTA score, which also showed high specificity: all lesions classified as probably malignant proved malignant on histological examination. The limitation of this score was the substantial proportion of unclassified or indeterminate cysts. The malignancy risks calculated by the ADNEX model, however, identified all malignancies. The combination of the two methods of analysis thus showed a sensitivity of 100% and a specificity of 98%. Evaluation of malignancy risk by these 2 tests gave a negative predictive value of 100% (there were no false negatives) and a positive predictive value of 80%. On the basis of our findings, the IOTA classification and the ADNEX multimodal algorithm, used as risk prediction models, can improve the performance of pelvic ultrasound and discriminate between benign and malignant cysts in postmenopausal women, especially for indeterminate lesions. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
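A short sketch of how the reported operating characteristics can be reproduced from a 2x2 table. The cell counts below are reconstructed from the abstract (4 malignant or borderline, 89 benign, one false positive, no false negatives) and are an assumption about the exact table, used only to illustrate the calculation.

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 classification table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts reconstructed from the abstract (assumed, for illustration only).
print(screening_metrics(tp=4, fp=1, tn=88, fn=0))
```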
Stein, Judith A; Nyamathi, Adeline; Ullman, Jodie B; Bentler, Peter M
2007-01-01
Studies among normative samples generally demonstrate a positive impact of marriage on health behaviors and other related attitudes. In this study, we examine the impact of marriage on HIV/AIDS risk behaviors and attitudes among impoverished, highly stressed, homeless couples, many with severe substance abuse problems. A multilevel analysis of 368 high-risk sexually intimate married and unmarried heterosexual couples assessed individual and couple-level effects on social support, substance use problems, HIV/AIDS knowledge, perceived HIV/AIDS risk, needle-sharing, condom use, multiple sex partners, and HIV/AIDS testing. More variance was explained in the protective and risk variables by couple-level latent variable predictors than by individual latent variable predictors, although some gender effects were found (e.g., more alcohol problems among men). The couple-level variable of marriage predicted lower perceived risk, less deviant social support, and fewer sex partners but predicted more needle-sharing.
Anxiety sensitivity cognitive concerns predict suicide risk.
Oglesby, Mary Elizabeth; Capron, Daniel William; Raines, Amanda Medley; Schmidt, Norman Bradley
2015-03-30
Anxiety sensitivity (AS) cognitive concerns, which reflect fears of mental incapacitation, have been previously associated with suicidal ideation and behavior. The first study aim was to replicate and extend upon previous research by investigating whether AS cognitive concerns can discriminate between those at low risk versus high risk for suicidal behavior. Secondly, we aimed to test the incremental predictive power of AS cognitive concerns above and beyond known suicide risk factors (i.e., thwarted belongingness and insomnia). The sample consisted of 106 individuals (75% meeting current criteria for an Axis I disorder) recruited from the community. Results revealed that AS cognitive concerns were a robust predictor of elevated suicide risk after covarying for negative affect, whereas AS social and physical concerns were not. Those with high, relative to low, AS cognitive scores were 3.67 times more likely to be in the high suicide risk group. Moreover, AS cognitive concerns significantly predicted elevated suicide risk above and beyond relevant suicide risk factors. Results of this study add to a growing body of literature demonstrating a relationship between AS cognitive concerns and increased suicidality. Incorporating AS cognitive concerns amelioration protocols into existing interventions for suicidal behavior may be beneficial. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Predicting injury risk with "New Car Assessment Program" crashworthiness ratings.
Jones, I S; Whitfield, R A
1988-12-01
The relationship between crashworthiness ratings produced by the National Highway Traffic Safety Administration's (NHTSA's) New Car Assessment Program (NCAP) and the risk of incapacitating injury or death for drivers who are involved in single-car, fixed-object, frontal collisions was examined. The results are based on 6,405 such crashes from the Motor Vehicle Traffic Accident file of the Texas Department of Highways and Public Transportation. The risk of injury was modeled using logistic regression taking into account the NCAP test results for each individual model of car and the intervening effects of car mass, age of the driver, restraint use, and crash severity. Three measures of anthropometric dummy response, Head Injury Criterion (HIC), Chest Deceleration (CD), and femur load were used to indicate vehicle crash test performance. The results show that there is a significant relationship between the results of the NCAP tests and the risk of serious injury or death in actual single-car frontal accidents. In terms of overall injury, chest deceleration was a better predictor than the Head Injury Criterion. For restrained drivers, crash severity, driver age, and chest deceleration were significant parameters for predicting risk of serious injury or death; the risk of injury decreased as chest deceleration decreased. The results were similar for unrestrained drivers although vehicle mass and femur load were also significant factors in the model. The risk of overall injury decreased as chest deceleration decreased but appeared to decrease as femur load increased.
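The reported model is a logistic regression of injury outcome on NCAP dummy responses and crash/driver covariates. The sketch below shows only the functional form; all coefficients are arbitrary placeholders rather than the fitted values from the Texas data.

```python
import math

# Illustrative-only logistic form of the injury-risk model described above:
# NCAP dummy responses (chest deceleration, HIC, femur load) plus driver age,
# restraint use, car mass and crash severity. Coefficients are placeholders.
BETA = {"intercept": -4.0, "chest_g": 0.03, "hic": 0.001,
        "femur_kN": -0.05, "driver_age": 0.02, "restrained": -0.8,
        "car_mass_t": -0.4, "delta_v_kmh": 0.05}

def injury_risk(x: dict) -> float:
    """Modelled probability of incapacitating injury or death."""
    z = BETA["intercept"] + sum(BETA[k] * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))

crash = {"chest_g": 55, "hic": 900, "femur_kN": 6.0, "driver_age": 40,
         "restrained": 1, "car_mass_t": 1.3, "delta_v_kmh": 40}
print(f"Illustrative modelled risk of serious injury: {injury_risk(crash):.2f}")
```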
Hsu, Hsiu-Yueh; Yu, Hsing-Yi; Lou, Jiunn-Horng; Eng, Cheng-Joo
2015-04-01
Sexual self-efficacy plays an important role in adolescents' sexual health. The aim of this study was to test a cause-and-effect model of sexual self-concept and sexual risk cognition toward sexual self-efficacy in adolescents. The study was a cross-sectional survey. Using a random sampling method, a total of 713 junior nursing students were invited to participate in the study, and 465 valid surveys were returned, resulting in a return rate of 65.2%. The data was collected using an anonymous mailed questionnaire. Structural equation modeling was used to test the relationships among sexual self-concept, sexual risk cognition, and sexual self-efficacy, as well as the mediating role of sexual risk cognition. The results revealed that the postulated model fits the data well. Sexual self-concept significantly predicted sexual risk cognition and sexual self-efficacy. Sexual risk cognition significantly predicted sexual self-efficacy and had a mediating effect on the relationship between sexual self-concept and sexual self-efficacy. Based on social cognitive theory and a structural equation model technique, this study confirmed the mediating role of sexual risk cognition in the relationship between sexual self-concept and sexual self-efficacy. Also, sexual self-concept's direct and indirect effects explaining adolescents' sexual self-efficacy were found in this study. © 2014 The Authors. Japan Journal of Nursing Science © 2014 Japan Academy of Nursing Science.
Jakimov, Tamara; Mrdović, Igor; Filipović, Branka; Zdravković, Marija; Djoković, Aleksandra; Hinić, Saša; Milić, Nataša; Filipović, Branislav
2017-12-31
To compare the prognostic performance of three major risk scoring systems including global registry for acute coronary events (GRACE), thrombolysis in myocardial infarction (TIMI), and prediction of 30-day major adverse cardiovascular events after primary percutaneous coronary intervention (RISK-PCI). This single-center retrospective study involved 200 patients with acute coronary syndrome (ACS) who underwent invasive diagnostic approach, ie, coronary angiography and myocardial revascularization if appropriate, in the period from January 2014 to July 2014. The GRACE, TIMI, and RISK-PCI risk scores were compared for their predictive ability. The primary endpoint was a composite 30-day major adverse cardiovascular event (MACE), which included death, urgent target-vessel revascularization (TVR), stroke, and non-fatal recurrent myocardial infarction (REMI). The c-statistics of the tested scores for 30-day MACE or area under the receiver operating characteristic curve (AUC) with confidence intervals (CI) were as follows: RISK-PCI (AUC=0.94; 95% CI 1.790-4.353), the GRACE score on admission (AUC=0.73; 95% CI 1.013-1.045), the GRACE score on discharge (AUC=0.65; 95% CI 0.999-1.033). The RISK-PCI score was the only score that could predict TVR (AUC=0.91; 95% CI 1.392-2.882). The RISK-PCI scoring system showed an excellent discriminative potential for 30-day death (AUC=0.96; 95% CI 1.339-3.548) in comparison with the GRACE scores on admission (AUC=0.88; 95% CI 1.018-1.072) and on discharge (AUC=0.78; 95% CI 1.000-1.058). In comparison with the GRACE and TIMI scores, RISK-PCI score showed a non-inferior ability to predict 30-day MACE and death in ACS patients. Moreover, RISK-PCI was the only scoring system that could predict recurrent ischemia requiring TVR.
Jakimov, Tamara; Mrdović, Igor; Filipović, Branka; Zdravković, Marija; Djoković, Aleksandra; Hinić, Saša; Milić, Nataša; Filipović, Branislav
2017-01-01
Aim To compare the prognostic performance of three major risk scoring systems including global registry for acute coronary events (GRACE), thrombolysis in myocardial infarction (TIMI), and prediction of 30-day major adverse cardiovascular events after primary percutaneous coronary intervention (RISK-PCI). Methods This single-center retrospective study involved 200 patients with acute coronary syndrome (ACS) who underwent invasive diagnostic approach, ie, coronary angiography and myocardial revascularization if appropriate, in the period from January 2014 to July 2014. The GRACE, TIMI, and RISK-PCI risk scores were compared for their predictive ability. The primary endpoint was a composite 30-day major adverse cardiovascular event (MACE), which included death, urgent target-vessel revascularization (TVR), stroke, and non-fatal recurrent myocardial infarction (REMI). Results The c-statistics of the tested scores for 30-day MACE or area under the receiver operating characteristic curve (AUC) with confidence intervals (CI) were as follows: RISK-PCI (AUC = 0.94; 95% CI 1.790-4.353), the GRACE score on admission (AUC = 0.73; 95% CI 1.013-1.045), the GRACE score on discharge (AUC = 0.65; 95% CI 0.999-1.033). The RISK-PCI score was the only score that could predict TVR (AUC = 0.91; 95% CI 1.392-2.882). The RISK-PCI scoring system showed an excellent discriminative potential for 30-day death (AUC = 0.96; 95% CI 1.339-3.548) in comparison with the GRACE scores on admission (AUC = 0.88; 95% CI 1.018-1.072) and on discharge (AUC = 0.78; 95% CI 1.000-1.058). Conclusions In comparison with the GRACE and TIMI scores, RISK-PCI score showed a non-inferior ability to predict 30-day MACE and death in ACS patients. Moreover, RISK-PCI was the only scoring system that could predict recurrent ischemia requiring TVR. PMID:29308832
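The comparison above rests on c-statistics (areas under the ROC curve) for each score. A hedged sketch of computing an AUC with a percentile-bootstrap confidence interval follows; the scores and outcomes are toy values, not the registry data, and this is not the exact procedure used in the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(score, outcome, n_boot=2000, seed=0):
    """c-statistic (AUC) of a risk score against a binary 30-day MACE outcome,
    with a percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    score, outcome = np.asarray(score, float), np.asarray(outcome, int)
    point = roc_auc_score(outcome, score)
    boots, n = [], len(outcome)
    while len(boots) < n_boot:
        idx = rng.integers(0, n, n)
        if outcome[idx].min() == outcome[idx].max():
            continue  # a resample must contain both classes
        boots.append(roc_auc_score(outcome[idx], score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)

# Toy data standing in for RISK-PCI scores and observed MACE (not study data).
scores = [2, 5, 1, 7, 3, 8, 2, 6, 4, 9]
mace = [0, 1, 0, 1, 0, 1, 0, 0, 0, 1]
print(auc_with_bootstrap_ci(scores, mace, n_boot=500))
```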
Scalar utility theory and proportional processing: what does it actually imply?
Rosenström, Tom; Wiesner, Karoline; Houston, Alasdair I
2017-01-01
Scalar Utility Theory (SUT) is a model used to predict animal and human choice behaviour in the context of reward amount, delay to reward, and variability in these quantities (risk preferences). This article reviews and extends SUT, deriving novel predictions. We show that, contrary to what has been implied in the literature, (1) SUT can predict both risk averse and risk prone behaviour for both reward amounts and delays to reward depending on experimental parameters, (2) SUT implies violations of several concepts of rational behaviour (e.g. it violates strong stochastic transitivity and its equivalents, and leads to probability matching) and (3) SUT can predict, but does not always predict, a linear relationship between risk sensitivity in choices and coefficient of variation in the decision-making experiment. SUT derives from Scalar Expectancy Theory which models uncertainty in behavioural timing using a normal distribution. We show that the above conclusions also hold for other distributions, such as the inverse Gaussian distribution derived from drift-diffusion models. A straightforward way to test the key assumptions of SUT is suggested and possible extensions, future prospects and mechanistic underpinnings are discussed. PMID:27288541
Scalar utility theory and proportional processing: What does it actually imply?
Rosenström, Tom; Wiesner, Karoline; Houston, Alasdair I
2016-09-07
Scalar Utility Theory (SUT) is a model used to predict animal and human choice behaviour in the context of reward amount, delay to reward, and variability in these quantities (risk preferences). This article reviews and extends SUT, deriving novel predictions. We show that, contrary to what has been implied in the literature, (1) SUT can predict both risk averse and risk prone behaviour for both reward amounts and delays to reward depending on experimental parameters, (2) SUT implies violations of several concepts of rational behaviour (e.g. it violates strong stochastic transitivity and its equivalents, and leads to probability matching) and (3) SUT can predict, but does not always predict, a linear relationship between risk sensitivity in choices and coefficient of variation in the decision-making experiment. SUT derives from Scalar Expectancy Theory which models uncertainty in behavioural timing using a normal distribution. We show that the above conclusions also hold for other distributions, such as the inverse Gaussian distribution derived from drift-diffusion models. A straightforward way to test the key assumptions of SUT is suggested and possible extensions, future prospects and mechanistic underpinnings are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
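The core mechanism of SUT can be reproduced in a few lines of simulation: remembered reward magnitudes are perturbed by normal noise with a constant coefficient of variation, and the option with the larger sampled value is chosen. The sketch below is illustrative only and is not the authors' derivation; the parameter values are arbitrary.

```python
import numpy as np

def p_choose_variable(fixed, outcomes, probs, cv=0.3, n=200_000, seed=1):
    """Monte Carlo sketch of Scalar Utility Theory: each remembered reward
    amount is drawn from a normal distribution whose standard deviation is
    proportional to its magnitude (constant coefficient of variation `cv`),
    and the option with the larger sampled value is chosen."""
    rng = np.random.default_rng(seed)
    # sample the variable option's programmed outcome, then add scalar noise
    var_amounts = rng.choice(outcomes, size=n, p=probs)
    var_sample = rng.normal(var_amounts, cv * var_amounts)
    fix_sample = rng.normal(fixed, cv * fixed, size=n)
    return float(np.mean(var_sample > fix_sample))

# Fixed option of 4 units vs a variable option of 1 or 7 units (same mean of 4).
# Under scalar noise the comparison is not symmetric, so the simulated
# preference can tilt toward or away from the risky option with the parameters.
print(p_choose_variable(fixed=4.0, outcomes=[1.0, 7.0], probs=[0.5, 0.5]))
```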
Playford, E Geoffrey; Lipman, Jeffrey; Jones, Michael; Lau, Anna F; Kabir, Masrura; Chen, Sharon C-A; Marriott, Deborah J; Seppelt, Ian; Gottlieb, Thomas; Cheung, Winston; Iredell, Jonathan R; McBryde, Emma S; Sorrell, Tania C
2016-12-01
Delayed antifungal therapy for invasive candidiasis (IC) contributes to poor outcomes. Predictive risk models may allow targeted antifungal prophylaxis to those at greatest risk. A prospective cohort study of 6685 consecutive nonneutropenic patients admitted to 7 Australian intensive care units (ICUs) for ≥72 hours was performed. Clinical risk factors for IC occurring prior to and following ICU admission, colonization with Candida species on surveillance cultures from 3 sites assessed twice weekly, and the occurrence of IC ≥72 hours following ICU admission or ≤72 hours following ICU discharge were measured. From these parameters, a risk-predictive model for the development of ICU-acquired IC was then derived. Ninety-six patients (1.43%) developed ICU-acquired IC. A simple summation risk-predictive model using the 10 independently significant variables associated with IC demonstrated overall moderate accuracy (area under the receiver operating characteristic curve = 0.82). No single threshold score could categorize patients into clinically useful high- and low-risk groups. However, using 2 threshold scores, 3 patient cohorts could be identified: those at high risk (score ≥6, 4.8% of total cohort, positive predictive value [PPV] 11.7%), those at low risk (score ≤2, 43.1% of total cohort, PPV 0.24%), and those at intermediate risk (score 3-5, 52.1% of total cohort, PPV 1.46%). Dichotomization of ICU patients into high- and low-risk groups for IC risk is problematic. Categorizing patients into high-, intermediate-, and low-risk groups may more efficiently target early antifungal strategies and utilization of newer diagnostic tests. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
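The summation model described above reduces to counting risk factors and applying the two reported thresholds (score of 6 or more high, 2 or less low, 3 to 5 intermediate). A minimal sketch follows; the factor names are placeholders, not the study's ten significant variables.

```python
def candidiasis_risk_group(flags: dict) -> str:
    """Sum one point per risk factor present (a 'simple summation' over the
    significant variables; the factor names here are placeholders) and apply
    the two reported thresholds: >=6 high, <=2 low, 3-5 intermediate."""
    score = sum(1 for present in flags.values() if present)
    if score >= 6:
        return f"high risk (score {score})"
    if score <= 2:
        return f"low risk (score {score})"
    return f"intermediate risk (score {score})"

patient = {"broad_spectrum_antibiotics": True, "central_venous_catheter": True,
           "parenteral_nutrition": True, "candida_colonisation": True,
           "abdominal_surgery": False, "renal_replacement_therapy": False,
           "factor_7": False, "factor_8": False, "factor_9": False,
           "factor_10": False}
print(candidiasis_risk_group(patient))  # -> intermediate risk (score 4)
```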
Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun
2017-02-01
An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at the ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed on SPSS 18.0. The ANN models were developed on Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetable/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neuron in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0.68 (0.78), respectively. The areas under the receiver operating characteristic curve on the testing and training sets are 0.87 and 0.97, respectively. This study suggests that the BPNN model could be used to predict the risk of CHD in individuals. This model should be further improved by large-sample-size research.
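A comparable 15-12-1 feed-forward network can be sketched with an off-the-shelf library. The example below trains on synthetic stand-in data rather than the questionnaire data, so it illustrates only the architecture and the 85:15 split, not the study's results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: 358 subjects, 15 binary exposure variables.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(358, 15)).astype(float)
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 358) > 3).astype(int)

# 85:15 split, then a 15-12-1 feed-forward network with sigmoid units.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(12,), activation="logistic",
                    max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, net.predict_proba(X_te)[:, 1]), 2))
```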
ERIC Educational Resources Information Center
Gobbens, Robbert J. J.; van Assen, Marcel A. L. M.; Luijkx, Katrien G.; Schols, Jos M. G. A.
2012-01-01
Purpose: To assess the predictive validity of frailty and its domains (physical, psychological, and social), as measured by the Tilburg Frailty Indicator (TFI), for the adverse outcomes disability, health care utilization, and quality of life. Design and Methods: The predictive validity of the TFI was tested in a representative sample of 484…
Nilsson, Gunnar; Mooe, Thomas; Stenlund, Hans; Samuelsson, Eva
2014-04-18
Evaluation of angina symptoms in primary care often includes clinical exercise testing. We sought to identify clinical characteristics that predicted the outcome of exercise testing and to describe the occurrence of cardiovascular events during follow-up. This study followed patients referred for exercise testing for suspected coronary disease by general practitioners in the County of Jämtland, Sweden (enrolment, 25 months from February 2010). Patient characteristics were registered by pre-test questionnaire. Exercise tests were performed with a bicycle ergometer, a 12-lead electrocardiogram, and validated scales for scoring angina symptoms. Exercise tests were classified as positive (ST-segment depression >1 mm and chest pain indicative of angina), non-conclusive (ST depression or chest pain), or negative. Odds ratios (ORs) for exercise-test outcome were calculated with a bivariate logistic model adjusted for age, sex, systolic blood pressure, and previous cardiovascular events. Cardiovascular events (unstable angina, myocardial infarctions, decisions on revascularization, cardiovascular death, and recurrent angina in primary care) were recorded within six months. A probability cut-off of 10% was used to detect cardiovascular events in relation to the predicted test outcome. We enrolled 865 patients (mean age 63.5 years, 50.6% men); 6.4% of patients had a positive test, 75.5% were negative, 16.4% were non-conclusive, and 1.7% were not assessable. Positive or non-conclusive test results were predicted by exertional chest pain (OR 2.46, 95% confidence interval (CI) 1.69-3.59), a pathologic ST-T segment on resting electrocardiogram (OR 2.29, 95% CI 1.44-3.63), angina according to the patient (OR 1.70, 95% CI 1.13-2.55), and medication for dyslipidaemia (OR 1.51, 95% CI 1.02-2.23). During follow-up, cardiovascular events occurred in 8% of all patients and 4% were referred for revascularization. Cardiovascular events occurred in 52.7%, 18.3%, and 2% of patients with positive, non-conclusive, or negative tests, respectively. The model predicted 67/69 patients with a cardiovascular event. Clinical characteristics can be used to predict exercise test outcome. Primary care patients with a negative exercise test have a very low risk of cardiovascular events within six months. A predictive model based on clinical characteristics can be used to refine the identification of low-risk patients.
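To show how such a model is used in practice, the sketch below converts the reported odds ratios into a predicted probability and applies the 10% probability cut-off. The baseline (intercept) odds are a placeholder, since they are not given in the abstract, so the output is illustrative only.

```python
import math

# Reported odds ratios for a positive or non-conclusive exercise test.
ORS = {"exertional_chest_pain": 2.46, "pathologic_st_t": 2.29,
       "angina_per_patient": 1.70, "lipid_lowering_drug": 1.51}
BASELINE_LOG_ODDS = math.log(0.15)  # placeholder assumption, not from the study

def p_positive_or_nonconclusive(findings: dict) -> float:
    """Predicted probability from the baseline odds plus the log-ORs of the
    characteristics that are present."""
    logit = BASELINE_LOG_ODDS + sum(math.log(ORS[k]) for k, v in findings.items() if v)
    return 1.0 / (1.0 + math.exp(-logit))

p = p_positive_or_nonconclusive({"exertional_chest_pain": True,
                                 "pathologic_st_t": False,
                                 "angina_per_patient": True,
                                 "lipid_lowering_drug": False})
print(round(p, 2), "refer for further assessment" if p >= 0.10 else "low risk")
```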
Nallani, Gopinath; Venables, Barney; Constantine, Lisa; Huggett, Duane
2016-05-01
Evaluation of the environmental risk of human pharmaceuticals is now a mandatory component in all new drug applications submitted for approval in the EU. With >3000 drugs currently in use, it is not feasible to test each active ingredient, so prioritization is key. A recent review has listed nine prioritization approaches including the fish plasma model (FPM). The present paper focuses on comparison of measured and predicted fish plasma bioconcentration factors (BCFs) of four common over-the-counter/prescribed pharmaceuticals: norethindrone (NET), ibuprofen (IBU), verapamil (VER) and clozapine (CLZ). The measured data were obtained from the earlier published fish BCF studies. The measured BCF estimates of NET, IBU, VER and CLZ were 13.4, 1.4, 0.7 and 31.2, while the corresponding predicted BCFs (based on log Kow at pH 7) were 19, 1.0, 7.6 and 30, respectively. These results indicate that the predicted BCFs matched the measured values well. The BCF estimates were used to calculate the human:fish plasma concentration ratios of each drug to predict potential risk to fish. The plasma ratio results show the following order of risk potential for fish: NET > CLZ > VER > IBU. The FPM has value in prioritizing pharmaceutical products for ecotoxicological assessments.
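The fish plasma model logic can be written out in a few lines: the fish steady-state plasma concentration is approximated as the water concentration times a plasma bioconcentration factor (measured, or predicted from log Kow as in the study), and its ratio to the human therapeutic plasma level is used for prioritisation, with smaller ratios flagging a higher priority. The numbers below are illustrative, not the study's measurements.

```python
def fish_plasma_effect_ratio(c_water_ug_per_l: float,
                             plasma_bcf: float,
                             human_therapeutic_plasma_ug_per_l: float) -> float:
    """Fish plasma model sketch: predicted fish steady-state plasma
    concentration = water concentration x plasma BCF; the effect ratio
    compares it with the human therapeutic plasma level."""
    fish_plasma = c_water_ug_per_l * plasma_bcf
    return human_therapeutic_plasma_ug_per_l / fish_plasma

# Illustrative numbers only.
print(fish_plasma_effect_ratio(c_water_ug_per_l=0.1, plasma_bcf=30,
                               human_therapeutic_plasma_ug_per_l=300.0))
```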
PERSONAL VULNERABILITIES AND ASSORTATIVE MATE SELECTION AMONG NEWLYWED SPOUSES.
Trombello, Joseph M; Schoebi, Dominik; Bradbury, Thomas N
2015-06-01
Assortative-mating theories propose that individuals select romantic relationship partners who are similar to them on positive and negative qualities. Furthermore, stress-generation and intergenerational transmission of divorce models argue that one's depression history or family-of-origin relationship problems predict qualities of a marital partner that predispose them to relationship distress. We analyzed data from 172 newlywed couples to examine predictors and mediators of a marital partner's risk index. First, an index of one's own and one's partner risk was created through factor analysis and was comprised of measures that indicate insecurity about oneself. This index was significantly correlated with baseline marital satisfaction and, among men, steps toward divorce at follow-up. Then, structural equation modeling tested direct and indirect pathways predicting partner's risk index, analyzing prior depression history and family-of-origin relational impairment as predictors and one's own risk index as the mediator. Results demonstrated that own risk index reliably predicted partner's risk, while own risk index also mediated the relationship between own family-of-origin relational dysfunction/depression history and partner's risk index. These results support assortative mating theories and suggest that the association between adverse family-of-origin relationships or depression history and the risk profile in one's marital partner is explained by one's own risk profile.
Berrisford, Richard; Brunelli, Alessandro; Rocco, Gaetano; Treasure, Tom; Utley, Martin
2005-08-01
To identify pre-operative factors associated with in-hospital mortality following lung resection and to construct a risk model that could be used prospectively to inform decisions and retrospectively to enable fair comparisons of outcomes. Data were submitted to the European Thoracic Surgery Database from 27 units in 14 countries. We analysed data concerning all patients that had a lung resection. Logistic regression was used with a random sample of 60% of cases to identify pre-operative factors associated with in-hospital mortality and to build a model of risk. The resulting model was tested on the remaining 40% of patients. A second model based on age and ppoFEV1% was developed for risk of in-hospital death amongst tumour resection patients. Of the 3426 adult patients that had a first lung resection for whom mortality data were available, 66 died within the same hospital admission. Within the data used for model development, dyspnoea (according to the Medical Research Council classification), ASA (American Society of Anaesthesiologists) score, class of procedure and age were found to be significantly associated with in-hospital death in a multivariate analysis. The logistic model developed on these data displayed predictive value when tested on the remaining data. Two models of the risk of in-hospital death amongst adult patients undergoing lung resection have been developed. The models show predictive value and can be used to discern between high-risk and low-risk patients. Amongst the test data, the model developed for all diagnoses performed well at low risk, underestimated mortality at medium risk and overestimated mortality at high risk. The second model for resection of lung neoplasms was developed after establishing the performance of the first model and so could not be tested robustly. That said, we were encouraged by its performance over the entire range of estimated risk. The first of these two models could be regarded as an evaluation based on clinically available criteria while the second uses data obtained from objective measurement. We are optimistic that further model development and testing will provide a tool suitable for case mix adjustment.
Rastrelli, Giulia; Corona, Giovanni; Fisher, Alessandra D; Silverii, Antonio; Mannucci, Edoardo; Maggi, Mario
2012-12-01
The classification of subjects as low or high cardiovascular (CV) risk is usually performed by risk engines, based upon multivariate prediction algorithms. However, their accuracy in predicting major adverse CV events (MACEs) is lower in high-risk populations as they take into account only conventional risk factors. To evaluate the accuracy of the Progetto Cuore risk engine in predicting MACE in subjects with erectile dysfunction (ED) and to test the role of unconventional CV risk factors, specifically identified for ED. A consecutive series of 1,233 men (mean age 53.33 ± 9.08 years) attending our outpatient clinic for sexual dysfunction was longitudinally studied for a mean period of 4.4 ± 2.6 years. Several clinical, biochemical, and instrumental parameters were evaluated. Subjects were classified as high or low risk, according to previously reported ED-specific risk factors. In the overall population, Progetto Cuore-predicted population survival was not significantly different from the observed one (P = 0.545). Accordingly, receiver operating characteristic (ROC) analysis shows that Progetto Cuore has an accuracy of 0.697 ± 0.037 (P < 0.001) in predicting MACE. Considering subjects at high risk according to ED-specific risk factors, the observed incidence of MACE was significantly higher than expected both for less educated patients and for patients reporting partner's hypoactive sexual desire (HSD; both P < 0.05), but not for other described factors. The areas under the ROC curves of Progetto Cuore for MACE in subjects with low education and reported partner's HSD were 0.659 ± 0.053 (P = 0.008) and 0.550 ± 0.076 (P = 0.570), respectively. Overall, Progetto Cuore is an appropriate instrument for evaluating CV risk in ED subjects. However, in ED, other factors such as low education and partner's HSD contribute to the risk profile. At variance with low education, Progetto Cuore is not accurate enough to predict MACE in subjects with partner's HSD, suggesting that the latter effect is not mediated by conventional risk factors included in the algorithm. © 2012 International Society for Sexual Medicine.
Amirabadizadeh, Alireza; Nezami, Hossein; Vaughn, Michael G; Nakhaee, Samaneh; Mehrpour, Omid
2018-05-12
Substance abuse exacts considerable social and health care burdens throughout the world. The aim of this study was to create a prediction model to better identify risk factors for drug use. A prospective cross-sectional study was conducted in South Khorasan Province, Iran. Of the total of 678 eligible subjects, 70% (n: 474) were randomly selected to provide a training set for constructing decision tree and multiple logistic regression (MLR) models. The remaining 30% (n: 204) were employed in a holdout sample to test the performance of the decision tree and MLR models. Predictive performance of different models was analyzed by the receiver operating characteristic (ROC) curve using the testing set. Independent variables were selected from demographic characteristics and history of drug use. For the decision tree model, the sensitivity and specificity for identifying people at risk for drug abuse were 66% and 75%, respectively, while the MLR model was somewhat less effective at 60% and 73%. Key independent variables in the analyses included first substance experience, age at first drug use, age, place of residence, history of cigarette use, and occupational and marital status. While study findings are exploratory and lack generalizability, they do suggest that the decision tree model holds promise as an effective classification approach for identifying risk factors for drug use. Consistent with prior research in Western contexts, age of drug use initiation was a critical factor in predicting a substance use disorder.
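A hedged sketch of the 70/30 decision tree versus logistic regression comparison, run on synthetic stand-in data (the real predictors were demographic and drug-use history variables), follows; it illustrates the workflow only, not the study's results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: 678 subjects, 6 generic predictors.
rng = np.random.default_rng(42)
X = rng.normal(size=(678, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 678) > 0.5).astype(int)

# 70% training, 30% holdout, mirroring the split described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=42)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

for name, model in [("decision tree", tree), ("logistic regression", logit)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: hold-out AUC = {auc:.2f}")
```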
Assessment of risk for transplant-transmissible infectious encephalitis among deceased organ donors.
Smalley, Hannah K; Anand, Nishi; Buczek, Dylan; Buczek, Nicholas; Lin, Timothy; Rajore, Tanay; Wacker, Muriel; Basavaraju, Sridhar V; Gurbaxani, Brian M; Hammett, Teresa; Keskinocak, Pinar; Sokol, Joel; Kuehnert, Matthew J
2018-05-29
There were 13 documented clusters of infectious encephalitis transmission via organ transplant from deceased donors to recipients during 2002-2013. Hence, organs from donors diagnosed with encephalitis are often declined due to concerns about the possibility of infection, given that there is no quick and simple test to detect causes of infectious encephalitis. We constructed a database containing cases of infectious and non-infectious encephalitis. Using statistical imputation, cross-validation, and regression techniques, we determined deceased organ donor characteristics, including demographics, signs, symptoms, physical exam, and laboratory findings, predictive of infectious versus non-infectious encephalitis, and developed a calculator which assesses risk of infection. Using up to 12 predictive patient characteristics (with a minimum of 3, depending on what information is available), the calculator provides the probability that a donor may have infectious versus non-infectious encephalitis, improving the prediction accuracy over current practices. These characteristics include gender, fever, immunocompromised state (other than HIV), cerebrospinal fluid elevation, altered mental status, psychiatric features, cranial nerve abnormality, meningeal signs, focal motor weakness, Babinski's sign, movement disorder, and sensory abnormalities. In the absence of definitive diagnostic testing in a potential organ donor, infectious encephalitis can be predicted with a risk score. The risk calculator presented in this paper represents a prototype, establishing a framework that can be expanded to other infectious diseases transmissible through solid organ transplantation. This article is protected by copyright. All rights reserved.
Lemos, Raquel; Marôco, João; Simões, Mário R; Santiago, Beatriz; Tomás, José; Santana, Isabel
2017-03-01
Amnestic mild cognitive impairment (aMCI) patients carry a greater risk of conversion to Alzheimer's disease (AD). Therefore, the International Working Group (IWG) on AD aims to consider some cases of aMCI as symptomatic prodromal AD. The core diagnostic marker of AD is a significant and progressive memory deficit, and the Free and Cued Selective Reminding Test (FCSRT) was recommended by the IWG to test memory in cases of possible prodromal AD. This study aims to investigate whether the performance on the FCSRT would enhance the ability to predict conversion to AD in an aMCI group. A longitudinal study was conducted on 88 aMCI patients, and neuropsychological tests were analysed on the relative risk of conversion to AD. During follow-up (23.82 months), 33% of the aMCI population converted to AD. An impaired FCSRT TR was significantly associated with the risk of conversion to dementia, with a mean time to conversion of 25 months. The FCSRT demonstrates utility for detecting AD at its prodromal stage, thus supporting its use as a valid clinical marker. © 2015 The British Psychological Society.
p53 predictive value for pT1-2 N0 disease at radical cystectomy.
Shariat, Shahrokh F; Lotan, Yair; Karakiewicz, Pierre I; Ashfaq, Raheela; Isbarn, Hendrik; Fradet, Yves; Bastian, Patrick J; Nielsen, Matthew E; Capitanio, Umberto; Jeldres, Claudio; Montorsi, Francesco; Müller, Stefan C; Karam, Jose A; Heukamp, Lukas C; Netto, George; Lerner, Seth P; Sagalowsky, Arthur I; Cote, Richard J
2009-09-01
Approximately 15% to 30% of patients with pT1-2N0M0 urothelial carcinoma of the bladder experience disease progression despite radical cystectomy with curative intent. We determined whether p53 expression would improve the prediction of disease progression after radical cystectomy for pT1-2N0M0 UCB. In a multi-institutional retrospective cohort we identified 324 patients with pT1-2N0M0 urothelial carcinoma of the bladder who underwent radical cystectomy. Analysis focused on a testing cohort of 272 patients and an external validation of 52. Competing risks regression models were used to test the association of variables with cancer specific mortality after accounting for nonbladder cancer caused mortality. In the testing cohort 91 patients (33.5%) had altered p53 expression (p53alt). On multivariate competing risks regression analysis altered p53 achieved independent status for predicting disease recurrence and cancer specific mortality (each p <0.001). Adding p53 increased the accuracy of multivariate competing risks regression models predicting recurrence and cancer specific mortality by 5.7% (62.0% vs 67.7%) and 5.4% (61.6% vs 67.0%), respectively. Alterations in p53 represent a highly promising marker of disease recurrence and cancer specific mortality after radical cystectomy for urothelial carcinoma of the bladder. Analysis confirmed previous findings and showed that considering p53 can result in substantial accuracy gains relative to the use of standard predictors. The value and the level of the current evidence clearly exceed previous proof of the independent predictor status of p53 for predicting recurrence and cancer specific mortality.
Settivari, Raja S; Ball, Nicholas; Murphy, Lynea; Rasoulpour, Reza; Boverhof, Darrell R; Carney, Edward W
2015-03-01
Interest in applying 21st-century toxicity testing tools for safety assessment of industrial chemicals is growing. Whereas conventional toxicology uses mainly animal-based, descriptive methods, a paradigm shift is emerging in which computational approaches, systems biology, high-throughput in vitro toxicity assays, and high-throughput exposure assessments are beginning to be applied to mechanism-based risk assessments in a time- and resource-efficient fashion. Here we describe recent advances in predictive safety assessment, with a focus on their strategic application to meet the changing demands of the chemical industry and its stakeholders. The opportunities to apply these new approaches is extensive and include screening of new chemicals, informing the design of safer and more sustainable chemical alternatives, filling information gaps on data-poor chemicals already in commerce, strengthening read-across methodology for categories of chemicals sharing similar modes of action, and optimizing the design of reduced-risk product formulations. Finally, we discuss how these predictive approaches dovetail with in vivo integrated testing strategies within repeated-dose regulatory toxicity studies, which are in line with 3Rs principles to refine, reduce, and replace animal testing. Strategic application of these tools is the foundation for informed and efficient safety assessment testing strategies that can be applied at all stages of the product-development process.
Thirthagiri, E; Lee, S Y; Kang, P; Lee, D S; Toh, G T; Selamat, S; Yoon, S-Y; Taib, N A Mohd; Thong, M K; Yip, C H; Teo, S H
2008-01-01
The cost of genetic testing and the limited knowledge about the BRCA1 and BRCA2 genes in different ethnic groups have limited its availability in medium- and low-resource countries, including Malaysia. In addition, the applicability of many risk-assessment tools, such as the Manchester Scoring System and BOADICEA (Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm) which were developed based on mutation rates observed primarily in Caucasian populations using data from multiplex families, and in populations where the rate of breast cancer is higher, has not been widely tested in Asia or in Asians living elsewhere. Here, we report the results of genetic testing for mutations in the BRCA1 or BRCA2 genes in a series of families with breast cancer in the multi-ethnic population (Malay, Chinese and Indian) of Malaysia. A total of 187 breast cancer patients with either early-onset breast cancer (at age ≤40 years) or a personal and/or family history of breast or ovarian cancer were comprehensively tested by full sequencing of both BRCA1 and BRCA2. Two algorithms to predict the presence of mutations, the Manchester Scoring System and BOADICEA, were evaluated. Twenty-seven deleterious mutations were detected (14 in BRCA1 and 13 in BRCA2), only one of which was found in two unrelated individuals (BRCA2 490 delCT). In addition, 47 variants of uncertain clinical significance were identified (16 in BRCA1 and 31 in BRCA2). Notably, many mutations are novel (13 of the 30 BRCA1 mutations and 24 of the 44 BRCA2). We report that while there was an equal proportion of BRCA1 and BRCA2 mutations in the Chinese population in our study, there were significantly more BRCA2 mutations among the Malays. In addition, we show that the predictive power of the BOADICEA risk-prediction model and the Manchester Scoring System was significantly better for BRCA1 than BRCA2, but that the overall sensitivity, specificity and positive-predictive value was lower in this population than has been previously reported in Caucasian populations. Our study underscores the need for larger collaborative studies among non-Caucasian populations to validate the role of genetic testing and the use of risk-prediction models in ensuring that the other populations in the world may also benefit from the genomics and genetics era.
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
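For orientation, a stripped-down kernel machine regression on a log-time (AFT-style) outcome can be written as below. This toy version ignores censoring and the paper's resampling-based testing and omnibus kernel combination; it is offered only as a sketch of the kernel machine ingredient, not as the authors' method.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy data: 200 subjects, 10 pathway markers, log event times with a
# nonlinear signal. Censoring is ignored here for simplicity.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
log_t = 0.5 * np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.2, 200)

# Kernel machine regression of log failure time with an RBF (nonlinear) kernel.
km = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X, log_t)
print("predicted log failure times:", np.round(km.predict(X[:3]), 2))
```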
Revolution In Toxicity Testing And Risk Prediction For Chemicals In The Environment (ASA)
Addressing safety aspects of drugs and environmental chemicals relies extensively on animal testing; however, the quantity of chemicals needing assessment and challenges of species extrapolation require alternative approaches to traditional animal studies. Newer in vitro and in s...
ESTIMATING ACUTE AND CHRONIC TOXICITY OF CHEMICALS FOR ENDANGERED FISHES
Predictive toxicological models, including estimates of uncertainty, are necessary to perform probability-based ecological risk assessments. This is particularly true for the protection of endangered species that are not prudent to test, other species that have not been tested o...
A vision for modernizing environmental risk assessment
In 2007, the US National Research Council (NRC) published a Vision and Strategy for [human health] Toxicity Testing in the 21st century. Central to the vision was increased reliance on high throughput in vitro testing and predictive approaches based on mechanistic understanding o...
An Evaluation of Fish ELS Data: Is it Predictive?
As with higher vertebrate animal alternatives, balance between reducing the use of animals in testing without impairing or increasing uncertainty in risk assessment is needed. Testing demands for long-term (chronic) fish toxicity represents the third largest pool of needs follo...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtz, Sarah; Repins, Ingrid L; Hacke, Peter L
Continued growth of PV system deployment would be enhanced by quantitative, low-uncertainty predictions of the degradation and failure rates of PV modules and systems. The intended product lifetime (decades) far exceeds the product development cycle (months), limiting our ability to reduce the uncertainty of the predictions for this rapidly changing technology. Yet, business decisions (setting insurance rates, analyzing return on investment, etc.) require quantitative risk assessment. Moving toward more quantitative assessments requires consideration of many factors, including the intended application, consequence of a possible failure, variability in the manufacturing, installation, and operation, as well as uncertainty in the measured acceleration factors, which provide the basis for predictions based on accelerated tests. As the industry matures, it is useful to periodically assess the overall strategy for standards development and prioritization of research to provide a technical basis both for the standards and the analysis related to the application of those. To this end, this paper suggests a tiered approach to creating risk assessments. Recent and planned potential improvements in international standards are also summarized.
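As one example of the measured acceleration factors mentioned above, an Arrhenius-type temperature acceleration factor is often used to relate accelerated chamber exposure to field exposure. The sketch below is a generic illustration with an assumed activation energy, not a model prescribed by any particular standard.

```python
import math

def arrhenius_acceleration_factor(ea_ev: float, t_test_c: float, t_use_c: float) -> float:
    """Arrhenius acceleration factor: how many hours of field exposure at
    t_use_c correspond to one hour of chamber exposure at t_test_c, for an
    assumed activation energy ea_ev (eV)."""
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    t_test = t_test_c + 273.15
    t_use = t_use_c + 273.15
    return math.exp((ea_ev / k_b) * (1.0 / t_use - 1.0 / t_test))

# e.g. an 85 C chamber vs a 25 C field site, with an assumed Ea of 0.7 eV
print(round(arrhenius_acceleration_factor(0.7, 85.0, 25.0), 1))
```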
Automated Cervical Screening and Triage, Based on HPV Testing and Computer-Interpreted Cytology.
Yu, Kai; Hyun, Noorie; Fetterman, Barbara; Lorey, Thomas; Raine-Bennett, Tina R; Zhang, Han; Stamps, Robin E; Poitras, Nancy E; Wheeler, William; Befano, Brian; Gage, Julia C; Castle, Philip E; Wentzensen, Nicolas; Schiffman, Mark
2018-04-11
State-of-the-art cervical cancer prevention includes human papillomavirus (HPV) vaccination among adolescents and screening/treatment of cervical precancer (CIN3/AIS and, less strictly, CIN2) among adults. HPV testing provides sensitive detection of precancer but, to reduce overtreatment, secondary "triage" is needed to predict women at highest risk. Those with the highest-risk HPV types or abnormal cytology are commonly referred to colposcopy; however, expert cytology services are critically lacking in many regions. To permit completely automatable cervical screening/triage, we designed and validated a novel triage method, a cytologic risk score algorithm based on computer-scanned liquid-based slide features (FocalPoint, BD, Burlington, NC). We compared it with abnormal cytology in predicting precancer among 1839 women testing HPV positive (HC2, Qiagen, Germantown, MD) in 2010 at Kaiser Permanente Northern California (KPNC). Precancer outcomes were ascertained by record linkage. As additional validation, we compared the algorithm prospectively with cytology results among 243 807 women screened at KPNC (2016-2017). All statistical tests were two-sided. Among HPV-positive women, the algorithm matched the triage performance of abnormal cytology. Combined with HPV16/18/45 typing (Onclarity, BD, Sparks, MD), the automatable strategy referred 91.7% of HPV-positive CIN3/AIS cases to immediate colposcopy while deferring 38.4% of all HPV-positive women to one-year retesting (compared with 89.1% and 37.4%, respectively, for typing and cytology triage). In the 2016-2017 validation, the predicted risk scores strongly correlated with cytology (P < .001). High-quality cervical screening and triage performance is achievable using this completely automated approach. Automated technology could permit extension of high-quality cervical screening/triage coverage to currently underserved regions.
Schoell, Samantha L; Weaver, Ashley A; Urban, Jillian E; Jones, Derek A; Stitzel, Joel D; Hwang, Eunjoo; Reed, Matthew P; Rupp, Jonathan D; Hu, Jingwen
2015-11-01
The aging population is a growing concern as the increased fragility and frailty of the elderly results in an elevated incidence of injury as well as an increased risk of mortality and morbidity. To assess elderly injury risk, age-specific computational models can be developed to directly calculate biomechanical metrics for injury. The first objective was to develop an older occupant Global Human Body Models Consortium (GHBMC) average male model (M50) representative of a 65 year old (YO) and to perform regional validation tests to investigate predicted fractures and injury severity with age. Development of the GHBMC M50 65 YO model involved implementing geometric, cortical thickness, and material property changes with age. Regional validation tests included a chest impact, a lateral impact, a shoulder impact, a thoracoabdominal impact, an abdominal bar impact, a pelvic impact, and a lateral sled test. The second objective was to investigate age-related injury risks by performing a frontal US NCAP simulation test with the GHBMC M50 65 YO and the GHBMC M50 v4.2 models. Simulation results were compared to the GHBMC M50 v4.2 to evaluate the effect of age on occupant response and risk for head injury, neck injury, thoracic injury, and lower extremity injury. Overall, the GHBMC M50 65 YO model predicted higher probabilities of AIS 3+ injury for the head and thorax.
Geotechnical risk analysis by flat dilatometer (DMT)
NASA Astrophysics Data System (ADS)
Amoroso, Sara; Monaco, Paola
2015-04-01
In the last decades we have witnessed a massive migration from laboratory testing to in situ testing, to the point that, today, in situ testing is often the major part of a geotechnical investigation. The State of the Art indicates that direct-push in situ tests, such as the Cone Penetration Test (CPT) and the Flat Dilatometer Test (DMT), are fast and convenient in situ tests for routine site investigation. In most cases the DMT-estimated parameters, in particular the undrained shear strength su and the constrained modulus M, are used with the common design methods of Geotechnical Engineering for evaluating bearing capacity, settlements, etc. The paper focuses on the prediction of settlements of shallow foundations, which is probably the No. 1 application of the DMT, especially in sands, where undisturbed samples cannot be retrieved, and on the risk associated with their design. A compilation of documented case histories comparing DMT-predicted vs observed settlements was collected by Monaco et al. (2006), indicating that, in general, the constrained modulus M can be considered a reasonable "operative modulus" (relevant to foundations in "working conditions") for settlement predictions based on the traditional linear elastic approach. Indeed, the use of a site investigation method, such as DMT, that improves the accuracy of design parameters reduces risk, and the design can then center on the site's true soil variability without parasitic test variability. In this respect, Failmezger et al. (1999, 2015) suggested introducing the Beta probability distribution, which provides a realistic and useful description of variability for geotechnical design problems. The paper estimates the Beta probability distribution at research sites where DMT tests and observed settlements are available. References: Failmezger, R.A., Rom, D., Ziegler, S.R. (1999). "SPT? A better approach of characterizing residual soils using other in-situ tests", Behavioral Characteristics of Residual Soils, B. Edelen, Ed., ASCE, Reston, VA, pp. 158-175. Failmezger, R.A., Till, P., Frizzell, J., Kight, S. (2015). "Redesign of shallow foundations using dilatometer tests-more case studies after DMT'06 conference", Proc. 2nd International Conference on the Flat Dilatometer, June 14-16 (paper accepted). Monaco, P., Totani, G., Calabrese, M. (2006). "DMT-predicted vs observed settlements: a review of the available experience". In "Flat Dilatometer Testing", Proc. 2nd International Conference on the Flat Dilatometer, Washington, D.C., USA, April 2-5, 244-252. R.A. Failmezger and J.B. Anderson (eds).
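As a rough sketch of the Failmezger-style use of the Beta distribution for settlement variability, the fragment below fits a Beta distribution to hypothetical observed/DMT-predicted settlement ratios and evaluates the probability of exceeding a design limit. The ratio values, the assumed 0-2 bounds, and the 1.3 limit are illustrative assumptions, not data from the paper.

    import numpy as np
    from scipy import stats

    ratios = np.array([0.7, 0.9, 1.0, 1.1, 0.8, 1.2, 0.95, 1.05])  # observed / DMT-predicted settlement
    lo, hi = 0.0, 2.0                                              # assumed physical bounds for the ratio
    a, b, loc, scale = stats.beta.fit(ratios, floc=lo, fscale=hi - lo)

    design_limit = 1.3                                             # hypothetical tolerable ratio
    p_exceed = stats.beta.sf(design_limit, a, b, loc=loc, scale=scale)
    print(f"alpha={a:.2f}, beta={b:.2f}, P(ratio > {design_limit}) = {p_exceed:.3f}")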
Gaura, Elena; Kemp, John; Brusey, James
2013-12-01
The paper demonstrates that wearable sensor systems, coupled with real-time on-body processing and actuation, can enhance safety for wearers of heavy protective equipment who are subjected to harsh thermal environments by reducing the risk of Uncompensable Heat Stress (UHS). The work focuses on Explosive Ordnance Disposal operatives and shows that predictions of UHS risk can be performed in real-time with sufficient accuracy for real-world use. Furthermore, it is shown that the required sensory input for such algorithms can be obtained with wearable, non-intrusive sensors. Two algorithms, one based on Bayesian nets and another on decision trees, are presented for determining the heat stress risk, considering the mean skin temperature prediction as a proxy. The algorithms are trained on empirical data and have accuracies of 92.1±2.9% and 94.4±2.1%, respectively, when tested using leave-one-subject-out cross-validation. In applications such as Explosive Ordnance Disposal operative monitoring, such prediction algorithms can enable autonomous actuation of cooling systems and haptic alerts to minimize casualties.
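A minimal sketch of one of the two approaches named above, assuming synthetic stand-in data: a decision-tree risk classifier scored with leave-one-subject-out cross-validation. The features, labels, and tree depth are illustrative; the paper's wearable-sensor data and trained models are not reproduced here.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)
    n_subjects, samples_per_subject = 10, 50
    X = rng.normal(size=(n_subjects * samples_per_subject, 3))   # stand-ins for e.g. skin temp, heart rate, ambient temp
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=len(X)) > 0).astype(int)  # synthetic UHS risk label
    groups = np.repeat(np.arange(n_subjects), samples_per_subject)

    clf = DecisionTreeClassifier(max_depth=4, random_state=0)
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    print(f"leave-one-subject-out accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")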
Disparities in genetic testing: thinking outside the BRCA box.
Hall, Michael J; Olopade, Olufunmilayo I
2006-05-10
The impact of predictive genetic testing on cancer care can be measured by the increased demand for and utilization of genetic services as well as in the progress made in reducing cancer risks in known mutation carriers. Nonetheless, differential access to and utilization of genetic counseling and cancer predisposition testing among underserved racial and ethnic minorities compared with the white population has led to growing health care disparities in clinical cancer genetics that are only beginning to be addressed. Furthermore, deficiencies in the utility of genetic testing in underserved populations as a result of limited testing experience and in the effectiveness of risk-reducing interventions compound access and knowledge-base disparities. The recent literature on racial/ethnic health care disparities is briefly reviewed, and is followed by a discussion of the current limitations of risk assessment and genetic testing outside of white populations. The importance of expanded testing in underserved populations is emphasized.
The lack of resources available for comprehensive toxicity testing, international interest in limiting the quantity of animals used in testing, and a mounting list of anthropogenic chemicals produced world-wide have led to the exploration of innovative means for identifying chemi...
Anger as a moderator of safer sex motivation among low-income urban women.
Schroder, Kerstin E E; Carey, Michael P
2005-10-01
Theoretical models suggest that both HIV knowledge and HIV risk perception inform rational decision making and, thus, predict safer sex motivation and behavior. However, the amount of variance explained by knowledge and risk perception is typically small. In this cross-sectional study, we investigated whether the predictive power of HIV knowledge and HIV risk perception on safer sex motivation is affected by trait anger. We hypothesized that anger may disrupt rational decision making, distorting the effects of both HIV knowledge and risk perception on safer sex intentions. Data from 232 low-income, urban women at risk for HIV infection were used to test a path model with past sexual risk behavior, HIV knowledge, and HIV risk perception as predictors of safer sex intentions. Moderator effects of anger on safer sex intentions were tested by simultaneous group comparisons between high-anger and low-anger women (median split). The theoretically expected "rational pattern" was found among low-anger women only, including (a) a positive effect of knowledge on safer sex intentions, and (b) buffer (inhibitor) effects of HIV knowledge and HIV risk perception on the negative path leading from past risk behavior to safer sex intentions. Among high-anger women, an "irrational pattern" emerged, with no effects of HIV knowledge and negative effects of both past risk behavior and HIV risk perception on safer sex intentions. In sum, the results suggest that rational knowledge- and risk-based decisions regarding safer sex may be limited to low-anger women.
ERIC Educational Resources Information Center
Wong, Simpson W. L.; McBride-Chang, Catherine; Lam, Catherine; Chan, Becky; Lam, Fanny W. F.; Doo, Sylvia
2012-01-01
This study sought to examine factors that are predictive of future developmental dyslexia among a group of 5-year-old Chinese children at risk for dyslexia, including 62 children with a sibling who had been previously diagnosed with dyslexia and 52 children who manifested clinical at-risk factors in aspects of language according to testing by…
PREDICT-PD: An online approach to prospectively identify risk indicators of Parkinson's disease.
Noyce, Alastair J; R'Bibo, Lea; Peress, Luisa; Bestwick, Jonathan P; Adams-Carr, Kerala L; Mencacci, Niccolo E; Hawkes, Christopher H; Masters, Joseph M; Wood, Nicholas; Hardy, John; Giovannoni, Gavin; Lees, Andrew J; Schrag, Anette
2017-02-01
A number of early features can precede the diagnosis of Parkinson's disease (PD). To test an online, evidence-based algorithm to identify risk indicators of PD in the UK population. Participants aged 60 to 80 years without PD completed an online survey and keyboard-tapping task annually over 3 years, and underwent smell tests and genotyping for glucocerebrosidase (GBA) and leucine-rich repeat kinase 2 (LRRK2) mutations. Risk scores were calculated based on the results of a systematic review of risk factors and early features of PD, and individuals were grouped into higher (above 15th centile), medium, and lower risk groups (below 85th centile). Previously defined indicators of increased risk of PD ("intermediate markers"), including smell loss, rapid eye movement-sleep behavior disorder, and finger-tapping speed, and incident PD were used as outcomes. The correlation of risk scores with intermediate markers and movement of individuals between risk groups was assessed each year and prospectively. Exploratory Cox regression analyses with incident PD as the dependent variable were performed. A total of 1323 participants were recruited at baseline and >79% completed assessments each year. Annual risk scores were correlated with intermediate markers of PD each year and baseline scores were correlated with intermediate markers during follow-up (all P values < 0.001). Incident PD diagnoses during follow-up were significantly associated with baseline risk score (hazard ratio = 4.39, P = .045). GBA variants or G2019S LRRK2 mutations were found in 47 participants, and the predictive power for incident PD was improved by the addition of genetic variants to risk scores. The online PREDICT-PD algorithm is a unique and simple method to identify indicators of PD risk. © 2017 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. © 2016 International Parkinson and Movement Disorder Society.
Harrison, Dominique P; Stritzke, Werner G K; Fay, Nicolas; Ellison, T Mark; Hudaib, Abdul-Rahman
2014-09-01
Assessment of implicit self-associations with death relative to life, measured by a death/suicide implicit association test (d/s-IAT), has shown promise in the prediction of suicide risk. The current study examined whether the d/s-IAT reflects an individual's desire to die or a diminished desire to live and whether the predictive utility of implicit cognition is mediated by life-oriented beliefs. Four hundred eight undergraduate students (285 female; mean age = 20.36 years, SD = 4.72) participated. Participants completed the d/s-IAT and self-report measures assessing 6 indicators of suicide risk (suicide ideation frequency and intensity, depression, nonsuicidal self-harm thoughts frequency and intensity, and nonsuicidal self-harm attempts), as well as survival and coping beliefs and history of prior suicide attempts. The d/s-IAT significantly predicted 5 out of the 6 indicators of suicide risk above and beyond the strongest traditional indicator of risk, history of prior suicide attempts. However, the effect of the d/s-IAT on each of the risk indicators was mediated by individuals' survival and coping beliefs. Moreover, the distribution of d/s-IAT scores primarily reflected variability in self-associations with life. Implicit suicide-related cognition appears to reflect a gradual diminishing of the desire to live, rather than a desire to die. Contemporary theories of suicide and risk assessment protocols need to account for the dynamic relationship between both risk and life-oriented resilience factors, and intervention strategies aimed at enhancing engagement with life should be a routine part of suicide risk management. PsycINFO Database Record (c) 2014 APA, all rights reserved.
León-Justel, Antonio; Madrazo-Atutxa, Ainara; Alvarez-Rios, Ana I; Infantes-Fontán, Rocio; Garcia-Arnés, Juan A; Lillo-Muñoz, Juan A; Aulinas, Anna; Urgell-Rull, Eulàlia; Boronat, Mauro; Sánchez-de-Abajo, Ana; Fajardo-Montañana, Carmen; Ortuño-Alonso, Mario; Salinas-Vert, Isabel; Granada, Maria L; Cano, David A; Leal-Cerro, Alfonso
2016-10-01
Cushing's syndrome (CS) is challenging to diagnose. Increased prevalence of CS in specific patient populations has been reported, but routine screening for CS remains questionable. To decrease the diagnostic delay and improve disease outcomes, simple new screening methods for CS in at-risk populations are needed. The aim was to develop and validate a simple scoring system to predict CS based on clinical signs and an easy-to-use biochemical test. This observational, prospective, multicenter study was conducted in a referral hospital setting on a cohort of 353 patients attending endocrinology units for outpatient visits. All patients were evaluated with late-night salivary cortisol (LNSC) and a low-dose dexamethasone suppression test for CS; the main outcome was diagnosis or exclusion of CS. Twenty-six cases of CS were diagnosed in the cohort. A risk scoring system was developed by logistic regression analysis, and cutoff values were derived from a receiver operating characteristic curve. This risk score included clinical signs and symptoms (muscular atrophy, osteoporosis, and dorsocervical fat pad) and LNSC levels. The estimated area under the receiver operating characteristic curve was 0.93, with a sensitivity of 96.2% and specificity of 82.9%. We developed a risk score to predict CS in an at-risk population. This score may help to identify at-risk patients in non-endocrinological settings such as primary care, but external validation is warranted.
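A hedged sketch of this general approach (logistic regression over clinical signs plus LNSC, with a cutoff chosen from the ROC curve) on synthetic stand-in data; the coefficients, feature distributions, and Youden-index cutoff rule are assumptions for illustration, not the published score.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(1)
    n = 353
    X = np.column_stack([
        rng.integers(0, 2, n),          # muscular atrophy (0/1), synthetic
        rng.integers(0, 2, n),          # osteoporosis (0/1), synthetic
        rng.integers(0, 2, n),          # dorsocervical fat pad (0/1), synthetic
        rng.lognormal(1.0, 0.5, n),     # late-night salivary cortisol, synthetic
    ])
    logit = -4 + 1.2 * X[:, 0] + 1.0 * X[:, 1] + 1.5 * X[:, 2] + 0.6 * X[:, 3]   # assumed effects
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))                                # CS diagnosis, synthetic

    model = LogisticRegression(max_iter=1000).fit(X, y)
    scores = model.predict_proba(X)[:, 1]
    fpr, tpr, thresholds = roc_curve(y, scores)
    cutoff = thresholds[np.argmax(tpr - fpr)]       # Youden-index cutoff
    print(f"AUC={roc_auc_score(y, scores):.2f}, suggested cutoff={cutoff:.3f}")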
The contributions of breast density and common genetic variation to breast cancer risk.
Vachon, Celine M; Pankratz, V Shane; Scott, Christopher G; Haeberle, Lothar; Ziv, Elad; Jensen, Matthew R; Brandt, Kathleen R; Whaley, Dana H; Olson, Janet E; Heusinger, Katharina; Hack, Carolin C; Jud, Sebastian M; Beckmann, Matthias W; Schulz-Wendtland, Ruediger; Tice, Jeffrey A; Norman, Aaron D; Cunningham, Julie M; Purrington, Kristen S; Easton, Douglas F; Sellers, Thomas A; Kerlikowske, Karla; Fasching, Peter A; Couch, Fergus J
2015-05-01
We evaluated whether a 76-locus polygenic risk score (PRS) and Breast Imaging Reporting and Data System (BI-RADS) breast density were independent risk factors within three studies (1643 case patients, 2397 control patients) using logistic regression models. We incorporated the PRS odds ratio (OR) into the Breast Cancer Surveillance Consortium (BCSC) risk-prediction model while accounting for its attributable risk and compared five-year absolute risk predictions between models using area under the curve (AUC) statistics. All statistical tests were two-sided. BI-RADS density and PRS were independent risk factors across all three studies (P interaction = .23). Relative to those with scattered fibroglandular densities and average PRS (2(nd) quartile), women with extreme density and highest quartile PRS had 2.7-fold (95% confidence interval [CI] = 1.74 to 4.12) increased risk, while those with low density and PRS had reduced risk (OR = 0.30, 95% CI = 0.18 to 0.51). PRS added independent information (P < .001) to the BCSC model and improved discriminatory accuracy from AUC = 0.66 to AUC = 0.69. Although the BCSC-PRS model was well calibrated in case-control data, independent cohort data are needed to test calibration in the general population. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Applying the reasoned action approach to understanding health protection and health risk behaviors.
Conner, Mark; McEachan, Rosemary; Lawton, Rebecca; Gardner, Peter
2017-12-01
The Reasoned Action Approach (RAA) developed out of the Theory of Reasoned Action and Theory of Planned Behavior but has not yet been widely applied to understanding health behaviors. The present research employed the RAA in a prospective design to test predictions of intention and action for groups of protection and risk behaviors separately in the same sample. To test the RAA for health protection and risk behaviors. Measures of RAA components plus past behavior were taken in relation to eight protection and six risk behaviors in 385 adults. Self-reported behavior was assessed one month later. Multi-level modelling showed instrumental attitude, experiential attitude, descriptive norms, capacity and past behavior were significant positive predictors of intentions to engage in protection or risk behaviors. Injunctive norms were only significant predictors of intention in protection behaviors. Autonomy was a significant positive predictor of intentions in protection behaviors and a negative predictor in risk behaviors (the latter relationship became non-significant when controlling for past behavior). Multi-level modelling showed that intention, capacity, and past behavior were significant positive predictors of action for both protection and risk behaviors. Experiential attitude and descriptive norm were additional significant positive predictors of risk behaviors. The RAA has utility in predicting both protection and risk health behaviors although the power of predictors may vary across these types of health behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.
Skinner, James E; Meyer, Michael; Nester, Brian A; Geary, Una; Taggart, Pamela; Mangione, Antoinette; Ramalanjaona, George; Terregino, Carol; Dalsey, William C
2009-01-01
Objective: Comparative algorithmic evaluation of heartbeat series in low-to-high risk cardiac patients for the prospective prediction of risk of arrhythmic death (AD). Background: Heartbeat variation reflects cardiac autonomic function and risk of AD. Indices based on linear stochastic models are independent risk factors for AD in post-myocardial infarction (post-MI) cohorts. Indices based on nonlinear deterministic models have superior predictability in retrospective data. Methods: Patients were enrolled (N = 397) in three emergency departments upon presenting with chest pain and were determined to be at low-to-high risk of acute MI (>7%). Brief ECGs were recorded (15 min) and R-R intervals assessed by three nonlinear algorithms (PD2i, DFA, and ApEn) and four conventional linear-stochastic measures (SDNN, MNN, 1/f-Slope, LF/HF). Out-of-hospital AD was determined by modified Hinkle–Thaler criteria. Results: All-cause mortality at one-year follow-up was 10.3%, with 7.7% adjudicated to be AD. The sensitivity and relative risk for predicting AD was highest at all time-points for the nonlinear PD2i algorithm (p ≤0.001). The sensitivity at 30 days was 100%, specificity 58%, and relative risk >100 (p ≤0.001); sensitivity at 360 days was 95%, specificity 58%, and relative risk >11.4 (p ≤0.001). Conclusions: Heartbeat analysis by the time-dependent nonlinear PD2i algorithm is comparatively the superior test. PMID:19707283
Automated analysis of free speech predicts psychosis onset in high-risk youths
Bedi, Gillinder; Carrillo, Facundo; Cecchi, Guillermo A; Slezak, Diego Fernández; Sigman, Mariano; Mota, Natália B; Ribeiro, Sidarta; Javitt, Daniel C; Copelli, Mauro; Corcoran, Cheryl M
2015-01-01
Background/Objectives: Psychiatry lacks the objective clinical tests routinely used in other specializations. Novel computerized methods to characterize complex behaviors such as speech could be used to identify and predict psychiatric illness in individuals. AIMS: In this proof-of-principle study, our aim was to test automated speech analyses combined with Machine Learning to predict later psychosis onset in youths at clinical high-risk (CHR) for psychosis. Methods: Thirty-four CHR youths (11 females) had baseline interviews and were assessed quarterly for up to 2.5 years; five transitioned to psychosis. Using automated analysis, transcripts of interviews were evaluated for semantic and syntactic features predicting later psychosis onset. Speech features were fed into a convex hull classification algorithm with leave-one-subject-out cross-validation to assess their predictive value for psychosis outcome. The canonical correlation between the speech features and prodromal symptom ratings was computed. Results: Derived speech features included a Latent Semantic Analysis measure of semantic coherence and two syntactic markers of speech complexity: maximum phrase length and use of determiners (e.g., which). These speech features predicted later psychosis development with 100% accuracy, outperforming classification from clinical interviews. Speech features were significantly correlated with prodromal symptoms. Conclusions: Findings support the utility of automated speech analysis to measure subtle, clinically relevant mental state changes in emergent psychosis. Recent developments in computer science, including natural language processing, could provide the foundation for future development of objective clinical tests for psychiatry. PMID:27336038
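As a simplified illustration of one derived feature, the sketch below computes an LSA-style first-order semantic coherence as the cosine similarity between consecutive phrases in a reduced vector space. The toy transcript, TF-IDF weighting, and two-component SVD are assumptions for demonstration and are much cruder than the study's pipeline.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    phrases = [
        "I went to the store yesterday",
        "the store was closed for the holiday",
        "holidays make the week feel shorter",
        "my cat likes to sleep on the radiator",
    ]

    tfidf = TfidfVectorizer().fit_transform(phrases)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
    lsa /= np.linalg.norm(lsa, axis=1, keepdims=True)          # unit-normalize phrase vectors

    coherence = [float(lsa[i] @ lsa[i + 1]) for i in range(len(phrases) - 1)]
    print("consecutive-phrase coherence:", np.round(coherence, 2),
          "minimum:", round(min(coherence), 2))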
Hanif, M W; Valsamakis, G; Dixon, A; Boutsiadis, A; Jones, A F; Barnett, A H; Kumar, S
2008-09-01
We tested a stepwise, community-based screening strategy for glucose intolerance in South Asians using a health questionnaire in conjunction with body mass index (BMI). Anthropometric measurements (waist and hip circumference, sagittal diameter and percentage body fat) were then conducted in a hospital setting followed by an oral glucose tolerance test (OGTT) to identify subjects at the highest risk and analyse the factors predicting that risk. A health questionnaire was administered to 435 subjects in a community setting and BMI was measured. Subjects were graded by a risk score based on the health questionnaire as high, medium and low. Subjects with high and medium risk scores and a representative sample of those with low scores had anthropometric measurements in hospital followed by an OGTT. In total, 205 (47%) of the subjects had an OGTT performed. In total, 48.7% of the subjects tested with an OGTT had evidence of glucose dysregulation: 20% had diabetes and 28.7% had impaired glucose tolerance (IGT). A logistic regression model explained 49.1% of the total variability. The significant predictors of diabetes and IGT were body mass index (BMI), random blood glucose (BM), sibling with diabetes and presence of diagnosed hypertension or ischaemic disease. Most of these predictors, along with other hereditary diabetes factors, create a composite score with high predictability, as receiver operating characteristic curve analysis shows. We describe a simple, stepwise strategy in a community setting, based on a health questionnaire and anthropometric measurements, to explain about 50% of cases with IGT and diabetes and diagnose about 50% of cases from the population screened. We have also identified factors that predict the risk.
Chen, Minjun; Tung, Chun-Wei; Shi, Qiang; Guo, Lei; Shi, Leming; Fang, Hong; Borlak, Jürgen; Tong, Weida
2014-07-01
Drug-induced liver injury (DILI) is a major cause of drug failures in both the preclinical and clinical phase. Consequently, improving prediction of DILI at an early stage of drug discovery will reduce the potential failures in the subsequent drug development program. In this regard, high-content screening (HCS) assays are considered as a promising strategy for the study of DILI; however, the predictive performance of HCS assays is frequently insufficient. In the present study, a new testing strategy was developed to improve DILI prediction by employing in vitro assays that was combined with the RO2 model (i.e., 'rule-of-two' defined by daily dose ≥100 mg/day & logP ≥3). The RO2 model was derived from the observation that high daily doses and lipophilicity of an oral medication were associated with significant DILI risk in humans. In the developed testing strategy, the RO2 model was used for the rational selection of candidates for HCS assays, and only the negatives predicted by the RO2 model were further investigated by HCS. Subsequently, the effects of drug treatment on cell loss, nuclear size, DNA damage/fragmentation, apoptosis, lysosomal mass, mitochondrial membrane potential, and steatosis were studied in cultures of primary rat hepatocytes. Using a set of 70 drugs with clear evidence of clinically relevant DILI, the testing strategy improved the accuracies by 10 % and reduced the number of drugs requiring experimental assessment by approximately 20 %, as compared to the HCS assay alone. Moreover, the testing strategy was further validated by including published data (Cosgrove et al. in Toxicol Appl Pharmacol 237:317-330, 2009) on drug-cytokine-induced hepatotoxicity, which improved the accuracies by 7 %. Taken collectively, the proposed testing strategy can significantly improve the prediction of in vitro assays for detecting DILI liability in an early drug discovery phase.
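A minimal sketch of the tiered strategy described above: apply the RO2 rule first (daily dose ≥100 mg/day and logP ≥3, as stated in the abstract) and send only RO2 negatives to the in vitro assay. The run_hcs_assay callable is a hypothetical placeholder for the high-content screening step.

    def rule_of_two(daily_dose_mg: float, logp: float) -> bool:
        """True if the compound is flagged as a DILI risk by the RO2 model."""
        return daily_dose_mg >= 100 and logp >= 3

    def predict_dili(daily_dose_mg: float, logp: float, run_hcs_assay) -> str:
        if rule_of_two(daily_dose_mg, logp):
            return "DILI-positive (RO2)"        # RO2 positives need no HCS testing in this strategy
        return "DILI-positive (HCS)" if run_hcs_assay() else "DILI-negative"

    # Example: an RO2-negative compound is routed to the HCS assay
    print(predict_dili(daily_dose_mg=50, logp=2.1, run_hcs_assay=lambda: False))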
Lotfi, Ghassan; Faraz, Saima; Nasir, Razan; Somini, Sreenisha; Abdeldayem, Rasha M; Koratkar, Raghunandini; Alsawalhi, Nadia; Ammar, Abeer
2017-10-26
The purpose of this study is to first compare the performance of the PAMG-1 biomarker test to that of standard clinical assessment (SCA) for the risk assessment of spontaneous preterm delivery (sPTD) among women with symptoms of preterm labor (PTL) and then calculate the potential impact on unnecessary admission reduction. Patients at 24 0/7–36 6/7 weeks of gestation with PTL symptoms, cervical dilatation ≤3 cm, no intercourse within 24 h, and clinically intact membranes were recruited consecutively into this prospective observational study. Specificity (SP), sensitivity (SN), positive predictive value (PPV), and negative predictive value (NPV) for the PAMG-1 test and SCA, for which a positive result was defined as patient admission, for predicting spontaneous delivery within 7 and 14 days of presentation were calculated. One hundred and forty-eight patients were included in the analysis, 132 of whom had both SCA and PAMG-1 results available. For the prediction of sPTD ≤7 d for SCA and PAMG-1, the PPV and NPV were 10% and 100%, and 71% and 98%, respectively. For prediction of sPTD ≤14 d for SCA and PAMG-1, the PPV and NPV were 14% and 100%, and 86% and 96%, respectively. Sixty-one per cent (81/132) of patients were admitted for treatment and/or observation. Our study reinforces the critical role of the PAMG-1 biomarker test to aid in risk assessment of imminent spontaneous preterm delivery in patients with symptoms of PTL. The PAMG-1 test was found to be statistically superior to standard clinical assessment alone, with respect to specificity. Based on our data, the introduction of a PAMG-1 test result into clinical decision making could reduce up to 91% of unnecessary admissions for women presenting with threatened preterm labor.
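For readers who want the arithmetic behind the reported metrics, a generic worked example computing sensitivity, specificity, PPV, and NPV from a 2x2 table; the counts are made up for illustration and are not the study data.

    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        """Standard diagnostic accuracy measures from true/false positive and negative counts."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn),
        }

    # e.g. a hypothetical test with 6 true positives, 2 false positives,
    # 1 false negative and 123 true negatives among 132 women
    print(diagnostic_metrics(tp=6, fp=2, fn=1, tn=123))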
Moarcăs, M; Georgescu, I C; Moarcăs, R; Badea, M; Cîrstoiu, M
2014-01-01
The cytological interpretation of ASC-US represents a category of morphologic uncertainty. For patients with this result, additional tests are necessary to determine the risk of cervical lesions. A total of 198 patients with ASC-US cytology were analyzed between 2008 and 2013. All patients included in the study subsequently underwent high-risk oncogenic HPV testing and colposcopy. 103 (52%) patients tested positive for high-risk HPV, and 21 (10%) had associated colposcopy changes and precancerous or cancerous lesions identified through biopsy. 95 (48%) patients tested negative for HPV, and none of these women had lesions at colposcopy. High-risk oncogenic HPV testing proved useful in identifying the patients with ASC-US cytology who are at high risk of cervical lesions (100% sensitivity). In this study, HPV testing had a negative predictive value of 100%, which renders further colposcopy evaluation unnecessary. HPV testing for women with ASC-US is not specific in identifying women with cervical lesions (specificity 53%), which results from the high prevalence of self-limiting HPV infections in women under 30 years of age. High-risk HPV testing for women with ASC-US cervical cytology is useful in determining the risk of precancerous and cancerous cervical lesions. A positive result is associated with a high risk of cervical lesions (20%), and for these patients colposcopy is necessary. For women with a negative result, the risk of cervical lesions is practically null, so colposcopy is not required.
Veeravagu, Anand; Li, Amy; Swinney, Christian; Tian, Lu; Moraff, Adrienne; Azad, Tej D; Cheng, Ivan; Alamin, Todd; Hu, Serena S; Anderson, Robert L; Shuer, Lawrence; Desai, Atman; Park, Jon; Olshen, Richard A; Ratliff, John K
2017-07-01
OBJECTIVE The ability to assess the risk of adverse events based on known patient factors and comorbidities would provide more effective preoperative risk stratification. Present risk assessment in spine surgery is limited. An adverse event prediction tool was developed to predict the risk of complications after spine surgery and tested on a prospective patient cohort. METHODS The spinal Risk Assessment Tool (RAT), a novel instrument for the assessment of risk for patients undergoing spine surgery that was developed based on an administrative claims database, was prospectively applied to 246 patients undergoing 257 spinal procedures over a 3-month period. Prospectively collected data were used to compare the RAT to the Charlson Comorbidity Index (CCI) and the American College of Surgeons National Surgery Quality Improvement Program (ACS NSQIP) Surgical Risk Calculator. Study end point was occurrence and type of complication after spine surgery. RESULTS The authors identified 69 patients (73 procedures) who experienced a complication over the prospective study period. Cardiac complications were most common (10.2%). Receiver operating characteristic (ROC) curves were calculated to compare complication outcomes using the different assessment tools. Area under the curve (AUC) analysis showed comparable predictive accuracy between the RAT and the ACS NSQIP calculator (0.670 [95% CI 0.60-0.74] in RAT, 0.669 [95% CI 0.60-0.74] in NSQIP). The CCI was not accurate in predicting complication occurrence (0.55 [95% CI 0.48-0.62]). The RAT produced mean probabilities of 34.6% for patients who had a complication and 24% for patients who did not (p = 0.0003). The generated predicted values were stratified into low, medium, and high rates. For the RAT, the predicted complication rate was 10.1% in the low-risk group (observed rate 12.8%), 21.9% in the medium-risk group (observed 31.8%), and 49.7% in the high-risk group (observed 41.2%). The ACS NSQIP calculator consistently produced complication predictions that underestimated complication occurrence: 3.4% in the low-risk group (observed 12.6%), 5.9% in the medium-risk group (observed 34.5%), and 12.5% in the high-risk group (observed 38.8%). The RAT was more accurate than the ACS NSQIP calculator (p = 0.0018). CONCLUSIONS While the RAT and ACS NSQIP calculator were both able to identify patients more likely to experience complications following spine surgery, both have substantial room for improvement. Risk stratification is feasible in spine surgery procedures; currently used measures have low accuracy.
The Functional Movement Screen and Injury Risk: Association and Predictive Value in Active Men.
Bushman, Timothy T; Grier, Tyson L; Canham-Chervak, Michelle; Anderson, Morgan K; North, William J; Jones, Bruce H
2016-02-01
The Functional Movement Screen (FMS) is a series of 7 tests used to assess the injury risk in active populations. To determine the association of the FMS with the injury risk, assess predictive values, and identify optimal cut points using 3 injury types. Cohort study; Level of evidence, 2. Physically active male soldiers aged 18 to 57 years (N = 2476) completed the FMS. Demographic and fitness data were collected by survey. Medical record data for overuse injuries, traumatic injuries, and any injury 6 months after the FMS assessment were obtained. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated along with the receiver operating characteristic (ROC) to determine the area under the curve (AUC) and identify optimal cut points for the risk assessment. Risks, risk ratios (RRs), odds ratios (ORs), and 95% CIs were calculated to assess injury risks. Soldiers who scored ≤14 were at a greater risk for injuries compared with those who scored >14 using the composite score for overuse injuries (RR, 1.84; 95% CI, 1.63-2.09), traumatic injuries (RR, 1.26; 95% CI, 1.03-1.54), and any injury (RR, 1.60; 95% CI, 1.45-1.77). When controlling for other known injury risk factors, multivariate logistic regression analysis identified poor FMS performance (OR [score ≤14/19-21], 2.00; 95% CI, 1.42-2.81) as an independent risk factor for injuries. A cut point of ≤14 registered low measures of predictive value for all 3 injury types (sensitivity, 28%-37%; PPV, 19%-52%; AUC, 54%-61%). Shifting the injury risk cut point of ≤14 to the optimal cut points indicated by the ROC did not appreciably improve sensitivity or the PPV. Although poor FMS performance was associated with a higher risk of injuries, it displayed low sensitivity, PPV, and AUC. On the basis of these findings, the use of the FMS to screen for the injury risk is not recommended in this population because of the low predictive value and misclassification of the injury risk. © 2015 The Author(s).
Combination of serum angiopoietin-2 and uterine artery Doppler for prediction of preeclampsia.
Puttapitakpong, Ploynin; Phupong, Vorapong
2016-02-01
The aim of this study was to determine the predictive value of the combination of serum angiopoietin-2 (Ang-2) levels and uterine artery Doppler for the detection of preeclampsia in women at 16-18 weeks of gestation and to identify other pregnancy complications that could be predicted with these combined tests. Maternal serum Ang-2 levels were measured, and uterine artery Doppler was performed in 400 pregnant women. The main outcome was preeclampsia. The predictive values of this combination were calculated. Twenty-five women (6.3%) developed preeclampsia. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of uterine artery Doppler combined with serum Ang-2 levels for the prediction of preeclampsia were 24.0%, 94.4%, 22.2% and 94.9%, respectively. For the prediction of early-onset preeclampsia, the sensitivity, specificity, PPV and NPV were 57.1%, 94.1%, 14.8% and 99.2%, respectively. Patients with abnormal uterine artery Doppler and abnormal serum Ang-2 levels (above 19.5 ng ml−1) were at higher risk for preterm delivery (relative risk=2.7, 95% confidence interval 1.2-5.8). Our findings revealed that the combination of uterine artery Doppler and serum Ang-2 levels at 16-18 weeks of gestation can be used to predict early-onset preeclampsia but not overall preeclampsia. Thus, this combination may be a useful early second trimester screening test for the prediction of early-onset preeclampsia.
García-Herranz, Sara; Díaz-Mardomingo, M Carmen; Peraita, Herminia
2016-09-01
In the field of neuropsychology, it is essential to determine which neuropsychological tests predict Alzheimer's disease (AD) in people with mild cognitive impairment (MCI) and which cut-off points should be used to identify people at greater risk for converting to dementia. The aim of the present study was to analyse the predictive value of the cognitive tests included in a neuropsychological battery for conversion to AD among MCI participants and to analyse the influence of some sociodemographic variables - sex, age, schooling - and others, such as follow-up time and emotional state. A total of 105 participants were assessed with a neuropsychological battery at baseline and during a 3-year follow-up period. For the present study, the data were analysed at baseline. During the follow-up period, 24 participants (22.85%) converted to dementia (2.79 ± 1.14 years) and 81 (77.14%) remained as MCI. The logistic regression analysis determined that the long delay cued recall and the performance time of the Rey figure test were the best predictive tests of conversion to dementia after an MCI diagnosis. Concerning the sociodemographic factors, sex had the highest predictive power. The results reveal the relevance of the neuropsychological data obtained in the first assessment. Specifically, the data obtained in the episodic verbal memory tests and tests that assess visuospatial and executive components may help to identify people with MCI who may develop AD in an interval not longer than 4 years, with the masculine gender being an added risk factor. © 2015 The British Psychological Society.
Bellcross, Cecelia A; Peipins, Lucy A; McCarty, Frances A; Rodriguez, Juan L; Hawkins, Nikki A; Hensley Alford, Sharon; Leadbetter, Steven
2015-01-01
Evidence shows underutilization of cancer genetics services. To explore the reasons behind this underutilization, this study evaluated characteristics of women who were referred for genetic counseling and/or had undergone BRCA1/2 testing. An ovarian cancer risk perception study stratified 16,720 eligible women from the Henry Ford Health System into average-, elevated-, and high-risk groups based on family history. We randomly selected 3,307 subjects and interviewed 2,524 of them (76.3% response rate). Among the average-, elevated-, and high-risk groups, 2.3, 10.1, and 20.2%, respectively, reported genetic counseling referrals, and 0.8, 3.3, and 9.5%, respectively, reported having undergone BRCA testing. Personal breast cancer history, high risk, and perceived ovarian cancer risk were associated with both referral and testing. Discussion of family history with a doctor predicted counseling referral, whereas belief that family history influenced risk was the strongest BRCA testing predictor. Women perceiving their cancer risk as much higher than other women their age were twice as likely (95% confidence interval: 2.0-9.6) to report genetic counseling referral. In a health system with ready access to cancer genetic counseling and BRCA testing, women who were at high risk underutilized these services. There were strong associations between perceived ovarian cancer risk and genetic counseling referral, and between a belief that family history influenced risk and BRCA testing.
To the Greatest Lengths: Al Qaeda, Proximity and Recruitment Risk
2010-12-01
activity (Boba, 2005, pp. 218–219). On the complex end of this spectrum, density mapping uses mathematical formulas to determine degrees of criminal... area. These calculations "combines actuarial risk prediction with environmental criminology to assign risk values to places according to their... translated records, and the compilation of distance variables are correct. Mathematically, the formula for this test is…
Hermes, Helen E.; Teutonico, Donato; Preuss, Thomas G.; Schneckener, Sebastian
2018-01-01
The environmental fates of pharmaceuticals and the effects of crop protection products on non-target species are subjects that are undergoing intense review. Since measuring the concentrations and effects of xenobiotics on all affected species under all conceivable scenarios is not feasible, standard laboratory animals such as rabbits are tested, and the observed adverse effects are translated to focal species for environmental risk assessments. In that respect, mathematical modelling is becoming increasingly important for evaluating the consequences of pesticides in untested scenarios. In particular, physiologically based pharmacokinetic/toxicokinetic (PBPK/TK) modelling is a well-established methodology used to predict tissue concentrations based on the absorption, distribution, metabolism and excretion of drugs and toxicants. In the present work, a rabbit PBPK/TK model is developed and evaluated with data available from the literature. The model predictions include scenarios of both intravenous (i.v.) and oral (p.o.) administration of small and large compounds. The presented rabbit PBPK/TK model predicts the pharmacokinetics (Cmax, AUC) of the tested compounds with an average 1.7-fold error. This result indicates a good predictive capacity of the model, which enables its use for risk assessment modelling and simulations. PMID:29561908
Rosellini, A J; Stein, M B; Benedek, D M; Bliese, P D; Chiu, W T; Hwang, I; Monahan, J; Nock, M K; Petukhova, M V; Sampson, N A; Street, A E; Zaslavsky, A M; Ursano, R J; Kessler, R C
2017-10-01
The U.S. Army uses universal preventive interventions for several negative outcomes (e.g. suicide, violence, sexual assault) with especially high risks in the early years of service. More intensive interventions exist, but would be cost-effective only if targeted at high-risk soldiers. We report results of efforts to develop models for such targeting from self-report surveys administered at the beginning of Army service. A total of 21 832 new soldiers completed a self-administered questionnaire (SAQ) in 2011-2012 and consented to link administrative data to SAQ responses. Penalized regression models were developed for 12 administratively recorded outcomes occurring by December 2013: suicide attempt, mental hospitalization, positive drug test, traumatic brain injury (TBI), other severe injury, several types of violence perpetration and victimization, demotion, and attrition. The best-performing models were for TBI (AUC = 0.80), major physical violence perpetration (AUC = 0.78), sexual assault perpetration (AUC = 0.78), and suicide attempt (AUC = 0.74). Although predicted risk scores were significantly correlated across outcomes, prediction was not improved by including risk scores for other outcomes in models. Of particular note: 40.5% of suicide attempts occurred among the 10% of new soldiers with highest predicted risk, 57.2% of male sexual assault perpetrations among the 15% with highest predicted risk, and 35.5% of female sexual assault victimizations among the 10% with highest predicted risk. Data collected at the beginning of service in self-report surveys could be used to develop risk models that define small proportions of new soldiers accounting for high proportions of negative outcomes over the first few years of service.
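An illustrative sketch of the modelling idea (a penalized logistic regression, its AUC, and the concentration of events in the highest-risk decile), run on synthetic survey-like data; nothing here reproduces the Army STARRS variables or results.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n, p = 20000, 50
    X = rng.normal(size=(n, p))                       # stand-in survey item scores
    logit = -4 + X[:, :5] @ np.array([0.8, 0.6, 0.5, 0.4, 0.3])
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # rare administrative outcome, synthetic

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)

    risk = model.predict_proba(X_te)[:, 1]
    print(f"AUC = {roc_auc_score(y_te, risk):.2f}")
    top_decile = risk >= np.quantile(risk, 0.9)
    print(f"share of outcomes in top risk decile: {y_te[top_decile].sum() / y_te.sum():.1%}")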
Friesen, Phoebe; Lawrence, Ryan E; Brucato, Gary; Girgis, Ragy R; Dixon, Lisa
2016-11-01
Genetic tests for schizophrenia could introduce both risks and benefits. Little is known about the hopes and expectations of young adults at clinical high-risk for psychosis concerning genetic testing for schizophrenia, despite the fact that these youth could be among those highly affected by such tests. We conducted semistructured interviews with 15 young adults at clinical high-risk for psychosis to ask about their interest, expectations, and hopes regarding genetic testing for schizophrenia. Most participants reported a high level of interest in genetic testing for schizophrenia, and the majority said they would take such a test immediately if it were available. Some expressed far-reaching expectations for a genetic test, such as predicting symptom severity and the timing of symptom onset. Several assumed that genetic testing would be accompanied by interventions to prevent schizophrenia. Participants anticipated mixed reactions on finding out they had a genetic risk for schizophrenia, suggesting that they might feel both a sense of relief and a sense of hopelessness. We suggest that genetic counseling could play an important role in counteracting a culture of genetic over-optimism and helping young adults at clinical high-risk for psychosis understand the limitations of genetic testing. Counseling sessions could also invite individuals to explore how receiving genetic risk information might impact their well-being, as early evidence suggests that some psychological factors help individuals cope, whereas others heighten distress related to genetic test results.
Longitudinal histories as predictors of future diagnoses of domestic abuse: modelling study
Kohane, Isaac S; Mandl, Kenneth D
2009-01-01
Objective: To determine whether longitudinal data in patients’ historical records, commonly available in electronic health record systems, can be used to predict a patient’s future risk of receiving a diagnosis of domestic abuse. Design: Bayesian models, known as intelligent histories, used to predict a patient’s risk of receiving a future diagnosis of abuse, based on the patient’s diagnostic history. Retrospective evaluation of the model’s predictions using an independent testing set. Setting: A state-wide claims database covering six years of inpatient admissions to hospital, admissions for observation, and encounters in emergency departments. Population: All patients aged over 18 who had at least four years between their earliest and latest visits recorded in the database (561 216 patients). Main outcome measures: Timeliness of detection, sensitivity, specificity, positive predictive values, and area under the ROC curve. Results: 1.04% (5829) of the patients met the narrow case definition for abuse, while 3.44% (19 303) met the broader case definition for abuse. The model achieved sensitive, specific (area under the ROC curve of 0.88), and early (10-30 months in advance, on average) prediction of patients’ future risk of receiving a diagnosis of abuse. Analysis of model parameters showed important differences between sexes in the risks associated with certain diagnoses. Conclusions: Commonly available longitudinal diagnostic data can be useful for predicting a patient’s future risk of receiving a diagnosis of abuse. This modelling approach could serve as the basis for an early warning system to help doctors identify high risk patients for further screening. PMID:19789406
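As a minimal stand-in for the idea of predicting future risk from prior diagnostic codes, the sketch below trains a naive Bayes classifier on binary "ever had diagnosis X" indicators. The published intelligent-histories model is richer (it exploits the timing of visits); the data, code list, and effect sizes here are synthetic assumptions.

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)
    n_patients, n_codes = 5000, 30
    X = rng.binomial(1, 0.1, size=(n_patients, n_codes))      # prior-diagnosis indicators, synthetic
    logit = -3.5 + 1.5 * X[:, 0] + 1.2 * X[:, 1] + 0.8 * X[:, 2]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))             # later abuse diagnosis, synthetic

    model = BernoulliNB().fit(X[:4000], y[:4000])
    risk = model.predict_proba(X[4000:])[:, 1]
    print(f"held-out AUC = {roc_auc_score(y[4000:], risk):.2f}")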
Nonlinear Dynamics: Theoretical Perspectives and Application to Suicidology
ERIC Educational Resources Information Center
Schiepek, Gunter; Fartacek, Clemens; Sturm, Josef; Kralovec, Karl; Fartacek, Reinhold; Ploderl, Martin
2011-01-01
Despite decades of research, the prediction of suicidal behavior remains limited. As a result, searching for more specific risk factors and testing their predictive power are central in suicidology. This strategy may be of limited value because it assumes linearity to the suicidal process that is most likely nonlinear by nature and which can be…
Screening older adults at risk of falling with the Tinetti balance scale.
Raîche, M; Hébert, R; Prince, F; Corriveau, H
2000-09-16
In a prospective study of 225 community dwelling people 75 years and older, we tested the validity of the Tinetti balance scale to predict individuals who will fall at least once during the following year. A score of 36 or less identified 7 of 10 fallers with 70% sensitivity and 52% specificity. With this cut-off score, 53% of the individuals were screened positive and presented a two-fold risk of falling. These characteristics support the use of this test to screen older people at risk of falling in order to include them in a preventive intervention.
Two-Step Approach for the Prediction of Future Type 2 Diabetes Risk
Abdul-Ghani, Muhammad A.; Abdul-Ghani, Tamam; Stern, Michael P.; Karavic, Jasmina; Tuomi, Tiinamaija; Isomaa, Bo; DeFronzo, Ralph A.; Groop, Leif
2011-01-01
OBJECTIVE To develop a model for the prediction of type 2 diabetes mellitus (T2DM) risk on the basis of a multivariate logistic model and 1-h plasma glucose concentration (1-h PG). RESEARCH DESIGN AND METHODS The model was developed in a cohort of 1,562 nondiabetic subjects from the San Antonio Heart Study (SAHS) and validated in 2,395 nondiabetic subjects in the Botnia Study. A risk score on the basis of anthropometric parameters, plasma glucose and lipid profile, and blood pressure was computed for each subject. Subjects with a risk score above a certain cut point were considered to represent high-risk individuals, and their 1-h PG concentration during the oral glucose tolerance test was used to further refine their future T2DM risk. RESULTS We used the San Antonio Diabetes Prediction Model (SADPM) to generate the initial risk score. A risk-score value of 0.065 was found to be an optimal cut point for initial screening and selection of high-risk individuals. A 1-h PG concentration >140 mg/dL in high-risk individuals (whose risk score was >0.065) was the optimal cut point for identification of subjects at increased risk. The two cut points had 77.8, 77.4, and 44.8% (for the SAHS) and 75.8, 71.6, and 11.9% (for the Botnia Study) sensitivity, specificity, and positive predictive value, respectively, in the SAHS and Botnia Study. CONCLUSIONS A two-step model, based on the combination of the SADPM and 1-h PG, is a useful tool for the identification of high-risk Mexican-American and Caucasian individuals. PMID:21788628
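A sketch of the two-step screen using the cut points reported in the abstract (risk score >0.065, then 1-h plasma glucose >140 mg/dL); the function and its return labels are illustrative, and computing the SADPM score itself is outside this fragment.

    from typing import Optional

    def two_step_screen(sadpm_risk_score: float, one_hour_pg_mg_dl: Optional[float]) -> str:
        """Two-step T2DM risk screen: SADPM score first, then 1-h plasma glucose for high scorers."""
        if sadpm_risk_score <= 0.065:
            return "low risk - no OGTT needed"
        if one_hour_pg_mg_dl is None:
            return "high risk score - perform OGTT and check 1-h plasma glucose"
        return ("increased T2DM risk" if one_hour_pg_mg_dl > 140
                else "high score but reassuring 1-h glucose")

    print(two_step_screen(0.09, 152))   # flagged by both steps
    print(two_step_screen(0.04, None))  # screened out at step one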
Cardiac risk stratification: Role of the coronary calcium score
Sharma, Rakesh K; Sharma, Rajiv K; Voelker, Donald J; Singh, Vibhuti N; Pahuja, Deepak; Nash, Teresa; Reddy, Hanumanth K
2010-01-01
Coronary artery calcium (CAC) is an integral part of atherosclerotic coronary heart disease (CHD). CHD is the leading cause of death in industrialized nations and there is a constant effort to develop preventative strategies. The emphasis is on risk stratification and primary risk prevention in asymptomatic patients to decrease cardiovascular mortality and morbidity. The Framingham Risk Score predicts CHD events only moderately well where family history is not included as a risk factor. There has been an exploration for new tests for better risk stratification and risk factor modification. While the Framingham Risk Score, European Systematic Coronary Risk Evaluation Project, and European Prospective Cardiovascular Munster study remain excellent tools for risk factor modification, the CAC score may have additional benefit in risk assessment. There have been several studies supporting the role of CAC score for prediction of myocardial infarction and cardiovascular mortality. It has been shown to have great scope in risk stratification of asymptomatic patients in the emergency room. Additionally, it may help in assessment of progression or regression of coronary artery disease. Furthermore, the CAC score may help differentiate ischemic from nonischemic cardiomyopathy. PMID:20730016
NASA Astrophysics Data System (ADS)
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin
2018-02-01
The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to frequency characteristics of the ROIs was initially computed from the discrete cosine transform and spatial domain of the images. Third, a support vector machine (SVM)-based machine learning classifier was used to optimally classify the selected image features and build a CAD-based risk prediction model. The classifier was trained using leave-one-case-out cross-validation. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70 ± 0.04, significantly higher than extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63 ± 0.04). This study demonstrated that the proposed approach could improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer paradigm.
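An illustrative sketch of the classification step only: an SVM risk model evaluated with leave-one-case-out cross-validation and ROC AUC. The feature matrix is random stand-in data for the 43 frequency-domain image features; no mammography images or study results are reproduced.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n_cases, n_features = 60, 43
    X = rng.normal(size=(n_cases, n_features))
    y = np.r_[np.ones(30, dtype=int), np.zeros(30, dtype=int)]   # positive vs negative "current" screenings
    X[y == 1, :5] += 0.8                                         # weak synthetic signal in a few features

    svm = SVC(kernel="rbf", probability=True, random_state=0)
    proba = cross_val_predict(svm, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print(f"leave-one-case-out AUC = {roc_auc_score(y, proba):.2f}")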
Palm, Sara; Momeni, Shima; Lundberg, Stina; Nylander, Ingrid; Roman, Erika
2014-01-01
Certain personality types and behavioral traits display high correlations to drug use and an increased level of dopamine in the reward system is a common denominator of all drugs of abuse. Dopamine response to drugs has been suggested to correlate with some of these personality types and to be a key factor influencing the predisposition to addiction. This study investigated if behavioral traits can be related to potassium- and amphetamine-induced dopamine response in the dorsal striatum, an area hypothesized to be involved in the shift from drug use to addiction. The open field and multivariate concentric square field™ tests were used to assess individual behavior in male Wistar rats. Chronoamperometric recordings were then made to study the potassium- and amphetamine-induced dopamine response in vivo. A classification based on risk-taking behavior in the open field was used for further comparisons. Risk-taking behavior was correlated between the behavioral tests and high risk takers displayed a more pronounced response to the dopamine uptake blocking effects of amphetamine. Behavioral parameters from both tests could also predict potassium- and amphetamine-induced dopamine responses showing a correlation between neurochemistry and behavior in risk-assessment and risk-taking parameters. In conclusion, the high risk-taking rats showed a more pronounced reduction of dopamine uptake in the dorsal striatum after amphetamine indicating that this area may contribute to the sensitivity of these animals to psychostimulants and proneness to addiction. Further, inherent dopamine activity was related to risk-assessment behavior, which may be of importance for decision-making and inhibitory control, key components in addiction. PMID:25076877
Natsch, Andreas; Emter, Roger; Haupt, Tina; Ellis, Graham
2018-06-01
Cosmetic regulations prohibit animal testing for the purpose of safety assessment and recent REACH guidance states that the local lymph node assay (LLNA) in mice shall only be conducted if in vitro data cannot give sufficient information for classification and labelling. However, Quantitative Risk Assessment (QRA) for fragrance ingredients requires a NESIL, a dose not expected to cause induction of skin sensitization in humans. In absence of human data, this is derived from the LLNA and it remains a key challenge for risk assessors to derive this value from non-animal data. Here we present a workflow using structural information, reactivity data and KeratinoSens results to predict a LLNA result as a point of departure. Specific additional tests (metabolic activation, complementary reactivity tests) are applied in selected cases depending on the chemical domain of a molecule. Finally, in vitro and in vivo data on close analogues are used to estimate uncertainty of the prediction in the specific chemical domain. This approach was applied to three molecules which were subsequently tested in the LLNA and 22 molecules with available and sometimes discordant human and LLNA data. Four additional case studies illustrate how this approach is being applied to recently developed molecules in the absence of animal data. Estimation of uncertainty and how this can be applied to determine a final NESIL for risk assessment is discussed. We conclude that, in the data-rich domain of fragrance ingredients, sensitization risk assessment without animal testing is possible in most cases by this integrated approach.
Using latent class analysis to identify academic and behavioral risk status in elementary students.
King, Kathleen R; Lembke, Erica S; Reinke, Wendy M
2016-03-01
Identifying classes of children on the basis of academic and behavior risk may have important implications for the allocation of intervention resources within Response to Intervention (RTI) and Multi-Tiered System of Support (MTSS) models. Latent class analysis (LCA) was conducted with a sample of 517 third grade students. Fall screening scores in the areas of reading, mathematics, and behavior were used as indicators of success on an end of year statewide achievement test. Results identified 3 subclasses of children, including a class with minimal academic and behavioral concerns (Tier 1; 32% of the sample), a class at-risk for academic problems and somewhat at-risk for behavior problems (Tier 2; 37% of the sample), and a class with significant academic and behavior problems (Tier 3; 31%). Each class was predictive of end of year performance on the statewide achievement test, with the Tier 1 class performing significantly higher on the test than the Tier 2 class, which in turn scored significantly higher than the Tier 3 class. The results of this study indicated that distinct classes of children can be determined through brief screening measures and are predictive of later academic success. Further implications are discussed for prevention and intervention for students at risk for academic failure and behavior problems. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
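As an illustration of the class-finding step, the sketch below fits a three-component mixture model to synthetic screening scores. It is only an analogy under stated assumptions: the study used latent class analysis on reading, mathematics, and behavior screeners, whereas scikit-learn has no LCA estimator, so a Gaussian mixture is used as a stand-in and all scores are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Invented screening scores (reading, mathematics, behavior) for 517 students,
# drawn from three overlapping profiles.
scores = np.vstack([
    rng.normal([0.80, 0.80, 0.20], 0.10, size=(160, 3)),   # minimal-concern profile
    rng.normal([0.50, 0.50, 0.50], 0.10, size=(190, 3)),   # at-risk profile
    rng.normal([0.20, 0.20, 0.80], 0.10, size=(167, 3)),   # significant-concern profile
])

gmm = GaussianMixture(n_components=3, random_state=0).fit(scores)
classes = gmm.predict(scores)
print("estimated class sizes:", np.bincount(classes))
```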
Kogan, Steven M; Cho, Junhan; Simons, Leslie Gordon; Allen, Kimberly A; Beach, Steven R H; Simons, Ronald L; Gibbons, Frederick X
2015-04-01
Life History Theory (LHT), a branch of evolutionary biology, describes how organisms maximize their reproductive success in response to environmental conditions. This theory suggests that challenging environmental conditions will lead to early pubertal maturation, which in turn predicts heightened risky sexual behavior. Although largely confirmed among female adolescents, results with male youth are inconsistent. We tested a set of predictions based on LHT with a sample of 375 African American male youth assessed three times from age 11 to age 16. Harsh, unpredictable community environments and harsh, inconsistent, or unregulated parenting at age 11 were hypothesized to predict pubertal maturation at age 13; pubertal maturation was hypothesized to forecast risky sexual behavior, including early onset of intercourse, substance use during sexual activity, and lifetime numbers of sexual partners. Results were consistent with our hypotheses. Among African American male youth, community environments were a modest but significant predictor of pubertal timing. Among those youth with high negative emotionality, both parenting and community factors predicted pubertal timing. Pubertal timing at age 13 forecast risky sexual behavior at age 16. Results of analyses conducted to determine whether environmental effects on sexual risk behavior were mediated by pubertal timing were not significant. This suggests that, although evolutionary mechanisms may affect pubertal development via contextual influences for sensitive youth, the factors that predict sexual risk behavior depend less on pubertal maturation than LHT suggests.
Shen, Songying; Lu, Jinhua; Zhang, Lifang; He, Jianrong; Li, Weidong; Chen, Niannian; Wen, Xingxuan; Xiao, Wanqing; Yuan, Mingyang; Qiu, Lan; Cheng, Kar Keung; Xia, Huimin; Mol, Ben Willem J; Qiu, Xiu
2017-02-01
There remains uncertainty regarding whether a single fasting glucose measurement is sufficient to predict risk of adverse perinatal outcomes. We included 12,594 pregnant women who underwent a 75-g oral glucose-tolerance test (OGTT) at 22-28 weeks' gestation in the Born in Guangzhou Cohort Study, China. Outcomes were large for gestational age (LGA) baby, cesarean section, and spontaneous preterm birth. We calculated the area under the receiver operating characteristic curves (AUCs) to assess the capacity of OGTT glucose values to predict adverse outcomes, and compared the AUCs of different components of the OGTT. 1325 women had an LGA baby (10.5%). Glucose measurements were linearly associated with LGA, with the strongest associations for fasting glucose (odds ratio 1.37, 95% confidence interval 1.30-1.45). Weaker associations were observed for cesarean section and spontaneous preterm birth. Fasting glucose had comparable discriminative power for prediction of LGA to the combination of fasting, 1 h, and 2 h glucose values during the OGTT (AUCs, 0.611 vs. 0.614, P=0.166). The LGA risk was consistently increased in women with abnormal fasting glucose (≥5.1 mmol/l), irrespective of 1 h or 2 h glucose levels. A single fasting glucose measurement performs comparably to the 75-g OGTT in predicting risk of having an LGA baby. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
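The AUC comparison at the heart of this abstract is easy to reproduce in outline. The sketch below generates synthetic OGTT values and an LGA outcome driven mainly by fasting glucose (an assumption made purely for illustration), then compares the discrimination of fasting glucose alone against a logistic model using all three OGTT values; the published study used formal paired AUC comparisons rather than this simplified check.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 12594
fasting = rng.normal(4.4, 0.4, n)          # mmol/l, synthetic
glucose_1h = rng.normal(7.0, 1.5, n)
glucose_2h = rng.normal(6.0, 1.2, n)

# Synthetic LGA outcome whose risk rises with fasting glucose only (illustrative).
p_lga = 1 / (1 + np.exp(-(-2.2 + 0.7 * (fasting - 4.4))))
lga = rng.binomial(1, p_lga)

auc_fasting = roc_auc_score(lga, fasting)
X = np.column_stack([fasting, glucose_1h, glucose_2h])
combined = LogisticRegression().fit(X, lga).predict_proba(X)[:, 1]
print(f"fasting-only AUC {auc_fasting:.3f} vs full-OGTT AUC {roc_auc_score(lga, combined):.3f}")
```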
Lee, Christine K; Hofer, Ira; Gabel, Eilon; Baldi, Pierre; Cannesson, Maxime
2018-04-17
The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality. The data used to train and validate the algorithm consist of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed improvement of the deep neural network by adding American Society of Anesthesiologists (ASA) Physical Status Classification and robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and the Risk Stratification Index. In-hospital mortality in the training and test sets was 0.81% and 0.73%, respectively. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristic curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90, 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristic curve, at 0.97 (95% CI, 0.94 to 0.99). Deep neural networks can predict in-hospital mortality based on automatically extractable intraoperative data, but are not (yet) superior to existing methods.
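A minimal sketch of the training setup described above: a feed-forward network with a logistic output fitted by stochastic gradient descent with momentum on an 80/20 split. The hidden-layer width, learning rate, epoch count, and synthetic data are all assumptions; the published model's architecture and preprocessing are not reproduced here.

```python
import torch
from torch import nn

torch.manual_seed(0)
n, d = 59985, 87
X = torch.randn(n, d)                      # placeholder intraoperative features
y = (torch.rand(n) < 0.008).float()        # ~0.8% in-hospital mortality, synthetic

split = int(0.8 * n)                       # 80% training / 20% testing, as in the study
model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.BCEWithLogitsLoss()           # logistic output for binary mortality

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X[:split]).squeeze(1), y[:split])
    loss.backward()
    optimizer.step()

with torch.no_grad():
    test_risk = torch.sigmoid(model(X[split:]).squeeze(1))
print("mean predicted mortality risk on held-out patients:", float(test_risk.mean()))
```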
Hoenigl, Martin; Weibel, Nadir; Mehta, Sanjay R; Anderson, Christy M; Jenks, Jeffrey; Green, Nella; Gianella, Sara; Smith, Davey M; Little, Susan J
2015-08-01
Although men who have sex with men (MSM) represent a dominant risk group for human immunodeficiency virus (HIV), the risk of HIV infection within this population is not uniform. The objective of this study was to develop and validate a score to estimate incident HIV infection risk. Adult MSM who were tested for acute and early HIV (AEH) between 2008 and 2014 were retrospectively randomized 2:1 to a derivation and validation dataset, respectively. Using the derivation dataset, each predictor associated with an AEH outcome in the multivariate prediction model was assigned a point value that corresponded to its odds ratio. The score was validated on the validation dataset using C-statistics. Data collected at a single HIV testing encounter from 8326 unique MSM were analyzed, including 200 with AEH (2.4%). Four risk behavior variables were significantly associated with an AEH diagnosis (ie, incident infection) in multivariable analysis and were used to derive the San Diego Early Test (SDET) score: condomless receptive anal intercourse (CRAI) with an HIV-positive MSM (3 points), the combination of CRAI plus ≥5 male partners (3 points), ≥10 male partners (2 points), and diagnosis of bacterial sexually transmitted infection (2 points)-all as reported for the prior 12 months. The C-statistic for this risk score was >0.7 in both data sets. The SDET risk score may help to prioritize resources and target interventions, such as preexposure prophylaxis, to MSM at greatest risk of acquiring HIV infection. The SDET risk score is deployed as a freely available tool at http://sdet.ucsd.edu. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
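The scoring rule itself is simple enough to write down directly. The sketch below encodes the four behaviours and point values quoted in the abstract; the threshold at which a clinic would offer preexposure prophylaxis is not given there, so none is hard-coded.

```python
def sdet_score(crai_with_hiv_positive_partner: bool,
               crai_and_five_plus_partners: bool,
               ten_plus_male_partners: bool,
               bacterial_sti_diagnosis: bool) -> int:
    """San Diego Early Test (SDET) score from behaviours reported for the prior 12 months."""
    return (3 * crai_with_hiv_positive_partner
            + 3 * crai_and_five_plus_partners
            + 2 * ten_plus_male_partners
            + 2 * bacterial_sti_diagnosis)

# Example: condomless receptive anal intercourse with an HIV-positive partner
# plus a recent bacterial STI diagnosis gives 3 + 2 = 5 points.
print(sdet_score(True, False, False, True))
```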
Measurable residual disease testing in acute myeloid leukaemia.
Hourigan, C S; Gale, R P; Gormley, N J; Ossenkoppele, G J; Walter, R B
2017-07-01
There is considerable interest in developing techniques to detect and/or quantify remaining leukaemia cells termed measurable or, less precisely, minimal residual disease (MRD) in persons with acute myeloid leukaemia (AML) in complete remission defined by cytomorphological criteria. An important reason for AML MRD-testing is the possibility of estimating the likelihood (and timing) of leukaemia relapse. A perfect MRD-test would precisely quantify leukaemia cells biologically able and likely to cause leukaemia relapse within a defined interval. AML is genetically diverse and there is currently no uniform approach to detecting such cells. Several technologies focused on immune phenotype or cytogenetic and/or molecular abnormalities have been developed, each with advantages and disadvantages. Many studies report a positive MRD-test at diverse time points during AML therapy identifies persons with a higher risk of leukaemia relapse compared with those with a negative MRD-test even after adjusting for other prognostic and predictive variables. No MRD-test in AML has perfect sensitivity and specificity for relapse prediction at the cohort- or subject levels and there are substantial rates of false-positive and -negative tests. Despite these limitations, correlations between MRD-test results and relapse risk have generated interest in MRD-test result-directed therapy interventions. However, convincing proof that a specific intervention will reduce relapse risk in persons with a positive MRD-test is lacking and needs testing in randomized trials. Routine clinical use of MRD-testing requires further refinements and standardization/harmonization of assay platforms and results reporting. Such data are needed to determine whether results of MRD-testing can be used as a surrogate end point in AML therapy trials. This could make drug-testing more efficient and accelerate regulatory approvals. Although MRD-testing in AML has advanced substantially, much remains to be done.
Is it time to sound an alarm about false-positive cell-free DNA testing for fetal aneuploidy?
Mennuti, Michael T; Cherry, Athena M; Morrissette, Jennifer J D; Dugoff, Lorraine
2013-11-01
Testing cell-free DNA (cfDNA) in maternal blood samples has been shown to have very high sensitivity for the detection of fetal aneuploidy with very low false-positive results in high-risk patients who undergo invasive prenatal diagnosis. Recent observation in clinical practice of several cases of positive cfDNA tests for trisomy 18 and trisomy 13, which were not confirmed by cytogenetic testing of the pregnancy, may reflect a limitation of the positive predictive value of this quantitative testing, particularly when it is used to detect rare aneuploidies. Analysis of a larger number of false-positive cases is needed to evaluate whether these observations reflect the positive predictive value that should be expected. Infrequently, mechanisms (such as low percentage mosaicism or confined placental mosaicism) might also lead to positive cfDNA testing that is not concordant with standard prenatal cytogenetic diagnosis. The need to explore these and other possible causes of false-positive cfDNA testing is exemplified by 2 of these cases. Additional evaluation of cfDNA testing in clinical practice and a mechanism for the systematic reporting of false-positive and false-negative cases will be important before this test is offered widely to the general population of low-risk obstetric patients. In the meantime, incorporating information about the positive predictive value in pretest counseling and in clinical laboratory reports is recommended. These experiences reinforce the importance of offering invasive testing to confirm cfDNA results before parental decision-making. Copyright © 2013 Mosby, Inc. All rights reserved.
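The positive-predictive-value limitation discussed here follows directly from Bayes' rule: even a test with excellent sensitivity and specificity yields many false positives when the condition is rare. The numbers in the sketch below are illustrative assumptions, not figures from the article.

```python
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV = P(affected | positive test), from Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative values only: 99% sensitivity, 99.9% specificity, and a
# 1-in-5000 prevalence for a rare aneuploidy in a low-risk population.
print(f"PPV = {positive_predictive_value(0.99, 0.999, 1 / 5000):.1%}")   # about 17%
```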
Winnberg, Elisabeth; Winnberg, Ulrika; Pohlkamp, Lilian; Hagberg, Anette
2018-04-07
Little is known about how people's lives are influenced when going from a 50% risk status for Huntington's disease (HD) to no risk after undergoing predictive testing. In this study, 20 interviews were conducted to explore the long-term (> 5 years) experiences after receiving predictive test results as a non-carrier of HD. The results showed a broad variety of both positive and negative reactions. The most prominent positive reaction reported was feelings of relief and gratitude at not carrying the HD mutation, for themselves and for their children. The non-carrier status also prompted in some individuals significant life changes, such as wishing to have (more) children, pursuing a career or leaving an unhappy relationship. However, negative effects on psychological well-being were also described. Some had experienced the psychological pressure of needing to do something extraordinary in their lives; others expressed feelings of guilt towards affected or untested siblings, resulting in sadness or clinical depression. The new genetic risk status could generate a need for re-orientation, a process that for some persons took several years to accomplish. The results of the present study show the importance of offering long-term post-result counselling for non-carriers in order to deal with the psychological consequences that may follow predictive testing.
Zhou, Yi; Othus, Megan; Walter, Roland B; Estey, Elihu H; Wu, David; Wood, Brent L
2018-04-21
Relapse is the major cause of death in patients with acute myeloid leukemia (AML) after allogeneic hematopoietic cell transplantation (HCT). Measurable residual disease (MRD) detected by multiparameter flow cytometry (MFC) before and after HCT is a strong, independent risk factor for relapse. As next-generation sequencing (NGS) is increasingly applied in AML MRD detection, it remains to be determined if NGS can improve prediction of post-HCT relapse. Herein, we investigated pre-HCT MRD detected by MFC and NGS in 59 adult patients with NPM1-mutated AML in morphologic remission; 45 of the 59 had post-HCT MRD determined by MFC and NGS around day 28. Before HCT, MRD detected by MFC was the most significant risk factor for relapse (hazard ratio [HR], 4.63; P < .001), whereas MRD detected only by NGS was not. After HCT, MRD detected by either MFC or NGS was a significant risk factor for relapse (HR, 4.96, P = .004 and HR, 4.36, P = .002, respectively). Combining pre- and post-HCT MRD provided the best prediction of relapse (HR, 5.25; P < .001), with a sensitivity of 83%. We conclude that NGS testing of mutated NPM1 post-HCT improves the risk assessment for relapse, whereas pre-HCT MFC testing identifies a subset of high-risk patients in whom additional therapy should be tested. Copyright © 2018 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
Nelson-Wong, Erika; Appell, Ryan; McKay, Mike; Nawaz, Hannah; Roth, Joanna; Sigler, Robert; Third, Jacqueline; Walker, Mark
2012-04-01
Falls are a leading contributor to disability in older adults. Increased muscle co-contraction in the lower extremities during static and dynamic balance challenges has been associated with aging, and also with a history of falling. Co-contraction during static balance challenges has not previously been linked with performance on clinical tests designed to ascertain fall risk. The purpose of this study was to investigate the relationship between co-contraction about the ankle during static balance challenges and fall risk on a commonly used dynamic balance assessment, the Four Square Step Test (FSST). Twenty-three volunteers (mean age 73 years) performed a series of five static balance challenges (Romberg eyes open/closed, Sharpened Romberg eyes open/closed, and Single Leg Standing) with continuous electromyography (EMG) of bilateral tibialis anterior and gastrocnemius muscles. Participants then completed the FSST and were categorized as 'at risk' or 'not at risk' of falling based on a cutoff time of 12 s. Co-contraction was quantified with the co-contraction index (CCI). CCI during narrow-base conditions was positively correlated with time to complete the FSST. High CCIs during all static balance challenges, with the exception of Romberg stance with eyes closed, were predictive of being at risk of falling based on FSST time (odds ratio 19.3). The authors conclude that co-contraction about the ankle during static balance challenges can be predictive of performance on a dynamic balance test.
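The abstract does not give the CCI formula; one widely used definition (a Rudolph-style index, which weights the degree of overlap between two EMG envelopes by their combined activity) is sketched below as an assumption about what was computed.

```python
import numpy as np

def co_contraction_index(emg_a: np.ndarray, emg_b: np.ndarray) -> float:
    """Mean sample-wise co-contraction of two normalized EMG envelopes:
    (lower / higher) * (lower + higher), averaged over the recording.
    This is one common definition, not necessarily the paper's exact formula."""
    lower = np.minimum(emg_a, emg_b)
    higher = np.maximum(emg_a, emg_b)
    return float(np.mean((lower / higher) * (lower + higher)))

# Toy normalized envelopes for tibialis anterior and gastrocnemius during quiet standing.
rng = np.random.default_rng(3)
tibialis = np.abs(rng.normal(0.30, 0.05, 2000))
gastroc = np.abs(rng.normal(0.25, 0.05, 2000))
print("CCI:", co_contraction_index(tibialis, gastroc))
```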
Assessing Probabilistic Risk Assessment Approaches for Insect Biological Control Introductions.
Kaufman, Leyla V; Wright, Mark G
2017-07-07
The introduction of biological control agents to new environments requires host specificity tests to estimate potential non-target impacts of a prospective agent. Currently, the approach is conservative, and is based on physiological host ranges determined under captive rearing conditions, without consideration for ecological factors that may influence realized host range. We use historical data and current field data from introduced parasitoids that attack an endemic Lepidoptera species in Hawaii to validate a probabilistic risk assessment (PRA) procedure for non-target impacts. We use data on known host range and habitat use in the place of origin of the parasitoids to determine whether contemporary levels of non-target parasitism could have been predicted using PRA. Our results show that reasonable predictions of potential non-target impacts may be made if comprehensive data are available from places of origin of biological control agents, but scant data produce poor predictions. Using apparent mortality data rather than marginal attack rate estimates in PRA resulted in over-estimates of predicted non-target impact. Incorporating ecological data into PRA models improved the predictive power of the risk assessments.
Assessing Probabilistic Risk Assessment Approaches for Insect Biological Control Introductions
Kaufman, Leyla V.; Wright, Mark G.
2017-01-01
The introduction of biological control agents to new environments requires host specificity tests to estimate potential non-target impacts of a prospective agent. Currently, the approach is conservative, and is based on physiological host ranges determined under captive rearing conditions, without consideration for ecological factors that may influence realized host range. We use historical data and current field data from introduced parasitoids that attack an endemic Lepidoptera species in Hawaii to validate a probabilistic risk assessment (PRA) procedure for non-target impacts. We use data on known host range and habitat use in the place of origin of the parasitoids to determine whether contemporary levels of non-target parasitism could have been predicted using PRA. Our results show that reasonable predictions of potential non-target impacts may be made if comprehensive data are available from places of origin of biological control agents, but scant data produce poor predictions. Using apparent mortality data rather than marginal attack rate estimates in PRA resulted in over-estimates of predicted non-target impact. Incorporating ecological data into PRA models improved the predictive power of the risk assessments. PMID:28686180
Elskens, Marc; Vloeberghs, Daniel; Van Elsen, Liesbeth; Baeyens, Willy; Goeyens, Leo
2012-09-15
For reasons of food safety, packaging and food contact materials must be submitted to migration tests. Testing of silicone moulds is often very laborious, since three replicate tests are required to decide about their compliancy. This paper presents a general modelling framework to predict the sample's compliance or non-compliance using results of the first two migration tests. It compares the outcomes of models with multiple continuous predictors with a class of models involving latent and dummy variables. The model's prediction ability was tested using cross and external validations, i.e. model revalidation each time a new measurement set became available. At the overall migration limit of 10 mg dm(-2), the relative uncertainty on a prediction was estimated to be ~10%. Taking the default values for α and β equal to 0.05, the maximum value that can be predicted for sample compliance was therefore 7 mg dm(-2). Beyond this limit the risk for false compliant results increases significantly, and a third migration test should be performed. The result of this latter test defines the sample's compliance or non-compliance. Propositions for compliancy control inspired by the current dioxin control strategy are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
Ripamonti, C; Lisi, L; Avella, M
2014-05-01
To investigate the specificity of the neck shaft angle (NSA) to predict hip fracture in males. We consecutively studied 228 males without fracture and 38 with hip fracture. A further 49 males with spine fracture were studied to evaluate the specificity of NSA for hip-fracture prediction. Femoral neck (FN) bone mineral density (FN-BMD), NSA, hip axis length and FN diameter (FND) were measured in each subject by dual X-ray absorptiometry. Between-mean differences in the studied variables were tested by the unpaired t-test. The ability of NSA to predict hip fracture was tested by logistic regression. Compared with controls, FN-BMD (p < 0.01) was significantly lower in both groups of males with fractures, whereas FND (p < 0.01) and NSA (p = 0.05) were higher only in the hip-fracture group. A significant inverse correlation (p < 0.01) was found between NSA and FN-BMD. By age-, height- and weight-corrected logistic regression, none of the tested geometric parameters, separately considered from FN-BMD, entered the best model to predict spine fracture, whereas NSA (p < 0.03) predicted hip fracture together with age (p < 0.001). When forced into the regression, FN-BMD (p < 0.001) became the only fracture predictor to enter the best model to predict both fracture types. NSA is associated with hip-fracture risk in males but is not independent of FN-BMD. The inability of NSA to predict hip fracture in males independently of FN-BMD may be explained by its inverse correlation with FN-BMD, which, as the strongest fracture predictor, captures some of the effects of NSA on hip fracture. Conversely, NSA in females does not correlate with FN-BMD but independently predicts hip fractures.
Lisi, L; Avella, M
2014-01-01
Objective: To investigate the specificity of the neck shaft angle (NSA) to predict hip fracture in males. Methods: We consecutively studied 228 males without fracture and 38 with hip fracture. A further 49 males with spine fracture were studied to evaluate the specificity of NSA for hip-fracture prediction. Femoral neck (FN) bone mineral density (FN-BMD), NSA, hip axis length and FN diameter (FND) were measured in each subject by dual X-ray absorptiometry. Between-mean differences in the studied variables were tested by the unpaired t-test. The ability of NSA to predict hip fracture was tested by logistic regression. Results: Compared with controls, FN-BMD (p < 0.01) was significantly lower in both groups of males with fractures, whereas FND (p < 0.01) and NSA (p = 0.05) were higher only in the hip-fracture group. A significant inverse correlation (p < 0.01) was found between NSA and FN-BMD. By age-, height- and weight-corrected logistic regression, none of the tested geometric parameters, separately considered from FN-BMD, entered the best model to predict spine fracture, whereas NSA (p < 0.03) predicted hip fracture together with age (p < 0.001). When forced into the regression, FN-BMD (p < 0.001) became the only fracture predictor to enter the best model to predict both fracture types. Conclusion: NSA is associated with hip-fracture risk in males but is not independent of FN-BMD. Advances in knowledge: The inability of NSA to predict hip fracture in males independently of FN-BMD may be explained by its inverse correlation with FN-BMD, which, as the strongest fracture predictor, captures some of the effects of NSA on hip fracture. Conversely, NSA in females does not correlate with FN-BMD but independently predicts hip fractures. PMID:24678889
Predicting stroke through genetic risk functions: the CHARGE Risk Score Project.
Ibrahim-Verbaas, Carla A; Fornage, Myriam; Bis, Joshua C; Choi, Seung Hoan; Psaty, Bruce M; Meigs, James B; Rao, Madhu; Nalls, Mike; Fontes, Joao D; O'Donnell, Christopher J; Kathiresan, Sekar; Ehret, Georg B; Fox, Caroline S; Malik, Rainer; Dichgans, Martin; Schmidt, Helena; Lahti, Jari; Heckbert, Susan R; Lumley, Thomas; Rice, Kenneth; Rotter, Jerome I; Taylor, Kent D; Folsom, Aaron R; Boerwinkle, Eric; Rosamond, Wayne D; Shahar, Eyal; Gottesman, Rebecca F; Koudstaal, Peter J; Amin, Najaf; Wieberdink, Renske G; Dehghan, Abbas; Hofman, Albert; Uitterlinden, André G; Destefano, Anita L; Debette, Stephanie; Xue, Luting; Beiser, Alexa; Wolf, Philip A; Decarli, Charles; Ikram, M Arfan; Seshadri, Sudha; Mosley, Thomas H; Longstreth, W T; van Duijn, Cornelia M; Launer, Lenore J
2014-02-01
Beyond the Framingham Stroke Risk Score, prediction of future stroke may improve with a genetic risk score (GRS) based on single-nucleotide polymorphisms associated with stroke and its risk factors. The study includes 4 population-based cohorts with 2047 first incident strokes from 22,720 initially stroke-free European origin participants aged ≥55 years, who were followed for up to 20 years. GRSs were constructed with 324 single-nucleotide polymorphisms implicated in stroke and 9 risk factors. The association of the GRS to first incident stroke was tested using Cox regression; the GRS predictive properties were assessed with area under the curve statistics comparing the GRS with age and sex, Framingham Stroke Risk Score models, and reclassification statistics. These analyses were performed per cohort and in a meta-analysis of pooled data. Replication was sought in a case-control study of ischemic stroke. In the meta-analysis, adding the GRS to the Framingham Stroke Risk Score, age and sex model resulted in a significant improvement in discrimination (all stroke: Δjoint area under the curve=0.016, P=2.3×10(-6); ischemic stroke: Δjoint area under the curve=0.021, P=3.7×10(-7)), although the overall area under the curve remained low. In all the studies, there was a highly significantly improved net reclassification index (P<10(-4)). The single-nucleotide polymorphisms associated with stroke and its risk factors result only in a small improvement in prediction of future stroke compared with the classical epidemiological risk factors for stroke.
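As a rough illustration of the genetic-risk-score idea, the sketch below builds a GRS as a weighted sum of allele dosages and checks how much it adds to an age-and-sex model. All dosages, weights, and outcomes are synthetic, and a cross-sectional logistic model is used for simplicity, whereas the study fitted Cox models to incident stroke and also assessed reclassification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n_subjects, n_snps = 20000, 324
dosages = rng.binomial(2, 0.3, size=(n_subjects, n_snps)).astype(float)
log_odds_per_allele = rng.normal(0.0, 0.03, n_snps)       # synthetic per-SNP weights

grs = dosages @ log_odds_per_allele                        # weighted genetic risk score
age = rng.normal(70, 8, n_subjects)
sex = rng.binomial(1, 0.5, n_subjects).astype(float)

# Synthetic stroke outcome influenced by age, sex, and the GRS (for illustration only).
linear = -3.0 + 0.04 * (age - 70) + 0.2 * sex + 1.5 * grs
stroke = rng.binomial(1, 1 / (1 + np.exp(-linear)))

base_X = np.column_stack([age, sex])
full_X = np.column_stack([age, sex, grs])
auc_base = roc_auc_score(stroke, LogisticRegression().fit(base_X, stroke).predict_proba(base_X)[:, 1])
auc_full = roc_auc_score(stroke, LogisticRegression().fit(full_X, stroke).predict_proba(full_X)[:, 1])
print(f"AUC age+sex: {auc_base:.3f}  AUC age+sex+GRS: {auc_full:.3f}")
```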
Personality and the Leading Behavioral Contributors of Mortality
Turiano, Nicholas A.; Chapman, Benjamin P.; Gruenewald, Tara L.; Mroczek, Daniel K.
2014-01-01
Objective: Personality traits predict both health behaviors and mortality risk across the life course. However, there are few investigations that have examined these effects in a single study. Thus, there are limitations in assessing if health behaviors explain why personality predicts health and longevity. Method: Utilizing 14-year mortality data from a national sample of over 6,000 adults from the Midlife in the United States Study, we tested whether alcohol use, smoking behavior, and waist circumference mediated the personality–mortality association. Results: After adjusting for demographic variables, higher levels of Conscientiousness predicted a 13% reduction in mortality risk over the follow-up. Structural equation models provided evidence that heavy drinking, smoking, and greater waist circumference significantly mediated the Conscientiousness–mortality association by 42%. Conclusion: The current study provided empirical support for the health-behavior model of personality: Conscientiousness influences the behaviors persons engage in, and these behaviors affect the likelihood of poor health outcomes. Findings highlight the usefulness of assessing mediation in a structural equation modeling framework when testing proportional hazards. In addition, the current findings add to the growing literature that personality traits can be used to identify those at risk for engaging in behaviors that deteriorate health and shorten the life span. PMID:24364374
Spencer, Rand
2006-01-01
The goal is to analyze the long-term visual outcome of extremely low-birth-weight children. This is a retrospective analysis of eyes of extremely low-birth-weight children on whom vision testing was performed. Visual outcomes were studied by analyzing acuity outcomes at ≥36 months of adjusted age, correlating early acuity testing with final visual outcome and evaluating adverse risk factors for vision. Data from 278 eyes are included. Mean birth weight was 731 g, and mean gestational age at birth was 26 weeks. 248 eyes had grating acuity outcomes measured at 73 ± 36 months, and 183 eyes had recognition acuity testing at 76 ± 39 months. 54% had below normal grating acuities, and 66% had below normal recognition acuities. 27% of grating outcomes and 17% of recognition outcomes were ≤20/200. Abnormal early grating acuity testing was predictive of abnormal grating (P < .0001) and recognition (P = .0001) acuity testing at ≥3 years of age. A slower-than-normal rate of early visual development was predictive of abnormal grating acuity (P < .0001) and abnormal recognition acuity (P < .0001) at ≥3 years of age. Eyes diagnosed with maximal retinopathy of prematurity in zone I had lower acuity outcomes (P = .0002) than did those with maximal retinopathy of prematurity in zone II/III. Eyes of children born at ≤28 weeks gestational age had 4.1 times greater risk for abnormal recognition acuity than did those of children born at >28 weeks gestational age. Eyes of children with poorer general health after premature birth had a 5.3 times greater risk of abnormal recognition acuity. Long-term visual development in extremely low-birth-weight infants is problematic and associated with a high risk of subnormal acuity. Early acuity testing is useful in identifying children at greatest risk for long-term visual abnormalities. Gestational age at birth of ≤28 weeks was associated with a higher risk of an abnormal long-term outcome.
Cunha-Cruz, Joana; Milgrom, Peter; Shirtcliff, R Michael; Bailit, Howard L; Huebner, Colleen E; Conrad, Douglas; Ludwig, Sharity; Mitchell, Melissa; Dysert, Jeanne; Allen, Gary; Scott, JoAnna; Mancl, Lloyd
2015-06-20
To improve the oral health of low-income children, innovations in dental delivery systems are needed, including community-based care, the use of expanded duty auxiliary dental personnel, capitation payments, and global budgets. This paper describes the protocol for PREDICT (Population-centered Risk- and Evidence-based Dental Interprofessional Care Team), an evaluation project to test the effectiveness of new delivery and payment systems for improving dental care and oral health. This is a parallel-group cluster randomized controlled trial. Fourteen rural Oregon counties with a publicly insured (Medicaid) population of 82,000 children (0 to 21 years old) and pregnant women served by a managed dental care organization are randomized into test and control counties. In the test intervention (PREDICT), allied dental personnel provide screening and preventive services in community settings and case managers serve as patient navigators to arrange referrals of children who need dentist services. The delivery system intervention is paired with a compensation system for high performance (pay-for-performance) with efficient performance monitoring. PREDICT focuses on the following: 1) identifying eligible children and gaining caregiver consent for services in community settings (for example, schools); 2) providing risk-based preventive and caries stabilization services efficiently at these settings; 3) providing curative care in dental clinics; and 4) incentivizing local delivery teams to meet performance benchmarks. In the control intervention, care is delivered in dental offices without performance incentives. The primary outcome is the prevalence of untreated dental caries. Other outcomes are related to process, structure and cost. Data are collected through patient and staff surveys, clinical examinations, and the review of health and administrative records. If effective, PREDICT is expected to substantially reduce disparities in dental care and oral health. PREDICT can be disseminated to other care organizations as publicly insured clients are increasingly served by large practice organizations. ClinicalTrials.gov NCT02312921 6 December 2014. The Robert Wood Johnson Foundation and Advantage Dental Services, LLC, are supporting the evaluation.
Can shoulder dystocia be reliably predicted?
Dodd, Jodie M; Catcheside, Britt; Scheil, Wendy
2012-06-01
To evaluate factors reported to increase the risk of shoulder dystocia, and to evaluate their predictive value at a population level. The South Australian Pregnancy Outcome Unit's population database from 2005 to 2010 was accessed to determine the occurrence of shoulder dystocia in addition to reported risk factors, including age, parity, self-reported ethnicity, presence of diabetes and infant birth weight. Odds ratios (and 95% confidence intervals) of shoulder dystocia were calculated for each risk factor, which were then incorporated into a logistic regression model. Test characteristics for each variable in predicting shoulder dystocia were calculated. As a proportion of all births, the reported rate of shoulder dystocia increased significantly from 0.95% in 2005 to 1.38% in 2010 (P = 0.0002). Using a logistic regression model, induction of labour and infant birth weight greater than both 4000 and 4500 g were identified as significant independent predictors of shoulder dystocia. Risk factors, whether considered alone or incorporated into the logistic regression model, were poorly predictive of the occurrence of shoulder dystocia. While there are a number of factors associated with an increased risk of shoulder dystocia, none are of sufficient sensitivity or positive predictive value to allow their use clinically to reliably and accurately identify the occurrence of shoulder dystocia. © 2012 The Authors ANZJOG © 2012 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
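Why a strong odds ratio still translates into poor population-level prediction is easiest to see from the test characteristics of a single dichotomous risk factor. The 2×2 counts below are invented for illustration (roughly 1% dystocia and 12% macrosomia), not taken from the South Australian database.

```python
def test_characteristics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity and positive predictive value from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "positive predictive value": tp / (tp + fp),
    }

# Hypothetical counts for birth weight > 4000 g as a predictor of shoulder dystocia
# among 100,000 births: most screen-positive births do not end in dystocia.
print(test_characteristics(tp=400, fp=11600, fn=600, tn=87400))
# -> sensitivity 0.40, specificity ~0.88, PPV ~0.033
```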
Adolescent Risk: The Co-Occurrence of Illness, Suicidality, and Substance Use
ERIC Educational Resources Information Center
Husler, Gebhard; Blakeney, Ronny; Werlen, Egon
2005-01-01
Illness is rarely considered a "risk factor" in adolescence. This study tests illness, suicidality and substance use as outcome measures in a path analysis of 1028 Swiss adolescents in secondary prevention programs. The model showed that negative mood (depression and anxiety) predicted two paths. One path led from negative mood to…
Frontostriatal Maturation Predicts Cognitive Control Failure to Appetitive Cues in Adolescents
ERIC Educational Resources Information Center
Somerville, Leah H.; Hare, Todd; Casey, B. J.
2011-01-01
Adolescent risk-taking is a public health issue that increases the odds of poor lifetime outcomes. One factor thought to influence adolescents' propensity for risk-taking is an enhanced sensitivity to appetitive cues, relative to an immature capacity to exert sufficient cognitive control. We tested this hypothesis by characterizing interactions…
The path for incorporating new alternative methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. Some of these challenges include development of relevant and predictive test systems and computational models to integrate...
Anderson, Kermyt G
2017-01-01
A substantial theoretical and empirical literature suggests that stressful events in childhood influence the timing and patterning of subsequent sexual and reproductive behaviors. Stressful childhood environments have been predicted to produce a life history strategy in which adults are oriented more toward short-term mating behaviors and less toward behaviors consistent with longevity. This article tests the hypothesis that adverse childhood environment will predict adult outcomes in two areas: risky sexual behavior (engagement in sexual risk behavior or having taken an HIV test) and marital status (currently married vs. never married, divorced, or a member of an unmarried couple). Data come from the Behavioral Risk Factor Surveillance System. The sample contains 17,530 men and 23,978 women aged 18-54 years living in 13 U.S. states plus the District of Columbia. Adverse childhood environment is assessed through 11 retrospective measures of childhood environment, including having grown up with someone who was depressed or mentally ill, who was an alcoholic, who used or abused drugs, or who served time in prison; whether one's parents divorced in childhood; and two scales measuring childhood exposure to violence and to sexual trauma. The results indicate that adverse childhood environment is associated with increased likelihood of engaging in sexual risk behaviors or taking an HIV test, and increased likelihood of being in an unmarried couple or divorced/separated, for both men and women. The predictions are supported by the data, lending further support to the hypothesis that childhood environments influence adult reproductive strategy.
Witt, Katrina; Lichtenstein, Paul; Fazel, Seena
2015-01-01
Background: Violence risk assessment in schizophrenia relies heavily on criminal history factors. Aims: To investigate which criminal history factors are most strongly associated with violent crime in schizophrenia. Method: A total of 13 806 individuals (8891 men and 4915 women) with two or more hospital admissions for schizophrenia were followed up for violent convictions. Multivariate hazard ratios for 15 criminal history factors included in different risk assessment tools were calculated. The incremental predictive validity of these factors was estimated using tests of discrimination, calibration and reclassification. Results: Over a mean follow-up of 12.0 years, 17.3% of men (n = 1535) and 5.7% of women (n = 281) were convicted of a violent offence. Criminal history factors most strongly associated with subsequent violence for both men and women were a previous conviction for a violent offence; for assault, illegal threats and/or intimidation; and imprisonment. However, only a previous conviction for a violent offence was associated with incremental predictive validity in both genders following adjustment for young age and comorbid substance use disorder. Conclusions: Clinical and actuarial approaches to assess violence risk can be improved if included risk factors are tested using multiple measures of performance. PMID:25657352
Witt, Katrina; Lichtenstein, Paul; Fazel, Seena
2015-05-01
Violence risk assessment in schizophrenia relies heavily on criminal history factors. To investigate which criminal history factors are most strongly associated with violent crime in schizophrenia. A total of 13 806 individuals (8891 men and 4915 women) with two or more hospital admissions for schizophrenia were followed up for violent convictions. Multivariate hazard ratios for 15 criminal history factors included in different risk assessment tools were calculated. The incremental predictive validity of these factors was estimated using tests of discrimination, calibration and reclassification. Over a mean follow-up of 12.0 years, 17.3% of men (n = 1535) and 5.7% of women (n = 281) were convicted of a violent offence. Criminal history factors most strongly associated with subsequent violence for both men and women were a previous conviction for a violent offence; for assault, illegal threats and/or intimidation; and imprisonment. However, only a previous conviction for a violent offence was associated with incremental predictive validity in both genders following adjustment for young age and comorbid substance use disorder. Clinical and actuarial approaches to assess violence risk can be improved if included risk factors are tested using multiple measures of performance. © The Royal College of Psychiatrists 2015.
[Design and validation of an instrument to assess families at risk for health problems].
Puschel, Klaus; Repetto, Paula; Solar, María Olga; Soto, Gabriela; González, Karla
2012-04-01
There is a paucity of screening instruments with a high clinical predictive value to identify families at risk and therefore develop focused interventions in primary care. To develop an easy-to-apply screening instrument with a high clinical predictive value to identify families with a higher health vulnerability. In the first stage of the study, an instrument with high content validity was designed through a review of existing instruments, qualitative interviews with families and expert opinions following a Delphi approach of three rounds. In the second stage, concurrent validity was tested through a comparative analysis between the pilot instrument and a family clinical interview conducted with 300 families randomly selected from a population registered at a primary care clinic in Santiago. The sampling was blocked based on the presence of diabetes, depression, child asthma, behavioral disorders, presence of an older person or the lack of previous conditions among family members. The third stage was directed at testing the clinical predictive validity of the instrument by comparing the baseline vulnerability obtained by the instrument and the change in clinical status and health-related quality of life perceptions of the family members after nine months of follow-up. The final SALUFAM instrument included 13 items and had a high internal consistency (Cronbach's alpha: 0.821), high test-retest reproducibility (Pearson correlation: 0.84) and a high clinical predictive value for clinical deterioration (odds ratio: 1.826; 95% confidence interval: 1.101-3.029). The SALUFAM instrument is applicable and replicable, with high content validity, concurrent validity and clinical predictive value.
Cigarette smoking in a student sample: neurocognitive and clinical correlates.
Dinn, Wayne M; Aycicegi, Ayse; Harris, Catherine L
2004-01-01
Why do adolescents begin to smoke in the face of profound health risks and aggressive antismoking campaigns? The present study tested predictions based on two theoretical models of tobacco use in young adults: (1) the self-medication model; and (2) the orbitofrontal/disinhibition model. Investigators speculated that a significant number of smokers were self-medicating since nicotine possesses mood-elevating and hedonic properties. The self-medication model predicts that smokers will demonstrate increased rates of psychopathology relative to nonsmokers. Similarly, researchers have suggested that individuals with attention-deficit/hyperactivity disorder (ADHD) employ nicotine to enhance cognitive function. The ADHD/self-medication model predicts that smokers will perform poorly on tests of executive function and report a greater number of ADHD symptoms. A considerable body of research indicates that tobacco use is associated with several related personality traits including extraversion, impulsivity, risk taking, sensation seeking, novelty seeking, and antisocial personality features. Antisocial behavior and related personality traits as well as tobacco use may reflect, in part, a failure to effectively employ reward and punishment cues to guide behavior. This failure may reflect orbitofrontal dysfunction. The orbitofrontal/disinhibition model predicts that smokers will perform poorly on neurocognitive tasks considered sensitive to orbitofrontal dysfunction and will obtain significantly higher scores on measures of behavioral disinhibition and antisocial personality relative to nonsmokers. To test these predictions, we administered a battery of neuropsychological tests, clinical scales, and personality questionnaires to university student smokers and nonsmokers. Results did not support the self-medication model or the ADHD/self-medication model; however, findings were consistent with the orbitofrontal/disinhibition model.
Verghese, Joe; Holtzer, Roee; Lipton, Richard B; Wang, Cuiling
2012-10-01
To examine the validity of the Walking While Talking Test (WWT), a mobility stress test, to predict frailty, disability, and death in high-functioning older adults. Prospective cohort study. Community sample. Six hundred thirty-one community-residing adults aged 70 and older participating in the Einstein Aging Study (mean follow-up 32 months). High-functioning status at baseline was defined as absence of disability and dementia and normal walking speeds. Hazard ratios (HRs) for frailty, disability, and all-cause mortality. Frailty was defined as presence of three out of the following five attributes: weight loss, weakness, exhaustion, low physical activity, and slow gait. The predictive validity of the WWT was also compared with that of the Short Physical Performance Battery (SPPB) for study outcomes. Two hundred eighteen participants developed frailty, 88 developed disability, and 49 died. Each 10-cm/s decrease in WWT speed was associated with greater risk of frailty (HR = 1.12, 95% confidence interval (CI) = 1.06-1.18), disability (HR = 1.13, 95% CI = 1.03-1.23), and mortality (HR = 1.13, 95% CI = 1.01-1.27). Most associations remained robust even after accounting for potential confounders and gait speed. Comparisons of HRs and model fit suggest that the WWT may better predict frailty whereas SPPB may better predict disability. Mobility stress tests such as the WWT are robust predictors of risk of frailty, disability, and mortality in high-functioning older adults. © 2012, Copyright the Authors Journal compilation © 2012, The American Geriatrics Society.
Das, Anirban; Trehan, Amita; Oberoi, Sapna; Bansal, Deepak
2017-06-01
The study aims to validate a score predicting the risk of complications in pediatric patients with chemotherapy-related febrile neutropenia (FN) and to evaluate the performance of previously published models for risk stratification. Children diagnosed with cancer and presenting with FN were evaluated in a prospective single-center study. A score predicting the risk of complications, previously derived in the unit, was validated on a prospective cohort. Performance of six predictive models published from geographically distinct settings was assessed on the same cohort. Complications were observed in 109 (26.3%) of 414 episodes of FN over 15 months. A risk score based on undernutrition (two points), time from last chemotherapy (<7 days = two points), presence of a non-upper respiratory focus of infection (two points), C-reactive protein (>60 mg/l = five points), and absolute neutrophil count (<100 per μl = two points) was used to stratify patients into "low risk" (score <7, n = 208) and "high risk" groups, and was assessed using the following parameters: overall performance (Nagelkerke R² = 34.4%), calibration (calibration slope = 0.39; P = 0.25 in the Hosmer-Lemeshow test), discrimination (c-statistic = 0.81), overall sensitivity (86%), negative predictive value (93%), and clinical net benefit (0.43). Six previously published rules demonstrated inferior performance in this cohort. An indigenous decision rule using five simple predefined variables was successful in identifying children at risk for complications. Prediction models derived in developed nations may not be appropriate for low-middle-income settings and need to be validated before use. © 2016 Wiley Periodicals, Inc.
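The point score itself can be written down directly from the weights quoted above. In the sketch below, the complementary group (score ≥7) is labelled "high risk"; that labelling, and the example episode, are assumptions for illustration.

```python
def fn_risk_score(undernutrition: bool,
                  chemo_within_7_days: bool,
                  non_upper_respiratory_focus: bool,
                  crp_above_60: bool,
                  anc_below_100: bool) -> int:
    """Point score for complications in pediatric febrile neutropenia."""
    return (2 * undernutrition
            + 2 * chemo_within_7_days
            + 2 * non_upper_respiratory_focus
            + 5 * crp_above_60
            + 2 * anc_below_100)

def risk_group(score: int) -> str:
    return "low risk" if score < 7 else "high risk"

score = fn_risk_score(undernutrition=False, chemo_within_7_days=True,
                      non_upper_respiratory_focus=True, crp_above_60=True,
                      anc_below_100=False)
print(score, risk_group(score))   # 9 -> high risk
```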
García-Jaramillo, Maira; Calm, Remei; Bondia, Jorge; Tarín, Cristina; Vehí, Josep
2009-01-01
Objective: The objective of this article was to develop a methodology to quantify the risk of suffering different grades of hypo- and hyperglycemia episodes in the postprandial state. Methods: Interval predictions of patient postprandial glucose were performed during a 5-hour period after a meal for a set of 3315 scenarios. Uncertainty in the patient's insulin sensitivities and carbohydrate (CHO) contents of the planned meal was considered. A normalized area under the curve of the worst-case predicted glucose excursion for severe and mild hypo- and hyperglycemia glucose ranges was obtained and weighted according to their importance. As a result, a comprehensive risk measure was obtained. A reference model of preprandial glucose values representing the behavior in different ranges was chosen by a χ² test. The relationship between the computed risk index and the probability of occurrence of events was analyzed for these reference models through 19,500 Monte Carlo simulations. Results: The obtained reference models for each preprandial glucose range were 100, 160, and 220 mg/dl. A relationship between the risk index ranges <10, 10–60, 60–120, and >120 and the probability of occurrence of mild and severe postprandial hyper- and hypoglycemia can be derived. Conclusions: When intrapatient variability and uncertainty in the CHO content of the meal are considered, a safer prediction of possible hyper- and hypoglycemia episodes induced by the tested insulin therapy can be calculated. PMID:20144339
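The core of the index, a weighted, time-normalized area of the worst-case glucose excursion inside each hypo- and hyperglycemia band, can be sketched as below. The band limits and weights used here are invented for illustration; the published index defines its own ranges and weighting, which the abstract does not spell out.

```python
import numpy as np

def time_avg_below(glucose: np.ndarray, threshold: float, floor: float) -> float:
    """Time-averaged depth of the glucose curve below `threshold`, clipped at `floor`."""
    return float(np.mean(threshold - np.clip(glucose, floor, threshold)))

def time_avg_above(glucose: np.ndarray, threshold: float, ceiling: float) -> float:
    """Time-averaged height of the glucose curve above `threshold`, clipped at `ceiling`."""
    return float(np.mean(np.clip(glucose, threshold, ceiling) - threshold))

def postprandial_risk_index(glucose_mgdl: np.ndarray) -> float:
    # Illustrative bands (mg/dl) and weights, NOT the published definition.
    return (4.0 * time_avg_below(glucose_mgdl, 54, 20)      # severe hypoglycemia
            + 2.0 * time_avg_below(glucose_mgdl, 70, 54)    # mild hypoglycemia
            + 1.0 * time_avg_above(glucose_mgdl, 180, 250)  # mild hyperglycemia
            + 2.0 * time_avg_above(glucose_mgdl, 250, 600)) # severe hyperglycemia

t = np.arange(0, 301, 5.0)                                  # 5-hour window, 5-min grid
worst_case = 120 + 140 * np.exp(-((t - 90) / 60) ** 2)      # toy worst-case excursion
print("risk index:", postprandial_risk_index(worst_case))
```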
Risk scores for outcome in bacterial meningitis: Systematic review and external validation study.
Bijlsma, Merijn W; Brouwer, Matthijs C; Bossuyt, Patrick M; Heymans, Martijn W; van der Ende, Arie; Tanck, Michael W T; van de Beek, Diederik
2016-11-01
To perform an external validation study of risk scores, identified through a systematic review, predicting outcome in community-acquired bacterial meningitis. MEDLINE and EMBASE were searched for articles published between January 1960 and August 2014. Performance was evaluated in 2108 episodes of adult community-acquired bacterial meningitis from two nationwide prospective cohort studies by the area under the receiver operating characteristic curve (AUC), the calibration curve, calibration slope or Hosmer-Lemeshow test, and the distribution of calculated risks. Nine risk scores were identified predicting death, neurological deficit or death, or unfavorable outcome at discharge in bacterial meningitis, pneumococcal meningitis and invasive meningococcal disease. Most studies had shortcomings in design, analyses, and reporting. Evaluation showed AUCs of 0.59 (0.57-0.61) and 0.74 (0.71-0.76) in bacterial meningitis, 0.67 (0.64-0.70) in pneumococcal meningitis, and 0.81 (0.73-0.90), 0.82 (0.74-0.91), 0.84 (0.75-0.93), 0.84 (0.76-0.93), 0.85 (0.75-0.95), and 0.90 (0.83-0.98) in meningococcal meningitis. Calibration curves showed adequate agreement between predicted and observed outcomes for four scores, but statistical tests indicated poor calibration of all risk scores. One score could be recommended for the interpretation and design of bacterial meningitis studies. None of the existing scores performed well enough to recommend routine use in individual patient management. Copyright © 2016 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
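The two ingredients of this validation, discrimination and calibration, can be sketched on synthetic data as follows; the decile grouping for the Hosmer-Lemeshow statistic is a common convention assumed here, and the deliberately miscalibrated synthetic outcome simply illustrates how a score can discriminate reasonably while calibrating poorly.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true: np.ndarray, y_prob: np.ndarray, groups: int = 10):
    """Hosmer-Lemeshow statistic and p-value over quantile groups of predicted risk."""
    order = np.argsort(y_prob)
    statistic = 0.0
    for idx in np.array_split(order, groups):
        observed, expected, n = y_true[idx].sum(), y_prob[idx].sum(), len(idx)
        statistic += (observed - expected) ** 2 / (expected * (1 - expected / n) + 1e-12)
    return statistic, chi2.sf(statistic, groups - 2)

rng = np.random.default_rng(6)
predicted_risk = rng.uniform(0, 1, 2108)                # risk from an external score
outcome = rng.binomial(1, 0.6 * predicted_risk)         # synthetic, deliberately miscalibrated
print("AUC:", roc_auc_score(outcome, predicted_risk))
print("Hosmer-Lemeshow (statistic, p):", hosmer_lemeshow(outcome, predicted_risk))
```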
Krosshaug, Tron; Steffen, Kathrin; Kristianslund, Eirik; Nilstad, Agnethe; Mok, Kam-Ming; Myklebust, Grethe; Andersen, Thor Einar; Holme, Ingar; Engebretsen, Lars; Bahr, Roald
2016-04-01
The evidence linking knee kinematics and kinetics during a vertical drop jump (VDJ) to anterior cruciate ligament (ACL) injury risk is restricted to a single small sample. Still, the VDJ test continues to be advocated for clinical screening purposes. To test whether 5 selected kinematic and kinetic variables were associated with future ACL injuries in a large cohort of Norwegian female elite soccer and handball players. Furthermore, we wanted to assess whether the VDJ test can be recommended as a screening test to identify players with increased risk. Cohort study; Level of evidence, 2. Elite female soccer and handball players participated in preseason screening tests from 2007 through 2014. The tests included marker-based 3-dimensional motion analysis of a drop-jump landing. We followed a predefined statistical protocol in which we included the following candidate risk factors in 5 separate logistic regression analyses, with new ACL injury as the outcome: (1) knee valgus angle at initial contact, (2) peak knee abduction moment, (3) peak knee flexion angle, (4) peak vertical ground-reaction force, and (5) medial knee displacement. A total of 782 players were tested (age, 21 ± 4 years; height, 170 ± 7 cm; body mass, 67 ± 8 kg), of which 710 were included in the analyses. We registered 42 new noncontact ACL injuries, including 12 in previously ACL-injured players. Previous ACL injury (relative risk, 3.8; 95% CI, 2.1-7.1) and medial knee displacement (odds ratio, 1.40; 95% CI, 1.12-1.74 per 1-SD change) were associated with increased risk for injury. However, among the 643 players without previous injury, we found no association with medial knee displacement. A receiver operating characteristic curve analysis of medial knee displacement showed an area under the curve of 0.6, indicating a poor-to-failed combined sensitivity and specificity of the test, even when including previously injured players. Of the 5 risk factors considered, medial knee displacement was the only factor associated with increased risk for ACL injury. However, receiver operating characteristic curve analysis indicated a poor combined sensitivity and specificity when medial knee displacement was used as a screening test for predicting ACL injury. For players with no previous injury, none of the VDJ variables were associated with increased injury risk. VDJ tests cannot predict ACL injuries in female elite soccer and handball players. © 2016 The Author(s).
Oosting, Ellen; Hoogeboom, Thomas J; Appelman-de Vries, Suzan A; Swets, Adam; Dronkers, Jaap J; van Meeteren, Nico L U
2016-01-01
The aim of this study was to evaluate the value of conventional factors, the Risk Assessment and Predictor Tool (RAPT) and performance-based functional tests as predictors of delayed recovery after total hip arthroplasty (THA). A prospective cohort study was conducted in a regional hospital in the Netherlands with 315 patients attending for THA in 2012. The dependent variable, recovery of function, was assessed with the Modified Iowa Levels of Assistance scale. Delayed recovery was defined as taking more than 3 days to walk independently. Independent variables were age, sex, BMI, Charnley score, RAPT score and scores for four performance-based tests [2-minute walk test, timed up and go test (TUG), 10-meter walking test (10 mW) and hand grip strength]. Regression analysis with all variables identified older age (>70 years), Charnley score C, slow walking speed (10 mW >10.0 s) and poor functional mobility (TUG >10.5 s) as the best predictors of delayed recovery of function. This model (AUC 0.85, 95% CI 0.79-0.91) performed better than a model with conventional factors and RAPT scores, and significantly better (p = 0.04) than a model with only conventional factors (AUC 0.81, 95% CI 0.74-0.87). The combination of performance-based tests and conventional factors predicted inpatient functional recovery after THA. Two simple functional performance-based tests add significant value to more conventional screening with age and comorbidities in predicting recovery of functioning immediately after total hip surgery. Patients over 70 years old, with comorbidities, a TUG score >10.5 s and a walking speed below 1.0 m/s (10 mW >10.0 s) are at risk for delayed recovery of functioning. Those high-risk patients need an accurate discharge plan and could benefit from targeted pre- and postoperative therapeutic exercise programs.
Not all that glitters is RMT in the forecasting of risk of portfolios in the Brazilian stock market
NASA Astrophysics Data System (ADS)
Sandoval, Leonidas; Bortoluzzo, Adriana Bruscato; Venezuela, Maria Kelly
2014-09-01
Using stocks of the Brazilian stock exchange (BM&F-Bovespa), we build portfolios of stocks based on Markowitz's theory and test the predicted and realized risks. This is done using the correlation matrices between stocks, and also using Random Matrix Theory in order to clean such correlation matrices from noise. We also calculate correlation matrices using a regression model in order to remove the effect of common market movements and their cleaned versions using Random Matrix Theory. This is done for years of both low and high volatility of the Brazilian stock market, from 2004 to 2012. The results show that the use of regression to subtract the market effect on returns greatly increases the accuracy of the prediction of risk, and that, although the cleaning of the correlation matrix often leads to portfolios that better predict risks, in periods of high volatility of the market this procedure may fail to do so. The results may be used in the assessment of the true risks when one builds a portfolio of stocks during periods of crisis.
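The central computational step described above is the cleaning of the correlation matrix with Random Matrix Theory before building Markowitz weights. The sketch below, on synthetic returns (all sizes and numbers are illustrative), clips the eigenvalues below the Marchenko-Pastur noise edge and then computes minimum-variance weights from the cleaned covariance; it is a simplified stand-in for the paper's procedure and omits the regression-based removal of the market mode.

```python
# Sketch: cleaning a correlation matrix with the Marchenko-Pastur bound and
# building minimum-variance Markowitz weights. Synthetic returns, not Bovespa data.
import numpy as np

rng = np.random.default_rng(2)
T, N = 500, 50                       # days, stocks (illustrative sizes)
returns = rng.normal(0, 0.02, (T, N))

C = np.corrcoef(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)

# Marchenko-Pastur upper edge for a pure-noise correlation matrix
q = T / N
lambda_max = (1 + 1 / np.sqrt(q)) ** 2

# Replace eigenvalues inside the noise band by their average ("clipping")
noise = eigval < lambda_max
cleaned_eig = eigval.copy()
cleaned_eig[noise] = eigval[noise].mean()
C_clean = eigvec @ np.diag(cleaned_eig) @ eigvec.T
np.fill_diagonal(C_clean, 1.0)

# Minimum-variance weights w proportional to (covariance)^-1 * 1
vols = returns.std(axis=0)
cov_clean = C_clean * np.outer(vols, vols)
ones = np.ones(N)
w = np.linalg.solve(cov_clean, ones)
w /= w.sum()

predicted_risk = np.sqrt(w @ cov_clean @ w)
realized = (returns @ w).std()
print(f"predicted daily risk: {predicted_risk:.4f}, realized: {realized:.4f}")
```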
Poverty, AIDS and child health: identifying highest-risk children in South Africa.
Cluver, Lucie; Boyes, Mark; Orkin, Mark; Sherr, Lorraine
2013-10-11
Identifying children at the highest risk of negative health effects is a prerequisite to effective public health policies in Southern Africa. A central ongoing debate is whether poverty, orphanhood or parental AIDS most reliably indicates child health risks. Attempts to address this key question have been constrained by a lack of data allowing distinction of AIDS-specific parental death or morbidity from other causes of orphanhood and chronic illness. To examine whether household poverty, orphanhood and parental illness (by AIDS or other causes) independently or interactively predict child health, developmental and HIV-infection risks. We interviewed 6 002 children aged 10 - 17 years in 2009 - 2011, using stratified random sampling in six urban and rural sites across three South African provinces. Outcomes were child mental health risks, educational risks and HIV-infection risks. Regression models that controlled for socio-demographic co-factors tested potential impacts and interactions of poverty, AIDS-specific and other orphanhood and parental illness status. Household poverty independently predicted child mental health and educational risks, AIDS orphanhood independently predicted mental health risks and parental AIDS illness independently predicted mental health, educational and HIV-infection risks. Interaction effects of poverty with AIDS orphanhood and parental AIDS illness were found across all outcomes. No effects, or interactions with poverty, were shown by AIDS-unrelated orphanhood or parental illness. The identification of children at highest risk requires recognition and measurement of both poverty and parental AIDS. This study shows negative impacts of poverty and AIDS-specific vulnerabilities distinct from orphanhood and adult illness more generally. Additionally, effects of interaction between family AIDS and poverty suggest that, where these co-exist, children are at highest risk of all.
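A minimal sketch of the kind of interaction test described above, assuming statsmodels is available: a logistic model with a poverty × parental-AIDS-illness interaction on a binary child outcome. The data, variable names and effect sizes are entirely synthetic placeholders.

```python
# Sketch: testing a poverty x parental-AIDS-illness interaction on a binary
# child outcome with statsmodels' formula API. Synthetic data; illustrative names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 6002
df = pd.DataFrame({
    "poverty":      rng.binomial(1, 0.5, n),
    "aids_illness": rng.binomial(1, 0.2, n),
})
logit = -1.5 + 0.5*df.poverty + 0.6*df.aids_illness + 0.7*df.poverty*df.aids_illness
df["mh_risk"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# 'poverty * aids_illness' expands to both main effects plus the interaction term
fit = smf.logit("mh_risk ~ poverty * aids_illness", data=df).fit(disp=0)
print(fit.summary().tables[1])   # the interaction row tests the joint effect
```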
A Risk Stratification Model for Lung Cancer Based on Gene Coexpression Network and Deep Learning
2018-01-01
A risk stratification model for lung cancer based on gene expression profiles is of great interest. Instead of previous models based on individual prognostic genes, we aimed to develop a novel system-level risk stratification model for lung adenocarcinoma based on gene coexpression networks. Using multiple microarray datasets, gene coexpression network analysis was performed to identify survival-related networks. A deep learning based risk stratification model was constructed with representative genes of these networks. The model was validated in two test sets. Survival analysis was performed using the output of the model to evaluate whether it could predict patients' survival independent of clinicopathological variables. Five networks were significantly associated with patients' survival. Considering prognostic significance and representativeness, genes of the two survival-related networks were selected as input to the model. The output of the model was significantly associated with patients' survival in the training set and both test sets (p < 0.00001, p < 0.0001 and p = 0.02 for the training set and test sets 1 and 2, respectively). In multivariate analyses, the model was associated with patients' prognosis independent of other clinicopathological features. Our study presents a new perspective on incorporating gene coexpression networks into the gene expression signature and on the clinical application of deep learning in genomic data science for prognosis prediction. PMID:29581968
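A heavily compressed sketch of the general idea (not the paper's pipeline): cluster genes into coexpression modules, take the first principal component of each module as its representative feature, and train a small neural network risk model. Everything below, including the clustering method, the outcome definition and the data, is an illustrative assumption.

```python
# Compressed sketch: coexpression modules -> one representative feature per
# module -> small neural network risk model. Synthetic expression data; module
# detection here is simple hierarchical clustering, not the paper's analysis.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_samples, n_genes, n_modules = 300, 200, 5
expr = rng.normal(0, 1, (n_samples, n_genes))
died_2yr = rng.binomial(1, 0.3, n_samples)          # placeholder survival outcome

# 1) coexpression modules: cluster genes on 1 - |correlation|
corr = np.corrcoef(expr, rowvar=False)
dist = squareform(1 - np.abs(corr), checks=False)
labels = fcluster(linkage(dist, method="average"), t=n_modules, criterion="maxclust")

# 2) one representative per module: its first principal component ("eigengene")
features = np.column_stack([
    PCA(n_components=1).fit_transform(expr[:, labels == m]).ravel()
    for m in np.unique(labels)
])

# 3) small neural network as the risk model
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(features, died_2yr)
risk = clf.predict_proba(features)[:, 1]
print(f"in-sample AUC of module-based risk score: {roc_auc_score(died_2yr, risk):.2f}")
```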
Risk Assessment in the 21st Century
For the past ~50 years, risk assessment has depended almost exclusively on animal testing for hazard identification and dose-response assessment. Although originally sound and effective, this traditional approach is no longer adequate given the increasing dependence on chemical tools and the growing number of chemicals in commerce. This presentation provides an update on current progress in achieving the goals outlined in the NAS report on Toxicology Testing in the 21st Century, highlighting many of the advances led by the EPA. Topics covered include the evolution of the mode of action framework into a chemically agnostic adverse outcome pathway (AOP), a systems-based data framework that facilitates integration of modifiable factors (e.g., genetic variation, life stages), and an understanding of networks and mixtures. Further, the EDSP pivot is used to illustrate how AOPs drive development of predictive models for risk assessment based on assembly of high-throughput assays representing AOP key elements. The birth of computational exposure science, capable of large-scale predictive exposure models, is reviewed. Although still in its infancy, development of non-targeted analysis to begin addressing the exposome is also presented. Finally, the systems-based AEP is described, which integrates exposure, toxicokinetics and AOPs into a comprehensive framework.
Chalfoun, A.D.; Martin, T.E.
2009-01-01
1. Predation is an important and ubiquitous selective force that can shape habitat preferences of prey species, but tests of alternative mechanistic hypotheses of habitat influences on predation risk are lacking. 2. We studied predation risk at nest sites of a passerine bird and tested two hypotheses based on theories of predator foraging behaviour. The total-foliage hypothesis predicts that predation will decline in areas of greater overall vegetation density by impeding cues for detection by predators. The potential-prey-site hypothesis predicts that predation decreases where predators must search more unoccupied potential nest sites. 3. Both observational data and results from a habitat manipulation provided clear support for the potential-prey-site hypothesis and rejection of the total-foliage hypothesis. Birds chose nest patches with both greater total foliage and greater potential nest site density (which were correlated in their abundance) than random sites, yet only potential nest site density significantly influenced nest predation risk. 4. Our results therefore provided a clear and rare example of adaptive nest site selection that would have been missed had structural complexity or total vegetation density been considered alone. 5. Our results also demonstrated that interactions between predator foraging success and habitat structure can be more complex than simple impedance or occlusion by vegetation. © 2008 British Ecological Society.
Ronald, Lisa A; Campbell, Jonathon R; Balshaw, Robert F; Roth, David Z; Romanowski, Kamila; Marra, Fawziah; Cook, Victoria J; Johnston, James C
2016-11-25
Improved understanding of risk factors for developing active tuberculosis (TB) will better inform decisions about diagnostic testing and treatment for latent TB infection (LTBI) in migrant populations in low-incidence regions. We aim to examine TB risk factors among the foreign-born population in British Columbia (BC), Canada, and to create and validate a clinically relevant multivariate risk score to predict active TB. This retrospective population-based cohort study will include all foreign-born individuals who acquired permanent resident status in Canada between 1 January 1985 and 31 December 2013 and acquired healthcare coverage in BC at any point during this period. Multiple administrative databases and disease registries will be linked, including a National Immigration Database, BC Provincial Health Insurance Registration, physician billings, hospitalisations, drugs dispensed from community pharmacies, vital statistics, HIV testing and notifications, cancer, chronic kidney disease and dialysis treatment, and all TB and LTBI testing and treatment data in BC. Extended proportional hazards regression will be used to estimate risk factors for TB and to create a prognostic TB risk score. Ethical approval for this study has been obtained from the University of British Columbia Clinical Ethics Review Board. Once completed, study findings will be presented at conferences and published in peer-reviewed journals. An online TB risk score calculator will also be created. Published by the BMJ Publishing Group Limited.
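The planned analysis is an extended proportional hazards regression; a minimal Cox model of the kind that would seed such a risk score is sketched below, assuming the lifelines package is available. The cohort, covariates and hazard structure are synthetic placeholders, not the study's linked administrative data.

```python
# Sketch: a Cox proportional hazards model as the starting point for a prognostic
# TB risk score. Synthetic cohort; covariates are illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 5000
df = pd.DataFrame({
    "age_at_arrival": rng.normal(35, 12, n),
    "high_incidence_origin": rng.binomial(1, 0.4, n),   # hypothetical country-of-origin flag
    "diabetes": rng.binomial(1, 0.1, n),
})
# Simulated follow-up time (years) and active-TB event indicator
hazard = 0.002 * np.exp(0.01*df.age_at_arrival + 0.8*df.high_incidence_origin + 0.5*df.diabetes)
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df["time"] < 10).astype(int)
df["time"] = df["time"].clip(upper=10)                  # administrative censoring at 10 years

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                                     # hazard ratios would feed the risk score
```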
Theory-Based Cartographic Risk Model Development and Application for Home Fire Safety.
Furmanek, Stephen; Lehna, Carlee; Hanchette, Carol
There is a gap in the use of predictive risk models to identify areas at risk for home fires and burn injury. The purpose of this study was to describe the creation, validation, and application of such a model, using a sample from an intervention study with parents of newborns in Jefferson County, KY, as an example. A literature search was performed to identify risk factors for home fires and burn injury in the target population. Risk factor data were obtained from the American Community Survey at the census tract level and synthesized to create a predictive cartographic risk model. Model validation was performed through correlation, regression, and Moran's I with fire incidence data from open records. Independent samples t-tests were used to examine the model in relation to geocoded participant addresses. Participant risk level for fire rate and proximity to fire station service areas and hospitals were determined. The model showed high and severe risk clustering in the northwest section of the county. Modeled risk was strongly correlated with fire rate; the best predictive model for fire risk contained home value (low), race (black), and non-high-school graduates. Applying the model to the intervention sample, the majority of participants were at lower risk and mostly within service areas closest to a fire department and hospital. Cartographic risk models were useful in identifying areas at risk and analyzing participant risk level. The methods outlined in this study are generalizable to other public health issues.
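Moran's I, used above to validate spatial clustering of fire risk, can be computed directly from a value vector and a spatial weights matrix. The sketch below does this for a toy grid of tracts with rook adjacency; a real analysis would use the census-tract geometry and a dedicated spatial library.

```python
# Sketch: global Moran's I for spatial clustering of fire rates on a toy grid of
# "tracts" with rook adjacency. Illustrative only.
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and spatial weight matrix w."""
    z = x - x.mean()
    s0 = w.sum()
    return (len(x) / s0) * (z @ w @ z) / (z @ z)

rng = np.random.default_rng(6)
side = 10                                            # 10 x 10 grid of tracts
rates = rng.poisson(5, side * side).astype(float)    # fire incidents per tract

# Rook-adjacency weights: 1 if two grid cells share an edge
w = np.zeros((side * side, side * side))
for r in range(side):
    for c in range(side):
        i = r * side + c
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            rr, cc = r + dr, c + dc
            if 0 <= rr < side and 0 <= cc < side:
                w[i, rr * side + cc] = 1

print(f"Moran's I: {morans_i(rates, w):.3f}  (near 0 => no spatial clustering)")
```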
HPV-Testing in Follow-up of Patients Treated for CIN2+ Lesions
Mariani, Luciano; Sandri, Maria Teresa; Preti, Mario; Origoni, Massimo; Costa, Silvano; Cristoforoni, Paolo; Bottari, Fabio; Sideri, Mario
2016-01-01
Persistent positivity of HPV-DNA testing is considered a prognostic index of recurrent disease in patients treated for CIN2+. HPV detection, and particularly genotyping, has adequately high sensitivity and specificity (along with optimal reproducibility) for accurately predicting treatment failure, allowing for intensified monitoring. Conversely, women with a negative HPV test 6 months after therapy have a very low risk for residual/recurrent disease, which allows a more individualized follow-up schedule and a gradual return to the normal screening scheme. HPV testing should be routinely included (with or without cytology) in post-treatment follow-up of CIN2+ patients for early detection of recurrence and cancer progression. HPV genotyping methods, as a biological indicator of persistent disease, could be more suitable for a predictive role and risk stratification (particularly in the case of HPV 16/18 persistence) than pooled HPV-based testing. However, it is necessary to be aware of the performance of the system, adhering to strict standardization of the process and quality assurance criteria. PMID:26722366
Savolainen, Otto; Fagerberg, Björn; Vendelbo Lind, Mads; Sandberg, Ann-Sofie; Ross, Alastair B; Bergström, Göran
2017-01-01
The aim was to determine if metabolomics could be used to build a predictive model for type 2 diabetes (T2D) risk that would improve prediction of T2D over current risk markers. Gas chromatography-tandem mass spectrometry metabolomics was used in a nested case-control study based on a screening sample of 64-year-old Caucasian women (n = 629). Candidate metabolic markers of T2D were identified in plasma obtained at baseline and the power to predict diabetes was tested in 69 incident cases occurring during 5.5 years of follow-up. The metabolomics results were used as a standalone prediction model and in combination with established T2D predictive biomarkers for building eight T2D prediction models that were compared with each other based on their sensitivity and selectivity for predicting T2D. Established markers of T2D (impaired fasting glucose, impaired glucose tolerance, insulin resistance (HOMA), smoking, serum adiponectin) alone, and in combination with metabolomics, had the largest areas under the curve (AUC) (0.794 [95% confidence interval 0.738-0.850] and 0.808 [0.749-0.867], respectively), with the standalone metabolomics model based on nine fasting plasma markers having a lower predictive power (0.657 [0.577-0.736]). Prediction based on non-blood-based measures was 0.638 [0.565-0.711]. Established measures of T2D risk remain the best predictor of T2D risk in this population. Additional markers detected using metabolomics are likely related to these measures as they did not enhance the overall prediction in a combined model.
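The confidence intervals around the reported AUCs can be obtained by resampling; the sketch below computes a percentile-bootstrap 95% CI for an AUC on synthetic scores and outcomes (the sample and case counts are used only for scale, the scores are simulated).

```python
# Sketch: a percentile-bootstrap 95% CI for an AUC, of the kind used to compare
# the combined and metabolomics-only models. Synthetic risk scores and outcomes.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n, n_cases = 629, 69
y = np.zeros(n, dtype=int)
y[:n_cases] = 1
score = rng.normal(0, 1, n) + 0.8 * y        # cases get slightly higher scores

boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample subjects with replacement
    if y[idx].min() == y[idx].max():         # need both cases and non-cases
        continue
    boot_aucs.append(roc_auc_score(y[idx], score[idx]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUC {roc_auc_score(y, score):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```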
Lo, Monica Y; Bonthala, Nirupama; Holper, Elizabeth M; Banks, Kamakki; Murphy, Sabina A; McGuire, Darren K; de Lemos, James A; Khera, Amit
2013-03-15
Women with angina pectoris and abnormal stress test findings commonly have no epicardial coronary artery disease (CAD) at catheterization. The aim of the present study was to develop a risk score to predict obstructive CAD in such patients. Data were analyzed from 337 consecutive women with angina pectoris and abnormal stress test findings who underwent cardiac catheterization at our center from 2003 to 2007. Forward selection multivariate logistic regression analysis was used to identify the independent predictors of CAD, defined by ≥50% diameter stenosis in ≥1 epicardial coronary artery. The independent predictors included age ≥55 years (odds ratio 2.3, 95% confidence interval 1.3 to 4.0), body mass index <30 kg/m² (odds ratio 1.9, 95% confidence interval 1.1 to 3.1), smoking (odds ratio 2.6, 95% confidence interval 1.4 to 4.8), low high-density lipoprotein cholesterol (odds ratio 2.9, 95% confidence interval 1.5 to 5.5), family history of premature CAD (odds ratio 2.4, 95% confidence interval 1.0 to 5.7), lateral abnormality on stress imaging (odds ratio 2.8, 95% confidence interval 1.5 to 5.5), and exercise capacity <5 metabolic equivalents (odds ratio 2.4, 95% confidence interval 1.1 to 5.6). Each variable was assigned 1 point, and the points were summed to constitute a risk score; there was a graded association between the score and prevalent CAD (p for trend <0.001). The risk score demonstrated good discrimination with a cross-validated c-statistic of 0.745 (95% confidence interval 0.70 to 0.79), and an optimized cutpoint of a score of ≤2 included 62% of the subjects and had a negative predictive value of 80%. In conclusion, a simple clinical risk score of 7 characteristics can help differentiate those more or less likely to have CAD among women with angina pectoris and abnormal stress test findings. This tool, if validated, could help to guide testing strategies in women with angina pectoris. Copyright © 2013 Elsevier Inc. All rights reserved.
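The scoring rule is simple enough to sketch directly: one point per risk factor, a check of the risk gradient across scores, and the negative predictive value at the ≤2 cutpoint. The data below are simulated and the factor prevalences are guesses, so the numbers will not match the study's.

```python
# Sketch: an integer risk score (1 point per factor) and its negative predictive
# value at a <=2 cutpoint. Synthetic data; factor prevalences are illustrative.
import numpy as np

rng = np.random.default_rng(8)
n = 337
factors = rng.binomial(1, [0.5, 0.6, 0.3, 0.3, 0.15, 0.25, 0.2], size=(n, 7))
score = factors.sum(axis=1)                        # 0-7 points

# Simulated obstructive CAD, more likely at higher scores
cad = rng.binomial(1, 1 / (1 + np.exp(-(-2.5 + 0.6 * score))))

for s in range(8):
    mask = score == s
    if mask.any():
        print(f"score {s}: n={mask.sum():3d}, CAD prevalence={cad[mask].mean():.2f}")

low = score <= 2                                   # proposed low-risk cutpoint
npv = 1 - cad[low].mean()                          # fraction of low-score women without CAD
print(f"share with score <=2: {low.mean():.2f}, NPV: {npv:.2f}")
```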
The Priority Heuristic: Making Choices Without Trade-Offs
Brandstätter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph
2010-01-01
Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, we generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic predicts (i) Allais' paradox, (ii) risk aversion for gains if probabilities are high, (iii) risk seeking for gains if probabilities are low (lottery tickets), (iv) risk aversion for losses if probabilities are low (buying insurance), (v) risk seeking for losses if probabilities are high, (vi) certainty effect, (vii) possibility effect, and (viii) intransitivities. We test how accurately the heuristic predicts people's choices, compared to previously proposed heuristics and three modifications of expected utility theory: security-potential/aspiration theory, transfer-of-attention-exchange model, and cumulative prospect theory. PMID:16637767
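The priority heuristic is an explicit algorithm, so it can be written down compactly. The sketch below implements the gain-domain version for two-outcome gambles as commonly described (examine minimum gains, then their probabilities, then maximum gains, with aspiration levels of one tenth of the maximum gain and 0.1 on the probability scale); it simplifies the published rule, which also rounds the monetary aspiration level to the nearest prominent number.

```python
# Sketch of the priority heuristic for two simple gambles in the gain domain.
# Simplified relative to the published rule (no rounding to prominent numbers).
from dataclasses import dataclass

@dataclass
class Gamble:
    """Two-outcome gamble: 'low' with probability p_low, otherwise 'high'."""
    low: float
    p_low: float
    high: float

def priority_heuristic(a: Gamble, b: Gamble) -> Gamble:
    max_gain = max(a.high, b.high)
    # Reason 1: minimum gains; aspiration level = one tenth of the maximum gain
    if abs(a.low - b.low) >= 0.1 * max_gain:
        return a if a.low > b.low else b
    # Reason 2: probabilities of the minimum gain; aspiration level = 0.1
    if abs(a.p_low - b.p_low) >= 0.1:
        return a if a.p_low < b.p_low else b
    # Reason 3: maximum gains decide
    return a if a.high > b.high else b

# Example: a safe-ish gamble vs. a lottery-like gamble
safe = Gamble(low=90, p_low=0.5, high=100)
risky = Gamble(low=0, p_low=0.9, high=1000)
print(priority_heuristic(safe, risky))   # chooses 'safe' via reason 2
```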
Prediction Model for Predicting Powdery Mildew using ANN for Medicinal Plant— Picrorhiza kurrooa
NASA Astrophysics Data System (ADS)
Shivling, V. D.; Ghanshyam, C.; Kumar, Rakesh; Kumar, Sanjay; Sharma, Radhika; Kumar, Dinesh; Sharma, Atul; Sharma, Sudhir Kumar
2017-02-01
A plant disease forecasting system is important because it can predict disease and serve as an alert system to warn farmers in advance, so that they can protect their crop from infection. The forecasting system predicts the risk of infection for a crop from the environmental factors that favor germination of the disease. In this study an artificial neural network based system for predicting the risk of powdery mildew in Picrorhiza kurrooa was developed. The network was trained with the Levenberg-Marquardt backpropagation algorithm and has a single hidden layer of ten nodes. Temperature and duration of wetness are the major environmental factors that favor infection. Experimental data were used as a training set and a percentage of the data was used for testing and validation. The performance of the system was measured in terms of the coefficient of correlation (R), coefficient of determination (R2), mean square error and root mean square error. An interface was developed for simulating the network: entering temperature and wetness duration returns the predicted level of risk for those input values.
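A rough sketch of such a network: one hidden layer of ten nodes mapping temperature and wetness duration to a risk level. The training data are synthetic, and because scikit-learn does not provide a Levenberg-Marquardt optimiser, L-BFGS is used as a stand-in.

```python
# Sketch: a single-hidden-layer network (10 nodes) mapping temperature and leaf
# wetness duration to infection risk. Synthetic training data; L-BFGS replaces
# Levenberg-Marquardt, which scikit-learn does not implement.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
n = 400
temp = rng.uniform(5, 35, n)                # degrees C
wet = rng.uniform(0, 24, n)                 # hours of leaf wetness
# Toy "risk" surface: highest around 20 C with long wetness periods
risk = np.exp(-((temp - 20) / 6) ** 2) * (wet / 24) + rng.normal(0, 0.05, n)

X = np.column_stack([temp, wet])
net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=5000,
                   random_state=0).fit(X, risk)

r2 = net.score(X, risk)                     # coefficient of determination R^2
print(f"R^2 on training data: {r2:.2f}")
print(f"predicted risk at 20 C, 18 h wetness: {net.predict([[20, 18]])[0]:.2f}")
```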
Uttley, M; Crawford, M H
1994-02-01
In 1980 and 1981 Mennonite descendants of a group of Russian immigrants participated in a multidisciplinary study of biological aging. The Mennonites live in Goessel, Kansas, and Henderson, Nebraska. In 1991 the survival status of the participants was documented by each church secretary. Data are available for 1009 individuals, 177 of whom are now deceased. They ranged from 20 to 95 years in age when the data were collected. Biological ages were computed using a stepwise multiple regression procedure based on 38 variables previously identified as being related to survival, with chronological age as the dependent variable. Standardized residuals place participants in either a predicted-younger or a predicted-older group. The independence of the variables biological age and survival status is tested with the chi-square statistic. The significance of biological age differences between surviving and deceased Mennonites is determined by t test values. The two statistics provide consistent results. Predicted age group classification and survival status are related. The group of deceased participants is generally predicted to be older than the group of surviving participants, although neither statistic is significant for all subgroups of Mennonites. In most cases, however, individuals in the predicted-older groups are at a relatively higher risk of dying compared with those in the predicted-younger groups, although the increased risk is not always significant.
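The biological-age construction described above can be sketched as a regression of chronological age on health variables followed by a standardized-residual split and a chi-square test against survival status. The sketch uses synthetic data and only three predictors rather than the 38 variables of the study.

```python
# Sketch: "biological age" as the prediction from a regression of chronological
# age on health variables, with a standardized-residual split and a chi-square
# test against survival. Synthetic data; only 3 illustrative predictors.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n = 1009
health = rng.normal(0, 1, (n, 3))                      # e.g. grip strength, lung function, BP
chron_age = 50 + 10 * health @ [1.0, -0.8, 0.6] + rng.normal(0, 8, n)
deceased = rng.binomial(1, 0.18, n)

model = LinearRegression().fit(health, chron_age)
residual = model.predict(health) - chron_age           # predicted minus actual age
z = (residual - residual.mean()) / residual.std()
predicted_older = z > 0                                # z > 0: "biologically older"

table = np.array([
    [np.sum(predicted_older & (deceased == 1)), np.sum(predicted_older & (deceased == 0))],
    [np.sum(~predicted_older & (deceased == 1)), np.sum(~predicted_older & (deceased == 0))],
])
chi2, p, _, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```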
Karres, Julian; Kieviet, Noera; Eerenberg, Jan-Peter; Vrouenraets, Bart C
2018-01-01
Early mortality after hip fracture surgery is high and preoperative risk assessment for the individual patient is challenging. A risk model could identify patients in need of more intensive perioperative care, provide insight in the prognosis, and allow for risk adjustment in audits. This study aimed to develop and validate a risk prediction model for 30-day mortality after hip fracture surgery: the Hip fracture Estimator of Mortality Amsterdam (HEMA). Data on 1050 consecutive patients undergoing hip fracture surgery between 2004 and 2010 were retrospectively collected and randomly split into a development cohort (746 patients) and validation cohort (304 patients). Logistic regression analysis was performed in the development cohort to determine risk factors for the HEMA. Discrimination and calibration were assessed in both cohorts using the area under the receiver operating characteristic curve (AUC), the Hosmer-Lemeshow goodness-of-fit test, and by stratification into low-, medium- and high-risk groups. Nine predictors for 30-day mortality were identified and used in the final model: age ≥85 years, in-hospital fracture, signs of malnutrition, myocardial infarction, congestive heart failure, current pneumonia, renal failure, malignancy, and serum urea >9 mmol/L. The HEMA showed good discrimination in the development cohort (AUC = 0.81) and the validation cohort (AUC = 0.79). The Hosmer-Lemeshow test indicated no lack of fit in either cohort (P > 0.05). The HEMA is based on preoperative variables and can be used to predict the risk of 30-day mortality after hip fracture surgery for the individual patient. Prognostic Level II. See Instructions for Authors for a complete description of levels of evidence.
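Calibration by the Hosmer-Lemeshow test can be hand-rolled from predicted probabilities; the sketch below groups synthetic patients into deciles of predicted 30-day mortality risk and computes the usual chi-square statistic. The predictions and outcomes are simulated, not the HEMA cohort's.

```python
# Sketch: a hand-rolled Hosmer-Lemeshow goodness-of-fit test for a mortality
# model, grouping patients by deciles of predicted risk. Synthetic predictions.
import numpy as np
from scipy import stats

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-square over `groups` bins of predicted probability."""
    order = np.argsort(p)
    bins = np.array_split(order, groups)
    h = 0.0
    for idx in bins:
        obs = y[idx].sum()                # observed deaths in the bin
        exp = p[idx].sum()                # expected deaths in the bin
        pbar = p[idx].mean()
        h += (obs - exp) ** 2 / (len(idx) * pbar * (1 - pbar))
    return h, stats.chi2.sf(h, groups - 2)

rng = np.random.default_rng(11)
n = 746                                   # development-cohort size, for scale only
p_pred = rng.uniform(0.01, 0.5, n)        # model-predicted 30-day mortality risk
died = rng.binomial(1, p_pred)            # outcomes consistent with the predictions

h, p_value = hosmer_lemeshow(died, p_pred)
print(f"Hosmer-Lemeshow chi2 = {h:.2f}, p = {p_value:.2f}  (p > 0.05: no lack of fit)")
```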