Adjusted Clinical Groups: Predictive Accuracy for Medicaid Enrollees in Three States
Adams, E. Kathleen; Bronstein, Janet M.; Raskind-Hood, Cheryl
2002-01-01
Actuarial split-sample methods were used to assess predictive accuracy of adjusted clinical groups (ACGs) for Medicaid enrollees in Georgia, Mississippi (lagging in managed care penetration), and California. Accuracy for two non-random groups—high-cost and located in urban poor areas—was assessed. Measures for random groups were derived with and without short-term enrollees to assess the effect of turnover on predictive accuracy. ACGs improved predictive accuracy for the high-cost group in all States, but improved accuracy for enrollees in poor urban areas only in Georgia. The higher and more unpredictable expenses of short-term enrollees moderated the predictive power of ACGs. This limitation was significant in Mississippi due, in part, to that State's very high proportion of short-term enrollees. PMID:12545598
Alishiri, Gholam Hossein; Bayat, Noushin; Fathi Ashtiani, Ali; Tavallaii, Seyed Abbas; Assari, Shervin; Moharamzad, Yashar
2008-01-01
The aim of this work was to develop two logistic regression models capable of predicting physical and mental health related quality of life (HRQOL) among rheumatoid arthritis (RA) patients. In this cross-sectional study, which was conducted during 2006 in the outpatient rheumatology clinic of our university hospital, Short Form 36 (SF-36) was used for HRQOL measurements in 411 RA patients. A cutoff point to define poor versus good HRQOL was calculated using the first quartiles of SF-36 physical and mental component scores (33.4 and 36.8, respectively). Two distinct logistic regression models were used to derive predictive variables including demographic, clinical, and psychological factors. The sensitivity, specificity, and accuracy of each model were calculated. Poor physical HRQOL was positively associated with pain score, disease duration, monthly family income below 300 US$, comorbidity, patient global assessment of disease activity or PGA, and depression (odds ratios: 1.1; 1.004; 15.5; 1.1; 1.02; 2.08, respectively). The variables that entered into the poor mental HRQOL prediction model were monthly family income below 300 US$, comorbidity, PGA, and bodily pain (odds ratios: 6.7; 1.1; 1.01; 1.01, respectively). Optimal sensitivity and specificity were achieved at a cutoff point of 0.39 for the estimated probability of poor physical HRQOL and 0.18 for mental HRQOL. Sensitivity, specificity, and accuracy of the physical and mental models were 73.8, 87, 83.7% and 90.38, 70.36, 75.43%, respectively. The results show that the suggested models can be used to predict poor physical and mental HRQOL separately among RA patients using simple variables with acceptable accuracy. These models can be of use in clinical decision-making for RA patients and in recognizing, in advance, patients with poor physical or mental HRQOL, allowing better management.
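As an illustration of how such a model is applied with a non-default probability cutoff, the sketch below fits a logistic regression on synthetic data and labels patients as having poor physical HRQOL when the predicted probability reaches the reported 0.39 cutpoint. The predictor names, data, and coefficients are illustrative assumptions, not the study's data.

```python
# Hypothetical sketch: a logistic model for poor physical HRQOL applied with
# the reported probability cutoff of 0.39 instead of the default 0.5.
# All data and coefficients below are synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 411  # cohort size reported in the abstract

# Illustrative predictors loosely matching those named in the abstract
X = np.column_stack([
    rng.uniform(0, 10, n),    # pain score
    rng.uniform(0, 30, n),    # disease duration (years)
    rng.integers(0, 2, n),    # family income below 300 US$/month (0/1)
    rng.integers(0, 5, n),    # comorbidity count
    rng.uniform(0, 100, n),   # patient global assessment (PGA)
    rng.integers(0, 2, n),    # depression (0/1)
])
# Synthetic outcome: poor (1) vs good (0) physical HRQOL
logit = (-3 + 0.1 * X[:, 0] + 0.004 * X[:, 1] + 2.7 * X[:, 2]
         + 0.1 * X[:, 3] + 0.02 * X[:, 4] + 0.7 * X[:, 5])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)

proba = model.predict_proba(X)[:, 1]
pred_poor = proba >= 0.39   # reported cutoff for the physical model (0.18 for mental)

tp = np.sum(pred_poor & (y == 1)); fn = np.sum(~pred_poor & (y == 1))
tn = np.sum(~pred_poor & (y == 0)); fp = np.sum(pred_poor & (y == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```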
Math and numeracy in young adults with spina bifida and hydrocephalus.
Dennis, Maureen; Barnes, Marcia
2002-01-01
The developmental stability of poor math skill was studied in 31 young adults with spina bifida and hydrocephalus (SBH), a neurodevelopmental disorder involving malformations of the brain and spinal cord. Longitudinally, individuals with poor math problem solving as children grew into adults with poor problem solving and limited functional numeracy. As a group, young adults with SBH had poor computation accuracy, computation speed, problem solving, and functional numeracy. Computation accuracy was related to a supporting cognitive system (working memory for numbers), and functional numeracy was related to one medical history variable (number of lifetime shunt revisions). Adult functional numeracy, but not functional literacy, was predictive of higher levels of social, personal, and community independence.
Hengartner, M P; Heekeren, K; Dvorsky, D; Walitza, S; Rössler, W; Theodoridou, A
2017-09-01
The aim of this study was to critically examine the prognostic validity of various clinical high-risk (CHR) criteria alone and in combination with additional clinical characteristics. A total of 188 CHR positive persons from the region of Zurich, Switzerland (mean age 20.5 years; 60.2% male), meeting ultra high-risk (UHR) and/or basic symptoms (BS) criteria, were followed over three years. The test battery included the Structured Interview for Prodromal Syndromes (SIPS), verbal IQ and many other screening tools. Conversion to psychosis was defined according to ICD-10 criteria for schizophrenia (F20) or brief psychotic disorder (F23). Altogether n=24 persons developed manifest psychosis within three years and according to Kaplan-Meier survival analysis, the projected conversion rate was 17.5%. The predictive accuracy of UHR was statistically significant but poor (area under the curve [AUC]=0.65, P<.05), whereas BS did not predict psychosis beyond mere chance (AUC=0.52, P=.730). Sensitivity and specificity were 0.83 and 0.47 for UHR, and 0.96 and 0.09 for BS. UHR plus BS achieved an AUC=0.66, with sensitivity and specificity of 0.75 and 0.56. In comparison, baseline antipsychotic medication yielded a predictive accuracy of AUC=0.62 (sensitivity=0.42; specificity=0.82). A multivariable prediction model comprising continuous measures of positive symptoms and verbal IQ achieved a substantially improved prognostic accuracy (AUC=0.85; sensitivity=0.86; specificity=0.85; positive predictive value=0.54; negative predictive value=0.97). We showed that BS have no predictive accuracy beyond chance, while UHR criteria poorly predict conversion to psychosis. Combining BS with UHR criteria did not improve the predictive accuracy of UHR alone. In contrast, dimensional measures of both positive symptoms and verbal IQ showed excellent prognostic validity. A critical re-thinking of binary at-risk criteria is necessary in order to improve the prognosis of psychotic disorders. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C
2018-06-01
Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
E-nose based rapid prediction of early mouldy grain using probabilistic neural networks
Ying, Xiaoguo; Liu, Wei; Hui, Guohua; Fu, Jun
2015-01-01
In this paper, a rapid method for predicting early mouldy grain using a probabilistic neural network (PNN) and an electronic nose (e-nose) was studied. E-nose responses to rice, red bean, and oat samples with different qualities were measured and recorded. E-nose data were analyzed using principal component analysis (PCA), a back propagation (BP) network, and a PNN, respectively. Results indicated that PCA and the BP network could not clearly discriminate grain samples with different mouldy status and showed poor prediction accuracy. The PNN discriminated grain samples satisfactorily, with an accuracy of 93.75%. E-nose measurement combined with a PNN is effective for early mouldy grain prediction. PMID:25714125
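For readers unfamiliar with the classifier used above, the sketch below implements a minimal probabilistic neural network (a Gaussian-kernel density score per class, winner takes all) on synthetic data. The sensor count, smoothing parameter, and data are assumptions for illustration and do not reproduce the study's e-nose setup.

```python
# Minimal PNN sketch: score each class with a Gaussian kernel density over its
# training samples and pick the highest-scoring class. Synthetic data only.
import numpy as np

class PNN:
    def __init__(self, sigma=0.5):
        self.sigma = sigma          # kernel smoothing parameter (assumed)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.X_by_class_ = {c: X[y == c] for c in self.classes_}
        return self

    def predict(self, X):
        preds = []
        for x in X:
            scores = []
            for c in self.classes_:
                d2 = np.sum((self.X_by_class_[c] - x) ** 2, axis=1)
                scores.append(np.mean(np.exp(-d2 / (2 * self.sigma ** 2))))
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)

rng = np.random.default_rng(2)
# Synthetic 8-sensor e-nose responses for three mould statuses (0, 1, 2)
centers = rng.normal(size=(3, 8))
X = np.vstack([c + 0.3 * rng.normal(size=(40, 8)) for c in centers])
y = np.repeat([0, 1, 2], 40)

model = PNN(sigma=0.5).fit(X, y)
acc = np.mean(model.predict(X) == y)
print(f"training accuracy on synthetic e-nose data: {acc:.1%}")
```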
Artificial neural network prediction of ischemic tissue fate in acute stroke imaging
Huang, Shiliang; Shen, Qiang; Duong, Timothy Q
2010-01-01
Multimodal magnetic resonance imaging of acute stroke provides predictive value that can be used to guide stroke therapy. A flexible artificial neural network (ANN) algorithm was developed and applied to predict ischemic tissue fate on three stroke groups: 30-, 60-minute, and permanent middle cerebral artery occlusion in rats. Cerebral blood flow (CBF), apparent diffusion coefficient (ADC), and spin–spin relaxation time constant (T2) were acquired during the acute phase up to 3 hours and again at 24 hours followed by histology. Infarct was predicted on a pixel-by-pixel basis using only acute (30-minute) stroke data. In addition, neighboring pixel information and infarction incidence were also incorporated into the ANN model to improve prediction accuracy. Receiver-operating characteristic analysis was used to quantify prediction accuracy. The major findings were the following: (1) CBF alone poorly predicted the final infarct across three experimental groups; (2) ADC alone adequately predicted the infarct; (3) CBF+ADC improved the prediction accuracy; (4) inclusion of neighboring pixel information and infarction incidence further improved the prediction accuracy; and (5) prediction was more accurate for permanent occlusion, followed by 60- and 30-minute occlusion. The ANN predictive model could thus provide a flexible and objective framework for clinicians to evaluate stroke treatment options on an individual patient basis. PMID:20424631
Should gram stains have a role in diagnosing hip arthroplasty infections?
Johnson, Aaron J; Zywiel, Michael G; Stroh, D Alex; Marker, David R; Mont, Michael A
2010-09-01
The utility of Gram stains in diagnosing periprosthetic infections following total hip arthroplasty has recently been questioned. Several studies report low sensitivity of the test, and its poor ability to either confirm or rule out infection in patients undergoing revision total hip arthroplasty. Despite this, many institutions including that of the senior author continue to perform Gram stains during revision total hip arthroplasty. We assessed the sensitivity, specificity, accuracy, and positive and negative predictive values of Gram stains from surgical-site samplings taken from procedures on patients with both infected and aseptic revision total hip arthroplasties. A review was performed on patients who underwent revision total hip arthroplasty between 2000 and 2007. Eighty-two Gram stains were performed on patients who had infected total hip arthroplasties and underwent revision procedures. Additionally, of the 410 revision total hip arthroplasties performed on patients who were confirmed infection-free, 120 Gram stains were performed. Patients were diagnosed as infected using multiple criteria at the time of surgery. Sensitivity, specificity, positive and negative predictive values, and accuracy were calculated from these Gram stain results. The Gram stain demonstrated a sensitivity and specificity of 9.8% and 100%, respectively. In this series, the Gram stain had a negative predictive value of 62%, a positive predictive value of 100%, and an accuracy of 63%. Gram stains obtained from surgical-site samples had poor sensitivity and poor negative predictive value. Based on these findings, as well as those of other authors, we believe that Gram stains should no longer be considered for diagnosing infections in revision total hip arthroplasty. Level III, diagnostic study. See Guidelines for Authors for a complete description of levels of evidence.
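The reported statistics can be reconstructed from a simple 2×2 table. The sketch below rebuilds that table from the counts given in the abstract (82 stains from infected hips, 120 from aseptic hips) and recomputes sensitivity, specificity, predictive values, and accuracy; the exact true-positive count is inferred from the reported 9.8% sensitivity.

```python
# Worked check of the reported Gram stain statistics from a reconstructed
# 2x2 table (true-positive count inferred from the reported 9.8% sensitivity).
def diagnostic_metrics(tp, fn, tn, fp):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, ppv, npv, acc

tp, fn = 8, 74      # 8/82 infected hips had a positive Gram stain (~9.8%)
tn, fp = 120, 0     # all 120 aseptic hips were Gram stain negative

sens, spec, ppv, npv, acc = diagnostic_metrics(tp, fn, tn, fp)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")    # ~9.8%, 100%
print(f"PPV {ppv:.0%}, NPV {npv:.0%}, accuracy {acc:.0%}")  # 100%, 62%, 63%
```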
Chen, Yile; Tai, Qiang; Hong, Shaodong; Kong, Yuan; Shang, Yushu; Liang, Wenhua; Guo, Zhiyong; He, Xiaoshun
2012-11-15
The question of whether high pretransplantation soluble CD30 (sCD30) level can be a predictor of kidney transplant acute rejection (AR) is under debate. Herein, we performed a meta-analysis on the predictive efficacy of sCD30 for AR in renal transplantation. PubMed (1966-2012), EMBASE (1988-2012), and Web of Science (1986-2012) databases were searched for studies concerning the predictive efficacy of sCD30 for AR after kidney transplantation. After a careful review of eligible studies, sensitivity, specificity, and other measures of the accuracy of sCD30 were pooled. A summary receiver operating characteristic curve was used to represent the overall test performance. Twelve studies enrolling 2507 patients met the inclusion criteria. The pooled estimates for pretransplantation sCD30 in prediction of allograft rejection risk were poor, with a sensitivity of 0.70 (95% confidence interval (CI), 0.66-0.74), a specificity of 0.48 (95% CI, 0.46-0.50), a positive likelihood ratio of 1.35 (95% CI, 1.20-1.53), a negative likelihood ratio of 0.68 (95% CI, 0.55-0.84), and a diagnostic odds ratio of 2.07 (95% CI, 1.54-2.80). The area under the summary receiver operating characteristic curve was 0.60, indicating poor overall accuracy of the serum sCD30 level in the prediction of patients at risk for AR. The results of the meta-analysis show that the accuracy of pretransplantation sCD30 for predicting posttransplantation AR was poor. Prospective studies are needed to clarify the usefulness of this test for identifying risks of AR in transplant recipients.
Accuracy of Predicted Genomic Breeding Values in Purebred and Crossbred Pigs.
Hidalgo, André M; Bastiaansen, John W M; Lopes, Marcos S; Harlizius, Barbara; Groenen, Martien A M; de Koning, Dirk-Jan
2015-05-26
Genomic selection has been widely implemented in dairy cattle breeding when the aim is to improve performance of purebred animals. In pigs, however, the final product is a crossbred animal. This may affect the efficiency of methods that are currently implemented for dairy cattle. Therefore, the objective of this study was to determine the accuracy of predicted breeding values in crossbred pigs using purebred genomic and phenotypic data. A second objective was to compare the predictive ability of SNPs when training is done in either single or multiple populations for four traits: age at first insemination (AFI); total number of piglets born (TNB); litter birth weight (LBW); and litter variation (LVR). We performed marker-based and pedigree-based predictions. Within-population predictions for the four traits ranged from 0.21 to 0.72. Multi-population prediction yielded accuracies ranging from 0.18 to 0.67. Predictions across purebred populations as well as predicting genetic merit of crossbreds from their purebred parental lines for AFI performed poorly (not significantly different from zero). In contrast, accuracies of across-population predictions and accuracies of purebred to crossbred predictions for LBW and LVR ranged from 0.08 to 0.31 and 0.11 to 0.31, respectively. Accuracy for TNB was zero for across-population prediction, whereas for purebred to crossbred prediction it ranged from 0.08 to 0.22. In general, marker-based outperformed pedigree-based prediction across populations and traits. However, in some cases pedigree-based prediction performed similarly or outperformed marker-based prediction. There was predictive ability when purebred populations were used to predict crossbred genetic merit using an additive model in the populations studied. AFI was the only exception, indicating that predictive ability depends largely on the genetic correlation between PB and CB performance, which was 0.31 for AFI. Multi-population prediction was no better than within-population prediction for the purebred validation set. Accuracy of prediction was very trait-dependent. Copyright © 2015 Hidalgo et al.
Waide, Emily H; Tuggle, Christopher K; Serão, Nick V L; Schroyen, Martine; Hess, Andrew; Rowland, Raymond R R; Lunney, Joan K; Plastow, Graham; Dekkers, Jack C M
2018-02-01
Genomic prediction of the pig's response to the porcine reproductive and respiratory syndrome (PRRS) virus (PRRSV) would be a useful tool in the swine industry. This study investigated the accuracy of genomic prediction based on porcine SNP60 Beadchip data using training and validation datasets from populations with different genetic backgrounds that were challenged with different PRRSV isolates. Genomic prediction accuracy averaged 0.34 for viral load (VL) and 0.23 for weight gain (WG) following experimental PRRSV challenge, which demonstrates that genomic selection could be used to improve response to PRRSV infection. Training on WG data during infection with a less virulent PRRSV, KS06, resulted in poor accuracy of prediction for WG during infection with a more virulent PRRSV, NVSL. Inclusion of single nucleotide polymorphisms (SNPs) that are in linkage disequilibrium with a major quantitative trait locus (QTL) on chromosome 4 was vital for accurate prediction of VL. Overall, SNPs that were significantly associated with either trait in single SNP genome-wide association analysis were unable to predict the phenotypes with an accuracy as high as that obtained by using all genotyped SNPs across the genome. Inclusion of data from close relatives into the training population increased whole genome prediction accuracy by 33% for VL and by 37% for WG but did not affect the accuracy of prediction when using only SNPs in the major QTL region. Results show that genomic prediction of response to PRRSV infection is moderately accurate and, when using all SNPs on the porcine SNP60 Beadchip, is not very sensitive to differences in virulence of the PRRSV in training and validation populations. Including close relatives in the training population increased prediction accuracy when using the whole genome or SNPs other than those near a major QTL.
Improving transmembrane protein consensus topology prediction using inter-helical interaction.
Wang, Han; Zhang, Chao; Shi, Xiaohu; Zhang, Li; Zhou, You
2012-11-01
Alpha-helical transmembrane proteins (αTMPs) represent roughly 30% of all open reading frames (ORFs) in a typical genome and are involved in many critical biological processes. Because of their special physicochemical properties, they are hard to crystallize and high-resolution structures are difficult to obtain experimentally; thus, sequence-based topology prediction is highly desirable for the study of transmembrane proteins (TMPs), both in structure prediction and in function prediction. Various model-based topology prediction methods have been developed, but the accuracy of these individual predictors remains poor due to the limitations of the methods or the features they use. Consensus topology prediction therefore becomes practical for high-accuracy applications by combining the advantages of the individual predictors. Here, based on the observation that inter-helical interactions are commonly found between the transmembrane helices (TMHs) and strongly indicate their existence, we present a novel consensus topology prediction method for αTMPs, CNTOP, which incorporates four leading individual topology predictors and further improves prediction accuracy by using the predicted inter-helical interactions. The method achieved 87% prediction accuracy on a benchmark dataset and 78% accuracy on a non-redundant dataset composed of polytopic αTMPs. Our method achieves higher topology accuracy than any of the individual predictors or other consensus predictors; at the same time, the TMHs are predicted more accurately in length and location, with both the false positives (FPs) and the false negatives (FNs) decreasing dramatically. CNTOP is available at: http://ccst.jlu.edu.cn/JCSB/cntop/CNTOP.html. Copyright © 2012 Elsevier B.V. All rights reserved.
Shutter, Lori; Tong, Karen A; Holshouser, Barbara A
2004-12-01
Proton magnetic resonance spectroscopy (MRS) is being used to evaluate individuals with acute traumatic brain injury and several studies have shown that changes in certain brain metabolites (N-acetylaspartate, choline) are associated with poor neurologic outcomes. The majority of previous MRS studies have been obtained relatively late after injury and none have examined the role of glutamate/glutamine (Glx). We conducted a prospective MRS study of 42 severely injured adults to measure quantitative metabolite changes early (7 days) after injury in normal appearing brain. We used these findings to predict long-term neurologic outcome and to determine if MRS data alone or in combination with clinical outcome variables provided better prediction of long-term outcomes. We found that glutamate/glutamine (Glx) and choline (Cho) were significantly elevated in occipital gray and parietal white matter early after injury in patients with poor long-term (6-12-month) outcomes. Glx and Cho ratios predicted long-term outcome with 94% accuracy and when combined with the motor Glasgow Coma Scale score provided the highest predictive accuracy (97%). Somatosensory evoked potentials were not as accurate as MRS data in predicting outcome. Elevated Glx and Cho are more sensitive indicators of injury and predictors of poor outcome when spectroscopy is done early after injury. This may be a reflection of early excitotoxic injury (i.e., elevated Glx) and of injury associated with membrane disruption (i.e., increased Cho) secondary to diffuse axonal injury.
MEDEX 2015: Heart Rate Variability Predicts Development of Acute Mountain Sickness.
Sutherland, Angus; Freer, Joseph; Evans, Laura; Dolci, Alberto; Crotti, Matteo; Macdonald, Jamie Hugo
2017-09-01
Sutherland, Angus, Joseph Freer, Laura Evans, Alberto Dolci, Matteo Crotti, and Jamie Hugo Macdonald. MEDEX 2015: Heart rate variability predicts development of acute mountain sickness. High Alt Med Biol. 18: 199-208, 2017. Acute mountain sickness (AMS) develops when the body fails to acclimatize to atmospheric changes at altitude. Preascent prediction of susceptibility to AMS would be a useful tool to prevent subsequent harm. Changes to peripheral oxygen saturation (SpO2) on hypoxic exposure have previously been shown to be of poor predictive value. Heart rate variability (HRV) has shown promise in the early prediction of AMS, but its use pre-expedition has not previously been investigated. We aimed to determine whether pre- and intraexpedition HRV assessment could predict susceptibility to AMS at high altitude with better diagnostic accuracy than SpO2. Forty-four healthy volunteers undertook an expedition in the Nepali Himalaya to >5000 m. SpO2 and HRV parameters were recorded at rest in normoxia and in a normobaric hypoxic chamber before the expedition. On the expedition HRV parameters and SpO2 were collected again at 3841 m. A daily Lake Louise Score was obtained to assess AMS symptomology. Low frequency/high frequency (LF/HF) ratio in normoxia (cutpoint ≤2.28 a.u.) and LF following 15 minutes of exposure to normobaric hypoxia had moderate (area under the curve ≥0.8) diagnostic accuracy. LF/HF ratio in normoxia had the highest sensitivity (85%) and specificity (88%) for predicting AMS on subsequent ascent to altitude. In contrast, pre-expedition SpO2 measurements had poor (area under the curve <0.7) diagnostic accuracy and inferior sensitivity and specificity. Pre-ascent measurement of HRV in normoxia was found to be of better diagnostic accuracy for AMS prediction than all measures of HRV in hypoxia, and better than peripheral oxygen saturation monitoring.
Prediction of Spirometric Forced Expiratory Volume (FEV1) Data Using Support Vector Regression
NASA Astrophysics Data System (ADS)
Kavitha, A.; Sujatha, C. M.; Ramakrishnan, S.
2010-01-01
In this work, prediction of forced expiratory volume in 1 second (FEV1) in pulmonary function testing is carried out using a spirometer and support vector regression analysis. Pulmonary function data are measured with a flow-volume spirometer from volunteers (N=175) using a standard data acquisition protocol. The acquired data are then used to predict FEV1. Support vector machines with polynomial kernel functions of four different orders were employed to predict the values of FEV1. The performance is evaluated by computing the average prediction accuracy for normal and abnormal cases. Results show that support vector machines are capable of predicting FEV1 in both normal and abnormal cases, and the average prediction accuracy for normal subjects was higher than that for abnormal subjects. Prediction accuracy was found to be high for a regularization constant of C=10. Since FEV1 is the most significant parameter in the analysis of spirometric data, this method of assessment appears useful in diagnosing pulmonary abnormalities from incomplete or poorly recorded data.
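A minimal sketch of the modelling step, assuming synthetic spirometry-like features: support vector regression with polynomial kernels of orders 1 to 4 and the reported regularization constant C=10. Feature names and data are illustrative, not the study's measurements.

```python
# Sketch of FEV1 regression with polynomial-kernel SVMs of orders 1-4 and
# C=10, as reported; the features and target below are purely synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 175  # number of volunteers reported in the abstract

# Illustrative predictors (e.g., age, height, FVC, PEF); purely synthetic.
X = rng.normal(size=(n, 4))
y = 2.5 + X @ np.array([0.3, 0.4, 0.8, 0.2]) + rng.normal(scale=0.2, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for degree in (1, 2, 3, 4):
    model = SVR(kernel="poly", degree=degree, C=10).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # mean absolute percentage error as a simple accuracy proxy
    mape = np.mean(np.abs((y_te - pred) / y_te)) * 100
    print(f"degree {degree}: MAPE {mape:.1f}%")
```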
Predicting aged pork quality using a portable Raman device.
Santos, C C; Zhao, J; Dong, X; Lonergan, S M; Huff-Lonergan, E; Outhouse, A; Carlson, K B; Prusa, K J; Fedler, C A; Yu, C; Shackelford, S D; King, D A; Wheeler, T L
2018-05-29
The utility of Raman spectroscopic signatures of fresh pork loin (1 d & 15 d postmortem) in predicting fresh pork tenderness and slice shear force (SSF) was determined. Partial least squares models showed that sensory tenderness and SSF are weakly correlated (R2 = 0.2). Raman spectral data were collected in 6 s using a portable Raman spectrometer (RS). A PLS regression model was developed to predict quantitatively the tenderness scores and SSF values from Raman spectral data, with very limited success. It was discovered that the prediction accuracies for day 15 postmortem samples are significantly greater than those for day 1 postmortem samples. Classification models were developed to predict tenderness at two ends of sensory quality as "poor" vs. "good". The accuracies of classification into different quality categories (1st to 4th percentile) are also greater for the day 15 postmortem samples for sensory tenderness (93.5% vs 76.3%) and SSF (92.8% vs 76.1%). RS has the potential to become a rapid on-line screening tool for pork producers to quickly select meats with superior quality and/or cull poor quality to meet market demand/expectations. Copyright © 2018 Elsevier Ltd. All rights reserved.
Electroencephalography Predicts Poor and Good Outcomes After Cardiac Arrest: A Two-Center Study.
Rossetti, Andrea O; Tovar Quiroga, Diego F; Juan, Elsa; Novy, Jan; White, Roger D; Ben-Hamouda, Nawfel; Britton, Jeffrey W; Oddo, Mauro; Rabinstein, Alejandro A
2017-07-01
The prognostic role of electroencephalography during and after targeted temperature management in postcardiac arrest patients, relative to other predictors, is incompletely known. We assessed performances of electroencephalography during and after targeted temperature management toward good and poor outcomes, along with other recognized predictors. Cohort study (April 2009 to March 2016). Two academic hospitals (Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland; Mayo Clinic, Rochester, MN). Consecutive comatose adults admitted after cardiac arrest, identified through prospective registries. All patients were managed with targeted temperature management, receiving prespecified standardized clinical, neurophysiologic (particularly, electroencephalography during and after targeted temperature management), and biochemical evaluations. We assessed electroencephalography variables (reactivity, continuity, epileptiform features, and prespecified "benign" or "highly malignant" patterns based on the American Clinical Neurophysiology Society nomenclature) and other clinical, neurophysiologic (somatosensory-evoked potential), and biochemical prognosticators. Good outcome (Cerebral Performance Categories 1 and 2) and mortality predictions at 3 months were calculated. Among 357 patients, early electroencephalography reactivity and continuity and flexor or better motor reaction had greater than 70% positive predictive value for good outcome; reactivity (80.4%; 95% CI, 75.9-84.4%) and motor response (80.1%; 95% CI, 75.6-84.1%) had highest accuracy. Early benign electroencephalography heralded good outcome in 86.2% (95% CI, 79.8-91.1%). False positive rates for mortality were less than 5% for epileptiform or nonreactive early electroencephalography, nonreactive late electroencephalography, absent somatosensory-evoked potential, absent pupillary or corneal reflexes, presence of myoclonus, and neuron-specific enolase greater than 75 µg/L; accuracy was highest for early electroencephalography reactivity (86.6%; 95% CI, 82.6-90.0). Early highly malignant electroencephalography had a false positive rate of 1.5% with accuracy of 85.7% (95% CI, 81.7-89.2%). This study provides class III evidence that electroencephalography reactivity predicts both poor and good outcomes, and motor reaction good outcome after cardiac arrest. Electroencephalography reactivity seems to be the best discriminator between good and poor outcomes. Standardized electroencephalography interpretation seems to predict both conditions during and after targeted temperature management.
Giovannini, Giada; Monti, Giulia; Tondelli, Manuela; Marudi, Andrea; Valzania, Franco; Leitinger, Markus; Trinka, Eugen; Meletti, Stefano
2017-03-01
Status epilepticus (SE) is a neurological emergency, characterized by high short-term morbidity and mortality. We evaluated and compared two scores that have been developed to evaluate status epilepticus prognosis: STESS (Status Epilepticus Severity Score) and EMSE (Epidemiology based Mortality score in Status Epilepticus). A prospective observational study was performed on consecutive patients with SE admitted between September 2013 and August 2015. Demographics, clinical variables, STESS-3 and -4, and EMSE-64 scores were calculated for each patient at baseline. SE drug response, 30-day mortality and morbidity were the outcome measures. 162 episodes of SE were observed: 69% had a STESS ≥3; 34% had a STESS ≥4; 51% patients had an EMSE ≥64. The 30-day mortality was 31.5%: EMSE-64 showed greater negative predictive value (NPV) (97.5%), positive predictive value (PPV) (59.8%) and accuracy in the prediction of death than STESS-3 and STESS-4 (p<0.001). At 30 days, the clinical condition had deteriorated in 59% of the cases: EMSE-64 showed greater NPV (71.3%), PPV (87.8%) and accuracy than STESS-3 and STESS-4 (p<0.001) in the prediction of this outcome. In 23% of all cases, status epilepticus proved refractory to non-anaesthetic treatment. All three scales showed a high NPV (EMSE-64: 87.3%; STESS-4: 89.4%; STESS-3: 87.5%) but a low PPV (EMSE-64: 40.9%; STESS-4: 52.9%; STESS-3: 32%) for the prediction of refractoriness to first and second line drugs. This means that accuracy for the prediction of refractoriness was equally poor for all scales. EMSE-64 appears superior to STESS-3 and STESS-4 in the prediction of 30-day mortality and morbidity. All scales showed poor accuracy in the prediction of response to first and second line antiepileptic drugs. At present, there are no reliable scores capable of predicting treatment responsiveness. Copyright © 2017 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
Davis, Eric; Devlin, Sean; Cooper, Candice; Nhaissi, Melissa; Paulson, Jennifer; Wells, Deborah; Scaradavou, Andromachi; Giralt, Sergio; Papadopoulos, Esperanza; Kernan, Nancy A; Byam, Courtney; Barker, Juliet N
2018-05-01
A strategy to rapidly determine if a matched unrelated donor (URD) can be secured for allograft recipients is needed. We sought to validate the accuracy of (1) HapLogic match predictions and (2) a resultant novel Search Prognosis (SP) patient categorization that could predict 8/8 HLA-matched URD(s) likelihood at search initiation. Patient prognosis categories at search initiation were correlated with URD confirmatory typing results. HapLogic-based SP categorizations accurately predicted the likelihood of an 8/8 HLA-match in 830 patients (1530 donors tested). Sixty percent of patients had 8/8 URD(s) identified. Patient SP categories (217 very good, 104 good, 178 fair, 33 poor, 153 very poor, 145 futile) were associated with a marked progressive decrease in 8/8 URD identification and transplantation. Very good to good categories were highly predictive of identifying and receiving an 8/8 URD regardless of ancestry. Europeans in fair/poor categories were more likely to identify and receive an 8/8 URD compared with non-Europeans. In all ancestries very poor and futile categories predicted no 8/8 URDs. HapLogic permits URD search results to be predicted once patient HLA typing and ancestry are obtained, dramatically improving search efficiency. Poor, very poor, and futile searches can be immediately recognized, thereby facilitating prompt pursuit of alternative donors. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
Enhancing the Performance of LibSVM Classifier by Kernel F-Score Feature Selection
NASA Astrophysics Data System (ADS)
Sarojini, Balakrishnan; Ramaraj, Narayanasamy; Nickolas, Savarimuthu
Medical data mining is the search for relationships and patterns within medical datasets that could provide useful knowledge for effective clinical decisions. The inclusion of irrelevant, redundant and noisy features in the process model results in poor predictive accuracy. Much research work in data mining has gone into improving the predictive accuracy of classifiers by applying feature selection techniques. Feature selection is valuable in medical data mining because the disease can then be diagnosed in this patient-care activity with a minimum number of significant features. The objective of this work is to show that selecting the more significant features improves the performance of the classifier. We empirically evaluate the classification effectiveness of the LibSVM classifier on a reduced feature subset of a diabetes dataset. The evaluations suggest that the selected feature subset improves the predictive accuracy of the classifier and reduces false negatives and false positives.
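As a rough illustration of the approach, the sketch below ranks features with a commonly used F-score (the Chen and Lin formulation, which may differ from the kernel F-score variant used in this work) and trains an SVM (sklearn's SVC, which wraps LIBSVM) on the top-ranked subset of a public binary-classification dataset standing in for the diabetes data.

```python
# Hedged sketch: F-score feature ranking followed by an SVM on a public
# stand-in dataset; not the authors' exact kernel F-score method or data.
import numpy as np
from sklearn.datasets import load_breast_cancer   # stand-in binary dataset
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def f_score(X, y):
    """Per-feature F-score (Chen & Lin style) for a binary target y in {0,1}."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / den

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = f_score(X_tr, y_tr)
top = np.argsort(scores)[::-1][:10]      # keep the 10 highest-scoring features

full = SVC().fit(X_tr, y_tr).score(X_te, y_te)
reduced = SVC().fit(X_tr[:, top], y_tr).score(X_te[:, top], y_te)
print(f"accuracy, all features: {full:.3f}; top-10 F-score features: {reduced:.3f}")
```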
Synthetic Stromgren photometry for F dwarf stars
NASA Technical Reports Server (NTRS)
Bell, R. A.
1988-01-01
Recent synthetic spectrum and color calculations for cool dwarf star models are tested by comparison with observation. The accuracy of the computed dependence of the thermal colors B-V and b-y on effective temperature is examined, and H-beta indices are presented and compared with observed values. The accuracy of the predictions of the Stromgren uvby system metal-abundance indicator m1 and luminosity indicator c1 are tested. A new calibration of the c1, b-y diagram in terms of absolute magnitudes is given, making use of recent calculations of stellar isochrones. Observations of very metal-poor subdwarfs are used to study the accuracy of the isochrones. The c1, b-y diagram of the subdwarfs is compared with that of the turnoff-region stars in the very metal-poor globular cluster NGC 6397.
Hydrometeorological model for streamflow prediction
Tangborn, Wendell V.
1979-01-01
The hydrometeorological model described in this manual was developed to predict seasonal streamflow from water in storage in a basin using streamflow and precipitation data. The model, as described, applies specifically to the Skokomish, Nisqually, and Cowlitz Rivers, in Washington State, and more generally to streams in other regions that derive seasonal runoff from melting snow. Thus the techniques demonstrated for these three drainage basins can be used as a guide for applying this method to other streams. Input to the computer program consists of daily averages of gaged runoff of these streams, and daily values of precipitation collected at Longmire, Kid Valley, and Cushman Dam. Predictions are based on estimates of the absolute storage of water, predominately as snow: storage is approximately equal to basin precipitation less observed runoff. A pre-forecast test season is used to revise the storage estimate and improve the prediction accuracy. To obtain maximum prediction accuracy for operational applications with this model, a systematic evaluation of several hydrologic and meteorologic variables is first necessary. Six input options to the computer program that control prediction accuracy are developed and demonstrated. Predictions of streamflow can be made at any time and for any length of season, although accuracy is usually poor for early-season predictions (before December 1) or for short seasons (less than 15 days). The coefficient of prediction (CP), the chief measure of accuracy used in this manual, approaches zero during the late autumn and early winter seasons and reaches a maximum of about 0.85 during the spring snowmelt season. (Kosco-USGS)
Shia, Wei-Chung; Huang, Yu-Len; Wu, Hwa-Koon; Chen, Dar-Ren
2017-05-01
Strategies are needed for the identification of a poor response to treatment and determination of appropriate chemotherapy strategies for patients in the early stages of neoadjuvant chemotherapy for breast cancer. We hypothesize that power Doppler ultrasound imaging can provide useful information on predicting response to neoadjuvant chemotherapy. The solid directional flow of vessels in breast tumors was used as a marker of pathologic complete responses (pCR) in patients undergoing neoadjuvant chemotherapy. Thirty-one breast cancer patients who received neoadjuvant chemotherapy and had tumors of 2 to 5 cm were recruited. Three-dimensional power Doppler ultrasound with high-definition flow imaging technology was used to acquire the indices of tumor blood flow/volume, and the chemotherapy response prediction was established, followed by support vector machine classification. The accuracy of pCR prediction before the first chemotherapy treatment was 83.87% (area under the ROC curve [AUC] = 0.6957). After the second chemotherapy treatment, the accuracy was 87.9% (AUC = 0.756). Trend analysis showed that good and poor responders exhibited different trends in vascular flow during chemotherapy. This preliminary study demonstrates the feasibility of using the vascular flow in breast tumors to predict chemotherapeutic efficacy. © 2017 by the American Institute of Ultrasound in Medicine.
Catto, James W F; Linkens, Derek A; Abbod, Maysam F; Chen, Minyou; Burton, Julian L; Feeley, Kenneth M; Hamdy, Freddie C
2003-09-15
New techniques for the prediction of tumor behavior are needed, because statistical analysis has poor accuracy and is not applicable to the individual. Artificial intelligence (AI) may provide these suitable methods. Whereas artificial neural networks (ANN), the best-studied form of AI, have been used successfully, their hidden networks remain an obstacle to their acceptance. Neuro-fuzzy modeling (NFM), another AI method, has a transparent functional layer and is without many of the drawbacks of ANN. We have compared the predictive accuracies of NFM, ANN, and traditional statistical methods, for the behavior of bladder cancer. Experimental molecular biomarkers, including p53 and the mismatch repair proteins, and conventional clinicopathological data were studied in a cohort of 109 patients with bladder cancer. For all three of the methods, models were produced to predict the presence and timing of a tumor relapse. Both methods of AI predicted relapse with an accuracy ranging from 88% to 95%. This was superior to statistical methods (71-77%; P < 0.0006). NFM appeared better than ANN at predicting the timing of relapse (P = 0.073). The use of AI can accurately predict cancer behavior. NFM has a similar or superior predictive accuracy to ANN. However, unlike the impenetrable "black-box" of a neural network, the rules of NFM are transparent, enabling validation from clinical knowledge and the manipulation of input variables to allow exploratory predictions. This technique could be used widely in a variety of areas of medicine.
Accuracy of four commonly used color vision tests in the identification of cone disorders.
Thiadens, Alberta A H J; Hoyng, Carel B; Polling, Jan Roelof; Bernaerts-Biskop, Riet; van den Born, L Ingeborgh; Klaver, Caroline C W
2013-04-01
To determine which color vision test is most appropriate for the identification of cone disorders. In a clinic-based study, four commonly used color vision tests were compared between patients with cone dystrophy (n = 37), controls with normal visual acuity (n = 35), and controls with low vision (n = 39) and legal blindness (n = 11). Main outcome measures were specificity, sensitivity, positive predictive value and discriminative accuracy of the Ishihara test, Hardy-Rand-Rittler (HRR) test, and the Lanthony and Farnsworth Panel D-15 tests. In the comparison between cone dystrophy and all controls, sensitivity, specificity and predictive value were highest for the HRR and Ishihara tests. When patients were compared to controls with normal vision, discriminative accuracy was highest for the HRR test (c-statistic for PD-axes 1, for T-axis 0.851). When compared to controls with poor vision, discriminative accuracy was again highest for the HRR test (c-statistic for PD-axes 0.900, for T-axis 0.766), followed by the Lanthony Panel D-15 test (c-statistic for PD-axes 0.880, for T-axis 0.500) and Ishihara test (c-statistic 0.886). Discriminative accuracies of all tests did not further decrease when patients were compared to controls who were legally blind. The HRR, Lanthony Panel D-15 and Ishihara all have a high discriminative accuracy to identify cone disorders, but the highest scores were for the HRR test. Poor visual acuity slightly decreased the accuracy of all tests. Our advice is to use the HRR test since this test also allows for evaluation of all three color axes and quantification of color defects.
Pan, Hui; Ba-Thein, William
2018-01-01
Global Pharma Health Fund (GPHF) Minilab™, a semi-quantitative thin-layer chromatography (TLC)-based commercially available test kit, is widely used in drug quality surveillance globally, but its diagnostic accuracy is unclear. We investigated the diagnostic accuracy of the Minilab system for antimicrobials, using high-performance liquid chromatography (HPLC) as the reference standard. Following the Minilab protocols and the Pharmacopoeia of the People's Republic of China protocols, Minilab TLC and HPLC were used to test five common antimicrobials (506 batches) for relative concentration of active pharmaceutical ingredients. The prevalence of poor-quality antimicrobials determined, respectively, by Minilab TLC and HPLC was amoxicillin (0% versus 14.9%), azithromycin (0% versus 17.4%), cefuroxime axetil (14.3% versus 0%), levofloxacin (0% versus 3.0%), and metronidazole (0% versus 38.0%). The Minilab TLC had false-positive and false-negative detection rates of 2.6% (13/506) and 15.2% (77/506), respectively, resulting in the following test characteristics: sensitivity 0%, specificity 97.0%, positive predictive value 0, negative predictive value 0.8, positive likelihood ratio 0, negative likelihood ratio 1.0, diagnostic odds ratio 0, and adjusted diagnostic odds ratio 0.2. This study demonstrates unsatisfactory diagnostic accuracy of the Minilab system in screening poor-quality antimicrobials of common use. Using Minilab as a stand-alone system for monitoring drug quality should be reconsidered.
Yang, Qinglin; Su, Yingying; Hussain, Mohammed; Chen, Weibi; Ye, Hong; Gao, Daiquan; Tian, Fei
2014-05-01
Burst suppression ratio (BSR) is a quantitative electroencephalography (qEEG) parameter. The purpose of our study was to compare the accuracy of BSR with that of other EEG parameters in predicting poor outcomes in adults who sustained post-anoxic coma while not being subjected to therapeutic hypothermia. EEG was registered and recorded at least once within 7 days of post-anoxic coma onset. Electrodes were placed according to the international 10-20 system, using a 16-channel layout. Each EEG expert scored raw EEG using a grading scale adapted from Young and scored amplitude-integrated electroencephalography tracings, in addition to obtaining qEEG parameters defined as BSR with a defined threshold. Glasgow Outcome Scale scores of 1 and 2 at 3 months, determined by two blinded neurologists, were defined as poor outcome. Sixty patients with a Glasgow Coma Scale score of 8 or less after an anoxic accident were included. The sensitivity (97.1%), specificity (73.3%), positive predictive value (82.5%), and negative predictive value (95.0%) of BSR in predicting poor outcome were higher than those of other EEG variables. BSR1 and BSR2 were reliable in predicting death (area under the curve > 0.8, P < 0.05), with the respective cutoff points being 39.8% and 61.6%. BSR1 was reliable in predicting poor outcome (area under the curve = 0.820, P < 0.05) with a cutoff point of 23.9%. BSR1 was also an independent predictor of increased risk of death (odds ratio = 1.042, 95% confidence interval: 1.012-1.073, P = 0.006). BSR may be a better predictor in prognosticating poor outcomes in patients with post-anoxic coma who do not undergo therapeutic hypothermia when compared to other qEEG parameters.
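BSR is the proportion of an EEG epoch spent in suppression, expressed as a percentage. The sketch below computes it for a synthetic signal and applies the reported 23.9% cutoff for poor outcome; the amplitude and minimum-duration criteria used to call a sample "suppressed" are assumptions, not the study's exact definition.

```python
# Illustrative BSR computation on a synthetic 60 s epoch. The suppression
# criterion (|EEG| < 10 uV for >= 0.5 s) is an assumption for this sketch.
import numpy as np

def burst_suppression_ratio(eeg, fs, amp_uv=10.0, min_sup_s=0.5):
    """Percentage of the epoch covered by suppression runs of >= min_sup_s."""
    suppressed = np.abs(eeg) < amp_uv          # sample-wise low-amplitude mask
    min_len = int(min_sup_s * fs)
    counted = np.zeros(len(eeg), dtype=bool)
    run_start = None
    for i, s in enumerate(suppressed):
        if s and run_start is None:
            run_start = i
        elif not s and run_start is not None:
            if i - run_start >= min_len:
                counted[run_start:i] = True
            run_start = None
    if run_start is not None and len(eeg) - run_start >= min_len:
        counted[run_start:] = True
    return 100.0 * counted.mean()

fs = 256
t = np.arange(0, 60, 1 / fs)                   # one 60 s epoch
eeg = 40 * np.sin(2 * np.pi * 10 * t)          # synthetic 10 Hz background (uV)
eeg[int(10 * fs):int(30 * fs)] = 2.0           # 20 s of flat, suppressed signal

bsr = burst_suppression_ratio(eeg, fs)
print(f"BSR = {bsr:.1f}%  ->  exceeds the 23.9% poor-outcome cutoff: {bsr > 23.9}")
```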
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
An increasing number of risk prediction models have been developed for estimating breast cancer risk in individual women. However, the performance of those models is questionable. We therefore conducted a study with the aim of systematically reviewing previous risk prediction models. The results from this review help to identify the most reliable model and indicate the strengths and weaknesses of each model to guide future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. The Gail and the Rosner and Colditz models were the significant models which were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistics: 0.53-0.66) and in external validation (concordance statistics: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be because of a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive for measuring improvements in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate the improvement in performance of a newly developed model.
Azadi, Sama; Karimi-Jashni, Ayoub
2016-02-01
Predicting the mass of solid waste generation plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, Artificial Neural Network (ANN) and Multiple Linear Regression (MLR), was verified to predict mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures (MAE, MAPE, RMSE and R) were used to evaluate the performance of these models. The MLR, as a conventional model, showed poor prediction performance. On the other hand, the results indicated that the ANN model, as a non-linear model, has a higher predictive accuracy when it comes to prediction of the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
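For reference, the four performance measures named above can be computed as in the short sketch below; the observed and predicted values are made-up numbers, not the study's data.

```python
# Sketch of the four performance measures (MAE, MAPE, RMSE, R) on
# illustrative observed vs. predicted waste-generation values.
import numpy as np

def mae(obs, pred):  return np.mean(np.abs(obs - pred))
def mape(obs, pred): return 100 * np.mean(np.abs((obs - pred) / obs))
def rmse(obs, pred): return np.sqrt(np.mean((obs - pred) ** 2))
def r(obs, pred):    return np.corrcoef(obs, pred)[0, 1]

observed  = np.array([120.0, 95.0, 140.0, 180.0, 75.0])   # e.g. tonnes/season
predicted = np.array([110.0, 100.0, 150.0, 170.0, 80.0])

print(f"MAE  = {mae(observed, predicted):.2f}")
print(f"MAPE = {mape(observed, predicted):.1f}%")
print(f"RMSE = {rmse(observed, predicted):.2f}")
print(f"R    = {r(observed, predicted):.3f}")
```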
Yu, Zhiyuan; Zheng, Jun; Guo, Rui; Ma, Lu; Li, Mou; Wang, Xiaoze; Lin, Sen; Li, Hao; You, Chao
2017-12-01
Hematoma expansion is independently associated with poor outcome in intracerebral hemorrhage (ICH). Blend sign is a simple predictor for hematoma expansion on non-contrast computed tomography. However, its accuracy for predicting hematoma expansion is inconsistent in previous studies. This meta-analysis aimed to systematically assess the performance of blend sign in predicting hematoma expansion in ICH. A systematic literature search was conducted. Original studies about the predictive accuracy of blend sign for hematoma expansion in ICH were included. Pooled sensitivity, specificity, positive and negative likelihood ratios were calculated. A summary receiver operating characteristic curve was constructed. Publication bias was assessed by Deeks' funnel plot asymmetry test. A total of 5 studies with 2248 patients were included in this meta-analysis. The pooled sensitivity, specificity, positive and negative likelihood ratios of blend sign for predicting hematoma expansion were 0.28, 0.92, 3.4 and 0.78, respectively. The area under the curve (AUC) was 0.85. No significant publication bias was found. This meta-analysis demonstrates that blend sign is a useful predictor with high specificity for hematoma expansion in ICH. Further studies with larger sample sizes are still necessary to verify the accuracy of blend sign for predicting hematoma expansion. Copyright © 2017 Elsevier B.V. All rights reserved.
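As a quick arithmetic check of how the pooled figures relate, the sketch below derives likelihood ratios and a diagnostic odds ratio from the pooled sensitivity and specificity. Bivariate meta-analytic pooling does not reduce exactly to these simple ratios, so small differences from the reported 3.4 and 0.78 are expected; the DOR shown is an illustration, not a value reported above.

```python
# Likelihood ratios and diagnostic odds ratio derived from the pooled
# sensitivity (0.28) and specificity (0.92) reported in the abstract.
sens, spec = 0.28, 0.92

lr_pos = sens / (1 - spec)          # ~3.5 (reported pooled value: 3.4)
lr_neg = (1 - sens) / spec          # ~0.78 (matches the reported 0.78)
dor = lr_pos / lr_neg               # diagnostic odds ratio (illustrative)

print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}, DOR = {dor:.1f}")
```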
Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T
2018-02-01
The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBVFull), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Yc), (v) correlation from method iv divided by the square root of the heritability (Ych) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Ycs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Ych approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBVFull performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
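A minimal sketch of validation method (v) above: correlate predictions with corrected phenotypes in the validation set and divide by the square root of heritability. The data and heritability value are synthetic placeholders, not the study's records.

```python
# Sketch of the Ych validation measure: accuracy ~ cor(EBV, corrected phenotype)
# divided by sqrt(h2). Data and h2 below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n, h2 = 2000, 0.10                                   # validation animals, assumed heritability

true_bv = rng.normal(0, np.sqrt(h2), n)              # true breeding values
y_corrected = true_bv + rng.normal(0, np.sqrt(1 - h2), n)   # corrected phenotypes
ebv = 0.6 * true_bv + rng.normal(0, 0.05, n)         # imperfect predictions (EBVs)

acc_ych = np.corrcoef(ebv, y_corrected)[0, 1] / np.sqrt(h2)
acc_true = np.corrcoef(ebv, true_bv)[0, 1]           # benchmark: correlation with true BV
print(f"Ych estimate of accuracy: {acc_ych:.2f}, true accuracy: {acc_true:.2f}")
```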
Estimation of proteinuria as a predictor of complications of pre-eclampsia: a systematic review
Thangaratinam, Shakila; Coomarasamy, Arri; O'Mahony, Fidelma; Sharp, Steve; Zamora, Javier; Khan, Khalid S; Ismail, Khaled MK
2009-01-01
Background Proteinuria is one of the essential criteria for the clinical diagnosis of pre-eclampsia. Increasing levels of proteinuria are considered to be associated with adverse maternal and fetal outcomes. We aim to determine the accuracy with which the amount of proteinuria predicts maternal and fetal complications in women with pre-eclampsia by systematic quantitative review of test accuracy studies. Methods We conducted electronic searches in MEDLINE (1951 to 2007), EMBASE (1980 to 2007), the Cochrane Library (2007) and the MEDION database to identify relevant articles, supplemented by hand-searching of selected specialist journals and reference lists of articles. There were no language restrictions for any of these searches. Two reviewers independently selected those articles in which the accuracy of proteinuria estimate was evaluated to predict maternal and fetal complications of pre-eclampsia. Data were extracted on study characteristics, quality and accuracy to construct 2 × 2 tables with maternal and fetal complications as reference standards. Results Sixteen primary articles with a total of 6749 women met the selection criteria with levels of proteinuria estimated by urine dipstick, 24-hour urine proteinuria or urine protein:creatinine ratio as a predictor of complications of pre-eclampsia. All 10 studies predicting maternal outcomes showed that proteinuria is a poor predictor of maternal complications in women with pre-eclampsia. Seventeen studies used laboratory analysis and eight studies bedside analysis to assess the accuracy of proteinuria in predicting fetal and neonatal complications. Summary likelihood ratios of positive and negative tests for the threshold level of 5 g/24 h were 2.0 (95% CI 1.5, 2.7) and 0.53 (95% CI 0.27, 1) for stillbirths, 1.5 (95% CI 0.94, 2.4) and 0.73 (95% CI 0.39, 1.4) for neonatal deaths and 1.5 (95% CI 1, 2) and 0.78 (95% CI 0.64, 0.95) for Neonatal Intensive Care Unit admission. Conclusion Measurement of proteinuria is a poor predictor of either maternal or fetal complications in women with pre-eclampsia. PMID:19317889
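To see why a summary LR+ of about 2 is a poor predictor, the post-test probability can be computed from the pre-test odds multiplied by the likelihood ratio, as in the sketch below; the 10% pre-test risk is an assumption for illustration only.

```python
# Post-test probability from pre-test probability and a likelihood ratio,
# using the summary LRs reported for stillbirth at the 5 g/24 h threshold.
def post_test_probability(pre_test_prob, likelihood_ratio):
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

pre = 0.10  # assumed 10% pre-test risk, purely illustrative
print(f"positive test (LR+ = 2.0):  {post_test_probability(pre, 2.0):.1%}")   # ~18%
print(f"negative test (LR- = 0.53): {post_test_probability(pre, 0.53):.1%}")  # ~5.6%
```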
Hooper, Lee; Abdelhamid, Asmaa; Ali, Adam; Bunn, Diane K; Jennings, Amy; John, W Garry; Kerry, Susan; Lindner, Gregor; Pfortmueller, Carmen A; Sjöstrand, Fredrik; Walsh, Neil P; Fairweather-Tait, Susan J; Potter, John F; Hunter, Paul R; Shepstone, Lee
2015-01-01
Objectives To assess which osmolarity equation best predicts directly measured serum/plasma osmolality and whether its use could add value to routine blood test results through screening for dehydration in older people. Design Diagnostic accuracy study. Participants Older people (≥65 years) in 5 cohorts: Dietary Strategies for Healthy Ageing in Europe (NU-AGE, living in the community), Dehydration Recognition In our Elders (DRIE, living in residential care), Fortes (admitted to acute medical care), Sjöstrand (emergency room) or Pfortmueller cohorts (hospitalised with liver cirrhosis). Reference standard for hydration status Directly measured serum/plasma osmolality: current dehydration (serum osmolality >300 mOsm/kg), impending/current dehydration (≥295 mOsm/kg). Index tests 39 osmolarity equations calculated using serum indices from the same blood draw as directly measured osmolality. Results Across 5 cohorts 595 older people were included, of whom 19% were dehydrated (directly measured osmolality >300 mOsm/kg). Of 39 osmolarity equations, 5 showed reasonable agreement with directly measured osmolality and 3 had good predictive accuracy in subgroups with diabetes and poor renal function. Two equations were characterised by narrower limits of agreement, low levels of differential bias and good diagnostic accuracy in receiver operating characteristic plots (areas under the curve >0.8). The best equation was osmolarity = 1.86 × (Na+ + K+) + 1.15 × glucose + urea + 14 (all measured in mmol/L). It appeared useful in people aged ≥65 years with and without diabetes, poor renal function, dehydration, in men and women, with a range of ages, health, cognitive and functional status. Conclusions Some commonly used osmolarity equations work poorly, and should not be used. Given costs and prevalence of dehydration in older people we suggest use of the best formula by pathology laboratories using a cutpoint of 295 mOsm/L (sensitivity 85%, specificity 59%), to report dehydration risk opportunistically when serum glucose, urea and electrolytes are measured for other reasons in older adults. Trial registration numbers: DRIE: Research Register for Social Care, 122273; NU-AGE: ClinicalTrials.gov NCT01754012. PMID:26490100
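A minimal sketch of the best-performing equation and the suggested 295 mOsm/L screening cutpoint is given below; the example inputs are illustrative values, not study data.

```python
# Calculated osmolarity per the equation reported above:
# osmolarity = 1.86*(Na+ + K+) + 1.15*glucose + urea + 14 (all in mmol/L),
# flagged against the suggested 295 mOsm/L screening cutpoint.
def osmolarity(na, k, glucose, urea):
    """Calculated osmolarity in mOsm/L; all inputs in mmol/L."""
    return 1.86 * (na + k) + 1.15 * glucose + urea + 14

def dehydration_risk(na, k, glucose, urea, cutpoint=295):
    return osmolarity(na, k, glucose, urea) >= cutpoint

# Illustrative inputs: Na 140, K 4.0, glucose 5.0, urea 5.0 mmol/L
osm = osmolarity(140, 4.0, 5.0, 5.0)
print(f"calculated osmolarity: {osm:.1f} mOsm/L, "
      f"flag dehydration risk: {dehydration_risk(140, 4.0, 5.0, 5.0)}")
```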
Squara, Fabien; Scarlatti, Didier; Riccini, Philippe; Garret, Gauthier; Moceri, Pamela; Ferrari, Emile
2018-03-13
Fluoroscopic criteria have been described for the documentation of septal right ventricular (RV) lead positioning, but their accuracy remains questioned. Consecutive patients undergoing pacemaker or defibrillator implantation were prospectively included. RV lead was positioned using postero-anterior and left anterior oblique 40° incidences, and right anterior oblique 30° to rule out coronary sinus positioning when suspected. RV lead positioning using fluoroscopy was compared to true RV lead positioning as assessed by transthoracic echocardiography (TTE). Precise anatomical localizations were determined with both modalities; then, RV lead positioning was ultimately dichotomized into two simple clinically relevant categories: RV septal or RV free wall. Accuracy of fluoroscopy for RV lead positioning was then assessed by comparison with TTE. We included 100 patients. On TTE, 66/100 had a septal RV lead and 34/100 had a free wall RV lead. Fluoroscopy had moderate agreement with TTE for precise anatomical localization of RV lead (k = 0.53), and poor agreement for septal/free wall localization (k = 0.36). For predicting septal RV lead positioning, classical fluoroscopy criteria had a high sensitivity (95.5%; 63/66 patients having a septal RV lead on TTE were correctly identified by fluoroscopy) but a very low specificity (35.3%; only 12/34 patients having a free wall RV lead on TTE were correctly identified by fluoroscopy). Classical fluoroscopy criteria have a poor accuracy for identifying RV free wall leads, which are most of the time misclassified as septal. This raises important concerns about the efficacy and safety of RV lead positioning using classical fluoroscopy criteria.
Arnold, David T; Rowen, Donna; Versteegh, Matthijs M; Morley, Anna; Hooper, Clare E; Maskell, Nicholas A
2015-01-23
In order to estimate utilities for cancer studies where the EQ-5D was not used, the EORTC QLQ-C30 can be mapped onto the EQ-5D using existing mapping algorithms. Several mapping algorithms exist for this transformation; however, algorithms tend to lose accuracy in patients in poor health states. The aim of this study was to test all existing mapping algorithms of the QLQ-C30 onto the EQ-5D in a dataset of patients with malignant pleural mesothelioma, an invariably fatal malignancy for which no previous mapping estimation has been published. Health related quality of life (HRQoL) data in which both the EQ-5D and QLQ-C30 were used simultaneously were obtained from the UK-based prospective observational SWAMP (South West Area Mesothelioma and Pemetrexed) trial. In the original trial, 73 patients with pleural mesothelioma were offered palliative chemotherapy and their HRQoL was assessed across five time points. These data were used to test the nine available mapping algorithms found in the literature, comparing predicted against observed EQ-5D values. The ability of the algorithms to predict the mean, minimise error and detect clinically significant differences was assessed. The dataset had a total of 250 observations across 5 timepoints. The linear regression mapping algorithms tested generally performed poorly, over-estimating the predicted compared to observed EQ-5D values, especially when the observed EQ-5D was below 0.5. The best performing algorithm used a response mapping method and predicted the mean EQ-5D accurately, with an average root mean squared error of 0.17 (standard deviation 0.22). This algorithm reliably discriminated between clinically distinct subgroups seen in the primary dataset. This study tested mapping algorithms in a population with poor health states, where they have previously been shown to perform poorly. Further research into EQ-5D estimation should be directed at response mapping methods, given their superior performance in this study.
Comparison of Drainmod Based Watershed Scale Models
Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya
2004-01-01
Watershed scale hydrology and water quality models (DRAINMOD-DUFLOW, DRAINMOD-W, DRAINMOD-GIS and WATGIS) that describe the nitrogen loadings at the outlet of poorly drained watersheds were examined with respect to their accuracy and uncertainty in model predictions. Latin Hypercube Sampling (LHS) was applied to determine the impact of uncertainty in estimating field...
Using Keystroke Analytics to Improve Pass-Fail Classifiers
ERIC Educational Resources Information Center
Casey, Kevin
2017-01-01
Learning analytics offers insights into student behaviour and the potential to detect poor performers before they fail exams. If the activity is primarily online (for example computer programming), a wealth of low-level data can be made available that allows unprecedented accuracy in predicting which students will pass or fail. In this paper, we…
Does ADHD in adults affect the relative accuracy of metamemory judgments?
Knouse, Laura E; Paradise, Matthew J; Dunlosky, John
2006-11-01
Prior research suggests that individuals with ADHD overestimate their performance across domains despite performing more poorly in these domains. The authors introduce measures of accuracy from the larger realm of judgment and decision making--namely, relative accuracy and calibration--to the study of self-evaluative judgment accuracy in adults with ADHD. Twenty-eight adults with ADHD and 28 matched controls participate in a computer-administered paired-associate learning task and predict their future recall using immediate and delayed judgments of learning (JOLs). Retrospective confidence judgments are also collected. Groups perform equally in terms of judgment magnitude and absolute judgment accuracy as measured by discrepancy scores and calibration curves. Both groups benefit equally from making their JOL at a delay, and the group with ADHD show higher relative accuracy for delayed judgments. Results suggest that under certain circumstances, adults with ADHD can make accurate judgments about their future memory.
Nnoaham, Kelechi E.; Hummelshoj, Lone; Kennedy, Stephen H.; Jenkinson, Crispin; Zondervan, Krina T.
2012-01-01
Objective To generate and validate symptom-based models to predict endometriosis among symptomatic women prior to undergoing their first laparoscopy. Design Prospective, observational, two-phase study, in which women completed a 25-item questionnaire prior to surgery. Setting Nineteen hospitals in 13 countries. Patient(s) Symptomatic women (n = 1,396) scheduled for laparoscopy without a previous surgical diagnosis of endometriosis. Intervention(s) None. Main Outcome Measure(s) Sensitivity and specificity of endometriosis diagnosis predicted by symptoms and patient characteristics from optimal models developed using multiple logistic regression analyses in one data set (phase I), and independently validated in a second data set (phase II) by receiver operating characteristic (ROC) curve analysis. Result(s) Three hundred sixty (46.7%) women in phase I and 364 (58.2%) in phase II were diagnosed with endometriosis at laparoscopy. Menstrual dyschezia (pain on opening bowels) and a history of benign ovarian cysts most strongly predicted both any and stage III and IV endometriosis in both phases. Prediction of any-stage endometriosis, although improved by ultrasound scan evidence of cyst/nodules, was relatively poor (area under the curve [AUC] = 68.3). Stage III and IV disease was predicted with good accuracy (AUC = 84.9, sensitivity of 82.3% and specificity 75.8% at an optimal cut-off of 0.24). Conclusion(s) Our symptom-based models predict any-stage endometriosis relatively poorly and stage III and IV disease with good accuracy. Predictive tools based on such models could help to prioritize women for surgical investigation in clinical practice and thus contribute to reducing time to diagnosis. We invite other researchers to validate the key models in additional populations. PMID:22657249
Wang, Yan-peng; Gong, Qi; Yu, Sheng-rong; Liu, You-yan
2012-04-01
A method for detecting trace impurities in a high-concentration matrix by ICP-AES based on partial least squares (PLS) was established. The research showed that PLS could effectively correct the interference caused by high matrix concentrations and could tolerate higher matrix concentrations than multicomponent spectral fitting (MSF). When the mass ratios of matrix to impurities were from 1 000 : 1 to 20 000 : 1, the recoveries of standard additions were between 95% and 105% by PLS. For systems in which the interference effect is nonlinearly correlated with the matrix concentration, the prediction accuracy of the normal PLS method was poor, but it could be improved greatly by using LIN-PPLS, which is based on a matrix transformation of the sample concentration. The contents of Co, Pb and Ga in stream sediment (GBW07312) were determined by MSF, PLS and LIN-PPLS, respectively. The results showed that the prediction accuracy of LIN-PPLS was better than that of PLS, and the prediction accuracy of PLS was better than that of MSF.
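The abstract does not give implementation details, so the following is only a generic sketch of how a PLS calibration of the kind described might be set up with scikit-learn: emission intensities at several spectral channels, dominated by a high-concentration matrix, are regressed onto known impurity concentrations, and the fitted model is then applied to new samples. The data are synthetic, and the LIN-PPLS variant based on a matrix-concentration transformation is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic calibration set: 40 standards, 12 spectral channels.
# The impurity signal is weak and overlaps with a strong matrix contribution.
n_samples, n_channels = 40, 12
impurity = rng.uniform(0.01, 1.0, size=(n_samples, 1))        # "true" impurity concentrations
matrix = rng.uniform(100.0, 2000.0, size=(n_samples, 1))      # high-concentration matrix
channels = (impurity @ rng.uniform(0.5, 1.5, size=(1, n_channels))
            + matrix @ rng.uniform(0.001, 0.01, size=(1, n_channels))
            + rng.normal(scale=0.05, size=(n_samples, n_channels)))

# Fit a PLS calibration and predict a few samples back, as a crude recovery check.
pls = PLSRegression(n_components=3)
pls.fit(channels, impurity)
predicted = pls.predict(channels[:5])
print(np.c_[impurity[:5], predicted])
```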
Jaiswar, S P; Natu, S M; Sujata; Sankhwar, P L; Manjari, Gupta
2015-12-01
To study the correlation of ovarian reserve with biophysical markers (antral follicle count and ovarian volume) and biochemical markers (serum FSH, inhibin B, and AMH), and to use these markers to predict poor ovarian response to ovulation induction. This is a prospective observational study. One hundred infertile women attending the Obst & Gynae Dept, KGMU were recruited. Blood samples were collected on day 2/day 3 for assessment of serum FSH, inhibin B, and AMH, and TVS was done for antral follicle count and ovarian volume. Clomiphene citrate 100 mg once daily was given from day 2 to day 6, and patients were followed up with serial USG measurements. The numbers of dominant follicles (≥14 mm) at the time of hCG administration were counted. Patients with <3 follicles in the 1st cycle were subjected to a 2nd cycle of clomiphene 100 mg once daily from day 2 to day 6, with Inj HMG 150 IU given i.m. starting from day 8 and every alternate day until at least one leading follicle attained ≥18 mm. Development of <3 follicles at the end of the 2nd cycle was considered a poor response. Univariate analyses showed that serum inhibin B presented the highest discriminating potential for predicting poor ovarian response (ROC AUC = 0.862). In the multivariate logistic regression model, the variables age, FSH, AMH, inhibin B, and AFC remained significant, and the resulting model showed a predictive accuracy of 84.4%. A multimarker logistic regression model for predicting poor ovarian response was derived through this study. Thus, potential poor responders could be identified easily, and an appropriate ovarian stimulation protocol could be devised for such patients.
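The analysis pattern described (univariate ROC AUC for individual markers, then a multivariate logistic regression combining age, FSH, AMH, inhibin B and AFC) is standard. The sketch below illustrates that pattern on synthetic data with scikit-learn; the variables, coefficients and values are placeholders rather than the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 100

# Synthetic markers loosely mimicking the variables named in the abstract.
age = rng.normal(32, 5, n)
fsh = rng.normal(7, 2, n)
amh = rng.normal(2.5, 1.2, n)
inhibin_b = rng.normal(80, 30, n)
afc = rng.normal(10, 4, n)
# Synthetic outcome: poor response made more likely by low inhibin B and AMH.
logit = 2.0 - 0.03 * inhibin_b - 0.8 * amh + 0.05 * age
poor_response = rng.random(n) < 1 / (1 + np.exp(-logit))

# Univariate discrimination of a single marker (lower inhibin B -> poor response).
print("inhibin B ROC AUC:", roc_auc_score(poor_response, -inhibin_b))

# Multivariate model combining all markers.
X = np.column_stack([age, fsh, amh, inhibin_b, afc])
model = LogisticRegression(max_iter=1000).fit(X, poor_response)
print("multimarker ROC AUC:", roc_auc_score(poor_response, model.predict_proba(X)[:, 1]))
```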
Achievable accuracy of hip screw holding power estimation by insertion torque measurement.
Erani, Paolo; Baleani, Massimiliano
2018-02-01
To ensure stability of proximal femoral fractures, the hip screw must firmly engage the femoral head. Some studies have suggested that screw holding power in trabecular bone could be evaluated intraoperatively through measurement of the screw insertion torque. However, those studies used synthetic bone, instead of trabecular bone, as the host material, or they did not evaluate the accuracy of the predictions. We determined prediction accuracy, also assessing the impact of screw design and host material. Under highly repeatable experimental conditions that disregarded the complexities of the clinical procedure, we measured insertion torque and pullout strength for four screw designs, in 120 synthetic and 80 trabecular bone specimens of variable density. For both host materials, we calculated the root-mean-square error and the mean-absolute-percentage error of predictions based on the best-fitting model of the torque-pullout data, in both single-screw and merged datasets. Predictions based on screw-specific regression models were the most accurate. Host material affected prediction accuracy: replacing synthetic with trabecular bone decreased both root-mean-square errors, from 0.54-0.76 kN to 0.21-0.40 kN, and mean-absolute-percentage errors, from 14-21% to 10-12%. However, holding power predicted from low insertion torques remained inaccurate, with errors up to 40% for torques below 1 Nm. In poor-quality trabecular bone, tissue inhomogeneities likely affect pullout strength and insertion torque to different extents, limiting the predictive power of the latter. This bias decreases when the screw engages good-quality bone. Under this condition, predictions become more accurate, although this result must be confirmed by close in-vitro simulation of the clinical procedure. Copyright © 2018 Elsevier Ltd. All rights reserved.
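Root-mean-square error and mean-absolute-percentage error are the two agreement measures used throughout the abstract above. For reference, a minimal sketch of both, applied to invented predicted and measured pullout strengths:

```python
import numpy as np

def rmse(predicted, measured):
    """Root-mean-square error, in the units of the measurement (here kN)."""
    predicted, measured = np.asarray(predicted, float), np.asarray(measured, float)
    return np.sqrt(np.mean((predicted - measured) ** 2))

def mape(predicted, measured):
    """Mean absolute percentage error."""
    predicted, measured = np.asarray(predicted, float), np.asarray(measured, float)
    return 100 * np.mean(np.abs(predicted - measured) / np.abs(measured))

# Invented pullout strengths (kN): regression predictions vs. measured values,
# not the study's data.
predicted = [2.1, 1.4, 3.0, 0.8, 2.6]
measured = [2.3, 1.2, 3.2, 1.1, 2.5]
print(f"RMSE = {rmse(predicted, measured):.2f} kN, MAPE = {mape(predicted, measured):.1f}%")
```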
Fukunishi, Yoshifumi
2010-01-01
For fragment-based drug development (FBDD), both hit (active) compound prediction and docking-pose (protein-ligand complex structure) prediction of the hit compound are important, since chemical modification (fragment linking, fragment evolution) subsequent to the hit discovery must be performed based on the protein-ligand complex structure. However, the naïve protein-compound docking calculation shows poor accuracy in terms of docking-pose prediction. Thus, post-processing of the protein-compound docking is necessary. Recently, several methods for the post-processing of protein-compound docking have been proposed. In FBDD, the compounds are smaller than those for conventional drug screening. This makes it difficult to perform the protein-compound docking calculation. A method to avoid this problem has been reported. Protein-ligand binding free energy estimation is useful to reduce the procedures involved in the chemical modification of the hit fragment. Several prediction methods have been proposed for high-accuracy estimation of protein-ligand binding free energy. This paper summarizes the various computational methods proposed for docking-pose prediction and their usefulness in FBDD.
Chagpar, Anees B.; Middleton, Lavinia P.; Sahin, Aysegul A.; Dempsey, Peter; Buzdar, Aman U.; Mirza, Attiqa N.; Ames, Fredrick C.; Babiera, Gildy V.; Feig, Barry W.; Hunt, Kelly K.; Kuerer, Henry M.; Meric-Bernstam, Funda; Ross, Merrick I.; Singletary, S Eva
2006-01-01
Objective: To assess the accuracy of physical examination, ultrasonography, and mammography in predicting residual size of breast tumors following neoadjuvant chemotherapy. Background: Neoadjuvant chemotherapy is an accepted part of the management of stage II and III breast cancer. Accurate prediction of residual pathologic tumor size after neoadjuvant chemotherapy is critical in guiding surgical therapy. Although physical examination, ultrasonography, and mammography have all been used to predict residual tumor size, there have been conflicting reports about the accuracy of these methods in the neoadjuvant setting. Methods: We reviewed the records of 189 patients who participated in 1 of 2 protocols using doxorubicin-containing neoadjuvant chemotherapy, and who had assessment by physical examination, ultrasonography, and/or mammography no more than 60 days before their surgical resection. Size correlations were performed using Spearman rho analysis. Clinical and pathologic measurements were also compared categorically using the weighted kappa statistic. Results: Size estimates by physical examination, ultrasonography, and mammography were only moderately correlated with residual pathologic tumor size after neoadjuvant chemotherapy (correlation coefficients: 0.42, 0.42, and 0.41, respectively), with an accuracy of ±1 cm in 66% of patients by physical examination, 75% by ultrasonography, and 70% by mammography. Kappa values (0.24–0.35) indicated poor agreement between clinical and pathologic measurements. Conclusion: Physical examination, ultrasonography, and mammography were only moderately useful for predicting residual pathologic tumor size after neoadjuvant chemotherapy. PMID:16432360
Akinwuntan, A E; Backus, D; Grayson, J; Devos, H
2018-05-26
Some symptoms of multiple sclerosis (MS) affect driving. In a recent study, performance on five cognitive tests predicted the on-road test performance of individuals with relapsing-remitting MS with 91% accuracy, 70% sensitivity and 97% specificity. However, the accuracy with which the battery will predict the driving performance of a different cohort that includes all types of MS is unknown. Participants (n = 118; 48 ± 9 years of age; 97 females) performed a comprehensive off-road evaluation that lasted about 3 h and a standardized on-road test that lasted approximately 45 min over a 2-day period within the same week. Performance on the five cognitive tests was used to predict participants' performance on the standardized on-road test. Performance on the five tests together predicted outcome of the on-road test with 82% accuracy, 42% sensitivity and 90% specificity. The accuracy of predicting the on-road performance of a new MS cohort using performance on the battery of five cognitive tests remained very high (82%). The battery, which was administrable in <45 min and cost ~$150, was better at identifying those who actually passed the on-road test (90% specificity). The sensitivity (42%) of the battery indicated that it should not be used as the sole determinant of poor driving-related cognitive skills. A fail performance on the battery should only imply that more comprehensive testing is warranted. © 2018 EAN.
Carroll, Julia M; Mundy, Ian R; Cunningham, Anna J
2014-09-01
It is well established that speech, language and phonological skills are closely associated with literacy, and that children with a family risk of dyslexia (FRD) tend to show deficits in each of these areas in the preschool years. This paper examines what the relationships are between FRD and these skills, and whether deficits in speech, language and phonological processing fully account for the increased risk of dyslexia in children with FRD. One hundred and fifty-three 4-6-year-old children, 44 of whom had FRD, completed a battery of speech, language, phonology and literacy tasks. Word reading and spelling were retested 6 months later, and text reading accuracy and reading comprehension were tested 3 years later. The children with FRD were at increased risk of developing difficulties in reading accuracy, but not reading comprehension. Four groups were compared: good and poor readers with and without FRD. In most cases good readers outperformed poor readers regardless of family history, but there was an effect of family history on naming and nonword repetition regardless of literacy outcome, suggesting a role for speech production skills as an endophenotype of dyslexia. Phonological processing predicted spelling, while language predicted text reading accuracy and comprehension. FRD was a significant additional predictor of reading and spelling after controlling for speech production, language and phonological processing, suggesting that children with FRD show additional difficulties in literacy that cannot be fully explained in terms of their language and phonological skills. © 2014 John Wiley & Sons Ltd.
The Effects of Otitis Media on Articulation. Final Report for 1982-1983.
ERIC Educational Resources Information Center
Roberts, Joanne Erwick
The study examined the relationship in 44 preschoolers (considered to have varying degrees of predicted risk for poor school performance) between otitis media (middle ear disease) during the first 3 years of life and speech production (articulation) during preschool and school age years. Speech production accuracy was assessed by the number of…
Zheng, Jun; Yu, Zhiyuan; Xu, Zhao; Li, Mou; Wang, Xiaoze; Lin, Sen; Li, Hao; You, Chao
2017-05-12
BACKGROUND Hematoma expansion is associated with poor outcome in intracerebral hemorrhage (ICH) patients. The spot sign and the blend sign are reliable tools for predicting hematoma expansion in ICH patients. The aim of this study was to compare the accuracy of the two signs in the prediction of hematoma expansion. MATERIAL AND METHODS Patients with spontaneous ICH were screened for the presence of the computed tomography angiography (CTA) spot sign and the non-contrast CT (NCCT) blend sign within 6 hours after onset of symptoms. The sensitivity, specificity, and positive and negative predictive values of the spot sign and the blend sign in predicting hematoma expansion were calculated. The accuracy of the spot sign and the blend sign in predicting hematoma expansion was analyzed by receiver-operator analysis. RESULTS A total of 115 patients were enrolled in this study. The spot sign was observed in 25 (21.74%) patients, whereas the blend sign was observed in 22 (19.13%) patients. Of the 28 patients with hematoma expansion, the CTA spot sign was found on admission CT scans in 16 (57.14%) and the NCCT blend sign in 12 (42.86%), respectively. The sensitivity, specificity, positive predictive value, and negative predictive value of the spot sign for predicting hematoma expansion were 57.14%, 89.66%, 64.00%, and 86.67%, respectively. In contrast, the sensitivity, specificity, positive predictive value, and negative predictive value of the blend sign were 42.86%, 88.51%, 54.55%, and 82.80%, respectively. The area under the curve (AUC) of the spot sign was 0.734, which was higher than that of the blend sign (0.657). CONCLUSIONS Both the spot sign and the blend sign seemed to be good predictors for hematoma expansion, and the spot sign appeared to have better predictive accuracy.
Can verbal working memory training improve reading?
Banales, Erin; Kohnen, Saskia; McArthur, Genevieve
2015-01-01
The aim of the current study was to determine whether poor verbal working memory is associated with poor word reading accuracy because the former causes the latter, or the latter causes the former. To this end, we tested whether (a) verbal working memory training improves poor verbal working memory or poor word reading accuracy, and whether (b) reading training improves poor reading accuracy or verbal working memory in a case series of four children with poor word reading accuracy and verbal working memory. Each child completed 8 weeks of verbal working memory training and 8 weeks of reading training. Verbal working memory training improved verbal working memory in two of the four children, but did not improve their reading accuracy. Similarly, reading training improved word reading accuracy in all children, but did not improve their verbal working memory. These results suggest that the causal links between verbal working memory and reading accuracy may not be as direct as has been assumed.
Torres-Dowdall, J.; Farmer, A.H.; Bucher, E.H.; Rye, R.O.; Landis, G.
2009-01-01
Stable isotope analyses have revolutionized the study of migratory connectivity. However, as with all tools, their limitations must be understood in order to derive the maximum benefit of a particular application. The goal of this study was to evaluate the efficacy of stable isotopes of C, N, H, O and S for assigning known-origin feathers to the molting sites of migrant shorebird species wintering and breeding in Argentina. Specific objectives were to: 1) compare the efficacy of the technique for studying shorebird species with different migration patterns, life histories and habitat-use patterns; 2) evaluate the grouping of species with similar migration and habitat use patterns in a single analysis to potentially improve prediction accuracy; and 3) evaluate the potential gains in prediction accuracy that might be achieved from using multiple stable isotopes. The efficacy of stable isotope ratios to determine origin was found to vary with species. While one species (White-rumped Sandpiper, Calidris fuscicollis) had high levels of accuracy assigning samples to known origin (91% of samples correctly assigned), another (Collared Plover, Charadrius collaris) showed low levels of accuracy (52% of samples correctly assigned). Intra-individual variability may account for this difference in efficacy. The prediction model for three species with similar migration and habitat-use patterns performed poorly compared with the model for just one of the species (71% versus 91% of samples correctly assigned). Thus, combining multiple sympatric species may not improve model prediction accuracy. Increasing the number of stable isotopes in the analyses increased the accuracy of assigning shorebirds to their molting origin, but the best combination - involving a subset of all the isotopes analyzed - varied among species.
Choi, Seung Pill; Park, Kyu Nam; Wee, Jung Hee; Park, Jeong Ho; Youn, Chun Song; Kim, Han Joon; Oh, Sang Hoon; Oh, Yoon Sang; Kim, Soo Hyun; Oh, Joo Suk
2017-10-01
In cardiac arrest patients treated with targeted temperature management (TTM), it is not certain whether somatosensory evoked potentials (SEPs) and visual evoked potentials (VEPs) can predict neurological outcomes during TTM. The aim of this study was to investigate the prognostic value of SEPs and VEPs during TTM and after rewarming. This retrospective cohort study included comatose patients resuscitated from cardiac arrest and treated with TTM between March 2007 and July 2015. SEPs and VEPs were recorded during TTM and after rewarming in these patients. Neurological outcome was assessed at discharge by the Cerebral Performance Category (CPC) Scale. In total, 115 patients were included. A total of 175 SEPs and 150 VEPs were performed. Five SEPs recorded during TTM and nine SEPs recorded after rewarming were excluded from outcome prediction by SEPs due to an indeterminable N20 response caused by technical error. Using 80 SEPs and 85 VEPs recorded during TTM, absent SEPs yielded a sensitivity of 58% and a specificity of 100% for poor outcome (CPC 3-5), and absent VEPs predicted poor neurological outcome with a sensitivity of 44% and a specificity of 96%. The AUC of the combination of SEPs and VEPs was superior to either test alone (0.788 for absent SEPs and 0.713 for absent VEPs compared with 0.838 for the combination). After rewarming, absent SEPs and absent VEPs predicted poor neurological outcome with a specificity of 100%. When SEPs and VEPs were combined, VEPs slightly increased the prognostic accuracy of SEPs alone. Although one patient with an absent VEP during TTM had a good neurological outcome, none of the patients with a good neurological outcome had an absent VEP after rewarming. Absent SEPs could predict poor neurological outcome during TTM as well as after rewarming. Absent VEPs may predict poor neurological outcome in both periods, and VEPs may provide additional prognostic value in outcome prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
Integrative genetic risk prediction using non-parametric empirical Bayes classification.
Zhao, Sihai Dave
2017-06-01
Genetic risk prediction is an important component of individualized medicine, but prediction accuracies remain low for many complex diseases. A fundamental limitation is the sample sizes of the studies on which the prediction algorithms are trained. One way to increase the effective sample size is to integrate information from previously existing studies. However, it can be difficult to find existing data that examine the target disease of interest, especially if that disease is rare or poorly studied. Furthermore, individual-level genotype data from these auxiliary studies are typically difficult to obtain. This article proposes a new approach to integrative genetic risk prediction of complex diseases with binary phenotypes. It accommodates possible heterogeneity in the genetic etiologies of the target and auxiliary diseases using a tuning parameter-free non-parametric empirical Bayes procedure, and can be trained using only auxiliary summary statistics. Simulation studies show that the proposed method can provide superior predictive accuracy relative to non-integrative as well as integrative classifiers. The method is applied to a recent study of pediatric autoimmune diseases, where it substantially reduces prediction error for certain target/auxiliary disease combinations. The proposed method is implemented in the R package ssa. © 2016, The International Biometric Society.
Guilloux, Jean-Philippe; Bassi, Sabrina; Ding, Ying; Walsh, Chris; Turecki, Gustavo; Tseng, George; Cyranowski, Jill M; Sibille, Etienne
2015-02-01
Major depressive disorder (MDD) in general, and anxious-depression in particular, are characterized by poor rates of remission with first-line treatments, contributing to the chronic illness burden suffered by many patients. Prospective research is needed to identify the biomarkers predicting nonremission prior to treatment initiation. We collected blood samples from a discovery cohort of 34 adult MDD patients with co-occurring anxiety and 33 matched, nondepressed controls at baseline and after 12 weeks (of citalopram plus psychotherapy treatment for the depressed cohort). Samples were processed on gene arrays and group differences in gene expression were investigated. Exploratory analyses suggest that at pretreatment baseline, nonremitting patients differ from controls, with gene function and transcription factor analyses pointing to elevated inflammation and immune activation. In a second phase, we applied an unbiased machine learning prediction model and corrected for model-selection bias. Results show that baseline gene expression predicted nonremission with 79.4% corrected accuracy using a 13-gene model. The same gene-only model predicted nonremission after 8 weeks of citalopram treatment with 76% corrected accuracy in an independent validation cohort of 63 MDD patients treated with citalopram at another institution. Together, these results demonstrate the potential, but also the limitations, of baseline peripheral blood-based gene expression to predict nonremission after citalopram treatment. These results not only support their use in future prediction tools but also suggest that increased accuracy may be obtained with the inclusion of additional predictors (e.g., genetics and clinical scales).
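The corrected accuracies above depend on guarding against model-selection bias, that is, not letting gene selection see the held-out samples. A standard way to obtain such a corrected estimate is nested cross-validation, sketched below on synthetic expression-like data; this is an illustration of the general technique, not the authors' actual pipeline, and the gene counts tried are arbitrary.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 500))           # 60 patients, 500 probes (synthetic)
y = rng.integers(0, 2, size=60)          # remission / nonremission labels (synthetic)

# Gene selection and classifier live inside one pipeline, so selection is refit
# on every training fold; the inner CV tunes the number of genes, and the outer
# CV yields a selection-bias-corrected accuracy estimate.
pipeline = Pipeline([
    ("select", SelectKBest(f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])
inner = GridSearchCV(pipeline, {"select__k": [5, 13, 25]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
print("corrected accuracy estimate:", outer_scores.mean())
```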
NASA Astrophysics Data System (ADS)
Brunner, Philip; Doherty, J.; Simmons, Craig T.
2012-07-01
The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, measurements of vadose ET or soil moisture poorly constrain regional groundwater system forcing functions.
Accuracy of ultrasonography in the detection of severe hepatic lipidosis in cats.
Yeager, A E; Mohammed, H
1992-04-01
The accuracy of ultrasonography in detection of feline hepatic lipidosis was studied retrospectively. The following ultrasonographic criteria were associated positively with severe hepatic lipidosis: the liver hyperechoic, compared with falciform fat; the liver isoechoic or hyperechoic, compared with omental fat; poor visualization of intrahepatic vessel borders; and increased attenuation of sound by the liver. In a group of 36 cats with clinically apparent hepatobiliary disease and in which liver biopsy was done, liver hyperechoic, compared with falciform fat, was the best criterion for diagnosis of severe hepatic lipidosis with 91% sensitivity, 100% specificity, and 100% positive predictive value.
A new type of lamp and reflector for I.R. simulation
NASA Technical Reports Server (NTRS)
Saenger, G.
1986-01-01
The lamps and reflectors used for infrared radiation simulation tests at ESTEC did not allow researchers to predict the intensity needed for test conditions with the desired accuracy. This was due to poor reproducibility of the polar diagrams, the unknown contribution of the radiation in the long wavelength range in vacuum, imperfections in the quartz bulbs, and misalignment of the lamp in the reflector. When using a 1000 W coiled spiral quartz lamp with a diffuse reflector, these shortcomings are overcome. Due to the good reproducibility, an overall accuracy within plus or minus 2 percent should be obtained.
Abbreviation of the Follow-Up NIH Stroke Scale Using Factor Analysis
Raza, Syed Ali; Frankel, Michael R.; Rangaraju, Srikant
2017-01-01
Background The NIH Stroke Scale (NIHSS) is a 15-item measure of stroke-related neurologic deficits that, when measured at 24 h, is highly predictive of long-term functional outcome. We hypothesized that a simplified 24-h scale that incorporates the most predictive components of the NIHSS can retain prognostic accuracy and have improved interrater reliability. Methods In a post hoc analysis of the Interventional Management of Stroke-3 (IMS-3) trial, we performed principal component (PC) analysis to resolve the 24-h NIHSS into PCs. In the PCs that explained the largest proportions of variance, key variables were identified. Using these key variables, the prognostic accuracies (area under the curve [AUC]) for good outcome (3-month modified Rankin Scale [mRS] 0–2) and poor outcome (mRS 5–6) of various abbreviated NIHSS iterations were compared with the total 24-h NIHSS. The results were validated in the NINDS intravenous tissue plasminogen activator (NINDS-TPA) study cohort. Based on previously published data, interrater reliability of the abbreviated 24-h NIHSS (aNIHSS) was compared to the total 24-h NIHSS. Results In 545 IMS-3 participants, 2 PCs explained 60.8% of variance in the 24-h NIHSS. The key variables in PC1 included neglect, arm and leg weakness; while PC2 included level-of-consciousness (LOC) questions, LOC commands, and aphasia. A 3-variable aNIHSS (aphasia, neglect, arm weakness) retained excellent prognostic accuracy for good outcome (AUC = 0.90) as compared to the total 24-h NIHSS (AUC = 0.91), and it was more predictive (p < 0.001) than the baseline NIHSS (AUC = 0.73). The prognostic accuracy of the aNIHSS for good outcome was validated in the NINDS-TPA trial cohort (aNIHSS: AUC = 0.89 vs. total 24-h NIHSS: 0.92). An aNIHSS >9 predicted very poor outcomes (mRS 0–2: 0%, mRS 4–6: 98.5%). The estimated interrater reliability of the aNIHSS was higher than that of the total 24-h NIHSS across 6 published datasets (mean weighted kappa 0.80 vs. 0.73, p < 0.001). Conclusions At 24 h following ischemic stroke, aphasia, neglect, and arm weakness are the most prognostically relevant neurologic findings. The aNIHSS appears to have excellent prognostic accuracy with higher reliability and may be clinically useful. PMID:28968607
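The abbreviation strategy rests on principal component analysis of the item-level scores, followed by inspection of which items load most heavily on the leading components. The sketch below shows that generic workflow on synthetic item scores; the item names are placeholders drawn from the abstract, the data are random, and this is not the IMS-3 analysis.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
items = ["loc_questions", "loc_commands", "aphasia", "neglect", "arm_weakness", "leg_weakness"]

# Synthetic 24-h item scores for 200 patients (ordinal 0-4, generated at random).
scores = rng.integers(0, 5, size=(200, len(items)))

pca = PCA(n_components=2)
pca.fit(scores)

print("variance explained:", pca.explained_variance_ratio_)
for comp_idx, loadings in enumerate(pca.components_, start=1):
    # Items with the largest absolute loadings are candidates for an abbreviated scale.
    top = sorted(zip(items, loadings), key=lambda t: abs(t[1]), reverse=True)[:3]
    print(f"PC{comp_idx} key items:", top)
```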
Kim, Min Jung; Kim, Eun-Kyung; Park, Seho; Moon, Hee Jung; Kim, Seung Il; Park, Byeong-Woo
2015-09-01
Triple-negative breast cancer (TNBC), which expresses neither hormonal receptors nor HER-2, is associated with poor prognosis and shorter survival. Several studies have suggested that TNBC patients attaining pathological complete response (pCR) after neoadjuvant chemotherapy (NAC) show longer survival than those without pCR. To assess the accuracy of 3.0-T breast magnetic resonance imaging (MRI) in predicting pCR and to evaluate the clinicoradiologic factors affecting the diagnostic accuracy of 3.0-T breast MRI in TNBC patients treated with anthracycline and taxane (ACD). This retrospective study was approved by the institutional review board; patient consent was not required. Between 2009 and 2012, 35 TNBC patients with 3.0-T breast MRI prior to (n = 26) or after (n = 35) NAC were included. MRI findings were reviewed according to pCR to chemotherapy. The diagnostic accuracy of 3.0-T breast MRI for predicting pCR and the clinicoradiological factors affecting MRI accuracy and response to NAC were analyzed. 3.0-T MRI following NAC with ACD accurately predicted pCR in 91.4% of TNBC patients. The residual tumor size between pathology and 3.0-T MRI in non-pCR cases showed a higher correlation in the Ki-67-positive TNBC group (r = 0.947) than in the Ki-67-negative group (r = 0.375), with a statistical trend (P = 0.069). Pre-treatment MRI in the non-pCR group, compared to the pCR group, showed a larger tumor size (P = 0.030) and non-mass presentation (P = 0.015). 3.0-T MRI in TNBC patients following NAC with ACD showed a high accuracy for predicting pCR to NAC. Ki-67 can affect the diagnostic accuracy of 3.0-T MRI for pCR to NAC with ACD in TNBC patients. © The Foundation Acta Radiologica 2014.
ERIC Educational Resources Information Center
Hall, Samuel R.; Stephens, Jonny R.; Seaby, Eleanor G.; Andrade, Matheus Gesteira; Lowry, Andrew F.; Parton, Will J. C.; Smith, Claire F.; Border, Scott
2016-01-01
It is important that clinicians are able to adequately assess their level of knowledge and competence in order to be safe practitioners of medicine. The medical literature contains numerous examples of poor self-assessment accuracy amongst medical students over a range of subjects however this ability in neuroanatomy has yet to be observed. Second…
A hybrid localization technique for patient tracking.
Rodionov, Denis; Kolev, George; Bushminkin, Kirill
2013-01-01
Nowadays numerous technologies are employed for tracking patients and assets in hospitals or nursing homes. Each of them has advantages and drawbacks. For example, WiFi localization has relatively good accuracy but cannot be used in case of a power outage or in areas with poor WiFi coverage. Magnetometer positioning and cellular networks do not have such problems, but they are not as accurate as WiFi localization. This paper describes a technique that simultaneously employs different localization technologies to enhance the stability and average accuracy of localization. The proposed algorithm is based on a fingerprinting method paired with data fusion and prediction algorithms for estimating the object location. The core idea of the algorithm is technology fusion using error estimation methods. A simulation environment was implemented to test the accuracy and performance of the algorithm. A significant accuracy improvement was shown in practical scenarios.
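The abstract describes fusing estimates from several localization technologies using error estimation methods. One common way to realise that idea is inverse-variance weighting of the per-technology position fixes, sketched below; this weighting scheme is an assumption made for illustration, not the paper's exact algorithm.

```python
import numpy as np

def fuse_positions(estimates, variances):
    """Inverse-variance weighted fusion of 2-D position estimates.

    estimates: list of (x, y) fixes from different technologies
    variances: list of error variances (m^2), one per technology
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    weights /= weights.sum()
    return weights @ estimates

# Invented example: WiFi fingerprinting (accurate), magnetometer and cellular (coarse).
fixes = [(12.1, 7.8), (13.0, 6.9), (15.5, 9.0)]
vars_m2 = [1.0, 9.0, 25.0]
print("fused position:", fuse_positions(fixes, vars_m2))
```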
Timsit, E; Dendukuri, N; Schiller, I; Buczinski, S
2016-12-01
Diagnosis of bovine respiratory disease (BRD) in beef cattle placed in feedlots is typically based on clinical illness (CI) detected by pen-checkers. Unfortunately, the accuracy of this diagnostic approach (namely, sensitivity [Se] and specificity [Sp]) remains poorly understood, in part due to the absence of a reference test for ante-mortem diagnosis of BRD. Our objective was to pool available estimates of CI's diagnostic accuracy for BRD diagnosis in feedlot beef cattle while adjusting for the inaccuracy in the reference test. The presence of lung lesions (LU) at slaughter was used as the reference test. A systematic review of the literature was conducted to identify research articles comparing CI detected by pen-checkers during the feeding period to LU at slaughter. A hierarchical Bayesian latent-class meta-analysis was used to model test accuracy. This approach accounted for imperfections of both tests as well as the within and between study variability in the accuracy of CI. Furthermore, it also predicted the Se(CI) and Sp(CI) for future studies. Conditional independence between CI and LU was assumed, as these two tests are not based on similar biological principles. Seven studies were included in the meta-analysis. Estimated pooled Se(CI) and Sp(CI) were 0.27 (95% Bayesian credible interval: 0.12-0.65) and 0.92 (0.72-0.98), respectively, whereas estimated pooled Se(LU) and Sp(LU) were 0.91 (0.82-0.99) and 0.67 (0.64-0.79). Predicted Se(CI) and Sp(CI) for future studies were 0.27 (0.01-0.96) and 0.92 (0.14-1.00), respectively. The wide credible intervals around predicted Se(CI) and Sp(CI) estimates indicated considerable heterogeneity among studies, which suggests that pooled Se(CI) and Sp(CI) are not generalizable to individual studies. In conclusion, CI appeared to have poor Se but high Sp for BRD diagnosis in feedlots. Furthermore, considerable heterogeneity among studies highlighted an urgent need to standardize BRD diagnosis in feedlots. Copyright © 2016 Elsevier B.V. All rights reserved.
Brodaty, Henry; Aerts, Liesbeth; Crawford, John D; Heffernan, Megan; Kochan, Nicole A; Reppermund, Simone; Kang, Kristan; Maston, Kate; Draper, Brian; Trollor, Julian N; Sachdev, Perminder S
2017-05-01
Mild cognitive impairment (MCI) is considered an intermediate stage between normal aging and dementia. It is diagnosed in the presence of subjective cognitive decline and objective cognitive impairment without significant functional impairment, although there are no standard operationalizations for each of these criteria. The objective of this study is to determine which operationalization of the MCI criteria is most accurate at predicting dementia. Six-year longitudinal study, part of the Sydney Memory and Ageing Study. Community-based. 873 community-dwelling dementia-free adults between 70 and 90 years of age. Persons from a non-English speaking background were excluded. Seven different operationalizations for subjective cognitive decline and eight measures of objective cognitive impairment (resulting in 56 different MCI operational algorithms) were applied. The accuracy of each algorithm to predict progression to dementia over 6 years was examined for 618 individuals. Baseline MCI prevalence varied between 0.4% and 30.2% and dementia conversion between 15.9% and 61.9% across different algorithms. The predictive accuracy for progression to dementia was poor. The highest accuracy was achieved based on objective cognitive impairment alone. Inclusion of subjective cognitive decline or mild functional impairment did not improve dementia prediction accuracy. Not MCI, but objective cognitive impairment alone, is the best predictor for progression to dementia in a community sample. Nevertheless, clinical assessment procedures need to be refined to improve the identification of pre-dementia individuals. Copyright © 2016 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
Sánchez-Rodríguez, Dolores; Annweiler, Cédric; Ronquillo-Moreno, Natalia; Tortosa-Rodríguez, Andrea; Guillén-Solà, Anna; Vázquez-Ibar, Olga; Escalada, Ferran; Muniesa, Josep M; Marco, Ester
Malnutrition is a prevalent condition related to adverse outcomes in older people. Our aim was to compare the diagnostic capacity of the malnutrition criteria of the European Society of Parenteral and Enteral Nutrition (ESPEN) with other classical diagnostic tools. Cohort study of 102 consecutive in-patients ≥70 years admitted for postacute rehabilitation. Patients were considered malnourished if their Mini-Nutritional Assessment-Short Form (MNA-SF) score was ≤11 and serum albumin was <3 g/dL, or if MNA-SF was ≤11, serum albumin was <3 g/dL, and usual clinical signs and symptoms of malnutrition were present. Sensitivity, specificity, positive and negative predictive values, accuracy, likelihood ratios, and kappa values were calculated for both methods and compared with the ESPEN consensus. Of 102 eligible in-patients, 88 fulfilled the inclusion criteria and were identified as "at risk" by MNA-SF. Malnutrition diagnosis was confirmed in 11.6% and 10.5% of the patients using classical methods, whereas 19.3% were malnourished according to the ESPEN criteria. Combined with low albumin levels, the diagnosis showed 57.9% sensitivity, 64.5% specificity, 85.9% negative predictive value, 0.63 accuracy (fair validity, low range), and a kappa index of 0.163 (poor ESPEN agreement). The combination of MNA-SF, low albumin, and clinical malnutrition showed 52.6% sensitivity, 88.3% specificity, 88.3% negative predictive value, 0.82 accuracy (fair validity, low range), and a kappa index of 0.43 (fair ESPEN agreement). Malnutrition was almost twice as prevalent when diagnosed by the ESPEN consensus, compared to classical assessment methods. Classical methods showed fair validity and poor agreement with the ESPEN consensus in assessing malnutrition in geriatric postacute care. Copyright © 2018 Elsevier B.V. All rights reserved.
Prognostic accuracy of five simple scales in childhood bacterial meningitis.
Pelkonen, Tuula; Roine, Irmeli; Monteiro, Lurdes; Cruzeiro, Manuel Leite; Pitkäranta, Anne; Kataja, Matti; Peltola, Heikki
2012-08-01
In childhood acute bacterial meningitis, the level of consciousness, measured with the Glasgow coma scale (GCS) or the Blantyre coma scale (BCS), is the most important predictor of outcome. The Herson-Todd scale (HTS) was developed for Haemophilus influenzae meningitis. Our objective was to identify prognostic factors, to form a simple scale, and to compare the predictive accuracy of these scales. Seven hundred and twenty-three children with bacterial meningitis in Luanda were scored by GCS, BCS, and HTS. The simple Luanda scale (SLS), based on our entire database, comprised domestic electricity, days of illness, convulsions, consciousness, and dyspnoea at presentation. The Bayesian Luanda scale (BLS) added blood glucose concentration. The accuracy of the 5 scales was determined for 491 children without an underlying condition, against the outcomes of death, severe neurological sequelae or death, or a poor outcome (severe neurological sequelae, death, or deafness), at hospital discharge. The highest accuracy was achieved with the BLS, whose area under the curve (AUC) for death was 0.83, for severe neurological sequelae or death was 0.84, and for poor outcome was 0.82. Overall, the AUCs for SLS were ≥0.79, for GCS were ≥0.76, for BCS were ≥0.74, and for HTS were ≥0.68. Adding laboratory parameters to a simple scoring system, such as the SLS, improves the prognostic accuracy only little in bacterial meningitis.
The Pandolf equation under-predicts the metabolic rate of contemporary military load carriage.
Drain, Jace R; Aisbett, Brad; Lewis, Michael; Billing, Daniel C
2017-11-01
This investigation assessed the predictive accuracy of the Pandolf load carriage energy expenditure equation when simulating contemporary military conditions (load distribution, external load and walking speed). Within-participant design. Sixteen male participants completed 10 trials comprising five walking speeds (2.5, 3.5, 4.5, 5.5 and 6.5 km·h⁻¹) and two external loads (22.7 and 38.4 kg). The Pandolf equation demonstrated poor predictive precision, with a mean bias of 124.9 W and 95% limits of agreement of -48.7 to 298.5 W. Furthermore, the Pandolf equation systematically under-predicted metabolic rate (p<0.05) across the 10 speed-load combinations. Predicted metabolic rate error ranged from 12-33% across all conditions, with the 'moderate' walking speeds (i.e. 4.5-5.5 km·h⁻¹) yielding less prediction error (12-17%) than the slower and faster walking speeds (21-33%). Factors such as mechanical efficiency and load distribution contribute to the impaired predictive accuracy. The authors suggest the Pandolf equation should be applied to military load carriage with caution. Copyright © 2017 Sports Medicine Australia. All rights reserved.
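For context, the Pandolf et al. equation under evaluation is commonly written as M = 1.5W + 2.0(W + L)(L/W)² + η(W + L)(1.5V² + 0.35VG), with M in watts, W body mass (kg), L external load (kg), V walking speed (m/s), G grade (%) and η a terrain factor. The sketch below implements this commonly cited form (treated here as an assumption, since the abstract does not restate the formula) and evaluates it at one of the study's speed-load combinations for a hypothetical 75 kg participant.

```python
def pandolf_metabolic_rate(body_mass, load, speed, grade=0.0, terrain=1.0):
    """Commonly cited form of the Pandolf load carriage equation.

    body_mass: kg, load: kg, speed: m/s, grade: %, terrain: dimensionless factor.
    Returns the predicted metabolic rate in watts.
    """
    w, l, v, g, eta = body_mass, load, speed, grade, terrain
    return (1.5 * w
            + 2.0 * (w + l) * (l / w) ** 2
            + eta * (w + l) * (1.5 * v ** 2 + 0.35 * v * g))

# One of the study's conditions: 38.4 kg load at 5.5 km/h on the level,
# for a hypothetical 75 kg participant (body mass is assumed, not reported here).
speed_ms = 5.5 / 3.6
print(f"predicted metabolic rate: {pandolf_metabolic_rate(75, 38.4, speed_ms):.0f} W")
```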
Labrenz, Franziska; Icenhour, Adriane; Benson, Sven; Elsenbruch, Sigrid
2015-01-01
As a fundamental learning process, fear conditioning promotes the formation of associations between predictive cues and biologically significant signals. In its application to pain, conditioning may provide important insight into mechanisms underlying pain-related fear, although knowledge especially in interoceptive pain paradigms remains scarce. Furthermore, while the influence of contingency awareness on excitatory learning is subject of ongoing debate, its role in pain-related acquisition is poorly understood and essentially unknown regarding extinction as inhibitory learning. Therefore, we addressed the impact of contingency awareness on learned emotional responses to pain- and safety-predictive cues in a combined dataset of two pain-related conditioning studies. In total, 75 healthy participants underwent differential fear acquisition, during which rectal distensions as interoceptive unconditioned stimuli (US) were repeatedly paired with a predictive visual cue (conditioned stimulus; CS+) while another cue (CS−) was presented unpaired. During extinction, both CS were presented without US. CS valence, indicating learned emotional responses, and CS-US contingencies were assessed on visual analog scales (VAS). Based on an integrative measure of contingency accuracy, a median-split was performed to compare groups with low vs. high contingency accuracy regarding learned emotional responses. To investigate predictive value of contingency accuracy, regression analyses were conducted. Highly accurate individuals revealed more pronounced negative emotional responses to CS+ and increased positive responses to CS− when compared to participants with low contingency accuracy. Following extinction, highly accurate individuals had fully extinguished pain-predictive cue properties, while exhibiting persistent positive emotional responses to safety signals. In contrast, individuals with low accuracy revealed equally positive emotional responses to both, CS+ and CS−. Contingency accuracy predicted variance in the formation of positive responses to safety cues while no predictive value was found for danger cues following acquisition and for neither cue following extinction. Our findings underscore specific roles of learned danger and safety in pain-related acquisition and extinction. Contingency accuracy appears to distinctly impact learned emotional responses to safety and danger cues, supporting aversive learning to occur independently from CS-US awareness. The interplay of cognitive and emotional factors in shaping excitatory and inhibitory pain-related learning may contribute to altered pain processing, underscoring its clinical relevance in chronic pain. PMID:26640433
Janssen, Daniël M C; van Kuijk, Sander M J; d'Aumerie, Boudewijn B; Willems, Paul C
2018-05-16
A prediction model for surgical site infection (SSI) after spine surgery was developed in 2014 by Lee et al. The model computes an individual estimate of the probability of SSI after spine surgery based on the patient's comorbidity profile and the invasiveness of surgery. Before any prediction model can be validly implemented in daily medical practice, it should be externally validated to assess how the prediction model performs in patients sampled independently from the derivation cohort. We included 898 consecutive patients who underwent instrumented thoracolumbar spine surgery. Overall performance was quantified with Nagelkerke's R2 statistic, discriminative ability as the area under the receiver operating characteristic curve (AUC), and prediction accuracy was judged from the slope of the calibration plot. Sixty patients developed an SSI. The overall performance of the prediction model in our population was poor: Nagelkerke's R2 was 0.01. The AUC was 0.61 (95% confidence interval (CI) 0.54-0.68). The estimated slope of the calibration plot was 0.52. The previously published prediction model showed poor performance in our academic external validation cohort. To predict SSI after instrumented thoracolumbar spine surgery for the present population, a better fitting prediction model should be developed.
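The three performance measures reported above can be reproduced on any external cohort. Below is a minimal sketch, assuming synthetic outcomes and hypothetical model-predicted SSI probabilities rather than the study's data, of how Nagelkerke's R2, the AUC, and the calibration slope are typically computed.

```python
# Sketch (not the authors' code): external validation metrics for a binary
# risk model: Nagelkerke's R2, AUC, and calibration slope, computed on
# synthetic data standing in for predicted SSI probabilities and outcomes.
import numpy as np
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 898                                   # cohort size from the abstract
p_hat = rng.uniform(0.01, 0.30, size=n)   # hypothetical model-predicted SSI risks
y = rng.binomial(1, 0.067, size=n)        # ~60/898 infections; unrelated to p_hat, so metrics will be poor

# Discrimination: area under the ROC curve.
auc = roc_auc_score(y, p_hat)

# Overall performance: Nagelkerke's R2 from the log-likelihood of the
# predictions versus a null (prevalence-only) model.
eps = 1e-12
ll_model = np.sum(y * np.log(p_hat + eps) + (1 - y) * np.log(1 - p_hat + eps))
p_null = y.mean()
ll_null = np.sum(y * np.log(p_null) + (1 - y) * np.log(1 - p_null))
cox_snell = 1 - np.exp((2 / n) * (ll_null - ll_model))
nagelkerke_r2 = cox_snell / (1 - np.exp((2 / n) * ll_null))

# Calibration slope: logistic regression of the outcome on the linear
# predictor (logit of the predicted probability); a slope of 1 means perfect calibration.
lin_pred = np.log(p_hat / (1 - p_hat))
calib_fit = sm.Logit(y, sm.add_constant(lin_pred)).fit(disp=0)
calib_slope = calib_fit.params[1]

print(f"AUC={auc:.2f}  Nagelkerke R2={nagelkerke_r2:.3f}  calibration slope={calib_slope:.2f}")
```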
Yang, Mina; Choi, Rihwa; Kim, June Soo; On, Young Keun; Bang, Oh Young; Cho, Hyun-Jung; Lee, Soo-Youn
2016-12-01
The purpose of this study was to evaluate the performance of 16 previously published warfarin dosing algorithms in Korean patients. The 16 algorithms were selected through a literature search and evaluated using a cohort of 310 Korean patients with atrial fibrillation or cerebral infarction who were receiving warfarin therapy. A large interindividual variation (up to 11-fold) in warfarin dose was observed (median, 25 mg/wk; range, 7-77 mg/wk). Estimated dose and actual maintenance dose correlated well overall (r range, 0.52-0.73). Mean absolute error (MAE) of the 16 algorithms ranged from -1.2 to -20.1 mg/wk. The percentage of patients whose estimated dose fell within 20% of the actual dose ranged from 1.0% to 49%. All algorithms showed poor accuracy with increased MAE in a higher dose range. Performance of the dosing algorithms was worse in patients with VKORC1 1173TC or CC than in total (r range, 0.38-0.61 vs 0.52-0.73; MAE range, -2.6 to -28.0 mg/wk vs -1.2 to -20.1 mg/wk). The algorithms had comparable prediction abilities but showed limited accuracy depending on ethnicity, warfarin dose, and VKORC1 genotype. Further studies are needed to develop genotype-guided warfarin dosing algorithms with greater accuracy in the Korean population. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
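As a rough illustration of the evaluation metrics used above (correlation between estimated and actual dose, mean prediction error in mg/wk, and the proportion of estimates falling within 20% of the actual dose), here is a minimal sketch on synthetic doses; the dose values and the algorithm output are hypothetical, not the study's data.

```python
# Sketch (assumed workflow, not the study code): comparing an estimated weekly
# warfarin dose against the actual maintenance dose.
import numpy as np

rng = np.random.default_rng(1)
actual = rng.uniform(7, 77, size=310)             # actual weekly doses (mg/wk), range from the abstract
estimated = actual * rng.normal(0.85, 0.25, 310)  # hypothetical algorithm output

r = np.corrcoef(estimated, actual)[0, 1]
mean_error = np.mean(estimated - actual)          # negative => systematic under-dosing
within_20pct = np.mean(np.abs(estimated - actual) <= 0.20 * actual)

print(f"r={r:.2f}  mean error={mean_error:.1f} mg/wk  within 20%: {within_20pct:.0%}")
```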
ERIC Educational Resources Information Center
Mangan, Jean; Pugh, Geoff; Gray, John
2005-01-01
The article explores changes in the examination performance of a random sample of 500 English secondary schools between 1992 and 2001. Using econometric methods, it concludes that: there is an overall deterministic trend in school performance but it is not stable, making prediction accuracy poor; the aggregate trend does not explain improvement…
Yu, Zhiyuan; Zheng, Jun; Ali, Hasan; Guo, Rui; Li, Mou; Wang, Xiaoze; Ma, Lu; Li, Hao; You, Chao
2017-11-01
Hematoma expansion is related to poor outcome in spontaneous intracerebral hemorrhage (ICH). Recently, a non-enhanced computed tomography (CT) based finding, termed the 'satellite sign', was reported to be a novel predictor for poor outcome in spontaneous ICH. However, it is still unclear whether the presence of the satellite sign is related to hematoma expansion. Initial computed tomography angiography (CTA) was conducted within 6 h after ictus. Satellite sign on non-enhanced CT and spot sign on CTA were detected by two independent reviewers. The sensitivity and specificity of both satellite sign and spot sign were calculated. Receiver operating characteristic analysis was conducted to evaluate their predictive accuracy for hematoma expansion. This study included 153 patients. Satellite sign was detected in 58 (37.91%) patients and spot sign was detected in 38 (24.84%) patients. Among 37 patients with hematoma expansion, 22 (59.46%) had satellite sign and 23 (62.16%) had spot sign. The sensitivity and specificity of satellite sign for prediction of hematoma expansion were 59.46% and 68.97%, respectively. The sensitivity and specificity of spot sign were 62.16% and 87.07%, respectively. The area under the curve (AUC) of the satellite sign was 0.642 and the AUC of the spot sign was 0.746 (P=0.157). Our results suggest that the satellite sign is an independent predictor for hematoma expansion in spontaneous ICH. Although the spot sign has higher predictive accuracy, the satellite sign is still an acceptable predictor for hematoma expansion when CTA is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
Lombardo, Franco; Berellini, Giuliano; Labonte, Laura R; Liang, Guiqing; Kim, Sean
2016-03-01
We present a systematic evaluation of the Wajima superpositioning method to estimate the human intravenous (i.v.) pharmacokinetic (PK) profile based on a set of 54 marketed drugs with diverse structure and range of physicochemical properties. We illustrate the use of average of "best methods" for the prediction of clearance (CL) and volume of distribution at steady state (VDss) as described in our earlier work (Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):178-191; Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):167-177). These methods provided much more accurate prediction of human PK parameters, yielding 88% and 70% of the prediction within 2-fold error for VDss and CL, respectively. The prediction of human i.v. profile using Wajima superpositioning of rat, dog, and monkey time-concentration profiles was tested against the observed human i.v. PK using fold error statistics. The results showed that 63% of the compounds yielded a geometric mean of fold error below 2-fold, and an additional 19% yielded a geometric mean of fold error between 2- and 3-fold, leaving only 18% of the compounds with a relatively poor prediction. Our results showed that good superposition was observed in any case, demonstrating the predictive value of the Wajima approach, and that the cause of poor prediction of human i.v. profile was mainly due to the poorly predicted CL value, while VDss prediction had a minor impact on the accuracy of human i.v. profile prediction. Copyright © 2016. Published by Elsevier Inc.
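The fold-error statistic used above to grade profile predictions is simple to reproduce. A minimal sketch with hypothetical observed and superposed concentration-time points (not the study's compounds) follows.

```python
# Sketch: geometric mean fold error between an observed and a predicted
# (e.g., Wajima-superposed) concentration-time profile.
import numpy as np

observed = np.array([12.0, 8.5, 5.9, 4.1, 2.0, 1.1])   # hypothetical plasma concentrations
predicted = np.array([10.0, 9.0, 5.0, 3.0, 2.2, 0.9])  # hypothetical superposed prediction

# Fold error at each time point is max(pred/obs, obs/pred), so it is always >= 1;
# the geometric mean summarises the whole profile (< 2 counts as a good prediction).
fold_error = np.maximum(predicted / observed, observed / predicted)
gmfe = np.exp(np.mean(np.log(fold_error)))
print(f"geometric mean fold error = {gmfe:.2f}")
```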
Annamalai, Alagappan; Harada, Megan Y; Chen, Melissa; Tran, Tram; Ko, Ara; Ley, Eric J; Nuno, Miriam; Klein, Andrew; Nissen, Nicholas; Noureddin, Mazen
2017-03-01
Critically ill cirrhotics require liver transplantation urgently, but are at high risk for perioperative mortality. The Model for End-stage Liver Disease (MELD) score, recently updated to incorporate serum sodium, estimates survival probability in patients with cirrhosis, but needs additional evaluation in the critically ill. The purpose of this study was to evaluate the predictive power of ICU admission MELD scores and identify clinical risk factors associated with increased mortality. This was a retrospective review of cirrhotic patients admitted to the ICU between January 2011 and December 2014. Patients who were discharged or underwent transplantation (survivors) were compared with those who died (nonsurvivors). Demographic characteristics, admission MELD scores, and clinical risk factors were recorded. Multivariate regression was used to identify independent predictors of mortality, and measures of model performance were assessed to determine predictive accuracy. Of 276 patients who met inclusion criteria, 153 were considered survivors and 123 were nonsurvivors. Survivor and nonsurvivor cohorts had similar demographic characteristics. Nonsurvivors had increased MELD, gastrointestinal bleeding, infection, mechanical ventilation, encephalopathy, vasopressors, dialysis, renal replacement therapy, requirement of blood products, and ICU length of stay. The MELD demonstrated low predictive power (c-statistic 0.73). Multivariate analysis identified MELD score (adjusted odds ratio [AOR] = 1.05), mechanical ventilation (AOR = 4.55), vasopressors (AOR = 3.87), and continuous renal replacement therapy (AOR = 2.43) as independent predictors of mortality, with stronger predictive accuracy (c-statistic 0.87). The MELD demonstrated relatively poor predictive accuracy in critically ill patients with cirrhosis and might not be the best indicator for prognosis in the ICU population. Prognostic accuracy is significantly improved when variables indicating organ support (mechanical ventilation, vasopressors, and continuous renal replacement therapy) are included in the model. Copyright © 2016. Published by Elsevier Inc.
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
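A minimal sketch of the kind of backward-elimination loop described above, using scikit-learn on synthetic data as a stand-in for the stream-condition dataset (the drop fraction, forest size, and stopping rule are illustrative assumptions, not the authors' settings), is shown below.

```python
# Sketch (not the authors' pipeline): backward elimination around a random
# forest classifier, dropping the least important predictors each round and
# tracking the out-of-bag (OOB) accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for good/poor stream condition with many predictors.
X, y = make_classification(n_samples=1365, n_features=212, n_informative=25, random_state=0)
features = list(range(X.shape[1]))
history = []

while len(features) > 10:
    rf = RandomForestClassifier(n_estimators=200, oob_score=True, n_jobs=-1, random_state=0)
    rf.fit(X[:, features], y)
    history.append((len(features), rf.oob_score_))
    # Drop the least important ~10% of the remaining predictors.
    n_drop = max(1, len(features) // 10)
    order = np.argsort(rf.feature_importances_)
    features = [features[i] for i in order[n_drop:]]

for n_feat, oob in history:
    print(f"{n_feat:3d} predictors -> OOB accuracy {oob:.3f}")
```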
Pires, RES; Pereira, AA; Abreu-e-Silva, GM; Labronici, PJ; Figueiredo, LB; Godoy-Santos, AL; Kfuri, M
2014-01-01
Background: Foot and ankle injuries are frequent in emergency departments. Although only a few patients with foot and ankle sprain present fractures and the fracture patterns are almost always simple, lack of fracture diagnosis can lead to poor functional outcomes. Aim: The present study aims to evaluate the reliability of the Ottawa ankle rules and the orthopedic surgeon subjective perception to assess foot and ankle fractures after sprains. Subjects and Methods: A cross-sectional study was conducted from July 2012 to December 2012. Ethical approval was granted. Two hundred seventy-four adult patients admitted to the emergency department with foot and/or ankle sprain were evaluated by an orthopedic surgeon who completed a questionnaire prior to radiographic assessment. The Ottawa ankle rules and subjective perception of foot and/or ankle fractures were evaluated on the questionnaire. Results: Thirteen percent (36/274) of patients presented a fracture. Orthopedic surgeon subjective analysis showed 55.6% sensitivity, 90.1% specificity, 46.5% positive predictive value and 92.9% negative predictive value. The overall accuracy of the orthopedic surgeon's opinion was 85.4%. The Ottawa ankle rules presented 97.2% sensitivity, 7.8% specificity, 13.9% positive predictive value, 95% negative predictive value and 19.9% accuracy. Weight-bearing inability was the Ottawa ankle rule item that presented the highest reliability: 69.4% sensitivity, 61.6% specificity, 63.1% accuracy, 21.9% positive predictive value and 93% negative predictive value. Conclusion: The Ottawa ankle rules showed high reliability for deciding when to take radiographs in foot and/or ankle sprains. Weight-bearing inability was the most important isolated item to predict fracture presence. Orthopedic surgeon subjective analysis to predict fracture possibility showed a high specificity rate, representing a confident method to exclude unnecessary radiographic exams. PMID:24971221
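The diagnostic indices quoted above all derive from a 2x2 table of rule result against radiographic fracture. A minimal worked sketch, using illustrative counts chosen to be roughly consistent with the reported sensitivity and specificity (not the study's actual table), is:

```python
# Worked sketch of the diagnostic-accuracy arithmetic used in the abstract.
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts: 36 fractures among 274 sprains, a highly sensitive rule.
print(diagnostic_metrics(tp=35, fp=220, fn=1, tn=18))
```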
The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt.
Viboud, Cécile; Sun, Kaiyuan; Gaffey, Robert; Ajelli, Marco; Fumanelli, Laura; Merler, Stefano; Zhang, Qian; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro
2018-03-01
Infectious disease forecasting is gaining traction in the public health community; however, limited systematic comparisons of model performance exist. Here we present the results of a synthetic forecasting challenge inspired by the West African Ebola crisis in 2014-2015 and involving 16 international academic teams and US government agencies, and compare the predictive performance of 8 independent modeling approaches. Challenge participants were invited to predict 140 epidemiological targets across 5 different time points of 4 synthetic Ebola outbreaks, each involving different levels of interventions and "fog of war" in outbreak data made available for predictions. Prediction targets included 1-4 week-ahead case incidences, outbreak size, peak timing, and several natural history parameters. With respect to weekly case incidence targets, ensemble predictions based on a Bayesian average of the 8 participating models outperformed any individual model and did substantially better than a null auto-regressive model. There was no relationship between model complexity and prediction accuracy; however, the top performing models for short-term weekly incidence were reactive models with few parameters, fitted to a short and recent part of the outbreak. Individual model outputs and ensemble predictions improved with data accuracy and availability; by the second time point, just before the peak of the epidemic, estimates of final size were within 20% of the target. The 4th challenge scenario - mirroring an uncontrolled Ebola outbreak with substantial data reporting noise - was poorly predicted by all modeling teams. Overall, this synthetic forecasting challenge provided a deep understanding of model performance under controlled data and epidemiological conditions. We recommend such "peace time" forecasting challenges as key elements to improve coordination and inspire collaboration between modeling groups ahead of the next pandemic threat, and to assess model forecasting accuracy for a variety of known and hypothetical pathogens. Published by Elsevier B.V.
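The abstract notes that an ensemble built as a Bayesian average of the participating models outperformed each individual model, but does not spell out the weighting scheme. The sketch below shows one plausible flavour of such an ensemble, weighting hypothetical model forecasts by their Poisson likelihood on recently observed weekly counts; all data and the weighting rule are illustrative assumptions.

```python
# Sketch (one plausible scheme, not necessarily the challenge's exact method):
# likelihood-weighted averaging of several weekly-incidence forecasts.
import numpy as np
from scipy.stats import poisson

observed_recent = np.array([40, 55, 73])        # last observed weekly case counts (hypothetical)
model_hindcasts = np.array([                    # each model's fit to those same weeks
    [38, 50, 80],
    [45, 60, 70],
    [20, 30, 40],
])
model_forecasts = np.array([95.0, 88.0, 52.0])  # each model's 1-week-ahead forecast

# Score each model with a Poisson log-likelihood on the recent weeks, then use
# normalised likelihoods as ensemble weights (a crude Bayesian-model-average flavour).
loglik = poisson.logpmf(observed_recent, model_hindcasts).sum(axis=1)
weights = np.exp(loglik - loglik.max())
weights /= weights.sum()

ensemble_forecast = np.dot(weights, model_forecasts)
print("weights:", np.round(weights, 3), " ensemble 1-week-ahead forecast:", round(ensemble_forecast, 1))
```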
Dai, Zi-Ru; Ai, Chun-Zhi; Ge, Guang-Bo; He, Yu-Qi; Wu, Jing-Jing; Wang, Jia-Yue; Man, Hui-Zi; Jia, Yan; Yang, Ling
2015-06-30
Early prediction of xenobiotic metabolism is essential for drug discovery and development. As the most important human drug-metabolizing enzyme, cytochrome P450 3A4 has a large active cavity and metabolizes a broad spectrum of substrates. The poor substrate specificity of CYP3A4 makes it a huge challenge to predict the metabolic site(s) on its substrates. This study aimed to develop a mechanism-based prediction model based on two key parameters, including the binding conformation and the reaction activity of ligands, which could reveal the process of real metabolic reaction(s) and the site(s) of modification. The newly established model was applied to predict the metabolic site(s) of steroids; a class of CYP3A4-preferred substrates. 38 steroids and 12 non-steroids were randomly divided into training and test sets. Two major metabolic reactions, including aliphatic hydroxylation and N-dealkylation, were involved in this study. At least one of the top three predicted metabolic sites was validated by the experimental data. The overall accuracy for the training and test were 82.14% and 86.36%, respectively. In summary, a mechanism-based prediction model was established for the first time, which could be used to predict the metabolic site(s) of CYP3A4 on steroids with high predictive accuracy.
Sousa, Thiago Oliveira; Haiter-Neto, Francisco; Nascimento, Eduarda Helena Leandro; Peroni, Leonardo Vieira; Freitas, Deborah Queiroz; Hassan, Bassam
2017-07-01
The aim of this study was to assess the diagnostic accuracy of periapical radiography (PR) and cone-beam computed tomographic (CBCT) imaging in the detection of the root canal configuration (RCC) of human premolars. PR and CBCT imaging of 114 extracted human premolars were evaluated by 2 oral radiologists. RCC was recorded according to Vertucci's classification. Micro-computed tomographic imaging served as the gold standard to determine RCC. Accuracy, sensitivity, specificity, and predictive values were calculated. The Friedman test compared both PR and CBCT imaging with the gold standard. CBCT imaging showed higher values for all diagnostic tests compared with PR. Accuracy was 0.55 and 0.89 for PR and CBCT imaging, respectively. There was no difference between CBCT imaging and the gold standard, whereas PR differed from both CBCT and micro-computed tomographic imaging (P < .0001). CBCT imaging was more accurate than PR for evaluating different types of RCC individually. Canal configuration types III, VII, and "other" were poorly identified on CBCT imaging with a detection accuracy of 50%, 0%, and 43%, respectively. With PR, all canal configurations except type I were poorly visible. PR presented low performance in the detection of RCC in premolars, whereas CBCT imaging showed no difference compared with the gold standard. Canals with complex configurations were less identifiable using both imaging methods, especially PR. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Ahmed, Shiek S. S. J.; Ramakrishnan, V.
2012-01-01
Background Poor oral bioavailability is an important parameter accounting for the failure of drug candidates. Approximately 50% of developing drugs fail because of unfavorable oral bioavailability. In silico prediction of oral bioavailability (%F) based on physicochemical properties is highly needed. Although many computational models have been developed to predict oral bioavailability, their accuracy remains low, with a significant number of false positives. In this study, we present an oral bioavailability model based on a systems biology approach, using a machine learning algorithm coupled with an optimal discriminative set of physicochemical properties. Results The models were developed based on 247 computationally derived physicochemical descriptors from 2279 molecules, among which 969, 605 and 705 molecules correspond to the oral bioavailability, intestinal absorption (HIA) and caco-2 permeability data sets, respectively. The partial least squares discriminant analysis showed that 49 descriptors of HIA and 50 descriptors of caco-2 are the major contributing descriptors in classifying into groups. Of these descriptors, 47 were common to HIA and caco-2, suggesting that they play a vital role in classifying oral bioavailability. To determine the best machine learning algorithm, 21 classifiers were compared using a bioavailability data set of 969 molecules with 47 descriptors. Each molecule in the data set was represented by a set of 47 physicochemical properties with the functional relevance labeled as (+bioavailability/−bioavailability) to indicate good-bioavailability/poor-bioavailability molecules. The best-performing algorithm was the logistic algorithm. The correlation-based feature selection (CFS) algorithm was implemented, which confirmed that these 47 descriptors are the fundamental descriptors for oral bioavailability prediction. Conclusion The logistic algorithm with the 47 selected descriptors correctly predicted oral bioavailability, with a predictive accuracy of more than 71%. Overall, the method captures the fundamental molecular descriptors that can be used as an entity to facilitate prediction of oral bioavailability. PMID:22815781
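A scikit-learn analogue of the final modelling step described above (a logistic classifier over a reduced descriptor set) might look like the sketch below; the data are synthetic stand-ins, and the univariate filter is only a simple placeholder for the paper's correlation-based feature selection.

```python
# Sketch (assumed analogue of the paper's final step, not its actual code):
# a logistic classifier over 47 selected descriptors, evaluated with
# cross-validated accuracy for good- vs poor-bioavailability labels.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 969 molecules x 247 physicochemical descriptors.
X, y = make_classification(n_samples=969, n_features=247, n_informative=40, random_state=0)

# Reduce to 47 descriptors and fit a logistic model.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=47),
                    LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
print(f"10-fold CV accuracy: {acc:.2f}")
```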
Popovic, Batric; Girerd, Nicolas; Rossignol, Patrick; Agrinier, Nelly; Camenzind, Edoardo; Fay, Renaud; Pitt, Bertram; Zannad, Faiez
2016-11-15
The Thrombolysis in Myocardial Infarction (TIMI) risk score remains a robust prediction tool for short-term and midterm outcome in patients with ST-elevation myocardial infarction (STEMI). However, the validity of this risk score in patients with STEMI with reduced left ventricular ejection fraction (LVEF) remains unclear. A total of 2,854 patients with STEMI with early coronary revascularization participating in the randomized EPHESUS (Eplerenone Post-Acute Myocardial Infarction Heart Failure Efficacy and Survival Study) trial were analyzed. TIMI risk score was calculated at baseline, and its predictive value was evaluated using C-indexes from Cox models. The increase in reclassification of other variables in addition to TIMI score was assessed using the net reclassification index. TIMI risk score had poor predictive accuracy for all-cause mortality (C-index values at 30 days and 1 year ≤0.67) and recurrent myocardial infarction (MI; C-index values ≤0.60). Among TIMI score items, diabetes/hypertension/angina, heart rate >100 beats/min, and systolic blood pressure <100 mm Hg were inconsistently associated with survival, whereas none of the TIMI score items, aside from age, were significantly associated with MI recurrence. Using a constructed predictive model, lower LVEF, lower estimated glomerular filtration rate (eGFR), and previous MI were significantly associated with all-cause mortality. The predictive accuracy of this model, which included LVEF and eGFR, was fair for both 30-day and 1-year all-cause mortality (C-index values ranging from 0.71 to 0.75). In conclusion, TIMI risk score demonstrates poor discrimination in predicting mortality or recurrent MI in patients with STEMI with reduced LVEF. LVEF and eGFR are major factors that should not be ignored by predictive risk scores in this population. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Peake, Christian; Diaz, Alicia; Artiles, Ceferino
2017-01-01
This study examined the relationship and degree of predictability that the fluency of writing the alphabet from memory and the selection of allographs have on measures of fluency and accuracy of spelling in a free-writing sentence task when keyboarding. The "Test Estandarizado para la Evaluación de la Escritura con Teclado"…
Accurate Monitoring and Fault Detection in Wind Measuring Devices through Wireless Sensor Networks
Khan, Komal Saifullah; Tariq, Muhammad
2014-01-01
Many wind energy projects report poor performance as low as 60% of the predicted performance. The reason for this is poor resource assessment and the use of new untested technologies and systems in remote locations. Predictions about the potential of an area for wind energy projects (through simulated models) may vary from the actual potential of the area. Hence, introducing accurate site assessment techniques will lead to accurate predictions of energy production from a particular area. We solve this problem by installing a Wireless Sensor Network (WSN) to periodically analyze the data from anemometers installed in that area. After comparative analysis of the acquired data, the anemometers transmit their readings through the WSN to the sink node for analysis. The sink node uses an iterative algorithm which sequentially detects any faulty anemometer and passes the details of the fault to the central system or main station. We apply the proposed technique in simulation as well as in a practical implementation, and study its accuracy by comparing the simulation results with the experimental results. Simulation results show that the algorithm indicates faulty anemometers with high accuracy and a low false alarm rate even when as many as 25% of the anemometers become faulty. Experimental analysis shows that sites assessed with this solution are characterized more accurately, and the performance level of implemented projects rises to above 86% of the simulated models. PMID:25421739
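The abstract describes an iterative fault-detection routine at the sink node but not its internals; the sketch below is one plausible reading (sequentially flagging the anemometer that deviates most from the consensus of the others), and the tolerance, data, and function name are illustrative assumptions rather than the paper's algorithm.

```python
# Sketch of one plausible sink-node logic: iteratively flag the anemometer
# whose readings deviate most from the consensus of the others, until the
# remaining devices agree within a tolerance.
import numpy as np

def detect_faulty(readings, tol=1.5):
    """readings: (n_anemometers, n_samples) wind speeds; returns indices flagged as faulty."""
    active = list(range(readings.shape[0]))
    faulty = []
    while len(active) > 2:
        consensus = np.median(readings[active], axis=0)           # robust reference signal
        dev = np.mean(np.abs(readings[active] - consensus), axis=1)
        worst = int(np.argmax(dev))
        if dev[worst] <= tol:                                     # remaining sensors agree closely enough
            break
        faulty.append(active.pop(worst))                          # flag and drop the outlier, then re-check
    return faulty

rng = np.random.default_rng(2)
base = 8 + 2 * np.sin(np.linspace(0, 6, 200))                     # shared wind-speed pattern
sensors = base + rng.normal(0, 0.3, size=(8, 200))                # 8 anemometers with small noise
sensors[5] += 4.0                                                 # one sensor drifts badly (fault)
print("flagged as faulty:", detect_faulty(sensors))
```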
NASA Astrophysics Data System (ADS)
Hemmat Esfe, Mohammad; Tatar, Afshin; Ahangar, Mohammad Reza Hassani; Rostamian, Hossein
2018-02-01
Since conventional thermal fluids such as water, oil, and ethylene glycol have poor thermal properties, tiny solid particles are added to these fluids to improve their heat transfer. As viscosity determines the rheological behavior of a fluid, studying the parameters affecting viscosity is crucial. Since the experimental measurement of viscosity is expensive and time consuming, predicting this parameter is an attractive alternative. In this work, three artificial intelligence methods, comprising Genetic Algorithm-Radial Basis Function Neural Networks (GA-RBF), Least Square Support Vector Machine (LS-SVM) and Gene Expression Programming (GEP), were applied to predict the viscosity of TiO2/SAE 50 nano-lubricant with non-Newtonian power-law behavior using experimental data. The correlation factor (R2), Average Absolute Relative Deviation (AARD), Root Mean Square Error (RMSE), and Margin of Deviation were employed to investigate the accuracy of the proposed models. RMSE values of 0.58, 1.28, and 6.59 and R2 values of 0.99998, 0.99991, and 0.99777 reveal the accuracy of the proposed models for the GA-RBF, CSA-LSSVM, and GEP methods, respectively. Among the developed models, the GA-RBF shows the best accuracy.
Predictive model for survival in patients with gastric cancer.
Goshayeshi, Ladan; Hoseini, Benyamin; Yousefli, Zahra; Khooie, Alireza; Etminani, Kobra; Esmaeilzadeh, Abbas; Golabpour, Amin
2017-12-01
Gastric cancer is one of the most prevalent cancers in the world. Characterized by poor prognosis, it is a frequent cause of cancer in Iran. The aim of the study was to design a predictive model of survival time for patients suffering from gastric cancer. This was a historical cohort conducted between 2011 and 2016; the study population comprised 277 patients suffering from gastric cancer. Data were gathered from the Iranian Cancer Registry and the laboratory of Emam Reza Hospital in Mashhad, Iran. Patients or their relatives were interviewed where needed. Missing values were imputed by data mining techniques. Fifteen factors were analyzed, with survival addressed as the dependent variable. The predictive model was then designed by combining a genetic algorithm and logistic regression, implemented in Matlab 2014. Of the 277 patients, survival information was available for only 80, whose data were used for designing the predictive model. The mean ± SD of missing values per patient was 4.43 ± .41. The combined predictive model achieved 72.57% accuracy. Sex, birth year, age at diagnosis, age at diagnosis of patients' family members, family history of gastric cancer, and family history of other gastrointestinal cancers were the six parameters associated with patient survival. The study revealed that imputing missing values by data mining techniques yields good accuracy, and that the six parameters extracted by the genetic algorithm affect the survival of patients with gastric cancer. Our combined predictive model, with good accuracy, is appropriate for forecasting the survival of patients suffering from gastric cancer, and we suggest that policy makers and specialists apply it for prediction of patients' survival.
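The genetic-algorithm-plus-logistic-regression combination described above was implemented in Matlab; the sketch below is a generic Python analogue (not the authors' code), in which binary chromosomes encode feature subsets, fitness is the cross-validated accuracy of a logistic model, and the data, population size and mutation rate are synthetic or assumed.

```python
# Sketch: a generic genetic-algorithm wrapper for feature selection around
# a logistic regression classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=80, n_features=15, n_informative=6, random_state=3)

def fitness(mask):
    # Cross-validated accuracy of a logistic model on the selected features.
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))           # 20 random binary chromosomes
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                              # keep the best half
    # Single-point crossover plus bit-flip mutation to refill the population.
    children = []
    while len(children) < 10:
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05
        child = np.where(flip, 1 - child, child)
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), " CV accuracy:", round(fitness(best), 3))
```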
Sapara, Adegboyega; ffytche, Dominic H.; Birchwood, Max; Cooke, Michael A.; Fannon, Dominic; Williams, Steven C.R.; Kuipers, Elizabeth; Kumari, Veena
2014-01-01
Background Poor insight in schizophrenia has been theorised to reflect a cognitive deficit that is secondary to brain abnormalities, localized in the brain regions that are implicated in higher order cognitive functions, including working memory (WM). This study investigated WM-related neural substrates of preserved and poor insight in schizophrenia. Method Forty stable schizophrenia outpatients, 20 with preserved and 20 with poor insight (usable data obtained from 18 preserved and 14 poor insight patients), and 20 healthy participants underwent functional magnetic resonance imaging (fMRI) during a parametric ‘n-back’ task. The three groups were preselected to match on age, education and predicted IQ, and the two patient groups to have distinct insight levels. Performance and fMRI data were analysed to determine how groups of patients with preserved and poor insight differed from each other, and from healthy participants. Results Poor insight patients showed lower performance accuracy, relative to healthy participants (p = 0.01) and preserved insight patients (p = 0.08); the two patient groups were comparable on symptoms and medication. Preserved insight patients, relative to poor insight patients, showed greater activity most consistently in the precuneus and cerebellum (both bilateral) during WM; they also showed greater activity than healthy participants in the inferior–superior frontal gyrus and cerebellum (bilateral). Group differences in brain activity did not co-vary significantly with performance accuracy. Conclusions The precuneus and cerebellum function contribute to preserved insight in schizophrenia. Preserved insight as well as normal-range WM capacity in schizophrenia sub-groups may be achieved via compensatory neural activity in the frontal cortex and cerebellum. PMID:24332795
Hosey, Chelsea M; Benet, Leslie Z
2015-01-01
The Biopharmaceutics Drug Disposition Classification System (BDDCS) can be utilized to predict drug disposition, including interactions with other drugs and transporter or metabolizing enzyme effects based on the extent of metabolism and solubility of a drug. However, defining the extent of metabolism relies upon clinical data. Drugs exhibiting high passive intestinal permeability rates are extensively metabolized. Therefore, we aimed to determine if in vitro measures of permeability rate or in silico permeability rate predictions could predict the extent of metabolism, to determine a reference compound representing the permeability rate above which compounds would be expected to be extensively metabolized, and to predict the major route of elimination of compounds in a two-tier approach utilizing permeability rate and a previously published model predicting the major route of elimination of parent drug. Twenty-two in vitro permeability rate measurement data sets in Caco-2 and MDCK cell lines and PAMPA were collected from the literature, while in silico permeability rate predictions were calculated using ADMET Predictor™ or VolSurf+. The potential for permeability rate to differentiate between extensively and poorly metabolized compounds was analyzed with receiver operating characteristic curves. Compounds that yielded the highest sensitivity-specificity average were selected as permeability rate reference standards. The major route of elimination of poorly permeable drugs was predicted by our previously published model and the accuracies and predictive values were calculated. The areas under the receiver operating curves were >0.90 for in vitro measures of permeability rate and >0.80 for the VolSurf+ model of permeability rate, indicating they were able to predict the extent of metabolism of compounds. Labetalol and zidovudine predicted greater than 80% of extensively metabolized drugs correctly and greater than 80% of poorly metabolized drugs correctly in Caco-2 and MDCK, respectively, while theophylline predicted greater than 80% of extensively and poorly metabolized drugs correctly in PAMPA. A two-tier approach predicting elimination route predicts 72±9%, 49±10%, and 66±7% of extensively metabolized, biliarily eliminated, and renally eliminated parent drugs correctly when the permeability rate is predicted in silico and 74±7%, 85±2%, and 73±8% of extensively metabolized, biliarily eliminated, and renally eliminated parent drugs correctly, respectively when the permeability rate is determined in vitro. PMID:25816851
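The reference-compound selection described above amounts to scanning candidate permeability cutoffs and keeping the one with the highest sensitivity-specificity average. A minimal sketch, with hypothetical permeability values and metabolism labels rather than the study's measurements, is:

```python
# Sketch (not the published analysis): choose a reference permeability cutoff
# by maximising the average of sensitivity and specificity for predicting
# extensive metabolism.
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical apparent permeability values (log10 Papp) and extensive-metabolism labels.
perm = np.concatenate([rng.normal(-5.2, 0.4, 60),   # extensively metabolized, faster permeation
                       rng.normal(-6.0, 0.4, 40)])  # poorly metabolized, slower permeation
extensive = np.concatenate([np.ones(60, bool), np.zeros(40, bool)])

best = None
for cutoff in np.unique(perm):
    pred = perm >= cutoff                            # predicted "extensively metabolized"
    sens = np.mean(pred[extensive])
    spec = np.mean(~pred[~extensive])
    score = (sens + spec) / 2
    if best is None or score > best[0]:
        best = (score, cutoff, sens, spec)

score, cutoff, sens, spec = best
print(f"reference cutoff log10 Papp = {cutoff:.2f}  (sens {sens:.2f}, spec {spec:.2f})")
```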
Positive Illusions? The Accuracy of Academic Self-Appraisals in Adolescents With ADHD.
Chan, Todd; Martinussen, Rhonda
2016-08-01
Children with attention deficit/hyperactivity disorder (ADHD) overestimate their academic competencies (AC) relative to performance and informant indicators (i.e., positive illusory bias; PIB). Do adolescents with ADHD exhibit this PIB and does it render self-views inaccurate? We examined the magnitude of the AC-PIB in adolescents with and without ADHD, the predictive accuracy of parent and adolescent AC ratings, and whether executive functions (EF) predict the AC-PIB. Adolescents (49 ADHD; 47 typically developing) completed math and EF tests, and self-rated their AC. Parents rated their adolescents' AC and EF. Adolescents with ADHD performed more poorly on the math task (vs. comparison group) but had a larger AC-PIB relative to parents' ratings. EFs predicted the PIB within the full sample. Adolescents' AC ratings, regardless of ADHD status, were more predictive of math performance than their parents' AC ratings. Adolescents with ADHD appear self-aware in their AC despite a modest PIB; nuanced self-appraisals may depend on EFs. © The Author 2015. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
The Power of Implicit Social Relation in Rating Prediction of Social Recommender Systems
Reafee, Waleed; Salim, Naomie; Khan, Atif
2016-01-01
The explosive growth of social networks in recent times has presented a powerful source of information to be utilized as an extra source for assisting in the social recommendation problems. The social recommendation methods that are based on probabilistic matrix factorization improved the recommendation accuracy and partly solved the cold-start and data sparsity problems. However, these methods only exploited the explicit social relations and almost completely ignored the implicit social relations. In this article, we firstly propose an algorithm to extract the implicit relation in the undirected graphs of social networks by exploiting the link prediction techniques. Furthermore, we propose a new probabilistic matrix factorization method to alleviate the data sparsity problem through incorporating explicit friendship and implicit friendship. We evaluate our proposed approach on two real datasets, Last.Fm and Douban. The experimental results show that our method performs much better than the state-of-the-art approaches, which indicates the importance of incorporating implicit social relations in the recommendation process to address the poor prediction accuracy. PMID:27152663
Pang, Hui; Han, Bing; Fu, Qiang; Zong, Zhenkun
2017-07-05
The presence of acute myocardial infarction (AMI) confers a poor prognosis in atrial fibrillation (AF), associated with dramatically increased mortality. This study aimed to evaluate the predictive value of CHADS2 and CHA2DS2-VASc scores for AMI in patients with AF. This retrospective study enrolled 5140 consecutive nonvalvular AF patients, 300 patients with AMI and 4840 patients without AMI. We identified the optimal cut-off values of the CHADS2 and CHA2DS2-VASc scores, each based on receiver operating characteristic curves, to predict the risk of AMI. Both the CHADS2 score and the CHA2DS2-VASc score were associated with an increased odds ratio of the prevalence of AMI in patients with AF, after adjustment for hyperlipidaemia, hyperuricemia, hyperthyroidism, hypothyroidism and obstructive sleep apnea. The present results showed that the area under the curve (AUC) for the CHADS2 score was 0.787, with a similar accuracy of the CHA2DS2-VASc score (AUC 0.750), in predicting "high-risk" AF patients who developed AMI. However, the predictive accuracy of the two clinical-based risk scores was fair. The CHA2DS2-VASc score has fair predictive value for identifying high-risk patients with AF and is not significantly superior to CHADS2 in predicting patients who develop AMI.
In silico models for predicting ready biodegradability under REACH: a comparative study.
Pizzo, Fabiola; Lombardo, Anna; Manganaro, Alberto; Benfenati, Emilio
2013-10-01
REACH (Registration Evaluation Authorization and restriction of Chemicals) legislation is a new European law which aims to raise the human protection level and environmental health. Under REACH all chemicals manufactured or imported for more than one ton per year must be evaluated for their ready biodegradability. Ready biodegradability is also used as a screening test for persistent, bioaccumulative and toxic (PBT) substances. REACH encourages the use of non-testing methods such as QSAR (quantitative structure-activity relationship) models in order to save money and time and to reduce the number of animals used for scientific purposes. Some QSAR models are available for predicting ready biodegradability. We used a dataset of 722 compounds to test four models: VEGA, TOPKAT, BIOWIN 5 and 6 and START and compared their performance on the basis of the following parameters: accuracy, sensitivity, specificity and Matthew's correlation coefficient (MCC). Performance was analyzed from different points of view. The first calculation was done on the whole dataset and VEGA and TOPKAT gave the best accuracy (88% and 87% respectively). Then we considered the compounds inside and outside the training set: BIOWIN 6 and 5 gave the best results for accuracy (81%) outside training set. Another analysis examined the applicability domain (AD). VEGA had the highest value for compounds inside the AD for all the parameters taken into account. Finally, compounds outside the training set and in the AD of the models were considered to assess predictive ability. VEGA gave the best accuracy results (99%) for this group of chemicals. Generally, START model gave poor results. Since BIOWIN, TOPKAT and VEGA models performed well, they may be used to predict ready biodegradability. Copyright © 2013 Elsevier B.V. All rights reserved.
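Accuracy, sensitivity, specificity and MCC, the four comparison statistics used above, can all be read off a confusion matrix. A minimal sketch with illustrative counts (summing to the 722-compound dataset size, but not the study's actual results) is:

```python
# Sketch of the four model-comparison statistics named in the abstract,
# computed from an illustrative confusion matrix.
import math

def classification_stats(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    mcc = (tp * tn - fp * fn) / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, sens, spec, mcc

acc, sens, spec, mcc = classification_stats(tp=300, tn=330, fp=40, fn=52)  # hypothetical counts (n=722)
print(f"accuracy {acc:.2f}  sensitivity {sens:.2f}  specificity {spec:.2f}  MCC {mcc:.2f}")
```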
Liu, Kun; Zhou, Yongjin; Cui, Shihan; Song, Jiawen; Ye, Peipei; Xiang, Wei; Huang, Xiaoyan; Chen, Yiping; Yan, Zhihan; Ye, Xinjian
2018-04-05
Brainstem encephalitis is the most common neurologic complication after enterovirus 71 infection. The involvement of brainstem, especially the dorsal medulla oblongata, can cause severe sequelae or death in children with enterovirus 71 infection. We aimed to determine the prevalence of dorsal medulla oblongata involvement in children with enterovirus 71-related brainstem encephalitis (EBE) by using conventional MRI and to evaluate the value of dorsal medulla oblongata involvement in outcome prediction. 46 children with EBE were enrolled in the study. All subjects underwent a 1.5 Tesla MR examination of the brain. The disease distribution and clinical data were collected. Dichotomized outcomes (good versus poor) at longer than 6 months were available for 28 patients. Logistic regression was used to determine whether the MRI-confirmed dorsal medulla oblongata involvement resulted in improved clinical outcome prediction when compared with other location involvement. Of the 46 patients, 35 had MRI evidence of dorsal medulla oblongata involvement, 32 had pons involvement, 10 had midbrain involvement, and 7 had dentate nuclei involvement. Patients with dorsal medulla oblongata involvement or multiple area involvement were significantly more often in the poor outcome group than in the good outcome group. Logistic regression analysis showed that dorsal medulla oblongata involvement was the most significant single variable in outcome prediction (predictive accuracy, 90.5%), followed by multiple area involvement, age, and initial glasgow coma scale score. Dorsal medulla oblongata involvement on conventional MRI correlated significantly with poor outcomes in EBE children, improved outcome prediction when compared with other clinical and disease location variables, and was most predictive when combined with multiple area involvement, glasgow coma scale score and age.
Lillitos, Peter J; Hadley, Graeme; Maconochie, Ian
2016-05-01
Paediatric early warning scores (PEWS) are designed to detect early deterioration of the hospitalised child, but their validity in the emergency department (ED) is less well established. We aimed to evaluate the sensitivity and specificity of two commonly used PEWS (Brighton and COAST) in predicting hospital admission and, for the first time, significant illness. Retrospective analysis of PEWS data for paediatric ED attendances at St Mary's Hospital, London, UK, in November 2012. Patients with missing data were excluded. Diagnoses were grouped as medical or surgical. To classify diagnoses as significant, established guidelines were used and, where not available, common agreement between three acute paediatricians. 1921 patients were analysed. There were 211 admissions (11%). 1630 attendances were medical (86%) and 273 (14%) surgical. Brighton and COAST PEWS performed similarly. Hospital admission: a PEWS of ≥3 was specific (93%) but poorly sensitive (32%). The area under the receiver operating characteristic curve (AUC) was low at 0.690. Significant illness: for medical illness, PEWS ≥3 was highly specific (96%) but poorly sensitive (44%). The AUC was 0.754 and 0.755 for Brighton and COAST PEWS, respectively. Both scores performed poorly for predicting significant surgical illness (AUC 0.642). PEWS ≥3 performed well in predicting significant respiratory illness: sensitivity 75%, specificity 91%. Both Brighton and COAST PEWS scores performed similarly. A score of ≥3 has good specificity but poor sensitivity for predicting hospital admission and significant illness. Therefore, a high PEWS should be taken seriously, but a low score is poor at ruling out the requirement for admission or serious underlying illness. PEWS was better at detecting significant medical illness than at detecting the need for admission, and performed poorly in detecting significant surgical illness. PEWS may be particularly useful in evaluating respiratory illness in a paediatric ED. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
nGASP - the nematode genome annotation assessment project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coghlan, A; Fiedler, T J; McKay, S J
2008-12-19
While the C. elegans genome is extensively annotated, relatively little information is available for other Caenorhabditis species. The nematode genome annotation assessment project (nGASP) was launched to objectively assess the accuracy of protein-coding gene prediction software in C. elegans, and to apply this knowledge to the annotation of the genomes of four additional Caenorhabditis species and other nematodes. Seventeen groups worldwide participated in nGASP, and submitted 47 prediction sets for 10 Mb of the C. elegans genome. Predictions were compared to reference gene sets consisting of confirmed or manually curated gene models from WormBase. The most accurate gene-finders were 'combiner' algorithms, which made use of transcript- and protein-alignments and multi-genome alignments, as well as gene predictions from other gene-finders. Gene-finders that used alignments of ESTs, mRNAs and proteins came in second place. There was a tie for third place between gene-finders that used multi-genome alignments and ab initio gene-finders. The median gene level sensitivity of combiners was 78% and their specificity was 42%, which is nearly the same accuracy as reported for combiners in the human genome. C. elegans genes with exons of unusual hexamer content, as well as those with many exons, short exons, long introns, a weak translation start signal, weak splice sites, or poorly conserved orthologs were the most challenging for gene-finders.
Improving consensus contact prediction via server correlation reduction.
Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming
2009-05-06
Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find out that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction use.
Artificial neural network classifier predicts neuroblastoma patients' outcome.
Cangelosi, Davide; Pelassa, Simone; Morini, Martina; Conte, Massimo; Bosco, Maria Carla; Eva, Alessandra; Sementa, Angela Rita; Varesio, Luigi
2016-11-08
More than fifty percent of neuroblastoma (NB) patients with adverse prognosis do not benefit from treatment making the identification of new potential targets mandatory. Hypoxia is a condition of low oxygen tension, occurring in poorly vascularized tissues, which activates specific genes and contributes to the acquisition of the tumor aggressive phenotype. We defined a gene expression signature (NB-hypo), which measures the hypoxic status of the neuroblastoma tumor. We aimed at developing a classifier predicting neuroblastoma patients' outcome based on the assessment of the adverse effects of tumor hypoxia on the progression of the disease. Multi-layer perceptron (MLP) was trained on the expression values of the 62 probe sets constituting NB-hypo signature to develop a predictive model for neuroblastoma patients' outcome. We utilized the expression data of 100 tumors in a leave-one-out analysis to select and construct the classifier and the expression data of the remaining 82 tumors to test the classifier performance in an external dataset. We utilized the Gene set enrichment analysis (GSEA) to evaluate the enrichment of hypoxia related gene sets in patients predicted with "Poor" or "Good" outcome. We utilized the expression of the 62 probe sets of the NB-Hypo signature in 182 neuroblastoma tumors to develop a MLP classifier predicting patients' outcome (NB-hypo classifier). We trained and validated the classifier in a leave-one-out cross-validation analysis on 100 tumor gene expression profiles. We externally tested the resulting NB-hypo classifier on an independent 82 tumors' set. The NB-hypo classifier predicted the patients' outcome with the remarkable accuracy of 87 %. NB-hypo classifier prediction resulted in 2 % classification error when applied to clinically defined low-intermediate risk neuroblastoma patients. The prediction was 100 % accurate in assessing the death of five low/intermediated risk patients. GSEA of tumor gene expression profile demonstrated the hypoxic status of the tumor in patients with poor prognosis. We developed a robust classifier predicting neuroblastoma patients' outcome with a very low error rate and we provided independent evidence that the poor outcome patients had hypoxic tumors, supporting the potential of using hypoxia as target for neuroblastoma treatment.
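A scikit-learn analogue of the evaluation scheme described above (a multi-layer perceptron over the 62-probe-set signature, assessed with leave-one-out cross-validation) is sketched below on synthetic data; the hidden-layer size and other settings are illustrative assumptions, not the authors' configuration.

```python
# Sketch (not the authors' implementation): an MLP over 62 expression features,
# evaluated with leave-one-out cross-validation on the training cohort.
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 100 tumors x 62 probe-set expression values, good/poor outcome.
X, y = make_classification(n_samples=100, n_features=62, n_informative=15, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```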
Dumont, F; Tilly, C; Dartigues, P; Goéré, D; Honoré, C; Elias, D
2015-09-01
Low rectal cancers carry a high risk of circumferential margin involvement (CRM+). The anatomy of the lower part of the rectum and a long course of chemoradiotherapy (CRT) limit the accuracy of imaging to predict the CRM+. Additional criteria are required. Eighty six patients undergoing rectal resection with a sphincter-sparing procedure after CRT for low rectal cancer between 2000 and 2013 were retrospectively reviewed. Risk factors of CRM+ and the cut-off number of risk factors required to accurately predict the CRM+ were analyzed. The CRM+ rate was 9.3% and in the multivariate analysis, the significant risk factors were a tumor size exceeding 3 cm, poor response to CRT and a fixed tumor. The best cut-off to predict CRM+ was the presence of 2 risk factors. Patients with 0-1 and 2-3 risk factors had a CRM+ respectively in 1.3% and 50% of cases and a 3-year recurrence rate of 7% and 35% after a median follow-up of 50 months. Poor response, a residual tumor greater than 3 cm and a fixed tumor are predictive of CRM+. Sphincter sparing is an oncological safety procedure for patients with 0-1 criteria but not for patients with 2-3 criteria. Copyright © 2015 Elsevier Ltd. All rights reserved.
ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.
Morota, Gota
2017-12-20
Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/ . ShinyGPAS is a shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
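As one concrete example of the kind of deterministic formula such a simulator visualises, the sketch below evaluates the widely used expression r = sqrt(N h^2 / (N h^2 + Me)), which relates expected prediction accuracy to training-set size, heritability, and the number of independent chromosome segments; whether this exact parameterisation matches the app's defaults is an assumption here.

```python
# Sketch: a deterministic genomic prediction accuracy formula evaluated over
# training-set sizes (parameter values are illustrative).
import numpy as np

def expected_accuracy(n_train, h2, m_e):
    """r = sqrt(N h^2 / (N h^2 + Me))."""
    return np.sqrt(n_train * h2 / (n_train * h2 + m_e))

for n in (500, 2000, 10000):
    print(n, round(float(expected_accuracy(n, h2=0.4, m_e=1000)), 3))
```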
Peng, Yang; Wu, Chao; Zheng, Yifu; Dong, Jun
2017-01-01
Welded joints are prone to fatigue cracking in the presence of welding defects and bending stress. Fracture mechanics is a useful approach with which the fatigue life of the welded joint can be predicted. The key challenge of such predictions using fracture mechanics is how to accurately calculate the stress intensity factor (SIF). An empirical formula for calculating the SIF of welded joints under bending stress was developed by Baik, Yamada and Ishikawa based on the hybrid method. However, when calculating the SIF of a semi-elliptical crack, this study found that the accuracy of the Baik-Yamada formula was poor when compared with benchmark results, experimental data and numerical results. The reasons for the reduced accuracy of the Baik-Yamada formula were identified and discussed in this paper. Furthermore, a new correction factor was developed and added to the Baik-Yamada formula by using theoretical analysis and numerical regression. Finally, the predictions of the modified Baik-Yamada formula were compared with the benchmark results, experimental data and numerical results. It was found that the accuracy of the modified Baik-Yamada formula was greatly improved. Therefore, it is proposed that this modified formula be used to conveniently and accurately calculate the SIF of semi-elliptical cracks in welded joints under bending stress. PMID:28772527
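The abstract does not reproduce the Baik-Yamada expression or the new correction factor. As background only, the textbook form of a surface-crack SIF is K = Y·σ·√(πa), with Y a geometry and loading correction; the sketch below evaluates it with illustrative values, and the extra correction argument merely stands in for the kind of factor the paper introduces.

```python
# Sketch (textbook form, not the Baik-Yamada expression): stress intensity
# factor K = correction * Y * sigma * sqrt(pi * a) for a shallow surface crack.
import math

def stress_intensity_factor(sigma_mpa, a_mm, Y=1.12, correction=1.0):
    """K in MPa*sqrt(m) for nominal stress sigma (MPa) and crack depth a (mm)."""
    a_m = a_mm / 1000.0
    return correction * Y * sigma_mpa * math.sqrt(math.pi * a_m)

print(round(stress_intensity_factor(sigma_mpa=80.0, a_mm=2.0, Y=1.12, correction=1.05), 2))
```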
Assessing the accuracy and stability of variable selection ...
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti
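A minimal sketch of the kind of backwards-elimination workflow described above, using scikit-learn; the drop fraction, tree count and stopping rule are illustrative assumptions, and, as the abstract cautions, the out-of-bag (OOB) estimate tracked here should ultimately be checked against cross-validation folds kept external to the selection loop.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def backward_elimination(X: pd.DataFrame, y, min_features=5, drop_frac=0.2):
    """Iteratively drop the least important predictors, tracking OOB accuracy."""
    kept = list(X.columns)
    history = []
    while len(kept) >= min_features:
        rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                    random_state=0, n_jobs=-1)
        rf.fit(X[kept], y)
        history.append((len(kept), rf.oob_score_))
        if len(kept) == min_features:
            break
        order = np.argsort(rf.feature_importances_)       # least important first
        n_drop = max(1, int(drop_frac * len(kept)))
        kept = [kept[i] for i in sorted(order[n_drop:])]  # keep the rest, original order
    return kept, history
```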
Hyperfibrinogenemia is a poor prognostic factor in diffuse large B cell lymphoma.
Niu, Jun-Ying; Tian, Tian; Zhu, Hua-Yuan; Liang, Jin-Hua; Wu, Wei; Cao, Lei; Lu, Rui-Nan; Wang, Li; Li, Jian-Yong; Xu, Wei
2018-06-02
Diffuse large B cell lymphoma (DLBCL) is the most common subtype of non-Hodgkin lymphomas worldwide. Previous studies indicated that hyperfibrinogenemia was a predictor of poor prognosis in various tumors. The purpose of our study was to evaluate the prognostic effect of hyperfibrinogenemia in DLBCL. Data of 228 patients, who were diagnosed with DLBCL in our hospital between May 2009 and February 2016, were analyzed retrospectively. The Kaplan-Meier method and Cox regression were performed to find prognostic factors associated with progression-free survival (PFS) and overall survival (OS). Receiver operating characteristic (ROC) curves and the areas under the curve were used to evaluate the predictive accuracy of predictors. Comparison of characteristics between groups indicated that patients with a high National Comprehensive Cancer Network-International Prognostic Index (NCCN-IPI) score (4-8) and advanced stage (III-IV) were more likely to suffer from hyperfibrinogenemia. The Kaplan-Meier method revealed that patients with hyperfibrinogenemia showed inferior PFS (P < 0.001) and OS (P < 0.001) compared with those without hyperfibrinogenemia. Multivariate analysis showed that hyperfibrinogenemia was an independent prognostic factor associated with poor outcomes (HR = 1.90, 95% CI: 1.15-3.16 for PFS, P = 0.013; HR = 2.65, 95% CI: 1.46-4.79 for OS, P = 0.001). We combined hyperfibrinogenemia and NCCN-IPI to build a new prognostic index (NPI). The NPI was demonstrated to have a superior predictive effect on prognosis (P = 0.0194 for PFS, P = 0.0034 for OS). Hyperfibrinogenemia was demonstrated to be able to predict poor outcome in DLBCL, especially for patients with advanced stage and high NCCN-IPI score. Adding hyperfibrinogenemia to NCCN-IPI could significantly improve the predictive effect of NCCN-IPI.
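The survival workflow described here (Kaplan-Meier curves with a log-rank comparison, then a multivariable Cox model) can be reproduced with the lifelines package; the file name and column names below are hypothetical placeholders, not the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Assumed columns (illustrative): 'os_months', 'death' (1 = event),
# 'hyperfibrinogenemia' (0/1) and 'nccn_ipi'.
df = pd.read_csv("dlbcl_cohort.csv")
high = df[df.hyperfibrinogenemia == 1]
low = df[df.hyperfibrinogenemia == 0]

# Kaplan-Meier curves and log-rank comparison between the two groups
kmf = KaplanMeierFitter()
kmf.fit(high.os_months, event_observed=high.death, label="hyperfibrinogenemia")
kmf.plot_survival_function()
kmf.fit(low.os_months, event_observed=low.death, label="normal fibrinogen")
kmf.plot_survival_function()
print(logrank_test(high.os_months, low.os_months,
                   event_observed_A=high.death, event_observed_B=low.death).p_value)

# Multivariable Cox model: hazard ratio for hyperfibrinogenemia adjusted for NCCN-IPI
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "hyperfibrinogenemia", "nccn_ipi"]],
        duration_col="os_months", event_col="death")
cph.print_summary()
```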
Soultan, Alaaeldin; Safi, Kamran
2017-01-01
Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
Accuracy of gestalt perception of acute chest pain in predicting coronary artery disease
das Virgens, Cláudio Marcelo Bittencourt; Lemos Jr, Laudenor; Noya-Rabelo, Márcia; Carvalhal, Manuela Campelo; Cerqueira Junior, Antônio Maurício dos Santos; Lopes, Fernanda Oliveira de Andrade; de Sá, Nicole Cruz; Suerdieck, Jéssica Gonzalez; de Souza, Thiago Menezes Barbosa; Correia, Vitor Calixto de Almeida; Sodré, Gabriella Sant'Ana; da Silva, André Barcelos; Alexandre, Felipe Kalil Beirão; Ferreira, Felipe Rodrigues Marques; Correia, Luís Cláudio Lemos
2017-01-01
AIM To test accuracy and reproducibility of gestalt to predict obstructive coronary artery disease (CAD) in patients with acute chest pain. METHODS We studied individuals who were consecutively admitted to our Chest Pain Unit. At admission, investigators performed a standardized interview and recorded 14 chest pain features. Based on these features, a cardiologist who was blind to other clinical characteristics made unstructured judgment of CAD probability, both numerically and categorically. As the reference standard for testing the accuracy of gestalt, angiography was required to rule-in CAD, while either angiography or non-invasive test could be used to rule-out. In order to assess reproducibility, a second cardiologist did the same procedure. RESULTS In a sample of 330 patients, the prevalence of obstructive CAD was 48%. Gestalt’s numerical probability was associated with CAD, but the area under the curve of 0.61 (95%CI: 0.55-0.67) indicated low level of accuracy. Accordingly, categorical definition of typical chest pain had a sensitivity of 48% (95%CI: 40%-55%) and specificity of 66% (95%CI: 59%-73%), yielding a negligible positive likelihood ratio of 1.4 (95%CI: 0.65-2.0) and negative likelihood ratio of 0.79 (95%CI: 0.62-1.02). Agreement between the two cardiologists was poor in the numerical classification (95% limits of agreement = -71% to 51%) and categorical definition of typical pain (Kappa = 0.29; 95%CI: 0.21-0.37). CONCLUSION Clinical judgment based on a combination of chest pain features is neither accurate nor reproducible in predicting obstructive CAD in the acute setting. PMID:28400920
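The accuracy measures reported above follow directly from a 2x2 table of gestalt classification versus angiographically confirmed CAD. The sketch below computes them; the cell counts are hypothetical, chosen only to roughly reproduce the reported sensitivity, specificity and likelihood ratios, since the abstract does not give the raw table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Basic 2x2 diagnostic accuracy measures."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),          # positive likelihood ratio
        "LR-": (1 - sens) / spec,          # negative likelihood ratio
    }

# Hypothetical counts approximating the reported 48% sensitivity and 66% specificity
# of "typical chest pain" for obstructive CAD in a cohort of 330 with 48% prevalence.
print(diagnostic_metrics(tp=76, fp=58, fn=82, tn=114))
```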
Keenswijk, Werner; Vanmassenhove, Jill; Raes, Ann; Dhont, Evelyn; Vande Walle, Johan
2017-03-01
Diarrhea-associated hemolytic uremic syndrome (D+HUS) is a common thrombotic microangiopathy during childhood and early identification of parameters predicting poor outcome could enable timely intervention. This study aims to establish the accuracy of BUN-to-serum creatinine ratio at admission, in addition to other parameters in predicting the clinical course and outcome. Records were searched for children between 1 January 2008 and 1 January 2015 admitted with D+HUS. A complicated course was defined as developing one or more of the following: neurological dysfunction, pancreatitis, cardiac or pulmonary involvement, hemodynamic instability, and hematologic complications while poor outcome was defined by death or development of chronic kidney disease. Thirty-four children were included from which 11 with a complicated disease course/poor outcome. Risk of a complicated course/poor outcome was strongly associated with oliguria (p = 0.000006) and hypertension (p = 0.00003) at presentation. In addition, higher serum creatinine (p = 0.000006) and sLDH (p = 0.02) with lower BUN-to-serum creatinine ratio (p = 0.000007) were significantly associated with development of complications. A BUN-to-sCreatinine ratio ≤40 at admission was a sensitive and highly specific predictor of a complicated disease course/poor outcome. A BUN-to-serum Creatinine ratio can accurately identify children with D+HUS at risk for a complicated course and poor outcome. What is Known: • Oliguria is a predictor of poor long-term outcome in D+HUS What is New: • BUN-to-serum Creatinine ratio at admission is an entirely novel and accurate predictor of poor outcome and complicated clinical outcome in D+HUS • Early detection of the high risk group in D+HUS enabling early treatment and adequate monitoring.
Carvalho, Benilton S.; Bilevicius, Elizabeth; Alvim, Marina K. M.; Lopes-Cendes, Iscia
2017-01-01
Mesial temporal lobe epilepsy is the most common form of adult epilepsy in surgical series. Currently, the only characteristic used to predict poor response to clinical treatment in this syndrome is the presence of hippocampal sclerosis. Single nucleotide polymorphisms (SNPs) located in genes encoding drug transporter and metabolism proteins could influence response to therapy. Therefore, we aimed to evaluate whether combining information from clinical variables as well as SNPs in candidate genes could improve the accuracy of predicting response to drug therapy in patients with mesial temporal lobe epilepsy. For this, we divided 237 patients into two groups: 75 responsive and 162 refractory to antiepileptic drug therapy. We genotyped 119 SNPs in ABCB1, ABCC2, CYP1A1, CYP1A2, CYP1B1, CYP2C9, CYP2C19, CYP2D6, CYP2E1, CYP3A4, and CYP3A5 genes. We used 98 additional SNPs to evaluate population stratification. We assessed a first scenario using only clinical variables and a second one including SNP information. The random forests algorithm combined with leave-one-out cross-validation was used to identify the best predictive model in each scenario and compared their accuracies using the area under the curve statistic. Additionally, we built a variable importance plot to present the set of most relevant predictors on the best model. The selected best model included the presence of hippocampal sclerosis and 56 SNPs. Furthermore, including SNPs in the model improved accuracy from 0.4568 to 0.8177. Our findings suggest that adding genetic information provided by SNPs, located on drug transport and metabolism genes, can improve the accuracy for predicting which patients with mesial temporal lobe epilepsy are likely to be refractory to drug treatment, making it possible to identify patients who may benefit from epilepsy surgery sooner. PMID:28052106
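A minimal sketch of the evaluation scheme described above: random forests scored by leave-one-out cross-validation and summarized with the area under the ROC curve. The feature encoding of clinical variables and SNP genotypes, and all hyperparameters, are assumptions for illustration; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

def loo_auc(X: np.ndarray, y: np.ndarray, random_state=0):
    """Leave-one-out cross-validated AUC for a random forest.
    X: (n_samples, n_features) clinical variables (scenario 1) or clinical
    variables plus numerically coded SNP genotypes (scenario 2);
    y: 0 = responsive, 1 = refractory to antiepileptic drugs."""
    proba = np.empty(len(y), dtype=float)
    for train, test in LeaveOneOut().split(X):
        rf = RandomForestClassifier(n_estimators=500, random_state=random_state)
        rf.fit(X[train], y[train])
        proba[test] = rf.predict_proba(X[test])[:, 1]   # out-of-fold probability
    return roc_auc_score(y, proba)
```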
Predicting the Geothermal Heat Flux in Greenland: A Machine Learning Approach
NASA Astrophysics Data System (ADS)
Rezvanbehbahani, Soroush; Stearns, Leigh A.; Kadivar, Amir; Walker, J. Doug; van der Veen, C. J.
2017-12-01
Geothermal heat flux (GHF) is a crucial boundary condition for making accurate predictions of ice sheet mass loss, yet it is poorly known in Greenland due to inaccessibility of the bedrock. Here we use a machine learning algorithm on a large collection of relevant geologic features and global GHF measurements and produce a GHF map of Greenland that we argue is within ~15% accuracy. The main features of our predicted GHF map include a large region with high GHF in central-north Greenland surrounding the NorthGRIP ice core site, and hot spots in the Jakobshavn Isbræ catchment, upstream of Petermann Gletscher, and near the terminus of Nioghalvfjerdsfjorden glacier. Our model also captures the trajectory of Greenland movement over the Icelandic plume by predicting a stripe of elevated GHF in central-east Greenland. Finally, we show that our model can produce substantially more accurate predictions if additional measurements of GHF in Greenland are provided.
Liaskou, Chara; Vouzounerakis, Eleftherios; Moirasgenti, Maria; Trikoupi, Anastasia; Staikou, Chryssoula
2014-01-01
Background and Aims: Difficult airway assessment is based on various anatomic parameters of the upper airway, most of which focus on the oral cavity and pharyngeal structures. The diagnostic value of tests based on neck anatomy in predicting difficult laryngoscopy was assessed in this prospective, open cohort study. Methods: We studied 341 adult patients scheduled to receive general anaesthesia. Thyromental distance (TMD), sternomental distance (STMD), ratio of height to thyromental distance (RHTMD) and neck circumference (NC) were measured pre-operatively. The laryngoscopic view was classified according to the Cormack–Lehane Grade (1-4). Difficult laryngoscopy was defined as Cormack–Lehane Grade 3 or 4. The optimal cut-off points for each variable were identified by using receiver operating characteristic analysis. Sensitivity, specificity, positive predictive value and negative predictive value (NPV) were calculated for each test. Multivariate analysis with logistic regression, including all variables, was used to create a predictive model. Comparisons between genders were also performed. Results: Laryngoscopy was difficult in 12.6% of the patients. The cut-off values were: TMD ≤7 cm, STMD ≤15 cm, RHTMD >18.4 and NC >37.5 cm. The RHTMD had the highest sensitivity (88.4%) and NPV (95.2%), while TMD had the highest specificity (83.9%). The area under curve (AUC) for the TMD, STMD, RHTMD and NC was 0.63, 0.64, 0.62 and 0.54, respectively. The predictive model exhibited a higher and statistically significant diagnostic accuracy (AUC: 0.68, P < 0.001). Gender-specific cut-off points improved the predictive accuracy of NC in women (AUC: 0.65). Conclusions: The TMD, STMD, RHTMD and NC were found to be poor single predictors of difficult laryngoscopy, while a model including all four variables had a significant predictive accuracy. Among the studied tests, gender-specific cut-off points should be used for NC. PMID:24963183
Liaskou, Chara; Vouzounerakis, Eleftherios; Moirasgenti, Maria; Trikoupi, Anastasia; Staikou, Chryssoula
2014-03-01
Difficult airway assessment is based on various anatomic parameters of the upper airway, most of which focus on the oral cavity and pharyngeal structures. The diagnostic value of tests based on neck anatomy in predicting difficult laryngoscopy was assessed in this prospective, open cohort study. We studied 341 adult patients scheduled to receive general anaesthesia. Thyromental distance (TMD), sternomental distance (STMD), ratio of height to thyromental distance (RHTMD) and neck circumference (NC) were measured pre-operatively. The laryngoscopic view was classified according to the Cormack-Lehane Grade (1-4). Difficult laryngoscopy was defined as Cormack-Lehane Grade 3 or 4. The optimal cut-off points for each variable were identified by using receiver operating characteristic analysis. Sensitivity, specificity, positive predictive value and negative predictive value (NPV) were calculated for each test. Multivariate analysis with logistic regression, including all variables, was used to create a predictive model. Comparisons between genders were also performed. Laryngoscopy was difficult in 12.6% of the patients. The cut-off values were: TMD ≤7 cm, STMD ≤15 cm, RHTMD >18.4 and NC >37.5 cm. The RHTMD had the highest sensitivity (88.4%) and NPV (95.2%), while TMD had the highest specificity (83.9%). The area under curve (AUC) for the TMD, STMD, RHTMD and NC was 0.63, 0.64, 0.62 and 0.54, respectively. The predictive model exhibited a higher and statistically significant diagnostic accuracy (AUC: 0.68, P < 0.001). Gender-specific cut-off points improved the predictive accuracy of NC in women (AUC: 0.65). The TMD, STMD, RHTMD and NC were found to be poor single predictors of difficult laryngoscopy, while a model including all four variables had a significant predictive accuracy. Among the studied tests, gender-specific cut-off points should be used for NC.
Serrancolí, Gil; Kinney, Allison L.; Fregly, Benjamin J.; Font-Llagunes, Josep M.
2016-01-01
Though walking impairments are prevalent in society, clinical treatments are often ineffective at restoring lost function. For this reason, researchers have begun to explore the use of patient-specific computational walking models to develop more effective treatments. However, the accuracy with which models can predict internal body forces in muscles and across joints depends on how well relevant model parameter values can be calibrated for the patient. This study investigated how knowledge of internal knee contact forces affects calibration of neuromusculoskeletal model parameter values and subsequent prediction of internal knee contact and leg muscle forces during walking. Model calibration was performed using a novel two-level optimization procedure applied to six normal walking trials from the Fourth Grand Challenge Competition to Predict In Vivo Knee Loads. The outer-level optimization adjusted time-invariant model parameter values to minimize passive muscle forces, reserve actuator moments, and model parameter value changes with (Approach A) and without (Approach B) tracking of experimental knee contact forces. Using the current guess for model parameter values but no knee contact force information, the inner-level optimization predicted time-varying muscle activations that were close to experimental muscle synergy patterns and consistent with the experimental inverse dynamic loads (both approaches). For all the six gait trials, Approach A predicted knee contact forces with high accuracy for both compartments (average correlation coefficient r = 0.99 and root mean square error (RMSE) = 52.6 N medial; average r = 0.95 and RMSE = 56.6 N lateral). In contrast, Approach B overpredicted contact force magnitude for both compartments (average RMSE = 323 N medial and 348 N lateral) and poorly matched contact force shape for the lateral compartment (average r = 0.90 medial and −0.10 lateral). Approach B had statistically higher lateral muscle forces and lateral optimal muscle fiber lengths but lower medial, central, and lateral normalized muscle fiber lengths compared to Approach A. These findings suggest that poorly calibrated model parameter values may be a major factor limiting the ability of neuromusculoskeletal models to predict knee contact and leg muscle forces accurately for walking. PMID:27210105
Almajwal, Ali M; Williams, Peter G; Batterham, Marijka J
2011-07-01
To assess the accuracy of resting energy expenditure (REE) measurement in a sample of overweight and obese Saudi males, using the BodyGem device (BG) with whole room calorimetry (WRC) as a reference, and to evaluate the accuracy of predictive equations. Thirty-eight subjects (mean +/- SD, age 26.8+/- 3.7 years, body mass index 31.0+/- 4.8) were recruited during the period from 5 February 2007 to 28 March 2008. Resting energy expenditure was measured using a WRC and BG device, and also calculated using 7 prediction equations. Mean differences, bias, percent of bias (%bias), accurate estimation, underestimation and overestimation were calculated. Repeated measures with the BG were not significantly different (accurate prediction: 81.6%; %bias 1.1+/- 6.3, p>0.24) with limits of agreement ranging from +242 to -200 kcal. Resting energy expenditure measured by BG was significantly less than WRC values (accurate prediction: 47.4%; %bias: 11.0+/- 14.6, p = 0.0001) with unacceptably wide limits of agreement. Harris-Benedict, Schofield and World Health Organization equations were the most accurate, estimating REE within 10% of measured REE, but none seem appropriate to predict the REE of individuals. There was a poor agreement between the REE measured by WRC compared to BG or predictive equations. The BG assessed REE accurately in 47.4% of the subjects on an individual level.
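For orientation, the Harris-Benedict equation mentioned above can be evaluated directly; the sketch uses the commonly quoted original coefficients for men and the study's within-10% criterion for an "accurate" individual prediction. The example subject is hypothetical, loosely matching the cohort's mean age and BMI, and the coefficients should be verified before any clinical use.

```python
def harris_benedict_male(weight_kg, height_cm, age_yr):
    """Resting energy expenditure (kcal/day) for men, original Harris-Benedict form."""
    return 66.473 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.755 * age_yr

def within_10_percent(predicted, measured):
    """The study's criterion for an 'accurate' individual prediction."""
    return abs(predicted - measured) / measured <= 0.10

# Illustrative subject resembling the cohort (27-year-old male, BMI about 31)
ree = harris_benedict_male(weight_kg=95, height_cm=175, age_yr=27)
print(round(ree), "kcal/day")
```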
LOCALIZER: subcellular localization prediction of both plant and effector proteins in the plant cell
Sperschneider, Jana; Catanzariti, Ann-Maree; DeBoer, Kathleen; Petre, Benjamin; Gardiner, Donald M.; Singh, Karam B.; Dodds, Peter N.; Taylor, Jennifer M.
2017-01-01
Pathogens secrete effector proteins and many operate inside plant cells to enable infection. Some effectors have been found to enter subcellular compartments by mimicking host targeting sequences. Although many computational methods exist to predict plant protein subcellular localization, they perform poorly for effectors. We introduce LOCALIZER for predicting plant and effector protein localization to chloroplasts, mitochondria, and nuclei. LOCALIZER shows greater prediction accuracy for chloroplast and mitochondrial targeting compared to other methods for 652 plant proteins. For 107 eukaryotic effectors, LOCALIZER outperforms other methods and predicts a previously unrecognized chloroplast transit peptide for the ToxA effector, which we show translocates into tobacco chloroplasts. Secretome-wide predictions and confocal microscopy reveal that rust fungi might have evolved multiple effectors that target chloroplasts or nuclei. LOCALIZER is the first method for predicting effector localisation in plants and is a valuable tool for prioritizing effector candidates for functional investigations. LOCALIZER is available at http://localizer.csiro.au/. PMID:28300209
nGASP--the nematode genome annotation assessment project.
Coghlan, Avril; Fiedler, Tristan J; McKay, Sheldon J; Flicek, Paul; Harris, Todd W; Blasiar, Darin; Stein, Lincoln D
2008-12-19
While the C. elegans genome is extensively annotated, relatively little information is available for other Caenorhabditis species. The nematode genome annotation assessment project (nGASP) was launched to objectively assess the accuracy of protein-coding gene prediction software in C. elegans, and to apply this knowledge to the annotation of the genomes of four additional Caenorhabditis species and other nematodes. Seventeen groups worldwide participated in nGASP, and submitted 47 prediction sets across 10 Mb of the C. elegans genome. Predictions were compared to reference gene sets consisting of confirmed or manually curated gene models from WormBase. The most accurate gene-finders were 'combiner' algorithms, which made use of transcript- and protein-alignments and multi-genome alignments, as well as gene predictions from other gene-finders. Gene-finders that used alignments of ESTs, mRNAs and proteins came in second. There was a tie for third place between gene-finders that used multi-genome alignments and ab initio gene-finders. The median gene level sensitivity of combiners was 78% and their specificity was 42%, which is nearly the same accuracy reported for combiners in the human genome. C. elegans genes with exons of unusual hexamer content, as well as those with unusually many exons, short exons, long introns, a weak translation start signal, weak splice sites, or poorly conserved orthologs posed the greatest difficulty for gene-finders. This experiment establishes a baseline of gene prediction accuracy in Caenorhabditis genomes, and has guided the choice of gene-finders for the annotation of newly sequenced genomes of Caenorhabditis and other nematode species. We have created new gene sets for C. briggsae, C. remanei, C. brenneri, C. japonica, and Brugia malayi using some of the best-performing gene-finders.
Low Expression of Mucin-4 Predicts Poor Prognosis in Patients With Clear-Cell Renal Cell Carcinoma
Fu, Hangcheng; Liu, Yidong; Xu, Le; Chang, Yuan; Zhou, Lin; Zhang, Weijuan; Yang, Yuanfeng; Xu, Jiejie
2016-01-01
Abstract Mucin-4 (MUC4), a member of membrane-bound mucins, has been reported to exert a large variety of distinctive roles in tumorigenesis of different cancers. MUC4 is aberrantly expressed in clear-cell renal cell carcinoma (ccRCC) but its prognostic value remains unclear. This study aims to assess the clinical significance of MUC4 expression in patients with ccRCC. The expression of MUC4 was assessed by immunohistochemistry in 198 patients with ccRCC who underwent nephrectomy in 2003 and 2004 and were reviewed retrospectively. Sixty-seven patients died before the last follow-up in the cohort. The Kaplan–Meier method with log-rank test was applied to compare survival curves. Univariate and multivariate Cox regression models were applied to evaluate the prognostic value of MUC4 expression in overall survival (OS). The predictive nomogram was constructed based on the independent prognostic factors. A calibration curve was constructed to evaluate the predictive accuracy of the nomogram. In patients with ccRCC, MUC4 expression, which was determined to be an independent prognostic indicator for OS (hazard ratio [HR] 3.891; P < 0.001), was negatively associated with tumor size (P = 0.036), Fuhrman grade (P = 0.044), and OS (P < 0.001). The prognostic accuracy of TNM stage, UCLA Integrated Scoring System (UISS), and Mayo clinic stage, size, grade, and necrosis score (SSIGN) prognostic models was improved when MUC4 expression was added. The independent prognostic factors (pT stage, distant metastases, Fuhrman grade, sarcomatoid differentiation, and MUC4 expression) were integrated to establish a predictive nomogram with high predictive accuracy. MUC4 expression is an independent prognostic factor for OS in patients with ccRCC. PMID:27124015
Milker, Yvonne; Weinkauf, Manuel F G; Titschack, Jürgen; Freiwald, Andre; Krüger, Stefan; Jorissen, Frans J; Schmiedl, Gerhard
2017-01-01
We present paleo-water depth reconstructions for the Pefka E section deposited on the island of Rhodes (Greece) during the early Pleistocene. For these reconstructions, a transfer function (TF) using modern benthic foraminifera surface samples from the Adriatic and Western Mediterranean Seas has been developed. The TF model gives an overall predictive accuracy of ~50 m over a water depth range of ~1200 m. Two separate TF models for shallower and deeper water depth ranges indicate a good predictive accuracy of 9 m for shallower water depths (0-200 m) but far less accuracy of 130 m for deeper water depths (200-1200 m) due to uneven sampling along the water depth gradient. To test the robustness of the TF, we randomly selected modern samples to develop random TFs, showing that the model is robust for water depths between 20 and 850 m while greater water depths are underestimated. We applied the TF to the Pefka E fossil data set. The goodness-of-fit statistics showed that most fossil samples have a poor to extremely poor fit to water depth. We interpret this as a consequence of a lack of modern analogues for the fossil samples and removed all samples with extremely poor fit. To test the robustness and significance of the reconstructions, we compared them to reconstructions from an alternative TF model based on the modern analogue technique and applied the randomization TF test. We found our estimates to be robust and significant at the 95% confidence level, but we also observed that our estimates are strongly overprinted by orbital, precession-driven changes in paleo-productivity and corrected our estimates by filtering out the precession-related component. We compared our corrected record to reconstructions based on a modified plankton/benthos (P/B) ratio, excluding infaunal species, and to stable oxygen isotope data from the same section, as well as to paleo-water depth estimates for the Lindos Bay Formation of other sediment sections of Rhodes. These comparisons indicate that our orbital-corrected reconstructions are reasonable and reflect major tectonic movements of Rhodes during the early Pleistocene.
Weinkauf, Manuel F. G.; Titschack, Jürgen; Freiwald, Andre; Krüger, Stefan; Jorissen, Frans J.; Schmiedl, Gerhard
2017-01-01
We present paleo-water depth reconstructions for the Pefka E section deposited on the island of Rhodes (Greece) during the early Pleistocene. For these reconstructions, a transfer function (TF) using modern benthic foraminifera surface samples from the Adriatic and Western Mediterranean Seas has been developed. The TF model gives an overall predictive accuracy of ~50 m over a water depth range of ~1200 m. Two separate TF models for shallower and deeper water depth ranges indicate a good predictive accuracy of 9 m for shallower water depths (0–200 m) but far less accuracy of 130 m for deeper water depths (200–1200 m) due to uneven sampling along the water depth gradient. To test the robustness of the TF, we randomly selected modern samples to develop random TFs, showing that the model is robust for water depths between 20 and 850 m while greater water depths are underestimated. We applied the TF to the Pefka E fossil data set. The goodness-of-fit statistics showed that most fossil samples have a poor to extremely poor fit to water depth. We interpret this as a consequence of a lack of modern analogues for the fossil samples and removed all samples with extremely poor fit. To test the robustness and significance of the reconstructions, we compared them to reconstructions from an alternative TF model based on the modern analogue technique and applied the randomization TF test. We found our estimates to be robust and significant at the 95% confidence level, but we also observed that our estimates are strongly overprinted by orbital, precession-driven changes in paleo-productivity and corrected our estimates by filtering out the precession-related component. We compared our corrected record to reconstructions based on a modified plankton/benthos (P/B) ratio, excluding infaunal species, and to stable oxygen isotope data from the same section, as well as to paleo-water depth estimates for the Lindos Bay Formation of other sediment sections of Rhodes. These comparisons indicate that our orbital-corrected reconstructions are reasonable and reflect major tectonic movements of Rhodes during the early Pleistocene. PMID:29166653
Robust prediction of protein subcellular localization combining PCA and WSVMs.
Tian, Jiang; Gu, Hong; Liu, Wenqi; Gao, Chiyang
2011-08-01
Automated prediction of protein subcellular localization is an important tool for genome annotation and drug discovery, and Support Vector Machines (SVMs) can effectively solve this problem in a supervised manner. However, the datasets obtained from real experiments are likely to contain outliers or noise, which can lead to poor generalization ability and classification accuracy. To explore this problem, we adopt strategies to lower the effect of outliers. First, we design a method based on Weighted SVMs (WSVMs), in which different weights are assigned to different data points so that the training algorithm learns the decision boundary according to the relative importance of the data points. Second, we analyse the influence of Principal Component Analysis (PCA) on WSVM classification and propose a hybrid classifier combining the merits of both PCA and WSVM. After performing dimension reduction operations on the datasets, a kernel-based possibilistic c-means algorithm can generate more suitable weights for the training, since PCA transforms the data into a new coordinate system whose largest-variance directions are strongly affected by outliers. Experiments on benchmark datasets show promising results, confirming the effectiveness of the proposed method in terms of prediction accuracy. Copyright © 2011 Elsevier Ltd. All rights reserved.
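A rough scikit-learn analogue of the hybrid classifier described above: PCA for dimension reduction followed by an RBF SVM trained with per-sample weights. The weights are taken as given here (in the paper they come from a kernel-based possibilistic c-means step), and the hyperparameters and pipeline step names are illustrative assumptions.

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_pca_wsvm(X, y, sample_weight, n_components=50):
    """Reduce dimensionality with PCA, then train an RBF SVM in which each
    training point carries its own weight (lower weights down-weight likely outliers)."""
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          SVC(kernel="rbf", C=10.0, gamma="scale"))
    # fit-parameter routing sends sample_weight to the final 'svc' step
    model.fit(X, y, svc__sample_weight=sample_weight)
    return model
```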
Cario, Gunnar; Stanulla, Martin; Fine, Bernard M; Teuffel, Oliver; Neuhoff, Nils V; Schrauder, André; Flohr, Thomas; Schäfer, Beat W; Bartram, Claus R; Welte, Karl; Schlegelberger, Brigitte; Schrappe, Martin
2005-01-15
Treatment resistance, as indicated by the presence of high levels of minimal residual disease (MRD) after induction therapy and induction consolidation, is associated with a poor prognosis in childhood acute lymphoblastic leukemia (ALL). We hypothesized that treatment resistance is an intrinsic feature of ALL cells reflected in the gene expression pattern and that resistance to chemotherapy can be predicted before treatment. To test these hypotheses, gene expression signatures of ALL samples with high MRD load were compared with those of samples without measurable MRD during treatment. We identified 54 genes that clearly distinguished resistant from sensitive ALL samples. Genes with low expression in resistant samples were predominantly associated with cell-cycle progression and apoptosis, suggesting that impaired cell proliferation and apoptosis are involved in treatment resistance. Prediction analysis using randomly selected samples as a training set and the remaining samples as a test set revealed an accuracy of 84%. We conclude that resistance to chemotherapy seems at least in part to be an intrinsic feature of ALL cells. Because treatment response could be predicted with high accuracy, gene expression profiling could become a clinically relevant tool for treatment stratification in the early course of childhood ALL.
Chen, Zhiru; Hong, Wenxue
2016-02-01
Considering the low prediction accuracy for positive samples and the poor overall classification performance caused by unbalanced MicroRNA (miRNA) target sample data, we propose a support vector machine-integration of under-sampling and weight (SVM-IUSM) algorithm in this paper, an under-sampling method based on ensemble learning. The algorithm adopts SVM as the learning algorithm and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming to reduce the degree of imbalance between positive and negative samples. Meanwhile, during adaptive weight adjustment of the samples, the SVM-IUSM algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the integrated miRNA target classifier is obtained by combining multiple weak classifiers through a voting mechanism. The experiments revealed that SVM-IUSM, compared with other algorithms on unbalanced datasets, not only improved the accuracy for positive targets and the overall classification performance, but also enhanced the generalization ability of the miRNA target classifier.
Non-verbal sensorimotor timing deficits in children and adolescents who stutter
Falk, Simone; Müller, Thilo; Dalla Bella, Simone
2015-01-01
There is growing evidence that motor and speech disorders co-occur during development. In the present study, we investigated whether stuttering, a developmental speech disorder, is associated with a predictive timing deficit in childhood and adolescence. By testing sensorimotor synchronization abilities, we aimed to assess whether predictive timing is dysfunctional in young participants who stutter (8–16 years). Twenty German children and adolescents who stutter and 43 non-stuttering participants matched for age and musical training were tested on their ability to synchronize their finger taps with periodic tone sequences and with a musical beat. Forty percent of children and 90% of adolescents who stutter displayed poor synchronization with both metronome and musical stimuli, falling below 2.5% of the estimated population based on the performance of the group without the disorder. Synchronization deficits were characterized by either lower synchronization accuracy or lower consistency or both. Lower accuracy resulted in an over-anticipation of the pacing event in participants who stutter. Moreover, individual profiles revealed that lower consistency was typical of participants that were severely stuttering. These findings support the idea that malfunctioning predictive timing during auditory–motor coupling plays a role in stuttering in children and adolescents. PMID:26217245
Theron, Grant; Pooran, Anil; Peter, Jonny; van Zyl-Smit, Richard; Mishra, Hridesh Kumar; Meldau, Richard; Calligaro, Greg; Allwood, Brian; Sharma, Surendra Kumar; Dawson, Rod; Dheda, Keertan
2017-01-01
Information regarding the utility of adjunct diagnostic tests in combination with Xpert MTB/RIF (Cepheid, Sunnyvale, CA, USA) is limited. We hypothesised adjunct tests could enhance accuracy and/or reduce the cost of tuberculosis (TB) diagnosis prior to MTB/RIF testing, and rule-in or rule-out TB in MTB/RIF-negative individuals. We assessed the accuracy and/or laboratory-associated cost of diagnosis of smear microscopy, chest radiography (CXR) and interferon-γ release assays (IGRAs; T-SPOT-TB (Oxford Immunotec, Oxford, UK) and QuantiFERON-TB Gold In-Tube (Cellestis, Chadstone, Australia)) combined with MTB/RIF for TB in 480 patients in South Africa. When conducted prior to MTB/RIF: 1) smear microscopy followed by MTB/RIF (if smear negative) had the lowest cost of diagnosis of any strategy investigated; 2) a combination of smear microscopy, CXR (if smear negative) and MTB/RIF (if imaging compatible with active TB) did not further reduce the cost per TB case diagnosed; and 3) a normal CXR ruled out TB in 18% of patients (57 out of 324; negative predictive value (NPV) 100%). When downstream adjunct tests were applied to MTB/RIF-negative individuals, radiology ruled out TB in 24% (56 out of 234; NPV 100%), smear microscopy ruled in TB in 21% (seven out of 24) of culture-positive individuals and IGRAs were not useful in either context. In resource-poor settings, smear microscopy combined with MTB/RIF had the highest accuracy and lowest cost of diagnosis compared to either technique alone. In MTB/RIF-negative individuals, CXR has poor rule-in value but can reliably rule out TB in approximately one in four cases. These data inform upon the programmatic utility of MTB/RIF in high-burden settings. PMID:22075479
Chaos and the Double Function of Communication
NASA Astrophysics Data System (ADS)
Aula, P. S.
Since at least the needle model age, communication researchers have systematically sought means to explain, control and predict communication behavior between people. For many reasons, the accuracy of constructed models and the studies based upon them has not risen very high. It can be claimed that the reasons for the inaccuracy of communication models, and thus the poor predictability of everyday action, originate from the processes' innate chaos, apparent beneath their behavior. This leads to the argument that communication systems, which appear stable and have precisely identical starting points and identical operating environments, can nevertheless behave in an exceptional and completely different manner, despite the fact that their behavior is ruled or directed by the same rules or laws.
Ha, Diep H; Spencer, A John; Slade, Gary D; Chartier, Andrew D
2014-01-01
Objectives To determine the accuracy of the caries risk assessment system and performance of clinicians in their attempts to predict caries for children during routine practice. Design Longitudinal study. Setting and participants Data on caries risk assessment conducted by clinicians during routine practice while providing care for children in the South Australian School Dental Service (SA SDS) were collected from electronic patient records. Baseline data on caries experience, clinicians’ ratings of caries risk status and child demographics were obtained for all SA SDS patients aged 5–15 years examined during 2002–2005. Outcome measure Children’s caries incidence rate, calculated using examination data after a follow-up period of 6–48 months from baseline, was used as the gold standard to compute the sensitivity (Se) and specificity (Sp) of clinicians’ baseline ratings of caries risk. Multivariate binomial regression models were used to evaluate effects of children's baseline characteristics on Se and Sp. Results A total of 133 clinicians rated caries risk status of 71 430 children during 2002–2005. The observed Se and Sp were 0.48 and 0.86, respectively (Se+Sp=1.34). Caries experience at baseline was the strongest factor influencing accuracy in the multivariable regression model. Among children with no caries experience at baseline, overall accuracy (Se+Sp) was only 1.05, whereas it was 1.28 among children with at least one tooth surface with caries experience at baseline. Conclusions Clinicians’ accuracy in predicting caries risk during routine practice was similar to levels reported in research settings that simulated patient care. Accuracy was acceptable in children who had prior caries experience at the baseline examination, while it was poor among children with no caries experience. PMID:24477318
Wang, Xueyi; Davidson, Nicholas J.
2011-01-01
Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in the previous literature. First, we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next, we show that an ensemble method can achieve > 0.5 prediction accuracy while the individual classifiers each have < 0.5 prediction accuracy. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to achieve the upper- and lower-bound accuracies with random individual classifiers, and that better algorithms need to be developed. PMID:21853162
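The second result quoted above, a majority-vote ensemble exceeding 0.5 accuracy while every member stays below 0.5, is easy to demonstrate with a constructed example. The sketch below is a toy illustration of that possibility, not the paper's proof: errors are deliberately arranged so that on 60% of the samples exactly two of the three classifiers are right.

```python
import numpy as np

n = 3000
truth = np.zeros(n, dtype=int)                # true label is always 0
preds = np.ones((3, n), dtype=int)            # start with every classifier wrong

good = int(0.6 * n)                           # samples on which two voters are right
for i in range(good):
    wrong_one = i % 3                         # rotate which classifier stays wrong
    for c in range(3):
        if c != wrong_one:
            preds[c, i] = 0                   # correct prediction (label 0)

individual = (preds == truth).mean(axis=1)
majority = (preds.sum(axis=0) >= 2).astype(int)   # 1 if most classifiers predict 1
print("individual accuracies:", individual)       # each about 0.4 (< 0.5)
print("majority-vote accuracy:", (majority == truth).mean())   # 0.6 (> 0.5)
```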
Chawarska, Katarzyna; Shic, Frederick; Macari, Suzanne; Campbell, Daniel J.; Brian, Jessica; Landa, Rebecca; Hutman, Ted; Nelson, Charles A.; Ozonoff, Sally; Tager-Flusberg, Helen; Young, Gregory S.; Zwaigenbaum, Lonnie; Cohen, Ira L.; Charman, Tony; Messinger, Daniel S.; Klin, Ami; Johnson, Scott; Bryson, Susan
2014-01-01
Objective Younger siblings of children with autism spectrum disorder (ASD) are at high risk (HR) for developing ASD as well as features of the broader autism phenotype. While this complicates early diagnostic considerations in this cohort, it also provides an opportunity to examine patterns of behavior associated specifically with ASD compared to other developmental outcomes. Method We applied Classification and Regression Trees (CART) analysis to individual items of the Autism Diagnostic Observation Schedule (ADOS) in 719 HR siblings to identify behavioral features at 18 months predictive of diagnostic outcomes (ASD, atypical development, and typical development) at 36 months. Results Three distinct combinations of features at 18 months were predictive of ASD outcome: 1) poor eye contact combined with lack of communicative gestures and giving; 2) poor eye contact combined with a lack of imaginative play; and 3) lack of giving and presence of repetitive behaviors, but with intact eye contact. These 18-month behavioral profiles predicted ASD versus non-ASD status at 36 months with 82.7% accuracy in an initial test sample and 77.3% accuracy in a validation sample. Clinical features at age 3 among children with ASD varied as a function of their 18-month symptom profiles. Children with ASD who were misclassified at 18 months were higher functioning, and their autism symptoms increased between 18 and 36 months. Conclusion These findings suggest the presence of different developmental pathways to ASD in HR siblings. Understanding such pathways will provide clearer targets for neural and genetic research and identification of developmentally specific treatments for ASD. PMID:25457930
Chawarska, Katarzyna; Shic, Frederick; Macari, Suzanne; Campbell, Daniel J; Brian, Jessica; Landa, Rebecca; Hutman, Ted; Nelson, Charles A; Ozonoff, Sally; Tager-Flusberg, Helen; Young, Gregory S; Zwaigenbaum, Lonnie; Cohen, Ira L; Charman, Tony; Messinger, Daniel S; Klin, Ami; Johnson, Scott; Bryson, Susan
2014-12-01
Younger siblings of children with autism spectrum disorder (ASD) are at high risk (HR) for developing ASD as well as features of the broader autism phenotype. Although this complicates early diagnostic considerations in this cohort, it also provides an opportunity to examine patterns of behavior associated specifically with ASD compared to other developmental outcomes. We applied Classification and Regression Trees (CART) analysis to individual items of the Autism Diagnostic Observation Schedule (ADOS) in 719 HR siblings to identify behavioral features at 18 months that were predictive of diagnostic outcomes (ASD, atypical development, and typical development) at 36 months. Three distinct combinations of features at 18 months were predictive of ASD outcome: poor eye contact combined with lack of communicative gestures and giving; poor eye contact combined with a lack of imaginative play; and lack of giving and presence of repetitive behaviors, but with intact eye contact. These 18-month behavioral profiles predicted ASD versus non-ASD status at 36 months with 82.7% accuracy in an initial test sample and 77.3% accuracy in a validation sample. Clinical features at age 3 years among children with ASD varied as a function of their 18-month symptom profiles. Children with ASD who were misclassified at 18 months were higher functioning, and their autism symptoms increased between 18 and 36 months. These findings suggest the presence of different developmental pathways to ASD in HR siblings. Understanding such pathways will provide clearer targets for neural and genetic research and identification of developmentally specific treatments for ASD. Copyright © 2014 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Femtosecond laser micromachining of compound parabolic concentrator fiber tipped glucose sensors.
Hassan, Hafeez Ul; Lacraz, Amédée; Kalli, Kyriacos; Bang, Ole
2017-03-01
We report on highly accurate femtosecond (fs) laser micromachining of a compound parabolic concentrator (CPC) fiber tip on a polymer optical fiber (POF). The accuracy is reflected in an unprecedented correspondence between the numerically predicted and experimentally found improvement in fluorescence pickup efficiency of a Förster resonance energy transfer-based POF glucose sensor. A Zemax model of the CPC-tipped sensor predicts an optimal improvement of a factor of 3.96 compared to the sensor with a plane-cut fiber tip. The fs laser micromachined CPC tip showed an increase of a factor of 3.5, which is only 11.6% from the predicted value. Earlier state-of-the-art fabrication of the CPC-shaped tip by fiber tapering was of so poor quality that the actual improvement was 43% lower than the predicted improvement of the ideal CPC shape.
RepeatsDB-lite: a web server for unit annotation of tandem repeat proteins.
Hirsh, Layla; Paladin, Lisanna; Piovesan, Damiano; Tosatto, Silvio C E
2018-05-09
RepeatsDB-lite (http://protein.bio.unipd.it/repeatsdb-lite) is a web server for the prediction of repetitive structural elements and units in tandem repeat (TR) proteins. TRs are a widespread but poorly annotated class of non-globular proteins carrying heterogeneous functions. RepeatsDB-lite extends the prediction to all TR types and strongly improves the performance both in terms of computational time and accuracy over previous methods, with precision above 95% for solenoid structures. The algorithm exploits an improved TR unit library derived from the RepeatsDB database to perform an iterative structural search and assignment. The web interface provides tools for analyzing the evolutionary relationships between units and manually refine the prediction by changing unit positions and protein classification. An all-against-all structure-based sequence similarity matrix is calculated and visualized in real-time for every user edit. Reviewed predictions can be submitted to RepeatsDB for review and inclusion.
Modi, Payal; Glavis-Bloom, Justin; Nasrin, Sabiha; Guy, Allysia; Chowa, Erika P; Dvor, Nathan; Dworkis, Daniel A; Oh, Michael; Silvestri, David M; Strasberg, Stephen; Rege, Soham; Noble, Vicki E; Alam, Nur H; Levine, Adam C
2016-01-01
Although dehydration from diarrhea is a leading cause of morbidity and mortality in children under five, existing methods of assessing dehydration status in children have limited accuracy. The objective of this study was to assess the accuracy of point-of-care ultrasound measurement of the aorta-to-IVC ratio as a predictor of dehydration in children. A prospective cohort study of children under five years with acute diarrhea was conducted in the rehydration unit of the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b). Ultrasound measurements of aorta-to-IVC ratio and dehydrated weight were obtained on patient arrival. Percent weight change was monitored during rehydration to classify children as having "some dehydration" with weight change 3-9% or "severe dehydration" with weight change > 9%. Logistic regression analysis and Receiver-Operator Characteristic (ROC) curves were used to evaluate the accuracy of aorta-to-IVC ratio as a predictor of dehydration severity. In total, 850 children were enrolled, of which 771 were included in the final analysis. Aorta to IVC ratio was a significant predictor of the percent dehydration in children with acute diarrhea, with each 1-point increase in the aorta to IVC ratio predicting a 1.1% increase in the percent dehydration of the child. However, the area under the ROC curve (0.60), sensitivity (67%), and specificity (49%) for predicting severe dehydration were all poor. Point-of-care ultrasound of the aorta-to-IVC ratio was statistically associated with volume status, but was not accurate enough to be used as an independent screening tool for dehydration in children under five years presenting with acute diarrhea in a resource-limited setting.
Modi, Payal; Glavis-Bloom, Justin; Nasrin, Sabiha; Guy, Allysia; Rege, Soham; Noble, Vicki E.; Alam, Nur H.; Levine, Adam C.
2016-01-01
Introduction Although dehydration from diarrhea is a leading cause of morbidity and mortality in children under five, existing methods of assessing dehydration status in children have limited accuracy. Objective To assess the accuracy of point-of-care ultrasound measurement of the aorta-to-IVC ratio as a predictor of dehydration in children. Methods A prospective cohort study of children under five years with acute diarrhea was conducted in the rehydration unit of the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b). Ultrasound measurements of aorta-to-IVC ratio and dehydrated weight were obtained on patient arrival. Percent weight change was monitored during rehydration to classify children as having “some dehydration” with weight change 3–9% or “severe dehydration” with weight change > 9%. Logistic regression analysis and Receiver-Operator Characteristic (ROC) curves were used to evaluate the accuracy of aorta-to-IVC ratio as a predictor of dehydration severity. Results 850 children were enrolled, of which 771 were included in the final analysis. Aorta to IVC ratio was a significant predictor of the percent dehydration in children with acute diarrhea, with each 1-point increase in the aorta to IVC ratio predicting a 1.1% increase in the percent dehydration of the child. However, the area under the ROC curve (0.60), sensitivity (67%), and specificity (49%), for predicting severe dehydration were all poor. Conclusions Point-of-care ultrasound of the aorta-to-IVC ratio was statistically associated with volume status, but was not accurate enough to be used as an independent screening tool for dehydration in children under five years presenting with acute diarrhea in a resource-limited setting. PMID:26766306
A Ranking Approach to Genomic Selection.
Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori
2015-01-01
Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
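For readers unfamiliar with NDCG, the sketch below implements one common linear-gain form: individuals are ranked by predicted value, their true breeding values serve as relevance, and the discounted cumulative gain is normalised by that of the ideal ranking. The exact gain and discount used in the paper may differ; the toy data are illustrative.

```python
import numpy as np

def dcg_at_k(relevance_in_ranked_order, k):
    """Discounted cumulative gain of the top-k items: sum_i rel_i / log2(i + 1)."""
    rel = np.asarray(relevance_in_ranked_order, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum(rel / discounts))

def ndcg_at_k(y_true, y_score, k):
    """NDCG@k: rank by predicted value, use true breeding values as relevance,
    and normalise by the DCG of the ideal (truth-sorted) ranking."""
    order = np.argsort(y_score)[::-1]
    ideal = np.sort(np.asarray(y_true, dtype=float))[::-1]
    return dcg_at_k(np.asarray(y_true, dtype=float)[order], k) / dcg_at_k(ideal, k)

# toy check: a perfect ranking gives NDCG = 1
y = np.array([3.1, 0.2, 1.5, 2.7])
print(ndcg_at_k(y, y, k=4))   # 1.0
```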
Oliver, D; Kotlicka-Antczak, M; Minichino, A; Spada, G; McGuire, P; Fusar-Poli, P
2018-03-01
Primary indicated prevention is reliant on accurate tools to predict the onset of psychosis. The gold standard assessment for detecting individuals at clinical high risk (CHR-P) for psychosis in the UK and many other countries is the Comprehensive Assessment for At Risk Mental States (CAARMS). While the prognostic accuracy of CHR-P instruments has been assessed in general, this is the first study to specifically analyse that of the CAARMS. As such, the CAARMS was used as the index test, with the reference standard being psychosis onset within 2 years. Six independent studies were analysed using MIDAS (STATA 14), with a total of 1876 help-seeking subjects referred to high risk services (CHR-P+: n=892; CHR-P-: n=984). Area under the curve (AUC), summary receiver operating characteristic curves (SROC), quality assessment, likelihood ratios, and probability modified plots were computed, along with sensitivity analyses and meta-regressions. The current meta-analysis confirmed that the 2-year prognostic accuracy of the CAARMS is only acceptable (AUC=0.79, 95% CI: 0.75-0.83) and not outstanding as previously reported. In particular, specificity was poor. Sensitivity of the CAARMS is inferior to that of the SIPS, while specificity is comparably low. However, due to the difficulties in performing these types of studies, power in this meta-analysis was low. These results indicate that refining and improving the prognostic accuracy of the CAARMS should be the mainstream area of research for the next era. Avenues of prediction improvement are critically discussed and presented to better benefit patients and improve outcomes of first episode psychosis. Copyright © 2017 The Authors. Published by Elsevier Masson SAS. All rights reserved.
Wang, Shulian; Campbell, Jeff; Stenmark, Matthew H; Stanton, Paul; Zhao, Jing; Matuszak, Martha M; Ten Haken, Randall K; Kong, Feng-Ming
2018-03-01
To study whether cytokine markers may improve predictive accuracy for radiation esophagitis (RE) in non-small cell lung cancer (NSCLC) patients. A total of 129 patients with stage I-III NSCLC treated with radiotherapy (RT) from prospective studies were included. Thirty inflammatory cytokines were measured in platelet-poor plasma samples. Logistic regression was performed to evaluate risk factors for RE. Stepwise Akaike information criterion (AIC) selection and likelihood ratio tests were used to assess model predictions. Forty-nine of 129 patients (38.0%) developed grade ≥2 RE. Univariate analysis showed that age, stage, concurrent chemotherapy, and eight dosimetric parameters were significantly associated with grade ≥2 RE (p < 0.05). IL-4, IL-5, IL-8, IL-13, IL-15, IL-1α, TGFα and eotaxin were also associated with grade ≥2 RE (p < 0.1). Age, esophagus generalized equivalent uniform dose (EUD), and baseline IL-8 were independently associated with grade ≥2 RE. The combination of these three factors had significantly higher predictive power than any single factor alone. Addition of IL-8 to the toxicity model significantly improved RE predictive accuracy (p = 0.019). Combining the baseline level of IL-8, age and esophagus EUD may predict RE more accurately. Refinement of this model with larger sample sizes and validation in a multicenter database are warranted. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
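As a rough illustration of the model-comparison step described (adding baseline IL-8 to a clinical/dosimetric toxicity model and testing the improvement with a likelihood ratio test and AIC), the sketch below uses synthetic data and hypothetical coefficients; it is not the authors' model or data.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(7)
n = 129
age = rng.normal(65, 10, size=n)
eud = rng.normal(30, 8, size=n)            # esophagus generalized EUD (Gy), synthetic
il8 = rng.lognormal(1.0, 0.5, size=n)      # baseline IL-8, synthetic
logit = -8 + 0.03 * age + 0.12 * eud + 0.4 * np.log(il8)   # hypothetical relationship
re = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))          # grade >=2 esophagitis indicator

base = sm.Logit(re, sm.add_constant(np.column_stack([age, eud]))).fit(disp=0)
full = sm.Logit(re, sm.add_constant(np.column_stack([age, eud, np.log(il8)]))).fit(disp=0)
lr = 2 * (full.llf - base.llf)                               # likelihood ratio statistic, 1 df
print("LRT p-value:", stats.chi2.sf(lr, df=1))
print("AIC base vs full:", round(base.aic, 1), round(full.aic, 1))
```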
The use of presurgical psychological screening to predict the outcome of spine surgery.
Block, A R; Ohnmeiss, D D; Guyer, R D; Rashbaum, R F; Hochschuler, S H
2001-01-01
Several previous studies have shown that psychosocial factors can influence the outcome of elective spine surgery. The purpose of the current study was to determine how well a presurgical screening instrument could predict surgical outcome. The study was conducted by staff of a psychologist's office. They performed preoperative screening for spine surgery candidates and collected the follow-up data. Presurgical screening and follow-up data collection was performed on 204 patients who underwent laminectomy/discectomy (n=118) or fusion (n=86) of the lumbar spine. The outcome measures used in the study were visual analog pain scales, the Oswestry Disability Questionnaire, and medication use. A semi-structured interview and psychometric testing were used to identify specific, quantifiable psychological, and "medical" risk factors for poor surgical outcome. A presurgical psychological screening (PPS) scorecard was completed for each patient, assessing whether the patient had a high or low level of risk on these psychological and medical dimensions. Based on the scorecard, an overall surgical prognosis of "good," "fair," or "poor" was generated. Results showed spine surgery led to significant overall improvements in pain, functional ability, and medication use. Medical and psychological risk levels were significantly related to outcome, with the poorest results obtained by patients having both high psychological and medical risk. Further, the accuracy of PPS surgical prognosis in predicting overall outcome was 82%. Only 9 of 53 patients predicted to have poor outcome achieved fair or good results from spine surgery. These findings suggest that PPS should become a more routine part of the evaluation of chronic pain patients in whom spine surgery is being considered.
Sensorimotor Mismapping in Poor-pitch Singing.
He, Hao; Zhang, Wei-Dong
2017-09-01
This study proposes that there are two types of sensorimotor mismapping in poor-pitch singing: erroneous mapping and no mapping. We created operational definitions for the two types of mismapping based on the precision of pitch-matching and predicted that in the two types of mismapping, phonation differs in terms of accuracy and the dependence on the articulation consistency between the target and the intended vocal action. The study aimed to test this hypothesis by examining the reliability and criterion-related validity of the operational definitions. A within-subject design was used in this study. Thirty-two participants identified as poor-pitch singers were instructed to vocally imitate pure tones and to imitate their own vocal recordings with the same articulation as self-targets and with different articulation from self-targets. Definitions of the types of mismapping were demonstrated to be reliable with the split-half approach and to have good criterion-related validity with findings that pitch-matching with no mapping was less accurate and more dependent on the articulation consistency between the target and the intended vocal action than pitch-matching with erroneous mapping was. Furthermore, the precision of pitch-matching was positively associated with its accuracy and its dependence on articulation consistency when mismapping was analyzed on a continuum. Additionally, the data indicated that the self-imitation advantage was a function of articulation consistency. Types of sensorimotor mismapping lead to pitch-matching that differs in accuracy and its dependence on the articulation consistency between the target and the intended vocal action. Additionally, articulation consistency produces the self-advantage. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Effects of feather wear and temperature on prediction of food intake and residual food consumption.
Herremans, M; Decuypere, E; Siau, O
1989-03-01
Heat production, which accounts for 0.6 of gross energy intake, is insufficiently represented in predictions of food intake. Especially when heat production is elevated (for example by lower temperature or poor feathering) the classical predictions based on body weight, body-weight change and egg mass are inadequate. Heat production was reliably estimated as [35.5-environmental temperature (degree C)] x [Defeathering (=%IBPW) + 21]. Including this term (PHP: predicted heat production) in equations predicting food intake significantly increased accuracy of prediction, especially under suboptimal conditions. Within the range of body weights tested (from 1.6 kg in brown layers to 2.8 kg in dwarf broiler breeders), body weight as an independent variable contributed little to the prediction of food intake; especially within strains its effect was better included in the intercept. Significantly reduced absolute values of residual food consumption were obtained over a wide range of conditions by using predictions of food intake based on body-weight change, egg mass, predicted heat production (PHP) and an intercept, instead of body weight, body-weight change, egg mass and an intercept.
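The predicted heat production (PHP) term is given explicitly in the abstract; a small sketch evaluating it is shown below. The coefficients of the full food-intake regression are not reported here, so only PHP is computed, and the example inputs are hypothetical.

```python
def predicted_heat_production(env_temp_c, defeathering_pct_ibpw):
    # PHP = (35.5 - environmental temperature in degrees C) * (defeathering %IBPW + 21),
    # as stated in the abstract
    return (35.5 - env_temp_c) * (defeathering_pct_ibpw + 21.0)

# a well-feathered hen at 20 degrees C versus a poorly feathered hen at 10 degrees C
print(predicted_heat_production(20.0, 0.0))    # (35.5 - 20) * 21 = 325.5
print(predicted_heat_production(10.0, 30.0))   # (35.5 - 10) * 51 = 1300.5
```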
Olesen, Tine Kold; Denys, Marie-Astrid; Vande Walle, Johan; Everaert, Karel
2018-02-06
Background Evidence of diagnostic accuracy for proposed definitions of nocturnal polyuria is currently unclear. Purpose Systematic review to determine population-based evidence of the diagnostic accuracy of proposed definitions of nocturnal polyuria based on data from frequency-volume charts. Methods Seventeen pre-specified search terms identified 351 unique investigations published from 1990 to 2016 in BIOSIS, Embase, Embase Alerts, International Pharmaceutical Abstract, Medline, and Cochrane. Thirteen original communications were included in this review based on pre-specified exclusion criteria. Data were extracted from each paper regarding subject age, sex, ethnicity, health status, sample size, data collection methods, and diagnostic discrimination of proposed definitions including sensitivity, specificity, positive and negative predictive value. Results The sample size of study cohorts, participant age, sex, ethnicity, and health status varied considerably in 13 studies reporting on the diagnostic performance of seven different definitions of nocturnal polyuria using frequency-volume chart data from 4968 participants. Most study cohorts were small, mono-ethnic, including only Caucasian males aged 50 or higher with primary or secondary polyuria that were compared to a control group of healthy men without nocturia in prospective or retrospective settings. Proposed definitions had poor discriminatory accuracy in evaluations based on data from subjects independent from the original study cohorts with findings being similar regarding the most widely evaluated definition endorsed by ICS. Conclusions Diagnostic performance characteristics for proposed definitions of nocturnal polyuria show poor to modest discrimination and are not based on sufficient level of evidence from representative, multi-ethnic population-based data from both females and males of all adult ages.
Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds
NASA Astrophysics Data System (ADS)
Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea
2013-04-01
Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in the delivery of flood warnings. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in Singapore's Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour), so observational data alone are of little use for runoff prediction and weather predictions are required. Unfortunately, radar nowcasting methods cannot provide long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are often unreliable because of the fast motion and limited spatial extent of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate model reliability for longer-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement in prediction accuracy over the WRF model for the Singapore urban area.
Carter, Jane V.; Roberts, Henry L.; Pan, Jianmin; Rice, Jonathan D.; Burton, James F.; Galbraith, Norman J.; Eichenberger, M. Robert; Jorden, Jeffery; Deveaux, Peter; Farmer, Russell; Williford, Anna; Kanaan, Ziad; Rai, Shesh N.; Galandiuk, Susan
2016-01-01
OBJECTIVE(S) Develop a plasma-based microRNA (miRNA) diagnostic assay specific for colorectal neoplasms, building upon our prior work. BACKGROUND Colorectal neoplasms (colorectal cancer [CRC] and colorectal advanced adenoma [CAA]) frequently develop in individuals at ages when other common cancers also occur. Current screening methods lack sensitivity, specificity, and have poor patient compliance. METHODS Plasma was screened for 380 miRNAs using microfluidic array technology from a “Training” cohort of 60 patients, (10 each) control, CRC, CAA, breast (BC), pancreatic (PC) and lung (LC) cancer. We identified uniquely dysregulated miRNAs specific for colorectal neoplasia (p<0.05, false discovery rate: 5%, adjusted α=0.0038). These miRNAs were evaluated using single assays in a “Test” cohort of 120 patients. A mathematical model was developed to predict blinded sample identity in a 150 patient “Validation” cohort using repeat-sub-sampling validation of the testing dataset with 1000 iterations each to assess model detection accuracy. RESULTS Seven miRNAs (miR-21, miR-29c, miR-122, miR-192, miR-346, miR-372, miR-374a) were selected based upon p-value, area-under-the-curve (AUC), fold-change, and biological plausibility. AUC (±95% CI) for “Test” cohort comparisons were 0.91 (0.85-0.96), 0.79 (0.70-0.88) and 0.98 (0.96-1.0), respectively. Our mathematical model predicted blinded sample identity with 69-77% accuracy between all neoplasia and controls, 67-76% accuracy between colorectal neoplasia and other cancers, and 86-90% accuracy between colorectal cancer and colorectal adenoma. CONCLUSIONS Our plasma miRNA assay and prediction model differentiates colorectal neoplasia from patients with other neoplasms and from controls with higher sensitivity and specificity compared to current clinical standards. PMID:27471839
Evidence for Deficits in the Temporal Attention Span of Poor Readers
Visser, Troy A. W.
2014-01-01
Background While poor reading is often associated with phonological deficits, many studies suggest that visual processing might also be impaired. In particular, recent research has indicated that poor readers show impaired spatial visual attention spans in partial and whole report tasks. Given the similarities between competition-based accounts for reduced visual attention span and similar explanations for impairments in sequential object processing, the present work examined whether poor readers show deficits in their “temporal attention span” – that is, their ability to rapidly and accurately process sequences of consecutive target items. Methodology/Principal Findings Poor and normal readers monitored a sequential stream of visual items for two (TT condition) or three (TTT condition) consecutive target digits. Target identification was examined using both unconditional and conditional measures of accuracy in order to gauge the overall likelihood of identifying a target and the likelihood of identifying a target given successful identification of previous items. Compared to normal readers, poor readers showed small but consistent deficits in identification across targets whether unconditional or conditional accuracy was used. Additionally, in the TTT condition, final-target conditional accuracy was poorer than unconditional accuracy, particularly for poor readers, suggesting a substantial cost arising from processing the previous two targets that was not present in normal readers. Conclusions/Significance Mirroring the differences found between poor and normal readers in spatial visual attention span, the present findings suggest two principal differences between the temporal attention spans of poor and normal readers. First, the consistent pattern of reduced performance across targets suggests increased competition amongst items within the same span for poor readers. Second, the steeper decline in final target performance amongst poor readers in the TTT condition suggests a reduction in the extent of their temporal attention span. PMID:24651313
Evidence for deficits in the temporal attention span of poor readers.
Visser, Troy A W
2014-01-01
While poor reading is often associated with phonological deficits, many studies suggest that visual processing might also be impaired. In particular, recent research has indicated that poor readers show impaired spatial visual attention spans in partial and whole report tasks. Given the similarities between competition-based accounts for reduced visual attention span and similar explanations for impairments in sequential object processing, the present work examined whether poor readers show deficits in their "temporal attention span"--that is, their ability to rapidly and accurately process sequences of consecutive target items. Poor and normal readers monitored a sequential stream of visual items for two (TT condition) or three (TTT condition) consecutive target digits. Target identification was examined using both unconditional and conditional measures of accuracy in order to gauge the overall likelihood of identifying a target and the likelihood of identifying a target given successful identification of previous items. Compared to normal readers, poor readers showed small but consistent deficits in identification across targets whether unconditional or conditional accuracy was used. Additionally, in the TTT condition, final-target conditional accuracy was poorer than unconditional accuracy, particularly for poor readers, suggesting a substantial cost arising from processing the previous two targets that was not present in normal readers. Mirroring the differences found between poor and normal readers in spatial visual attention span, the present findings suggest two principal differences between the temporal attention spans of poor and normal readers. First, the consistent pattern of reduced performance across targets suggests increased competition amongst items within the same span for poor readers. Second, the steeper decline in final target performance amongst poor readers in the TTT condition suggests a reduction in the extent of their temporal attention span.
Zimolzak, Andrew J; Spettell, Claire M; Fernandes, Joaquim; Fusaro, Vincent A; Palmer, Nathan P; Saria, Suchi; Kohane, Isaac S; Jonikas, Magdalena A; Mandl, Kenneth D
2013-01-01
Medication nonadherence costs $300 billion annually in the US. Medicare Advantage plans have a financial incentive to increase medication adherence among members because the Centers for Medicare and Medicaid Services (CMS) now awards substantive bonus payments to such plans, based in part on population adherence to chronic medications. We sought to build an individualized surveillance model that detects early which beneficiaries will fall below the CMS adherence threshold. This was a retrospective study of over 210,000 beneficiaries initiating statins, in a database of private insurance claims, from 2008-2011. A logistic regression model was constructed to use statin adherence from initiation to day 90 to predict beneficiaries who would not meet the CMS measure of proportion of days covered 0.8 or above, from day 91 to 365. The model controlled for 15 additional characteristics. In a sensitivity analysis, we varied the number of days of adherence data used for prediction. Lower adherence in the first 90 days was the strongest predictor of one-year nonadherence, with an odds ratio of 25.0 (95% confidence interval 23.7-26.5) for poor adherence at one year. The model had an area under the receiver operating characteristic curve of 0.80. Sensitivity analysis revealed that predictions of comparable accuracy could be made only 40 days after statin initiation. When members with 30-day supplies for their first statin fill had predictions made at 40 days, and members with 90-day supplies for their first fill had predictions made at 100 days, poor adherence could be predicted with 86% positive predictive value. To preserve their Medicare Star ratings, plan managers should identify or develop effective programs to improve adherence. An individualized surveillance approach can be used to target members who would most benefit, recognizing the tradeoff between improved model performance over time and the advantage of earlier detection.
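The CMS adherence measure referred to above is the proportion of days covered (PDC). The sketch below shows one common way to compute PDC from fill records and a minimal logistic model relating early PDC to later nonadherence; the data and single-predictor model are synthetic stand-ins, not the study's claims data or its 15-covariate model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def proportion_of_days_covered(fills, window_start, window_end):
    """PDC: fraction of days in [window_start, window_end) covered by dispensed supply.
    fills: list of (fill_day, days_supply) tuples, days counted from statin initiation."""
    covered = np.zeros(window_end - window_start, dtype=bool)
    for fill_day, days_supply in fills:
        lo = max(fill_day, window_start)
        hi = min(fill_day + days_supply, window_end)
        if hi > lo:
            covered[lo - window_start:hi - window_start] = True
    return covered.mean()

print(proportion_of_days_covered([(0, 30), (35, 30), (70, 30)], 0, 90))   # ~0.89 early PDC

# synthetic cohort: early (day 0-90) PDC used to predict failing the CMS threshold in days 91-365
rng = np.random.default_rng(1)
early_pdc = rng.uniform(0, 1, size=500)
late_nonadherent = (rng.uniform(0, 1, size=500) > early_pdc).astype(int)   # invented outcome
model = LogisticRegression().fit(early_pdc.reshape(-1, 1), late_nonadherent)
print(model.predict_proba([[0.5]])[0, 1])   # predicted risk of nonadherence at 50% early PDC
```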
Whiting, Penny; Birnie, Kate; Sterne, Jonathan A C; Jameson, Catherine; Skinner, Rod; Phillips, Bob
2018-05-01
We conducted a systematic review and individual patient data (IPD) meta-analysis to examine the utility of cystatin C for evaluation of glomerular function in children with cancer. Eligible studies evaluated the accuracy of cystatin C for detecting poor renal function in children undergoing chemotherapy. Study quality was assessed using QUADAS-2. Authors of four studies shared IPD. We calculated the correlation between log cystatin C and GFR stratified by study and measure of cystatin C. We dichotomized the reference standard at GFR 80 ml/min/1.73 m² and stratified cystatin C at 1 mg/l, to calculate sensitivity and specificity in each study and according to age group (0-4, 5-12, and ≥ 13 years). In sensitivity analyses, we investigated different GFR and cystatin C cut points. We used logistic regression to estimate the association of impaired renal function with log cystatin C and quantified diagnostic accuracy using the area under the ROC curve (AUC). Six studies, which used different test and reference standard thresholds, suggested that cystatin C has the potential to monitor renal function in children undergoing chemotherapy for malignancy. IPD data (504 samples, 209 children) showed that cystatin C has poor sensitivity (63%) and moderate specificity (89%), although use of a GFR cut point of < 60 ml/min/1.73 m² (data only available from two of the studies) estimated sensitivity to be 92% and specificity 81.3%. The AUC for the combined data set was 0.890 (95% CI 0.826, 0.951). Diagnostic accuracy appeared to decrease with age. Cystatin C has better diagnostic accuracy than creatinine as a test for glomerular dysfunction in young people undergoing treatment for cancer. Diagnostic accuracy is not sufficient for it to replace current reference standards for predicting clinically relevant impairments that may alter dosing of important nephrotoxic agents.
NASA Astrophysics Data System (ADS)
Abd-Elmotaal, Hussein; Kühtreiber, Norbert
2016-04-01
The gravity database of the IAG African Geoid Project contains many large data gaps. These gaps are initially filled using an unequal-weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model in place of the empirically determined covariance function. The generalized Hirvonen model has a sensitive parameter related to the curvature of the covariance function at the origin. This paper studies the effect of this curvature parameter on the least-squares prediction results, especially in the large data gaps appearing in the African gravity database. An optimum estimate of the curvature parameter is also derived. A wide comparison of the results obtained in this research, along with their accuracy, is given and thoroughly discussed.
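A minimal sketch of how an unequal-weight least-squares (collocation-type) prediction with a generalized Hirvonen covariance model can fill a gap point from surrounding gravity observations. The exact functional form and parameter values used in the study are not given in the abstract, so the covariance model below (with the exponent p standing in for the curvature-related parameter) and all numbers are assumptions.

```python
import numpy as np

def hirvonen_cov(dist, c0=100.0, d=50.0, p=1.0):
    # Generalized Hirvonen-type covariance: C(s) = C0 / (1 + (s/d)^2)**p
    # p is related to the curvature of the covariance function at the origin (assumed form)
    return c0 / (1.0 + (dist / d) ** 2) ** p

def ls_prediction(xy_obs, g_obs, xy_new, sigma_obs, c0=100.0, d=50.0, p=1.0):
    """Unequal-weight least-squares prediction of gravity anomalies at new points."""
    d_oo = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    d_no = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    C = hirvonen_cov(d_oo, c0, d, p) + np.diag(sigma_obs ** 2)   # signal + noise covariance
    c = hirvonen_cov(d_no, c0, d, p)
    return c @ np.linalg.solve(C, g_obs)

rng = np.random.default_rng(2)
xy_obs = rng.uniform(0, 200, size=(30, 2))           # observation coordinates (km), synthetic
g_obs = rng.normal(0, 10, size=30)                    # gravity anomalies (mGal), synthetic
sigma = np.full(30, 2.0)                              # unequal weights come from these standard deviations
xy_gap = np.array([[100.0, 100.0]])                   # point inside a data gap
for p in (0.5, 1.0, 2.0):                             # effect of the curvature-related exponent
    print(p, ls_prediction(xy_obs, g_obs, xy_gap, sigma, p=p))
```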
Evoked potentials recorded during routine EEG predict outcome after perinatal asphyxia.
Nevalainen, Päivi; Marchi, Viviana; Metsäranta, Marjo; Lönnqvist, Tuula; Toiviainen-Salo, Sanna; Vanhatalo, Sampsa; Lauronen, Leena
2017-07-01
To evaluate the added value of somatosensory evoked potentials (SEPs) and visual evoked potentials (VEPs) recorded simultaneously with routine EEG in early outcome prediction of newborns with hypoxic-ischemic encephalopathy under modern intensive care. We simultaneously recorded multichannel EEG, median nerve SEPs, and flash VEPs during the first few postnatal days in 50 term newborns with hypoxic-ischemic encephalopathy. EEG background was scored into five grades and the worst two grades were considered to indicate poor cerebral recovery. Evoked potentials were classified as absent or present. Clinical outcome was determined from the medical records at a median age of 21 months. Unfavorable outcome included cerebral palsy, severe mental retardation, severe epilepsy, or death. The accuracy of outcome prediction was 98% with SEPs compared to 90% with EEG. EEG alone always predicted unfavorable outcome when it was inactive (n=9), and favorable outcome when it was normal or only mildly abnormal (n=17). However, newborns with moderate or severe EEG background abnormality could have either favorable or unfavorable outcome, which was correctly predicted by SEPs in all but one newborn (accuracy in this subgroup 96%). Absent VEPs were always associated with an inactive EEG, and an unfavorable outcome. However, presence of VEPs did not guarantee a favorable outcome. SEPs accurately predict clinical outcomes in newborns with hypoxic-ischemic encephalopathy and improve the EEG-based prediction particularly in those newborns with severely or moderately abnormal EEG findings. SEPs should be added to routine EEG recordings for early bedside assessment of newborns with hypoxic-ischemic encephalopathy. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Training set selection for the prediction of essential genes.
Cheng, Jian; Xu, Zhao; Wu, Wenwu; Zhao, Li; Li, Xiangchen; Liu, Yanlin; Tao, Shiheng
2014-01-01
Various computational models have been developed to transfer annotations of gene essentiality between organisms. However, despite the increasing number of microorganisms with well-characterized sets of essential genes, selection of appropriate training sets for predicting the essential genes of poorly-studied or newly sequenced organisms remains challenging. In this study, a machine learning approach was applied reciprocally to predict the essential genes in 21 microorganisms. Results showed that training set selection greatly influenced predictive accuracy. We determined four criteria for training set selection: (1) essential genes in the selected training set should be reliable; (2) the growth conditions in which essential genes are defined should be consistent in training and prediction sets; (3) species used as training set should be closely related to the target organism; and (4) organisms used as training and prediction sets should exhibit similar phenotypes or lifestyles. We then analyzed the performance of an incomplete training set and an integrated training set with multiple organisms. We found that the size of the training set should be at least 10% of the total genes to yield accurate predictions. Additionally, the integrated training sets exhibited remarkable increase in stability and accuracy compared with single sets. Finally, we compared the performance of the integrated training sets with the four criteria and with random selection. The results revealed that a rational selection of training sets based on our criteria yields better performance than random selection. Thus, our results provide empirical guidance on training set selection for the identification of essential genes on a genome-wide scale.
Ando, Tatsuya; Suguro, Miyuki; Kobayashi, Takeshi; Seto, Masao; Honda, Hiroyuki
2003-10-01
A fuzzy neural network (FNN) using gene expression profile data can select combinations of genes from thousands of genes, and is applicable to predict outcome for cancer patients after chemotherapy. However, wide clinical heterogeneity reduces the accuracy of prediction. To overcome this problem, we have proposed an FNN system based on majoritarian decision using multiple noninferior models. We used transcriptional profiling data, which were obtained from "Lymphochip" DNA microarrays (http://llmpp.nih.gov/DLBCL), reported by Rosenwald (N Engl J Med 2002; 346: 1937-47). When the data were analyzed by our FNN system, accuracy (73.4%) of outcome prediction using only 1 FNN model with 4 genes was higher than that (68.5%) of the Cox model using 17 genes. Higher accuracy (91%) was obtained when an FNN system with 9 noninferior models, consisting of 35 independent genes, was used. The genes selected by the system included genes that are informative in the prognosis of Diffuse large B-cell lymphoma (DLBCL), such as genes showing an expression pattern similar to that of CD10 and BCL-6 or similar to that of IRF-4 and BCL-4. We classified 220 DLBCL patients into 5 groups using the prediction results of 9 FNN models. These groups may correspond to DLBCL subtypes. In group A containing half of the 220 patients, patients with poor outcome were found to satisfy 2 rules, i.e., high expression of MAX dimerization with high expression of unknown A (LC_26146), or high expression of MAX dimerization with low expression of unknown B (LC_33144). The present paper is the first to describe the multiple noninferior FNN modeling system. This system is a powerful tool for predicting outcome and classifying patients, and is applicable to other heterogeneous diseases.
Gizzo, Salvatore; Andrisani, Alessandra; Noventa, Marco; Quaranta, Michela; Esposito, Federica; Armanini, Decio; Gangemi, Michele; Nardelli, Giovanni B; Litta, Pietro; D'Antona, Donato; Ambrosini, Guido
2015-04-10
The aim of the study was to investigate whether menstrual cycle length may be considered a surrogate measure of reproductive health, improving the accuracy of biochemical/sonographic ovarian reserve tests in estimating the reproductive chances of women referred to ART. A retrospective observational study was conducted in Padua's public tertiary-level centre. A total of 455 normo-ovulatory infertile women scheduled for their first fresh non-donor IVF/ICSI treatment were included. The mean menstrual cycle length (MCL) during the preceding 6 months was calculated by physicians on the basis of information contained in our electronic database (first day of the menstrual cycle reported every month by each patient by telephone). We evaluated the relations between MCL, ovarian response to the stimulation protocol, oocyte fertilization ratio, ovarian sensitivity index (OSI) and pregnancy rate in different cohorts of patients according to age class and estimated ovarian reserve. In women younger than 35 years, MCL over 31 days may be associated with an increased risk of OHSS and with a good OSI. In women older than 35 years, and particularly older than 40 years, MCL shortening may be considered a marker of ovarian aging and may be associated with poor ovarian response, low OSI and reduced fertilization rate. When the AMH serum value is lower than 1.1 ng/ml in patients older than 40 years, MCL may help clinicians discriminate real from expected poor responders. Considering the pool of normoresponders, MCL was not correlated with pregnancy rate, whereas a positive association was found with patients' age. The MCL diary is more predictive than chronological age in estimating ovarian biological age and response to COH, and it is more predictive than AMH in discriminating expected from real poor responders. In women older than 35 years, MCL shortening may be considered a marker of ovarian aging, while chronological age remains the most accurate parameter for predicting pregnancy.
Yu, Zhiyuan; Zheng, Jun; Ma, Lu; Guo, Rui; Li, Mou; Wang, Xiaoze; Lin, Sen; Li, Hao; You, Chao
2017-09-01
In patients with spontaneous intracerebral hemorrhage (sICH), hematoma expansion (HE) is associated with poor outcome. Spot sign and black hole sign are neuroimaging predictors for HE. This study aimed to compare the predictive value of the two signs for HE. Within 6 h after onset of sICH, patients were screened for the computed tomography angiography spot sign and the non-contrast computed tomography black hole sign. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the two signs for HE prediction were calculated. The accuracy of the two signs in predicting HE was analyzed by receiver operating characteristic analysis. A total of 129 patients were included in this study. Spot sign was identified in 30 (23.3%) patients and black hole sign in 29 (22.5%) patients. Of 32 patients with HE, spot sign was observed in 19 (59.4%) and black hole sign was found in 14 (43.8%). The occurrence of black hole sign was significantly associated with spot sign (P < 0.001). The sensitivity, specificity, PPV, and NPV of spot sign for predicting HE were 59.38, 88.66, 63.33, and 86.87%, respectively. In contrast, the sensitivity, specificity, PPV, and NPV of black hole sign for predicting HE were 43.75, 84.54, 48.28, and 82.00%, respectively. The area under the curve was 0.740 for spot sign and 0.641 for black hole sign (P = 0.228). Both spot sign and black hole sign appeared to have good predictive value for HE, and spot sign seemed to be the better predictor.
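The reported sensitivity, specificity, PPV and NPV for the spot sign follow directly from the counts given in the abstract (129 patients, 32 with HE, 30 spot-sign-positive, 19 of them expanders). A small sketch reconstructing them:

```python
def diagnostic_measures(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# spot sign counts implied by the abstract
tp, fp = 19, 30 - 19
fn, tn = 32 - 19, (129 - 32) - (30 - 19)
print([round(100 * x, 2) for x in diagnostic_measures(tp, fp, fn, tn)])
# -> approximately [59.38, 88.66, 63.33, 86.87], matching the reported values
```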
Peake, Christian; Diaz, Alicia; Artiles, Ceferino
This study examined the relationship and degree of predictability that the fluency of writing the alphabet from memory and the selection of allographs have on measures of fluency and accuracy of spelling in a free-writing sentence task when keyboarding. The Test Estandarizado para la Evaluación de la Escritura con Teclado ("Spanish Keyboarding Writing Test"; Jiménez, 2012) was used as the assessment tool. A sample of 986 children from Grades 1 through 3 were classified according to transcription skills measured by keyboard ability (poor vs. good) across the grades. Results demonstrated that fluency in writing the alphabet and selecting allographs mediated the differences in spelling between good and poor keyboarders in the free-writing task. Execution in the allograph selection task and writing alphabet from memory had different degrees of predictability in each of the groups in explaining the level of fluency and spelling in the free-writing task sentences, depending on the grade. These results suggest that early assessment of writing by means of the computer keyboard can provide clues and guidelines for intervention and training to strengthen specific skills to improve writing performance in the early primary grades in transcription skills by keyboarding.
Wang, Liang; Yang, Die; Fang, Cheng; Chen, Zuliang; Lesniewski, Peter J; Mallavarapu, Megharaj; Naidu, Ravendra
2015-01-01
Sodium potassium absorption ratio (SPAR) is an important measure of agricultural water quality, wherein four exchangeable cations (K+, Na+, Ca2+ and Mg2+) should be simultaneously determined. An ISE array is suitable for this application because of its simplicity, rapid response and low cost. However, cross-interferences caused by the poor selectivity of ISEs need to be overcome using multivariate chemometric methods. In this paper, a solid-contact ISE array, based on a Prussian blue modified glassy carbon electrode (PB-GCE), was applied with a novel chemometric strategy. One of the most popular independent component analysis (ICA) methods, the fast fixed-point algorithm for ICA (fastICA), was implemented with a genetic algorithm (geneticICA) to avoid the local-maxima problem commonly observed with fastICA. This geneticICA was used as a data preprocessing method to improve the prediction accuracy of a back-propagation neural network (BPNN). The ISE array system was validated using 20 real irrigation water samples from South Australia, and acceptable prediction accuracies were obtained. Copyright © 2014 Elsevier B.V. All rights reserved.
Gallium 67 citrate scanning and serum angiotensin converting enzyme levels in sarcoidosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, R.G.; Bekerman, C.; Sicilian, L.
1982-09-01
Gallium 67 citrate scans and serum angiotensin converting enzyme (ACE) levels were obtained in 54 patients with sarcoidosis and analyzed in relation to clinical manifestations. 67Ga scans were abnormal in 97% of patients with clinically active disease (n = 30) and in 71% of patients with inactive disease (n = 24). Serum ACE levels were abnormally high (2 standard deviations above the control mean) in 73% of patients with clinically active disease and in 54% of patients with inactive disease. Serum ACE levels correlated significantly with 67Ga uptake score (r = .436; p < .005). The frequency of abnormal 67Ga scans and elevated serum ACE levels suggests that inflammatory activity with little or no clinical expression is common in sarcoidosis. Abnormal 67Ga scans were highly sensitive (97%) but had poor specificity (29%) for clinical disease activity. The accuracy of negative prediction of clinical activity by normal scans (87%) was better than the accuracy of positive prediction of clinical activity by abnormal scans (63%). 67Ga scans can be used to support the clinical identification of inactive sarcoidosis.
Slip, David J.; Hocking, David P.; Harcourt, Robert G.
2016-01-01
Constructing activity budgets for marine animals when they are at sea and cannot be directly observed is challenging, but recent advances in bio-logging technology offer solutions to this problem. Accelerometers can potentially identify a wide range of behaviours for animals based on unique patterns of acceleration. However, when analysing data derived from accelerometers, there are many statistical techniques available which, when applied to different data sets, produce different classification accuracies. We investigated a selection of supervised machine learning methods for interpreting behavioural data from captive otariids (fur seals and sea lions). We conducted controlled experiments with 12 seals, where their behaviours were filmed while they were wearing 3-axis accelerometers. From video we identified 26 behaviours that could be grouped into one of four categories (foraging, resting, travelling and grooming) representing key behaviour states for wild seals. We used data from 10 seals to train four predictive classification models: stochastic gradient boosting (GBM), random forests, support vector machines (SVM) using four different kernels, and a baseline model, penalised logistic regression. We then took the best parameters from each model and cross-validated the results on the two seals unseen so far. We also investigated the influence of feature statistics (describing some characteristic of the seal), testing the models both with and without these. Cross-validation accuracies were lower than training accuracy, but the SVM with a polynomial kernel was still able to classify seal behaviour with high accuracy (>70%). Adding feature statistics improved accuracies across all models tested. Most categories of behaviour (resting, grooming and feeding) were predicted with reasonable accuracy (52-81%) by the SVM, while travelling was poorly categorised (31-41%). These results show that model selection is important when classifying behaviour and that by using animal characteristics we can strengthen the overall accuracy. PMID:28002450
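A minimal sketch of the kind of classifier that performed best here: a polynomial-kernel support vector machine trained on windowed accelerometer features from some individuals and evaluated on individuals held out entirely. The features, labels and seal assignments below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# synthetic feature table: one row per windowed segment of 3-axis acceleration
n = 2000
X = rng.normal(size=(n, 7))                # e.g. per-axis mean/std plus an overall movement summary
y = rng.integers(0, 4, size=n)             # 0=foraging, 1=resting, 2=travelling, 3=grooming
seal_id = rng.integers(0, 12, size=n)      # which of 12 seals each window came from

train = seal_id < 10                       # train on 10 seals, validate on the 2 unseen seals
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
clf.fit(X[train], y[train])
print("held-out-seal accuracy:", clf.score(X[~train], y[~train]))
```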
Mogensen, Kris M; Andrew, Benjamin Y; Corona, Jasmine C; Robinson, Malcolm K
2016-07-01
The Society of Critical Care Medicine (SCCM) and American Society for Parenteral and Enteral Nutrition (ASPEN) recommend that obese, critically ill patients receive 11-14 kcal/kg/d using actual body weight (ABW) or 22-25 kcal/kg/d using ideal body weight (IBW), because feeding these patients 50%-70% maintenance needs while administering high protein may improve outcomes. It is unknown whether these equations achieve this target when validated against indirect calorimetry, perform equally across all degrees of obesity, or compare well with other equations. Measured resting energy expenditure (MREE) was determined in obese (body mass index [BMI] ≥30 kg/m²), critically ill patients. Resting energy expenditure was predicted (PREE) using several equations: 12.5 kcal/kg ABW (ASPEN-Actual BW), 23.5 kcal/kg IBW (ASPEN-Ideal BW), Harris-Benedict (adjusted-weight and 1.5 stress-factor), and Ireton-Jones for obesity. Correlation of PREE to 65% MREE, predictive accuracy, precision, bias, and large error incidence were calculated. All equations were significantly correlated with 65% MREE but had poor predictive accuracy, had excessive large error incidence, were imprecise, and were biased in the entire cohort (N = 31). In the obesity cohort (n = 20, BMI 30-50 kg/m²), ASPEN-Actual BW had acceptable predictive accuracy and large error incidence, was unbiased, and was nearly precise. In super obesity (n = 11, BMI >50 kg/m²), ASPEN-Ideal BW had acceptable predictive accuracy and large error incidence and was precise and unbiased. SCCM/ASPEN-recommended body weight equations are reasonable predictors of 65% MREE depending on the equation and degree of obesity. Assuming that feeding 65% MREE is appropriate, this study suggests that patients with a BMI 30-50 kg/m² should receive 11-14 kcal/kg/d using ABW and those with a BMI >50 kg/m² should receive 22-25 kcal/kg/d using IBW. © 2015 American Society for Parenteral and Enteral Nutrition.
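A small sketch of the two SCCM/ASPEN weight-based equations named above, compared against 65% of a measured REE for a hypothetical patient. The Devine formula is used for ideal body weight as an assumption; the abstract does not state which IBW equation the authors used.

```python
def ideal_body_weight_kg(height_cm, male=True):
    # Devine formula (assumed; the abstract does not specify the IBW equation used)
    inches_over_5ft = max(0.0, height_cm / 2.54 - 60.0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5ft

def pree_aspen_actual(abw_kg):
    return 12.5 * abw_kg          # midpoint of the 11-14 kcal/kg/d actual-body-weight range

def pree_aspen_ideal(ibw_kg):
    return 23.5 * ibw_kg          # midpoint of the 22-25 kcal/kg/d ideal-body-weight range

# hypothetical patient: 170 cm, 140 kg (BMI ~48), measured REE of 2600 kcal/d
abw, mree = 140.0, 2600.0
ibw = ideal_body_weight_kg(170.0, male=True)
target = 0.65 * mree
print(round(pree_aspen_actual(abw)), round(pree_aspen_ideal(ibw)), round(target))
```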
Machine learning models in breast cancer survival prediction.
Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin
2016-01-01
Breast cancer is one of the most common cancers, with a high mortality rate among women. With early diagnosis of breast cancer, survival increases from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is a combination of rules and different machine learning techniques. Machine learning models can help physicians to reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We used a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with 10-fold cross-validation were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Out of 900 patients, 803 patients were alive and 97 were dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity and area under the ROC curve of TRF were 96%, 96%, and 93%, respectively. However, the 1NN machine learning technique provided poor performance (accuracy 91%, sensitivity 91% and area under the ROC curve 78%). This study demonstrates that the Trees Random Forest model (TRF), a rule-based classification model, was the best model with the highest level of accuracy. Therefore, this model is recommended as a useful tool for breast cancer survival prediction as well as medical decision making.
Assawamakin, Anunchai; Prueksaaroon, Supakit; Kulawonganunchai, Supasak; Shaw, Philip James; Varavithya, Vara; Ruangrajitpakorn, Taneth; Tongsima, Sissades
2013-01-01
Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly. Here, a novel two-step machine-learning framework is presented to address this need. First, a Naïve Bayes estimator is used to rank features from which the top-ranked will most likely contain the most informative features for prediction of the underlying biological classes. The top-ranked features are then used in a Hidden Naïve Bayes classifier to construct a classification prediction model from these filtered attributes. In order to obtain the minimum set of the most informative biomarkers, the bottom-ranked features are successively removed from the Naïve Bayes-filtered feature list one at a time, and the classification accuracy of the Hidden Naïve Bayes classifier is checked for each pruned feature set. The performance of the proposed two-step Bayes classification framework was tested on different types of -omics datasets including gene expression microarray, single nucleotide polymorphism microarray (SNParray), and surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) proteomic data. The proposed two-step Bayes classification framework was equal to and, in some cases, outperformed other classification methods in terms of prediction accuracy, minimum number of classification markers, and computational time.
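A rough sketch of the described workflow (rank features with a Naive Bayes criterion, then successively prune the lowest-ranked features and re-check classification accuracy). scikit-learn's GaussianNB is used here as a stand-in for both the Naive Bayes ranker and the Hidden Naive Bayes classifier, which is not available in common Python libraries, so this illustrates the procedure rather than reproducing the authors' framework.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=50, n_informative=8, random_state=0)

# Step 1: rank each feature by how well a single-feature Naive Bayes separates the classes
scores = [cross_val_score(GaussianNB(), X[:, [j]], y, cv=5).mean() for j in range(X.shape[1])]
ranked = np.argsort(scores)[::-1]                     # best feature first

# Step 2: start from the top-ranked features and remove the bottom-ranked one at a time,
# recording the smallest feature set whose accuracy is at least as good as any larger set tried
top = list(ranked[:20])
best_acc, best_set = 0.0, top[:]
while len(top) > 1:
    acc = cross_val_score(GaussianNB(), X[:, top], y, cv=5).mean()
    if acc >= best_acc:
        best_acc, best_set = acc, top[:]
    top.pop()                                         # drop the lowest-ranked remaining feature
print(len(best_set), round(best_acc, 3))
```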
Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.
Zhou, Yao; Vales, M Isabel; Wang, Aoxue; Zhang, Zhiwu
2017-09-01
Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that the minimum value for prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of correlations across folds. The other approach, Hold accuracy, predicts all phenotypes in all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
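A small simulation sketch of the two formulas: Instant accuracy (mean of per-fold correlations) versus Hold accuracy (one pooled correlation over all held-out predictions), applied to a no-signal setting where the expected accuracy should hover around zero. The null predictor below (training-set mean plus tiny noise) is only one illustrative condition; in this setting the pooled estimate drifts clearly below zero while the per-fold estimate stays near zero, and the paper analyses when each formula is biased.

```python
import numpy as np

def instant_and_hold_accuracy(y, predict_fn, k=10, rng=None):
    """Instant: mean of per-fold correlations. Hold: pooled correlation over all held-out predictions."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    fold_r, pooled = [], np.empty_like(y, dtype=float)
    for test in folds:
        train = np.setdiff1d(idx, test)
        pred = predict_fn(train, test)            # predictions for the held-out fold
        pooled[test] = pred
        fold_r.append(np.corrcoef(y[test], pred)[0, 1])
    return np.mean(fold_r), np.corrcoef(y, pooled)[0, 1]

# no-signal setting: predictions are just the training-set mean plus tiny noise
rng = np.random.default_rng(4)
y = rng.normal(size=200)
null_predictor = lambda train, test: y[train].mean() + 0.01 * rng.normal(size=len(test))
instant, hold = instant_and_hold_accuracy(y, null_predictor, k=10, rng=rng)
print(round(instant, 3), round(hold, 3))   # Hold drifts below zero here; Instant stays near zero
```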
Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what?
2017-01-01
Assessing the accuracy of predictive models is critical because predictive models have been increasingly used across various disciplines and predictive accuracy determines the quality of resultant predictions. Pearson product-moment correlation coefficient (r) and the coefficient of determination (r2) are among the most widely used measures for assessing predictive models for numerical data, although they are argued to be biased, insufficient and misleading. In this study, geometrical graphs were used to illustrate what is used in the calculation of r and r2, and simulations were used to demonstrate the behaviour of r and r2 and to compare three accuracy measures under various scenarios. Relevant confusions about r and r2 have been clarified. The calculation of r and r2 is not based on the differences between the predicted and observed values. The existing error measures suffer from various limitations and are unable to indicate accuracy. Variance explained by predictive models based on cross-validation (VEcv) is free of these limitations and is a reliable accuracy measure. Legates and McCabe's efficiency (E1) is also an alternative accuracy measure. Thus r and r2 do not measure accuracy and are incorrect accuracy measures. The existing error measures suffer from limitations. VEcv and E1 are recommended for assessing accuracy. The applications of these accuracy measures would encourage accuracy-improved predictive models to be developed to generate predictions for evidence-informed decision-making. PMID:28837692
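A hedged sketch of the measures discussed, using the common definitions (VEcv as the percentage of variance explained computed from cross-validated predictions, and Legates and McCabe's E1 based on absolute errors); the exact notation in the paper may differ. The example shows how r and r2 stay high for predictions that are perfectly correlated but systematically offset, while VEcv and E1 collapse.

```python
import numpy as np

def accuracy_measures(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    vecv = (1.0 - ss_res / ss_tot) * 100.0       # variance explained by cross-validated predictions (%)
    e1 = 1.0 - np.sum(np.abs(obs - pred)) / np.sum(np.abs(obs - obs.mean()))   # Legates-McCabe E1
    return r, r ** 2, vecv, e1

rng = np.random.default_rng(5)
obs = rng.normal(10, 2, size=200)
good = obs + rng.normal(0, 0.5, size=200)        # accurate predictions
biased = obs + 5.0                               # perfectly correlated but systematically offset
for pred in (good, biased):
    print([round(v, 2) for v in accuracy_measures(obs, pred)])
# r and r2 are near 1 in both cases; VEcv and E1 collapse for the offset predictions
```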
Eigbefoh, J O; Isabu, P; Okpere, E; Abebe, J
2008-07-01
Untreated urinary tract infection can have devastating maternal and neonatal effects. Thus, routine screening for bacteriuria is advocated. This study was designed to evaluate the diagnostic accuracy of the rapid dipstick test for predicting urinary tract infection in pregnancy, with the gold standard of urine microscopy, culture and sensitivity acting as the control. The urine dipstick test uses the leucocyte esterase, nitrite and protein tests singly and in combination. The result of the dipstick was compared with the gold standard, urine microscopy, culture and sensitivity, using confidence intervals for proportions. The reliability and validity of the urine dipstick were also evaluated. Overall, the urine dipstick test had poor correlation with urine culture (p = 0.125, 95% CI). The same holds true for individual components of the dipstick test. The overall sensitivity of the urine dipstick test was poor at 2.3%. Individual sensitivity of the various components varied from 9.1% for the combined leucocyte esterase and nitrite test to 56.8% for leucocyte esterase alone. The other components of the dipstick test (the nitrite test, the protein test, and the combined test of leucocyte esterase, nitrite and proteinuria) appear to decrease the sensitivity of the leucocyte esterase test alone. The ability of the urine dipstick test to correctly rule out urinary tract infection (specificity) was high. The positive predictive value for the dipstick test was high, with the leucocyte esterase test having the highest positive predictive value compared with the other components of the dipstick test. The negative predictive value (NPV) was, as expected, highest for the leucocyte esterase test alone, with values higher than those of the other components of the urine dipstick test singly and in various combinations. Compared with the other parameters of the urine dipstick test, singly and in combination, leucocyte esterase appears to be the most accurate (90.25%). The dipstick test has limited use in screening for asymptomatic bacteriuria. The leucocyte esterase component of the dipstick test appears to have the highest reliability and validity; the other parameters of the dipstick test decrease the reliability and validity of the leucocyte esterase test. A positive test merits empirical antibiotics, while a negative test is an indication for urine culture. The urine dipstick test, if positive, will also be useful in the follow-up of patients after treatment of urinary tract infection. This is useful in poorly resourced settings, especially in the developing world, where there is a dearth of trained personnel and equipment for urine culture.
Breast cancer prognosis by combinatorial analysis of gene expression data.
Alexe, Gabriela; Alexe, Sorin; Axelrod, David E; Bonates, Tibérius O; Lozina, Irina I; Reiss, Michael; Hammer, Peter L
2006-01-01
The potential of applying data analysis tools to microarray data for diagnosis and prognosis is illustrated on the recent breast cancer dataset of van 't Veer and coworkers. We re-examine that dataset using the novel technique of logical analysis of data (LAD), with the double objective of discovering patterns characteristic for cases with good or poor outcome, using them for accurate and justifiable predictions; and deriving novel information about the role of genes, the existence of special classes of cases, and other factors. Data were analyzed using the combinatorics and optimization-based method of LAD, recently shown to provide highly accurate diagnostic and prognostic systems in cardiology, cancer proteomics, hematology, pulmonology, and other disciplines. LAD identified a subset of 17 of the 25,000 genes, capable of fully distinguishing between patients with poor, respectively good prognoses. An extensive list of 'patterns' or 'combinatorial biomarkers' (that is, combinations of genes and limitations on their expression levels) was generated, and 40 patterns were used to create a prognostic system, shown to have 100% and 92.9% weighted accuracy on the training and test sets, respectively. The prognostic system uses fewer genes than other methods, and has similar or better accuracy than those reported in other studies. Out of the 17 genes identified by LAD, three (respectively, five) were shown to play a significant role in determining poor (respectively, good) prognosis. Two new classes of patients (described by similar sets of covering patterns, gene expression ranges, and clinical features) were discovered. As a by-product of the study, it is shown that the training and the test sets of van 't Veer have differing characteristics. The study shows that LAD provides an accurate and fully explanatory prognostic system for breast cancer using genomic data (that is, a system that, in addition to predicting good or poor prognosis, provides an individualized explanation of the reasons for that prognosis for each patient). Moreover, the LAD model provides valuable insights into the roles of individual and combinatorial biomarkers, allows the discovery of new classes of patients, and generates a vast library of biomedical research hypotheses.
Ng, Alex W H; Griffith, James F; Taljanovic, Mihra S; Li, Alvin; Tse, W L; Ho, P C
2013-07-01
To assess dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) as a measure of vascularity in scaphoid delayed-union or non-union. Thirty-five patients (34 male, one female; mean age, 27.4 ± 9.4 years; range, 16-51 years) with scaphoid delayed-union and non-union who underwent DCE MRI of the scaphoid between September 2002 and October 2012 were retrospectively reviewed. Proximal fragment vascularity was classified as good, fair, or poor on unenhanced MRI, contrast-enhanced MRI, and DCE MRI. For DCE MRI, enhancement slope, Eslope comparison of proximal and distal fragments was used to classify the proximal fragment as good, fair, or poor vascularity. Proximal fragment vascularity was similarly graded at surgery in all patients. Paired t test and McNemar test were used for data comparison. Kappa value was used to assess level of agreement between MRI findings and surgical findings. Twenty-five (71 %) of 35 patients had good vascularity, four (11 %) had fair vascularity, and six (17 %) had poor vascularity of the proximal scaphoid fragment at surgery. DCE MRI parameters had the highest correlation with surgical findings (kappa = 0.57). Proximal scaphoid fragments with surgical poor vascularity had a significantly lower Emax and Eslope than those with good vascularity (p = 0.0043 and 0.027). The sensitivity, specificity, positive and negative predictive value and accuracy of DCE MRI in predicting impaired vascularity was 67, 86, 67, 86, and 80 %, respectively, which was better than that seen with unenhanced and post-contrast MRI. Flattened time intensity curves in both proximal and distal fragments were a feature of protracted non-union with a mean time interval of 101.6 ± 95.5 months between injury and MRI. DCE MRI has a higher diagnostic accuracy than either non-enhanced MRI or contrast enhanced MRI for assessing proximal fragment vascularity in scaphoid delayed-union and non-union. For proper interpretation of contrast-enhanced studies in scaphoid vascularity, one needs to incorporate the time frame between injury and MRI.
King, Alice; Shipley, Martin; Markus, Hugh
2011-10-01
Improved methods are required to identify patients with asymptomatic carotid stenosis at high risk for stroke. The Asymptomatic Carotid Emboli Study recently showed embolic signals (ES) detected by transcranial Doppler on 2 recordings that lasted 1-hour independently predict 2-year stroke risk. ES detection is time-consuming, and whether similar predictive information could be obtained from simpler recording protocols is unknown. In a predefined secondary analysis of Asymptomatic Carotid Emboli Study, we looked at the temporal variation of ES. We determined the predictive yield associated with different recording protocols and with the use of a higher threshold to indicate increased risk (≥2 ES). To compare the different recording protocols, sensitivity and specificity analyses were performed using analysis of receiver-operator characteristic curves. Of 477 patients, 467 had baseline recordings adequate for analysis; 77 of these had ES on 1 or both of the 2 recordings. ES status on the 2 recordings was significantly associated (P<0.0001), but there was poor agreement between ES positivity on the 2 recordings (κ=0.266). For the primary outcome of ipsilateral stroke or transient ischemic attack, the use of 2 baseline recordings lasting 1 hour had greater predictive accuracy than either the first baseline recording alone (P=0.0005), a single 30-minute (P<0.0001) recording, or 2 recordings lasting 30 minutes (P<0.0001). For the outcome of ipsilateral stroke alone, two recordings lasting 1 hour had greater predictive accuracy when compared to all other recording protocols (all P<0.0001). Our analysis demonstrates the relative predictive yield of different recording protocols that can be used in application of the technique in clinical practice. Two baseline recordings lasting 1 hour as used in Asymptomatic Carotid Emboli Study gave the best risk prediction.
Grimmer, K; Milanese, S; Beaton, K; Atlas, A
2014-01-01
The Hospital Admission Risk Profile (HARP) instrument is commonly used to assess the risk of functional decline when older people are admitted to hospital. HARP has moderate diagnostic accuracy (65%) for downstream decreased scores in activities of daily living. This paper reports the diagnostic accuracy of HARP for downstream quality of life. It also tests whether adding other measures to HARP improves its diagnostic accuracy. One hundred and forty-eight independent, community-dwelling individuals aged 65 years or older, presenting with a medical problem for which they were discharged without a hospital ward admission, were recruited in the emergency department of one large Australian hospital. Data, including age, sex, primary language, highest level of education, postcode, living status, requiring care for daily activities, using a gait aid, receiving formal community supports, instrumental activities of daily living in the last week, hospitalization and falls in the last 12 months, and mental state, were collected at recruitment. HARP scores were derived from a formula that summed scores assigned to age, activities of daily living, and mental state categories. Physical and mental component scores of a quality of life measure were captured by telephone interview at 1 and 3 months after recruitment. HARP scores were moderately accurate at predicting downstream decline in physical quality of life but did not predict downstream decline in mental quality of life. The addition of other variables to HARP did not improve its diagnostic accuracy for either measure of quality of life. HARP is a poor predictor of quality of life.
Application of geo-spatial technology in schistosomiasis modelling in Africa: a review.
Manyangadze, Tawanda; Chimbari, Moses John; Gebreslasie, Michael; Mukaratirwa, Samson
2015-11-04
Schistosomiasis continues to impact socio-economic development negatively in sub-Saharan Africa. The advent of spatial technologies, including geographic information systems (GIS), Earth observation (EO) and global positioning systems (GPS), has assisted modelling efforts. However, there is increasing concern regarding the accuracy and precision of the current spatial models. This paper reviews the literature regarding the progress and challenges in the development and utilization of spatial technology with special reference to predictive models for schistosomiasis in Africa. Peer-reviewed papers were identified through a PubMed search using the following keywords: geo-spatial analysis OR remote sensing OR modelling OR earth observation OR geographic information systems OR prediction OR mapping AND schistosomiasis AND Africa. Statistical uncertainty, low spatial and temporal resolution satellite data and poor validation were identified as some of the factors that compromise the precision and accuracy of the existing predictive models. The need for high-spatial-resolution remote sensing data, in conjunction with ancillary data (ground-measured climatic and environmental information, local presence/absence surveys of intermediate host snails, and the prevalence and intensity of human infection) for model calibration and validation, is discussed. The importance of a multidisciplinary approach in developing robust spatial data capturing and modelling techniques and products applicable in epidemiology is highlighted.
Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.
2013-02-01
Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and (iii) allows inference of the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies, the Marina catchment (Singapore) and the Canning River (Western Australia), representing two different morphoclimatic contexts, and compared against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variables provided by Extra-Trees can be given a physically meaningful interpretation.
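As a rough illustration of the approach described above, the sketch below fits an extremely randomized trees regressor with scikit-learn and reports relative variable importances. The synthetic predictors, target relationship, and settings are assumptions for demonstration only, not the catchment data or configuration used in the study.

```python
# Minimal sketch: Extra-Trees for a streamflow-style regression task,
# with variable importances for ex-post interpretation. Synthetic data only.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000
X = rng.random((n, 4))            # e.g. rainfall, antecedent rainfall, temperature, season index
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 + 0.2 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = ExtraTreesRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print("test RMSE:", round(rmse, 3))
print("relative importances:", np.round(model.feature_importances_, 3))
```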
NASA Astrophysics Data System (ADS)
Koga-Vicente, A.; Friedel, M. J.
2010-12-01
Every year thousands of people are affected by floods and landslide hazards caused by rainstorms. The problem is more serious in tropical developing countries because of the susceptibility resulting from the high amount of available energy to form storms, and the high vulnerability due to poor economic and social conditions. Predictive models of hazards are important tools to manage this kind of risk. In this study, a comparison of two different modeling approaches was made for predicting hydrometeorological hazards in 12 cities on the coast of São Paulo, Brazil, from 1994 to 2003. In the first approach, an empirical multiple linear regression (MLR) model was developed and used; the second approach used a type of unsupervised nonlinear artificial neural network called a self-organizing map (SOM). By using twenty-three independent variables of susceptibility (precipitation, soil type, slope, elevation, and regional atmospheric system scale) and vulnerability (distribution and total population, income and educational characteristics, poverty intensity, human development index), binary hazard responses were obtained. Model performance by cross-validation indicated that the respective MLR and SOM model accuracy was about 67% and 80%. Prediction accuracy can be improved by the addition of information, but the SOM approach is preferred because of sparse data and highly nonlinear relations among the independent variables.
Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,
2013-01-01
Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests reconsidering the utility of model selection criteria, and/or of the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
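For readers unfamiliar with the criteria named above, the following sketch shows how AICc and BIC can be computed for competing least-squares models from their residual sums of squares, using the standard formulas with constants omitted. This is an illustrative calculation on toy data, not the UCODE_2005/MMA implementation, and KIC is not reproduced here.

```python
# Hedged sketch: ranking alternative regression models with AICc and BIC
# computed from residual sums of squares (standard least-squares forms).
import numpy as np

def aicc_bic(rss, n, k):
    """n = number of observations, k = number of estimated parameters."""
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)      # small-sample correction
    bic = n * np.log(rss / n) + k * np.log(n)
    return aicc, bic

rng = np.random.default_rng(1)
n = 50
x = rng.random(n)
y = 2.0 + 3.0 * x + 0.3 * rng.standard_normal(n)

for degree in (1, 2, 5):                            # three alternative model structures
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    aicc, bic = aicc_bic(rss, n, k=degree + 1)
    print(f"degree {degree}: AICc={aicc:.1f}  BIC={bic:.1f}")   # lower is preferred
```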
Pearce, J; Ferrier, S; Scotts, D
2001-06-01
To use models of species distributions effectively in conservation planning, it is important to determine the predictive accuracy of such models. Extensive modelling of the distribution of vascular plant and vertebrate fauna species within north-east New South Wales has been undertaken by linking field survey data to environmental and geographical predictors using logistic regression. These models have been used in the development of a comprehensive and adequate reserve system within the region. We evaluate the predictive accuracy of models for 153 small reptile, arboreal marsupial, diurnal bird and vascular plant species for which independent evaluation data were available. The predictive performance of each model was evaluated using the relative operating characteristic curve to measure discrimination capacity. Good discrimination ability implies that a model's predictions provide an acceptable index of species occurrence. The discrimination capacity of 89% of the models was significantly better than random, with 70% of the models providing high levels of discrimination. Predictions generated by this type of modelling therefore provide a reasonably sound basis for regional conservation planning. The discrimination ability of models was highest for the less mobile biological groups, particularly the vascular plants and small reptiles. In the case of diurnal birds, poor performing models tended to be for species which occur mainly within specific habitats not well sampled by either the model development or evaluation data, highly mobile species, species that are locally nomadic or those that display very broad habitat requirements. Particular care needs to be exercised when employing models for these types of species in conservation planning.
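The evaluation strategy described above (discrimination of presence/absence models measured on independent data with an ROC curve) can be sketched as follows. The predictors, species response, and data split are synthetic stand-ins, not the New South Wales survey data.

```python
# Illustrative sketch: evaluating discrimination of a presence/absence model
# with the ROC curve on an independent evaluation set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 1000
X = rng.standard_normal((n, 3))                     # e.g. temperature, rainfall, terrain predictors
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
y = rng.binomial(1, p)                              # presence (1) / absence (0)

model_data, eval_data = slice(0, 700), slice(700, None)       # independent evaluation records
model = LogisticRegression().fit(X[model_data], y[model_data])

auc = roc_auc_score(y[eval_data], model.predict_proba(X[eval_data])[:, 1])
print("ROC AUC on evaluation data:", round(auc, 3))           # 0.5 = random discrimination
```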
Bar-Cohen, Yaniv; Khairy, Paul; Morwood, James; Alexander, Mark E; Cecchin, Frank; Berul, Charles I
2006-07-01
ECG algorithms used to localize accessory pathways (AP) in patients with Wolff-Parkinson-White (WPW) syndrome have been validated in adults, but less is known of their use in children, especially in patients with congenital heart disease (CHD). We hypothesize that these algorithms have low diagnostic accuracy in children and even lower in those with CHD. Pre-excited ECGs in 43 patients with WPW and CHD (median age 5.4 years [0.9-32 years]) were evaluated and compared to 43 consecutive WPW control patients without CHD (median age 14.5 years [1.8-18 years]). Two blinded observers predicted AP location using 2 adult and 1 pediatric WPW algorithms, and a third blinded observer served as a tiebreaker. Predicted locations were compared with ablation-verified AP location to identify (a) exact match for AP location and (b) match for laterality (left-sided vs right-sided AP). In control children, adult algorithms were accurate in only 56% and 60%, while the pediatric algorithm was correct in 77%. In 19 patients with Ebstein's anomaly, diagnostic accuracy was similar to controls with at times an even better ability to predict laterality. In non-Ebstein's CHD, however, the algorithms were markedly worse (29% for the adult algorithms and 42% for the pediatric algorithms). A relatively large degree of interobserver variability was seen (kappa values from 0.30 to 0.58). Adult localization algorithms have poor diagnostic accuracy in young patients with and without CHD. Both adult and pediatric algorithms are particularly misleading in non-Ebstein's CHD patients and should be interpreted with caution.
Schreck, David M; Fishberg, Robert D
2014-01-01
Objective A new cardiac “electrical” biomarker (CEB) for detection of 12-lead electrocardiogram (ECG) changes indicative of acute myocardial ischemic injury has been identified. The objective was to test CEB diagnostic accuracy. Methods This is a blinded, observational, retrospective case-control, noninferiority study. A total of 508 ECGs obtained from archived digital databases were interpreted by cardiologist and emergency physician (EP) blinded reference standards for the presence of acute myocardial ischemic injury. The CEB was constructed from three ECG cardiac monitoring leads using nonlinear modeling. Comparative active controls included ST voltage changes (J-point, ST area under the curve) and a computerized ECG interpretive algorithm (ECGI). A training set of 141 ECGs identified CEB cutoffs by receiver-operating-characteristic (ROC) analysis. A test set of 367 ECGs was analyzed for validation. Poor-quality ECGs were excluded. Sensitivity, specificity, and negative and positive predictive values were calculated with 95% confidence intervals. Adjudication was performed by consensus. Results The CEB demonstrated noninferiority to all active controls by hypothesis testing. CEB adjudication demonstrated 85.3–94.4% sensitivity, 92.5–93.0% specificity, 93.8–98.6% negative predictive value, and 74.6–83.5% positive predictive value. The CEB was superior to all active controls in the EP analysis, and to ST area under the curve and ECGI in the cardiologist analysis. Conclusion The CEB detects acute myocardial ischemic injury with high diagnostic accuracy. The CEB is constructed from three ECG leads on the cardiac monitor and displayed instantly, allowing immediate, cost-effective identification of patients with acute ischemic injury during cardiac rhythm monitoring. PMID:24118724
Nowosad, Jakub; Stach, Alfred; Kasprzyk, Idalia; Weryszko-Chmielewska, Elżbieta; Piotrowska-Weryszko, Krystyna; Puc, Małgorzata; Grewling, Łukasz; Pędziszewska, Anna; Uruska, Agnieszka; Myszkowska, Dorota; Chłopek, Kazimiera; Majkowska-Wojciechowska, Barbara
The aim of the study was to create and evaluate models for predicting high levels of daily pollen concentration of Corylus, Alnus, and Betula using a spatiotemporal correlation of pollen count. For each taxon, a high pollen count level was established according to the first allergy symptoms during exposure. The dataset was divided into a training set and a test set, using a stratified random split. For each taxon and city, the model was built using a random forest method. Corylus models performed poorly. However, the study revealed the possibility of predicting with substantial accuracy the occurrence of days with high pollen concentrations of Alnus and Betula using past pollen count data from monitoring sites. These results can be used for building (1) simpler models, which require data only from aerobiological monitoring sites, and (2) combined meteorological and aerobiological models for predicting high levels of pollen concentration.
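A minimal sketch of the workflow described above, using a stratified random split and a random forest classifier to predict high-pollen days from past counts at other monitoring sites, is given below. The counts, threshold, and number of sites are invented for illustration and are not the study's data.

```python
# Minimal sketch: stratified split + random forest for high-pollen-day prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(3)
n_days = 600
X = rng.gamma(shape=2.0, scale=20.0, size=(n_days, 5))    # pollen counts at 5 other sites (previous day)
y = (X.mean(axis=1) + rng.normal(0, 10, n_days) > 45).astype(int)   # 1 = high-pollen day (synthetic rule)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)     # stratified split preserves class balance

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("balanced accuracy:", round(balanced_accuracy_score(y_te, clf.predict(X_te)), 3))
```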
Inoa, Violiza; Aron, Abraham W; Staff, Ilene; Fortunato, Gilbert; Sansing, Lauren H
2014-01-01
The NIH stroke scale (NIHSS) is an indispensable tool that aids in the determination of acute stroke prognosis and decision making. Patients with posterior circulation (PC) strokes often present with lower NIHSS scores, which may result in the withholding of thrombolytic treatment from these patients. However, whether these lower initial NIHSS scores predict better long-term prognoses is uncertain. We aimed to assess the utility of the NIHSS at presentation for predicting the functional outcome at 3 months in anterior circulation (AC) versus PC strokes. This was a retrospective analysis of a large prospectively collected database of adults with acute ischemic stroke. Univariate and multivariate analyses were conducted to identify factors associated with outcome. Additional analyses were performed to determine the receiver operating characteristic (ROC) curves for NIHSS scores and outcomes in AC and PC infarctions. Both the optimal cutoffs for maximal diagnostic accuracy and the cutoffs to obtain >80% sensitivity for poor outcomes were determined in AC and PC strokes. The analysis included 1,197 patients with AC stroke and 372 with PC stroke. The median initial NIHSS score for patients with AC strokes was 7 and for PC strokes it was 2. The majority (71%) of PC stroke patients had baseline NIHSS scores ≤4, and 15% of these 'minor' stroke patients had a poor outcome at 3 months. ROC analysis identified that the optimal NIHSS cutoff for outcome prediction after infarction in the AC was 8 and for infarction in the PC it was 4. To achieve >80% sensitivity for detecting patients with a subsequent poor outcome, the NIHSS cutoff for infarctions in the AC was 4 and for infarctions in the PC it was 2. The NIHSS cutoff that most accurately predicts outcomes is 4 points higher in AC compared to PC infarctions. There is potential for poor outcomes in patients with PC strokes and low NIHSS scores, suggesting that thrombolytic treatment should not be withheld from these patients based solely on the NIHSS. © 2014 S. Karger AG, Basel.
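The two cutoff rules used above (a threshold with maximal diagnostic accuracy and a threshold achieving >80% sensitivity for poor outcome) can be illustrated on an ROC curve as follows. The simulated scores and the link to outcome are assumptions for demonstration, not the study data.

```python
# Hedged sketch: choosing score cutoffs from an ROC curve by the Youden index
# and by a minimum-sensitivity requirement.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
n = 500
score = rng.integers(0, 30, n)                                # e.g. a stroke-severity score
poor = rng.binomial(1, 1 / (1 + np.exp(-(score - 8) / 3)))    # synthetic link to poor outcome

fpr, tpr, thresholds = roc_curve(poor, score)
youden_cut = thresholds[np.argmax(tpr - fpr)]                 # maximal diagnostic accuracy (Youden J)
sens_cut = thresholds[tpr >= 0.80].max()                      # highest cutoff still giving >=80% sensitivity
print("Youden-optimal cutoff:", youden_cut)
print("cutoff for >=80% sensitivity:", sens_cut)
```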
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.
Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter
2013-12-06
In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies.
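The indirect estimate discussed above amounts to a simple calculation: the cross-validated correlation between predictions and phenotypes (predictive ability) is divided by the square root of an estimated heritability. A minimal numeric sketch follows; the input values are made up.

```python
# Minimal sketch of the indirect estimate of predictive accuracy.
import numpy as np

pred_ability = 0.45      # cor(predicted breeding values, observed phenotypes) from cross-validation
h2_estimate = 0.60       # estimated heritability of the trait

pred_accuracy = pred_ability / np.sqrt(h2_estimate)
print("estimated predictive accuracy:", round(pred_accuracy, 3))   # ~0.58 with these inputs
```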
Manoharan, Sujatha C; Ramakrishnan, Swaminathan
2009-10-01
In this work, prediction of forced expiratory volume in a pulmonary function test, carried out using spirometry and neural networks, is presented. The pulmonary function data were recorded from volunteers using a commercially available flow-volume spirometer with a standard acquisition protocol. Radial basis function (RBF) neural networks were used to predict forced expiratory volume in 1 s (FEV1) from the recorded flow-volume curves. The optimal centres of the hidden layer of the RBF network were determined by the k-means clustering algorithm. The performance of the neural network model was evaluated by computing the prediction error statistics of average value, standard deviation, and root mean square, and their correlation with the true data for normal, restrictive and obstructive cases. Results show that the adopted neural networks are capable of predicting FEV1 in both normal and abnormal cases. Prediction accuracy was higher for obstructive abnormalities than for restrictive cases. It appears that this method of assessment is useful in diagnosing pulmonary abnormalities with incomplete data and data with poor recording.
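The general construction described above (hidden-layer centres from k-means, Gaussian basis functions, output weights by least squares) can be sketched as below. The inputs stand in for features of flow-volume curves and the width heuristic is an assumption; this is not the authors' trained model.

```python
# Hedged sketch of an RBF network regression with k-means centres.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
X = rng.random((300, 2))                               # stand-in features from flow-volume curves
y = 2.5 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.standard_normal(300)   # stand-in for FEV1

k = 12
centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = np.mean([np.linalg.norm(c1 - c2) for c1 in centres for c2 in centres]) / 2  # width heuristic

def design(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))              # Gaussian RBF activations

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)      # output-layer weights by least squares
y_hat = design(X) @ w
print("correlation with true values:", round(np.corrcoef(y, y_hat)[0, 1], 3))
```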
Modifying Bagnold's Sediment Transport Equation for Use in Watershed-Scale Channel Incision Models
NASA Astrophysics Data System (ADS)
Lammers, R. W.; Bledsoe, B. P.
2016-12-01
Destabilized stream channels may evolve through a sequence of stages, initiated by bed incision and followed by bank erosion and widening. Channel incision can be modeled using Exner-type mass balance equations, but model accuracy is limited by the accuracy and applicability of the selected sediment transport equation. Additionally, many sediment transport relationships require significant data inputs, limiting their usefulness in data-poor environments. Bagnold's empirical relationship for bedload transport is attractive because it is based on stream power, a relatively straightforward parameter to estimate using remote sensing data. However, the equation is also dependent on flow depth, which is more difficult to measure or estimate for entire drainage networks. We recast Bagnold's original sediment transport equation using specific discharge in place of flow depth. Using a large dataset of sediment transport rates from the literature, we show that this approach yields similar predictive accuracy as other stream power based relationships. We also explore the applicability of various critical stream power equations, including Bagnold's original, and support previous conclusions that these critical values can be predicted well based solely on sediment grain size. In addition, we propagate error in these sediment transport equations through channel incision modeling to compare the errors associated with our equation to alternative formulations. This new version of Bagnold's bedload transport equation has utility for channel incision modeling at larger spatial scales using widely available and remote sensing data.
Suh, Sang-Yeon; Choi, Youn Seon; Shim, Jae Yong; Kim, Young Sung; Yeom, Chang Hwan; Kim, Daeyoung; Park, Shin Ae; Kim, Sooa; Seo, Ji Yeon; Kim, Su Hyun; Kim, Daegyeun; Choi, Sung-Eun; Ahn, Hong-Yup
2010-02-01
The goal of this study was to develop a new, objective prognostic score (OPS) for terminally ill cancer patients based on an integrated model that includes novel objective prognostic factors. In a multicenter study, 209 terminally ill cancer patients from six training hospitals in Korea were prospectively followed until death. The Cox proportional hazard model was used to adjust for the influence of clinical and laboratory variables on survival time. The OPS was calculated from the sum of partial scores obtained from seven significant predictors determined by the final model. The partial score was based on the hazard ratio of each predictor. The accuracy of the OPS was evaluated. The overall median survival was 26 days. On the multivariate analysis, reduced oral intake, resting dyspnea, low performance status, leukocytosis, elevated bilirubin, elevated creatinine, and elevated lactate dehydrogenase (LDH) were identified as poor prognostic factors. The range of the OPS was from 0.0 to 7.0. For the cutoff point of 3.0, the 3-week prediction sensitivity was 74.7%, the specificity was 76.5%, and the overall accuracy was 75.5%. We developed the new OPS, without clinicians' survival estimates but including a new prognostic factor (LDH). This new instrument demonstrated accurate prediction of 3-week survival. The OPS had acceptable accuracy in this study population (training set). Further validation is required on an independent population (testing set).
Rana, Jamal S.; Tabada, Grace H.; Solomon, Matthew D.; Lo, Joan C.; Jaffe, Marc G.; Sung, Sue Hee; Ballantyne, Christie M.; Go, Alan S.
2016-01-01
Background The accuracy of the 2013 American College of Cardiology/American Heart Association (ACC/AHA) risk equation for atherosclerotic cardiovascular disease (ASCVD) events in contemporary and ethnically diverse populations is not well understood. Objectives We sought to evaluate the accuracy of the 2013 ACC/AHA risk equation within a large, multiethnic population in clinical care. Methods The target population for consideration of cholesterol-lowering therapy in a large, integrated health care delivery system population was identified in 2008 and followed through 2013. The main analyses excluded those with known ASCVD, diabetes mellitus, low-density lipoprotein cholesterol levels <70 or ≥190 mg/dl, prior statin use, or incomplete 5-year follow-up. Patient characteristics were obtained from electronic medical records and ASCVD events were ascertained using validated algorithms for hospitalization databases and death certificates. We compared predicted versus observed 5-year ASCVD risk, overall and by sex and race/ethnicity. We additionally examined predicted versus observed risk in patients with diabetes mellitus. Results Among 307,591 eligible adults without diabetes between 40 and 75 years of age, 22,283 were black, 52,917 Asian/Pacific Islander, and 18,745 Hispanic. We observed 2,061 ASCVD events during 1,515,142 person-years. In each 5-year predicted ASCVD risk category, observed 5-year ASCVD risk was substantially lower: 0.20% for predicted risk <2.50%; 0.65% for predicted risk 2.50 to 3.74%; 0.90% for predicted risk 3.75 to 4.99%; and 1.85% for predicted risk ≥5.00%, with C: 0.74. Similar ASCVD risk overestimation and poor calibration with moderate discrimination (C: 0.68 to 0.74) was observed in sex, racial/ethnic, and socioeconomic status subgroups, and in sensitivity analyses among patients receiving statins for primary prevention. Calibration among 4,242 eligible adults with diabetes was improved, but discrimination was worse (C: 0.64). Conclusions In a large, contemporary “real-world” population, the ACC/AHA Pooled Cohort risk equation substantially overestimated actual 5-year risk in adults without diabetes, overall and across sociodemographic subgroups. PMID:27151343
Perceptual impairment in face identification with poor sleep
Beattie, Louise; Walsh, Darragh; McLaren, Jessica; Biello, Stephany M.
2016-01-01
Previous studies have shown impaired memory for faces following restricted sleep. However, it is not known whether lack of sleep impairs performance on face identification tasks that do not rely on recognition memory, despite these tasks being more prevalent in security and forensic professions—for example, in photo-ID checks at national borders. Here we tested whether poor sleep affects accuracy on a standard test of face-matching ability that does not place demands on memory: the Glasgow Face-Matching Task (GFMT). In Experiment 1, participants who reported sleep disturbance consistent with insomnia disorder show impaired accuracy on the GFMT when compared with participants reporting normal sleep behaviour. In Experiment 2, we then used a sleep diary method to compare GFMT accuracy in a control group to participants reporting poor sleep on three consecutive nights—and again found lower accuracy scores in the short sleep group. In both experiments, reduced face-matching accuracy in those with poorer sleep was not associated with lower confidence in their decisions, carrying implications for occupational settings where identification errors made with high confidence can have serious outcomes. These results suggest that sleep-related impairments in face memory reflect difficulties in perceptual encoding of identity, and point towards metacognitive impairment in face matching following poor sleep. PMID:27853547
Devarajan, Prasad; Zappitelli, Michael; Sint, Kyaw; Thiessen-Philbrook, Heather; Li, Simon; Kim, Richard W.; Koyner, Jay L.; Coca, Steven G.; Edelstein, Charles L.; Shlipak, Michael G.; Garg, Amit X.; Krawczeski, Catherine D.
2011-01-01
Acute kidney injury (AKI) occurs commonly after pediatric cardiac surgery and associates with poor outcomes. Biomarkers may help the prediction or early identification of AKI, potentially increasing opportunities for therapeutic interventions. Here, we conducted a prospective, multicenter cohort study involving 311 children undergoing surgery for congenital cardiac lesions to evaluate whether early postoperative measures of urine IL-18, urine neutrophil gelatinase-associated lipocalin (NGAL), or plasma NGAL could identify which patients would develop AKI and other adverse outcomes. Urine IL-18 and urine and plasma NGAL levels peaked within 6 hours after surgery. Severe AKI, defined by dialysis or doubling in serum creatinine during hospital stay, occurred in 53 participants at a median of 2 days after surgery. The first postoperative urine IL-18 and urine NGAL levels strongly associated with severe AKI. After multivariable adjustment, the highest quintiles of urine IL-18 and urine NGAL associated with 6.9- and 4.1-fold higher odds of AKI, respectively, compared with the lowest quintiles. Elevated urine IL-18 and urine NGAL levels associated with longer hospital stay, longer intensive care unit stay, and duration of mechanical ventilation. The accuracy of urine IL-18 and urine NGAL for diagnosis of severe AKI was moderate, with areas under the curve of 0.72 and 0.71, respectively. The addition of these urine biomarkers improved risk prediction over clinical models alone as measured by net reclassification improvement and integrated discrimination improvement. In conclusion, urine IL-18 and urine NGAL, but not plasma NGAL, associate with subsequent AKI and poor outcomes among children undergoing cardiac surgery. PMID:21836147
Using Bluetooth proximity sensing to determine where office workers spend time at work.
Clark, Bronwyn K; Winkler, Elisabeth A; Brakenridge, Charlotte L; Trost, Stewart G; Healy, Genevieve N
2018-01-01
Most wearable devices that measure movement in workplaces cannot determine the context in which people spend time. This study examined the accuracy of Bluetooth sensing (10-second intervals) via the ActiGraph GT9X Link monitor to determine location in an office setting, using two simple, bespoke algorithms. For one work day (mean±SD 6.2±1.1 hours), 30 office workers (30% men, aged 38±11 years) simultaneously wore chest-mounted cameras (video recording) and Bluetooth-enabled monitors (initialised as receivers) on the wrist and thigh. Additional monitors (initialised as beacons) were placed in the entry, kitchen, photocopy room, corridors, and the wearer's office. Firstly, participant presence/absence at each location was predicted from the presence/absence of signals at that location (ignoring all other signals). Secondly, using the information gathered at multiple locations simultaneously, a simple heuristic model was used to predict at which location the participant was present. The Bluetooth-determined location for each algorithm was tested against the camera in terms of F-scores. When considering locations individually, the accuracy obtained was excellent in the office (F-score = 0.98 and 0.97 for thigh and wrist positions) but poor in other locations (F-score = 0.04 to 0.36), stemming primarily from a high false positive rate. The multi-location algorithm exhibited high accuracy for the office location (F-score = 0.97 for both wear positions). It also improved the F-scores obtained in the remaining locations, but not always to levels indicating good accuracy (e.g., F-score for photocopy room ≈0.1 in both wear positions). The Bluetooth signalling function shows promise for determining where workers spend most of their time (i.e., their office). Placing beacons in multiple locations and using a rule-based decision model improved classification accuracy; however, for workplace locations visited infrequently or with considerable movement, accuracy was below desirable levels. Further development of algorithms is warranted.
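The per-location evaluation described above scores Bluetooth-predicted presence/absence against camera-derived ground truth with an F-score. A toy example of that calculation follows; the epoch sequences are invented, not the study recordings.

```python
# Illustrative sketch: F-score for one location, Bluetooth prediction vs camera truth.
from sklearn.metrics import f1_score

# 10-second epochs: 1 = worker present at the location, 0 = absent
camera_truth   = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0]
bluetooth_pred = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0]

print("F-score for this location:", round(f1_score(camera_truth, bluetooth_pred), 2))
```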
Passante, E; Würstle, M L; Hellwig, C T; Leverkus, M; Rehm, M
2013-01-01
Many cancer entities and their associated cell line models are highly heterogeneous in their responsiveness to apoptosis inducers and, despite a detailed understanding of the underlying signaling networks, cell death susceptibility currently cannot be predicted reliably from protein expression profiles. Here, we demonstrate that an integration of quantitative apoptosis protein expression data with pathway knowledge can predict the cell death responsiveness of melanoma cell lines. By a total of 612 measurements, we determined the absolute expression (nM) of 17 core apoptosis regulators in a panel of 11 melanoma cell lines, and enriched these data with systems-level information on apoptosis pathway topology. By applying multivariate statistical analysis and multi-dimensional pattern recognition algorithms, the responsiveness of individual cell lines to tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) or dacarbazine (DTIC) could be predicted with very high accuracy (91 and 82% correct predictions), and the most effective treatment option for individual cell lines could be pre-determined in silico. In contrast, cell death responsiveness was poorly predicted when not taking knowledge on protein–protein interactions into account (55 and 36% correct predictions). We also generated mathematical predictions on whether anti-apoptotic Bcl-2 family members or x-linked inhibitor of apoptosis protein (XIAP) can be targeted to enhance TRAIL responsiveness in individual cell lines. Subsequent experiments, making use of pharmacological Bcl-2/Bcl-xL inhibition or siRNA-based XIAP depletion, confirmed the accuracy of these predictions. We therefore demonstrate that cell death responsiveness to TRAIL or DTIC can be predicted reliably in a large number of melanoma cell lines when investigating expression patterns of apoptosis regulators in the context of their network-level interplay. The capacity to predict responsiveness at the cellular level may contribute to personalizing anti-cancer treatments in the future. PMID:23933815
NASA Astrophysics Data System (ADS)
Anderson, R. B.; Clegg, S. M.; Frydenvang, J.
2015-12-01
One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
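A rough sketch of the submodel idea follows: a full-range PLS model routes each unknown to overlapping low/mid/high submodels, and predictions are blended linearly in the overlap regions. The synthetic spectra, composition ranges, and blending weights are assumptions for illustration, not the ChemCam calibration.

```python
# Hedged sketch of submodel PLS with linear blending in overlap regions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
n, p = 400, 50
comp = rng.uniform(0, 100, n)                                   # "wt.%" of the element of interest
X = np.outer(comp, rng.random(p)) + 5 * rng.standard_normal((n, p))   # toy spectra

ranges = {"low": (0, 50), "mid": (30, 70), "high": (60, 100)}
full = PLSRegression(n_components=5).fit(X, comp)
subs = {name: PLSRegression(n_components=5).fit(X[(comp >= lo) & (comp <= hi)],
                                                comp[(comp >= lo) & (comp <= hi)])
        for name, (lo, hi) in ranges.items()}

def est(model, x):
    return float(np.ravel(model.predict(x.reshape(1, -1)))[0])

def predict(x):
    routed = est(full, x)                                       # full-model estimate used only for routing
    preds = {name: est(m, x) for name, m in subs.items()}
    if routed < 30:  return preds["low"]
    if routed < 50:  w = (routed - 30) / 20; return (1 - w) * preds["low"] + w * preds["mid"]
    if routed < 60:  return preds["mid"]
    if routed < 70:  w = (routed - 60) / 10; return (1 - w) * preds["mid"] + w * preds["high"]
    return preds["high"]

print("blended estimate:", round(predict(X[0]), 1), " true:", round(comp[0], 1))
```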
Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam
2012-01-01
Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal size for presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.
Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions
Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.
2013-01-01
Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843
Lodha, Abhay; Sauvé, Reg; Chen, Sophie; Tang, Selphee; Christianson, Heather
2009-11-01
In this study, we evaluated the Clinical Risk Index for Babies - revised (CRIB-II) score as a predictor of long-term neurodevelopmental outcomes in preterm infants at 36 months' corrected age. CRIB-II scores, which include birthweight, gestational age, sex, admission temperature, and base excess, were recorded prospectively on all infants weighing 1250g or less admitted to the neonatal intensive care unit (NICU). The sensitivity and specificity of CRIB-II scores to predict poor outcomes were examined using receiver operating characteristic curves, and predictive accuracy was assessed using the area under the curve (AUC), based on the observed values entered on a continuous scale. Poor outcomes were defined as death or major neurodevelopmental disability (cerebral palsy, neurosensory hearing loss requiring amplification, legal blindness, severe seizure disorder, or cognitive score >2SD below the mean for adjusted age determined by clinical neurological examination and on the Wechsler Preschool and Primary Scale of Intelligence, Bayley Scales of Infant Development, or revised Leiter International Performance Scale). Of the 180 infants admitted to the NICU, 155 survived. Complete follow-up data were available for 107 children. The male:female ratio was 50:57 (47-53%), median birthweight was 930g (range 511-1250g), and median gestational age was 27 weeks (range 23-32wks). Major neurodevelopmental impairment was observed in 11.2% of participants. In a regression model, the CRIB-II score was significantly correlated with long-term neurodevelopmental outcomes. It predicted major neurodevelopmental impairment (odds ratio [OR] 1.57, bootstrap 95% confidence interval [CI] 1.26-3.01; AUC 0.84) and poor outcome (OR 1.46; bootstrap 95% CI 1.31-1.71, AUC 0.82) at 36 months' corrected age. CRIB-II scores of 13 or more in the first hour of life can reliably predict major neurodevelopmental impairment at 36 months' corrected age (sensitivity 83%; specificity 84%).
Multiclass cancer diagnosis using tumor gene expression signatures
Ramaswamy, S.; Tamayo, P.; Rifkin, R.; ...
2001-12-11
The optimal treatment of patients with cancer depends on establishing accurate diagnoses by using a complex combination of clinical and histopathological data. In some instances, this task is difficult or impossible because of atypical clinical presentation or histopathology. To determine whether the diagnosis of multiple common adult malignancies could be achieved purely by molecular classification, we subjected 218 tumor samples, spanning 14 common tumor types, and 90 normal tissue samples to oligonucleotide microarray gene expression analysis. The expression levels of 16,063 genes and expressed sequence tags were used to evaluate the accuracy of a multiclass classifier based on a support vector machine algorithm. Overall classification accuracy was 78%, far exceeding the accuracy of random classification (9%). Poorly differentiated cancers resulted in low-confidence predictions and could not be accurately classified according to their tissue of origin, indicating that they are molecularly distinct entities with dramatically different gene expression patterns compared with their well differentiated counterparts. Taken together, these results demonstrate the feasibility of accurate, multiclass molecular cancer classification and suggest a strategy for future clinical implementation of molecular cancer diagnostics.
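For readers unfamiliar with the classifier named above, the following sketch shows a multiclass support vector machine (one-vs-rest) evaluated by cross-validation on a small synthetic "expression profile" dataset; the data, class count, and settings are assumptions, not the 16,063-gene microarray data or the study's exact algorithm.

```python
# Illustrative sketch: multiclass SVM (one-vs-rest) with cross-validated accuracy.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_per_class, n_classes, n_genes = 20, 5, 100
X = np.vstack([rng.normal(loc=c, scale=3.0, size=(n_per_class, n_genes)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

clf = LinearSVC(C=1.0, max_iter=5000)       # one-vs-rest by default for multiclass problems
acc = cross_val_score(clf, X, y, cv=5).mean()
print("cross-validated accuracy:", round(acc, 2), " (random guessing ~", 1 / n_classes, ")")
```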
Chipps, S.R.; Einfalt, L.M.; Wahl, David H.
2000-01-01
We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25°C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5°C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25°C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations; this was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.
Effect of Spatio-Temporal Variability of Rainfall on Stream flow Prediction of Birr Watershed
NASA Astrophysics Data System (ADS)
Demisse, N. S.; Bitew, M. M.; Gebremichael, M.
2012-12-01
The effect of rainfall variability on our ability to forecast flooding events has been poorly studied in the complex terrain regions of Ethiopia. In order to establish the relation between rainfall variability and stream flow, we deployed 24 rain gauges across the Birr watershed. The Birr watershed is a medium-sized mountainous watershed with an area of 3000 km2 and elevation ranging between 1435 m.a.s.l. and 3400 m.a.s.l. in the central Ethiopian highlands. Rainfall from the 2012 summer monsoon, recorded at a high temporal resolution of 15-minute intervals, and stream flow, recorded at hourly intervals at three sub-watershed locations representing different scales, were used in this study. Based on the data obtained from the rain gauges and stream flow observations, we quantify the extent of temporal and spatial variability of rainfall across the watershed using standard statistical measures including the mean, standard deviation and coefficient of variation. We also establish a rainfall-runoff modeling system using a physically distributed hydrological model, the Soil and Water Assessment Tool (SWAT), and examine the effect of rainfall variability on stream flow prediction. The accuracy of predicted stream flow is measured through direct comparison with observed flooding events. The results demonstrate the significance of the relation between stream flow prediction and rainfall variability for understanding runoff generation mechanisms at the watershed scale, determining dominant water balance components, and assessing the effect of variability on the accuracy of flood forecasting activities.
Ernst, Corinna; Hahnen, Eric; Engel, Christoph; Nothnagel, Michael; Weber, Jonas; Schmutzler, Rita K; Hauke, Jan
2018-03-27
The use of next-generation sequencing approaches in clinical diagnostics has led to a tremendous increase in data and a vast number of variants of uncertain significance that require interpretation. Therefore, prediction of the effects of missense mutations using in silico tools has become a frequently used approach. The aim of this study was to assess the reliability of in silico prediction as a basis for clinical decision making in the context of hereditary breast and/or ovarian cancer. We tested the performance of four prediction tools (Align-GVGD, SIFT, PolyPhen-2, MutationTaster2) using a set of 236 BRCA1/2 missense variants that had previously been classified by expert committees. However, a major pitfall in the creation of a reliable evaluation set for our purpose is the generally accepted classification of BRCA1/2 missense variants using the multifactorial likelihood model, which is partially based on Align-GVGD results. To overcome this drawback we identified 161 variants whose classification is independent of any previous in silico prediction. In addition to their performance as stand-alone tools, we examined the sensitivity, specificity, accuracy and Matthews correlation coefficient (MCC) of combined approaches. PolyPhen-2 achieved the lowest sensitivity (0.67), specificity (0.67), accuracy (0.67) and MCC (0.39). Align-GVGD achieved the highest values of specificity (0.92), accuracy (0.92) and MCC (0.73), but was outperformed regarding its sensitivity (0.90) by SIFT (1.00) and MutationTaster2 (1.00). All tools suffered from poor specificities, resulting in an unacceptable proportion of false positive results in a clinical setting. This shortcoming could not be bypassed by combining these tools. In the best case scenario, 138 families would be affected by the misclassification of neutral variants within the cohort of patients of the German Consortium for Hereditary Breast and Ovarian Cancer. We show that due to low specificities, state-of-the-art in silico prediction tools are not suitable to predict pathogenicity of variants of uncertain significance in BRCA1/2. Thus, clinical consequences should never be based solely on in silico forecasts. However, our data suggest that SIFT and MutationTaster2 could be suitable to predict benignity, as neither tool produced false negative predictions in our analysis.
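The performance measures compared above can all be derived from a single confusion matrix of predicted versus expert-classified variants. The sketch below shows that calculation; the counts are invented to mimic a tool with good sensitivity but poor specificity and are not the BRCA1/2 evaluation results.

```python
# Minimal sketch: sensitivity, specificity, accuracy and MCC from a confusion matrix.
import numpy as np
from sklearn.metrics import matthews_corrcoef, confusion_matrix

# 1 = pathogenic, 0 = neutral (invented counts for illustration)
truth = np.array([1] * 30 + [0] * 131)
pred  = np.array([1] * 27 + [0] * 3 + [1] * 43 + [0] * 88)

tn, fp, fn, tp = confusion_matrix(truth, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(truth)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} acc={accuracy:.2f} "
      f"MCC={matthews_corrcoef(truth, pred):.2f}")
```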
Link Prediction in Evolving Networks Based on Popularity of Nodes.
Wang, Tong; He, Xing-Sheng; Zhou, Ming-Yang; Fu, Zhong-Qian
2017-08-02
Link prediction aims to uncover the underlying relationships behind networks, which can be used to predict missing edges or identify spurious edges. The key issue of link prediction is to estimate the likelihood of potential links in a network. Most classical methods based on static structure ignore the temporal aspects of networks; limited by such time-varying features, these approaches perform poorly in evolving networks. In this paper, we propose the hypothesis that the ability of each node to attract links depends not only on its structural importance, but also on its current popularity (activeness), since active nodes are much more likely to attract future links. A novel approach named the popularity based structural perturbation method (PBSPM) and its fast algorithm are then proposed to characterize the likelihood of an edge from both the existing connectivity structure and the current popularity of its two endpoints. Experiments on six evolving networks show that the proposed methods outperform state-of-the-art methods in accuracy and robustness. Besides, visual results and statistical analysis reveal that the proposed methods are inclined to predict future edges between active nodes, rather than edges between inactive nodes.
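The core idea (weighting link likelihood by endpoint activeness) can be illustrated with a much simpler scorer than the authors' PBSPM: a common-neighbours score multiplied by how many recent edges each endpoint has acquired. The graph, time windows, and weighting rule below are assumptions for illustration only and do not reproduce the structural perturbation method itself.

```python
# Simplified, hedged illustration of activeness-weighted link scoring (not PBSPM).
import itertools
import networkx as nx

G = nx.Graph()
old_edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5)]
recent_edges = [(0, 4), (1, 4), (2, 5)]              # "current" window used to measure popularity
G.add_edges_from(old_edges + recent_edges)

recent_degree = {n: 0 for n in G}
for u, v in recent_edges:
    recent_degree[u] += 1
    recent_degree[v] += 1

def score(u, v):
    common = len(list(nx.common_neighbors(G, u, v)))
    popularity = 1 + recent_degree[u] + recent_degree[v]   # active endpoints score higher
    return common * popularity

candidates = [(u, v) for u, v in itertools.combinations(G.nodes, 2) if not G.has_edge(u, v)]
for u, v in sorted(candidates, key=lambda e: -score(*e))[:3]:
    print(f"candidate edge {u}-{v}: score {score(u, v)}")
```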
The evaluation of hepatic fibrosis scores in children with nonalcoholic fatty liver disease.
Mansoor, Sana; Yerian, Lisa; Kohli, Rohit; Xanthakos, Stavra; Angulo, Paul; Ling, Simon; Lopez, Rocio; Christine, Carter-Kent; Feldstein, Ariel E; Alkhouri, Naim
2015-05-01
Nonalcoholic fatty liver disease (NAFLD) is the most common form of chronic liver disease in children and can progress to liver cirrhosis during childhood. Patients with more advanced fibrosis on biopsy tend to have more liver complications. Noninvasive hepatic fibrosis scores have been developed for adult patients with NAFLD; however, these scores have not been validated in children. The aim of our study was to evaluate some of these scores in assessing the presence of fibrosis in children with biopsy-proven NAFLD. Our study consisted of 92 biopsy-proven NAFLD children from five major US centers. Fibrosis was determined by an experienced pathologist (F0-4). Clinically significant fibrosis was defined as fibrosis stage ≥ 2, and advanced fibrosis was defined as F3-4. The following fibrosis scores were calculated for each child: AST/ALT ratio, AST/platelet ratio index (APRI), NAFLD fibrosis score (NFS), and FIB-4 index. ROC was performed to assess the performance of different scores for prediction of presence of any, significant, or advanced fibrosis. A p value < 0.05 was considered statistically significant. Mean age was 13.3 ± 3 years, and 33 % were females. Eleven (12 %) subjects had no fibrosis, 35 (38 %) had fibrosis score of 1, 26 (28 %) had fibrosis score of 2, and 20 (22 %) had a score of 3. APRI had a fair diagnostic accuracy for the presence of any fibrosis (AUC of 0.80) and poor diagnostic accuracy for significant or advanced fibrosis. AST/ALT, NFS, and FIB-4 index all either had poor diagnostic accuracy or failed to diagnose the presence of any, significant, or advanced fibrosis. Noninvasive hepatic fibrosis scores developed in adults had poor performance in diagnosing significant fibrosis in children with NAFLD. Our results highlight the urgent need to develop a reliable pediatric fibrosis score.
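Two of the scores evaluated above can be computed from routine laboratory values using their commonly published formulas, sketched below; the example values are invented and any unit conventions should be checked against the original score definitions before clinical use.

```python
# Hedged sketch: APRI and FIB-4 from commonly published formulas (illustrative values).
import math

def apri(ast_u_l, ast_upper_limit_normal, platelets_10e9_l):
    return (ast_u_l / ast_upper_limit_normal) * 100 / platelets_10e9_l

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

print("APRI :", round(apri(ast_u_l=55, ast_upper_limit_normal=40, platelets_10e9_l=250), 2))
print("FIB-4:", round(fib4(age_years=13, ast_u_l=55, alt_u_l=80, platelets_10e9_l=250), 2))
```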
67Ga citrate scanning and serum angiotensin converting enzyme levels in sarcoidosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, R.G.; Bekerman, C.; Sicilian, L.
1982-09-01
67Ga citrate scans and serum angiotensin converting enzyme (ACE) levels were obtained in 54 patients with sarcoidosis and analyzed in relation to clinical manifestations. 67Ga scans were abnormal in 97% of patients with clinically active disease (n = 30) and in 71% of patients with inactive disease (n = 24). Serum ACE levels were abnormally high (2 standard deviations above the control mean) in 73% of patients with clinically active disease and in 54% of patients with inactive disease. Serum ACE levels correlated significantly with 67Ga uptake score (r = .436; p < .005). The frequency of abnormal 67Ga scans and elevated serum ACE levels suggests that inflammatory activity with little or no clinical expression is common in sarcoidosis. Abnormal 67Ga scans were highly sensitive (97%) but had poor specificity (29%) for clinical disease activity. The accuracy of negative prediction of clinical activity by normal scans (87%) was better than the accuracy of positive prediction of clinical activity by abnormal scans (63%). 67Ga scans can be used to support the clinical identification of inactive sarcoidosis.
NASA Astrophysics Data System (ADS)
Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han
2017-11-01
Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, the existing techniques and methodologies for degradation measurement still face challenges, such as lack of appropriate degradation indicator, insufficient accuracy, and poor capability to track the data fluctuation. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for the dynamic update of the model. Two key parameters, namely the step size and the number of states, contribute to the adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method has better performance, in terms of both measuring accuracy and data fluctuation tracing, in comparison with other conventional methods.
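As a rough illustration of the grey-modelling ingredient of such a hybrid approach, the sketch below fits a basic GM(1,1) model to a degradation-index series and extrapolates it. The moving-window update and the Markov state correction described above are omitted, so this is only a partial, assumed reconstruction rather than the published method.

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """Fit GM(1,1) to a positive 1-D series x and forecast `steps` values ahead."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                             # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack((-z, np.ones(len(z))))
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    n = len(x)
    k = np.arange(n + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.diff(x1_hat, prepend=x1_hat[0])    # inverse accumulation
    x_hat[0] = x[0]
    return x_hat[n:]                              # the forecast values

# Toy degradation-index series (illustrative numbers only).
print(gm11_forecast([10.0, 10.5, 11.2, 11.9, 12.8], steps=3))
```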
Robust Multimodal Dictionary Learning
Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates between identifying poorly corresponding patches and refining the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
Applying data mining techniques to improve diagnosis in neonatal jaundice.
Ferreira, Duarte; Oliveira, Abílio; Freitas, Alberto
2012-12-07
Hyperbilirubinemia is emerging as an increasingly common problem in newborns due to a decreasing hospital length of stay after birth. Jaundice is the most common disease of the newborn and, although benign in most cases, it can lead to severe neurological consequences if poorly evaluated. In different areas of medicine, data mining has contributed to improving the results obtained with other methodologies. Hence, the aim of this study was to improve the diagnosis of neonatal jaundice with the application of data mining techniques. This study followed the different phases of the Cross Industry Standard Process for Data Mining model as its methodology. This observational study was performed at the Obstetrics Department of a central hospital (Centro Hospitalar Tâmega e Sousa--EPE), from February to March of 2011. A total of 227 healthy newborn infants with 35 or more weeks of gestation were enrolled in the study. Over 70 variables were collected and analyzed. Also, transcutaneous bilirubin levels were measured from birth to hospital discharge with maximum time intervals of 8 hours between measurements, using a noninvasive bilirubinometer. Different attribute subsets were used to train and test classification models using algorithms included in the Weka data mining software, such as decision trees (J48) and neural networks (multilayer perceptron). The accuracy results were compared with the traditional methods for prediction of hyperbilirubinemia. The application of different classification algorithms to the collected data allowed predicting subsequent hyperbilirubinemia with high accuracy. In particular, at 24 hours of life of newborns, the accuracy for the prediction of hyperbilirubinemia was 89%. The best results were obtained using the following algorithms: naive Bayes, multilayer perceptron, and simple logistic. The findings of our study support the idea that new approaches, such as data mining, may aid medical decision-making, contributing to improved diagnosis in neonatal jaundice.
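The authors used Weka (J48 decision trees and a multilayer perceptron); a rough scikit-learn analogue of that classification step is sketched below, with synthetic placeholder arrays standing in for the collected newborn variables and outcome labels.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(227, 10))                        # placeholder for the collected variables
y = (X[:, 0] + rng.normal(size=227) > 1).astype(int)  # placeholder hyperbilirubinemia label

models = {
    "decision_tree (J48-like)": DecisionTreeClassifier(max_depth=5, random_state=0),
    "multilayer_perceptron": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```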
Evaluation of Pharmacokinetic Assumptions Using a 443 ...
With the increasing availability of high-throughput and in vitro data for untested chemicals, there is a need for pharmacokinetic (PK) models for in vitro to in vivo extrapolation (IVIVE). Though some PBPK models have been created for individual compounds using in vivo data, we are now able to rapidly parameterize generic PBPK models using in vitro data to allow IVIVE for chemicals tested for bioactivity via high-throughput screening. However, these new models are expected to have limited accuracy due to their simplicity and generalization of assumptions. We evaluated the assumptions and performance of a generic PBPK model (R package “httk”) parameterized by a library of in vitro PK data for 443 chemicals. We evaluate and calibrate Schmitt’s method by comparing the predicted volume of distribution (Vd) and tissue partition coefficients to in vivo measurements. The partition coefficients are initially overpredicted, likely due to overestimation of partitioning into phospholipids in tissues and the lack of lipid partitioning in the in vitro measurements of the fraction unbound in plasma. Correcting for phospholipids and plasma binding improved the predictive ability (R2 improved to 0.52 for partition coefficients and 0.32 for Vd). We lacked enough data to evaluate the accuracy of changing the model structure to include tissue blood volumes and/or separate compartments for richly/poorly perfused tissues; therefore, we evaluated the impact of these changes on model
Ollikainen, Noah; de Jong, René M; Kortemme, Tanja
2015-01-01
Interactions between small molecules and proteins play critical roles in regulating and facilitating diverse biological functions, yet our ability to accurately re-engineer the specificity of these interactions using computational approaches has been limited. One main difficulty, in addition to inaccuracies in energy functions, is the exquisite sensitivity of protein-ligand interactions to subtle conformational changes, coupled with the computational problem of sampling the large conformational search space of degrees of freedom of ligands, amino acid side chains, and the protein backbone. Here, we describe two benchmarks for evaluating the accuracy of computational approaches for re-engineering protein-ligand interactions: (i) prediction of enzyme specificity altering mutations and (ii) prediction of sequence tolerance in ligand binding sites. After finding that current state-of-the-art "fixed backbone" design methods perform poorly on these tests, we develop a new "coupled moves" design method in the program Rosetta that couples changes to protein sequence with alterations in both protein side-chain and protein backbone conformations, and allows for changes in ligand rigid-body and torsion degrees of freedom. We show significantly increased accuracy in both predicting ligand specificity altering mutations and binding site sequences. These methodological improvements should be useful for many applications of protein-ligand design. The approach also provides insights into the role of subtle conformational adjustments that enable functional changes not only in engineering applications but also in natural protein evolution.
Herpers, Pierre C M; Klip, Helen; Rommelse, Nanda N J; Taylor, Mark J; Greven, Corina U; Buitelaar, Jan K
2017-07-01
Callous-unemotional (CU) traits have mainly been studied in relation to conduct disorder (CD), but can also occur in other disorder groups. However, it is unclear whether there is a clinically relevant cut-off value of levels of CU traits in predicting reduced quality of life (QoL) and clinical symptoms, and whether CU traits better fit a categorical (taxonic) or dimensional model. Parents of 979 youths referred to a child and adolescent psychiatric clinic rated their child's CU traits on the Inventory of Callous-Unemotional traits (ICU), QoL on the Kidscreen-27, and clinical symptoms on the Child Behavior Checklist. Experienced clinicians conferred DSM-IV-TR diagnoses of ADHD, ASD, anxiety/mood disorders and DBD-NOS/ODD. The ICU was also used to score the DSM-5 specifier 'with limited prosocial emotions' (LPE) of Conduct Disorder. Receiver operating characteristic (ROC) analyses revealed that the predictive accuracy of the ICU and LPE regarding QoL and clinical symptoms was poor to fair, and similar across diagnoses. A clinical cut-off point could not be defined. Taxometric analyses suggested that callous-unemotional traits on the ICU best reflect a dimension rather than taxon. More research is needed on the impact of CU traits on the functional adaptation, course, and response to treatment of non-CD conditions. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Daylami, Rouzbeh; Rogers, Ann M; King, Tonya S; Haluck, Randy S; Shope, Timothy R
2008-01-01
Stricture at the gastrojejunal anastomosis after Roux-en-Y gastric bypass is a significant sequela that often requires intervention. The diagnosis of stricture is usually established by a recognized constellation of symptoms, followed by contrast radiography or endoscopy. The purpose of this report was to evaluate the accuracy of contrast swallow studies in excluding the diagnosis of gastrojejunal stricture. A retrospective analysis of the charts of 119 patients who had undergone laparoscopic Roux-en-Y gastric bypass, representing 41 upper gastrointestinal (GI) swallow studies, was conducted. Of those patients who underwent GI swallow studies, 30 then underwent definitive upper endoscopy to confirm or rule out stricture. The overall sensitivity, specificity, and negative predictive value of the swallow studies were calculated. Of the 30 patients who underwent upper endoscopic examination for symptoms of stricture after laparoscopic gastric bypass, 20 were confirmed to have a stricture. The sensitivity, specificity, and negative predictive value of the upper GI swallow study in this group was 55%, 100%, and 53%, respectively. The demographics of the patients with strictures were similar to those of the study group as a whole. The results of our study have shown that a positive upper GI swallow study is 100% specific for the presence of stricture. However, the sensitivity and negative predictive value of upper GI swallow studies were poor, making this modality unsatisfactory in definitively excluding the diagnosis of gastrojejunal stricture.
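The reported figures can be checked from the 2 x 2 table implied by the abstract: 20 of 30 endoscopies confirmed stricture, and a 55% sensitive, 100% specific swallow study then implies roughly 11 true positives, 9 false negatives, 10 true negatives, and no false positives.

```python
# Reconstructed 2x2 table (approximate counts implied by the reported percentages).
tp, fn, tn, fp = 11, 9, 10, 0

sensitivity = tp / (tp + fn)          # 0.55
specificity = tn / (tn + fp)          # 1.00
npv = tn / (tn + fn)                  # ~0.53
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, NPV={npv:.2f}")
```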
Langeslag-Smith, Miriam A; Vandal, Alain C; Briane, Vincent; Thompson, Benjamin; Anstice, Nicola S
2015-01-01
Objectives: To assess the accuracy of preschool vision screening in a large, ethnically diverse, urban population in South Auckland, New Zealand. Design: Retrospective longitudinal study. Methods: B4 School Check vision screening records (n=5572) were compared with hospital eye department data for children referred from screening due to impaired acuity in one or both eyes who attended a referral appointment (n=556). False positive screens were identified by comparing screening data from the eyes that failed screening with hospital data. Estimation of false negative screening rates relied on data from eyes that passed screening. Data were analysed using logistic regression modelling accounting for the high correlation between results for the two eyes of each child. Primary outcome measure: Positive predictive value of the preschool vision screening programme. Results: Screening produced high numbers of false positive referrals, resulting in poor positive predictive value (PPV=31%, 95% CI 26% to 38%). High estimated negative predictive value (NPV=92%, 95% CI 88% to 95%) suggested most children with a vision disorder were identified at screening. Relaxing the referral criteria for acuity from worse than 6/9 to worse than 6/12 improved PPV without adversely affecting NPV. Conclusions: The B4 School Check generated numerous false positive referrals and consequently had a low PPV. There is scope for reducing costs by altering the visual acuity criterion for referral. PMID:26614622
Schaefferkoetter, Joshua D; Carlson, Eric R; Heidel, Robert E
2015-07-01
The present study investigated the performance of cellular metabolism imaging with 2-deoxy-2-((18)F) fluoro-D-glucose (FDG) versus cellular proliferation imaging with 3'-deoxy-3'-((18)F) fluorothymidine (FLT) in the detection of cervical lymph node metastases in oral/head and neck cancer. We conducted a prospective cohort study to assess a head-to-head performance of FLT imaging and clinical FDG imaging for characterizing cervical lymph node metastases in patients with squamous cell carcinoma (SCC) of the oral/head and neck region. The primary predictor variable of the study was the presence of FDG or FLT avidity within the cervical lymph nodes. The primary outcome variable was the histologic presence of metastatic SCC in the cervical lymph nodes. The performance was reported in terms of the sensitivity, specificity, accuracy, and positive and negative predictive values. The overall accuracy for discriminating positive from negative lymph nodes was evaluated as a function of the positron emission tomography (PET) standardized uptake value (SUV). Receiver operating characteristic (ROC) analyses were performed for both tracers. Eleven patients undergoing surgical resection of SCC of the oral/head and neck region underwent preoperative FDG and FLT PET-computed tomography (CT) scans on separate days. The interpretation of the FDG PET-CT imaging resulted in sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of 43.2, 99.5, 94.4, 88.9, and 94.7%, respectively. The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value for FLT PET-CT imaging was 75.7, 99.2, 97.1, 90.3, and 97.7%, respectively. The areas under the curve for the ROC curves were 0.9 and 0.84 for FDG and FLT, respectively. Poor correlation was observed between the SUV for FDG and FLT within the lymph nodes and tumors. FLT showed better overall performance for detecting lymphadenopathy on qualitative assessment within the total nodal population. This notwithstanding, FDG SUV performed better for pathologic discrimination within the visible lymph nodes. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Sutorius, Fleur L; Hoogendijk, Emiel O; Prins, Bernard A H; van Hout, Hein P J
2016-08-03
Many instruments have been developed to identify frail older adults in primary care. A direct comparison of the accuracy and prevalence of identification methods is rare, and most studies ignore the stepped selection typically employed in routine care practice. It is also unclear whether the various methods select persons with different characteristics. We aimed to estimate the accuracy of 10 single and stepped methods to identify frailty in older adults and to predict adverse health outcomes. In addition, the methods were compared on the prevalence of the frail persons identified and on the characteristics of the persons identified. The Groningen Frailty Indicator (GFI), the PRISMA-7, polypharmacy, the clinical judgment of the general practitioner (GP), the self-rated health of the older adult, the Edmonton Frail Scale (EFS), the Identification Seniors At Risk Primary Care (ISAR PC), the Frailty Index (FI), the InterRAI screener and gait speed were compared to three measures: two reference standards (the clinical judgment of a multidisciplinary expert panel and Fried's frailty criteria) and 6-year mortality or long-term care admission. Data were used from the Dutch Identification of Frail Elderly Study, consisting of 102 people aged 65 and over from a primary care practice in Amsterdam. Frail older adults were oversampled. The accuracy of each instrument and several stepped strategies was estimated by calculating the area under the ROC curve. Prevalence rates of frailty ranged from 14.8 to 52.9 %. The accuracy for recommended cut-off values ranged from poor (AUC = 0.556 ISAR-PC) to good (AUC = 0.865 gait speed). PRISMA-7 performed best over the two reference standards; GP judgment predicted adversities best. Stepped strategies resulted in lower prevalence rates and accuracy. Persons selected by the different instruments varied greatly in age, IADL dependency, receiving homecare, and mood. We found large differences between methods to identify frail persons in prevalence, in accuracy, and in the characteristics of the persons they select. A necessary next step is to find out which frail persons can benefit from intervention before case-finding programs are implemented. Further evidence is needed to guide this emerging clinical field.
A local space time kriging approach applied to a national outpatient malaria data set
NASA Astrophysics Data System (ADS)
Gething, P. W.; Atkinson, P. M.; Noor, A. M.; Gikandi, P. W.; Hay, S. I.; Nixon, M. S.
2007-10-01
Increases in the availability of reliable health data are widely recognised as essential for efforts to strengthen health-care systems in resource-poor settings worldwide. Effective health-system planning requires comprehensive and up-to-date information on a range of health metrics and this requirement is generally addressed by a Health Management Information System (HMIS) that coordinates the routine collection of data at individual health facilities and their compilation into national databases. In many resource-poor settings, these systems are inadequate and national databases often contain only a small proportion of the expected records. In this paper, we take an important health metric in Kenya (the proportion of outpatient treatments for malaria (MP)) from the national HMIS database and predict the values of MP at facilities where monthly records are missing. The available MP data were densely distributed across a spatiotemporal domain and displayed second-order heterogeneity. We used three different kriging methodologies to make cross-validation predictions of MP in order to test the effect on prediction accuracy of (a) the extension of a spatial-only to a space-time prediction approach, and (b) the replacement of a globally stationary with a locally varying random function model. Space-time kriging was found to produce predictions with 98.4% less mean bias and 14.8% smaller mean imprecision than conventional spatial-only kriging. A modification of space-time kriging that allowed space-time variograms to be recalculated for every prediction location within a spatially local neighbourhood resulted in a larger decrease in mean imprecision over ordinary kriging (18.3%) although the mean bias was reduced less (87.5%).
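The local space-time kriging used in the study is considerably more elaborate, but the spatial-only baseline it is compared against can be sketched as ordinary kriging with an assumed exponential variogram. The variogram parameters and the toy facility data below are illustrative, not fitted to the MP data set.

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rng=50.0):
    """Exponential variogram model (illustrative parameters)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(coords, values, targets):
    """Spatial-only ordinary kriging baseline; not the paper's local space-time variant."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    G = np.zeros((n + 1, n + 1))
    G[:n, :n] = exp_variogram(d)
    G[n, :n] = G[:n, n] = 1.0                         # unbiasedness constraint
    preds = []
    for t in targets:
        g = np.append(exp_variogram(np.linalg.norm(coords - t, axis=1)), 1.0)
        w = np.linalg.solve(G, g)
        preds.append(w[:n] @ values)
    return np.array(preds)

# Toy example: facility coordinates (km) with known MP values, one missing location.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
mp = np.array([0.30, 0.35, 0.25, 0.40])
print(ordinary_kriging(coords, mp, targets=np.array([[5.0, 5.0]])))
```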
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. Incorporation of spectral domain as explanatory feature spaces of classification accuracy interpolation was done for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
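One of the factor combinations described (spectral predictive domain, logistic interpolation function, all classes grouped) can be approximated by a logistic model of per-pixel correctness on spectral bands. The sketch below uses synthetic data and is an assumed simplification rather than the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
bands = rng.uniform(size=(2000, 6))                          # placeholder spectral features
p_correct = 1.0 / (1.0 + np.exp(-(3.0 * bands[:, 0] - 1.0)))
correct = (rng.uniform(size=2000) < p_correct).astype(int)   # 1 = pixel classified correctly

train, test = slice(0, 1500), slice(1500, None)
model = LogisticRegression().fit(bands[train], correct[train])
accuracy_surface = model.predict_proba(bands[test])[:, 1]    # per-pixel accuracy prediction
print("AUC:", round(roc_auc_score(correct[test], accuracy_surface), 3))
```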
Learning curve of office-based ultrasonography for rotator cuff tendons tears.
Ok, Ji-Hoon; Kim, Yang-Soo; Kim, Jung-Man; Yoo, Tae-Wook
2013-07-01
To compare the accuracy of ultrasonography and MR arthrography (MRA) imaging in detecting rotator cuff tears, with arthroscopic findings used as the reference standard. The ultrasonography and MRA findings of 51 shoulders that underwent arthroscopic surgery were prospectively analysed. Two orthopaedic doctors independently performed ultrasonography and interpreted the findings at the office. The tear size measured at ultrasonography and MRA was compared with the size measured at surgery using Pearson correlation coefficients (r). The sensitivity, specificity, accuracy, positive predictive value, negative predictive value and false-positive rate were calculated for a diagnosis of partial- and full-thickness rotator cuff tears. The kappa coefficient was calculated to verify the inter-observer agreement. The sensitivity of ultrasonography and MRA for detecting partial-thickness tears was 45.5 and 72.7 %, and that for full-thickness tears was 80.0 and 100 %, respectively. The accuracy of ultrasonography and MRA for detecting partial-thickness tears was 45.1 and 88.2 %, and that for full-thickness tears was 82.4 and 98 %, respectively. Tear size measured on ultrasonography examination showed a poor correlation with the size measured at arthroscopic surgery (r = 0.21; p < 0.05). However, tear size estimated by MRA showed a strong correlation (r = 0.75; p < 0.05). The kappa coefficient was 0.47 between the two independent examiners. The accuracy of office-based ultrasonography for beginner orthopaedic surgeons in detecting full-thickness rotator cuff tears was comparable to that of MRA, but it was less accurate for detecting partial-thickness tears and for tear size measurement. Inter-observer agreement on the interpretation was fair. These results highlight the importance of correct technique and experience in the operation of ultrasonography of the shoulder joint. Diagnostic study, Level II.
A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction
NASA Astrophysics Data System (ADS)
Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.
2017-03-01
There are two problems with the LS (Least Squares)+AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval but poor in the extrapolation interval; and the LS fitting residual sequence is non-linear, so it is unsuitable to establish an AR model for the residual sequence to be forecast based on the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, restrictions are added to the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values near the two endpoints are very close to the observations. Secondly, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the AR modeling object for the residual forecast. Calculation examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparisons with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirm the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for polar motion forecasts at 1-10 days, show that the forecast accuracy of the proposed model can reach the world level.
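For context, a plain LS+AR forecast (the baseline being improved upon, not the modified endpoint-constrained version proposed here) can be sketched as a least-squares fit of a trend plus annual and Chandler-like periodic terms followed by an AR model on the fitting residuals. The periods and AR order below are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def ls_ar_forecast(t, x, horizon, periods=(365.25, 432.0), ar_lags=20):
    """Plain LS (trend + periodic terms) + AR on residuals; illustrative periods in days."""
    def design(tt):
        cols = [np.ones_like(tt), tt]
        for p in periods:                              # annual and Chandler-like terms
            cols += [np.sin(2 * np.pi * tt / p), np.cos(2 * np.pi * tt / p)]
        return np.column_stack(cols)

    A = design(t)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    resid = x - A @ coef                               # LS fitting residuals

    resid_fc = AutoReg(resid, lags=ar_lags).fit().forecast(steps=horizon)
    t_fc = t[-1] + np.arange(1, horizon + 1) * (t[1] - t[0])
    return design(t_fc) @ coef + resid_fc

# Toy daily series over four years, forecast 10 days ahead.
t = np.arange(0.0, 4 * 365.0)
x = (0.1 + 0.001 * t + 0.2 * np.sin(2 * np.pi * t / 365.25)
     + 0.01 * np.random.default_rng(0).normal(size=t.size))
print(ls_ar_forecast(t, x, horizon=10)[:3])
```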
Sforza, Alfonso; Mancusi, Costantino; Carlino, Maria Viviana; Buonauro, Agostino; Barozzi, Marco; Romano, Giuseppe; Serra, Sossio; de Simone, Giovanni
2017-06-19
The availability of ultra-miniaturized pocket ultrasound devices (PUD) adds diagnostic power to the clinical examination. Information on accuracy of ultrasound with handheld units in immediate differential diagnosis in emergency department (ED) is poor. The aim of this study is to test the usefulness and accuracy of lung ultrasound (LUS) alone or combined with ultrasound of the heart and inferior vena cava (IVC) using a PUD for the differential diagnosis of acute dyspnea (AD). We included 68 patients presenting to the ED of "Maurizio Bufalini" Hospital in Cesena (Italy) for AD. All patients underwent integrated ultrasound examination (IUE) of lung-heart-IVC, using PUD. The series was divided into patients with dyspnea of cardiac or non-cardiac origin. We used 2 × 2 contingency tables to analyze sensitivity, specificity, positive predictive value and negative predictive value of the three ultrasonic methods and their various combinations for the diagnosis of cardiogenic dyspnea (CD), comparing with the final diagnosis made by an independent emergency physician. LUS alone exhibited a good sensitivity (92.6%) and specificity (80.5%). The highest accuracy (90%) for the diagnosis of CD was obtained with the combination of LUS and one of the other two methods (heart or IVC). The IUE with PUD is a useful extension of the clinical examination, can be readily available at the bedside or in ambulance, requires few minutes and has a reliable diagnostic discriminant ability in the setting of AD.
Prognostic factors of Bell's palsy: prospective patient collected observational study.
Fujiwara, Takashi; Hato, Naohito; Gyo, Kiyofumi; Yanagihara, Naoaki
2014-07-01
The purpose of this study was to evaluate various parameters potentially influencing poor prognosis in Bell's palsy and to assess their predictive value. A single-center prospective patient-collected observation and validation study was conducted. To evaluate the correlation between patient characteristics and poor prognosis, we performed univariate and multivariate analyses of age, gender, side of palsy, diabetes mellitus, hypertension, and facial grading score 1 week after onset. To evaluate the accuracy of the facial grading score, we prepared a receiver operating characteristic (ROC) curve and calculated the area under the ROC curve (AUROC). We also calculated sensitivity, specificity, positive/negative likelihood ratio, and positive/negative predictive value. We included Bell's palsy patients who attended Ehime University Hospital within 1 week after onset between 1977 and 2011. We excluded patients who were less than 15 years old or lost to follow-up within 6 months. The main outcome was defined as non-recovery at 6 months after onset. In total, 679 adults with Bell's palsy were included. The facial grading score at 1 week showed a correlation with non-recovery in the multivariate analysis, although age, gender, side of palsy, diabetes mellitus, and hypertension did not. The AUROC of the facial grading score was 0.793. The Y-system facial grading score at 1 week predicted non-recovery at 6 months with moderate accuracy.
Genomic Prediction of Gene Bank Wheat Landraces.
Crossa, José; Jarquín, Diego; Franco, Jorge; Pérez-Rodríguez, Paulino; Burgueño, Juan; Saint-Pierre, Carolina; Vikram, Prashant; Sansaloni, Carolina; Petroli, Cesar; Akdemir, Deniz; Sneller, Clay; Reynolds, Matthew; Tattaris, Maria; Payne, Thomas; Guzman, Carlos; Peña, Roberto J; Wenzl, Peter; Singh, Sukhwinder
2016-07-07
This study examines genomic prediction within 8416 Mexican landrace accessions and 2403 Iranian landrace accessions stored in gene banks. The Mexican and Iranian collections were evaluated in separate field trials, including an optimum environment for several traits, and in two separate environments (drought, D and heat, H) for the highly heritable traits, days to heading (DTH), and days to maturity (DTM). Analyses accounting and not accounting for population structure were performed. Genomic prediction models include genotype × environment interaction (G × E). Two alternative prediction strategies were studied: (1) random cross-validation of the data in 20% training (TRN) and 80% testing (TST) (TRN20-TST80) sets, and (2) two types of core sets, "diversity" and "prediction", including 10% and 20%, respectively, of the total collections. Accounting for population structure decreased prediction accuracy by 15-20% as compared to prediction accuracy obtained when not accounting for population structure. Accounting for population structure gave prediction accuracies for traits evaluated in one environment for TRN20-TST80 that ranged from 0.407 to 0.677 for Mexican landraces, and from 0.166 to 0.662 for Iranian landraces. Prediction accuracy of the 20% diversity core set was similar to accuracies obtained for TRN20-TST80, ranging from 0.412 to 0.654 for Mexican landraces, and from 0.182 to 0.647 for Iranian landraces. The predictive core set gave similar prediction accuracy as the diversity core set for Mexican collections, but slightly lower for Iranian collections. Prediction accuracy when incorporating G × E for DTH and DTM for Mexican landraces for TRN20-TST80 was around 0.60, which is greater than without the G × E term. For Iranian landraces, accuracies were 0.55 for the G × E model with TRN20-TST80. Results show promising prediction accuracies for potential use in germplasm enhancement and rapid introgression of exotic germplasm into elite materials. Copyright © 2016 Crossa et al.
How reliable and accurate is the AO/OTA comprehensive classification for adult long-bone fractures?
Meling, Terje; Harboe, Knut; Enoksen, Cathrine H; Aarflot, Morten; Arthursson, Astvaldur J; Søreide, Kjetil
2012-07-01
Reliable classification of fractures is important for treatment allocation and study comparisons. The overall accuracy of scoring applied to a general population of fractures is little known. This study aimed to investigate the accuracy and reliability of the comprehensive Arbeitsgemeinschaft für Osteosynthesefragen/Orthopedic Trauma Association classification for adult long-bone fractures and identify factors associated with poor coding agreement. Adults (>16 years) with long-bone fractures coded in a Fracture and Dislocation Registry at the Stavanger University Hospital during the fiscal year 2008 were included. An unblinded reference code dataset was generated for the overall accuracy assessment by two experienced orthopedic trauma surgeons. Blinded analysis of intrarater reliability was performed by rescoring and of interrater reliability by recoding of a randomly selected fracture sample. Proportion of agreement (PA) and kappa (κ) statistics are presented. Uni- and multivariate logistic regression analyses of factors predicting accuracy were performed. During the study period, 949 fractures were included and coded by 26 surgeons. For the intrarater analysis, overall agreements were κ = 0.67 (95% confidence interval [CI]: 0.64-0.70) and PA 69%. For interrater assessment, κ = 0.67 (95% CI: 0.62-0.72) and PA 69%. The accuracy of surgeons' blinded recoding was κ = 0.68 (95% CI: 0.65- 0.71) and PA 68%. Fracture type, frequency of the fracture, and segment fractured significantly influenced accuracy whereas the coder's experience did not. Both the reliability and accuracy of the comprehensive Arbeitsgemeinschaft für Osteosynthesefragen/Orthopedic Trauma Association classification for long-bone fractures ranged from substantial to excellent. Variations in coding accuracy seem to be related more to the fracture itself than the surgeon. Diagnostic study, level I.
Field comparison of several commercially available radon detectors.
Field, R W; Kross, B C
1990-01-01
To determine the accuracy and precision of commercially available radon detectors in a field setting, 15 detectors from six companies were exposed to radon and compared to a reference radon level. The detectors from companies that had already passed National Radon Measurement Proficiency Program testing had better precision and accuracy than those detectors awaiting proficiency testing. Charcoal adsorption detectors and diffusion barrier charcoal adsorption detectors performed very well, and the latter detectors displayed excellent time averaging ability. Alternatively, charcoal liquid scintillation detectors exhibited acceptable accuracy but poor precision, and bare alpha registration detectors showed both poor accuracy and precision. The mean radon level reported by the bare alpha registration detectors was 68 percent lower than the radon reference level. PMID:2368851
Palacio, Montse; Bonet-Carne, Elisenda; Cobo, Teresa; Perez-Moreno, Alvaro; Sabrià, Joan; Richter, Jute; Kacerovsky, Marian; Jacobsson, Bo; García-Posada, Raúl A; Bugatto, Fernando; Santisteve, Ramon; Vives, Àngels; Parra-Cordero, Mauro; Hernandez-Andrade, Edgar; Bartha, José Luis; Carretero-Lucena, Pilar; Tan, Kai Lit; Cruz-Martínez, Rogelio; Burke, Minke; Vavilala, Suseela; Iruretagoyena, Igor; Delgado, Juan Luis; Schenone, Mauro; Vilanova, Josep; Botet, Francesc; Yeo, George S H; Hyett, Jon; Deprest, Jan; Romero, Roberto; Gratacos, Eduard
2017-08-01
Prediction of neonatal respiratory morbidity may be useful to plan delivery in complicated pregnancies. The limited predictive performance of the current diagnostic tests together with the risks of an invasive procedure restricts the use of fetal lung maturity assessment. The objective of the study was to evaluate the performance of quantitative ultrasound texture analysis of the fetal lung (quantusFLM) to predict neonatal respiratory morbidity in preterm and early-term (<39.0 weeks) deliveries. This was a prospective multicenter study conducted in 20 centers worldwide. Fetal lung ultrasound images were obtained at 25.0-38.6 weeks of gestation within 48 hours of delivery, stored in Digital Imaging and Communication in Medicine format, and analyzed with quantusFLM. Physicians were blinded to the analysis. At delivery, perinatal outcomes and the occurrence of neonatal respiratory morbidity, defined as either respiratory distress syndrome or transient tachypnea of the newborn, were registered. The performance of the ultrasound texture analysis test to predict neonatal respiratory morbidity was evaluated. A total of 883 images were collected, but 17.3% were discarded because of poor image quality or exclusion criteria, leaving 730 observations for the final analysis. The prevalence of neonatal respiratory morbidity was 13.8% (101 of 730). The quantusFLM predicted neonatal respiratory morbidity with a sensitivity, specificity, positive and negative predictive values of 74.3% (75 of 101), 88.6% (557 of 629), 51.0% (75 of 147), and 95.5% (557 of 583), respectively. Accuracy was 86.5% (632 of 730) and positive and negative likelihood ratios were 6.5 and 0.3, respectively. The quantusFLM predicted neonatal respiratory morbidity with an accuracy similar to that previously reported for other tests with the advantage of being a noninvasive technique. Copyright © 2017. Published by Elsevier Inc.
The accuracy of Genomic Selection in Norwegian red cattle assessed by cross-validation.
Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E
2009-11-01
Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, accuracy of the prediction of genomewide breeding value (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by using cross-validation. The accuracies of the GW-EBV prediction were found to vary widely between 0.12 and 0.62. G-BLUP gave overall the highest accuracy. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait. GW-EBV prediction for production traits with high heritability achieved higher accuracy and also lower bias than health traits with low heritability. To achieve a similar accuracy for the health traits probably more records will be needed.
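A bare-bones G-BLUP-style cross-validation can be sketched with a VanRaden-type genomic relationship matrix and kernel ridge regression, taking the correlation between predicted and observed values in held-out folds as the accuracy. This is an assumed simplification, not the authors' implementation; `M` is a 0/1/2-coded marker matrix and the data below are synthetic.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import KFold

def gblup_cv_accuracy(M, y, alpha=1.0, folds=5, seed=0):
    """VanRaden-style GRM + kernel ridge; accuracy = CV correlation of prediction and phenotype."""
    p = M.mean(axis=0) / 2.0                           # allele frequencies from 0/1/2 coding
    Z = M - 2.0 * p
    G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))        # genomic relationship matrix
    accs = []
    for tr, te in KFold(folds, shuffle=True, random_state=seed).split(y):
        kr = KernelRidge(alpha=alpha, kernel="precomputed").fit(G[np.ix_(tr, tr)], y[tr])
        accs.append(np.corrcoef(kr.predict(G[np.ix_(te, tr)]), y[te])[0, 1])
    return float(np.mean(accs))

# Synthetic 0/1/2 marker matrix and a phenotype with a heritable component.
rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(300, 1000)).astype(float)
y = M[:, :25].sum(axis=1) + rng.normal(scale=3.0, size=300)
print(round(gblup_cv_accuracy(M, y), 2))
```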
De Novo Chromosome Structure Prediction
NASA Astrophysics Data System (ADS)
di Pierro, Michele; Cheng, Ryan R.; Lieberman-Aiden, Erez; Wolynes, Peter G.; Onuchic, José N.
Chromatin consists of DNA and hundreds of proteins that interact with the genetic material. In vivo, chromatin folds into nonrandom structures. The physical mechanism leading to these characteristic conformations, however, remains poorly understood. We recently introduced MiChroM, a model that generates chromosome conformations by using the idea that chromatin can be subdivided into types based on its biochemical interactions. Here we extend and complete our previous finding by showing that structural chromatin types can be inferred from ChIP-Seq data. Chromatin types, which are distinct from DNA sequence, are partially epigenetically controlled and change during cell differentiation, thus constituting a link between epigenetics, chromosomal organization, and cell development. We show that, for GM12878 lymphoblastoid cells we are able to predict accurate chromosome structures with the only input of genomic data. The degree of accuracy achieved by our prediction supports the viability of the proposed physical mechanism of chromatin folding and makes the computational model a powerful tool for future investigations.
Validation of the Danish version of the constipation risk assessment scale (CRAS).
Trads, Mette; Håkonson, Sasja J; Pedersen, Preben U
2017-11-01
The Constipation Risk Assessment Scale (CRAS) was developed to enable prediction of the risk of developing constipation. The scale needs validation in acute and elective patients with common disorders. Two hundred and six acute patients with hip fracture and 200 elective patients with total knee or hip replacement were included. They were assessed with the CRAS before surgery, and their defecation pattern, stool consistency and degree of straining were measured at admission and 30 days after surgery. The prevalence of constipation was 0.49 for the acute patients and 0.34 for the elective patients. Sensitivity was 0.67 and 0.57. Specificity was 0.54 and 0.52. The positive predictive value was 0.59 and 0.38, whereas the negative predictive value was 0.63 and 0.7. When used in an orthopaedic ward, the prognostic accuracy of the CRAS is poor and it cannot be recommended as a screening tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
Prediction of Happy-Sad mood from daily behaviors and previous sleep history.
Sano, Akane; Yu, Amy Z; McHill, Andrew W; Phillips, Andrew J K; Taylor, Sara; Jaques, Natasha; Klerman, Elizabeth B; Picard, Rosalind W
2015-01-01
We collected and analyzed subjective and objective data using surveys and wearable sensors worn day and night from 68 participants for ~30 days each, to address questions related to the relationships among sleep duration, sleep irregularity, self-reported Happy-Sad mood and other daily behavioral factors in college students. We analyzed this behavioral and physiological data to (i) identify factors that classified the participants into Happy-Sad mood using support vector machines (SVMs); and (ii) analyze how accurately sleep duration and sleep regularity for the past 1-5 days classified morning Happy-Sad mood. We found statistically significant associations amongst Sad mood and poor health-related factors. Behavioral factors including the frequency of negative social interactions, and negative emails, and total academic activity hours showed the best performance in separating the Happy-Sad mood groups. Sleep regularity and sleep duration predicted daily Happy-Sad mood with 65-80% accuracy. The number of nights giving the best prediction of Happy-Sad mood varied for different individuals.
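The SVM classification step can be illustrated with a minimal scikit-learn pipeline on placeholder features (for example, sleep duration and regularity over the previous nights); the feature construction here is assumed and is not the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                         # placeholder behavioral/sleep features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 0).astype(int)   # Happy (1) vs Sad (0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 2))
```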
EVALUATING RISK-PREDICTION MODELS USING DATA FROM ELECTRONIC HEALTH RECORDS.
Wang, L E; Shaw, Pamela A; Mathelier, Hansie M; Kimmel, Stephen E; French, Benjamin
2016-03-01
The availability of data from electronic health records facilitates the development and evaluation of risk-prediction models, but estimation of prediction accuracy could be limited by outcome misclassification, which can arise if events are not captured. We evaluate the robustness of prediction accuracy summaries, obtained from receiver operating characteristic curves and risk-reclassification methods, if events are not captured (i.e., "false negatives"). We derive estimators for sensitivity and specificity if misclassification is independent of marker values. In simulation studies, we quantify the potential for bias in prediction accuracy summaries if misclassification depends on marker values. We compare the accuracy of alternative prognostic models for 30-day all-cause hospital readmission among 4548 patients discharged from the University of Pennsylvania Health System with a primary diagnosis of heart failure. Simulation studies indicate that if misclassification depends on marker values, then the estimated accuracy improvement is also biased, but the direction of the bias depends on the direction of the association between markers and the probability of misclassification. In our application, 29% of the 1143 readmitted patients were readmitted to a hospital elsewhere in Pennsylvania, which reduced prediction accuracy. Outcome misclassification can result in erroneous conclusions regarding the accuracy of risk-prediction models.
Singing proficiency in the majority: normality and "phenotypes" of poor singing.
Dalla Bella, Simone; Berkowska, Magdalena
2009-07-01
Recent evidence indicates that the majority of occasional singers can carry a tune. For example, when asked to sing a well-known song (e.g., "Happy Birthday"), nonmusicians performing at a slow tempo are as proficient as professional singers. Yet, some occasional singers are poor singers, mostly in the pitch domain, and sometimes despite not having impoverished perception. Poor singing is not a monolithic deficit, but is likely to be characterized by a diversity of singing "phenotypes." Here we systematically examined singing proficiency in a group of occasional singers, with the goal of characterizing the different patterns of poor singing. Participants sang three well-known melodies (e.g., "Jingle Bells") at a natural tempo and at a slow tempo, as indicated by a metronome. For each rendition, we computed objective measures of pitch and time accuracy with an acoustical method. The results confirmed previous observations that the majority of occasional singers can sing in tune and in time. Moreover, singing at a slow tempo after the target melody to be imitated was presented with a metronome improved pitch and time accuracy. In general, poor singers were mostly impaired on the pitch dimension, although various patterns of impairment emerged. Pitch accuracy or time accuracy could be selectively impaired; moreover, absolute measures of singing proficiency (pitch or tempo transposition) dissociated from relative measures of proficiency (pitch intervals, relative duration). These patterns of dissociations point to a multicomponent system underlying proficient singing that fractionates as a result of a developmental anomaly.
Widmer, Mariana; Cuesta, Cristina; Khan, Khalid S; Conde-Agudelo, Agustin; Carroli, Guillermo; Fusey, Shalini; Karumanchi, S Ananth; Lapaire, Olav; Lumbiganon, Pisake; Sequeira, Evan; Zavaleta, Nelly; Frusca, Tiziana; Gülmezoglu, A Metin; Lindheimer, Marshall D
2015-10-01
To assess the accuracy of angiogenic biomarkers to predict pre-eclampsia. Prospective multicentre study. From 2006 to 2009, 5121 pregnant women with risk factors for pre-eclampsia (nulliparity, diabetes, previous pre-eclampsia, chronic hypertension) from Argentina, Colombia, Peru, India, Italy, Kenya, Switzerland and Thailand had their serum tested for sFlt-1, PlGF and sEng levels and their urine for PlGF levels at ⩽20, 23-27 and 32-35weeks' gestation (index tests, results blinded from carers). Women were monitored for signs of pre-eclampsia, diagnosed by systolic blood pressure ⩾140mmHg and/or diastolic blood pressure ⩾90mmHg, and proteinuria (protein/creatinine ratio ⩾0.3, protein ⩾1g/l, or one dipstick measurement ⩾2+) appearing after 20weeks' gestation. Early pre-eclampsia was defined when these signs appeared ⩽34weeks' gestation. Pre-eclampsia. Pre-eclampsia was diagnosed in 198 of 5121 women tested (3.9%) of whom 47 (0.9%) developed it early. The median maternal serum concentrations of index tests were significantly altered in women who subsequently developed pre-eclampsia than in those who did not. However, the area under receiver operating characteristics curve at ⩽20weeks' gestation were closer to 0.5 than to 1.0 for all biomarkers both for predicting any pre-eclampsia or at ⩽34weeks' gestation. The corresponding sensitivity, specificity and likelihood ratios were poor. Multivariable models combining sEng with clinical features slightly improved the prediction capability. Angiogenic biomarkers in first half of pregnancy do not perform well enough in predicting the later development of pre-eclampsia. Copyright © 2015. Published by Elsevier B.V.
Improved method for predicting protein fold patterns with ensemble classifiers.
Chen, W; Liu, X; Huang, Y; Jiang, Y; Zou, Q; Lin, C
2012-01-27
Protein folding is recognized as a critical problem in the field of biophysics in the 21st century. Predicting protein-folding patterns is challenging due to the complex structure of proteins. In an attempt to solve this problem, we employed ensemble classifiers to improve prediction accuracy. In our experiments, 188-dimensional features were extracted based on the composition and physical-chemical property of proteins and 20-dimensional features were selected using a coupled position-specific scoring matrix. Compared with traditional prediction methods, these methods were superior in terms of prediction accuracy. The 188-dimensional feature-based method achieved 71.2% accuracy in five cross-validations. The accuracy rose to 77% when we used a 20-dimensional feature vector. These methods were used on recent data, with 54.2% accuracy. Source codes and dataset, together with web server and software tools for prediction, are available at: http://datamining.xmu.edu.cn/main/~cwc/ProteinPredict.html.
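An ensemble classifier of the general kind described can be sketched as a soft-voting combination of base learners over placeholder 188-dimensional feature vectors. The specific base learners and synthetic data below are assumptions, not the authors' configuration or web server.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random(size=(400, 188))                       # placeholder 188-D composition features
y = rng.integers(0, 10, size=400)                     # placeholder fold-class labels

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=2000)),
    ],
    voting="soft",
)
print("5-fold CV accuracy:", round(cross_val_score(ensemble, X, y, cv=5).mean(), 2))
```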
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) by using different length observations. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of the static PPP using the new model of the 2-h prediction clock in N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
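The core idea, fitting a linear trend plus FFT-selected periodic terms to a window of recent clock corrections and extrapolating, can be sketched as follows. The number of periodic terms and the window handling are illustrative assumptions rather than the published algorithm.

```python
import numpy as np

def clock_forecast(t, x, horizon_steps, n_periods=2):
    """Linear trend + FFT-selected sinusoids fitted to a data window, then extrapolated."""
    dt = t[1] - t[0]
    trend = np.polyfit(t, x, 1)
    resid = x - np.polyval(trend, t)

    spec = np.fft.rfft(resid)
    freqs = np.fft.rfftfreq(len(resid), d=dt)
    top = np.argsort(np.abs(spec[1:]))[::-1][:n_periods] + 1   # dominant terms, skipping DC

    def design(tt):
        cols = [np.ones_like(tt), tt]
        for k in top:
            cols += [np.sin(2 * np.pi * freqs[k] * tt), np.cos(2 * np.pi * freqs[k] * tt)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(t), x, rcond=None)
    t_new = t[-1] + dt * np.arange(1, horizon_steps + 1)
    return design(t_new) @ coef

# Toy clock-correction series (hours) with a 12-hour periodic term, forecast 5 steps ahead.
t = np.arange(0.0, 48.0, 0.1)
x = (2.0 + 0.05 * t + 0.3 * np.sin(2 * np.pi * t / 12.0)
     + 0.01 * np.random.default_rng(0).normal(size=t.size))
print(clock_forecast(t, x, horizon_steps=5))
```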
Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models were fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
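The contrast between the linear (GBLUP-like) kernel and the Gaussian kernel can be illustrated with kernel ridge regression, using cross-validated correlation as the accuracy measure. The marker data, bandwidth choice, and ridge penalty below are placeholders, not the models fitted in the study.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import KFold

def kernel_cv_accuracy(K, y, alpha=1.0, folds=5, seed=0):
    """Cross-validated correlation between kernel-ridge predictions and phenotypes."""
    accs = []
    for tr, te in KFold(folds, shuffle=True, random_state=seed).split(y):
        kr = KernelRidge(alpha=alpha, kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
        accs.append(np.corrcoef(kr.predict(K[np.ix_(te, tr)]), y[te])[0, 1])
    return float(np.mean(accs))

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(300, 500)).astype(float)        # placeholder marker matrix
y = M[:, :20] @ rng.normal(size=20) + rng.normal(scale=2.0, size=300)

X = (M - M.mean(axis=0)) / (M.std(axis=0) + 1e-9)
GB = X @ X.T / X.shape[1]                                    # linear, GBLUP-like kernel
sq = (X ** 2).sum(axis=1)
D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T               # squared Euclidean distances
GK = np.exp(-D2 / np.median(D2))                             # Gaussian kernel, median bandwidth

print("GB accuracy:", round(kernel_cv_accuracy(GB, y), 2))
print("GK accuracy:", round(kernel_cv_accuracy(GK, y), 2))
```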
Forecasting biodiversity in breeding birds using best practices
Taylor, Shawn D.; White, Ethan P.
2018-01-01
Biodiversity forecasts are important for conservation, management, and evaluating how well current models characterize natural systems. While the number of forecasts for biodiversity is increasing, there is little information available on how well these forecasts work. Most biodiversity forecasts are not evaluated to determine how well they predict future diversity, fail to account for uncertainty, and do not use time-series data that capture the actual dynamics being studied. We addressed these limitations by using best practices to explore our ability to forecast the species richness of breeding birds in North America. We used hindcasting to evaluate six different modeling approaches for predicting richness. Hindcasts for each method were evaluated annually for a decade at 1,237 sites distributed throughout the continental United States. All models explained more than 50% of the variance in richness, but none of them consistently outperformed a baseline model that predicted constant richness at each site. The best practices implemented in this study directly influenced the forecasts and evaluations. Stacked species distribution models and “naive” forecasts produced poor estimates of uncertainty, and accounting for this resulted in these models dropping in relative performance compared with other models. Accounting for observer effects improved model performance overall, but also changed the rank ordering of models because it did not improve the accuracy of the “naive” model. Considering the forecast horizon revealed that prediction accuracy decreased across all models as the time horizon of the forecast increased. To facilitate the rapid improvement of biodiversity forecasts, we emphasize the value of specific best practices in making forecasts and evaluating forecasting methods. PMID:29441230
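A hindcasting evaluation of the kind described above can be sketched simply: hold out the most recent years at each site, forecast them from the earlier years, and compare the error by forecast horizon against a constant-richness baseline. The data layout and the mean-absolute-error metric are assumptions of this sketch, not the study's exact evaluation, which also scored the forecast uncertainty.

```python
import numpy as np

def hindcast_vs_baseline(richness_by_site, n_train, forecast):
    """Mean absolute error by forecast horizon for a model and for a
    constant-richness baseline (the mean of the training years).

    richness_by_site : dict {site_id: 1-D array of yearly richness}
    n_train          : number of years used for fitting (same at every site)
    forecast         : callable(train_series, horizon) -> predictions
    """
    model_err, base_err = [], []
    for series in richness_by_site.values():
        train, test = series[:n_train], series[n_train:]
        pred = forecast(train, len(test))
        model_err.append(np.abs(pred - test))
        base_err.append(np.abs(train.mean() - test))
    return (np.vstack(model_err).mean(axis=0),   # model MAE per horizon
            np.vstack(base_err).mean(axis=0))    # baseline MAE per horizon
```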
Modelling invasion for a habitat generalist and a specialist plant species
Evangelista, P.H.; Kumar, S.; Stohlgren, T.J.; Jarnevich, C.S.; Crall, A.W.; Norman, J. B.; Barnett, D.T.
2008-01-01
Predicting suitable habitat and the potential distribution of invasive species is a high priority for resource managers and systems ecologists. Most models are designed to identify habitat characteristics that define the ecological niche of a species, with little consideration of individual species' traits. We tested five commonly used modelling methods on two invasive plant species, the habitat generalist Bromus tectorum and habitat specialist Tamarix chinensis, to compare model performances, evaluate predictability, and relate results to distribution traits associated with each species. Most of the tested models performed similarly for each species; however, the generalist species proved to be more difficult to predict than the specialist species. The highest area under the receiver-operating characteristic curve values with independent validation data sets of B. tectorum and T. chinensis were 0.503 and 0.885, respectively. Similarly, a confusion matrix for B. tectorum had the highest overall accuracy of 55%, while the overall accuracy for T. chinensis was 85%. Models for the generalist species had varying performances, poor evaluations, and inconsistent results. This may be a result of a generalist's capability to persist in a wide range of environmental conditions that are not easily defined by the data, independent variables, or model design. Models for the specialist species had consistently strong performances, high evaluations, and similar results among different model applications. This is likely a consequence of the specialist's requirement for explicit environmental resources and ecological barriers that are easily defined by predictive models. Although defining new invaders as generalist or specialist species can be challenging, model performances and evaluations may provide valuable information on a species' potential invasiveness.
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Mai, Manuel; Wang, Kun; Huber, Greg; Kirby, Michael; Shattuck, Mark D; O'Hern, Corey S
2015-01-01
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e., the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
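The virtual-patient setup can be sketched with a toy model: a one-variable bistable ODE whose parameters are drawn with a chosen coefficient of variation, early time points used as classifier inputs, and the long-time steady state used as the outcome label. The specific ODE, the parameter values, and the use of a plain binary logistic regression (rather than one-versus-all over several outcomes) are simplifying assumptions of the sketch.

```python
import numpy as np
from scipy.integrate import odeint
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def pathogen_ode(P, t, r, a):
    # Toy bistable dynamics: pathogen load is either cleared (P -> 0)
    # or grows to a chronic state (P -> 1), depending on parameters and P0
    return r * P * (1 - P) * (P - a)

def virtual_patients(n=500, v=0.2, t_obs=2.0):
    """Simulate n 'virtual patients' whose parameters have coefficient of
    variation v; return early-time trajectories and steady-state outcomes."""
    t_early, t_full = np.linspace(0, t_obs, 10), np.linspace(0, 50, 200)
    X, y = [], []
    for _ in range(n):
        r = rng.normal(1.0, v * 1.0)          # growth-rate parameter
        a = rng.normal(0.5, v * 0.5)          # clearance threshold
        P0 = rng.uniform(0.05, 0.95)          # initial pathogen load
        X.append(odeint(pathogen_ode, P0, t_early, args=(r, a)).ravel())
        y.append(int(odeint(pathogen_ode, P0, t_full, args=(r, a))[-1, 0] > 0.5))
    return np.array(X), np.array(y)

X, y = virtual_patients(v=0.2)
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated outcome prediction accuracy: {acc:.2f}")
```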
2014-01-01
Introduction Prolonged ventilation and failed extubation are associated with increased harm and cost. The added value of heart and respiratory rate variability (HRV and RRV) during spontaneous breathing trials (SBTs) to predict extubation failure remains unknown. Methods We enrolled 721 patients in a multicenter (12 sites), prospective, observational study, evaluating clinical estimates of risk of extubation failure, physiologic measures recorded during SBTs, HRV and RRV recorded before and during the last SBT prior to extubation, and extubation outcomes. We excluded 287 patients because of protocol or technical violations, or poor data quality. Measures of variability (97 HRV, 82 RRV) were calculated from electrocardiogram and capnography waveforms followed by automated cleaning and variability analysis using Continuous Individualized Multiorgan Variability Analysis (CIMVA™) software. Repeated randomized subsampling with training, validation, and testing was used to derive and compare predictive models. Results Of 434 patients with high-quality data, 51 (12%) failed extubation. Two HRV and eight RRV measures showed statistically significant association with extubation failure (P <0.0041, 5% false discovery rate). An ensemble average of five univariate logistic regression models using RRV during SBT, yielding a probability of extubation failure (called WAVE score), demonstrated optimal predictive capacity. With repeated random subsampling and testing, the model showed mean receiver operating characteristic area under the curve (ROC AUC) of 0.69, higher than heart rate (0.51), rapid shallow breathing index (RSBI; 0.61) and respiratory rate (0.63). After deriving a WAVE model based on all data, training-set performance demonstrated that the model increased its predictive power when applied to patients conventionally considered high risk: a WAVE score >0.5 in patients with RSBI >105 and perceived high risk of failure yielded a fold increase in risk of extubation failure of 3.0 (95% confidence interval (CI) 1.2 to 5.2) and 3.5 (95% CI 1.9 to 5.4), respectively. Conclusions Altered HRV and RRV (during the SBT prior to extubation) are significantly associated with extubation failure. A predictive model using RRV during the last SBT provided optimal accuracy of prediction in all patients, with improved accuracy when combined with clinical impression or RSBI. This model requires a validation cohort to evaluate accuracy and generalizability. Trial registration ClinicalTrials.gov NCT01237886. Registered 13 October 2010. PMID:24713049
Rana, Jamal S; Tabada, Grace H; Solomon, Matthew D; Lo, Joan C; Jaffe, Marc G; Sung, Sue Hee; Ballantyne, Christie M; Go, Alan S
2016-05-10
The accuracy of the 2013 American College of Cardiology/American Heart Association (ACC/AHA) Pooled Cohort Risk Equation for atherosclerotic cardiovascular disease (ASCVD) events in contemporary and ethnically diverse populations is not well understood. The goal of this study was to evaluate the accuracy of the 2013 ACC/AHA Pooled Cohort Risk Equation within a large, multiethnic population in clinical care. The target population for consideration of cholesterol-lowering therapy in a large, integrated health care delivery system population was identified in 2008 and followed up through 2013. The main analyses excluded those with known ASCVD, diabetes mellitus, low-density lipoprotein cholesterol levels <70 or ≥190 mg/dl, prior lipid-lowering therapy use, or incomplete 5-year follow-up. Patient characteristics were obtained from electronic medical records, and ASCVD events were ascertained by using validated algorithms for hospitalization databases and death certificates. We compared predicted versus observed 5-year ASCVD risk, overall and according to sex and race/ethnicity. We additionally examined predicted versus observed risk in patients with diabetes mellitus. Among 307,591 eligible adults without diabetes between 40 and 75 years of age, 22,283 were black, 52,917 were Asian/Pacific Islander, and 18,745 were Hispanic. We observed 2,061 ASCVD events during 1,515,142 person-years. In each 5-year predicted ASCVD risk category, observed 5-year ASCVD risk was substantially lower: 0.20% for predicted risk <2.50%; 0.65% for predicted risk 2.50% to <3.75%; 0.90% for predicted risk 3.75% to <5.00%; and 1.85% for predicted risk ≥5.00% (C statistic: 0.74). Similar ASCVD risk overestimation and poor calibration with moderate discrimination (C statistic: 0.68 to 0.74) were observed in sex, racial/ethnic, and socioeconomic status subgroups, and in sensitivity analyses among patients receiving statins for primary prevention. Calibration among 4,242 eligible adults with diabetes was improved, but discrimination was worse (C statistic: 0.64). In a large, contemporary "real-world" population, the ACC/AHA Pooled Cohort Risk Equation substantially overestimated actual 5-year risk in adults without diabetes, overall and across sociodemographic subgroups. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
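The style of evaluation reported above, comparing observed event rates with predicted risk within predicted-risk categories and summarizing discrimination with a C statistic, can be sketched as follows. The risk-category boundaries mirror those quoted in the abstract; the variable names and the use of roc_auc_score as the C statistic are assumptions of the sketch.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def calibration_table(pred_risk, event, bins=(0.0, 0.025, 0.0375, 0.05, 1.0)):
    """Observed vs. predicted 5-year event rates by predicted-risk category,
    plus the C statistic (area under the ROC curve) for discrimination."""
    pred_risk, event = np.asarray(pred_risk, float), np.asarray(event, int)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (pred_risk >= lo) & (pred_risk < hi)
        if in_bin.any():
            rows.append({"category": f"{lo:.2%} to {hi:.2%}",
                         "mean_predicted": pred_risk[in_bin].mean(),
                         "observed_rate": event[in_bin].mean(),
                         "n": int(in_bin.sum())})
    return rows, roc_auc_score(event, pred_risk)
```

Overestimation of the kind reported shows up as mean_predicted exceeding observed_rate in every category.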
Hu, Jun; Ji, Ming-liang; Qian, Bang-ping; Qiu, Yong; Wang, Bin; Yu, Yang; Zhu, Ze-Zhang; Jiang, Jun
2014-11-01
A retrospective radiographical study. To construct a predictive model for pelvic tilt (PT) based on the sacrofemoral-pubic (SFP) angle in patients with thoracolumbar kyphosis secondary to ankylosing spondylitis (AS). PT is a key pelvic parameter in the regulation of spine sagittal alignment that can be used to plan the appropriate osteotomy angle in patients with AS with thoracolumbar kyphosis. However, it can be difficult to measure PT in patients whose femoral heads are poorly visualized on lateral radiographs. Previous studies showed that the SFP angle could be used to evaluate PT in adult patients with scoliosis. However, this method has not been validated in patients with AS. A total of 115 patients with AS with thoracolumbar kyphosis were included. Full-length anteroposterior and lateral spine radiographs were all available, with spinal and pelvic anatomical landmarks clearly identified. PT, SFP angle, and global kyphosis were measured. The patients were randomly divided into group A (n=65) and group B (n=50). In group A, the predictive model for PT was constructed from the results of the linear regression analysis. In group B, the predictive ability and accuracy of the model were investigated. In group A, the Pearson correlation analysis revealed a strong correlation between the SFP angle and PT (r=0.852; P<0.001). The predictive model for PT was constructed as PT=72.3-0.82×(SFP angle). In group B, PT was predicted by the model with a mean error of 4.6° (SD=4.5°) and a predictive value of 78%. PT can be accurately predicted from the SFP angle using the current model, PT=72.3-0.82×(SFP angle), when the femoral heads are poorly visualized on lateral radiographs in patients with AS with thoracolumbar kyphosis. Level of Evidence: 4.
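The reported regression can be applied directly; the short sketch below simply evaluates the published equation, with an example input that is purely illustrative.

```python
def predict_pelvic_tilt(sfp_angle_deg):
    """Pelvic tilt (degrees) from the sacrofemoral-pubic angle, using the
    regression reported above: PT = 72.3 - 0.82 x SFP angle."""
    return 72.3 - 0.82 * sfp_angle_deg

# Example: an SFP angle of 60 deg gives a predicted PT of about 23.1 deg;
# the reported mean prediction error in the validation group was 4.6 deg.
print(predict_pelvic_tilt(60.0))
```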
Chen, L; Schenkel, F; Vinsky, M; Crews, D H; Li, C
2013-10-01
In beef cattle, phenotypic data that are difficult and/or costly to measure, such as feed efficiency, and DNA marker genotypes are usually available on a small number of animals of different breeds or populations. To achieve a maximal accuracy of genomic prediction using the phenotype and genotype data, strategies for forming a training population to predict genomic breeding values (GEBV) of the selection candidates need to be evaluated. In this study, we examined the accuracy of predicting GEBV for residual feed intake (RFI) based on 522 Angus and 395 Charolais steers genotyped on SNP with the Illumina Bovine SNP50 Beadchip for 3 training population forming strategies: within breed, across breed, and by pooling data from the 2 breeds (i.e., combined). Two other scenarios with the training and validation data split by birth year and by sire family within a breed were also investigated to assess the impact of genetic relationships on the accuracy of genomic prediction. Three statistical methods including the best linear unbiased prediction with the relationship matrix defined based on the pedigree (PBLUP), based on the SNP genotypes (GBLUP), and a Bayesian method (BayesB) were used to predict the GEBV. The results showed that the accuracy of the GEBV prediction was the highest when the prediction was within breed and when the validation population had greater genetic relationships with the training population, with a maximum of 0.58 for Angus and 0.64 for Charolais. The within-breed prediction accuracies dropped to 0.29 and 0.38, respectively, when the validation populations had a minimal pedigree link with the training population. When the training population of a different breed was used to predict the GEBV of the validation population, that is, across-breed genomic prediction, the accuracies were further reduced to 0.10 to 0.22, depending on the prediction method used. Pooling data from the 2 breeds to form the training population resulted in accuracies increased to 0.31 and 0.43, respectively, for the Angus and Charolais validation populations. The results suggested that the genetic relationship of selection candidates with the training population has a greater impact on the accuracy of GEBV using the Illumina Bovine SNP50 Beadchip. Pooling data from different breeds to form the training population will improve the accuracy of across breed genomic prediction for RFI in beef cattle.
Konishi, Tsuyoshi; Shimada, Yoshifumi; Lee, Lik Hang; Cavalcanti, Marcela S; Hsu, Meier; Smith, Jesse Joshua; Nash, Garrett M; Temple, Larissa K; Guillem, José G; Paty, Philip B; Garcia-Aguilar, Julio; Vakiani, Efsevia; Gonen, Mithat; Shia, Jinru; Weiser, Martin R
2018-06-01
This study aimed to compare common histologic markers at the invasive front of colon adenocarcinoma in terms of prognostic accuracy and interobserver agreement. Consecutive patients who underwent curative resection for stages I to III colon adenocarcinoma at a single institution in 2007 to 2014 were identified. Poorly differentiated clusters (PDCs), tumor budding, perineural invasion, desmoplastic reaction, and Crohn-like lymphoid reaction at the invasive front, as well as the World Health Organization (WHO) grade of the entire tumor, were analyzed. Prognostic accuracies for recurrence-free survival (RFS) were compared, and interobserver agreement among 3 pathologists was assessed. The study cohort consisted of 851 patients. Although all the histologic markers except WHO grade were significantly associated with RFS (PDCs, tumor budding, perineural invasion, and desmoplastic reaction: P<0.001; Crohn-like lymphoid reaction: P=0.021), PDCs (grade 1 [G1]: n=581; G2: n=145; G3: n=125) showed the largest separation of 3-year RFS in the full cohort (G1: 94.1%; G3: 63.7%; hazard ratio [HR], 6.39; 95% confidence interval [CI], 4.11-9.95; P<0.001), stage II patients (G1: 94.0%; G3: 67.3%; HR, 4.15; 95% CI, 1.96-8.82; P<0.001), and stage III patients (G1: 89.0%; G3: 59.4%; HR, 4.50; 95% CI, 2.41-8.41; P<0.001). PDCs had the highest prognostic accuracy for RFS with the concordance probability estimate of 0.642, whereas WHO grade had the lowest. Interobserver agreement was the highest for PDCs, with a weighted kappa of 0.824. The risk of recurrence over time peaked earlier for worse PDCs grade. Our findings indicate that PDCs are the best invasive-front histologic marker in terms of prognostic accuracy and interobserver agreement. PDCs may replace WHO grade as a prognostic indicator.
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks show large variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. The network depth is determined automatically from the reconstruction error, and unsupervised pre-training is combined with supervised fine-tuning, which ensures prediction accuracy while maintaining stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI repository. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
Prabhu, Roshan S; Press, Robert H; Boselli, Danielle M; Miller, Katherine R; Lankford, Scott P; McCammon, Robert J; Moeller, Benjamin J; Heinzerling, John H; Fasola, Carolina E; Patel, Kirtesh R; Asher, Anthony L; Sumrall, Ashley L; Curran, Walter J; Shu, Hui-Kuo G; Burri, Stuart H
2018-03-01
Patients treated with stereotactic radiosurgery (SRS) for brain metastases (BM) are at increased risk of distant brain failure (DBF). Two nomograms have been recently published to predict individualized risk of DBF after SRS. The goal of this study was to assess the external validity of these nomograms in an independent patient cohort. The records of consecutive patients with BM treated with SRS at Levine Cancer Institute and Emory University between 2005 and 2013 were reviewed. Three validation cohorts were generated based on the specific nomogram or recursive partitioning analysis (RPA) entry criteria: Wake Forest nomogram (n = 281), Canadian nomogram (n = 282), and Canadian RPA (n = 303) validation cohorts. Freedom from DBF at 1-year in the Wake Forest study was 30% compared with 50% in the validation cohort. The validation c-index for both the 6-month and 9-month freedom from DBF Wake Forest nomograms was 0.55, indicating poor discrimination ability, and the goodness-of-fit test for both nomograms was highly significant (p < 0.001), indicating poor calibration. The 1-year actuarial DBF in the Canadian nomogram study was 43.9% compared with 50.9% in the validation cohort. The validation c-index for the Canadian 1-year DBF nomogram was 0.56, and the goodness-of-fit test was also highly significant (p < 0.001). The validation accuracy and c-index of the Canadian RPA classification was 53% and 0.61, respectively. The Wake Forest and Canadian nomograms for predicting risk of DBF after SRS were found to have limited predictive ability in an independent bi-institutional validation cohort. These results reinforce the importance of validating predictive models in independent patient cohorts.
Leptospirosis in American Samoa – Estimating and Mapping Risk Using Environmental Data
Lau, Colleen L.; Clements, Archie C. A.; Skelly, Chris; Dobson, Annette J.; Smythe, Lee D.; Weinstein, Philip
2012-01-01
Background The recent emergence of leptospirosis has been linked to many environmental drivers of disease transmission. Accurate epidemiological data are lacking because of under-diagnosis, poor laboratory capacity, and inadequate surveillance. Predictive risk maps have been produced for many diseases to identify high-risk areas for infection and guide allocation of public health resources, and are particularly useful where disease surveillance is poor. To date, no predictive risk maps have been produced for leptospirosis. The objectives of this study were to estimate leptospirosis seroprevalence at geographic locations based on environmental factors, produce a predictive disease risk map for American Samoa, and assess the accuracy of the maps in predicting infection risk. Methodology and Principal Findings Data on seroprevalence and risk factors were obtained from a recent study of leptospirosis in American Samoa. Data on environmental variables were obtained from local sources, and included rainfall, altitude, vegetation, soil type, and location of backyard piggeries. Multivariable logistic regression was performed to investigate associations between seropositivity and risk factors. Using the multivariable models, seroprevalence at geographic locations was predicted based on environmental variables. Goodness of fit of models was measured using area under the curve of the receiver operating characteristic, and the percentage of cases correctly classified as seropositive. Environmental predictors of seroprevalence included living below median altitude of a village, in agricultural areas, on clay soil, and higher density of piggeries above the house. Models had acceptable goodness of fit, and correctly classified ∼84% of cases. Conclusions and Significance Environmental variables could be used to identify high-risk areas for leptospirosis. Environmental monitoring could potentially be a valuable strategy for leptospirosis control, and allow us to move from disease surveillance to environmental health hazard surveillance as a more cost-effective tool for directing public health interventions. PMID:22666516
The accuracy of new wheelchair users' predictions about their future wheelchair use.
Hoenig, Helen; Griffiths, Patricia; Ganesh, Shanti; Caves, Kevin; Harris, Frances
2012-06-01
This study examined the accuracy of new wheelchair users' predictions about their future wheelchair use. This was a prospective cohort study of 84 community-dwelling veterans provided with a new manual wheelchair. The association between predicted and actual wheelchair use was strong at 3 mos (ϕ coefficient = 0.56), with 90% of those who anticipated using the wheelchair at 3 mos still using it (i.e., positive predictive value = 0.96) and 60% of those who anticipated not using it indeed no longer using the wheelchair (i.e., negative predictive value = 0.60, overall accuracy = 0.92). Predictive accuracy diminished over time, with overall accuracy declining from 0.92 at 3 mos to 0.66 at 6 mos. At all time points, and for all types of use, patients better predicted use as opposed to disuse, with correspondingly higher positive than negative predictive values. Accuracy of prediction of use in specific indoor and outdoor locations varied according to location. This study demonstrates the importance of better understanding the potential mismatch between the anticipated and actual patterns of wheelchair use. The findings suggest that users can be relied upon to accurately predict their basic wheelchair-related needs in the short term. Further exploration is needed to identify characteristics that will aid users and their providers in more accurately predicting mobility needs for the long term.
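The agreement statistics reported above (positive and negative predictive value, overall accuracy, and the phi coefficient) all derive from a 2x2 table of anticipated versus actual use; a minimal sketch, with hypothetical binary input arrays, is shown below.

```python
import numpy as np

def agreement_stats(anticipated_use, actual_use):
    """PPV, NPV, overall accuracy, and phi coefficient from binary arrays
    of anticipated (predicted) and actual wheelchair use (1 = use, 0 = no use)."""
    p, a = np.asarray(anticipated_use), np.asarray(actual_use)
    tp = np.sum((p == 1) & (a == 1))
    tn = np.sum((p == 0) & (a == 0))
    fp = np.sum((p == 1) & (a == 0))
    fn = np.sum((p == 0) & (a == 1))
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / p.size
    phi = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return ppv, npv, accuracy, phi
```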
Predicting risk and outcomes for frail older adults: an umbrella review of frailty screening tools
Apóstolo, João; Cooke, Richard; Bobrowicz-Campos, Elzbieta; Santana, Silvina; Marcucci, Maura; Cano, Antonio; Vollenbroek-Hutten, Miriam; Germini, Federico; Holland, Carol
2017-01-01
EXECUTIVE SUMMARY Background A scoping search identified systematic reviews on diagnostic accuracy and predictive ability of frailty measures in older adults. In most cases, research was confined to specific assessment measures related to a specific clinical model. Objectives To summarize the best available evidence from systematic reviews in relation to reliability, validity, diagnostic accuracy and predictive ability of frailty measures in older adults. Inclusion criteria Population Older adults aged 60 years or older recruited from community, primary care, long-term residential care and hospitals. Index test Available frailty measures in older adults. Reference test Cardiovascular Health Study phenotype model, the Canadian Study of Health and Aging cumulative deficit model, Comprehensive Geriatric Assessment or other reference tests. Diagnosis of interest Frailty defined as an age-related state of decreased physiological reserves characterized by an increased risk of poor clinical outcomes. Types of studies Quantitative systematic reviews. Search strategy A three-step search strategy was utilized to find systematic reviews, available in English, published between January 2001 and October 2015. Methodological quality Assessed by two independent reviewers using the Joanna Briggs Institute critical appraisal checklist for systematic reviews and research synthesis. Data extraction Two independent reviewers extracted data using the standardized data extraction tool designed for umbrella reviews. Data synthesis Data were only presented in a narrative form due to the heterogeneity of included reviews. Results Five reviews with a total of 227,381 participants were included in this umbrella review. Two reviews focused on reliability, validity and diagnostic accuracy; two examined predictive ability for adverse health outcomes; and one investigated validity, diagnostic accuracy and predictive ability. In total, 26 questionnaires and brief assessments and eight frailty indicators were analyzed, most of which were applied to community-dwelling older people. The Frailty Index was examined in almost all these dimensions, with the exception of reliability, and its diagnostic and predictive characteristics were shown to be satisfactory. Gait speed showed high sensitivity, but only moderate specificity, and excellent predictive ability for future disability in activities of daily living. The Tilburg Frailty Indicator was shown to be a reliable and valid measure for frailty screening, but its diagnostic accuracy was not evaluated. The Screening Letter, Timed-up-and-go test and PRISMA-7 (Program of Research to Integrate the Services for the Maintenance of Autonomy) demonstrated high sensitivity and moderate specificity for identifying frailty. In general, low physical activity, variously measured, was one of the most powerful predictors of future decline in activities of daily living. Conclusion Only a few frailty measures seem to be demonstrably valid, reliable and diagnostically accurate, and have good predictive ability. Among them, the Frailty Index and gait speed emerged as the most useful in routine care and community settings. However, none of the included systematic reviews provided responses that met all of our research questions on their own and there is a need for studies that could fill this gap, covering all these issues within the same study. Nevertheless, it was clear that no suitable tool for assessing frailty appropriately in emergency departments was identified. PMID:28398987
Wang, Wei-Qing; Cheng, Hong-Yan; Song, Song-Quan
2013-01-01
Effects of temperature, storage time and their combination on germination of aspen (Populus tomentosa) seeds were investigated. Aspen seeds were germinated at 5 to 30°C at 5°C intervals after storage for a period of time under 28°C and 75% relative humidity. The effect of temperature on aspen seed germination could not be effectively described by the thermal time (TT) model, which underestimated the germination rate at 5°C and poorly predicted the time courses of germination at 10, 20, 25 and 30°C. A modified TT model (MTT), which assumed a two-phased linear relationship between germination rate and temperature, was more accurate in predicting the germination rate and percentage and had a higher likelihood of being correct than the TT model. The maximum lifetime threshold (MLT) model accurately described the effect of storage time on seed germination across all the germination temperatures. An aging thermal time (ATT) model combining both the TT and MLT models was developed to describe the effect of both temperature and storage time on seed germination. When the ATT model was applied to germination data across all the temperatures and storage times, it produced a relatively poor fit. Adjusting the ATT model to separately fit germination data at low and high temperatures in the suboptimal range increased the model's accuracy for predicting seed germination. Both the MLT and ATT models indicate that germination of aspen seeds has distinct physiological responses to temperature within a suboptimal range. PMID:23658654
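The basic thermal time model that the study starts from assumes the germination rate is linear in temperature above a base temperature, i.e. 1/t(g) = (T - Tb)/theta(g) in the suboptimal range. A minimal fitting sketch with purely illustrative data is shown below; the study's modified and aging models add a second linear phase and a storage-time term that are omitted here.

```python
import numpy as np

def fit_thermal_time(temps, t50):
    """Fit the suboptimal-range thermal time (TT) model for the median
    germination fraction: 1 / t50 = (T - Tb) / theta, i.e. germination
    rate linear in temperature above a base temperature Tb."""
    gr = 1.0 / np.asarray(t50)                # germination rate (1/days)
    slope, intercept = np.polyfit(temps, gr, 1)
    theta = 1.0 / slope                       # thermal time constant (deg C * days)
    tb = -intercept / slope                   # base temperature (deg C)
    return tb, theta

# Illustrative data only: time to 50% germination (days) at each temperature (deg C)
temps = np.array([10, 15, 20, 25, 30])
t50 = np.array([8.0, 5.0, 3.6, 2.9, 2.4])
print(fit_thermal_time(temps, t50))
```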
A known-groups evaluation of the response bias scale in a neuropsychological setting.
Sullivan, Karen A; Elliott, Cameron D; Lange, Rael T; Anderson, Deborah S
2013-01-01
We evaluated the Minnesota Multiphasic Personality Inventory-Second Edition (MMPI-2) Response Bias Scale (RBS). Archival data from 83 individuals who were referred for neuropsychological assessment with no formal diagnosis (n = 10), following a known or suspected traumatic brain injury (n = 36), with a psychiatric diagnosis (n = 20), or with a history of both trauma and a psychiatric condition (n = 17) were retrieved. The criteria for malingered neurocognitive dysfunction (MNCD) were applied, and two groups of participants were formed: poor effort (n = 15) and genuine responders (n = 68). Consistent with previous studies, the difference in scores between groups was greatest for the RBS (d = 2.44), followed by two established MMPI-2 validity scales, F (d = 0.25) and K (d = 0.23), and strong significant correlations were found between RBS and F (rs = .48) and RBS and K (r = -.41). When MNCD group membership was predicted using logistic regression, the RBS failed to add incrementally to F. In a separate regression to predict group membership, K added significantly to the RBS. Receiver-operating curve analysis revealed a nonsignificant area under the curve statistic, and at the ideal cutoff in this sample of >12, specificity was moderate (.79), sensitivity was low (.47), and positive and negative predictive power values at a 13% base rate were .25 and .91, respectively. Although the results of this study require replication because of a number of limitations, this study has made an important first attempt to report RBS classification accuracy statistics for predicting poor effort at a range of base rates.
EffectorP: predicting fungal effector proteins from secretomes using machine learning.
Sperschneider, Jana; Gardiner, Donald M; Dodds, Peter N; Tini, Francesco; Covarelli, Lorenzo; Singh, Karam B; Manners, John M; Taylor, Jennifer M
2016-04-01
Eukaryotic filamentous plant pathogens secrete effector proteins that modulate the host cell to facilitate infection. Computational effector candidate identification and subsequent functional characterization delivers valuable insights into plant-pathogen interactions. However, effector prediction in fungi has been challenging due to a lack of unifying sequence features such as conserved N-terminal sequence motifs. Fungal effectors are commonly predicted from secretomes based on criteria such as small size and cysteine richness, an approach that suffers from poor accuracy. We present EffectorP, which pioneers the application of machine learning to fungal effector prediction. EffectorP improves fungal effector prediction from secretomes based on a robust signal of sequence-derived properties, achieving sensitivity and specificity of over 80%. Features that discriminate fungal effectors from secreted noneffectors are predominantly sequence length, molecular weight and protein net charge, as well as cysteine, serine and tryptophan content. We demonstrate that EffectorP is powerful when combined with in planta expression data for predicting high-priority effector candidates. EffectorP is the first prediction program for fungal effectors based on machine learning. Our findings will facilitate functional fungal effector studies and improve our understanding of effectors in plant-pathogen interactions. EffectorP is available at http://effectorp.csiro.au. © 2015 CSIRO New Phytologist © 2015 New Phytologist Trust.
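The discriminative features named above (length, molecular weight, net charge, and cysteine/serine/tryptophan content) are straightforward to compute from a protein sequence; the sketch below pairs them with a generic scikit-learn classifier. The residue-mass table, the crude net-charge count, the hypothetical 'secretome' input, and the choice of a random forest are illustrative assumptions; EffectorP itself uses its own trained model, not this code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Average residue masses (Da); a rough lookup table for illustration only
MASS = {"A": 71.08, "R": 156.19, "N": 114.10, "D": 115.09, "C": 103.14,
        "E": 129.12, "Q": 128.13, "G": 57.05, "H": 137.14, "I": 113.16,
        "L": 113.16, "K": 128.17, "M": 131.19, "F": 147.18, "P": 97.12,
        "S": 87.08, "T": 101.10, "W": 186.21, "Y": 163.18, "V": 99.13}

def sequence_features(seq):
    """Simple sequence-derived properties of the kind EffectorP relies on:
    length, approximate molecular weight, crude net charge, and C/S/W content."""
    seq = seq.upper()
    n = len(seq)
    mw = sum(MASS.get(aa, 110.0) for aa in seq) + 18.02
    charge = (seq.count("K") + seq.count("R")) - (seq.count("D") + seq.count("E"))
    return [n, mw, charge,
            seq.count("C") / n, seq.count("S") / n, seq.count("W") / n]

# Hypothetical training call: 'secretome' is a list of (sequence, is_effector) pairs
# X = np.array([sequence_features(s) for s, _ in secretome])
# y = np.array([label for _, label in secretome])
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```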
New smoke predictions for Alaska in NOAA’s National Air Quality Forecast Capability
NASA Astrophysics Data System (ADS)
Davidson, P. M.; Ruminski, M.; Draxler, R.; Kondragunta, S.; Zeng, J.; Rolph, G.; Stajner, I.; Manikin, G.
2009-12-01
Smoke from wildfire is an important component of fine particle pollution, which is responsible for tens of thousands of premature deaths each year in the US. In Alaska, wildfire smoke is the leading cause of poor air quality in summer. Smoke forecast guidance helps air quality forecasters and the public take steps to limit exposure to airborne particulate matter. A new smoke forecast guidance tool, built by a cross-NOAA team, leverages efforts of NOAA’s partners at the USFS on wildfire emissions information, and with EPA, in coordinating with state/local air quality forecasters. Required operational deployment criteria, in categories of objective verification, subjective feedback, and production readiness, have been demonstrated in experimental testing during 2008-2009, for addition to the operational products in NOAA's National Air Quality Forecast Capability. The Alaska smoke forecast tool is an adaptation of NOAA’s smoke predictions implemented operationally for the lower 48 states (CONUS) in 2007. The tool integrates satellite information on the location of wildfires with weather (North American mesoscale model) and smoke dispersion (HYSPLIT) models to produce daily predictions of smoke transport for Alaska, in binary and graphical formats. Hour-by-hour predictions at 12-km grid resolution of smoke at the surface and in the column are provided each day by 13 UTC, extending through midnight the next day. Forecast accuracy and reliability are monitored against benchmark criteria. While wildfire activity in the CONUS is year-round, intense wildfire activity in Alaska is limited to the summer. Initial experimental testing during summer 2008 was hindered by unusually limited wildfire activity and very cloudy conditions. In contrast, heavier than average wildfire activity during summer 2009 provided a representative basis (more than 60 days of wildfire smoke) for demonstrating required prediction accuracy. A new satellite observation product was developed for routine near-real-time verification of these predictions. The footprint of the predicted smoke from identified fires is verified with satellite observations of the spatial extent of smoke aerosols (5-km resolution). Based on geostationary aerosol optical depth measurements that provide good time resolution of the horizontal spatial extent of the plumes, these observations do not yield quantitative concentrations of smoke particles at the surface. Predicted surface smoke concentrations are consistent with the limited number of in situ observations of total fine particle mass from all sources; however, they are much higher than those predicted for most CONUS fires. To assess uncertainty associated with fire emissions estimates, sensitivity analyses are in progress.
Miller, Jena L; Block-Abraham, Dana M; Blakemore, Karin J; Baschat, Ahmet A
2018-06-06
The insertion site of the fetoscope for laser occlusion (FLOC) treatment of twin-twin transfusion syndrome (TTTS) determines the likelihood of treatment success. We assessed a standardized preoperative ultrasound approach for its ability to identify critical landmarks for successful FLOC. Three surgeons independently performed preoperative ultrasound and deduced the likely orientation of the intertwin membrane (ITM) and vascular equator (VE) based on the sites of the cord insertion, the lie of the donor, and the size discordance between twins. At FLOC, these landmarks were visually verified and compared to preoperative assessments. Fifty consecutive FLOC surgeries had 127 preoperative assessments. Basic ITM and VE orientation were accurately predicted in 115 (90.6%), 109 (85.8%), and 105 (82.7%) assessments. Predictions were anatomically correct in 96 (75.6%), 70 (55.1%), and 58 (45.7%) assessments with no differences in accuracy between operators of different training level. The ITM/VE relationship was most poorly predicted in stage-3 TTTS (χ2, p = 0.016). In TTTS, preoperative ultrasound identification of placental cord insertion sites, lie of the donor twin, and size discordance enables preoperative prediction of key landmarks for successful FLOC. © 2018 S. Karger AG, Basel.
You cannot speak and listen at the same time: a probabilistic model of turn-taking.
Donnarumma, Francesco; Dindo, Haris; Iodice, Pierpaolo; Pezzulo, Giovanni
2017-04-01
Turn-taking is a preverbal skill whose mastering constitutes an important precondition for many social interactions and joint actions. However, the cognitive mechanisms supporting turn-taking abilities are still poorly understood. Here, we propose a computational analysis of turn-taking in terms of two general mechanisms supporting joint actions: action prediction (e.g., recognizing the interlocutor's message and predicting the end of turn) and signaling (e.g., modifying one's own speech to make it more predictable and discriminable). We test the hypothesis that in a simulated conversational scenario dyads using these two mechanisms can recognize the utterances of their co-actors faster, which in turn permits them to give and take turns more efficiently. Furthermore, we discuss how turn-taking dynamics depend on the fact that agents cannot simultaneously use their internal models for both action (or messages) prediction and production, as these have different requirements-or, in other words, they cannot speak and listen at the same time with the same level of accuracy. Our results provide a computational-level characterization of turn-taking in terms of cognitive mechanisms of action prediction and signaling that are shared across various interaction and joint action domains.
Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.
Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M
2016-03-11
Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate published predictive BIA equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets, aged 17-24 years, were included. The study used eight published predictive BIA equations, a population-specific equation for FFM estimation, and dual-energy X-ray absorptiometry (DXA) as the reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. The published predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman. The predictive BIA equations explained 68% to 88% of the FFM variance. The specific BIA equation showed no significant differences in FFM compared to DXA values. Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equation developed in this study demonstrated validity for this sample, although it should be used with caution in samples with a large range of FFM.
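The Bland-Altman comparison used above reduces to the mean difference (bias) between the BIA equation and DXA plus 95% limits of agreement; a minimal sketch, with hypothetical FFM arrays in kilograms, is shown below.

```python
import numpy as np

def bland_altman(ffm_bia, ffm_dxa):
    """Bias and 95% limits of agreement between a BIA predictive equation
    and DXA-derived fat-free mass (kg)."""
    bia, dxa = np.asarray(ffm_bia, float), np.asarray(ffm_dxa, float)
    diff = bia - dxa
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width   # bias, lower LoA, upper LoA
```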
Performance of genomic prediction within and across generations in maritime pine.
Bartholomé, Jérôme; Van Heerwaarden, Joost; Isik, Fikret; Boury, Christophe; Vidal, Marjorie; Plomion, Christophe; Bouffier, Laurent
2016-08-11
Genomic selection (GS) is a promising approach for decreasing breeding cycle length in forest trees. Assessment of progeny performance and of the prediction accuracy of GS models over generations is therefore a key issue. A reference population of maritime pine (Pinus pinaster) with an estimated effective inbreeding population size (status number) of 25 was first selected with simulated data. This reference population (n = 818) covered three generations (G0, G1 and G2) and was genotyped with 4436 single-nucleotide polymorphism (SNP) markers. We evaluated the effects on prediction accuracy of both the relatedness between the calibration and validation sets and validation on the basis of progeny performance. Pedigree-based (best linear unbiased prediction, ABLUP) and marker-based (genomic BLUP and Bayesian LASSO) models were used to predict breeding values for three different traits: circumference, height and stem straightness. On average, the ABLUP model outperformed genomic prediction models, with a maximum difference in prediction accuracies of 0.12, depending on the trait and the validation method. A mean difference in prediction accuracy of 0.17 was found between validation methods differing in terms of relatedness. Including the progenitors in the calibration set reduced this difference in prediction accuracy to 0.03. When only genotypes from the G0 and G1 generations were used in the calibration set and genotypes from G2 were used in the validation set (progeny validation), prediction accuracies ranged from 0.70 to 0.85. This study suggests that the training of prediction models on parental populations can predict the genetic merit of the progeny with high accuracy: an encouraging result for the implementation of GS in the maritime pine breeding program.
Population estimates of Nearctic shorebirds
Morrison, R.I.G.; Gill, Robert E.; Harrington, B.A.; Skagen, S.K.; Page, G.W.; Gratto-Trevor, C. L.; Haig, S.M.
2000-01-01
Estimates are presented for the population sizes of 53 species of Nearctic shorebirds occurring regularly in North America, plus four species that breed occasionally. Shorebird population sizes were derived from data obtained by a variety of methods from breeding, migration and wintering areas, and formal assessments of accuracy of counts or estimates are rarely available. Accurate estimates exist only for a few species that have been the subject of detailed investigation, and the likely accuracy of most estimates is considered poor or low. Population estimates range from a few tens to several millions. Overall, population estimates most commonly fell in the range of hundreds of thousands, particularly the low hundreds of thousands; estimated population sizes for large shorebird species currently all fall below 500,000. Population size was inversely related to size (mass) of the species, with a statistically significant negative regression between log (population size) and log (mass). Two outlying groups were evident on the regression graph: one, with populations lower than predicted, included species considered either to be "at risk" or particularly hard to count, and a second, with populations higher than predicted, included two species that are hunted. Population estimates are an integral part of conservation plans being developed for shorebirds in the United States and Canada, and may be used to identify areas of key international and regional importance.
Estimates of shorebird populations in North America
Morrison, R.I.G.; Gill, Robert E.; Harrington, B.A.; Skagen, S.K.; Page, G.W.; Gratto-Trevor, C. L.; Haig, S.M.
2001-01-01
Estimates are presented for the population sizes of 53 species of Nearctic shorebirds occurring regularly in North America, plus four species that breed occasionally. Population estimates range from a few tens to several millions. Overall, population estimates most commonly fall in the range of hundreds of thousands, particularly the low hundreds of thousands; estimated population sizes for large shorebird species currently all fall below 500 000. Population size is inversely related to size (mass) of the species, with a statistically significant negative regression between log(population size) and log(mass). Two outlying groups are evident on the regression graph: one, with populations lower than predicted, includes species considered to be either “at risk” or particularly hard to count, and a second, with populations higher than predicted, includes two species that are hunted. Shorebird population sizes were derived from data obtained by a variety of methods from breeding, migration, and wintering areas, and formal assessments of accuracy of counts or estimates are rarely available. Accurate estimates exist only for a few species that have been the subject of detailed investigation, and the likely accuracy of most estimates is considered poor or low. Population estimates are an integral part of conservation plans being developed for shorebirds in the United States and Canada and may be used to identify areas of key international and regional importance.
Fan, Aiping; Wang, Chen; Zhang, Liqin; Yan, Ye; Han, Cha; Xue, Fengxia
2018-02-06
To evaluate the diagnostic accuracy of the 2011 International Federation for Cervical Pathology and Colposcopy (IFCPC) colposcopic terminology. The clinicopathological data of 2262 patients who underwent colposcopy from September 2012 to September 2016 were reviewed. The colposcopic findings, colposcopic impression, and cervical histopathology of the patients were analyzed. Correlations between variables were evaluated using cervical histopathology as the gold standard. Colposcopic diagnosis matched biopsy histopathology in 1482 patients (65.5%), and the weighted kappa strength of agreement was 0.480 (P<0.01). Colposcopic diagnoses more often underestimated (22.1%) than overestimated (12.3%) cervical pathology. There was no significant difference in the agreement between colposcopic diagnosis and cervical pathology among the various grades of lesions (P=0.282). The sensitivity and specificity for detecting high-grade lesions/carcinoma were 71.6% and 98.0%, respectively. Multivariate analysis showed that major changes were independent factors in predicting high-grade lesions/carcinoma, whereas transformation zone type, lesion size, and non-staining were not statistically related to high-grade lesions/carcinoma. The 2011 IFCPC terminology can improve the diagnostic accuracy for all lesion severities. The categorization of major changes and minor changes is appropriate. However, colposcopic diagnosis remains unsatisfactory. Poor reproducibility of the type 2 transformation zone and the significance of leukoplakia require further study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Tourassi, Georgia
2012-01-01
The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of the visual similarity on example cases. The purpose of our study is twofold: (i) discern better the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they can predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-value ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.
Langeslag-Smith, Miriam A; Vandal, Alain C; Briane, Vincent; Thompson, Benjamin; Anstice, Nicola S
2015-11-27
To assess the accuracy of preschool vision screening in a large, ethnically diverse, urban population in South Auckland, New Zealand. Retrospective longitudinal study. B4 School Check vision screening records (n=5572) were compared with hospital eye department data for children referred from screening due to impaired acuity in one or both eyes who attended a referral appointment (n=556). False positive screens were identified by comparing screening data from the eyes that failed screening with hospital data. Estimation of false negative screening rates relied on data from eyes that passed screening. Data were analysed using logistic regression modelling accounting for the high correlation between results for the two eyes of each child. Positive predictive value of the preschool vision screening programme. Screening produced high numbers of false positive referrals, resulting in poor positive predictive value (PPV=31%, 95% CI 26% to 38%). High estimated negative predictive value (NPV=92%, 95% CI 88% to 95%) suggested most children with a vision disorder were identified at screening. Relaxing the referral criteria for acuity from worse than 6/9 to worse than 6/12 improved PPV without adversely affecting NPV. The B4 School Check generated numerous false positive referrals and consequently had a low PPV. There is scope for reducing costs by altering the visual acuity criterion for referral. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Electrophysiological evidence for preserved primacy of lexical prediction in aging.
Dave, Shruti; Brothers, Trevor A; Traxler, Matthew J; Ferreira, Fernanda; Henderson, John M; Swaab, Tamara Y
2018-05-28
Young adults show consistent neural benefits of predictable contexts when processing upcoming words, but these benefits are less clear-cut in older adults. Here we disentangle the neural correlates of prediction accuracy and contextual support during word processing, in order to test current theories that suggest that neural mechanisms underlying predictive processing are specifically impaired in older adults. During a sentence comprehension task, older and younger readers were asked to predict passage-final words and report the accuracy of these predictions. Age-related reductions were observed for N250 and N400 effects of prediction accuracy, as well as for N400 effects of contextual support independent of prediction accuracy. Furthermore, temporal primacy of predictive processing (i.e., earlier facilitation for successful predictions) was preserved across the lifespan, suggesting that predictive mechanisms are unlikely to be uniquely impaired in older adults. In addition, older adults showed prediction effects on frontal post-N400 positivities (PNPs) that were similar in amplitude to PNPs in young adults. Previous research has shown correlations between verbal fluency and lexical prediction in older adult readers, suggesting that the production system may be linked to capacity for lexical prediction, especially in aging. The current study suggests that verbal fluency modulates PNP effects of contextual support, but not prediction accuracy. Taken together, our findings suggest that aging does not result in specific declines in lexical prediction. Copyright © 2018 Elsevier Ltd. All rights reserved.
Does ADHD in Adults Affect the Relative Accuracy of Metamemory Judgments?
ERIC Educational Resources Information Center
Knouse, Laura E.; Paradise, Matthew J.; Dunlosky, John
2006-01-01
Objective: Prior research suggests that individuals with ADHD overestimate their performance across domains despite performing more poorly in these domains. The authors introduce measures of accuracy from the larger realm of judgment and decision making--namely, relative accuracy and calibration--to the study of self-evaluative judgment accuracy…
NASA Astrophysics Data System (ADS)
Turkki, Riku; Linder, Nina; Kovanen, Panu E.; Pellinen, Teijo; Lundin, Johan
2016-03-01
The characteristics of immune cells in the tumor microenvironment of breast cancer capture clinically important information. Despite the heterogeneity of tumor-infiltrating immune cells, it has been shown that the degree of infiltration assessed by visual evaluation of hematoxylin-eosin (H&E) stained samples has prognostic and possibly predictive value. However, quantification of the infiltration in H&E-stained tissue samples is currently dependent on visual scoring by an expert. Computer vision enables automated characterization of the components of the tumor microenvironment, and texture-based methods have successfully been used to discriminate between different tissue morphologies and cell phenotypes. In this study, we evaluate whether local binary pattern texture features with superpixel segmentation and classification with a support vector machine can be utilized to identify immune cell infiltration in H&E-stained breast cancer samples. Guided by the pan-leukocyte CD45 marker, we annotated training and test sets from 20 primary breast cancer samples. In the training set of arbitrarily sized image regions (n=1,116), a 3-fold cross-validation resulted in 98% accuracy and an area under the receiver-operating characteristic curve (AUC) of 0.98 to discriminate between immune cell-rich and -poor areas. In the test set (n=204), we achieved an accuracy of 96% and AUC of 0.99 to label cropped tissue regions correctly into immune cell-rich and -poor categories. The obtained results demonstrate strong discrimination between immune cell-rich and -poor tissue morphologies. The proposed method can provide a quantitative measurement of the degree of immune cell infiltration and can be applied to digitally scanned H&E-stained breast cancer samples for diagnostic purposes.
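The pipeline named above (local binary pattern texture features, superpixel segmentation, and a support vector machine) can be sketched with standard scikit-image and scikit-learn calls. The segmentation and LBP parameters, the per-superpixel histogram features, and the RBF-SVM are generic assumptions of the sketch rather than the study's exact settings.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_lbp_features(rgb_image, n_segments=400, P=8, R=1):
    """LBP histogram per SLIC superpixel, roughly following the pipeline
    described above (texture features + superpixel segmentation)."""
    gray = rgb2gray(rgb_image)
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    segments = slic(rgb_image, n_segments=n_segments, compactness=10)
    n_bins = P + 2                      # number of 'uniform' LBP codes
    feats, ids = [], []
    for label in np.unique(segments):
        hist, _ = np.histogram(lbp[segments == label],
                               bins=n_bins, range=(0, n_bins), density=True)
        feats.append(hist)
        ids.append(label)
    return np.array(feats), np.array(ids), segments

# Hypothetical training: 'labels' marks superpixels as immune cell rich (1) / poor (0)
# X, ids, segments = superpixel_lbp_features(image)
# clf = SVC(kernel="rbf", probability=True).fit(X[train_idx], labels[train_idx])
```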
Genomic prediction of reproduction traits for Merino sheep.
Bolormaa, S; Brown, D J; Swan, A A; van der Werf, J H J; Hayes, B J; Daetwyler, H D
2017-06-01
Economically important reproduction traits in sheep, such as number of lambs weaned and litter size, are expressed only in females and later in life after most selection decisions are made, which makes them ideal candidates for genomic selection. Accurate genomic predictions would lead to greater genetic gain for these traits by enabling accurate selection of young rams with high genetic merit. The aim of this study was to design and evaluate the accuracy of a genomic prediction method for female reproduction in sheep using daughter trait deviations (DTD) for sires and ewe phenotypes (when individual ewes were genotyped) for three reproduction traits: number of lambs born (NLB), litter size (LSIZE) and number of lambs weaned. Genomic best linear unbiased prediction (GBLUP), BayesR and pedigree BLUP analyses of the three reproduction traits measured on 5340 sheep (4503 ewes and 837 sires) with real and imputed genotypes for 510 174 SNPs were performed. The prediction of breeding values using both sire and ewe trait records was validated in Merino sheep. Prediction accuracy was evaluated by across sire family and random cross-validations. Accuracies of genomic estimated breeding values (GEBVs) were assessed as the mean Pearson correlation adjusted by the accuracy of the input phenotypes. The addition of sire DTD into the prediction analysis resulted in higher accuracies compared with using only ewe records in genomic predictions or pedigree BLUP. Using GBLUP, the average accuracy based on the combined records (ewes and sire DTD) was 0.43 across traits, but the accuracies varied by trait and type of cross-validations. The accuracies of GEBVs from random cross-validations (range 0.17-0.61) were higher than were those from sire family cross-validations (range 0.00-0.51). The GEBV accuracies of 0.41-0.54 for NLB and LSIZE based on the combined records were amongst the highest in the study. Although BayesR was not significantly different from GBLUP in prediction accuracy, it identified several candidate genes which are known to be associated with NLB and LSIZE. The approach provides a way to make use of all data available in genomic prediction for traits that have limited recording. © 2017 Stichting International Foundation for Animal Genetics.
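As a rough illustration of the GBLUP component of the analysis above (not the authors' software), genomic values can be obtained by building a VanRaden-type genomic relationship matrix from 0/1/2 genotypes and solving a ridge-like mixed-model equation; the simulated data, the assumed heritability and the closed-form solution below are simplifying assumptions.

# Minimal GBLUP sketch: VanRaden genomic relationship matrix (GRM) plus the
# mixed-model solution g_hat = G (G + (se2/su2) I)^-1 (y - mean(y)).
# Variance components are assumed known here; real analyses estimate them (REML).
import numpy as np

def vanraden_grm(M):
    """M: n x m matrix of genotypes coded 0/1/2."""
    p = M.mean(axis=0) / 2.0                    # allele frequencies
    Z = M - 2.0 * p                             # centre each marker by 2p
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom

def gblup(y, M, h2=0.3):
    """Predict genomic values for all n individuals from phenotypes y."""
    n = len(y)
    G = vanraden_grm(M) + 1e-6 * np.eye(n)      # small ridge for invertibility
    lam = (1.0 - h2) / h2                       # residual-to-genetic variance ratio
    V = G + lam * np.eye(n)
    return G @ np.linalg.solve(V, y - y.mean())

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(200, 500)).astype(float)
u = (M - M.mean(axis=0)) @ rng.normal(0, 0.05, 500)   # true genomic values
y = u + rng.normal(0, u.std(), 200)                   # phenotypes
print(np.corrcoef(gblup(y, M), u)[0, 1])              # correlation with true values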
Wykrzykowska, Joanna J.; Arbab-Zadeh, Armin; Godoy, Gustavo; Miller, Julie M.; Lin, Shezhang; Vavere, Andrea; Paul, Narinder; Niinuma, Hiroyuki; Hoe, John; Brinker, Jeffrey; Khosa, Faisal; Sarwar, Sheryar; Lima, Joao; Clouse, Melvin E.
2012-01-01
OBJECTIVE Evaluations of stents by MDCT from studies performed at single centers have yielded variable results with a high proportion of unassessable stents. The purpose of this study was to evaluate the accuracy of 64-MDCT angiography (MDCTA) in identifying in-stent restenosis in a multicenter trial. MATERIALS AND METHODS The Coronary Evaluation Using Multidetector Spiral Computed Tomography Angiography Using 64 Detectors (CORE-64) Multicenter Trial and Registry evaluated the accuracy of 64-MDCTA in assessing 405 patients referred for coronary angiography. A total of 75 stents in 52 patients were assessed: 48 of 75 stents (64%) in 36 of 52 patients (69%) could be evaluated. The prevalence of in-stent restenosis by quantitative coronary angiography (QCA) in this subgroup was 23% (17/75). Eighty percent of the stents were ≤ 3.0 mm in diameter. RESULTS The overall sensitivity, specificity, positive predictive value, and negative predictive value for visual detection of 50% in-stent stenosis using MDCT compared with QCA were 33.3%, 91.7%, 57.1%, and 80.5%, respectively, with an overall accuracy of 77.1% for the 48 assessable stents. The ability to evaluate stents on MDCTA varied by stent type: thick-strut stents such as Bx Velocity were assessable in 50% of the cases; Cypher, in 62.5%; and thinner-strut stents such as Taxus, in 75%. We performed quantitative assessment of in-stent contrast attenuation in Hounsfield units and correlated that value with the quantitative percentage of stenosis by QCA. The correlation coefficient between the average attenuation decrease and ≥ 50% stenosis by QCA was 0.25 (p = 0.073). Quantitative assessment failed to improve the accuracy of MDCT over qualitative assessment. CONCLUSION The results of our study showed that 64-MDCT has poor ability to detect in-stent restenosis in small-diameter stents. Evaluability and negative predictive value were better in large-diameter stents. Thus, 64-MDCT may be appropriate for stent assessment in only selected patients. PMID:20028909
Percival, Elizabeth; Bhatia, Rani; Preece, Kahn; McElduff, Patrick; McEvoy, Mark; Collison, Adam; Mattes, Joerg
2016-01-01
Ara h2 sIgE serum levels improve the diagnostic accuracy for predicting peanut allergy, but the use of Ara h2 purified protein as a skin prick test (SPT) has not been substantially evaluated. The fraction of exhaled nitric oxide (FeNO) shows promise as a novel biomarker of peanut allergy. The reproducibility of these measures has not been determined. The aim was to assess the accuracy and reproducibility (over a period of at least 12 months) of SPT to Ara h2 in comparison with four predictors of clinical peanut allergy (peanut SPT, Ara h2 specific immunoglobulin E (sIgE), peanut sIgE and FeNO). Twenty-seven children were recruited in a follow-up of a prospective cohort of fifty-six children at least 12 months after an open-labelled peanut food challenge. Their repeat assessment involved a questionnaire, SPT to peanut and Ara h2 purified protein, FeNO and sIgE to peanut and Ara h2 measurements. Ara h2 SPT was no worse in accuracy than peanut SPT, FeNO, Ara h2 sIgE and peanut sIgE (AUC 0.908 compared with 0.887, 0.889, 0.935 and 0.804, respectively) for predicting allergic reaction at the previous food challenge. SPT for peanut and Ara h2 demonstrated limited reproducibility (ICC = 0.51 and 0.44), while FeNO demonstrated good reproducibility (ICC = 0.73) and sIgE for peanut and Ara h2 were highly reproducible (ICC = 0.81 and 0.85). In this population, Ara h2 SPT was no worse in accuracy than current testing for the evaluation of clinical peanut allergy, but, like peanut SPT, had poor reproducibility. FeNO, peanut sIgE and Ara h2 sIgE were consistently reproducible despite an interval of at least 12 months between the repeated measurements.
de Moraes, Augusto César Ferreira; Cassenote, Alex Jones Flores; Leclercq, Catherine; Dallongeville, Jean; Androutsos, Odysseas; Török, Katalin; González-Gross, Marcela; Widhalm, Kurt; Kafatos, Anthony; Carvalho, Heráclito Barbosa; Moreno, Luis Alberto
2015-01-01
Background Resting heart rate (RHR) reflects sympathetic nerve activity, and a significant association between RHR and all-cause and cardiovascular mortality has been reported in some epidemiologic studies. Objective To analyze the predictive power and accuracy of RHR as a screening measure for individual and clustered cardiovascular risk in adolescents. Methods The study comprised 769 European adolescents (376 boys) participating in the HELENA cross-sectional study (2006–2008). Measurements of systolic blood pressure, HOMA index, triglycerides, TC/HDL-c, VO2max and the sum of four skinfolds were obtained, and a clustered cardiovascular disease (CVD) risk index was computed. Receiver operating characteristic curve analysis was applied to calculate the power and accuracy of RHR to predict individual and clustered CVD risk factors. Results RHR showed low accuracy for screening CVD risk factors in both sexes (range 38.5%–54.4% in boys and 45.5%–54.3% in girls). Low specificities (15.6%–19.7% in boys; 18.1%–20.0% in girls) were also found. Nevertheless, the sensitivities were moderate to high (61.4%–89.1% in boys; 72.9%–90.3% in girls). Conclusion RHR is a poor predictor of individual CVD risk factors and of clustered CVD risk, and estimates based on RHR are not accurate. The use of RHR as an indicator of CVD risk in adolescents may produce a biased screening of cardiovascular health in both sexes. PMID:26010248
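The screening-accuracy quantities reported above (AUC, sensitivity and specificity at a chosen RHR cut-off) can be computed along the following lines; the simulated RHR values and the Youden-index cut-off rule are placeholders, not the HELENA data or the authors' exact procedure.

# Illustration of the ROC-based screening evaluation described above.
# rhr and has_risk are placeholder arrays, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
has_risk = rng.integers(0, 2, 300)                     # 1 = clustered CVD risk
rhr = 70 + 5 * has_risk + rng.normal(0, 12, 300)       # deliberately weak predictor

auc = roc_auc_score(has_risk, rhr)
fpr, tpr, thresholds = roc_curve(has_risk, rhr)
youden_idx = np.argmax(tpr - fpr)                      # cut-off maximising Youden's J
cutoff = thresholds[youden_idx]
sensitivity = tpr[youden_idx]
specificity = 1 - fpr[youden_idx]
print(f"AUC={auc:.2f} cut-off={cutoff:.0f} bpm "
      f"sens={sensitivity:.2f} spec={specificity:.2f}")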
Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines.
Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just
2016-01-01
Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley. The aim was to predict breeding values without expensive phenotyping of large sets of lines. A total of 309 advanced spring barley lines, tested at two locations each with three replicates, were phenotyped, and each line was genotyped with the Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. Phenotypic measurements considered were: seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy. Furthermore, predicting across full- and half-sib families resulted in reduced prediction accuracies. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using fewer than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate that a minimum marker set of 1,000 is needed to decrease the risk of low prediction accuracy for some traits or some families. PMID:27783639
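A leave-one-out cross-validation of a shrinkage-based genomic prediction model, of the kind used to obtain the accuracies quoted above, might look like the following sketch; ridge regression stands in for the actual prediction model, and the genotypes and phenotypes are simulated.

# Leave-one-out cross-validation of a ridge-regression genomic prediction model.
# Accuracy is the correlation between observed and cross-validated predictions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(42)
n_lines, n_markers = 150, 800
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
beta = rng.normal(0, 0.05, n_markers)
y = X @ beta + rng.normal(0, 1.0, n_lines)

preds = np.empty(n_lines)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = Ridge(alpha=n_markers)          # heavy shrinkage, GBLUP-like
    model.fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

print("LOO prediction accuracy:", np.corrcoef(preds, y)[0, 1])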
Influence of outliers on accuracy estimation in genomic prediction in plant breeding.
Estaghvirou, Sidi Boubacar Ould; Ogutu, Joseph O; Piepho, Hans-Peter
2014-10-01
Outliers often pose problems in analyses of data in plant breeding, but their influence on the performance of methods for estimating predictive accuracy in genomic prediction studies has not yet been evaluated. Here, we evaluate the influence of outliers on the performance of methods for accuracy estimation in genomic prediction studies using simulation. We simulated 1000 datasets for each of 10 scenarios to evaluate the influence of outliers on the performance of seven methods for estimating accuracy. These scenarios are defined by the number of genotypes, marker effect variance, and magnitude of outliers. To mimic outliers, we added to one observation in each simulated dataset, in turn, 5-, 8-, and 10-times the error SD used to simulate small and large phenotypic datasets. The effect of outliers on accuracy estimation was evaluated by comparing deviations in the estimated and true accuracies for datasets with and without outliers. Outliers adversely influenced accuracy estimation, more so at small values of genetic variance or number of genotypes. A method for estimating heritability and predictive accuracy in plant breeding and another used to estimate accuracy in animal breeding were the most accurate and resistant to outliers across all scenarios and are therefore preferable for accuracy estimation in genomic prediction studies. The performances of the other five methods that use cross-validation were less consistent and varied widely across scenarios. The computing time for the methods increased as the size of outliers and sample size increased and the genetic variance decreased. Copyright © 2014 Ould Estaghvirou et al.
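The outlier experiment described above can be mimicked in a few lines: add a multiple of the error SD to a single observation and compare the cross-validated accuracy estimate with and without it. The model (ridge regression), the simulation settings and the accuracy definition below are illustrative assumptions, not the seven methods evaluated in the study.

# Sketch of the outlier experiment: add k times the error SD to one observation
# and compare the cross-validated accuracy estimate with and without the outlier.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def cv_accuracy(X, y):
    preds = cross_val_predict(Ridge(alpha=X.shape[1]), X, y, cv=5)
    return np.corrcoef(preds, y)[0, 1]

rng = np.random.default_rng(7)
n, m, error_sd = 100, 300, 1.0
X = rng.integers(0, 3, size=(n, m)).astype(float)
y_clean = X @ rng.normal(0, 0.05, m) + rng.normal(0, error_sd, n)

for k in (0, 5, 8, 10):                       # 0 = no outlier
    y = y_clean.copy()
    y[0] += k * error_sd
    print(f"outlier {k:>2} x SD: estimated accuracy = {cv_accuracy(X, y):.3f}")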
Outbreak column 17: Situational Awareness for healthcare outbreaks
2015-01-01
Outbreak column 17 introduces the utility of Situation Awareness (SA) for outbreak management. For any given time period, an individual or team’s SA involves a perception of what is going on, meaning derived from that perception, and a prediction of what is likely to happen next. The individual or team’s SA informs, but is separate from, both the decisions and actions that follow. The accuracy and completeness of an individual or team’s SA will therefore impact on the effectiveness of decisions and actions taken. SA was developed by the aviation industry and is utilised in situations which, like outbreaks, have dynamic (i.e. continuously changing) problem spaces, and in which a loss of SA is likely to lead to both poor decision-making and actions with potentially fatal consequences. The potential benefits of using SA for outbreaks are discussed and include: (1) retrospectively, to identify whether poor decision-making was a result of poor SA; (2) prospectively, to identify where the system is weakest; and (3) as a teaching tool to improve the skills of individuals and teams in developing a shared understanding of the here and now. PMID:28989433
Pahlavian, Soroush Heidari; Bunck, Alexander C.; Thyagaraj, Suraj; Giese, Daniel; Loth, Francis; Hedderich, Dennis M.; Kröger, Jan Robert; Martin, Bryn A.
2016-01-01
Abnormal alterations in cerebrospinal fluid (CSF) flow are thought to play an important role in pathophysiology of various craniospinal disorders such as hydrocephalus and Chiari malformation. Three directional phase contrast MRI (4D Flow) has been proposed as one method for quantification of the CSF dynamics in healthy and disease states, but prior to further implementation of this technique, its accuracy in measuring CSF velocity magnitude and distribution must be evaluated. In this study, an MR-compatible experimental platform was developed based on an anatomically detailed 3D printed model of the cervical subarachnoid space and subject specific flow boundary conditions. Accuracy of 4D Flow measurements was assessed by comparison of CSF velocities obtained within the in vitro model with the numerically predicted velocities calculated from a spatially averaged computational fluid dynamics (CFD) model based on the same geometry and flow boundary conditions. Good agreement was observed between CFD and 4D Flow in terms of spatial distribution and peak magnitude of through-plane velocities with an average difference of 7.5% and 10.6% for peak systolic and diastolic velocities, respectively. Regression analysis showed lower accuracy of 4D Flow measurement at the timeframes corresponding to low CSF flow rate and poor correlation between CFD and 4D Flow in-plane velocities. PMID:27043214
Griffiths, Alex; Beaussier, Anne-Laure; Demeritt, David; Rothstein, Henry
2017-02-01
The Care Quality Commission (CQC) is responsible for ensuring the quality of the health and social care delivered by more than 30 000 registered providers in England. With only limited resources for conducting on-site inspections, the CQC has used statistical surveillance tools to help it identify which providers it should prioritise for inspection. In the face of planned funding cuts, the CQC plans to put more reliance on statistical surveillance tools to assess risks to quality and prioritise inspections accordingly. The objective was to evaluate the ability of the CQC's latest surveillance tool, Intelligent Monitoring (IM), to predict the quality of care provided by National Health Service (NHS) hospital trusts, so that those at greatest risk of providing poor-quality care can be identified and targeted for inspection. The predictive ability of the IM tool was evaluated through regression analyses and χ2 testing of the relationship between the quantitative risk score generated by the IM tool and the subsequent quality rating awarded following detailed on-site inspection by large expert teams of inspectors. First, the continuous risk scores generated by the CQC's IM statistical surveillance tool cannot predict inspection-based quality ratings of NHS hospital trusts (OR 0.38 (0.14 to 1.05) for Outstanding/Good, OR 0.94 (0.80 to 1.10) for Good/Requires improvement, and OR 0.90 (0.76 to 1.07) for Requires improvement/Inadequate). Second, the risk scores cannot be used more simply to distinguish the trusts performing poorly, i.e. those subsequently rated either 'Requires improvement' or 'Inadequate', from the trusts performing well, i.e. those subsequently rated either 'Good' or 'Outstanding' (OR 1.07 (0.91 to 1.26)). Classifying CQC's risk bandings 1-3 as high risk and 4-6 as low risk, 11 of the high risk trusts were performing well and 43 of the low risk trusts were performing poorly, resulting in an overall accuracy rate of 47.6%. Third, the risk scores cannot be used even more simply to distinguish the worst performing trusts, i.e. those subsequently rated 'Inadequate', from the remaining, better performing trusts (OR 1.11 (0.94 to 1.32)). Classifying CQC's risk banding 1 as high risk and 2-6 as low risk, the highest overall accuracy rate of 72.8% was achieved, but still only 6 of the 13 Inadequate trusts were correctly classified as being high risk. Since the IM statistical surveillance tool cannot predict the outcome of NHS hospital trust inspections, it cannot be used for prioritisation. A new approach to inspection planning is therefore required. Published by the BMJ Publishing Group Limited.
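The banding-based evaluation above reduces to a 2x2 cross-tabulation of predicted risk against inspection outcome, from which an odds ratio and an overall accuracy rate follow directly; the counts in the sketch below are placeholders, not the CQC figures.

# Illustration of the 2x2 evaluation: cross-tabulate a binary risk banding against
# the inspection outcome, then compute the odds ratio and overall accuracy.
import numpy as np

# rows: predicted high risk / low risk; cols: actually poor / actually good
table = np.array([[30, 11],
                  [43, 20]], dtype=float)      # placeholder counts only

tp, fp = table[0]          # high risk & poor, high risk & good
fn, tn = table[1]          # low risk & poor,  low risk & good

odds_ratio = (tp * tn) / (fp * fn)
accuracy = (tp + tn) / table.sum()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"OR={odds_ratio:.2f} accuracy={accuracy:.1%} "
      f"sensitivity={sensitivity:.1%} specificity={specificity:.1%}")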
Reid, Clare L
2007-10-01
A wide variation in 24 h energy expenditure has been demonstrated previously in intensive care unit (ICU) patients. The accuracy of equations used to predict energy expenditure in critically ill patients is frequently compared with single or short-duration indirect calorimetry measurements, which may not represent the total energy expenditure (TEE) of these patients. To take into account this variability in energy expenditure, estimates have been compared with continuous indirect calorimetry measurements. Continuous (24 h/day for 5 days) indirect calorimetry measurements were made in patients requiring mechanical ventilation for 5 days. The Harris-Benedict, Schofield and Ireton-Jones equations and the American College of Chest Physicians (ACCP) recommendation of 25 kcal/kg/day were used to estimate energy requirements. A total of 192 days of measurements, in 27 patients, were available for comparison with the different equations. Agreement between the equations and measured values was poor. The Harris-Benedict, Schofield and ACCP equations provided the greatest proportion of estimates (66%, 66% and 65%, respectively) within 80% to 110% of TEE values. However, each of these equations would have resulted in clinically significant underfeeding (<80% of TEE) in 16%, 15% and 22% of patients, respectively, and overfeeding (>110% of TEE) in 18%, 19% and 13% of patients, respectively. Limits of agreement between the different equations and TEE values were unacceptably wide. Prediction equations may result in significant under- or overfeeding in the clinical setting.
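For context, two of the predictive estimates compared above can be reproduced directly. The sketch uses the commonly cited Harris-Benedict (1919) coefficients and the ACCP figure of 25 kcal/kg/day mentioned in the text; stress and activity factors often applied in ICU practice are deliberately omitted, so this is not the study's exact calculation.

# Two of the predictive estimates compared above. Coefficients are the commonly
# cited Harris-Benedict (1919) values; ICU stress/activity factors are omitted.
def harris_benedict(weight_kg, height_cm, age_yr, sex):
    """Basal energy expenditure in kcal/day."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.00 * height_cm - 6.76 * age_yr
    return 655.10 + 9.56 * weight_kg + 1.85 * height_cm - 4.68 * age_yr

def accp(weight_kg):
    """American College of Chest Physicians recommendation: 25 kcal/kg/day."""
    return 25.0 * weight_kg

patient = dict(weight_kg=80, height_cm=175, age_yr=60, sex="male")   # example values
hb = harris_benedict(**patient)
print(f"Harris-Benedict: {hb:.0f} kcal/day, ACCP: {accp(80):.0f} kcal/day")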
The effect of using genealogy-based haplotypes for genomic prediction
2013-01-01
Background Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Conclusions Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy. PMID:23496971
The effect of using genealogy-based haplotypes for genomic prediction.
Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt
2013-03-06
Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.
Achamrah, Najate; Jésus, Pierre; Grigioni, Sébastien; Rimbert, Agnès; Petit, André; Déchelotte, Pierre; Folope, Vanessa; Coëffier, Moïse
2018-01-01
Predictive equations have been specifically developed for obese patients to estimate resting energy expenditure (REE). Body composition (BC) assessment is needed for some of these equations. We assessed the impact of the BC method on the accuracy of specific predictive equations developed for obese patients. REE was measured (mREE) by indirect calorimetry and BC was assessed by bioelectrical impedance analysis (BIA) and dual-energy X-ray absorptiometry (DXA). mREE and percentages of accurate predictions (±10% of mREE) were compared. Predictive equations were studied in 2588 obese patients. Mean mREE was 1788 ± 6.3 kcal/24 h. Only the Müller (BIA) and Harris & Benedict (HB) equations provided REE estimates that did not differ from mREE. The Huang, Müller, Horie-Waitzberg, and HB formulas provided accurate predictions in a higher proportion of cases (>60%). The use of BIA provided better predictions of REE than DXA for the Huang and Müller equations. Conversely, the Horie-Waitzberg and Lazzer formulas provided higher accuracy using DXA. Accuracy decreased when applied to patients with BMI ≥ 40, except for the Horie-Waitzberg and Lazzer (DXA) formulas. The Müller equations based on BIA provided a marked improvement in REE prediction accuracy compared with equations not based on BC. The value of BC assessment for improving the accuracy of REE predictive equations in obese patients should be confirmed. PMID:29320432
Beaulieu, Jean; Doerksen, Trevor K; MacKay, John; Rainville, André; Bousquet, Jean
2014-12-02
Genomic selection (GS) may improve selection response over conventional pedigree-based selection if markers capture more detailed information than pedigrees in recently domesticated tree species and/or make it more cost effective. Genomic prediction accuracies using 1748 trees and 6932 SNPs representative of as many distinct gene loci were determined for growth and wood traits in white spruce, within and between environments and breeding groups (BG), each with an effective size of Ne ≈ 20. Marker subsets were also tested. Model fits and/or cross-validation (CV) prediction accuracies for ridge regression (RR) and the least absolute shrinkage and selection operator models approached those of pedigree-based models. With strong relatedness between CV sets, prediction accuracies for RR within environment and BG were high for wood (r = 0.71-0.79) and moderately high for growth (r = 0.52-0.69) traits, in line with trends in heritabilities. For both classes of traits, these accuracies achieved between 83% and 92% of those obtained with phenotypes and pedigree information. Prediction into untested environments remained moderately high for wood (r ≥ 0.61) but dropped significantly for growth (r ≥ 0.24) traits, emphasizing the need to phenotype in all test environments and model genotype-by-environment interactions for growth traits. Removing relatedness between CV sets sharply decreased prediction accuracies for all traits and subpopulations, falling near zero between BGs with no known shared ancestry. For marker subsets, similar patterns were observed but with lower prediction accuracies. Given the need for high relatedness between CV sets to obtain good prediction accuracies, we recommend to build GS models for prediction within the same breeding population only. Breeding groups could be merged to build genomic prediction models as long as the total effective population size does not exceed 50 individuals in order to obtain high prediction accuracy such as that obtained in the present study. A number of markers limited to a few hundred would not negatively impact prediction accuracies, but these could decrease more rapidly over generations. The most promising short-term approach for genomic selection would likely be the selection of superior individuals within large full-sib families vegetatively propagated to implement multiclonal forestry.
2009-01-01
Background Genomic selection (GS) uses molecular breeding values (MBV) derived from dense markers across the entire genome for selection of young animals. The accuracy of MBV prediction is important for a successful application of GS. Recently, several methods have been proposed to estimate MBV. Initial simulation studies have shown that these methods can accurately predict MBV. In this study we compared the accuracies and possible bias of five different regression methods in an empirical application in dairy cattle. Methods Genotypes of 7,372 SNP and highly accurate EBV of 1,945 dairy bulls were used to predict MBV for protein percentage (PPT) and a profit index (Australian Selection Index, ASI). Marker effects were estimated by least squares regression (FR-LS), Bayesian regression (Bayes-R), random regression best linear unbiased prediction (RR-BLUP), partial least squares regression (PLSR) and nonparametric support vector regression (SVR) in a training set of 1,239 bulls. Accuracy and bias of MBV prediction were calculated from cross-validation of the training set and tested against a test team of 706 young bulls. Results For both traits, FR-LS using a subset of SNP was significantly less accurate than all other methods, which used all SNP. Accuracies obtained by Bayes-R, RR-BLUP, PLSR and SVR were very similar for ASI (0.39-0.45) and for PPT (0.55-0.61). Overall, SVR gave the highest accuracy. All methods resulted in biased MBV predictions for ASI; for PPT, only RR-BLUP and SVR predictions were unbiased. A significant decrease in accuracy of prediction of ASI was seen in young test cohorts of bulls compared to the accuracy derived from cross-validation of the training set. This reduction was not apparent for PPT. Combining MBV predictions with pedigree-based predictions gave 1.05 to 1.34 times higher accuracies compared to predictions based on pedigree alone. Some methods have largely different computational requirements, with PLSR and RR-BLUP requiring the least computing time. Conclusions The four methods which use information from all SNP, namely RR-BLUP, Bayes-R, PLSR and SVR, generate similar accuracies of MBV prediction for genomic selection, and their use in the selection of immediate future generations in dairy cattle will be comparable. The use of FR-LS in genomic selection is not recommended. PMID:20043835
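Three of the whole-genome regression methods compared above have close analogues in scikit-learn, and a cross-validated comparison of their prediction accuracies could be set up roughly as follows; the simulated SNP data, shrinkage level, number of PLS components and SVR settings are assumptions, not the study's configuration.

# Cross-validated comparison of ridge regression (an RR-BLUP analogue), partial
# least squares regression and support vector regression on simulated SNP data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_bulls, n_snps = 400, 1000
X = rng.integers(0, 3, size=(n_bulls, n_snps)).astype(float)
y = X @ rng.normal(0, 0.03, n_snps) + rng.normal(0, 1.0, n_bulls)

models = {
    "RR-BLUP-like ridge": Ridge(alpha=n_snps),
    "PLSR (20 components)": PLSRegression(n_components=20),
    "SVR (RBF)": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    preds = cross_val_predict(model, X, y, cv=5).ravel()   # PLSR returns 2-D output
    print(f"{name:>22}: accuracy = {np.corrcoef(preds, y)[0, 1]:.3f}")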
Data accuracy assessment using enterprise architecture
NASA Astrophysics Data System (ADS)
Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias
2011-02-01
Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio
2017-07-01
Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues for the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and the True Skill Statistic (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal-constrained models with SDMs that assumed unlimited dispersal; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared accuracy between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences for range size changes over time, which is the most used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in the predictive accuracy of model predictions for those areas. Our results highlight that (1) incorporating dispersal processes can improve the predictive accuracy of temporal transference of SDMs and reduce uncertainties of extinction risk assessments from global change; and (2) as geographical areas subjected to novel climates are expected to arise, they must be reported, as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs, respectively. Doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.
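The two accuracy measures used above can be computed from a temporally independent validation set as shown below: AUC on the continuous suitability output and TSS (sensitivity + specificity - 1) on the binarised predictions. The data and the 0.5 threshold are placeholders, not the Rhinoderma darwinii models.

# Computing AUC and TSS for binary species-distribution predictions against an
# independent validation set of current presences/absences (placeholder data).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(11)
observed = rng.integers(0, 2, 200)                      # current presences/absences
suitability = 0.3 * observed + rng.uniform(0, 0.7, 200) # model output in [0, 1]

auc = roc_auc_score(observed, suitability)
predicted = (suitability >= 0.5).astype(int)            # binarised prediction
tn, fp, fn, tp = confusion_matrix(observed, predicted).ravel()
tss = tp / (tp + fn) + tn / (tn + fp) - 1               # sensitivity + specificity - 1
print(f"AUC = {auc:.2f}, TSS = {tss:.2f}")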
Genomic selection across multiple breeding cycles in applied bread wheat breeding.
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2016-06-01
We evaluated genomic selection across five breeding cycles of bread wheat breeding. Bias of within-cycle cross-validation and methods for improving the prediction accuracy were assessed. The prospect of genomic selection has been frequently shown by cross-validation studies using the same genetic material across multiple environments, but studies investigating genomic selection across multiple breeding cycles in applied bread wheat breeding are lacking. We estimated the prediction accuracy of grain yield, protein content and protein yield of 659 inbred lines across five independent breeding cycles and assessed the bias of within-cycle cross-validation. We investigated the influence of outliers on the prediction accuracy and predicted protein yield by its component traits. A high average heritability was estimated for protein content, followed by grain yield and protein yield. The bias of the prediction accuracy using populations from individual cycles with fivefold cross-validation was accordingly substantial for protein yield (17-712%) and less pronounced for protein content (8-86%). Cross-validation using the cycles as folds aimed to avoid this bias and reached a maximum prediction accuracy of r = 0.51 for protein content, r = 0.38 for grain yield and r = 0.16 for protein yield. Dropping outlier cycles increased the prediction accuracy of grain yield to r = 0.41 as estimated by cross-validation, while dropping outlier environments did not have a significant effect on the prediction accuracy. Independent validation suggests, on the other hand, that careful consideration is necessary before an outlier correction is undertaken which removes lines from the training population. Predicting protein yield by multiplying genomic estimated breeding values of grain yield and protein content raised the prediction accuracy to r = 0.19 for this derived trait.
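Cross-validation "using the cycles as folds", as described above, amounts to leaving out one breeding cycle at a time so that lines are never predicted from models trained on their own cycle. The sketch below uses simulated data and ridge regression as a stand-in for the genomic prediction model.

# Leave-one-cycle-out cross-validation: each breeding cycle is held out in turn.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(5)
n_lines, n_markers, n_cycles = 300, 600, 5
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
y = X @ rng.normal(0, 0.04, n_markers) + rng.normal(0, 1.0, n_lines)
cycle = rng.integers(0, n_cycles, n_lines)               # breeding cycle labels

preds = np.empty(n_lines)
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=cycle):
    model = Ridge(alpha=n_markers).fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

print("Across-cycle prediction accuracy:", round(np.corrcoef(preds, y)[0, 1], 3))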
Predicting New Indications for Approved Drugs Using a Proteo-Chemometric Method
Dakshanamurthy, Sivanesan; Issa, Naiem T; Assefnia, Shahin; Seshasayee, Ashwini; Peters, Oakland J; Madhavan, Subha; Uren, Aykut; Brown, Milton L; Byers, Stephen W
2012-01-01
The most effective way to move from target identification to the clinic is to identify already approved drugs with the potential for activating or inhibiting unintended targets (repurposing or repositioning). This is usually achieved by high throughput chemical screening, transcriptome matching or simple in silico ligand docking. We now describe a novel rapid computational proteo-chemometric method called “Train, Match, Fit, Streamline” (TMFS) to map new drug-target interaction space and predict new uses. The TMFS method combines shape, topology and chemical signatures, including docking score and functional contact points of the ligand, to predict potential drug-target interactions with remarkable accuracy. Using the TMFS method, we performed extensive molecular fit computations on 3,671 FDA approved drugs across 2,335 human protein crystal structures. The TMFS method predicts drug-target associations with 91% accuracy for the majority of drugs. Over 58% of the known best ligands for each target were correctly predicted as top ranked, followed by 66%, 76%, 84% and 91% for agents ranked in the top 10, 20, 30 and 40, respectively, out of all 3,671 drugs. Drugs ranked in the top 1–40, that have not been experimentally validated for a particular target now become candidates for repositioning. Furthermore, we used the TMFS method to discover that mebendazole, an anti-parasitic with recently discovered and unexpected anti-cancer properties, has the structural potential to inhibit VEGFR2. We confirmed experimentally that mebendazole inhibits VEGFR2 kinase activity as well as angiogenesis at doses comparable with its known effects on hookworm. TMFS also predicted, and was confirmed with surface plasmon resonance, that dimethyl celecoxib and the anti-inflammatory agent celecoxib can bind cadherin-11, an adhesion molecule important in rheumatoid arthritis and poor prognosis malignancies for which no targeted therapies exist. We anticipate that expanding our TMFS method to the >27,000 clinically active agents available worldwide across all targets will be most useful in the repositioning of existing drugs for new therapeutic targets. PMID:22780961
Suarez-Kurtz, Guilherme; Fuchshuber-Moraes, Mateus; Struchiner, Claudio J; Parra, Esteban J
2016-08-01
Several algorithms have been proposed to reduce the genotyping effort and cost, while retaining the accuracy of N-acetyltransferase-2 (NAT2) phenotype prediction. Data from the 1000 Genomes (1KG) project and an admixed cohort of Black Brazilians were used to assess the accuracy of NAT2 phenotype prediction using algorithms based on paired single nucleotide polymorphisms (SNPs) (rs1041983 and rs1801280) or a tag SNP (rs1495741). NAT2 haplotypes comprising SNPs rs1801279, rs1041983, rs1801280, rs1799929, rs1799930, rs1208 and rs1799931 were assigned according to the arylamine N-acetyltransferases database. Contingency tables were used to visualize the agreement between the NAT2 acetylator phenotypes on the basis of these haplotypes versus phenotypes inferred by the prediction algorithms. The paired and tag SNP algorithms provided more than 96% agreement with the 7-SNP-derived phenotypes in Europeans, East Asians, South Asians and Admixed Americans, but discordance of phenotype prediction occurred in 30.2 and 24.8% of 1KG Africans and in 14.4 and 18.6% of Black Brazilians, respectively. Paired SNP panel misclassification occurs in carriers of NAT2 haplotypes *13A (282T alone), *12B (282T and 803G), *6B (590A alone) and *14A (191A alone), whereas haplotype *14, defined by the 191A allele, is the major culprit of misclassification by the tag SNP algorithm. Both the paired SNP and the tag SNP algorithms may be used, with economy of scale, to infer NAT2 acetylator phenotypes, including the ultra-slow phenotype, in European, East Asian, South Asian and American populations represented in the 1KG cohort. Both algorithms, however, perform poorly in populations of predominant African descent, including admixed African-Americans, African Caribbeans and Black Brazilians.
Lohsiriwat, Varut; Prapasrivorakul, Siriluck; Lohsiriwat, Darin
2009-01-01
The purposes of this study were to determine the clinical presentations and surgical outcomes of perforated peptic ulcer (PPU), and to evaluate the accuracy of the Boey scoring system in predicting mortality and morbidity. We carried out a retrospective study of patients undergoing emergency surgery for PPU between 2001 and 2006 in a university hospital. Clinical presentations and surgical outcomes were analyzed. The adjusted odds ratio (OR) for morbidity and mortality at each Boey score was compared with a risk score of zero. Receiver-operating characteristic curve analysis was used to compare the predictive ability of the Boey score, American Society of Anesthesiologists (ASA) classification, and Mannheim Peritonitis Index (MPI). The study included 152 patients with an average age of 52 years (range: 15-88 years), and 78% were male. The most common site of PPU was the prepyloric region (74%). Primary closure and omental graft was the most common procedure performed. The overall mortality rate was 9% and the complication rate was 30%. The mortality rate increased progressively with increasing Boey score: 1%, 8% (OR=2.4), 33% (OR=3.5), and 38% (OR=7.7) for scores of 0, 1, 2, and 3, respectively (p<0.001). The morbidity rates for Boey scores of 0, 1, 2, and 3 were 11%, 47% (OR=2.9), 75% (OR=4.3), and 77% (OR=4.9), respectively (p<0.001). The Boey score and ASA classification appeared to be better than the MPI for predicting poor surgical outcomes. Perforated peptic ulcer is associated with high rates of mortality and morbidity. The Boey risk score serves as a simple and precise predictor of postoperative mortality and morbidity.
Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
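The distinction drawn above between resubstitution accuracy and cross-validation accuracy for classification tree models can be illustrated as follows; the simulated predictors and the specific tree settings are assumptions, not the lichen data or the authors' trees.

# Resubstitution accuracy (fit and evaluate on the same data) versus
# cross-validated accuracy for a classification tree; simulated data only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))                 # stand-in environmental predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1.0, 300) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X, y)
resub = tree.score(X, y)                       # typically optimistic
cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()
print(f"resubstitution accuracy = {resub:.2f}, 10-fold CV accuracy = {cv:.2f}")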
Jeff Jenness; J. Judson Wynne
2005-01-01
In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics has been possible for decades, but these calculations were made by hand or with the use of a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...
Final Technical Report: Increasing Prediction Accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Bruce Hardison; Hansen, Clifford; Stein, Joshua
2015-12-01
PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
Genotyping by sequencing for genomic prediction in a soybean breeding population.
Jarquín, Diego; Kocak, Kyle; Posadas, Luis; Hyma, Katie; Jedlicka, Joseph; Graef, George; Lorenz, Aaron
2014-08-29
Advances in genotyping technology, such as genotyping by sequencing (GBS), are making genomic prediction more attractive to reduce breeding cycle times and costs associated with phenotyping. Genomic prediction and selection has been studied in several crop species, but no reports exist in soybean. The objectives of this study were (i) evaluate prospects for genomic selection using GBS in a typical soybean breeding program and (ii) evaluate the effect of GBS marker selection and imputation on genomic prediction accuracy. To achieve these objectives, a set of soybean lines sampled from the University of Nebraska Soybean Breeding Program were genotyped using GBS and evaluated for yield and other agronomic traits at multiple Nebraska locations. Genotyping by sequencing scored 16,502 single nucleotide polymorphisms (SNPs) with minor-allele frequency (MAF) > 0.05 and percentage of missing values ≤ 5% on 301 elite soybean breeding lines. When SNPs with up to 80% missing values were included, 52,349 SNPs were scored. Prediction accuracy for grain yield, assessed using cross validation, was estimated to be 0.64, indicating good potential for using genomic selection for grain yield in soybean. Filtering SNPs based on missing data percentage had little to no effect on prediction accuracy, especially when random forest imputation was used to impute missing values. The highest accuracies were observed when random forest imputation was used on all SNPs, but differences were not significant. A standard additive G-BLUP model was robust; modeling additive-by-additive epistasis did not provide any improvement in prediction accuracy. The effect of training population size on accuracy began to plateau around 100, but accuracy steadily climbed until the largest possible size was used in this analysis. Including only SNPs with MAF > 0.30 provided higher accuracies when training populations were smaller. Using GBS for genomic prediction in soybean holds good potential to expedite genetic gain. Our results suggest that standard additive G-BLUP models can be used on unfiltered, imputed GBS data without loss in accuracy.
Correa, Katharina; Bangera, Rama; Figueroa, René; Lhorente, Jean P; Yáñez, José M
2017-01-31
Sea lice infestations caused by Caligus rogercresseyi are a main concern to the salmon farming industry due to associated economic losses. Resistance to this parasite was shown to have low to moderate genetic variation and its genetic architecture was suggested to be polygenic. The aim of this study was to compare accuracies of breeding value predictions obtained with pedigree-based best linear unbiased prediction (P-BLUP) methodology against different genomic prediction approaches: genomic BLUP (G-BLUP), Bayesian Lasso, and Bayes C. To achieve this, 2404 individuals from 118 families were measured for C. rogercresseyi count after a challenge and genotyped using 37 K single nucleotide polymorphisms. Accuracies were assessed using fivefold cross-validation and SNP densities of 0.5, 1, 5, 10, 25 and 37 K. Accuracy of genomic predictions increased with increasing SNP density and was higher than pedigree-based BLUP predictions by up to 22%. Both Bayesian and G-BLUP methods can predict breeding values with higher accuracies than pedigree-based BLUP, however, G-BLUP may be the preferred method because of reduced computation time and ease of implementation. A relatively low marker density (i.e. 10 K) is sufficient for maximal increase in accuracy when using G-BLUP or Bayesian methods for genomic prediction of C. rogercresseyi resistance in Atlantic salmon.
Wren, Christopher; Vogel, Melanie; Lord, Stephen; Abrams, Dominic; Bourke, John; Rees, Philip; Rosenthal, Eric
2012-02-01
The aim of this study was to examine the accuracy in predicting pathway location in children with Wolff-Parkinson-White syndrome for each of seven published algorithms. ECGs from 100 consecutive children with Wolff-Parkinson-White syndrome undergoing electrophysiological study were analysed by six investigators using seven published algorithms, six of which had been developed in adult patients. Accuracy and concordance of predictions were adjusted for the number of pathway locations. Accessory pathways were left-sided in 49, septal in 20 and right-sided in 31 children. Overall accuracy of prediction was 30-49% for the exact location and 61-68% including adjacent locations. Concordance between investigators varied between 41% and 86%. No algorithm was better at predicting septal pathways (accuracy 5-35%, improving to 40-78% including adjacent locations), but one was significantly worse. Predictive accuracy was 24-53% for the exact location of right-sided pathways (50-71% including adjacent locations) and 32-55% for the exact location of left-sided pathways (58-73% including adjacent locations). All algorithms were less accurate in our hands than in other authors' own assessment. None performed well in identifying midseptal or right anteroseptal accessory pathway locations.
Dervin, Geoffrey F.; Stiell, Ian G.; Wells, George A.; Rody, Kelly; Grabowski, Jenny
2001-01-01
Objective To determine clinicians’ accuracy and reliability for the clinical diagnosis of unstable meniscus tears in patients with symptomatic osteoarthritis of the knee. Design A prospective cohort study. Setting A single tertiary care centre. Patients One hundred and fifty-two patients with symptomatic osteoarthritis of the knee refractory to conservative medical treatment were selected for prospective evaluation of arthroscopic débridement. Intervention Arthroscopic débridement of the knee, including meniscal tear and chondral flap resection, without abrasion arthroplasty. Outcome measures A standardized assessment protocol was administered to each patient by 2 independent observers. Arthroscopic determination of unstable meniscal tears was recorded by 1 observer who reviewed a video recording and was blinded to preoperative data. Those variables that had the highest interobserver agreement and the strongest association with meniscal tear by univariate methods were entered into logistic regression to model the best prediction of resectable tears. Results There were 92 meniscal tears (77 medial, 15 lateral). Interobserver agreement between clinical fellows and treating surgeons was poor to fair (κ < 0.4) for all clinical variables except radiographic measures, which were good. Fellows and surgeons predicted unstable meniscal tear preoperatively with equivalent accuracy of 60%. Logistic regression modelling revealed that a history of swelling and a ballottable effusion were negative predictors. A positive McMurray test was the only positive predictor of unstable meniscal tear. “Mechanical” symptoms were not reliable predictors in this prospective study. The model was 69% accurate for all patients and 76% for those with advanced medial compartment osteoarthritis defined by a joint space height of 2 mm or less. Conclusions This study underscored the difficulty in using clinical variables to predict unstable medial meniscal tears in patients with pre-existing osteoarthritis of the knee. The lack of interobserver agreement must be overcome to ensure that the findings can be generalized to other physician observers. PMID:11504260
Heidaritabar, M; Wolc, A; Arango, J; Zeng, J; Settar, P; Fulton, J E; O'Sullivan, N P; Bastiaansen, J W M; Fernando, R L; Garrick, D J; Dekkers, J C M
2016-10-01
Most genomic prediction studies fit only additive effects in models to estimate genomic breeding values (GEBV). However, if dominance genetic effects are an important source of variation for complex traits, accounting for them may improve the accuracy of GEBV. We investigated the effect of fitting dominance and additive effects on the accuracy of GEBV for eight egg production and quality traits in a purebred line of brown layers using pedigree or genomic information (42K single-nucleotide polymorphism (SNP) panel). Phenotypes were corrected for the effect of hatch date. Additive and dominance genetic variances were estimated using genomic-based [genomic best linear unbiased prediction (GBLUP)-REML and BayesC] and pedigree-based (PBLUP-REML) methods. Breeding values were predicted using a model that included both additive and dominance effects and a model that included only additive effects. The reference population consisted of approximately 1800 animals hatched between 2004 and 2009, while approximately 300 young animals hatched in 2010 were used for validation. Accuracy of prediction was computed as the correlation between phenotypes and estimated breeding values of the validation animals divided by the square root of the estimate of heritability in the whole population. The proportion of dominance variance to total phenotypic variance ranged from 0.03 to 0.22 with PBLUP-REML across traits, from 0 to 0.03 with GBLUP-REML and from 0.01 to 0.05 with BayesC. Accuracies of GEBV ranged from 0.28 to 0.60 across traits. Inclusion of dominance effects did not improve the accuracy of GEBV, and differences in their accuracies between genomic-based methods were small (0.01-0.05), with GBLUP-REML yielding higher prediction accuracies than BayesC for egg production, egg colour and yolk weight, while BayesC yielded higher accuracies than GBLUP-REML for the other traits. In conclusion, fitting dominance effects did not impact accuracy of genomic prediction of breeding values in this population. © 2016 Blackwell Verlag GmbH.
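A common way to set up the additive and dominance terms referred to above is to build two design matrices from the same genotypes: an additive covariate counting copies of one allele (0/1/2) and a dominance covariate indicating heterozygosity (0/1). The coding below is one standard convention and not necessarily the parameterisation used in this study.

# Additive and dominance design matrices from 0/1/2 SNP genotypes.
import numpy as np

genotypes = np.array([[0, 1, 2, 1],
                      [2, 2, 0, 1],
                      [1, 0, 1, 2]])           # individuals x SNPs (toy example)

Z_add = genotypes.astype(float)                # additive covariates: allele counts
Z_dom = (genotypes == 1).astype(float)         # dominance covariates: heterozygote flag

# A model with both effect types would then be y = mu + Z_add a + Z_dom d + e,
# with separate variance components for the additive (a) and dominance (d) effects.
print(Z_add)
print(Z_dom)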
Hidalgo, A M; Bastiaansen, J W M; Lopes, M S; Veroneze, R; Groenen, M A M; de Koning, D-J
2015-07-01
Genomic selection is applied to dairy cattle breeding to improve the genetic progress of purebred (PB) animals, whereas in pigs and poultry the target is a crossbred (CB) animal for which a different strategy appears to be needed. The source of information used to estimate the breeding values, i.e., using phenotypes of CB or PB animals, may affect the accuracy of prediction. The objective of our study was to assess the direct genomic value (DGV) accuracy of CB and PB pigs using different sources of phenotypic information. Data used were from 3 populations: 2,078 Dutch Landrace-based, 2,301 Large White-based, and 497 crossbreds from an F1 cross between the 2 lines. Two female reproduction traits were analyzed: gestation length (GLE) and total number of piglets born (TNB). Phenotypes used in the analyses originated from offspring of genotyped individuals. Phenotypes collected on CB and PB animals were analyzed as separate traits using a single-trait model. Breeding values were estimated separately for each trait in a pedigree BLUP analysis and subsequently deregressed. Deregressed EBV for each trait originating from different sources (CB or PB offspring) were used to study the accuracy of genomic prediction. Accuracy of prediction was computed as the correlation between DGV and the DEBV of the validation population. Accuracy of prediction within PB populations ranged from 0.43 to 0.62 across GLE and TNB. Accuracies to predict genetic merit of CB animals with one PB population in the training set ranged from 0.12 to 0.28, with the exception of using the CB offspring phenotype of the Dutch Landrace that resulted in an accuracy estimate around 0 for both traits. Accuracies to predict genetic merit of CB animals with both parental PB populations in the training set ranged from 0.17 to 0.30. We conclude that prediction within population and trait had good predictive ability regardless of the trait being the PB or CB performance, whereas using PB population(s) to predict genetic merit of CB animals had zero to moderate predictive ability. We observed that the DGV accuracy of CB animals when training on PB data was greater than or equal to training on CB data. However, when results are corrected for the different levels of reliabilities in the PB and CB training data, we showed that training on CB data does outperform PB data for the prediction of CB genetic merit, indicating that more CB animals should be phenotyped to increase the reliability and, consequently, accuracy of DGV for CB genetic merit.
NASA Astrophysics Data System (ADS)
Lee, Sang-Min; Nam, Ji-Eun; Choi, Hee-Wook; Ha, Jong-Chul; Lee, Yong Hee; Kim, Yeon-Hee; Kang, Hyun-Suk; Cho, ChunHo
2016-08-01
This study evaluated the prediction accuracies of THe Observing system Research and Predictability EXperiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) data at six operational forecast centers using the root-mean-square difference (RMSD) and Brier score (BS) from April to July 2012. It also tested the precipitation predictability of the ensemble prediction systems (EPSs) for the onset of the summer rainy season, the day on which the spring drought over South Korea ended (29 June 2012), using the ensemble mean precipitation, the ensemble precipitation probability, 10-day lagged ensemble forecasts (ensemble mean and probability of precipitation), and the effective drought index (EDI). The RMSD analysis of atmospheric variables (geopotential height at 500 hPa, temperature at 850 hPa, sea-level pressure and specific humidity at 850 hPa) showed that the prediction accuracies of the EPSs at the Meteorological Service of Canada (CMC) and the China Meteorological Administration (CMA) were poor, while those at the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Korea Meteorological Administration (KMA) were good. ECMWF and KMA also showed better results than the other EPSs for predicting precipitation in the BS distributions. The onset of the summer rainy season could be predicted using ensemble-mean precipitation from a 4-day lead time at all forecast centers. In addition, the spatial distributions of precipitation predicted by the EPSs at KMA and the Met Office of the United Kingdom (UKMO) were similar to those of observed precipitation, indicating good predictive performance. In the reliability diagram, the 1-day lead-time precipitation probability forecasts of the EPSs at CMA, the National Centers for Environmental Prediction (NCEP), and UKMO were over-forecast, whereas those at ECMWF and KMA were under-forecast; at 2-4-day lead times, all EPSs under-forecast. Precipitation on the onset day of the summer rainy season could also be predicted, from a 4-day lead time down to the initial time, using the 10-day lagged ensemble mean and probability forecasts. Additionally, the predictability of the withdrawal day of the spring drought, which ended with the precipitation on the onset day of the summer rainy season, was evaluated using the EDI calculated from ensemble-mean precipitation forecasts and spreads at five EPSs.
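Both verification measures used here have simple definitions, shown below with toy arrays rather than TIGGE fields: RMSD is the root-mean-square difference between a forecast field and the verifying analysis, and the Brier score is the mean squared difference between the forecast probability of an event and its 0/1 occurrence.

```python
import numpy as np

def rmsd(forecast, analysis):
    """Root-mean-square difference between an ensemble-mean field and the analysis."""
    f, a = np.asarray(forecast, float), np.asarray(analysis, float)
    return np.sqrt(np.mean((f - a) ** 2))

def brier_score(prob_forecast, occurred):
    """Brier score: mean squared difference between forecast probabilities
    and binary observations (lower is better, 0 is perfect)."""
    p, o = np.asarray(prob_forecast, float), np.asarray(occurred, float)
    return np.mean((p - o) ** 2)

# Toy examples: precipitation probabilities at five points; 500 hPa heights (gpm).
print(round(brier_score([0.9, 0.7, 0.2, 0.1, 0.6], [1, 1, 0, 0, 1]), 3))
print(round(rmsd([5872.0, 5860.0, 5881.0], [5878.0, 5855.0, 5890.0]), 1))
```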
van Mourik, Maaike S M; van Duijn, Pleun Joppe; Moons, Karel G M; Bonten, Marc J M; Lee, Grace M
2015-01-01
Objective: Measuring the incidence of healthcare-associated infections (HAI) is of increasing importance in current healthcare delivery systems. Administrative data algorithms, including (combinations of) diagnosis codes, are commonly used to determine the occurrence of HAI, either to support within-hospital surveillance programmes or as free-standing quality indicators. We conducted a systematic review evaluating the diagnostic accuracy of administrative data for the detection of HAI. Methods: Systematic search of Medline, Embase, CINAHL and Cochrane for relevant studies (1995–2013). Methodological quality assessment was performed using QUADAS-2 criteria; diagnostic accuracy estimates were stratified by HAI type and key study characteristics. Results: 57 studies were included, the majority aiming to detect surgical site or bloodstream infections. Study designs were very diverse regarding the specification of their administrative data algorithm (code selections, follow-up) and definitions of HAI presence. One-third of studies had important methodological limitations including differential or incomplete HAI ascertainment or lack of blinding of assessors. Observed sensitivity and positive predictive values of administrative data algorithms for HAI detection were very heterogeneous and generally modest at best, both for within-hospital algorithms and for formal quality indicators; accuracy was particularly poor for the identification of device-associated HAI such as central line associated bloodstream infections. The large heterogeneity in study designs across the included studies precluded formal calculation of summary diagnostic accuracy estimates in most instances. Conclusions: Administrative data had limited and highly variable accuracy for the detection of HAI, and their judicious use for internal surveillance efforts and external quality assessment is recommended. If hospitals and policymakers choose to rely on administrative data for HAI surveillance, continued improvements to existing algorithms and their robust validation are imperative. PMID:26316651
ERIC Educational Resources Information Center
Kwon, Heekyung
2011-01-01
The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…
Poor readers' retrieval mechanism: efficient access is not dependent on reading skill
Johns, Clinton L.; Matsuki, Kazunaga; Van Dyke, Julie A.
2015-01-01
A substantial body of evidence points to a cue-based direct-access retrieval mechanism as a crucial component of skilled adult reading. We report two experiments aimed at examining whether poor readers are able to make use of the same retrieval mechanism. This is significant in light of findings that poor readers have difficulty retrieving linguistic information (e.g., Perfetti, 1985). Our experiments are based on a previous demonstration of direct-access retrieval in language processing, presented in McElree et al. (2003). Experiment 1 replicates the original result using an auditory implementation of the Speed-Accuracy Tradeoff (SAT) method. This finding represents a significant methodological advance, as it opens up the possibility of exploring retrieval speeds in non-reading populations. Experiment 2 provides evidence that poor readers do use a direct-access retrieval mechanism during listening comprehension, despite overall poorer accuracy and slower retrieval speeds relative to skilled readers. The findings are discussed with respect to hypotheses about the source of poor reading comprehension. PMID:26528212
Fuzzy logic-based analogue forecasting and hybrid modelling of horizontal visibility
NASA Astrophysics Data System (ADS)
Tuba, Zoltán; Bottyán, Zsolt
2018-04-01
Forecasting visibility is one of the greatest challenges in aviation meteorology. At the same time, highly accurate visibility forecasts can significantly reduce, or even make avoidable, weather-related risk in aviation. To improve visibility forecasting, this research links fuzzy logic-based analogue forecasting with post-processed numerical weather prediction model outputs in a hybrid forecast. The performance of the analogue forecasting model was improved by applying the Analytic Hierarchy Process. A linear combination of the two outputs was then used to create an ultra-short-term hybrid visibility prediction that gradually shifts the weight from statistical to numerical products over the forecast period, exploiting the advantages of each. This makes it possible to pull the numerical visibility forecast closer to the observations even when it is initially wrong. Complete verification of the categorical forecasts was carried out; results for persistence and terminal aerodrome forecasts (TAF) are also given for comparison. The average Heidke Skill Score (HSS) of the analogue and hybrid forecasts over the examined airports is very similar, even at the end of the forecast period, where the weight of the analogue prediction in the final hybrid output is only 0.1-0.2. In the case of poor visibility (1000-2500 m), the hybrid (0.65) and analogue (0.64) forecasts have similar average HSS in the first 6 h of the forecast period and perform better than persistence (0.60) or TAF (0.56). An important achievement is that the hybrid model takes the physics and dynamics of the atmosphere into account through the increasing share of the numerical weather prediction. Despite this, its performance is similar to the most effective visibility forecasting methods and does not inherit the poor verification results of purely numerical outputs.
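The hybrid scheme amounts to a lead-time-dependent weighted average of the analogue and post-processed NWP visibility forecasts, verified with categorical scores such as the Heidke Skill Score. The sketch below is only illustrative: the linear weighting function and its parameters are assumptions (the abstract states only that the analogue weight falls to about 0.1-0.2 by the end of the period), while the HSS function is the standard 2x2 contingency-table definition.

```python
def hybrid_visibility(analogue_vis_m, nwp_vis_m, lead_h, w0=0.9, w_end=0.15, period_h=9.0):
    """Blend analogue and NWP visibility with a weight that decays linearly with
    lead time, shifting the hybrid from statistical to numerical guidance."""
    w = max(w_end, w0 - (w0 - w_end) * lead_h / period_h)
    return w * analogue_vis_m + (1.0 - w) * nwp_vis_m

def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """HSS for a 2x2 categorical verification table (e.g. visibility below 2500 m yes/no)."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

print(hybrid_visibility(1800.0, 3200.0, lead_h=6))      # analogue value pulled towards NWP
print(round(heidke_skill_score(42, 11, 13, 134), 2))    # toy contingency counts
```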
Bashari, Hossein; Naghipour, Ali Asghar; Khajeddin, Seyed Jamaleddin; Sangoony, Hamed; Tahmasebi, Pejman
2016-09-01
Identifying areas that have a high risk of burning is a main component of fire management planning. Although the available tools can predict the fire risks, these are poor in accommodating uncertainties in their predictions. In this study, we accommodated uncertainty in wildfire prediction using Bayesian belief networks (BBNs). An influence diagram was developed to identify the factors influencing wildfire in arid and semi-arid areas of Iran, and it was populated with probabilities to produce a BBNs model. The behavior of the model was tested using scenario and sensitivity analysis. Land cover/use, mean annual rainfall, mean annual temperature, elevation, and livestock density were recognized as the main variables determining wildfire occurrence. The produced model had good accuracy as its ROC area under the curve was 0.986. The model could be applied in both predictive and diagnostic analysis for answering "what if" and "how" questions. The probabilistic relationships within the model can be updated over time using observation and monitoring data. The wildfire BBN model may be updated as new knowledge emerges; hence, it can be used to support the process of adaptive management.
CRC-113 gene expression signature for predicting prognosis in patients with colorectal cancer.
Nguyen, Minh Nam; Choi, Tae Gyu; Nguyen, Dinh Truong; Kim, Jin-Hwan; Jo, Yong Hwa; Shahid, Muhammad; Akter, Salima; Aryal, Saurav Nath; Yoo, Ji Youn; Ahn, Yong-Joo; Cho, Kyoung Min; Lee, Ju-Seog; Choe, Wonchae; Kang, Insug; Ha, Joohun; Kim, Sung Soo
2015-10-13
Colorectal cancer (CRC) is the third leading cause of global cancer mortality. Recent studies have proposed several gene signatures to predict CRC prognosis, but none of those have proven reliable for predicting prognosis in clinical practice yet due to poor reproducibility and molecular heterogeneity. Here, we have established a prognostic signature of 113 probe sets (CRC-113) that include potential biomarkers and reflect the biological and clinical characteristics. Robustness and accuracy were significantly validated in external data sets from 19 centers in five countries. In multivariate analysis, CRC-113 gene signature showed a stronger prognostic value for survival and disease recurrence in CRC patients than current clinicopathological risk factors and molecular alterations. We also demonstrated that the CRC-113 gene signature reflected both genetic and epigenetic molecular heterogeneity in CRC patients. Furthermore, incorporation of the CRC-113 gene signature into a clinical context and molecular markers further refined the selection of the CRC patients who might benefit from postoperative chemotherapy. Conclusively, CRC-113 gene signature provides new possibilities for improving prognostic models and personalized therapeutic strategies.
Prediction of Happy-Sad Mood from Daily Behaviors and Previous Sleep History
Sano, Akane; Yu, Amy; McHill, Andrew W.; Phillips, Andrew J. K.; Taylor, Sara; Jaques, Natasha; Klerman, Elizabeth B.; Picard, Rosalind W.
2016-01-01
We collected and analyzed subjective and objective data using surveys and wearable sensors worn day and night from 68 participants, for 30 days each, to address questions related to the relationships among sleep duration, sleep irregularity, self-reported Happy-Sad mood and other factors in college students. We analyzed daily and monthly behavior and physiology and identified factors that affect mood, including how accurately sleep duration and sleep regularity for the past 1-5 days classified the participants into high/low mood using support vector machines. We found statistically significant associations between sad mood and poor health-related factors. Behavioral factors such as the percentage of neutral social interactions and the total academic activity hours showed the best performance in separating the Happy-Sad mood groups. Sleep regularity was a more important discriminator of mood than sleep duration for most participants, although both variables predicted happy/sad mood with 70-82% accuracy. The number of nights giving the best prediction of happy/sad mood varied for different groups of individuals. PMID:26737854
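The classification step described above, separating high and low mood from recent sleep and behaviour features with a support vector machine, can be sketched as follows. The feature matrix, labels and parameters are hypothetical stand-ins, not the study's data or tuned settings, so the printed accuracy is meaningless beyond illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features per participant-day: sleep duration and a regularity
# index for each of the past few days; label 1 = happy, 0 = sad.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("mean CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 2))
```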
The value of vital sign trends for detecting clinical deterioration on the wards
Churpek, Matthew M; Adhikari, Richa; Edelson, Dana P
2016-01-01
Aim: Early detection of clinical deterioration on the wards may improve outcomes, and most early warning scores only utilize a patient’s current vital signs. The added value of vital sign trends over time is poorly characterized. We investigated whether adding trends improves accuracy and which methods are optimal for modelling trends. Methods: Patients admitted to five hospitals over a five-year period were included in this observational cohort study, with 60% of the data used for model derivation and 40% for validation. Vital signs were utilized to predict the combined outcome of cardiac arrest, intensive care unit transfer, and death. The accuracy of models utilizing both the current value and different trend methods was compared using the area under the receiver operating characteristic curve (AUC). Results: A total of 269,999 patient admissions were included, which resulted in 16,452 outcomes. Overall, trends increased accuracy compared to a model containing only current vital signs (AUC 0.78 vs. 0.74; p<0.001). The methods that resulted in the greatest average increase in accuracy were the vital sign slope (AUC improvement 0.013) and minimum value (AUC improvement 0.012), while the change from the previous value resulted in an average worsening of the AUC (change in AUC −0.002). The AUC increased most for systolic blood pressure when trends were added (AUC improvement 0.05). Conclusion: Vital sign trends increased the accuracy of models designed to detect critical illness on the wards. Our findings have important implications for clinicians at the bedside and for the development of early warning scores. PMID:26898412
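The trend summaries found most useful here (the slope of recent readings and their minimum) are easy to derive from a patient's vital sign history, and the model comparison is simply an AUC computed with and without those extra features. A rough sketch with synthetic data; the variable names and logistic model are illustrative, not the study's specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def trend_features(values, hours):
    """Current value plus two trend summaries: least-squares slope and minimum."""
    values = np.asarray(values, dtype=float)
    slope = np.polyfit(hours, values, 1)[0]
    return np.array([values[-1], slope, values.min()])

# Hypothetical systolic blood pressures (mmHg) over the preceding 18 hours.
print(trend_features([128, 122, 111, 98], hours=[0, 6, 12, 18]))

# Toy AUC comparison: current value only vs. current value plus trends.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                       # columns: current, slope, minimum
y = (X[:, 1] + 0.5 * X[:, 2] < -0.5).astype(int)    # synthetic deterioration label
auc_current = roc_auc_score(y, LogisticRegression().fit(X[:, :1], y).predict_proba(X[:, :1])[:, 1])
auc_trends = roc_auc_score(y, LogisticRegression().fit(X, y).predict_proba(X)[:, 1])
print(round(auc_current, 2), round(auc_trends, 2))
```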
Accuracies of univariate and multivariate genomic prediction models in African cassava.
Okeke, Uche Godfrey; Akdemir, Deniz; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2017-12-04
Genomic selection (GS) promises to accelerate genetic gain in plant breeding programs especially for crop species such as cassava that have long breeding cycles. Practically, to implement GS in cassava breeding, it is necessary to evaluate different GS models and to develop suitable models for an optimized breeding pipeline. In this paper, we compared (1) prediction accuracies from a single-trait (uT) and a multi-trait (MT) mixed model for a single-environment genetic evaluation (Scenario 1), and (2) accuracies from a compound symmetric multi-environment model (uE) parameterized as a univariate multi-kernel model to a multivariate (ME) multi-environment mixed model that accounts for genotype-by-environment interaction for multi-environment genetic evaluation (Scenario 2). For these analyses, we used 16 years of public cassava breeding data for six target cassava traits and a fivefold cross-validation scheme with 10-repeat cycles to assess model prediction accuracies. In Scenario 1, the MT models had higher prediction accuracies than the uT models for all traits and locations analyzed, which amounted to on average a 40% improved prediction accuracy. For Scenario 2, we observed that the ME model had on average (across all locations and traits) a 12% improved prediction accuracy compared to the uE model. We recommend the use of multivariate mixed models (MT and ME) for cassava genetic evaluation. These models may be useful for other plant species.
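The evaluation design, fivefold cross-validation repeated 10 times with accuracy taken as the correlation between observed and predicted values in the held-out fold, can be sketched as below. Ridge regression is used only as a simple stand-in for the genomic prediction models, and the SNP matrix and trait are simulated.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold

def cv_accuracy(X, y, n_splits=5, n_repeats=10, seed=0):
    """Mean correlation between held-out observations and predictions across
    a repeated k-fold scheme (fivefold with 10 repeat cycles by default)."""
    accs = []
    cv = RepeatedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=seed)
    for train, test in cv.split(X):
        model = Ridge(alpha=1.0).fit(X[train], y[train])
        accs.append(np.corrcoef(y[test], model.predict(X[test]))[0, 1])
    return float(np.mean(accs))

rng = np.random.default_rng(3)
X = rng.integers(0, 3, size=(400, 1000)).astype(float)   # toy SNP matrix coded 0/1/2
y = X[:, :50].sum(axis=1) + rng.normal(scale=5.0, size=400)
print(round(cv_accuracy(X, y), 2))
```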
The Use of Linear Programming for Prediction.
ERIC Educational Resources Information Center
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
Accuracy of CNV Detection from GWAS Data.
Zhang, Dandan; Qian, Yudong; Akula, Nirmala; Alliey-Rodriguez, Ney; Tang, Jinsong; Gershon, Elliot S; Liu, Chunyu
2011-01-13
Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites (Birdsuite, Partek, HelixTree, and PennCNV-Affy) in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed unacceptably low accuracy. We found relatively poor consistency between the two "gold standards," the sequence data of Kidd et al. and the aCGH data of Conrad et al. Algorithms for calling CNVs, especially common ones, need substantial improvement, and a "gold standard" for detection of CNVs remains to be established.
van Eijkeren, Jan C H; Olie, J Daniël N; Bradberry, Sally M; Vale, J Allister; de Vries, Irma; Clewell, Harvey J; Meulenbelt, Jan; Hunault, Claudine C
2017-02-01
Kinetic models could potentially assist clinicians in managing cases of lead poisoning. Several models exist that can simulate lead kinetics, but none of them can predict the effect of chelation in lead poisoning. Our aim was to devise a model to predict the effect of succimer (dimercaptosuccinic acid; DMSA) chelation therapy on blood lead concentrations. We integrated a two-compartment kinetic succimer model into an existing PBPK lead model and produced a Chelation Lead Therapy (CLT) model. The accuracy of the model's predictions was assessed by simulating clinical observations in patients poisoned by lead and treated with succimer. The CLT model calculates blood lead concentrations as the sum of the background exposure and the acute or chronic lead poisoning. The latter was due either to ingestion of traditional remedies or to occupational exposure to lead-polluted ambient air. The exposure duration was known. The blood lead concentrations predicted by the CLT model were compared to the measured blood lead concentrations. Pre-chelation blood lead concentrations ranged between 99 and 150 μg/dL. The model was able to simulate the blood lead concentrations accurately during and after succimer treatment. The pattern of urine lead excretion was successfully predicted in some patients but poorly predicted in others. Our model is able to predict blood lead concentrations after succimer therapy, at least in situations where the duration of lead exposure is known.
Samadoulougou, Sekou; Kirakoya-Samadoulougou, Fati; Sarrassat, Sophie; Tinto, Halidou; Bakiono, Fidèle; Nebié, Issa; Robert, Annie
2014-03-17
Over the past ten years, Rapid Diagnostic Tests (RDT) have played a major role in improving the use of biological malaria diagnosis, particularly in resource-poor settings. In Burkina Faso, a recent Demography and Health Survey (DHS) gave the opportunity to assess the performance of the Paracheck® test in under-five children nationwide at community level. A national representative sample of 14,947 households was selected using stratified two-stage cluster sampling. In one out of two households, all under-five children were eligible to be tested for malaria using both RDT and microscopy diagnosis. Paracheck® performance was assessed using microscopy as the gold standard. Sensitivity and specificity were calculated, as well as the diagnostic accuracy (DA) and the Youden index. The malaria infection prevalence was estimated at 66% (95% CI: 64.8-67.2) according to microscopy and at 76.2% (95% CI: 75.1-77.3) according to Paracheck®. The sensitivity and specificity were estimated at 89.9% (95% CI: 89.0-90.8) and 50.4% (95% CI: 48.3-52.6) respectively, with a diagnostic accuracy of 77% and a Youden index of 40%. The positive predictive value for malaria infection was 77.9% (95% CI: 76.7-79.1) and the negative predictive value was 72.1% (95% CI: 69.7-74.3). Variations were found by age group, period of the year and urban and rural areas, as well as across the 13 regions of the country. While the sensitivity of the Paracheck® test was high, its specificity was poor in the general under-five population of Burkina Faso. These results suggest that Paracheck® is not suitable for assessing malaria infection prevalence at community level in areas with high malaria transmission. In such settings, malaria prevalence in the general population could be estimated using microscopy.
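All of the reported screening measures follow directly from the 2x2 table of RDT result against microscopy. A small helper with illustrative counts (the survey's actual table is not reproduced in the abstract):

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Standard screening-test measures from a 2x2 table: sensitivity, specificity,
    predictive values, overall diagnostic accuracy (DA) and the Youden index."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return {
        "sensitivity": se,
        "specificity": sp,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "diagnostic_accuracy": (tp + tn) / (tp + fp + fn + tn),
        "youden_index": se + sp - 1,
    }

# Illustrative counts only, chosen to resemble the magnitudes reported above.
print(diagnostic_summary(tp=594, fp=168, fn=66, tn=172))
```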
Bourdel, Nicolas; Modaffari, Paola; Tognazza, Enrica; Pertile, Riccardo; Chauvet, Pauline; Botchorishivili, Revaz; Savary, Dennis; Pouly, Jean Luc; Rabischong, Benoit; Canis, Michel
2016-12-01
Hysteroscopic reliability may be influenced by the experience of the operator and by a lack of morphological diagnostic criteria for endometrial malignant pathologies. The aim of this study was to evaluate the diagnostic accuracy and the inter-observer agreement (IOA) in the management of abnormal uterine bleeding (AUB) among gynecologists of different experience levels. Each gynecologist, without any other clinical information, was asked to evaluate the anonymous video recordings of 51 consecutive patients who underwent hysteroscopy and endometrial resection for AUB. Expert (>500 hysteroscopies), senior (20-499 procedures) and junior (≤19 procedures) gynecologists were asked to judge endometrial macroscopic appearance (benign, suspicious or frankly malignant). They also had to propose the histological diagnosis (atrophic or proliferative endometrium; simple, glandulocystic or atypical endometrial hyperplasia; and endometrial carcinoma). Observers were free to indicate when the quality of a recording was not good enough for adequate assessment. IOA (k coefficient), sensitivity, specificity, predictive value and the likelihood ratio were calculated. Five expert, five senior and six junior gynecologists were involved in the study. Considering endometrial cancer and endometrial atypical hyperplasia, sensitivity and specificity were respectively 55.5% and 84.5% for juniors, 66.6% and 81.2% for seniors, and 86.6% and 87.3% for experts. Concerning endometrial macroscopic appearance, IOA was poor for juniors (k = 0.10) and fair for seniors and experts (k = 0.23 and 0.22, respectively). IOA was poor for juniors and experts (k = 0.18 and 0.20, respectively) and fair for seniors (k = 0.30) in predicting the histological diagnosis. Sensitivity improves with the observer's experience, but inter-observer agreement and reproducibility of hysteroscopy for endometrial malignancies are not satisfactory regardless of the level of expertise. Therefore, accurate and complete endometrial sampling is still needed.
PPCM: Combining multiple classifiers to improve protein-protein interaction prediction
Yao, Jianzhuang; Guo, Hong; Yang, Xiaohan
2015-08-01
Determining protein-protein interaction (PPI) in biological systems is of considerable importance, and prediction of PPI has become a popular research area. Although different classifiers have been developed for PPI prediction, no single classifier seems to be able to predict PPI with high confidence. We postulated that by combining individual classifiers the accuracy of PPI prediction could be improved. We developed a method called protein-protein interaction prediction classifiers merger (PPCM), and this method combines output from two PPI prediction tools, GO2PPI and Phyloprof, using the Random Forests algorithm. The performance of PPCM was tested by area under the curve (AUC) using an assembled Gold Standard database that contains both positive and negative PPI pairs. Our AUC test showed that PPCM significantly improved the PPI prediction accuracy over the corresponding individual classifiers. We found that additional classifiers incorporated into PPCM could lead to further improvement in the PPI prediction accuracy. Furthermore, cross-species PPCM could achieve competitive and even better prediction accuracy compared to the single-species PPCM. This study established a robust pipeline for PPI prediction by integrating multiple classifiers using the Random Forests algorithm. Ultimately, this pipeline will be useful for predicting PPI in nonmodel species.
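The core idea, feeding the scores of individual PPI predictors into a Random Forest that learns how to merge them, can be sketched briefly. The scores and labels below are simulated placeholders (not GO2PPI or Phyloprof output), so the printed AUC is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: for each candidate protein pair, the scores from two
# independent PPI predictors and a 0/1 label from a gold-standard set.
rng = np.random.default_rng(4)
score_a = rng.uniform(size=1000)
score_b = np.clip(score_a + rng.normal(scale=0.3, size=1000), 0, 1)
X = np.column_stack([score_a, score_b])
y = (0.6 * score_a + 0.4 * score_b + rng.normal(scale=0.2, size=1000) > 0.6).astype(int)

merger = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(merger, X, y, cv=5, scoring="roc_auc").mean()
print("cross-validated ROC AUC of the merged classifier:", round(auc, 2))
```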
Reading Arabic Texts: Effects of Text Type, Reader Type and Vowelization.
ERIC Educational Resources Information Center
Abu-Rabia, Salim
1998-01-01
Investigates the effect of vowels on reading accuracy in Arabic orthography. Finds that vowels had a significant effect on reading accuracy of poor and skilled readers in reading each of four kinds of texts. (NH)
Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène
2015-03-01
Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
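For a single (non-competing) event, the inverse probability of censoring weighting used here can be written compactly: the censoring distribution is estimated by Kaplan-Meier, and each subject's squared prediction error at the horizon is weighted by the inverse of that estimate. The sketch below is a simplification (crude tie handling, no competing risks, one fixed horizon rather than a dynamic curve) intended only to make the estimator concrete.

```python
import numpy as np

def km_censoring(times, events, t_grid):
    """Kaplan-Meier estimate of the censoring survival function G(t),
    treating censored observations (event == 0) as the 'events'."""
    order = np.argsort(times)
    t_sorted, censored = times[order], (events[order] == 0)
    n = len(times)
    step_t, step_s, surv = [0.0], [1.0], 1.0
    for i, t in enumerate(t_sorted):
        if censored[i]:
            surv *= 1.0 - 1.0 / (n - i)
        step_t.append(t)
        step_s.append(surv)
    step_t, step_s = np.array(step_t), np.array(step_s)
    return step_s[np.searchsorted(step_t, t_grid, side="right") - 1]

def ipcw_brier(pred_event_prob, times, events, horizon):
    """IPCW Brier score at a fixed horizon for predicted event probabilities."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    p = np.asarray(pred_event_prob, float)
    g_ti = km_censoring(times, events, times)                    # G(T_i)
    g_t = km_censoring(times, events, np.array([horizon]))[0]    # G(horizon)
    had_event = (times <= horizon) & (events == 1)
    at_risk = times > horizon
    contrib = np.where(had_event, (1.0 - p) ** 2 / np.maximum(g_ti, 1e-12), 0.0)
    contrib += np.where(at_risk, p ** 2 / max(g_t, 1e-12), 0.0)
    return contrib.mean()   # subjects censored before the horizon contribute 0

t = np.array([2.0, 5.0, 3.0, 8.0, 6.0])
e = np.array([1, 0, 1, 0, 1])
p = np.array([0.7, 0.2, 0.6, 0.1, 0.5])
print(round(ipcw_brier(p, t, e, horizon=4.0), 3))
```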
Evaluating the Discriminant Accuracy of a Grammatical Measure With Spanish-Speaking Children
Gutiérrez-Clellen, Vera F.; Restrepo, M. Adelaida; Simón-Cereijido, Gabriela
2012-01-01
Purpose: The purpose of this study was to evaluate the discriminant accuracy of a grammatical measure for the identification of language impairment in Latino Spanish-speaking children. The authors hypothesized that if exposure to and use of English as a second language have an effect on the first language, bilingual children might exhibit lower rates of grammatical accuracy than their peers and be more likely to be misclassified. Method: Eighty children with typical language development and 80 with language impairment were sampled from 4 different geographical regions and compared using linear discriminant function analysis. Results: Results indicated fair-to-good sensitivity from 4;0 to 5;1 years, good sensitivity from 5;2 to 5;11 years, and poor sensitivity above age 6 years. The discriminant functions derived from the exploratory studies were able to predict group membership in confirmatory analyses with fair-to-excellent sensitivity up to age 6 years. Children who were bilingual did not show lower scores and were not more likely to be misclassified compared with their Spanish-only peers. Conclusions: The measure seems to be appropriate for identifying language impairment in either Spanish-dominant or Spanish-only speakers between 4 and 6 years of age. However, for older children, supplemental testing is necessary. PMID:17197491
Efficacy of "Dimodent" sex predictive equation assessed in an Indian population.
Bharti, A; Angadi, P V; Kale, A D; Hallikerimath, S R
2011-07-01
Teeth are considered a useful adjunct for sex assessment and may play an important role in constructing a post-mortem profile. The Dimodent method is based on the high degree of sex discrimination obtained with the mandibular canine and the high correlation coefficients between mandibular canine and lateral incisor mesiodistal (MD) and buccolingual (BL) dimensions. The method has been evaluated in French and Lebanese populations, but no study exists on its efficacy in Indians. Here, we applied the 'Dimodent' equation to an Indian sample (100 males, 100 females; age range 19-27 years). Additionally, a population-specific Dimodent equation was derived using logistic regression analysis and applied to our sample. Also, the sex determination potential of MD and BL measurements of mandibular lateral incisors and canines, individually, was assessed. We found poor sex assessment accuracy using the Dimodent equation of Fronty (34.5%) in our Indian sample, but the population-specific Dimodent equation gave better accuracy (72%). Thus, it appears that sexual dimorphism in teeth is population-specific; consequently, the Dimodent equation has to be derived individually in different populations for use in sex assessment. The mesiodistal measurement of the mandibular canine alone gave a marginally higher accuracy (72.5%); therefore, we suggest the use of mandibular canines alone rather than the Dimodent method.
Klink, P Christiaan; Jeurissen, Danique; Theeuwes, Jan; Denys, Damiaan; Roelfsema, Pieter R
2017-08-22
The richness of sensory input dictates that the brain must prioritize and select information for further processing and storage in working memory. Stimulus salience and reward expectations influence this prioritization but their relative contributions and underlying mechanisms are poorly understood. Here we investigate how the quality of working memory for multiple stimuli is determined by priority during encoding and later memory phases. Selective attention could, for instance, act as the primary gating mechanism when stimuli are still visible. Alternatively, observers might still be able to shift priorities across memories during maintenance or retrieval. To distinguish between these possibilities, we investigated how and when reward cues determine working memory accuracy and found that they were only effective during memory encoding. Previously learned, but currently non-predictive, color-reward associations had a similar influence, which gradually weakened without reinforcement. Finally, we show that bottom-up salience, manipulated through varying stimulus contrast, influences memory accuracy during encoding with a fundamentally different time-course than top-down reward cues. While reward-based effects required long stimulus presentation, the influence of contrast was strongest with brief presentations. Our results demonstrate how memory resources are distributed over memory targets and implicates selective attention as a main gating mechanism between sensory and memory systems.
Occlusal factors are not related to self-reported bruxism.
Manfredini, Daniele; Visscher, Corine M; Guarda-Nardini, Luca; Lobbezoo, Frank
2012-01-01
To estimate the contribution of various occlusal features of the natural dentition that may identify self-reported bruxers compared to nonbruxers. Two age- and sex-matched groups of self-reported bruxers (n = 67) and self-reported nonbruxers (n = 75) took part in the study. For each patient, the following occlusal features were clinically assessed: retruded contact position (RCP) to intercuspal contact position (ICP) slide length (< 2 mm was considered normal), vertical overlap (< 0 mm was considered an anterior open bite; > 4 mm, a deep bite), horizontal overlap (> 4 mm was considered a large horizontal overlap), incisor dental midline discrepancy (< 2 mm was considered normal), and the presence of a unilateral posterior crossbite, mediotrusive interferences, and laterotrusive interferences. A multiple logistic regression model was used to identify the significant associations between the assessed occlusal features (independent variables) and self-reported bruxism (dependent variable). Accuracy values to predict self-reported bruxism were unacceptable for all occlusal variables. The only variable remaining in the final regression model was laterotrusive interferences (P = .030). The percentage of explained variance for bruxism by the final multiple regression model was 4.6%. This model including only one occlusal factor showed low positive (58.1%) and negative predictive values (59.7%), thus showing a poor accuracy to predict the presence of self-reported bruxism (59.2%). This investigation suggested that the contribution of occlusion to the differentiation between bruxers and nonbruxers is negligible. This finding supports theories that advocate a much diminished role for peripheral anatomical-structural factors in the pathogenesis of bruxism.
Pitcher, Brandon; Alaqla, Ali; Noujeim, Marcel; Wealleans, James A; Kotsakis, Georgios; Chrepa, Vanessa
2017-03-01
Cone-beam computed tomographic (CBCT) analysis allows for 3-dimensional assessment of periradicular lesions and may facilitate preoperative periapical cyst screening. The purpose of this study was to develop and assess the predictive validity of a cyst screening method based on CBCT volumetric analysis alone or combined with designated radiologic criteria. Three independent examiners evaluated 118 presurgical CBCT scans from cases that underwent apicoectomies and had an accompanying gold standard histopathological diagnosis of either a cyst or granuloma. Lesion volume, density, and specific radiologic characteristics were assessed using specialized software. Logistic regression models with histopathological diagnosis as the dependent variable were constructed for cyst prediction, and receiver operating characteristic curves were used to assess the predictive validity of the models. A conditional inference binary decision tree based on a recursive partitioning algorithm was constructed to facilitate preoperative screening. Interobserver agreement was excellent for volume and density, but it varied from poor to good for the radiologic criteria. Volume and root displacement were strong predictors for cyst screening in all analyses. The binary decision tree classifier determined that if the volume of the lesion was >247 mm³, there was 80% probability of a cyst. If volume was <247 mm³ and root displacement was present, cyst probability was 60% (78% accuracy). The good accuracy and high specificity of the decision tree classifier renders it a useful preoperative cyst screening tool that can aid in clinical decision making but not a substitute for definitive histopathological diagnosis after biopsy. Confirmatory studies are required to validate the present findings. Published by Elsevier Inc.
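Because the reported tree reduces to two thresholds, it is easy to restate as a screening helper. The function below simply encodes the probabilities quoted in this abstract; it is an illustration, not a validated tool, and the abstract gives no estimate for small lesions without root displacement, so that branch returns nothing.

```python
def cyst_probability(volume_mm3: float, root_displacement: bool):
    """Decision-tree screening rule as reported: lesions larger than 247 mm^3 carry
    about 80% cyst probability; smaller lesions with root displacement, about 60%."""
    if volume_mm3 > 247:
        return 0.80
    if root_displacement:
        return 0.60
    return None  # no probability reported for this branch

print(cyst_probability(310.0, root_displacement=False))   # 0.8
print(cyst_probability(150.0, root_displacement=True))    # 0.6
```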
Chen, Mao-Gen; Wang, Xiao-Ping; Ju, Wei-Qiang; Zhao, Qiang; Wu, Lin-Wei; Ren, Qing-Qi; Guo, Zhi-Yong; Wang, Dong-Ping; Zhu, Xiao-Feng; Ma, Yi; He, Xiao-Shun
2017-01-01
Objectives: Elevated plasma fibrinogen (Fib) correlates with patient prognosis in several solid tumors. However, few studies have illuminated the relationship between preoperative Fib and prognosis of HCC after liver transplantation. We aimed to clarify the prognostic value of Fib and whether the prognostic accuracy can be enhanced by the combination of Fib and the neutrophil–lymphocyte ratio (NLR). Results: Fib was correlated with Child-Pugh stage, alpha-fetoprotein (AFP), size of largest tumor, and macro- and micro-vascular invasion. Univariate analysis showed that preoperative Fib, AFP, NLR, size of largest tumor, tumor number, and macro- and micro-vascular invasion were significantly associated with disease-free survival (DFS) and overall survival (OS) in HCC patients undergoing liver transplantation. After multivariate analysis, only Fib and macro-vascular invasion were independently correlated with DFS and OS. Survival analysis showed that preoperative Fib > 2.345 g/L predicted poor prognosis of HCC patients after liver transplantation. Preoperative Fib showed prognostic value in various subgroups of HCC. Furthermore, the predictive range was expanded by the combination of Fib and NLR. Materials and Methods: Data were collected retrospectively from 130 HCC patients who underwent liver transplantation. Preoperative Fib, NLR and clinicopathologic variables were analyzed. The survival analysis was performed by the Kaplan-Meier method, and compared by the log-rank test. Univariate and multivariate analyses were performed to identify the prognostic factors for DFS and OS. Conclusions: Preoperative Fib is an independent and effective predictor of prognosis for HCC patients; higher levels of Fib predict poorer outcomes, and the combination of Fib and NLR enhances the prognostic accuracy of testing. PMID:27935864
McConville, Anna; Law, Bradley S.; Mahony, Michael J.
2013-01-01
Habitat modelling and predictive mapping are important tools for conservation planning, particularly for lesser known species such as many insectivorous bats. However, the scale at which modelling is undertaken can affect the predictive accuracy and restrict the use of the model at different scales. We assessed the validity of existing regional-scale habitat models at a local-scale and contrasted the habitat use of two morphologically similar species with differing conservation status (Mormopterus norfolkensis and Mormopterus species 2). We used negative binomial generalised linear models created from indices of activity and environmental variables collected from systematic acoustic surveys. We found that habitat type (based on vegetation community) best explained activity of both species, which were more active in floodplain areas, with most foraging activity recorded in the freshwater wetland habitat type. The threatened M. norfolkensis avoided urban areas, which contrasts with M. species 2 which occurred frequently in urban bushland. We found that the broad habitat types predicted from local-scale models were generally consistent with those from regional-scale models. However, threshold-dependent accuracy measures indicated a poor fit and we advise caution be applied when using the regional models at a fine scale, particularly when the consequences of false negatives or positives are severe. Additionally, our study illustrates that habitat type classifications can be important predictors and we suggest they are more practical for conservation than complex combinations of raw variables, as they are easily communicated to land managers. PMID:23977296
Sert Kuniyoshi, Fatima H.; Zellmer, Mark R.; Calvin, Andrew D.; Lopez-Jimenez, Francisco; Albuquerque, Felipe N.; van der Walt, Christelle; Trombetta, Ivani C; Caples, Sean M.; Shamsuzzaman, Abu S.; Bukartyk, Jan; Konecny, Tomas; Gami, Apoor S.; Kara, Tomas
2011-01-01
Background: The Berlin Questionnaire (BQ) has been used to identify patients at high risk for sleep-disordered breathing (SDB) in a variety of populations. However, there are no data regarding the validity of the BQ in detecting the presence of SDB in patients after myocardial infarction (MI). The aim of this study was to determine the performance of the BQ in patients after MI. Methods: We conducted a cross-sectional study of 99 patients who had an MI 1 to 3 months previously. The BQ was administered, scored using the published methods, and followed by completed overnight polysomnography as the “gold standard.” SDB was defined as an apnea-hypopnea index of ≥ 5 events/h. The sensitivity, specificity, and positive and negative predictive values of the BQ were calculated. Results: Of the 99 patients, the BQ identified 64 (65%) as being at high-risk for having SDB. Overnight polysomnography showed that 73 (73%) had SDB. The BQ sensitivity and specificity was 0.68 and 0.34, respectively, with a positive predictive value of 0.68 and a negative predictive value of 0.50. Positive and negative likelihood ratios were 1.27 and 0.68, respectively, and the BQ overall diagnostic accuracy was 63%. Using different apnea-hypopnea index cutoff values did not meaningfully alter these results. Conclusion: The BQ performed with modest sensitivity, but the specificity was poor, suggesting that the BQ is not ideal in identifying SDB in patients with a recent MI. PMID:21596794
Bio-knowledge based filters improve residue-residue contact prediction accuracy.
Wozniak, P P; Pelc, J; Skrzypecki, M; Vriend, G; Kotulska, M
2018-05-29
Residue-residue contact prediction through direct coupling analysis has reached impressive accuracy, but yet higher accuracy will be needed to allow for routine modelling of protein structures. One way to improve the prediction accuracy is to filter predicted contacts using knowledge about the particular protein of interest or knowledge about protein structures in general. We focus on the latter and discuss a set of filters that can be used to remove false positive contact predictions. Each filter depends on one or a few cut-off parameters for which the filter performance was investigated. Combining all filters while using default parameters resulted for a test-set of 851 protein domains in the removal of 29% of the predictions of which 92% were indeed false positives. All data and scripts are available from http://comprec-lin.iiar.pwr.edu.pl/FPfilter/. malgorzata.kotulska@pwr.edu.pl. Supplementary data are available at Bioinformatics online.
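Filters of this kind are typically simple structural rules applied to the ranked contact list. The sketch below is generic and illustrative; the rules and cut-off values are placeholders and are not the filters or defaults described in the paper.

```python
def filter_contacts(contacts, min_seq_sep=5, max_per_residue=10, min_score=0.2):
    """Remove likely false-positive predicted contacts.
    `contacts` is a list of (i, j, score) tuples from a coupling-analysis predictor."""
    kept, per_residue = [], {}
    for i, j, score in sorted(contacts, key=lambda c: -c[2]):
        if abs(i - j) < min_seq_sep:       # drop trivially close pairs in sequence
            continue
        if score < min_score:              # drop weak couplings
            continue
        if per_residue.get(i, 0) >= max_per_residue or per_residue.get(j, 0) >= max_per_residue:
            continue                       # cap the number of contacts per residue
        kept.append((i, j, score))
        per_residue[i] = per_residue.get(i, 0) + 1
        per_residue[j] = per_residue.get(j, 0) + 1
    return kept

print(filter_contacts([(3, 6, 0.9), (10, 80, 0.7), (10, 81, 0.15), (25, 60, 0.5)]))
```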
Protein contact prediction using patterns of correlation.
Hamilton, Nicholas; Burrage, Kevin; Ragan, Mark A; Huber, Thomas
2004-09-01
We describe a new method for using neural networks to predict residue contact pairs in a protein. The main inputs to the neural network are a set of 25 measures of correlated mutation between all pairs of residues in two "windows" of size 5 centered on the residues of interest. While the individual pair-wise correlations are a relatively weak predictor of contact, by training the network on windows of correlation the accuracy of prediction is significantly improved. The neural network is trained on a set of 100 proteins and then tested on a disjoint set of 1033 proteins of known structure. An average predictive accuracy of 21.7% is obtained taking the best L/2 predictions for each protein, where L is the sequence length. Taking the best L/10 predictions gives an average accuracy of 30.7%. The predictor is also tested on a set of 59 proteins from the CASP5 experiment. The accuracy is found to be relatively consistent across different sequence lengths, but to vary widely according to the secondary structure. Predictive accuracy is also found to improve by using multiple sequence alignments containing many sequences to calculate the correlations. Copyright 2004 Wiley-Liss, Inc.
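The evaluation used here, the accuracy of the best L/2 or L/10 predictions, is easy to restate: rank the predicted pairs by score, keep the top L/k, and count how many are true contacts. A minimal sketch with toy data:

```python
def top_lk_accuracy(predictions, true_contacts, seq_len, k=2):
    """Fraction of the L/k highest-scoring predicted pairs that are true contacts."""
    n_top = max(1, seq_len // k)
    ranked = sorted(predictions, key=lambda c: -c[2])[:n_top]
    hits = sum((i, j) in true_contacts or (j, i) in true_contacts for i, j, _ in ranked)
    return hits / n_top

preds = [(2, 30, 0.9), (5, 33, 0.8), (12, 38, 0.7), (8, 21, 0.6), (3, 17, 0.5)]
truth = {(2, 30), (12, 38)}
print(top_lk_accuracy(preds, truth, seq_len=40, k=10))   # best L/10 = 4 pairs -> 0.5
```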
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics that included measures of sensitivity, specificity, positive and negative predictive power, and positive likelihood ratio were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery, for although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
Correcting Memory Improves Accuracy of Predicted Task Duration
ERIC Educational Resources Information Center
Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.
2008-01-01
People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…
Tang, Jing-Hua; An, Xin; Lin, Xi; Gao, Yuan-Hong; Liu, Guo-Chen; Kong, Ling-Heng; Pan, Zhi-Zhong; Ding, Pei-Rong
2015-10-20
Patients with pathological complete remission (pCR) after treatment with neoadjuvant chemoradiotherapy (nCRT) have better long-term outcomes and may receive conservative treatments in locally advanced rectal cancer (LARC). The study aimed to evaluate the value of forceps biopsy and core needle biopsy in the prediction of pCR in LARC treated with nCRT. In total, 120 patients entered this study. Sixty-one consecutive patients received preoperative forceps biopsy during endoscopic examination. Ex vivo core needle biopsy was performed in resected specimens of another 43 consecutive patients. The accuracy of ex vivo core needle biopsy was significantly higher than that of forceps biopsy (76.7% vs. 36.1%; p < 0.001). The sensitivity of ex vivo core needle biopsy was significantly lower in good responders (TRG 3) than in poor responders (TRG ≤ 2) (52.9% vs. 94.1%; p = 0.017). In vivo core needle biopsy was further performed in 16 patients with good response. Eleven patients had residual cancer cells in final resected specimens, among whom 4 (36.4%) were biopsy positive. In conclusion, routine forceps biopsy was of limited value in identifying pCR after nCRT. Although core needle biopsy might further identify a subset of patients with residual cancer cells, the accuracy was not substantially increased in good responders.
Forecasting malaria in a highly endemic country using environmental and clinical predictors.
Zinszer, Kate; Kigozi, Ruth; Charland, Katia; Dorsey, Grant; Brewer, Timothy F; Brownstein, John S; Kamya, Moses R; Buckeridge, David L
2015-06-18
Malaria thrives in poor tropical and subtropical countries where local resources are limited. Accurate disease forecasts can provide public and clinical health services with the information needed to implement targeted approaches for malaria control that make effective use of limited resources. The objective of this study was to determine the relevance of environmental and clinical predictors of malaria across different settings in Uganda. Forecasting models were based on health facility data collected by the Uganda Malaria Surveillance Project and satellite-derived rainfall, temperature, and vegetation estimates from 2006 to 2013. Facility-specific forecasting models of confirmed malaria were developed using multivariate autoregressive integrated moving average models and produced weekly forecast horizons over a 52-week forecasting period. The model with the most accurate forecasts varied by site and by forecast horizon. Clinical predictors were retained in the models with the highest predictive power for all facility sites. The average error over the 52 forecasting horizons ranged from 26 to 128% whereas the cumulative burden forecast error ranged from 2 to 22%. Clinical data, such as drug treatment, could be used to improve the accuracy of malaria predictions in endemic settings when coupled with environmental predictors. Further exploration of malaria forecasting is necessary to improve its accuracy and value in practice, including examining other environmental and intervention predictors, including insecticide-treated nets.
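The facility-level models described here are ARIMA models with exogenous environmental regressors. A rough sketch using statsmodels' SARIMAX on simulated weekly data; the series, the rainfall lag, and the model order are assumptions for illustration, not the study's specifications.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical weekly series for one facility: confirmed cases plus rainfall and temperature.
rng = np.random.default_rng(5)
weeks = 8 * 52
rain = rng.gamma(2.0, 10.0, size=weeks)
temp = 25 + 3 * np.sin(np.arange(weeks) * 2 * np.pi / 52) + rng.normal(0, 0.5, weeks)
cases = 50 + 0.8 * np.roll(rain, 4) + 2 * (temp - 25) + rng.normal(0, 5, weeks)

exog = pd.DataFrame({"rain_lag4": np.roll(rain, 4), "temp": temp})
train, test = slice(0, weeks - 52), slice(weeks - 52, weeks)

res = SARIMAX(cases[train], exog=exog.iloc[train], order=(1, 0, 1)).fit(disp=False)
forecast = res.forecast(steps=52, exog=exog.iloc[test])
mape = np.mean(np.abs(forecast - cases[test]) / np.maximum(cases[test], 1)) * 100
print(f"mean absolute percentage error over the 52-week horizon: {mape:.0f}%")
```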
Khan, Momna; Sultana, Syeda Seema; Jabeen, Nigar; Arain, Uzma; Khans, Salma
2015-02-01
To determine the diagnostic accuracy of visual inspection of the cervix using 3% acetic acid as a screening test for early detection of cervical cancer, taking histopathology as the gold standard. The cross-sectional study was conducted at Civil Hospital Karachi from July 1 to December 31, 2012 and comprised all sexually active women aged 19-60 years. During speculum examination, 3% acetic acid was applied over the cervix with the help of a cotton swab. The observations were noted as positive or negative on visual inspection of the cervix after acetic acid application according to acetowhite changes. Colposcopy-guided cervical biopsy was done in patients with a positive result or an abnormal-looking cervix. Colposcopy-directed biopsy was taken as the gold standard to assess visual inspection readings. SPSS 17 was used for statistical analysis. There were 500 subjects with a mean age of 35.74 ± 9.64 years. The sensitivity, specificity, positive predictive value, and negative predictive value of visual inspection of the cervix after acetic acid application were 93.5%, 95.8%, 76.3%, and 99%, respectively, and the diagnostic accuracy was 95.6%. Visual inspection of the cervix after acetic acid application is an effective method of detecting the pre-invasive phase of cervical cancer and a good alternative to cytological screening for cervical cancer in resource-poor settings like Pakistan, and it can reduce maternal morbidity and mortality.
Zhang, Liqin; Yan, Ye; Han, Cha; Xue, Fengxia
2018-01-01
Objective: To evaluate the diagnostic accuracy of the 2011 International Federation for Cervical Pathology and Colposcopy (IFCPC) colposcopic terminology. Methods: The clinicopathological data of 2262 patients who underwent colposcopy from September 2012 to September 2016 were reviewed. The colposcopic findings, colposcopic impression, and cervical histopathology of the patients were analyzed. Correlations between variables were evaluated using cervical histopathology as the gold standard. Results: Colposcopic diagnosis matched biopsy histopathology in 1482 patients (65.5%), and the weighted kappa strength of agreement was 0.480 (P<0.01). Colposcopic diagnoses more often underestimated (22.1%) than overestimated (12.3%) cervical pathology. There was no significant difference in the agreement between colposcopic diagnosis and cervical pathology among the various grades of lesions (P=0.282). The sensitivity and specificity for detecting high-grade lesions/carcinoma were 71.6% and 98.0%, respectively. Multivariate analysis showed that major changes were independent factors in predicting high-grade lesion/carcinoma, whereas transformation zone, lesion size, and non-stained areas were not statistically related to high-grade lesion/carcinoma. Conclusions: The 2011 IFCPC terminology can improve the diagnostic accuracy for all lesion severities. The categorization of major changes and minor changes is appropriate. However, colposcopic diagnosis remains unsatisfactory. Poor reproducibility of the type 2 transformation zone and the significance of leukoplakia require further study. PMID:29507681
ERIC Educational Resources Information Center
Nation, Kate; Cocksey, Joanne; Taylor, Jo S. H.; Bishop, Dorothy V. M.
2010-01-01
Background: Poor comprehenders have difficulty comprehending connected text, despite having age-appropriate levels of reading accuracy and fluency. We used a longitudinal design to examine earlier reading and language skills in children identified as poor comprehenders in mid-childhood. Method: Two hundred and forty-two children began the study at…
ERIC Educational Resources Information Center
Rabia, Salim Abu; Siegel, Linda S.
1995-01-01
Investigates whether Arabic orthography differs from an alphabetic orthography regarding context effects among poor and skilled readers. Finds that skilled as well as poor readers significantly improved their reading accuracy when they read voweled and unvoweled words in context and that skilled readers significantly improved their reading voweled…
NASA Astrophysics Data System (ADS)
Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas
2016-12-01
This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly-developed technique for Custom baseline removal (BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of Custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and varying analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.
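One of the baseline-removal techniques compared above, asymmetric least squares, can be sketched in a few lines; this is a simplified textbook-style implementation with illustrative smoothing and asymmetry parameters, not the authors' optimized pipeline, and the synthetic spectrum is invented.

```python
# Simplified asymmetric least squares (ALS) baseline estimation for a 1-D spectrum.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Eilers-style ALS baseline; lam controls smoothness, p the asymmetry."""
    n = y.size
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))  # 2nd-difference operator
    w = np.ones(n)
    z = y
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + lam * D.T @ D).tocsc(), w * y)
        w = p * (y > z) + (1 - p) * (y < z)   # down-weight points above the baseline (peaks)
    return z

# Synthetic spectrum: broad continuum plus a narrow emission line (illustrative only).
x = np.linspace(0, 1, 2000)
spectrum = 50 * np.exp(-((x - 0.5) / 0.4) ** 2) + 10 * np.exp(-((x - 0.3) / 0.002) ** 2)
corrected = spectrum - als_baseline(spectrum)
print(f"baseline-corrected peak height near x=0.3: {corrected.max():.1f}")
```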
Biomarker Surrogates Do Not Accurately Predict Sputum Eosinophils and Neutrophils in Asthma
Hastie, Annette T.; Moore, Wendy C.; Li, Huashi; Rector, Brian M.; Ortega, Victor E.; Pascual, Rodolfo M.; Peters, Stephen P.; Meyers, Deborah A.; Bleecker, Eugene R.
2013-01-01
Background Sputum eosinophils (Eos) are a strong predictor of airway inflammation, exacerbations, and aid asthma management, whereas sputum neutrophils (Neu) indicate a different severe asthma phenotype, potentially less responsive to TH2-targeted therapy. Variables such as blood Eos, total IgE, fractional exhaled nitric oxide (FeNO) or FEV1% predicted, may predict airway Eos, while age, FEV1%predicted, or blood Neu may predict sputum Neu. Availability and ease of measurement are useful characteristics, but accuracy in predicting airway Eos and Neu, individually or combined, is not established. Objectives To determine whether blood Eos, FeNO, and IgE accurately predict sputum eosinophils, and age, FEV1% predicted, and blood Neu accurately predict sputum neutrophils (Neu). Methods Subjects in the Wake Forest Severe Asthma Research Program (N=328) were characterized by blood and sputum cells, healthcare utilization, lung function, FeNO, and IgE. Multiple analytical techniques were utilized. Results Despite significant association with sputum Eos, blood Eos, FeNO and total IgE did not accurately predict sputum Eos, and combinations of these variables failed to improve prediction. Age, FEV1%predicted and blood Neu were similarly unsatisfactory for prediction of sputum Neu. Factor analysis and stepwise selection found FeNO, IgE and FEV1% predicted, but not blood Eos, correctly predicted 69% of sputum Eos
Training improves interobserver reliability for the diagnosis of scaphoid fracture displacement.
Buijze, Geert A; Guitton, Thierry G; van Dijk, C Niek; Ring, David
2012-07-01
The diagnosis of displacement in scaphoid fractures is notorious for poor interobserver reliability. We tested whether training can improve interobserver reliability and sensitivity, specificity, and accuracy for the diagnosis of scaphoid fracture displacement on radiographs and CT scans. Sixty-four orthopaedic surgeons rated a set of radiographs and CT scans of 10 displaced and 10 nondisplaced scaphoid fractures for the presence of displacement, using a web-based rating application. Before rating, observers were randomized to a training group (34 observers) and a nontraining group (30 observers). The training group received an online training module before the rating session, and the nontraining group did not. Interobserver reliability for training and nontraining was assessed by Siegel's multirater kappa and the Z-test was used to test for significance. There was a small, but significant difference in the interobserver reliability for displacement ratings in favor of the training group compared with the nontraining group. Ratings of radiographs and CT scans combined resulted in moderate agreement for both groups. The average sensitivity, specificity, and accuracy of diagnosing displacement of scaphoid fractures were, respectively, 83%, 85%, and 84% for the nontraining group and 87%, 86%, and 87% for the training group. Assuming a 5% prevalence of fracture displacement, the positive predictive value was 0.23 in the nontraining group and 0.25 in the training group. The negative predictive value was 0.99 in both groups. Our results suggest training can improve interobserver reliability and sensitivity, specificity and accuracy for the diagnosis of scaphoid fracture displacement, but the improvements are slight. These findings are encouraging for future research regarding interobserver variation and how to reduce it further.
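The prevalence-adjusted predictive values quoted above follow from Bayes' rule; the sketch below applies the reported non-training-group sensitivity (83%) and specificity (85%) at the assumed 5% prevalence and reproduces PPV ≈ 0.23 and NPV ≈ 0.99.

```python
# Prevalence-adjusted predictive values via Bayes' rule.
def predictive_values(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Non-training group from the abstract: sensitivity 83%, specificity 85%, prevalence 5%.
ppv, npv = predictive_values(0.83, 0.85, 0.05)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # ~0.23 and ~0.99, matching the reported values
```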
Makam, Anil N; Nguyen, Oanh K; Auerbach, Andrew D
2015-06-01
Although timely treatment of sepsis improves outcomes, delays in administering evidence-based therapies are common. To determine whether automated real-time electronic sepsis alerts can: (1) accurately identify sepsis and (2) improve process measures and outcomes. We systematically searched MEDLINE, Embase, The Cochrane Library, and Cumulative Index to Nursing and Allied Health Literature from database inception through June 27, 2014. Included studies that empirically evaluated 1 or both of the prespecified objectives. Two independent reviewers extracted data and assessed the risk of bias. Diagnostic accuracy of sepsis identification was measured by sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and likelihood ratio (LR). Effectiveness was assessed by changes in sepsis care process measures and outcomes. Of 1293 citations, 8 studies met inclusion criteria, 5 for the identification of sepsis (n = 35,423) and 5 for the effectiveness of sepsis alerts (n = 6894). Though definition of sepsis alert thresholds varied, most included systemic inflammatory response syndrome criteria ± evidence of shock. Diagnostic accuracy varied greatly, with PPV ranging from 20.5% to 53.8%, NPV 76.5% to 99.7%, LR+ 1.2 to 145.8, and LR- 0.06 to 0.86. There was modest evidence for improvement in process measures (ie, antibiotic escalation), but only among patients in non-critical care settings; there were no corresponding improvements in mortality or length of stay. Minimal data were reported on potential harms due to false positive alerts. Automated sepsis alerts derived from electronic health data may improve care processes but tend to have poor PPV and do not improve mortality or length of stay. © 2015 Society of Hospital Medicine.
Palucci Vieira, Luiz H; de Andrade, Vitor L; Aquino, Rodrigo L; Moraes, Renato; Barbieri, Fabio A; Cunha, Sérgio A; Bedo, Bruno L; Santiago, Paulo R
2017-12-01
The main aim of this study was to verify the relationship between the classification of coaches and actual performance in field tests that measure kicking performance in young soccer players, using the K-means clustering technique. Twenty-three U-14 players performed 8 tests to measure their kicking performance. Four experienced coaches rated each player as follows: 1: poor; 2: below average; 3: average; 4: very good; 5: excellent, on three parameters (i.e. accuracy, power and ability to put spin on the ball). The score intervals established from the k-means clustering were useful for defining five performance-level groups, since ANOVA revealed significant differences between the generated clusters (P<0.01). Accuracy seems to be moderately predicted by the penalty kick, free kick, kicking the ball rolling and Wall Volley Test (0.44≤r≤0.56), while the ability to put spin on the ball can be measured by the free kick and the corner kick tests (0.52≤r≤0.61). Body measurements, age and PHV did not systematically influence the performance. The Wall Volley Test seems to be a good predictor of the other tests. Five tests showed reasonable construct validity and can be used to predict accuracy (penalty kick, free kick, kicking a rolling ball and Wall Volley Test) and the ability to put spin on the ball (free kick and corner kick tests) when kicking in soccer. In contrast, the goal kick, kicking the ball when airborne and the vertical kick tests exhibited low power of discrimination, and using them should be viewed with caution.
Makam, Anil N.; Nguyen, Oanh K.; Auerbach, Andrew D.
2015-01-01
Background Although timely treatment of sepsis improves outcomes, delays in administering evidence-based therapies are common. Purpose To determine whether automated real-time electronic sepsis alerts can: 1) accurately identify sepsis, and 2) improve process measures and outcomes. Data Sources We systematically searched MEDLINE, Embase, The Cochrane Library, and CINAHL from database inception through June 27, 2014. Study Selection Included studies that empirically evaluated one or both of the prespecified objectives. Data Extraction Two independent reviewers extracted data and assessed the risk of bias. Diagnostic accuracy of sepsis identification was measured by sensitivity, specificity, positive (PPV) and negative predictive values (NPV) and likelihood ratios (LR). Effectiveness was assessed by changes in sepsis care process measures and outcomes. Data Synthesis Of 1,293 citations, 8 studies met inclusion criteria, 5 for the identification of sepsis (n=35,423) and 5 for the effectiveness of sepsis alerts (n=6,894). Though definition of sepsis alert thresholds varied, most included systemic inflammatory response syndrome criteria ± evidence of shock. Diagnostic accuracy varied greatly, with PPV ranging from 20.5-53.8%, NPV 76.5-99.7%; LR+ 1.2-145.8; and LR- 0.06-0.86. There was modest evidence for improvement in process measures (i.e., antibiotic escalation), but only among patients in non-critical care settings; there were no corresponding improvements in mortality or length of stay. Minimal data were reported on potential harms due to false positive alerts. Conclusions Automated sepsis alerts derived from electronic health data may improve care processes but tend to have poor positive predictive value and do not improve mortality or length of stay. PMID:25758641
Schoene, Daniel; Wu, Sandy M-S; Mikolaizak, A Stefanie; Menant, Jasmine C; Smith, Stuart T; Delbaere, Kim; Lord, Stephen R
2013-02-01
To investigate the discriminative ability and diagnostic accuracy of the Timed Up and Go Test (TUG) as a clinical screening instrument for identifying older people at risk of falling. Systematic literature review and meta-analysis. People aged 60 and older living independently or in institutional settings. Studies were identified with searches of the PubMed, EMBASE, CINAHL, and Cochrane CENTRAL databases. Retrospective and prospective cohort studies comparing times to complete any version of the TUG of fallers and non-fallers were included. Fifty-three studies with 12,832 participants met the inclusion criteria. The pooled mean difference between fallers and non-fallers depended on the functional status of the cohort investigated: 0.63 seconds (95% confidence interval (CI) = 0.14-1.12 seconds) for high-functioning cohorts to 3.59 seconds (95% CI = 2.18-4.99 seconds) for those in institutional settings. The majority of studies did not retain TUG scores in multivariate analysis. Derived cut-points varied greatly between studies, and with the exception of a few small studies, diagnostic accuracy was poor to moderate. The findings suggest that the TUG is not useful for discriminating fallers from non-fallers in healthy, high-functioning older people but is of more value in less-healthy, lower-functioning older people. Overall, the predictive ability and diagnostic accuracy of the TUG are at best moderate. No cut-point can be recommended. Quick, multifactorial fall risk screens should be considered to provide additional information for identifying older people at risk of falls. © 2013, Copyright the Authors Journal compilation © 2013, The American Geriatrics Society.
Jiang, Y; Zhao, Y; Rodemann, B; Plieske, J; Kollers, S; Korzun, V; Ebmeyer, E; Argillier, O; Hinze, M; Ling, J; Röder, M S; Ganal, M W; Mette, M F; Reif, J C
2015-03-01
Genome-wide mapping approaches in diverse populations are powerful tools to unravel the genetic architecture of complex traits. The main goals of our study were to investigate the potential and limits to unravel the genetic architecture and to identify the factors determining the accuracy of prediction of the genotypic variation of Fusarium head blight (FHB) resistance in wheat (Triticum aestivum L.) based on data collected with a diverse panel of 372 European varieties. The wheat lines were phenotyped in multi-location field trials for FHB resistance and genotyped with 782 simple sequence repeat (SSR) markers, and 9k and 90k single-nucleotide polymorphism (SNP) arrays. We applied genome-wide association mapping in combination with fivefold cross-validations and observed surprisingly high accuracies of prediction for marker-assisted selection based on the detected quantitative trait loci (QTLs). Using a random sample of markers not selected for marker-trait associations revealed only a slight decrease in prediction accuracy compared with marker-based selection exploiting the QTL information. The same picture was confirmed in a simulation study, suggesting that relatedness is a main driver of the accuracy of prediction in marker-assisted selection of FHB resistance. When the accuracy of prediction of three genomic selection models was contrasted for the three marker data sets, no significant differences in accuracies among marker platforms and genomic selection models were observed. Marker density impacted the accuracy of prediction only marginally. Consequently, genomic selection of FHB resistance can be implemented most cost-efficiently based on low- to medium-density SNP arrays.
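A minimal sketch of the fivefold cross-validation logic used to estimate prediction accuracy, with ridge regression on simulated marker genotypes standing in for the study's association-based and genomic selection models; the population size, marker count, QTL number, and heritability are illustrative assumptions.

```python
# Fivefold cross-validated genomic prediction accuracy (illustrative simulation,
# not the wheat FHB data): ridge regression on SNP genotypes, accuracy = r(pred, obs).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n_lines, n_snps, n_qtl = 372, 2000, 50
X = rng.binomial(2, 0.3, size=(n_lines, n_snps)).astype(float)   # 0/1/2 genotype codes
beta = np.zeros(n_snps)
beta[rng.choice(n_snps, n_qtl, replace=False)] = rng.normal(0, 1, n_qtl)
g = X @ beta
y = g + rng.normal(0, g.std(), n_lines)                          # heritability ~0.5 by construction

accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=n_snps / 10).fit(X[train], y[train])     # crude shrinkage choice
    accs.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])

print(f"mean prediction accuracy over folds: {np.mean(accs):.2f}")
```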
Prediction algorithms for urban traffic control
DOT National Transportation Integrated Search
1979-02-01
The objectives of this study are to 1) review and assess the state-of-the-art of prediction algorithms for urban traffic control in terms of their accuracy and application, and 2) determine the prediction accuracy obtainable by examining the performa...
Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model
NASA Astrophysics Data System (ADS)
Wang, Qijie
2015-08-01
The accuracy of medium- and long-term prediction of length of day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence and, in particular, greatly improves the resolution of the signal's low-frequency components, so it can improve prediction efficiency. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum improvement of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
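Only the conventional AR baseline is straightforward to sketch with standard libraries (the leap-step variant is not a stock routine); the example below fits an autoregressive model to a synthetic LOD-like series and reports mean absolute error at two horizons, purely for illustration.

```python
# Baseline AR forecast for an LOD-like series (synthetic data; the leap-step AR
# variant described in the abstract is not implemented here).
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(2)
t = np.arange(2000)
lod = (1.5 + 0.3 * np.sin(2 * np.pi * t / 365.25)      # annual term (ms)
       + 0.1 * np.sin(2 * np.pi * t / 13.66)           # fortnightly tidal term
       + np.cumsum(rng.normal(0, 0.005, t.size)))      # slow wander

train, horizon = lod[:-90], 90
fit = AutoReg(train, lags=60).fit()
forecast = fit.predict(start=len(train), end=len(train) + horizon - 1)

mae_30 = np.abs(forecast[:30] - lod[-90:-60]).mean()
mae_90 = np.abs(forecast - lod[-90:]).mean()
print(f"MAE at 30-day horizon: {mae_30:.3f} ms, over 90 days: {mae_90:.3f} ms")
```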
Lee, S Hong; Clark, Sam; van der Werf, Julius H J
2017-01-01
Genomic prediction is emerging in a wide range of fields including animal and plant breeding, risk prediction in human precision medicine, and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consist of information sources with varying degrees of relationship to the target individuals. A reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population in the genomic prediction. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be a function of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available for each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than less-related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
Low elementary movement speed is associated with poor motor skill in Turner's syndrome.
Nijhuis-van der Sanden, Maria W G; Smits-Engelsman, Bouwien C M; Eling, Paul A T M; Nijhuis, Bianca J G; Van Galen, Gerard P
2002-01-01
The article aims to discriminate between 2 features that in principle may both be characteristic of the frequently observed poor motor performance in girls with Turner's syndrome (TS): on the one hand, a reduced movement speed that is independent of variations in spatial accuracy demands and therefore suggests a problem in motor execution; on the other hand, a disproportional slowing of movement speed under spatial-accuracy demands, indicating a more central problem in motor programming. To assess their motor performance problems, 15 girls with TS (age 9.6-13.0 years) and 14 female controls (age 9.1-13.0 years) were tested using the Movement Assessment Battery for Children (MABC). Additionally, an experimental procedure using a variant of Fitts' graphic aiming task was used to disentangle the role of spatial-accuracy demands in different motor task conditions. The results of the MABC reestablish that overall motor performance in girls with TS is poor. The data from the Fitts' task reveal that TS girls move with the same accuracy as their normal peers but show a significantly lower speed independent of task difficulty. We conclude that a problem in motor execution is the main factor determining performance differences between girls with TS and controls.
Esfandiari, Hamed; Pakravan, Mohammad; Loewen, Nils A; Yaseri, Mehdi
2017-01-01
Background: To determine the predictive value of postoperative bleb morphological features and intraocular pressure (IOP) on the success rate of trabeculectomy. Methods: In this prospective interventional case series, we analyzed for one year 80 consecutive primary open angle glaucoma patients who underwent mitomycin-augmented trabeculectomy. Bleb morphology was scored using the Indiana bleb appearance grading scale (IBAGS). Success was defined as IOP ≤15 mmHg at 12 months. We applied a multivariable regression analysis and determined the area under the receiver operating characteristic curve (AUC). Results: The mean age of participants was 62±12.3 years in the success group and 63.2±16.3 years in the failure group (P=0.430), with equal gender distribution (P=0.911). IOPs on days 1, 7 and 30 were similar in both groups (P=0.193, 0.639, and 0.238, respectively). The AUC of IOP at days 1, 7 and 30 for predicting a successful outcome was 0.355, 0.452, and 0.80, respectively. The AUC for the bleb morphology parameters of bleb height, extension, and vascularization on day 14 was 0.368, 0.408, and 0.549, respectively; values for day 30 were 0.428, 0.563, and 0.654. IOP change from day 1 to day 30 was a good predictor of failure (AUC=0.838, 95% CI: 0.704 to 0.971), with a change of more than 3 mmHg predicting failure with a sensitivity of 82.5% (95% CI: 68 to 91%) and a specificity of 87.5% (95% CI: 53 to 98%). Conclusions: IOP on day 30 had fair to good accuracy, while bleb features failed to predict success, except for bleb vascularity, which had poor to fair accuracy. An IOP increase of more than 3 mmHg during the first 30 days was a good predictor of failure.
Almeida Junior, Gustavo Luiz Gouvêa de; Clausell, Nadine; Garcia, Marcelo Iorio; Esporcatte, Roberto; Rangel, Fernando Oswaldo Dias; Rocha, Ricardo Mourilhe; Beck-da-Silva, Luis; Silva, Fabricio Braga da; Gorgulho, Paula de Castro Carvalho; Xavier, Sergio Salles
2018-03-01
Physical examination and B-type natriuretic peptide (BNP) have been used to estimate hemodynamics and tailor therapy of acute decompensated heart failure (ADHF) patients. However, the correlation between these parameters and left ventricular filling pressures is controversial. This study was designed to evaluate the diagnostic accuracy of physical examination, chest radiography (CR) and BNP in estimating left atrial pressure (LAP) as assessed by tissue Doppler echocardiogram. Patients admitted with ADHF were prospectively assessed. Diagnostic characteristics of physical signs of heart failure, CR and BNP in predicting elevation (> 15 mm Hg) of LAP, alone or combined, were calculated. The Spearman test was used to analyze the correlation between non-normally distributed variables. The level of significance was 5%. Forty-three patients were included, with a mean age of 69.9 ± 11.1 years, left ventricular ejection fraction of 25 ± 8.0%, and BNP of 1057 ± 1024.21 pg/mL. Individually, all clinical, CR or BNP parameters had a poor performance in predicting LAP ≥ 15 mm Hg. A clinical score of congestion had the poorest performance [area under the receiver operating characteristic curve (AUC) 0.53], followed by clinical score + CR (AUC 0.60), clinical score + CR + BNP > 400 pg/mL (AUC 0.62), and clinical score + CR + BNP > 1000 pg/mL (AUC 0.66). Physical examination, CR and BNP had a poor performance in predicting a LAP ≥ 15 mm Hg. Using these parameters alone or in combination may lead to inaccurate estimation of hemodynamics.
On the accuracy of ERS-1 orbit predictions
NASA Technical Reports Server (NTRS)
Koenig, Rolf; Li, H.; Massmann, Franz-Heinrich; Raimondo, J. C.; Rajasenan, C.; Reigber, C.
1993-01-01
Since the launch of ERS-1, the D-PAF (German Processing and Archiving Facility) has regularly provided orbit predictions for the worldwide SLR (Satellite Laser Ranging) tracking network. The weekly distributed orbital elements are so-called tuned IRVs and tuned SAO elements. The tuning procedure, designed to improve the accuracy of the recovery of the orbit at the stations, is discussed based on numerical results. This shows that tuning of elements is essential for ERS-1 with the currently applied tracking procedures. The orbital elements are updated by daily distributed time bias functions. The generation of the time bias function is explained, and problems and numerical results are presented. The time bias function increases the prediction accuracy considerably. Finally, the quality assessment of ERS-1 orbit predictions is described. The accuracy has been compiled for about 250 days since launch; the average accuracy lies in the range of 50-100 ms and has improved considerably.
Krendl, Anne C; Rule, Nicholas O; Ambady, Nalini
2014-09-01
Young adults can be surprisingly accurate at making inferences about people from their faces. Although these first impressions have important consequences for both the perceiver and the target, it remains an open question whether first impression accuracy is preserved with age. Specifically, could age differences in impressions toward others stem from age-related deficits in accurately detecting complex social cues? Research on aging and impression formation suggests that young and older adults show relative consensus in their first impressions, but it is unknown whether they differ in accuracy. It has been widely shown that aging disrupts emotion recognition accuracy, and that these impairments may predict deficits in other social judgments, such as detecting deceit. However, it is unclear whether general impression formation accuracy (e.g., emotion recognition accuracy, detecting complex social cues) relies on similar or distinct mechanisms. It is important to examine this question to evaluate how, if at all, aging might affect overall accuracy. Here, we examined whether aging impaired first impression accuracy in predicting real-world outcomes and categorizing social group membership. Specifically, we studied whether emotion recognition accuracy and age-related cognitive decline (which has been implicated in exacerbating deficits in emotion recognition) predict first impression accuracy. Our results revealed that emotion recognition accuracy did not predict first impression accuracy, nor did age-related cognitive decline impair it. These findings suggest that domains of social perception outside of emotion recognition may rely on mechanisms that are relatively unimpaired by aging. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Posterior Predictive Checks for Conditional Independence between Response Time and Accuracy
ERIC Educational Resources Information Center
Bolsinova, Maria; Tijmstra, Jesper
2016-01-01
Conditional independence (CI) between response time and response accuracy is a fundamental assumption of many joint models for time and accuracy used in educational measurement. In this study, posterior predictive checks (PPCs) are proposed for testing this assumption. These PPCs are based on three discrepancy measures reflecting different…
The microcomputer scientific software series 4: testing prediction accuracy.
H. Michael Rauscher
1986-01-01
A computer program, ATEST, is described in this combination user's guide / programmer's manual. ATEST provides users with an efficient and convenient tool to test the accuracy of predictors. As input ATEST requires observed-predicted data pairs. The output reports the two components of accuracy, bias and precision.
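ATEST itself is a 1980s-era program, but the bias/precision decomposition it reports is easy to reproduce; the sketch below computes both components, plus RMSE, from a handful of hypothetical observed-predicted pairs.

```python
# Bias and precision from observed-predicted pairs (the two components of accuracy
# reported by ATEST); the data pairs here are hypothetical.
import numpy as np

observed  = np.array([12.1, 15.3, 9.8, 20.4, 17.6, 11.2])
predicted = np.array([11.4, 16.0, 10.5, 18.9, 18.2, 12.0])

errors = predicted - observed
bias = errors.mean()                          # systematic over- or under-prediction
precision = errors.std(ddof=1)                # spread of errors around the bias
rmse = np.sqrt((errors ** 2).mean())          # combined accuracy measure

print(f"bias = {bias:.2f}, precision (SD of errors) = {precision:.2f}, RMSE = {rmse:.2f}")
```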
Belay, T K; Dagnachew, B S; Boison, S A; Ådnøy, T
2018-03-28
Milk infrared spectra are routinely used for phenotyping traits of interest through links developed between the traits and spectra. Predicted individual traits are then used in genetic analyses for estimated breeding value (EBV) or for phenotypic predictions using a single-trait mixed model; this approach is referred to as indirect prediction (IP). An alternative approach [direct prediction (DP)] is a direct genetic analysis of (a reduced dimension of) the spectra using a multitrait model to predict multivariate EBV of the spectral components and, ultimately, also to predict the univariate EBV or phenotype for the traits of interest. We simulated 3 traits under different genetic (low: 0.10 to high: 0.90) and residual (zero to high: ±0.90) correlation scenarios between the 3 traits and assumed the first trait is a linear combination of the other 2 traits. The aim was to compare the IP and DP approaches for predictions of EBV and phenotypes under the different correlation scenarios. We also evaluated relationships between performances of the 2 approaches and the accuracy of calibration equations. Moreover, the effect of using different regression coefficients estimated from simulated phenotypes (βp), true breeding values (βg), and residuals (βr) on performance of the 2 approaches was evaluated. The simulated data contained 2,100 parents (100 sires and 2,000 cows) and 8,000 offspring (4 offspring per cow). Of the 8,000 observations, 2,000 were randomly selected and used to develop links between the first and the other 2 traits using partial least squares (PLS) regression analysis. The different PLS regression coefficients, such as βp, βg, and βr, were used in subsequent predictions following the IP and DP approaches. We used BLUP analyses for the remaining 6,000 observations using the true (co)variance components that had been used for the simulation. Accuracy of prediction (of EBV and phenotype) was calculated as the correlation between predicted and true values from the simulations. The results showed that accuracies of EBV prediction were higher in the DP than in the IP approach. The reverse was true for accuracy of phenotypic prediction when using βp but not when using βg and βr, where accuracy of phenotypic prediction in the DP was slightly higher than in the IP approach. Within the DP approach, accuracies of EBV when using βg were higher than when using βp only in the low genetic correlation scenario. However, we found no differences in EBV prediction accuracy between βp and βg in the IP approach. Accuracy of the calibration models increased with an increase in genetic and residual correlations between the traits. Performance of both approaches increased with an increase in accuracy of the calibration models. In conclusion, the DP approach is a good strategy for EBV prediction but not for phenotypic prediction, where the classical PLS regression-based equations or the IP approach provided better results. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
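The calibration step of the indirect-prediction route, regressing a trait on spectra with partial least squares, might look like the following sketch; the spectra, trait, and number of components are simulated stand-ins rather than the study's milk infrared data.

```python
# PLS calibration of a trait from spectra (indirect-prediction calibration step; simulated data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_wavenumbers = 2000, 300
spectra = rng.normal(0, 1, (n_samples, n_wavenumbers))
loadings = rng.normal(0, 1, n_wavenumbers)
trait = spectra @ loadings * 0.05 + rng.normal(0, 1, n_samples)   # weak spectral signal

X_cal, X_val, y_cal, y_val = train_test_split(spectra, trait, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
y_hat = pls.predict(X_val).ravel()

print(f"calibration accuracy r = {np.corrcoef(y_hat, y_val)[0, 1]:.2f}")
```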
van Kleef, R C; van Vliet, R C J A; van Rooijen, E M
2014-03-01
The Dutch basic health-insurance scheme for curative care includes a risk equalization model (RE-model) to compensate competing health insurers for the predictable high costs of people in poor health. Since 2004, this RE-model includes the so-called Diagnoses-based Cost Groups (DCGs) as a risk adjuster. Until 2013, these DCGs have been mainly based on diagnoses from inpatient hospital treatment. This paper examines (1) to what extent the Dutch RE-model can be improved by extending the inpatient DCGs with diagnoses from outpatient hospital treatment and (2) how to treat outpatient diagnoses relative to their corresponding inpatient diagnoses. Based on individual-level administrative costs we estimate the Dutch RE-model with three different DCG modalities. Using individual-level survey information from a prior year we examine the outcomes of these modalities for different groups of people in poor health. We find that extending DCGs with outpatient diagnoses has hardly any effect on the R-squared of the RE-model, but reduces the undercompensation for people with a chronic condition by about 8%. With respect to incentives, it may be preferable to make no distinction between corresponding inpatient and outpatient diagnoses in the DCG-classification, although this will be at the expense of the predictive accuracy of the RE-model. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Imputing data that are missing at high rates using a boosting algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cauthen, Katherine Regina; Lambert, Gregory; Ray, Jaideep
Traditional multiple imputation approaches may perform poorly for datasets with high rates of missingness unless many m imputations are used. This paper implements an alternative machine learning-based approach to imputing data that are missing at high rates. Here, we use boosting to create a strong learner from a weak learner fitted to a dataset missing many observations. This approach may be applied to a variety of types of learners (models). The approach is demonstrated by application to a spatiotemporal dataset for predicting dengue outbreaks in India from meteorological covariates. A Bayesian spatiotemporal CAR model is boosted to produce imputations, and the overall RMSE from a k-fold cross-validation is used to assess imputation accuracy.
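A rough, simplified analogue of this approach: fit a boosted learner to the observed rows and use it to impute the missing ones, then score imputation error against the truth. Here a gradient-boosted regressor stands in for the boosted Bayesian spatiotemporal CAR model, and the data are simulated rather than the dengue dataset the abstract references.

```python
# Model-based imputation of a response missing at a high rate (illustrative only):
# a gradient-boosted regressor is fitted to observed rows and predicts the missing ones.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
n = 3000
covariates = rng.normal(0, 1, (n, 5))                    # stand-ins for meteorological covariates
response = covariates @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(0, 1, n)

missing = rng.random(n) < 0.6                            # 60% of the response is missing
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(covariates[~missing], response[~missing])

imputed = response.copy()
imputed[missing] = model.predict(covariates[missing])
rmse = np.sqrt(np.mean((imputed[missing] - response[missing]) ** 2))
print(f"imputation RMSE on the missing entries: {rmse:.2f}")
```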
Neurocognitive and Behavioral Predictors of Math Performance in Children with and without ADHD
Antonini, Tanya N.; O’Brien, Kathleen M.; Narad, Megan E.; Langberg, Joshua M.; Tamm, Leanne; Epstein, Jeff N.
2014-01-01
Objective: This study examined neurocognitive and behavioral predictors of math performance in children with and without attention-deficit/hyperactivity disorder (ADHD). Method: Neurocognitive and behavioral variables were examined as predictors of 1) standardized mathematics achievement scores, 2) productivity on an analog math task, and 3) accuracy on an analog math task. Results: Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the Attentional Network Task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Conclusion: Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. PMID:24071774
Neurocognitive and Behavioral Predictors of Math Performance in Children With and Without ADHD.
Antonini, Tanya N; Kingery, Kathleen M; Narad, Megan E; Langberg, Joshua M; Tamm, Leanne; Epstein, Jeffery N
2016-02-01
This study examined neurocognitive and behavioral predictors of math performance in children with and without ADHD. Neurocognitive and behavioral variables were examined as predictors of (a) standardized mathematics achievement scores, (b) productivity on an analog math task, and (c) accuracy on an analog math task. Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the attentional network task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. © The Author(s) 2013.
Regional mapping of soil parent material by machine learning based on point data
NASA Astrophysics Data System (ADS)
Lacoste, Marine; Lemercier, Blandine; Walter, Christian
2011-10-01
A machine learning system (MART) has been used to predict soil parent material (SPM) at the regional scale with a 50-m resolution. The use of point-specific soil observations as training data was tested as a replacement for the soil maps introduced in previous studies, with the aim of generating a more even distribution of training data over the study area and reducing information uncertainty. The 27,020-km² study area (Brittany, northwestern France) contains mainly metamorphic, igneous and sedimentary substrates. However, superficial deposits (aeolian loam, colluvial and alluvial deposits) very often represent the actual SPM and are typically under-represented in existing geological maps. In order to calibrate the predictive model, a total of 4920 point soil descriptions were used as training data along with 17 environmental predictors (terrain attributes derived from a 50-m DEM, as well as emissions of K, Th and U obtained by means of airborne gamma-ray spectrometry, geological variables at the 1:250,000 scale and land use maps obtained by remote sensing). Model predictions were then compared: i) during SPM model creation to point data not used in model calibration (internal validation), ii) to the entire point dataset (point validation), and iii) to existing detailed soil maps (external validation). The internal, point and external validation accuracy rates were 56%, 81% and 54%, respectively. Aeolian loam was one of the three most closely predicted substrates. Poor prediction results were associated with uncommon materials and areas with high geological complexity, i.e. areas where existing maps used for external validation were also imprecise. The resultant predictive map turned out to be more accurate than existing geological maps and moreover indicated surface deposits whose spatial coverage is consistent with actual knowledge of the area. This method proves quite useful in predicting SPM within areas where conventional mapping techniques might be too costly or lengthy or where soil maps are insufficient for use as training data. In addition, this method allows producing repeatable and interpretable results, whose accuracy can be assessed objectively.
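The MART-style workflow, boosted trees trained on point observations of the target class with environmental covariates and scored by point validation, can be sketched as follows; the covariates, the class-generating rule, and all parameters are invented for illustration and do not reproduce the study's data.

```python
# Boosted-tree prediction of soil parent material classes from environmental covariates
# (synthetic data; covariate names and the class rule are illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_points = 4920
X = np.column_stack([
    rng.normal(120, 40, n_points),    # elevation from a 50-m DEM (hypothetical)
    rng.normal(0, 1, n_points),       # slope / terrain attribute
    rng.gamma(2.0, 1.0, n_points),    # airborne gamma-ray K emission
    rng.gamma(2.0, 1.0, n_points),    # airborne gamma-ray Th emission
])
# Hypothetical rule generating four substrate classes, plus 20% label noise:
y = (X[:, 2] > 2.5).astype(int) + 2 * (X[:, 0] > 140).astype(int)
noise = rng.random(n_points) < 0.2
y[noise] = rng.integers(0, 4, noise.sum())

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
clf.fit(X_tr, y_tr)
print(f"point-validation accuracy: {clf.score(X_te, y_te):.2%}")
```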
Li, Ruili; Yang, Mingfei
2017-01-01
Hematoma expansion (HE) is a major determinant of a poor outcome in patients with a spontaneous intracerebral hemorrhage (sICH). The blend sign and the black hole sign are distinguished from non-contrast CT (NCCT) in patients with sICH, and both are independent neuroimaging predictors of HE. The purpose of the current study was to compare the value of the two signs in the prediction of HE. We retrospectively analyzed clinical and neuroimaging data from 228 patients with sICH who were treated at our hospital between August 2015 and September 2017. NCCT of the brain was performed upon admission (within 6 h of the onset of symptoms) to identify the blend sign and the black hole sign. HE was determined based on CT during a follow-up 24 h later. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with which the blend sign and the black hole sign predicted HE were calculated. Receiver operating characteristic (ROC) curve analysis was performed in order to compare the accuracy of the two signs in predicting HE. The blend sign was identified in 46 patients (20.2%) and the black hole sign was identified in 38 (16.7%) based on NCCT of the brain upon admission. Of the 65 patients with HE, the blend sign was noted in 28 and the black hole sign was noted in 22. The blend sign had a sensitivity of predicting HE of 43.1%, a specificity of 89.0%, a PPV of 60.9%, and an NPV of 79.7%. In contrast, the black hole sign had a sensitivity of predicting HE of 33.9%, a specificity of 90.2%, a PPV of 57.9%, and an NPV of 77.4%. The area under the ROC curve was 0.660 for the blend sign and 0.620 for the black hole sign (p = 0.516). In conclusion, the blend sign and the black hole sign on CT are both good predictors of HE in patients with sICH, though the blend sign seems to have a higher level of accuracy.
Cross-validation of recent and longstanding resting metabolic rate prediction equations
USDA-ARS?s Scientific Manuscript database
Resting metabolic rate (RMR) measurement is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, their accuracy likely varies across individuals. Understanding the factors that influence predicted RMR accuracy at the individual lev...
Prospects for Genomic Selection in Cassava Breeding.
Wolfe, Marnin D; Del Carpio, Dunia Pino; Alabi, Olumide; Ezenwaka, Lydia C; Ikeogu, Ugochukwu N; Kayondo, Ismail S; Lozano, Roberto; Okeke, Uche G; Ozimati, Alfred A; Williams, Esuma; Egesi, Chiedozie; Kawuki, Robert S; Kulakow, Peter; Rabbi, Ismail Y; Jannink, Jean-Luc
2017-11-01
Cassava (Manihot esculenta Crantz) is a clonally propagated staple food crop in the tropics. Genomic selection (GS) has been implemented at three breeding institutions in Africa to reduce cycle times. Initial studies provided promising estimates of predictive abilities. Here, we expand on previous analyses by assessing the accuracy of seven prediction models for seven traits in three prediction scenarios: cross-validation within populations, cross-population prediction and cross-generation prediction. We also evaluated the impact of increasing the training population (TP) size by phenotyping progenies selected either at random or with a genetic algorithm. Cross-validation results were mostly consistent across programs, with nonadditive models predicting 10% better on average. Cross-population accuracy was generally low (mean = 0.18), but prediction of cassava mosaic disease increased up to 57% in one Nigerian population when data from another related population were combined. Accuracy across generations was poorer than within-generation accuracy, as expected, but accuracy for dry matter content and mosaic disease severity should be sufficient for rapid-cycling GS. Selection of a prediction model made some difference across generations, but increasing TP size was more important. With a genetic algorithm, selection of one-third of progeny could achieve an accuracy equivalent to phenotyping all progeny. We are in the early stages of GS for this crop, but the results are promising for some traits. General guidelines that are emerging are that TPs need to continue to grow but phenotyping can be done on a cleverly selected subset of individuals, reducing the overall phenotyping burden. Copyright © 2017 Crop Science Society of America.
Bisschop, Elaine; Morales, Celia; Gil, Verónica; Jiménez-Suárez, Elizabeth
The aim of this study was to analyze whether children with and without difficulties in handwriting, spelling, or both differed in alphabet writing when using a keyboard. The total sample consisted of 1,333 children from Grades 1 through 3. Scores on the spelling and handwriting factors from the Early Grade Writing Assessment (Jiménez, in press) were used to assign the participants to one of four groups with different ability patterns: poor handwriters, poor spellers, a mixed group, and typically achieving students. Groups were equalized by a matching strategy, resulting in a final sample of 352 children. A MANOVA was executed to analyze effects of group and grade on orthographic motor integration (fluency of alphabet writing) and the number of omissions when writing the alphabet (accuracy of alphabet writing) by keyboard writing mode. The results indicated that poor handwriters did not differ from typically achieving children in both variables, whereas the poor spellers did perform below the typical achievers and the poor handwriters. The difficulties of poor handwriters seem to be alleviated by the use of the keyboard; however, children with spelling difficulties might need extra instruction to become fluent keyboard writers.
Nielsen, Morten; Justesen, Sune; Lund, Ole; Lundegaard, Claus; Buus, Søren
2010-11-13
Binding of peptides to major histocompatibility complex class II (MHC-II) molecules plays a central role in governing responses of the adaptive immune system. MHC-II molecules sample peptides from the extracellular space, allowing the immune system to detect the presence of foreign microbes from this compartment. Predicting which peptides bind to an MHC-II molecule is therefore of pivotal importance for understanding the immune response and its effect on host-pathogen interactions. The experimental cost associated with characterizing the binding motif of an MHC-II molecule is significant, and large efforts have therefore been placed in developing accurate computer methods capable of predicting this binding event. Prediction of peptide binding to MHC-II is complicated by the open binding cleft of the MHC-II molecule, which allows binding of peptides extending out of the binding groove. Moreover, the genes encoding the MHC molecules are immensely diverse, leading to a large set of different MHC molecules, each potentially binding a unique set of peptides. Characterizing each MHC-II molecule using peptide-screening binding assays is hence not a viable option. Here, we present an MHC-II binding prediction algorithm aiming at dealing with these challenges. The method is a pan-specific version of the earlier published allele-specific NN-align algorithm and does not require any pre-alignment of the input data. This allows the method to benefit also from information from alleles covered by limited binding data. The method is evaluated on a large and diverse set of benchmark data, and is shown to significantly outperform state-of-the-art MHC-II prediction methods. In particular, the method is found to boost the performance for alleles characterized by limited binding data, where conventional allele-specific methods tend to achieve poor prediction accuracy. The method thus shows great potential for efficiently boosting the accuracy of MHC-II binding prediction, as accurate predictions can be obtained for novel alleles at highly reduced experimental costs. Pan-specific binding predictions can be obtained for all alleles with known protein sequence, and the method can benefit from including training data from alleles for which only few binders are known. The method and benchmark data are available at http://www.cbs.dtu.dk/services/NetMHCIIpan-2.0.
NASA Astrophysics Data System (ADS)
Park, J.; Yoo, K.
2013-12-01
For groundwater resource conservation, it is important to accurately assess groundwater pollution sensitivity or vulnerability. In this work, we attempted to use a data mining approach to assess groundwater pollution vulnerability in a TCE (trichloroethylene)-contaminated Korean industrial site. The conventional DRASTIC method failed to describe the TCE sensitivity data, showing a poor correlation with hydrogeological properties. Among the different data mining methods examined, Artificial Neural Network (ANN), Multiple Logistic Regression (MLR), Case-Based Reasoning (CBR), and Decision Tree (DT), the accuracy and consistency of the Decision Tree were the best. According to the subsequent tree analyses with the optimal DT model, the failure of the conventional DRASTIC method to fit the TCE sensitivity data may be due to the use of inaccurate weight values of hydrogeological parameters for the study site. These findings provide a proof of concept that a DT-based data mining approach can be used for prediction and rule induction of groundwater TCE sensitivity without pre-existing information on the weights of hydrogeological properties.
Bedward, Michael; Penman, Trent D.; Doherty, Michael D.; Weber, Rodney O.; Gill, A. Malcolm; Cary, Geoffrey J.
2016-01-01
The influence of plant traits on forest fire behaviour has evolutionary, ecological and management implications, but is poorly understood and frequently discounted. We use a process model to quantify that influence and provide validation in a diverse range of eucalypt forests burnt under varying conditions. Measured height of consumption was compared to heights predicted using a surface fuel fire behaviour model, then key aspects of our model were sequentially added to this with and without species-specific information. Our fully specified model had a mean absolute error 3.8 times smaller than the otherwise identical surface fuel model (p < 0.01), and correctly predicted the height of larger (≥1 m) flames 12 times more often (p < 0.001). We conclude that the primary endogenous drivers of fire severity are the species of plants present rather than the surface fuel load, and demonstrate the accuracy and versatility of the model for quantifying this. PMID:27529789
Zylstra, Philip; Bradstock, Ross A; Bedward, Michael; Penman, Trent D; Doherty, Michael D; Weber, Rodney O; Gill, A Malcolm; Cary, Geoffrey J
2016-01-01
The influence of plant traits on forest fire behaviour has evolutionary, ecological and management implications, but is poorly understood and frequently discounted. We use a process model to quantify that influence and provide validation in a diverse range of eucalypt forests burnt under varying conditions. Measured height of consumption was compared to heights predicted using a surface fuel fire behaviour model, then key aspects of our model were sequentially added to this with and without species-specific information. Our fully specified model had a mean absolute error 3.8 times smaller than the otherwise identical surface fuel model (p < 0.01), and correctly predicted the height of larger (≥1 m) flames 12 times more often (p < 0.001). We conclude that the primary endogenous drivers of fire severity are the species of plants present rather than the surface fuel load, and demonstrate the accuracy and versatility of the model for quantifying this.
Constitutive Equation with Varying Parameters for Superplastic Flow Behavior
NASA Astrophysics Data System (ADS)
Guan, Zhiping; Ren, Mingwen; Jia, Hongjie; Zhao, Po; Ma, Pinkui
2014-03-01
In this study, constitutive equations for superplastic materials with extra-large elongation were investigated through mechanical analysis. From the viewpoint of phenomenology, some traditional empirical constitutive relations were first standardized by restricting certain strain paths and parameter conditions, and the coefficients in these relations were given strict new mechanical definitions. Subsequently, a new, general constitutive equation with varying parameters was theoretically deduced based on the general mechanical equation of state. Superplastic tension test data for Zn-5%Al alloy at 340 °C under strain rates, velocities, and loads were employed to build the new constitutive equation and examine its validity. The analysis indicated that the constitutive equation with varying parameters can characterize superplastic flow behavior in practical superplastic forming with high prediction accuracy and without any restriction on strain path or deformation condition, making it attractive for both industrial and scientific use. In contrast, the empirical equations have low prediction capability, owing to their constant parameters, and poor applicability, because they are limited to special strain paths or parameter conditions rooted in strict phenomenology.
Naturalistic Assessment of Executive Function and Everyday Multitasking in Healthy Older Adults
McAlister, Courtney; Schmitter-Edgecombe, Maureen
2013-01-01
Everyday multitasking and its cognitive correlates were investigated in an older adult population using a naturalistic task, the Day Out Task. Fifty older adults and 50 younger adults prioritized, organized, initiated and completed a number of subtasks in a campus apartment to prepare for a day out (e.g., gather ingredients for a recipe, collect change for a bus ride). Participants also completed tests assessing cognitive constructs important in multitasking. Compared to younger adults, the older adults took longer to complete the everyday tasks and more poorly sequenced the subtasks. Although they initiated, completed, and interweaved a similar number of subtasks, the older adults demonstrated poorer task quality and accuracy, completing more subtasks inefficiently. For the older adults, reduced prospective memory abilities were predictive of poorer task sequencing, while executive processes and prospective memory were predictive of inefficiently completed subtasks. The findings suggest that executive dysfunction and prospective memory difficulties may contribute to the age-related decline of everyday multitasking abilities in healthy older adults. PMID:23557096
Tertiary structural propensities reveal fundamental sequence/structure relationships.
Zheng, Fan; Zhang, Jian; Grigoryan, Gevorg
2015-05-05
Extracting useful generalizations from the continually growing Protein Data Bank (PDB) is of central importance. We hypothesize that the PDB contains valuable quantitative information on the level of local tertiary structural motifs (TERMs). We show that by breaking a protein structure into its constituent TERMs, and querying the PDB to characterize the natural ensemble matching each, we can estimate the compatibility of the structure with a given amino acid sequence through a metric we term "structure score." Considering submissions from recent Critical Assessment of Structure Prediction (CASP) experiments, we found a strong correlation (R = 0.69) between structure score and model accuracy, with poorly predicted regions readily identifiable. This performance exceeds that of leading atomistic statistical energy functions. Furthermore, TERM-based analysis of two prototypical multi-state proteins rapidly produced structural insights fully consistent with prior extensive experimental studies. We thus find that TERM-based analysis should have considerable utility for protein structural biology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Pitchford, Melanie; Ball, Linden J.; Hunt, Thomas E.; Steel, Richard
2017-01-01
We report a study examining the role of ‘cognitive miserliness’ as a determinant of poor performance on the standard three-item Cognitive Reflection Test (CRT). The cognitive miserliness hypothesis proposes that people often respond incorrectly on CRT items because of an unwillingness to go beyond default, heuristic processing and invest time and effort in analytic, reflective processing. Our analysis (N = 391) focused on people’s response times to CRT items to determine whether predicted associations are evident between miserly thinking and the generation of incorrect, intuitive answers. Evidence indicated only a weak correlation between CRT response times and accuracy. Item-level analyses also failed to demonstrate predicted response-time differences between correct analytic and incorrect intuitive answers for two of the three CRT items. We question whether participants who give incorrect intuitive answers on the CRT can legitimately be termed cognitive misers and whether the three CRT items measure the same general construct. PMID:29099840
Rotor systems research aircraft risk-reduction shake test
NASA Technical Reports Server (NTRS)
Wellman, J. Brent
1990-01-01
A shake test and an extensive analysis of results were performed to evaluate the possibility of and the method for dynamically calibrating the Rotor Systems Research Aircraft (RSRA). The RSRA airframe was subjected to known vibratory loads in several degrees of freedom and the responses of many aircraft transducers were recorded. Analysis of the transducer responses using the technique of dynamic force determination showed that the RSRA, when used as a dynamic measurement system, could predict, a posteriori, an excitation force in a single axis to an accuracy of about 5 percent and sometimes better. As the analysis was broadened to include multiple degrees of freedom for the excitation force, the predictive ability of the measurement system degraded to about 20 percent, with the error occasionally reaching 100 percent. The poor performance of the measurement system is explained by the nonlinear response of the RSRA to vibratory forces and the inadequacy of the particular method used in accounting for this nonlinearity.
A simple mathematical model to predict sea surface temperature over the northwest Indian Ocean
NASA Astrophysics Data System (ADS)
Noori, Roohollah; Abbasi, Mahmud Reza; Adamowski, Jan Franklin; Dehghani, Majid
2017-10-01
A novel and simple mathematical model was developed in this study to enhance the capacity of a reduced-order model based on eigenvectors (RMEV) to predict sea surface temperature (SST) in the northwest portion of the Indian Ocean, including the Persian and Oman Gulfs and the Arabian Sea. Developed using only the first two of 12,416 possible modes, the enhanced RMEV closely matched observed daily optimum interpolation SST (DOISST) values. The spatial distribution of the first mode indicated that the greatest variations in DOISST occurred in the Persian Gulf. Also, the slightly increasing trend in the temporal component of the first mode observed in the study area over the last 34 years properly reflected the impact of climate change and rising DOISST. Given its simplicity and high level of accuracy, the enhanced RMEV can be applied to forecast DOISST in the oceans, something that the poor forecasting performance and large computational cost of other numerical models may not allow.
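As an illustration of the reduced-order idea described above, the following sketch reconstructs a temperature field from its two leading eigenvector (EOF) modes. It is a minimal sketch on synthetic data, not the authors' model; the grid size, record length and retained mode count are assumptions chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
sst = rng.normal(loc=28.0, scale=1.5, size=(365, 500))   # days x grid points (synthetic)
mean_field = sst.mean(axis=0)
anomalies = sst - mean_field                              # anomalies about the time mean

# Eigenvector (EOF) modes via SVD of the anomaly matrix.
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

k = 2                                                     # retain only the first two modes
reconstruction = mean_field + (u[:, :k] * s[:k]) @ vt[:k, :]

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
rmse = np.sqrt(((reconstruction - sst) ** 2).mean())
print(f"variance explained by {k} modes: {explained:.2%}, reconstruction RMSE: {rmse:.3f} degC")
```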
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.
Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie
2016-12-07
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.
He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L
2018-04-01
SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies, and SNP-trait associations either for single traits independently or for all the three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.
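The two accuracies discussed above are commonly quantified as genotype concordance (imputation accuracy) and as a correlation between predicted and realized values (prediction accuracy). The sketch below shows the concordance and per-SNP correlation calculation on simulated genotypes; the panel size and error rate are illustrative assumptions, not study data.

```python
import numpy as np

rng = np.random.default_rng(1)
true_geno = rng.binomial(2, 0.3, size=(500, 2000)).astype(float)  # animals x SNPs (0/1/2)
imputed = true_geno.copy()
errors = rng.random(true_geno.shape) < 0.02                       # ~2% random imputation errors
imputed[errors] = rng.integers(0, 3, size=int(errors.sum()))

concordance = float((imputed == true_geno).mean())
per_snp_r = [np.corrcoef(true_geno[:, j], imputed[:, j])[0, 1]
             for j in range(true_geno.shape[1])]
print(f"concordance: {concordance:.3f}, mean per-SNP r: {np.nanmean(per_snp_r):.3f}")
```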
De Buck, Stefan S; Sinha, Vikash K; Fenu, Luca A; Gilissen, Ron A; Mackie, Claire E; Nijsen, Marjoleen J
2007-04-01
The aim of this study was to assess a physiologically based modeling approach for predicting drug metabolism, tissue distribution, and bioavailability in rat for a structurally diverse set of neutral and moderate-to-strong basic compounds (n = 50). Hepatic blood clearance (CL(h)) was projected using microsomal data and shown to be well predicted, irrespective of the type of hepatic extraction model (80% within 2-fold). Best predictions of CL(h) were obtained disregarding both plasma and microsomal protein binding, whereas strong bias was seen using either blood binding only or both plasma and microsomal protein binding. Two mechanistic tissue composition-based equations were evaluated for predicting volume of distribution (V(dss)) and tissue-to-plasma partitioning (P(tp)). A first approach, which accounted for ionic interactions with acidic phospholipids, resulted in accurate predictions of V(dss) (80% within 2-fold). In contrast, a second approach, which disregarded ionic interactions, was a poor predictor of V(dss) (60% within 2-fold). The first approach also yielded accurate predictions of P(tp) in muscle, heart, and kidney (80% within 3-fold), whereas in lung, liver, and brain, predictions ranged from 47% to 62% within 3-fold. Using the second approach, P(tp) prediction accuracy in muscle, heart, and kidney was on average 70% within 3-fold, and ranged from 24% to 54% in all other tissues. Combining all methods for predicting V(dss) and CL(h) resulted in accurate predictions of the in vivo half-life (70% within 2-fold). Oral bioavailability was well predicted using CL(h) data and Gastroplus Software (80% within 2-fold). These results illustrate that physiologically based prediction tools can provide accurate predictions of rat pharmacokinetics.
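A small sketch of the "% within 2-fold" criterion quoted throughout the abstract: for each compound the fold error is the larger of predicted/observed and observed/predicted, and accuracy is the share of compounds whose fold error does not exceed 2. The values below are hypothetical, not data from the study.

```python
import numpy as np

observed  = np.array([1.2, 5.0, 0.8, 12.0, 3.3])   # e.g., observed clearance values
predicted = np.array([1.0, 9.0, 1.1, 30.0, 2.9])   # corresponding model predictions

fold_error = np.maximum(predicted / observed, observed / predicted)
within_2fold = (fold_error <= 2.0).mean() * 100
print(f"{within_2fold:.0f}% of predictions within 2-fold")
```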
The ACS NSQIP Risk Calculator Is a Fair Predictor of Acute Periprosthetic Joint Infection.
Wingert, Nathaniel C; Gotoff, James; Parrilla, Edgardo; Gotoff, Robert; Hou, Laura; Ghanem, Elie
2016-07-01
Periprosthetic joint infection (PJI) is a severe complication from the patient's perspective and an expensive one in a value-driven healthcare model. Risk stratification can help identify those patients who may have risk factors for complications that can be mitigated in advance of elective surgery. Although numerous surgical risk calculators have been created, their accuracy in predicting outcomes, specifically PJI, has not been tested. (1) How accurate is the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) Surgical Site Infection Calculator in predicting 30-day postoperative infection? (2) How accurate is the calculator in predicting 90-day postoperative infection? We isolated 1536 patients who underwent 1620 primary THAs and TKAs at our institution during 2011 to 2013. Minimum followup was 90 days. The ACS NSQIP Surgical Risk Calculator was assessed in its ability to predict acute PJI within 30 and 90 days postoperatively. Patients who underwent a repeat surgical procedure within 90 days of the index arthroplasty and in whom at least one positive intraoperative culture was obtained at time of reoperation were considered to have PJI. A total of 19 cases of PJI were identified, including 11 at 30 days and an additional eight instances by 90 days postoperatively. Patient-specific risk probabilities for PJI based on demographics and comorbidities were recorded from the ACS NSQIP Surgical Risk Calculator website. The area under the curve (AUC) for receiver operating characteristic (ROC) curves was calculated to determine the predictability of the risk probability for PJI. The AUC is an effective method for quantifying the discriminatory capacity of a diagnostic test to correctly classify patients with and without infection, and is classified as excellent (AUC 0.9-1), good (AUC 0.8-0.89), fair (AUC 0.7-0.79), poor (AUC 0.6-0.69), or fail/no discriminatory capacity (AUC 0.5-0.59). A p value of < 0.05 was considered to be statistically significant. The ACS NSQIP Surgical Risk Calculator showed only fair accuracy in predicting 30-day PJI (AUC: 74.3% [confidence interval {CI}, 59.6%-89.0%]). For 90-day PJI, the risk calculator was also only fair in accuracy (AUC: 71.3% [CI, 59.9%-82.6%]). The ACS NSQIP Surgical Risk Calculator is a fair predictor of acute PJI at the 30- and 90-day intervals after primary THA and TKA. Practitioners should exercise caution in using this tool as a predictive aid for PJI, because it demonstrates only fair value in this application. Existing predictive tools for PJI could potentially be made more robust by incorporating preoperative risk factors and including operative and early postoperative variables. Level III, diagnostic study.
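The sketch below illustrates the evaluation step described above: computing the AUC of the ROC curve from per-patient risk probabilities against observed infection status, with the interpretation bands quoted in the abstract. The arrays are hypothetical, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

pji  = np.array([0, 0, 1, 0, 1, 0, 0, 0, 1, 0])               # observed infection (1 = PJI)
risk = np.array([0.01, 0.02, 0.08, 0.03, 0.04,
                 0.02, 0.05, 0.01, 0.10, 0.03])               # calculator risk probabilities

auc = roc_auc_score(pji, risk)
# Bands used in the abstract: 0.9-1 excellent, 0.8-0.89 good, 0.7-0.79 fair,
# 0.6-0.69 poor, 0.5-0.59 fail/no discriminatory capacity.
print(f"AUC = {auc:.3f}")
```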
Maden, Orhan; Balci, Kevser Gülcihan; Selcuk, Mehmet Timur; Balci, Mustafa Mücahit; Açar, Burak; Unal, Sefa; Kara, Meryem; Selcuk, Hatice
2015-12-01
The aim of this study was to investigate the accuracy of three algorithms in predicting accessory pathway locations in adult patients with Wolff-Parkinson-White syndrome in a Turkish population. A total of 207 adult patients with Wolff-Parkinson-White syndrome were retrospectively analyzed. The most preexcited 12-lead electrocardiogram in sinus rhythm was used for analysis. Two investigators blinded to the patient data used three algorithms for prediction of accessory pathway location. Among all locations, 48.5% were left-sided, 44% were right-sided, and 7.5% were located in the midseptum or anteroseptum. When only exact locations were accepted as a match, predictive accuracy was 71.5% for Chiang, 72.4% for d'Avila, and 71.5% for Arruda. Predictive accuracy did not differ between the algorithms (p = 1.000; p = 0.875; p = 0.885, respectively). The best algorithm for prediction of right-sided, left-sided, and anteroseptal and midseptal accessory pathways was Arruda (p < 0.001). Arruda was significantly better than d'Avila in predicting adjacent sites (p = 0.035), and the percentage of contralateral-site predictions was higher with d'Avila than with Arruda (p = 0.013). All algorithms were similar in predicting accessory pathway location, and the predictive accuracy was lower than previously reported by their authors. However, according to the accessory pathway site, the algorithm designed by Arruda et al. showed better predictions than the other algorithms, and using this algorithm may provide advantages before a planned ablation.
Accuracy test for link prediction in terms of similarity index: The case of WS and BA models
NASA Astrophysics Data System (ADS)
Ahn, Min-Woo; Jung, Woo-Sung
2015-07-01
Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
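A minimal sketch of this type of experiment, under stated assumptions: a Barabási-Albert test graph, the common-neighbours similarity index, and AUC estimated by comparing the scores of removed ("missing") links against non-existent links. It is not the paper's code; the index choice, graph size and probe fraction are illustrative.

```python
import random
import networkx as nx

random.seed(0)
g = nx.barabasi_albert_graph(n=500, m=3, seed=0)

edges = list(g.edges())
random.shuffle(edges)
probe = edges[: len(edges) // 10]          # hide 10% of links as the "missing" set
g_train = g.copy()
g_train.remove_edges_from(probe)

def score(u, v):
    # Common-neighbours similarity index computed on the training graph.
    return len(set(g_train[u]) & set(g_train[v]))

non_edges = list(nx.non_edges(g))          # links absent from the full graph
hits, trials = 0.0, 5000
for _ in range(trials):
    su = score(*random.choice(probe))      # score of a missing link
    sn = score(*random.choice(non_edges))  # score of a non-existent link
    hits += 1.0 if su > sn else 0.5 if su == sn else 0.0
print(f"estimated AUC: {hits / trials:.3f}")
```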
Effectiveness of link prediction for face-to-face behavioral networks.
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30-0.45 and a recall of 0.10-0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks.
The effect of concurrent hand movement on estimated time to contact in a prediction motion task.
Zheng, Ran; Maraj, Brian K V
2018-04-27
In many activities, we need to predict the arrival of an occluded object. This action is called prediction motion or motion extrapolation. Previous researchers have found that both eye tracking and the internal clocking model are involved in the prediction motion task. Additionally, it is reported that concurrent hand movement facilitates the eye tracking of an externally generated target in a tracking task, even if the target is occluded. The present study examined the effect of concurrent hand movement on the estimated time to contact (TTC) in a prediction motion task. We found that accurate and inaccurate concurrent hand movements had opposite effects on eye tracking accuracy and estimated TTC in the prediction motion task. That is, accurate concurrent hand tracking enhanced eye tracking accuracy and tended to increase the precision of estimated TTC, whereas inaccurate concurrent hand tracking decreased eye tracking accuracy and disrupted estimated TTC. However, eye tracking accuracy does not determine the precision of estimated TTC.
Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)
2003-01-01
The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
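The sketch below illustrates the construction discussed above, under stated assumptions: gradient reconstruction by least squares over a scattered, highly stretched stencil, with and without inverse-distance weighting. The stencil and test field are synthetic; it is not the paper's finite-volume implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
x0 = np.array([0.3, 0.2])
# Highly stretched stencil: wide spread in x, very thin in y.
neighbours = x0 + rng.normal(size=(8, 2)) * np.array([1.0, 0.01])

def f(p):
    # Smooth test field; exact gradient at x0 is (cos(0.3), 2*0.2).
    return np.sin(p[..., 0]) + p[..., 1] ** 2

dx = neighbours - x0
df = f(neighbours) - f(x0)

def ls_gradient(weights):
    # Weighted least squares: minimise || W^(1/2) (dx @ grad - df) ||_2.
    w = np.sqrt(weights)
    grad, *_ = np.linalg.lstsq(dx * w[:, None], df * w, rcond=None)
    return grad

print("unweighted LS gradient:      ", ls_gradient(np.ones(len(dx))))
print("inverse-distance weighted LS:", ls_gradient(1.0 / np.linalg.norm(dx, axis=1)))
print("exact gradient:              ", np.array([np.cos(0.3), 2 * 0.2]))
```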
McCarthy, Jillian H.; Hogan, Tiffany P.; Catts, Hugh W.
2013-01-01
The purpose of this study was to test the hypothesis that word reading accuracy, not oral language, is associated with spelling performance in school-age children. We compared fourth grade spelling accuracy in children with specific language impairment (SLI), dyslexia, or both (SLI/dyslexia) to their typically developing grade-matched peers. Results of the study revealed that children with SLI performed similarly to their typically developing peers on a single word spelling task. In contrast, those with dyslexia and SLI/dyslexia evidenced poor spelling accuracy. The spellings produced by both the dyslexia and SLI/dyslexia groups contained numerous phonologic, orthographic, and semantic errors. Cumulative results support the hypothesis that word reading accuracy, not oral language, is associated with spelling performance in typically developing school-age children and their peers with SLI and dyslexia. Findings are provided as further support for the notion that SLI and dyslexia are distinct, yet co-morbid, developmental disorders. PMID:22876769
ERIC Educational Resources Information Center
Hilton, N. Zoe; Harris, Grant T.
2009-01-01
Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…
Improving Fermi Orbit Determination and Prediction in an Uncertain Atmospheric Drag Environment
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Newman, Clark P.; Slojkowski, Steven E.; Carpenter, J. Russell
2014-01-01
Orbit determination and prediction of the Fermi Gamma-ray Space Telescope trajectory is strongly impacted by the unpredictability and variability of atmospheric density and the spacecraft's ballistic coefficient. Operationally, Global Positioning System point solutions are processed with an extended Kalman filter for orbit determination, and predictions are generated for conjunction assessment with secondary objects. When these predictions are compared to Joint Space Operations Center radar-based solutions, the close approach distance between the two predictions can greatly differ ahead of the conjunction. This work explores strategies for improving prediction accuracy and helps to explain the prediction disparities. Namely, a tuning analysis is performed to determine atmospheric drag modeling and filter parameters that can improve orbit determination as well as prediction accuracy. A 45% improvement in three-day prediction accuracy is realized by tuning the ballistic coefficient and atmospheric density stochastic models, measurement frequency, and other modeling and filter parameters.
NASA Astrophysics Data System (ADS)
Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto
2017-12-01
Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted one day ahead can reach 0.15 mas in polar motion and 0.053 ms in UT1-UTC. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The methods used for orbit integration and frame transformation, into which the ERP errors are introduced, dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted on the basis of the observed part of the ultra-rapid orbit in ITRS, for use as a reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit prediction (except the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50% (in terms of ERP-related error) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed in this study can improve ultra-rapid orbit prediction.
Protein Secondary Structure Prediction Using AutoEncoder Network and Bayes Classifier
NASA Astrophysics Data System (ADS)
Wang, Leilei; Cheng, Jinyong
2018-03-01
Protein secondary structure prediction is an important problem in bioinformatics. In this paper, we propose a new method for predicting protein secondary structure using a Bayes classifier and an autoencoder network. Our experiments cover several algorithmic aspects, including the construction of the model and the setting of its parameters. The data set is the standard CB513 protein data set. Accuracy is assessed by 3-fold cross validation, from which the Q3 accuracy is obtained. The results illustrate that the autoencoder network improves the prediction accuracy of protein secondary structure.
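For reference, the Q3 accuracy mentioned above is simply the fraction of residues whose predicted three-state label (helix H, strand E, coil C) matches the observed label. A tiny sketch with made-up sequences:

```python
observed  = "HHHHCCCEEEECCHHHH"
predicted = "HHHCCCCEEEECCHHCC"

# Q3 = share of residues with matching three-state labels.
q3 = sum(o == p for o, p in zip(observed, predicted)) / len(observed)
print(f"Q3 = {q3:.2%}")
```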
Nitrogen cycling models and their application to forest harvesting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, D.W.; Dale, V.H.
1986-01-01
The characterization of forest nitrogen- (N-) cycling processes by several N-cycling models (FORCYTE, NITCOMP, FORTNITE, and LINKAGES) is briefly reviewed and evaluated against current knowledge of N cycling in forests. Some important processes (e.g., translocation within trees, N dynamics in decaying leaf litter) appear to be well characterized, whereas others (e.g., N mineralization from soil organic matter, N fixation, N dynamics in decaying wood, nitrification, and nitrate leaching) are poorly characterized, primarily because of a lack of knowledge rather than an oversight by model developers. It is remarkable how well the forest models do work in the absence of data on some key processes. For those systems in which the poorly understood processes could cause major changes in N availability or productivity, the accuracy of model predictions should be examined. However, the development of N-cycling models represents a major step beyond the much simpler, classic conceptual models of forest nutrient cycling developed by early investigators. The new generation of computer models will surely improve as research reveals how key nutrient-cycling processes operate.
Testing Metal-Poor Stellar Models and Isochrones with HST Parallaxes of Metal-Poor Stars
NASA Astrophysics Data System (ADS)
Chaboyer, B.; McArthur, B. E.; O'Malley, E.; Benedict, G. F.; Feiden, G. A.; Harrison, T. E.; McWilliam, A.; Nelan, E. P.; Patterson, R. J.; Sarajedini, A.
2017-02-01
Hubble Space Telescope (HST) fine guidance sensor observations were used to obtain parallaxes of eight metal-poor ([Fe/H] < -1.4) stars. The parallaxes of these stars determined by the new Hipparcos reduction average 17% accuracy, in contrast to our new HST parallaxes, which average 1% accuracy and have errors on the individual parallaxes ranging from 85 to 144 μas. These parallax data were combined with HST Advanced Camera for Surveys photometry in the F606W and F814W filters to obtain the absolute magnitudes of the stars with an accuracy of 0.02-0.03 mag. Six of these stars are on the main sequence (MS) (with -2.7 < [Fe/H] < -1.8) and are suitable for testing metal-poor stellar evolution models and determining the distances to metal-poor globular clusters (GCs). Using the abundances obtained by O’Malley et al., we find that standard stellar models using the VandenBerg & Clem color transformation do a reasonable job of matching five of the MS stars, with HD 54639 ([Fe/H] = -2.5) being anomalous in its location in the color-magnitude diagram. Stellar models and isochrones were generated using a Monte Carlo analysis to take into account uncertainties in the models. Isochrones that fit the parallax stars were used to determine the distances and ages of nine GCs (with -2.4 ≤ [Fe/H] ≤ -1.9). Averaging together the age of all nine clusters led to an absolute age of the oldest, most metal-poor GCs of 12.7 ± 1.0 Gyr, where the quoted uncertainty takes into account the known uncertainties in the stellar models and isochrones, along with the uncertainty in the distance and reddening of the clusters.
Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru
2017-09-01
Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from mFIM effectiveness predicted by multiple regression analysis had a higher degree of predictive accuracy of mFIM at discharge than that directly predicted. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
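The calculation described above can be transcribed directly from the formula quoted in the abstract; the admission score and predicted effectiveness below are hypothetical numbers for illustration.

```python
def mfim_at_discharge(mfim_admission: float, predicted_effectiveness: float) -> float:
    # "mFIM at discharge = mFIM effectiveness x (91 points - mFIM at admission) + mFIM at admission"
    return predicted_effectiveness * (91.0 - mfim_admission) + mfim_admission

# Example: admission mFIM of 40 points and a predicted effectiveness of 0.6
print(mfim_at_discharge(40.0, 0.6))   # -> 70.6
```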
Kvavilashvili, Lia; Ford, Ruth M
2014-11-01
It is well documented that young children greatly overestimate their performance on tests of retrospective memory (RM), but the current investigation is the first to examine children's prediction accuracy for prospective memory (PM). Three studies were conducted, each testing a different group of 5-year-olds. In Study 1 (N=46), participants were asked to predict their success in a simple event-based PM task (remembering to convey a message to a toy mole if they encountered a particular picture during a picture-naming activity). Before naming the pictures, children listened to either a reminder story or a neutral story. Results showed that children were highly accurate in their PM predictions (78% accuracy) and that the reminder story appeared to benefit PM only in children who predicted they would remember the PM response. In Study 2 (N=80), children showed high PM prediction accuracy (69%) regardless of whether the cue was specific or general and despite typical overoptimism regarding their performance on a 10-item RM task using item-by-item prediction. Study 3 (N=35) showed that children were prone to overestimate RM even when asked about their ability to recall a single item-the mole's unusual name. In light of these findings, we consider possible reasons for children's impressive PM prediction accuracy, including the potential involvement of future thinking in performance predictions and PM. Copyright © 2014 Elsevier Inc. All rights reserved.
Golinvaux, Nicholas S; Bohl, Daniel D; Basques, Bryce A; Grauer, Jonathan N
2014-11-15
Cross-sectional study. To objectively evaluate the ability of International Classification of Diseases, Ninth Revision (ICD-9) codes, which are used as the foundation for administratively coded national databases, to identify preoperative anemia in patients undergoing spinal fusion. National database research in spine surgery continues to rise. However, the validity of studies based on administratively coded data, such as the Nationwide Inpatient Sample, is dependent on the accuracy of ICD-9 coding. Such coding has previously been found to have poor sensitivity to conditions such as obesity and infection. A cross-sectional study was performed at an academic medical center. Hospital-reported anemia ICD-9 codes (those used for administratively coded databases) were directly compared with the chart-documented preoperative hematocrits (true laboratory values). A patient was deemed to have preoperative anemia if the preoperative hematocrit was less than the lower end of the normal range (36.0% for females and 41.0% for males). The study included 260 patients. Of these, 37 patients (14.2%) were anemic; however, only 10 patients (3.8%) received an "anemia" ICD-9 code. Of the 10 patients coded as anemic, 7 were anemic by definition, whereas 3 were not, and thus were miscoded. This equates to an ICD-9 code sensitivity of 0.19, with a specificity of 0.99, and positive and negative predictive values of 0.70 and 0.88, respectively. This study uses preoperative anemia to demonstrate the potential inaccuracies of ICD-9 coding. These results have implications for publications using databases that are compiled from ICD-9 coding data. Furthermore, the findings of the current investigation raise concerns regarding the accuracy of additional comorbidities. Although administrative databases are powerful resources that provide large sample sizes, it is crucial that we further consider the quality of the data source relative to its intended purpose.
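The abstract's validation arithmetic can be reproduced from the counts it reports (260 patients, 37 anemic by hematocrit, 10 coded as anemic of whom 7 were truly anemic):

```python
tp, fp = 7, 3                    # coded anemic: truly anemic / not anemic
fn = 37 - tp                     # anemic but not coded
tn = 260 - tp - fp - fn          # neither anemic nor coded

sensitivity = tp / (tp + fn)     # ~0.19
specificity = tn / (tn + fp)     # ~0.99
ppv = tp / (tp + fp)             # 0.70
npv = tn / (tn + fn)             # 0.88
print(sensitivity, specificity, ppv, npv)
```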
NASA Astrophysics Data System (ADS)
Boschetto, Davide; Di Claudio, Gianluca; Mirzaei, Hadis; Leong, Rupert; Grisan, Enrico
2016-03-01
Celiac disease (CD) is an immune-mediated enteropathy triggered by exposure to gluten and similar proteins, affecting genetically susceptible persons and increasing their risk of different complications. Small bowel mucosal damage due to CD involves various degrees of endoscopically relevant lesions, which are not easily recognized: their overall sensitivity and positive predictive values are poor even when zoom-endoscopy is used. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to qualitatively evaluate mucosal alterations such as a decrease in goblet cell density and the presence of villous atrophy or crypt hypertrophy. We present a method for automatically classifying CLE images into three different classes: normal regions, villous atrophy and crypt hypertrophy. This classification is performed after a feature extraction step, in which four features are extracted from each image through the application of homomorphic filtering and border identification with Canny and Sobel operators. Three different classifiers were tested on a dataset of 67 different images labeled by experts in three classes (normal, VA and CH): a linear approach, a Naive-Bayes quadratic approach and a standard quadratic analysis, all validated with ten-fold cross validation. Linear classification achieves 82.09% accuracy (class accuracies: 90.32% for normal villi, 82.35% for VA and 68.42% for CH, sensitivity: 0.68, specificity: 1.00), Naive Bayes analysis returns 83.58% accuracy (90.32% for normal villi, 70.59% for VA and 84.21% for CH, sensitivity: 0.84, specificity: 0.92), while the quadratic analysis achieves a final accuracy of 94.03% (96.77% accuracy for normal villi, 94.12% for VA and 89.47% for CH, sensitivity: 0.89, specificity: 0.98).
A scalable method for computing quadruplet wave-wave interactions
NASA Astrophysics Data System (ADS)
Van Vledder, Gerbrant
2017-04-01
Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging, as one should find a balance between accuracy and computational requirements. Such a method is presented here: a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improve the DIA, not by including more arbitrarily shaped wave number configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptivity consists in adapting the abscissae of the locus integrand in relation to the magnitude of the known terms. This adaptivity is extended to the highest level of the WRT method to select interacting wavenumber configurations in a hierarchical way in relation to their importance. This adaptivity results in a speed-up of one to three orders of magnitude depending on the measure of accuracy. This definition of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload for evaluating these interactions. The performance of the scalable method on different scales is illustrated with results ranging from academic spectra and simple growth curves to more complicated field cases using a 3G-wave model.
Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin
2016-11-01
Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (NCS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
Weng, Ziqing; Wolc, Anna; Shen, Xia; Fernando, Rohan L; Dekkers, Jack C M; Arango, Jesus; Settar, Petek; Fulton, Janet E; O'Sullivan, Neil P; Garrick, Dorian J
2016-03-19
Genomic estimated breeding values (GEBV) based on single nucleotide polymorphism (SNP) genotypes are widely used in animal improvement programs. It is typically assumed that the larger the number of animals is in the training set, the higher is the prediction accuracy of GEBV. The aim of this study was to quantify genomic prediction accuracy depending on the number of ancestral generations included in the training set, and to determine the optimal number of training generations for different traits in an elite layer breeding line. Phenotypic records for 16 traits on 17,793 birds were used. All parents and some selection candidates from nine non-overlapping generations were genotyped for 23,098 segregating SNPs. An animal model with pedigree relationships (PBLUP) and the BayesB genomic prediction model were applied to predict EBV or GEBV at each validation generation (progeny of the most recent training generation) based on varying numbers of immediately preceding ancestral generations. Prediction accuracy of EBV or GEBV was assessed as the correlation between EBV and phenotypes adjusted for fixed effects, divided by the square root of trait heritability. The optimal number of training generations that resulted in the greatest prediction accuracy of GEBV was determined for each trait. The relationship between optimal number of training generations and heritability was investigated. On average, accuracies were higher with the BayesB model than with PBLUP. Prediction accuracies of GEBV increased as the number of closely-related ancestral generations included in the training set increased, but reached an asymptote or slightly decreased when distant ancestral generations were used in the training set. The optimal number of training generations was 4 or more for high heritability traits but less than that for low heritability traits. For less heritable traits, limiting the training datasets to individuals closely related to the validation population resulted in the best predictions. The effect of adding distant ancestral generations in the training set on prediction accuracy differed between traits and the optimal number of necessary training generations is associated with the heritability of traits.
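A small sketch of the accuracy definition used above: the correlation between (G)EBV and phenotypes adjusted for fixed effects, divided by the square root of the trait heritability. The arrays and heritability are hypothetical, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
gebv = rng.normal(size=300)                                 # predicted breeding values
adjusted_phenotype = 0.5 * gebv + rng.normal(size=300)      # phenotypes adjusted for fixed effects
h2 = 0.35                                                   # assumed trait heritability

accuracy = np.corrcoef(gebv, adjusted_phenotype)[0, 1] / np.sqrt(h2)
print(f"prediction accuracy: {accuracy:.2f}")
```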
Omran, Dalia; Zayed, Rania A; Nabeel, Mohammed M; Mobarak, Lamiaa; Zakaria, Zeinab; Farid, Azza; Hassany, Mohamed; Saif, Sameh; Mostafa, Muhammad; Saad, Omar Khalid; Yosry, Ayman
2018-05-01
Stage of liver fibrosis is critical for treatment decisions and prediction of outcomes in chronic hepatitis C (CHC) patients. We evaluated the diagnostic accuracy of transient elastography (TE)-FibroScan and noninvasive serum marker tests in the assessment of liver fibrosis in CHC patients, in reference to liver biopsy. One hundred treatment-naive CHC patients were subjected to liver biopsy, TE-FibroScan, and eight serum biomarker tests: AST/ALT ratio (AAR), AST to platelet ratio index (APRI), age-platelet index (AP index), fibrosis quotient (FibroQ), fibrosis 4 index (FIB-4), cirrhosis discriminant score (CDS), King score, and Goteborg University Cirrhosis Index (GUCI). Receiver operating characteristic curves were constructed to compare the diagnostic accuracy of these noninvasive methods in predicting significant fibrosis in CHC patients. TE-FibroScan predicted significant fibrosis at cutoff value 8.5 kPa with area under the receiver operating characteristic (AUROC) 0.90, sensitivity 83%, specificity 91.5%, positive predictive value (PPV) 91.2%, and negative predictive value (NPV) 84.4%. Serum biomarker tests showed that AP index and FibroQ had the highest diagnostic accuracy in predicting significant liver fibrosis at cutoffs of 4.5 and 2.7; AUROC was 0.8 and 0.8 with sensitivity 73.6% and 73.6%, specificity 70.2% and 68.1%, PPV 71.1% and 69.8%, and NPV 72.9% and 72.3%, respectively. Combined AP index and FibroQ had AUROC 0.83 with sensitivity 73.6%, specificity 80.9%, PPV 79.6%, and NPV 75.7% for predicting significant liver fibrosis. APRI, FIB-4, CDS, King score, and GUCI had intermediate accuracy in predicting significant liver fibrosis with AUROC 0.68, 0.78, 0.74, 0.74, and 0.67, respectively, while AAR had low accuracy in predicting significant liver fibrosis. TE-FibroScan is the most accurate noninvasive alternative to liver biopsy. AP index and FibroQ, either as individual tests or combined, have good accuracy in predicting significant liver fibrosis, and are better combined for higher specificity.
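The abstract does not spell out the index formulas; for orientation, the sketch below uses the commonly published forms of two of the listed scores (APRI and FIB-4) with hypothetical laboratory values. These forms are not taken from the paper and should be verified against it before any use.

```python
from math import sqrt

def apri(ast: float, ast_uln: float, platelets_10e9_per_l: float) -> float:
    # AST-to-platelet ratio index (commonly published form).
    return (ast / ast_uln) * 100.0 / platelets_10e9_per_l

def fib4(age_years: float, ast: float, alt: float, platelets_10e9_per_l: float) -> float:
    # Fibrosis-4 index (commonly published form).
    return (age_years * ast) / (platelets_10e9_per_l * sqrt(alt))

print(apri(ast=80, ast_uln=40, platelets_10e9_per_l=150))            # ~1.33
print(fib4(age_years=50, ast=80, alt=60, platelets_10e9_per_l=150))  # ~3.44
```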
Explanation Generation, Not Explanation Expectancy, Improves Metacomprehension Accuracy
ERIC Educational Resources Information Center
Fukaya, Tatsushi
2013-01-01
The ability to monitor the status of one's own understanding is important to accomplish academic tasks proficiently. Previous studies have shown that comprehension monitoring (metacomprehension accuracy) is generally poor, but improves when readers engage in activities that access valid cues reflecting their situation model (activities such as…
ERIC Educational Resources Information Center
Steenbeek-Planting, Esther G.; van Bon, Wim H. J.; Schreuder, Robert
2013-01-01
The effect of two training procedures on the improvement of reading accuracy in poor readers was examined in relation to their initial reading level. A randomized controlled trial was conducted with 60 poor readers. Poor readers were assigned to a control group that received no training, or one of two training conditions. One training concentrated…
Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J
2012-02-09
The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals on the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationships to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were also accurate for closely related animals; however, when the pedigree-based BLUP methods were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values, for animals that had no pedigree relationship with animals in the reference data set, allowed substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy, driven by the size of the reference data set and the overall effective population size, enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.
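For orientation, gBLUP builds relationships from markers rather than pedigree; a commonly used construction of the genomic relationship matrix (following VanRaden) is sketched below on simulated genotypes. The study's exact implementation may differ, and the genotypes here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
p_true = rng.uniform(0.05, 0.95, size=2000)                   # per-SNP allele frequencies
m = rng.binomial(2, p_true, size=(100, 2000)).astype(float)   # animals x SNPs (0/1/2 counts)

p_hat = m.mean(axis=0) / 2.0                                  # estimated allele frequencies
z = m - 2.0 * p_hat                                           # centred genotypes
g = z @ z.T / (2.0 * np.sum(p_hat * (1.0 - p_hat)))           # genomic relationship matrix

print(g.shape, round(float(np.diag(g).mean()), 2))            # mean self-relationship close to 1
```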
Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen
2017-07-12
The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables (words of three or more syllables) are important to consider because, unlike monosyllables, polysyllables have been associated with phonological processing and literacy difficulties in school-aged children. They therefore have the potential to help identify preschoolers most at risk of future literacy difficulties. Participants were 93 preschool children with SSD from the Sound Start Study. Participants completed the Polysyllable Preschool Test (Baker, 2013) as well as phonological processing, receptive vocabulary, and print knowledge tasks. Cluster analysis was completed, and 2 clusters were identified: low polysyllable accuracy and moderate polysyllable accuracy. The clusters were significantly different based on 2 measures of phonological awareness and measures of receptive vocabulary, rapid naming, and digit span. The clusters were not significantly different on sound matching accuracy or letter, sound, or print concept knowledge. The participants' poor performance on print knowledge tasks suggested that as a group, they were at risk of literacy difficulties but that there was a cluster of participants at greater risk: those with both low polysyllable accuracy and poor phonological processing.
Evaluation of TIGGE Ensemble Forecasts of Precipitation in Distinct Climate Regions in Iran
NASA Astrophysics Data System (ADS)
Aminyavari, Saleh; Saghafian, Bahram; Delavar, Majid
2018-04-01
The application of numerical weather prediction (NWP) products is increasing dramatically. Existing reports indicate that ensemble predictions have better skill than deterministic forecasts. In this study, numerical ensemble precipitation forecasts in the TIGGE database were evaluated using deterministic, dichotomous (yes/no), and probabilistic techniques over Iran for the period 2008-16. Thirteen rain gauges spread over eight homogeneous precipitation regimes were selected for evaluation. The Inverse Distance Weighting and Kriging methods were adopted for interpolation of the prediction values, downscaled to the stations at lead times of one to three days. To enhance the forecast quality, NWP values were post-processed via Bayesian Model Averaging. The results showed that ECMWF had better scores than other products. However, products of all centers underestimated precipitation in high precipitation regions while overestimating precipitation in other regions. This points to a systematic bias in forecasts and demands application of bias correction techniques. Based on dichotomous evaluation, NCEP did better at most stations, although all centers overpredicted the number of precipitation events. Compared to those of ECMWF and NCEP, UKMO yielded higher scores in mountainous regions, but performed poorly at other selected stations. Furthermore, the evaluations showed that all centers had better skill in wet than in dry seasons. The quality of post-processed predictions was better than those of the raw predictions. In conclusion, the accuracy of the NWP predictions made by the selected centers could be classified as medium over Iran, while post-processing of predictions is recommended to improve the quality.
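As a pointer to the downscaling step mentioned above, the sketch below shows plain inverse-distance weighting of gridded forecast values to a station location. Coordinates, values and the power parameter are illustrative assumptions; it also assumes the station does not coincide with a grid node.

```python
import numpy as np

grid_points = np.array([[51.0, 35.0], [51.5, 35.0], [51.0, 35.5], [51.5, 35.5]])  # lon, lat
grid_values = np.array([2.0, 3.5, 1.0, 4.0])          # forecast precipitation (mm)
station = np.array([51.2, 35.3])
power = 2.0

d = np.linalg.norm(grid_points - station, axis=1)     # distances to the station
w = 1.0 / d ** power
idw_value = np.sum(w * grid_values) / np.sum(w)
print(f"IDW-downscaled forecast at station: {idw_value:.2f} mm")
```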
Word skipping: effects of word length, predictability, spelling and reading skill.
Slattery, Timothy J; Yates, Mark
2017-08-31
Readers' eyes often skip over words as they read. Skipping rates are largely determined by word length; short words are skipped more than long words. However, the predictability of a word in context also impacts skipping rates. Rayner, Slattery, Drieghe and Liversedge (2011) reported an effect of predictability on word skipping for even long words (10-13 characters) that extend beyond the word identification span. Recent research suggests that better readers and spellers have an enhanced perceptual span (Veldre & Andrews, 2014). We explored whether reading and spelling skill interact with word length and predictability to impact word skipping rates in a large sample (N=92) of average and poor adult readers. Participants read the items from Rayner et al. (2011) while their eye movements were recorded. Spelling skill (zSpell) was assessed using the dictation and recognition tasks developed by Sally Andrews and colleagues. Reading skill (zRead) was assessed from reading speed (words per minute) and accuracy on three 120-word passages, each with 10 comprehension questions. We fit linear mixed models to the target gaze duration data and generalized linear mixed models to the target word skipping data. Target word gaze durations were significantly predicted by zRead, while the skipping likelihoods were significantly predicted by zSpell. Additionally, for gaze durations, zRead significantly interacted with word predictability, as better readers relied less on context to support word processing. These effects are discussed in relation to the lexical quality hypothesis and eye movement models of reading.
Juliana, Philomin; Singh, Ravi P; Singh, Pawan K; Crossa, Jose; Rutkoski, Jessica E; Poland, Jesse A; Bergstrom, Gary C; Sorrells, Mark E
2017-07-01
The leaf spotting diseases of wheat, which include Septoria tritici blotch (STB), Stagonospora nodorum blotch (SNB), and tan spot (TS), pose challenges to breeding programs selecting for resistance. A promising approach that could enable selection prior to phenotyping is genomic selection, which uses genome-wide markers to estimate breeding values (BVs) for quantitative traits. To evaluate this approach for seedling and/or adult plant resistance (APR) to STB, SNB, and TS, we compared the predictive ability of the least-squares (LS) approach with genomic-enabled prediction models including genomic best linear unbiased predictor (GBLUP), Bayesian ridge regression (BRR), Bayes A (BA), Bayes B (BB), Bayes Cπ (BC), Bayesian least absolute shrinkage and selection operator (BL), and reproducing kernel Hilbert spaces markers (RKHS-M), a pedigree-based model (RKHS-P) and RKHS markers and pedigree (RKHS-MP). We observed that LS gave the lowest prediction accuracies and RKHS-MP, the highest. The genomic-enabled prediction models and RKHS-P gave similar accuracies. The increase in accuracy using genomic prediction models over LS was 48%. The mean genomic prediction accuracies were 0.45 for STB (APR), 0.55 for SNB (seedling), 0.66 for TS (seedling) and 0.48 for TS (APR). We also compared markers from two whole-genome profiling approaches: genotyping by sequencing (GBS) and diversity arrays technology sequencing (DArTseq) for prediction. While GBS markers performed slightly better than DArTseq, combining markers from the two approaches did not improve accuracies. We conclude that implementing GS in breeding for these diseases would help to achieve higher accuracies and rapid gains from selection. Copyright © 2017 Crop Science Society of America.
Can nutrient status of four woody plant species be predicted using field spectrometry?
NASA Astrophysics Data System (ADS)
Ferwerda, Jelle G.; Skidmore, Andrew K.
This paper demonstrates the potential of hyperspectral remote sensing to predict the chemical composition (i.e., nitrogen, phosphorus, calcium, potassium, sodium, and magnesium) of three tree species (i.e., willow, mopane and olive) and one shrub species (i.e., heather). Reflectance spectra, derivative spectra and continuum-removed spectra were compared in terms of predictive power. Results showed that the best predictions for nitrogen, phosphorus, and magnesium occur when using derivative spectra, and the best predictions for sodium, potassium, and calcium occur when using continuum-removed data. To test whether a general model for multiple species is also valid for individual species, a bootstrapping routine was applied. Prediction accuracies for the individual species were lower than prediction accuracies obtained for the combined dataset for all except one element/species combination, indicating that indices with high prediction accuracies at the landscape scale are less appropriate to detect the chemical content of individual species.
Mitsunaga, Tisha; Hedt-Gauthier, Bethany L; Ngizwenayo, Elias; Farmer, Didi Bertrand; Gaju, Erick; Drobac, Peter; Basinga, Paulin; Hirschhorn, Lisa; Rich, Michael L; Winch, Peter J; Ngabo, Fidele; Mugeni, Cathy
2015-08-01
Community health workers (CHWs) collect data for routine services, surveys and research in their communities. However, the quality of these data is largely unknown. Utilizing poor quality data can result in inefficient resource use, misinformation about system gaps, and poor program management and effectiveness. This study aims to measure CHW data accuracy, defined as agreement of household registers with household member interviews and client records, in one district in Eastern Province, Rwanda. We used cluster-lot quality assurance sampling to randomly sample six CHWs per cell and six households per CHW. We classified cells as having 'poor' or 'good' accuracy for household registers for five indicators, calculating point estimates of the percent of households with accurate data by health center. We evaluated 204 CHW registers and 1,224 households for accuracy across 34 cells in southern Kayonza. Point estimates across health centers ranged from 79 to 100% for individual indicators and 61 to 72% for the composite indicator. Recording error appeared random for all but the widely under-reported number of women on a modern family planning method. Overall, accuracy was largely 'good' across cells, with varying results by indicator. Program managers should identify optimum thresholds for 'good' data quality and interventions to reach them according to data use. Decreasing variability and improving quality will help realize the potential of these routinely collected data to be more meaningful for community health program management. We encourage further studies assessing CHW data quality and the impact that training, supervision and other strategies have on improving it.
The Influence of Delaying Judgments of Learning on Metacognitive Accuracy: A Meta-Analytic Review
ERIC Educational Resources Information Center
Rhodes, Matthew G.; Tauber, Sarah K.
2011-01-01
Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the…
Emotion blocks the path to learning under stereotype threat
Good, Catherine; Whiteman, Ronald C.; Maniscalco, Brian; Dweck, Carol S.
2012-01-01
Gender-based stereotypes undermine females’ performance on challenging math tests, but how do they influence their ability to learn from the errors they make? Females under stereotype threat or non-threat were presented with accuracy feedback after each problem on a GRE-like math test, followed by an optional interactive tutorial that provided step-wise problem-solving instruction. Event-related potentials tracked the initial detection of the negative feedback following errors [feedback related negativity (FRN), P3a], as well as any subsequent sustained attention/arousal to that information [late positive potential (LPP)]. Learning was defined as success in applying tutorial information to correction of initial test errors on a surprise retest 24-h later. Under non-threat conditions, emotional responses to negative feedback did not curtail exploration of the tutor, and the amount of tutor exploration predicted learning success. In the stereotype threat condition, however, greater initial salience of the failure (FRN) predicted less exploration of the tutor, and sustained attention to the negative feedback (LPP) predicted poor learning from what was explored. Thus, under stereotype threat, emotional responses to negative feedback predicted both disengagement from learning and interference with learning attempts. We discuss the importance of emotion regulation in successful rebound from failure for stigmatized groups in stereotype-salient environments. PMID:21252312
Emotion blocks the path to learning under stereotype threat.
Mangels, Jennifer A; Good, Catherine; Whiteman, Ronald C; Maniscalco, Brian; Dweck, Carol S
2012-02-01
Gender-based stereotypes undermine females' performance on challenging math tests, but how do they influence their ability to learn from the errors they make? Females under stereotype threat or non-threat were presented with accuracy feedback after each problem on a GRE-like math test, followed by an optional interactive tutorial that provided step-wise problem-solving instruction. Event-related potentials tracked the initial detection of the negative feedback following errors [feedback related negativity (FRN), P3a], as well as any subsequent sustained attention/arousal to that information [late positive potential (LPP)]. Learning was defined as success in applying tutorial information to correction of initial test errors on a surprise retest 24-h later. Under non-threat conditions, emotional responses to negative feedback did not curtail exploration of the tutor, and the amount of tutor exploration predicted learning success. In the stereotype threat condition, however, greater initial salience of the failure (FRN) predicted less exploration of the tutor, and sustained attention to the negative feedback (LPP) predicted poor learning from what was explored. Thus, under stereotype threat, emotional responses to negative feedback predicted both disengagement from learning and interference with learning attempts. We discuss the importance of emotion regulation in successful rebound from failure for stigmatized groups in stereotype-salient environments.
NASA Technical Reports Server (NTRS)
Cronkhite, James D.
1993-01-01
Accurate vibration prediction for helicopter airframes is needed to 'fly from the drawing board' without costly development testing to solve vibration problems. The principal analytical tool for vibration prediction within the U.S. helicopter industry is the NASTRAN finite element analysis. Under the NASA DAMVIBS research program, Bell conducted NASTRAN modeling, ground vibration testing, and correlations of both metallic (AH-1G) and composite (ACAP) airframes. The objectives of the program were to assess NASTRAN airframe vibration correlations, to investigate contributors to poor agreement, and to improve modeling techniques. In the past, there has been low confidence in higher frequency vibration prediction for helicopters that have multibladed rotors (three or more blades) with predominant excitation frequencies typically above 15 Hz. Bell's findings under the DAMVIBS program, discussed in this paper, included the following: (1) the accuracy of finite element models (FEM) for composite and metallic airframes was generally found to be comparable; (2) more detail is needed in the FEM to improve higher frequency prediction; (3) secondary structure not normally included in the FEM can provide significant stiffening; (4) damping can significantly affect phase response at higher frequencies; and (5) future work is needed in the areas of determination of rotor-induced vibratory loads and optimization.
Marhefka, Stephanie L; Mellins, Claude Ann; Brackis-Cott, Elizabeth; Dolezal, Curtis; Ehrhardt, Anke A
2009-10-01
Previous studies suggest that mothers can help adolescents make responsible sexual decisions by talking with them about sexual health. Yet, it is not clear how and when mothers make decisions about talking with their adolescents about sex. We sought to determine: (1) the accuracy of mothers' and adolescents' predictions of adolescents' age of sexual debut; and (2) if mothers' beliefs about their adolescents' sexual behavior affected the frequency of mother-adolescent communication about sexual topics and, in turn, if mother-adolescent communication about sexual topics affected mothers' accuracy in predicting adolescents' current and future sexual behavior. Participants were 129 urban, ethnic minority HIV-negative youth (52% male and 48% female; ages 10-14 years at baseline; ages 13-19 years at follow-up) and their mothers; 47% of mothers were HIV-positive. Most mothers and adolescents predicted poorly when adolescents would sexually debut. At baseline, mothers' communication with their early adolescents about sexual topics was not significantly associated with mothers' assessments of their early adolescents' future sexual behavior. At follow-up, mothers were more likely to talk with their adolescents about HIV prevention and birth control if they believed that their adolescents had sexually debuted, though these effects were attenuated by baseline levels of communication. Only one effect was found for adolescents' gender: mothers reported greater communication about sex with daughters. Studies are needed to determine how mothers make decisions about talking with their adolescents about sex, as well as to examine to what extent and in what instances mothers can reduce their adolescents' sexual risk behavior by providing comprehensive, developmentally appropriate sex education well before adolescents are likely to debut.
Clinical Use of CT Perfusion For Diagnosis and Prediction of Lesion Growth in Acute Ischemic Stroke
Huisa, Branko N; Neil, William P; Schrader, Ronald; Maya, Marcel; Pereira, Benedict; Bruce, Nhu T; Lyden, Patrick D
2012-01-01
Background and Purpose CT perfusion (CTP) mapping in research centers correlates well with diffusion weighted imaging (DWI) lesions and may accurately differentiate the infarct core from ischemic penumbra. The value of CTP in real-world clinical practice has not been fully established. We investigated the yield of CTP-derived cerebral blood volume (CBV) and mean transit time (MTT) for the detection of cerebral ischemia and ischemic penumbra in a sample of acute ischemic stroke (AIS) patients. Methods We studied 165 patients with initial clinical symptoms suggestive of AIS. All patients had an initial non-contrast head CT, CT perfusion (CTP), CT angiogram (CTA) and follow-up brain MRI. The obtained perfusion images were used for image processing. CBV, MTT and DWI lesion volumes were visually estimated and manually traced. Statistical analysis was done using R 2.14 and SAS 9.1. Results All normal DWI sequences had normal CBV and MTT studies (N=89). Seventy-three patients had acute DWI lesions. CBV was abnormal in 23.3% and MTT was abnormal in 42.5% of these patients. There was a high specificity (91.8%) but poor sensitivity (40.0%) for MTT maps predicting positive DWI. Spearman correlation between MTT and DWI lesions was significant (ρ=0.66, p<0.0001) only for abnormal MTT and DWI lesions >0 cc. CBV lesions did not correlate with final DWI. Conclusions In real-world use, acute imaging with CTP did not predict stroke or DWI lesions with sufficient accuracy. Our findings argue against the use of CTP for screening AIS patients until real-world implementations match the accuracy reported from specialized research centers. PMID:23253533
Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong
2015-09-01
A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients undergoing heart valve replacement or heart valvuloplasty during the phases of initial and stable anticoagulation treatment. Ten pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the initial and stable phases of anticoagulation therapy. Predicted dose was compared to therapeutic dose using the percentage of predicted doses that fell within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose was 3.05±1.23 mg/day for initial treatment and 3.45±1.18 mg/day for stable treatment. The percentages of the predicted dose within 20% of the therapeutic dose were 44.0±8.8% and 44.6±9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85±0.18 mg/day and 0.93±0.19 mg/day, respectively. All algorithms had better performance in the ideal group than in the low-dose and high-dose groups. The only exception was the Wadelius et al. algorithm, which had better performance in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both the initial and stable phases of treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase. Copyright © 2015 Elsevier Ltd. All rights reserved.
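Both performance metrics used above (MAE and the percentage of predictions within 20% of the therapeutic dose) are simple to compute. A minimal sketch in Python, with hypothetical dose values rather than the study data, might look like this:

import numpy as np

def dose_prediction_metrics(predicted, actual):
    """Return MAE (mg/day) and the percentage of predictions
    falling within 20% of the actual therapeutic dose."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    mae = np.mean(np.abs(predicted - actual))
    within_20 = np.mean(np.abs(predicted - actual) <= 0.20 * actual) * 100
    return mae, within_20

# Hypothetical example: predicted vs. actual warfarin doses (mg/day)
pred = [2.8, 3.5, 4.1, 2.2, 5.0]
obs = [3.0, 3.2, 5.5, 2.1, 4.0]
mae, pct20 = dose_prediction_metrics(pred, obs)
print(f"MAE = {mae:.2f} mg/day, within 20% = {pct20:.0f}%")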
Gao, Jianyong; Tian, Gang; Han, Xu; Zhu, Qiang
2018-01-01
Oral squamous cell carcinoma (OSCC) is the sixth most common type of cancer worldwide, with poor prognosis. The present study aimed to identify gene signatures that could classify OSCC and predict prognosis in different stages. A training data set (GSE41613) and two validation data sets (GSE42743 and GSE26549) were acquired from the online Gene Expression Omnibus database. In the training data set, patients were classified based on the tumor-node-metastasis staging system, and subsequently grouped into low stage (L) or high stage (H). Signature genes between L and H stages were selected by disparity index analysis, and classification was performed using the expression of these signature genes. The established classification was compared with the L and H classification, and fivefold cross validation was used to evaluate its stability. Enrichment analysis for the signature genes was implemented with the Database for Annotation, Visualization and Integrated Discovery. The two validation data sets were used to determine the precision of the classification. Survival analysis was conducted following each classification using the package ‘survival’ in R software. A set of 24 signature genes was identified based on the classification model with an Fi value of 0.47, which was used to distinguish OSCC samples in two different stages. Overall survival of patients in the H stage was higher than that of patients in the L stage. Signature genes were primarily enriched in the ‘ether lipid metabolism’ pathway and biological processes such as ‘positive regulation of adaptive immune response’ and ‘apoptotic cell clearance’. The results provided a novel 24-gene set that may serve as biomarkers to predict OSCC prognosis with high accuracy and may help determine an appropriate treatment program for patients with OSCC in addition to the traditional evaluation index. PMID:29257303
The accuracy of assessment of walking distance in the elective spinal outpatients setting.
Okoro, Tosan; Qureshi, Assad; Sell, Beulah; Sell, Philip
2010-02-01
Self-reported walking distance is a clinically relevant measure of function. The aim of this study was to define patient accuracy and understand factors that might influence perceived walking distance in an elective spinal outpatient setting. This was a prospective cohort study. 103 patients were asked to perform one test of distance estimation and 2 tests of functional distance perception using pre-measured landmarks. Standard spine-specific outcomes included the patient-reported claudication distance, Oswestry disability index (ODI), Low Back Outcome Score (LBOS), visual analogue score (VAS) for leg and back, and other measures. There were over-estimators and under-estimators. Overall, accuracy to within 9.14 metres (m) (10 yards) was poor at only 5% for distance estimation and 40% for the two tests of functional distance perception. Distance estimation: actual distance 111 m; mean response 245 m (95% CI 176.3-314.7). Functional test 1: actual distance 29.2 m; mean response 71.7 m (95% CI 53.6-88.9). Functional test 2: actual distance 19.6 m; mean response 47.4 m (95% CI 35.02-59.95). Surprisingly, patients over 60 years of age (n = 43) were twice as accurate on each test performed compared to those under 60 (n = 60) (average 70% overestimation compared to 140%; p = 0.06). Patients in social class I (n = 18) were more accurate than those in classes II-V (n = 85). There was a positive correlation between poor accuracy and increasing MZD (Pearson's correlation coefficient 0.250; p = 0.012). ODI, LBOS and other parameters measured showed no correlation. Subjective distance perception and estimation is poor in this population. Patients over 60 and those with a professional background are more accurate but still poor.
Gene Expression Profiling Predicts the Development of Oral Cancer
Saintigny, Pierre; Zhang, Li; Fan, You-Hong; El-Naggar, Adel K.; Papadimitrakopoulou, Vali; Feng, Lei; Lee, J. Jack; Kim, Edward S.; Hong, Waun Ki; Mao, Li
2011-01-01
Patients with oral preneoplastic lesion (OPL) have a high risk of developing oral cancer. Although certain risk factors such as smoking status and histology are known, our ability to predict oral cancer risk remains poor. The study objective was to determine the value of gene expression profiling in predicting oral cancer development. Gene expression profiles were measured in 86 of 162 OPL patients who were enrolled in a clinical chemoprevention trial that used the incidence of oral cancer development as a prespecified endpoint. The median follow-up time was 6.08 years and 35 of the 86 patients developed oral cancer over the course of follow-up. Gene expression profiles were associated with oral cancer-free survival and used to develop multivariate predictive models for oral cancer prediction. We developed a 29-transcript predictive model which showed marked improvement in prediction accuracy (with an 8% prediction error rate) over models using previously known clinico-pathological risk factors. Based on the gene expression profile data, we also identified 2,182 transcripts significantly associated with oral cancer risk (P<0.01, univariate Cox proportional hazards model). Functional pathway analysis revealed proteasome machinery, MYC, and ribosome components as the top gene sets associated with oral cancer risk. In multiple independent datasets, the expression profiles of these genes can differentiate head and neck cancer from normal mucosa. Our results show that gene expression profiles may improve the prediction of oral cancer risk in OPL patients and that the significant genes identified may serve as potential targets for oral cancer chemoprevention. PMID:21292635
Application of Machine Learning to Predict Dietary Lapses During Weight Loss.
Goldstein, Stephanie P; Zhang, Fengqing; Thomas, John G; Butryn, Meghan L; Herbert, James D; Forman, Evan M
2018-05-01
Individuals who adhere to dietary guidelines provided during weight loss interventions tend to be more successful with weight control. Any deviation from dietary guidelines can be referred to as a "lapse." There is a growing body of research showing that lapses are predictable using a variety of physiological, environmental, and psychological indicators. With recent technological advancements, it may be possible to assess these triggers and predict dietary lapses in real time. The current study sought to use machine learning techniques to predict lapses and evaluate the utility of combining both group- and individual-level data to enhance lapse prediction. The current study trained and tested a machine learning algorithm capable of predicting dietary lapses from a behavioral weight loss program among adults with overweight/obesity (n = 12). Participants were asked to follow a weight control diet for 6 weeks and complete ecological momentary assessment (EMA; repeated brief surveys delivered via smartphone) regarding dietary lapses and relevant triggers. WEKA decision trees were used to predict lapses with an accuracy of 0.72 for the group of participants. However, generalization of the group algorithm to each individual was poor, and as such, group- and individual-level data were combined to improve prediction. The findings suggest that 4 weeks of individual data collection is recommended to attain optimal model performance. The predictive algorithm could be utilized to provide in-the-moment interventions to prevent dietary lapses and therefore enhance weight losses. Furthermore, methods in the current study could be translated to other types of health behavior lapses.
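The study used WEKA decision trees; as a rough stand-in, a scikit-learn decision tree trained on synthetic EMA-style features (all feature names and data below are hypothetical) illustrates the group-level classification step:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400  # synthetic EMA records

# Hypothetical triggers: hunger, stress, tempting food present, hours since last meal
X = np.column_stack([
    rng.integers(1, 6, n),   # hunger (1-5)
    rng.integers(1, 6, n),   # stress (1-5)
    rng.integers(0, 2, n),   # tempting food present (0/1)
    rng.uniform(0, 8, n),    # hours since last meal
])
# Synthetic lapse labels loosely driven by hunger and food availability
y = ((X[:, 0] + 2 * X[:, 2] + rng.normal(0, 1, n)) > 4.5).astype(int)

clf = DecisionTreeClassifier(max_depth=3, class_weight="balanced", random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")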
ERIC Educational Resources Information Center
de Bruin, Anique B. H.; Thiede, Keith W.; Camp, Gino; Redford, Joshua
2011-01-01
The ability to monitor understanding of texts, usually referred to as metacomprehension accuracy, is typically quite poor in adult learners; however, recently interventions have been developed to improve accuracy. In two experiments, we evaluated whether generating delayed keywords prior to judging comprehension improved metacomprehension accuracy…
NASA Astrophysics Data System (ADS)
Sembiring, J.; Jones, F.
2018-03-01
The red cell distribution width (RDW) to platelet ratio (RPR) can predict liver fibrosis and cirrhosis in chronic hepatitis B with relatively high accuracy. RPR was superior to other non-invasive methods for predicting liver fibrosis, such as the AST to ALT ratio, the AST to platelet ratio index, and FIB-4. The aim of this study was to assess the diagnostic accuracy of the RDW to platelet ratio for liver fibrosis in chronic hepatitis B patients, compared with Fibroscan. This cross-sectional study was conducted at Adam Malik Hospital from January-June 2015. We examined 34 chronic hepatitis B patients, recording RDW, platelet count, and Fibroscan results. Data were statistically analyzed. In the ROC analysis, RPR had an accuracy of 72.3% (95% CI: 84.1%-97%). In this study, the RPR had a moderate ability to predict fibrosis degree (p = 0.029 with AUC > 70%). The cutoff value of RPR was 0.0591, sensitivity and specificity were 71.4% and 60%, the positive predictive value (PPV) was 55.6% and the negative predictive value (NPV) was 75%, the positive likelihood ratio was 1.79 and the negative likelihood ratio was 0.48. RPR has the ability to predict the degree of liver fibrosis in chronic hepatitis B patients with moderate accuracy.
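A cutoff such as 0.0591 is typically chosen from the ROC curve, for example by maximizing Youden's J (sensitivity + specificity - 1). A sketch with synthetic RPR values (not the study data) could look like this:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Synthetic RPR values: higher on average in patients with significant fibrosis
rpr_fibrosis = rng.normal(0.07, 0.02, 60)
rpr_no_fibrosis = rng.normal(0.05, 0.015, 60)
scores = np.concatenate([rpr_fibrosis, rpr_no_fibrosis])
labels = np.concatenate([np.ones(60), np.zeros(60)])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
youden_j = tpr - fpr
best = np.argmax(youden_j)
print(f"AUC = {auc:.2f}")
print(f"optimal cutoff = {thresholds[best]:.4f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")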
Comparison of Three Risk Scores to Predict Outcomes of Severe Lower Gastrointestinal Bleeding
Camus, Marine; Jensen, Dennis M.; Ohning, Gordon V.; Kovacs, Thomas O.; Jutabha, Rome; Ghassemi, Kevin A.; Machicado, Gustavo A.; Dulai, Gareth S.; Jensen, Mary Ellen; Gornbein, Jeffrey A.
2014-01-01
Background & aims Improved medical decisions by using a score at the initial patient triage level may lead to improvements in patient management, outcomes, and resource utilization. There is no validated score for management of lower gastrointestinal bleeding (LGIB) unlike for upper GIB. The aim of our study was to compare the accuracies of 3 different prognostic scores (CURE Hemostasis prognosis score, Charlston index and ASA score) for the prediction of 30 day rebleeding, surgery and death in severe LGIB. Methods Data on consecutive patients hospitalized with severe GI bleeding from January 2006 to October 2011 in our two-tertiary academic referral centers were prospectively collected. Sensitivities, specificities, accuracies and area under the receiver operating characteristic (AUROC) were computed for three scores for predictions of rebleeding, surgery and mortality at 30 days. Results 235 consecutive patients with LGIB were included between 2006 and 2011. 23% of patients rebled, 6% had surgery, and 7.7% of patients died. The accuracies of each score never reached 70% for predicting rebleeding or surgery in either. The ASA score had a highest accuracy for predicting mortality within 30 days (83.5%) whereas the CURE Hemostasis prognosis score and the Charlson index both had accuracies less than 75% for the prediction of death within 30 days. Conclusions ASA score could be useful to predict death within 30 days. However a new score is still warranted to predict all 30 days outcomes (rebleeding, surgery and death) in LGIB. PMID:25599218
Effectiveness of Link Prediction for Face-to-Face Behavioral Networks
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30–0.45 and a recall of 0.10–0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks. PMID:24339956
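As a toy illustration of the evaluation described above, conventional similarity-based link prediction and its precision/recall can be sketched with networkx; the observed graph and the "future" links here are made up:

import networkx as nx

# Toy "observation window": contacts seen so far
G_obs = nx.Graph([(1, 2), (2, 3), (3, 4), (1, 3), (4, 5), (2, 5)])
# Links that actually form in the next window (ground truth)
future_links = {(1, 4), (3, 5)}

# Score all currently unconnected pairs with the Jaccard coefficient
candidates = list(nx.jaccard_coefficient(G_obs))
candidates.sort(key=lambda x: x[2], reverse=True)

k = 3  # predict the top-k highest-scoring pairs
predicted = {tuple(sorted((u, v))) for u, v, _ in candidates[:k]}
truth = {tuple(sorted(e)) for e in future_links}

tp = len(predicted & truth)
precision = tp / len(predicted)
recall = tp / len(truth)
print(f"precision = {precision:.2f}, recall = {recall:.2f}")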
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to the NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional data settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
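For reference, a widely used IPCW-weighted c-statistic (Uno's estimator) takes roughly the following form, where Δ_i is the event indicator, T_i the observed time, \hat{M}_i the model's risk score, τ a truncation time, and \hat{G} the Kaplan–Meier estimate of the censoring survival function; the paper's exact estimator may differ in detail:

\hat{C}_{\tau} \;=\; \frac{\sum_{i \neq j} \Delta_i \,\hat{G}(T_i)^{-2}\, I\!\left(T_i < T_j,\; T_i < \tau\right) I\!\left(\hat{M}_i > \hat{M}_j\right)}{\sum_{i \neq j} \Delta_i \,\hat{G}(T_i)^{-2}\, I\!\left(T_i < T_j,\; T_i < \tau\right)}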
Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.
2007-01-01
To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.
Volz-Sidiropoulou, Eftychia; Gauggel, Siegfried
2012-06-01
Older individuals who recognize their cognitive difficulties are more likely to adjust their everyday life to their actual cognitive functioning, particularly when they are able to estimate their abilities accurately. We assessed self- and spouse-ratings of memory and attention difficulties in everyday life of healthy, older individuals and compared them with the respective test performance. Eighty-four older individuals (women's age, M = 67.4 years, SD = 5.2; men's age, M = 68.5 years, SD = 4.9) completed both the self and the spouse versions of the Attention Deficit Questionnaire and the Everyday Memory Questionnaire and completed two neuropsychological tests. Using the residual score approach, subjective metacognitive measures of memory and attention were created and compared with actual test performance. Significant associations between subjective and objective scores were found only for men and only for episodic memory measures. Men who underreported memory difficulties performed more poorly; men who overreported memory difficulties performed better. Men's recognition performance was best predicted by subjective measures (R² = .25), followed by delayed recall (R² = .14) and forgetting rate (R² = .13). The results indicate gender-specific differences in metacognitive accuracy and predictive validity of subjective ratings. PsycINFO Database Record (c) 2012 APA, all rights reserved
Filatov, Serhii
2017-10-10
Uranotaenia unguiculata is a Palaearctic mosquito species with poorly known distribution and ecology. This study is aimed at filling the gap in our understanding of the species' potential distribution and its environmental requirements through a species distribution modelling (SDM) exercise. Furthermore, aspects of the mosquito's ecology that may be relevant to the epidemiology of certain zoonotic vector-borne diseases in Europe are discussed. A maximum entropy (Maxent) modelling approach has been applied to predict the potential distribution of Ur. unguiculata in the Western Palaearctic. Along with its high accuracy and predictive power, the model reflects the known species distribution well and predicts as highly suitable some areas where the occurrence of the species is hitherto unknown. To our knowledge, the potential distribution of a mosquito species from the genus Uranotaenia is modelled here for the first time. Given that Ur. unguiculata is a widely distributed species, and some pathogens of zoonotic concern have been detected in this mosquito on several occasions, the question regarding its host associations and possible epidemiological role warrants further investigation.
In Vitro Measures for Assessing Boar Semen Fertility.
Jung, M; Rüdiger, K; Schulze, M
2015-07-01
Optimization of artificial insemination (AI) for pig production and evaluation of the fertilizing capacity of boar semen are highly related. Field studies have demonstrated significant variation in semen quality and fertility. The semen quality of boars is primarily affected by breed and season. AI centres routinely examine boar semen to predict male fertility. Overall, the evaluation of classical parameters, such as sperm morphology, sperm motility, sperm concentration and ejaculate volume, allows the identification of ejaculates corresponding to poor fertility but not high-efficiency prediction of field fertility. The development of new sperm tests for measuring certain sperm functions has attempted to solve this problem. Fluorescence staining can categorize live and dead spermatozoa in the ejaculate and identify spermatozoa with active mitochondria. Computer-assisted semen analysis (CASA) provides an objective assessment of multiple kinetic sperm parameters. However, sperm tests usually assess only single factors involved in the fertilization process. Thus, basing prediction of fertilizing capacity on a selective collection of sperm tests leads to greater accuracy than using single tests. In the present brief review, recent diagnostic laboratory methods that directly relate to AI performance as well as the development of a new boar fertility in vitro index are discussed. © 2015 Blackwell Verlag GmbH.
Predicting adherence of patients with HF through machine learning techniques.
Karanasiou, Georgia Spiridon; Tripoliti, Evanthia Eleftherios; Papadopoulos, Theofilos Grigorios; Kalatzis, Fanis Georgios; Goletsis, Yorgos; Naka, Katerina Kyriakos; Bechlioulis, Aris; Errachid, Abdelhamid; Fotiadis, Dimitrios Ioannis
2016-09-01
Heart failure (HF) is a chronic disease characterised by poor quality of life, recurrent hospitalisation and high mortality. Adherence of the patient to treatment suggested by experts has been proven a significant deterrent of the above-mentioned serious consequences. However, non-adherence rates are significantly high, a fact that highlights the importance of predicting the adherence of the patient and enabling experts to adjust patient monitoring and management accordingly. The aim of this work is to predict the adherence of patients with HF through the application of machine learning techniques. Specifically, it aims to classify a patient not only as medication adherent or not, but also as adherent or not in terms of medication, nutrition and physical activity (global adherent). Two classification problems are addressed: (i) whether the patient is global adherent or not and (ii) whether the patient is medication adherent or not. Eleven classification algorithms are employed and combined with feature selection and resampling techniques. The classifiers are evaluated on a dataset of 90 patients. The patients are characterised as medication and global adherent based on clinician estimation. The highest detection accuracy is 82% and 91% for the first and the second classification problem, respectively.
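A minimal scikit-learn sketch of this kind of classifier comparison follows; synthetic data stand in for the 90-patient cohort, and class imbalance is handled with class weights rather than the resampling techniques used in the paper:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the 90-patient adherence dataset
X, y = make_classification(n_samples=90, n_features=20, n_informative=6,
                           weights=[0.3, 0.7], random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000, class_weight="balanced"),
    "random forest": RandomForestClassifier(n_estimators=200,
                                            class_weight="balanced", random_state=0),
    "SVM": SVC(class_weight="balanced"),
}
for name, model in models.items():
    pipe = make_pipeline(SelectKBest(f_classif, k=8), model)
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: accuracy = {acc:.2f}")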
Employing conservation of co-expression to improve functional inference
Daub, Carsten O; Sonnhammer, Erik LL
2008-01-01
Background Observing co-expression between genes suggests that they are functionally coupled. Co-expression of orthologous gene pairs across species may improve function prediction beyond the level achieved in a single species. Results We used orthology between genes of the three different species S. cerevisiae, D. melanogaster, and C. elegans to combine co-expression across two species at a time. This led to increased function prediction accuracy when we incorporated expression data from either of the other two species, and accuracy increased even further when conservation across both of the two other species was considered at the same time. Employing conservation across species to incorporate abundant model organism data for the prediction of protein interactions in poorly characterized species constitutes a very powerful annotation method. Conclusion To be able to employ the most suitable co-expression distance measure for our analysis, we evaluated the ability of four popular gene co-expression distance measures to detect biologically relevant interactions between pairs of genes. For the expression datasets employed in our co-expression conservation analysis above, we used the GO and the KEGG PATHWAY databases as gold standards. While the differences between distance measures were small, Spearman correlation gave the most robust results. PMID:18808668
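The distance-measure comparison boils down to computing co-expression (dis)similarity between gene pairs. A small scipy example on toy expression profiles (illustrative only) shows why a rank-based measure such as Spearman can be more robust to monotone but non-linear relationships:

import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(2)
# Toy expression profiles for two genes across 30 conditions,
# related through a monotone but non-linear transformation
gene_a = rng.normal(size=30)
gene_b = np.exp(gene_a) + rng.normal(scale=0.1, size=30)

r_p, _ = pearsonr(gene_a, gene_b)
r_s, _ = spearmanr(gene_a, gene_b)
# Distances as used in co-expression analysis: 1 - correlation
print(f"Pearson distance:  {1 - r_p:.2f}")
print(f"Spearman distance: {1 - r_s:.2f}")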
Chavez, Pierre-François; Meeus, Joke; Robin, Florent; Schubert, Martin Alexander; Somville, Pascal
2018-01-01
The evaluation of drug–polymer miscibility in the early phase of drug development is essential to ensure successful amorphous solid dispersion (ASD) manufacturing. This work investigates the comparison of thermodynamic models, conventional experimental screening methods (solvent casting, quench cooling), and a novel atomization screening device based on their ability to predict drug–polymer miscibility, solid state properties (Tg value and width), and adequate polymer selection during the development of spray-dried amorphous solid dispersions (SDASDs). Binary ASDs of four drugs and seven polymers were produced at 20:80, 40:60, 60:40, and 80:20 (w/w). Samples were systematically analyzed using modulated differential scanning calorimetry (mDSC) and X-ray powder diffraction (XRPD). Principal component analysis (PCA) was used to qualitatively assess the predictability of screening methods with regards to SDASD development. Poor correlation was found between theoretical models and experimentally-obtained results. Additionally, the limited ability of usual screening methods to predict the miscibility of SDASDs did not guarantee the appropriate selection of lead excipient for the manufacturing of robust SDASDs. Contrary to standard approaches, our novel screening device allowed the selection of optimal polymer and drug loading and established insight into the final properties and performance of SDASDs at an early stage, therefore enabling the optimization of the scaled-up late-stage development. PMID:29518936
SGS Dynamics and Modeling near a Rough Wall.
NASA Astrophysics Data System (ADS)
Juneja, Anurag; Brasseur, James G.
1998-11-01
Large-eddy simulation (LES) of the atmospheric boundary layer (ABL) using classical subgrid-scale (SGS) models is known to poorly predict mean shear at the first few grid cells near the rough surface, creating error which can propagate vertically to infect the entire ABL. Our goal was to determine the first-order errors in predicted SGS terms that arise as a consequence of necessary under-resolution of integral scales and anisotropy which exist at the first few grid levels in LES of rough wall turbulence. Analyzing the terms predicted from eddy-viscosity and similarity closures with DNS anisotropic datasets of buoyancy- and shear-driven turbulence, we uncover three important issues which should be addressed in the design of SGS closures for rough walls and we provide a priori tests for the SGS model. Firstly, we identify a strong spurious coupling between the anisotropic structure of the resolved velocity field and predicted SGS dynamics which can create a feedback loop to incorrectly enhance certain components of the predicted resolved velocity. Secondly, we find that eddy viscosity and similarity SGS models do not contain enough degrees of freedom to capture, at a sufficient level of accuracy, both RS-SGS energy flux and SGS-RS dynamics. Thirdly, to correctly capture pressure transport near a wall, closures must be made more flexible to accommodate proper partitioning between SGS stress divergence and SGS pressure gradient.
Protein docking prediction using predicted protein-protein interface.
Li, Bin; Kihara, Daisuke
2012-01-10
Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein docking prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
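A rough scikit-learn analog of the random forest workflow described, using synthetic stand-ins for the multibeam-derived predictors (the variable names are illustrative, not the study's covariates), with feature importances as a simple proxy for the VI-style feature selection:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 500
# Hypothetical multibeam-derived predictors (names illustrative only)
X = pd.DataFrame({
    "backscatter": rng.normal(-20, 5, n),
    "bathymetry": rng.normal(40, 10, n),
    "slope": rng.gamma(2.0, 1.0, n),
    "rugosity": rng.normal(1.05, 0.02, n),
})
# Synthetic 4-class hardness label loosely tied to backscatter and slope
y = pd.cut(X["backscatter"] + 2 * X["slope"] + rng.normal(0, 2, n),
           bins=4, labels=["soft", "mixed-soft", "mixed-hard", "hard"]).astype(str)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(rf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
rf.fit(X, y)
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.2f}")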
Zhao, Y; Mette, M F; Gowda, M; Longin, C F H; Reif, J C
2014-06-01
Based on data from field trials with a large collection of 135 elite winter wheat inbred lines and 1604 F1 hybrids derived from them, we compared the accuracy of prediction of marker-assisted selection and current genomic selection approaches for the model traits heading time and plant height in a cross-validation approach. For heading time, the high accuracy seen with marker-assisted selection severely dropped with genomic selection approaches RR-BLUP (ridge regression best linear unbiased prediction) and BayesCπ, whereas for plant height, accuracy was low with marker-assisted selection as well as RR-BLUP and BayesCπ. Differences in the linkage disequilibrium structure of the functional and single-nucleotide polymorphism markers relevant for the two traits were identified in a simulation study as a likely explanation for the different trends in accuracies of prediction. A new genomic selection approach, weighted best linear unbiased prediction (W-BLUP), designed to treat the effects of known functional markers more appropriately, proved to increase the accuracy of prediction for both traits and thus closes the gap between marker-assisted and genomic selection.
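RR-BLUP is closely related to ridge regression on marker genotypes. The sketch below, on simulated markers and phenotypes rather than the wheat data, shows how prediction accuracy (the correlation between predicted and observed values) is typically estimated by cross-validation:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
n_lines, n_markers = 135, 1000
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotype codes
beta = np.zeros(n_markers)
beta[rng.choice(n_markers, 20, replace=False)] = rng.normal(0, 1, 20)  # 20 causal markers
y = X @ beta + rng.normal(0, 2, n_lines)  # phenotype = genetic value + noise

# Accuracy of prediction = correlation between predicted and observed phenotype
accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=100.0).fit(X[train], y[train])
    accs.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])
print(f"mean prediction accuracy (r): {np.mean(accs):.2f}")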
Utsumi, Takanobu; Oka, Ryo; Endo, Takumi; Yano, Masashi; Kamijima, Shuichi; Kamiya, Naoto; Fujimura, Masaaki; Sekita, Nobuyuki; Mikami, Kazuo; Hiruta, Nobuyuki; Suzuki, Hiroyoshi
2015-11-01
The aim of this study is to validate and compare the predictive accuracy of two nomograms predicting the probability of Gleason sum upgrading between biopsy and radical prostatectomy pathology among representative patients with prostate cancer. We previously developed a nomogram, as did Chun et al. In this validation study, patients originated from two centers: Toho University Sakura Medical Center (n = 214) and Chibaken Saiseikai Narashino Hospital (n = 216). We assessed predictive accuracy using area under the curve values and constructed calibration plots to assess how each nomogram performed at each institution. Both nomograms showed high predictive accuracy in each institution, although the constructed calibration plots of the two nomograms underestimated the actual probability at Toho University Sakura Medical Center. Clinicians need to use institution-specific calibration plots to correctly understand how each nomogram behaves for their patients, even if the nomogram has good overall predictive accuracy. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Wu, Cai; Li, Liang
2018-05-15
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.
Fast and accurate predictions of covalent bonds in chemical space.
Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole
2016-05-07
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSiP, HSiAs, HGeN, HGeP, HGeAs); and (v) H2 (+) single bond with 1 electron.
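For context, the first-order estimate underlying such vertical alchemical predictions can be written via the Hellmann–Feynman theorem, assuming a linear interpolation of the external potential at fixed geometry (the symbols below are generic, not taken from the paper):

E_B(\mathbf{R}) \;\approx\; E_A(\mathbf{R}) + \left.\frac{\partial E}{\partial \lambda}\right|_{\lambda=0} = E_A(\mathbf{R}) + \int \rho_A(\mathbf{r})\,\left[v_B(\mathbf{r}) - v_A(\mathbf{r})\right]\mathrm{d}\mathbf{r}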
Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio
2018-04-06
To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection, with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using the regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the receiver operating characteristic curve, and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%, respectively). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) in receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics compared with those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting a clinical implication for correctly managing pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald
2016-01-01
The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
Haji, Darsim L; Ali, Mohamed M; Royse, Alistair; Canty, David J; Clarke, Sandy; Royse, Colin F
2014-10-01
Left atrial pressure and its surrogate, pulmonary capillary wedge pressure (PCWP), are important for determining diastolic function. The role of transthoracic echocardiography (TTE) in assessing diastolic function is well established in awake subjects. The objective was to assess the accuracy of predicting PCWP by TTE and transesophageal echocardiography (TEE) during coronary artery surgery. In 27 adult patients undergoing on-pump coronary artery surgery, simultaneous echocardiographic and hemodynamic measurements were obtained immediately before anesthesia (TTE), after anesthesia and mechanical ventilation (TTE and TEE), during conduit harvest (TEE), and after separation from cardiopulmonary bypass (TEE). Twenty patients had an ejection fraction (EF) of 0.5 or greater. With the exception of E/e' and S/D ratios, echocardiographic values changed over the echocardiographic studies. In patients with low EF, E velocity, deceleration time, pulmonary vein D, S/D, and E/e' ratios correlated well with PCWP before anesthesia. After induction of anesthesia using TTE or TEE, correlations were poor. In normal EF patients, correlations were poor for both TEE and TTE at all five stages. The sensitivity and specificity of echocardiographic values were not high enough to predict raised PCWP except for a fixed curve pattern of interatrial septum (area under the curve 0.89 for PCWP ≥ 17, and 0.98 for ≥ 18 mmHg) and S/D less than 1 (area under the curve 0.74 for PCWP ≥ 17, and 0.78 for ≥ 18 mmHg). Doppler assessment of PCWP was neither sensitive nor specific enough to be clinically useful in anesthetized patients with mechanical ventilation. The fixed curve pattern of the interatrial septum was the best predictor of raised PCWP.
Langbein, John O.
2015-01-01
The 24 August 2014 Mw6.0 South Napa, California earthquake produced significant offsets on 12 borehole strainmeters in the San Francisco Bay area. These strainmeters are located between 24 and 80 km from the source and the observed offsets ranged up to 400 parts-per-billion (ppb), which exceeds their nominal precision by a factor of 100. However, the observed offsets of tidally calibrated strains differ by up to 130 ppb from predictions based on a moment tensor derived from seismic data. The large misfit can be addressed through a combination of improved instrument calibration and better modeling of the strain field from the earthquake. Borehole strainmeters require in-situ calibration, which historically has been accomplished by comparing their measurements of Earth tides with the strain tides predicted by a model. Although borehole strainmeters accurately measure the deformation within the borehole, the long-wavelength strain signals from tides or other tectonic processes recorded in the borehole are modified by the presence of the borehole and the elastic properties of the grout and the instrument. Previous analyses of surface-mounted strainmeter data and their relationship with the predicted tides suggest that tidal models could be in error by 30%. The poor fit of the borehole strainmeter data from this earthquake can be improved by simultaneously varying the components of the model tides by up to 30% and making small adjustments to the point-source model of the earthquake, which reduces the RMS misfit from 130 ppb to 18 ppb. This suggests that relying on tidal models to calibrate borehole strainmeters significantly reduces their accuracy.
Carpenter, Andrea; Ng, Vicky Lee; Chapman, Karen; Ling, Simon C; Mouzaki, Marialena
2017-03-01
Malnutrition is common in children with end-stage liver disease (ESLD) and is associated with increased morbidity and mortality. The inability to accurately estimate energy needs of these patients may contribute to their poor nutrition status. In clinical practice, predictive equations are used to calculate resting energy expenditure (cREE). The objective of this study is to assess the accuracy of commonly used equations in pediatric patients with ESLD. Retrospective study performed at the Hospital for Sick Children. Clinical, laboratory, and indirect calorimetry data from children listed for liver transplant between February 2013 and December 2014 were reviewed. Calorimetry results were compared with cREE estimated using the Food and Agriculture Organization/World Health Organization/United Nations University (FAO/WHO/UNU), Schofield [weight], and Schofield [weight and height] equations. Forty-five patients were included in this study. The median age was 9 months, and the most common indication for transplantation was biliary atresia (64%). The Schofield [weight and height], FAO/WHO/UNU, and Schofield [weight] equations were compared with indirect calorimetry and found to have a mean (SD) difference of 48.8 (344.0), 59.3 (229.8), and 206.5 (502.6) kcal/d, respectively. The FAO/WHO/UNU, Schofield [weight], and Schofield [weight and height] equations introduced a mean error of 21%, 38%, and 76%, respectively. The FAO/WHO/UNU equation tended to underestimate, whereas the Schofield equations overestimated the REE. Commonly used predictive equations perform poorly in infants and young children with ESLD. Indirect calorimetry should be used when available to guide energy provision, particularly in children who are already malnourished.
Low T3 syndrome as a predictor of poor prognosis in chronic lymphocytic leukemia.
Gao, Rui; Chen, Rui-Ze; Xia, Yi; Liang, Jin-Hua; Wang, Li; Zhu, Hua-Yuan; Wu, Jia-Zhu; Fan, Lei; Li, Jian-Yong; Yang, Tao; Xu, Wei
2018-02-19
Low triiodothyronine (T3) state is associated with poor prognosis in critical acute and prolonged illness. However, the information on thyroid dysfunction and cancer is limited. The aim of our study was to evaluate the prognostic value of low T3 syndrome in chronic lymphocytic leukemia (CLL). Two hundred and fifty-eight patients with detailed thyroid hormone profile at CLL diagnosis were enrolled. Low T3 syndrome was defined by low free T3 (FT3) level accompanied by normal-to-low free tetraiodothyronine (FT4) and thyroid-stimulating hormone (TSH) levels. A propensity score-matched method was performed to balance the baseline characteristics. Multivariate Cox regression analyses screened the independent prognostic factors related to time-to-first-treatment (TTFT) and cancer-specific survival (CSS). Area under the curve (AUC) assessed the predictive accuracy of CLL-International Prognostic Index (IPI) together with low T3 syndrome. The results showed that 37 (14.34%) patients had low T3 syndrome, which was significantly associated with unfavorable TTFT and CSS in the propensity-matched cohort, and it was an independent prognostic indicator for both TTFT and CSS. Serum FT3 level was positively related to protein metabolism and anemia, and inversely related to inflammatory state. Patients with only low FT3 demonstrated better survival than those with synchronously low FT3 and FT4, while those with synchronously low FT3, FT4 and TSH had the worst clinical outcome. Low T3 syndrome together with CLL-IPI had larger AUCs compared with CLL-IPI alone in TTFT and CSS prediction. In conclusion, low T3 syndrome may be a good candidate for predicting prognosis in future clinical practice of CLL. © 2018 UICC.
Dissolved oxygen content prediction in crab culture using a hybrid intelligent method
Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang
2016-01-01
A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural network (RBFNN) data fusion method and a least squares support vector machine (LSSVM) optimized by an improved particle swarm optimization (IPSO) algorithm is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206
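The abstract describes the fusion-then-forecast pipeline only at a high level. The sketch below illustrates the general idea under stated stand-ins: scikit-learn's KernelRidge (RBF kernel) in place of the LSSVM, RandomizedSearchCV in place of the improved PSO parameter search, and a simple weighted average in place of the RBFNN fusion stage. The sensor data are simulated, so none of this reproduces the paper's implementation.

```python
# Minimal sketch of a fusion-then-forecast pipeline for dissolved oxygen (DO).
# Stand-ins: KernelRidge (RBF kernel) approximates an LSSVM regressor, and
# RandomizedSearchCV replaces the paper's improved PSO hyperparameter tuning.
import numpy as np
from scipy.stats import loguniform
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)

# Hypothetical multi-sensor readings: temperature, pH, and two redundant DO probes.
n = 500
X_raw = rng.normal(size=(n, 4))
do_next = 8 + 0.5 * X_raw[:, 0] - 0.3 * X_raw[:, 1] + 0.4 * X_raw[:, 2] \
          + 0.1 * rng.normal(size=n)

# Step 1 ("data fusion"): combine the redundant DO probes into one fused feature.
# A plain average stands in here for the RBFNN fusion stage described above.
fused_do = 0.5 * X_raw[:, 2] + 0.5 * X_raw[:, 3]
X = np.column_stack([X_raw[:, 0], X_raw[:, 1], fused_do])

# Step 2: tune and fit the kernel regressor on the fused training samples.
search = RandomizedSearchCV(
    KernelRidge(kernel="rbf"),
    {"alpha": loguniform(1e-3, 1e1), "gamma": loguniform(1e-2, 1e1)},
    n_iter=30, cv=5, random_state=0)
search.fit(X[:400], do_next[:400])
print("held-out R^2:", search.score(X[400:], do_next[400:]))
```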
Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De
2016-01-01
The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, namely neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation to assess the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96% (Type I error rate 12.22%; Type II error rate 7.50%), the prediction accuracy of the LASSO-CART model is 88.75% (Type I error rate 13.61%; Type II error rate 14.17%), and the prediction accuracy of the LASSO-SVM model is 89.79% (Type I error rate 10.00%; Type II error rate 15.83%).
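The two-stage design (LASSO for variable selection, then a separate classifier, evaluated with fivefold cross-validation) can be sketched as below. This is only an illustration: the financial ratios are replaced by synthetic features, an L1-penalized logistic regression stands in for LASSO on the binary GCD outcome, and only the SVM variant is shown.

```python
# Hedged sketch: LASSO-style variable selection followed by an SVM classifier,
# evaluated with fivefold cross-validation. Features and class balance are
# illustrative; the study's actual financial ratios are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 172 firms overall, roughly mirroring the 48 GCD vs 124 NGCD imbalance.
X, y = make_classification(n_samples=172, n_features=40, n_informative=8,
                           weights=[0.72, 0.28], random_state=0)

model = make_pipeline(
    StandardScaler(),
    # L1-penalized logistic regression plays the role of LASSO variable
    # selection for the binary going-concern-doubt outcome.
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)
scores = cross_val_score(model, X, y, cv=5)
print("fivefold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```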
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee Spangler; Ross Bricklemyer; David Brown
2012-03-15
There is a growing need for rapid, accurate, and inexpensive methods to measure and verify soil organic carbon (SOC) change for national greenhouse gas accounting and the development of a soil carbon trading market. Laboratory-based soil characterization typically requires significant soil processing, which is time and resource intensive. This severely limits application for large-region soil characterization. Thus, development of rapid and accurate methods for characterizing soils is needed to map soil properties for precision agriculture applications, improve regional and global soil carbon (C) stock and flux estimates, and efficiently map sub-surface metal contamination, among others. The greatest gains for efficient soil characterization will come from collecting soil data in situ, thus minimizing soil sample transportation, processing, and lab-based measurement costs. Visible and near-infrared diffuse reflectance spectroscopy (VisNIR) and laser-induced breakdown spectroscopy (LIBS) are two complementary, yet fundamentally different spectroscopic techniques that have the potential to meet this need. These sensors could be mounted on a soil penetrometer and deployed for rapid soil profile characterization at field and landscape scales. Details of sensor interaction, efficient data management, and appropriate statistical analysis techniques for model calibrations are first needed. In situ or on-the-go VisNIR spectroscopy has been proposed as a rapid and inexpensive tool for intensively mapping soil texture and SOC. While lab-based VisNIR has been established as a viable technique for estimating various soil properties, few experiments have compared the predictive accuracy of on-the-go and lab-based VisNIR. Eight north central Montana wheat fields were intensively interrogated using on-the-go and lab-based VisNIR. Lab-based spectral data consistently provided more accurate predictions than on-the-go data. However, neither in situ nor lab-based spectroscopy yielded even semi-quantitative SOC predictions. There was little SOC variability to explain across the eight fields, and on-the-go VisNIR was not able to capture the subtle SOC variability in these Montana soils. With more variation in soil clay content compared to SOC, both lab and on-the-go VisNIR showed better explanatory power. There are several potential explanations for poor on-the-go predictive accuracy: soil heterogeneity, field moisture, inconsistent sample presentation, and a difference between the spatial support of on-the-go measurements and soil samples collected for laboratory analyses. Though the current configuration of a commercially available on-the-go VisNIR system allows for rapid field scanning, on-the-go soil processing (i.e., drying, crushing, and sieving) could improve soil carbon predictions. LIBS is an emerging elemental analysis technology with the potential to provide rapid, accurate, and precise analysis of soil constituents, such as carbon, in situ across landscapes. The research team evaluated the accuracy of LIBS for measuring soil profile carbon in field-moist, intact soil cores, simulating conditions that might be encountered by a probe-mounted LIBS instrument measuring soil profile carbon in situ.
Over the course of three experiments, more than 120 intact soil cores from eight north central Montana wheat fields and the Washington State University (WSU) Cook Agronomy Farm near Pullman, WA were interrogated with LIBS for rapid total carbon (TC), inorganic carbon (IC), and SOC determination. Partial least squares regression models were derived and independently validated at field and regional scales. Researchers obtained the best LIBS validation predictions for IC, followed by TC and SOC. Laser-induced breakdown spectroscopy is fundamentally an elemental analysis technique, yet LIBS PLS2 models appeared to discriminate IC from TC. Regression coefficients from initial models suggested a reliance upon stoichiometric relationships between carbon (247.8 nm) and other elements related to total and inorganic carbon in the soil matrix [Ca (210.2 nm, 211.3 nm, and 220.9 nm), Mg (279.55-280.4 nm, 285.26 nm), and Si (251.6 nm, 288.1 nm)]. Expanding the LIBS spectral range to capture emissions from a broader range of elements related to soil organic matter was explored using two spectrometer systems to improve SOC predictions. Increasing the spectral range of LIBS to the full 200-800 nm yielded modest gains in prediction accuracy for IC, but no gains for predicting TC or SOC. Poor SOC predictions are likely a function of (1) the lack of a consistent/definable molecular composition of SOC, (2) relatively little variation in SOC across field sites, and (3) inorganic carbon constituting the primary form of soil carbon, particularly for Montana soils.
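As an illustration of the kind of partial least squares calibration described above, the sketch below regresses synthetic spectra onto a synthetic carbon value and reports a validation R^2; the wavelengths, loadings, and data are invented and do not reflect the study's spectra.

```python
# Illustrative partial least squares (PLS) calibration relating spectra to a
# soil carbon value; all data here are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 120, 300
spectra = rng.normal(size=(n_samples, n_wavelengths))
# Assume total carbon loads on a few emission-line regions of the spectrum.
total_c = 0.8 * spectra[:, 50] + 0.5 * spectra[:, 120] + 0.2 * rng.normal(size=n_samples)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, total_c,
                                              test_size=0.3, random_state=1)
pls = PLSRegression(n_components=10)   # number of latent variables is illustrative
pls.fit(X_cal, y_cal)
print("independent validation R^2:", pls.score(X_val, y_val))
```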
Accuracy of taxonomy prediction for 16S rRNA and fungal ITS sequences
2018-01-01
Prediction of taxonomy for marker gene sequences such as 16S ribosomal RNA (rRNA) is a fundamental task in microbiology. Most experimentally observed sequences are diverged from reference sequences of authoritatively named organisms, creating a challenge for prediction methods. I assessed the accuracy of several algorithms using cross-validation by identity, a new benchmark strategy which explicitly models the variation in distances between query sequences and the closest entry in a reference database. When the accuracy of genus predictions was averaged over a representative range of identities with the reference database (100%, 99%, 97%, 95% and 90%), all tested methods had ≤50% accuracy on the currently-popular V4 region of 16S rRNA. Accuracy was found to fall rapidly with identity; for example, better methods were found to have V4 genus prediction accuracy of ∼100% at 100% identity but ∼50% at 97% identity. The relationship between identity and taxonomy was quantified as the probability that a rank is the lowest shared by a pair of sequences with a given pair-wise identity. With the V4 region, 95% identity was found to be a twilight zone where taxonomy is highly ambiguous because the probabilities that the lowest shared rank between pairs of sequences is genus, family, order or class are approximately equal. PMID:29682424
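A schematic of the cross-validation-by-identity idea is sketched below: for each identity level, reference sequences more similar to the query than that level are withheld before prediction, so accuracy can be reported as a function of query-to-reference identity. The `pairwise_identity` and `predict_genus` callables are hypothetical placeholders, not the tools used in the paper.

```python
# Schematic of "cross-validation by identity". Identity levels are expressed as
# fractions (1.00 = 100%). The two callables are hypothetical placeholders.
import numpy as np

def benchmark_by_identity(queries, true_genus, refs, ref_genus,
                          pairwise_identity, predict_genus,
                          levels=(1.00, 0.99, 0.97, 0.95, 0.90)):
    acc = {}
    for level in levels:
        correct = 0
        for q, truth in zip(queries, true_genus):
            ident = np.array([pairwise_identity(q, r) for r in refs])
            keep = ident <= level   # withhold references above the identity level
            if not keep.any():
                continue
            pred = predict_genus(q,
                                 [r for r, k in zip(refs, keep) if k],
                                 [g for g, k in zip(ref_genus, keep) if k])
            correct += (pred == truth)
        acc[level] = correct / len(queries)
    return acc
```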
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality.
Satellite derived bathymetry: mapping the Irish coastline
NASA Astrophysics Data System (ADS)
Monteys, X.; Cahalane, C.; Harris, P.; Hanafin, J.
2017-12-01
Ireland has a varied coastline in excess of 3000 km in length, largely characterized by extended shallow environments. The coastal shallow water zone can be a challenging and costly environment in which to acquire bathymetry and other oceanographic data using traditional survey methods or airborne LiDAR techniques, as demonstrated in the Irish INFOMAR program. Thus, large coastal areas in Ireland, and much of the coastal zone worldwide, remain unmapped using modern techniques and are poorly understood. Earth Observation (EO) missions are currently being used to derive timely, cost effective, and quality controlled information for mapping and monitoring coastal environments. Different wavelengths of solar light penetrate the water column to different depths and are routinely sensed by EO satellites. A large selection of multispectral (MS) imagery from many satellite platforms, as well as from small aircraft and drones, was examined. A number of bays representing very different coastal environments were explored in turn. The project's workflow builds a catalogue of satellite and field bathymetric data to assess the suitability of imagery captured at a range of spatial, spectral and temporal resolutions. Turbidity indices are derived from the multispectral information. Finally, a number of spatial regression models using water-leaving radiance parameters and field calibration data are examined. Our assessment reveals that spatial regression algorithms have the potential to significantly improve the accuracy of the predictions up to 10 m water depth (WD) and offer a better handle on the error and uncertainty budget. The four spatial models investigated show better adjustments than the basic non-spatial model. Accuracy of the predictions is better than 10% of WD at 95% confidence. Future work will focus on improving the accuracy of the predictions by incorporating an analytical model in conjunction with improved empirical methods. The recently launched ESA Sentinel 2 will become the primary focus of study. Satellite bathymetry and coastal mapping products, and remarkably their repeatability over time, can offer solutions to important coastal zone management issues and address key challenges in the critical line between shoreline changes and human activity, particularly in the light of future climate change scenarios.
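For context, the sketch below calibrates a simple empirical band-ratio bathymetry model of the kind often used as the non-spatial baseline in satellite-derived bathymetry; the spatial regression models described above go further by letting the relationship vary over space. The reflectances and field depths are synthetic.

```python
# Hedged sketch of a non-spatial band-ratio bathymetry calibration: depth is
# regressed on the log-ratio of blue and green water-leaving reflectance.
import numpy as np

rng = np.random.default_rng(2)
n = 200
blue = rng.uniform(0.02, 0.10, n)         # water-leaving reflectance, blue band
green = blue * rng.uniform(0.8, 1.2, n)   # water-leaving reflectance, green band
ratio = np.log(1000 * blue) / np.log(1000 * green)
depth = 12.0 * ratio - 10.0 + rng.normal(0, 0.3, n)   # synthetic field depths (m)

# Calibrate the two coefficients of the log-ratio model against the field depths.
A = np.column_stack([ratio, np.ones(n)])
m1, m0 = np.linalg.lstsq(A, depth, rcond=None)[0]
pred = m1 * ratio + m0
print("RMSE (m):", np.sqrt(np.mean((pred - depth) ** 2)))
```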
Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A
2013-01-01
A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. The aims of this study were to validate this model, to update it by adding allergic rhinitis, atopic dermatitis, and sIgE to peanut components Ara h 1, 2, 3, and 8 as candidate predictors, and to develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with an area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the (updated) models was similarly analyzed. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration (P < .001). In the updating process, age, history, and additional candidate predictors did not significantly increase discrimination, being 94%, and leaving only 4 predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When building a model with sIgE to peanut components, Ara h 2 was the only predictor, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, containing 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
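The two validation steps described above, discrimination and calibration, can be sketched with simulated predicted probabilities and outcomes: the area under the ROC curve measures discrimination, and a Hosmer-Lemeshow-style statistic over risk deciles measures calibration. This is a generic illustration, not the study's analysis code.

```python
# Discrimination (AUC) and a Hosmer-Lemeshow-style calibration check, using
# simulated predicted probabilities and outcomes.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
p_pred = rng.uniform(0.01, 0.99, 100)   # model-predicted probability of peanut allergy
y_obs = rng.binomial(1, p_pred)         # observed outcomes (simulated as well calibrated)

print("discrimination (AUC):", roc_auc_score(y_obs, p_pred))

# Hosmer-Lemeshow statistic over deciles of predicted risk.
edges = np.quantile(p_pred, np.linspace(0.1, 0.9, 9))
group = np.digitize(p_pred, edges)      # decile group 0..9 for each subject
hl = 0.0
for g in range(10):
    mask = group == g
    n_g = mask.sum()
    if n_g == 0:
        continue
    obs, exp = y_obs[mask].sum(), p_pred[mask].sum()
    hl += (obs - exp) ** 2 / (exp * (1 - exp / n_g))
print("Hosmer-Lemeshow p-value:", 1 - chi2.cdf(hl, df=8))
```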
Revealing how network structure affects accuracy of link prediction
NASA Astrophysics Data System (ADS)
Yang, Jin-Xuan; Zhang, Xiao-Dong
2017-08-01
Link prediction plays an important role in network reconstruction and network evolution. The network structure affects the accuracy of link prediction, which is an interesting problem. In this paper we use common neighbors and the Gini coefficient to reveal the relation between them, which can provide a good reference for the choice of a suitable link prediction algorithm according to the network structure. Moreover, the statistical analysis reveals correlation between the common neighbors index, Gini coefficient index and other indices to describe the network structure, such as Laplacian eigenvalues, clustering coefficient, degree heterogeneity, and assortativity of network. Furthermore, a new method to predict missing links is proposed. The experimental results show that the proposed algorithm yields better prediction accuracy and robustness to the network structure than existing currently used methods for a variety of real-world networks.
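A minimal version of common-neighbors link prediction and the usual evaluation protocol (hide a fraction of edges, score candidate pairs, and compute an AUC by comparing hidden edges with random non-edges) is sketched below using networkx on a synthetic scale-free graph; the paper's specific networks and its Gini-coefficient analysis are not reproduced.

```python
# Common-neighbors link prediction with a sampled-AUC evaluation.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(200, 3, seed=0)   # synthetic scale-free network

# Hide 10% of edges as the probe (test) set.
edges = list(G.edges())
random.shuffle(edges)
probe = edges[: len(edges) // 10]
probe_set = {frozenset(e) for e in probe}
train = G.copy()
train.remove_edges_from(probe)

def cn_score(g, u, v):
    # Common-neighbors similarity index for a candidate pair (u, v).
    return len(list(nx.common_neighbors(g, u, v)))

# Non-edges of the training graph serve as negatives (probe edges excluded).
negatives = [e for e in nx.non_edges(train) if frozenset(e) not in probe_set]

# AUC by sampling: how often does a hidden edge outscore a random non-edge?
hits, trials = 0.0, 2000
for _ in range(trials):
    s_pos = cn_score(train, *random.choice(probe))
    s_neg = cn_score(train, *random.choice(negatives))
    hits += 1.0 if s_pos > s_neg else 0.5 if s_pos == s_neg else 0.0
print("common-neighbors AUC:", hits / trials)
```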
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhu, Roshan S., E-mail: roshansprabhu@gmail.com; Winship Cancer Institute, Emory University, Atlanta, Georgia; Magliocca, Kelly R.
2014-01-01
Purpose: Nodal extracapsular extension (ECE) in patients with head-and-neck cancer increases the loco-regional failure risk and is an indication for adjuvant chemoradiation therapy (CRT). To reduce the risk of requiring trimodality therapy, patients with head-and-neck cancer who are surgical candidates are often treated with definitive CRT when preoperative computed tomographic imaging suggests radiographic ECE. The purpose of this study was to assess the accuracy of preoperative CT imaging for predicting pathologic nodal ECE (pECE). Methods and Materials: The study population consisted of 432 consecutive patients with oral cavity or locally advanced/nonfunctional laryngeal cancer who underwent preoperative CT imaging before initial surgical resection and neck dissection. Specimens with pECE had the extent of ECE graded on a scale from 1 to 4. Results: Radiographic ECE was documented in 46 patients (10.6%), and pECE was observed in 87 (20.1%). Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 43.7%, 97.7%, 82.6%, and 87.3%, respectively. The sensitivity of radiographic ECE increased from 18.8% for grade 1 to 2 ECE, to 52.9% for grade 3, and 72.2% for grade 4. The radiographic ECE criterion of adjacent structure invasion was a better predictor of pECE than irregular borders/fat stranding. Conclusions: Radiographic ECE has poor sensitivity, but excellent specificity, for pECE in patients who undergo initial surgical resection. PPV and NPV are reasonable for clinical decision making. The performance of preoperative CT imaging increased as pECE grade increased. Patients with resectable head-and-neck cancer with radiographic ECE based on adjacent structure invasion are at high risk for high-grade pECE requiring adjuvant CRT when treated with initial surgery; definitive CRT as an alternative should be considered where appropriate.
ICU scoring systems allow prediction of patient outcomes and comparison of ICU performance.
Becker, R B; Zimmerman, J E
1996-07-01
Too much time and effort are wasted in attempts to pass final judgment on whether systems for ICU prognostication are "good or bad" and whether they "do or do not" provide a simple answer to the complex and often unpredictable question of individual mortality in the ICU. A substantial amount of data supports the usefulness of general ICU prognostic systems in comparing ICU performance with respect to a wide variety of endpoints, including ICU and hospital mortality, duration of stay, and efficiency of resource use. Work in progress is analyzing both general resource use and specific therapeutic interventions. It also is time to fully acknowledge that statistics never can predict whether a patient will die with 100% accuracy. There always will be exceptions to the rule, and physicians frequently will have information that is not included in prognostic models. In addition, the values of both physicians and patients frequently lead to differences in how a probability is interpreted; for some, a 95% probability estimate means that death is near and, for others, this estimate represents a tangible 5% chance for survival. This means that physicians must learn how to integrate such estimates into their medical decisions. In doing so, it is our hope that prognostic systems are not viewed as oversimplifying or automating clinical decisions. Rather, such systems provide objective data on which physicians may ground a spectrum of decisions regarding either escalation or withdrawal of therapy in critically ill patients. These systems do not dehumanize our decision-making process but, rather, help eliminate physician reliance on emotional, heuristic, poorly calibrated, or overly pessimistic subjective estimates. No decision regarding patient care can be considered best if the facts on which it is based are imprecise or biased. Future research will improve the accuracy of individual patient predictions but, even with the highest degree of precision, such predictions are useful only in support of, and not as a substitute for, good clinical judgment.
Analysis of energy-based algorithms for RNA secondary structure prediction.
Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H
2012-02-01
RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.
Grogan, Eric L; Deppen, Stephen A; Ballman, Karla V; Andrade, Gabriela M; Verdial, Francys C; Aldrich, Melinda C; Chen, Chiu L; Decker, Paul A; Harpole, David H; Cerfolio, Robert J; Keenan, Robert J; Jones, David R; D'Amico, Thomas A; Shrager, Joseph B; Meyers, Bryan F; Putnam, Joe B
2014-04-01
Fluorodeoxyglucose-positron emission tomography (FDG-PET) is recommended for diagnosis and staging of non-small cell lung cancer (NSCLC). Meta-analyses of FDG-PET diagnostic accuracy demonstrated sensitivity of 96% and specificity of 78% but were performed in select centers, introducing potential bias. This study evaluates the accuracy of FDG-PET to diagnose NSCLC and examines differences across enrolling sites in the national American College of Surgeons Oncology Group (ACOSOG) Z4031 trial. Between 2004 and 2006, 959 eligible patients with clinical stage I (cT1-2 N0 M0) known or suspected NSCLC were enrolled in the Z4031 trial; a baseline FDG-PET was available for 682. Final diagnosis was determined by pathologic examination. FDG-PET avidity was categorized as avid or not avid by radiologist description or reported maximum standard uptake value. FDG-PET diagnostic accuracy was calculated for the entire cohort. Accuracy differences based on preoperative size and by enrolling site were examined. Preoperative FDG-PET results were available for 682 participants enrolled at 51 sites in 39 cities. Lung cancer prevalence was 83%. FDG-PET sensitivity was 82% (95% confidence interval, 79% to 85%) and specificity was 31% (95% confidence interval, 23% to 40%). Positive and negative predictive values were 85% and 26%, respectively. Accuracy improved with lesion size. Of 80 false-positive scans, 69% were granulomas. False-negative scans occurred in 101 patients, with adenocarcinoma being the most frequent (64%), and 11 were 10 mm or less. The sensitivity varied from 68% to 91% (p=0.03), and the specificity ranged from 15% to 44% (p=0.72), across cities with more than 25 participants. In a national surgical population with clinical stage I NSCLC, FDG-PET to diagnose lung cancer performed poorly compared with published studies. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
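The accuracy measures quoted above follow directly from a 2x2 confusion table. The sketch below back-calculates approximate counts from the abstract's figures (682 patients, 83% prevalence, 80 false positives, 101 false negatives) and recovers the reported sensitivity, specificity, PPV, and NPV; the exact trial tabulation may differ slightly.

```python
# Diagnostic accuracy measures from a 2x2 confusion table.
def diagnostic_metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "prevalence": (tp + fn) / total,
    }

# Counts back-calculated from the abstract (illustrative, not the trial's table):
# ~566 cancers (83% of 682) with 101 false negatives -> 465 true positives;
# ~116 benign lesions with 80 false positives -> 36 true negatives.
print(diagnostic_metrics(tp=465, fp=80, tn=36, fn=101))
```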
DOT National Transportation Integrated Search
2015-07-01
Implementing the recommendations of this study is expected to significantly improve the accuracy of camber measurements and predictions and to ultimately help reduce construction delays, improve bridge serviceability, and decrease costs.
Genomic selection for crossbred performance accounting for breed-specific effects.
Lopes, Marcos S; Bovenhuis, Henk; Hidalgo, André M; van Arendonk, Johan A M; Knol, Egbert F; Bastiaansen, John W M
2017-06-26
Breed-specific effects are observed when the same allele of a given genetic marker has a different effect depending on its breed origin, which results in different allele substitution effects across breeds. In such a case, single-breed breeding values may not be the most accurate predictors of crossbred performance. Our aim was to estimate the contribution of alleles from each parental breed to the genetic variance of traits that are measured in crossbred offspring, and to compare the prediction accuracies of estimated direct genomic values (DGV) from a traditional genomic selection model (GS) that are trained on purebred or crossbred data, with accuracies of DGV from a model that accounts for breed-specific effects (BS), trained on purebred or crossbred data. The final dataset was composed of 924 Large White, 924 Landrace and 924 two-way cross (F1) genotyped and phenotyped animals. The traits evaluated were litter size (LS) and gestation length (GL) in pigs. The genetic correlation between purebred and crossbred performance was higher than 0.88 for both LS and GL. For both traits, the additive genetic variance was larger for alleles inherited from the Large White breed compared to alleles inherited from the Landrace breed (0.74 and 0.56 for LS, and 0.42 and 0.40 for GL, respectively). The highest prediction accuracies of crossbred performance were obtained when training was done on crossbred data. For LS, prediction accuracies were the same for GS and BS DGV (0.23), while for GL, prediction accuracy for BS DGV was similar to the accuracy of GS DGV (0.53 and 0.52, respectively). In this study, training on crossbred data resulted in higher prediction accuracy than training on purebred data and evidence of breed-specific effects for LS and GL was demonstrated. However, when training was done on crossbred data, both GS and BS models resulted in similar prediction accuracies. In future studies, traits with a lower genetic correlation between purebred and crossbred performance should be included to further assess the value of the BS model in genomic predictions.
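A hedged sketch of genomic prediction in the spirit of the GS model above is given below: ridge regression on SNP genotypes (equivalent to GBLUP/RR-BLUP under standard assumptions), trained on part of the data and scored by the correlation between predicted and observed phenotypes, which is how prediction accuracy is usually defined in this setting. Genotypes, marker effects, and phenotypes are simulated, and the breed-specific (BS) partitioning of allele origins is not shown.

```python
# Genomic prediction sketch: ridge regression on SNP genotypes (RR-BLUP style),
# with prediction accuracy reported as the correlation between predicted and
# observed phenotypes in a validation set. All data are simulated.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_animals, n_snps = 924, 2000
geno = rng.binomial(2, 0.3, size=(n_animals, n_snps)).astype(float)   # 0/1/2 allele counts
true_effects = rng.normal(0, 0.05, n_snps) * (rng.random(n_snps) < 0.05)
litter_size = geno @ true_effects + rng.normal(0, 1.0, n_animals)

train, test = slice(0, 700), slice(700, None)
gs = Ridge(alpha=float(n_snps))   # heavy shrinkage of marker effects, as in RR-BLUP
gs.fit(geno[train], litter_size[train])
dgv = gs.predict(geno[test])      # direct genomic values for validation animals
accuracy = np.corrcoef(dgv, litter_size[test])[0, 1]
print("prediction accuracy (r):", round(accuracy, 2))
```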
A new method of power load prediction in electrification railway
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-04-01
Aiming at the characteristics of electrified railway loads, this paper studies the problem of load prediction for electrified railways. After the data are preprocessed, similar days are separated on the basis of their statistical characteristics, and the accuracy of different prediction methods is analyzed. The paper offers a new approach to load prediction and a new way of judging prediction accuracy for power systems.
Brand, Serge; Beck, Johannes; Hatzinger, Martin; Savic, Mirjana; Holsboer-Trachsler, Edith
2011-01-01
Amongst the variety of disorders affecting sleep, restless legs syndrome (RLS) merits particular attention. Little is known about long-term outcomes for sleep or psychological functioning following a diagnosis of RLS. The aim of the present study was thus to evaluate sleep and psychological functioning at a 3-year follow-up, based on polysomnographic measurements. Thirty-eight patients (18 female, 20 male; mean age: 56.06, SD = 12.07) with RLS and sleep electroencephalographic recordings were followed up 33 months later. Participants completed a series of self-rating questionnaires related to sleep and psychological functioning. Additionally, they completed a sleep log for 7 consecutive days. Age, male gender, increased light sleep (S1, S2) and sleep onset latency, along with low sleep efficiency, predicted psychological functioning and sleep 33 months later. Specifically, sleep fragmentation predicted poor psychological functioning, and both sleep fragmentation and light sleep predicted poor sleep. In patients with RLS, irrespective of medication or duration of treatment, poor objective sleep patterns at diagnosis predicted both poor psychological functioning and poor sleep about 3 years after diagnosis. The pattern of results suggests the need for more thorough medical and psychotherapeutic treatment and monitoring of patients with RLS. © 2010 S. Karger AG, Basel.
Blood DNA methylation biomarkers predict clinical reactivity in food-sensitized infants.
Martino, David; Dang, Thanh; Sexton-Oates, Alexandra; Prescott, Susan; Tang, Mimi L K; Dharmage, Shyamali; Gurrin, Lyle; Koplin, Jennifer; Ponsonby, Anne-Louise; Allen, Katrina J; Saffery, Richard
2015-05-01
The diagnosis of food allergy (FA) can be challenging because approximately half of food-sensitized patients are asymptomatic. Current diagnostic tests are excellent markers of sensitization but poor predictors of clinical reactivity. Thus oral food challenges (OFCs) are required to determine a patient's risk of reactivity. We sought to discover genomic biomarkers of clinical FA with utility for predicting food challenge outcomes. Genome-wide DNA methylation (DNAm) profiling was performed on blood mononuclear cells from volunteers who had undergone objective OFCs, concurrent skin prick tests, and specific IgE tests. Fifty-eight food-sensitized patients (aged 11-15 months) were assessed, half of whom were clinically reactive. Thirteen nonallergic control subjects were also assessed. Reproducibility was assessed in an additional 48 samples by using methylation data from an independent population of patients with clinical FA. Using a supervised learning approach, we discovered a DNAm signature of 96 CpG sites that predicts clinical outcomes. Diagnostic scores were derived from these 96 methylation sites, and cutoffs were determined in a sensitivity analysis. Methylation biomarkers outperformed allergen-specific IgE and skin prick tests for predicting OFC outcomes. FA status was correctly predicted in the replication cohort with an accuracy of 79.2%. DNAm biomarkers with clinical utility for predicting food challenge outcomes are readily detectable in blood. The development of this technology in detailed follow-up studies will yield highly innovative diagnostic assays. Copyright © 2015 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
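The supervised-learning step described above can be sketched generically as univariate CpG filtering followed by a classifier, with performance checked on held-out samples; the actual 96-site signature and scoring cutoffs are not reproduced, and the methylation values below are simulated.

```python
# Generic sketch: select a fixed-size CpG panel, fit a classifier, and score
# held-out samples. Data are simulated beta values, not the study's cohort.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n, n_cpg = 58, 5000
betas = rng.uniform(0, 1, size=(n, n_cpg))   # methylation beta values (simulated)
reactive = rng.binomial(1, 0.5, n)           # challenge-proven reactivity (simulated)
betas[reactive == 1, :50] += 0.15            # plant a weak signal in 50 CpG sites

X_tr, X_te, y_tr, y_te = train_test_split(betas, reactive,
                                          random_state=0, stratify=reactive)
clf = make_pipeline(SelectKBest(f_classif, k=96),        # pick a 96-site panel
                    LogisticRegression(max_iter=1000))   # derive diagnostic scores
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```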
Prediction of daily sea surface temperature using efficient neural networks
NASA Astrophysics Data System (ADS)
Patil, Kalpesh; Deo, Makaranad Chintamani
2017-04-01
Short-term prediction of sea surface temperature (SST) is commonly achieved through numerical models. Numerical approaches are more suitable for use over a large spatial domain than at a specific site because of the difficulties involved in resolving various physical sub-processes at local levels. Therefore, for a given location, a data-driven approach such as neural networks may provide a better alternative. The application of neural networks, however, requires extensive experimentation with network architecture, training methods, and the formation of appropriate input-output pairs. A network trained in this manner can provide more attractive results if the advances in network architecture are additionally considered. With this in mind, we propose the use of wavelet neural networks (WNNs) for prediction of daily SST values. The prediction of daily SST values was carried out using WNN over 5 days into the future at six different locations in the Indian Ocean. First, the accuracy of site-specific SST values predicted by a numerical model, ROMS, was assessed against in situ records; the results pointed to the need for alternative approaches. Traditional networks were then tried and, after noting their poor performance, WNN was used. This approach produced attractive forecasts when judged through various error statistics. When all locations were viewed together, the mean absolute error was within 0.18 to 0.32 °C for a 5-day-ahead forecast. The WNN approach was thus found to add value to the numerical method of SST prediction when location-specific information is desired.
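The data-driven, site-specific setup described above amounts to regressing SST several days ahead on a window of recent daily values. The sketch below uses an ordinary MLP regressor as a stand-in for the wavelet neural network and a synthetic seasonal SST series, reporting the mean absolute error on a held-out period.

```python
# Lagged-input, 5-day-ahead SST forecast. An MLP stands in for the WNN; the SST
# series is synthetic (seasonal cycle plus noise).
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
t = np.arange(2000)
sst = 28 + 1.5 * np.sin(2 * np.pi * t / 365) + 0.3 * rng.normal(size=t.size)

lags, horizon = 7, 5
n_samples = len(sst) - lags - horizon + 1
X = np.column_stack([sst[i: i + n_samples] for i in range(lags)])  # last 7 days
y = sst[lags + horizon - 1:]                                       # SST 5 days ahead

split = 1500
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("MAE (deg C):", mean_absolute_error(y[split:], model.predict(X[split:])))
```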
Longitudinal Stability and Predictors of Poor Oral Comprehenders and Poor Decoders
Elwér, Åsa; Keenan, Janice M.; Olson, Richard K.; Byrne, Brian; Samuelsson, Stefan
2012-01-01
Two groups of 4th grade children were selected from a population sample (N= 926) to either be Poor Oral Comprehenders (poor oral comprehension but normal word decoding), or Poor Decoders (poor decoding but normal oral comprehension). By examining both groups in the same study with varied cognitive and literacy predictors, and examining them both retrospectively and prospectively, we could assess how distinctive and stable the predictors of each deficit are. Predictors were assessed retrospectively at preschool, at the end of kindergarten, 1st, and 2nd grades. Group effects were significant at all test occasions, including those for preschool vocabulary (worse in poor oral comprehenders) and rapid naming (RAN) (worse in poor decoders). Preschool RAN and Vocabulary prospectively predicted grade 4 group membership (77–79% correct classification) within the selected samples. Reselection in preschool of at-risk poor decoder and poor oral comprehender subgroups based on these variables led to significant but relatively weak prediction of subtype membership at grade 4. Implications of the predictive stability of our results for identification and intervention of these important subgroups are discussed. PMID:23528975
Improving Accuracy of Sleep Self-Reports through Correspondence Training
ERIC Educational Resources Information Center
St. Peter, Claire C.; Montgomery-Downs, Hawley E.; Massullo, Joel P.
2012-01-01
Sleep insufficiency is a major public health concern, yet the accuracy of self-reported sleep measures is often poor. Self-report may be useful when direct measurement of nonverbal behavior is impossible, infeasible, or undesirable, as it may be with sleep measurement. We used feedback and positive reinforcement within a small-n multiple-baseline…
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to the performance of both BIS methods; however, when comparing the mean absolute percentage error between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both the population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing fluid overload in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
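The agreement statistics referred to above (Bland-Altman limits of agreement and mean absolute percentage error) are straightforward to compute once paired estimates are available, as in the sketch below with synthetic TBW values; it is illustrative only and does not use the study's data.

```python
# Bland-Altman bias and limits of agreement, plus MAPE, for paired TBW estimates.
import numpy as np

rng = np.random.default_rng(5)
tbw_ref = rng.normal(40, 6, 150)                 # reference TBW (litres), e.g. dilution
tbw_pred = tbw_ref + rng.normal(0.5, 1.8, 150)   # prediction-equation estimate

diff = tbw_pred - tbw_ref
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
mape = np.mean(np.abs(diff) / tbw_ref) * 100

print("bias (L):", round(bias, 2))
print("95% limits of agreement (L):", tuple(round(x, 2) for x in loa))
print("MAPE (%):", round(mape, 2))
```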
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
Locketz, Garrett D; Li, Peter M M C; Fischbein, Nancy J; Holdsworth, Samantha J; Blevins, Nikolas H
2016-10-01
A method to optimize imaging of cholesteatoma by combining the strengths of available modalities will improve diagnostic accuracy and help to target treatment. To assess whether fusing Periodically Rotated Overlapping Parallel Lines With Enhanced Reconstruction (PROPELLER) diffusion-weighted magnetic resonance imaging (DW-MRI) with corresponding temporal bone computed tomography (CT) images could increase cholesteatoma diagnostic and localization accuracy across 6 distinct anatomical regions of the temporal bone. Case series and preliminary technology evaluation of adults with preoperative temporal bone CT and PROPELLER DW-MRI scans who underwent surgery for clinically suggested cholesteatoma at a tertiary academic hospital. When cholesteatoma was encountered surgically, the precise location was recorded in a diagram of the middle ear and mastoid. For each patient, the 3 image data sets (CT, PROPELLER DW-MRI, and CT-MRI fusion) were reviewed in random order for the presence or absence of cholesteatoma by an investigator blinded to operative findings. If cholesteatoma was deemed present on review of each imaging modality, the location of the lesion was mapped presumptively. Image analysis was then compared with surgical findings. Twelve adults (5 women and 7 men; median [range] age, 45.5 [19-77] years) were included. The use of CT-MRI fusion had greater diagnostic sensitivity (0.88 vs 0.75), positive predictive value (0.88 vs 0.86), and negative predictive value (0.75 vs 0.60) than PROPELLER DW-MRI alone. Image fusion also showed increased overall localization accuracy when stratified across 6 distinct anatomical regions of the temporal bone (localization sensitivity and specificity, 0.76 and 0.98 for CT-MRI fusion vs 0.58 and 0.98 for PROPELLER DW-MRI). For PROPELLER DW-MRI, there were 15 true-positive, 45 true-negative, 1 false-positive, and 11 false-negative results; overall accuracy was 0.83. For CT-MRI fusion, there were 20 true-positive, 45 true-negative, 1 false-positive, and 6 false-negative results; overall accuracy was 0.90. The poor anatomical spatial resolution of DW-MRI makes precise localization of cholesteatoma within the middle ear and mastoid a diagnostic challenge. This study suggests that the bony anatomic detail obtained via CT coupled with the excellent sensitivity and specificity of PROPELLER DW-MRI for cholesteatoma can improve both preoperative identification and localization of disease over DW-MRI alone.
Kim, Jong-Min; Kim, Jong-Min; Jeon, Byeong-Sam; Lee, Chang-Rack; Lim, Sung-Joon; Kim, Kyung-Ah; Bin, Seong-Il
2015-05-01
The aim of this study was to compare the magnetic resonance imaging (MRI) evaluation of transplanted meniscal allograft with second-look arthroscopy and evaluate the sensitivity, specificity, and accuracy of MRI for assessing graft status. From 1996 to 2012, among 290 knees that underwent meniscal allograft transplantation and received follow-up examination for more than 1 year, those knees that underwent second-look arthroscopy were reviewed. Patients with no postoperative MRI and patients with a time gap between postoperative MRI and second-look arthroscopy of more than 3 months were excluded. Anatomically, the meniscus was divided into 3 segments: anterior one-third, mid body, and posterior one-third. Each part of the meniscus was evaluated using both methods. Grade 3 MRI signal intensity was diagnosed as a meniscal tear radiologically. By use of second-look arthroscopy as the standard, the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of postoperative MRI were assessed in each segment of the grafts. Twenty knees were retrospectively enrolled. The specificity, PPV, and accuracy for the anterior one-third were lower than those for the mid body and posterior one-third (specificity of 35.3% v 91.7% and 90%, respectively; PPV of 21.4% v 87.5% and 90.9%, respectively; and accuracy of 45% v 90% and 95%, respectively). However, the sensitivity and NPV were similar among the anterior one-third, mid body, and posterior one-third (sensitivity of 100%, 87.5%, and 100%, respectively; and NPV of 100%, 91.7%, and 100%, respectively). There were no significant differences in the comparison between the diagnostic MRI values of lateral grafts and medial grafts. Of 5 cases that showed grade 3 signal at only the anterior one-third section, 60% had no clinical signs. There were no graft tears in any cases. The anterior one-third of grafts showed low specificity, PPV, and accuracy of postoperative MRI compared with the mid body and posterior one-third. MRI tended to grade the anterior one-third more poorly than second-look arthroscopy. These features should be considered when evaluating transplanted meniscal allografts on postoperative MRI. Level III, study of non-consecutive patients evaluating a diagnostic test with a gold standard. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Use of refractometry and colorimetry as field methods to rapidly assess antimalarial drug quality.
Green, Michael D; Nettey, Henry; Villalva Rojas, Ofelia; Pamanivong, Chansapha; Khounsaknalath, Lamphet; Grande Ortiz, Miguel; Newton, Paul N; Fernández, Facundo M; Vongsack, Latsamy; Manolin, Ot
2007-01-04
The proliferation of counterfeit and poor-quality drugs is a major public health problem; especially in developing countries lacking adequate resources to effectively monitor their prevalence. Simple and affordable field methods provide a practical means of rapidly monitoring drug quality in circumstances where more advanced techniques are not available. Therefore, we have evaluated refractometry, colorimetry and a technique combining both processes as simple and accurate field assays to rapidly test the quality of the commonly available antimalarial drugs; artesunate, chloroquine, quinine, and sulfadoxine. Method bias, sensitivity, specificity and accuracy relative to high-performance liquid chromatographic (HPLC) analysis of drugs collected in the Lao PDR were assessed for each technique. The HPLC method for each drug was evaluated in terms of assay variability and accuracy. The accuracy of the combined method ranged from 0.96 to 1.00 for artesunate tablets, chloroquine injectables, quinine capsules, and sulfadoxine tablets while the accuracy was 0.78 for enterically coated chloroquine tablets. These techniques provide a generally accurate, yet simple and affordable means to assess drug quality in resource-poor settings.
ERIC Educational Resources Information Center
Myers, Jamie S.; Grigsby, Jim; Teel, Cynthia S.; Kramer, Andrew M.
2009-01-01
The goals of this study were to evaluate the accuracy of nurses' predictions of rehabilitation potential in older adults admitted to inpatient rehabilitation facilities and to ascertain whether the addition of a measure of executive cognitive function would enhance predictive accuracy. Secondary analysis was performed on prospective data collected…
Psychopathy, IQ, and Violence in European American and African American County Jail Inmates
ERIC Educational Resources Information Center
Walsh, Zach; Swogger, Marc T.; Kosson, David S.
2004-01-01
The accuracy of the prediction of criminal violence may be improved by combining psychopathy with other variables that have been found to predict violence. Research has suggested that assessing intelligence (i.e., IQ) as well as psychopathy improves the accuracy of violence prediction. In the present study, the authors tested this hypothesis by…
Prediction Accuracy: The Role of Feedback in 6th Graders' Recall Predictions
ERIC Educational Resources Information Center
Al-Harthy, Ibrahim S.
2016-01-01
The current study focused on the role of feedback on students' prediction accuracy (calibration). This phenomenon has been widely studied, but questions remain about how best to improve it. In the current investigation, fifty-seven students from sixth grade were randomly assigned to control and experimental groups. Thirty pictures were chosen from…
Luo, Shanhong; Snider, Anthony G
2009-11-01
There has been a long-standing debate about whether having accurate self-perceptions or holding positive illusions of self is more adaptive. This debate has recently expanded to consider the role of accuracy and bias of partner perceptions in romantic relationships. In the present study, we hypothesized that because accuracy, positivity bias, and similarity bias are likely to serve distinct functions in relationships, they should all make independent contributions to the prediction of marital satisfaction. In a sample of 288 newlywed couples, we tested this hypothesis by simultaneously modeling the actor effects and partner effects of accuracy, positivity bias, and similarity bias in predicting husbands' and wives' satisfaction. Findings across several perceptual domains suggest that all three perceptual indices independently predicted the perceiver's satisfaction. Accuracy and similarity bias, but not positivity bias, made unique contributions to the target's satisfaction. No sex differences were found.
Investigation on the Accuracy of Superposition Predictions of Film Cooling Effectiveness
NASA Astrophysics Data System (ADS)
Meng, Tong; Zhu, Hui-ren; Liu, Cun-liang; Wei, Jian-sheng
2018-05-01
Film cooling effectiveness on flat plates with double rows of holes has been studied experimentally and numerically in this paper. This configuration is widely used to simulate multi-row film cooling on turbine vanes. Film cooling effectiveness of double rows of holes and of each single row was used to study the accuracy of superposition predictions. A stable infrared measurement technique was used to measure the surface temperature on the flat plate. The factors that affect film cooling effectiveness were analyzed, including hole shape, hole arrangement, row-to-row spacing and blowing ratio. Numerical simulations were performed to analyze the flow structure and film cooling mechanisms between the film cooling rows. Results show that the blowing ratio, within the range of 0.5 to 2, has a significant influence on the accuracy of superposition predictions. At low blowing ratios, results obtained by the superposition method agree well with the experimental data, while at high blowing ratios the accuracy of the superposition prediction decreases. Another significant factor is hole arrangement. For staggered arrangements, results obtained by superposition prediction are nearly the same as the experimental values. For in-line configurations, the superposition values of film cooling effectiveness are much higher than the experimental data. For different hole shapes, the accuracy of superposition predictions for converging-expanding holes is better than that for cylindrical holes and compound-angle holes. For the two hole-spacing configurations examined in this paper, predictions show good agreement with the experimental results.
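The superposition approach evaluated above is typically a Sellers-type combination of single-row effectiveness values; a sketch of that standard relation is given below, with notation assumed here rather than taken from the paper (η is the adiabatic effectiveness, T∞ the mainstream temperature, Taw the adiabatic wall temperature, and Tc the coolant temperature).

```latex
% Sellers-type superposition of film cooling effectiveness for N rows
% (a standard formulation; notation assumed, not taken from this paper):
%   \eta_i          : adiabatic effectiveness of row i acting alone
%   \eta_{\mathrm{sup}} : predicted combined effectiveness
\[
  \eta_{\mathrm{sup}} \;=\; 1 \;-\; \prod_{i=1}^{N}\bigl(1 - \eta_i\bigr),
  \qquad
  \eta \;=\; \frac{T_{\infty} - T_{aw}}{T_{\infty} - T_{c}} .
\]
```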
Ability of matrix models to explain the past and predict the future of plant populations.
McEachern, Kathryn; Crone, Elizabeth E.; Ellis, Martha M.; Morris, William F.; Stanley, Amanda; Bell, Timothy; Bierzychudek, Paulette; Ehrlen, Johan; Kaye, Thomas N.; Knight, Tiffany M.; Lesica, Peter; Oostermeijer, Gerard; Quintana-Ascencio, Pedro F.; Ticktin, Tamara; Valverde, Teresa; Williams, Jennifer I.; Doak, Daniel F.; Ganesan, Rengaian; Thorpe, Andrea S.; Menges, Eric S.
2013-01-01
Uncertainty associated with ecological forecasts has long been recognized, but forecast accuracy is rarely quantified. We evaluated how well data on 82 populations of 20 species of plants spanning 3 continents explained and predicted plant population dynamics. We parameterized stage-based matrix models with demographic data from individually marked plants and determined how well these models forecast population sizes observed at least 5 years into the future. Simple demographic models forecasted population dynamics poorly; only 40% of observed population sizes fell within our forecasts' 95% confidence limits. However, these models explained population dynamics during the years in which data were collected; observed changes in population size during the data-collection period were strongly positively correlated with population growth rate. Thus, these models are at least a sound way to quantify population status. Poor forecasts were not associated with the number of individual plants or years of data. We tested whether vital rates were density dependent and found both positive and negative density dependence. However, density dependence was not associated with forecast error. Forecast error was significantly associated with environmental differences between the data collection and forecast periods. To forecast population fates, more detailed models, such as those that project how environments are likely to change and how these changes will affect population dynamics, may be needed. Such detailed models are not always feasible. Thus, it may be wiser to make risk-averse decisions than to expect precise forecasts from models.
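For context, the stage-based matrix models evaluated above project a stage-structured abundance vector forward one census at a time and summarize growth with the dominant eigenvalue. The sketch below uses a made-up three-stage matrix, not one of the study's 82 parameterizations.

```python
# Minimal sketch of a stage-based matrix projection model (hypothetical
# 3-stage plant: seedling, juvenile, adult). Not one of the study's matrices.
import numpy as np

A = np.array([
    [0.05, 0.00, 4.50],   # fecundity into seedlings from adults
    [0.20, 0.45, 0.00],   # survival/growth into the juvenile stage
    [0.00, 0.30, 0.85],   # survival/growth into the adult stage
])

n = np.array([100.0, 40.0, 25.0])   # initial stage abundances
for _ in range(5):                   # project 5 years ahead
    n = A @ n
print("projected abundances:", np.round(n, 1))

eigvals = np.linalg.eigvals(A)
lam = max(eigvals.real)              # dominant eigenvalue ~ asymptotic growth rate
print("asymptotic growth rate lambda:", round(lam, 3))
```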
On the importance of geological data for hydraulic tomography analysis: Laboratory sandbox study
NASA Astrophysics Data System (ADS)
Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2016-11-01
This paper investigates the importance of geological data in Hydraulic Tomography (HT) through sandbox experiments. In particular, four groundwater models with homogeneous geological units constructed with borehole data of varying accuracy are jointly calibrated with multiple pumping test data of two different pumping and observation densities. The results are compared to those from a geostatistical inverse model. Model calibration and validation performances are quantitatively assessed using drawdown scatterplots. We find that both accurate and inaccurate geological models can be well calibrated, despite the estimated K values for the poor geological models being quite different from the actual values. Model validation results reveal that inaccurate geological models yield poor drawdown predictions, but using more calibration data improves their predictive capability. Moreover, model comparisons between a highly parameterized geostatistical model and layer-based geological models show that (1) as the number of pumping tests and monitoring locations is reduced, the performance gap between the approaches decreases, and (2) a simplified geological model with fewer layers is more reliable than one based on a wrong description of the stratigraphy. Finally, using a geological model as prior information in geostatistical inverse models results in the preservation of geological features, especially in areas where drawdown data are not available. Overall, our sandbox results emphasize the importance of incorporating geological data in HT surveys when data from pumping tests are sparse. These findings have important implications for field applications of HT where well distances are large.
Vathsangam, Harshvardhan; Emken, Adar; Schroeder, E. Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S.
2011-01-01
This paper describes an experimental study in estimating energy expenditure from treadmill walking using a single hip-mounted triaxial inertial sensor comprised of a triaxial accelerometer and a triaxial gyroscope. Typical physical activity characterization using accelerometer generated counts suffers from two drawbacks - imprecison (due to proprietary counts) and incompleteness (due to incomplete movement description). We address these problems in the context of steady state walking by directly estimating energy expenditure with data from a hip-mounted inertial sensor. We represent the cyclic nature of walking with a Fourier transform of sensor streams and show how one can map this representation to energy expenditure (as measured by V O2 consumption, mL/min) using three regression techniques - Least Squares Regression (LSR), Bayesian Linear Regression (BLR) and Gaussian Process Regression (GPR). We perform a comparative analysis of the accuracy of sensor streams in predicting energy expenditure (measured by RMS prediction accuracy). Triaxial information is more accurate than uniaxial information. LSR based approaches are prone to outlier sensitivity and overfitting. Gyroscopic information showed equivalent if not better prediction accuracy as compared to accelerometers. Combining accelerometer and gyroscopic information provided better accuracy than using either sensor alone. We also analyze the best algorithmic approach among linear and nonlinear methods as measured by RMS prediction accuracy and run time. Nonlinear regression methods showed better prediction accuracy but required an order of magnitude of run time. This paper emphasizes the role of probabilistic techniques in conjunction with joint modeling of triaxial accelerations and rotational rates to improve energy expenditure prediction for steady-state treadmill walking. PMID:21690001
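The pipeline summarized above, Fourier features from the cyclic inertial signals regressed onto measured VO2, might be sketched as follows; the window length, number of harmonics, and synthetic data are assumptions for illustration, not the authors' processing choices.

```python
# Sketch: Fourier-magnitude features from a 6-axis inertial window, regressed
# onto VO2 with Bayesian linear regression and Gaussian process regression.
# Synthetic data; window length and number of harmonics are assumptions.
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_windows, n_samples, n_axes, n_harm = 200, 256, 6, 8

raw = rng.standard_normal((n_windows, n_axes, n_samples))      # accel + gyro axes
spec = np.abs(np.fft.rfft(raw, axis=-1))[:, :, 1:n_harm + 1]   # low-order harmonics
X = spec.reshape(n_windows, -1)
vo2 = 300 + 40 * X[:, 0] + 25 * X[:, n_harm] + rng.normal(0, 20, n_windows)  # mL/min

X_tr, X_te, y_tr, y_te = train_test_split(X, vo2, random_state=0)
for name, model in [("BLR", BayesianRidge()),
                    ("GPR", GaussianProcessRegressor(alpha=1e-2, normalize_y=True))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name} RMSE: {rmse:.1f} mL/min")
```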
High accuracy operon prediction method based on STRING database scores.
Taboada, Blanca; Verde, Cristina; Merino, Enrique
2010-07-01
We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for the training procedure and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were remarkably high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
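A two-feature classifier of the kind described above, intergenic distance plus a STRING-style functional association score feeding a small neural network, can be sketched as below; the synthetic gene-pair data and network size are assumptions, not the published trained model.

```python
# Sketch: classify adjacent gene pairs as same-operon vs not, from two features:
# intergenic distance (bp) and a STRING-style association score. Synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
y = (rng.random(n) < 0.5).astype(int)                          # 1 = same operon
dist = np.clip(np.where(y == 1, rng.normal(25, 30, n),
                        rng.normal(180, 80, n)), 0, None)      # intergenic bp
string_score = np.clip(np.where(y == 1, rng.normal(0.8, 0.15, n),
                                rng.normal(0.3, 0.15, n)), 0, 1)
X = np.column_stack([dist, string_score])

clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.3f}")
```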
Comparison of Three Risk Scores to Predict Outcomes of Severe Lower Gastrointestinal Bleeding.
Camus, Marine; Jensen, Dennis M; Ohning, Gordon V; Kovacs, Thomas O; Jutabha, Rome; Ghassemi, Kevin A; Machicado, Gustavo A; Dulai, Gareth S; Jensen, Mary E; Gornbein, Jeffrey A
2016-01-01
Improving medical decisions by using a score at the initial patient triage level may lead to improvements in patient management, outcomes, and resource utilization. Unlike for upper gastrointestinal bleeding, there is no validated score for the management of lower gastrointestinal bleeding (LGIB). The aim of our study was to compare the accuracies of 3 different prognostic scores [Center for Ulcer Research and Education Hemostasis prognosis score, Charlson index, and American Society of Anesthesiologists (ASA) score] for the prediction of 30-day rebleeding, surgery, and death in severe LGIB. Data on consecutive patients hospitalized with severe gastrointestinal bleeding from January 2006 to October 2011 in our 2 tertiary academic referral centers were prospectively collected. Sensitivities, specificities, accuracies, and area under the receiver operator characteristic curve were computed for the 3 scores for predictions of rebleeding, surgery, and mortality at 30 days. Two hundred thirty-five consecutive patients with LGIB were included between 2006 and 2011. Twenty-three percent of patients rebled, 6% had surgery, and 7.7% of patients died. The accuracy of each score never reached 70% for predicting either rebleeding or surgery. The ASA score had the highest accuracy for predicting mortality within 30 days (83.5%), whereas the Center for Ulcer Research and Education Hemostasis prognosis score and the Charlson index both had accuracies <75% for the prediction of death within 30 days. The ASA score could be useful to predict death within 30 days. However, a new score is still warranted to predict all 30-day outcomes (rebleeding, surgery, and death) in LGIB.
Massa, Luiz M; Hoffman, Jeanne M; Cardenas, Diana D
2009-01-01
To determine the validity, accuracy, and predictive value of the signs and symptoms of urinary tract infection (UTI) for individuals with spinal cord injury (SCI) using intermittent catheterization (IC) and the accuracy of individuals with SCI on IC at predicting their own UTI. Prospective cohort based on data from the first 3 months of a 1-year randomized controlled trial to evaluate the UTI prevention effectiveness of hydrophilic and standard catheters. Fifty-six community-based individuals on IC. Presence of UTI was defined as bacteriuria with a colony count of at least 10(5) colony-forming units/mL and at least 1 sign or symptom of UTI. Analysis of monthly urine culture and urinalysis data combined with analysis of monthly data collected using a questionnaire that asked subjects to self-report on UTI signs and symptoms and whether or not they felt they had a UTI. Overall, "cloudy urine" had the highest accuracy (83.1%), and "leukocytes in the urine" had the highest sensitivity (82.8%). The highest specificity was for "fever" (99.0%); however, it had a very low sensitivity (6.9%). Subjects were able to predict their own UTI with an accuracy of 66.2%, and the negative predictive value (82.8%) was substantially higher than the positive predictive value (32.6%). The UTI signs and symptoms can predict a UTI more accurately than individual subjects can by using subjective impressions of their own signs and symptoms. Subjects were better at predicting when they did not have a UTI than when they did have a UTI.
Shon, Hyun Kyong; Yoon, Sohee; Moon, Jeong Hee; Lee, Tae Geol
2016-06-09
The popularity of argon gas cluster ion beams (Ar-GCIB) as primary ion beams in time-of-flight secondary ion mass spectrometry (TOF-SIMS) has increased because the molecular ions of large organic- and biomolecules can be detected with less damage to the sample surfaces. However, Ar-GCIB is limited by poor mass resolution as well as poor mass accuracy. The inferior quality of the mass resolution in a TOF-SIMS spectrum obtained by using Ar-GCIB compared to the one obtained by a bismuth liquid metal cluster ion beam and others makes it difficult to identify unknown peaks because of the mass interference from the neighboring peaks. However, in this study, the authors demonstrate improved mass resolution in TOF-SIMS using Ar-GCIB through the delayed extraction of secondary ions, a method typically used in TOF mass spectrometry to increase mass resolution. As for poor mass accuracy, although mass calibration using internal peaks with low mass such as hydrogen and carbon is a common approach in TOF-SIMS, it is unsuited to the present study because of the disappearance of the low-mass peaks in the delayed extraction mode. To resolve this issue, external mass calibration, another regularly used method in TOF-MS, was adapted to enhance mass accuracy in the spectrum and image generated by TOF-SIMS using Ar-GCIB in the delayed extraction mode. By producing spectra analyses of a peptide mixture and bovine serum albumin protein digested with trypsin, along with image analyses of rat brain samples, the authors demonstrate for the first time the enhancement of mass resolution and mass accuracy for the purpose of analyzing large biomolecules in TOF-SIMS using Ar-GCIB through the use of delayed extraction and external mass calibration.
Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman
2011-01-01
This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
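The core idea above, predicting an unsent reading from other, correlated readings rather than from time alone, can be sketched with a multiple linear regression; the sensor variables and synthetic data below are illustrative assumptions.

```python
# Sketch: predict humidity at the sink from correlated readings (temperature,
# light) versus a simple regression on time alone. Synthetic sensor data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
t = np.arange(500, dtype=float)                       # sample index (time)
temp = 25 + 3 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.3, t.size)
light = 400 + 200 * np.sin(2 * np.pi * t / 144 + 0.5) + rng.normal(0, 20, t.size)
humid = 80 - 1.8 * temp + 0.01 * light + rng.normal(0, 0.5, t.size)

train, test = slice(0, 400), slice(400, 500)
simple = LinearRegression().fit(t[train, None], humid[train])
multi = LinearRegression().fit(np.column_stack([temp, light])[train], humid[train])

print("MAE, time only   :",
      round(mean_absolute_error(humid[test], simple.predict(t[test, None])), 3))
print("MAE, multivariate:",
      round(mean_absolute_error(humid[test],
                                multi.predict(np.column_stack([temp, light])[test])), 3))
```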
CPO Prediction: Accuracy Assessment and Impact on UT1 Intensive Results
NASA Technical Reports Server (NTRS)
Malkin, Zinovy
2010-01-01
The UT1 Intensive results heavily depend on the celestial pole offset (CPO) model used during data processing. Since accurate CPO values are available with a delay of two to four weeks, CPO predictions are necessarily applied to the UT1 Intensive data analysis, and errors in the predictions can influence the operational UT1 accuracy. In this paper we assess the real accuracy of CPO prediction using the actual IERS and PUL predictions made in 2007-2009. Also, results of operational processing were analyzed to investigate the actual impact of EOP prediction errors on the rapid UT1 results. It was found that the impact of CPO prediction errors is at a level of several microseconds, whereas the impact of the inaccuracy in the polar motion prediction may be about one order of magnitude larger for ultra-rapid UT1 results. The situation can be improved if the IERS Rapid solution is updated more frequently.
Abdolali, Fatemeh; Zoroofi, Reza Aghaeizadeh; Otake, Yoshito; Sato, Yoshinobu
2017-02-01
Accurate detection of maxillofacial cysts is an essential step for diagnosis, monitoring and planning therapeutic intervention. Cysts vary in size and shape, and existing detection methods lead to poor results. Customizing automatic detection systems to gain sufficient accuracy in clinical practice is highly challenging. For this purpose, integrating engineering knowledge in efficient feature extraction is essential. This paper presents a novel framework for maxillofacial cyst detection. A hybrid methodology based on surface and texture information is introduced. The proposed approach consists of three main steps: first, each cystic lesion is segmented with high accuracy; then, in the second and third steps, feature extraction and classification are performed. Contourlet and SPHARM coefficients are utilized as texture and shape features which are fed into the classifier. Two different classifiers are used in this study: support vector machine and sparse discriminant analysis. Generally, SPHARM coefficients are estimated by the iterative residual fitting (IRF) algorithm, which is based on a stepwise regression method. In order to improve the accuracy of IRF estimation, a method based on extra orthogonalization is employed to reduce linear dependency. We have utilized a ground-truth dataset consisting of cone beam CT images of 96 patients, belonging to three maxillofacial cyst categories: radicular cyst, dentigerous cyst and keratocystic odontogenic tumor. Using orthogonalized SPHARM, the residual sum of squares is decreased, which leads to a more accurate estimation. Analysis of the results based on statistical measures such as specificity, sensitivity, positive predictive value and negative predictive value is reported. A classification rate of 96.48% is achieved using sparse discriminant analysis and orthogonalized SPHARM features. Classification accuracy improved by at least 8.94% relative to conventional features. This study demonstrated that our proposed methodology can improve computer assisted diagnosis (CAD) performance by incorporating more discriminative features. Using orthogonalized SPHARM is promising in computerized cyst detection and may have a significant impact on future CAD systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Asha, Stephen Edward; Higham, Matthew; Child, Peter
2015-05-01
If package counts on abdominal CTs of body-packers were known to be accurate, follow-up CTs could be avoided. The objective was to determine the accuracy of CT for the number of concealed packages in body-packers, and the reliability of package counts reported by body-packers who admit to concealing drugs. Suspected body-packers were identified from the emergency department (ED) database. The medical record and radiology reports were reviewed for package counts determined by CT, patient-reported and physically retrieved. The last method was used as the reference standard. Sensitivity, specificity, positive predictive values (PPV) and negative predictive values (NPV) were calculated for CT package count accuracy. Reliability of patient-reported package counts was assessed using Pearson's correlation coefficient. There were 50 confirmed body-packers on whom 104 CT scans were performed. Data for the index and reference tests were available for 84 scans. The sensitivity, specificity, PPV and NPV for CT package count were 63% (95% CI 46% to 77%), 82% (95% CI 67% to 92%), 76% (95% CI 58% to 89%) and 71% (95% CI 56% to 83%), respectively. For CTs with a package count <15, the sensitivity, specificity, PPV and NPV for CT package count were 96% (95% CI 80% to 99%), 95% (95% CI 82% to 99%), 93% (95% CI 76% to 99%) and 97% (95% CI 86% to 100%), respectively. Correlation between patient-reported package counts and the number of packages retrieved was high (r=0.90, p<0.001, R2=81%). The accuracy of CT for determining the number of concealed packages is poor, although when applied to patients with few concealed packages accuracy is high and is useful as a rule-out test. Among patients who have admitted to drug concealment, the number of packages reported to be concealed is reliable. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Analysis of near infrared spectra for age-grading of wild populations of Anopheles gambiae.
Krajacich, Benjamin J; Meyers, Jacob I; Alout, Haoues; Dabiré, Roch K; Dowell, Floyd E; Foy, Brian D
2017-11-07
Understanding the age structure of mosquito populations, especially malaria vectors such as Anopheles gambiae, is important for assessing the risk of infectious mosquitoes and how vector control interventions may impact this risk. The use of near-infrared spectroscopy (NIRS) for age-grading has been demonstrated previously on laboratory and semi-field mosquitoes, but to date has not been utilized on wild-caught mosquitoes whose age is externally validated via parity status or parasite infection stage. In this study, we developed regression and classification models using NIRS on datasets of wild An. gambiae (s.l.) reared from larvae collected from the field in Burkina Faso, and two laboratory strains. We compared the accuracy of these models for predicting the ages of wild-caught mosquitoes that had been scored for their parity status as well as for positivity for Plasmodium sporozoites. Regression models utilizing variable selection increased predictive accuracy over the more common full-spectrum partial least squares (PLS) approach for cross-validation of the datasets, validation, and independent test sets. Models produced from datasets that included the greatest range of mosquito samples (i.e. different sampling locations and times) had the highest predictive accuracy on independent testing sets, though overall accuracy on these samples was low. For classification, we found that intramodel accuracy ranged from 73.5 to 97.0% for grouping of mosquitoes into "early" and "late" age classes, with the highest prediction accuracy found in laboratory-colonized mosquitoes. However, this accuracy decreased on test sets, with the highest classification accuracy on an independent set of wild-caught larvae reared to set ages being 69.6%. Variation in NIRS data, likely from dietary, genetic, and other factors, limits the accuracy of this technique with wild-caught mosquitoes. Alternative algorithms may help improve prediction accuracy, but care should be taken to either maximize variety in models or minimize confounders.
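A minimal sketch of the full-spectrum PLS baseline mentioned above, regressing age on NIR spectra with cross-validation, is given below; the spectra are simulated and the component count is an assumption, not a model fitted to the study's field data.

```python
# Sketch: partial least squares (PLS) regression of age on NIR spectra,
# evaluated by cross-validation. Spectra are simulated, not field data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_mosq, n_wavelengths = 150, 300
age = rng.uniform(1, 15, n_mosq)                           # days post-emergence
loadings = rng.standard_normal(n_wavelengths) * 0.02
spectra = (1.0 + np.outer(age, loadings)                   # age-related signal
           + rng.normal(0, 0.05, (n_mosq, n_wavelengths)))  # instrument noise

pls = PLSRegression(n_components=6)
pred = cross_val_predict(pls, spectra, age, cv=5).ravel()
rmse = float(np.sqrt(np.mean((pred - age) ** 2)))
print(f"cross-validated RMSE: {rmse:.2f} days")
```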
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies.
David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A
2010-02-08
All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 amyloidogenic (AM) and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study is limited, and is consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general.
Mahmood, I
1998-05-01
Extrapolation of animal data to assess pharmacokinetic parameters in man is an important tool in drug development. Clearance, volume of distribution and elimination half-life are the three most frequently extrapolated pharmacokinetic parameters. Extensive work has been done to improve the predictive performance of allometric scaling for clearance. In general there is good correlation between body weight and volume, hence volume in man can be predicted with reasonable accuracy from animal data. Besides the volume of distribution in the central compartment (Vc), two other volume terms, the volume of distribution by area (Vbeta) and the volume of distribution at steady state (VdSS), are also extrapolated from animals to man. This report compares the predictive performance of allometric scaling for Vc, Vbeta and VdSS in man from animal data. The relationship between elimination half-life (t(1/2)) and body weight across species results in poor correlation, most probably because of the hybrid nature of this parameter. To predict half-life in man from animal data, an indirect method (CL=VK, where CL=clearance, V is volume and K is elimination rate constant) has been proposed. This report proposes another indirect method which uses the mean residence time (MRT). After establishing that MRT can be predicted across species, it was used to predict half-life using the equation MRT=1.44 x t(1/2). The results of the study indicate that Vc is predicted more accurately than Vbeta and VdSS in man. It should be emphasized that for first-time dosing in man, Vc is a more important pharmacokinetic parameter than Vbeta or VdSS. Furthermore, MRT can be predicted reasonably well for man and can be used for prediction of half-life.
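The allometric extrapolation discussed above amounts to fitting a power law CL = a·W^b across species on a log-log scale and reading off the human value, with half-life then derived indirectly. The species values below are made-up numbers for illustration; the half-life step uses the one-compartment relations referenced in the abstract (t1/2 = 0.693·V/CL and MRT = 1.44·t1/2).

```python
# Sketch of simple allometric scaling: fit log(CL) = log(a) + b*log(W) across
# animal species, predict human clearance, then derive half-life via MRT.
# Species values and the human volume of distribution are illustrative only.
import numpy as np

weights = np.array([0.02, 0.25, 2.5, 12.0])      # kg: mouse, rat, rabbit, dog
clearances = np.array([0.9, 8.0, 55.0, 190.0])   # mL/min (hypothetical)

b, log_a = np.polyfit(np.log(weights), np.log(clearances), 1)
cl_human = np.exp(log_a) * 70.0 ** b             # predicted CL for a 70 kg human

v_human = 42_000.0                                # mL, assumed volume of distribution
t_half = 0.693 * v_human / cl_human               # min, one-compartment relation
mrt = 1.44 * t_half                               # mean residence time, min
print(f"b={b:.2f}, human CL={cl_human:.0f} mL/min, "
      f"t1/2={t_half/60:.1f} h, MRT={mrt/60:.1f} h")
```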
Conde-Agudelo, A; Papageorghiou, A T; Kennedy, S H; Villar, J
2013-05-01
Several biomarkers for predicting intrauterine growth restriction (IUGR) have been proposed in recent years. However, the predictive performance of these biomarkers has not been systematically evaluated. To determine the predictive accuracy of novel biomarkers for IUGR in women with singleton gestations. Electronic databases, reference list checking and conference proceedings. Observational studies that evaluated the accuracy of novel biomarkers proposed for predicting IUGR. Data were extracted on characteristics, quality and predictive accuracy from each study to construct 2×2 tables. Summary receiver operating characteristic curves, sensitivities, specificities and likelihood ratios (LRs) were generated. A total of 53 studies, including 39,974 women and evaluating 37 novel biomarkers, fulfilled the inclusion criteria. Overall, the predictive accuracy of angiogenic factors for IUGR was minimal (median pooled positive and negative LRs of 1.7, range 1.0-19.8; and 0.8, range 0.0-1.0, respectively). Two small case-control studies reported high predictive values for placental growth factor and angiopoietin-2 only when IUGR was defined as birthweight centile with clinical or pathological evidence of fetal growth restriction. Biomarkers related to endothelial function/oxidative stress, placental protein/hormone, and others such as serum levels of vitamin D, urinary albumin:creatinine ratio, thyroid function tests and metabolomic profile had low predictive accuracy. None of the novel biomarkers evaluated in this review are sufficiently accurate to recommend their use as predictors of IUGR in routine clinical practice. However, the use of biomarkers in combination with biophysical parameters and maternal characteristics could be more useful and merits further research. © 2013 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2013 RCOG.
de Saint Laumer, Jean‐Yves; Leocata, Sabine; Tissot, Emeline; Baroux, Lucie; Kampf, David M.; Merle, Philippe; Boschung, Alain; Seyfried, Markus
2015-01-01
We previously showed that the relative response factors of volatile compounds were predictable from either combustion enthalpies or their molecular formulae only [1]. We now extend this prediction to silylated derivatives by adding an increment in the ab initio calculation of combustion enthalpies. The accuracy of the experimental relative response factors database was also improved and its population increased to 490 values. In particular, more brominated compounds were measured, and their prediction accuracy was improved by adding a correction factor in the algorithm. The correlation coefficient between predicted and measured values increased from 0.936 to 0.972, leading to a mean prediction accuracy of ±6%. Thus, 93% of the relative response factor values were predicted with an accuracy of better than ±10%. The capabilities of the extended algorithm are exemplified by (i) the quick and accurate quantification of hydroxylated metabolites resulting from a biodegradation test, after silylation and prediction of their relative response factors, without having the reference substances available; and (ii) rapid purity determinations of volatile compounds. This study confirms that gas chromatography with flame ionization detection, used with predicted relative response factors, is one of the few techniques that enables quantification of volatile compounds without calibrating the instrument with the pure reference substance. PMID:26179324
Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset
Lipps, David; Devineni, Sree
2016-01-01
MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs or not. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples. The sizes of these datasets are much smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors on large datasets becomes unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity. These optimization strategies may introduce serious limitations in applications. Moreover, to meet continuously rising expectations of these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations, and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on the newly designed large dataset is improved by 7%, to 93%. The meta-predictor also proved to be less dependent on the datasets and to have a better balance between sensitivity and specificity. This study is important in two ways: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors; second, a new miRNA predictor with significantly improved prediction accuracy is developed for the community for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
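The meta-strategy described above, nonlinear transformation of several base predictors' scores feeding an artificial neural network, can be sketched as follows; the simulated base scores and the particular squashing transform are assumptions, since the abstract only summarizes mirMeta's pipeline.

```python
# Sketch of a meta-predictor: nonlinear transforms of base predictor scores
# feed an artificial neural network. Base scores here are simulated.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, n_base = 2000, 5
y = (rng.random(n) < 0.5).astype(int)                 # 1 = real miRNA precursor
# five imperfect base predictors: noisy, differently scaled scores
base = y[:, None] + rng.normal(0, [0.6, 0.8, 1.0, 1.2, 1.5], (n, n_base))

def transform(scores):
    """Assumed nonlinear preprocessing: squash scores to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-scores))

X = transform(base)
meta = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
print("meta-predictor CV accuracy:",
      round(cross_val_score(meta, X, y, cv=5).mean(), 3))
print("single-predictor baseline CV accuracy:",
      round(cross_val_score(meta, X[:, :1], y, cv=5).mean(), 3))
```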
Identifying Causal Risk Factors for Violence among Discharged Patients
Coid, Jeremy W.; Kallis, Constantinos; Doyle, Mike; Shaw, Jenny; Ullrich, Simone
2015-01-01
Background Structured Professional Judgement (SPJ) is routinely administered in mental health and criminal justice settings but cannot identify violence risk above moderate accuracy. There is no current evidence that violence can be prevented using SPJ. This may be explained by routine application of predictive instead of causal statistical models when standardising SPJ instruments. Methods We carried out a prospective cohort study of 409 male and female patients discharged from medium secure services in England and Wales to the community. Measures were taken at baseline (pre-discharge), 6 and 12 months post-discharge using the Historical, Clinical and Risk-20 items version 3 (HCR-20v3) and the Structured Assessment of Protective Factors (SAPROF). Information on violence was obtained via the MacArthur community violence instrument and the Police National Computer. Results In a lagged model, HCR-20v3 and SAPROF items were poor predictors of violence. Eight items of the HCR-20v3 and 4 SAPROF items did not predict violent behaviour better than chance. In re-analyses considering temporal proximity of risk/protective factors (exposure) on violence (outcome), risk was elevated due to violent ideation (OR 6.98, 95% CI 3.85–12.65, P<0.001), instability (OR 5.41, 95% CI 3.44–8.50, P<0.001), and poor coping/stress (OR 8.35, 95% CI 4.21–16.57, P<0.001). All 3 risk factors were explanatory variables which drove the association with violent outcome. Self-control (OR 0.13, 95% CI 0.08–0.24, P<0.001) conveyed protective effects and explained the association of other protective factors with violence. Conclusions Using two standardised SPJ instruments, predictive (lagged) methods could not identify risk and protective factors which must be targeted in interventions for discharged patients with severe mental illness. Predictive methods should be abandoned if the aim is to progress from risk assessment to effective risk management and replaced by methods which identify factors causally associated with violence. PMID:26554711
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Fatyga, M; Vora, S
Purpose: To determine if differences in patient positioning methods have an impact on the incidence and modeling of grade >=2 acute rectal toxicity in prostate cancer patients who were treated with Intensity Modulated Radiation Therapy (IMRT). Methods: We compared two databases of patients treated with radiation therapy for prostate cancer: a database of 79 patients who were treated with 7 field IMRT and daily image guided positioning based on implanted gold markers (IGRTdb), and a database of 302 patients who were treated with 5 field IMRT and daily positioning using a trans-abdominal ultrasound system (USdb). Complete planning dosimetry was available for IGRTdb patients, while limited planning dosimetry, recorded at the time of planning, was available for USdb patients. We fit the Lyman-Kutcher-Burman (LKB) model to IGRTdb only, and Univariate Logistic Regression (ULR) NTCP models to both databases. We perform Receiver Operating Characteristics analysis to determine the predictive power of the NTCP models. Results: The incidence of grade >=2 acute rectal toxicity in IGRTdb was 20%, while the incidence in USdb was 54%. Fits of both LKB and ULR models yielded predictive NTCP models for IGRTdb patients with Area Under the Curve (AUC) in the 0.63 – 0.67 range. Extrapolation of the ULR model from IGRTdb to planning dosimetry in USdb predicts that the incidence of acute rectal toxicity in USdb should not exceed 40%. Fits of the ULR model to the USdb do not yield predictive NTCP models and their AUC is consistent with AUC = 0.5. Conclusion: Accuracy of a patient positioning system affects clinically observed toxicity rates and the quality of NTCP models that can be derived from toxicity data. Poor correlation between planned and clinically delivered dosimetry may lead to erroneous or poorly performing NTCP models, even if the number of patients in a database is large.
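For reference, the LKB model fitted to the IGRT database is conventionally written in terms of a generalized equivalent uniform dose and a probit link; the standard formulation is sketched below (notation assumed; no fitted parameter values from this work are implied).

```latex
% Standard LKB NTCP formulation (notation assumed; no fitted values implied).
%   v_i, D_i : fractional volume and dose of DVH bin i
%   n        : volume-effect parameter,  m : slope,  TD_{50} : 50% tolerance dose
\[
  \mathrm{gEUD} = \Bigl(\sum_i v_i\, D_i^{1/n}\Bigr)^{\!n},
  \qquad
  t = \frac{\mathrm{gEUD} - TD_{50}}{m\, TD_{50}},
  \qquad
  \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t} e^{-x^{2}/2}\,dx .
\]
```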
Yao, Chen; Zhu, Xiaojin; Weigel, Kent A
2016-11-07
Genomic prediction for novel traits, which can be costly and labor-intensive to measure, is often hampered by low accuracy due to the limited size of the reference population. As an option to improve prediction accuracy, we introduced a semi-supervised learning strategy known as the self-training model, and applied this method to genomic prediction of residual feed intake (RFI) in dairy cattle. We describe a self-training model that is wrapped around a support vector machine (SVM) algorithm, which enables it to use data from animals with and without measured phenotypes. Initially, a SVM model was trained using data from 792 animals with measured RFI phenotypes. Then, the resulting SVM was used to generate self-trained phenotypes for 3000 animals for which RFI measurements were not available. Finally, the SVM model was re-trained using data from up to 3792 animals, including those with measured and self-trained RFI phenotypes. Incorporation of additional animals with self-trained phenotypes enhanced the accuracy of genomic predictions compared to that of predictions that were derived from the subset of animals with measured phenotypes. The optimal ratio of animals with self-trained phenotypes to animals with measured phenotypes (2.5, 2.0, and 1.8) and the maximum increase achieved in prediction accuracy measured as the correlation between predicted and actual RFI phenotypes (5.9, 4.1, and 2.4%) decreased as the size of the initial training set (300, 400, and 500 animals with measured phenotypes) increased. The optimal number of animals with self-trained phenotypes may be smaller when prediction accuracy is measured as the mean squared error rather than the correlation between predicted and actual RFI phenotypes. Our results demonstrate that semi-supervised learning models that incorporate self-trained phenotypes can achieve genomic prediction accuracies that are comparable to those obtained with models using larger training sets that include only animals with measured phenotypes. Semi-supervised learning can be helpful for genomic prediction of novel traits, such as RFI, for which the size of reference population is limited, in particular, when the animals to be predicted and the animals in the reference population originate from the same herd-environment.
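The self-training loop described above can be sketched as follows. Because RFI is a continuous trait, the sketch uses support vector regression as the SVM flavor, and the marker data are simulated; both are assumptions rather than details taken from the study.

```python
# Sketch of self-training for genomic prediction of a continuous trait (RFI):
# train an SVM regressor on measured phenotypes, create self-trained phenotypes
# for unphenotyped animals, and retrain on the combined set. Simulated data.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
n_labeled, n_unlabeled, n_test, n_snp = 400, 1200, 300, 500
effects = rng.normal(0, 0.05, n_snp)                 # simulated SNP effects

def simulate(n):
    geno = rng.binomial(2, 0.3, (n, n_snp)).astype(float)
    return geno, geno @ effects + rng.normal(0, 1.0, n)

X_lab, y_lab = simulate(n_labeled)
X_unlab, _ = simulate(n_unlabeled)                    # phenotypes never observed
X_test, y_test = simulate(n_test)

base = SVR(C=10.0).fit(X_lab, y_lab)
print("labeled only  R2:", round(r2_score(y_test, base.predict(X_test)), 3))

pseudo = base.predict(X_unlab)                        # self-trained phenotypes
X_all = np.vstack([X_lab, X_unlab])
y_all = np.concatenate([y_lab, pseudo])
self_trained = SVR(C=10.0).fit(X_all, y_all)
print("self-training R2:", round(r2_score(y_test, self_trained.predict(X_test)), 3))
```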
Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv
2017-02-01
Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracy of GEBV from all tested GS models was substantially higher than that of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iterations 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher than estimates obtained from the traditional pedigree-based BLUP model for BCWD resistance. Overall, we found that, using a much smaller training sample size compared to similar studies in livestock, GS can substantially improve the selection accuracy and genetic gains for this trait in a commercial rainbow trout breeding population.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald
2016-01-01
The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers (e.g., spacecraft fields and misalignments) and from inside (e.g., zero offset and scale factor errors). Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
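A toy version of the two pre-launch analyses mentioned above, propagating zero-offset and scale-factor errors to the measured field by Monte Carlo and comparing against a linear covariance prediction, might look like this; the error magnitudes are placeholders, not GOES-R budget values.

```python
# Sketch: compare Monte Carlo and linear covariance propagation of magnetometer
# zero-offset and scale-factor errors to the measured-field error (1-axis toy).
# Error sigmas are placeholders, not GOES-R budget values.
import numpy as np

rng = np.random.default_rng(6)
B_true = 100.0                      # nT, ambient field along this axis
sigma_offset = 1.0                  # nT, zero-offset uncertainty
sigma_scale = 0.005                 # fractional scale-factor uncertainty

# Monte Carlo: sample instrument errors and observe the measurement error spread
n_trials = 100_000
offset = rng.normal(0, sigma_offset, n_trials)
scale = 1 + rng.normal(0, sigma_scale, n_trials)
B_meas = scale * B_true + offset
mc_sigma = B_meas.std()

# Linear covariance propagation: var(B_meas) ~ sigma_offset^2 + (B*sigma_scale)^2
cov_sigma = np.sqrt(sigma_offset**2 + (B_true * sigma_scale) ** 2)
print(f"Monte Carlo sigma: {mc_sigma:.3f} nT, covariance sigma: {cov_sigma:.3f} nT")
```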
2015-10-01
Ford Class Aircraft Carrier: Poor Outcomes Are the Predictable Consequences of the Prevalent Acquisition Culture. The Navy set ambitious goals for the Ford-class program, including an array of new technologies and design features that were intended…
Direct multiangle solution for poorly stratified atmospheres
Vladimir Kovalev; Cyle Wold; Alexander Petkov; Wei Min Hao
2012-01-01
The direct multiangle solution is considered, which allows improving the scanning lidar-data-inversion accuracy when the requirement of the horizontally stratified atmosphere is poorly met. The signal measured at zenith or close to zenith is used as a core source for extracting optical characteristics of the atmospheric aerosol loading. The multiangle signals are used...
Characterization of a normal control group: are they healthy?
Aine, C J; Sanfratello, L; Adair, J C; Knoefel, J E; Qualls, C; Lundy, S L; Caprihan, A; Stone, D; Stephen, J M
2014-01-01
We examined the health of a control group (18-81 years) in our aging study, which is similar to control groups used in other neuroimaging studies. The current study was motivated by our previous results showing that one-third of the older control group had moderate to severe white matter hyperintensities and/or cortical volume loss, which correlated with poor performance on memory tasks. Therefore, we predicted that cardiovascular risk factors (e.g., hypertension, high cholesterol) within the control group would account for significant variance in working memory task performance. Fifty-five participants completed 4 verbal and spatial working memory tasks, neuropsychological exams, diffusion tensor imaging (DTI), and blood tests to assess vascular risk. In addition to using a repeated measures ANOVA design, a cluster analysis was applied to the vascular risk measures as a data reduction step to characterize relationships between conjoint risk factors. The cluster groupings were used to predict working memory performance. The results show that higher levels of systolic blood pressure were associated with: 1) poor spatial working memory accuracy; and 2) lower fractional anisotropy (FA) values in multiple brain regions. In contrast, higher levels of total cholesterol corresponded with increased accuracy in verbal working memory. An association between lower FA values and higher cholesterol levels was identified in brain regions different from those associated with systolic blood pressure. The conjoint risk analysis revealed that Risk Cluster Group 3 (the group with the greatest number of risk factors) displayed: 1) the poorest performance on the spatial working memory tasks; 2) the longest reaction times across both spatial and verbal memory tasks; and 3) the lowest FA values across widespread brain regions. Our results confirm that a considerable range of vascular risk factors is present in a typical control group, even in younger individuals, and that these factors have robust effects on brain anatomy and function. These results present a new challenge to neuroimaging studies, both for defining a cohort from which to characterize 'normative' brain circuitry and for establishing a control group to compare with other clinical populations. © 2013.
Zheng, Leilei; Chai, Hao; Chen, Wanzhen; Yu, Rongrong; He, Wei; Jiang, Zhengyan; Yu, Shaohua; Li, Huichun; Wang, Wei
2011-12-01
Early parental bonding experiences play a role in emotion recognition and expression in later adulthood, and patients with personality disorder frequently experience inappropriate parental bonding styles; therefore, the aim of the present study was to explore whether parental bonding style is correlated with recognition of facial emotion in personality disorder patients. The Parental Bonding Instrument (PBI) and the Matsumoto and Ekman Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set tests were administered to 289 participants. Patients scored lower on parental Care but higher on parental Freedom Control and Autonomy Denial subscales, and they displayed less accuracy when recognizing contempt, disgust and happiness than the healthy volunteers. In healthy volunteers, maternal Autonomy Denial significantly predicted accuracy when recognizing fear, and maternal Care predicted the accuracy of recognizing sadness. In patients, paternal Care negatively predicted the accuracy of recognizing anger, paternal Freedom Control predicted the perceived intensity of contempt, and maternal Care predicted the accuracy of recognizing sadness and the intensity of disgust. Parental bonding styles have an impact on the decoding process and sensitivity when recognizing facial emotions, especially in personality disorder patients. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.
Alternative evaluation metrics for risk adjustment methods.
Park, Sungchul; Basu, Anirban
2018-06-01
Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
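The group-level versus individual-level trade-off described above can be illustrated with a small simulation that scores a parametric regression and a machine-learning estimator both on mean payment error within a group and on individual-level error in the expenditure tail; the data-generating process and estimator choices below are assumptions for illustration only.

```python
# Sketch: contrast group-level and individual-level (tail) prediction accuracy
# of two risk-adjustment estimators on synthetic expenditure data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 5000
age = rng.integers(18, 65, n)
chronic = rng.binomial(1, 0.25, n)
spend = np.exp(5 + 0.02 * age + 1.2 * chronic + rng.normal(0, 1.0, n))  # skewed $

X = np.column_stack([age, chronic])
train, test = slice(0, 4000), slice(4000, n)
models = {"OLS": LinearRegression(), "GBM": GradientBoostingRegressor(random_state=0)}
for name, m in models.items():
    pred = m.fit(X[train], spend[train]).predict(X[test])
    grp = chronic[test] == 1                              # one illustrative group
    grp_err = abs(pred[grp].mean() - spend[test][grp].mean())
    tail = spend[test] >= np.quantile(spend[test], 0.95)  # top 5% of spenders
    tail_err = np.mean(np.abs(pred[tail] - spend[test][tail]))
    print(f"{name}: group-level payment error ${grp_err:,.0f}, "
          f"mean absolute error in top 5% ${tail_err:,.0f}")
```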