Ye, Jiang-Feng; Zhao, Yu-Xin; Ju, Jian; Wang, Wei
2017-10-01
To discuss the value of the Bedside Index for Severity in Acute Pancreatitis (BISAP), the Modified Early Warning Score (MEWS), serum calcium (Ca2+) and red cell distribution width (RDW) for predicting the severity grade of acute pancreatitis (AP), and to develop and verify a more accurate scoring system to predict the severity of AP. In 302 patients with AP, we calculated BISAP and MEWS scores and used single-factor logistic regression to analyze the relationships of BISAP, RDW, MEWS, and serum Ca2+ with the severity of AP. The variables that were statistically significant in the single-factor logistic regression were entered into a multi-factor logistic regression model; forward stepwise regression was used to screen variables and build a multi-factor prediction model. A receiver operating characteristic (ROC) curve was constructed, and the performance of the multi- and single-factor prediction models in predicting the severity of AP was evaluated using the area under the ROC curve (AUC). The internal validity of the model was verified through bootstrapping. Among the 302 patients with AP, 209 had mild acute pancreatitis (MAP) and 93 had severe acute pancreatitis (SAP). Single-factor logistic regression analysis showed that BISAP, MEWS and serum Ca2+ are predictors of the severity of AP (P<0.001), whereas RDW is not (P>0.05). Multi-factor logistic regression analysis showed that BISAP and serum Ca2+ are independent predictors of AP severity (P<0.001), whereas MEWS is not (P>0.05); BISAP is negatively correlated with serum Ca2+ (r=-0.330, P<0.001). The constructed model is: ln(P/(1-P)) = 7.306 + 1.151*BISAP - 4.516*serum Ca2+. The predictive ability of each model for SAP follows the order: combined BISAP and serum Ca2+ model > serum Ca2+ > BISAP.
The difference in predictive ability between BISAP and serum Ca2+ alone is not statistically significant (P>0.05); however, the newly built prediction model differs significantly in predictive ability from both BISAP and serum Ca2+ individually (P<0.01). Verification of the internal validity of the models by bootstrapping was favorable. BISAP and serum Ca2+ have high predictive value for the severity of AP; however, the model built by combining them is markedly superior to either predictor alone. Furthermore, the combined model is simple, practical and appropriate for clinical use. Copyright © 2016. Published by Elsevier Masson SAS.
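The combined model above can be turned into a predicted probability directly. Below is a minimal sketch, assuming serum Ca2+ is measured in mmol/L (an assumption; the abstract does not restate the units):

```python
import math

def predict_sap_probability(bisap, serum_ca):
    """Probability of severe acute pancreatitis (SAP) from the combined
    model reported above: ln(P/(1-P)) = 7.306 + 1.151*BISAP - 4.516*Ca.
    serum_ca is assumed to be in mmol/L."""
    logit = 7.306 + 1.151 * bisap - 4.516 * serum_ca
    return 1.0 / (1.0 + math.exp(-logit))

# Predicted risk rises with BISAP and falls with serum calcium.
```

For example, BISAP 3 with a low serum Ca2+ of 1.8 mmol/L yields a high predicted risk, while BISAP 0 with normal calcium yields a low one.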
Predicting outcome in severe traumatic brain injury using a simple prognostic model.
Sobuwa, Simpiwe; Hartzenberg, Henry Benjamin; Geduld, Heike; Uys, Corrie
2014-06-17
Several studies have made it possible to predict outcome in severe traumatic brain injury (TBI), making such prediction a beneficial aid for clinical decision-making in the emergency setting. However, reliable predictive models are lacking for resource-limited prehospital settings such as those in developing countries like South Africa. The aim was to develop a simple predictive model for severe TBI using clinical variables in a South African prehospital setting. All consecutive patients admitted to two level-one centres in Cape Town, South Africa, for severe TBI were included. A binary logistic regression model was used, which included three predictor variables: oxygen saturation (SpO₂), Glasgow Coma Scale (GCS) and pupil reactivity. The Glasgow Outcome Scale was used to assess outcome at hospital discharge. A total of 74.4% of the outcomes were correctly predicted by the logistic regression model. The model demonstrated SpO₂ (p=0.019), GCS (p=0.001) and pupil reactivity (p=0.002) to be independently significant predictors of outcome in severe TBI. Odds ratios of a good outcome were 3.148 (SpO₂ ≥ 90%), 5.108 (GCS 6-8) and 4.405 (pupils bilaterally reactive). This model is potentially useful for effective prediction of outcome in severe TBI.
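In a main-effects binary logistic model, reported odds ratios multiply on the odds scale. A sketch of that arithmetic for the three predictors above (the model's intercept and reference categories are not reported in the abstract, so only a relative odds multiplier can be computed, not an absolute probability):

```python
# Reported odds ratios for a good outcome (from the abstract).
OR_SPO2_GE_90 = 3.148       # SpO2 >= 90%
OR_GCS_6_8 = 5.108          # GCS 6-8
OR_PUPILS_REACTIVE = 4.405  # pupils bilaterally reactive

def combined_odds_multiplier(spo2_ge_90, gcs_6_8, pupils_reactive):
    """Relative odds of a good outcome versus a patient with none of the
    favourable findings, obtained by multiplying the applicable ORs."""
    mult = 1.0
    if spo2_ge_90:
        mult *= OR_SPO2_GE_90
    if gcs_6_8:
        mult *= OR_GCS_6_8
    if pupils_reactive:
        mult *= OR_PUPILS_REACTIVE
    return mult
```

A patient with all three favourable findings has roughly 71 times the odds of a good outcome compared with a patient with none.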
2009-01-01
Background: Feed composition has a large impact on the growth of animals, particularly marine fish. We have developed a quantitative dynamic model that can predict the growth and body composition of marine fish for a given feed composition over a timespan of several months. The model takes into consideration the effects of environmental factors, particularly temperature, on growth, and it incorporates detailed kinetics describing the main metabolic processes (protein, lipid, and central metabolism) known to play major roles in growth and body composition. Results: For validation, we compared our model's predictions with the results of several experimental studies. We showed that the model gives reliable predictions of growth, nutrient utilization (including amino acid retention), and body composition over a timespan of several months, longer than most of the previously developed predictive models. Conclusion: We demonstrate that, despite the difficulties involved, multiscale models in biology can yield reasonable and useful results. The model predictions are reliable over several timescales and in the presence of strong temperature fluctuations, which are crucial factors for modeling marine organism growth. The model provides important improvements over existing models. PMID:19903354
Honeybul, Stephen; Ho, Kwok M; Lind, Christopher R P; Gillett, Grant R
2014-05-01
The goal in this study was to assess the validity of the Corticosteroid Randomization After Significant Head Injury (CRASH) collaborators prediction model in predicting mortality and unfavorable outcome at 18 months in patients with severe traumatic brain injury (TBI) requiring decompressive craniectomy. In addition, the authors aimed to assess whether this model was well calibrated in predicting outcome across a wide spectrum of severity of TBI requiring decompressive craniectomy. This prospective observational cohort study included all patients who underwent a decompressive craniectomy following severe TBI at the two major trauma hospitals in Western Australia between 2004 and 2012 and for whom 18-month follow-up data were available. Clinical and radiological data on initial presentation were entered into the Web-based model and the predicted outcome was compared with the observed outcome. In validating the CRASH model, the authors used the area under the receiver operating characteristic curve to assess the ability of the CRASH model to differentiate between favorable and unfavorable outcomes. The ability of the CRASH 6-month unfavorable prediction model to differentiate between unfavorable and favorable outcomes at 18 months after decompressive craniectomy was good (area under the receiver operating characteristic curve 0.85, 95% CI 0.80-0.90). However, the model's calibration was not perfect. The slope and the intercept of the calibration curve were 1.66 (SE 0.21) and -1.11 (SE 0.14), respectively, suggesting that the predicted risks of unfavorable outcomes were not sufficiently extreme or different across different risk strata and were systematically too high (or overly pessimistic), respectively. The CRASH collaborators prediction model can be used as a surrogate index of injury severity to stratify patients according to injury severity.
However, clinical decisions should not be based solely on the predicted risks derived from the model, because the number of patients in each predicted risk stratum was still relatively small and hence the results were relatively imprecise. Notwithstanding these limitations, the model may add to a clinician's ability to have better-informed conversations with colleagues and patients' relatives about prognosis.
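One standard way to apply a reported calibration slope and intercept is logistic recalibration of the predicted risks. The sketch below illustrates that generic technique using the values reported above; it is not a formula published by the authors:

```python
import math

def _logit(p):
    return math.log(p / (1.0 - p))

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def recalibrate(p_pred, slope=1.66, intercept=-1.11):
    """Logistic recalibration of a predicted risk:
        p' = sigmoid(intercept + slope * logit(p)).
    Slope and intercept default to the calibration curve reported above."""
    return _sigmoid(intercept + slope * _logit(p_pred))
```

With these values, a mid-range predicted risk of 0.5 shrinks to roughly 0.25, consistent with the authors' observation that predicted risks were systematically too high.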
NASA Astrophysics Data System (ADS)
Yu, Z.; Lin, S.
2011-12-01
Regional heat waves and droughts have major economic and societal impacts on regional and even global scales. For example, during and following the 2010-2011 La Nina period, severe droughts were reported in many places around the world, including China, the southern US, and East Africa, causing severe hardship in China and famine in East Africa. In this study, we investigate the feasibility and predictability of severe spring-summer drought events 3 to 6 months in advance with the 25-km resolution Geophysical Fluid Dynamics Laboratory High-Resolution Atmosphere Model (HiRAM), which is built as a seamless weather-climate model capable of long-term climate simulations as well as skillful seasonal predictions (e.g., Chen and Lin 2011, GRL). We adopted a similar methodology and the same (HiRAM) model as in Chen and Lin (2011), which has been used successfully for seasonal hurricane predictions. A series of initialized 7-month forecasts starting from Dec 1 were performed for each year (5 members each) during the past decade (2000-2010). We then evaluate the predictability of the severe drought events during this period by comparing model predictions against available observations. To evaluate the predictive skill in this preliminary report, we focus on the anomalies of precipitation, sea-level pressure, and 500-mb height. These anomalies are computed as the individual model prediction minus the mean climatology obtained from an independent AMIP-type "simulation" using observed SSTs (rather than the predictive SSTs used in the forecasts) from the same model.
Multivariate poisson lognormal modeling of crashes by type and severity on rural two lane highways.
Wang, Kai; Ivan, John N; Ravishanker, Nalini; Jackson, Eric
2017-02-01
In an effort to improve traffic safety, there has been considerable interest in estimating crash prediction models and identifying factors contributing to crashes. To account for crash frequency variations among crash types and severities, crash prediction models have been estimated by type and severity. Univariate crash count models have been used by researchers to estimate crashes by crash type or severity, in which the crash counts by type or severity are assumed to be independent of one another and modelled separately. When considering crash types and severities simultaneously, this may neglect potential correlations between crash counts due to the presence of shared unobserved factors across crash types or severities for a specific roadway intersection or segment, and might lead to biased parameter estimation and reduced model accuracy. The focus of this study is to estimate crashes by both crash type and crash severity using the Integrated Nested Laplace Approximation (INLA) Multivariate Poisson Lognormal (MVPLN) model, and to identify the different effects of contributing factors on different crash type and severity counts on rural two-lane highways. The INLA MVPLN model can simultaneously model crash counts by crash type and crash severity while accounting for the potential correlations among them, and it significantly decreases the computational time compared with a fully Bayesian fitting of the MVPLN model using the Markov Chain Monte Carlo (MCMC) method. This paper describes estimation of MVPLN models for three-way stop-controlled (3ST) intersections, four-way stop-controlled (4ST) intersections, four-way signalized (4SG) intersections, and roadway segments on rural two-lane highways. Annual Average Daily Traffic (AADT) and variables describing roadway conditions (including presence of lighting, presence of left-turn/right-turn lane, lane width and shoulder width) were used as predictors.
A Univariate Poisson Lognormal (UPLN) model was estimated by crash type and severity for each highway facility, and the prediction results were compared with the MVPLN model based on the Average Predicted Mean Absolute Error (APMAE) statistic. A UPLN model for total crashes was also estimated to compare the coefficients of contributing factors with the models that estimate crashes by crash type and severity. The model coefficient estimates show that the signs of the coefficients for presence of left-turn lane, presence of right-turn lane, lane width and speed limit differ across crash type or severity counts, which suggests that estimating crashes by crash type or severity might be more helpful in identifying crash contributing factors. The standard errors of covariates in the MVPLN model are slightly lower than in the UPLN model when the covariates are statistically significant, and the crash counts by crash type and severity are significantly correlated. The model prediction comparisons illustrate that the MVPLN model outperforms the UPLN model in prediction accuracy. Therefore, when predicting crash counts by crash type and crash severity for rural two-lane highways, the MVPLN model should be considered to avoid estimation error and to account for the potential correlations among crash type counts and crash severity counts. Copyright © 2016 Elsevier Ltd. All rights reserved.
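The APMAE comparison can be illustrated with a small sketch. This is one plausible reading of the statistic (mean absolute error per category, averaged over crash type/severity categories); the paper's exact definition is not spelled out in the abstract:

```python
def apmae(observed_by_category, predicted_by_category):
    """Average Predicted Mean Absolute Error: the mean, over crash
    type/severity categories, of the mean absolute error between
    predicted and observed counts at each site."""
    maes = []
    for obs, pred in zip(observed_by_category, predicted_by_category):
        maes.append(sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs))
    return sum(maes) / len(maes)
```

A lower APMAE indicates better prediction accuracy, which is the sense in which the MVPLN model outperforms the UPLN model above.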
Smith, Andrew J; Abeyta, Andrew A; Hughes, Michael; Jones, Russell T
2015-03-01
This study tested a conceptual model merging anxiety buffer disruption and social-cognitive theories to predict persistent grief severity among students who lost a close friend, significant other, and/or professor/teacher in tragic university campus shootings. A regression-based path model tested posttraumatic stress (PTS) symptom severity 3 to 4 months postshooting (Time 1) as a predictor of grief severity 1 year postshootings (Time 2), both directly and indirectly through cognitive processes (self-efficacy and disrupted worldview). Results revealed a model that predicted 61% of the variance in Time 2 grief severity. Hypotheses were supported, demonstrating that Time 1 PTS severity indirectly, positively predicted Time 2 grief severity through undermining self-efficacy and more severely disrupting worldview. Findings and theoretical interpretation yield important insights for future research and clinical application. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Kesmarky, Klara; Delhumeau, Cecile; Zenobi, Marie; Walder, Bernhard
2017-07-15
The Glasgow Coma Scale (GCS) and the Abbreviated Injury Score of the head region (HAIS) are validated prognostic factors in traumatic brain injury (TBI). The aim of this study was to compare the prognostic performance of an alternative predictive model including motor GCS, pupillary reactivity, age, HAIS, and presence of multi-trauma for short-term mortality with a reference predictive model including motor GCS, pupillary reactivity, and age (IMPACT core model). A secondary analysis of a prospective epidemiological cohort study in Switzerland including patients after severe TBI (HAIS >3) with the outcome death at 14 days was performed. Performance of prediction, accuracy of discrimination (area under the receiver operating characteristic curve [AUROC]), calibration, and validity of the two predictive models were investigated. The cohort included 808 patients (median age, 56; interquartile range, 33-71), with a median GCS at hospital admission of 3 (3-14), abnormal pupillary reactivity in 29%, and a death rate of 29.7% at 14 days. The alternative predictive model had a higher accuracy of discrimination in predicting death at 14 days than the reference predictive model (AUROC 0.852, 95% confidence interval [CI] 0.824-0.880 vs. AUROC 0.826, 95% CI 0.795-0.857; p < 0.0001). The alternative predictive model had calibration equivalent to that of the reference predictive model (Hosmer-Lemeshow Chi2 8.52, p = 0.345 vs. Chi2 8.66, p = 0.372). The optimism-corrected AUROC for the alternative predictive model was 0.845. After severe TBI, a higher performance of prediction for short-term mortality was observed with the alternative predictive model compared with the reference predictive model.
Fei, Yang; Gao, Kun; Tu, Jianfeng; Wang, Wei; Zong, Guang-Quan; Li, Wei-Qin
2017-06-03
Acute pancreatitis (AP) remains a difficult problem in medical diagnosis and treatment, and early evaluation of severity and risk stratification in patients with AP is very important. Scoring systems such as the Acute Physiology and Chronic Health Evaluation II (APACHE-II), the computed tomography severity index (CTSI), Ranson's score and the bedside index of severity of AP (BISAP) have been used; nevertheless, these methods have shortcomings. The aim of this study was to construct a new model including intra-abdominal pressure (IAP) and body mass index (BMI) to evaluate the severity of AP. The study comprised two independent cohorts of patients with AP: a development set of 1073 patients from Jinling Hospital treated between January 2013 and October 2016, and a validation set of 326 patients from the 81st Hospital treated between January 2012 and December 2016. The association between risk factors and severity of AP was assessed by univariable analysis; the multivariable model was explored through stepwise selection regression. The change in IAP and BMI were combined to generate a regression equation as the new model. Statistical indexes were used to evaluate the predictive value of the new model. Univariable analysis confirmed that the change in IAP and BMI were significantly associated with severity of AP. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy of the new model for predicting severity of AP were 77.6%, 82.6%, 71.9%, 87.5% and 74.9%, respectively, in the development dataset. There were significant differences between the new model and the other scoring systems in these parameters (P < 0.05). In addition, a comparison of their areas under the receiver operating characteristic curves showed a statistically significant difference (P < 0.05). The same results were found in the validation dataset.
A new model based on IAP and BMI is more likely to predict the severity of AP accurately. Copyright © 2017. Published by Elsevier Inc.
A Severe Sepsis Mortality Prediction Model and Score for Use with Administrative Data
Ford, Dee W.; Goodwin, Andrew J.; Simpson, Annie N.; Johnson, Emily; Nadig, Nandita; Simpson, Kit N.
2016-01-01
Objective: Administrative data are used for research, quality improvement, and health policy in severe sepsis. However, there is no sepsis-specific tool applicable to administrative data with which to adjust for illness severity. Our objective was to develop, internally validate, and externally validate a severe sepsis mortality prediction model and an associated mortality prediction score. Design: Retrospective cohort study using 2012 administrative data from five US states. Three cohorts of patients with severe sepsis were created: 1) ICD-9-CM codes for severe sepsis/septic shock, 2) the 'Martin' approach, and 3) the 'Angus' approach. The model was developed and internally validated in the ICD-9-CM cohort and externally validated in the other cohorts. Integer point values for each predictor variable were generated to create a sepsis severity score. Setting: Acute care, non-federal hospitals in NY, MD, FL, MI, and WA. Subjects: Patients in one of three severe sepsis cohorts: 1) explicitly coded (n=108,448), 2) Martin cohort (n=139,094), and 3) Angus cohort (n=523,637). Interventions: None. Measurements and Main Results: Maximum likelihood estimation logistic regression was used to develop a predictive model for in-hospital mortality. Model calibration and discrimination were assessed via Hosmer-Lemeshow goodness-of-fit (GOF) and C-statistics, respectively. The primary cohort was subset into risk deciles and observed versus predicted mortality was plotted. GOF demonstrated p>0.05 for each cohort, demonstrating sound calibration. The C-statistic ranged from a low of 0.709 (sepsis severity score) to a high of 0.838 (Angus cohort), suggesting good to excellent model discrimination. Comparison of observed versus expected mortality was robust, although accuracy decreased in the highest risk decile. Conclusions: Our sepsis severity model and score is a tool that provides reliable risk adjustment for administrative data. PMID:26496452
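Deriving integer point values from logistic regression coefficients is typically done by scaling each coefficient against a base coefficient and rounding. The sketch below shows that generic recipe; the paper's exact scaling rule is not given in the abstract, and the predictor names and coefficient values are hypothetical:

```python
def points_from_coefficients(coefs, base=None):
    """Turn logistic regression coefficients into integer score points by
    scaling against a base coefficient and rounding. `base` defaults to
    the smallest nonzero absolute coefficient, which then scores 1 point."""
    if base is None:
        base = min(abs(c) for c in coefs.values() if c != 0)
    return {name: round(c / base) for name, c in coefs.items()}
```

A patient's score is then the sum of points for the predictors present, which approximates the model's linear predictor up to a scale factor.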
Modelling obesity trends in Australia: unravelling the past and predicting the future.
Hayes, A J; Lung, T W C; Bauman, A; Howard, K
2017-01-01
Modelling is increasingly being used to predict the epidemiology of obesity progression and its consequences. The aims of this study were: (a) to present and validate a model for prediction of obesity among Australian adults and (b) to use the model to project the prevalence of obesity and severe obesity by 2025. Individual-level simulation was combined with survey estimation techniques to model the changing population body mass index (BMI) distribution over time. The model input population was derived from a nationally representative survey in 1995, representing over 12 million adults. Simulations were run for 30 years. The model was validated retrospectively and then used to predict obesity and severe obesity by 2025 among different aged cohorts and at a whole-population level. The changing BMI distribution over time was well predicted by the model, and the projected prevalence of weight status groups agreed with population-level data in 2008, 2012 and 2014. The model predicts more growth in obesity among younger than older adult cohorts. Projections at a whole-population level were that healthy weight will decline, overweight will remain steady, but obesity and severe obesity prevalence will continue to increase beyond 2016. Adult obesity prevalence was projected to increase from 19% in 1995 to 35% by 2025. Severe obesity (BMI>35), which was only around 5% in 1995, was projected to reach 13% by 2025, two to three times the 1995 level. The projected rise in obesity and severe obesity will have more substantial cost and healthcare system implications than in previous decades. Having a robust epidemiological model is key to predicting these long-term costs and health outcomes into the future.
Luque, M J; Tapia, J L; Villarroel, L; Marshall, G; Musante, G; Carlo, W; Kattan, J
2014-01-01
To develop a risk prediction model for severe intraventricular hemorrhage (IVH) in very low birth weight infants (VLBWI), prospectively collected data of infants with birth weight 500 to 1249 g born between 2001 and 2010 in centers from the Neocosur Network were used. A forward stepwise logistic regression model was employed. The model was tested in the 2011 cohort and then applied to the population of VLBWI that received prophylactic indomethacin to analyze its effect on the risk of severe IVH. Data from 6538 VLBWI were analyzed. The area under the ROC curve for the model was 0.79, and 0.76 when tested in the 2011 cohort. The prophylactic indomethacin group had a lower incidence of severe IVH, especially in the highest-risk groups. A model for early severe IVH prediction was developed and tested in our population. Prophylactic indomethacin was associated with a lower risk-adjusted incidence of severe IVH.
Comparison of four statistical and machine learning methods for crash severity prediction.
Iranitalab, Amirfarrokh; Khattak, Aemal
2017-11-01
Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparing the performance of four statistical and machine learning methods, namely Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash-costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods, namely K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States were obtained, and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated on the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate, and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM showed the next best performance, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost the exact opposite results compared with the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
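The crash-costs-based accuracy idea can be sketched as a cost-weighted error: each misclassification is penalised by the cost gap between the true and predicted severity levels, so confusing a fatal crash with a property-damage-only crash costs far more than confusing adjacent levels. The unit costs below are hypothetical placeholders, not the paper's Nebraska crash cost figures:

```python
# Hypothetical unit costs per severity level (placeholders).
CRASH_COST = {"PDO": 1.0, "injury": 10.0, "fatal": 100.0}

def cost_weighted_error(y_true, y_pred):
    """Mean cost gap between true and predicted severity levels. A plain
    correct-prediction rate treats all errors equally; this measure does
    not."""
    total = sum(abs(CRASH_COST[t] - CRASH_COST[p])
                for t, p in zip(y_true, y_pred))
    return total / len(y_true)
```

Under such a measure, a classifier that is accurate on frequent low-severity crashes but misses fatal crashes scores poorly, which is the paper's motivation for not relying on overall correct prediction rate alone.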
NASA Astrophysics Data System (ADS)
Xu, Lei; Chen, Nengcheng; Zhang, Xiang
2018-02-01
Drought is an extreme natural disaster that can lead to huge socioeconomic losses. Drought prediction months ahead is helpful for early drought warning and preparations. In this study, we developed a statistical model, two weighted dynamic models and a statistical-dynamic (hybrid) model for 1-6 month lead drought prediction in China. Specifically, the statistical component weights climate signals using support vector regression (SVR); the dynamic components consist of the ensemble mean (EM) and Bayesian model averaging (BMA) of the North American Multi-Model Ensemble (NMME) climate models; and the hybrid part combines the statistical and dynamic components by assigning weights based on their historical performance. The results indicate that the statistical and hybrid models show better rainfall predictions than the NMME-EM and NMME-BMA models, which have good predictability only in southern China. In the 2011 China winter-spring drought event, the statistical model predicted the spatial extent and severity of drought nationwide well, although the severity was underestimated in the mid-lower reaches of the Yangtze River (MLRYR) region. The NMME-EM and NMME-BMA models largely overestimated rainfall in northern and western China in the 2011 drought. In the 2013 China summer drought, the NMME-EM model forecasted the drought extent and severity in eastern China well, while the statistical and hybrid models falsely detected a negative precipitation anomaly (NPA) in some areas. Model ensembles, such as multiple statistical approaches, multiple dynamic models or multiple hybrid models, are highlighted for drought prediction. These conclusions may be helpful for drought prediction and early drought warning in China.
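A performance-based weighting of the statistical and dynamic components can be sketched with inverse-error weights. This illustrates the idea of weighting by historical skill; the paper's exact weighting scheme is not reproduced here:

```python
def hybrid_forecast(stat_pred, dyn_pred, stat_hist_err, dyn_hist_err):
    """Combine a statistical and a dynamic rainfall forecast with weights
    inversely proportional to each component's historical error, so the
    historically more skilful component dominates."""
    w_stat = 1.0 / stat_hist_err
    w_dyn = 1.0 / dyn_hist_err
    return (w_stat * stat_pred + w_dyn * dyn_pred) / (w_stat + w_dyn)
```

With equal historical errors the hybrid reduces to a simple average; as one component's error shrinks, the hybrid converges to that component's forecast.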
Role of genetic variation in docetaxel-induced neutropenia and pharmacokinetics.
Nieuweboer, A J M; Smid, M; de Graan, A-J M; Elbouazzaoui, S; de Bruijn, P; Eskens, F A L M; Hamberg, P; Martens, J W M; Sparreboom, A; de Wit, R; van Schaik, R H N; Mathijssen, R H J
2016-11-01
Docetaxel is used for the treatment of several solid malignancies. In this study, we aimed to predict docetaxel clearance and docetaxel-induced neutropenia by developing several genetic models. Pharmacokinetic data and absolute neutrophil counts (ANCs) of 213 docetaxel-treated cancer patients were collected. Patients were genotyped for 1936 single nucleotide polymorphisms (SNPs) in 225 genes using the drug-metabolizing enzymes and transporters platform and thereafter split into two cohorts. The combination of SNPs that best predicted severe neutropenia or low clearance was selected in one cohort and validated in the other. Patients with severe neutropenia had lower docetaxel clearance than patients with ANCs in the normal range (P=0.01). Severe neutropenia was predicted with 70% sensitivity. True low clearance (1 s.d.
A comparison of the Injury Severity Score and the Trauma Mortality Prediction Model.
Cook, Alan; Weddle, Jo; Baker, Susan; Hosmer, David; Glance, Laurent; Friedman, Lee; Osler, Turner
2014-01-01
Performance benchmarking requires accurate measurement of injury severity. Despite its shortcomings, the Injury Severity Score (ISS) remains the industry standard 40 years after its creation. A new severity measure, the Trauma Mortality Prediction Model (TMPM), uses either the Abbreviated Injury Scale (AIS) or DRG International Classification of Diseases-9th Rev. (ICD-9) lexicons and may better quantify injury severity compared with ISS. We compared the performance of TMPM with ISS and other measures of injury severity in a single cohort of patients. We included 337,359 patient records with injuries reliably described in both the AIS and the ICD-9 lexicons from the National Trauma Data Bank. Five injury severity measures (ISS, maximum AIS score, New Injury Severity Score [NISS], ICD-9-Based Injury Severity Score [ICISS], TMPM) were computed using either the AIS or ICD-9 codes. These measures were compared for discrimination (area under the receiver operating characteristic curve), an estimate of proximity to a model that perfectly predicts the outcome (Akaike information criterion), and model calibration curves. TMPM demonstrated superior receiver operating characteristic curve, Akaike information criterion, and calibration using either the AIS or ICD-9 lexicons. Calibration plots demonstrate the monotonic characteristics of the TMPM models contrasted by the nonmonotonic features of the other prediction models. Severity measures were more accurate with the AIS lexicon rather than ICD-9. NISS proved superior to ISS in either lexicon. Since NISS is simpler to compute, it should replace ISS when a quick estimate of injury severity is required for AIS-coded injuries. Calibration curves suggest that the nonmonotonic nature of ISS may undermine its performance. TMPM demonstrated superior overall mortality prediction compared with all other models including ISS whether the AIS or ICD-9 lexicons were used. 
Because TMPM provides an absolute probability of death, it may allow clinicians to communicate more precisely with one another and with patients and families. Diagnostic study, level I; prognostic study, level II.
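ISS and NISS, as compared above, follow well-known formulas: ISS sums the squares of the highest AIS score in each of the three most severely injured body regions, while NISS uses the three highest AIS scores regardless of region. A minimal sketch:

```python
def iss(region_ais):
    """Injury Severity Score: sum of squares of the highest AIS score in
    each of the three most severely injured body regions; any AIS-6
    injury conventionally sets ISS to 75."""
    worst_per_region = sorted((max(scores) for scores in region_ais.values()),
                              reverse=True)
    if worst_per_region and worst_per_region[0] == 6:
        return 75
    return sum(a * a for a in worst_per_region[:3])

def niss(region_ais):
    """New Injury Severity Score: sum of squares of the three highest AIS
    scores regardless of body region."""
    all_scores = sorted((a for scores in region_ais.values() for a in scores),
                        reverse=True)
    return sum(a * a for a in all_scores[:3])
```

For a patient with two head injuries (AIS 5 and 4) and a chest injury (AIS 3), ISS is 25 + 9 = 34 but NISS is 25 + 16 + 9 = 50, illustrating how NISS captures multiple severe injuries in one region that ISS ignores.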
Bujarski, Spencer; Jentsch, J David; Roche, Daniel J O; Ramchandani, Vijay A; Miotto, Karen; Ray, Lara A
2018-05-08
The Allostatic Model proposes that Alcohol Use Disorder (AUD) is associated with a transition in the motivational structure of alcohol drinking: from positive reinforcement in early-stage drinking to negative reinforcement in late-stage dependence. However, direct empirical support for this preclinical model from human experiments is limited. This study tests predictions derived from the Allostatic Model in humans. Specifically, it tested whether alcohol use severity (1) independently predicts subjective responses to alcohol (SR; comprising stimulation/hedonia, negative affect, sedation and craving domains) and alcohol self-administration, and (2) moderates associations between domains of SR and alcohol self-administration. Heavy-drinking participants ranging in severity of alcohol use and problems (N = 67) completed an intravenous alcohol administration paradigm combining an alcohol challenge (target BrAC = 60 mg%) with progressive-ratio self-administration. Alcohol use severity was associated with greater baseline negative affect, sedation, and craving but did not predict changes in any SR domain during the alcohol challenge. Alcohol use severity also predicted greater self-administration. Craving during the alcohol challenge strongly predicted self-administration, and sedation predicted lower self-administration. Neither stimulation nor negative affect predicted self-administration. This study represents a novel approach to translating preclinical neuroscientific theories to the human laboratory. As expected, craving predicted self-administration and sedation was protective. Contrary to the predictions of the Allostatic Model, however, these results were inconsistent with a transition from positively to negatively reinforced alcohol consumption in severe AUD. Future studies that assess negative reinforcement in the context of an acute stressor are warranted.
Rein, David B
2005-01-01
Objective. To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources. Secondary data created from the administrative claims of all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design. A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings. The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R2 from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions. Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
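The stratified two-part structure described above can be sketched in a few lines. This is an illustrative toy with synthetic data and invented class probabilities and costs, not the paper's fitted Georgia Medicaid model: part 1 models the probability of any inpatient cost, part 2 the cost conditional on use, each evaluated within a severity class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic claims data: 1000 children with a latent severity class
# (0 = healthy majority, 1-3 = illness classes of increasing severity).
n = 1000
severity = rng.choice([0, 1, 2, 3], n, p=[0.7, 0.15, 0.1, 0.05])

# Part 1: probability of incurring any inpatient cost rises with severity
# (a logistic model in spirit; here a simple linear toy rule).
p_any = 0.02 + 0.15 * severity
has_cost = rng.random(n) < p_any

# Part 2: cost conditional on use, larger and more variable in sicker classes.
base_cost = np.array([500.0, 2000.0, 8000.0, 25000.0])
cost = np.where(has_cost, base_cost[severity] * rng.lognormal(0, 0.5, n), 0.0)

# Stratified prediction: E[cost | class] = P(use | class) * E[cost | use, class]
for k in range(4):
    mask = severity == k
    p_hat = has_cost[mask].mean()
    pos = cost[mask][has_cost[mask]]
    mean_pos = pos.mean() if pos.size else 0.0
    print(f"class {k}: P(use)={p_hat:.3f}, E[cost|use]={mean_pos:,.0f}, "
          f"E[cost]={p_hat * mean_pos:,.0f}")
```

Fitting the two parts separately within each mixture-assigned class is what lets the stratified model capture both the higher utilization probability and the heavier cost tail of the sicker classes.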
Røe, Cecilie; Skandsen, Toril; Manskow, Unn; Ader, Tiina; Anke, Audny
2015-01-01
The aim of the present study was to evaluate mortality and functional outcome in old and very old patients with severe traumatic brain injury (TBI) and to compare them with the outcomes predicted by the internet-based CRASH (Corticosteroid Randomization After Significant Head injury) model from the Medical Research Council (MRC). Methods. Prospective national multicenter study including patients with severe TBI aged ≥65 years. Predicted mortality and outcome were calculated from clinical information (CRASH basic: age, GCS score, and pupil reactivity to light) and with additional CT findings (CRASH CT). Observed 14-day mortality and favorable/unfavorable outcome according to the Glasgow Outcome Scale at one year were compared with the outcomes predicted by the CRASH models. Results. 97 patients, mean age 75 (SD 7) years, 64% men, were included. Two patients were lost to follow-up; 48 died within 14 days. The predicted versus observed odds ratio (OR) for mortality was 2.65. Unfavorable outcome (GOSE < 5) was observed at one-year follow-up in 72% of patients. The CRASH models predicted unfavorable outcome in all patients. Conclusion. The CRASH model overestimated mortality and unfavorable outcome in old and very old Norwegian patients with severe TBI. PMID:26688614
NASA Astrophysics Data System (ADS)
Liu, Z.; LU, G.; He, H.; Wu, Z.; He, J.
2017-12-01
Reliable drought prediction is fundamental for seasonal water management. Because drought development is closely related to the spatio-temporal evolution of large-scale circulation patterns, we develop a conceptual prediction model of seasonal drought processes based on atmospheric/oceanic Standardized Anomalies (SA). The model is essentially a synchronous stepwise regression between 90-day-accumulated atmospheric/oceanic SA-based predictors and the 3-month SPI updated daily (SPI3). It is forced with forecasted atmospheric and oceanic variables retrieved from seasonal climate forecast systems, and it can make seamless drought predictions for operational use after year-to-year calibration. Four severe seasonal regional drought processes in China were simulated and predicted using, respectively, the NCEP/NCAR reanalysis datasets and the NCEP Climate Forecast System Version 2 (CFSv2) operational forecast datasets. With real-time correction for operational application, model performance during four recent severe regional drought events in China showed that the model predicts drought development well but is weaker at predicting severity. In addition to underestimating drought peaks, the model may mispredict drought relief as drought recession; this weakness may be associated with the precipitation-causing weather patterns during drought relief. Initial analysis of the predicted 90-day prospective SPI3 curves shows that the 2009/2010 drought in Southwest China and the 2014 drought in North China can be simulated and predicted well even at lead times of 1-75 days, whereas 1-45 days appears to be a feasible and acceptable lead time for the 2011 droughts in Southwest China and East China, after which the simulated and predicted developments clearly diverge.
Rios, Anthony; Kavuluru, Ramakanth
2017-11-01
The CEGS N-GRID 2016 Shared Task in Clinical Natural Language Processing (NLP) provided a set of 1000 neuropsychiatric notes to participants as part of a competition to predict psychiatric symptom severity scores. This paper summarizes our methods, results, and experiences based on our participation in the second track of the shared task. Classical methods of text classification usually fall into one of three problem types: binary, multi-class, and multi-label classification. In this effort, we study ordinal regression problems with text data where misclassifications are penalized differently based on how far apart the ground truth and model predictions are on the ordinal scale. Specifically, we present our entries (methods and results) in the N-GRID shared task in predicting research domain criteria (RDoC) positive valence ordinal symptom severity scores (absent, mild, moderate, and severe) from psychiatric notes. We propose a novel convolutional neural network (CNN) model designed to handle ordinal regression tasks on psychiatric notes. Broadly speaking, our model combines an ordinal loss function, a CNN, and conventional feature engineering (wide features) into a single model which is learned end-to-end. Given interpretability is an important concern with nonlinear models, we apply a recent approach called locally interpretable model-agnostic explanation (LIME) to identify important words that lead to instance specific predictions. Our best model entered into the shared task placed third among 24 teams and scored a macro mean absolute error (MMAE) based normalized score (100·(1-MMAE)) of 83.86. Since the competition, we improved our score (using basic ensembling) to 85.55, comparable with the winning shared task entry. Applying LIME to model predictions, we demonstrate the feasibility of instance specific prediction interpretation by identifying words that led to a particular decision. 
In this paper, we present a method that successfully uses wide features and an ordinal loss function applied to convolutional neural networks for ordinal text classification specifically in predicting psychiatric symptom severity scores. Our approach leads to excellent performance on the N-GRID shared task and is also amenable to interpretability using existing model-agnostic approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
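The ordinal-scoring ideas above can be made concrete with a small sketch: the macro mean absolute error (MMAE) behind the shared task's 100·(1-MMAE) score, plus one simple distance-weighted ordinal penalty. The penalty shown is a generic illustration of penalizing predictions by their distance on the ordinal scale, not the exact loss used in the authors' CNN.

```python
import numpy as np

# Ordinal severity scale: absent=0, mild=1, moderate=2, severe=3.

def mmae(y_true, y_pred, n_classes=4):
    """Macro mean absolute error: average the MAE within each true class,
    so rare severity levels weigh as much as common ones."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.abs(y_pred[y_true == c] - c).mean()
                 for c in range(n_classes) if (y_true == c).any()]
    return float(np.mean(per_class))

def ordinal_penalty_loss(probs, y_true):
    """Expected ordinal distance: each class probability is weighted by its
    distance from the true label, so 'severe' predicted as 'absent' costs
    more than 'severe' predicted as 'moderate'."""
    probs = np.asarray(probs, dtype=float)
    classes = np.arange(probs.shape[1])
    dist = np.abs(classes[None, :] - np.asarray(y_true)[:, None])
    return float(np.mean(np.sum(dist * probs, axis=1)))

y_true = [0, 1, 2, 3, 3]
y_pred = [0, 1, 3, 3, 2]
score = 100 * (1 - mmae(y_true, y_pred))  # shared-task normalized score
print(f"MMAE-based score: {score:.2f}")   # -> 62.50 for this toy example

demo_probs = [[0.7, 0.2, 0.1, 0.0],
              [0.1, 0.2, 0.3, 0.4]]
print(f"ordinal penalty: {ordinal_penalty_loss(demo_probs, [0, 3]):.2f}")
```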
Kim, Ji-Hoon; Kang, Wee-Soo; Yun, Sung-Chul
2014-06-01
A population model of bacterial spot caused by Xanthomonas campestris pv. vesicatoria on hot pepper was developed to predict the primary disease infection date. The model estimated the pathogen population on the surface and within the leaf of the host based on the wetness period and temperature. For successful infection, at least 5,000 cells/ml of the bacterial population were required. Also, wind and rain were necessary according to regression analyses of the monitored data. In the model, bacterial spot is initiated when the pathogen population exceeds 10^15 cells/g within the leaf. The developed model was validated using 94 assessed samples from 2000 to 2007 obtained from monitored fields. Based on the validation study, the predicted initial infection dates varied based on the year rather than the location. Differences in initial infection dates between the model predictions and the monitored data in the field were minimal. For example, predicted infection dates for 7 locations were within the same month as the actual infection dates, 11 locations were within 1 month of the actual infection, and only 3 locations were more than 2 months apart from the actual infection. The predicted infection dates were mapped from 2009 to 2012; 2011 was the most severe year. Although the model was not sensitive enough to predict disease severity of less than 0.1% in the field, our model predicted bacterial spot severity of 1% or more. Therefore, this model can be applied in the field to determine when bacterial spot control is required. PMID:25288995
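The threshold logic described above (at least 5,000 cells/ml on the leaf surface, wind and rain required for entry, disease initiation at 10^15 cells/g within the leaf) can be sketched as a toy daily simulation. The growth multipliers and weather series below are invented for illustration and are not the calibrated model.

```python
# Toy daily infection model in the spirit of the abstract: the surface
# population grows under favorable wetness/temperature, bacteria enter the
# leaf only when the surface population, wind, and rain thresholds are all
# met, and disease initiates at 10^15 cells/g. All rates are invented.

SURFACE_THRESHOLD = 5_000    # cells/ml needed before infection is possible
IN_LEAF_THRESHOLD = 1e15     # cells/g at which bacterial spot initiates

def first_infection_day(weather):
    surface, in_leaf = 100.0, 0.0
    for day, w in enumerate(weather, start=1):
        # Favorable wetness period and temperature multiply the surface
        # population (invented tenfold daily growth).
        if w["wet_hours"] >= 8 and 20 <= w["temp_c"] <= 30:
            surface *= 10
        # Wind-driven rain moves bacteria into the leaf once the surface
        # population is large enough (invented thousandfold amplification).
        if surface >= SURFACE_THRESHOLD and w["wind"] and w["rain"]:
            in_leaf = max(in_leaf, 1.0) * 1e3
        if in_leaf >= IN_LEAF_THRESHOLD:
            return day
    return None  # no predicted infection during the simulated window

weather = [{"wet_hours": 10, "temp_c": 25, "wind": True, "rain": True}] * 10
print(first_infection_day(weather))  # -> 6 with these invented rates
```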
Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive
NASA Technical Reports Server (NTRS)
Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)
2001-01-01
Recently a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component. For many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predictions of bondline failures are required. In the past, cumulative damage failure models have been developed. These models have ranged from very simple to very complex. This paper documents the generation and evaluation of some of the most simple linear damage accumulation tensile failure models for an epoxy adhesive. This paper shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
Predicting severe injury using vehicle telemetry data.
Ayoung-Chee, Patricia; Mack, Christopher D; Kaufman, Robert; Bulger, Eileen
2013-01-01
In 2010, the National Highway Traffic Safety Administration standardized collision data collected by event data recorders, which may help determine appropriate emergency medical service (EMS) response. Previous models (e.g., General Motors) predict severe injury (Injury Severity Score [ISS] > 15) using occupant demographics and collision data. Occupant information is not automatically available, and 12% of calls from advanced automatic collision notification providers are unanswered. To better inform EMS triage, our goal was to create a predictive model only using vehicle collision data. Using the National Automotive Sampling System Crashworthiness Data System data set, we included front-seat occupants in late-model vehicles (2000 and later) in nonrollover and rollover crashes in years 2000 to 2010. Telematic (change in velocity, direction of force, seat belt use, vehicle type and curb weight, as well as multiple impact) and nontelematic variables (maximum intrusion, narrow impact, and passenger ejection) were included. Missing data were multiply imputed. The University of Washington model was tested to predict severe injury before application of guidelines (Step 0) and for occupants who did not meet Steps 1 and 2 criteria (Step 3) of the Centers for Disease Control and Prevention Field Triage Guidelines. A probability threshold of 20% was chosen in accordance with Centers for Disease Control and Prevention recommendations. There were 28,633 crashes, involving 33,956 vehicles and 52,033 occupants, of whom 9.9% had severe injury. At Step 0, the University of Washington model sensitivity was 40.0% and positive predictive value (PPV) was 20.7%. At Step 3, the sensitivity was 32.3% and PPV was 10.1%. Model analysis excluding nontelematic variables decreased sensitivity and PPV. The sensitivity of the re-created General Motors model was 38.5% at Step 0 and 28.1% at Step 3. 
We designed a model using only vehicle collision data that was predictive of severe injury at collision notification and in the field and was comparable with an existing model. These models demonstrate the potential use of advanced automatic collision notification in planning EMS response. Prognostic study, level II.
Claims-based risk model for first severe COPD exacerbation.
Stanford, Richard H; Nag, Arpita; Mapel, Douglas W; Lee, Todd A; Rosiello, Richard; Schatz, Michael; Vekeman, Francis; Gauthier-Loiselle, Marjolaine; Merrigan, J F Philip; Duh, Mei Sheng
2018-02-01
To develop and validate a predictive model for first severe chronic obstructive pulmonary disease (COPD) exacerbation using health insurance claims data and to validate the risk measure of controller medication to total COPD treatment (controller and rescue) ratio (CTR). A predictive model was developed and validated in 2 managed care databases: Truven Health MarketScan database and Reliant Medical Group database. This secondary analysis assessed risk factors, including CTR, during the baseline period (Year 1) to predict risk of severe exacerbation in the at-risk period (Year 2). Patients with COPD who were 40 years or older and who had at least 1 COPD medication dispensed during the year following COPD diagnosis were included. Subjects with severe exacerbations in the baseline year were excluded. Risk factors in the baseline period were included as potential predictors in multivariate analysis. Performance was evaluated using C-statistics. The analysis included 223,824 patients. The greatest risk factors for first severe exacerbation were advanced age, chronic oxygen therapy usage, COPD diagnosis type, dispensing of 4 or more canisters of rescue medication, and having 2 or more moderate exacerbations. A CTR of 0.3 or greater was associated with a 14% lower risk of severe exacerbation. The model performed well with C-statistics, ranging from 0.711 to 0.714. This claims-based risk model can predict the likelihood of first severe COPD exacerbation. The CTR could also potentially be used to target populations at greatest risk for severe exacerbations. This could be relevant for providers and payers in approaches to prevent severe exacerbations and reduce costs.
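The controller-to-total-treatment ratio (CTR) risk measure above is easy to state in code. A minimal sketch with invented patient records, using the 0.3 cut-point reported in the abstract:

```python
# CTR = controller fills / (controller fills + rescue fills).
# Per the abstract, a CTR of 0.3 or greater was associated with lower risk
# of a first severe COPD exacerbation. Patient records are invented.

def ctr(controller_fills, rescue_fills):
    total = controller_fills + rescue_fills
    return controller_fills / total if total else 0.0

patients = [
    {"id": "A", "controller": 6, "rescue": 4},   # CTR 0.6
    {"id": "B", "controller": 1, "rescue": 9},   # CTR 0.1
]
for p in patients:
    r = ctr(p["controller"], p["rescue"])
    flag = "lower risk" if r >= 0.3 else "higher risk"
    print(f'{p["id"]}: CTR={r:.2f} ({flag})')
```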
NASA Technical Reports Server (NTRS)
Ashrafi, S.
1991-01-01
K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination is explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The Chaotic prediction model (SA) uses the recently developed techniques of nonlinear dynamics to predict solar activity. It can be used to predict activity only up to the horizon. In theory, the chaotic prediction should be several orders of magnitude better than statistical predictions up to that horizon; beyond the horizon, chaotic predictions would theoretically be just as good as statistical predictions. Therefore, chaos theory puts a fundamental limit on predictability.
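The horizon limit mentioned above follows from the standard Lyapunov-exponent estimate t ≈ (1/λ) ln(Δ/δ0): prediction time grows only logarithmically as the initial-condition error δ0 shrinks, which is why chaos imposes a fundamental predictability limit. A minimal numerical illustration with invented values:

```python
import math

def horizon_years(lyapunov_per_year, initial_error, tolerated_error):
    """Predictability horizon t = (1/lambda) * ln(delta / delta0).
    All arguments here are illustrative, not fitted solar parameters."""
    return math.log(tolerated_error / initial_error) / lyapunov_per_year

# With an assumed lambda of 0.5/yr, improving measurement accuracy tenfold
# extends the horizon by only ln(10)/lambda ~ 4.6 years.
print(f"{horizon_years(0.5, 0.01, 1.0):.1f} years")
```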
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majumdar, S.
1997-02-01
Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.
Evaluation of free modeling targets in CASP11 and ROLL.
Kinch, Lisa N; Li, Wenlin; Monastyrskyy, Bohdan; Kryshtafovych, Andriy; Grishin, Nick V
2016-09-01
We present an assessment of 'template-free modeling' (FM) in CASP11 and ROLL. Community-wide server performance suggested that the use of automated scores similar to previous CASPs would provide a good system for evaluating performance, even in the absence of comprehensive manual assessment. The CASP11 FM category included several outstanding examples, including successful prediction by the Baker group of a 256-residue target (T0806-D1) that lacked sequence similarity to any existing template. The top server model prediction by Zhang's Quark, which was apparently selected and refined by several manual groups, encompassed the entire fold of target T0837-D1. Methods from the same two groups tended to dominate overall CASP11 FM and ROLL rankings. Comparison of top FM predictions with those from the previous CASP experiment revealed progress in the category, particularly reflected in high prediction accuracy for larger protein domains. FM prediction models for two cases were sufficient to provide functional insights that were otherwise not obtainable by traditional sequence analysis methods. Importantly, CASP11 abstracts revealed that alignment-based contact prediction methods brought about much of the CASP11 progress, producing both of the functionally relevant models as well as several of the other outstanding structure predictions. These methodological advances enabled de novo modeling of much larger domain structures than was previously possible and allowed prediction of functional sites. Proteins 2016; 84(Suppl 1):51-66. © 2015 Wiley Periodicals, Inc.
Airport Noise Prediction Model -- MOD 7
DOT National Transportation Integrated Search
1978-07-01
The MOD 7 Airport Noise Prediction Model is fully operational. The language used is Fortran, and it has been run on several different computer systems. Its capabilities include prediction of noise levels for single parameter changes, for multiple cha...
[Severity classification of chronic obstructive pulmonary disease based on deep learning].
Ying, Jun; Yang, Ceyuan; Li, Quanzheng; Xue, Wanguo; Li, Tanshi; Cao, Wenzhe
2017-12-01
In this paper, a deep learning method is proposed to build an automatic algorithm for classifying the severity of chronic obstructive pulmonary disease. Large-sample clinical data were used as input features and analyzed for their weights in classification. Through feature selection, model training, parameter optimization, and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria proposed by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We achieved accuracy over 90% in prediction for two standardized versions of the severity criteria, issued in 2007 and 2011, respectively. Moreover, we also obtained the contribution ranking of the input features by analyzing the model coefficient matrix and confirmed a degree of agreement between the most contributive input features and clinical diagnostic knowledge. This result supports the validity of the deep belief network model. This study provides an effective solution for applying deep learning to automatic diagnostic decision making.
Farney, Robert J.; Walker, Brandon S.; Farney, Robert M.; Snow, Gregory L.; Walker, James M.
2011-01-01
Background: Various models and questionnaires have been developed for screening specific populations for obstructive sleep apnea (OSA) as defined by the apnea/hypopnea index (AHI); however, almost every method is based upon dichotomizing a population, and none function ideally. We evaluated the possibility of using the STOP-Bang model (SBM) to classify severity of OSA into 4 categories ranging from none to severe. Methods: Anthropomorphic data and the presence of snoring, tiredness/sleepiness, observed apneas, and hypertension were collected from 1426 patients who underwent diagnostic polysomnography. Questionnaire data for each patient was converted to the STOP-Bang equivalent with an ordinal rating of 0 to 8. Proportional odds logistic regression analysis was conducted to predict severity of sleep apnea based upon the AHI: none (AHI < 5/h), mild (AHI ≥ 5 to < 15/h), moderate (≥ 15 to < 30/h), and severe (AHI ≥ 30/h). Results: Linear, curvilinear, and weighted models (R2 = 0.245, 0.251, and 0.269, respectively) were developed that predicted AHI severity. The linear model showed a progressive increase in the probability of severe (4.4% to 81.9%) and progressive decrease in the probability of none (52.5% to 1.1%). The probability of mild or moderate OSA initially increased from 32.9% and 10.3% respectively (SBM score 0) to 39.3% (SBM score 2) and 31.8% (SBM score 4), after which there was a progressive decrease in probabilities as more patients fell into the severe category. Conclusions: The STOP-Bang model may be useful to categorize OSA severity, triage patients for diagnostic evaluation or exclude from harm. Citation: Farney RJ; Walker BS; Farney RM; Snow GL; Walker JM. The STOP-Bang equivalent model and prediction of severity of obstructive sleep apnea: relation to polysomnographic measurements of the apnea/hypopnea index. J Clin Sleep Med 2011;7(5):459-465. PMID:22003340
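The proportional odds (cumulative logit) structure used in the STOP-Bang study above can be sketched directly: one slope for the ordinal STOP-Bang score and three cut-points separating the four ordered AHI categories. The coefficients below are invented for illustration and are not the published fit.

```python
import math

# Invented proportional-odds parameters: three cut-points between the
# ordered categories none | mild | moderate | severe, and one slope BETA
# for the effect of each additional STOP-Bang point.
CUTS = [-0.5, 1.0, 2.5]
BETA = 0.6

def category_probs(score):
    """Return [P(none), P(mild), P(moderate), P(severe)] for a SBM score."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    # Cumulative probabilities P(Y <= k) under the proportional-odds model.
    cum = [logistic(c - BETA * score) for c in CUTS] + [1.0]
    # Differencing the cumulative curve yields the category probabilities.
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, 4)]

for s in (0, 4, 8):
    p = category_probs(s)
    print(f"SBM={s}: " + ", ".join(f"{x:.2f}" for x in p))
```

As in the study's linear model, the probability of the severe category rises monotonically with the score while the probability of no OSA falls, with the middle categories peaking at intermediate scores.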
COMPARING MID-INFRARED GLOBULAR CLUSTER COLORS WITH POPULATION SYNTHESIS MODELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barmby, P.; Jalilian, F. F.
2012-04-15
Several population synthesis models now predict integrated colors of simple stellar populations in the mid-infrared bands. To date, the models have not been extensively tested in this wavelength range. In a comparison of the predictions of several recent population synthesis models, the integrated colors are found to cover approximately the same range but to disagree in detail, for example, on the effects of metallicity. To test against observational data, globular clusters (GCs) are used as the closest objects to idealized groups of stars with a single age and single metallicity. Using recent mass estimates, we have compiled a sample of massive, old GCs in M31 which contain enough stars to guard against the stochastic effects of small-number statistics, and measured their integrated colors in the Spitzer/IRAC bands. Comparison of the cluster photometry in the IRAC bands with the model predictions shows that the models reproduce the cluster colors reasonably well, except for a small (not statistically significant) offset in [4.5] - [5.8]. In this color, models without circumstellar dust emission predict bluer values than are observed. Model predictions of colors formed from the V band and the IRAC 3.6 and 4.5 μm bands are redder than the observed data at high metallicities and we discuss several possible explanations. In agreement with model predictions, V - [3.6] and V - [4.5] colors are found to have metallicity sensitivity similar to or slightly better than V - Ks.
Lin, Haiqun; Williams, Kyle A.; Katsovich, Liliya; Findley, Diane B.; Grantz, Heidi; Lombroso, Paul J.; King, Robert A.; Bessen, Debra E.; Johnson, Dwight; Kaplan, Edward L.; Landeros-Weisenberger, Angeli; Zhang, Heping; Leckman, James F.
2009-01-01
Background: One goal of this prospective longitudinal study was to identify new group A beta hemolytic streptococcal (GABHS) infections in children and adolescents with Tourette syndrome (TS) and/or obsessive-compulsive disorder (OCD) compared to healthy control subjects. We then examined the power of GABHS infections and measures of psychosocial stress to predict future tic, obsessive-compulsive (OC), and depressive symptom severity. Methods: Consecutive ratings of tic, OC and depressive symptom severity were obtained for 45 cases and 41 matched control subjects over a two-year period. Clinical raters were blinded to the results of laboratory tests. Laboratory personnel were blinded to case or control status and clinical ratings. Structural equation modeling for unbalanced repeated measures was used to assess the sequence of new GABHS infections and psychosocial stress and their impact on future symptom severity. Results: Increases in tic and OC symptom severity did not occur after every new GABHS infection. However, the structural equation model found that these newly diagnosed infections were predictive of modest increases in future tic and OC symptom severity, but did not predict future depressive symptom severity. In addition, the inclusion of new infections in the model greatly enhanced, by a factor of three, the power of psychosocial stress in predicting future tic and OC symptom severity. Conclusions: Our data suggest that a minority of children with TS and early-onset OCD were sensitive to antecedent GABHS infections. These infections also enhanced the predictive power of current psychosocial stress on future tic and OC symptom severity. PMID:19833320
Comparison of prediction models for use of medical resources at urban auto-racing events.
Nable, Jose V; Margolis, Asa M; Lawner, Benjamin J; Hirshon, Jon Mark; Perricone, Alexander J; Galvagno, Samuel M; Lee, Debra; Millin, Michael G; Bissell, Richard A; Alcorta, Richard L
2014-12-01
Predicting the number of patient encounters and transports during mass gatherings can be challenging. The nature of these events necessitates that proper resources are available to meet the needs that arise. Several prediction models to assist event planners in forecasting medical utilization have been proposed in the literature. The objective of this study was to determine the accuracy of the Arbon and Hartman models in predicting the number of patient encounters and transports from the Baltimore Grand Prix (BGP), held in 2011 and 2012. It was hypothesized that the Arbon method, which uses regression-derived equations for its estimates, would be more accurate than the Hartman model, which categorizes events into only three discrete severity types. This retrospective analysis of the BGP utilized data collected from an electronic patient tracker system. The actual number of patients evaluated and transported at the BGP was tabulated and compared to the numbers predicted by the two studied models. Several environmental features, including weather, crowd attendance, and presence of alcohol, were used in the Arbon and Hartman models. Approximately 130,000 spectators attended the first event, and approximately 131,000 attended the second. The number of patient encounters per day ranged from 19 to 57 in 2011, and the number of transports from the scene ranged from two to nine. In 2012, the number of patients ranged from 19 to 44 per day, and the number of transports to emergency departments ranged from four to nine. With the exception of one day in 2011, the Arbon model overpredicted the number of encounters. For both events, the Hartman model overpredicted the number of patient encounters. In regard to hospital transports, the Arbon model underpredicted the actual numbers, whereas the Hartman model both overpredicted and underpredicted the number of transports from both events, varying by day. 
These findings highlight the need for a versatile model that can more accurately predict the number of patient encounters and transports associated with mass-gathering events, so that medical needs can be anticipated and sufficient resources provided.
González-Domínguez, Elisa; Armengol, Josep; Rossi, Vittorio
2014-01-01
A mechanistic, dynamic model was developed to predict infection of loquat fruit by conidia of Fusicladium eriobotryae, the causal agent of loquat scab. The model simulates scab infection periods and their severity through the sub-processes of spore dispersal, infection, and latency (i.e., the state variables); change from one state to the following one depends on environmental conditions and on processes described by mathematical equations. Equations were developed using published data on F. eriobotryae mycelium growth, conidial germination, infection, and conidial dispersion pattern. The model was then validated by comparing model output with three independent data sets. The model accurately predicts the occurrence and severity of infection periods as well as the progress of loquat scab incidence on fruit (with concordance correlation coefficients >0.95). Model output agreed with expert assessment of the disease severity in seven loquat-growing seasons. Use of the model for scheduling fungicide applications in loquat orchards may help optimise scab management and reduce fungicide applications. PMID:25233340
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
COSP - A computer model of cyclic oxidation
NASA Technical Reports Server (NTRS)
Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.
1991-01-01
A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
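The uniform-spall balance described in the abstract can be sketched as a simple per-cycle loop: parabolic scale growth while hot, then a constant fraction of the oxide spalls on cooldown. All constants below are illustrative placeholders, not COSP's calibrated values (the oxygen mass fraction 0.47 roughly corresponds to Al2O3).

```python
def cyclic_oxidation(cycles, kp=0.01, q=0.1, dt=1.0, f_o=0.47):
    """Uniform-spall sketch: parabolic oxide growth each hot cycle,
    a constant fraction q of the oxide spalls on each cooldown."""
    w_ox = 0.0        # retained oxide mass per unit area
    w_spalled = 0.0   # cumulative spalled oxide
    history = []
    for _ in range(cycles):
        # parabolic growth: total oxide obeys w^2 = kp * t_effective
        t_eff = (w_ox ** 2) / kp
        w_ox = (kp * (t_eff + dt)) ** 0.5
        # cooling: fraction q of the current oxide spalls off
        spall = q * w_ox
        w_ox -= spall
        w_spalled += spall
        # specimen weight change = oxygen retained in the oxide
        # minus metal carried away in spalled oxide
        history.append(f_o * w_ox - (1 - f_o) * w_spalled)
    return history
```

The sketch reproduces the characteristic cyclic-oxidation curve: an initial weight gain while the scale thickens, followed by steady weight loss once spalling dominates.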
Evaluation of Turbulence-Model Performance as Applied to Jet-Noise Prediction
NASA Technical Reports Server (NTRS)
Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.
1998-01-01
The accurate prediction of jet noise is possible only if the jet flow field can be predicted accurately. Predictions for the mean velocity and turbulence quantities in the jet flow field are typically the product of a Reynolds-averaged Navier-Stokes solver coupled with a turbulence model. To evaluate the effectiveness of solvers and turbulence models in predicting those quantities most important to jet noise prediction, two CFD codes and several turbulence models were applied to a jet configuration over a range of jet temperatures for which experimental data are available.
Testing for ontological errors in probabilistic forecasting models of natural systems
Marzocchi, Warner; Jordan, Thomas H.
2014-01-01
Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265
Omachi, Theodore A; Gregorich, Steven E; Eisner, Mark D; Penaloza, Renee A; Tolstykh, Irina V; Yelin, Edward H; Iribarren, Carlos; Dudley, R Adams; Blanc, Paul D
2013-08-01
Adjustment for differing risks among patients is usually incorporated into newer payment approaches, and current risk models rely on age, sex, and diagnosis codes. The extent to which additionally controlling for disease severity improves cost prediction is unknown. Failure to adjust for within-disease variation may create incentives to avoid sicker patients. We address this issue among patients with chronic obstructive pulmonary disease (COPD). Cost and clinical data were collected prospectively from 1202 COPD patients at Kaiser Permanente. Baseline analysis included age, sex, and diagnosis codes (using the Diagnostic Cost Group Relative Risk Score) in a general linear model predicting total medical costs in the following year. We determined whether adding COPD severity measures (forced expiratory volume in 1 second, 6-Minute Walk Test, dyspnea score, body mass index, and the BODE Index, a composite of the other 4 measures) improved predictions. Separately, we examined household income as a cost predictor. Mean costs were $12,334/y. Controlling for Relative Risk Score, each ½ SD worsening in a COPD severity factor was associated with $629 to $1135 in increased annual costs (all P<0.01). The lowest stratum of forced expiratory volume in 1 second (<30% of normal) predicted $4098 (95% confidence interval, $576-$8773) in additional costs. Household income predicted excess costs when added to the baseline model (P=0.038), but this became nonsignificant when the BODE Index was also incorporated. Disease severity measures explain significant cost variation beyond current risk models, and adding them to such models appears important to fairly compensate organizations that accept responsibility for sicker COPD patients. Appropriately controlling for disease severity also accounts for costs otherwise associated with lower socioeconomic status.
A model for predicting life expectancy of children with cystic fibrosis.
Aurora, P; Wade, A; Whitmore, P; Whitehead, B
2000-12-01
In this study the authors aimed to produce a model for predicting the life expectancy of children with severe cystic fibrosis (CF) lung disease. The survival of 181 children with severe CF lung disease referred for transplantation assessment 1988-1998 (mean age 11.5 yrs, median survival without transplant 1.9 yrs from date of assessment) were studied. Proportional hazards modelling was used to identify assessment measurements that are of value in predicting longevity. The resultant model included low height predicted forced expiratory volume in one second (FEV1), low minimum oxygen saturation (Sa,O2min) during a 12-min walk, high age adjusted resting heart rate, young age, female sex, low plasma albumin, and low blood haemoglobin as predictors for poor prognosis. Extrapolation from the model suggests that a 12-yr old male child with an FEV1 of 30% pred and a Sa,O2min of 85% has a 44% risk of death within 2 yrs (95% confidence interval (CI) 35-54%), whilst a female child with the same measurements has a 63% risk of death (95% CI 52-73%) within the same period. The model produced may be of value in predicting the life expectancy of children with severe cystic fibrosis lung disease and in optimizing the timing of lung transplantation.
A Genomics-Based Model for Prediction of Severe Bioprosthetic Mitral Valve Calcification.
Ponasenko, Anastasia V; Khutornaya, Maria V; Kutikhin, Anton G; Rutkovskaya, Natalia V; Tsepokina, Anna V; Kondyukova, Natalia V; Yuzhalin, Arseniy E; Barbarash, Leonid S
2016-08-31
Severe bioprosthetic mitral valve calcification is a significant problem in cardiovascular surgery. Unfortunately, clinical markers did not demonstrate efficacy in prediction of severe bioprosthetic mitral valve calcification. Here, we examined whether a genomics-based approach is efficient in predicting the risk of severe bioprosthetic mitral valve calcification. A total of 124 consecutive Russian patients who underwent mitral valve replacement surgery were recruited. We investigated the associations of the inherited variation in innate immunity, lipid metabolism and calcium metabolism genes with severe bioprosthetic mitral valve calcification. Genotyping was conducted utilizing the TaqMan assay. Eight gene polymorphisms were significantly associated with severe bioprosthetic mitral valve calcification and were therefore included into stepwise logistic regression which identified male gender, the T/T genotype of the rs3775073 polymorphism within the TLR6 gene, the C/T genotype of the rs2229238 polymorphism within the IL6R gene, and the A/A genotype of the rs10455872 polymorphism within the LPA gene as independent predictors of severe bioprosthetic mitral valve calcification. The developed genomics-based model had fair predictive value with area under the receiver operating characteristic (ROC) curve of 0.73. In conclusion, our genomics-based approach is efficient for the prediction of severe bioprosthetic mitral valve calcification.
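The core statistical machinery here, a logistic model scored by the area under the ROC curve, is easy to sketch from first principles. The data below are synthetic two-group samples, not the study's genotypes, and plain gradient descent stands in for the stepwise selection the authors used.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Logistic regression by batch gradient descent (a stand-in
    for the stepwise selection used in the study)."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X1 @ w)))
        w += lr * X1.T @ (y - p) / len(y)
    return w

def roc_auc(y, score):
    """AUC via the rank-sum (Mann-Whitney) identity; assumes no ties."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    n1 = int(y.sum()); n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),    # "no calcification" group
               rng.normal(1.5, 1.0, (100, 2))])   # "severe calcification" group
y = np.r_[np.zeros(100), np.ones(100)]
w = fit_logistic(X, y)
score = 1.0 / (1.0 + np.exp(-(np.column_stack([np.ones(200), X]) @ w)))
auc = roc_auc(y, score)
```

Note that an AUC computed on the training sample, as here, is optimistic; the study's reported 0.73 would similarly need external validation.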
NASA Astrophysics Data System (ADS)
Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.
2018-04-01
Sea level rise has already caused more frequent and severe coastal flooding and this trend will likely continue. Flood prediction is an essential part of a coastal city's capacity to adapt to and mitigate this growing problem. Complex coastal urban hydrological systems, however, do not always lend themselves easily to physically-based flood prediction approaches. This paper presents a method for using a data-driven approach to estimate flood severity in an urban coastal setting using crowd-sourced data, a non-traditional but growing data source, along with environmental observation data. Two data-driven models, Poisson regression and Random Forest regression, are trained to predict the number of flood reports per storm event as a proxy for flood severity, given extensive environmental data (i.e., rainfall, tide, groundwater table level, and wind conditions) as input. The method is demonstrated using data from Norfolk, Virginia USA from September 2010 to October 2016. Quality-controlled, crowd-sourced street flooding reports ranging from 1 to 159 per storm event for 45 storm events are used to train and evaluate the models. Random Forest performed better than Poisson regression at predicting the number of flood reports and had a lower false negative rate. From the Random Forest model, total cumulative rainfall was by far the most dominant input variable in predicting flood severity, followed by low tide and lower low tide. These methods serve as a first step toward using data-driven methods for spatially and temporally detailed coastal urban flood prediction.
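The Poisson-regression baseline used here can be sketched as a log-link GLM fitted by gradient ascent. The rainfall variable and coefficients below are invented stand-ins, not the Norfolk data:

```python
import numpy as np

def fit_poisson(X, y, lr=0.02, n_iter=8000):
    """Log-link Poisson GLM, E[y|x] = exp(w0 + w.x), fitted by
    gradient ascent on the log-likelihood."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X1 @ w)
        w += lr * X1.T @ (y - mu) / len(y)       # score function of the GLM
    return w

rng = np.random.default_rng(0)
rain = rng.uniform(0.0, 3.0, 500)                 # stand-in storm rainfall
reports = rng.poisson(np.exp(0.2 + 0.8 * rain))   # synthetic flood-report counts
w = fit_poisson(rain[:, None], reports)           # approximately recovers [0.2, 0.8]
```

A count outcome like "flood reports per storm" is the natural target for such a model, which is presumably why the authors chose it as their parametric baseline.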
Comprehensive and critical review of the predictive properties of the various mass models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.
1984-01-01
Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques, and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix; Monahan; Serduke; Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only rough correlation between the ability of a particular model to reproduce the measured mass surface which had been used to refine its adjustable parameters and that model's ability to predict correctly the new masses. For some models, distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of the types of analysis performed with individual mass models.
Ali Ali, Bismil; Lefering, Rolf; Fortún Moral, Mariano; Belzunegui Otano, Tomás
2018-01-01
To validate the Mortality Prediction Model of Navarre (MPMN) to predict death after severe trauma and compare it to the Revised Injury Severity Classification Score II (RISCII). Retrospective analysis of a cohort of severe trauma patients (New Injury Severity Score >15) who were attended by emergency services in the Spanish autonomous community of Navarre between 2013 and 2015. The outcome variable was 30-day all-cause mortality. Risk was calculated with the MPMN and the RISCII. The performance of each model was assessed with the area under the receiver operating characteristic (ROC) curve and precision with respect to observed mortality. Calibration was assessed with the Hosmer-Lemeshow test. We included 516 patients. The mean (SD) age was 56 (23) years, and 363 (70%) were males. Ninety patients (17.4%) died within 30 days. The 30-day mortality rates predicted by the MPMN and RISCII were 16.4% and 15.4%, respectively. The areas under the ROC curves were 0.925 (95% CI, 0.902-0.952) for the MPMN and 0.941 (95% CI, 0.921-0.962) for the RISCII (P=0.269, DeLong test). Calibration statistics were 13.6 (P=.09) for the MPMN and 8.9 (P=.35) for the RISCII. Both the MPMN and the RISCII show good ability to discriminate risk and predict 30-day all-cause mortality in severe trauma patients.
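The Hosmer-Lemeshow calibration statistic used to assess both models can be computed directly: group patients by predicted risk and compare observed with expected events in each group. The risks below are simulated, not the Navarre registry data:

```python
import numpy as np

def hosmer_lemeshow(y, p, groups=10):
    """Chi-square over predicted-risk deciles: (O-E)^2/E for events
    plus the analogous term for non-events in each group."""
    order = np.argsort(p)
    stat = 0.0
    for idx in np.array_split(order, groups):
        n, o, e = len(idx), y[idx].sum(), p[idx].sum()
        stat += (o - e) ** 2 / e + ((n - o) - (n - e)) ** 2 / (n - e)
    return stat

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, 2000)                         # predicted mortality risks
y_cal = (rng.uniform(size=2000) < p).astype(float)        # outcomes matching the risks
y_off = (rng.uniform(size=2000) < 0.5 * p).astype(float)  # systematic over-prediction
hl_good = hosmer_lemeshow(y_cal, p)
hl_bad = hosmer_lemeshow(y_off, p)
```

A small statistic (large P value, as reported for both MPMN and RISCII) indicates predicted risks that track observed event rates; systematic over- or under-prediction inflates it sharply.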
NASA Astrophysics Data System (ADS)
Keyser, Alisa; Westerling, Anthony LeRoy
2017-05-01
A long history of fire suppression in the western United States has significantly changed forest structure and ecological function, leading to increasingly uncharacteristic fires in terms of size and severity. Prior analyses of fire severity in California forests showed that time since last fire and fire weather conditions predicted fire severity very well, while a larger regional analysis showed that topography and climate were important predictors of high severity fire. There has not yet been a large-scale study that incorporates topography, vegetation and fire-year climate to determine regional scale high severity fire occurrence. We developed models to predict the probability of high severity fire occurrence for the western US. We predict high severity fire occurrence with some accuracy, and identify the relative importance of predictor classes in determining the probability of high severity fire. The inclusion of both vegetation and fire-year climate predictors was critical for model skill in identifying fires with high fractional fire severity. The inclusion of fire-year climate variables allows this model to forecast inter-annual variability in areas at future risk of high severity fire, beyond what slower-changing fuel conditions alone can accomplish. This allows for more targeted land management, including resource allocation for fuels reduction treatments to decrease the risk of high severity fire.
Facchinello, Yann; Beauséjour, Marie; Richard-Denis, Andreane; Thompson, Cynthia; Mac-Thiong, Jean-Marc
2017-10-25
Predicting the long-term functional outcome following traumatic spinal cord injury is needed to adapt medical strategies and to plan optimized rehabilitation. This study investigates the use of regression trees for the development of predictive models based on acute clinical and demographic predictors. This prospective study was performed on 172 patients hospitalized following traumatic spinal cord injury. Functional outcome was quantified using the Spinal Cord Independence Measure collected within the first year post injury. Age, delay prior to surgery and Injury Severity Score were considered as continuous predictors, while energy of injury, trauma mechanisms, neurological level of injury, injury severity, occurrence of early spasticity, urinary tract infection, pressure ulcer and pneumonia were coded as categorical inputs. A simplified model was built using only injury severity, neurological level, energy and age as predictors and was compared to a more complex model considering all 11 predictors mentioned above. The models built using 4 and 11 predictors were found to explain 51.4% and 62.3% of the variance of the Spinal Cord Independence Measure total score after validation, respectively. The severity of the neurological deficit at admission was found to be the most important predictor. Other important predictors were the Injury Severity Score, age, neurological level and delay prior to surgery. Regression trees offer promising performance for predicting the functional outcome after a traumatic spinal cord injury. They could help determine the number and type of predictors leading to a prediction model of the functional outcome that can be used clinically in the future.
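A regression tree of the kind used here can be sketched as a greedy recursive splitter, with model quality reported as the fraction of outcome variance explained, as in the abstract. The data below are synthetic stand-ins, not the spinal-cord-injury cohort:

```python
import statistics

def grow_tree(X, y, depth=3, min_leaf=5):
    """Greedy CART-style regression tree: at each node, split on the
    (feature, threshold) pair that most reduces squared error."""
    if depth == 0 or len(y) < 2 * min_leaf:
        return statistics.mean(y)                 # leaf: predict the mean
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            yl = [v for x, v in zip(X, y) if x[j] <= t]
            yr = [v for x, v in zip(X, y) if x[j] > t]
            if len(yl) < min_leaf or len(yr) < min_leaf:
                continue
            ml, mr = statistics.mean(yl), statistics.mean(yr)
            sse = sum((v - ml) ** 2 for v in yl) + sum((v - mr) ** 2 for v in yr)
            if best is None or sse < best[0]:
                best = (sse, j, t)
    if best is None:
        return statistics.mean(y)
    _, j, t = best
    L = [(x, v) for x, v in zip(X, y) if x[j] <= t]
    R = [(x, v) for x, v in zip(X, y) if x[j] > t]
    return (j, t,
            grow_tree([x for x, _ in L], [v for _, v in L], depth - 1, min_leaf),
            grow_tree([x for x, _ in R], [v for _, v in R], depth - 1, min_leaf))

def predict(node, x):
    while isinstance(node, tuple):
        j, t, left, right = node
        node = left if x[j] <= t else right
    return node

def variance_explained(tree, X, y):
    m = statistics.mean(y)
    ss_res = sum((v - predict(tree, x)) ** 2 for x, v in zip(X, y))
    return 1 - ss_res / sum((v - m) ** 2 for v in y)

X = [[i % 10, i // 10] for i in range(100)]   # two synthetic predictors
y = [10 * a + b for a, b in X]                # outcome dominated by the first
tree = grow_tree(X, y)
ve = variance_explained(tree, X, y)
```

Tree models of this kind handle the mix of continuous and categorical predictors described in the abstract naturally, which is part of their clinical appeal.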
Honeybul, Stephen; Ho, Kwok M
2016-09-01
Predicting long-term neurological outcomes after severe traumatic brain injury (TBI) is important, but which prognostic model in the context of decompressive craniectomy has the best performance remains uncertain. This prospective observational cohort study included all patients who had severe TBI requiring decompressive craniectomy between 2004 and 2014 in the two neurosurgical centres in Perth, Western Australia. Severe disability, vegetative state, or death were defined as unfavourable neurological outcomes. The area under the receiver-operating-characteristic curve (AUROC) and the slope and intercept of the calibration curve were used to assess the discrimination and calibration of the CRASH (Corticosteroid-Randomisation-After-Significant-Head injury) and IMPACT (International-Mission-For-Prognosis-And-Clinical-Trial) models, respectively. Of the 319 patients included in the study, 119 (37%) had unfavourable neurological outcomes at 18 months after decompressive craniectomy for severe TBI. Both the CRASH (AUROC 0.86, 95% confidence interval 0.81-0.90) and IMPACT full model (AUROC 0.85, 95% CI 0.80-0.89) were similar in discriminating between favourable and unfavourable neurological outcome at 18 months after surgery (p=0.690 for the difference in AUROC derived from the two models). Although both models tended to over-predict the risks of long-term unfavourable outcome, the IMPACT model had a slightly better calibration than the CRASH model (intercept of the calibration curve=-4.1 vs. -5.7, and log likelihoods -159 vs. -360, respectively), especially when the predicted risks of unfavourable outcome were <80%. Both the CRASH and IMPACT prognostic models were good at discriminating between favourable and unfavourable long-term neurological outcome for patients with severe TBI requiring decompressive craniectomy, but the calibration of the IMPACT full model was better than that of the CRASH model. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Testing and analysis of internal hardwood log defect prediction models
R. Edward Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Neonatal intensive care unit: predictive models for length of stay.
Bender, G J; Koestler, D; Ombao, H; McCourt, M; Alskinis, B; Rubin, L P; Padbury, J F
2013-02-01
Hospital length of stay (LOS) is important to administrators and families of neonates admitted to the neonatal intensive care unit (NICU). A prediction model for NICU LOS was developed using the predictors birth weight, gestational age and two severity-of-illness tools, the Score for Neonatal Acute Physiology, Perinatal Extension (SNAPPE) and the Morbidity Assessment Index for Newborns (MAIN). Consecutive admissions (n=293) to a New England regional level III NICU were retrospectively collected. Multiple predictive models were compared for complexity, goodness of fit, coefficient of determination (R²) and predictive error. The optimal model was validated prospectively with consecutive admissions (n=615). Observed and expected LOS were compared. The MAIN models had the best Akaike information criterion, the highest R² (0.786) and the lowest predictive error. The best SNAPPE model underestimated LOS, with substantial variability, yet was fairly well calibrated by birthweight category. LOS was longer in the prospective cohort than in the retrospective cohort, without differences in birth weight, gestational age, MAIN or SNAPPE. LOS prediction is improved by accounting for severity of illness in the first week of life, beyond factors known at birth. Prospective validation of both the MAIN and SNAPPE models is warranted.
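Comparing nested models by Akaike's information criterion and R², as done here, can be sketched for a Gaussian linear model, where AIC = n·ln(RSS/n) + 2k. The predictors below are random stand-ins, not the NICU data:

```python
import numpy as np

def ols_aic_r2(X, y):
    """Least-squares fit; returns Gaussian AIC = n*ln(RSS/n) + 2k
    (k counts the intercept) and the coefficient of determination."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(np.sum((y - X1 @ beta) ** 2))
    n, k = len(y), X1.shape[1]
    aic = n * np.log(rss / n) + 2 * k
    r2 = 1 - rss / float(np.sum((y - y.mean()) ** 2))
    return aic, r2

rng = np.random.default_rng(0)
bw, ga = rng.normal(size=(2, 200))        # stand-ins for birth weight, gestational age
severity = rng.normal(size=200)           # stand-in for a severity-of-illness score
los = 2 * bw + ga + 3 * severity + rng.normal(size=200)

aic_base, r2_base = ols_aic_r2(np.column_stack([bw, ga]), los)
aic_full, r2_full = ols_aic_r2(np.column_stack([bw, ga, severity]), los)
# when severity genuinely drives the outcome, adding it lowers AIC and raises R^2
```

Because AIC penalizes each extra parameter, it rewards the severity term only to the extent that the term improves fit, which mirrors the abstract's finding that severity-of-illness scores add real predictive value.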
Toyabe, Shin-ichi
2014-01-01
Inpatient falls are the most common adverse events that occur in a hospital, and about 3 to 10% of falls result in serious injuries such as bone fractures and intracranial haemorrhages. We previously reported that bone fractures and intracranial haemorrhages were the two major fall-related injuries and that the risk assessment score for osteoporotic bone fracture was significantly associated not only with bone fractures after falls but also with intracranial haemorrhage after falls. Based on these results, we tried to establish a risk assessment tool for predicting fall-related severe injuries in a hospital. Possible risk factors related to fall-related serious injuries were extracted from data on inpatients admitted to a tertiary-care university hospital by using multivariate Cox regression analysis and multiple logistic regression analysis. We found that fall risk score and fracture risk score were the two significant factors, and we constructed models to predict fall-related severe injuries incorporating these factors. When the prediction model was applied to another independent dataset, the constructed model could detect patients with fall-related severe injuries efficiently. The new assessment system could identify patients prone to severe injuries after falls in a reproducible fashion. PMID:25168984
DiMagno, Matthew J; Spaete, Joshua P; Ballard, Darren D; Wamsteker, Erik-Jan; Saini, Sameer D
2013-08-01
We investigated which variables were independently associated with protection against or development of postendoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) and with severity of PEP. Subsequently, we derived predictive risk models for PEP. In a case-control design, 6505 patients had 8264 ERCPs, 211 patients had PEP, and 22 patients had severe PEP. We randomly selected 348 non-PEP controls. We examined 7 established and 9 investigational variables. In univariate analysis, 7 variables predicted PEP: younger age, female sex, suspected sphincter of Oddi dysfunction (SOD), pancreatic sphincterotomy, moderate-difficult cannulation (MDC), pancreatic stent placement, and lower Charlson score. Protective variables were current smoking, former drinking, diabetes, and chronic liver disease (CLD, biliary/transplant complications). Multivariate analysis identified 7 independent variables for PEP, 3 protective (current smoking, CLD-biliary, CLD-transplant/hepatectomy complications) and 4 predictive (younger age, suspected SOD, pancreatic sphincterotomy, MDC). Pre- and post-ERCP risk models of 7 variables have a C-statistic of 0.74. Removing age (the seventh variable) did not significantly affect the predictive value (C-statistic of 0.73) and reduced model complexity. Severity of PEP did not associate with any variables in multivariate analysis. By using the newly identified protective variables together with 3 predictive variables, we derived 2 risk models with a higher predictive value for PEP compared with prior studies.
Predicting Coastal Flood Severity using Random Forest Algorithm
NASA Astrophysics Data System (ADS)
Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.
2017-12-01
Coastal floods have become more common recently and are predicted to further increase in frequency and severity due to sea level rise. Predicting floods in coastal cities can be difficult due to the number of environmental and geographic factors which can influence flooding events. Built stormwater infrastructure and irregular urban landscapes add further complexity. This paper demonstrates the use of machine learning algorithms in predicting street flood occurrence in an urban coastal setting. The model is trained and evaluated using data from Norfolk, Virginia USA from September 2010 - October 2016. Rainfall, tide levels, water table levels, and wind conditions are used as input variables. Street flooding reports made by city workers after named and unnamed storm events, ranging from 1-159 reports per event, are the model output. Results show that Random Forest provides predictive power in estimating the number of flood occurrences given a set of environmental conditions with an out-of-bag root mean squared error of 4.3 flood reports and a mean absolute error of 0.82 flood reports. The Random Forest algorithm performed much better than Poisson regression. From the Random Forest model, total daily rainfall was by far the most important factor in flood occurrence prediction, followed by daily low tide and daily higher high tide. The model demonstrated here could be used to predict flood severity based on forecast rainfall and tide conditions and could be further enhanced using more complete street flooding data for model training.
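The out-of-bag error reported here is a standard by-product of bootstrap aggregation: each observation is scored only by the trees that never sampled it. A minimal sketch using bagged regression stumps, a toy stand-in for a full Random Forest, on synthetic data:

```python
import random
import statistics

def fit_stump(X, y):
    """Best single-feature threshold split minimising squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            yl = [v for x, v in zip(X, y) if x[j] <= t]
            yr = [v for x, v in zip(X, y) if x[j] > t]
            if not yl or not yr:
                continue
            ml, mr = statistics.mean(yl), statistics.mean(yr)
            sse = sum((v - ml) ** 2 for v in yl) + sum((v - mr) ** 2 for v in yr)
            if best is None or sse < best[0]:
                best = (sse, j, t, ml, mr)
    if best is None:                      # no valid split: constant predictor
        m = statistics.mean(y)
        return lambda x: m
    _, j, t, ml, mr = best
    return lambda x: ml if x[j] <= t else mr

def bagged_oob_rmse(X, y, n_trees=30, seed=1):
    """Bagging: each stump sees a bootstrap sample; every point is
    scored only by the stumps that never sampled it (out-of-bag)."""
    rng = random.Random(seed)
    n = len(X)
    oob = [[] for _ in range(n)]
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        stump = fit_stump([X[i] for i in idx], [y[i] for i in idx])
        for i in set(range(n)) - set(idx):
            oob[i].append(stump(X[i]))
    errs = [(statistics.mean(p) - y[i]) ** 2 for i, p in enumerate(oob) if p]
    return statistics.mean(errs) ** 0.5

X = [[i / 40.0] for i in range(40)]            # one synthetic driver (e.g. rainfall)
y = [0.0 if x[0] <= 0.5 else 5.0 for x in X]   # synthetic flood-report counts
rmse = bagged_oob_rmse(X, y)
```

A production Random Forest additionally grows deep trees and subsamples features at each split, but the out-of-bag bookkeeping, the source of the 4.3-report RMSE quoted in the abstract, works exactly as above.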
Hinton, Devon E; Hofmann, Stefan G; Pitman, Roger K; Pollack, Mark H; Barlow, David H
2008-01-01
This article examines the ability of the panic attack-posttraumatic stress disorder (PTSD) model to predict how panic attacks are generated and how panic attacks worsen PTSD. The article does so by determining the validity of the panic attack-PTSD model with respect to one type of panic attack among traumatized Cambodian refugees: orthostatic panic (OP) attacks (i.e. panic attacks generated by moving from lying or sitting to standing). Among Cambodian refugees attending a psychiatric clinic, the authors conducted two studies to explore the validity of the panic attack-PTSD model as applied to OP patients (i.e. patients with at least one episode of OP in the previous month). In Study 1, the panic attack-PTSD model accurately indicated how OP is seemingly generated: among OP patients (N = 58), orthostasis-associated flashbacks and catastrophic cognitions predicted OP severity beyond a measure of anxious-depressive distress (Symptom Checklist-90-R subscales), and OP severity significantly mediated the effect of anxious-depressive distress on Clinician-Administered PTSD Scale severity. In Study 2, as predicted by the panic attack-PTSD model, OP had a mediational role with respect to the effect of treatment on PTSD severity: among Cambodian refugees with PTSD and comorbid OP who participated in a cognitive behavioural therapy study (N = 56), improvement in PTSD severity was partially mediated by improvement in OP severity.
Alqahtani, Saeed; Bukhari, Ishfaq; Albassam, Ahmed; Alenazi, Maha
2018-05-28
The intestinal absorption process is a combination of several events that are governed by various factors. Several transport mechanisms are involved in drug absorption through enterocytes via active and/or passive processes. The transported molecules then undergo intestinal metabolism, which together with intestinal transport may affect the systemic availability of drugs. Many studies have provided clear evidence on the significant role of intestinal first-pass metabolism on drug bioavailability and degree of drug-drug interactions (DDIs). Areas covered: This review provides an update on the role of intestinal first-pass metabolism in the oral bioavailability of drugs and prediction of drug-drug interactions. It also provides a comprehensive overview and summary of the latest update in the role of PBPK modeling in prediction of intestinal metabolism and DDIs in humans. Expert opinion: The contribution of intestinal first-pass metabolism in the oral bioavailability of drugs and prediction of DDIs has become more evident over the last few years. Several in vitro, in situ, and in vivo models have been developed to evaluate the role of first-pass metabolism and to predict DDIs. Currently, physiologically based pharmacokinetic modeling is considered the most valuable tool for the prediction of intestinal first-pass metabolism and DDIs.
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
The prediction of intelligence in preschool children using alternative models to regression.
Finch, W Holmes; Chang, Mei; Davis, Andrew S; Holden, Jocelyn E; Rothlisberg, Barbara A; McIntosh, David E
2011-12-01
Statistical prediction of an outcome variable using multiple independent variables is a common practice in the social and behavioral sciences. For example, neuropsychologists are sometimes called upon to provide predictions of preinjury cognitive functioning for individuals who have suffered a traumatic brain injury. Typically, these predictions are made using standard multiple linear regression models with several demographic variables (e.g., gender, ethnicity, education level) as predictors. Prior research has shown conflicting evidence regarding the ability of such models to provide accurate predictions of outcome variables such as full-scale intelligence (FSIQ) test scores. The present study had two goals: (1) to demonstrate the utility of a set of alternative prediction methods that have been applied extensively in the natural sciences and business but have not been frequently explored in the social sciences and (2) to develop models that can be used to predict premorbid cognitive functioning in preschool children. Predictions of Stanford-Binet 5 FSIQ scores for preschool-aged children is used to compare the performance of a multiple regression model with several of these alternative methods. Results demonstrate that classification and regression trees provided more accurate predictions of FSIQ scores than does the more traditional regression approach. Implications of these results are discussed.
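As a concrete illustration of the tree-based alternative to linear regression, here is a minimal depth-1 regression tree ("stump") in pure Python. The education/score numbers are invented for illustration, and real CART models grow much deeper trees over several predictors:

```python
def fit_stump(xs, ys):
    """Fit a depth-1 regression tree: find the split on x that minimises
    total squared error, predicting the mean of y on each side."""
    best = None
    candidates = sorted(set(xs))
    for i in range(1, len(candidates)):
        thr = (candidates[i - 1] + candidates[i]) / 2
        left = [y for x, y in zip(xs, ys) if x <= thr]
        right = [y for x, y in zip(xs, ys) if x > thr]
        mean_l = sum(left) / len(left)
        mean_r = sum(right) / len(right)
        sse = (sum((y - mean_l) ** 2 for y in left)
               + sum((y - mean_r) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, thr, mean_l, mean_r)
    _, thr, mean_l, mean_r = best
    return lambda x: mean_l if x <= thr else mean_r

# toy data: years of parental education vs a hypothetical FSIQ-like score
xs = [8, 10, 12, 14, 16, 18]
ys = [92, 95, 97, 106, 108, 110]
predict = fit_stump(xs, ys)
print(predict(9), predict(17))
```

The stump picks the threshold that best separates the low and high score groups, which is the elementary building block CART recursively applies.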
Esbenshade, Adam J; Zhao, Zhiguo; Aftandilian, Catherine; Saab, Raya; Wattier, Rachel L; Beauchemin, Melissa; Miller, Tamara P; Wilkes, Jennifer J; Kelly, Michael J; Fernbach, Alison; Jeng, Michael; Schwartz, Cindy L; Dvorak, Christopher C; Shyr, Yu; Moons, Karl G M; Sulis, Maria-Luisa; Friedman, Debra L
2017-10-01
Pediatric oncology patients are at an increased risk of invasive bacterial infection due to immunosuppression. The risk of such infection in the absence of severe neutropenia (absolute neutrophil count ≥ 500/μL) is not well established, and a validated prediction model for blood stream infection (BSI) risk would offer clinical utility. A 6-site retrospective external validation was conducted using a previously published risk prediction model for BSI in febrile pediatric oncology patients without severe neutropenia: the Esbenshade/Vanderbilt (EsVan) model. A reduced model (EsVan2) excluding 2 less clinically reliable variables also was created using the initial EsVan model derivation cohort, and was validated using all 5 external validation cohorts. One data set was used only in sensitivity analyses because it was missing some variables. From the 5 primary data sets, there were a total of 1197 febrile episodes and 76 episodes of bacteremia. The overall C statistic for predicting bacteremia was 0.695, with a calibration slope of 0.50 for the original model and a calibration slope of 1.0 when recalibration was applied to the model. The model performed better in predicting high-risk bacteremia (gram-negative or Staphylococcus aureus infection) versus BSI alone, with a C statistic of 0.801 and a calibration slope of 0.65. The EsVan2 model outperformed the EsVan model across data sets, with a C statistic of 0.733 for predicting BSI and a C statistic of 0.841 for high-risk BSI. The results of this external validation demonstrate that the EsVan and EsVan2 models are able to predict BSI across multiple performance sites and, once validated and implemented prospectively, could assist decision making in clinical practice. Cancer 2017;123:3781-3790. © 2017 American Cancer Society.
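The C statistic reported in validations like this is simply the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. A minimal sketch, with invented scores and outcomes rather than the study's data:

```python
def c_statistic(scores, labels):
    """Concordance (C) statistic: fraction of positive/negative pairs in
    which the positive case is scored higher; ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# hypothetical predicted bacteremia risks and observed outcomes
scores = [0.9, 0.8, 0.4, 0.35, 0.1]
labels = [1,   1,   0,   1,    0]
print(round(c_statistic(scores, labels), 3))  # → 0.833
```

A value of 0.5 means the score is no better than chance; 1.0 means perfect rank separation of cases from non-cases.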
NASA Astrophysics Data System (ADS)
O'Connor, Alison; Kirtman, Benjamin; Harrison, Scott; Gorman, Joe
2016-05-01
The US Navy faces several limitations when planning operations in regard to forecasting environmental conditions. Currently, mission analysis and planning tools rely heavily on short-term (less than a week) forecasts or long-term statistical climate products. However, newly available data in the form of weather forecast ensembles provides dynamical and statistical extended-range predictions that can produce more accurate predictions if ensemble members can be combined correctly. Charles River Analytics is designing the Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS), which performs data fusion over extended-range multi-model ensembles, such as the North American Multi-Model Ensemble (NMME), to produce a unified forecast for several weeks to several seasons in the future. We evaluated thirty years of forecasts using machine learning to select predictions for an all-encompassing and superior forecast that can be used to inform the Navy's decision planning process.
Hong, Wandong; Lin, Suhan; Zippi, Maddalena; Geng, Wujun; Stock, Simon; Zimmer, Vincent; Xu, Chunfang; Zhou, Mengtao
2017-01-01
Early prediction of disease severity of acute pancreatitis (AP) would be helpful for triaging patients to the appropriate level of care and intervention. The aim of the study was to develop a model able to predict Severe Acute Pancreatitis (SAP). A total of 647 patients with AP were enrolled. The demographic data, hematocrit, High-Density Lipoprotein Cholesterol (HDL-C) determined at time of admission, and Blood Urea Nitrogen (BUN) and serum creatinine (Scr) determined at time of admission and 24 hrs after hospitalization were collected and analyzed statistically. Multivariate logistic regression indicated that HDL-C at admission and BUN and Scr at 24 hours (hrs) were independently associated with SAP. A logistic regression function (LR model) was developed to predict SAP as follows: -2.25 - 0.06 × HDL-C (mg/dl) at admission + 0.06 × BUN (mg/dl) at 24 hours + 0.66 × Scr (mg/dl) at 24 hours. The optimism-corrected c-index for the LR model was 0.832 after bootstrap validation. The area under the receiver operating characteristic curve for the LR model for the prediction of SAP was 0.84. The LR model consists of HDL-C at admission and BUN and Scr at 24 hours, representing an additional tool to stratify patients at risk of SAP.
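The quoted regression function is a linear predictor, so it converts to a probability via the inverse-logit. A sketch using the coefficients given in the abstract; the lab values fed in below are for a hypothetical patient, not taken from the study:

```python
import math

def sap_risk(hdl_admission, bun_24h, scr_24h):
    """Predicted probability of severe acute pancreatitis from the
    published logistic model (all inputs in mg/dl)."""
    lp = -2.25 - 0.06 * hdl_admission + 0.06 * bun_24h + 0.66 * scr_24h
    return 1.0 / (1.0 + math.exp(-lp))  # inverse-logit

# illustrative values (hypothetical patient, not from the paper)
print(round(sap_risk(hdl_admission=45.0, bun_24h=30.0, scr_24h=1.2), 3))  # → 0.086
```

Note the signs match the clinical reading: lower admission HDL-C and higher 24-hour BUN/creatinine both push the predicted SAP probability up.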
Nuclear masses far from stability: the interplay of theory and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.
1985-01-01
Mass models seek, by a variety of theoretical approaches, to reproduce the measured mass surface and to predict unmeasured masses beyond it. Subsequent measurements of these predicted nuclear masses permit an assessment of the quality of the mass predictions from the various models. Since the last comprehensive revision of the mass predictions (in the mid-to-late 1970's) over 300 new masses have been reported. Global analyses of these data have been performed by several numerical and graphical methods. These have identified both the strengths and weaknesses of the models. In some cases failures in individual models are distinctly apparent when the new mass data are plotted as functions of one or more selected physical parameters. Several examples will be given. Future theoretical efforts will also be discussed.
van Strien, Maarten J; Keller, Daniela; Holderegger, Rolf; Ghazoul, Jaboury; Kienast, Felix; Bolliger, Janine
2014-03-01
For conservation managers, it is important to know whether landscape changes lead to increasing or decreasing gene flow. Although the discipline of landscape genetics assesses the influence of landscape elements on gene flow, no studies have yet used landscape-genetic models to predict gene flow resulting from landscape change. A species that has already been severely affected by landscape change is the large marsh grasshopper (Stethophyma grossum), which inhabits moist areas in fragmented agricultural landscapes in Switzerland. From transects drawn between all population pairs within maximum dispersal distance (< 3 km), we calculated several measures of landscape composition as well as some measures of habitat configuration. Additionally, a complete sampling of all populations in our study area allowed incorporating measures of population topology. These measures together with the landscape metrics formed the predictor variables in linear models with gene flow as response variable (F(ST) and mean pairwise assignment probability). With a modified leave-one-out cross-validation approach, we selected the model with the highest predictive accuracy. With this model, we predicted gene flow under several landscape-change scenarios, which simulated construction, rezoning or restoration projects, and the establishment of a new population. For some landscape-change scenarios, significant increase or decrease in gene flow was predicted, while for others little change was forecast. Furthermore, we found that the measures of population topology strongly increase model fit in landscape genetic analysis. This study demonstrates the use of predictive landscape-genetic models in conservation and landscape planning.
Evaluation of Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Proctor, Fred H.; Hamilton, David W.
2009-01-01
Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
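The Root Mean Square error metric used in these wake-model evaluations is straightforward to compute. A minimal sketch with invented circulation values standing in for the model/Lidar comparison described above:

```python
import math

def rmse(predicted, measured):
    """Root-mean-square error between model predictions and observations."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(predicted))

# hypothetical wake-circulation predictions vs Lidar measurements (m^2/s)
model = [390.0, 360.0, 330.0, 300.0]
lidar = [400.0, 370.0, 320.0, 290.0]
print(rmse(model, lidar))  # → 10.0
```

Because squaring weights large misses heavily, RMSE rewards a fast-time model that avoids occasional gross errors, which is one of the shortcomings in error methodology the review discusses.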
USDA-ARS?s Scientific Manuscript database
Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates for selection. Originally these models were developed without considering genotype × environment interaction (GE). Several authors have proposed extensions of the canonical GS model that accomm...
NASA Technical Reports Server (NTRS)
Paine, D. A.; Zack, J. W.; Kaplan, M. L.
1979-01-01
The progress and problems associated with the dynamical forecast system which was developed to predict severe storms are examined. The meteorological problem of severe convective storm forecasting is reviewed. The cascade hypothesis which forms the theoretical core of the nested grid dynamical numerical modelling system is described. The dynamical and numerical structure of the model used during the 1978 test period is presented and a preliminary description of a proposed multigrid system for future experiments and tests is provided. Six cases from the spring of 1978 are discussed to illustrate the model's performance and its problems. Potential solutions to the problems are examined.
ERIC Educational Resources Information Center
Fox, William
2012-01-01
The purpose of our modeling effort is to predict future outcomes. We assume the data collected are both accurate and relatively precise. For our oscillating data, we examined several mathematical modeling forms for predictions. We also examined both ignoring the oscillations as an important feature and including the oscillations as an important…
2013-09-01
…based confidence metric is used to compare several different model predictions with the experimental data. II. Aerothermal Model Definition and … whereas 5% measurement uncertainty is assumed for the aerodynamic pressure and heat flux measurements … Bayesian updating according … definitive conclusions for these particular aerodynamic models. However, given the confidence associated with the … predictions for Run 30 (H/D …
Kessler, R C; van Loo, H M; Wardenaar, K J; Bossarte, R M; Brenner, L A; Cai, T; Ebert, D D; Hwang, I; Li, J; de Jonge, P; Nierenberg, A A; Petukhova, M V; Rosellini, A J; Sampson, N A; Schoevers, R A; Wilcox, M A; Zaslavsky, A M
2016-10-01
Heterogeneity of major depressive disorder (MDD) illness course complicates clinical decision-making. Although efforts to use symptom profiles or biomarkers to develop clinically useful prognostic subtypes have had limited success, a recent report showed that machine-learning (ML) models developed from self-reports about incident episode characteristics and comorbidities among respondents with lifetime MDD in the World Health Organization World Mental Health (WMH) Surveys predicted MDD persistence, chronicity and severity with good accuracy. We report results of model validation in an independent prospective national household sample of 1056 respondents with lifetime MDD at baseline. The WMH ML models were applied to these baseline data to generate predicted outcome scores that were compared with observed scores assessed 10-12 years after baseline. ML model prediction accuracy was also compared with that of conventional logistic regression models. Area under the receiver operating characteristic curve based on ML (0.63 for high chronicity and 0.71-0.76 for the other prospective outcomes) was consistently higher than for the logistic models (0.62-0.70) despite the latter models including more predictors. A total of 34.6-38.1% of respondents with subsequent high persistence chronicity and 40.8-55.8% with the severity indicators were in the top 20% of the baseline ML-predicted risk distribution, while only 0.9% of respondents with subsequent hospitalizations and 1.5% with suicide attempts were in the lowest 20% of the ML-predicted risk distribution. These results confirm that clinically useful MDD risk-stratification models can be generated from baseline patient self-reports and that ML methods improve on conventional methods in developing such models.
Kessler, Ronald C.; van Loo, Hanna M.; Wardenaar, Klaas J.; Bossarte, Robert M.; Brenner, Lisa A.; Cai, Tianxi; Ebert, David Daniel; Hwang, Irving; Li, Junlong; de Jonge, Peter; Nierenberg, Andrew A.; Petukhova, Maria V.; Rosellini, Anthony J.; Sampson, Nancy A.; Schoevers, Robert A.; Wilcox, Marsha A.; Zaslavsky, Alan M.
2015-01-01
Heterogeneity of major depressive disorder (MDD) illness course complicates clinical decision-making. While efforts to use symptom profiles or biomarkers to develop clinically useful prognostic subtypes have had limited success, a recent report showed that machine learning (ML) models developed from self-reports about incident episode characteristics and comorbidities among respondents with lifetime MDD in the World Health Organization World Mental Health (WMH) Surveys predicted MDD persistence, chronicity, and severity with good accuracy. We report results of model validation in an independent prospective national household sample of 1,056 respondents with lifetime MDD at baseline. The WMH ML models were applied to these baseline data to generate predicted outcome scores that were compared to observed scores assessed 10–12 years after baseline. ML model prediction accuracy was also compared to that of conventional logistic regression models. Area under the receiver operating characteristic curve (AUC) based on ML (.63 for high chronicity and .71–.76 for the other prospective outcomes) was consistently higher than for the logistic models (.62–.70) despite the latter models including more predictors. 34.6–38.1% of respondents with subsequent high persistence-chronicity and 40.8–55.8% with the severity indicators were in the top 20% of the baseline ML predicted risk distribution, while only 0.9% of respondents with subsequent hospitalizations and 1.5% with suicide attempts were in the lowest 20% of the ML predicted risk distribution. These results confirm that clinically useful MDD risk stratification models can be generated from baseline patient self-reports and that ML methods improve on conventional methods in developing such models. PMID:26728563
Short-term droughts forecast using Markov chain model in Victoria, Australia
NASA Astrophysics Data System (ADS)
Rahmat, Siti Nazahiyah; Jayasuriya, Niranjali; Bhuiyan, Muhammed A.
2017-07-01
A comprehensive risk management strategy for dealing with drought should include both short-term and long-term planning. The objective of this paper is to present an early warning method to forecast drought using the Standardised Precipitation Index (SPI) and a non-homogeneous Markov chain model. A model such as this is useful for short-term planning. The developed method has been used to forecast droughts at a number of meteorological monitoring stations that have been regionalised into six homogeneous clusters with similar drought characteristics based on SPI. The non-homogeneous Markov chain model was used to estimate drought probabilities and drought predictions up to 3 months ahead. The drought severity classes defined using the SPI were computed at a 12-month time scale. The drought probabilities and the predictions were computed for six clusters that depict similar drought characteristics in Victoria, Australia. Overall, the drought severity class predicted was quite similar for all the clusters, with the non-drought class probabilities ranging from 49 to 57 %. For all clusters, the near normal class had a probability of occurrence varying from 27 to 38 %. For the moderate and severe classes, the probabilities ranged from 2 to 13 % and 3 to 1 %, respectively. The developed model predicted drought situations 1 month ahead reasonably well. However, 2 and 3 months ahead predictions should be used with caution until the models are developed further.
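A Markov-chain drought forecast of this kind propagates a class-probability vector through a transition matrix. A minimal homogeneous sketch, with an invented three-class matrix; the paper's non-homogeneous model would instead use a different transition matrix for each calendar month:

```python
def step(dist, P):
    """One Markov-chain step: next-state distribution = dist · P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# hypothetical monthly transition matrix over three SPI classes:
# 0 = non-drought, 1 = near normal, 2 = moderate-to-severe drought
P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.1, 0.4, 0.5]]

dist = [0.0, 1.0, 0.0]   # currently in the near-normal class
for _ in range(3):       # forecast 3 months ahead
    dist = step(dist, P)
print([round(p, 3) for p in dist])
```

After a few steps the forecast distribution spreads across all classes, which mirrors the paper's finding that 1-month-ahead predictions are sharper than 2- and 3-month ones.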
Hayashi, Yumi; Okamoto, Yasumasa; Takagaki, Koki; Okada, Go; Toki, Shigeru; Inoue, Takeshi; Tanabe, Hajime; Kobayakawa, Makoto; Yamawaki, Shigeto
2015-10-14
It is known that the onset, progression, and prognosis of major depressive disorder are affected by interactions between a number of factors. This study investigated how childhood abuse, personality, and the stress of life events were associated with symptoms of depression in depressed people. Patients with major depressive disorder (N = 113, 58 women and 55 men) completed the Beck Depression Inventory-II (BDI-II), the Neuroticism Extroversion Openness Five Factor Inventory (NEO-FFI), the Child Abuse and Trauma Scale (CATS), and the Life Experiences Survey (LES), which are self-report scales. Results were analyzed with correlation analysis and structural equation modeling (SEM), using SPSS AMOS 21.0. Childhood abuse directly predicted the severity of depression and indirectly predicted the severity of depression through the mediation of personality. The negative life change score of the LES was affected by childhood abuse; however, it did not predict the severity of depression. This study is the first to report a relationship between childhood abuse, personality, adulthood life stresses and the severity of depression in depressed patients. Childhood abuse directly and indirectly predicted the severity of depression. These results suggest the need for clinicians to be receptive to the possibility of childhood abuse in patients suffering from depression. SEM is a procedure used for hypothesis modeling and not for causal modeling. Therefore, the possibility of developing more appropriate models that include other variables cannot be excluded.
van Tilburg, Miranda A L; Palsson, Olafur S; Whitehead, William E
2013-06-01
There is evidence that psychological factors affect the onset, severity and duration of irritable bowel syndrome (IBS). However, it is not clear which psychological factors are the most important and how they interact. The aims of the current study are to identify the most important psychological factors predicting IBS symptom severity and to investigate how these psychological variables are related to each other. Study participants were 286 IBS patients who completed a battery of psychological questionnaires including neuroticism, abuse history, life events, anxiety, somatization and catastrophizing. IBS severity measured by the IBS Severity Scale was the dependent variable. Path analysis was performed to determine the associations among the psychological variables, and IBS severity. Although the hypothesized model showed adequate fit, post hoc model modifications were performed to increase prediction. The final model was significant (Chi(2)=2.2; p=0.82; RMSEA<.05) predicting 36% of variance in IBS severity. Catastrophizing (standardized coefficient (β)=0.33; p<.001) and somatization (β=0.20; p<.001) were the only two psychological variables directly associated with IBS severity. Anxiety had an indirect effect on IBS symptoms through catastrophizing (β=0.80; p<.001); as well as somatization (β=0.37; p<.001). Anxiety, in turn, was predicted by neuroticism (β=0.66; p<.001) and stressful life events (β=0.31; p<.001). While cause-and-effect cannot be determined from these cross-sectional data, the outcomes suggest that the most fruitful approach to curb negative effects of psychological factors on IBS is to reduce catastrophizing and somatization. Copyright © 2013 Elsevier Inc. All rights reserved.
Predictors of smoking lapse in a human laboratory paradigm.
Roche, Daniel J O; Bujarski, Spencer; Moallem, Nathasha R; Guzman, Iris; Shapiro, Jenessa R; Ray, Lara A
2014-07-01
During a smoking quit attempt, a single smoking lapse is highly predictive of future relapse. While several risk factors for a smoking lapse have been identified during clinical trials, a laboratory model of lapse was until recently unavailable and, therefore, it is unclear whether these characteristics also convey risk for lapse in a laboratory environment. The primary study goal was to examine whether real-world risk factors of lapse are also predictive of smoking behavior in a laboratory model of smoking lapse. After overnight abstinence, 77 smokers completed the McKee smoking lapse task, in which they were presented with the choice of smoking or delaying in exchange for monetary reinforcement. Primary outcome measures were the latency to initiate smoking behavior and the number of cigarettes smoked during the lapse. Several baseline measures of smoking behavior, mood, and individual traits were examined as predictive factors. Craving to relieve the discomfort of withdrawal, withdrawal severity, and tension level were negatively predictive of latency to smoke. In contrast, average number of cigarettes smoked per day, withdrawal severity, level of nicotine dependence, craving for the positive effects of smoking, and craving to relieve the discomfort of withdrawal were positively predictive of number of cigarettes smoked. The results suggest that real-world risk factors for smoking lapse are also predictive of smoking behavior in a laboratory model of lapse. Future studies using the McKee lapse task should account for between subject differences in the unique factors that independently predict each outcome measure.
Vrshek-Schallhorn, Suzanne; Avery, Bradley M; Ditcheva, Maria; Sapuram, Vaibhav R
2018-06-01
Various internalizing risk factors predict, in separate studies, both augmented and reduced cortisol responding to lab-induced stress. Stressor severity appears key: We tested whether heightened trait-like internalizing risk (here, trait rumination) predicts heightened cortisol reactivity under modest objective stress, but conversely predicts reduced reactivity under more robust objective stress. Thus, we hypothesized that trait rumination would interact with a curvilinear (quadratic) function of stress severity to predict cortisol reactivity. Evidence comes from 85 currently non-depressed emerging adults who completed either a non-stressful control protocol (n = 29), an intermediate-difficulty Trier Social Stress Test (TSST; n = 26), or a robustly stressful negative-evaluative TSST (n = 30). Latent growth curve models evaluated relationships between trait rumination and linear and quadratic effects of stressor severity on the change in cortisol and negative affect over time. Among other findings, a significant Trait Rumination × Quadratic Stress Severity interaction effect for cortisol's Quadratic Trend of Time (i.e., reactivity; B = .125, p = .017) supported the hypothesis. Rumination predicted greater cortisol reactivity to intermediate stress (r_p = .400, p = .043), but blunted reactivity to more robust negative-evaluative stress (r_p = -.379, p = .039). In contrast to hypotheses, negative affective reactivity increased independently of rumination as stressor severity increased (B = .453, p = .044). The direction of the relationship between an internalizing risk factor (trait rumination) and cortisol reactivity varies as a function of stressor severity. We propose the Cortisol Reactivity Threshold Model, which may help reconcile several divergent reactivity literatures and has implications for internalizing psychopathology, particularly depression. Copyright © 2017 Elsevier Ltd. All rights reserved.
Application of a predictive Bayesian model to environmental accounting.
Anex, R P; Englehardt, J D
2001-03-30
Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.
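One way to represent frequency and severity uncertainty simultaneously, as described above, is a compound-Poisson Monte Carlo draw over annual accident costs. This is only a sketch of that idea; the accident rate and lognormal severity parameters below are invented, not the paper's PCB-transformer figures:

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson count via Knuth's multiplication method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def annual_cost_draw(rate, mu, sigma, rng):
    """One simulated year: a Poisson number of transformer accidents,
    each with an independent lognormal severity (cost in dollars)."""
    n = poisson(rate, rng)
    return sum(rng.lognormvariate(mu, sigma) for _ in range(n))

rng = random.Random(42)
draws = [annual_cost_draw(rate=0.2, mu=11.0, sigma=1.0, rng=rng)
         for _ in range(10000)]
mean_cost = sum(draws) / len(draws)
print(round(mean_cost))
```

The full set of draws approximates the predictive distribution of annual contingent cost, so tail-oriented risk measures (e.g. high quantiles) can be read off the same sample rather than only the mean.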
NASA Technical Reports Server (NTRS)
Waldron, W. L.
1985-01-01
The observed X-ray emission from early-type stars can be explained by the recombination stellar wind model (or base coronal model). The model predicts that the true X-ray luminosity from the base coronal zone can be 10 to 1000 times greater than the observed X-ray luminosity. From the models, scaling laws were found for the true and observed X-ray luminosities. These scaling laws predict that the ratio of the observed X-ray luminosity to the bolometric luminosity is functionally dependent on several stellar parameters. When applied to several other O and B stars, it is found that the values of the predicted ratio agree very well with the observed values.
Developing a novel risk prediction model for severe malarial anemia.
Brickley, E B; Kabyemela, E; Kurtis, J D; Fried, M; Wood, A M; Duffy, P E
2017-01-01
As a pilot study to investigate whether personalized medicine approaches could have value for the reduction of malaria-related mortality in young children, we evaluated questionnaire and biomarker data collected from the Mother Offspring Malaria Study Project birth cohort (Muheza, Tanzania, 2002-2006) at the time of delivery as potential prognostic markers for pediatric severe malarial anemia. Severe malarial anemia, defined here as a Plasmodium falciparum infection accompanied by hemoglobin levels below 50 g/L, is a key manifestation of life-threatening malaria in high transmission regions. For this study sample, a prediction model incorporating cord blood levels of interleukin-1β provided the strongest discrimination of severe malarial anemia risk with a C-index of 0.77 (95% CI 0.70-0.84), whereas a pragmatic model based on sex, gravidity, transmission season at delivery, and bed net possession yielded a more modest C-index of 0.63 (95% CI 0.54-0.71). Although additional studies, ideally incorporating larger sample sizes and higher event per predictor ratios, are needed to externally validate these prediction models, the findings provide proof of concept that risk score-based screening programs could be developed to avert severe malaria cases in early childhood.
THE PANIC ATTACK–PTSD MODEL: APPLICABILITY TO ORTHOSTATIC PANIC AMONG CAMBODIAN REFUGEES
Hinton, Devon E.; Hofmann, Stefan G.; Pitman, Roger K.; Pollack, Mark H.; Barlow, David H.
2009-01-01
This article examines the ability of the “Panic Attack–PTSD Model” to predict how panic attacks are generated and how panic attacks worsen posttraumatic stress disorder (PTSD). The article does so by determining the validity of the Panic Attack–PTSD Model in respect to one type of panic attacks among traumatized Cambodian refugees: orthostatic panic (OP) attacks, that is, panic attacks generated by moving from lying or sitting to standing. Among Cambodian refugees attending a psychiatric clinic, we conducted two studies to explore the validity of the Panic Attack–PTSD Model as applied to OP patients, meaning patients with at least one episode of OP in the previous month. In Study 1, the “Panic Attack–PTSD Model” accurately indicated how OP is seemingly generated: among OP patients (N = 58), orthostasis-associated flashbacks and catastrophic cognitions predicted OP severity beyond a measure of anxious–depressive distress (SCL subscales), and OP severity significantly mediated the effect of anxious–depressive distress on CAPS severity. In Study 2, as predicted by the Panic Attack–PTSD Model, OP had a mediational role in respect to the effect of treatment on PTSD severity: among Cambodian refugees with PTSD and comorbid OP who participated in a CBT study (N = 56), improvement in PTSD severity was partially mediated by improvement in OP severity. PMID:18470741
Anger: cause or consequence of posttraumatic stress? A prospective study of Dutch soldiers.
Lommen, Miriam J J; Engelhard, Iris M; van de Schoot, Rens; van den Hout, Marcel A
2014-04-01
Many studies have shown that individuals with posttraumatic stress disorder (PTSD) experience more anger over time and across situations (i.e., trait anger) than trauma-exposed individuals without PTSD. There is a lack of prospective research, however, that considers anger levels before trauma exposure. The aim of this study was to prospectively assess the relationship between trait anger and PTSD symptoms, with several known risk factors, including baseline symptoms, neuroticism, and stressor severity in the model. Participants were 249 Dutch soldiers tested approximately 2 months before and approximately 2 months and 9 months after their deployment to Afghanistan. Trait anger and PTSD symptom severity were measured at all assessments. Structural equation modeling including cross-lagged effects showed that higher trait anger before deployment predicted higher PTSD symptoms 2 months after deployment (β = .36), with stressor severity and baseline symptoms in the model, but not with neuroticism in the model. Trait anger at 2 months postdeployment did not predict PTSD symptom severity at 9 months, and PTSD symptom severity 2 months postdeployment did not predict subsequent trait anger scores. Findings suggest that trait anger may be a pretrauma vulnerability factor for PTSD symptoms, but does not add variance beyond the effect of neuroticism. Copyright © 2014 International Society for Traumatic Stress Studies.
Wakie, Tewodros; Kumar, Sunil; Senay, Gabriel; Takele, Abera; Lencho, Alemu
2016-01-01
A number of studies have reported the presence of wheat septoria leaf blotch (Septoria tritici; SLB) disease in Ethiopia. However, the environmental factors associated with SLB disease, and areas under risk of SLB disease, have not been studied. Here, we tested the hypothesis that environmental variables can adequately explain observed SLB disease severity levels in West Shewa, Central Ethiopia. Specifically, we identified 50 environmental variables and assessed their relationships with SLB disease severity. Geographically referenced disease severity data were obtained from the field, and linear regression and Boosted Regression Trees (BRT) modeling approaches were used for developing spatial models. Moderate-resolution imaging spectroradiometer (MODIS) derived vegetation indices and land surface temperature (LST) variables highly influenced SLB model predictions. Soil and topographic variables did not sufficiently explain observed SLB disease severity variation in this study. Our results show that wheat growing areas in Central Ethiopia, including highly productive districts, are at risk of SLB disease. The study demonstrates the integration of field data with modeling approaches such as BRT for predicting the spatial patterns of severity of a pathogenic wheat disease in Central Ethiopia. Our results can aid Ethiopia's wheat disease monitoring efforts, while our methods can be replicated for testing related hypotheses elsewhere.
Methods to Improve the Maintenance of the Earth Catalog of Satellites During Severe Solar Storms
NASA Technical Reports Server (NTRS)
Wilkin, Paul G.; Tolson, Robert H.
1998-01-01
The objective of this thesis is to investigate methods to improve the ability to maintain the inventory of orbital elements of Earth satellites during periods of atmospheric disturbance brought on by severe solar activity. Existing techniques do not account for such atmospheric dynamics, resulting in tracking errors of several seconds in predicted crossing time. Two techniques are examined to reduce these tracking errors. First, density predicted from various atmospheric models is fit to the orbital decay rate for a number of satellites. An orbital decay model is then developed that could be used to reduce tracking errors by accounting for atmospheric changes. The second approach uses a Kalman filter to estimate the orbital decay rate of a satellite after every observation; the new information is used to predict the next observation. Results from the first approach demonstrated the feasibility of building an orbital decay model based on predicted atmospheric density: correlation of atmospheric density to orbital decay was as high as 0.88. However, it is clear that contemporary atmospheric models need further improvement in modeling density perturbations in the polar region brought on by solar activity. The second approach resulted in a dramatic reduction in tracking errors for certain satellites during severe solar storms; in the limited cases studied, the reduction in tracking errors ranged from 25 to 79 percent.
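The second approach above can be sketched as a one-state Kalman filter that updates an estimated decay rate after each observation. The noise variances, the "true" decay value, and the synthetic observations below are hypothetical illustration numbers, not thesis data.

```python
import random

# One-state Kalman filter sketch: track an orbital decay rate from noisy
# observations, updating the estimate after every new measurement.

def kalman_track(observations, q=1e-4, r=0.25):
    """q: process-noise variance (slow drift), r: measurement-noise variance."""
    x, p = observations[0], 1.0    # initial state estimate and its variance
    estimates = []
    for z in observations[1:]:
        p += q                     # predict: the decay rate drifts slowly
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # update with the new observation
        p *= (1 - k)
        estimates.append(x)
    return estimates

random.seed(42)
true_decay = -0.5                  # hypothetical decay rate, arbitrary units
obs = [true_decay + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_track(obs)
```

The estimate settles near the underlying rate while individual observations scatter widely; a storm-time filter would additionally inflate `q` when solar indices spike.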
Lee, Tsair-Fwu; Liou, Ming-Hsiang; Huang, Yu-Jie; Chao, Pei-Ju; Ting, Hui-Min; Lee, Hsiao-Yi
2014-01-01
To predict the incidence of moderate-to-severe patient-reported xerostomia among head and neck squamous cell carcinoma (HNSCC) and nasopharyngeal carcinoma (NPC) patients treated with intensity-modulated radiotherapy (IMRT). Multivariable normal tissue complication probability (NTCP) models were developed using quality-of-life questionnaire datasets from 152 patients with HNSCC and 84 patients with NPC. The primary endpoint was defined as moderate-to-severe xerostomia after IMRT. The number of predictive factors for a multivariable logistic regression model was determined using the least absolute shrinkage and selection operator (LASSO) with a bootstrapping technique. LASSO yielded four predictive models that retained the smallest number of factors while preserving predictive value (higher AUC performance). For all models, the mean doses to the contralateral and ipsilateral parotid glands were selected as the most significant dosimetric predictors, followed by different clinical and socio-economic factors (age, financial status, T stage, and education) chosen for the different models. The prediction of moderate-to-severe xerostomia for HNSCC and NPC patients can be improved by using multivariable logistic regression models with the LASSO technique. The predictive model developed in HNSCC cannot be generalized to the NPC cohort treated with IMRT without validation, and vice versa. PMID:25163814
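The LASSO selection step can be sketched with an L1-penalized logistic regression fit by proximal gradient descent (soft-thresholding), the mechanism by which weak predictors are driven to exactly zero. The two-feature dataset and penalty strength below are synthetic, not the study's dosimetric or clinical variables.

```python
import math

# Sketch of L1-penalized (LASSO) logistic regression via proximal gradient
# descent. Feature 0 drives the synthetic outcome; feature 1 is pure noise
# and should be eliminated by the penalty.

def soft_threshold(w, t):
    return math.copysign(max(abs(w) - t, 0.0), w)

def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        w = [soft_threshold(wj - lr * gj, lr * lam) for wj, gj in zip(w, grad)]
    return w

X = [[1.0, 0.3], [0.9, -0.2], [0.8, 0.1],
     [-1.0, 0.2], [-0.9, -0.3], [-0.8, 0.0]]
y = [1, 1, 1, 0, 0, 0]
w = lasso_logistic(X, y)
```

On this data the informative weight grows while the noise weight stays at zero; the paper couples this selection with bootstrapping to stabilize which factors survive.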
Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J
2011-07-01
The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.
Prediction of severe thunderstorms over Sriharikota Island by using the WRF-ARW operational model
NASA Astrophysics Data System (ADS)
Papa Rao, G.; Rajasekhar, M.; Pushpa Saroja, R.; Sreeshna, T.; Rajeevan, M.; Ramakrishna, S. S. V. S.
2016-05-01
Operational short-range prediction of mesoscale thunderstorms at Sriharikota (13.7°N, 80.18°E) has been performed using two nested domains (27 km and 9 km) of the Weather Research and Forecasting-Advanced Research WRF model (WRF-ARW V3.4). A thunderstorm is a mesoscale system with a spatial scale of a few kilometers to a couple of hundred kilometers and a time scale of less than an hour to several hours, producing heavy rain, lightning, thunder, surface wind squalls, and downbursts. A numerical study of thunderstorms at Sriharikota and its neighborhood is discussed using the antecedent thermodynamic stability indices and parameters that are usually favorable for the development of convective instability, based on WRF-ARW model predictions. Instability is a prerequisite for the occurrence of severe weather: the greater the instability, the greater the potential for thunderstorms. In the present study, the K Index, Total Totals Index (TTI), Convective Available Potential Energy (CAPE), Convective Inhibition Energy (CINE), Lifted Index (LI), and Precipitable Water (PW) are the instability indices used for the short-range prediction of thunderstorms. We attempt to estimate the predictive skill of WRF-ARW by diagnosing three thunderstorms that occurred during the late evening to late night of 31 July, 20 September, and 2 October 2015 over Sriharikota Island, validated against local Electric Field Mill (EFM) data, rainfall observations, and Chennai Doppler Weather Radar products. The model-predicted thermodynamic indices (CAPE, CINE, K Index, LI, TTI, and PW) over Sriharikota act as good indicators of severe thunderstorm activity.
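Two of the simplest indices named above can be computed directly from mandatory-level sounding temperatures. The formulas are the standard definitions of the K Index and Total Totals Index; the sounding values below are invented for illustration.

```python
# K Index and Total Totals Index from mandatory-level sounding data
# (all temperatures in degrees C).

def k_index(t850, td850, t700, td700, t500):
    """K Index: warm/moist low levels and cold 500 hPa raise the value."""
    return (t850 - t500) + td850 - (t700 - td700)

def total_totals(t850, td850, t500):
    """Total Totals Index: vertical totals plus cross totals."""
    return (t850 - t500) + (td850 - t500)

# Hypothetical pre-storm sounding: warm, moist low levels; cold mid levels.
ki = k_index(t850=24.0, td850=20.0, t700=10.0, td700=8.0, t500=-8.0)
tt = total_totals(t850=24.0, td850=20.0, t500=-8.0)
```

Commonly cited rough guidance (K above about 35, TT above about 50 indicating increasing thunderstorm potential) varies by region; the numbers here only show the arithmetic.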
AGDRIFT: A MODEL FOR ESTIMATING NEAR-FIELD SPRAY DRIFT FROM AERIAL APPLICATIONS
The aerial spray prediction model AgDRIFT(R) embodies the computational engine found in the near-wake Lagrangian model AGricultural DISPersal (AGDISP) but with several important features added that improve the speed and accuracy of its predictions. This article summarizes those c...
Low, Yen S.; Sedykh, Alexander; Rusyn, Ivan; Tropsha, Alexander
2017-01-01
Cheminformatics approaches such as Quantitative Structure Activity Relationship (QSAR) modeling have been used traditionally for predicting chemical toxicity. In recent years, high throughput biological assays have been increasingly employed to elucidate mechanisms of chemical toxicity and predict toxic effects of chemicals in vivo. The data generated in such assays can be considered as biological descriptors of chemicals that can be combined with molecular descriptors and employed in QSAR modeling to improve the accuracy of toxicity prediction. In this review, we discuss several approaches for integrating chemical and biological data for predicting biological effects of chemicals in vivo and compare their performance across several data sets. We conclude that while no method consistently shows superior performance, the integrative approaches rank consistently among the best yet offer enriched interpretation of models over those built with either chemical or biological data alone. We discuss the outlook for such interdisciplinary methods and offer recommendations to further improve the accuracy and interpretability of computational models that predict chemical toxicity. PMID:24805064
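The hybrid-descriptor idea discussed above can be sketched by concatenating a chemical and a biological descriptor vector per compound and predicting with a simple nearest-neighbor read-across. All descriptor values, labels, and the query compound below are invented.

```python
# Sketch of combining chemical and biological descriptors for toxicity
# prediction: concatenate the two blocks, then classify by k-nearest
# neighbors in the joint descriptor space.

def hybrid(chem, bio):
    """Concatenate the chemical and biological descriptor blocks."""
    return chem + bio

def predict_knn(train, labels, query, k=3):
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), lab)
        for x, lab in zip(train, labels)
    )
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)

# Toxic compounds cluster in the biological-assay block (last two values).
train = [hybrid([0.10, 0.20], [0.90, 0.80]), hybrid([0.20, 0.10], [0.85, 0.90]),
         hybrid([0.15, 0.25], [0.95, 0.85]),
         hybrid([0.12, 0.22], [0.10, 0.20]), hybrid([0.18, 0.15], [0.05, 0.10]),
         hybrid([0.20, 0.20], [0.15, 0.05])]
labels = ["toxic", "toxic", "toxic", "nontoxic", "nontoxic", "nontoxic"]
pred = predict_knn(train, labels, hybrid([0.16, 0.20], [0.90, 0.90]))
```

Here the biological block carries the signal the chemical block lacks, which is the intuition behind the review's finding that integrative models enrich interpretation.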
NASA Astrophysics Data System (ADS)
Liu, Zhenchen; Lu, Guihua; He, Hai; Wu, Zhiyong; He, Jian
2018-01-01
Reliable drought prediction is fundamental for water resource managers to develop and implement drought mitigation measures. Considering that drought development is closely related to the spatial-temporal evolution of large-scale circulation patterns, we developed a conceptual prediction model of seasonal drought processes based on atmospheric and oceanic standardized anomalies (SAs). Empirical orthogonal function (EOF) analysis is first applied to drought-related SAs at 200 and 500 hPa geopotential height (HGT) and sea surface temperature (SST). Subsequently, SA-based predictors are built based on the spatial pattern of the first EOF modes. This drought prediction model is essentially the synchronous statistical relationship between 90-day-accumulated atmospheric-oceanic SA-based predictors and SPI3 (3-month standardized precipitation index), calibrated using a simple stepwise regression method. Predictor computation is based on forecast atmospheric-oceanic products retrieved from the NCEP Climate Forecast System Version 2 (CFSv2), indicating the lead time of the model depends on that of CFSv2. The model can make seamless drought predictions for operational use after a year-to-year calibration. Model application to four recent severe regional drought processes in China indicates its good performance in predicting seasonal drought development, despite its weakness in predicting drought severity. Overall, the model can be a worthy reference for seasonal water resource management in China.
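The EOF step above can be sketched with a power iteration that extracts the leading spatial pattern from a covariance matrix. A real analysis would use SVD on gridded HGT/SST anomaly fields; the three-point "field" and its anomalies below are synthetic.

```python
# Leading-EOF sketch: power iteration on the spatial covariance matrix of a
# tiny synthetic anomaly field (rows = time samples, columns = grid points).

def leading_eof(anomalies, iters=200):
    n_space = len(anomalies[0])
    # Spatial covariance matrix averaged over time samples.
    cov = [[sum(a[i] * a[j] for a in anomalies) / len(anomalies)
            for j in range(n_space)] for i in range(n_space)]
    v = [1.0] * n_space
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n_space)) for i in range(n_space)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points 0 and 1 vary together (the "pattern"); point 2 is weak noise.
anoms = [[1.0, 1.0, 0.1], [-1.0, -1.0, 0.0],
         [0.8, 0.9, -0.1], [-0.9, -0.8, 0.05]]
eof1 = leading_eof(anoms)
```

The first mode loads almost equally on the two co-varying points and nearly ignores the noisy one; projecting forecast anomalies onto such patterns yields the SA-based predictors the model regresses against SPI3.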
Neonatal Pulmonary MRI of Bronchopulmonary Dysplasia Predicts Short-term Clinical Outcomes.
Higano, Nara S; Spielberg, David R; Fleck, Robert J; Schapiro, Andrew H; Walkup, Laura L; Hahn, Andrew D; Tkach, Jean A; Kingma, Paul S; Merhar, Stephanie L; Fain, Sean B; Woods, Jason C
2018-05-23
Bronchopulmonary dysplasia (BPD) is a serious neonatal pulmonary condition associated with premature birth, but the underlying parenchymal disease and trajectory are poorly characterized. The current NICHD/NHLBI definition of BPD severity is based on degree of prematurity and extent of oxygen requirement. However, no clear link exists between initial diagnosis and clinical outcomes. We hypothesized that magnetic resonance imaging (MRI) of structural parenchymal abnormalities will correlate with NICHD-defined BPD disease severity and predict short-term respiratory outcomes. Forty-two neonates (20 severe BPD, 6 moderate, 7 mild, 9 non-BPD controls; 40±3 weeks post-menstrual age) underwent quiet-breathing structural pulmonary MRI (ultrashort echo-time and gradient echo) in a NICU-sited, neonatal-sized 1.5T scanner, without sedation or respiratory support unless already clinically prescribed. Disease severity was scored independently by two radiologists. Mean scores were compared to clinical severity and short-term respiratory outcomes. Outcomes were predicted using univariate and multivariable models including clinical data and scores. MRI scores significantly correlated with severities and predicted respiratory support at NICU discharge (P<0.0001). In multivariable models, MRI scores were by far the strongest predictor of respiratory support duration over clinical data, including birth weight and gestational age. Notably, NICHD severity level was not predictive of discharge support. Quiet-breathing neonatal pulmonary MRI can independently assess structural abnormalities of BPD, describe disease severity, and predict short-term outcomes more accurately than any individual standard clinical measure. Importantly, this non-ionizing technique can be implemented to phenotype disease and has potential to serially assess efficacy of individualized therapies.
Sapak, Z; Salam, M U; Minchinton, E J; MacManus, G P V; Joyce, D C; Galea, V J
2017-09-01
A weather-based simulation model, called Powdery Mildew of Cucurbits Simulation (POMICS), was constructed to predict fungicide application scheduling to manage powdery mildew of cucurbits. The model was developed on the principle that conditions favorable for Podosphaera xanthii, a causal pathogen of this crop disease, generate a number of infection cycles in a single growing season. The model consists of two components that (i) simulate the disease progression of P. xanthii in secondary infection cycles under natural conditions and (ii) predict the disease severity with application of fungicides at any recurrent disease cycles. The underlying environmental factors associated with P. xanthii infection were quantified from laboratory and field studies, and also gathered from literature. The performance of the POMICS model when validated with two datasets of uncontrolled natural infection was good (the mean difference between simulated and observed disease severity on a scale of 0 to 5 was 0.02 and 0.05). In simulations, POMICS was able to predict high- and low-risk disease alerts. Furthermore, the predicted disease severity was responsive to the number of fungicide applications. Such responsiveness indicates that the model has the potential to be used as a tool to guide the scheduling of judicious fungicide applications.
Cheng, Wen; Gill, Gurdiljot Singh; Sakrani, Taha; Dasu, Mohan; Zhou, Jiao
2017-11-01
Motorcycle crashes constitute a very high proportion of overall motor vehicle fatalities in the United States, and many studies have examined the influential factors under various conditions. However, research on the impact of weather conditions on motorcycle crash severity is not well documented. In this study, we examined the impact of weather conditions on motorcycle crash injuries at four different severity levels using San Francisco motorcycle crash injury data. Five models were developed using a full Bayesian formulation accounting for different correlations commonly seen in crash data and then compared for fitness and performance. Results indicate that the models with serial and severity variations of parameters had superior fit and the capability of accurate crash prediction. The inferences from the parameter estimates of the five models were: an increase in air temperature reduced the probability of a fatal crash but had the reverse impact on crashes of other severity levels; air humidity was not observed to have a predictable or strong impact on crashes; and the occurrence of rainfall decreased the probability of crashes at all severity levels. Transportation agencies might benefit from these results to improve road safety by providing motorcyclists with information regarding the risk of certain crash severity levels under particular weather conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
REVIEW: Widespread access to predictive models in the motor system: a short review
NASA Astrophysics Data System (ADS)
Davidson, Paul R.; Wolpert, Daniel M.
2005-09-01
Recent behavioural and computational studies suggest that access to internal predictive models of arm and object dynamics is widespread in the sensorimotor system. Several systems, including those responsible for oculomotor and skeletomotor control, perceptual processing, postural control and mental imagery, are able to access predictions of the motion of the arm. A capacity to make and use predictions of object dynamics is similarly widespread. Here, we review recent studies looking at the predictive capacity of the central nervous system which reveal pervasive access to forward models of the environment.
Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell
2011-01-01
Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. The validation test hardware provided a direct measurement of net heat input for comparison to predicted values. The predicted value of net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.
Identifying the location of fire refuges in wet forest ecosystems.
Berry, Laurence E; Driscoll, Don A; Stein, John A; Blanchard, Wade; Banks, Sam C; Bradstock, Ross A; Lindenmayer, David B
2015-12-01
The increasing frequency of large, high-severity fires threatens the survival of old-growth specialist fauna in fire-prone forests. Within topographically diverse montane forests, areas that experience less severe or fewer fires compared with those prevailing in the landscape may present unique resource opportunities enabling old-growth specialist fauna to survive. Statistical landscape models that identify the extent and distribution of potential fire refuges may assist land managers to incorporate these areas into relevant biodiversity conservation strategies. We used a case study in an Australian wet montane forest to establish how predictive fire simulation models can be interpreted as management tools to identify potential fire refuges. We examined the relationship between the probability of fire refuge occurrence as predicted by an existing fire refuge model and fire severity experienced during a large wildfire. We also examined the extent to which local fire severity was influenced by fire severity in the surrounding landscape. We used a combination of statistical approaches, including generalized linear modeling, variogram analysis, and receiver operating characteristics and area under the curve analysis (ROC AUC). We found that the amount of unburned habitat and the factors influencing the retention and location of fire refuges varied with fire conditions. Under extreme fire conditions, the distribution of fire refuges was limited to only extremely sheltered, fire-resistant regions of the landscape. During extreme fire conditions, fire severity patterns were largely determined by stochastic factors that could not be predicted by the model. When fire conditions were moderate, physical landscape properties appeared to mediate fire severity distribution. Our study demonstrates that land managers can employ predictive landscape fire models to identify the broader climatic and spatial domain within which fire refuges are likely to be present. 
It is essential that within these envelopes, forest is protected from logging, roads, and other developments so that the ecological processes related to the establishment and subsequent use of fire refuges are maintained.
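The ROC AUC used to evaluate the refuge model can be computed directly from its rank-statistic (Mann-Whitney) definition: the probability that a randomly chosen positive/negative pair is ordered correctly. The model scores below are invented for illustration.

```python
# ROC AUC from its pairwise-ranking definition (ties count half).

def roc_auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) score pairs ranked correctly."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical refuge-model probabilities for observed refuge vs burned cells.
refuge_scores = [0.9, 0.8, 0.75, 0.6]
burned_scores = [0.4, 0.65, 0.55, 0.2]
auc = roc_auc(refuge_scores, burned_scores)
```

One misordered pair out of sixteen gives an AUC of 15/16; an AUC near 0.5 under extreme fire conditions would reflect the stochastic severity patterns the study describes.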
The NASA Severe Thunderstorm Observations and Regional Modeling (NASA STORM) Project
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Gatlin, Patrick N.; Lang, Timothy J.; Srikishen, Jayanthi; Case, Jonathan L.; Molthan, Andrew L.; Zavodsky, Bradley T.; Bailey, Jeffrey; Blakeslee, Richard J.; Jedlovec, Gary J.
2016-01-01
The NASA Severe Thunderstorm Observations and Regional Modeling (NASA STORM) project enhanced NASA's severe weather research capabilities, building upon existing Earth science expertise at NASA Marshall Space Flight Center (MSFC). During this project, MSFC extended NASA's ground-based lightning detection capacity to include a readily deployable lightning mapping array (LMA). NASA STORM also enabled NASA's Short-term Prediction Research and Transition (SPoRT) center to add convection-allowing ensemble modeling to its portfolio of regional numerical weather prediction (NWP) capabilities. As part of NASA STORM, MSFC developed new open-source capabilities for analyzing and displaying weather radar observations integrated from both research and operational networks. These accomplishments are a step toward enhancing NASA's capabilities for studying severe weather and position the agency for future NASA-related severe storm field campaigns.
Gartner, J.E.; Cannon, S.H.; Santi, P.M.; deWolfe, V.G.
2008-01-01
Recently burned basins frequently produce debris flows in response to moderate-to-severe rainfall. Post-fire hazard assessments of debris flows are most useful when they predict the volume of material that may flow out of a burned basin. This study develops a set of empirically-based models that predict potential volumes of wildfire-related debris flows in different regions and geologic settings. The models were developed using data from 53 recently burned basins in Colorado, Utah and California. The volumes of debris flows in these basins were determined by either measuring the volume of material eroded from the channels, or by estimating the amount of material removed from debris retention basins. For each basin, independent variables thought to affect the volume of the debris flow were determined. These variables include measures of basin morphology, basin areas burned at different severities, soil material properties, rock type, and rainfall amounts and intensities for storms triggering debris flows. Using these data, multiple regression analyses were used to create separate predictive models for volumes of debris flows generated by burned basins in six separate regions or settings, including the western U.S., southern California, the Rocky Mountain region, and basins underlain by sedimentary, metamorphic and granitic rocks. An evaluation of these models indicated that the best model (the Western U.S. model) explains 83% of the variability in the volumes of the debris flows, and includes variables that describe the basin area with slopes greater than or equal to 30%, the basin area burned at moderate and high severity, and total storm rainfall. This model was independently validated by comparing volumes of debris flows reported in the literature, to volumes estimated using the model. Eighty-seven percent of the reported volumes were within two residual standard errors of the volumes predicted using the model. 
This model is an improvement over previous models in that it includes a measure of burn severity and an estimate of modeling errors. The application of this model, in conjunction with models for the probability of debris flows, will enable more complete and rapid assessments of debris flow hazards following wildfire.
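The multiple-regression approach above can be sketched as ordinary least squares on ln(volume) with basin and rainfall predictors, solved via the normal equations. The tiny solver, the synthetic "basins", and the coefficients below are illustrations, not the published Western U.S. model.

```python
# OLS sketch for a debris-flow volume model: ln(volume) regressed on an
# intercept, a burned steep-area term, and a storm-rainfall term.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """beta = (X'X)^-1 X'y via the normal equations."""
    d = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(d)] for i in range(d)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(d)]
    return solve(XtX, Xty)

# Rows: [intercept, burned steep area (km^2), storm rainfall (mm)].
# Responses generated from known coefficients so the fit can be checked.
true_beta = [2.0, 0.5, 0.03]
basins = [[1.0, a, r] for a in (1.0, 3.0, 5.0, 8.0) for r in (10.0, 30.0, 60.0)]
ln_vol = [sum(b * x for b, x in zip(true_beta, row)) for row in basins]
beta = ols(basins, ln_vol)
```

On noiseless synthetic data the fit recovers the generating coefficients exactly, which is the basic check behind reporting residual standard errors for real basins.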
Jin, H; Wu, S; Vidyanti, I; Di Capua, P; Wu, B
2015-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". Depression is a common and often undiagnosed condition for patients with diabetes. It is also a condition that significantly impacts healthcare outcomes, use, and cost, as well as elevating suicide risk. Therefore, a model to predict depression among diabetes patients is a promising and valuable tool for providers to proactively assess depressive symptoms and identify those with depression. This study seeks to develop a generalized multilevel regression model, using a longitudinal data set from a recent large-scale clinical trial, to predict depression severity and the presence of major depression among patients with diabetes. Severity of depression was measured by the Patient Health Questionnaire PHQ-9 score. Predictors were selected from 29 candidate factors to develop a 2-level Poisson regression model that can make population-average predictions for all patients and subject-specific predictions for individual patients with historical records. Newly obtained patient records can be incorporated with historical records to update the prediction model. Root-mean-square error (RMSE) was used to evaluate the predictive accuracy of PHQ-9 scores. The study also evaluated the ability of the predicted PHQ-9 scores to classify patients as having major depression. Two time-invariant and 10 time-varying predictors were selected for the model. Incorporating historical records and using them to update the model may improve both the predictive accuracy of PHQ-9 scores and the classification ability of the predicted scores. Subject-specific predictions (for individual patients with historical records) achieved an RMSE of about 4 and an area under the receiver operating characteristic (ROC) curve of about 0.9, outperforming population-average predictions.
The study developed a generalized multilevel regression model to predict depression and demonstrated that using generalized multilevel regression based on longitudinal patient records can achieve high predictive ability.
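The Poisson-regression core of such a model can be sketched as a single-predictor log-linear fit by gradient ascent on the Poisson log-likelihood. The "count vs predictor" data below are synthetic, not patient records, and the multilevel (2-level) structure is omitted.

```python
import math

# Poisson regression sketch: fit log E[y] = a + b*x by gradient ascent on
# the Poisson log-likelihood, using counts generated from a known rate.

def fit_poisson(x, y, lr=0.01, iters=5000):
    a, b = 0.0, 0.0
    for _ in range(iters):
        ga = sum(yi - math.exp(a + b * xi) for xi, yi in zip(x, y))
        gb = sum((yi - math.exp(a + b * xi)) * xi for xi, yi in zip(x, y))
        a += lr * ga / len(x)
        b += lr * gb / len(x)
    return a, b

# Counts from lambda = exp(0.5 + 0.8*x), rounded to integers.
x = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
y = [round(math.exp(0.5 + 0.8 * xi)) for xi in x]
a, b = fit_poisson(x, y)
```

The recovered slope and intercept land close to the generating values; a multilevel version adds patient-specific random effects on top of these fixed effects.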
PTSITE--a new method of site evaluation for loblolly pine: model development and user's guide
Constance A. Harrington
1991-01-01
A model, named PTSITE, was developed to predict site index for loblolly pine based on soil characteristics, site location on the landscape, and land history. The model was tested with data from several sources and judged to predict site index within ±4 feet (P
USDA-ARS?s Scientific Manuscript database
Predictive models have been developed in several major grape growing regions to correlate environmental conditions to Erysiphe necator ascospore release; however, these models may not accurately predict ascospore release in other viticulture regions with differing climatic conditions. To assess asco...
NASA Technical Reports Server (NTRS)
Carlson, L. A.; Horn, W. J.
1981-01-01
A computer model for predicting the trajectory and thermal behavior of zero-pressure high-altitude balloons was developed. In accord with flight data, the model permits radiative emission and absorption of the lifting gas and daytime gas temperatures above that of the balloon film. It also includes ballasting, venting, and valving. Predictions obtained with the model are compared with data from several flights, and newly discovered features are discussed.
NASA Astrophysics Data System (ADS)
Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda
2018-05-01
This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of vertically integrated several modules, including powder flow model, molten pool model, microstructure prediction model and residual stress model, which can be used for predicting mechanical properties of additively manufactured parts by directed energy deposition processes with blown powder as well as other additive manufacturing processes. Critical governing equations of each model and how various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation results are presented to illustrate the capabilities and fidelity of the models. The good correlations with experimental results prove the integrated models can be used to design the metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.
Seasonal predictions for wildland fire severity
Shyh-Chin Chen; Haiganoush Preisler; Francis Fujioka; John W. Benoit; John O. Roads
2009-01-01
The National Fire Danger Rating System (NFDRS) indices deduced from the monthly to seasonal predictions of a meteorological climate model at 50-km grid space from January 1998 through December 2003 were used in conjunction with a probability model to predict the expected number of fire occurrences and large fires over the U.S. West. The short-term climate forecasts are...
ERIC Educational Resources Information Center
Owen, Steven V.; Feldhusen, John F.
This study compares the effectiveness of three models of multivariate prediction for academic success in identifying the criterion variance of achievement in nursing education. The first model involves the use of an optimum set of predictors and one equation derived from a regression analysis on first semester grade average in predicting the…
Kontodimopoulos, Nick; Bozios, Panagiotis; Yfantopoulos, John; Niakas, Dimitris
2013-04-01
The purpose of this methodological study was to provide insight into the under-addressed issue of the longitudinal predictive ability of mapping models. Post-intervention predicted and reported utilities were compared, and the effect of disease severity on the observed differences was examined. A cohort of 120 rheumatoid arthritis (RA) patients (60.0% female, mean age 59.0) embarking on therapy with biological agents completed the Modified Health Assessment Questionnaire (MHAQ) and the EQ-5D at baseline, and at 3, 6 and 12 months post-intervention. OLS regression produced a mapping equation to estimate post-intervention EQ-5D utilities from baseline MHAQ data. Predicted and reported utilities were compared with a t test, and the prediction error was modeled, using fixed effects, in terms of covariates such as age, gender, time, disease duration, treatment, RF, DAS28 score, predicted and reported EQ-5D. The OLS model (RMSE = 0.207, R(2) = 45.2%) consistently underestimated future utilities, with a mean prediction error of 6.5%. Mean absolute differences between reported and predicted EQ-5D utilities at 3, 6 and 12 months exceeded the typically reported MID of the EQ-5D (0.03). According to the fixed-effects model, time, lower predicted EQ-5D and higher DAS28 scores had a significant impact on prediction errors, which appeared increasingly negative for lower reported EQ-5D scores, i.e., predicted utilities tended to be lower than reported ones in more severe health states. This study builds upon existing research having demonstrated the potential usefulness of mapping disease-specific instruments onto utility measures. The specific issue of longitudinal validity is addressed, as mapping models derived from baseline patients need to be validated on post-therapy samples. 
The underestimation of post-treatment utilities in the present study, at least in more severe patients, warrants further research before it is prudent to conduct cost-utility analyses in the context of RA by means of the MHAQ alone.
Ehring, Thomas; Ehlers, Anke; Glucksman, Edward
2008-01-01
The study investigated the power of theoretically derived cognitive variables to predict posttraumatic stress disorder (PTSD), travel phobia, and depression following injury in a motor vehicle accident (MVA). MVA survivors (N = 147) were assessed at the emergency department on the day of their accident and 2 weeks, 1 month, 3 months, and 6 months later. Diagnoses were established with the Structured Clinical Interview for DSM–IV. Predictors included initial symptom severities; variables established as predictors of PTSD in E. J. Ozer, S. R. Best, T. L. Lipsey, and D. S. Weiss's (2003) meta-analysis; and variables derived from cognitive models of PTSD, phobia, and depression. Results of nonparametric multiple regression analyses showed that the cognitive variables predicted subsequent PTSD and depression severities over and above what could be predicted from initial symptom levels. They also showed greater predictive power than the established predictors, although the latter showed similar effect sizes as in the meta-analysis. In addition, the predictors derived from cognitive models of PTSD and depression were disorder-specific. The results support the role of cognitive factors in the maintenance of emotional disorders following trauma. PMID:18377119
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
Predictability and Coupled Dynamics of MJO During DYNAMO
2013-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. We develop a Linear Inverse Model (LIM) for MJO predictions and apply it in retrospective cross-validated forecast mode to the DYNAMO time period. APPROACH: We are working as a team to study MJO dynamics and predictability using several models as team members of the ONR DRI associated with the DYNAMO experiment.
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to other more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focussing on the question of absorption of solar radiation by gases and aerosols.
Dallmann, André; Ince, Ibrahim; Coboeken, Katrin; Eissing, Thomas; Hempel, Georg
2017-09-18
Physiologically based pharmacokinetic modeling is considered a valuable tool for predicting pharmacokinetic changes in pregnancy to subsequently guide in-vivo pharmacokinetic trials in pregnant women. The objective of this study was to extend and verify a previously developed physiologically based pharmacokinetic model for pregnant women for the prediction of pharmacokinetics of drugs metabolized via several cytochrome P450 enzymes. Quantitative information on gestation-specific changes in enzyme activity available in the literature was incorporated in a pregnancy physiologically based pharmacokinetic model and the pharmacokinetics of eight drugs metabolized via one or multiple cytochrome P450 enzymes was predicted. The tested drugs were caffeine, midazolam, nifedipine, metoprolol, ondansetron, granisetron, diazepam, and metronidazole. Pharmacokinetic predictions were evaluated by comparison with in-vivo pharmacokinetic data obtained from the literature. The pregnancy physiologically based pharmacokinetic model successfully predicted the pharmacokinetics of all tested drugs. The observed pregnancy-induced pharmacokinetic changes were qualitatively and quantitatively reasonably well predicted for all drugs. Ninety-seven percent of the mean plasma concentrations predicted in pregnant women fell within a twofold error range and 63% within a 1.25-fold error range. For all drugs, the predicted area under the concentration-time curve was within a 1.25-fold error range. The presented pregnancy physiologically based pharmacokinetic model can quantitatively predict the pharmacokinetics of drugs that are metabolized via one or multiple cytochrome P450 enzymes by integrating prior knowledge of the pregnancy-related effect on these enzymes. 
This pregnancy physiologically based pharmacokinetic model may thus be used to identify potential exposure changes in pregnant women a priori and to eventually support informed decision making when clinical trials are designed in this special population.
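The fold-error criterion used to evaluate the PBPK predictions can be illustrated directly; the observed and predicted concentrations below are hypothetical, not values from the study:

```python
import numpy as np

# A prediction is "within n-fold" if predicted/observed lies in [1/n, n]:
# here the two-fold range [0.5, 2] and the stricter 1.25-fold range [0.8, 1.25].
observed = np.array([1.0, 2.5, 0.8, 4.0, 1.2])       # hypothetical concentrations
predicted = np.array([1.1, 1.8, 0.9, 4.5, 2.6])

ratio = predicted / observed
within_2fold = (ratio >= 0.5) & (ratio <= 2.0)
within_125fold = (ratio >= 0.8) & (ratio <= 1.25)

# Fraction of predictions meeting each criterion
print(within_2fold.mean(), within_125fold.mean())
```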
Predicting Fire Severity and Hydrogeomorphic Effects for Wildland Fire Decision Support
NASA Astrophysics Data System (ADS)
Hyde, K.; Woods, S. W.; Calkin, D.; Ryan, K.; Keane, R.
2007-12-01
The Wildland Fire Decision Support System (WFDSS) uses the Fire Spread Probability (FSPro) model to predict the spatial extent of fire, and to assess values-at-risk within probable spread zones. This information is used to support Appropriate Management Response (AMR), which involves decision making regarding fire-fighter deployment, fire suppression requirements, and identification of areas where fire may be safely permitted to take its course. Current WFDSS assessments are generally limited to a binary prediction of whether or not a fire will reach a given location and an assessment of the infrastructure which may be damaged or destroyed by fire. However, an emerging challenge is to expand the capabilities of WFDSS so that it also estimates the probable fire severity, and hence the effect on soil, vegetation and on hydrologic and geomorphic processes such as runoff and soil erosion. We present a conceptual framework within which derivatives of predictive fire modelling are used to predict impacts upon vegetation and soil, from which fire severity and probable post-fire watershed response can be inferred, before a fire actually occurs. Fire severity predictions are validated using Burned Area Reflectance Classification imagery. Recent tests indicate that satellite derived BARC images are a simple and effective means to predict post-fire erosion response based on relative vegetation disturbance. A fire severity prediction which reasonably approximates a BARC image may therefore be used to assess post-fire erosion and flood potential before fire reaches an area. This information may provide a new avenue of reliable support for fire management decisions.
Dynamic model for predicting growth of Salmonella spp. in ground sterile pork
USDA-ARS?s Scientific Manuscript database
A predictive model for Salmonella spp. growth in ground pork was developed and validated using kinetic growth data. Salmonella spp. kinetic growth data in ground pork were collected at several isothermal conditions (between 10 and 45C) and the Baranyi model was fitted to describe the growth at each temper...
The effects of hillslope-scale variability in burn severity on post-fire sediment delivery
NASA Astrophysics Data System (ADS)
Quinn, Dylan; Brooks, Erin; Dobre, Mariana; Lew, Roger; Robichaud, Peter; Elliot, William
2017-04-01
With the increasing frequency of wildfire and the costs associated with managing the burned landscapes, there is an increasing need for decision support tools that can be used to assess the effectiveness of targeted post-fire management strategies. The susceptibility of landscapes to post-fire soil erosion and runoff has been closely linked with the severity of the wildfire. Wildfire severity maps are often spatially complex and largely dependent upon total vegetative biomass, fuel moisture patterns, direction of burn, wind patterns, and other factors. The decision to apply targeted treatment to a specific landscape, and the amount of resources dedicated to treating it, should ideally be based on the potential for excessive sediment delivery from a particular hillslope. Recent work has suggested that the delivery of sediment to a downstream water body from a hillslope is highly influenced by the distribution of wildfire severity across the hillslope, and that models that do not capture this hillslope-scale variability will not provide reliable sediment and runoff predictions. In this project we compare detailed (10 m) grid-based model predictions to lumped and semi-lumped hillslope approaches in which hydrologic parameters are fixed based on hillslope-scale averaging techniques. We use the watershed-scale version of the process-based Water Erosion Prediction Project (WEPP) model and its GIS interface, GeoWEPP, to simulate fire impacts on runoff and sediment delivery using burn severity maps at a watershed scale. The flowpath option in WEPP allows the most detailed representation of wildfire severity patterns (10 m), but depending upon the size of the watershed, simulations are time-consuming and computationally demanding. The hillslope version is a simpler approach that assigns wildfire severity based on the severity level assigned to the majority of the hillslope area.
In the third approach we divided hillslopes into overland flow elements (OFEs) and assigned representative input values on a finer scale within single hillslopes. Each of these approaches was compared for several large wildfires in the mountainous ranges of central Idaho, USA. Simulations indicated that predictions based on lumped hillslope modeling over-predict sediment transport by as much as 4.8x in areas of high to moderate burn severity. Annual sediment yield within the simulated watersheds ranged from 1.7 tonnes/ha to 6.8 tonnes/ha. The disparity between simulated sediment yields with these approaches was attributed to the hydrologic connectivity of the burn patterns within the hillslope. High infiltration rates between high-severity sites can greatly reduce the delivery of sediment. This research underlines the importance of accurately representing soil burn severity along individual hillslopes in hydrologic models and the need for modeling approaches to capture this variability in order to reliably simulate soil erosion.
Designing and benchmarking the MULTICOM protein structure prediction system
2013-01-01
Background Predicting protein structure from sequence is one of the most significant and challenging problems in bioinformatics. Numerous bioinformatics techniques and tools have been developed to tackle almost every aspect of protein structure prediction ranging from structural feature prediction, template identification and query-template alignment to structure sampling, model quality assessment, and model refinement. How to synergistically select, integrate and improve the strengths of the complementary techniques at each prediction stage and build a high-performance system is becoming a critical issue for constructing a successful, competitive protein structure predictor. Results Over the past several years, we have constructed a standalone protein structure prediction system MULTICOM that combines multiple sources of information and complementary methods at all five stages of the protein structure prediction process including template identification, template combination, model generation, model assessment, and model refinement. The system was blindly tested during the ninth Critical Assessment of Techniques for Protein Structure Prediction (CASP9) in 2010 and yielded very good performance. In addition to studying the overall performance on the CASP9 benchmark, we thoroughly investigated the performance and contributions of each component at each stage of prediction. Conclusions Our comprehensive and comparative study not only provides useful and practical insights about how to select, improve, and integrate complementary methods to build a cutting-edge protein structure prediction system but also identifies a few new sources of information that may help improve the design of a protein structure prediction system. Several components used in the MULTICOM system are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:23442819
NASA Technical Reports Server (NTRS)
Burris, John; McGee, Thomas J.; Hoegy, Walt; Lait, Leslie; Sumnicht, Grant; Twigg, Larry; Heaps, William
2000-01-01
Temperature profiles acquired by Goddard Space Flight Center's AROTEL lidar during the SOLVE mission onboard NASA's DC-8 are compared with predicted values from several atmospheric models (DAO, NCEP and UKMO). The variability in the differences between measured and calculated temperature fields was approximately 5 K. Retrieved temperatures within the polar vortex showed large regions that were significantly colder than predicted by the atmospheric models.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.
2004-01-01
Real-time prediction of environments predisposed to producing moderate-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system designed for the prediction of environments predisposed to severe aviation turbulence, and present numerous examples of its utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16,000 to 46,000 ft in 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid intervals and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated twice a day.
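One of the turbulence diagnostics listed above, the gradient Richardson number, can be computed from finite differences between two levels; the layer values below are illustrative, not MASS model output:

```python
# Gradient Richardson number Ri = (g / theta) * (dtheta/dz) / shear^2,
# the ratio of static stability to vertical wind shear. Low Ri (commonly
# below ~0.25) indicates a layer prone to shear-driven turbulence.
def richardson_number(theta_low, theta_high, u_low, u_high, v_low, v_high, dz):
    g = 9.81                                   # gravitational acceleration, m/s^2
    theta_mean = 0.5 * (theta_low + theta_high)
    dtheta_dz = (theta_high - theta_low) / dz
    shear2 = ((u_high - u_low) / dz) ** 2 + ((v_high - v_low) / dz) ** 2
    return (g / theta_mean) * dtheta_dz / shear2

# Strong shear across a 500 m layer with weak stability -> small Ri
ri = richardson_number(theta_low=310.0, theta_high=311.0,
                       u_low=30.0, u_high=45.0, v_low=5.0, v_high=10.0,
                       dz=500.0)
print(round(ri, 3))
```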
Faria, Melissa; Prats, Eva; Padrós, Francesc; Soares, Amadeu M V M; Raldúa, Demetrio
2017-04-01
Acute organophosphorus (OP) intoxication is a worldwide clinical and public health problem. In addition to cholinergic crisis, neurodegeneration and brain damage are hallmarks of the severe form of this toxidrome. Recently, we generated a chemical model of severe acute OP intoxication in zebrafish that is characterized by altered head morphology and brain degeneration. The pathophysiological pathways resulting in brain toxicity in this model are similar to those described in humans. The aim of this study was to assess the predictive power of this zebrafish model by testing the effect of a panel of drugs that provide protection in mammalian models. The selected drugs included "standard therapy" drugs (atropine and pralidoxime), reversible acetylcholinesterase inhibitors (huperzine A, galantamine, physostigmine and pyridostigmine), N-methyl-D-aspartate (NMDA) receptor antagonists (MK-801 and memantine), dual-function NMDA receptor and acetylcholine receptor antagonists (caramiphen and benactyzine) and anti-inflammatory drugs (dexamethasone and ibuprofen). The effects of these drugs on zebrafish survival and the prevalence of abnormal head morphology in the larvae exposed to 4 µM chlorpyrifos oxon [1 × the median lethal concentration (LC50)] were determined. Moreover, the neuroprotective effects of pralidoxime, memantine, caramiphen and dexamethasone at the gross morphological level were confirmed by histopathological and transcriptional analyses. Our results demonstrated that the zebrafish model for severe acute OP intoxication has a high predictive value and can be used to identify new compounds that provide neuroprotection against severe acute OP intoxication.
Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.
Lee, Wen-Chung; Wu, Yun-Chun
2016-01-01
The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lin, Yuh-Lang
2005-01-01
The purpose of the research was to develop and test improved hazard algorithms that could result in the development of sensors that are better able to anticipate potentially severe atmospheric turbulence, which affects aircraft safety. The research focused on employing numerical simulation models to develop improved algorithms for the prediction of aviation turbulence. This involved producing both research simulations and real-time simulations of environments predisposed to moderate and severe aviation turbulence. The research resulted in the following fundamental advancements toward the aforementioned goal: 1) very high resolution simulations of turbulent environments indicated how predictive hazard indices could be improved, resulting in a candidate hazard index that indicated the potential for improvement over existing operational indices; 2) a real-time turbulence hazard numerical modeling system was improved by correcting deficiencies in its simulation of moist convection; and 3) the same real-time predictive system was tested by running the code twice daily, and the hazard prediction indices were updated and improved. Additionally, a simple validation study was undertaken to determine how well a real-time hazard predictive index performed when compared to commercial pilot observations of aviation turbulence. Simple statistical analyses were performed in this validation study, indicating potential skill in employing the hazard prediction index to predict regions of varying intensities of aviation turbulence. Data sets from a research numerical model were provided to NASA for use in a large eddy simulation numerical model. A NASA contractor report and several refereed journal articles were prepared and submitted for publication during the course of this research.
Modesto-Alapont, Vicente; Gonzalez-Marrachelli, Vannina; Vento-Rehues, Rosa; Jorda-Miñana, Angela; Blanquer-Olivas, Jose; Monleon, Daniel
2015-01-01
Early diagnosis and patient stratification may improve sepsis outcome by a timely start of the proper specific treatment. We aimed to identify metabolomic biomarkers of sepsis in urine by 1H-NMR spectroscopy to assess severity and to predict outcomes. Urine samples were collected from 64 patients with severe sepsis or septic shock in the ICU for 1H NMR spectra acquisition. A supervised analysis was performed on the processed spectra, and a predictive model for prognosis (30-day mortality/survival) of sepsis was constructed using partial least-squares discriminant analysis (PLS-DA). In addition, we compared the prediction power of the metabolomics data with respect to the Sequential Organ Failure Assessment (SOFA) score. Supervised multivariate analysis afforded a good predictive model to distinguish the patient groups and detect specific metabolic patterns. Negative-prognosis patients presented higher values of ethanol, glucose and hippurate, and on the contrary, lower levels of methionine, glutamine, arginine and phenylalanine. These metabolites could be part of a composite biopattern of the human metabolic response to septic shock and its mortality in ICU patients. The internal cross-validation showed robustness of the metabolic predictive model obtained and a better predictive ability in comparison with SOFA values. Our results indicate that NMR metabolic profiling might be helpful for determining the metabolomic phenotype of worst-prognosis septic patients at an early stage. A predictive model for the evolution of septic patients using these metabolites was able to classify cases with more sensitivity and specificity than the well-established organ dysfunction score SOFA. PMID:26565633
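The PLS-DA idea can be sketched with a minimal one-component PLS-style discriminant on synthetic "metabolite" data; the group shift, feature count, and class assignment are invented, and a real analysis would use multi-component, cross-validated PLS-DA on full spectra:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 64, 7
y = np.repeat([0.0, 1.0], n // 2)                    # 0 = survivor, 1 = non-survivor
X = rng.normal(0.0, 1.0, (n, p))                     # synthetic metabolite intensities
X[y == 1.0, :3] += 1.0                               # assumed shift in 3 features

# One PLS component: weight vector proportional to covariance with the label
Xc = X - X.mean(axis=0)
yc = y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)

t = Xc @ w                                           # latent scores
b = (t @ yc) / (t @ t)                               # regression of label on score

# Classify by the fitted score relative to the 0.5 probability midpoint
pred = (t * b + y.mean() > 0.5).astype(float)
accuracy = float((pred == y).mean())
print(round(accuracy, 2))
```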
Predictive and mechanistic multivariate linear regression models for reaction development
Santiago, Celine B.; Guo, Jing-Yao
2018-01-01
Multivariate Linear Regression (MLR) models utilizing computationally-derived and empirically-derived physical organic molecular descriptors are described in this review. Several reports demonstrating the effectiveness of this methodological approach towards reaction optimization and mechanistic interrogation are discussed. A detailed protocol to access quantitative and predictive MLR models is provided as a guide for model development and parameter analysis. PMID:29719711
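The MLR workflow described in the review can be sketched with ordinary least squares on invented physical-organic descriptors; the descriptor names, ranges, and "true" coefficients below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 24                                               # hypothetical substrate library size

# Invented descriptors (e.g., a steric parameter, a computed charge, an IR stretch)
sterimol_b1 = rng.uniform(1.5, 4.0, n)
nbo_charge = rng.uniform(-0.6, -0.2, n)
ir_stretch = rng.uniform(1650.0, 1750.0, n)

# Synthetic "measured" outcome generated from known coefficients plus noise
y = 0.8 * sterimol_b1 + 2.0 * nbo_charge + 0.01 * ir_stretch + rng.normal(0.0, 0.1, n)

# MLR fit and coefficient of determination
X = np.column_stack([np.ones(n), sterimol_b1, nbo_charge, ir_stretch])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(np.round(beta, 2), round(float(r2), 3))
```

In practice the review emphasizes validating such models on external substrates and interrogating the fitted coefficients mechanistically, not just maximizing R².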
Fernández, Cristina; Vega, José A
2018-05-04
Severe fire greatly increases soil erosion rates and overland flow in forest land. Soil erosion prediction models are essential for estimating fire impacts and planning post-fire emergency responses. We evaluated the performance of a) the Revised Universal Soil Loss Equation (RUSLE), modified by inclusion of an alternative equation for the soil erodibility factor, and b) the Disturbed WEPP model, by comparing the soil loss predicted by the models with the soil loss measured in the first year after wildfire in 44 experimental field plots in NW Spain. The Disturbed WEPP model has not previously been validated with field data for use in NW Spain; validation studies are also very scarce in other areas. We found that both models underestimated the erosion rates. The accuracy of the RUSLE model was low, even after inclusion of a modified soil erodibility factor accounting for high contents of soil organic matter. We conclude that neither model is suitable for predicting soil erosion in the first year after fire in NW Spain and suggest that soil burn severity should be given greater weighting in post-fire soil erosion modelling. Copyright © 2018 Elsevier Inc. All rights reserved.
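RUSLE estimates annual soil loss as the product of empirical factors, A = R · K · LS · C · P; the factor values below are placeholders for illustration, not the Spanish field-plot data from this study:

```python
# A: annual soil loss (t/ha/yr); R: rainfall erosivity; K: soil erodibility;
# LS: slope length/steepness factor; C: cover-management factor; P: support practice.
def rusle_soil_loss(R, K, LS, C, P):
    return R * K * LS * C * P

# Severe burns raise the cover-management factor C sharply (values hypothetical)
unburned = rusle_soil_loss(R=1200.0, K=0.03, LS=5.0, C=0.003, P=1.0)
burned = rusle_soil_loss(R=1200.0, K=0.03, LS=5.0, C=0.2, P=1.0)
print(round(unburned, 2), round(burned, 2))
```

The study's point is that this multiplicative structure, as commonly parameterized, still underestimated first-year post-fire erosion; weighting by soil burn severity would modify C (and possibly K), not the equation's form.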
Emerging approaches in predictive toxicology.
Zhang, Luoping; McHale, Cliona M; Greene, Nigel; Snyder, Ronald D; Rich, Ivan N; Aardema, Marilyn J; Roy, Shambhu; Pfuhler, Stefan; Venkatactahalam, Sundaresan
2014-12-01
Predictive toxicology plays an important role in the assessment of toxicity of chemicals and the drug development process. While there are several well-established in vitro and in vivo assays that are suitable for predictive toxicology, recent advances in high-throughput analytical technologies and model systems are expected to have a major impact on the field of predictive toxicology. This commentary provides an overview of the state of the current science and a brief discussion on future perspectives for the field of predictive toxicology for human toxicity. Computational models for predictive toxicology, needs for further refinement and obstacles to expand computational models to include additional classes of chemical compounds are highlighted. Functional and comparative genomics approaches in predictive toxicology are discussed with an emphasis on successful utilization of recently developed model systems for high-throughput analysis. The advantages of three-dimensional model systems and stem cells and their use in predictive toxicology testing are also described. © 2014 Wiley Periodicals, Inc.
Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W
2016-08-01
Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunction following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models. Overall performance, calibration and discrimination were assessed. Fourteen models from four publications were validated. The discrimination of the predictive models in the independent external validation cohort, measured using the area under the receiver operating characteristic (ROC) curve, ranged from 0.473 to 0.695, generally lower than in internal validation. Four models had an AUC > 0.6. Shrinkage was required for all predictive models, with coefficients ranging from -0.309 (prediction probability inverse to the observed proportion) to 0.823. Predictive models that include baseline symptoms as a feature produced the highest discrimination. Two models produced predicted probabilities of 0 or 1 for all patients. Predictive models vary in performance and transferability, illustrating the need for improvements in model development and reporting. Several models showed reasonable potential, but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
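The external-validation step can be sketched as applying a published linear predictor to a new cohort and computing a rank-based AUC; the cohort, coefficients, and the weaker "true" effect (which is what produces shrinkage) are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 754                                              # matches the cohort size only by analogy
baseline_symptom = rng.normal(0.0, 1.0, n)

# "Published" logistic linear predictor, taken as fixed
lp_published = -1.0 + 0.8 * baseline_symptom

# Outcomes generated with a weaker true effect -> calibration slope < 1 (shrinkage)
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 0.4 * baseline_symptom)))
y = (rng.uniform(0.0, 1.0, n) < p_true).astype(int)

def auc(scores, labels):
    # Rank-based (Mann-Whitney) AUC; ties ignored for simplicity
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return float(np.mean(pos[:, None] > neg[None, :]))

external_auc = auc(lp_published, y)
print(round(external_auc, 3))
```

Because discrimination depends only on the ranking of the linear predictor, the external AUC here reflects the weaker true effect, illustrating why externally validated AUCs tend to fall below internal ones.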
Preclinical models used for immunogenicity prediction of therapeutic proteins.
Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim
2013-07-01
All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and, in rare cases, to serious and sometimes life-threatening side effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict the immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge of the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system, and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize the aspects of immunogenicity that these models predict, and explore the merits and limitations of each of the models.
Gagné, Mathieu; Moore, Lynne; Beaudoin, Claudia; Batomen Kuimi, Brice Lionel; Sirois, Marie-Josée
2016-03-01
The International Classification of Diseases (ICD) is the main classification system used for population-based injury surveillance activities but does not contain information on injury severity. ICD-based injury severity measures can be empirically derived or mapped, but no single approach has been formally recommended. This study aimed to compare the performance of ICD-based injury severity measures in predicting in-hospital mortality among injury-related admissions. A systematic review and a meta-analysis were conducted. MEDLINE, EMBASE, and Global Health databases were searched from their inception through September 2014. Observational studies that assessed the performance of ICD-based injury severity measures in predicting in-hospital mortality and reported discriminative ability using the area under a receiver operating characteristic curve (AUC) were included. Metrics of model performance were extracted. Pooled AUCs were estimated under random-effects models. Twenty-two eligible studies reported 72 assessments of discrimination of ICD-based injury severity measures. Reported AUCs ranged from 0.681 to 0.958. Of the 72 assessments, 46 showed excellent (0.80 ≤ AUC < 0.90) and 6 outstanding (AUC ≥ 0.90) discriminative ability. The pooled AUC for the ICD-based Injury Severity Score (ICISS) based on the product of traditional survival proportions was significantly higher than for measures based on ICD codes mapped to Abbreviated Injury Scale (AIS) scores (0.863 vs. 0.825 for ICDMAP-ISS [p = 0.005] and ICDMAP-NISS [p = 0.016]). Similar results were observed when studies were stratified by the type of data used (trauma registry or hospital discharge) or the provenance of survival proportions (internally or externally derived). However, among studies published after 2003 the Trauma Mortality Prediction Model based on ICD-9 codes (TMPM-9) demonstrated discriminative ability superior to that of ICISS using the product of traditional survival proportions (0.850 vs. 0.802, p = 0.002).
Models generally showed poor calibration. ICISS using the product of traditional survival proportions and TMPM-9 predict mortality more accurately than those mapped to AIS codes and should be preferred for describing injury severity when ICD is used to record injury diagnoses. Systematic review and meta-analysis, level III.
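ICISS, as described above, is the product of diagnosis-specific survival proportions over the ICD codes on a record; the survival risk ratios (SRRs) below are hypothetical placeholders, not published values:

```python
# Hypothetical SRR lookup: each value is the survival proportion historically
# observed among patients carrying that ICD-10 diagnosis code.
SRR = {
    "S06.5": 0.85,   # traumatic subdural haemorrhage (hypothetical SRR)
    "S32.0": 0.97,   # lumbar vertebra fracture (hypothetical SRR)
    "S72.0": 0.95,   # femoral neck fracture (hypothetical SRR)
}

def iciss(icd_codes, srr_table):
    """Multiply survival proportions; 1 - ICISS approximates the death risk."""
    p = 1.0
    for code in icd_codes:
        p *= srr_table[code]
    return p

score = iciss(["S06.5", "S32.0", "S72.0"], SRR)
print(round(score, 4))
```

This multiplicative form is the "product of traditional survival proportions" variant that the meta-analysis found to discriminate better than AIS-mapped scores.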
Sharabi, Adi; Margalit, Malka
2011-01-01
This study evaluated a multidimensional model of loneliness as related to risk and protective factors among adolescents with learning disabilities (LD). The authors aimed to identify factors that mediated loneliness among 716 adolescents in Grades 10 through 12 who were studying in high schools or in Youth Education Centers for at-risk populations. There were 334 students with LD, divided into subgroups according to disability severity (three levels of testing accommodations), and 382 students without LD. Five instruments measured participants' socioemotional characteristics: loneliness, Internet communication, mood, and social and academic achievement-oriented motivation. Using structural equation modeling, the results confirmed the loneliness model and revealed that the use of the Internet to support interpersonal communication with friends predicted less intense loneliness, whereas virtual friendships with individuals whom students knew only online predicted greater loneliness. Positive and negative mood and motivation also predicted students' loneliness. In addition, the severity of LD predicted stronger loneliness feelings.
NASA Astrophysics Data System (ADS)
Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob
2014-05-01
The temperature reached by soils during a wildfire is an important parameter for describing fire effects, yet methods for measuring it on burned soils remain poorly developed. Recently, near-infrared (NIR) spectroscopy has been proposed as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water content. Some of these components are modified by heat, and each temperature causes a characteristic group of changes, leaving a typical fingerprint on the NIR spectrum. The technique requires a model (or calibration) relating changes in the NIR spectra to the temperature reached. To develop the model, several aliquots are heated to known temperatures and used as standards in the calibration set. The model then makes it possible to estimate the temperature reached by a burned sample from its NIR spectrum. However, because the estimate reflects changes in several components, it cannot be attributed to changes in any single soil component: the temperature reached is estimated through the interaction between temperature and the thermo-sensitive soil components. Moreover, these components cannot be expected to be uniformly distributed, even at small scales, so their proportions can vary spatially across a site. This variation will be present both in the samples used to construct the model and in the samples affected by the wildfire. Strategies for developing robust models should therefore be designed to handle this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches.
These approaches were designed to provide insight into how best to distribute the effort needed to develop robust models, since this step is the bottleneck of the technique. In the first approach, a plot-scale model (one in which all heated aliquots come from a single plot-scale sample) was used to predict the temperature reached in samples collected from other plots at the same site. As expected, the results were disappointing, because this approach assumes that a plot-scale model can represent the whole variability of the site: the accuracy, measured as the root mean square error of prediction (hereinafter RMSEP), was 86 °C, and the bias was also high (>30 °C). In the second approach, the temperatures predicted by several plot-scale models were averaged. Accuracy improved (RMSEP = 65 °C) with respect to the first approach, because variability from several plots was considered and biased predictions were partially counterbalanced; however, this approach requires more effort, since several plot-scale models are needed. In the third approach, predictions were obtained with site-scale models, constructed from aliquots taken from several plots. In this case the results were accurate: the RMSEP was around 40 °C, the bias was very small (<1 °C) and the R² was 0.92. This approach clearly outperformed the second one, even though it required the same effort. In a plot-scale model, only one interaction between temperature and soil components is modelled, whereas several different interactions are present in the calibration matrix of a site-scale model. Consequently, site-scale models can estimate the temperature reached while excluding the influence of differences in soil composition, making them more robust to that variation.
In summary, the results highlight the importance of an adequate strategy for developing robust and accurate models with moderate effort, and show how a poor strategy can yield misleading predictions.
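The calibration step described above (aliquots heated to known temperatures, spectra as predictors, RMSEP on held-out samples) can be sketched with synthetic data. Ridge regression stands in here for the PLS-type calibration usually applied to NIR spectra; all data and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a calibration set: each row is a NIR spectrum
# (100 wavelengths) of an aliquot heated to a known temperature.
n_samples, n_wavelengths = 40, 100
temps = rng.uniform(100, 600, n_samples)         # known temperatures (deg C)
loadings = rng.normal(size=n_wavelengths)        # how heating shifts the spectrum
spectra = np.outer(temps, loadings) / 600 + rng.normal(
    scale=0.05, size=(n_samples, n_wavelengths))

# Ridge regression as a simple stand-in for the usual PLS calibration
X = np.hstack([spectra, np.ones((n_samples, 1))])   # add intercept column
lam = 1e-3
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ temps)

# RMSEP on an independently simulated validation set
temps_val = rng.uniform(100, 600, 20)
spectra_val = np.outer(temps_val, loadings) / 600 + rng.normal(
    scale=0.05, size=(20, n_wavelengths))
X_val = np.hstack([spectra_val, np.ones((20, 1))])
rmsep = np.sqrt(np.mean((X_val @ beta - temps_val) ** 2))
```

A site-scale calibration in this framing simply means the rows of the calibration matrix come from several plots, so the fitted coefficients average over plot-to-plot differences in composition.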
Validation of Aircraft Noise Prediction Models at Low Levels of Exposure
NASA Technical Reports Server (NTRS)
Page, Juliet A.; Hobbs, Christopher M.; Plotkin, Kenneth J.; Stusnick, Eric; Shepherd, Kevin P. (Technical Monitor)
2000-01-01
Aircraft noise measurements were made at Denver International Airport for a period of four weeks. Detailed operational information was provided by airline operators which enabled noise levels to be predicted using the FAA's Integrated Noise Model. Several thrust prediction techniques were evaluated. Measured sound exposure levels for departure operations were found to be 4 to 10 dB higher than predicted, depending on the thrust prediction technique employed. Differences between measured and predicted levels are shown to be related to atmospheric conditions present at the aircraft altitude.
A phenomenological model of muscle fatigue and the power-endurance relationship.
James, A; Green, S
2012-11-01
The relationship between power output and the time that it can be sustained during exercise (i.e., endurance) at high intensities is curvilinear. Although fatigue is implicit in this relationship, there is little evidence pertaining to it. To address this, we developed a phenomenological model that predicts the temporal response of muscle power during submaximal and maximal exercise and which was based on the type, contractile properties (e.g., fatiguability), and recruitment of motor units (MUs) during exercise. The model was first used to predict power outputs during all-out exercise when fatigue is clearly manifest and for several distributions of MU type. The model was then used to predict times that different submaximal power outputs could be sustained for several MU distributions, from which several power-endurance curves were obtained. The model was simultaneously fitted to two sets of human data pertaining to all-out exercise (power-time profile) and submaximal exercise (power-endurance relationship), yielding a high goodness of fit (R² = 0.96-0.97). This suggested that this simple model provides an accurate description of human power output during submaximal and maximal exercise and that fatigue-related processes inherent in it account for the curvilinearity of the power-endurance relationship.
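The curvilinear power-endurance relationship that the motor-unit model reproduces is often summarised descriptively by the two-parameter critical-power hyperbola, P = CP + W'/t. The sketch below fits that simpler descriptive form to hypothetical constant-power trials; it is not the phenomenological model of the abstract:

```python
import numpy as np

# Hypothetical endurance times (s) sustained at constant power outputs (W).
powers = np.array([400.0, 350.0, 300.0, 275.0])
times = np.array([120.0, 210.0, 480.0, 900.0])

# P = CP + W'/t is linear in 1/t, so CP (critical power) and
# W' (finite work capacity above CP) follow from ordinary least squares.
A = np.vstack([np.ones_like(times), 1.0 / times]).T
(cp, w_prime), *_ = np.linalg.lstsq(A, powers, rcond=None)
```

The hyperbola captures the curvilinearity but, unlike the motor-unit model, says nothing about which fatigue processes produce it.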
Using Predictive Analytics to Predict Power Outages from Severe Weather
NASA Astrophysics Data System (ADS)
Wanik, D. W.; Anagnostou, E. N.; Hartman, B.; Frediani, M. E.; Astitha, M.
2015-12-01
The distribution of reliable power is essential to businesses, public services, and our daily lives. With the growing abundance of data being collected and created by industry (i.e. outage data), government agencies (i.e. land cover), and academia (i.e. weather forecasts), we can begin to tackle problems that previously seemed too complex to solve. In this session, we will present newly developed tools to aid decision-support challenges at electric distribution utilities that must mitigate, prepare for, respond to and recover from severe weather. We will show a performance evaluation of outage predictive models built for Eversource Energy (formerly Connecticut Light & Power) for storms of all types (i.e. blizzards, thunderstorms and hurricanes) and magnitudes (from 20 to >15,000 outages). High-resolution weather simulations (generated with the Weather Research and Forecasting model) were joined with utility outage data to calibrate four types of models: a decision tree (DT), random forest (RF), boosted gradient tree (BT) and an ensemble (ENS) decision tree regression that combined predictions from DT, RF and BT. The study shows that the ENS model forced with weather, infrastructure and land cover data was superior to the other models we evaluated, especially in terms of predicting the spatial distribution of outages. This research has the potential to be used for other critical infrastructure systems (such as telecommunications, drinking water and gas distribution networks), and can be readily expanded to the entire New England region to facilitate better planning and coordination among decision-makers when severe weather strikes.
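One simple way to build an ENS-style combination is to weight the base models' predictions by their inverse validation error. The abstract describes a decision-tree regression as the combiner, so the weighting below is an illustrative variant, and the outage counts are hypothetical:

```python
import numpy as np

# Hypothetical held-out outage counts and predictions from three base
# models (stand-ins for the DT, RF and BT models in the abstract).
y_val = np.array([20.0, 150.0, 900.0, 15000.0])
preds = {
    "DT": np.array([35.0, 120.0, 700.0, 11000.0]),
    "RF": np.array([22.0, 160.0, 950.0, 14000.0]),
    "BT": np.array([18.0, 140.0, 870.0, 16000.0]),
}

# Weight each base model by inverse validation RMSE, then average --
# a simple stand-in for the ENS combiner described in the abstract.
rmse = {k: np.sqrt(np.mean((p - y_val) ** 2)) for k, p in preds.items()}
weights = {k: 1.0 / v for k, v in rmse.items()}
total = sum(weights.values())
ens = sum(weights[k] / total * preds[k] for k in preds)
ens_rmse = np.sqrt(np.mean((ens - y_val) ** 2))
```

Down-weighting the weaker base model lets the combination track the better learners while still hedging across all three.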
Applications for predictive microbiology to food packaging
USDA-ARS?s Scientific Manuscript database
Predictive microbiology has been used for several years in the food industry to predict microbial growth, inactivation and survival. Predictive models provide a useful tool in risk assessment, HACCP set-up and GMP for the food industry to enhance microbial food safety. This report introduces the c...
Predicting ecological responses in a changing ocean: the effects of future climate uncertainty.
Freer, Jennifer J; Partridge, Julian C; Tarling, Geraint A; Collins, Martin A; Genner, Martin J
2018-01-01
Predicting how species will respond to climate change is a growing field in marine ecology, yet knowledge of how to incorporate the uncertainty from future climate data into these predictions remains a significant challenge. To help overcome it, this review separates climate uncertainty into its three components (scenario uncertainty, model uncertainty, and internal model variability) and identifies four criteria that constitute a thorough interpretation of an ecological response to climate change in relation to these parts (awareness, access, incorporation, communication). Through a literature review, the extent to which the marine ecology community has addressed these criteria in their predictions was assessed. Despite a high awareness of climate uncertainty, articles favoured the most severe emission scenario, and only a subset of climate models were used as input into ecological analyses. In the case of sea surface temperature, these models can have projections that are unrepresentative of a larger ensemble mean. Moreover, 91% of studies failed to incorporate the internal variability of a climate model into their results. We explored the influence that the choice of emission scenario, climate model, and model realisation can have when predicting the future distribution of the pelagic fish Electrona antarctica. Future distributions were highly influenced by the choice of climate model and, in some cases, internal variability was important in determining the direction and severity of the distribution change. Increased clarity and availability of processed climate data would facilitate more comprehensive explorations of climate uncertainty and increase the quality and standard of marine prediction studies.
Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin
2016-11-01
Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
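The GBLUP prediction of untested lines from tested relatives via a genomic relationship matrix can be sketched as follows. The marker data, panel sizes and variance ratio are simulated stand-ins, not the rye data of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 200 phenotyped lines, 50 candidates, 500 SNPs
# coded 0/1/2 (a small stand-in for the 16k-SNP panel in the abstract).
n_train, n_new, n_snp = 200, 50, 500
M = rng.integers(0, 3, size=(n_train + n_new, n_snp)).astype(float)
true_effects = rng.normal(scale=0.1, size=n_snp)
y = M @ true_effects + rng.normal(scale=1.0, size=n_train + n_new)

# VanRaden genomic relationship matrix from centred marker scores
p = M.mean(axis=0) / 2.0
W = M - 2.0 * p
G = W @ W.T / (2.0 * np.sum(p * (1.0 - p)))

# GBLUP: g_new = G_no @ inv(G_oo + lambda*I) @ y_obs,
# with lambda = sigma_e^2 / sigma_u^2 (set to 1 here for illustration)
lam = 1.0
obs = slice(0, n_train)
new = slice(n_train, n_train + n_new)
y_obs = y[obs] - y[obs].mean()
g_new = G[new, obs] @ np.linalg.solve(G[obs, obs] + lam * np.eye(n_train), y_obs)

# Predictive accuracy: correlation with the simulated genetic values
accuracy = np.corrcoef(g_new, (M @ true_effects)[new])[0, 1]
```

Replacing G with a pedigree-derived numerator relationship matrix gives the PBLUP counterpart; the study's GBLUP advantage comes from G capturing realized rather than expected relatedness.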
NASA Astrophysics Data System (ADS)
Dobre, Mariana; Brooks, Erin; Lew, Roger; Kolden, Crystal; Quinn, Dylan; Elliot, William; Robichaud, Pete
2017-04-01
Soil erosion is a secondary fire effect with great implications for many ecosystem resources. Depending on the burn severity, topography, and the weather immediately after the fire, soil erosion can impact municipal water supplies, degrade water quality, and reduce reservoirs' storage capacity. Scientists and managers use field and remotely sensed data to quickly assess post-fire burn severity in ecologically-sensitive areas. From these assessments, mitigation activities are implemented to minimize post-fire flood and soil erosion and to facilitate post-fire vegetation recovery. Alternatively, land managers can use fire behavior and spread models (e.g. FlamMap, FARSITE, FOFEM, or CONSUME) to identify sensitive areas a priori, and apply strategies such as fuel reduction treatments to proactively minimize the risk of wildfire spread and increased burn severity. There is a growing interest in linking fire behavior and spread models with hydrology-based soil erosion models to provide site-specific assessment of mitigation treatments on post-fire runoff and erosion. The challenge remains, however, that many burn severity mapping and modeling products quantify vegetation loss rather than measuring soil burn severity. Wildfire burn severity is spatially heterogeneous and depends on the pre-fire vegetation cover, fuel load, topography, and weather. Severities also differ depending on the variable of interest (e.g. soil, vegetation). In the United States, Burned Area Reflectance Classification (BARC) maps, derived from Landsat satellite images, are used as an initial burn severity assessment. BARC maps are classified from either a Normalized Burn Ratio (NBR) or differenced Normalized Burn Ratio (dNBR) scene into four classes (Unburned, Low, Moderate, and High severity). The development of soil burn severity maps requires further manual field validation efforts to transform the BARC maps into a product more applicable for post-fire soil rehabilitation activities.
Alternative spectral indices and modeled output approaches may prove better predictors of soil burn severity and hydrologic effects, but these have not yet been assessed in a model framework. In this project we compare field-verified soil burn severity maps to satellite-derived and modeled burn severity maps. We quantify the extent to which there are systematic differences in these mapping products. We then use the Water Erosion Prediction Project (WEPP) hydrologic soil erosion model to assess sediment delivery from these fires using the predicted and observed soil burn severity maps. Finally, we discuss differences in observed and predicted soil burn severity maps and application to watersheds in the Pacific Northwest to estimate post-fire sediment delivery.
Modeling causes of death: an integrated approach using CODEm
2012-01-01
Background Data on causes of death by age and sex are a critical input into health decision-making. Priority setting in public health should be informed not only by the current magnitude of health problems but by trends in them. However, cause of death data are often not available or are subject to substantial problems of comparability. We propose five general principles for cause of death model development, validation, and reporting. Methods We detail a specific implementation of these principles that is embodied in an analytical tool - the Cause of Death Ensemble model (CODEm) - which explores a large variety of possible models to estimate trends in causes of death. Possible models are identified using a covariate selection algorithm that yields many plausible combinations of covariates, which are then run through four model classes. The model classes include mixed effects linear models and spatial-temporal Gaussian Process Regression models for cause fractions and death rates. All models for each cause of death are then assessed using out-of-sample predictive validity and combined into an ensemble with optimal out-of-sample predictive performance. Results Ensemble models for cause of death estimation outperform any single component model in tests of root mean square error, frequency of predicting correct temporal trends, and achieving 95% coverage of the prediction interval. We present detailed results for CODEm applied to maternal mortality and summary results for several other causes of death, including cardiovascular disease and several cancers. Conclusions CODEm produces better estimates of cause of death trends than previous methods and is less susceptible to bias in model specification. We demonstrate the utility of CODEm for the estimation of several major causes of death. PMID:22226226
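The out-of-sample metrics on which component models are scored and combined (root mean square error and 95% prediction-interval coverage) can be computed as below. The predictions and intervals are simulated, purely to illustrate the scoring step, not CODEm's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical held-out death rates with a model's point predictions
# and 95% prediction intervals; coverage and RMSE are two of the
# out-of-sample metrics ensemble components are scored on.
y = rng.normal(50.0, 10.0, size=1000)
pred = y + rng.normal(0.0, 5.0, size=1000)   # imperfect predictions
half_width = 1.96 * 5.0                      # interval matching the error SD
lower, upper = pred - half_width, pred + half_width

coverage = np.mean((y >= lower) & (y <= upper))   # share inside the interval
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

A well-calibrated 95% interval should cover roughly 95% of held-out observations; under-coverage signals over-confident component models and would penalise them in the ensemble weighting.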
Wall-modeled large eddy simulation of high-lift devices from low to post-stall angle of attacks
NASA Astrophysics Data System (ADS)
Bodart, Julien; Larsson, Johan; Moin, Parviz
2013-11-01
The flow around a McDonnell-Douglas 30P/30N multi-element airfoil at the flight Reynolds number of 9 million (based on chord) is computed using LES with an equilibrium wall-model with special treatment for transitional flows. Several different angles of attack are considered, up to and including stall, challenging the wall-model in several flow regimes. The maximum lift coefficient, which is generally difficult to predict with RANS approaches, is accurately predicted, as compared to experiments performed in the NASA LPT wind-tunnel. NASA grant: NNX11AI60A.
Van Belleghem, Griet; Devos, Stefanie; De Wit, Liesbet; Hubloue, Ives; Lauwaert, Door; Pien, Karen; Putman, Koen
2016-01-01
Injury severity scores are important in the context of developing European and national goals on traffic safety, health-care benchmarking and improving patient communication. Various severity scores are available and are mostly based on the Abbreviated Injury Scale (AIS) or the International Classification of Diseases (ICD). The aim of this paper is to compare the predictive value for in-hospital mortality between the various severity scores when only International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM) codes are reported. To estimate severity scores based on the AIS lexicon, ICD-9-CM codes were converted with ICD Programmes for Injury Categorization (ICDPIC) and four AIS-based severity scores were derived: Maximum AIS (MaxAIS), Injury Severity Score (ISS), New Injury Severity Score (NISS) and Exponential Injury Severity Score (EISS). Based on ICD-9-CM, six severity scores were calculated. Determined by the number of injuries taken into account and the means by which survival risk ratios (SRRs) were calculated, four different approaches were used to calculate the ICD-9-based Injury Severity Scores (ICISS). The Trauma Mortality Prediction Model (TMPM) was calculated with the ICD-9-CM-based model averaged regression coefficients (MARC) for both the single worst injury and multiple injuries. Severity scores were compared via model discrimination and calibration. Model comparisons were performed separately for the severity scores based on the single worst injury and multiple injuries. For ICD-9-based scales, the estimated area under the receiver operating characteristic curve (AUROC) ranges between 0.94 and 0.96, while AIS-based scales range between 0.72 and 0.76. The intercept in the calibration plots is not significantly different from 0 for MaxAIS, ICISS and TMPM. When only ICD-9-CM codes are reported, ICD-9-CM-based severity scores perform better than severity scores based on the conversion to AIS. Copyright © 2015 Elsevier Ltd. All rights reserved.
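The multiplicative ICISS referred to above is simply the product of the survival risk ratios of all recorded injury diagnoses, with a single-worst-injury variant using only the most lethal code. A sketch with illustrative ICD-9-CM codes and SRR values (not real SRRs):

```python
from math import prod

# Hypothetical survival risk ratios (SRRs) for one patient's ICD-9-CM
# injury codes; both the codes and the values are illustrative only.
srrs = {"807.0": 0.95, "861.21": 0.90, "864.04": 0.85}

# Multiplicative ICISS: product of the SRRs over all recorded injuries
iciss_multi = prod(srrs.values())

# Single-worst-injury variant: SRR of the most lethal diagnosis
iciss_worst = min(srrs.values())
```

Because each additional injury multiplies in another factor below one, the multiplicative score always lies at or below the single-worst-injury score.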
The Effect of Data Quality on Short-term Growth Model Projections
David Gartner
2005-01-01
This study was designed to determine the effect of FIA's data quality on short-term growth model projections. The data from Georgia's 1996 statewide survey were used for the Southern variant of the Forest Vegetation Simulator to predict Georgia's first annual panel. The effect of several data error sources on growth modeling prediction errors...
Monitoring and forecasting the 2009-2010 severe drought in Southwest China
NASA Astrophysics Data System (ADS)
Zhang, X.; Tang, Q.; Liu, X.; Leng, G.; Li, Z.; Cui, H.
2015-12-01
From the fall of 2009 to the spring of 2010, an unprecedented drought swept across southwest China (SW) and led to a severe shortage of drinking water and huge losses to the regional economy. Monitoring and predicting such a severe drought several months in advance is of critical importance for hydrological disaster assessment, preparation and mitigation. In this study, we implemented a model-based hydrological monitoring and seasonal forecasting framework and assessed its skill in capturing the evolution of the SW drought in 2009-2010. Using satellite-based meteorological forcings and the Variable Infiltration Capacity (VIC) hydrologic model, drought conditions were assessed in a near-real-time manner based on a 62-year (1952-2013) retrospective simulation, wherein the satellite data were adjusted with a gauge-based forcing to remove systematic biases. Bias-corrected seasonal forecasting outputs from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Version 2 (CFSv2) were tentatively applied for seasonal hydrologic prediction, and their predictive skill was evaluated relative to a traditional Ensemble Streamflow Prediction (ESP) method with lead times varying from 1 to 6 months. The results show that the climate model-driven hydrologic predictability is generally limited to 1-month lead time and exhibits negligible skill improvement relative to ESP during this drought event, suggesting that the initial hydrologic conditions (IHCs) play a dominant role in forecasting performance. This research highlights the value of the framework in providing accurate IHCs in real time, which will greatly benefit drought early warning.
Characteristics of Perimenstrual Asthma and Its Relation to Asthma Severity and Control
Rao, Chitra K.; Moore, Charity G.; Bleecker, Eugene; Busse, William W.; Calhoun, William; Castro, Mario; Chung, Kian Fan; Erzurum, Serpil C.; Israel, Elliot; Curran-Everett, Douglas
2013-01-01
Background: Although perimenstrual asthma (PMA) has been associated with severe and difficult-to-control asthma, it remains poorly characterized and understood. The objectives of this study were to identify clinical, demographic, and inflammatory factors associated with PMA and to assess the association of PMA with asthma severity and control. Methods: Women with asthma recruited to the National Heart, Lung, and Blood Institute Severe Asthma Research Program who reported PMA symptoms on a screening questionnaire were analyzed in relation to basic demographics, clinical questionnaire data, immunoinflammatory markers, and physiologic parameters. Univariate comparisons between PMA and non-PMA groups were performed. A severity-adjusted model predicting PMA was created. Additional models addressed the role of PMA in asthma control. Results: Self-identified PMA was reported in 17% of the subjects (n = 92) and associated with higher BMI, lower FVC % predicted, and higher gastroesophageal reflux disease rates. Fifty-two percent of the PMA group met criteria for severe asthma compared with 30% of the non-PMA group. In multivariable analyses controlling for severity, aspirin sensitivity and lower FVC % predicted were associated with the presence of PMA. Furthermore, after controlling for severity and confounders, PMA remained associated with more asthma symptoms and urgent health-care utilization. Conclusions: PMA is common in women with severe asthma and associated with poorly controlled disease. Aspirin sensitivity and lower FVC % predicted are associated with PMA after adjusting for multiple factors, suggesting that alterations in prostaglandins may contribute to this phenotype. PMID:23632943
COMPARISON OF DATA FROM AN IAQ TEST HOUSE WITH PREDICTIONS OF AN IAQ COMPUTER MODEL
The paper describes several experiments to evaluate the impact of indoor air pollutant sources on indoor air quality (IAQ). Measured pollutant concentrations are compared with concentrations predicted by an IAQ model. The measured concentrations are in excellent agreement with th...
A robust operational model for predicting where tropical cyclone waves damage coral reefs
NASA Astrophysics Data System (ADS)
Puotinen, Marji; Maynard, Jeffrey A.; Beeden, Roger; Radford, Ben; Williams, Gareth J.
2016-05-01
Tropical cyclone (TC) waves can severely damage coral reefs. Models that predict where to find such damage (the ‘damage zone’) enable reef managers to: 1) target management responses after major TCs in near-real time to promote recovery at severely damaged sites; and 2) identify spatial patterns in historic TC exposure to explain habitat condition trajectories. For damage models to meet these needs, they must be valid for TCs of varying intensity, circulation size and duration. Here, we map damage zones for 46 TCs that crossed Australia’s Great Barrier Reef from 1985-2015 using three models - including one we develop which extends the capability of the others. We ground truth model performance with field data of wave damage from seven TCs of varying characteristics. The model we develop (4MW) out-performed the other models at capturing all incidences of known damage. The next best performing model (AHF) both under-predicted and over-predicted damage for TCs of various types. 4MW and AHF produce strikingly different spatial and temporal patterns of damage potential when used to reconstruct past TCs from 1985-2015. The 4MW model greatly enhances both of the main capabilities TC damage models provide to managers, and is useful wherever TCs and coral reefs co-occur.
A robust operational model for predicting where tropical cyclone waves damage coral reefs.
Puotinen, Marji; Maynard, Jeffrey A; Beeden, Roger; Radford, Ben; Williams, Gareth J
2016-05-17
Tropical cyclone (TC) waves can severely damage coral reefs. Models that predict where to find such damage (the 'damage zone') enable reef managers to: 1) target management responses after major TCs in near-real time to promote recovery at severely damaged sites; and 2) identify spatial patterns in historic TC exposure to explain habitat condition trajectories. For damage models to meet these needs, they must be valid for TCs of varying intensity, circulation size and duration. Here, we map damage zones for 46 TCs that crossed Australia's Great Barrier Reef from 1985-2015 using three models - including one we develop which extends the capability of the others. We ground truth model performance with field data of wave damage from seven TCs of varying characteristics. The model we develop (4MW) out-performed the other models at capturing all incidences of known damage. The next best performing model (AHF) both under-predicted and over-predicted damage for TCs of various types. 4MW and AHF produce strikingly different spatial and temporal patterns of damage potential when used to reconstruct past TCs from 1985-2015. The 4MW model greatly enhances both of the main capabilities TC damage models provide to managers, and is useful wherever TCs and coral reefs co-occur.
Predicting Hail Size Using Model Vertical Velocities
2008-03-01
updrafts from a simple cloud model using forecasted soundings. The models used MM5 model data coinciding with severe hail events collected from the... determine their accuracy. Plus they are based primarily on observed upper air soundings. Obtaining upper air soundings in proximity to convective
NASA Astrophysics Data System (ADS)
Fuselier, S.; Allegrini, F.; Bzowski, M.; Dayeh, M. A.; Desai, M. I.; Funsten, H. O.; Galli, A.; Heirtzler, D.; Janzen, P. H.; Kubiak, M. A.; Kucharek, H.; Lewis, W. S.; Livadiotis, G.; McComas, D. J.; Moebius, E.; Petrinec, S. M.; Quinn, M. S.; Schwadron, N.; Sokol, J. M.; Trattner, K. J.
2014-12-01
NASA Astrophysics Data System (ADS)
Freeland, L. E.; Terkildsen, M. B.
2015-12-01
The Bureau of Meteorology's Space Weather Service operates an alert service for severe space weather events. The service relies on a statistical model which ingests observations of M and X class solar flares at or shortly after the time of the flare to predict the occurrence and severity of terrestrial impacts with a lead time of 1 to 4 days. This model has been operational since 2012 and caters to the needs of critical infrastructure groups in the Australian region. This paper reports on improvements to the forecast model by including SOHO LASCO coronagraph observations of Coronal Mass Ejections (CMEs). The coronagraphs are analysed to determine the Earthward direction parameter and the integrated intensity as a measure of the CME mass. Both of these parameters can help to predict whether a CME will be geo-effective. This work aims to increase the accuracy of the model predictions and lower the rate of false positives, as well as providing an estimate of the expected level of geomagnetic storm intensity.
Tulloch, Heather; Reid, Robert; D'Angelo, Monika Slovinec; Plotnikoff, Ronald C; Morrin, Louise; Beaton, Louise; Papadakis, Sophia; Pipe, Andrew
2009-03-01
The purpose of this study was to examine the utility of protection motivation theory (PMT) in the prediction of exercise intentions and behaviour in the year following hospitalisation for coronary artery disease (CAD). Patients with documented CAD (n = 787), recruited at hospital discharge, completed questionnaires measuring PMT's threat (i.e. perceived severity and vulnerability) and coping (i.e. self-efficacy, response efficacy) appraisal constructs at baseline, 2 and 6 months, and exercise behaviour at baseline, 6 and 12 months post-hospitalisation. Structural equation modelling showed that the PMT model of exercise at 6 months had a good fit with the empirical data. Self-efficacy, response efficacy, and perceived severity predicted exercise intentions, which, in turn, predicted exercise behaviour. Overall, the PMT variables accounted for a moderate amount of variance in exercise intentions (23%) and behaviour (20%). In contrast, the PMT model was not reliable for predicting exercise behaviour at 12 months post-hospitalisation. The data provided support for PMT applied to short-term, but not long-term, exercise behaviour among patients with CAD. Health education should concentrate on providing positive coping messages to enhance patients' confidence regarding exercise and their belief that exercise provides health benefits, as well as realistic information about disease severity.
Recovery of speed of information processing in closed-head-injury patients.
Zwaagstra, R; Schmidt, I; Vanier, M
1996-06-01
After severe traumatic brain injury, patients almost invariably demonstrate a slowing of reaction time, reflecting a slowing of central information processing. Methodological problems associated with the traditional method for the analysis of longitudinal data (MANOVA) severely complicate studies on cognitive recovery. It is argued that multilevel models are often better suited for the analysis of improvement over time in clinical settings. Multilevel models take into account individual differences in both overall performance level and recovery. These models enable individual predictions for the recovery of speed of information processing. Recovery is modelled in a group of closed-head-injury patients (N = 24). Recovery was predicted by age and severity of injury, as indicated by coma duration. Over a period up to 44 months post trauma, reaction times were found to decrease faster for patients with longer coma duration.
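The partial-pooling idea that distinguishes multilevel models from separate per-patient fits can be sketched in a few lines. Everything below is illustrative: the simulated reaction-time data, the per-patient least-squares fits, and the fixed shrinkage weight all stand in for what a real multilevel model estimates jointly from the data.

```python
import random

random.seed(1)

# Each simulated patient's reaction time (ms) declines linearly with
# months post-injury, with individual differences in both overall level
# (intercept) and recovery rate (slope).
def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b  # intercept, slope

patients = []
for _ in range(8):
    a_i = random.gauss(700, 60)   # individual baseline RT
    b_i = random.gauss(-8, 2)     # individual recovery slope (ms/month)
    months = list(range(0, 44, 4))
    rts = [a_i + b_i * t + random.gauss(0, 15) for t in months]
    patients.append(ols(months, rts))

# Partial pooling: shrink each per-patient slope toward the group mean.
# A real multilevel model estimates the shrinkage weight from the data;
# here it is fixed for illustration.
slopes = [b for _, b in patients]
grand = sum(slopes) / len(slopes)
w = 0.7
shrunk = [w * b + (1 - w) * grand for b in slopes]
print(round(grand, 2))
```

The shrunken slopes lie between each patient's own estimate and the group mean, which is how such models produce stable individual predictions from noisy per-patient data.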
Dynamic compositional modeling of pedestrian crash counts on urban roads in Connecticut.
Serhiyenko, Volodymyr; Ivan, John N; Ravishanker, Nalini; Islam, Md Saidul
2014-03-01
Uncovering the temporal trend in crash counts provides a good understanding of the context for pedestrian safety. Because pedestrian crashes are rare, it is impossible to investigate monthly temporal effects with individual segment/intersection-level data; the time dependence must instead be derived from aggregated data. Most previous studies have used annual data to investigate the differences in pedestrian crashes between different regions or countries in a given year, and/or to look at time trends of fatal pedestrian injuries annually. Annual data, unfortunately, do not provide sufficient information on patterns in time trends or seasonal effects. This paper describes statistical methods for uncovering patterns in monthly pedestrian crashes aggregated over urban roads in Connecticut from January 1995 to December 2009. We investigate the temporal behavior of injury severity levels, including fatal (K), severe injury (A), evident minor injury (B), and non-evident possible injury and property damage only (C and O), as proportions of all pedestrian crashes in each month, taking into consideration effects of time trend, seasonal variations and VMT (vehicle miles traveled). This type of dependent multivariate data is characterized by positive components which sum to one, and occurs in several applications in science and engineering. We describe a dynamic framework with vector autoregressions (VAR) for modeling and predicting compositional time series. Combining these predictions with predictions from a univariate statistical model for total crash counts then enables us to predict pedestrian crash counts at the different injury severity levels. We compare these predictions with those obtained from fitting separate univariate models to time series of crash counts at each injury severity level. We also show that the dynamic models perform better than the corresponding static models.
We implement the Integrated Nested Laplace Approximation (INLA) approach to enable fast Bayesian posterior computation. Taking the combined C and O injury severity level as the baseline for the compositional analysis, we conclude that there was a noticeable shift in the proportion of pedestrian crashes from injury severity A to B, while the increase for injury severity K was extremely small over time. This shift to the less severe injury level (from A to B) suggests that the overall safety on urban roads in Connecticut is improving. In January and February, there was some increase in the proportions for levels A and B over the baseline, indicating a seasonal effect. We found evidence that an increase in VMT would result in a decrease of proportions over the baseline for all injury severity levels. Our dynamic model uncovered a decreasing trend in all pedestrian crash counts before April 2005, followed by a noticeable increase and a flattening out until the end of the fitting period. This appears to be largely due to the behavior of injury severity level A pedestrian crashes. Copyright © 2013 Elsevier Ltd. All rights reserved.
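A standard way to model positive components that sum to one, consistent with the baseline idea above, is to work in log-ratio coordinates: transform the proportions relative to the baseline level, model the unconstrained values (e.g. with a VAR), and map predictions back. The proportions below are made up for illustration.

```python
import math

# Monthly proportions of pedestrian crashes by severity: K, A, B, and
# the combined C/O level used as the compositional baseline.
props = {"K": 0.02, "A": 0.18, "B": 0.45, "CO": 0.35}

# Additive log-ratio (alr) transform: these unconstrained real values
# are what a time-series model such as a VAR would operate on.
alr = {k: math.log(v / props["CO"]) for k, v in props.items() if k != "CO"}

# Inverse transform maps any real-valued prediction back to a valid
# composition (positive components summing to one).
def alr_inverse(y):
    denom = 1.0 + sum(math.exp(v) for v in y.values())
    comp = {k: math.exp(v) / denom for k, v in y.items()}
    comp["CO"] = 1.0 / denom
    return comp

back = alr_inverse(alr)
print({k: round(v, 3) for k, v in back.items()})
```

Because the inverse transform always yields a valid composition, forecasts in log-ratio space can never produce negative proportions or proportions that fail to sum to one.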
Decadal climate prediction (project GCEP).
Haines, Keith; Hermanson, Leon; Liu, Chunlei; Putt, Debbie; Sutton, Rowan; Iwi, Alan; Smith, Doug
2009-03-13
Decadal prediction uses climate models forced by changing greenhouse gases, as in the Intergovernmental Panel on Climate Change (IPCC) assessments, but unlike longer-range projections, decadal predictions also require initialization with observations of the current climate. In particular, the upper-ocean heat content and circulation have a critical influence. Decadal prediction is still in its infancy and there is an urgent need to understand the important processes that determine predictability on these timescales. We have taken the first Hadley Centre Decadal Prediction System (DePreSys) and implemented it on several NERC institute compute clusters in order to study a wider range of initial condition impacts on decadal forecasting, eventually including the state of the land and cryosphere. eScience methods are used to manage submission and output from the many ensemble model runs required to assess predictive skill. Early results suggest initial condition skill may extend for several years, even over land areas, but this depends sensitively on the definition used to measure skill, and alternatives are presented. The Grid for Coupled Ensemble Prediction (GCEP) system will allow the UK academic community to contribute to international experiments being planned to explore decadal climate predictability.
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and it remains under active research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages over the two currently dominant strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate the corresponding sub-models, providing considerable flexibility in model construction. The strategy has been validated with simulated and real datasets.
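The mechanics of a MISMO-style split can be sketched as follows: the forecast horizon is partitioned into contiguous chunks of varying sizes, and one multi-output sub-model is fitted per chunk. The chunk sizes below are fixed by hand for illustration (in PSO-MISMO a particle swarm searches over them), and mean-value predictors stand in for the paper's neural networks.

```python
# Split an H-step-ahead horizon into variable-size chunks and fit one
# multi-output sub-model per chunk.
H = 12
chunks = [3, 4, 5]            # illustrative variable-size split

series = [float(i % 7) for i in range(100)]  # toy periodic series

def make_pairs(series, lag, offsets):
    """(input window, outputs at the given horizon offsets) pairs."""
    pairs = []
    for t in range(lag, len(series) - max(offsets)):
        x = series[t - lag:t]
        y = [series[t + h] for h in offsets]
        pairs.append((x, y))
    return pairs

start = 1
sub_models = []
for size in chunks:
    offsets = list(range(start, start + size))
    pairs = make_pairs(series, lag=7, offsets=offsets)
    # "Sub-model": predict each offset by the mean of its training
    # outputs (a stand-in for the neural networks used in the paper).
    means = [sum(y[j] for _, y in pairs) / len(pairs) for j in range(size)]
    sub_models.append((offsets, means))
    start += size

forecast = [m for _, means in sub_models for m in means]
print(len(forecast))  # one prediction per step of the 12-step horizon
```

Concatenating the sub-model outputs yields the full multistep forecast; changing `chunks` changes both the number of sub-models and their output dimensions, which is exactly the degree of freedom PSO optimizes.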
NASA Astrophysics Data System (ADS)
Park, Jeong-Gyun; Jee, Joon-Bum
2017-04-01
Severe weather such as heavy rain, heavy snow, drought and heat waves driven by climate change causes the greatest damage in urban areas, where population and industry are concentrated. Unlike rural areas, urban areas have large populations, heavy transportation, dense buildings and high fuel consumption; anthropogenic factors such as the road-surface energy balance and the flow of air through the city produce unique meteorological phenomena, and several research efforts on the prediction of urban meteorology are in progress. ASAPS (Advanced Storm-scale Analysis and Prediction System) predicts severe weather at very short range (6-hour forecasts) and high resolution (hourly in time, 1 km in space) over the Seoul metropolitan area, based on KLAPS (Korea Local Analysis and Prediction System) from KMA (Korea Meteorological Administration). The system consists of three parts: a background field (SUF5), an analysis field incorporating observations (SU01), and a high-resolution forecast field (SUF1). In this study, we improve the high-resolution ASAPS model and perform a sensitivity test for a rainfall case. The improvements to ASAPS include the model domain configuration, high-resolution topographic data, and data assimilation with WISE observation data.
Development of a multi-ensemble Prediction Model for China
NASA Astrophysics Data System (ADS)
Brasseur, G. P.; Bouarar, I.; Petersen, A. K.
2016-12-01
As part of the EU-sponsored Panda and MarcoPolo Projects, a multi-model prediction system including 7 models has been developed. Most regional models use global air quality predictions provided by the Copernicus Atmospheric Monitoring Service and downscale the forecast at relatively high spatial resolution in eastern China. The paper will describe the forecast system and show examples of forecasts produced for several Chinese urban areas and displayed on a web site developed by the Dutch Meteorological service. A discussion on the accuracy of the predictions based on a detailed validation process using surface measurements from the Chinese monitoring network will be presented.
Khan, Nabeel; Patel, Dhruvan; Shah, Yash; Yang, Yu-Xiao
2017-05-01
Anemia and iron deficiency are common complications of ulcerative colitis (UC). We aimed to develop and internally validate a prediction model for the incidence of moderate to severe anemia and iron deficiency anemia (IDA) in newly diagnosed patients with UC. Multivariable logistic regression was performed among a nationwide cohort of patients who were newly diagnosed with UC in the VA health-care system. Model development was performed in a random two-thirds of the total cohort and then validated in the remaining one-third of the cohort. As candidate predictors, we examined routinely available data at the time of UC diagnosis including demographics, medications, laboratory results, and endoscopy findings. A total of 789 patients met the inclusion criteria. For the outcome of moderate to severe anemia, age, albumin level and mild anemia at UC diagnosis were the predictors selected for the model. The AUC for this model was 0.69 (95% CI 0.64-0.74). For the outcome of moderate to severe anemia with evidence of iron deficiency, the predictors included African-American ethnicity, mild anemia, age, and albumin level at UC diagnosis. The AUC was 0.76 (95% CI 0.69-0.82). Calibration was consistently good in all models (Hosmer-Lemeshow goodness of fit p > 0.05). The models performed similarly in the internal validation cohort. We developed and internally validated a prognostic model for predicting the risk of moderate to severe anemia and IDA among newly diagnosed patients with UC. This will help identify patients at high risk of these complications, who could benefit from surveillance and preventive measures.
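The develop-then-validate workflow (fit a logistic model on a random two-thirds split, check discrimination by AUC on the held-out third) can be sketched from scratch. The data here are synthetic, and the predictors (age, albumin, baseline mild anemia) and coefficients are illustrative stand-ins, not the paper's fitted model.

```python
import math, random

random.seed(0)

def simulate(n):
    rows = []
    for _ in range(n):
        age = random.gauss(50, 12)
        albumin = random.gauss(4.0, 0.5)
        mild_anemia = random.random() < 0.3
        logit = -1.0 + 0.03 * (age - 50) - 1.2 * (albumin - 4.0) + 1.5 * mild_anemia
        y = random.random() < 1 / (1 + math.exp(-logit))
        rows.append(([1.0, age / 50, albumin / 4, float(mild_anemia)], int(y)))
    return rows

def fit(rows, steps=2000, lr=0.1):
    # logistic regression by batch gradient ascent on the log-likelihood
    w = [0.0] * len(rows[0][0])
    for _ in range(steps):
        g = [0.0] * len(w)
        for x, y in rows:
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for j, xj in enumerate(x):
                g[j] += (y - p) * xj
        w = [wi + lr * gj / len(rows) for wi, gj in zip(w, g)]
    return w

def auc(scores, labels):
    # probability a random positive outranks a random negative
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

data = simulate(600)
dev, val = data[:400], data[400:]          # development / validation split
w = fit(dev)
scores = [sum(wi * xi for wi, xi in zip(w, x)) for x, _ in val]
labels = [y for _, y in val]
print(round(auc(scores, labels), 2))
```

Because the validation third is untouched during fitting, its AUC is an (optimistically small) internal estimate of how the model would discriminate on new patients.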
Liu, Guang-Hui; Shen, Hong-Bin; Yu, Dong-Jun
2016-04-01
Accurately predicting protein-protein interaction sites (PPIs) is currently a hot topic because it has been demonstrated to be very useful for understanding disease mechanisms and designing drugs. Machine-learning-based computational approaches have been broadly utilized and demonstrated to be useful for PPI prediction. However, directly applying traditional machine learning algorithms, which often assume that samples in different classes are balanced, often leads to poor performance because of the severe class imbalance that exists in the PPI prediction problem. In this study, we propose a novel method for improving PPI prediction performance by relieving the severity of class imbalance using a data-cleaning procedure and reducing predicted false positives with a post-filtering procedure: First, a machine-learning-based data-cleaning procedure is applied to remove those marginal targets, which may potentially have a negative effect on training a model with a clear classification boundary, from the majority samples to relieve the severity of class imbalance in the original training dataset; then, a prediction model is trained on the cleaned dataset; finally, an effective post-filtering procedure is further used to reduce potential false positive predictions. Stringent cross-validation and independent validation tests on benchmark datasets demonstrated the efficacy of the proposed method, which exhibits highly competitive performance compared with existing state-of-the-art sequence-based PPIs predictors and should supplement existing PPI prediction methods.
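The two-stage idea (remove marginal majority samples before training, then post-filter low-confidence positive predictions) can be sketched on a toy one-dimensional problem. The distributions, the distance-based cleaning rule, and the margin threshold below are all illustrative, not the paper's actual procedure.

```python
import random

random.seed(2)

# Imbalanced toy data: few "interaction site" samples, many background.
minority = [random.gauss(2.0, 0.4) for _ in range(20)]
majority = [random.gauss(0.0, 1.0) for _ in range(400)]

def dist_to_minority(x):
    return min(abs(x - m) for m in minority)

# (1) Data cleaning: drop "marginal" majority samples lying close to
# minority samples, which would blur the classification boundary.
cleaned = [x for x in majority if dist_to_minority(x) > 0.5]

# Toy classifier trained on the cleaned set: threshold at the midpoint
# between the class means.
thr = (sum(minority) / len(minority) + sum(cleaned) / len(cleaned)) / 2

def score(x):        # larger = more "minority-like"
    return x - thr

# (2) Post-filtering: require a confidence margin, not just a positive
# score, to cut false positives on the majority class.
raw_positives = [x for x in majority if score(x) > 0]
filtered = [x for x in raw_positives if score(x) > 0.3]
print(len(majority) - len(cleaned), len(raw_positives), len(filtered))
```

Cleaning shrinks the majority class only where it overlaps the minority class, and the post-filter trades a little recall for fewer false positives, mirroring the roles the two procedures play in the proposed method.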
NASA Astrophysics Data System (ADS)
Shi, Ming F.; Zhang, Li; Zhu, Xinhai
2016-08-01
The Yoshida nonlinear isotropic/kinematic hardening material model is often selected in forming simulations where an accurate springback prediction is required. Many successful application cases in the industrial scale automotive components using advanced high strength steels (AHSS) have been reported to give better springback predictions. Several issues have been raised recently in the use of the model for higher strength AHSS including the use of two C vs. one C material parameters in the Armstrong and Frederick model (AF model), the original Yoshida model vs. Original Yoshida model with modified hardening law, and constant Young's Modulus vs. decayed Young's Modulus as a function of plastic strain. In this paper, an industrial scale automotive component using 980 MPa strength materials is selected to study the effect of two C and one C material parameters in the AF model on both forming and springback prediction using the Yoshida model with and without the modified hardening law. The effect of decayed Young's Modulus on the springback prediction for AHSS is also evaluated. In addition, the limitations of the material parameters determined from tension and compression tests without multiple cycle tests are also discussed for components undergoing several bending and unbending deformations.
Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.
2003-01-01
Logistic regression was used to predict the probability of debris flows occurring in areas recently burned by wildland fires. Multiple logistic regression is conceptually similar to multiple linear regression because statistical relations between one dependent variable and several independent variables are evaluated. In logistic regression, however, the dependent variable is transformed to a binary variable (debris flow did or did not occur), and the actual probability of the debris flow occurring is statistically modeled. Data from 399 basins located within 15 wildland fires that burned during 2000-2002 in Colorado, Idaho, Montana, and New Mexico were evaluated. More than 35 independent variables describing the burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows were delineated from National Elevation Data using a Geographic Information System (GIS). (2) Data describing the burn severity, geology, land surface gradient, rainfall, and soil properties were determined for each basin. These data were then downloaded to a statistics software package for analysis using logistic regression. (3) Relations between the occurrence/non-occurrence of debris flows and burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated and several preliminary multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combination produced the most effective model. The multivariate model that best predicted the occurrence of debris flows was selected. (4) The multivariate logistic regression model was entered into a GIS, and a map showing the probability of debris flows was constructed. 
The most effective model incorporates the percentage of each basin with slope greater than 30 percent, percentage of land burned at medium and high burn severity in each basin, particle size sorting, average storm intensity (millimeters per hour), soil organic matter content, soil permeability, and soil drainage. The results of this study demonstrate that logistic regression is a valuable tool for predicting the probability of debris flows occurring in recently-burned landscapes.
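Step (3) of the procedure, evaluating all possible combinations of independent variables and keeping the best-performing one, can be sketched with `itertools.combinations`. The simulated basin attributes (including the variable names) and the toy centroid classifier are illustrative stand-ins for the study's real data and logistic regression models.

```python
import itertools, random

random.seed(3)

VARS = ["slope_gt30", "burn_severity", "storm_intensity", "noise"]

def make_basin(flow):
    return {
        "slope_gt30": random.gauss(60 if flow else 30, 10),
        "burn_severity": random.gauss(70 if flow else 40, 15),
        "storm_intensity": random.gauss(20 if flow else 10, 5),
        "noise": random.gauss(0, 1),    # carries no signal
        "flow": flow,
    }

basins = [make_basin(True) for _ in range(50)] + \
         [make_basin(False) for _ in range(50)]

def accuracy(subset):
    # nearest-centroid classification on the chosen variable subset
    def centroid(flow):
        grp = [b for b in basins if b["flow"] == flow]
        return [sum(b[v] for b in grp) / len(grp) for v in subset]
    c1, c0 = centroid(True), centroid(False)
    hits = 0
    for b in basins:
        x = [b[v] for v in subset]
        d1 = sum((xi - ci) ** 2 for xi, ci in zip(x, c1))
        d0 = sum((xi - ci) ** 2 for xi, ci in zip(x, c0))
        hits += (d1 < d0) == b["flow"]
    return hits / len(basins)

# exhaustively score every non-empty combination of candidate variables
best = max(
    (s for r in range(1, len(VARS) + 1)
     for s in itertools.combinations(VARS, r)),
    key=accuracy,
)
print(best, round(accuracy(best), 2))
```

With four candidates this is only 15 combinations; with the study's 35+ variables the same loop becomes billions of subsets, which is why screening with single-variable relations (step 3 above) precedes the combinatorial search in practice.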
NASA Astrophysics Data System (ADS)
Love, D. M.; Venturas, M.; Sperry, J.; Wang, Y.; Anderegg, W.
2017-12-01
Modeling approaches for tree stomatal control often rely on empirical fitting to provide accurate estimates of whole-tree transpiration (E) and assimilation (A), which are limited in their predictive power by the data envelope used to calibrate model parameters. Optimization-based models hold promise as a means to predict stomatal behavior under novel climate conditions. We designed an experiment to test a hydraulic-trait-based optimization model, which predicts stomatal conductance from a gain/risk approach. Optimal stomatal conductance is expected to maximize the potential carbon gain by photosynthesis and minimize the risk to hydraulic transport imposed by cavitation. The modeled risk to the hydraulic network is assessed from cavitation vulnerability curves, a commonly measured physiological trait in woody plant species. Over a growing season, garden-grown plots of aspen (Populus tremuloides, Michx.) and ponderosa pine (Pinus ponderosa, Douglas) were subjected to three distinct drought treatments (moderate, severe, severe with rehydration) relative to a control plot to test model predictions. Model outputs of predicted E, A, and xylem pressure can be directly compared to both continuous data (whole-tree sapflux, soil moisture) and point measurements (leaf-level E, A, xylem pressure). The model also predicts levels of whole-tree hydraulic impairment expected to increase mortality risk; this threshold is used to estimate survivorship in the drought treatment plots. The model can be run at two scales, either entirely from climate (meteorological inputs, irrigation) or using the physiological measurements as a starting point. These data will be used to study model performance and utility, and to aid in developing the model for larger-scale applications.
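The gain/risk optimization can be illustrated with a toy objective: choose the stomatal conductance g that maximizes normalized carbon gain minus normalized hydraulic risk. The two curve shapes below (saturating gain, accelerating risk) are generic illustrations, not the model's actual photosynthesis or vulnerability functions.

```python
# Toy gain/risk optimization for stomatal conductance g (arbitrary units).
def gain(g):           # photosynthetic gain saturates with conductance
    return g / (g + 0.1)

def risk(g):           # cavitation risk accelerates with conductance
    return (g / 0.5) ** 3

# grid search over candidate conductances
gs = [i / 1000 for i in range(1, 501)]
g_opt = max(gs, key=lambda g: gain(g) - risk(g))
print(round(g_opt, 3))
```

The optimum sits where the marginal carbon gain of opening stomata further equals the marginal hydraulic risk, which is the trait-based trade-off the model evaluates under each drought treatment.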
A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.
Ni, Qianwu; Chen, Lei
2017-01-01
Correct prediction of protein structural class is beneficial to the investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, it is still a great challenge to select a proper classification algorithm and to extract the essential features for classification. In this study, a feature and algorithm selection method is presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physiochemical features were adopted to represent features, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The predicted classes yielded by these algorithms and the true class of each protein were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. From the algorithm list, algorithms were taken one by one to build an ensemble prediction model, and the ensemble with the best performance was selected as the optimal ensemble prediction model. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. Both procedures are genuinely helpful for building an ensemble prediction model with better performance. Copyright © Bentham Science Publishers.
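The incremental-selection loop at the heart of the method can be sketched as follows: given an mRMR-ranked feature list, build nested feature sets (top-1, top-2, ...), score a model on each, and keep the best-scoring prefix; the same loop is then applied to a ranked algorithm list to grow a majority-vote ensemble. The feature names and scores below are made up so the sketch is self-contained; in the paper each prefix is evaluated by running the candidate algorithms on real protein data.

```python
# Incremental feature selection over an mRMR-ranked list.
ranked = ["aa_comp_A", "hydrophobicity", "aa_comp_G", "polarity", "charge"]
score_of = {1: 0.61, 2: 0.68, 3: 0.74, 4: 0.73, 5: 0.71}  # illustrative

best_k = max(score_of, key=score_of.get)   # prefix length with best score
best_features = ranked[:best_k]
print(best_features)

# Second stage, same idea applied to algorithms: grow a majority-vote
# ensemble one ranked algorithm at a time and keep the best ensemble.
def majority_vote(predictions):
    return max(set(predictions), key=predictions.count)

votes = ["alpha", "beta", "alpha"]   # three base classifiers, one protein
print(majority_vote(votes))
```

Scoring only nested prefixes of the ranked lists keeps the search linear in the number of features and algorithms, instead of exponential as in exhaustive subset selection.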
Aguilée, Robin; Raoul, Gaël; Rousset, François; Ronce, Ophélie
2016-01-01
Species may survive climate change by migrating to track favorable climates and/or adapting to different climates. Several quantitative genetics models predict that species escaping extinction will change their geographical distribution while keeping the same ecological niche. We introduce pollen dispersal in these models, which affects gene flow but not directly colonization. We show that plant populations may escape extinction because of both spatial range and ecological niche shifts. Exact analytical formulas predict that increasing pollen dispersal distance slows the expected spatial range shift and accelerates the ecological niche shift. There is an optimal distance of pollen dispersal, which maximizes the sustainable rate of climate change. These conclusions hold in simulations relaxing several strong assumptions of our analytical model. Our results imply that, for plants with long distance of pollen dispersal, models assuming niche conservatism may not accurately predict their future distribution under climate change. PMID:27621443
NASA Astrophysics Data System (ADS)
Ramaswamy, V.; Chen, J. H.; Delworth, T. L.; Knutson, T. R.; Lin, S. J.; Murakami, H.; Vecchi, G. A.
2017-12-01
Damages from catastrophic tropical storms such as the 2017 destructive hurricanes compel an acceleration of scientific advancements to understand the genesis, underlying mechanisms, frequency, track, intensity, and landfall of these storms. The advances are crucial to provide improved early information for planners and responders. We discuss the development and utilization of a global modeling capability based on a novel atmospheric dynamical core ("Finite-Volume Cubed Sphere or FV3") which captures the realism of the recent tropical storms and is a part of the NOAA Next-Generation Global Prediction System. This capability is also part of an emerging seamless modeling system at NOAA/ Geophysical Fluid Dynamics Laboratory for simulating the frequency of storms on seasonal and longer timescales with high fidelity e.g., Atlantic hurricane frequency over the past decades. In addition, the same modeling system has also been employed to evaluate the nature of projected storms on the multi-decadal scales under the influence of anthropogenic factors such as greenhouse gases and aerosols. The seamless modeling system thus facilitates research into and the predictability of severe tropical storms across diverse timescales of practical interest to several societal sectors.
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
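The "decomposition and ensemble" principle can be sketched in miniature: decompose the series, predict each component separately, and sum the component forecasts. Here a moving average stands in for CEEMD and naive trend/seasonal rules stand in for the GWO-tuned SVRs; the series is synthetic.

```python
import math

# Toy series: linear trend plus a 7-step seasonal cycle.
series = [10 + 0.1 * t + 3 * math.sin(2 * math.pi * t / 7) for t in range(70)]

# "Decomposition": a 7-point moving average extracts the trend, and the
# remainder is the seasonal component (a stand-in for CEEMD's IMFs).
w = 7
trend = [sum(series[t - w:t]) / w for t in range(w, len(series))]
resid = [series[t] - trend[t - w] for t in range(w, len(series))]

# Per-component forecasts for the next step (stand-ins for per-IMF SVRs):
trend_hat = trend[-1] + (trend[-1] - trend[-2])   # linear extrapolation
resid_hat = resid[-7]                             # seasonal persistence

# "Ensemble": recombine the component forecasts.
forecast = trend_hat + resid_hat
actual = 10 + 0.1 * 70 + 3 * math.sin(2 * math.pi * 70 / 7)
print(round(forecast, 2), round(actual, 2))
```

Each component is far simpler than the raw series (a near-line and a pure cycle), which is why decomposition-ensemble methods can beat a single model fitted to the complex original.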
Reddy, Bhargava K; Delen, Dursun; Agrawal, Rupesh K
2018-01-01
Crohn's disease is among the chronic inflammatory bowel diseases that impact the gastrointestinal tract. Understanding and predicting the severity of inflammation in real-time settings is critical to disease management. Extant literature has primarily focused on studies that are conducted in clinical trial settings to investigate the impact of a drug treatment on the remission status of the disease. This research proposes an analytics methodology where three different types of prediction models are developed to predict and to explain the severity of inflammation in patients diagnosed with Crohn's disease. The results show that machine-learning-based analytic methods such as gradient boosting machines can predict the inflammation severity with a very high accuracy (area under the curve = 92.82%), followed by regularized regression and logistic regression. According to the findings, a combination of baseline laboratory parameters, patient demographic characteristics, and disease location are among the strongest predictors of inflammation severity in Crohn's disease patients.
Thermal barrier coating life prediction model
NASA Technical Reports Server (NTRS)
Pilsner, B. H.; Hillery, R. V.; Mcknight, R. L.; Cook, T. S.; Kim, K. S.; Duderstadt, E. C.
1986-01-01
The objectives of this program are to determine the predominant modes of degradation of a plasma-sprayed thermal barrier coating system, and then to develop and verify life prediction models accounting for these degradation modes. The program is divided into two phases, each consisting of several tasks. The work in Phase 1 is aimed at identifying the relative importance of the various failure modes, and at developing and verifying life prediction model(s) for the predominant failure mode of a thermal barrier coating system. Two possible predominant failure mechanisms being evaluated are bond coat oxidation and bond coat creep. The work in Phase 2 will develop design-capable, causal life prediction models for thermomechanical and thermochemical failure modes, and for the exceptional conditions of foreign object damage and erosion.
Learning to Predict Combinatorial Structures
NASA Astrophysics Data System (ADS)
Vembu, Shankar
2009-12-01
The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.
Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model
NASA Astrophysics Data System (ADS)
Tang, Jingshi; Liu, Lin; Miao, Manqian
Tiangong-1 is China's test module for a future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted over a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, can induce a semi-major axis error of a few kilometers and an overall position error of several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series bears the recent variation of the atmosphere density and can be analyzed for various periods. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that densities predicted with this approach serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
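The fit-then-extrapolate step can be sketched with a least-squares harmonic fit: fit the accumulated diurnal mean densities with a mean plus one harmonic at a known period, then extrapolate 20 days forward. The 27-day period (roughly one solar rotation) and the noiseless synthetic density series are illustrative assumptions, not the paper's actual fitting procedure.

```python
import math

P = 27.0           # assumed periodicity of the density series, days
N = 108            # fitting window: an integer number of periods
rho = [3.0 + 0.5 * math.cos(2 * math.pi * d / P - 1.0) for d in range(N)]

# Least-squares fit of mean + one harmonic at the known period
# (the sums are the discrete Fourier projections).
a0 = sum(rho) / N
ac = 2 / N * sum(r * math.cos(2 * math.pi * d / P) for d, r in enumerate(rho))
as_ = 2 / N * sum(r * math.sin(2 * math.pi * d / P) for d, r in enumerate(rho))

def rho_hat(d):
    th = 2 * math.pi * d / P
    return a0 + ac * math.cos(th) + as_ * math.sin(th)

# Extrapolate 20 days past the fitting window, as in a 20-day prediction.
err = max(abs(rho_hat(d) - (3.0 + 0.5 * math.cos(2 * math.pi * d / P - 1.0)))
          for d in range(N, N + 20))
print(err < 1e-9)
```

Real density series carry aperiodic solar-activity variation on top of any periodic part, so in practice the extrapolated fit reduces, rather than eliminates, the a priori density error.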
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of …, which adds specificity to the model and can make nonlinear data more manageable. Early results show that the …
Musite, a tool for global prediction of general and kinase-specific phosphorylation sites.
Gao, Jianjiong; Thelen, Jay J; Dunker, A Keith; Xu, Dong
2010-12-01
Reversible protein phosphorylation is one of the most pervasive post-translational modifications, regulating diverse cellular processes in various organisms. High throughput experimental studies using mass spectrometry have identified many phosphorylation sites, primarily from eukaryotes. However, the vast majority of phosphorylation sites remain undiscovered, even in well studied systems. Because mass spectrometry-based experimental approaches for identifying phosphorylation events are costly, time-consuming, and biased toward abundant proteins and proteotypic peptides, in silico prediction of phosphorylation sites is potentially a useful alternative strategy for whole proteome annotation. Because of various limitations, current phosphorylation site prediction tools were not well designed for comprehensive assessment of proteomes. Here, we present a novel software tool, Musite, specifically designed for large scale predictions of both general and kinase-specific phosphorylation sites. We collected phosphoproteomics data in multiple organisms from several reliable sources and used them to train prediction models by a comprehensive machine-learning approach that integrates local sequence similarities to known phosphorylation sites, protein disorder scores, and amino acid frequencies. Application of Musite on several proteomes yielded tens of thousands of phosphorylation site predictions at a high stringency level. Cross-validation tests show that Musite achieves some improvement over existing tools in predicting general phosphorylation sites, and it is at least comparable with those for predicting kinase-specific phosphorylation sites. In Musite V1.0, we have trained general prediction models for six organisms and kinase-specific prediction models for 13 kinases or kinase families. 
Although the current pretrained models were not correlated with any particular cellular conditions, Musite provides a unique functionality for training customized prediction models (including condition-specific models) from users' own data. In addition, with its easily extensible open source application programming interface, Musite is aimed at being an open platform for community-based development of machine learning-based phosphorylation site prediction applications. Musite is available at http://musite.sourceforge.net/.
Evaluating scaling models in biology using hierarchical Bayesian approaches
Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S
2009-01-01
Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
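The hierarchical Bayesian machinery above is beyond a short sketch, but the core allometric fit it generalizes, estimating the scaling exponent b in y = a·x^b by least squares in log-log space, is simple. A minimal stdlib-Python illustration (the data and the elastic-similarity-like 0.75 exponent are invented for the example):

```python
import math

def fit_allometric_exponent(x, y):
    """Ordinary least squares in log-log space for y = a * x**b.
    Returns (a, b): the normalization and the scaling exponent."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    sxy = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
    sxx = sum((u - mx) ** 2 for u in lx)
    b = sxy / sxx                 # slope in log space = scaling exponent
    a = math.exp(my - b * mx)     # intercept back-transformed
    return a, b

# Exact power-law toy data: y = 2 * x ** 0.75
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [2.0 * v ** 0.75 for v in xs]
a, b = fit_allometric_exponent(xs, ys)
```

The hierarchical versions discussed in the abstract fit many such (a, b) pairs at once, with species-level exponents drawn from a shared distribution.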
Cosmic ray antiprotons in closed galaxy model
NASA Technical Reports Server (NTRS)
Protheroe, R.
1981-01-01
The flux of secondary antiprotons expected for the leaky-box model was calculated, as well as that for the closed galaxy model of Peters and Westergard (1977). The antiproton/proton ratio observed at several GeV is a factor of 4 higher than the prediction for the leaky-box model but is consistent with that predicted for the closed galaxy model. New low-energy data are not consistent with either model. The possibility of a primary antiproton component is discussed.
Comparison of modeled backscatter with SAR data at P-band
NASA Technical Reports Server (NTRS)
Wang, Yong; Davis, Frank W.; Melack, John M.
1992-01-01
In recent years several analytical models were developed to predict microwave scattering by trees and forest canopies. These models contribute to the understanding of radar backscatter over forested regions to the extent that they capture the basic interactions between microwave radiation and tree canopies, understories, and ground layers as functions of incidence angle, wavelength, and polarization. The Santa Barbara microwave backscatter model for woodland (i.e. with discontinuous tree canopies) combines a single-tree backscatter model and a gap probability model. Comparison of model predictions with synthetic aperture radar (SAR) data at L-band (lambda = 0.235 m) is promising, but much work is still needed to test the validity of model predictions at other wavelengths. Here, the validity of the model predictions at P-band (lambda = 0.68 m) was tested for woodland stands at our Mt. Shasta test site.
NASA Technical Reports Server (NTRS)
Murch, Austin M.; Foster, John V.
2007-01-01
A simulation study was conducted to investigate aerodynamic modeling methods for prediction of post-stall flight dynamics of large transport airplanes. The research approach involved integrating dynamic wind tunnel data from rotary balance and forced oscillation testing with static wind tunnel data to predict aerodynamic forces and moments during highly dynamic departure and spin motions. Several state-of-the-art aerodynamic modeling methods were evaluated and predicted flight dynamics using these various approaches were compared. Results showed the different modeling methods had varying effects on the predicted flight dynamics and the differences were most significant during uncoordinated maneuvers. Preliminary wind tunnel validation data indicated the potential of the various methods for predicting steady spin motions.
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-07-01
Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFC-standard model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFC-standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
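One of the internal-validation metrics named above, the AUC, can be computed directly from predicted risks as the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch (the toy risk scores are invented):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs in which the positive
    case has the higher predicted risk; ties count one half."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy predicted risks for patients with / without severe mucositis
pos = [0.9, 0.8, 0.6]
neg = [0.7, 0.4, 0.3, 0.2]
```

An AUC of 0.5 is chance-level discrimination and 1.0 is perfect separation; the 0.71 quoted in the abstract sits in the modest-to-good range.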
Choi, Ickwon; Kattan, Michael W; Wells, Brian J; Yu, Changhong
2012-01-01
In clinical practice, prognostic models that use clinicopathologic features to predict prognosis after a certain treatment have been externally validated and widely used. In recent years, most research has focused on high dimensional genomic data and small sample sizes. Since clinically similar but molecularly heterogeneous tumors may produce different clinical outcomes, the combination of clinical and genomic information, which may be complementary, is crucial to improve the quality of prognostic predictions. However, there is a lack of an integrating scheme for clinico-genomic models due to the P ≥ N problem, in particular, for a parsimonious model. We propose a methodology to build a reduced yet accurate integrative model using a hybrid approach based on the Cox regression model, which uses several dimension reduction techniques, L₂ penalized maximum likelihood estimation (PMLE), and resampling methods to tackle the problem. The predictive accuracy of the modeling approach is assessed by several metrics via an independent and thorough scheme to compare competing methods. In breast cancer data studies on a metastasis and death event, we show that the proposed methodology can improve prediction accuracy and build a final model with a hybrid signature that is parsimonious when integrating both types of variables.
Hsieh, Cheng-Yang; Lee, Cheng-Han; Wu, Darren Philbert; Sung, Sheng-Feng
2018-05-01
Early detection of atrial fibrillation (AF) after stroke is important for secondary prevention in stroke patients without known AF. We aimed to compare the performance of the CHADS₂, CHA₂DS₂-VASc and HATCH scores in predicting AF detected after stroke (AFDAS) and to test whether adding stroke severity to the risk scores improves predictive performance. Adult patients with a first ischemic stroke event but without a prior history of AF were retrieved from a nationwide population-based database. We compared C-statistics of the CHADS₂, CHA₂DS₂-VASc and HATCH scores for predicting the occurrence of AFDAS during stroke admission (cohort I) and during follow-up after hospital discharge (cohort II). The added value of stroke severity to prediction models was evaluated using C-statistics, net reclassification improvement, and integrated discrimination improvement. Cohort I comprised 13,878 patients and cohort II comprised 12,567 patients. Among them, 806 (5.8%) and 657 (5.2%) were diagnosed with AF, respectively. The CHADS₂ score had the lowest C-statistics (0.558 in cohort I and 0.597 in cohort II), whereas the CHA₂DS₂-VASc score had comparable C-statistics (0.603 and 0.644) to the HATCH score (0.612 and 0.653) in predicting AFDAS. Adding stroke severity to each of the three risk scores significantly increased model performance. In stroke patients without known AF, all three risk scores predicted AFDAS during admission and follow-up, but with suboptimal discrimination. Adding stroke severity improved their predictive abilities. These risk scores, when combined with stroke severity, may help prioritize patients for continuous cardiac monitoring in daily practice. Copyright © 2018 Elsevier B.V. All rights reserved.
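For readers unfamiliar with the scores being compared, CHADS₂ is a simple additive rule. A sketch based on its commonly published components (1 point each for congestive heart failure, hypertension, age ≥ 75, and diabetes; 2 points for prior stroke/TIA); this is an illustration, not the study's implementation:

```python
def chads2(chf, hypertension, age, diabetes, prior_stroke_or_tia):
    """CHADS2 stroke-risk score, range 0-6."""
    score = 0
    score += 1 if chf else 0                  # C: congestive heart failure
    score += 1 if hypertension else 0         # H: hypertension
    score += 1 if age >= 75 else 0            # A: age >= 75 years
    score += 1 if diabetes else 0             # D: diabetes mellitus
    score += 2 if prior_stroke_or_tia else 0  # S2: prior stroke or TIA
    return score
```

CHA₂DS₂-VASc and HATCH are built the same way from different (and partly overlapping) clinical items, which is why their discrimination for AFDAS is similar.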
Seasonal prediction of winter haze days in the north central North China Plain
NASA Astrophysics Data System (ADS)
Yin, Zhicong; Wang, Huijun
2016-11-01
Recently, the winter (December-February) haze pollution over the north central North China Plain (NCP) has become severe. By treating the year-to-year increment as the predictand, two new statistical schemes were established using multiple linear regression (MLR) and a generalized additive model (GAM). By analyzing the associated increment of atmospheric circulation, seven leading predictors were selected to predict the upcoming winter haze days over the NCP (WHDNCP). After cross validation, the root mean square error and explained variance of the MLR (GAM) prediction model were 3.39 (3.38) and 53% (54%), respectively. For the final predicted WHDNCP, both models could successfully capture the interannual and interdecadal trends and the extremes. Independent prediction tests for 2014 and 2015 also confirmed the good predictive skill of the new schemes. The prediction bias of the MLR (GAM) model in 2014 and 2015 was 0.09 (-0.07) and -3.33 (-1.01), respectively. Compared to the MLR model, the GAM model had higher predictive skill in reproducing the rapid and continuous increase of WHDNCP after 2010.
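The year-to-year-increment approach above predicts the change in haze days rather than the raw count, then adds that change to the previous year's observation. A minimal one-predictor sketch of the idea (all numbers are invented, and the real schemes use seven predictors):

```python
def ols_1d(x, y):
    """One-predictor ordinary least squares; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((u - mx) * (v - my) for u, v in zip(x, y))
             / sum((u - mx) ** 2 for u in x))
    return my - slope * mx, slope

# Toy year-to-year increments of a circulation predictor (dx) and of
# winter haze days (dy); here dy = 2 * dx exactly, for clarity.
dx = [1.0, -2.0, 0.5, 1.5, -1.0]
dy = [2.0, -4.0, 1.0, 3.0, -2.0]
a, b = ols_1d(dx, dy)

# Predict next winter: regress the increment, then add last year's count.
last_year_haze = 20.0
predicted_increment = a + b * 1.2   # 1.2 = this year's predictor increment
predicted_haze = last_year_haze + predicted_increment
```

Working in increments removes slowly varying background trends from the predictand, which is the stated motivation for the technique.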
Gartner, Joseph E.; Cannon, Susan H.; Santi, Paul M
2014-01-01
Debris flows and sediment-laden floods in the Transverse Ranges of southern California pose severe hazards to nearby communities and infrastructure. Frequent wildfires denude hillslopes and increase the likelihood of these hazardous events. Debris-retention basins protect communities and infrastructure from the impacts of debris flows and sediment-laden floods and also provide critical data for volumes of sediment deposited at watershed outlets. In this study, we supplement existing data for the volumes of sediment deposited at watershed outlets with newly acquired data to develop new empirical models for predicting volumes of sediment produced by watersheds located in the Transverse Ranges of southern California. The sediment volume data represent a broad sample of conditions found in Ventura, Los Angeles and San Bernardino Counties, California. The measured volumes of sediment, watershed morphology, distributions of burn severity within each watershed, the time since the most recent fire, triggering storm rainfall conditions, and engineering soil properties were analyzed using multiple linear regressions to develop two models. A “long-term model” was developed for predicting volumes of sediment deposited by both debris flows and floods at various times since the most recent fire from a database of volumes of sediment deposited by a combination of debris flows and sediment-laden floods with no time limit since the most recent fire (n = 344). A subset of this database was used to develop an “emergency assessment model” for predicting volumes of sediment deposited by debris flows within two years of a fire (n = 92). Prior to developing the models, 32 volumes of sediment, and related parameters for watershed morphology, burn severity and rainfall conditions were retained to independently validate the long-term model. Ten of these volumes of sediment were deposited by debris flows within two years of a fire and were used to validate the emergency assessment model. 
The models were validated by comparing predicted and measured volumes of sediment. The same validations were performed for previously developed models and indicate that the models developed here best predict volumes of sediment for burned watersheds.
Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.; Michael, John A.; Helsel, Dennis R.
2008-01-01
Logistic regression was used to develop statistical models that can be used to predict the probability of debris flows in areas recently burned by wildfires by using data from 14 wildfires that burned in southern California during 2003-2006. Twenty-eight independent variables describing the basin morphology, burn severity, rainfall, and soil properties of 306 drainage basins located within those burned areas were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows soon after the 2003 to 2006 fires were delineated from data in the National Elevation Dataset using a geographic information system; (2) Data describing the basin morphology, burn severity, rainfall, and soil properties were compiled for each basin. These data were then input to a statistics software package for analysis using logistic regression; and (3) Relations between the occurrence or absence of debris flows and the basin morphology, burn severity, rainfall, and soil properties were evaluated, and five multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combinations produced the most effective models, and the multivariate models that best predicted the occurrence of debris flows were identified. Percentage of high burn severity and 3-hour peak rainfall intensity were significant variables in all models. Soil organic matter content and soil clay content were significant variables in all models except Model 5. Soil slope was a significant variable in all models except Model 4. The most suitable model can be selected from these five models on the basis of the availability of independent variables in the particular area of interest and field checking of probability maps. 
The multivariate logistic regression models can be entered into a geographic information system, and maps showing the probability of debris flows can be constructed in recently burned areas of southern California. This study demonstrates that logistic regression is a valuable tool for developing models that predict the probability of debris flows occurring in recently burned landscapes.
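The logistic-regression step described above can be sketched with a single predictor fitted by gradient ascent on the log-likelihood. The rainfall-intensity data below are invented, and the study's actual models are multivariate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Single-predictor logistic regression by gradient ascent on the
    log-likelihood; returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys)) / n
        g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys)) / n
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

# Toy data: debris-flow occurrence (1) is more likely at higher
# peak rainfall intensity (arbitrary units).
rain = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
flow = [0,   0,   0,   0,   1,   1,   1,   1]
b0, b1 = fit_logistic(rain, flow)
p_at_3 = sigmoid(b0 + b1 * 3.0)   # predicted debris-flow probability
```

The fitted model returns a probability between 0 and 1 for any rainfall value, which is exactly what makes logistic regression suitable for the probability maps described in the abstract.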
Intersection crash prediction modeling with macro-level data from various geographic units.
Lee, Jaeyoung; Abdel-Aty, Mohamed; Cai, Qing
2017-05-01
There have been great efforts to develop traffic crash prediction models for various types of facilities. The crash models have played a key role in identifying crash hotspots and evaluating safety countermeasures. In recent years, many macro-level crash prediction models have been developed to incorporate highway safety considerations in the long-term transportation planning process. Although numerous macro-level studies have found that a variety of demographic and socioeconomic zonal characteristics have substantial effects on traffic safety, few studies have attempted to coalesce micro-level with macro-level data from existing geographic units for estimating crash models. In this study, the authors have developed a series of intersection crash models for total, severe, pedestrian, and bicycle crashes with macro-level data for seven spatial units. The study revealed that the total, severe, and bicycle crash models with ZIP-code tabulation area data perform the best, and the pedestrian crash models with census tract-based data outperform the competing models. Furthermore, it was found that intersection crash models can be drastically improved by only including random effects for macro-level entities. The intersection crash models are even further enhanced by including other macro-level variables. Lastly, the pedestrian and bicycle crash modeling results imply that several macro-level variables (e.g., population density, proportions of specific age groups, commuters who walk, or commuters using bicycles) can be a good surrogate exposure measure for those crashes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Inverse and Predictive Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syracuse, Ellen Marie
The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. Although team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.
Rosellini, A J; Stein, M B; Benedek, D M; Bliese, P D; Chiu, W T; Hwang, I; Monahan, J; Nock, M K; Petukhova, M V; Sampson, N A; Street, A E; Zaslavsky, A M; Ursano, R J; Kessler, R C
2017-10-01
The U.S. Army uses universal preventive interventions for several negative outcomes (e.g. suicide, violence, sexual assault) with especially high risks in the early years of service. More intensive interventions exist, but would be cost-effective only if targeted at high-risk soldiers. We report results of efforts to develop models for such targeting from self-report surveys administered at the beginning of Army service. 21,832 new soldiers completed a self-administered questionnaire (SAQ) in 2011-2012 and consented to link administrative data to SAQ responses. Penalized regression models were developed for 12 administratively-recorded outcomes occurring by December 2013: suicide attempt, mental hospitalization, positive drug test, traumatic brain injury (TBI), other severe injury, several types of violence perpetration and victimization, demotion, and attrition. The best-performing models were for TBI (AUC = 0.80), major physical violence perpetration (AUC = 0.78), sexual assault perpetration (AUC = 0.78), and suicide attempt (AUC = 0.74). Although predicted risk scores were significantly correlated across outcomes, prediction was not improved by including risk scores for other outcomes in models. Of particular note: 40.5% of suicide attempts occurred among the 10% of new soldiers with highest predicted risk, 57.2% of male sexual assault perpetrations among the 15% with highest predicted risk, and 35.5% of female sexual assault victimizations among the 10% with highest predicted risk. Data collected at the beginning of service in self-report surveys could be used to develop risk models that define small proportions of new soldiers accounting for high proportions of negative outcomes over the first few years of service.
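The targeting result quoted above (e.g. 40.5% of suicide attempts among the top 10% of predicted risk) is a concentration-of-risk calculation: rank everyone by predicted risk and ask what share of the observed events falls in the top fraction. A minimal sketch with invented scores and outcomes:

```python
def share_in_top_fraction(risk, outcome, fraction):
    """Proportion of all observed outcomes (outcome[i] == 1) that fall
    among the `fraction` of people with the highest predicted risk."""
    order = sorted(range(len(risk)), key=lambda i: risk[i], reverse=True)
    k = max(1, int(round(fraction * len(risk))))
    top = set(order[:k])
    return sum(outcome[i] for i in top) / sum(outcome)

# Ten toy soldiers: predicted risks and observed events (invented)
risks  = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
events = [1,   1,   0,   0,   1,   0,   0,   0,   0,   0]
# The single highest-risk person (top 10%) accounts for 1 of the 3 events.
```

The higher this share relative to the fraction screened, the more cost-effective targeted intervention becomes.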
The management submodel of the Wind Erosion Prediction System
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is a process-based, daily time-step, computer model that predicts soil erosion via simulation of the physical processes controlling wind erosion. WEPS is comprised of several individual modules (submodels) that reflect different sets of physical processes, ...
Reusser, D.A.; Lee, H.
2008-01-01
Habitat models can be used to predict the distributions of marine and estuarine non-indigenous species (NIS) over several spatial scales. At an estuary scale, our goal is to predict the estuaries most likely to be invaded, but at a habitat scale, the goal is to predict the specific locations within an estuary that are most vulnerable to invasion. As an initial step in evaluating several habitat models, model performance for a suite of benthic species with reasonably well-known distributions on the Pacific coast of the US needs to be compared. We discuss the utility of non-parametric multiplicative regression (NPMR) for predicting habitat- and estuary-scale distributions of native and NIS. NPMR incorporates interactions among variables, allows qualitative and categorical variables, and utilizes data on absence as well as presence. Preliminary results indicate that NPMR generally performs well at both spatial scales and that distributions of NIS are predicted as well as those of native species. For most species, latitude was the single best predictor, although similar model performance could be obtained at both spatial scales with combinations of other habitat variables. Errors of commission were more frequent at a habitat scale, with omission and commission errors approximately equal at an estuary scale. © 2008 International Council for the Exploration of the Sea. Published by Oxford Journals. All rights reserved.
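NPMR's defining feature is that per-variable kernel weights are multiplied, so a training observation contributes only if it is close to the target on every predictor at once; this is how the method captures interactions without specifying them. A minimal Nadaraya-Watson-style sketch with Gaussian kernels (the species data and bandwidths are invented, and this is not the authors' implementation):

```python
import math

def npmr_predict(train_x, train_y, target, sigmas):
    """Kernel-weighted mean of train_y at `target`, where each training
    point's weight is the PRODUCT of per-variable Gaussian kernels."""
    num = den = 0.0
    for xi, yi in zip(train_x, train_y):
        w = 1.0
        for v, t, s in zip(xi, target, sigmas):
            w *= math.exp(-0.5 * ((v - t) / s) ** 2)
        num += w * yi
        den += w
    return num / den

# Toy presence (1) / absence (0) of a species vs. (latitude, salinity)
X = [(34.0, 30.0), (36.0, 31.0), (44.0, 20.0), (46.0, 18.0)]
y = [1.0, 1.0, 0.0, 0.0]
p = npmr_predict(X, y, (35.0, 30.5), sigmas=(2.0, 3.0))
```

Because weights multiply rather than add, a site matching the training data on latitude but not salinity receives almost no weight, unlike in additive models.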
Prediction of wastewater treatment plants performance based on artificial fish school neural network
NASA Astrophysics Data System (ADS)
Zhang, Ruicheng; Li, Chong
2011-10-01
A reliable model of a wastewater treatment plant is essential as a tool for predicting its performance and as a basis for controlling the operation of the process, minimizing operating costs and helping maintain environmental balance. Given the multi-variable, uncertain, non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model was established based on actual operation data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. Model calculations show that predicted values match measured values well, demonstrating good simulation and prediction performance and the ability to optimize operating status. The prediction model provides a simple and practical approach to the operation and management of wastewater treatment plants, and has good research value and practical engineering value.
Predictability of depression severity based on posterior alpha oscillations.
Jiang, H; Popov, T; Jylänki, P; Bi, K; Yao, Z; Lu, Q; Jensen, O; van Gerven, M A J
2016-04-01
We aimed to integrate neural data and an advanced machine learning technique to predict individual major depressive disorder (MDD) patient severity. MEG data was acquired from 22 MDD patients and 22 healthy controls (HC) resting awake with eyes closed. Individual power spectra were calculated by a Fourier transform. Sources were reconstructed via beamforming technique. Bayesian linear regression was applied to predict depression severity based on the spatial distribution of oscillatory power. In MDD patients, decreased theta (4-8 Hz) and alpha (8-14 Hz) power was observed in fronto-central and posterior areas respectively, whereas increased beta (14-30 Hz) power was observed in fronto-central regions. In particular, posterior alpha power was negatively related to depression severity. The Bayesian linear regression model showed significant depression severity prediction performance based on the spatial distribution of both alpha (r=0.68, p=0.0005) and beta power (r=0.56, p=0.007) respectively. Our findings point to a specific alteration of oscillatory brain activity in MDD patients during rest as characterized from MEG data in terms of spectral and spatial distribution. The proposed model yielded a quantitative and objective estimation for the depression severity, which in turn has a potential for diagnosis and monitoring of the recovery process. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
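The reported performance figures (e.g. r = 0.68 between predicted and observed severity) are Pearson correlations. A minimal stdlib sketch with invented severity scores:

```python
import math

def pearson_r(a, b):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Toy predicted vs. observed depression-severity scores (invented)
predicted = [10.0, 14.0, 18.0, 22.0, 30.0]
observed  = [12.0, 13.0, 20.0, 21.0, 28.0]
r = pearson_r(predicted, observed)
```

In the study's leave-out validation, each patient's severity is predicted from a model trained on the others, and r summarizes how well those out-of-sample predictions track the clinical scores.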
Ion exchange of several radionuclides on the hydrous crystalline silicotitanate, UOP IONSIV IE-911
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huckman, M.E.; Latheef, I.M.; Anthony, R.G.
1999-04-01
The crystalline silicotitanate, UOP IONSIV IE-911, is a proven material for removing radionuclides from a wide variety of waste streams. It is superior for removing several radionuclides from the highly alkaline solutions typical of DOE wastes. This laboratory previously developed an equilibrium model applicable to complex solutions for IE-910 (the powder form of the granular IE-911), and more recently, the authors have developed several single component ion-exchange kinetic models for predicting column breakthrough curves and batch reactor concentration histories. In this paper, the authors model ion-exchange column performance using effective diffusivities determined from batch kinetic experiments. This technique is preferable because the batch experiments are easier, faster, and cheaper to perform than column experiments. They also extend these ideas to multicomponent systems. Finally, they evaluate the ability of the equilibrium model to predict data for IE-911.
Decadal climate predictions improved by ocean ensemble dispersion filtering
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
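The ensemble dispersion filter described above shifts each ocean-state member toward the ensemble mean at fixed (seasonal) intervals. A toy sketch of that nudging step; the relaxation factor alpha and the two-point "state" vectors are invented for illustration:

```python
def dispersion_filter(ensemble, alpha):
    """Shift every ensemble member toward the ensemble mean by a factor
    alpha (alpha = 0 is a no-op; alpha = 1 collapses all members onto
    the mean). The ensemble mean itself is unchanged."""
    n = len(ensemble)
    dim = len(ensemble[0])
    mean = [sum(m[i] for m in ensemble) / n for i in range(dim)]
    return [[x + alpha * (mu - x) for x, mu in zip(m, mean)]
            for m in ensemble]

# Three toy ocean-state members (e.g. gridded temperature anomalies)
members = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
filtered = dispersion_filter(members, 0.5)
```

The filter reduces ensemble spread without moving the ensemble mean, which is consistent with the abstract's claim that it preserves the initialized signal while damping member divergence.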
Seligman, D A; Pullinger, A G
2006-11-01
To determine whether patients with temporomandibular joint disease or masticatory muscle pain can be usefully differentiated from asymptomatic controls using multifactorial classification tree models of attrition severity and/or rates. Measures of attrition severity and rates in patients diagnosed with disc displacement (n = 52), osteoarthrosis (n = 74), or masticatory muscle pain only (n = 43) were compared against those in asymptomatic controls (n = 132). Cross-validated classification tree models were tested for fit with sensitivity, specificity, accuracy and log likelihood accountability. The model for identifying asymptomatic controls only required the three measures of attrition severity (anterior, mediotrusive and laterotrusive posterior) to be differentiated from the patients with a 74.2 +/- 3.8% cross-validation accuracy. This compared with cross-validation accuracies of 69.7 +/- 3.7% for differentiating disc displacement using anterior and laterotrusive attrition severity, 68.7 +/- 3.9% for differentiating disc displacement using anterior and laterotrusive attrition rates, 70.9 +/- 3.3% for differentiating osteoarthrosis using anterior attrition severity and rates, 94.6 +/- 2.1% for differentiating myofascial pain using mediotrusive and laterotrusive attrition severity, and 92.0 +/- 2.1% for differentiating myofascial pain using mediotrusive and anterior attrition rates. The myofascial pain models exceeded the ≥75% sensitivity and ≥90% specificity thresholds recommended for diagnostic tests, and the asymptomatic control model approached these thresholds. Multifactorial models using attrition severity and rates may differentiate masticatory muscle pain patients from asymptomatic controls, and have some predictive value for differentiating intracapsular temporomandibular disorder patients as well.
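The sensitivity and specificity thresholds quoted above come from a standard confusion-matrix calculation on the model's classifications. A minimal sketch with invented predictions (1 = patient, 0 = asymptomatic control):

```python
def diagnostic_metrics(predicted, actual):
    """Sensitivity, specificity and accuracy from binary labels."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    tn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 0)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    sensitivity = tp / (tp + fn)   # true positives caught
    specificity = tn / (tn + fp)   # true negatives caught
    accuracy = (tp + tn) / len(actual)
    return sensitivity, specificity, accuracy

pred   = [1, 1, 1, 0, 0, 0, 0, 1]
actual = [1, 1, 0, 0, 0, 0, 1, 1]
sens, spec, acc = diagnostic_metrics(pred, actual)
```

A diagnostic test can have high accuracy yet fail the separate sensitivity and specificity thresholds, which is why the abstract reports all three.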
The Magnetic Field along the Axis of a Short, Thick Solenoid
ERIC Educational Resources Information Center
Hart, Francis Xavier
2018-01-01
We commonly ask students to compare the results of their experimental measurements with the predictions of a simple physical model that is well understood. However, in practice, physicists must compare their experimental measurements with the predictions of several models, none of which may work well over the entire range of measurements. The…
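One of the simple models students might compare against measurements is the on-axis field of a finite thin solenoid, B(z) = (μ0·n·I/2)(cosθ₁ + cosθ₂) written in half-length geometry; the short thick solenoid of the title requires a more elaborate expression with a radial integral. A stdlib-Python sketch of the thin-solenoid formula, checked against the ideal infinite-solenoid limit B = μ0·n·I (dimensions below are invented):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def b_axis_thin_solenoid(z, length, radius, n_per_m, current):
    """On-axis field of a finite thin solenoid centred at z = 0."""
    a = length / 2.0 - z   # distance to far end
    b = length / 2.0 + z   # distance to near end
    return 0.5 * MU0 * n_per_m * current * (
        a / math.sqrt(a * a + radius * radius)
        + b / math.sqrt(b * b + radius * radius))

# A long, narrow solenoid should recover B = mu0*n*I at its centre,
# and roughly half that value at either end.
b_centre = b_axis_thin_solenoid(0.0, length=10.0, radius=0.01,
                                n_per_m=1000.0, current=1.0)
b_end = b_axis_thin_solenoid(5.0, length=10.0, radius=0.01,
                             n_per_m=1000.0, current=1.0)
ideal = MU0 * 1000.0 * 1.0
```

For a short, thick solenoid neither limit holds, which is exactly the regime where students must choose among competing models.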
NASA Astrophysics Data System (ADS)
Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje
2017-09-01
Despite an important role the aerosols play in all stages of the cloud lifecycle, their representation in numerical weather prediction models is often rather crude. This paper investigates the effects that explicit versus implicit inclusion of aerosols in a microphysics parameterization scheme of the Weather Research and Forecasting (WRF) - Advanced Research WRF (WRF-ARW) model has on cloud dynamics and microphysics. The testbed selected for this study is a severe mesoscale convective system with supercells that struck the west and central parts of Serbia in the afternoon of July 21, 2014. Numerical products of two model runs, one with aerosols explicitly included (WRF-AE) and another with aerosols implicitly assumed (WRF-AI), are compared against precipitation measurements from a surface network of rain gauges, as well as against radar and satellite observations. The WRF-AE model accurately captured the transport of dust from North Africa over the Mediterranean to the Balkan region. On smaller scales, both models displaced the locations of clouds situated above west and central Serbia towards the southeast and under-predicted the maximum values of composite radar reflectivity. Similar to satellite images, WRF-AE shows the mesoscale convective system as a merged cluster of cumulonimbus clouds. Both models over-predicted the precipitation amounts; WRF-AE over-predictions are particularly pronounced in the zones of light rain, while WRF-AI gave larger outliers. Unlike WRF-AI, the WRF-AE approach enables the modelling of the time evolution and influx of aerosols into the cloud, which could be of practical importance in weather forecasting and weather modification. Several likely causes for discrepancies between the models and observations are discussed and prospects for further research in this field are outlined.
Seo, Min Ho; Choa, Minhong; You, Je Sung; Lee, Hye Sun; Hong, Jung Hwa; Park, Yoo Seok; Chung, Sung Phil; Park, Incheol
2016-11-01
The objective of this study was to develop a new nomogram that can predict 28-day mortality in severe sepsis and/or septic shock patients using a combination of several biomarkers that are inexpensive and readily available in most emergency departments, with and without scoring systems. We enrolled 561 patients who were admitted to an emergency department (ED) and received early goal-directed therapy for severe sepsis or septic shock. We collected demographic data, initial vital signs, and laboratory data sampled at the time of ED admission. Patients were randomly assigned to a training set or validation set. For the training set, we generated models using independent variables associated with 28-day mortality by multivariate analysis, and developed a new nomogram for the prediction of 28-day mortality. Thereafter, the diagnostic accuracy of the nomogram was tested using the validation set. The prediction model that included albumin, base excess, and respiratory rate demonstrated the largest area under the receiver operating characteristic curve (AUC) value of 0.8173 [95% confidence interval (CI), 0.7605-0.8741]. The logistic analysis revealed that a conventional scoring system was not associated with 28-day mortality. In the validation set, the discrimination of a newly developed nomogram was also good, with an AUC value of 0.7537 (95% CI, 0.6563-0.8512). Our new nomogram is valuable in predicting the 28-day mortality of patients with severe sepsis and/or septic shock in the emergency department. Moreover, our readily available nomogram is superior to conventional scoring systems in predicting mortality.
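A nomogram is, at bottom, a graphical readout of a fitted logistic model. A minimal sketch of the final prediction step using the three variables named above; the coefficient values are invented placeholders, not those of the cited study:

```python
import math

def mortality_probability(albumin, base_excess, resp_rate, coef):
    """Predicted 28-day mortality from a logistic model.
    `coef` holds (intercept, b_albumin, b_base_excess, b_resp_rate);
    the values passed below are illustrative only."""
    b0, b1, b2, b3 = coef
    z = b0 + b1 * albumin + b2 * base_excess + b3 * resp_rate
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: lower albumin and base excess, and higher
# respiratory rate, push predicted mortality up.
coef = (2.0, -1.2, -0.10, 0.05)
p_low_risk = mortality_probability(4.5, 2.0, 16, coef)    # healthier profile
p_high_risk = mortality_probability(2.0, -8.0, 32, coef)  # sicker profile
```

A nomogram simply rescales each term b·x onto a common points axis so the summed score can be read off against a probability scale without computing the logistic function by hand.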
A generalized procedure for the prediction of multicomponent adsorption equilibria
Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas
2015-04-07
Prediction of multicomponent adsorption equilibria has been investigated for several decades. While there are theories available to predict the adsorption behavior of ideal mixtures, there are few purely predictive theories to account for nonidealities in real systems. Most models available for dealing with nonidealities contain interaction parameters that must be obtained through correlation with binary-mixture data. However, as the number of components in a system grows, the number of parameters needed to be obtained increases exponentially. Here, a generalized procedure is proposed, as an extension of the predictive real adsorbed solution theory, for determining the parameters of any activity model, for any number of components, without correlation. This procedure is then combined with the adsorbed solution theory to predict the adsorption behavior of mixtures. As this method can be applied to any isotherm model and any activity model, it is referred to as the generalized predictive adsorbed solution theory.
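For the ideal baseline that the generalized procedure extends, the adsorbed-phase composition follows from equating the reduced spreading pressures of the components (ideal adsorbed solution theory). A minimal binary sketch with Langmuir pure-component isotherms is shown below; the parameter values are invented for illustration and the nonideal activity-model machinery of the paper is omitted.

```python
import math

def psi(qm, K, P0):
    """Reduced spreading pressure of a Langmuir isotherm: integral of q(p)/p dp."""
    return qm * math.log(1.0 + K * P0)

def iast_binary_x1(p1, p2, qm1, K1, qm2, K2):
    """Adsorbed mole fraction x1 solving psi1(p1/x1) = psi2(p2/(1 - x1))."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):  # bisection; f is monotone decreasing in x1
        x1 = 0.5 * (lo + hi)
        f = psi(qm1, K1, p1 / x1) - psi(qm2, K2, p2 / (1.0 - x1))
        lo, hi = (x1, hi) if f > 0 else (lo, x1)
    return x1

# Equimolar gas phase; component 1 adsorbs more strongly (K1 > K2),
# so it should be enriched in the adsorbed phase (x1 > 0.5)
x1 = iast_binary_x1(p1=0.5, p2=0.5, qm1=1.0, K1=10.0, qm2=1.0, K2=1.0)
print(f"adsorbed mole fraction of component 1: {x1:.3f}")
```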
Walker, William C; Stromberg, Katharine A; Marwitz, Jennifer H; Sima, Adam P; Agyemang, Amma A; Graham, Kristin M; Harrison-Felix, Cynthia; Hoffman, Jeanne M; Brown, Allen W; Kreutzer, Jeffrey S; Merchant, Randall
2018-05-16
For patients surviving serious traumatic brain injury (TBI), families and other stakeholders often desire information on long-term functional prognosis, but accurate and easy-to-use clinical tools are lacking. We aimed to build utilitarian decision trees from commonly collected clinical variables to predict Glasgow Outcome Scale (GOS) functional levels at 1, 2, and 5 years after moderate-to-severe closed TBI. Flexible classification tree statistical modeling was used on prospectively collected data from the TBI-Model Systems (TBIMS) inception cohort study. Enrollments occurred at 17 designated, or previously designated, TBIMS inpatient rehabilitation facilities. Analysis included all participants with nonpenetrating TBI injured between January 1997 and January 2017. Sample sizes were 10,125 (year-1), 8,821 (year-2), and 6,165 (year-5) after cross-sectional exclusions (death, vegetative state, insufficient post-injury time, and unavailable outcome). In our final models, post-traumatic amnesia (PTA) duration consistently dominated branching hierarchy and was the lone injury characteristic significantly contributing to GOS predictability. Lower-order variables that added predictability were age, pre-morbid education, productivity, and occupational category. Generally, patient outcomes improved with shorter PTA, younger age, greater pre-morbid productivity, and higher pre-morbid vocational or educational achievement. Across all prognostic groups, the best and worst good recovery rates were 65.7% and 10.9%, respectively, and the best and worst severe disability rates were 3.9% and 64.1%. Predictability in test data sets ranged from C-statistic of 0.691 (year-1; confidence interval [CI], 0.675, 0.711) to 0.731 (year-2; CI, 0.724, 0.738). In conclusion, we developed a clinically useful tool to provide prognostic information on long-term functional outcomes for adult survivors of moderate and severe closed TBI. 
Predictive accuracy for GOS level was demonstrated in an independent test sample. Length of PTA, a clinical marker of injury severity, was by far the most critical outcome determinant.
Vizuete, William; Biton, Leiran; Jeffries, Harvey E; Couzo, Evan
2010-07-01
In 2007, the U.S. Environmental Protection Agency (EPA) released guidance on demonstrating attainment of the federal ozone (O3) standard. This guidance recommended a change in the use of air quality model (AQM) predictions from an absolute to a relative sense. This is accomplished by using the ratio, rather than the absolute difference, of AQM O3 predictions between a historical year and an attainment year. This ratio of O3 concentrations, labeled the relative response factor (RRF), is multiplied by an average of observed concentrations at every monitor. This analysis investigated whether the methodology used to calculate RRFs severs the source-receptor relationship for a given monitor. Model predictions were generated with a regulatory AQM system used to support the 2004 Houston-Galveston-Brazoria State Implementation Plan. Following the procedures in the EPA guidance, an attainment demonstration was completed using regulatory AQM predictions and measurements from the Houston ground-monitoring network. Results show that the model predictions used for the RRF calculation were often based on model conditions that were geographically remote from observations and counter to wind flow. Many of the monitors used the same model predictions for an RRF, even if that O3 plume did not impact the monitor. The RRF methodology thus severed the true source-receptor relationship for a monitor. This analysis also showed that model performance could influence RRF values, and values at monitoring sites appear to be sensitive to model bias. Results indicate an inverse linear correlation of RRFs with model bias at each monitor (R2 = 0.47), resulting in a change in future O3 design values of up to 5 parts per billion (ppb). These results suggest that the application of the RRF methodology in Houston, TX, should be changed from using all model predictions above 85 ppb to a method that removes any predictions that are not relevant to the observed source-receptor relationship.
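The RRF arithmetic itself is simple: the future-year design value is the observed base-year design value scaled by the ratio of modeled future-year to modeled base-year ozone near the monitor. A sketch with invented numbers (not values from the Houston study):

```python
def rrf(modeled_future_ppb, modeled_base_ppb):
    """Relative response factor: ratio of modeled future to modeled base O3."""
    return modeled_future_ppb / modeled_base_ppb

observed_design_value = 92.0          # ppb, base-year observed at the monitor
modeled_base = [95.0, 101.0, 98.0]    # top modeled days near the monitor
modeled_future = [88.0, 93.0, 90.0]   # same days, future emissions

r = rrf(sum(modeled_future) / 3, sum(modeled_base) / 3)
future_design_value = observed_design_value * r
print(f"RRF = {r:.3f}, future design value = {future_design_value:.1f} ppb")
```

The paper's critique is about which modeled grid cells and days enter `modeled_base` and `modeled_future`: if they are not the ones actually influencing the monitor, the ratio no longer reflects that monitor's source-receptor relationship.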
Predictive Surface Complexation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sverjensky, Dimitri A.
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, Timothy K.; Chrostowski, Jon D.
1991-01-01
Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were, for the most part, within the ± one-sigma intervals of predicted accuracy, demonstrating the validity of the methodology and computer code.
Multi-Model Ensemble Wake Vortex Prediction
NASA Technical Reports Server (NTRS)
Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.
2015-01-01
Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
A Clinical Prediction Algorithm to Stratify Pediatric Musculoskeletal Infection by Severity
Benvenuti, Michael A; An, Thomas J; Mignemi, Megan E; Martus, Jeffrey E; Mencio, Gregory A; Lovejoy, Stephen A; Thomsen, Isaac P; Schoenecker, Jonathan G; Williams, Derek J
2016-01-01
Objective There are currently no algorithms for early stratification of pediatric musculoskeletal infection (MSKI) severity that are applicable to all types of tissue involvement. In this study, the authors sought to develop a clinical prediction algorithm that accurately stratifies infection severity based on clinical and laboratory data at presentation to the emergency department. Methods An IRB-approved retrospective review was conducted to identify patients aged 0–18 who presented to the pediatric emergency department at a tertiary care children’s hospital with concern for acute MSKI over a five-year period (2008–2013). Qualifying records were reviewed to obtain clinical and laboratory data and to classify in-hospital outcomes using a three-tiered severity stratification system. Ordinal regression was used to estimate risk for each outcome. Candidate predictors included age, temperature, respiratory rate, heart rate, C-reactive protein, and peripheral white blood cell count. We fit fully specified (all predictors) and reduced models (retaining predictors with a p-value ≤ 0.2). Discriminatory power of the models was assessed using the concordance (c)-index. Results Of the 273 identified children, 191 (70%) met inclusion criteria. Median age was 5.8 years. Outcomes included 47 (25%) children with inflammation only, 41 (21%) with local infection, and 103 (54%) with disseminated infection. Both the full and reduced models demonstrated excellent discriminatory performance (full model c-index 0.83, 95% CI [0.79–0.88]; reduced model 0.83, 95% CI [0.78–0.87]). Model fit was also similar, indicating preference for the simpler reduced model. Variables in this model included C-reactive protein, pulse, temperature, and an interaction term for pulse and temperature. The odds of a more severe outcome increased by 30% for every 10-unit increase in C-reactive protein. 
Conclusions Clinical and laboratory data obtained in the emergency department may be used to accurately differentiate pediatric MSKI severity. The predictive algorithm in this study stratifies pediatric MSKI severity at presentation irrespective of tissue involvement and anatomic diagnosis. Prospective studies are needed to validate model performance and clinical utility. PMID:27682512
The Potential for Predicting Precipitation on Seasonal-to-Interannual Timescales
NASA Technical Reports Server (NTRS)
Koster, R. D.
1999-01-01
The ability to predict precipitation several months in advance would have a significant impact on water resource management. This talk provides an overview of a project aimed at developing this prediction capability. NASA's Seasonal-to-Interannual Prediction Project (NSIPP) will generate seasonal-to-interannual sea surface temperature predictions through detailed ocean circulation modeling and will then translate these SST forecasts into forecasts of continental precipitation through the application of an atmospheric general circulation model and a "SVAT"-type land surface model. As part of the process, ocean variables (e.g., height) and land variables (e.g., soil moisture) will be updated regularly via data assimilation. The overview will include a discussion of the variability inherent in such a modeling system and will provide some quantitative estimates of the absolute upper limits of seasonal-to-interannual precipitation predictability.
Estimating tree grades for Southern Appalachian natural forest stands
Jeffrey P. Prestemon
1998-01-01
Log prices can vary significantly by grade: grade 1 logs are often several times the price per unit of grade 3 logs. Because tree grading rules derive from log grading rules, a model that predicts tree grades based on tree and stand-level variables might be useful for predicting stand values. The model could then assist in the modeling of timber supply and in economic...
The effect of model resolution in predicting meteorological parameters used in fire danger rating.
Jeanne L. Hoadley; Ken Westrick; Sue A. Ferguson; Scott L. Goodrick; Larry Bradshaw; Paul Werth
2004-01-01
Previous studies of model performance at varying resolutions have focused on winter storms or isolated convective events. Little attention has been given to the static high pressure situations that may lead to severe wildfire outbreaks. This study focuses on such an event so as to evaluate the value of increased model resolution for prediction of fire danger. The...
Predicting county-level cancer incidence rates and counts in the United States
Yu, Binbing
2018-01-01
Many countries, including the United States, publish predicted numbers of cancer incidence and death in current and future years for the whole country. These predictions provide important information on the cancer burden for cancer control planners, policymakers, and the general public. Based on evidence from several empirical studies, the joinpoint (segmented-line linear regression) model has been adopted by the American Cancer Society to estimate the number of new cancer cases in the United States and in individual states since 2007. Recently, cancer incidence in smaller geographic regions such as counties and FIPS code regions has become of increasing interest to local policymakers. The natural extension is to directly apply the joinpoint model to county-level cancer incidence data. The direct application has several drawbacks, however, and its performance has not been evaluated. To address these concerns, we developed a spatial random-effects joinpoint model for county-level cancer incidence data. The proposed model was used to predict both cancer incidence rates and counts at the county level. The standard joinpoint model and the proposed method were compared through a validation study. The proposed method outperformed the standard joinpoint model for almost all cancer sites, especially for moderate or rare cancer sites and for counties with small population sizes. As an application, we predicted county-level prostate cancer incidence rates and counts for the year 2011 in Connecticut. PMID:23670947
Chen, Cong; Zhang, Guohui; Huang, Helai; Wang, Jiangfeng; Tarefder, Rafiqul A
2016-11-01
Rural non-interstate crashes cause a significant number of severe injuries and fatalities. Examination of such injury patterns and the associated contributing factors is of practical importance. Taking into account the ordinal nature of injury severity levels and the hierarchical structure of crash data, this study employs a hierarchical ordered logit model to examine the significant factors in predicting driver injury severities in rural non-interstate crashes based on two-year New Mexico crash records. Bayesian inference is utilized in the model estimation procedure, and the 95% Bayesian Credible Interval (BCI) is applied to test variable significance. An ordinary ordered logit model omitting the between-crash variance effect is evaluated as well for model performance comparison. Results indicate that the model employed in this study outperforms the ordinary ordered logit model in model fit and parameter estimation. Variables regarding crash features, environment conditions, and driver and vehicle characteristics are found to have significant influence on the prediction of driver injury severities in rural non-interstate crashes. Factors such as road segments far from intersections, wet road surfaces, collisions with animals, heavy vehicle drivers, male drivers, and driver seatbelt use tend to be associated with less severe driver injury outcomes than factors such as multiple-vehicle crashes, severe vehicle damage in a crash, motorcyclists, females, senior drivers, drivers with alcohol or drug impairment, and other major collision types. Research limitations regarding crash data and model assumptions are also discussed. Overall, this research provides reasonable results and insight for developing effective road safety measures for crash injury severity reduction and prevention. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pilinski, M.; Crowley, G.; Sutton, E.; Codrescu, M.
2016-09-01
Much as aircraft are affected by the prevailing winds and weather conditions in which they fly, satellites are affected by the variability in density and motion of the near-Earth space environment. Drastic changes in the neutral density of the thermosphere, caused by geomagnetic storms or other phenomena, result in perturbations of LEO satellite motions through drag on the satellite surfaces. This can lead to difficulties in locating important satellites, temporary loss of satellite tracking, and errors when predicting collisions in space. As the population of satellites in Earth orbit grows, higher space-weather prediction accuracy is required for critical missions, such as accurate catalog maintenance, collision avoidance for manned and unmanned space flight, reentry prediction, satellite lifetime prediction, definition of on-board fuel requirements, and satellite attitude dynamics. We describe ongoing work to build a comprehensive nowcast and forecast system for specifying the neutral atmospheric state related to orbital drag conditions. The system outputs include neutral density, winds, temperature, composition, and the satellite drag derived from these parameters. This modeling tool is based on several state-of-the-art coupled models of the thermosphere-ionosphere, as well as several empirical models running in real time, and uses assimilative techniques to produce a thermospheric nowcast. The software will also produce 72-hour predictions of the global thermosphere-ionosphere system, using the nowcast as the initial condition and using near-real-time and predicted space weather data and indices as inputs. In this paper, we review the driving requirements for our model, summarize the model design and assimilative architecture, and present preliminary validation results. Validation results are presented in the context of satellite orbit errors and compared with several leading atmospheric models. 
As part of the analysis, we compare the drag observed by a variety of satellites which were not used as part of the assimilation-dataset and whose perigee altitudes span a range from 200 km to 700 km.
A Quantitative Model of Expert Transcription Typing
1993-03-08
… side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled … phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span; in fact, it is even … typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire …
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by the hard division approach. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model through a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
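The Markov core of such a method can be sketched in a few lines: estimate a transition matrix from a discretized degradation-state sequence, then obtain the expected remaining life (expected steps until the absorbing failure state) by first-step analysis. The FCM state division, self-memory weighting, and multi-scale parts of the paper are omitted here, and the states and data are invented for illustration.

```python
import numpy as np

# Discretized degradation states over time (0 = healthy ... 3 = failed)
seq = [0, 0, 0, 1, 0, 1, 1, 1, 2, 1, 2, 2, 2, 3]
n_states = 4

# Count observed transitions and normalize rows
counts = np.zeros((n_states, n_states))
for a, b in zip(seq, seq[1:]):
    counts[a, b] += 1
row_sums = counts.sum(axis=1, keepdims=True)
row_sums[row_sums == 0] = 1.0        # failure state has no observed exits
P = counts / row_sums                # row-stochastic transition matrix

# Expected hitting time of the failure state from each transient state:
# solve (I - Q) t = 1, with Q the transient-to-transient block of P.
Q = P[:3, :3]
t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print("expected steps to failure from states 0..2:", t)
```

As expected, the estimated remaining life shrinks as the component moves to more degraded states.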
Learning Instance-Specific Predictive Models
Visweswaran, Shyam; Cooper, Gregory F.
2013-01-01
This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures, and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average than all of the comparison algorithms on every performance measure. PMID:25045325
Westphal, Michael F; Stewart, Joseph A E; Tennant, Erin N; Butterfield, H Scott; Sinervo, Barry
2016-01-01
Extreme weather events can provide unique opportunities for testing models that predict the effect of climate change. Droughts of increasing severity have been predicted under numerous models, thus contemporary droughts may allow us to test these models prior to the onset of the more extreme effects predicted with a changing climate. In the third year of an ongoing severe drought, surveys failed to detect neonate endangered blunt-nosed leopard lizards in a subset of previously surveyed populations where we expected to see them. By conducting surveys at a large number of sites across the range of the species over a short time span, we were able to establish a strong positive correlation between winter precipitation and the presence of neonate leopard lizards over geographic space. Our results are consistent with those of numerous longitudinal studies and are in accordance with predictive climate change models. We suggest that scientists can take immediate advantage of droughts while they are still in progress to test patterns of occurrence in other drought-sensitive species and thus provide for more robust models of climate change effects on biodiversity.
Scribner, Richard; Ackleh, Azmy S; Fitzpatrick, Ben G; Jacquez, Geoffrey; Thibodeaux, Jeremy J; Rommel, Robert; Simonsen, Neal
2009-09-01
The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by "wetness" and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately "dry" campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately "wet" campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. The model predicted the impact on drinking patterns of several simulated interventions to address heavy episodic drinking on various types of campuses.
Scribner, Richard; Ackleh, Azmy S.; Fitzpatrick, Ben G.; Jacquez, Geoffrey; Thibodeaux, Jeremy J.; Rommel, Robert; Simonsen, Neal
2009-01-01
Objective: The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. Method: A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. Results: First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by “wetness” and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately “dry” campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately “wet” campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). Conclusions: A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. 
The model predicted the impact on drinking patterns of several simulated interventions to address heavy episodic drinking on various types of campuses. PMID:19737506
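A stripped-down version of such a compartmental model can be simulated as a discrete-time flow of students among the five drinking styles. The transition rates below are invented purely for illustration; in the actual model the flows depend on individual factors, social interactions, and social norms rather than fixed constants.

```python
import numpy as np

labels = ["abstainer", "light", "moderate", "problem", "heavy_episodic"]
x = np.array([0.3, 0.3, 0.2, 0.1, 0.1])   # initial population shares

# Column-stochastic per-step transition matrix: T[i, j] is the
# probability of moving to style i given current style j (invented rates)
T = np.array([
    [0.90, 0.05, 0.02, 0.01, 0.01],
    [0.08, 0.85, 0.08, 0.04, 0.04],
    [0.02, 0.08, 0.80, 0.10, 0.05],
    [0.00, 0.01, 0.06, 0.75, 0.10],
    [0.00, 0.01, 0.04, 0.10, 0.80],
])

for _ in range(100):                      # iterate to a steady state
    x = T @ x
print(dict(zip(labels, np.round(x, 3))))
```

A simulated intervention would correspond to changing selected entries of `T` (e.g., raising the outflow from the heavy-episodic compartment) and comparing the resulting steady-state shares.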
Predictive modeling of nanomaterial exposure effects in biological systems
Liu, Xiong; Tang, Kaizhi; Harper, Stacey; Harper, Bryan; Steevens, Jeffery A; Xu, Roger
2013-01-01
Background Predictive modeling of the biological effects of nanomaterials is critical for industry and policymakers to assess the potential hazards resulting from the application of engineered nanomaterials. Methods We generated an experimental dataset on the toxic effects experienced by embryonic zebrafish due to exposure to nanomaterials. Several nanomaterials were studied, such as metal nanoparticles, dendrimer, metal oxide, and polymeric materials. The embryonic zebrafish metric (EZ Metric) was used as a screening-level measurement representative of adverse effects. Using the dataset, we developed a data mining approach to model the toxic endpoints and the overall biological impact of nanomaterials. Data mining techniques, such as numerical prediction, can assist analysts in developing risk assessment models for nanomaterials. Results We found several important attributes that contribute to the 24 hours post-fertilization (hpf) mortality, such as dosage concentration, shell composition, and surface charge. These findings concur with previous studies on nanomaterial toxicity using embryonic zebrafish. We conducted case studies on modeling the overall effect/impact of nanomaterials and the specific toxic endpoints such as mortality, delayed development, and morphological malformations. The results show that we can achieve high prediction accuracy for certain biological effects, such as 24 hpf mortality, 120 hpf mortality, and 120 hpf heart malformation. The results also show that the weighting scheme for individual biological effects has a significant influence on modeling the overall impact of nanomaterials. Sample prediction models can be found at http://neiminer.i-a-i.com/nei_models. Conclusion The EZ Metric-based data mining approach has been shown to have predictive power. The results provide valuable insights into the modeling and understanding of nanomaterial exposure effects. PMID:24098077
Predictors of health-related quality of life of European food-allergic patients.
Saleh-Langenberg, J; Goossens, N J; Flokstra-de Blok, B M J; Kollen, B J; van der Meulen, G N; Le, T M; Knulst, A C; Jedrzejczak-Czechowicz, M; Kowalski, M L; Rokicka, E; Starosta, P; de la Hoz Caballer, B; Vazquez-Cortés, S; Cerecedo, I; Barreales, L; Asero, R; Clausen, M; DunnGalvin, A; Hourihane, J O' B; Purohit, A; Papadopoulos, N G; Fernandéz-Rivas, M; Frewer, L; Burney, P; Duiverman, E J; Dubois, A E J
2015-06-01
Although food allergy has universally been found to impair health-related quality of life (HRQL), studies have found significant differences in HRQL between countries, even when corrected for differences in perceived disease severity. However, little is known about factors other than disease severity which may contribute to HRQL in food-allergic patients. Therefore, the aim of this study was to identify factors which may predict HRQL of food-allergic patients and also to investigate the specific impact on HRQL of having experienced anaphylaxis and of being prescribed an epinephrine auto-injector (EAI). A total of 648 European food-allergic patients (404 adults, 244 children) completed an age-specific questionnaire package including descriptive questions. Multivariable regression analyses were performed to develop models for predicting HRQL of these patients. For adults, the prediction model accounted for 62% of the variance in HRQL and included perceived disease severity, type of symptoms, having a fish or milk allergy, and gender. For children, the prediction model accounted for 28% of the variance in HRQL and included perceived disease severity, having a peanut or soy allergy, and country of origin. For both adults and children, neither experiencing anaphylaxis nor being prescribed an EAI contributed to impairment of HRQL. In this study, food allergy-related HRQL was predicted to a greater extent in adults than in children. Allergy to certain foods may cause greater HRQL impairment than others. Country of origin may affect HRQL, at least in children. Experiencing anaphylaxis or being prescribed an EAI has no impact on HRQL in either adults or children. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Sung, Sheng-Feng; Hsieh, Cheng-Yang; Kao Yang, Yea-Huei; Lin, Huey-Juan; Chen, Chih-Hung; Chen, Yu-Wei; Hu, Ya-Han
2015-11-01
Case-mix adjustment is difficult for stroke outcome studies using administrative data. However, relevant prescription, laboratory, procedure, and service claims might be surrogates for stroke severity. This study proposes a method for developing a stroke severity index (SSI) by using administrative data. We identified 3,577 patients with acute ischemic stroke from a hospital-based registry and analyzed claims data containing a large number of candidate features. Stroke severity was measured using the National Institutes of Health Stroke Scale (NIHSS). We used two data mining methods and conventional multiple linear regression (MLR) to develop prediction models, comparing model performance according to the Pearson correlation coefficient between the SSI and the NIHSS. We validated these models in four independent cohorts by using hospital-based registry data linked to a nationwide administrative database. We identified seven predictive features and developed three models. The k-nearest neighbor model (correlation coefficient, 0.743; 95% confidence interval: 0.737, 0.749) performed slightly better than the MLR model (0.742; 0.736, 0.747), followed by the regression tree model (0.737; 0.731, 0.742). In the validation cohorts, the correlation coefficients were between 0.677 and 0.725 for all three models. The claims-based SSI enables adjusting for disease severity in stroke studies using administrative data. Copyright © 2015 Elsevier Inc. All rights reserved.
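The Pearson correlation used above to compare each claims-based SSI against the NIHSS can be computed directly; a minimal pure-Python sketch, with made-up scores since the registry data are not public:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: true NIHSS scores and two models' claims-based SSI predictions.
nihss = [2, 5, 9, 14, 20, 25]
ssi_knn = [3, 4, 10, 13, 21, 24]   # k-nearest-neighbour model output (made up)
ssi_mlr = [1, 7, 8, 16, 18, 26]    # multiple linear regression output (made up)

print(round(pearson_r(nihss, ssi_knn), 3))
print(round(pearson_r(nihss, ssi_mlr), 3))
```

Whichever model's predictions correlate more strongly with the NIHSS would be preferred, mirroring the comparison reported above.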
A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.
Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling
2014-01-01
Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in energy systems, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It organically integrates the self-memory principle of dynamic systems with the grey NGM(1,1, k) model. The traditional grey model's sensitivity to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used for demonstration with the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance stems from the coupling model's ability to take full advantage of systematic multi-time historical data and to capture stochastic fluctuation tendencies. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application scope.
A 30-day-ahead forecast model for grass pollen in north London, United Kingdom.
Smith, Matt; Emberlin, Jean
2006-03-01
A 30-day-ahead forecast method has been developed for grass pollen in north London. The total period of the grass pollen season is covered by eight multiple regression models, each covering a 10-day period running consecutively from 21 May to 8 August. This means that three models were used for each 30-day forecast. The forecast models were produced using grass pollen and environmental data from 1961 to 1999 and tested on data from 2000 and 2002. Model accuracy was judged in two ways: the number of times the forecast model was able to successfully predict the severity (relative to the 1961-1999 dataset as a whole) of grass pollen counts in each of the eight forecast periods on a scale of 1 to 4; the number of times the forecast model was able to predict whether grass pollen counts were higher or lower than the mean. The models achieved 62.5% accuracy in both assessment years when predicting the relative severity of grass pollen counts on a scale of 1 to 4, which equates to six of the eight 10-day periods being forecast correctly. The models attained 87.5% and 100% accuracy in 2000 and 2002, respectively, when predicting whether grass pollen counts would be higher or lower than the mean. Attempting to predict pollen counts during distinct 10-day periods throughout the grass pollen season is a novel approach. The models also employed original methodology in the use of winter averages of the North Atlantic Oscillation to forecast 10-day means of allergenic pollen counts.
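The severity-scale accuracy check described above (classifying each 10-day forecast on a 1-to-4 scale and counting hits) can be sketched as follows; the quartile thresholds and pollen counts are hypothetical, not the 1961-1999 values:

```python
def severity_category(count, q1, q2, q3):
    """Map a 10-day mean pollen count to a 1-4 severity class using
    historical quartile boundaries (thresholds here are made up)."""
    if count < q1:
        return 1
    if count < q2:
        return 2
    if count < q3:
        return 3
    return 4

# Hypothetical quartiles of the historical record, and one season's
# forecast vs observed 10-day mean counts for the eight periods.
q1, q2, q3 = 20.0, 45.0, 80.0
forecast = [15, 50, 90, 30, 70, 10, 55, 85]
observed = [18, 40, 95, 33, 60, 25, 52, 88]

hits = sum(severity_category(f, q1, q2, q3) == severity_category(o, q1, q2, q3)
           for f, o in zip(forecast, observed))
print(f"{hits}/8 ten-day periods correct")  # 6/8 = 62.5%, as in the abstract's scale
```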
Probability-based collaborative filtering model for predicting gene-disease associations.
Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan
2017-12-28
Accurately predicting pathogenic human genes has been challenging in recent research. Considering the extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expense. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data on humans and on other, nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned model. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with those of four state-of-the-art approaches. The results show that PCFM performs better than the other advanced approaches. The PCFM can be leveraged for the prediction of disease genes, especially for new human genes or diseases with no known relationships.
Observational breakthroughs lead the way to improved hydrological predictions
NASA Astrophysics Data System (ADS)
Lettenmaier, Dennis P.
2017-04-01
New data sources are revolutionizing the hydrological sciences. The capabilities of hydrological models have advanced greatly over the last several decades, but until recently model capabilities have outstripped the spatial resolution and accuracy of model forcings (atmospheric variables at the land surface) and the hydrologic state variables (e.g., soil moisture; snow water equivalent) that the models predict. This has begun to change, as shown in two examples here: soil moisture and drought evolution over Africa as predicted by a hydrology model forced with satellite-derived precipitation, and observations of snow water equivalent at very high resolution over a river basin in California's Sierra Nevada.
Awad, Aya; Bader-El-Den, Mohamed; McNicholas, James; Briggs, Jim
2017-12-01
Mortality prediction of hospitalized patients is an important problem. Over the past few decades, several severity scoring systems and machine learning mortality prediction models have been developed for predicting hospital mortality. By contrast, early mortality prediction for intensive care unit patients remains an open challenge. Most research has focused on severity of illness scoring systems or data mining (DM) models designed for risk estimation at least 24 or 48h after ICU admission. This study highlights the main data challenges in early mortality prediction in ICU patients and introduces a new machine learning based framework for Early Mortality Prediction for Intensive Care Unit patients (EMPICU). The proposed method is evaluated on the Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database. Mortality prediction models are developed for patients aged 16 or above in the Medical ICU (MICU), Surgical ICU (SICU) or Cardiac Surgery Recovery Unit (CSRU). We employ the ensemble learning Random Forest (RF), the predictive Decision Trees (DT), the probabilistic Naive Bayes (NB) and the rule-based Projective Adaptive Resonance Theory (PART) models. The primary outcome was hospital mortality. The explanatory variables included demographic, physiological, vital sign and laboratory test variables. Performance measures were calculated using cross-validated area under the receiver operating characteristic curve (AUROC) to minimize bias. In total, 11,722 patients with single ICU stays were considered. The proposed EMPICU framework outperformed standard scoring systems (SOFA, SAPS-I, APACHE-II, NEWS and qSOFA) in terms of AUROC and time (i.e. at 6h compared to 48h or more after admission).
The results show that although many values are missing in the first few hours of ICU admission, there is enough signal to effectively predict mortality during the first 6h of admission. The proposed framework, in particular the variant using the ensemble learning approach, EMPICU Random Forest (EMPICU-RF), offers a base on which to construct an effective and novel mortality prediction model in the early hours of an ICU patient's admission, with an improved performance profile. Copyright © 2017 Elsevier B.V. All rights reserved.
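The AUROC used above as the performance measure reduces, for a single evaluation, to the rank-sum (Mann-Whitney) statistic: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch with made-up labels and risk scores:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation.
    labels: 1 = died in hospital, 0 = survived; scores: predicted mortality risk."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5        # ties count as half a win
    return wins / (len(pos) * len(neg))

# Hypothetical outcomes and risk scores from an early-hours model.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auroc(labels, scores))
```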
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By improving the prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive capability of turbulence models in real applications.
NASA Astrophysics Data System (ADS)
Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich
2017-10-01
New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics including (a) the track and intensity to investigate the storms' dynamics and (b) the Storm Severity Index to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead. At longer lead times, the number of members predicting the observed storms decreases and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics such as explosive intensification or small size. Longer datasets with more cases would be needed to further substantiate these points.
AAA gunner model based on observer theory. [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that its structure is simple; hence a computer simulation requires only a short execution time. A parameter identification program based on the least-squares curve-fitting method and the Gauss-Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
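As a rough illustration of the observer structure described above (not the paper's gunner model or its identified parameters), a discrete Luenberger observer for a simple position-velocity tracking plant can be simulated in a few lines: the observer predicts with the plant model and corrects with the innovation y - C x̂.

```python
# Hypothetical 2-state plant (position, velocity), time step dt, constant velocity.
# Only position is measured; the gains l1, l2 are an illustrative stable choice.
dt = 0.1
l1, l2 = 0.5, 1.0

p, v = 0.0, 1.0      # true state
ph, vh = 0.3, 0.0    # observer's (wrong) initial estimate

for _ in range(200):
    y = p                          # measurement of position
    innov = y - ph                 # innovation (measurement residual)
    # Luenberger update: model prediction plus gain-weighted correction.
    ph, vh = ph + dt * vh + l1 * innov, vh + l2 * innov
    p, v = p + dt * v, v           # true plant evolves

print(round(abs(p - ph), 6), round(abs(v - vh), 6))  # estimation error -> 0
```

The estimation-error dynamics are governed by the matrix [[1-l1, dt], [-l2, 1]], whose eigenvalues here lie inside the unit circle, so the estimate converges to the true state.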
A fuzzy mathematical model of West Java population with logistic growth model
NASA Astrophysics Data System (ADS)
Nurkholipah, N. S.; Amarti, Z.; Anggriani, N.; Supriatna, A. K.
2018-03-01
In this paper we develop a mathematical model of population growth in the West Java Province, Indonesia. The model takes the form of a logistic differential equation. We parameterize the model using several triples of data, and choose the best triple as the one with the smallest Mean Absolute Percentage Error (MAPE). The resulting model is able to reproduce the historical data with high accuracy and is also able to predict future population numbers. Predicting the future population is among the important inputs for planning good population management. Several experiments are performed to examine the effect of impreciseness in the data. This is done by considering a fuzzy initial value for the crisp model, assuming that the model propagates the fuzziness of the independent variable to the dependent variable. We assume here a triangular fuzzy number representing the impreciseness in the data. We found that the fuzziness may disappear in the long term. Other scenarios are also investigated, such as the effect of fuzzy parameters on the crisp initial value of the population. The solution of the model is obtained numerically using the fourth-order Runge-Kutta scheme.
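The fourth-order Runge-Kutta integration of the logistic equation mentioned above can be sketched as follows; the growth rate, carrying capacity and initial population are illustrative placeholders, not the fitted West Java values:

```python
def logistic_rhs(P, r, K):
    """dP/dt for logistic growth with intrinsic rate r and carrying capacity K."""
    return r * P * (1 - P / K)

def rk4_step(P, dt, r, K):
    """One fourth-order Runge-Kutta step for the logistic ODE."""
    k1 = logistic_rhs(P, r, K)
    k2 = logistic_rhs(P + 0.5 * dt * k1, r, K)
    k3 = logistic_rhs(P + 0.5 * dt * k2, r, K)
    k4 = logistic_rhs(P + dt * k3, r, K)
    return P + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Hypothetical parameters: e.g. population in millions, yearly step.
r, K, P, dt = 0.03, 60.0, 40.0, 1.0
for _ in range(300):
    P = rk4_step(P, dt, r, K)
print(round(P, 3))   # the trajectory approaches the carrying capacity K
```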
Measured rates of in vitro hepatic clearance by fish have been used by several authors as inputs to predictive models for chemical accumulation. The resulting predictions are consistent with observed trends in bioaccumulation and provide a proof of principle for the approach. ...
Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S
2016-01-01
Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. 
The RF algorithm could allow early assessment of the risk of SNB, facilitating sound disease management decisions prior to planting of wheat.
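The Kappa statistic used above to assess agreement between observed and predicted disease classes corrects raw accuracy for chance agreement; a minimal sketch with toy class labels:

```python
def cohens_kappa(truth, pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(truth)
    classes = set(truth) | set(pred)
    po = sum(t == p for t, p in zip(truth, pred)) / n          # observed agreement
    pe = sum((truth.count(c) / n) * (pred.count(c) / n)        # chance agreement
             for c in classes)
    return (po - pe) / (1 - pe)

# Toy example: 1 = high SNB severity, 0 = low severity (labels are made up).
truth = [1, 1, 0, 0, 1, 0]
pred = [1, 1, 0, 0, 0, 0]
print(round(cohens_kappa(truth, pred), 3))
```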
Prostate Cancer Probability Prediction By Machine Learning Technique.
Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena
2017-11-26
The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. To improve the survival probability of prostate cancer patients, it is essential to build suitable prediction models. With a relevant prediction of prostate cancer, it becomes easier to design suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models; therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed. It was concluded that machine learning techniques could be used for the relevant prediction of prostate cancer.
A thermal sensation prediction tool for use by the profession
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fountain, M.E.; Huizenga, C.
1997-12-31
As part of a recent ASHRAE research project (781-RP), a thermal sensation prediction tool has been developed. This paper introduces the tool, describes the component thermal sensation models, and presents examples of how the tool can be used in practice. Since the main end product of the HVAC industry is the comfort of occupants indoors, tools for predicting occupant thermal response can be an important asset to designers of indoor climate control systems. The software tool presented in this paper incorporates several existing models for predicting occupant comfort.
USDA-ARS?s Scientific Manuscript database
The Great Plains experienced an influx of settlers in the late 1850s to 1900. Periodic drought was hard on both settlers and the soil and caused severe wind erosion. The period known as the Dirty Thirties, 1931 to 1939, produced many severe windstorms, and the resulting dusty sky over Washington, D....
Mining key elements for severe convection prediction based on CNN
NASA Astrophysics Data System (ADS)
Liu, Ming; Pan, Ning; Zhang, Changan; Sha, Hongzhou; Zhang, Bolei; Liu, Liang; Zhang, Meng
2017-04-01
Severe convective weather is a type of weather disaster accompanied by heavy rainfall, gusty wind, hail, etc. Along with recent developments in remote sensing and numerical modeling, high-volume, long-term observational and modeling data have accumulated that capture massive severe convective events over particular areas and time periods. With those high-volume and high-variety weather data, most existing studies and methods address the dynamical laws, cause analysis, potential rules, and prediction enhancement by utilizing the governing equations of fluid dynamics and thermodynamics. In this study, a key-element mining method is proposed for severe convection prediction based on a convolutional neural network (CNN). It aims to identify the key areas and key elements from huge amounts of historical weather data, including conventional measurements, weather radar and satellite observations, as well as numerical modeling and/or reanalysis data. In this manner, the machine-learning-based method can support human forecasters in their decision-making on operational forecasts of severe convective weather by extracting key information from real-time and historical weather big data. The method first utilizes computer vision techniques to complete the preprocessing of the meteorological variables. Then, it uses information such as radar maps and expert knowledge to annotate all images automatically. Finally, using the CNN model, it can analyze and evaluate each weather element (e.g., particular variables, patterns, features, etc.), identify the key areas of those critical weather elements, and help forecasters quickly screen out the key elements from huge amounts of observation data under current weather conditions.
Based on rich weather measurement and model data (up to 10 years) over Fujian province in China, where severe convective weather is very active during the summer months, experimental tests were conducted with the new machine-learning method via CNN models. Based on the analysis of the experimental results and case studies, the proposed method has the following benefits for severe convection prediction: (1) it helps forecasters narrow down the scope of analysis and saves lead time for high-impact severe convection; (2) it processes huge amounts of weather data with machine learning methods rather than relying only on traditional theory and knowledge, providing a new way to explore and quantify severe convective weather; (3) it provides machine-learning-based end-to-end analysis and processing with considerable scalability in data volume, accomplishing the analysis without human intervention.
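At the core of the CNN approach is the convolution of gridded weather fields with kernels; a minimal pure-Python sketch of that operation on a toy reflectivity grid (the kernel here is a fixed 3x3 smoother, not a learned filter):

```python
def conv2d(field, kernel):
    """Valid-mode 2-D cross-correlation of a gridded field with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(field) - kh + 1):
        row = []
        for j in range(len(field[0]) - kw + 1):
            row.append(sum(field[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# Toy "reflectivity" grid with a strong convective cell in the centre; the
# averaging kernel highlights where the high-intensity key area sits.
refl = [[0, 0, 0, 0],
        [0, 9, 9, 0],
        [0, 9, 9, 0],
        [0, 0, 0, 0]]
kernel = [[1 / 9] * 3 for _ in range(3)]
print(conv2d(refl, kernel))
```

In a real CNN the kernel weights are learned from the annotated images, and many such filtered maps are stacked and pooled before classification.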
Predicting responses from Rasch measures.
Linacre, John M
2010-01-01
There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are estimated from the current data but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models based on Singular Value Decomposition (SVD) and Boltzmann Machines are proposed.
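The SVD-based approach mentioned above amounts to predicting from a low-rank reconstruction of the person-by-item score matrix; a minimal numpy sketch with a made-up rating matrix (not Netflix data):

```python
import numpy as np

# Hypothetical person-by-item score matrix (e.g. ratings in a Netflix-style setting).
R = np.array([[5., 4., 1.],
              [4., 5., 1.],
              [1., 1., 5.]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 1                                  # keep only the strongest latent factor
R_hat = U[:, :k] * s[:k] @ Vt[:k, :]   # rank-k reconstruction used for prediction

print(np.round(R_hat, 1))
```

Keeping few factors smooths away idiosyncratic noise in the observed scores, which is exactly the overfitting trade-off the abstract discusses: a tighter fit to current data (larger k) can predict future responses worse.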
NREL's Battery Life Predictive Model Helps Companies Take Charge
Lithium-ion (Li-ion) batteries are complex electrochemical systems, and there are typically several different degradation modes to monitor. An example of a stationary, grid-connected battery is the NREL project from Erigo/EaglePicher.
Ashrafi, Mahnaz; Bahmanabadi, Akram; Akhond, Mohammad Reza; Arabipoor, Arezoo
2015-11-01
To evaluate the demographic, medical history and clinical cycle characteristics of infertile non-polycystic ovary syndrome (NPCOS) women, with the purpose of investigating their associations with the prevalence of moderate-to-severe OHSS. In this retrospective study, among 7073 in vitro fertilization and/or intracytoplasmic sperm injection (IVF/ICSI) cycles, 86 NPCOS patients who developed moderate-to-severe OHSS while being treated with IVF/ICSI cycles were analyzed during the period of January 2008 to December 2010 at Royan Institute. To review the OHSS risk factors, 172 NPCOS patients who did not develop OHSS, treated over the same period, were selected randomly by computer as the control group. We used multiple logistic regression in a backward manner to build a prediction model. The regression analysis revealed that age [odds ratio (OR) 0.9, confidence interval (CI) 0.81-0.99], antral follicle count (OR 4.3, CI 2.7-6.9), infertility cause (tubal factor, OR 11.5, CI 1.1-51.3), hypothyroidism (OR 3.8, CI 1.5-9.4) and a positive history of ovarian surgery (OR 0.2, CI 0.05-0.9) were the most important predictors of OHSS. The regression model had an area under the curve of 0.94, indicating discriminative performance equal to that of two strong predictive variables, namely the number of follicles and the serum estradiol level on the day of human chorionic gonadotropin administration. The predictive regression model based on the primary characteristics of NPCOS patients had specificity equal to that of the two strong predictive variables mentioned above. Therefore, it may be beneficial to apply this model before the beginning of the ovarian stimulation protocol.
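The reported odds ratios correspond to logistic-regression coefficients via b = ln(OR); a sketch of how such a model yields a predicted OHSS probability. The intercept and covariate values below are purely hypothetical, since the abstract reports odds ratios but no intercept:

```python
import math

def predicted_probability(intercept, coef_value_pairs):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = intercept + sum(b * x for b, x in coef_value_pairs)
    return 1.0 / (1.0 + math.exp(-z))

# Coefficients are the natural logs of the reported odds ratios.
b_age = math.log(0.9)    # OR 0.9 per year of age
b_afc = math.log(4.3)    # OR 4.3 per unit of antral follicle count measure
b0 = -6.0                # assumed intercept, for illustration only

p = predicted_probability(b0, [(b_age, 30), (b_afc, 4)])
print(round(p, 3))
```

A higher antral follicle count pushes the predicted probability up (OR > 1), while older age pushes it down (OR < 1), matching the directions reported above.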
Cestari, Andrea
2013-01-01
Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. Knowledge of the likely future direction of this so-called 'machine intelligence' is still quite young, as is understanding of how current, relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.
Triebel, Kristen L; Novack, Thomas A; Kennedy, Richard; Martin, Roy C; Dreer, Laura E; Raman, Rema; Marson, Daniel C
2016-01-01
To identify neurocognitive predictors of medical decision-making capacity (MDC) in participants with mild and moderate/severe traumatic brain injury (TBI). Academic medical center. Sixty adult controls and 104 adults with TBI (49 mild, 55 moderate/severe) evaluated within 6 weeks of injury. Prospective cross-sectional study. Participants completed the Capacity to Consent to Treatment Instrument to assess MDC and a neuropsychological test battery. We used factor analysis to reduce the battery test measures into 4 cognitive composite scores (verbal memory, verbal fluency, academic skills, and processing speed/executive function). We identified cognitive predictors of the 3 most clinically relevant Capacity to Consent to Treatment Instrument consent standards (appreciation, reasoning, and understanding). In controls, academic skills (word reading, arithmetic) and verbal memory predicted understanding; verbal fluency predicted reasoning; and no predictors emerged for appreciation. In the mild TBI group, verbal memory predicted understanding and reasoning, whereas academic skills predicted appreciation. In the moderate/severe TBI group, verbal memory and academic skills predicted understanding; academic skills predicted reasoning; and academic skills and verbal fluency predicted appreciation. Verbal memory was a predictor of MDC in controls and persons with mild and moderate/severe TBI. In clinical practice, impaired verbal memory could serve as a "red flag" for diminished consent capacity in persons with recent TBI.
NASA Technical Reports Server (NTRS)
Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily;
2013-01-01
The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the need for the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has been shown to produce better prediction quality (on average) than any single model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how this multi-model ensemble approach yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011) a collaborative and coordinated implementation strategy for a NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and the complementary skill associated with individual models.
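The basic operation behind a multi-model ensemble forecast is an equal-weight average across models at each lead time; a minimal sketch with hypothetical model names and anomaly values (not NMME output):

```python
# Equal-weight multi-model ensemble mean, the elementary combination step in an
# NMME-style system. Model names and anomaly values are made up for illustration.
forecasts = {
    "model_A": [0.4, 0.6, 0.1],   # e.g. anomaly forecasts for 3 lead months
    "model_B": [0.2, 0.5, 0.3],
    "model_C": [0.6, 0.4, 0.2],
}

n_lead = len(next(iter(forecasts.values())))
ensemble_mean = [sum(f[m] for f in forecasts.values()) / len(forecasts)
                 for m in range(n_lead)]
print([round(x, 3) for x in ensemble_mean])
```

Averaging across independently formulated models cancels part of each model's formulation error, which is why the multi-model mean tends to outperform any single model on average; the spread across models additionally quantifies forecast uncertainty.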
Development of an accident duration prediction model on the Korean Freeway Systems.
Chung, Younshik
2010-01-01
Since duration prediction is one of the most important steps in an accident management process, several approaches have been developed for modeling accident duration. This paper presents a model for accident duration prediction based on a large, accurately recorded accident dataset from the Korean Freeway Systems. To develop the duration prediction model, this study utilizes the log-logistic accelerated failure time (AFT) metric model and a 2-year accident duration dataset from 2006 to 2007. Specifically, the 2006 dataset was utilized to develop the prediction model, and the 2007 dataset was then employed to test the temporal transferability of the 2006 model. Although the duration prediction model has limitations, such as large prediction errors due to individual differences among accident treatment teams in clearing similar accidents, the results from the 2006 model yielded reasonable predictions based on the mean absolute percentage error (MAPE) scale. Additionally, the results of the statistical test for temporal transferability indicated that the estimated parameters of the duration prediction model are stable over time. This temporal stability suggests that the model may have potential to be used as a basis for making rational diversion and dispatching decisions in the event of an accident. Ultimately, such information will help mitigate traffic congestion due to accidents.
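In a log-logistic AFT model, covariates scale the median duration multiplicatively, and the survival function gives the probability that an incident lasts beyond any time t; a sketch with made-up coefficients (not the paper's fitted Korean Freeway values):

```python
import math

def loglogistic_survival(t, alpha, beta):
    """S(t) = 1 / (1 + (t/alpha)**beta): probability the accident lasts beyond t.
    alpha is the median duration; beta controls the spread (beta = 1/scale)."""
    return 1.0 / (1.0 + (t / alpha) ** beta)

# Hypothetical AFT fit: the linear predictor shifts log-duration by covariates;
# alpha = exp(b0 + b1*lanes_blocked + b2*night). All coefficients are made up.
b0, b1, b2 = 3.2, 0.25, 0.15
alpha = math.exp(b0 + b1 * 2 + b2 * 1)   # e.g. 2 lanes blocked, night-time
beta = 2.0

print(round(alpha, 1))                                      # predicted median (minutes)
print(loglogistic_survival(alpha, alpha, beta))             # = 0.5 at the median
```

Because the covariates act on the log of duration, each extra blocked lane multiplies the predicted median duration by exp(b1), which is the "accelerated failure time" interpretation.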
Severe rainfall prediction systems for civil protection purposes
NASA Astrophysics Data System (ADS)
Comellas, A.; Llasat, M. C.; Molini, L.; Parodi, A.; Siccardi, F.
2010-09-01
One of the most common natural hazards threatening Mediterranean regions is the occurrence of severe weather structures able to produce heavy rainfall. Floods have killed about 1000 people across Europe in the last 10 years. With the aim of mitigating this kind of risk, quantitative precipitation forecasts (QPF) and rain probability forecasts are two tools now available to national meteorological services and institutions responsible for weather forecasting to predict rainfall, using either a deterministic or a probabilistic approach. This study provides insight into the different approaches used by the Italian (DPC) and Catalonian (SMC) Civil Protection services and the results they have achieved with their respective systems for issuing early warnings. For the former, the analysis considers the period 2006-2009, in which the predictive ability of the forecasting system, based on the numerical weather prediction model COSMO-I7, was compared with ground-based observations (from more than 2000 raingauge stations; Molini et al., 2009). The Italian system is mainly focused on regional-scale warnings, providing forecasts for periods never shorter than 18 hours and very often with a 36-hour maximum duration. The information contained in severe weather bulletins is not quantitative and usually refers to specific meteorological phenomena (thunderstorms, wind gales, etc.). Updates and refinements have a usual refresh time of 24 hours. SMC operates within the Catalonian boundaries and uses a warning system that mixes quantitative and probabilistic information. For each administrative region ("comarca") into which Catalonia is divided, forecasters give an approximate value of the average predicted rainfall and the probability of exceeding that threshold. Warnings are usually re-issued every 6 hours, and their duration depends on the predicted time extent of the storm.
To provide a comprehensive QPF verification, the rainfall predicted by Mesoscale Model 5 (MM5), the SMC operational forecast model, is compared with the local rain gauge network for the year 2008 (Comellas et al., 2010). This study presents the benefits and drawbacks of both the Italian and Catalonian systems. Moreover, particular attention is paid to the link between each system's predictive ability and the predicted severe weather type as a function of its space-time development.
Varma, Manthena V S; Lai, Yurong; Kimoto, Emi; Goosen, Theunis C; El-Kattan, Ayman F; Kumar, Vikas
2013-04-01
Quantitative prediction of complex drug-drug interactions (DDIs) is challenging. Repaglinide is mainly metabolized by cytochrome-P-450 (CYP)2C8 and CYP3A4, and is also a substrate of organic anion transporting polypeptide (OATP)1B1. The purpose of this study was to develop a physiologically based pharmacokinetic (PBPK) model to predict the pharmacokinetics and DDIs of repaglinide. In vitro hepatic transport of repaglinide, gemfibrozil and gemfibrozil 1-O-β-glucuronide was characterized using sandwich-culture human hepatocytes. A PBPK model, implemented in Simcyp (Sheffield, UK), was developed utilizing in vitro transport and metabolic clearance data. In vitro studies suggested significant active hepatic uptake of repaglinide. The mechanistic model adequately described repaglinide pharmacokinetics, and successfully predicted DDIs with several OATP1B1 and CYP3A4 inhibitors (<10% error). Furthermore, the repaglinide-gemfibrozil interaction at therapeutic dose was closely predicted using an in vitro fraction metabolized for CYP2C8 (0.71), when primarily considering reversible inhibition of OATP1B1 and mechanism-based inactivation of CYP2C8 by gemfibrozil and gemfibrozil 1-O-β-glucuronide. This study demonstrated that hepatic uptake is rate-determining in the systemic clearance of repaglinide. The model quantitatively predicted several repaglinide DDIs, including the complex interactions with gemfibrozil. Both OATP1B1 and CYP2C8 inhibition contribute significantly to the repaglinide-gemfibrozil interaction, and need to be considered for quantitative rationalization of DDIs with either drug.
Rational selection of training and test sets for the development of validated QSAR models
NASA Astrophysics Data System (ADS)
Golbraikh, Alexander; Shen, Min; Xiao, Zhiyan; Xiao, Yun-De; Lee, Kuo-Hsiung; Tropsha, Alexander
2003-02-01
Quantitative Structure-Activity Relationship (QSAR) models are used increasingly to screen chemical databases and/or virtual chemical libraries for potentially bioactive molecules. These developments emphasize the importance of rigorous model validation to ensure that the models have acceptable predictive power. Using the k nearest neighbors (kNN) variable selection QSAR method for the analysis of several datasets, we have demonstrated recently that the widely accepted leave-one-out (LOO) cross-validated R2 (q2) is an inadequate characteristic to assess the predictive ability of the models [Golbraikh, A., Tropsha, A. Beware of q2! J. Mol. Graphics Mod. 20, 269-276, (2002)]. Herein, we provide additional evidence that there exists no correlation between the values of q2 for the training set and the accuracy of prediction (R2) for the test set, and argue that this observation is a general property of any QSAR model developed with LOO cross-validation. We suggest that external validation using rationally selected training and test sets provides a means to establish a reliable QSAR model. We propose several approaches to the division of experimental datasets into training and test sets and apply them in QSAR studies of 48 functionalized amino acid anticonvulsants and a series of 157 epipodophyllotoxin derivatives with antitumor activity. We formulate a set of general criteria for the evaluation of the predictive power of QSAR models.
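For readers unfamiliar with the q2 statistic the authors caution against, a minimal sketch follows (assumed, not from the paper): q2 = 1 - PRESS/SS, where PRESS sums the squared errors of predictions made with each observation held out in turn; a simple univariate least-squares model and toy data stand in for a real QSAR model.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return a, my - a * mx

def q2_loo(xs, ys):
    # Leave-one-out cross-validated q2 = 1 - PRESS / SS.
    press = 0.0
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i+1:], ys[:i] + ys[i+1:])
        press += (ys[i] - (a * xs[i] + b)) ** 2
    my = sum(ys) / len(ys)
    ss = sum((y - my) ** 2 for y in ys)
    return 1.0 - press / ss

# Toy near-linear "activity" data: q2 is high here, but the paper's
# point is that a high training-set q2 need not imply good external R2.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.8, 5.1]
print(round(q2_loo(xs, ys), 3))
```

The authors' argument is precisely that this training-set statistic can be high while prediction on a rationally selected external test set remains poor.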
Evans, Sarah E.; Steel, Anne; DiLillo, David
2013-01-01
Objectives The current study investigates the moderating effect of perceived social support on associations between child maltreatment severity and adult trauma symptoms. We extend the existing literature by examining the roles of severity of multiple maltreatment types (i.e., sexual, physical, and emotional abuse; physical and emotional neglect) and gender in this process. Methods The sample included 372 newlywed individuals recruited from marriage license records. Participants completed a number of self-report questionnaires measuring the nature and severity of child maltreatment history, perceived social support from friends and family, and trauma-related symptoms. These questionnaires were part of a larger study investigating marital and intrapersonal functioning. We conducted separate, two-step hierarchical multiple regression models for perceived social support from family and perceived social support from friends. In each of these models, total trauma symptomatology was predicted from each child maltreatment severity variable, perceived social support, and the product of the two variables. To examine the role of gender, we conducted separate analyses for women and men. Results As hypothesized, increased severity of several maltreatment types (sexual abuse, emotional abuse, emotional neglect, and physical neglect) predicted greater trauma symptoms for both women and men, and increased physical abuse severity predicted greater trauma symptoms for women. Perceived social support from both family and friends predicted lower trauma symptoms across all levels of maltreatment for men. For women, greater perceived social support from friends, but not from family, predicted decreased trauma symptoms.
Finally, among women, perceived social support from family interacted with child maltreatment such that, as the severity of maltreatment (physical and emotional abuse, emotional neglect) increased, the buffering effect of perceived social support from family on trauma symptoms diminished. Conclusions The results of the current study shed new light on the potential for social support to shield individuals against long-term trauma symptoms, and suggest the importance of strengthening perceptions of available social support when working with adult survivors of child maltreatment. PMID:23623620
Candela, Lori; Gutierrez, Antonio P; Keating, Sarah
2015-04-01
To investigate the relations among several factors regarding the academic context within a nationally representative sample of U.S. nursing faculty. Correlational design using structural equation modeling to explore the predictive nature of several factors related to the academic organization and the work life of nursing faculty. A survey was used to evaluate several aspects of the work life of U.S. nursing faculty members. Nursing faculty members in academic organizations across the U.S. serving at either CCNE- or NLNAC-accredited institutions of higher education. Standard confirmatory factor analysis was used to assess the validity of a proposed measurement model, and structural equation modeling was used to evaluate the validity of a structural/latent variable model. Several direct and indirect effects were observed among the factors under investigation. Of special importance, perceptions of nurse administration's support and perceived teaching expertise positively predicted U.S. nursing faculty members' intent to stay in the academic organization. Understanding the way that nursing faculty members' perceptions of the various factors common to the academic context interact with intent to stay in the academic organization is essential for faculty and nursing administrators. This information can assist administrators in obtaining more resources for faculty development; in lobbying for additional faculty to meet the teaching, research, and service missions of the organization; and in personalizing relationships with individual faculty members to understand their needs and acknowledge their efforts. Published by Elsevier Ltd.
2008-02-01
clinician to distinguish between the effects of treatment and the effects of disease. Several different prediction models for multiple organ failure...treatment protocols and allow a clinician to distinguish the effect of treatment from the effect of disease. In this study, our model predicted in...TNF produces a decrease in protein C activation by down-regulating the expression of endothelial cell protein C receptor and thrombomodulin, both of
Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement that indicates live green vegetation in a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions.
NASA Technical Reports Server (NTRS)
Perry, Bruce; Anderson, Molly
2015-01-01
The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.
Glassman, Patrick M; Chen, Yang; Balthasar, Joseph P
2015-10-01
Preclinical assessment of monoclonal antibody (mAb) disposition during drug development often includes investigations in non-human primate models. In many cases, mAbs exhibit non-linear disposition that relates to mAb-target binding [i.e., target-mediated disposition (TMD)]. The goal of this work was to develop a physiologically-based pharmacokinetic (PBPK) model to predict non-linear mAb disposition in plasma and in tissues in monkeys. Physiological parameters for monkeys were collected from several sources, and plasma data for several mAbs associated with linear pharmacokinetics were digitized from prior literature reports. The digitized data displayed great variability; therefore, parameters describing inter-antibody variability in the rates of pinocytosis and convection were estimated. For prediction of the disposition of individual antibodies, we incorporated tissue concentrations of target proteins, where concentrations were estimated based on categorical immunohistochemistry scores, with assumed localization of target within the interstitial space of each organ. Kinetics of target-mAb binding and target turnover, in the presence or absence of mAb, were implemented. The model was then employed to predict concentration versus time data, via Monte Carlo simulation, for two mAbs that have been shown to exhibit TMD (2F8 and tocilizumab). Model predictions, performed a priori with no parameter fitting, were found to provide good prediction of dose-dependencies in plasma clearance, the areas under plasma concentration versus time curves, and the time-course of plasma concentration data. This PBPK model may find utility in predicting plasma and tissue concentration versus time data and, potentially, the time-course of receptor occupancy (i.e., mAb-target binding) to support the design and interpretation of preclinical pharmacokinetic-pharmacodynamic investigations in non-human primates.
Kahmann, A; Anzanello, M J; Fogliatto, F S; Marcelo, M C A; Ferrão, M F; Ortiz, R S; Mariotti, K C
2018-04-15
Street cocaine is typically altered with several compounds that increase its harmful health-related side effects, most notably depression, convulsions, and severe damage to the cardiovascular system, lungs, and brain. Thus, determining the concentration of cocaine and adulterants in seized drug samples is important from both health and forensic perspectives. Although FTIR has been widely used to identify the fingerprint and concentration of chemical compounds, spectroscopy datasets usually comprise thousands of highly correlated wavenumbers which, when used as predictors in regression models, tend to undermine the predictive performance of multivariate techniques. In this paper, we propose an FTIR wavenumber selection method aimed at identifying FTIR spectral intervals that best predict the concentration of cocaine and adulterants (e.g. caffeine, phenacetin, levamisole, and lidocaine) in cocaine samples. To that end, the Mutual Information measure is integrated into a Quadratic Programming problem with the objective of minimizing the probability of retaining redundant wavenumbers, while maximizing the relationship between retained wavenumbers and compounds' concentrations. Optimization outputs guide the order of inclusion of wavenumbers in a predictive model, using a forward-based wavenumber selection method. After the inclusion of each wavenumber, parameters of three alternative regression models are estimated, and each model's prediction error is assessed through the Mean Average Error (MAE) measure; the recommended subset of retained wavenumbers is the one that minimizes the prediction error with maximum parsimony. Applying our propositions to a dataset of 115 cocaine samples, we obtained a best prediction model with an average MAE of 0.0502 while retaining only 2.29% of the original wavenumbers, increasing the predictive precision by 0.0359 when compared to a model using the complete set of wavenumbers as predictors. Copyright © 2018 Elsevier B.V.
All rights reserved.
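The forward-selection loop the abstract describes can be sketched as follows. This is a hedged toy illustration, not the authors' implementation: the Mutual Information/Quadratic Programming ranking step is replaced by plain greedy search, the "spectra" are a tiny invented matrix, and a single least-squares model stands in for the three regression techniques compared in the paper.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for A x = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_predict(X, y, cols):
    # Least squares with intercept on the selected columns (normal equations).
    rows = [[1.0] + [row[c] for c in cols] for row in X]
    m = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(m)]
    w = solve(A, b)
    return [sum(wi * ri for wi, ri in zip(w, r)) for r in rows]

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def forward_select(X, y, max_feats=2):
    # Add one predictor at a time; keep the subset minimizing MAE.
    selected, best_err = [], float("inf")
    while len(selected) < max_feats:
        err, c = min((mae(y, fit_predict(X, y, selected + [c])), c)
                     for c in range(len(X[0])) if c not in selected)
        if err >= best_err:
            break
        selected.append(c)
        best_err = err
    return selected, best_err

# Toy "spectra": the response depends on columns 0 and 2 only.
X = [[1, 5, 2], [2, 1, 1], [3, 4, 4], [4, 2, 3], [5, 3, 5]]
y = [1 + 2 * r[0] + 3 * r[2] for r in X]
cols, err = forward_select(X, y)
print(sorted(cols), round(err, 6))
```

As in the paper, parsimony falls out naturally: selection stops once adding another predictor no longer reduces the prediction error.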
Validation of the measure automobile emissions model : a statistical analysis
DOT National Transportation Integrated Search
2000-09-01
The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized e...
The transferability of safety-driven access management models for application to other sites.
DOT National Transportation Integrated Search
2001-01-01
Several research studies have produced mathematical models that predict the safety impacts of selected access management techniques. Since new models require substantial resources to construct, this study evaluated five existing models with regard to...
Empirical algorithms to predict aragonite saturation state
NASA Astrophysics Data System (ADS)
Turk, Daniela; Dowd, Michael
2017-04-01
Novel sensor packages deployed on autonomous platforms (profiling floats, gliders, moorings, SeaCycler) and biogeochemical models have the potential to increase the coverage in time and space of a key water chemistry variable, the aragonite saturation state (ΩAr), in particular in undersampled regions of the global ocean. However, these do not provide the set of inorganic carbon measurements commonly used to derive ΩAr. There is therefore a need to develop regional predictive models to determine ΩAr from measurements of commonly observed and/or non-carbonate oceanic variables. Here, we investigate the predictive skill of several commonly observed oceanographic variables (temperature, salinity, oxygen, nitrate, phosphate and silicate) in determining ΩAr using climatology and shipboard data. This allows us to assess the potential for autonomous sensors and biogeochemical models to monitor ΩAr regionally and globally. We apply the regression models to several time series datasets and discuss regional differences and their implications for global estimates of ΩAr.
Advanced Computational Modeling Approaches for Shock Response Prediction
NASA Technical Reports Server (NTRS)
Derkevorkian, Armen; Kolaini, Ali R.; Peterson, Lee
2015-01-01
Motivation: (1) The activation of pyroshock devices such as explosives, separation nuts, pin-pullers, etc. produces high-frequency transient structural response, typically from a few tens of Hz to several hundreds of kHz. (2) The lack of reliable analytical tools makes the prediction of appropriate design and qualification test levels a challenge. (3) In the past few decades, several attempts have been made to develop methodologies that predict the structural responses to shock environments. (4) Currently, there is no validated approach that is viable to predict shock environments over the full frequency range (i.e., 100 Hz to 10 kHz). Scope: (1) Model, analyze, and interpret space structural systems with complex interfaces and discontinuities, subjected to shock loads. (2) Assess the viability of a suite of numerical tools to simulate transient, non-linear solid mechanics and structural dynamics problems, such as shock wave propagation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, James V.; Wellman, Gerald William; Emery, John M.
2011-09-01
Fracture or tearing of ductile metals is a pervasive engineering concern, yet accurate prediction of the critical conditions of fracture remains elusive. Sandia National Laboratories has been developing and implementing several new modeling methodologies to address problems in fracture, including both new physical models and new numerical schemes. The present study provides a double-blind quantitative assessment of several computational capabilities including tearing parameters embedded in a conventional finite element code, localization elements, extended finite elements (XFEM), and peridynamics. For this assessment, each of four teams reported blind predictions for three challenge problems spanning crack initiation and crack propagation. After predictions had been reported, the predictions were compared to experimentally observed behavior. The metal alloys for these three problems were aluminum alloy 2024-T3 and precipitation hardened stainless steel PH13-8Mo H950. The predictive accuracies of the various methods are demonstrated, and the potential sources of error are discussed.
Predicting space telerobotic operator training performance from human spatial ability assessment
NASA Astrophysics Data System (ADS)
Liu, Andrew M.; Oman, Charles M.; Galvan, Raquel; Natapoff, Alan
2013-11-01
Our goal was to determine whether existing tests of spatial ability can predict an astronaut's qualification test performance after robotic training. Because training astronauts to be qualified robotics operators is so long and expensive, NASA is interested in tools that can predict robotics performance before training begins. Currently, the Astronaut Office does not have a validated tool to predict robotics ability as part of its astronaut selection or training process. Commonly used tests of human spatial ability may provide such a tool. We tested the spatial ability of 50 active astronauts who had completed at least one robotics training course, then used logistic regression models to analyze the correlation between spatial ability test scores and the astronauts' performance in their evaluation test at the end of the training course. The fit of the logistic function to our data is statistically significant for several spatial tests. However, the prediction performance of the logistic model depends on the criterion threshold assumed. To clarify the critical selection issues, we show how the probability of correct classification versus misclassification varies as a function of the mental rotation test criterion level. Since the costs of misclassification are low, the logistic models of spatial ability and robotic performance are only reliable enough to be used to customize regular and remedial training. We suggest several changes in tracking performance throughout robotics training that could improve the range and reliability of predictive models.
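The threshold trade-off described above can be sketched as follows. This is an illustrative toy, not the study's fitted model: the logistic coefficients, spatial-test scores, and pass/fail outcomes below are all invented.

```python
import math

def pass_probability(score, b0=-4.0, b1=0.1):
    # Hypothetical logistic model: P(pass) = 1 / (1 + exp(-(b0 + b1*score))).
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * score)))

def classify(scores, passed, threshold):
    # Fraction of trainees correctly classified at a given criterion threshold.
    hits = sum((pass_probability(s) >= threshold) == p
               for s, p in zip(scores, passed))
    return hits / len(scores)

scores = [20, 35, 45, 55, 70]
passed = [False, False, True, True, True]
print(classify(scores, passed, threshold=0.5))
```

Sweeping `threshold` over (0, 1) traces out exactly the correct-classification versus misclassification curve the authors examine for the mental rotation test criterion.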
Vulnerability of shallow groundwater and drinking-water wells to nitrate in the United States
Nolan, Bernard T.; Hitt, Kerie J.
2006-01-01
Two nonlinear models were developed at the national scale to (1) predict contamination of shallow ground water (typically < 5 m deep) by nitrate from nonpoint sources and (2) to predict ambient nitrate concentration in deeper supplies used for drinking. The new models have several advantages over previous national-scale approaches. First, they predict nitrate concentration (rather than probability of occurrence), which can be directly compared with water-quality criteria. Second, the models share a mechanistic structure that segregates nitrogen (N) sources and physical factors that enhance or restrict nitrate transport and accumulation in ground water. Finally, data were spatially averaged to minimize small-scale variability so that the large-scale influences of N loading, climate, and aquifer characteristics could more readily be identified. Results indicate that areas with high N application, high water input, well-drained soils, fractured rocks or those with high effective porosity, and lack of attenuation processes have the highest predicted nitrate concentration. The shallow groundwater model (mean square error or MSE = 2.96) yielded a coefficient of determination (R2) of 0.801, indicating that much of the variation in nitrate concentration is explained by the model. Moderate to severe nitrate contamination is predicted to occur in the High Plains, northern Midwest, and selected other areas. The drinking-water model performed comparably (MSE = 2.00, R2 = 0.767) and predicts that the number of users on private wells and residing in moderately contaminated areas (>5 to ≤10 mg/L nitrate) decreases by 12% when simulation depth increases from 10 to 50 m.
Ahmadi, Hamed; Rodehutscord, Markus
2017-01-01
In the nutrition literature, there are several reports on the use of artificial neural network (ANN) and multiple linear regression (MLR) approaches for predicting feed composition and nutritive value, while the use of the support vector machine (SVM) method as a new alternative to MLR and ANN models is still not fully investigated. MLR, ANN, and SVM models were developed to predict the metabolizable energy (ME) content of compound feeds for pigs, based on the German energy evaluation system, from analyzed contents of crude protein (CP), ether extract (EE), crude fiber (CF), and starch. A total of 290 datasets from standardized digestibility studies with compound feeds was provided by several institutions and published papers, and ME was calculated thereon. The accuracy and precision of the developed models were evaluated on their prediction values. The results revealed that the developed ANN [R2 = 0.95; root mean square error (RMSE) = 0.19 MJ/kg of dry matter] and SVM (R2 = 0.95; RMSE = 0.21 MJ/kg of dry matter) models produced better prediction values in estimating ME in compound feed than conventional MLR (R2 = 0.89; RMSE = 0.27 MJ/kg of dry matter); however, there were no obvious differences between the performance of the ANN and SVM models. Thus, the SVM model may also be considered a promising tool for modeling the relationship between chemical composition and ME of compound feeds for pigs. To provide readers and nutritionists with an easy and rapid tool, an Excel® calculator, namely SVM_ME_pig, was created to predict metabolizable energy values in compound feeds for pigs using the developed support vector machine model.
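The MLR baseline in this comparison amounts to a linear predictor over the four analyzed nutrients. A minimal sketch follows; the intercept and coefficients are invented placeholders, not the fitted values from the study or the German energy evaluation system.

```python
def predict_me(cp, ee, cf, starch,
               coef=(0.021, 0.035, -0.015, 0.016), intercept=5.0):
    # Illustrative linear (MLR-style) model: ME in MJ/kg DM from
    # nutrient contents in g/kg DM. Coefficients are hypothetical.
    return intercept + sum(c * x for c, x in zip(coef, (cp, ee, cf, starch)))

print(round(predict_me(cp=180, ee=50, cf=40, starch=400), 2))
```

The ANN and SVM models in the paper replace this linear form with nonlinear mappings of the same four inputs, which is where their RMSE advantage comes from.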
Predictive modeling of neuroanatomic structures for brain atrophy detection
NASA Astrophysics Data System (ADS)
Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming
2010-03-01
In this paper, we present an approach of predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling for atrophy detection is that brain atrophy is defined as significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and Regions of Interest (ROIs) on it are predicted from other reliable anatomies. The vertex pair-wise distance between the predicted vertex and the true one within an abnormal region is expected to be larger than that of a vertex in a normal brain region. The change of the white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated using both simulated atrophies and MRI images of Alzheimer's disease.
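The vertex pair-wise distance step can be sketched as follows (an assumed toy, not the authors' pipeline): compare each surface vertex with its predicted position and flag vertices whose displacement exceeds a threshold, with coordinates and the threshold invented for illustration.

```python
import math

def atrophic_vertices(predicted, observed, threshold=1.0):
    # Flag indices where the predicted-to-observed vertex displacement
    # (Euclidean distance in 3-D) exceeds the threshold.
    return [i for i, (p, q) in enumerate(zip(predicted, observed))
            if math.dist(p, q) > threshold]

pred = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
obs = [(0, 0, 0.1), (1, 0, 2.0), (2, 0, 0.2)]
print(atrophic_vertices(pred, obs))  # vertex 1 deviates by 2 units
```

In the paper, the magnitudes of these displacements then quantify atrophy severity, and the white/gray matter ratio disambiguates the displacement direction.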
NASA Astrophysics Data System (ADS)
Sanders, B. F.; Gallegos, H. A.; Schubert, J. E.
2011-12-01
The Baldwin Hills dam-break flood and associated structural damage is investigated in this study. The flood caused high-velocity flows exceeding 5 m/s, which destroyed 41 wood-framed residential structures, 16 of which were completely washed out. Damage is predicted by coupling a calibrated hydrodynamic flood model based on the shallow-water equations to structural damage models. The hydrodynamic and damage models are two-way coupled, so building failure is predicted upon exceedance of a hydraulic intensity parameter, which in turn triggers a localized reduction in flow resistance that affects flood intensity predictions. Several established damage models and damage correlations reported in the literature are tested to evaluate predictive skill for two damage states defined by destruction (Level 2) and washout (Level 3). Results show that high-velocity structural damage can be predicted with a remarkable level of skill using established damage models, but only with two-way coupling of the hydrodynamic and damage models. In contrast, when structural failure predictions have no influence on flow predictions, there is a significant reduction in predictive skill. Force-based damage models compare well with a subset of the damage models that were devised for similar types of structures. Implications for emergency planning and preparedness as well as monetary damage estimation are discussed.
Finch, Bryson E; Marzooghi, Solmaz; Di Toro, Dominic M; Stubblefield, William A
2017-08-01
Crude oils are composed of an assortment of hydrocarbons, some of which are polycyclic aromatic hydrocarbons (PAHs). Polycyclic aromatic hydrocarbons are of particular interest due to their narcotic and potential phototoxic effects. Several studies have examined the phototoxicity of individual PAHs and fresh and weathered crude oils, and several models have been developed to predict PAH toxicity. Fingerprint analyses of oils have shown that PAHs in crude oils are predominantly alkylated. However, current models for estimating PAH phototoxicity assume toxic equivalence between unsubstituted (i.e., parent) and alkyl-substituted compounds. This approach may be incorrect if substantial differences in toxic potency exist between unsubstituted and substituted PAHs. The objective of the present study was to examine the narcotic and photo-enhanced toxicity of commercially available unsubstituted and alkylated PAHs to mysid shrimp (Americamysis bahia). Data were used to validate predictive models of phototoxicity based on the highest occupied molecular orbital-lowest unoccupied molecular orbital (HOMO-LUMO) gap approach and to develop relative effect potencies. Results demonstrated that photo-enhanced toxicity increased with increasing methylation and that phototoxic PAH potencies vary significantly among unsubstituted compounds. Overall, predictive models based on the HOMO-LUMO gap were relatively accurate in predicting phototoxicity for unsubstituted PAHs but are limited to qualitative assessments. Environ Toxicol Chem 2017;36:2043-2049. © 2017 SETAC.
Luo, Xiaochen; Nuttall, Amy K; Locke, Kenneth D; Hopwood, Christopher J
2018-01-01
Despite wide recognition of the importance of interpersonal problems in binge eating disorder (BED), the nature of this association remains unclear. Examining the direction of this longitudinal relationship is necessary to clarify the role that interpersonal problems play in the course of binge eating problems, and thus to specify treatment targets and mechanisms. This study aimed to articulate the bidirectional, longitudinal associations between BED and both the general severity of interpersonal problems and warm and dominant interpersonal styles. Severity and styles of interpersonal problems and BED symptoms were measured at baseline, 12 weeks, 24 weeks, and 36 weeks in a sample of 107 women in treatment for BED. Results from bivariate latent change score models indicated that interpersonal problem severity and BED symptoms are associated longitudinally but do not directly influence each other. The results indicated a bidirectional interrelation between binge eating symptoms and dominance, such that less dominance predicted greater decreases in binge eating problems, and fewer binge eating symptoms predicted greater increases in dominance. We also found that binge eating symptoms positively predicted changes in warmth (i.e., fewer binge eating symptoms predicted smaller increases or larger decreases in warmth). These findings highlight the importance of using dynamic models to examine directionality and delineate the distinct roles of interpersonal severity and styles in BED trajectories.
Predicting protein-binding regions in RNA using nucleotide profiles and compositions.
Choi, Daesik; Park, Byungkyu; Chae, Hanju; Lee, Wook; Han, Kyungsook
2017-03-14
Motivated by the increased amount of data on protein-RNA interactions and the availability of complete genome sequences of several organisms, many computational methods have been proposed to predict binding sites in protein-RNA interactions. However, most computational methods are limited to finding RNA-binding sites in proteins instead of protein-binding sites in RNAs. Predicting protein-binding sites in RNA is more challenging than predicting RNA-binding sites in proteins. Recent computational methods for finding protein-binding sites in RNAs have several drawbacks for practical use. We developed a new support vector machine (SVM) model for predicting protein-binding regions in mRNA sequences. The model uses sequence profiles constructed from log-odds scores of mono- and di-nucleotides together with nucleotide compositions. The model was evaluated by standard 10-fold cross validation, leave-one-protein-out (LOPO) cross validation, and independent testing. Since actual mRNA sequences have more non-binding regions than protein-binding regions, we tested the model on several datasets with different ratios of protein-binding regions to non-binding regions. The best performance of the model was obtained on a balanced dataset of positive and negative instances. 10-fold cross validation with a balanced dataset achieved a sensitivity of 91.6%, a specificity of 92.4%, an accuracy of 92.0%, a positive predictive value (PPV) of 91.7%, a negative predictive value (NPV) of 92.3% and a Matthews correlation coefficient (MCC) of 0.840. LOPO cross validation showed a lower performance than the 10-fold cross validation, but the performance remains high (87.6% accuracy and 0.752 MCC). In testing the model on independent datasets, it achieved an accuracy of 82.2% and an MCC of 0.656. Testing of our model and other state-of-the-art methods on the same dataset showed that our model is better than the others.
Sequence profiles of log-odds scores of mono- and di-nucleotides were much more powerful features than nucleotide compositions in finding protein-binding regions in RNA sequences. However, a slight performance gain was obtained when using the sequence profiles along with nucleotide compositions. These are preliminary results of ongoing research, but they demonstrate the potential of our approach as a powerful predictor of protein-binding regions in RNA. The program and supporting data are available at http://bclab.inha.ac.kr/RBPbinding.
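The log-odds sequence profiles described above can be sketched roughly as follows. The add-one smoothing, the window-scoring scheme, and the toy sequences are illustrative assumptions, not the authors' exact construction.

```python
from collections import Counter
from itertools import product
from math import log2

def kmer_counts(seqs, k):
    # Count all overlapping k-mers across a list of sequences
    c = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            c[s[i:i + k]] += 1
    return c

def log_odds(pos_seqs, neg_seqs, k, alphabet="ACGU"):
    """log2 ratio of k-mer frequency in binding vs. non-binding regions,
    with add-one smoothing over the full k-mer alphabet."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    pc, nc = kmer_counts(pos_seqs, k), kmer_counts(neg_seqs, k)
    pt = sum(pc.values()) + len(kmers)
    nt = sum(nc.values()) + len(kmers)
    return {m: log2(((pc[m] + 1) / pt) / ((nc[m] + 1) / nt)) for m in kmers}

pos = ["ACGUACGU", "ACGGACGU"]  # toy protein-binding regions
neg = ["UUUUGGGG", "GGGGUUUU"]  # toy non-binding regions
mono = log_odds(pos, neg, 1)
di = log_odds(pos, neg, 2)

def window_score(seq):
    """One possible SVM feature: mean mono- plus di-nucleotide log-odds."""
    m = sum(mono[c] for c in seq) / len(seq)
    d = sum(di[seq[i:i + 2]] for i in range(len(seq) - 1)) / max(len(seq) - 1, 1)
    return m + d

print(window_score("ACGUACGU"), window_score("UUUUGGGG"))
```

Windows resembling the binding regions score higher than non-binding-like windows; in a full pipeline, such per-window scores (or the underlying per-k-mer profiles) would be fed to an SVM alongside nucleotide compositions.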
A burnout prediction model based around char morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao Wu; Edward Lester; Michael Cloke
Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model are based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. The good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions.
Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds
Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark
2009-01-01
Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. 
The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously-published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
Munoz, Miranda J.; Kumar, Raj G.; Oh, Byung-Mo; Conley, Yvette P.; Wang, Zhensheng; Failla, Michelle D.; Wagner, Amy K.
2017-01-01
Distinct regulatory signaling mechanisms exist between cortisol and brain-derived neurotrophic factor (BDNF) that may influence secondary injury cascades associated with traumatic brain injury (TBI) and predict outcome. We investigated concurrent CSF BDNF and cortisol relationships in 117 patients sampled days 0–6 after severe TBI while accounting for BDNF genetics and age. We also determined associations between CSF BDNF and cortisol with 6-month mortality. BDNF variants rs6265 and rs7124442 were used to create a gene risk score (GRS) in reference to previously published hypothesized risk for mortality in “younger patients” (<48 years) and hypothesized BDNF production/secretion capacity with these variants. Group-based trajectory analysis (TRAJ) was used to create two cortisol groups (high and low trajectories). A Bayesian estimation approach informed the mediation models. Results show CSF BDNF predicted patient cortisol TRAJ group (P = 0.001). Also, GRS moderated BDNF associations with cortisol TRAJ group. Additionally, cortisol TRAJ predicted 6-month mortality (P = 0.001). In a mediation analysis, BDNF predicted mortality, with cortisol acting as the mediator (P = 0.011), yielding a mediation percentage of 29.92%. Mediation effects increased to 45.45% among younger patients. A BDNF*GRS interaction predicted mortality in younger patients (P = 0.004). Thus, we conclude that 6-month mortality after severe TBI can be predicted through a mediation model with CSF cortisol and BDNF, suggesting a regulatory role for cortisol in BDNF's contribution to TBI pathophysiology and mortality, particularly among younger individuals with severe TBI. Based on the literature, cortisol-modulated BDNF effects on mortality after TBI may be related to known hormone and neurotrophin relationships to neurological injury severity and autonomic nervous system imbalance. PMID:28337122
Medium-range, objective predictions of thunderstorm location and severity for aviation
NASA Technical Reports Server (NTRS)
Wilson, G. S.; Turner, R. E.
1981-01-01
This paper presents a computerized technique for medium-range (12-48 h) prediction of both the location and severity of thunderstorms utilizing atmospheric predictions from the National Meteorological Center's limited-area fine-mesh model (LFM). A regional-scale analysis scheme is first used to examine the spatial and temporal distributions of forecasted variables associated with the structure and dynamics of mesoscale systems over an area of approximately 10^6 sq km. The final prediction of thunderstorm location and severity is based upon an objective combination of these regionally analyzed variables. Medium-range thunderstorm predictions are presented for the late afternoon period of April 10, 1979, the day of the Wichita Falls, Texas tornado. Conventional medium-range thunderstorm forecasts, made from observed data, are presented with the case study to demonstrate the possible application of this objective technique in improving 12-48 h thunderstorm forecasts for aviation.
McGovern, Amy; Gagne, David J; Williams, John K; Brown, Rodger A; Basara, Jeffrey B
Severe weather, including tornadoes, thunderstorms, wind, and hail, annually causes significant loss of life and property. We are developing spatiotemporal machine learning techniques that will enable meteorologists to improve the prediction of these events by improving their understanding of the fundamental causes of the phenomena and by building skillful empirical predictive models. In this paper, we present significant enhancements of our Spatiotemporal Relational Probability Trees that enable autonomous discovery of spatiotemporal relationships as well as learning with arbitrary shapes. We focus our evaluation on two real-world case studies using our technique: predicting tornadoes in Oklahoma and predicting aircraft turbulence in the United States. We also discuss how to evaluate success for a machine learning algorithm in the severe weather domain, which will enable new methods such as ours to transfer from research to operations; we provide a set of lessons learned for embedded machine learning applications and discuss how to field our technique.
O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; Louzao, Maite
2012-01-01
Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest, and the spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated the predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of the five modelling techniques provided a reliable prediction of spatial patterns. We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and use these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.
Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends
Attanasi, E.D.; Coburn, T.C.
2009-01-01
This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation is presented, originally proposed by Tomczak, that offsets the distortions caused by the trend. This article reports on numerical experiments that compare predictive and classification performance of the local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance in the presence of trends are reasonably robust. The tests based on other model performance metrics, such as prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes, also demonstrate the robustness of both local modeling approaches. © International Association for Mathematical Geology 2009.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Michael L.
We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA's new phase change model. This memo briefly describes the model, implementation, and validation.
Roberts, Susan L.; Van Wagtendonk, Jan W.; Miles, A. Keith; Kelt, Douglas A.; Lutz, James A.
2008-01-01
We evaluated the impact of fire severity and related spatial and vegetative parameters on small mammal populations in 2 yr- to 15 yr-old burns in Yosemite National Park, California, USA. We also developed habitat models that would predict small mammal responses to fires of differing severity. We hypothesized that fire severity would influence the abundances of small mammals through changes in vegetation composition, structure, and spatial habitat complexity. Deer mouse (Peromyscus maniculatus) abundance responded negatively to fire severity, and brush mouse (P. boylii) abundance increased with increasing oak tree (Quercus spp.) cover. Chipmunk (Neotamias spp.) abundance was best predicted through a combination of a negative response to oak tree cover and a positive response to spatial habitat complexity. California ground squirrel (Spermophilus beecheyi) abundance increased with increasing spatial habitat complexity. Our results suggest that fire severity, with subsequent changes in vegetation structure and habitat spatial complexity, can influence small mammal abundance patterns.
Identifying depression severity risk factors in persons with traumatic spinal cord injury.
Williams, Ryan T; Wilson, Catherine S; Heinemann, Allen W; Lazowski, Linda E; Fann, Jesse R; Bombardier, Charles H
2014-02-01
We examine the relationship between demographic, health-, and injury-related characteristics and substance misuse across multiple levels of depression severity. 204 persons with traumatic spinal cord injury (SCI) volunteered as part of screening efforts for a randomized controlled trial of venlafaxine extended release for major depressive disorder (MDD). Instruments included the Patient Health Questionnaire-9 (PHQ-9) depression scale, the Alcohol Use Disorders Identification Test (AUDIT), and the Substance Abuse in Vocational Rehabilitation-Screener (SAVR-S), which contains 3 subscales: drug misuse, alcohol misuse, and a subtle items scale. Each of the SAVR-S subscales contributes to an overall substance use disorder (SUD) outcome. Three proportional odds models were specified, varying the substance misuse measure included in each model. 44% of individuals had no depression symptoms, 31% had mild symptoms, 16% had moderate symptoms, 6% had moderately severe symptoms, and 3% had severe depression symptoms. Alcohol misuse, as indicated by the AUDIT, and the SAVR-S drug misuse subscale scores were significant predictors of depression symptom severity. The SAVR-S substance use disorder (SUD) screening outcome was the most predictive variable. Level of education was significantly predictive of depression severity only in the model using the AUDIT alcohol misuse indicator. Likely SUD as measured by the SAVR-S was most predictive of depression symptom severity in this sample of persons with traumatic SCI. Drug and alcohol screening are important for identifying individuals at risk for depression, but screening for both may be optimal. Further research is needed on risk and protective factors for depression, including psychosocial characteristics.
NASA Astrophysics Data System (ADS)
Dash, Rajashree
2017-11-01
Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models with better prediction capability. In this paper, an evolutionary framework is proposed that uses an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets, USD/CAD, USD/CHF, and USD/JPY, accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.
DOT National Transportation Integrated Search
1974-08-01
The Transportation Systems Center (TSC) ILS Localizer Performance Prediction Model was used to predict the derogation to an Alford 1B Localizer caused by vehicular traffic traveling on a roadway to be located in front of the localizer. Several differ...
Measured rates of in vitro hepatic clearance by fish have been used by several authors as inputs to predictive models for chemical accumulation. The resulting predictions are consistent with observed trends in bioaccumulation and provide a proof of principle for the approach. ...
Predictions of cell damage rates for Lifesat missions
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Atwell, William; Hardy, Alva C.; Golightly, Michael J.; Wilson, John W.; Townsend, Lawrence W.; Shinn, Judy; Nealy, John E.; Katz, Robert
1990-01-01
The track model of Katz is used to make predictions of cell damage rates for possible Lifesat experiments. Contributions from trapped protons and electrons and galactic cosmic rays are considered for several orbits. Damage rates for survival and transformation of C3H10T1/2 cells are predicted for various spacecraft shields.
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
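The IPCW-weighted c-statistic discussed in this abstract can be sketched as below, in the spirit of Uno-type estimators. The synthetic data, the simple Kaplan-Meier step estimator of the censoring survival function, and the truncation time are all illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
risk = rng.normal(size=n)                        # model risk score (higher = worse prognosis)
t_event = rng.exponential(1.0 / np.exp(0.8 * risk))
t_cens = rng.exponential(2.0, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

def km_censoring(time, event):
    """Kaplan-Meier estimate of the censoring survival function G(t)."""
    order = np.argsort(time)
    t, cens = time[order], 1 - event[order]      # censoring indicator
    surv, g, at_risk = 1.0, {}, len(t)
    for i in range(len(t)):
        if cens[i]:
            surv *= 1.0 - 1.0 / at_risk
        g[t[i]] = surv
        at_risk -= 1
    times = sorted(g)
    def G(s):
        keys = [k for k in times if k <= s]
        return g[keys[-1]] if keys else 1.0
    return G

G = km_censoring(time, event)
tau = np.quantile(time, 0.8)                     # truncation time for the c-statistic

num = den = 0.0
for i in range(n):
    if event[i] == 1 and time[i] < tau:
        w = 1.0 / G(time[i]) ** 2                # IPCW weight for subject i
        later = time > time[i]                   # comparable pairs: j outlives i
        den += w * later.sum()
        num += w * ((risk < risk[i]) & later).sum()  # concordant: longer survivor has lower risk
print(f"IPCW c-statistic: {num / den:.3f}")
```

Reweighting each evaluable pair by the inverse squared probability of remaining uncensored corrects the bias that plain concordance counting suffers under censoring, which is the CAR setting the abstract analyzes.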
Elhai, Jon D; Voorhees, Summer; Ford, Julian D; Min, Kyeong Sam; Frueh, B Christopher
2009-01-30
We explored sociodemographic and illness/need associations with both recent mental healthcare utilization intensity and self-reported behavioral intentions to seek treatment. Data were examined from a community sample of 201 participants presenting for medical appointments at a Midwestern U.S. primary care clinic, in a cross-sectional survey study. Using non-linear regression analyses accounting for the excess of zero values in treatment visit counts, we found that both sociodemographic and illness/need models were significantly predictive of both recent treatment utilization intensity and intentions to seek treatment. Need models added substantial variance in prediction, above and beyond sociodemographic models. Variables with the greatest predictive role in explaining past treatment utilization intensity were greater depression severity, perceived need for treatment, older age, and lower income. Robust variables in predicting intentions to seek treatment were greater depression severity, perceived need for treatment, and more positive treatment attitudes. This study extends research findings on mental health treatment utilization, specifically addressing medical patients and using statistical methods appropriate to examining treatment visit counts, and demonstrates the importance of both objective and subjective illness/need variables in predicting recent service use intensity and intended future utilization.
Stagnation Point Nonequilibrium Radiative Heating and the Influence of Energy Exchange Models
NASA Technical Reports Server (NTRS)
Hartung, Lin C.; Mitcheltree, Robert A.; Gnoffo, Peter A.
1991-01-01
A nonequilibrium radiative heating prediction method has been used to evaluate several energy exchange models used in nonequilibrium computational fluid dynamics methods. The radiative heating measurements from the FIRE II flight experiment supply an experimental benchmark against which different formulations for these exchange models can be judged. The models which predict the lowest radiative heating are found to give the best agreement with the flight data. Examination of the spectral distribution of radiation indicates that despite close agreement of the total radiation, many of the models examined predict excessive molecular radiation. It is suggested that a study of the nonequilibrium chemical kinetics may lead to a correction for this problem.
Amaku, M; Azevedo, F; Burattini, M N; Coelho, G E; Coutinho, F A B; Greenhalgh, D; Lopez, L F; Motitsuki, R S; Wilder-Smith, A; Massad, E
2016-08-19
The classical Ross-Macdonald model is often utilized to model vector-borne infections; however, this model fails on several fronts. First, using measured (or estimated) parameters whose values are accepted from the literature, the model predicts a much greater number of cases than what is usually observed. Second, the model predicts a single large outbreak that is followed by decades of much smaller outbreaks, which is not consistent with what is observed. Usually towns or cities report a number of recurrences for many years, even when environmental changes cannot explain the disappearance of the infection between the peaks. In this paper, we continue to examine the pitfalls in modelling this class of infections, and explain that, if properly used, the Ross-Macdonald model works and can be used to understand the patterns of epidemics and even, to some extent, be used to make predictions. We model several outbreaks of dengue fever and show that the variable pattern of yearly recurrence (or its absence) can be understood and explained by a simple Ross-Macdonald model modified to take into account human movement across a range of neighbourhoods within a city. In addition, we analyse the effect of seasonal variations in the parameters that determine the number, longevity and biting behaviour of mosquitoes. Based on the size of the first outbreak, we show that it is possible to estimate the proportion of the remaining susceptible individuals and to predict the likelihood and magnitude of the eventual subsequent outbreaks. This approach is described based on actual dengue outbreaks with different recurrence patterns from some Brazilian regions.
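A minimal Ross-Macdonald host-vector system with seasonally forced mosquito density, of the general kind this abstract discusses, might look like the sketch below. All parameter values and the sinusoidal forcing are illustrative assumptions, not the paper's estimates, and the paper's key modification (human movement across neighbourhoods) is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

a = 0.5             # mosquito biting rate (bites per mosquito per day)
b, c = 0.3, 0.3     # vector-to-host and host-to-vector transmission probabilities
gamma = 1 / 7       # human recovery rate (1/day)
mu = 1 / 10         # mosquito death rate (1/day)
m0, eps = 2.0, 0.5  # mean mosquitoes per human, seasonal amplitude

def rhs(t, y):
    ih, iv = y      # infected fractions of humans and mosquitoes
    m = m0 * (1 + eps * np.sin(2 * np.pi * t / 365.0))  # seasonal vector density
    dih = m * a * b * iv * (1 - ih) - gamma * ih
    div = a * c * ih * (1 - iv) - mu * iv
    return [dih, div]

# Start from a small imported case load and integrate over three years
sol = solve_ivp(rhs, (0, 3 * 365), [1e-4, 0.0], max_step=1.0)
print(f"peak human prevalence: {sol.y[0].max():.2f}")
```

Seasonal variation in m is one of the mechanisms the authors analyse; in their full treatment, movement between neighbourhoods and depletion of susceptibles shape whether and how strongly outbreaks recur.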
Predictive validity of behavioural animal models for chronic pain
Berge, Odd-Geir
2011-01-01
Rodent models of chronic pain may elucidate pathophysiological mechanisms and identify potential drug targets, but whether they predict clinical efficacy of novel compounds is controversial. Several potential analgesics have failed in clinical trials in spite of strong animal modelling support for efficacy, but there are also examples of successful modelling. Significant differences in how methods are implemented and results are reported mean that a literature-based comparison between preclinical data and clinical trials will not reveal whether a particular model is generally predictive. Limited reporting of negative outcomes prevents reliable estimates of the specificity of any model. Animal models tend to be validated with standard analgesics and may be biased towards tractable pain mechanisms. Moreover, preclinical publications rarely contain drug exposure data, and drugs are usually given in high doses and as a single administration, which may lead to drug distribution and exposure deviating significantly from clinical conditions. The greatest challenge for predictive modelling is, however, the heterogeneity of the target patient populations, in terms of both symptoms and pharmacology, probably reflecting differences in pathophysiology. In well-controlled clinical trials, a majority of patients shows less than 50% reduction in pain. A model that responds well to current analgesics should therefore predict efficacy only in a subset of patients within a diagnostic group. It follows that successful translation requires several models for each indication, reflecting critical pathophysiological processes, combined with data linking exposure levels with effect on target. LINKED ARTICLES: This article is part of a themed issue on Translational Neuropharmacology. To view the other articles in this issue visit http://dx.doi.org/10.1111/bph.2011.164.issue-4 PMID:21371010
2002-03-01
source term. Several publications provided a thorough accounting of the accident, including “Chernobyl Record” [Mould] and the NRC technical report...Report on the Accident at the Chernobyl Nuclear Power Station” [NUREG-1250]. The most comprehensive study of transport models to predict the...from the Chernobyl Accident: The ATMES Report” [Klug, et al.]. The Atmospheric Transport Model Evaluation Study (ATMES) report used data
2012-09-01
make end-of-life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first...in parameters Degradation Modeling Parameter estimation Prediction Thermal / Electrical Stress Experimental Data State Space model RUL EOL ...distribution at a given single time point kP, and use this for multi-step predictions to EOL. There are several methods that exist for selecting the sigma
Marek K. Jakubowksi; Qinghua Guo; Brandon Collins; Scott Stephens; Maggi Kelly
2013-01-01
We compared the ability of several classification and regression algorithms to predict forest stand structure metrics and standard surface fuel models. Our study area spans a dense, topographically complex Sierra Nevada mixed-conifer forest. We used clustering, regression trees, and support vector machine algorithms to analyze high density (average 9 pulses/m
Strategies for Near Real Time Estimates of Precipitable Water Vapor from GPS Ground Receivers
NASA Technical Reports Server (NTRS)
Y., Bar-Sever; Runge, T.; Kroger, P.
1995-01-01
GPS-based estimates of precipitable water vapor (PWV) may be useful in numerical weather models to improve short-term weather predictions. To be effective in numerical weather prediction models, GPS PWV estimates must be produced with sufficient accuracy in near real time. Several estimation strategies for the near real time processing of GPS data are investigated.
Evaluation of Deep Learning Representations of Spatial Storm Data
NASA Astrophysics Data System (ADS)
Gagne, D. J., II; Haupt, S. E.; Nychka, D. W.
2017-12-01
The spatial structure of a severe thunderstorm and its surrounding environment provide useful information about the potential for severe weather hazards, including tornadoes, hail, and high winds. Statistics computed over the area of a storm or from the pre-storm environment can provide descriptive information but fail to capture structural information. Because the storm environment is a complex, high-dimensional space, identifying methods to encode important spatial storm information in a low-dimensional form should aid analysis and prediction of storms by statistical and machine learning models. Principal component analysis (PCA), a more traditional approach, transforms high-dimensional data into a set of linearly uncorrelated, orthogonal components ordered by the amount of variance explained by each component. The burgeoning field of deep learning offers two potential approaches to this problem. Convolutional Neural Networks are a supervised learning method for transforming spatial data into a hierarchical set of feature maps that correspond with relevant combinations of spatial structures in the data. Generative Adversarial Networks (GANs) are an unsupervised deep learning model that uses two neural networks trained against each other to produce encoded representations of spatial data. These different spatial encoding methods were evaluated on the prediction of severe hail for a large set of storm patches extracted from the NCAR convection-allowing ensemble. Each storm patch contains information about storm structure and the near-storm environment. Logistic regression and random forest models were trained using the PCA and GAN encodings of the storm data and were compared against the predictions from a convolutional neural network. All methods showed skill over climatology at predicting the probability of severe hail. However, the verification scores among the methods were very similar and the predictions were highly correlated. 
Further evaluations are being performed to determine how the choice of input variables affects the results.
Kassam, Zain; Fabersunne, Camila Cribb; Smith, Mark B.; Alm, Eric J.; Kaplan, Gilaad G.; Nguyen, Geoffrey C.; Ananthakrishnan, Ashwin N.
2016-01-01
Background Clostridium difficile infection (CDI) is a public health threat and is associated with significant mortality. However, there is a paucity of objectively derived CDI severity scoring systems to predict mortality. Aims To develop a novel CDI risk score to predict mortality, entitled the Clostridium difficile Associated Risk of Death Score (CARDS). Methods We obtained data from the United States 2011 Nationwide Inpatient Sample (NIS) database. All CDI-associated hospitalizations were identified using discharge codes (ICD-9-CM, 008.45). Multivariate logistic regression was utilized to identify independent predictors of mortality. CARDS was calculated by assigning a numeric weight to each parameter based on its odds ratio in the final logistic model. Predictive properties of model discrimination were assessed using the c-statistic and validated in an independent sample using the 2010 NIS database. Results We identified 77,776 hospitalizations, yielding an estimate of 374,747 cases with an associated diagnosis of CDI in the United States, 8% of whom died in the hospital. Eight severity score predictors were identified on multivariate analysis: age, cardiopulmonary disease, malignancy, diabetes, inflammatory bowel disease, acute renal failure, liver disease and ICU admission, with weights ranging from −1 (for diabetes) to 5 (for ICU admission). The overall risk score in the cohort ranged from 0 to 18. Mortality increased significantly as CARDS increased: CDI-associated mortality was 1.2% with a CARDS of 0 compared to 100% with a CARDS of 18. The model performed equally well in our validation cohort. Conclusion CARDS is a promising simple severity score to predict mortality among those hospitalized with CDI. PMID:26849527
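A score of this form is a simple weighted checklist. The sketch below shows the mechanics; only the diabetes (−1) and ICU admission (+5) weights are stated in the abstract, so the remaining weights and factor names are placeholders:

```python
# Hypothetical weights: only diabetes (-1) and ICU admission (+5) are
# stated in the abstract; the other values are illustrative placeholders.
CARDS_WEIGHTS = {
    "advanced_age": 2,                 # placeholder
    "cardiopulmonary_disease": 1,      # placeholder
    "malignancy": 2,                   # placeholder
    "diabetes": -1,                    # from the abstract
    "inflammatory_bowel_disease": 1,   # placeholder
    "acute_renal_failure": 2,          # placeholder
    "liver_disease": 2,                # placeholder
    "icu_admission": 5,                # from the abstract
}

def cards_score(patient):
    """Sum the weights of the risk factors present for one patient.
    `patient` maps factor names to True/False."""
    return sum(w for f, w in CARDS_WEIGHTS.items() if patient.get(f, False))
```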
Ferrer, Rebecca A; Klein, William M P; Avishai, Aya; Jones, Katelyn; Villegas, Megan; Sheeran, Paschal
2018-01-01
Although risk perception is a key concept in many health behavior theories, little research has explicitly tested when risk perception predicts motivation to take protective action against a health threat (protection motivation). The present study tackled this question by (a) adopting a multidimensional model of risk perception that comprises deliberative, affective, and experiential components (the TRIRISK model), and (b) taking a person-by-situation approach. We leveraged a highly intensive within-subjects paradigm to test features of the health threat (i.e., perceived severity) and individual differences (e.g., emotion reappraisal) as moderators of the relationship between the three types of risk perception and protection motivation in a within-subjects design. Multi-level modeling of 2968 observations (32 health threats across 94 participants) showed interactions among the TRIRISK components and moderation both by person-level and situational factors. For instance, affective risk perception better predicted protection motivation when deliberative risk perception was high, when the threat was less severe, and among participants who engage less in emotional reappraisal. These findings support the TRIRISK model and offer new insights into when risk perceptions predict protection motivation.
A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction
Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling
2014-01-01
Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1,k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle. In this study, total energy, coal, and electricity consumption of China is adopted for demonstration using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of multi-period historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174
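The classical GM(1,1) base model that NGM(1,1,k) extends can be sketched as follows. This is the generic textbook formulation (accumulate, fit the grey parameters by least squares, invert the time response), not the paper's self-memory coupling model:

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=3):
    """Classical GM(1,1) grey model: fit a positive data sequence x0 and
    return fitted values plus n_ahead out-of-sample forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])             # means of consecutive x1 values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey parameters
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat                              # fitted + forecast values
```

For a near-homogeneous exponential series (the case the abstract targets), the fit is close to exact; the self-memory coupling is what addresses initial-value sensitivity and stochastic fluctuation.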
Some Aspects of Forecasting Severe Thunderstorms during Cool-Season Return-Flow Episodes.
NASA Astrophysics Data System (ADS)
Weiss, Steven J.
1992-08-01
Historically, the Gulf of Mexico has been considered a primary source of water vapor that influences the weather for much of the United States east of the Rocky Mountains. Although severe thunderstorms and tornadoes occur most frequently during the spring and summer months, the periodic transport of Gulf moisture inland ahead of traveling baroclinic waves can result in significant severe-weather episodes during the cool season. To gain insight into the short-range skill in forecasting surface synoptic patterns associated with moisture return from the Gulf, operational numerical weather prediction models from the National Meteorological Center were examined. Sea level pressure fields from the Limited-Area Fine-Mesh Model (LFM), Nested Grid Model (NGM), and the aviation (AVN) run of the Global Spectral Model, valid 48 h after initial data time, were evaluated for three cool-season cases that preceded severe local storm outbreaks. The NGM and AVN provided useful guidance in forecasting the onset of return flow along the Gulf coast, though both showed a slight tendency to be slow in developing return flow. In contrast, the LFM typically overforecast the occurrence of return flow and tended to `open the Gulf' from west to east too quickly. Although the low-level synoptic pattern may be forecast correctly, the overall prediction process is hampered by a data void over the Gulf. It is hypothesized that when the return-flow moisture is located over the Gulf, model forecasts of stability and the resultant operational severe local storm forecasts are less skillful than in situations when the moisture has already spread inland.
This hypothesis is tested by examining the performance of the initial second-day (day 2) severe thunderstorm outlook issued by the National Severe Storms Forecast Center during the Gulf of Mexico Experiment (GUFMEX) in early 1988. It has been found that characteristically different air masses were present along the Gulf coast prior to the issuance of outlooks that accurately predicted the occurrence of severe thunderstorms versus outlooks that did not verify well. Unstable air masses with ample low-level moisture were in place along the coast prior to the issuance of the `good' day 2 outlooks, whereas relatively dry, stable air masses were present before the issuance of `false-alarm' outlooks. In the latter cases, large errors in the NGM 48-h lifted-index predictions were located north of the Gulf coast.
Predicting Drug-induced Hepatotoxicity Using QSAR and Toxicogenomics Approaches
Low, Yen; Uehara, Takeki; Minowa, Yohsuke; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro; Sedykh, Alexander; Muratov, Eugene; Fourches, Denis; Zhu, Hao; Rusyn, Ivan; Tropsha, Alexander
2014-01-01
Quantitative Structure-Activity Relationship (QSAR) modeling and toxicogenomics are used independently as predictive tools in toxicology. In this study, we evaluated the power of several statistical models for predicting drug hepatotoxicity in rats using different descriptors of drug molecules, namely their chemical descriptors and toxicogenomic profiles. The records were taken from the Toxicogenomics Project rat liver microarray database containing information on 127 drugs (http://toxico.nibio.go.jp/datalist.html). The model endpoint was hepatotoxicity in the rat following 28 days of exposure, established by liver histopathology and serum chemistry. First, we developed multiple conventional QSAR classification models using a comprehensive set of chemical descriptors and several classification methods (k nearest neighbor, support vector machines, random forests, and distance weighted discrimination). With chemical descriptors alone, external predictivity (Correct Classification Rate, CCR) from 5-fold external cross-validation was 61%. Next, the same classification methods were employed to build models using only toxicogenomic data (24h after a single exposure) treated as biological descriptors. The optimized models used only 85 selected toxicogenomic descriptors and had CCR as high as 76%. Finally, hybrid models combining both chemical descriptors and transcripts were developed; their CCRs were between 68 and 77%. Although the accuracy of hybrid models did not exceed that of the models based on toxicogenomic data alone, the use of both chemical and biological descriptors enriched the interpretation of the models. In addition to finding 85 transcripts that were predictive and highly relevant to the mechanisms of drug-induced liver injury, chemical structural alerts for hepatotoxicity were also identified. 
These results suggest that concurrent exploration of the chemical features and acute treatment-induced changes in transcript levels will both enrich the mechanistic understanding of sub-chronic liver injury and afford models capable of accurate prediction of hepatotoxicity from chemical structure and short-term assay results. PMID:21699217
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hooman, A.; Mohammadzadeh, M
Some medical and epidemiological surveys have been designed to predict a nominal response variable with several levels. With regard to the type of pregnancy, there are four possible states: wanted, unwanted by wife, unwanted by husband and unwanted by couple. In this paper, we have predicted the type of pregnancy, as well as the factors influencing it, using three different models and comparing them. Given that the type of pregnancy has several levels, we developed a multinomial logistic regression, a neural network and a flexible discriminant analysis based on the data and compared their results using two statistical indices: area under the ROC curve and the kappa coefficient. Based on these two indices, flexible discriminant analysis proved to be a better fit for prediction on these data in comparison to the other methods. When the relations among variables are complex, one can use flexible discriminant analysis instead of multinomial logistic regression and neural networks to predict nominal response variables with several levels in order to obtain more accurate predictions.
NASA Technical Reports Server (NTRS)
Benedetti, Angela; Baldasano, Jose M.; Basart, Sara; Benincasa, Francesco; Boucher, Olivier; Brooks, Malcolm E.; Chen, Jen-Ping; Colarco, Peter R.; Gong, Sunlin; Huneeus, Nicolas;
2014-01-01
Over the last few years, numerical prediction of dust aerosol concentration has become prominent at several research and operational weather centres due to growing interest from diverse stakeholders, such as solar energy plant managers, health professionals, aviation and military authorities and policymakers. Dust prediction in numerical weather prediction-type models faces a number of challenges owing to the complexity of the system. At the centre of the problem is the vast range of scales required to fully account for all of the physical processes related to dust. Another limiting factor is the paucity of suitable dust observations available for model evaluation and assimilation. This chapter discusses in detail numerical prediction of dust with examples from systems that are currently providing dust forecasts in near real-time or are part of international efforts to establish daily provision of dust forecasts based on multi-model ensembles. The various models are introduced and described along with an overview on the importance of dust prediction activities and a historical perspective. Assimilation and evaluation aspects in dust prediction are also discussed.
Howells, Laura; Chisholm, Anna; Cotterill, Sarah; Chinoy, Hector; Warren, Richard B; Bundy, Christine
2018-02-01
Little is known about how people with psoriatic arthritis (PsA) cope with and manage their condition, but data show that psychological problems are underrecognized and undertreated. The Common Sense Self-Regulatory Model (CS-SRM) suggests illness beliefs, mediated by coping, may influence health outcomes. The study aimed to investigate the roles of disease severity, illness beliefs, and coping strategies in predicting depression, anxiety, and quality of life (QoL) in people with PsA. Additionally, we aimed to assess the role of depression and anxiety in predicting QoL. We conducted a cross-sectional observational study, where adults with PsA (n = 179) completed validated measures of predictor (illness beliefs, coping strategies, disease severity) and outcome variables (depression, anxiety, QoL) using an online survey distributed via social media. The participants were a community sample of 179 adults with PsA, ages 20 to 72 years (77.1% female). After controlling for disease severity, hierarchical multiple regression models indicated that more negative beliefs about consequences and behavioral disengagement as a coping method predicted levels of depression, and self-blame predicted anxiety. Beliefs about consequences and the presence of depression predicted quality of life scores after controlling for disease severity. This study offers support for the use of the CS-SRM in explaining variation on psychological outcomes in individuals with PsA. The illness beliefs and coping strategies identified as predictors in this article are potential targets for interventions addressing PsA-related distress and QoL. © 2017, American College of Rheumatology.
Interactive effects of prey and weather on golden eagle reproduction
Steenhof, Karen; Kochert, Michael N.; McDonald, T.L.
1997-01-01
1. The reproduction of the golden eagle Aquila chrysaetos was studied in southwestern Idaho for 23 years, and the relationship between eagle reproduction and jackrabbit Lepus californicus abundance, weather factors, and their interactions, was modelled using general linear models. Backward elimination procedures were used to arrive at parsimonious models. 2. The number of golden eagle pairs occupying nesting territories each year showed a significant decline through time that was unrelated to either annual rabbit abundance or winter severity. However, eagle hatching dates were significantly related to both winter severity and jackrabbit abundance. Eagles hatched earlier when jackrabbits were abundant, and they hatched later after severe winters. 3. Jackrabbit abundance influenced the proportion of pairs that laid eggs, the proportion of pairs that were successful, mean brood size at fledging, and the number of young fledged per pair. Weather interacted with prey to influence eagle reproductive rates. 4. Both jackrabbit abundance and winter severity were important in predicting the percentage of eagle pairs that laid eggs. Percentage laying was related positively to jackrabbit abundance and inversely related to winter severity. 5. The variables most useful in predicting the percentage of laying pairs successful were rabbit abundance and the number of extremely hot days during brood-rearing. The number of hot days and rabbit abundance were also significant in a model predicting eagle brood size at fledging. Both success and brood size were positively related to jackrabbit abundance and inversely related to the frequency of hot days in spring. 6. Eagle reproduction was limited by rabbit abundance during approximately two-thirds of the years studied. Weather influenced how severely eagle reproduction declined in those years. 7. This study demonstrates that prey and weather can interact to limit a large raptor population's productivity.
Smaller raptors could be affected more strongly, especially in colder or wetter climates.
NASA Technical Reports Server (NTRS)
Kalluri, Sreeramesh
2013-01-01
Structural materials used in engineering applications are routinely subjected to repetitive mechanical loads in multiple directions under non-isothermal conditions. Over the past few decades, several multiaxial fatigue life estimation models (stress- and strain-based) have been developed for isothermal conditions. Historically, numerous fatigue life prediction models have also been developed for thermomechanical fatigue (TMF), predominantly for uniaxial mechanical loading conditions. Realistic structural components encounter multiaxial loads and non-isothermal loading conditions, which increase the potential for interaction of damage modes. A need exists for mechanical testing and for the development and verification of life prediction models under such conditions.
Age structure is critical to the population dynamics and survival of honeybee colonies.
Betti, M I; Wahl, L M; Zamir, M
2016-11-01
Age structure is an important feature of the division of labour within honeybee colonies, but its effects on colony dynamics have rarely been explored. We present a model of a honeybee colony that incorporates this key feature, and use this model to explore the effects of both winter and disease on the fate of the colony. The model offers a novel explanation for the frequently observed phenomenon of 'spring dwindle', which emerges as a natural consequence of the age-structured dynamics. Furthermore, the results indicate that a model taking age structure into account markedly affects the predicted timing and severity of disease within a bee colony. The timing of the onset of disease with respect to the changing seasons may also have a substantial impact on the fate of a honeybee colony. Finally, simulations predict that an infection may persist in a honeybee colony over several years, with effects that compound over time. Thus, the ultimate collapse of the colony may be the result of events several years past.
Chen, Yumiao; Yang, Zhongliang
2017-01-01
Recently, several researchers have considered the problem of reconstruction of handwriting and other meaningful arm and hand movements from surface electromyography (sEMG). Although much progress has been made, several practical limitations may still affect the clinical applicability of sEMG-based techniques. In this paper, a novel three-step hybrid model of coordinate state transition, sEMG feature extraction and gene expression programming (GEP) prediction is proposed for reconstructing drawing traces of 12 basic one-stroke shapes from multichannel surface electromyography. Using a specially designed coordinate data acquisition system, we recorded the coordinate data of drawing traces as time series while 7-channel sEMG signals were recorded. Root Mean Square (RMS), a widely used time-domain feature, was extracted over sliding analysis windows. Preliminary reconstruction models were established by GEP, and the original drawing traces were then approximated by the constructed prediction model. Applying the three-step hybrid model, we were able to convert seven channels of EMG activity recorded from the arm muscles into smooth reconstructions of drawing traces. The hybrid model yields a mean accuracy of 74% in a within-group design (one set of prediction models for all shapes) and 86% in a between-group design (one separate set of prediction models for each shape), averaged over the reconstructed x and y coordinates. It can be concluded that the proposed three-step hybrid model is a feasible way to improve the reconstruction of drawing traces from sEMG.
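The windowed RMS feature mentioned above is straightforward to compute per channel; the window and step sizes (in samples) below are illustrative, not the study's settings:

```python
import numpy as np

def windowed_rms(emg, window=200, step=50):
    """Root Mean Square of a 1-D sEMG signal over sliding analysis
    windows, the time-domain feature fed to the downstream model."""
    feats = []
    for start in range(0, len(emg) - window + 1, step):
        seg = emg[start:start + window]
        feats.append(np.sqrt(np.mean(seg ** 2)))   # RMS of one window
    return np.array(feats)
```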
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John
Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in complex urban areas. It solves the time-dependent incompressible Navier-Stokes equations on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
Stempler, Shiri; Yizhak, Keren; Ruppin, Eytan
2014-01-01
Accumulating evidence links numerous abnormalities in cerebral metabolism with the progression of Alzheimer's disease (AD), beginning in its early stages. Here, we integrate transcriptomic data from AD patients with a genome-scale computational human metabolic model to characterize the altered metabolism in AD, and employ state-of-the-art metabolic modelling methods to predict metabolic biomarkers and drug targets in AD. The metabolic descriptions derived are first tested and validated on a large scale versus existing AD proteomics and metabolomics data. Our analysis shows a significant decrease in the activity of several key metabolic pathways, including the carnitine shuttle, folate metabolism and mitochondrial transport. We predict several metabolic biomarkers of AD progression in the blood and the CSF, including succinate and prostaglandin D2. Vitamin D and steroid metabolism pathways are enriched with predicted drug targets that could mitigate the metabolic alterations observed. Taken together, this study provides the first network wide view of the metabolic alterations associated with AD progression. Most importantly, it offers a cohort of new metabolic leads for the diagnosis of AD and its treatment. PMID:25127241
HomoTarget: a new algorithm for prediction of microRNA targets in Homo sapiens.
Ahmadi, Hamed; Ahmadi, Ali; Azimzadeh-Jamalkandi, Sadegh; Shoorehdeli, Mahdi Aliyari; Salehzadeh-Yazdi, Ali; Bidkhori, Gholamreza; Masoudi-Nejad, Ali
2013-02-01
MiRNAs play an essential role in the networks of gene regulation by inhibiting the translation of target mRNAs. Several computational approaches have been proposed for the prediction of miRNA target-genes. Reports reveal a large fraction of under-predicted or falsely predicted target genes. Thus, there is an imperative need to develop a computational method by which the target mRNAs of existing miRNAs can be correctly identified. In this study, a combined pattern recognition neural network (PRNN) and principal component analysis (PCA) architecture has been proposed in order to model the complicated relationship between miRNAs and their target mRNAs in humans. The results of several types of intelligent classifiers and our proposed model were compared, showing that our algorithm outperformed them with higher sensitivity and specificity. Using the recent release of the miRBase database to find potential targets of miRNAs, this model incorporated twelve structural, thermodynamic and positional features of miRNA:mRNA binding sites to select target candidates. Copyright © 2012 Elsevier Inc. All rights reserved.
Anwar-Mohamed, Anwar; Barakat, Khaled H; Bhat, Rakesh; Noskov, Sergei Y; Tyrrell, D Lorne; Tuszynski, Jack A; Houghton, Michael
2014-11-04
Acquired cardiac long QT syndrome (LQTS) is a frequent drug-induced toxic event that is often caused through blocking of the human ether-à-go-go-related (hERG) K(+) ion channel. This has led to the removal of several major drugs post-approval and is a frequent cause of termination of clinical trials. We report here a computational atomistic model derived using long molecular dynamics that allows sensitive prediction of hERG blockage. It identified drug-mediated hERG blocking activity of a test panel of 18 compounds with high sensitivity and specificity and was experimentally validated using hERG binding assays and patch clamp electrophysiological assays. The model discriminates between potent, weak, and non-hERG blockers and is superior to previous computational methods. This computational model serves as a powerful new tool to predict hERG blocking, thus rendering drug development safer and more efficient. As an example, we show that a drug that was halted recently in clinical development because of severe cardiotoxicity is a potent inhibitor of hERG in two different biological assays, which could have been predicted using our new computational model. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.
Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O
2017-08-01
To investigate whether more frequent monitoring of absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with a reduced number of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk of severe neutropenia and predicting when the next cycle could be initiated.
[Prediction of Severe Course in Infants with RSV Bronchiolitis under 6 Months. Spain].
Ramos-Fernández, José Miguel; Moreno-Pérez, David; Gutiérrez-Bedmar, Mario; Hernández-Yuste, Alexandra; Cordón-Martínez, Ana María; Milano-Manso, Guillermo; Urda-Cardona, Antonio
2017-01-19
The need for mechanical ventilation (MV) in acute bronchiolitis (AB) caused by respiratory syncytial virus (RSV) varies between series (6-18%). Our goal was to determine the rate of PICU admission for MV in patients under 6 months with AB and to define the risk factors for building a prediction model. A retrospective study was made of patients younger than 6 months admitted for RSV AB between April 1, 2010 and March 31, 2015. The primary variable was admission to the PICU for MV. In addition, clinical variables were collected to identify risk factors in a binary logistic regression model. A ROC curve was constructed and the optimal cutoff point was identified. Of 695 cases, 56 (8.1%) required MV in the PICU. The risk factors included in the equation were: male sex (OR 4.27); postmenstrual age (OR 0.76); weight on admission below the 3rd percentile (OR 5.53); oral intake less than 50% (OR 12.4); severity by clinical scale (OR 1.58); apnoeas before admission (OR 25.5); bacterial superinfection (OR 5.03); and gestational age over 37 weeks (OR 0.32). The area under the curve, sensitivity and specificity were 0.943, 0.84 and 0.93, respectively. PICU admission for MV occurred in 8.1 of every 100 previously healthy infants hospitalized for AB per year. The prediction model equation can help identify patients at increased risk of severe evolution.
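The eight odds ratios above define a binary logistic regression; as a sketch, here is how such a model turns them into an individual risk estimate. The intercept is not reported in the abstract, so the value below is hypothetical, as are the predictor codings and the example infant:

```python
import math

# Odds ratios reported in the abstract; a logistic model's coefficients
# are the natural logs of the odds ratios.
ODDS_RATIOS = {
    "male_sex": 4.27,
    "postmenstrual_age": 0.76,       # per unit of age on the study's scale
    "weight_below_p3": 5.53,
    "intake_below_50pct": 12.4,
    "severity_scale": 1.58,          # per point
    "apnoeas_before_admission": 25.5,
    "bacterial_superinfection": 5.03,
    "gestational_age_over_37w": 0.32,
}
INTERCEPT = -6.0  # hypothetical; the abstract does not report the intercept

def predicted_risk(predictors):
    """P(PICU admission for MV) = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    logit = INTERCEPT + sum(math.log(ODDS_RATIOS[name]) * value
                            for name, value in predictors.items())
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical example: male infant with pre-admission apnoeas and poor intake
infant = {"male_sex": 1, "apnoeas_before_admission": 1,
          "intake_below_50pct": 1, "postmenstrual_age": 8}
risk = predicted_risk(infant)
```

Note how an OR below 1 (e.g. postmenstrual age) lowers the logit, matching the protective direction reported in the study.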
Spatiotemporal Bayesian networks for malaria prediction.
Haddawy, Peter; Hasan, A H M Imrul; Kasantikul, Rangwan; Lawpoolsri, Saranath; Sa-Angchai, Patiwat; Kaewkungwal, Jaranit; Singhasivanon, Pratap
2018-01-01
Targeted intervention and resource allocation are essential for effective malaria control, particularly in remote areas, with predictive models providing important information for decision making. While a diversity of modeling techniques has been used to create predictive models of malaria, no work has made use of Bayesian networks. Bayes nets are attractive due to their ability to represent uncertainty, model time-lagged and nonlinear relations, and provide explanations. This paper explores the use of Bayesian networks to model malaria, demonstrating the approach by creating village-level models with weekly temporal resolution for Tha Song Yang district in northern Thailand. The networks are learned using data on cases and environmental covariates. Three types of networks are explored: networks for numeric prediction, networks for outbreak prediction, and networks that incorporate spatial autocorrelation. Evaluation of the numeric prediction network shows that the Bayes net has prediction accuracy in terms of mean absolute error of about 1.4 cases for 1-week prediction and 1.7 cases for 6-week prediction. The network for outbreak prediction has an ROC AUC above 0.9 for all prediction horizons. Comparison of prediction accuracy of both Bayes nets against several traditional modeling approaches shows the Bayes nets to outperform the other models for longer time horizon prediction of high incidence transmission. To model the spread of malaria over space, we elaborate the models with links between the village networks. This results in some very large models that would be far too laborious to build by hand, so we represent the models as collections of probability logic rules and automatically generate the networks. Evaluation of the models shows that the autocorrelation links significantly improve prediction accuracy for some villages in regions of high incidence.
We conclude that spatiotemporal Bayesian networks are a highly promising modeling alternative for prediction of malaria and other vector-borne diseases. Copyright © 2017 Elsevier B.V. All rights reserved.
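The outbreak networks above combine conditional probability tables over parent variables such as lagged cases and environmental covariates. A toy sketch of that mechanism, with an invented two-parent structure and made-up probabilities (nothing below is taken from the paper):

```python
# Parents of the outbreak node and their prior probabilities (invented)
P_RAIN = 0.3       # P(heavy rainfall this week)
P_HIGH_LAG = 0.2   # P(high case count in the lagged weeks)

# Conditional probability table: P(outbreak = 1 | rain, high_lag), invented
CPT_OUTBREAK = {
    (0, 0): 0.02, (0, 1): 0.20,
    (1, 0): 0.10, (1, 1): 0.55,
}

def p_outbreak():
    """Marginal P(outbreak) by enumerating both parent variables."""
    total = 0.0
    for rain in (0, 1):
        for lag in (0, 1):
            weight = ((P_RAIN if rain else 1 - P_RAIN)
                      * (P_HIGH_LAG if lag else 1 - P_HIGH_LAG))
            total += weight * CPT_OUTBREAK[(rain, lag)]
    return total

def p_rain_given_outbreak():
    """Diagnostic inference via Bayes' rule: P(rain | outbreak)."""
    joint = P_RAIN * sum((P_HIGH_LAG if lag else 1 - P_HIGH_LAG)
                         * CPT_OUTBREAK[(1, lag)] for lag in (0, 1))
    return joint / p_outbreak()
```

The second function illustrates the explanatory use of Bayes nets mentioned in the abstract: after observing an outbreak, the network can be queried for the probability of its causes.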
Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.
Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza
2015-09-15
The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken under consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98. Overall, the results show the ability of the model to monitor the ocean parameters under conditions of missing data, or when regular measurement and monitoring are impossible. Copyright © 2015 Elsevier Ltd. All rights reserved.
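Wavelet-neural-network (WNN) models of the kind described typically decompose each input series with a discrete wavelet transform and feed the resulting approximation and detail sub-series to the network. A minimal one-level Haar decomposition, shown purely as an illustration of that preprocessing step:

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise averages (approximation)
    and pairwise half-differences (detail)."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2
              for i in range(len(signal) // 2)]
    return approx, detail

# The approximation carries the smoothed trend; the detail carries the
# high-frequency residual that a WNN would receive as a separate input.
approx, detail = haar_step([4.0, 2.0, 6.0, 6.0])
```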
2012-04-21
model with severe acidosis (pH 6.8), hyperkalemia (up to 10 meq/L), hypoglycemia, and hypoxia and reported that ECG electrical changes were not directly...hypoxia, hyperkalemia, and acidosis on intracellular and extracellular potentials and metabolism in the isolated porcine heart. Circ Res 46(5):634
Washington, Chad W; Derdeyn, Colin P; Dacey, Ralph G; Dhar, Rajat; Zipfel, Gregory J
2014-08-01
Studies using the Nationwide Inpatient Sample (NIS), a large ICD-9-based (International Classification of Diseases, Ninth Revision) administrative database, to analyze aneurysmal subarachnoid hemorrhage (SAH) have been limited by an inability to control for SAH severity and the use of unverified outcome measures. To address these limitations, the authors developed and validated a surrogate marker for SAH severity, the NIS-SAH Severity Score (NIS-SSS; akin to Hunt and Hess [HH] grade), and a dichotomous measure of SAH outcome, the NIS-SAH Outcome Measure (NIS-SOM; akin to modified Rankin Scale [mRS] score). Three separate and distinct patient cohorts were used to define and then validate the NIS-SSS and NIS-SOM. A cohort (n = 148,958, the "model population") derived from the 1998-2009 NIS was used for developing the NIS-SSS and NIS-SOM models. Diagnoses most likely reflective of SAH severity were entered into a regression model predicting poor outcome; model coefficients of significant factors were used to generate the NIS-SSS. Nationwide Inpatient Sample codes most likely to reflect a poor outcome (for example, discharge disposition, tracheostomy) were used to create the NIS-SOM. Data from 716 patients with SAH (the "validation population") treated at the authors' institution were used to validate the NIS-SSS and NIS-SOM against HH grade and mRS score, respectively. Lastly, 147,395 patients (the "assessment population") from the 1998-2009 NIS, independent of the model population, were used to assess performance of the NIS-SSS in predicting outcome. The ability of the NIS-SSS to predict outcome was compared with other common measures of disease severity (All Patient Refined Diagnosis Related Group [APR-DRG], All Payer Severity-adjusted DRG [APS-DRG], and DRG). The NIS-SSS significantly correlated with HH grade, and there was no statistical difference between the abilities of the NIS-SSS and HH grade to predict mRS-based outcomes.
As compared with the APR-DRG, APS-DRG, and DRG, the NIS-SSS was more accurate in predicting SAH outcome (area under the curve [AUC] = 0.69, 0.71, 0.71, and 0.79, respectively). A strong correlation between NIS-SOM and mRS was found, with an agreement and kappa statistic of 85% and 0.63, respectively, when poor outcome was defined by an mRS score > 2, and 95% and 0.84 when poor outcome was defined by an mRS score > 3. Data in this study indicate that in the analysis of NIS data sets, the NIS-SSS is a valid measure of SAH severity that outperforms previous measures of disease severity and that the NIS-SOM is a valid measure of SAH outcome. It is critically important that outcomes research in SAH using administrative data sets incorporate the NIS-SSS and NIS-SOM to adjust for neurology-specific disease severity.
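The NIS-SOM validation above reports raw agreement and a kappa statistic against the mRS. A sketch of Cohen's kappa on a hypothetical 2x2 agreement table (the counts are invented to give roughly the reported 85% agreement; they are not the study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(2)) / n
    p_exp = sum((sum(table[i]) / n) * (sum(row[i] for row in table) / n)
                for i in range(2))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical NIS-SOM (rows) vs. mRS-based outcome (columns) counts for
# 716 patients, constructed to give roughly 85% raw agreement.
table = [[400, 50],
         [57, 209]]
kappa = cohens_kappa(table)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside (and is lower than) the raw agreement percentage.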
Pastore, Francesco; Conson, Manuel; D'Avino, Vittoria; Palma, Giuseppe; Liuzzi, Raffaele; Solla, Raffaele; Farella, Antonio; Salvatore, Marco; Cella, Laura; Pacelli, Roberto
2016-01-01
Severe acute radiation-induced skin toxicity (RIST) after breast irradiation is a side effect impacting the quality of life in breast cancer (BC) patients. The aim of the present study was to develop normal tissue complication probability (NTCP) models of severe acute RIST in BC patients. We evaluated 140 consecutive BC patients undergoing conventional three-dimensional conformal radiotherapy (3D-CRT) after breast conserving surgery in a prospective study assessing acute RIST. The acute RIST was classified according to the RTOG scoring system. Dose-surface histograms (DSHs) of the body structure in the breast region were extracted as representative of skin irradiation. Patient, disease, and treatment-related characteristics were analyzed along with DSHs. NTCP modeling by Lyman-Kutcher-Burman (LKB) and by multivariate logistic regression using bootstrap resampling techniques was performed. Models were evaluated by Spearman's Rs coefficient and ROC area. By the end of radiotherapy, 139 (99%) patients developed any degree of acute RIST. G3 RIST was found in 11 of 140 (8%) patients. Mild-moderate (G1-G2) RIST was still present at 40 days after treatment in six (4%) patients. Using DSHs for LKB modeling of acute RIST severity (RTOG G3 vs. G0-2), parameter estimates were TD50=39 Gy, n=0.38 and m=0.14 [Rs = 0.25, area under the curve (AUC) = 0.77, p = 0.003]. On multivariate analysis, the most predictive model of acute RIST severity was a two-variable model including the skin receiving ≥30 Gy (S30) and psoriasis [Rs = 0.32, AUC = 0.84, p < 0.001]. Using body DSH as representative of skin dose, the LKB n parameter was consistent with a surface effect for the skin. A good prediction performance was obtained using a data-driven multivariate model including S30 and a pre-existing skin disease (psoriasis) as a clinical factor.
Earthquake Prediction is Coming
ERIC Educational Resources Information Center
MOSAIC, 1977
1977-01-01
Describes (1) several methods used in earthquake research, including P:S ratio velocity studies, dilatancy models; and (2) techniques for gathering base-line data for prediction using seismographs, tiltmeters, laser beams, magnetic field changes, folklore, animal behavior. The mysterious Palmdale (California) bulge is discussed. (CS)
Modeling of multi-strata forest fire severity using Landsat TM data
Q. Meng; R.K. Meentemeyer
2011-01-01
Most fire severity studies use field measures of the composite burn index (CBI) to represent forest fire severity and fit relationships between CBI and the Landsat-derived differenced normalized burn ratio (dNBR) to predict and map fire severity at unsampled locations. However, less attention has been paid to multi-strata forest fire severity, which...
Centrosome-Based Mechanisms, Prognostics and Therapeutics in Prostate Cancer
2006-12-01
progression of prostate carcinomas. The specific aims of the original proposal were designed to test several features of this model. 1. Are centrosome...features of this model. 1. Are centrosome defects present in early prostate cancer and can they predict aggressive disease? 2. Do pericentrin's...cells, supports this model. The ability to block the cell cycle in prostate cells by depletion of any of 14 centrosome proteins identifies several
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai
2018-03-01
We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.
Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad
2016-02-01
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
Assessment of turbulent models for scramjet flowfields
NASA Technical Reports Server (NTRS)
Sindir, M. M.; Harsha, P. T.
1982-01-01
The behavior of several turbulence models applied to the prediction of scramjet combustor flows is described. These models include the basic two-equation model, the multiple dissipation length scale variant of the two-equation model, and the algebraic stress model (ASM). Predictions were made of planar backward-facing step flows and axisymmetric sudden expansion flows using each of these approaches. The formulation of each of these models is discussed, and the application of the different approaches to supersonic flows is described. A modified version of the ASM is found to provide the best prediction of the planar backward-facing step flow in the region near the recirculation zone, while the basic ASM provides the best results downstream of the recirculation. Aspects of the interaction of numerical modeling and turbulence modeling as they affect the assessment of turbulence models are discussed.
Modeling and predicting intertidal variations of the salinity field in the Bay/Delta
Knowles, Noah; Uncles, Reginald J.
1995-01-01
One approach to simulating daily to monthly variability in the bay is the development of an intertidal model using tidally-averaged equations and a time step on the order of a day. An intertidal numerical model of the bay's physics, capable of portraying seasonal and inter-annual variability, would have several uses. Observations are limited in time and space, so simulation could help fill the gaps. Also, the ability to simulate multi-year episodes (e.g., an extended drought) could provide insight into the response of the ecosystem to such events. Finally, such a model could be used in a forecast mode wherein predicted delta flow is used as model input and the predicted salinity distribution is output, with estimates days and months in advance. This note briefly introduces such a tidally-averaged model (Uncles and Peterson, in press) and a corresponding predictive scheme for baywide forecasting.
A productivity model for parasitized, multibrooded songbirds
Powell, L.A.; Knutson, M.G.
2006-01-01
We present an enhancement of a simulation model to predict annual productivity for Wood Thrushes (Hylocichla mustelina) and American Redstarts (Setophaga ruticilla); the model includes effects of Brown-headed Cowbird (Molothrus ater) parasitism. We used species-specific data from the Driftless Area Ecoregion of Wisconsin, Minnesota, and Iowa to parameterize the model as a case study. The simulation model predicted annual productivity of 2.03 ± 1.60 SD for Wood Thrushes and 1.56 ± 1.31 SD for American Redstarts. Our sensitivity analysis showed that high parasitism lowered Wood Thrush annual productivity more than American Redstart productivity, even though parasitism affected individual nests of redstarts more severely. Annual productivity predictions are valuable for habitat managers, but productivity is not easily obtained from field studies. Our model provides a useful means of integrating complex life history parameters to predict productivity for songbirds that experience nest parasitism. © The Cooper Ornithological Society 2006.
Mammographic density, breast cancer risk and risk prediction
Vachon, Celine M; van Gils, Carla H; Sellers, Thomas A; Ghosh, Karthik; Pruthi, Sandhya; Brandt, Kathleen R; Pankratz, V Shane
2007-01-01
In this review, we examine the evidence for mammographic density as an independent risk factor for breast cancer, describe the risk prediction models that have incorporated density, and discuss the current and future implications of using mammographic density in clinical practice. Mammographic density is a consistent and strong risk factor for breast cancer in several populations and across age at mammogram. Recently, this risk factor has been added to existing breast cancer risk prediction models, increasing the discriminatory accuracy with its inclusion, albeit slightly. With validation, these models may replace the existing Gail model for clinical risk assessment. However, absolute risk estimates resulting from these improved models are still limited in their ability to characterize an individual's probability of developing cancer. Promising new measures of mammographic density, including volumetric density, which can be standardized using full-field digital mammography, will likely result in a stronger risk factor and improve accuracy of risk prediction models. PMID:18190724
Standard Model and new physics for ε'_K/ε_K
NASA Astrophysics Data System (ADS)
Kitahara, Teppei
2018-05-01
The first result of the lattice simulation and improved perturbative calculations have pointed to a discrepancy between data on ε'_K/ε_K and the standard-model (SM) prediction. Several new physics (NP) models can explain this discrepancy, and such NP models are likely to predict deviations of ℬ(K → πv
Adaptation of clinical prediction models for application in local settings.
Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M
2012-01-01
When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. To demonstrate how clinical information can direct updating a prediction model and development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) reestimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy to deal with missing predictor values at the time of risk calculation. Extensive knowledge of local, clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
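The simplest form of the model updating described above is recalibration: re-estimating an intercept and a single slope on the original model's linear predictor, which directly targets a calibration slope that has drifted from 1. A sketch on synthetic data (all values invented):

```python
import math
import random

def recalibrate(linear_predictors, outcomes, lr=0.1, epochs=3000):
    """Fit P(y=1) = sigmoid(a + b*lp) by gradient ascent on the log-likelihood,
    re-estimating only a calibration intercept a and slope b."""
    a, b = 0.0, 1.0  # start from the original model (no shift, slope 1)
    n = len(outcomes)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for lp, y in zip(linear_predictors, outcomes):
            p = 1.0 / (1.0 + math.exp(-(a + b * lp)))
            grad_a += (y - p) / n
            grad_b += (y - p) * lp / n
        a += lr * grad_a
        b += lr * grad_b
    return a, b

# Synthetic "new population" whose true calibration differs from the
# original model (intercept 0.5 and slope 0.6 instead of 0 and 1).
random.seed(0)
lps = [random.gauss(0.0, 1.5) for _ in range(400)]
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-(0.5 + 0.6 * lp))) else 0
      for lp in lps]
a, b = recalibrate(lps, ys)
```

More extensive updates (redefining a predictor, re-estimating individual coefficients, adding a predictor), as performed in the study, generalize this idea to the full coefficient vector.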
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S; Quon, H; McNutt, T
2016-06-15
Purpose: To determine if the accumulated parotid dosimetry using planning CT to daily CBCT deformation and dose re-calculation can predict for radiation-induced xerostomia. Methods: To track and dosimetrically account for the effects of anatomical changes on the parotid glands, we propagated physicians' contours from planning CT to daily CBCT using a deformable registration with iterative CBCT intensity correction. A surface mesh for each OAR was created with the deformation applied to the mesh to obtain the deformed parotid volumes. Daily dose was computed on the deformed CT and accumulated to the last fraction. For both the accumulated and the planned parotid dosimetry, we tested the prediction power of different dosimetric parameters including D90, D50, D10, mean, standard deviation, min/max dose to the combined parotids and patient age to severe xerostomia (NCI-CTCAE grade≥2 at 6 mo follow-up). We also tested the dosimetry to parotid sub-volumes. Three classification algorithms, random tree, support vector machine, and logistic regression, were tested to predict severe xerostomia using a leave-one-out validation approach. Results: We tested our prediction model on 35 HN IMRT cases. Parameters from the accumulated dosimetry model demonstrated an 89% accuracy for predicting severe xerostomia. Compared to the planning dosimetry, the accumulated dose consistently demonstrated higher prediction power with all three classification algorithms, including 11%, 5% and 30% higher accuracy, sensitivity and specificity, respectively. Geometric division of the combined parotid glands into superior-inferior regions demonstrated ∼5% increased accuracy than the whole volume. The most influential ranked features include age, mean accumulated dose of the submandibular glands and the accumulated D90 of the superior parotid glands.
Conclusion: We demonstrated that the accumulated parotid dosimetry using CT-CBCT registration and dose re-calculation more accurately predicts for severe xerostomia and that the superior portion of the parotid glands may be particularly important in predicting for severe xerostomia. This work was supported in part by NIH/NCI under grant R42CA137886 and in part by Toshiba big data research project funds.
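The prediction models above were evaluated with leave-one-out validation. A generic sketch of that procedure, using an invented one-feature threshold classifier in place of the study's random tree, SVM, and logistic regression models:

```python
def loo_accuracy(xs, ys, fit):
    """Leave-one-out validation: fit on n-1 cases, test on the held-out case."""
    hits = 0
    for i in range(len(ys)):
        predict = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        hits += predict(xs[i]) == ys[i]
    return hits / len(ys)

def threshold_fit(xs, ys):
    """Toy one-feature classifier: threshold halfway between class means."""
    mean1 = sum(x for x, y in zip(xs, ys) if y == 1) / max(1, sum(ys))
    mean0 = sum(x for x, y in zip(xs, ys) if y == 0) / max(1, len(ys) - sum(ys))
    thr = (mean0 + mean1) / 2
    return lambda x: int(x > thr) if mean1 > mean0 else int(x < thr)

# Invented example: a single dose-like feature separating two outcome groups
acc = loo_accuracy([20.0, 22.0, 25.0, 40.0, 42.0, 45.0], [0, 0, 0, 1, 1, 1],
                   threshold_fit)
```

With only 35 cases, as in the study, leave-one-out makes maximal use of the data at the cost of n model fits.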
Kondoh, Shun; Chiba, Hirofumi; Nishikiori, Hirotaka; Umeda, Yasuaki; Kuronuma, Koji; Otsuka, Mitsuo; Yamada, Gen; Ohnishi, Hirofumi; Mori, Mitsuru; Kondoh, Yasuhiro; Taniguchi, Hiroyuki; Homma, Sakae; Takahashi, Hiroki
2016-09-01
The clinical course of idiopathic pulmonary fibrosis (IPF) shows great inter-individual differences. It is important to standardize the severity classification to accurately evaluate each patient's prognosis. In Japan, an original severity classification (the Japanese disease severity classification, JSC) is used. In the United States, the new multidimensional index and staging system (the GAP model) has been proposed. The objective of this study was to evaluate the model performance for the prediction of mortality risk of the JSC and GAP models using a large cohort of Japanese patients with IPF. This is a retrospective cohort study including 326 patients with IPF in the Hokkaido prefecture from 2003 to 2007. We obtained the survival curves of each stage of the GAP and JSC models to perform a comparison. In the GAP model, the prognostic value for mortality risk of Japanese patients was also evaluated. In the JSC, patient prognoses were roughly divided into two groups, mild cases (Stages I and II) and severe cases (Stages III and IV). In the GAP model, there was no significant difference in survival between Stages II and III, and the mortality rates in the patients classified into the GAP Stages I and II were underestimated. It is difficult to predict accurate prognosis of IPF using the JSC and the GAP models. A re-examination of the variables from the two models is required, as well as an evaluation of the prognostic value to revise the severity classification for Japanese patients with IPF. Copyright © 2016 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique
2018-05-01
Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainties, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty comes from the rainfall forcing, and convective-scale meteorological ensemble prediction systems can manage it for forecasting purposes. But other sources are related to the hydrological modelling part of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulating fast-responding Mediterranean rivers, the ISBA-TOP coupled system. The first step consists in identifying the parameters that have the strongest influence on FF simulations by assuming perfect precipitation. A sensitivity study is carried out, first using a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as the initial soil moisture allow designing an ensemble-based version of ISBA-TOP. The first results of this system on some real events are presented. A direct extension of this work will be to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system to design a complete HEPS for FF forecasting.
Benchmarking novel approaches for modelling species range dynamics
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.
2016-01-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. 
We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305
van Eijkeren, Jan C H; Olie, J Daniël N; Bradberry, Sally M; Vale, J Allister; de Vries, Irma; Clewell, Harvey J; Meulenbelt, Jan; Hunault, Claudine C
2017-02-01
Kinetic models could potentially assist clinicians in managing cases of lead poisoning. Several models exist that can simulate lead kinetics, but none of them can predict the effect of chelation in lead poisoning. Our aim was to devise a model to predict the effect of succimer (dimercaptosuccinic acid; DMSA) chelation therapy on blood lead concentrations. We integrated a two-compartment kinetic succimer model into an existing PBPK lead model and produced a Chelation Lead Therapy (CLT) model. The accuracy of the model's predictions was assessed by simulating clinical observations in patients poisoned by lead and treated with succimer. The CLT model calculates blood lead concentrations as the sum of the background exposure and the acute or chronic lead poisoning. The latter was due either to ingestion of traditional remedies or occupational exposure to lead-polluted ambient air. The exposure duration was known. The blood lead concentrations predicted by the CLT model were compared to the measured blood lead concentrations. Pre-chelation blood lead concentrations ranged between 99 and 150 μg/dL. The model accurately simulated the blood lead concentrations during and after succimer treatment. The pattern of urine lead excretion was successfully predicted in some patients, but poorly predicted in others. Our model is able to predict blood lead concentrations after succimer therapy, at least, in situations where the duration of lead exposure is known.
Hoffart, Asle
2016-09-01
The purpose of this study was to test 2 cognitive models of panic disorder with agoraphobia (PDA)-a catastrophic cognitions model and a low self-efficacy model-by examining the within-person effects of model-derived cognitive variables on subsequent anxiety symptoms. Participants were 46 PDA patients with agoraphobic avoidance of moderate to severe degree who were randomly allocated to 6 weeks of either cognitive therapy, based on the catastrophic cognitions model of PDA, or guided mastery (guided exposure) therapy, based on the self-efficacy model of PDA. Cognitions and anxiety were measured weekly over the course of treatment. The data were analyzed with mixed models, using person-mean centering to disaggregate within- and between-person effects. All of the studied variables changed in the expected way over the course of therapy. There was a within-person effect of physical fears, loss of control fears, social fears, and self-efficacy when alone on subsequent state anxiety. On the other hand, within-person changes in anxiety did not predict subsequent cognitions. Loss of control and social fears both predicted subsequent self-efficacy, whereas self-efficacy did not predict catastrophic cognitions. In a multipredictor analysis, within-person catastrophic cognitions still predicted subsequent anxiety, but self-efficacy when alone did not. Overall, the findings indicate that anxiety in PDA, at least in severe and long-standing cases, is driven by catastrophic cognitions. Thus, these cognitions seem to be useful therapeutic targets. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Comparison of statistical models for analyzing wheat yield time series.
Michel, Lucie; Makowski, David
2013-01-01
The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.
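The Holt-Winters method mentioned above rests on exponential smoothing of a level and a trend component. As a hedged illustration only (the synthetic yields and the smoothing constants below are invented, not those fitted by the authors), the non-seasonal core of the method can be sketched as:

```python
# Minimal sketch of Holt's linear trend method (the non-seasonal core of
# Holt-Winters smoothing) for a yield time series. The smoothing constants
# and the synthetic yields are illustrative, not the paper's data.

def holt_forecast(y, alpha=0.5, beta=0.3, horizon=1):
    """One-pass double exponential smoothing; returns the h-step forecast."""
    level, trend = y[0], y[1] - y[0]          # simple initialization
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# Synthetic national wheat yields (t/ha) rising ~0.06 t/ha per year
yields = [6.0 + 0.06 * t for t in range(20)]
print(round(holt_forecast(yields, horizon=1), 3))
```

On a perfectly linear series like this one, the forecast simply extends the trend; on real yield series the smoothing constants trade off responsiveness against noise.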
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calcaterra, J.R.; Johnson, W.S.; Neu, R.W.
1997-12-31
Several methodologies have been developed to predict the lives of titanium matrix composites (TMCs) subjected to thermomechanical fatigue (TMF). This paper reviews and compares five life prediction models developed at NASA-LaRC and Wright Laboratories. Three of the models are based on a single parameter, the fiber stress in the load-carrying, or 0°, direction. The two other models, both developed at Wright Labs, are multi-parameter models. These can account for long-term damage, which is beyond the scope of the single-parameter models, but this benefit is offset by the additional complexity of the methodologies. Each of the methodologies was used to model data generated at NASA-LeRC, Wright Labs and Georgia Tech for the SCS-6/Timetal 21-S material system. VISCOPLY, a micromechanical stress analysis code, was used to determine the constituent stress state for each test and was used for each model to maintain consistency. The predictive capabilities of the models are compared, and the ability of each model to accurately predict the responses of tests dominated by differing damage mechanisms is addressed.
Exploring Human Diseases and Biological Mechanisms by Protein Structure Prediction and Modeling.
Wang, Juexin; Luttrell, Joseph; Zhang, Ning; Khan, Saad; Shi, NianQing; Wang, Michael X; Kang, Jing-Qiong; Wang, Zheng; Xu, Dong
2016-01-01
Protein structure prediction and modeling provide a tool for understanding protein functions by computationally constructing protein structures from amino acid sequences and analyzing them. With help from protein prediction tools and web servers, users can obtain three-dimensional protein structure models and gain knowledge of the proteins' functions. In this chapter, we will provide several examples of such studies. For example, structure modeling methods were used to investigate the relation between mutation-caused protein misfolding and human diseases including epilepsy and leukemia. Protein structure prediction and modeling were also applied to nucleotide-gated channels and their interaction interfaces to investigate their roles in brain and heart cells. In molecular mechanism studies of plants, the rice salinity tolerance mechanism was studied via structure modeling of crucial proteins identified by systems biology analysis; trait-associated protein-protein interactions were modeled, which sheds some light on the roles of mutations in soybean oil/protein content. In the age of precision medicine, we believe protein structure prediction and modeling will play increasingly important roles in investigating the biomedical mechanisms of diseases and in drug design.
Eric J. Gustafson
2014-01-01
Regression models developed in the upper Midwest (United States) to predict drought-induced tree mortality from measures of drought (Palmer Drought Severity Index) were tested in the northeastern United States and found inadequate. The most likely cause of this result is that long drought events were rare in the Northeast during the period when inventory data were...
A Compatible Hardware/Software Reliability Prediction Model.
1981-07-22
machines. In particular, he was interested in the following problem: assume that one has a collection of connected elements computing and transmitting... software reliability prediction model is desirable, the findings about the Weibull distribution are intriguing. After collecting failure data from several... capacitor, some of the added charge carriers are collected by the capacitor. If the added charge is sufficiently large, the information stored is changed
Marschollek, M; Nemitz, G; Gietzelt, M; Wolf, K H; Meyer Zu Schwabedissen, H; Haux, R
2009-08-01
Falls are among the predominant causes for morbidity and mortality in elderly persons and occur most often in geriatric clinics. Despite several studies that have identified parameters associated with elderly patients' fall risk, prediction models -- e.g., based on geriatric assessment data -- are currently not used on a regular basis. Furthermore, technical aids to objectively assess mobility-associated parameters are currently not used. To assess group differences in clinical as well as common geriatric assessment data and sensory gait measurements between fallers and non-fallers in a geriatric sample, and to derive and compare two prediction models based on assessment data alone (model #1) and added sensory measurement data (model #2). For a sample of n=110 geriatric in-patients (81 women, 29 men) the following fall risk-associated assessments were performed: Timed 'Up & Go' (TUG) test, STRATIFY score and Barthel index. During the TUG test the subjects wore a triaxial accelerometer, and sensory gait parameters were extracted from the data recorded. Group differences between fallers (n=26) and non-fallers (n=84) were compared using Student's t-test. Two classification tree prediction models were computed and compared. Significant differences between the two groups were found for the following parameters: time to complete the TUG test, transfer item (Barthel), recent falls (STRATIFY), pelvic sway while walking and step length. Prediction model #1 (using common assessment data only) showed a sensitivity of 38.5% and a specificity of 97.6%, prediction model #2 (assessment data plus sensory gait parameters) performed with 57.7% and 100%, respectively. Significant differences between fallers and non-fallers among geriatric in-patients can be detected for several assessment subscores as well as parameters recorded by simple accelerometric measurements during a common mobility test. Existing geriatric assessment data may be used for falls prediction on a regular basis. 
Adding sensory data improves the sensitivity of our test markedly.
Kravez, Eli; Villiger, Martin; Bouma, Brett; Yarmush, Martin; Yakhini, Zohar; Golberg, Alexander
2017-01-01
Hypertrophic scars remain a major clinical problem in the rehabilitation of burn survivors and lead to physical, aesthetic, functional, psychological, and social stresses. Prediction of healing outcome and scar formation is critical for deciding on the best treatment plan. Both subjective and objective scales have been devised to assess scar severity. Whereas scales of the first type preclude cross-comparison between observers, those of the second type are based on imaging modalities that either lack the ability to image individual layers of the scar or only provide very limited fields of view. To overcome these deficiencies, this work aimed at developing a predictive model of scar formation based on polarization sensitive optical frequency domain imaging (PS-OFDI), which offers comprehensive subsurface imaging. We report on a linear regression model that predicts the size of a scar 6 months after third-degree burn injuries in rats based on early post-injury PS-OFDI and measurements of scar area. When predicting the scar area at month 6 based on the homogeneity and the degree of polarization (DOP), which are signatures derived from the PS-OFDI signal, together with the scar area measured at months 2 and 3, we achieved predictions with a Pearson coefficient of 0.57 (p < 10⁻⁴) and a Spearman coefficient of 0.66 (p < 10⁻⁵), which were significant in comparison to prediction models trained on randomly shuffled data. As the model in this study was developed on the rat burn model, the methodology can be used in larger studies that are more relevant to humans; however, the actual model inferred herein is not translatable. Nevertheless, our analysis and modeling methodology can be extended to perform larger wound healing studies in different contexts. This study opens new possibilities for quantitative and objective assessment of scar severity that could help to determine the optimal course of therapy. PMID:29249978
[How exactly can we predict the prognosis of COPD].
Atiş, Sibel; Kanik, Arzu; Ozgür, Eylem Sercan; Eker, Suzan; Tümkaya, Münir; Ozge, Cengiz
2009-01-01
Predictive models play a pivotal role in the provision of accurate and useful probabilistic assessments of clinical outcomes in chronic diseases. This study aimed to develop a dedicated prognostic index for quantifying progression risk in chronic obstructive pulmonary disease (COPD). Data were collected prospectively from 75 COPD patients over a three-year period. A predictive model of COPD progression risk was developed using Bayesian logistic regression analysis with the Markov chain Monte Carlo method. One-year cycles were used for disease progression in this model. Primary end points for progression were impairment in baseline dyspnea index (BDI) score, FEV(1) decline, and exacerbation frequency in the last three years. The time-varying covariates age, smoking, body mass index (BMI), disease severity according to GOLD, PaO2, PaCO2, IC, RV/TLC, and DLCO were used in the study. The mean age was 57.1 ± 8.1 years. BDI was strongly correlated with exacerbation frequency (p= 0.001) but not with FEV(1) decline. BMI was found to be a predictive factor for impairment in BDI (p= 0.03). The following independent risk factors were significant predictors of exacerbation frequency: GOLD staging (OR for GOLD I vs. II and III = 2.3 and 4.0), hypoxemia (OR for mild vs. moderate and severe = 2.1 and 5.1) and hyperinflation (OR= 1.6). PaO2 (p= 0.026), IC (p= 0.02) and RV/TLC (p= 0.03) were found to be predictive factors for FEV(1) decline. The model estimated BDI, lung function and exacerbation frequency at the last time point by testing against the initial three years of data with 95% reliability (p< 0.001). Accordingly, this model was judged to predict the future status of COPD patients with 95% confidence. Using Bayesian predictive models, it was possible to develop a risk-stratification index that accurately predicted the progression of COPD. This model can support decision-making about the future course of COPD patients with high reliability based on clinical data at presentation.
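As a rough illustration of Bayesian logistic regression fitted by Markov chain Monte Carlo, as in the study above, the sketch below samples a toy two-parameter logistic model with a random-walk Metropolis sampler. The data, prior, and tuning constants are invented assumptions, not the study's model:

```python
# Toy Bayesian logistic regression fitted with a random-walk Metropolis
# sampler, in the spirit of the MCMC approach described above. The data,
# prior, and tuning constants are illustrative assumptions.
import math, random

x = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]   # e.g. a risk covariate
y = [0, 0, 0, 0, 1, 1, 1, 1]                        # progression indicator

def log_post(b0, b1):
    """Log posterior: Bernoulli likelihood plus N(0, 5^2) priors."""
    lp = -(b0**2 + b1**2) / (2 * 5.0**2)
    for xi, yi in zip(x, y):
        z = b0 + b1 * xi
        lp += yi * z - math.log(1.0 + math.exp(z))
    return lp

random.seed(0)
b0, b1, samples = 0.0, 0.0, []
for step in range(5000):
    cand0 = b0 + random.gauss(0, 0.5)               # random-walk proposal
    cand1 = b1 + random.gauss(0, 0.5)
    if math.log(random.random()) < log_post(cand0, cand1) - log_post(b0, b1):
        b0, b1 = cand0, cand1                       # accept the move
    if step >= 1000:                                # discard burn-in
        samples.append(b1)

slope_mean = sum(samples) / len(samples)
print(slope_mean > 0)   # the covariate should raise the odds of progression
```

In a real analysis one would monitor convergence and summarize the posterior with credible intervals rather than a single mean.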
DroSpeGe: rapid access database for new Drosophila species genomes.
Gilbert, Donald G
2007-01-01
The Drosophila species comparative genome database DroSpeGe (http://insects.eugenes.org/DroSpeGe/) provides genome researchers with rapid, usable access to 12 new and old Drosophila genomes, since its inception in 2004. Scientists can use, with minimal computing expertise, the wealth of new genome information for developing new insights into insect evolution. New genome assemblies provided by several sequencing centers have been annotated with known model organism gene homologies and gene predictions to provide basic comparative data. TeraGrid supplies the shared cyberinfrastructure for the primary computations. This genome database includes homologies to Drosophila melanogaster and eight other eukaryote model genomes, and gene predictions from several groups. BLAST searches of the newest assemblies are integrated with genome maps. GBrowse maps provide detailed views of cross-species aligned genomes. BioMart provides for data mining of annotations and sequences. Common chromosome maps identify major synteny among species. Potential gain and loss of genes is suggested by Gene Ontology groupings for genes of the new species. Summaries of essential genome statistics include sizes, genes found and predicted, homology among genomes, phylogenetic trees of species and comparisons of several gene predictions for sensitivity and specificity in finding new and known genes.
Sociophysics: A Review of Galam Models
NASA Astrophysics Data System (ADS)
Galam, Serge
We review a series of models of sociophysics introduced by Galam and Galam et al. in the last 25 years. The models are divided into five different classes, which deal respectively with democratic voting in bottom-up hierarchical systems, decision making, fragmentation versus coalitions, terrorism and opinion dynamics. For each class, the connection to the original physical model and techniques is outlined, highlighting both the similarities and the differences. Emphasis is put on the numerous novel and counterintuitive results obtained with respect to the associated social and political framework. Using these models, several major real political events were successfully predicted, including the victory of the French extreme-right party in the 2000 first round of French presidential elections, the fifty-fifty votes in several democratic countries (Germany, Italy, Mexico), and the victory of the "no" in the 2005 French referendum on the European constitution. The perspectives and challenges involved in making sociophysics a solid predictive field of science are discussed.
Bi-objective integer programming for RNA secondary structure prediction with pseudoknots.
Legendre, Audrey; Angel, Eric; Tahi, Fariza
2018-01-15
RNA structure prediction is an important field in bioinformatics, and numerous methods and tools have been proposed. Pseudoknots are specific motifs of RNA secondary structures that are difficult to predict. Almost all existing methods are based on a single model and return one solution, often missing the real structure. An alternative approach is to combine different models and return a (small) set of solutions, maximizing its quality and diversity in order to increase the probability that it contains the real structure. We propose here an original method for predicting RNA secondary structures with pseudoknots, based on integer programming. We developed a generic bi-objective integer programming algorithm that returns optimal and sub-optimal solutions optimizing two models simultaneously. This algorithm was then applied to the combination of two known models of RNA secondary structure prediction, namely MEA and MFE. The resulting tool, called BiokoP, is compared with the other methods in the literature. The results show that the best solution (the structure with the highest F1-score) is, in most cases, given by BiokoP. Moreover, the results of BiokoP are homogeneous, regardless of the pseudoknot type or the presence or absence of pseudoknots. Indeed, the F1-scores are always higher than 70% for any number of solutions returned. The results obtained by BiokoP show that combining the MEA and MFE models, as well as returning several optimal and several sub-optimal solutions, improves the prediction of secondary structures. One perspective of our work is to combine better single-criterion models, in particular a model based on the comparative approach with the MEA and MFE models. This motivates the future development of a new multi-objective algorithm to combine more than two models. BiokoP is available on the EvryRNA platform: https://EvryRNA.ibisc.univ-evry.fr .
Can We Predict Patient Wait Time?
Pianykh, Oleg S; Rosenthal, Daniel I
2015-10-01
The importance of patient wait-time management and predictability can hardly be overestimated: For most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen as derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work. Moreover, these models proved to be equally accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on the monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions for patient wait time. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
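The line-size models introduced in the article above regress wait time on the state of the queue. A minimal, hypothetical sketch of that idea (the service-time figures below are invented, and this is not the authors' actual model) could look like:

```python
# Minimal sketch of a queue-length ("line-size") wait-time regression:
# ordinary least squares of observed wait (minutes) on the number of
# patients already waiting. The data are invented for illustration.

def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

queue_sizes = [0, 1, 2, 3, 4, 5, 6]
waits_min   = [5, 12, 19, 26, 33, 40, 47]     # ~7 min of service per patient

intercept, slope = fit_line(queue_sizes, waits_min)
predicted = intercept + slope * 8             # 8 people ahead of you
print(round(predicted))                       # -> 61
```

A model this simple is cheap to evaluate at the front desk in real time, which matches the article's emphasis on computational efficiency.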
Qian, Weiguo; Tang, Yunjia; Yan, Wenhua; Sun, Ling; Lv, Haitao
2018-03-09
Kawasaki disease (KD) is the most common pediatric vasculitis. Several models have been established to predict intravenous immunoglobulin (IVIG) resistance. The present study aimed to evaluate the efficacy of prediction models using the medical data of KD patients. We collected the medical records of patients hospitalized in the Department of Cardiology in Children's Hospital of Soochow University with a diagnosis of KD from Jan 2015 to Dec 2016. IVIG resistance was defined as recrudescent or persistent fever ≥36 h after the end of the IVIG infusion. Patients with IVIG resistance tended to be younger and to have a higher occurrence of rash and changes of the extremities. They had higher levels of C-reactive protein, aspartate aminotransferase, neutrophil proportion (N%) and total bilirubin, and a lower level of albumin. Our prediction model had a sensitivity of 0.72 and a specificity of 0.75. The sensitivities of the Kobayashi, Egami, Kawamura, Sano and Formosa models were 0.72, 0.44, 0.48, 0.20, and 0.68, respectively; their specificities were 0.62, 0.82, 0.66, 0.91, and 0.48, respectively. Our prediction model had the most powerful predictive value in this setting, followed by the Kobayashi model, while all the other prediction models performed less well.
Berghuis, Han; Kamphuis, Jan H; Verheul, Roel
2014-01-01
This study examined the associations of specific personality traits and general personality dysfunction in relation to the presence and severity of Diagnostic and Statistical Manual of Mental Disorders (4th ed. [DSM-IV]; American Psychiatric Association, 1994) personality disorders in a Dutch clinical sample. Two widely used measures of specific personality traits were selected, the Revised NEO Personality Inventory as a measure of normal personality traits, and the Dimensional Assessment of Personality Pathology-Basic Questionnaire as a measure of pathological traits. In addition, 2 promising measures of personality dysfunction were selected, the General Assessment of Personality Disorder and the Severity Indices of Personality Problems. Theoretically predicted associations were found between the measures, and all measures predicted the presence and severity of DSM-IV personality disorders. The combination of general personality dysfunction models and personality traits models provided incremental information about the presence and severity of personality disorders, suggesting that an integrative approach of multiple perspectives might serve comprehensive assessment of personality disorders.
Alcohol use among university students: Considering a positive deviance approach.
Tucker, Maryanne; Harris, Gregory E
2016-09-01
Harmful alcohol consumption among university students continues to be a significant issue. This study examined whether variables identified in the positive deviance literature would predict responsible alcohol consumption among university students. Surveyed students were categorized into three groups: abstainers, responsible drinkers and binge drinkers. Multinomial logistic regression modelling was significant (χ² = 274.49, df = 24, p < .001), with several variables predicting group membership. While the model classification accuracy rate (i.e. 71.2%) exceeded the proportional by chance accuracy rate (i.e. 38.4%), providing further support for the model, the model best predicted binge-drinker membership over the other two groups. © The Author(s) 2015.
Characterizing attention with predictive network models
Rosenberg, M. D.; Finn, E. S.; Scheinost, D.; Constable, R. T.; Chun, M. M.
2017-01-01
Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals’ attentional abilities. Some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that (1) attention is a network property of brain computation, (2) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task, and (3) this architecture supports a general attentional ability common to several lab-based tasks and impaired in attention deficit hyperactivity disorder. Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. PMID:28238605
León-Roque, Noemí; Abderrahim, Mohamed; Nuñez-Alejos, Luis; Arribas, Silvia M; Condezo-Hoyos, Luis
2016-12-01
Several procedures are currently used to assess the fermentation index (FI) of cocoa beans (Theobroma cacao L.) for quality control. However, all of them present several drawbacks. The aim of the present work was to develop and validate a simple image-based quantitative procedure using color measurement and artificial neural networks (ANNs). ANN models based on color measurements were tested to predict the FI of fermented cocoa beans. The RGB values were measured from the surface and center regions of fermented beans in images obtained by camera and desktop scanner. The FI was defined as the ratio of total free amino acids in fermented versus non-fermented samples. The ANN model that included the RGB color measurement of the fermented cocoa surface and the R/G ratio of alkaline extracts of the cocoa beans was able to predict FI with no statistical difference compared with the experimental values. Performance of the ANN model was evaluated by the coefficient of determination, Bland-Altman plot and Passing-Bablok regression analyses. Moreover, in fermented beans, total sugar content and titratable acidity showed a similar pattern to the total free amino acids predicted through the color-based ANN model. The results of the present work demonstrate that the proposed ANN model can be adopted as a low-cost and in situ procedure to predict FI in fermented cocoa beans through apps developed for mobile devices. Copyright © 2016 Elsevier B.V. All rights reserved.
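As a hedged sketch of the kind of color-to-FI mapping described above (the architecture, training data, and learning rate are invented toy assumptions, not the authors' validated model), a one-hidden-layer regression network can be trained by plain gradient descent:

```python
# Toy one-hidden-layer regression network mapping normalized RGB features
# to a fermentation-index value, trained by plain gradient descent. The
# architecture, data, and learning rate are illustrative assumptions only.
import math, random

random.seed(1)
# (R, G, B) scaled to [0, 1] -> FI target; darker beans ~ more fermented
data = [((0.9, 0.7, 0.5), 0.2), ((0.7, 0.5, 0.4), 0.5),
        ((0.5, 0.35, 0.3), 0.8), ((0.4, 0.3, 0.25), 0.9)]

H = 4                                            # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return h, sum(w * hi for w, hi in zip(w2, h)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
lr = 0.1
for _ in range(500):                             # stochastic gradient descent
    for x, t in data:
        h, out = forward(x)
        err = out - t                            # gradient of squared error
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            for i in range(3):
                w1[j][i] -= lr * grad_h * x[i]
        b2 -= lr * err
print(mse() < loss_before)   # training reduces the fit error
```

A production model would of course be trained and validated on measured bean images rather than four hand-made points.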
Dunlop, Boadie W; Hill, Eric; Johnson, Benjamin N; Klein, Daniel N; Gelenberg, Alan J; Rothbaum, Barbara O; Thase, Michael E; Kocsis, James H
2015-03-01
Sexual dysfunction is common among depressed adults. Childhood sexual abuse (CSA) and depressive symptomology are among the risk factors for sexual dysfunction, and these factors may interact to predict adult relationship functioning. Several models have been developed postulating interactions between these variables. We tested models of the effects of CSA and elucidated the associations between CSA, sexual dysfunction, depression severity, anxiety, and relationship quality in chronically depressed adults. Baseline data from 808 chronically depressed outpatients enrolled in the Research Evaluating the Value of Augmenting Medication with Psychotherapy study were evaluated using structural equation modeling. The Inventory of Depressive Symptomology, self-report version (IDS-SR) assessed depression severity, and the Mood and Anxiety Symptom Questionnaire Anxious Arousal subscale assessed anxiety. Sexual function was assessed with the Arizona Sexual Experiences Scale (ASEX), and the Quality of Marriage Index (QMI) assessed relationship quality for patients in stable relationships. CSA scores predicted depression severity on the IDS-SR, as well as lower relationship quality and sexual satisfaction. ASEX scores were significantly associated with depression severity but were not correlated with the QMI. Two models were evaluated to elucidate these associations, revealing that (i) depression severity and anxious arousal mediated the relationship between CSA and adult sexual function, (ii) anxious arousal and sexual functioning mediated the association between CSA and depression symptoms, and (iii) when these models were combined, anxious arousal emerged as the most important mediator of CSA on depression which, in turn, mediated associations with adult sexual satisfaction and relationship quality. Although CSA predicts lower relationship and sexual satisfaction among depressed adults, the long-term effects of CSA appear to be mediated by depressive and anxious symptoms. 
It is important to address depression and anxiety symptoms when treating patients with CSA who present with sexual dysfunction or marital concerns. © 2014 International Society for Sexual Medicine.
NASA Technical Reports Server (NTRS)
Schonberg, William P.; Mohamed, Essam
1997-01-01
This report presents the results of a study whose objective was to develop first-principles-based models of hole size and maximum tip-to-tip crack length for a spacecraft module pressure wall that has been perforated in an orbital debris particle impact. The hole size and crack length models are developed by sequentially characterizing the phenomena comprising the orbital debris impact event, including the initial impact, the creation and motion of a debris cloud within the dual-wall system, the impact of the debris cloud on the pressure wall, the deformation of the pressure wall due to debris cloud impact loading prior to crack formation, pressure wall crack initiation, propagation, and arrest, and finally pressure wall deformation following crack initiation and growth. The model development has been accomplished through the application of elementary shock physics and thermodynamic theory, as well as the principles of mass, momentum, and energy conservation. The predictions of the model developed herein are compared against the predictions of empirically-based equations for hole diameters and maximum tip-to-tip crack length for three International Space Station wall configurations. The ISS wall systems considered are the baseline U.S. Lab Cylinder, the enhanced U.S. Lab Cylinder, and the U.S. Lab Endcone. The empirical predictor equations were derived from experimentally obtained hole diameters and crack length data. The original model predictions did not compare favorably with the experimental data, especially for cases in which pressure wall petalling did not occur. Several modifications were made to the original model to bring its predictions closer in line with the experimental results. Following the adjustment of several empirical constants, the predictions of the modified analytical model were in much closer agreement with the experimental results.
Prediction of Disease Case Severity Level To Determine INA CBGs Rate
NASA Astrophysics Data System (ADS)
Puspitorini, Sukma; Kusumadewi, Sri; Rosita, Linda
2017-03-01
Indonesian Case-Based Groups (INA CBGs) is a case-mix payment system that uses a software grouper application. An INA CBGs code consists of four digits, with the last digit indicating the severity level of the disease case. The severity level is influenced by secondary diagnoses (complications and co-morbidities) related to the resource intensity level, i.e. the medical resources used to treat a hospitalized patient. The objective of this research is to develop a decision support system to predict the severity level of disease cases and illustrate the INA CBGs rate using a data mining decision tree classification model. The primary diagnosis (DU), first secondary diagnosis (DS 1), and second secondary diagnosis (DS 2) are the attributes used as inputs for the severity level. The training process uses the C4.5 algorithm, and the rules are represented in IF-THEN form. The credibility of the system was analyzed through a testing process, with the results presented in a confusion matrix. The outcome of this research shows that the first secondary diagnosis contributes significantly to the rules for predicting the severity level of new disease cases and to the illustration of the INA CBGs rate.
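The C4.5 algorithm named above chooses splits by information gain over entropy. The toy sketch below (the diagnosis codes and records are invented, not the study's data) illustrates how the first secondary diagnosis could win the top split and yield IF-THEN rules:

```python
# Sketch of the entropy/information-gain test at the heart of C4.5,
# applied to toy (DU, DS 1, DS 2) -> severity records. The records are
# invented; they only illustrate how DS 1 can win the first split.
import math
from collections import Counter

records = [
    {"DU": "I10", "DS1": "E11", "DS2": "-",   "sev": "III"},
    {"DU": "I10", "DS1": "-",   "DS2": "N18", "sev": "I"},
    {"DU": "J18", "DS1": "E11", "DS2": "-",   "sev": "III"},
    {"DU": "J18", "DS1": "-",   "DS2": "-",   "sev": "I"},
    {"DU": "A09", "DS1": "E11", "DS2": "N18", "sev": "III"},
    {"DU": "A09", "DS1": "-",   "DS2": "N18", "sev": "I"},
]

def entropy(rows):
    counts = Counter(r["sev"] for r in rows)
    total = len(rows)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def info_gain(rows, attr):
    total = len(rows)
    split = Counter(r[attr] for r in rows)
    remainder = sum(
        n / total * entropy([r for r in rows if r[attr] == v])
        for v, n in split.items())
    return entropy(rows) - remainder

gains = {a: info_gain(records, a) for a in ("DU", "DS1", "DS2")}
best = max(gains, key=gains.get)
print(best)             # DS1 separates severity levels perfectly here
for value in sorted({r["DS1"] for r in records}):
    sevs = {r["sev"] for r in records if r["DS1"] == value}
    print(f"IF DS1 = {value} THEN severity = {sorted(sevs)}")
```

Full C4.5 also normalizes by split information (gain ratio) and prunes the tree; this sketch only shows the attribute-selection step.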
NASA Astrophysics Data System (ADS)
Xiong, H.; Hamila, N.; Boisse, P.
2017-10-01
Pre-impregnated thermoplastic composites have recently attracted increasing interest in the automotive industry for their excellent mechanical properties and rapid manufacturing cycle. Modelling and numerical simulation of forming processes for composite parts with complex geometry are necessary to predict and optimize manufacturing practices, especially consolidation effects. A viscoelastic relaxation model is proposed to characterize the consolidation behavior of thermoplastic prepregs, based on compaction tests over a range of temperatures. An intimate-contact model is employed to predict the evolution of consolidation, which permits prediction of the void microstructure within the prepreg. Within a hyperelastic framework, several simulation tests are carried out by combining a newly developed solid-shell finite element with the consolidation models.
Targeting Forest Management through Fire and Erosion Modeling
NASA Astrophysics Data System (ADS)
Elliot, William J.; Miller, Mary Ellen; MacDonald, Lee H.
2013-04-01
Forests deliver a number of ecosystem services, including clean water. When forests are disturbed by wildfire, the timing and quantity of runoff can be altered, and the quality can be severely degraded. A modeling study for about 1500 km2 in the Upper Mokelumne River Watershed in California was conducted to determine the risk of wildfire and the associated potential sediment delivery should a wildfire occur, and to calculate the potential reduction in sediment delivery that might result from fuel reduction treatments. The first step was to predict wildfire severity and probability of occurrence under current vegetation conditions with FlamMap fire prediction tool. FlamMap uses current vegetation, topography, and wind characteristics to predict the speed, flame length, and direction of a simulated flame front for each 30-m pixel. As the first step in the erosion modeling, a geospatial interface for the WEPP model (GeoWEPP) was used to delineate approximately 6-ha hillslope polygons for the study area. The flame length values from FlamMap were then aggregated for each hillslope polygon to yield a predicted fire intensity. Fire intensity and pre-fire vegetation conditions were used to estimate fire severity (either unburned, low, moderate or high). The fire severity was combined with soil properties from the STATSGO database to build the vegetation and soil files needed to run WEPP for each polygon. Eight different stochastic climates were generated to account for the weather variability within the basin. A modified batching version of GeoWEPP was used to predict the first-year post-fire sediment yield from each hillslope and subwatershed. Estimated sediment yields ranged from 0 to more than 100 Mg/ha, and were typical of observed values. The polygons that generated the greatest amount of sediment or that were critical for reducing fire spread were identified, and these were "treated" by reducing the amount of fuel available for a wildfire. 
The erosion associated with these fuel treatments was estimated using WEPP. FlamMap and WEPP were run a second time to determine the extent to which the imposed treatments reduced fire intensity, fire severity, and the predicted sediment yields. The results allowed managers to quantify the net reduction in sediment delivery due to the prescribed treatments. The modeling also identified those polygons with the greatest net decline in sediment delivery, with the expectation that these polygons would have the highest priority for fuel reduction treatments. An economic value can be assigned to the predicted net change in sediment delivered to a reservoir or a specified decline in water quality. The estimated avoided costs due to the reduction in sediment delivery can help justify the optimized fuel treatments.
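The aggregation-and-classification step in the workflow above (FlamMap flame lengths aggregated per hillslope polygon, then binned into a burn-severity class) can be sketched as follows. The flame-length thresholds and the simple mean aggregation are hypothetical placeholders, not the values or rules used in the study.

```python
# Sketch of the severity-classification step: aggregate per-polygon flame
# lengths from FlamMap pixels and bin them into burn-severity classes.
# Thresholds (metres) are illustrative, not the study's calibration.
def fire_severity(mean_flame_length_m, burnable=True):
    """Map a hillslope polygon's mean flame length to a severity class."""
    if not burnable or mean_flame_length_m <= 0:
        return "unburned"
    if mean_flame_length_m < 1.2:
        return "low"
    if mean_flame_length_m < 2.4:
        return "moderate"
    return "high"

# Aggregate 30-m pixel values for one ~6-ha polygon, then classify
pixels = [0.8, 1.5, 2.9, 3.1]        # flame lengths for four pixels
mean_fl = sum(pixels) / len(pixels)  # 2.075 m
print(fire_severity(mean_fl))        # -> moderate
```

The resulting class per polygon would then select the vegetation and soil files used to run WEPP.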
Charlson, Fiona J; Steel, Zachary; Degenhardt, Louisa; Chey, Tien; Silove, Derrick; Marnane, Claire; Whiteford, Harvey A
2012-01-01
Mental disorders are likely to be elevated in the Libyan population during the post-conflict period. We estimated cases of severe PTSD and depression and related health service requirements using modelling from existing epidemiological data and current recommended mental health service targets in low and middle income countries (LMICs). Post-conflict prevalence estimates were derived from models based on a previously conducted systematic review and meta-regression analysis of mental health among populations living in conflict. Political terror ratings and intensity of exposure to traumatic events were used in predictive models. Prevalence of severe cases was applied to chosen populations along with uncertainty ranges. Six populations deemed to be affected by the conflict were chosen for modelling: Misrata (population of 444,812), Benghazi (pop. 674,094), Zintan (pop. 40,000), displaced people within Tripoli/Zlitan (pop. 49,000), displaced people within Misrata (pop. 25,000) and Ras Jdir camps (pop. 3,700). Proposed targets for service coverage, resource utilisation and full-time equivalent staffing for management of severe cases of major depression and post-traumatic stress disorder (PTSD) are based on a published model for LMICs. Severe PTSD prevalence in populations exposed to a high level of political terror and traumatic events was estimated at 12.4% (95%CI 8.5-16.7), and severe depression at 19.8% (95%CI 14.0-26.3). Across all six populations (total population 1,236,600), the conflict could be associated with 123,200 (71,600-182,400) cases of severe PTSD and 228,100 (134,000-344,200) cases of severe depression; 50% of PTSD cases were estimated to co-occur with severe depression. Based upon service coverage targets, approximately 154 full-time equivalent staff would be required to respond sufficiently to these cases, which is substantially below the current level of resource estimates for these regions.
This is the first attempt to predict the mental health burden and consequent service response needs of such a conflict, and is crucially timed for Libya.
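The case estimates above reduce to simple arithmetic: prevalence applied to population size, with the confidence-interval bounds giving the uncertainty range. A minimal sketch, applying the reported severe-PTSD prevalence to one of the six populations (the abstract's totals aggregate all six):

```python
# Arithmetic behind the case estimates: cases = prevalence * population,
# with 95% CI bounds giving the uncertainty range.
def estimated_cases(population, prevalence, ci_low, ci_high):
    point = round(population * prevalence)
    return point, round(population * ci_low), round(population * ci_high)

# Benghazi (pop. 674,094) with the reported severe-PTSD prevalence of
# 12.4% (95% CI 8.5-16.7)
point, low, high = estimated_cases(674094, 0.124, 0.085, 0.167)
print(point, low, high)  # -> 83588 57298 112574
```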
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states.
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
Stata Modules for Calculating Novel Predictive Performance Indices for Logistic Models.
Barkhordari, Mahnaz; Padyab, Mojgan; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza
2016-01-01
Prediction is a fundamental part of the prevention of cardiovascular diseases (CVD). The development of prediction algorithms based on multivariate regression models began several decades ago. In parallel with predictive model development, biomarker research emerged on an impressively large scale. The key question is how best to assess and quantify the improvement in risk prediction offered by new biomarkers, or more basically, how to assess the performance of a risk prediction model. Discrimination, calibration, and added predictive value have recently been suggested for comparing the predictive performance of models with and without novel biomarkers. A lack of user-friendly statistical software has restricted implementation of novel model assessment methods when examining novel biomarkers. We intended, thus, to develop user-friendly software that could be used by researchers with few programming skills. We have written a Stata command that is intended to help researchers obtain cut point-free and cut point-based net reclassification improvement indices (NRI) and relative and absolute integrated discrimination improvement indices (IDI) for logistic-based regression analyses. We applied the command to real data on women participating in the Tehran Lipid and Glucose Study (TLGS) to examine whether information on family history of premature CVD, waist circumference, and fasting plasma glucose can improve the predictive performance of the Framingham "general CVD risk" algorithm. The command is addpred for logistic regression models. The Stata package provided herein can encourage the use of novel methods in examining the predictive capacity of the ever-emerging plethora of novel biomarkers.
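The two indices the command reports can be computed directly from the predicted event probabilities of a baseline and an extended model. A minimal sketch of the cut point-free (continuous) NRI and the absolute IDI; the probability vectors below are illustrative, not TLGS data, and this is not the addpred implementation:

```python
# Cut point-free NRI: net proportion of events reclassified upward plus
# net proportion of non-events reclassified downward.
def continuous_nri(p_old, p_new, events):
    up_e = sum(n > o for o, n, e in zip(p_old, p_new, events) if e)
    down_e = sum(n < o for o, n, e in zip(p_old, p_new, events) if e)
    up_ne = sum(n > o for o, n, e in zip(p_old, p_new, events) if not e)
    down_ne = sum(n < o for o, n, e in zip(p_old, p_new, events) if not e)
    n_e = sum(events)
    return (up_e - down_e) / n_e + (down_ne - up_ne) / (len(events) - n_e)

# Absolute IDI: change in mean predicted risk among events minus the
# change among non-events.
def absolute_idi(p_old, p_new, events):
    mean = lambda xs: sum(xs) / len(xs)
    d_e = mean([n - o for o, n, e in zip(p_old, p_new, events) if e])
    d_ne = mean([n - o for o, n, e in zip(p_old, p_new, events) if not e])
    return d_e - d_ne

events = [1, 1, 1, 0, 0, 0]
p_old = [0.6, 0.5, 0.4, 0.4, 0.3, 0.2]   # baseline model probabilities
p_new = [0.7, 0.6, 0.5, 0.3, 0.3, 0.1]   # model with added biomarker
print(continuous_nri(p_old, p_new, events))  # about 1.67
print(absolute_idi(p_old, p_new, events))    # about 0.17
```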
Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang
2016-01-01
Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve prediction accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. The filtered traffic flow data are then used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust.
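The SSA denoising step can be sketched as follows: embed the series in a trajectory matrix, take its SVD, keep the leading components, and diagonally average back to a series. This is a minimal sketch; the window length, the number of retained components, and the synthetic periodic series are illustrative choices, not the paper's settings.

```python
import numpy as np

# Minimal singular spectrum analysis (SSA) denoising, as used to filter
# the traffic-flow series before training KELM.
def ssa_filter(series, window, rank):
    x = np.asarray(series, dtype=float)
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    # Keep the leading components (signal), drop the rest (noise)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    # Diagonal averaging (Hankelization) back to a 1-D series
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return out / counts

t = np.arange(200)
clean = np.sin(2 * np.pi * t / 24)       # daily-cycle-like component
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(200)
filtered = ssa_filter(noisy, window=24, rank=2)
print(np.abs(filtered - clean).mean() < np.abs(noisy - clean).mean())
```

In the paper's pipeline, the filtered series (not the raw one) is what feeds the phase-space reconstruction and KELM training.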
Artificial neural network modelling of a large-scale wastewater treatment plant operation.
Güçlü, Dünyamin; Dursun, Sükrü
2010-11-01
Artificial Neural Networks (ANNs), an artificial intelligence method, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of the ANN models was determined through several steps of training and testing. The ANN models yielded satisfactory predictions. The root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; and 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed models could be used efficiently. The results also confirm that the ANN modelling approach may have great implementation potential for simulation, precise performance prediction and process control of wastewater treatment plants.
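The modelling setup (a back-propagation network mapping plant measurements to one effluent concentration, evaluated by RMSE on held-out data) can be sketched as below. The synthetic inputs, the linear target, and the network size are placeholders standing in for the plant's monitoring records and the architecture search described above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of a back-propagation ANN: one hidden layer mapping influent
# measurements to an effluent concentration. Data are synthetic stand-ins.
rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 4))            # e.g. influent COD, SS, flow, temp
y = 40 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 1, 300)  # effluent COD proxy

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
model.fit(X[:240], y[:240])               # hold out the last 60 records

pred = model.predict(X[240:])
rmse = float(np.sqrt(np.mean((pred - y[240:]) ** 2)))
print(rmse)
```

The hidden-layer size would be tuned by the repeated train/test steps the abstract mentions; here it is fixed for brevity.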
Comparative Analysis of Hybrid Models for Prediction of BP Reactivity to Crossed Legs.
Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar
2017-01-01
Crossing the legs at the knees during BP measurement is one of several physiological stimuli that considerably influence the accuracy of BP measurements. Therefore, it is paramount to develop an appropriate prediction model for interpreting the influence of crossed legs on BP. This research work describes the use of principal component analysis- (PCA-) fused forward stepwise regression (FSWR), artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), and least squares support vector machine (LS-SVM) models for prediction of BP reactivity to crossed legs among normotensive and hypertensive participants. Evaluation of the performance of the proposed prediction models using appropriate statistical indices showed that the PCA-based LS-SVM (PCA-LS-SVM) model has the highest prediction accuracy, with a coefficient of determination (R²) of 93.16%, root mean square error (RMSE) of 0.27, and mean absolute percentage error (MAPE) of 5.71 for SBP prediction in normotensive subjects. Furthermore, the PCA-LS-SVM model achieved R² = 96.46%, RMSE = 0.19, and MAPE = 1.76 for SBP prediction and R² = 95.44%, RMSE = 0.21, and MAPE = 2.78 for DBP prediction in hypertensive subjects. This assessment presents the importance and advantages of hybrid computing models for the prediction of variables in biomedical research studies.
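The PCA-fusion idea (project correlated covariates onto principal components, then feed the scores to a kernel regressor) can be sketched as follows. Kernel ridge regression with an RBF kernel stands in here for LS-SVM, to which it is closely related; the latent factors, loadings, and BP-reactivity target are synthetic placeholders, not the study's participant data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch of a PCA-fused kernel regression for BP reactivity.
rng = np.random.default_rng(2)
latent = rng.normal(size=(120, 2))               # latent physiological factors
loadings = rng.normal(size=(2, 6))
X = latent @ loadings + 0.1 * rng.normal(size=(120, 6))   # six correlated covariates
delta_sbp = 3.0 * latent[:, 0] + rng.normal(0, 0.3, 120)  # SBP reactivity proxy

model = make_pipeline(PCA(n_components=2), StandardScaler(),
                      KernelRidge(kernel="rbf", alpha=0.1))
model.fit(X[:90], delta_sbp[:90])
r2 = model.score(X[90:], delta_sbp[90:])         # coefficient of determination
print(r2)
```

Because the six covariates are driven by two latent factors, two principal components capture most of the predictive information, which is the rationale for the PCA stage.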
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is first used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the landslide susceptibility map obtained by our model demonstrates a strong correlation between the classified very-high-susceptibility zone and the previously investigated landslides. PMID:27187430
Predicting survival across chronic interstitial lung disease: the ILD-GAP model.
Ryerson, Christopher J; Vittinghoff, Eric; Ley, Brett; Lee, Joyce S; Mooney, Joshua J; Jones, Kirk D; Elicker, Brett M; Wolters, Paul J; Koth, Laura L; King, Talmadge E; Collard, Harold R
2014-04-01
Risk prediction is challenging in chronic interstitial lung disease (ILD) because of heterogeneity in disease-specific and patient-specific variables. Our objective was to determine whether mortality is accurately predicted in patients with chronic ILD using the GAP model, a clinical prediction model based on sex, age, and lung physiology, that was previously validated in patients with idiopathic pulmonary fibrosis. Patients with idiopathic pulmonary fibrosis (n=307), chronic hypersensitivity pneumonitis (n=206), connective tissue disease-associated ILD (n=281), idiopathic nonspecific interstitial pneumonia (n=45), or unclassifiable ILD (n=173) were selected from an ongoing database (N=1,012). Performance of the previously validated GAP model was compared with novel prediction models in each ILD subtype and the combined cohort. Patients with follow-up pulmonary function data were used for longitudinal model validation. The GAP model had good performance in all ILD subtypes (c-index, 74.6 in the combined cohort), which was maintained at all stages of disease severity and during follow-up evaluation. The GAP model had similar performance compared with alternative prediction models. A modified ILD-GAP Index was developed for application across all ILD subtypes to provide disease-specific survival estimates using a single risk prediction model. This was done by adding a disease subtype variable that accounted for better adjusted survival in connective tissue disease-associated ILD, chronic hypersensitivity pneumonitis, and idiopathic nonspecific interstitial pneumonia. The GAP model accurately predicts risk of death in chronic ILD. The ILD-GAP model accurately predicts mortality in major chronic ILD subtypes and at all stages of disease.
An injury mortality prediction based on the anatomic injury scale
Wang, Muding; Wu, Dan; Qiu, Wusi; Wang, Weimi; Zeng, Yunji; Shen, Yi
2017-01-01
To determine whether the injury mortality prediction (IMP) statistically outperforms the trauma mortality prediction model (TMPM) as a predictor of mortality. The TMPM is currently the best trauma score method based on anatomic injury. Its mortality prediction is superior to the injury severity score (ISS) and to the new injury severity score (NISS). However, despite its statistical significance, the predictive power of the TMPM needs to be further improved. This retrospective cohort study is based on data from 1,148,359 injured patients hospitalized from 2010 to 2011 in the National Trauma Data Bank. Sixty percent of the data was used to derive an empiric measure of severity for different Abbreviated Injury Scale predot codes by taking the weighted average death probabilities of trauma patients. Twenty percent of the data was used to develop the computational method of the IMP model. The remaining 20% of the data was used to evaluate the statistical performance of the IMP and compare it with the TMPM and the single worst injury by examining the area under the receiver operating characteristic curve (ROC), the Hosmer–Lemeshow (HL) statistic, and the Akaike information criterion. The IMP exhibits significantly better discrimination (ROC-IMP, 0.903 [0.899–0.907] vs. ROC-TMPM, 0.890 [0.886–0.895]) and calibration (HL-IMP, 9.9 [4.4–14.7] vs. HL-TMPM, 197 [143–248]) than the TMPM. All models show slight changes after the extension of age, gender, and mechanism of injury, but the extended IMP still dominated the TMPM in every performance measure. The IMP offers an improvement in discrimination and calibration over the TMPM and can accurately predict mortality. Therefore, we consider it a feasible new scoring method in trauma research. PMID:28858124
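The discrimination comparison above reduces to areas under ROC curves computed for both scores on the same validation patients. A minimal sketch; the outcome labels and score vectors are illustrative, not IMP or TMPM outputs:

```python
# Compare discrimination of two mortality scores via ROC AUC.
from sklearn.metrics import roc_auc_score

died = [1, 1, 1, 0, 0, 0, 0, 0]
score_a = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.2, 0.1]   # hypothetical model A
score_b = [0.9, 0.5, 0.4, 0.6, 0.5, 0.3, 0.2, 0.1]   # hypothetical model B

auc_a = roc_auc_score(died, score_a)
auc_b = roc_auc_score(died, score_b)
print(auc_a, auc_b)   # A discriminates better on these toy data
```

In the study, a statistical test on the AUC difference (plus the HL statistic for calibration and the AIC) decides whether the improvement is significant, not the raw comparison alone.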
NASA Technical Reports Server (NTRS)
Forssen, B.; Wang, Y. S.; Crocker, M. J.
1981-01-01
Several aspects were studied. The SEA theory was used to develop a theoretical model to predict the transmission loss through an aircraft window. This work mainly consisted of writing two computer programs. One program predicts the sound transmission through a plexiglass window (the case of a single partition). The other program applies to the case of a plexiglass window with a window shade added (the case of a double partition with an air gap). The sound transmission through a structure was measured in experimental studies using several different methods so that the accuracy and complexity of the methods could be compared. The measurements were conducted on a simple model of a fuselage (a cylindrical shell), on a real aircraft fuselage, and on stiffened panels.
NASA Astrophysics Data System (ADS)
Ying, Yibin; Liu, Yande; Tao, Yang
2005-09-01
This research evaluated the feasibility of using Fourier-transform near-infrared (FT-NIR) spectroscopy to quantify the soluble-solids content (SSC) and the available acidity (VA) in intact apples. Partial least-squares calibration models obtained from several preprocessing techniques (smoothing, derivatives, etc.) over several wave-number ranges were compared. The best models achieved a high correlation coefficient (r) of 0.940 for the SSC and a moderate r of 0.801 for the VA, with root-mean-square errors of prediction of 0.272% and 0.053% and root-mean-square errors of calibration of 0.261% and 0.046%, respectively. The results indicate that FT-NIR spectroscopy yields good predictions of the SSC and show the feasibility of using it to predict the VA of apples.
Do We Know the Actual Magnetopause Position for Typical Solar Wind Conditions?
NASA Technical Reports Server (NTRS)
Samsonov, A. A.; Gordeev, E.; Tsyganenko, N. A.; Safrankova, J.; Nemecek, Z.; Simunek, J.; Sibeck, D. G.; Toth, G.; Merkin, V. G.; Raeder, J.
2016-01-01
We compare predicted magnetopause positions at the subsolar point and four reference points in the terminator plane obtained from several empirical and numerical MHD (magnetohydrodynamics) models. Empirical models using various sets of magnetopause crossings and making different assumptions about the magnetopause shape predict significantly different magnetopause positions, with a scatter greater than 1 Earth radius (R_E) even at the subsolar point. Axisymmetric magnetopause models cannot reproduce the cusp indentations or the changes related to the dipole tilt effect, and most of them predict the magnetopause closer to the Earth than non-axisymmetric models for typical solar wind conditions and zero tilt angle. Predictions of two global non-axisymmetric models do not match each other, and the models need additional verification. MHD models often predict the magnetopause closer to the Earth than the non-axisymmetric empirical models, but the predictions of MHD simulations may need corrections for the ring current effect and for decreases of the solar wind pressure that occur in the foreshock. Comparing MHD models in which the ring current magnetic field is taken into account with the empirical Lin et al. model, we find that the differences in the reference point positions predicted by these models are relatively small for B_z = 0 (B_z being the north-south component of the interplanetary magnetic field). Therefore, we assume that these predictions indicate the actual magnetopause position, but future investigations are still needed.
Challenges of assessing fire and burn severity using field measures, remote sensing and modelling
Penelope Morgan; Robert E. Keane; Gregory K. Dillon; Theresa B. Jain; Andrew T. Hudak; Eva C. Karau; Pamela G. Sikkink; Zachery A. Holden; Eva K. Strand
2014-01-01
Comprehensive assessment of ecological change after fires have burned forests and rangelands is important if we are to understand, predict and measure fire effects. We highlight the challenges in effective assessment of fire and burn severity in the field and using both remote sensing and simulation models. We draw on diverse recent research for guidance on assessing...
A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments
NASA Astrophysics Data System (ADS)
Gokhale, Sharad; Khare, Mukesh
Several deterministic-based air quality models evaluate and predict the frequently occurring pollutant concentration well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, the statistical distribution models overcome the above limitation of the deterministic models and predict the 'extreme' concentrations. However, the environmental damages are caused by both extremes as well as by the sustained average concentration of pollutants. Hence, the model should predict not only 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one of the techniques that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining the deterministic based models with suitable statistical distribution models ( Jakeman, et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, Income Tax Office (ITO), in the Delhi city, where the traffic is heterogeneous in nature and meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic, and log logistic distribution (LLD) model, as its statistical components. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match with that of the observed CO concentration data within the 5-99 percentiles range. The model is further validated at different street location, i.e. Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well ( d=0.91) in 10-95 percentiles range. The regulatory compliance is also developed to estimate the probability of exceedance of hourly CO concentration beyond the National Ambient Air Quality Standards (NAAQS) of India. 
The heterogeneous traffic consists of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.).
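The statistical half of the hybrid approach, fitting a log-logistic distribution to concentrations and reading off exceedance probabilities against a standard, can be sketched as follows. This is a minimal illustration with synthetic data, not the GFLSM-LLD model itself; SciPy names the log-logistic distribution `fisk`, and the 4 mg/m^3 limit is used here only as an example threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical hourly CO concentrations (mg/m^3) at an intersection.
observed_co = stats.fisk.rvs(c=3.0, scale=2.0, size=1000, random_state=rng)

# Fit a log-logistic distribution (SciPy calls it "fisk"); fix loc=0
# since concentrations are non-negative.
shape, loc, scale = stats.fisk.fit(observed_co, floc=0)
fitted = stats.fisk(shape, loc=loc, scale=scale)

# Regulatory compliance: probability that an hourly CO concentration
# exceeds a standard (example threshold, not the official NAAQS value).
limit = 4.0
p_exceed = fitted.sf(limit)  # survival function = P(C > limit)
print(f"P(CO > {limit} mg/m^3) = {p_exceed:.3f}")
```

In the hybrid scheme, the deterministic model supplies the central part of the distribution while a fit like this captures the upper-percentile behaviour.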
High-severity fire: evaluating its key drivers and mapping its probability across western US forests
NASA Astrophysics Data System (ADS)
Parks, Sean A.; Holsinger, Lisa M.; Panunto, Matthew H.; Jolly, W. Matt; Dobrowski, Solomon Z.; Dillon, Gregory K.
2018-04-01
Wildland fire is a critical process in forests of the western United States (US). Variation in fire behavior, which is heavily influenced by fuel loading, terrain, weather, and vegetation type, leads to heterogeneity in fire severity across landscapes. The relative influence of these factors in driving fire severity, however, is poorly understood. Here, we explore the drivers of high-severity fire for forested ecoregions in the western US over the period 2002–2015. Fire severity was quantified using a satellite-inferred index of severity, the relativized burn ratio. For each ecoregion, we used boosted regression trees to model high-severity fire as a function of live fuel, topography, climate, and fire weather. We found that live fuel, on average, was the most important factor driving high-severity fire among ecoregions (average relative influence = 53.1%) and was the most important factor in 14 of 19 ecoregions. Fire weather was the second most important factor among ecoregions (average relative influence = 22.9%) and was the most important factor in five ecoregions. Climate (13.7%) and topography (10.3%) were less influential. We also predicted the probability of high-severity fire, were a fire to occur, using recent (2016) satellite imagery to characterize live fuel for a subset of ecoregions in which the model skill was deemed acceptable (n = 13). These ‘wall-to-wall’ gridded ecoregional maps provide relevant and up-to-date information for scientists and managers who are tasked with managing fuel and wildland fire. Lastly, we provide an example of the predicted likelihood of high-severity fire under moderate and extreme fire weather before and after fuel reduction treatments, thereby demonstrating how our framework and model predictions can potentially serve as a performance metric for land management agencies tasked with reducing hazardous fuel across large landscapes.
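The boosted-regression-tree workflow described above, modelling high-severity fire from predictor groups and ranking them by relative influence, can be sketched with scikit-learn. The data here are synthetic and the variable names are placeholders; the study's actual predictors and fitted influences are not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 2000
# Hypothetical predictors standing in for the four factor groups:
# live fuel, fire weather, climate, topography.
X = rng.uniform(0.0, 1.0, size=(n, 4))
# Synthetic "truth": severity mostly driven by live fuel and fire
# weather, loosely echoing the ordering reported in the abstract.
logit = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.3 * X[:, 3] - 2.5
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

model = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, y)

# Relative influence (%) per predictor, as boosted-tree packages report it.
influence = 100.0 * model.feature_importances_
for name, v in zip(["live fuel", "fire weather", "climate", "topography"],
                   influence):
    print(f"{name}: {v:.1f}%")
```

Predicted probabilities from `model.predict_proba` over a gridded predictor stack would then yield the kind of 'wall-to-wall' severity maps the abstract describes.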
Modelling Influence and Opinion Evolution in Online Collective Behaviour
Gend, Pascal; Rentfrow, Peter J.; Hendrickx, Julien M.; Blondel, Vincent D.
2016-01-01
Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. This is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing the extent to which individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants' past behaviour. Several situations reflecting data availability are compared. When data is scarce, data from previous participants is used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability is proposed. The measure is based on a specific control experiment. More than two thirds of the prediction errors are found to occur due to unpredictability of the human judgment revision process rather than to model imperfection. PMID:27336834
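A consensus model parametrized by a per-individual influenceability, as described above, reduces to a one-line update rule. This is a generic sketch of that family of models, not the calibrated model from the study; the function name and numbers are illustrative.

```python
# One-parameter consensus update: after seeing peers' judgments, an
# individual moves toward their mean by an "influenceability" alpha.
def revise(own: float, peers: list[float], alpha: float) -> float:
    """Revised judgment; alpha=0 ignores peers, alpha=1 adopts their mean."""
    peer_mean = sum(peers) / len(peers)
    return (1.0 - alpha) * own + alpha * peer_mean

# Example: a moderately influenceable participant (alpha = 0.4)
# revising an estimate of 100 after seeing three peer judgments.
print(revise(100.0, [60.0, 80.0, 70.0], alpha=0.4))  # moves toward 70
```

Fitting alpha per participant from past revisions is what lets such a model predict how a new judgment will shift.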
Plant water potential improves prediction of empirical stomatal models.
Anderegg, William R L; Wolf, Adam; Arango-Velez, Adriana; Choat, Brendan; Chmura, Daniel J; Jansen, Steven; Kolb, Thomas; Li, Shan; Meinzer, Frederick; Pita, Pilar; Resco de Dios, Víctor; Sperry, John S; Wolfe, Brett T; Pacala, Stephen
2017-01-01
Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
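The core idea above, an empirical conductance model downregulated by declining water potential, can be sketched with a Ball-Berry-type formula multiplied by a sigmoidal drought factor. This is an illustrative stand-in, not the specific model proposed in the paper; all parameter values (g0, g1, psi50, slope) are hypothetical.

```python
import math

def ball_berry(a_net, rh, ca, g0=0.01, g1=9.0):
    """Classic empirical stomatal conductance (mol m^-2 s^-1) from net
    photosynthesis a_net, relative humidity rh and CO2 concentration ca."""
    return g0 + g1 * a_net * rh / ca

def psi_factor(psi_leaf, psi50=-2.0, slope=2.0):
    """Sigmoidal downregulation with declining leaf water potential (MPa);
    psi50 is the potential at which conductance is halved (hypothetical)."""
    return 1.0 / (1.0 + math.exp(slope * (psi50 - psi_leaf)))

def gs(a_net, rh, ca, psi_leaf):
    """Conductance with the drought limitation applied multiplicatively."""
    return ball_berry(a_net, rh, ca) * psi_factor(psi_leaf)

# Well-watered vs droughted leaf at the same photosynthesis and humidity:
print(gs(10.0, 0.7, 400.0, psi_leaf=-0.5))  # near the unmodified estimate
print(gs(10.0, 0.7, 400.0, psi_leaf=-3.5))  # strongly reduced
```

The over-prediction bias the authors report corresponds to running the first function alone; the multiplicative factor is one simple way to encode stomatal sensitivity to water potential.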
Jobling, Susan; Williams, Richard; Johnson, Andrew; Taylor, Ayesha; Gross-Sorokin, Melanie; Nolan, Monique; Tyler, Charles R.; van Aerle, Ronny; Santos, Eduarda; Brighty, Geoff
2006-01-01
Steroidal estrogens, originating principally from human excretion, are likely to play a major role in causing widespread endocrine disruption in wild populations of the roach (Rutilus rutilus), a common cyprinid fish, in rivers contaminated by treated sewage effluents. Given the extent of this problem, risk assessment models are needed to predict the location and severity of endocrine disruption in river catchments and to identify areas where regulation of sewage discharges to remove these contaminants is necessary. In this study we attempted to correlate the extent of endocrine disruption in roach in British rivers with their predicted exposure to steroid estrogens derived from the human population. The predictions of steroid estrogen exposure at each river site were determined by combining the modeled concentrations of the individual steroid estrogens [17β-estradiol (E2), estrone (E1), and 17α-ethinylestradiol (EE2)] in each sewage effluent with their predicted dilution in the immediate receiving water. This model was applied to 45 sites on 39 rivers throughout the United Kingdom. Each site studied was then categorized as either high, medium, or low "risk" on the basis of the assumed additive potency of the three steroid estrogens calculated from data derived from published studies in various cyprinid fish species. We sampled 1,438 wild roach from the predicted high-, medium-, and low-risk river sites and examined them for evidence and severity of endocrine disruption. Both the incidence and the severity of intersex in wild roach were significantly correlated with the predicted concentrations of the natural estrogens (E1 and E2) and the synthetic contraceptive pill estrogen (EE2) present. Predicted steroid estrogen exposure was, however, less well correlated with the plasma vitellogenin concentration measured in the same fish.
Moreover, we found no correlation between any of the end points measured in the roach and the proportion of industrial effluents entering the rivers we studied. Overall, our results provide further and substantive evidence to support the hypothesis that steroidal estrogens play a major role in causing intersex in wild freshwater fish in rivers in the United Kingdom and clearly show that the location and severity of these endocrine-disrupting effects can be predicted. PMID:16818244
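The additive-potency risk categorization described above amounts to converting each estrogen's modeled concentration into estradiol equivalents and thresholding the sum. The sketch below shows the arithmetic only; the relative potencies and risk thresholds are placeholders, not the values derived in the study.

```python
def estradiol_equivalents(e2, e1, ee2, pot_e1=0.33, pot_ee2=10.0):
    """Combined estrogenicity (ng/L E2-equivalents) assuming additivity.
    The relative potencies here are hypothetical placeholders."""
    return e2 + pot_e1 * e1 + pot_ee2 * ee2

def risk_category(eeq, low=1.0, high=10.0):
    """Classify a site by predicted exposure (thresholds hypothetical)."""
    if eeq < low:
        return "low"
    return "medium" if eeq < high else "high"

# Hypothetical effluent-diluted concentrations at one river site (ng/L):
eeq = estradiol_equivalents(e2=1.2, e1=3.0, ee2=0.4)
print(eeq, risk_category(eeq))
```

EE2 dominating the sum despite its low concentration reflects the much higher potency typically attributed to the synthetic estrogen.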
The role of early adversity and recent life stress in depression severity in an outpatient sample.
Vogt, Dominic; Waeldin, Sandra; Hellhammer, Dirk; Meinlschmidt, Gunther
2016-12-01
Pre-, peri-, and postnatal stress have frequently been reported to be associated with negative health outcomes during adult life. However, it is unclear whether these factors independently predict mental health in adulthood. We estimated potential associations between reports of pre-, peri-, and postnatal stress and depression severity in outpatients (N = 473) diagnosed with depression, anxiety or somatoform disorders by their family physician. We retrospectively assessed pre-, peri-, and postnatal stress and measured depression severity as well as recent life stress using questionnaires. First, we estimated whether depression severity was predicted by pre-, peri- and/or postnatal stress using multiple regression models. Second, we compared pre- and postnatal stress levels between patient subgroups with different degrees of depression severity, performing multilevel linear modeling. Third, we analyzed whether an association between postnatal stress and current depression severity was mediated by recent life stress. We found no associations of pre- or perinatal stress with depression severity (all p > 0.05). Higher postnatal stress was associated with higher depression severity (p < 0.001). Patients with moderately severe and severe depression reported higher levels of postnatal stress than patients with none-to-minimal or mild depression (all p < 0.05). Mediation analysis revealed a significant indirect effect, via recent life stress, of the association between postnatal stress and depression severity (p < 0.001). In patients diagnosed with depression, anxiety, and/or somatoform disorders, postnatal but neither pre- nor perinatal stress predicted depression severity in adult life. This association was mediated by recent life stress. Copyright © 2016 Elsevier Ltd. All rights reserved.
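The mediation test above, an indirect effect of postnatal stress on depression running through recent life stress, follows the standard product-of-coefficients logic. The sketch below uses synthetic data and plain least squares, not the authors' dataset or their exact multilevel procedure.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation: a = effect of x on the mediator m,
    b = effect of m on y controlling for x; indirect effect = a * b."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(1)
n = 500
postnatal = rng.normal(size=n)                 # postnatal stress score
recent = 0.6 * postnatal + rng.normal(size=n)  # recent life stress
depression = 0.5 * recent + 0.1 * postnatal + rng.normal(size=n)

ab = indirect_effect(postnatal, recent, depression)
print(f"indirect effect (a*b) near the generating value 0.6 * 0.5: {ab:.2f}")
```

In practice the significance of a*b is usually assessed with a bootstrap over resampled cases rather than a single point estimate.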
NASA Technical Reports Server (NTRS)
Cappelli, Daniele; Mansour, Nagi N.
2012-01-01
Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds-Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.
Zylstra, Philip; Bradstock, Ross A.; Bedward, Michael; Penman, Trent D.; Doherty, Michael D.; Weber, Rodney O.; Gill, A. Malcolm; Cary, Geoffrey J.
2016-01-01
The influence of plant traits on forest fire behaviour has evolutionary, ecological and management implications, but is poorly understood and frequently discounted. We use a process model to quantify that influence and provide validation in a diverse range of eucalypt forests burnt under varying conditions. Measured height of consumption was compared to heights predicted using a surface fuel fire behaviour model, then key aspects of our model were sequentially added to this with and without species-specific information. Our fully specified model had a mean absolute error 3.8 times smaller than the otherwise identical surface fuel model (p < 0.01), and correctly predicted the height of larger (≥1 m) flames 12 times more often (p < 0.001). We conclude that the primary endogenous drivers of fire severity are the species of plants present rather than the surface fuel load, and demonstrate the accuracy and versatility of the model for quantifying this. PMID:27529789
Analysis and modeling of infrasound from a four-stage rocket launch.
Blom, Philip; Marcillo, Omar; Arrowsmith, Stephen
2016-06-01
Infrasound from a four-stage sounding rocket was recorded by several arrays within 100 km of the launch pad. Propagation modeling methods have been applied to the known trajectory to predict infrasonic signals at the ground in order to identify what information might be obtained from such observations. There is good agreement between modeled and observed back azimuths, and predicted arrival times for motor ignition signals match those observed. The signal due to the high-altitude stage ignition is found to be low amplitude, despite predictions of weak attenuation. This lack of signal is possibly due to inefficient aeroacoustic coupling in the rarefied upper atmosphere.
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is required to understand the function of, and the learning process in, a biological neural network, which operates depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation of all effects from past firing states. A neural network model based on the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
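The history dependence described above, a firing probability determined by summing effects from past spikes, can be illustrated with a simple discrete-time sketch. This is a generic spike-history model, not the path-integral formulation of the paper; the kernel shape and all parameters are hypothetical.

```python
import math

def firing_prob(spike_history, weights, baseline=-2.0):
    """Firing probability at the current step as a logistic function of a
    weighted sum over past spikes (most recent first)."""
    drive = baseline + sum(w * s for w, s in zip(weights, spike_history))
    return 1.0 / (1.0 + math.exp(-drive))

# Exponentially decaying influence of the last 5 time steps (hypothetical).
kernel = [1.5 * math.exp(-k / 2.0) for k in range(5)]

p_active = firing_prob([1, 0, 0, 1, 0], kernel)  # recent spikes raise the rate
p_quiet = firing_prob([0, 0, 0, 0, 0], kernel)   # baseline probability
print(p_active, p_quiet)
```

The many-body difficulty arises because, in a network, each neuron's history terms are themselves random outcomes of the same dynamics, which is what the path-integral treatment addresses.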
A Micromechanics-Based Damage Model for [+/- Theta/90n]s Composite Laminates
NASA Technical Reports Server (NTRS)
Mayugo, Joan-Andreu; Camanho, Pedro P.; Maimi, Pere; Davila, Carlos G.
2006-01-01
A new damage model based on a micromechanical analysis of cracked [+/- Theta/90n]s laminates subjected to multiaxial loads is proposed. The model predicts the onset and accumulation of transverse matrix cracks in uniformly stressed laminates, the effect of matrix cracks on the stiffness of the laminate, as well as the ultimate failure of the laminate. The model also accounts for the effect of the ply thickness on the ply strength. Predictions of the elastic properties of several laminates under multiaxial loads are presented.
PREDICTING CLIMATE-INDUCED RANGE SHIFTS FOR MAMMALS: HOW GOOD ARE THE MODELS?
In order to manage wildlife and conserve biodiversity, it is critical that we understand the potential impacts of climate change on species distributions. Several different approaches to predicting climate-induced geographic range shifts have been proposed to address this problem...
Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M
2015-01-20
Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).
Mehra, Tarun; Koljonen, Virve; Seifert, Burkhardt; Volbracht, Jörk; Giovanoli, Pietro; Plock, Jan; Moos, Rudolf Maria
2015-01-01
Reimbursement systems have difficulties depicting the actual cost of burn treatment, leaving care providers with a significant financial burden. Our aim was to establish a simple and accurate reimbursement model compatible with prospective payment systems. A total of 370 966 electronic medical records of patients discharged in 2012 to 2013 from Swiss university hospitals were reviewed. A total of 828 cases of burns including 109 cases of severe burns were retained. Costs, revenues and earnings for severe and nonsevere burns were analysed and a linear regression model predicting total inpatient treatment costs was established. The median total costs per case for severe burns was tenfold higher than for nonsevere burns (179 949 CHF [167 353 EUR] vs 11 312 CHF [10 520 EUR], interquartile ranges 96 782-328 618 CHF vs 4 874-27 783 CHF, p <0.001). The median earnings per case for nonsevere burns was 588 CHF (547 EUR) (interquartile range -6 720 - 5 354 CHF) whereas severe burns incurred a large financial loss to care providers, with median earnings of -33 178 CHF (-30 856 EUR) (interquartile range -95 533 - 23 662 CHF). Differences were highly significant (p <0.001). Our linear regression model predicting total costs per case with length of stay (LOS) as independent variable had an adjusted R2 of 0.67 (p <0.001 for LOS). Severe burns are systematically underfunded within the Swiss reimbursement system. Flat-rate DRG-based refunds poorly reflect the actual treatment costs. In conclusion, we suggest a reimbursement model based on a per diem rate for treatment of severe burns.
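The cost-versus-LOS regression described above can be sketched with ordinary least squares; the case figures below are invented for illustration and are not the study's records.

```python
# Ordinary least squares predicting total inpatient cost per case from
# length of stay (LOS), as in the reimbursement study. All data are
# hypothetical placeholders.
def ols_fit(x, y):
    """Return (intercept, slope) minimizing squared error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

def r_squared(x, y, intercept, slope):
    """Coefficient of determination for the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical cases: (LOS in days, total cost in CHF)
los = [3, 7, 14, 30, 60, 90]
cost = [9_000, 20_000, 45_000, 95_000, 210_000, 300_000]
a, b = ols_fit(los, cost)
r2 = r_squared(los, cost, a, b)
```

With roughly linear toy data the fit recovers a per-day cost slope and a high R2; the study reports an adjusted R2 of 0.67 on real records, where costs scatter far more.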
The Czech Hydrometeorological Institute's severe storm nowcasting system
NASA Astrophysics Data System (ADS)
Novak, Petr
2007-02-01
To satisfy requirements for operational severe weather monitoring and prediction, the Czech Hydrometeorological Institute (CHMI) has developed a severe storm nowcasting system which uses weather radar data as its primary data source. Previous CHMI studies identified two methods of radar echo prediction, which were then implemented during 2003 into the Czech weather radar network operational weather processor. The applications put into operation were the Continuity Tracking Radar Echoes by Correlation (COTREC) algorithm, and an application that predicts future radar fields using the wind field derived from the geopotential at 700 hPa calculated from a local numerical weather prediction model (ALADIN). To ensure timely delivery of the prediction products to the users, the forecasts are integrated into a web-based viewer (JSMeteoView) developed by the CHMI Radar Department. At present, this viewer is used by all CHMI forecast offices for versatile visualization of radar and other meteorological data (Meteosat, lightning detection, NWP LAM output, SYNOP data) in the Internet/Intranet environment, and the viewer has detailed geographical navigation capabilities.
Measuring the value of accurate link prediction for network seeding.
Wei, Yijin; Spencer, Gwen
2017-01-01
The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample ([Formula: see text]) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates [Formula: see text] under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on spread model: in some parameter ranges investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.
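A threshold-spread process of the kind referenced above can be sketched as a deterministic linear-threshold cascade; the toy graph, thresholds, and seed sets below are invented for illustration.

```python
# Minimal linear-threshold cascade: a node activates once the fraction of
# its active neighbours reaches its threshold. Graph and thresholds are toy
# values, not data from the study.
def threshold_cascade(adj, thresholds, seeds):
    """adj: {node: set of neighbours}; returns the final active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node in active or not nbrs:
                continue
            if len(nbrs & active) / len(nbrs) >= thresholds[node]:
                active.add(node)
                changed = True
    return active

adj = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3},
    3: {1, 2, 4}, 4: {3},
}
thresholds = {n: 0.5 for n in adj}
spread = threshold_cascade(adj, thresholds, seeds={0, 1})
```

Seeding {0, 1} cascades to the whole toy graph, while seeding {0} alone activates no one else; comparing cascade sizes under a noisy versus an exact copy of `adj` is the essence of the optimized-against-a-sample measurement.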
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Molthan, Andrew; Zavodsky, Bradley T.; Case, Jonathan L.; LaFontaine, Frank J.; Srikishen, Jayanthi
2010-01-01
The NASA Short-term Prediction Research and Transition Center (SPoRT)'s new "Weather in a Box" resources will provide weather research and forecast modeling capabilities for real-time application. Model output will provide additional forecast guidance and research into the impacts of new NASA satellite data sets and software capabilities. By combining several research tools and satellite products, SPoRT can generate model guidance that is strongly influenced by unique NASA contributions.
Prediction on carbon dioxide emissions based on fuzzy rules
NASA Astrophysics Data System (ADS)
Pauzi, Herrini; Abdullah, Lazim
2014-06-01
There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most conventional methods are not able to provide good forecasting performance due to problems with the non-linearity, uncertainty and complexity of the data. Artificial intelligence techniques have been used successfully in modeling air quality to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare prediction performance. Data on five variables: energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.
Novel method to predict body weight in children based on age and morphological facial features.
Huang, Ziyin; Barrett, Jeffrey S; Barrett, Kyle; Barrett, Ryan; Ng, Chee M
2015-04-01
A new and novel approach of predicting the body weight of children based on age and morphological facial features using a three-layer feed-forward artificial neural network (ANN) model is reported. The model takes in four parameters, including age-based CDC-inferred median body weight and three facial feature distances measured from digital facial images. In this study, thirty-nine volunteer subjects with age ranging from 6-18 years old and BW ranging from 18.6-96.4 kg were used for model development and validation. The final model has a mean prediction error of 0.48, a mean squared error of 18.43, and a coefficient of correlation of 0.94. The model shows significant improvement in prediction accuracy over several age-based body weight prediction methods. Combining with a facial recognition algorithm that can detect, extract and measure the facial features used in this study, mobile applications that incorporate this body weight prediction method may be developed for clinical investigations where access to scales is limited. © 2014, The American College of Clinical Pharmacology.
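The network's structure can be sketched as a single forward pass: four inputs (an age-based reference weight plus three facial distances), one hidden layer, one output. All weights and inputs below are illustrative placeholders, not the study's fitted model.

```python
import math

# Forward pass of a three-layer feed-forward network of the kind described:
# inputs -> tanh hidden layer -> linear output (predicted body weight, kg).
# Weights are arbitrary stand-ins; the paper's trained weights are not used.
def forward(x, w_hidden, b_hidden, w_out, b_out):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# 4 inputs -> 2 hidden units -> 1 output
x = [40.0, 0.35, 0.28, 0.22]  # [CDC-inferred median weight, d1, d2, d3]
w_hidden = [[0.02, 0.5, -0.3, 0.1], [0.01, -0.2, 0.4, 0.3]]
b_hidden = [0.0, 0.1]
w_out = [25.0, 10.0]
b_out = 20.0
pred = forward(x, w_hidden, b_hidden, w_out, b_out)
```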
NASA Astrophysics Data System (ADS)
Wanna, S. B. C.; Basaruddin, K. S.; Mat Som, M. H.; Mohamad Hashim, M. S.; Daud, R.; Majid, M. S. Abdul; Sulaiman, A. R.
2017-10-01
Osteogenesis imperfecta (OI) is a genetic disease that affects bone geometry. In severe cases, the disease can be fatal. The main clinical issue is predicting bone fracture: the force the bone can withstand before fracturing is often the chief concern for orthopaedic surgeons. Therefore, the objective of the present preliminary study was to investigate the fracture risk associated with OI bone, particularly the femur, when subjected to self-weight. Finite element analysis (FEA) was employed to reconstruct the OI bone model and analyse the mechanical stress response of the femur before fracture. Ten deformed models with different severities of OI were developed, and a force representing the patient's self-weight was applied to the reconstructed models in static analysis. Stress and fracture risk were observed and analysed throughout the simulation. None of the deformed models fractured. Fracture risk increased with increasing severity of bone deformation. The results showed that all deformed femur models were able to bear the force without fracturing when subjected to self-weight alone.
QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.
Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V
2015-07-27
Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases to QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.
Prediction and error of baldcypress stem volume from stump diameter
Bernard R. Parresol
1998-01-01
The need to estimate the volume of removals occurs for many reasons, such as in trespass cases, severance tax reports, and post-harvest assessments. A logarithmic model is presented for prediction of baldcypress total stem cubic foot volume using stump diameter as the independent variable. Because the error of prediction is as important as the volume estimate, the...
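A generic logarithmic stump-to-volume model of the form ln(V) = a + b·ln(D) can be sketched as follows; the coefficients are hypothetical placeholders, not Parresol's fitted values.

```python
import math

# Generic logarithmic volume model: ln(V) = a + b * ln(D), back-transformed.
# Coefficients a and b are illustrative stand-ins, not the published fit.
def stem_volume(stump_diameter_in, a=-2.0, b=2.4):
    """Predicted total stem volume (cubic feet) from stump diameter (inches)."""
    return math.exp(a + b * math.log(stump_diameter_in))

v_small = stem_volume(10.0)
v_large = stem_volume(20.0)
```

Because the model is fit in log space, back-transforming introduces a downward bias; a standard correction is to multiply the prediction by exp(MSE/2), which is why the paper emphasizes the error of prediction alongside the estimate itself.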
Mysara, Mohamed; Elhefnawi, Mahmoud; Garibaldi, Jonathan M
2012-06-01
The investigation of small interfering RNA (siRNA) and its posttranscriptional gene-regulation has become an extremely important research topic, both for fundamental reasons and for potential longer-term therapeutic benefits. Several factors affect the functionality of siRNA including positional preferences, target accessibility and other thermodynamic features. State-of-the-art tools aim to optimize the selection of target siRNAs by identifying those that may have high experimental inhibition. Such tools implement artificial neural network models such as Biopredsi and ThermoComposition21, and linear regression models such as DSIR, i-Score and Scales, among others. However, all these models have limitations in performance. In this work, a new neural-network-trained siRNA scoring/efficacy prediction model was developed based on combining two existing scoring algorithms (ThermoComposition21 and i-Score), together with the whole stacking energy (ΔG), in a multi-layer artificial neural network. These three parameters were chosen after a comparative combinatorial study of five well-known tools. Our developed model, 'MysiRNA', was trained on 2431 siRNA records and tested using three further datasets. MysiRNA was compared with 11 alternative existing scoring tools in an evaluation study assessing predicted versus experimental siRNA efficiency, where it achieved the highest performance both in terms of correlation coefficient (R(2)=0.600) and receiver operating characteristic analysis (AUC=0.808), improving the prediction accuracy by up to 18% with respect to sensitivity and specificity of the best available tools. MysiRNA is a novel, freely accessible model capable of predicting siRNA inhibition efficiency with improved specificity and sensitivity. This multiclassifier approach could help improve the performance of prediction in several bioinformatics areas. The MysiRNA model, part of the MysiRNA-Designer package [1], is expected to play a key role in siRNA selection and evaluation.
Copyright © 2012 Elsevier Inc. All rights reserved.
Predicting trauma patient mortality: ICD [or ICD-10-AM] versus AIS based approaches.
Willis, Cameron D; Gabbe, Belinda J; Jolley, Damien; Harrison, James E; Cameron, Peter A
2010-11-01
The International Classification of Diseases Injury Severity Score (ICISS) has been proposed as an International Classification of Diseases (ICD)-10-based alternative to mortality prediction tools that use Abbreviated Injury Scale (AIS) data, including the Trauma and Injury Severity Score (TRISS). To date, studies have not examined the performance of ICISS using Australian trauma registry data. This study aimed to compare the performance of ICISS with other mortality prediction tools in an Australian trauma registry. This was a retrospective review of prospectively collected data from the Victorian State Trauma Registry. A training dataset was created for model development and a validation dataset for evaluation. The multiplicative ICISS model was compared with a worst injury ICISS approach, Victorian TRISS (V-TRISS, using local coefficients), maximum AIS severity and a multivariable model including ICD-10-AM codes as predictors. Models were investigated for discrimination (C-statistic) and calibration (Hosmer-Lemeshow statistic). The multivariable approach had the highest level of discrimination (C-statistic 0.90) and calibration (H-L 7.65, P= 0.468). Worst injury ICISS, V-TRISS and maximum AIS had similar performance. The multiplicative ICISS produced the lowest level of discrimination (C-statistic 0.80) and poorest calibration (H-L 50.23, P < 0.001). The performance of ICISS may be affected by the data used to develop estimates, the ICD version employed, the methods for deriving estimates and the inclusion of covariates. In this analysis, a multivariable approach using ICD-10-AM codes was the best-performing method. A multivariable ICISS approach may therefore be a useful alternative to AIS-based methods and may have comparable predictive performance to locally derived TRISS models. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.
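The multiplicative ICISS and the worst-injury variant compared above can be sketched directly: survival probability is the product (or, for the worst-injury form, the minimum) of survival risk ratios (SRRs) over a patient's coded injuries. The SRR values below are invented; real SRRs are estimated per ICD code from registry data.

```python
from functools import reduce

# Multiplicative ICISS: product of per-injury survival risk ratios (SRRs).
# Worst-injury ICISS: the single lowest SRR. SRRs here are hypothetical.
def iciss_multiplicative(srrs):
    return reduce(lambda acc, s: acc * s, srrs, 1.0)

def iciss_worst_injury(srrs):
    return min(srrs)

# Hypothetical patient with three coded injuries
srrs = [0.95, 0.80, 0.99]
p_multiplicative = iciss_multiplicative(srrs)  # 0.95 * 0.80 * 0.99
p_worst = iciss_worst_injury(srrs)             # lowest SRR only
```

The multiplicative form penalizes each additional injury, which is one reason its calibration can drift when minor injuries are coded inconsistently, as the registry comparison above suggests.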
How age and gender predict illness course in a first-episode nonaffective psychosis cohort.
Drake, Richard J; Addington, Jean; Viswanathan, Ananth C; Lewis, Shôn W; Cotter, Jack; Yung, Alison R; Abel, Kathryn M
2016-03-01
Male gender and young age at onset of schizophrenia are traditionally associated with poor treatment outcome and often used to determine prognosis. However, many studies use nonincident samples and fail to adjust for symptom severity at onset. We hypothesized that age and gender would influence severity of presentation but would not predict outcome after adjustment for symptoms at presentation. 628 people with first-episode ICD-9 and DSM-IV nonaffective psychosis from 2 historical cohorts recruited from sequential presentations in Canada and the United Kingdom (1996-1998) were assessed prospectively at presentation and over 12-18 months using the Positive and Negative Syndrome Scale (PANSS). Models of the age-at-onset distributions with 2 underlying modes at similar ages in women (ages 23 years and 47 years) and men (ages 22 years and 46 years) had relatively good fits compared to single-mode models (χ²(1) better by 9.2 for females, 8.0 for males, both P < .05). At presentation, scores for negative symptoms were 1.84 points worse for males (95% CI, 1.05 to 2.58; P < .001) in a mixed effects model. Younger age also predicted higher negative scores at presentation (partial correlation r = -0.18, P < .01; P < .001 in the mixed effects model). Findings were similar for cognitive-disorganized symptoms. However, after controlling for baseline symptoms, age at onset and gender did not significantly predict subsequent symptom course in the mixed effects models. Gender and age at onset are independently associated with symptoms at presentation but not with medium-term course of schizophrenia. This finding reinforces the importance of early identification and prevention of severe negative symptoms at first episode, whatever an individual's age and gender. © Copyright 2016 Physicians Postgraduate Press, Inc.
NASA Astrophysics Data System (ADS)
Luan, Feng; Kleandrova, Valeria V.; González-Díaz, Humberto; Ruso, Juan M.; Melo, André; Speck-Planche, Alejandro; Cordeiro, M. Natália D. S.
2014-08-01
Nowadays, the interest in the search for new nanomaterials with improved electrical, optical, catalytic and biological properties has increased. Despite the potential benefits that can be gathered from the use of nanoparticles, little attention has been paid to their possible toxic effects on human health. In this context, several assays have been carried out to evaluate the cytotoxicity of nanoparticles in mammalian cells. Owing to the cost in both resources and time involved in such toxicological assays, there has been a considerable increase in interest towards alternative computational methods, like the application of quantitative structure-activity/toxicity relationship (QSAR/QSTR) models for risk assessment of nanoparticles. However, most QSAR/QSTR models developed so far have predicted cytotoxicity against only one cell line, and they did not provide information regarding the influence of important factors other than composition or size. This work reports a QSTR-perturbation model aiming at simultaneously predicting the cytotoxicity of different nanoparticles against several mammalian cell lines, also considering different times of exposure of the cell lines, as well as the chemical composition of the nanoparticles, their size, the conditions under which the size was measured, and their shape. The derived QSTR-perturbation model, using a dataset of 1681 cases (nanoparticle-nanoparticle pairs), exhibited an accuracy higher than 93% for both training and prediction sets. In order to demonstrate the practical applicability of our model, the cytotoxicity of different silica (SiO2), nickel (Ni), and nickel(ii) oxide (NiO) nanoparticles were predicted and found to be in very good agreement with experimental reports.
To the best of our knowledge, this is the first attempt to simultaneously predict the cytotoxicity of nanoparticles under multiple experimental conditions by applying a single unique QSTR model. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr01285b
Toward the Development of an Objective Index of Dysphonia Severity: A Four-Factor Acoustic Model
ERIC Educational Resources Information Center
Awan, Shaheen N.; Roy, Nelson
2006-01-01
During assessment and management of individuals with voice disorders, clinicians routinely attempt to describe or quantify the severity of a patient's dysphonia. This investigation used acoustic measures derived from sustained vowel samples to predict dysphonia severity (as determined by auditory-perceptual ratings), for a diverse set of voice…
Constructing Data Albums for Significant Severe Weather Events
NASA Technical Reports Server (NTRS)
Greene, Ethan; Zavodsky, Bradley; Ramachandran, Rahul; Kulkarni, Ajinkya; Li, Xiang; Bakare, Rohan; Basyal, Sabin; Conover, Helen
2014-01-01
Data Albums provide a one-stop shop combining datasets from NASA, the NWS, online news sources, and social media. Data Albums will help meteorologists better understand severe weather events in order to improve predictive models. A new ontology for severe weather was developed based on the current hurricane Data Album, and relevant NASA datasets were selected for inclusion.
Xu, Wen-Shen; Qiu, Xiao-Ming; Ou, Qi-shui; Liu, Can; Lin, Jin-Piao; Chen, Hui-Juan; Lin, Sheng; Wang, Wen-Hua; Lin, Shou-Rong; Chen, Jing
2015-03-01
We aimed to study whether red blood cell distribution width (RDW) could be one of the variables determining the extent of liver fibrosis and inflammation in patients with biopsy-proven hepatitis B. A total of 446 hepatitis B virus-infected patients who underwent liver biopsy were divided into 2 groups: absent or mild and moderate-severe according to the severity of liver fibrosis and inflammation. The independent variables that determine the severity of liver fibrosis and inflammation were explored. RDW values increased with progressive liver fibrosis and inflammation. After adjustments for other potent predictors, liver fibrosis (moderate-severe) was independently associated with RDW, platelet, and albumin (odds ratio = 1.121, 0.987, and 0.941, respectively), whereas increased odds ratios of significant inflammation were found for RDW, alanine aminotransferase, albumin, and PLT (odds ratio = 1.146, 1.003, 0.927, and 0.990, respectively). The sensitivity and specificity of model A were 70.0% and 62.9% for detection of significant liver fibrosis [area under the receiver-operating characteristic curve (AUC) = 0.713, P < 0.001]. The sensitivity and specificity of model B were 66.1% and 79.4% for predicting advanced liver inflammation (AUC = 0.765, P < 0.001). Compared with preexisting indicators, model A achieved the highest AUC, whereas model B showed a higher AUC than RDW to platelet ratio (0.670, P < 0.001) and FIB-4 (0.740, P = 0.32). RDW may provide a useful clinical value for predicting liver fibrosis and necroinflammation in hepatitis B-infected patients with other markers.
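The C-statistics (AUCs) reported above can be computed from predicted scores without drawing an explicit ROC curve, since AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one (the Mann-Whitney interpretation). The scores below are toy values, not outputs of the study's models.

```python
# AUC via the Mann-Whitney U interpretation: fraction of (positive, negative)
# pairs where the positive case scores higher, counting ties as half.
def roc_auc(scores_pos, scores_neg):
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.7, 0.6]  # e.g. model scores, moderate-severe group
neg = [0.5, 0.4, 0.7, 0.2]  # e.g. model scores, absent-mild group
auc = roc_auc(pos, neg)
```

This O(n·m) pairwise form is fine for small samples; rank-based formulas give the same value in O(n log n) for registry-sized data.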
Ecological Forecasting in Chesapeake Bay: Using a Mechanistic-Empirical Modelling Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. W.; Hood, Raleigh R.; Long, Wen
The Chesapeake Bay Ecological Prediction System (CBEPS) automatically generates daily nowcasts and three-day forecasts of several environmental variables, such as sea-surface temperature and salinity, the concentrations of chlorophyll, nitrate, and dissolved oxygen, and the likelihood of encountering several noxious species, including harmful algal blooms and water-borne pathogens, for the purpose of monitoring the Bay's ecosystem. While the physical and biogeochemical variables are forecast mechanistically using the Regional Ocean Modeling System configured for the Chesapeake Bay, the species predictions are generated using a novel mechanistic-empirical approach, whereby real-time output from the coupled physical-biogeochemical model drives multivariate empirical habitat models of the target species. The predictions, in the form of digital images, are available via the World Wide Web to interested groups to guide recreational, management, and research activities. Though full validation of the integrated forecasts for all species is still a work in progress, we argue that the mechanistic-empirical approach can be used to generate a wide variety of short-term ecological forecasts, and that it can be applied in any marine system where sufficient data exist to develop empirical habitat models. This paper provides an overview of this system, its predictions, and the approach taken.
Bolduc, David L; Bünger, Rolf; Moroni, Maria; Blakely, William F
2016-12-01
Multiple hematological biomarkers (i.e. complete blood counts and serum chemistry parameters) were used in a multivariate linear-regression fit to create predictive algorithms for estimating the severity of hematopoietic acute radiation syndrome (H-ARS) using two different species (i.e. Göttingen Minipig and non-human primate (NHP) (Macaca mulatta)). Biomarker data were analyzed prior to irradiation and between 1-60 days (minipig) and 1-30 days (NHP) after irradiation exposures of 1.6-3.5 Gy (minipig) and 6.5 Gy (NHP) 60Co gamma-ray doses at 0.5-0.6 Gy min-1 and 0.4 Gy min-1, respectively. Fitted radiation risk and injury categorization (RRIC) values and RRIC prediction percent accuracies were compared between the two models. Both models estimated H-ARS severity with over 80% overall predictive power and with receiver operating characteristic curve area values of 0.884 and 0.825. These results based on two animal radiation models support the concept of using a hematopoietic-based algorithm for predicting the risk of H-ARS in humans. Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Wet tropospheric delays forecast based on Vienna Mapping Function time series analysis
NASA Astrophysics Data System (ADS)
Rzepecka, Zofia; Kalita, Jakub
2016-04-01
It is well known that the dry part of the zenith tropospheric delay (ZTD) is much easier to model than the wet part (ZTW). The aim of the research is to apply stochastic modeling and prediction of ZTW using time series analysis tools. Application of time series analysis enables a closer understanding of ZTW behavior as well as short-term prediction of future ZTW values. The ZTW data used for the studies were obtained from the GGOS service hosted by the Vienna University of Technology. The resolution of the data is six hours. ZTW data for the years 2010-2013 were adopted for the study. The International GNSS Service (IGS) permanent stations LAMA and GOPE, located at mid-latitudes, were selected for the investigation. Initially, the seasonal part was separated and modeled using periodic signals and frequency analysis. The prominent annual and semi-annual signals were removed using sine and cosine functions. The autocorrelation of the resulting signal is significant for several days (20-30 samples). The residuals of this fitting were further analyzed and modeled with ARIMA processes. For both stations, optimal ARMA processes were obtained based on several criteria. On this basis, predicted ZTW values were computed one day ahead, leaving white-noise residuals. The accuracy of the prediction can be estimated at about 3 cm.
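The two-stage procedure described (harmonic removal of annual and semi-annual signals, then an autoregressive fit on the residual) can be sketched on synthetic data; an AR(1) stands in here for the reported ARIMA fits, and every series value below is simulated rather than taken from GGOS.

```python
import math
import random

# Stage 1 helper: least-squares amplitude of sin(2*pi*k*t/period), valid
# when the series spans whole periods (harmonics are orthogonal).
def harmonic_coef(series, k, period):
    n = len(series)
    return 2.0 / n * sum(series[t] * math.sin(2 * math.pi * k * t / period)
                         for t in range(n))

random.seed(42)
n = period = 1460                      # one synthetic year at 6 h resolution
noise = [0.0] * n
for t in range(1, n):                  # AR(1) residual with phi = 0.8
    noise[t] = 0.8 * noise[t - 1] + random.gauss(0.0, 0.001)
ztw = [0.10 * math.sin(2 * math.pi * t / period)      # annual signal (m)
       + 0.03 * math.sin(4 * math.pi * t / period)    # semi-annual signal
       + noise[t] for t in range(n)]

# Stage 1: estimate and remove the annual and semi-annual terms
a1 = harmonic_coef(ztw, 1, period)
a2 = harmonic_coef(ztw, 2, period)
resid = [ztw[t] - a1 * math.sin(2 * math.pi * t / period)
         - a2 * math.sin(4 * math.pi * t / period) for t in range(n)]

# Stage 2: AR(1) coefficient from the residual, then a one-step prediction
phi = (sum(resid[t] * resid[t - 1] for t in range(1, n))
       / sum(r * r for r in resid[:-1]))
next_resid = phi * resid[-1]           # predicted next residual value
```

The recovered amplitudes land close to the simulated 0.10 and 0.03, and the AR coefficient near 0.8, illustrating why separating the deterministic seasonal part first makes the stochastic fit tractable.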
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reedlunn, Benjamin
Room D was an in-situ, isothermal, underground experiment conducted at the Waste Isolation Pilot Plant between 1984 and 1991. The room was carefully instrumented to measure the horizontal and vertical closure immediately upon excavation and for several years thereafter. Early finite element simulations of salt creep around Room D under-predicted the vertical closure by 4.5×, causing investigators to explore a series of changes to the way Room D was modeled. Discrepancies between simulations and measurements were resolved through a series of adjustments to model parameters, which were openly acknowledged in published reports. Interest in Room D has been rekindled recently by the U.S./German Joint Project III and Project WEIMOS, which seek to improve the predictions of rock salt constitutive models. Joint Project participants calibrate their models solely against laboratory tests, and benchmark the models against underground experiments, such as Room D. This report describes updating legacy Room D simulations to today's computational standards by rectifying several numerical issues. Subsequently, the constitutive model used in previous modeling is recalibrated in two different ways against a suite of new laboratory creep experiments on salt extracted from the repository horizon of the Waste Isolation Pilot Plant. Simulations with the new, laboratory-based calibrations under-predict Room D vertical closure by 3.1×. A list of potential improvements is discussed.
Model Determined for Predicting Fatigue Lives of Metal Matrix Composites Under Mean Stresses
NASA Technical Reports Server (NTRS)
Lerch, Bradley
1997-01-01
Aircraft engine components invariably are subjected to mean stresses over and above the cyclic loads. In monolithic materials, it has been observed that tensile mean stresses are detrimental and compressive mean stresses are beneficial to fatigue life in comparison to a base of zero mean stress. Several mean stress models exist for monolithic metals, but each differs quantitatively in the extent to which detrimental or beneficial effects are ascribed. There have been limited attempts to apply these models to metal matrix composites. At the NASA Lewis Research Center, several mean stress models--the Smith-Watson-Topper, Walker, Normalized Goodman, and Soderberg models--were examined for applicability to this class of composite materials. The Soderberg approach, which normalizes the mean stress to a 0.02-percent yield strength, was shown to best represent the effect of mean stresses over the range covered. The other models varied significantly in their predictability and often failed to predict the composite behavior at very high tensile mean stresses. This work is the first to systematically demonstrate the influence of mean stresses on metal matrix composites and model their effects. Attention also was given to fatigue-cracking mechanisms in the Ti-15-3 matrix and to micromechanics analyses of mean stress effects.
NASA Astrophysics Data System (ADS)
Neris, Jonay; Elliot, William J.; Doerr, Stefan H.; Robichaud, Peter R.
2017-04-01
An estimated 15% of the world's population lives in volcanic areas. Recent catastrophic erosion events following wildfires in volcanic terrain have highlighted the geomorphological instability of this soil type under disturbed conditions and steep slopes. Predicting the hydrological and erosional response of these soils in the post-fire period is the first step in designing and developing adequate actions to minimize post-fire risks. In this work we apply, for the first time, the Water Erosion Prediction Project (WEPP) model for predicting erosion and runoff events in fire-affected volcanic soils in Europe. Two areas affected by wildfires in 2015 were selected in Tenerife (Spain), representative of different fire behaviour (downhill surface fire with long residence time vs. uphill crown fire with short residence time), severity (moderate vs. light soil burn severity) and climatic conditions (average annual precipitation of 750 and 210 mm, respectively). The actual erosion processes were monitored in the field using silt fences. Rainfall and rill simulations were conducted to determine hydrologic, interrill and rill erosion parameters. The soils were sampled, and key properties used as model inputs were evaluated. During the first 18 months after the fire, 7 storms produced runoff and erosion in the selected areas. Sediment delivery reached 5.4 and 2.5 Mg ha-1, respectively, in the first rainfall event monitored after the fire, figures comparable to those reported for fire-affected areas of the western USA with similar climatic conditions but lower than those reported for wetter environments. The validation of the WEPP model using field data showed reasonable estimates of hillslope sediment delivery in the post-fire period; therefore, this model can support land managers in volcanic areas of Europe in predicting post-fire hydrological and erosional risks and designing suitable mitigation treatments.
NASA Astrophysics Data System (ADS)
Neris, Jonay; Robichaud, Peter R.; Elliot, William J.; Doerr, Stefan H.; Notario del Pino, Jesús S.; Lado, Marcos
2017-04-01
Port, M; Pieper, B; Knie, T; Dörr, H; Ganser, A; Graessle, D; Meineke, V; Abend, M
2017-08-01
Rapid clinical triage of radiation injury patients is essential for determining appropriate diagnostic and therapeutic interventions. We examined the utility of blood cell counts (BCCs) in the first three days postirradiation to predict clinical outcome, specifically for hematologic acute radiation syndrome (HARS). We analyzed BCC test samples from radiation accident victims (n = 135) along with their clinical outcome HARS severity scores (H1-4) using the System for Evaluation and Archiving of Radiation Accidents based on Case Histories (SEARCH) database. Data from nonirradiated individuals (H0, n = 132) were collected from an outpatient facility. We created binary categories for severity scores, i.e., 1 (H0 vs. H1-4), 2 (H0-1 vs. H2-4) and 3 (H0-2 vs. H3-4), to assess the discrimination ability of BCCs using unconditional logistic regression analysis. The test sample contained 454 BCCs from 267 individuals. We validated the discrimination ability on a second independent group comprising 275 BCCs from 252 individuals originating from SEARCH (HARS 1-4), an outpatient facility (H0) and hospitals (e.g., leukemia patients, H4). Individuals with a score of H0 were easily separated from exposed individuals based on developing lymphopenia and granulocytosis. The separation of H0 and H1-4 became more prominent with increasing hematologic severity scores and time. On day 1, lymphocyte counts were most predictive for discriminating binary categories, followed by granulocytes and thrombocytes. For days 2 and 3, an almost complete separation was achieved when BCCs from different days were combined, supporting the measurement of sequential BCCs. We found an almost complete discrimination of H0 vs. irradiated individuals during model validation (negative predictive value, NPV > 94%) for all three days, while the correct prediction of exposed individuals increased from day 1 (positive predictive value, PPV 78-89%) to day 3 (PPV > 90%). 
The models were unable to provide predictions for 10.9% of the test samples, because the PPVs or NPVs did not reach a 95% likelihood defined as the lower limit for a prediction. We developed a prediction model spreadsheet to provide early and prompt diagnostic predictions and therapeutic recommendations including identification of the worried well, requirement of hospitalization or development of severe hematopoietic syndrome. These results improve the provisional classification of HARS. For the final diagnosis, further procedures (sequential diagnosis, retrospective dosimetry, clinical follow-up, etc.) must be taken into account. Clinical outcome of radiation injury patients can be rapidly predicted within the first three days postirradiation using peripheral BCC.
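The performance figures above are stated as PPVs and NPVs against a 95% likelihood floor. As a reminder of how those quantities relate to a 2×2 classification table, here is a minimal sketch; the counts are hypothetical, not taken from the study:

```python
def ppv_npv(tp, fp, tn, fn):
    """Predictive values from a 2x2 table of classifier counts.

    tp/fp: truly exposed / truly unexposed individuals classified as exposed
    tn/fn: truly unexposed / truly exposed individuals classified as unexposed
    """
    ppv = tp / (tp + fp)  # P(truly exposed | classified exposed)
    npv = tn / (tn + fn)  # P(truly unexposed | classified unexposed)
    return ppv, npv

# Hypothetical day-3 counts: 90 of 100 "exposed" calls correct,
# 95 of 100 "unexposed" calls correct
ppv, npv = ppv_npv(tp=90, fp=10, tn=95, fn=5)  # ppv = 0.9, npv = 0.95
```

Under the study's decision rule, a prediction is issued only when the relevant predictive value reaches the 95% likelihood floor; otherwise (10.9% of test samples here) no prediction is made.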
Meteorological models for estimating phenology of corn
NASA Technical Reports Server (NTRS)
Daughtry, C. S. T.; Cochran, J. C.; Hollinger, S. E.
1984-01-01
Knowledge of when critical crop stages occur and how the environment affects them should provide useful information for crop management decisions and crop production models. Two sources of data were evaluated for predicting dates of silking and physiological maturity of corn (Zea mays L.). Initial evaluations were conducted using data of an adapted corn hybrid grown on a Typic Argiaquoll at the Purdue University Agronomy Farm. The second phase extended the analyses to large areas using data acquired by the Statistical Reporting Service of the USDA for crop reporting districts (CRD) in Indiana and Iowa. Several thermal models were compared with calendar days for predicting dates of silking and physiological maturity. Mixed models, which used a combination of thermal units to predict silking and days after silking to predict physiological maturity, were also evaluated. At the Agronomy Farm the models were calibrated and tested on the same data. The thermal models were significantly less biased and more accurate than calendar days for predicting dates of silking. Differences among the thermal models were small. Significant improvements in both bias and accuracy were observed when the mixed models were used to predict dates of physiological maturity. The results indicate that statistical data for CRDs can be used to evaluate models developed at agricultural experiment stations.
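The abstract does not specify which thermal-unit formulation performed best. As an illustration of the general idea, a growing-degree-day accumulation can be sketched as follows; the 10/30 °C base and ceiling are a common convention for corn, not values taken from the paper:

```python
def daily_gdd(tmax, tmin, base=10.0, ceiling=30.0):
    """Daily growing degree days (thermal units) in degrees C."""
    tmax = min(max(tmax, base), ceiling)  # cap extremes at the ceiling...
    tmin = min(max(tmin, base), ceiling)  # ...and floor them at the base
    return (tmax + tmin) / 2.0 - base

def thermal_units(daily_temps, base=10.0, ceiling=30.0):
    """Accumulate thermal units over (tmax, tmin) days; a stage such as
    silking is predicted when the sum crosses a calibrated total."""
    return sum(daily_gdd(tx, tn, base, ceiling) for tx, tn in daily_temps)

thermal_units([(30, 20), (35, 5)])  # -> 25.0 (second day capped to 30/10)
```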
Prediction of Multiple Infections After Severe Burn Trauma: a Prospective Cohort Study
Yan, Shuangchun; Tsurumi, Amy; Que, Yok-Ai; Ryan, Colleen M.; Bandyopadhaya, Arunava; Morgan, Alexander A.; Flaherty, Patrick J.; Tompkins, Ronald G.; Rahme, Laurence G.
2014-01-01
Objective To develop predictive models for early triage of burn patients based on hyper-susceptibility to repeated infections. Background Infection remains a major cause of mortality and morbidity after severe trauma, demanding new strategies to combat infections. Models for infection prediction are lacking. Methods Secondary analysis of 459 burn patients (≥16 years old) with ≥20% total body surface area burns recruited from six US burn centers. We compared blood transcriptomes with a 180-h cut-off on the injury-to-transcriptome interval of 47 patients (≤1 infection episode) to those of 66 hyper-susceptible patients (multiple [≥2] infection episodes [MIE]). We used LASSO regression to select biomarkers and multivariate logistic regression to build models, the accuracy of which was assessed by the area under the receiver operating characteristic curve (AUROC) and cross-validation. Results Three predictive models were developed with covariates of: (1) clinical characteristics; (2) expression profiles of 14 genomic probes; (3) a combination of (1) and (2). The genomic and clinical models were highly predictive of MIE status (AUROCGenomic = 0.946 [95% CI, 0.906–0.986]; AUROCClinical = 0.864 [CI, 0.794–0.933]; AUROCGenomic vs. AUROCClinical, P = 0.044). The combined model had an increased AUROCCombined of 0.967 (CI, 0.940–0.993) compared with the individual models (AUROCCombined vs. AUROCClinical, P = 0.0069). Hyper-susceptible patients showed early alterations in immune-related signaling pathways, epigenetic modulation and chromatin remodeling. Conclusions Early triage of burn patients more susceptible to infections can be made using clinical characteristics and/or genomic signatures. The genomic signature suggests new insights into the pathophysiology of hyper-susceptibility to infection that may lead to novel therapeutic or prophylactic targets. PMID:24950278
Roozenbeek, Bob; Lingsma, Hester F.; Lecky, Fiona E.; Lu, Juan; Weir, James; Butcher, Isabella; McHugh, Gillian S.; Murray, Gordon D.; Perel, Pablo; Maas, Andrew I.R.; Steyerberg, Ewout W.
2012-01-01
Objective The International Mission on Prognosis and Analysis of Clinical Trials (IMPACT) and Corticoid Randomisation After Significant Head injury (CRASH) prognostic models predict outcome after traumatic brain injury (TBI) but have not been compared in large datasets. The objective of this study is to externally validate and compare the IMPACT and CRASH prognostic models for prediction of outcome after moderate or severe TBI. Design External validation study. Patients We considered 5 new datasets with a total of 9036 patients, comprising three randomized trials and two observational series, containing prospectively collected individual TBI patient data. Measurements Outcomes were mortality and unfavourable outcome, based on the Glasgow Outcome Score (GOS) at six months after injury. To assess performance, we studied the discrimination of the models (by AUCs) and their calibration (by comparison of mean observed to predicted outcomes and by calibration slopes). Main Results The highest discrimination was found in the TARN trauma registry (AUCs between 0.83 and 0.87), and the lowest discrimination in the Pharmos trial (AUCs between 0.65 and 0.71). Although differences in predictor effects between development and validation populations were found (calibration slopes varying between 0.58 and 1.53), the differences in discrimination were largely explained by differences in case-mix in the validation studies. Calibration was good: the fraction of observed outcomes generally agreed well with the mean predicted outcome. No meaningful differences were noted in performance between the IMPACT and CRASH models. More complex models discriminated slightly better than simpler variants. Conclusions Since both the IMPACT and the CRASH prognostic models show good generalizability to more recent data, they are valid instruments to quantify prognosis in TBI. PMID:22511138
Bayesian averaging over Decision Tree models for trauma severity scoring.
Schetinin, V; Jakaite, L; Krzanowski, W
2018-01-01
Health care practitioners analyse possible risks of misleading decisions and need to estimate and quantify uncertainty in predictions. We have examined the "gold" standard of screening a patient's condition for predicting survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology rests on theoretical assumptions about the data and their uncertainties. Models induced within such an approach have exposed a number of problems, producing unexplained fluctuations in predicted survival and low accuracy in estimating the uncertainty intervals within which predictions are made. A Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, was adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.
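The abstract does not detail how the Decision Tree posterior is averaged. The core idea of Bayesian model averaging, posterior-weighted predictions whose spread quantifies uncertainty, can be sketched generically; the weights and probabilities below are invented for illustration and the paper's actual sampler is not shown:

```python
def bma_predict(model_probs, weights):
    """Average the survival probabilities proposed by an ensemble of models
    (e.g. Decision Trees sampled from the posterior), weighting each model
    by its unnormalised posterior weight. Returns (mean, variance); the
    weighted variance is a simple measure of predictive uncertainty."""
    total = sum(weights)
    w = [wi / total for wi in weights]
    mean = sum(wi * p for wi, p in zip(w, model_probs))
    var = sum(wi * (p - mean) ** 2 for wi, p in zip(w, model_probs))
    return mean, var

# Two equally weighted trees predicting 80% and 60% survival
bma_predict([0.8, 0.6], [1.0, 1.0])  # mean ~ 0.7, variance ~ 0.01
```

A wide predictive variance signals that the ensemble disagrees, which is exactly the uncertainty information the single logistic model fails to provide.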
Finite Element Model Development For Aircraft Fuselage Structures
NASA Technical Reports Server (NTRS)
Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.
2000-01-01
The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results.
Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.
Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L
2016-01-01
The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.
Neurocognitive predictors of financial capacity in traumatic brain injury.
Martin, Roy C; Triebel, Kristen; Dreer, Laura E; Novack, Thomas A; Turner, Crystal; Marson, Daniel C
2012-01-01
To develop cognitive models of financial capacity (FC) in patients with traumatic brain injury (TBI). Longitudinal design. Inpatient brain injury rehabilitation unit. Twenty healthy controls and 24 adults with moderate-to-severe TBI were assessed at baseline (30 days postinjury) and at 6 months postinjury. The FC instrument (FCI) and a neuropsychological test battery. Univariate correlation and multiple regression procedures were employed to develop cognitive models of FCI performance in the TBI group at baseline and at 6-month follow-up. Three cognitive predictor models of FC were developed. At baseline, measures of mental arithmetic/working memory and immediate verbal memory predicted baseline FCI performance (R = 0.72). At 6-month follow-up, measures of executive function and mental arithmetic/working memory predicted 6-month FCI performance (R = 0.79), and a third model found that these 2 measures at baseline predicted 6-month FCI performance (R = 0.71). Multiple cognitive functions are associated with initial impairment and partial recovery of FC in moderate-to-severe TBI patients. In particular, arithmetic, working memory, and executive function skills appear critical to recovery of FC in TBI. The study results represent an initial step toward developing a neurocognitive model of FC in patients with TBI.
Computer Model Inversion and Uncertainty Quantification in the Geosciences
NASA Astrophysics Data System (ADS)
White, Jeremy T.
The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. 
In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data for reducing parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley-fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.
a Probability Model for Drought Prediction Using Fusion of Markov Chain and SAX Methods
NASA Astrophysics Data System (ADS)
Jouybari-Moghaddam, Y.; Saradjian, M. R.; Forati, A. M.
2017-09-01
Drought is one of the most destructive natural disasters and affects many aspects of the environment, most severely in arid and semi-arid areas. Monitoring and predicting the severity of drought can be useful in managing the natural disasters it causes. Many indices are used in predicting droughts, such as SPI, VCI, and TVX. In this paper, based on three data sets (rainfall, NDVI, and land surface temperature) acquired from MODIS satellite imagery, time series of SPI, VCI, and TVX covering winter 2000 to summer 2015 were created for the eastern region of Isfahan province. Using these indices and a fusion of symbolic aggregate approximation (SAX) and a hidden Markov chain, drought was predicted for fall 2015. For this purpose, each time series was first transformed into a set of qualitative states based on drought condition (five groups) using the SAX algorithm; a probability matrix for the future state was then created using a hidden Markov chain. The fall drought severity was predicted by fusing the probability matrix with the drought severity state in summer 2015. The prediction assigns a likelihood to each drought state: severe drought, moderate drought, normal, moderately wet and severely wet. The analysis and experimental results show that the output of the proposed algorithm is acceptable and that the algorithm is appropriate and efficient for predicting drought from remotely sensed data.
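The pipeline described above (SAX discretisation of an index time series, then a transition matrix over the resulting states) can be sketched minimally. The 5-symbol breakpoints are the standard SAX values for equiprobable Gaussian regions; the state labels and the plain (non-hidden) Markov chain here are simplifications of the paper's method:

```python
from statistics import mean, pstdev

# Standard SAX breakpoints for a 5-symbol alphabet (equiprobable under N(0,1))
BREAKPOINTS = [-0.84, -0.25, 0.25, 0.84]
STATES = ["severe drought", "moderate drought", "normal",
          "moderately wet", "severely wet"]

def sax_symbols(series):
    """Z-normalise a drought-index series and map each value to a state 0..4."""
    mu, sd = mean(series), pstdev(series)
    z = [(x - mu) / sd for x in series]
    return [sum(b < v for b in BREAKPOINTS) for v in z]

def transition_probs(symbols, n_states=5):
    """Row-normalised matrix P[i][j] = P(next state is j | current state is i)."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(symbols, symbols[1:]):
        counts[a][b] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs
```

Given the summer 2015 state s, the fall forecast is then the row P[s], i.e. a likelihood over the five drought states.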
Sears, Jeanne M; Blanar, Laura; Bowman, Stephen M
2014-01-01
Acute work-related trauma is a leading cause of death and disability among U.S. workers. Occupational health services researchers have described the pressing need to identify valid injury severity measures for purposes such as case-mix adjustment and the construction of appropriate comparison groups in programme evaluation, intervention, quality improvement, and outcome studies. The objective of this study was to compare the performance of several injury severity scores and scoring methods in the context of predicting work-related disability and medical cost outcomes. Washington State Trauma Registry (WTR) records for injuries treated from 1998 to 2008 were linked with workers' compensation claims. Several Abbreviated Injury Scale (AIS)-based injury severity measures (ISS, New ISS, maximum AIS) were estimated directly from ICD-9-CM codes using two software packages: (1) ICDMAP-90, and (2) Stata's user-written ICDPIC programme (ICDPIC). ICDMAP-90 and ICDPIC scores were compared with existing WTR scores using the Akaike Information Criterion, amount of variance explained, and estimated effects on outcomes. Competing risks survival analysis was used to evaluate work disability outcomes. Adjusted total medical costs were modelled using linear regression. The linked sample contained 6052 work-related injury events. There was substantial agreement between WTR scores and those estimated by ICDMAP-90 (kappa=0.73), and between WTR scores and those estimated by ICDPIC (kappa=0.68). Work disability and medical costs increased monotonically with injury severity, and injury severity was a significant predictor of work disability and medical cost outcomes in all models. WTR and ICDMAP-90 scores performed better with regard to predicting outcomes than did ICDPIC scores, but effect estimates were similar. Of the three severity measures, maxAIS was usually weakest, except when predicting total permanent disability. 
Injury severity was significantly associated with work disability and medical cost outcomes for work-related injuries. Injury severity can be estimated using either ICDMAP-90 or ICDPIC when ICD-9-CM codes are available. We observed little practical difference between severity measures or scoring methods. This study demonstrated that using existing software to estimate injury severity may be useful to enhance occupational injury surveillance and research. Copyright © 2013 Elsevier Ltd. All rights reserved.
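For readers unfamiliar with the AIS-based measures being compared, their definitions are simple enough to state in code. This sketch uses the standard formulas only; ICDMAP-90 and ICDPIC additionally map ICD-9-CM codes to AIS severities, which is not shown:

```python
def iss(injuries):
    """Injury Severity Score: sum of squares of the highest AIS severity in
    each of the three most severely injured distinct body regions.
    `injuries` is a list of (body_region, ais_severity) pairs; any AIS of 6
    sets the score to its maximum of 75 by convention."""
    if any(sev == 6 for _, sev in injuries):
        return 75
    worst_per_region = {}
    for region, sev in injuries:
        worst_per_region[region] = max(worst_per_region.get(region, 0), sev)
    top3 = sorted(worst_per_region.values(), reverse=True)[:3]
    return sum(s * s for s in top3)

def niss(injuries):
    """New ISS: the three highest AIS severities regardless of body region."""
    if any(sev == 6 for _, sev in injuries):
        return 75
    top3 = sorted((sev for _, sev in injuries), reverse=True)[:3]
    return sum(s * s for s in top3)

def max_ais(injuries):
    """Maximum AIS severity over all injuries."""
    return max(sev for _, sev in injuries)

case = [("head", 4), ("head", 3), ("chest", 3), ("abdomen", 2)]
# iss(case) == 29, niss(case) == 34, max_ais(case) == 4
```

The worked case shows why NISS can exceed ISS: the second head injury counts toward NISS but is discarded by ISS's one-score-per-region rule.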
EPA's Models-3 CMAQ system is intended to provide a community modeling paradigm that allows continuous improvement of the one-atmosphere modeling capability in a unified fashion. CMAQ's modular design promotes incorporation of several sets of science process modules representing ...
Comparing an annual and daily time-step model for predicting field-scale P loss
USDA-ARS?s Scientific Manuscript database
Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...
Choudhry, Shahid A.; Li, Jing; Davis, Darcy; Erdmann, Cole; Sikka, Rishi; Sutariya, Bharat
2013-01-01
Introduction: Preventing the occurrence of hospital readmissions is needed to improve quality of care and foster population health across the care continuum. Hospitals are being held accountable for improving transitions of care to avert unnecessary readmissions. Advocate Health Care in Chicago and Cerner (ACC) collaborated to develop all-cause, 30-day hospital readmission risk prediction models to identify patients that need interventional resources. Ideally, prediction models should encompass several qualities: they should have high predictive ability; use reliable and clinically relevant data; use vigorous performance metrics to assess the models; be validated in populations where they are applied; and be scalable in heterogeneous populations. However, a systematic review of prediction models for hospital readmission risk determined that most performed poorly (average C-statistic of 0.66) and efforts to improve their performance are needed for widespread usage. Methods: The ACC team incorporated electronic health record data, utilized a mixed-method approach to evaluate risk factors, and externally validated their prediction models for generalizability. Inclusion and exclusion criteria were applied to the patient cohort, which was then split into derivation and internal validation sets. Stepwise logistic regression was performed to develop two predictive models: one for admission and one for discharge. The prediction models were assessed for discrimination ability, calibration, overall performance, and then externally validated. Results: The ACC Admission and Discharge Models demonstrated modest discrimination ability during derivation, internal and external validation post-recalibration (C-statistic of 0.76 and 0.78, respectively), and reasonable model fit during external validation for utility in heterogeneous populations. Conclusions: The ACC Admission and Discharge Models embody the design qualities of ideal prediction models. 
The ACC plans to continue its partnership to further improve and develop valuable clinical models. PMID:24224068
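The C-statistic quoted throughout (0.66 for typical models, 0.76 and 0.78 for the ACC models) is the probability that a randomly chosen readmitted patient was assigned a higher predicted risk than a randomly chosen non-readmitted one. A minimal sketch, with invented predictions rather than study data:

```python
def c_statistic(predicted_risks, readmitted):
    """Concordance (equivalent to AUROC): the fraction of
    (readmitted, not-readmitted) patient pairs in which the readmitted
    patient has the higher predicted risk; ties count as half-concordant."""
    pos = [p for p, y in zip(predicted_risks, readmitted) if y == 1]
    neg = [p for p, y in zip(predicted_risks, readmitted) if y == 0]
    score = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return score / (len(pos) * len(neg))

# Hypothetical risks for two readmitted (y=1) and two non-readmitted patients:
# 3 of the 4 cross-pairs are correctly ordered
c_statistic([0.9, 0.6, 0.4, 0.2], [1, 0, 1, 0])  # -> 0.75
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why an average of 0.66 across published models is considered poor.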
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
Cook, D A
2006-04-01
Models that estimate the probability of death of intensive care unit patients can be used to stratify patients according to the severity of their condition and to control for casemix and severity of illness. These models have been used for risk adjustment in quality monitoring, administration, management and research and as an aid to clinical decision making. Models such as the Mortality Prediction Model family, SAPS II, APACHE II, APACHE III and the organ system failure models provide estimates of the probability of in-hospital death of ICU patients. This review examines methods to assess the performance of these models. The key attributes of a model are discrimination (the accuracy of the ranking in order of probability of death) and calibration (the extent to which the model's prediction of probability of death reflects the true risk of death). These attributes should be assessed in existing models that predict the probability of patient mortality, and in any subsequent model that is developed for the purposes of estimating these probabilities. The literature contains a range of assessment approaches, which are reviewed, and a survey of the methodologies used in studies of intensive care mortality models is presented. The systematic approach used by Standards for Reporting Diagnostic Accuracy provides a framework to incorporate these theoretical considerations of model assessment, and recommendations are made for evaluation and presentation of the performance of models that estimate the probability of death of intensive care patients.
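Of the two attributes the review emphasises, discrimination is usually summarised by a ranking statistic, while calibration compares predicted probabilities with observed death rates within risk groups. A minimal, Hosmer-Lemeshow-style grouping sketch (the data are invented for illustration):

```python
def calibration_table(predicted, died, n_groups=5):
    """Sort patients by predicted probability of death, split them into
    risk groups, and compare the mean prediction with the observed death
    rate in each group. A well-calibrated model gives similar numbers
    within every group."""
    paired = sorted(zip(predicted, died))
    size = len(paired) // n_groups
    table = []
    for g in range(n_groups):
        # last group absorbs any remainder
        chunk = paired[g * size:] if g == n_groups - 1 else paired[g * size:(g + 1) * size]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        obs_rate = sum(y for _, y in chunk) / len(chunk)
        table.append((mean_pred, obs_rate))
    return table
```

Note that a model can discriminate perfectly yet be badly calibrated (e.g. every probability doubled preserves the ranking), which is why both attributes must be assessed separately.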
Predicting oral relative bioavailability of arsenic in soil from in vitro bioaccessibility
Several investigations have been conducted to develop in vitro bioaccessibility (IVBA) assays that reliably predict in vivo oral relative bioavailability (RBA) of arsenic (As). This study describes a meta-regression model relating soil As RBA and IVBA that is based upon data comb...
FATE AND TRANSPORT OF EMISSIONS FOR SEVERAL TRACE METALS OVER THE UNITED STATES
A regional model for atmospheric photochemistry and particulate matter is used to predict the fate and transport of five trace metals: lead, manganese, total chromium, nickel, and cadmium over the continental United States during January and July 2001. Predicted concentrations of...
Phillips, Robert S; Sung, Lillian; Amman, Roland A; Riley, Richard D; Castagnola, Elio; Haeusler, Gabrielle M; Klaassen, Robert; Tissing, Wim J E; Lehrnbecher, Thomas; Chisholm, Julia; Hakim, Hana; Ranasinghe, Neil; Paesmans, Marianne; Hann, Ian M; Stewart, Lesley A
2016-01-01
Background: Risk-stratified management of fever with neutropenia (FN) allows intensive management of high-risk cases and early discharge of low-risk cases. No single, internationally validated prediction model of the risk of adverse outcomes exists for children and young people. An individual patient data (IPD) meta-analysis was undertaken to devise one. Methods: The ‘Predicting Infectious Complications in Children with Cancer' (PICNICC) collaboration was formed by parent representatives and international clinical and methodological experts. Univariable and multivariable analyses, using random-effects logistic regression, were undertaken to derive and internally validate a risk-prediction model for outcomes of episodes of FN based on clinical and laboratory data at presentation. Results: Data came from 22 study groups in 15 countries, covering 5127 episodes of FN in 3504 patients. There were 1070 episodes in 616 patients from seven studies available for multivariable analysis. Univariable analyses showed associations with microbiologically defined infection (MDI) for many variables, including higher temperature, lower white cell counts and acute myeloid leukaemia, but not age. Osteosarcoma/Ewing sarcoma and more severe mucositis were associated with a decreased risk of MDI. The predictive model included: malignancy type, temperature, clinically ‘severely unwell', haemoglobin, white cell count and absolute monocyte count. It showed moderate discrimination (AUROC 0.723, 95% confidence interval 0.711–0.759) and good calibration (calibration slope 0.95). The model was robust to bootstrap and cross-validation sensitivity analyses. Conclusions: This new prediction model for risk of MDI appears accurate. It requires prospective studies assessing implementation to assist clinicians and parents/patients in individualised decision making. PMID:26954719
NASA Astrophysics Data System (ADS)
Paz, Shlomit; Goldstein, Pavel; Kordova-Biezuner, Levana; Adler, Lea
2017-04-01
Exposure to benzene has been associated with multiple severe impacts on health. This notwithstanding, at most monitoring stations, benzene is not monitored on a regular basis. The aims of the study were to compare benzene rates in different urban environments (region with heavy traffic and industrial region), to analyse the relationship between benzene and meteorological parameters in a Mediterranean climate type, to estimate the linkages between benzene and NOx and to suggest a prediction model for benzene rates based on NOx levels in order to contribute to a better estimation of benzene. Data were used from two different monitoring stations, located on the eastern Mediterranean coast: 1) a traffic monitoring station in Tel Aviv, Israel (TLV) located in an urban region with heavy traffic; 2) a general air quality monitoring station in Haifa Bay (HIB), located in Israel's main industrial region. At each station, hourly, daily, monthly, seasonal, and annual data of benzene, NOx, mean temperature, relative humidity, inversion level, and temperature gradient were analysed over three years: 2008, 2009, and 2010. A prediction model for benzene rates based on NOx levels (which are monitored regularly) was developed to contribute to a better estimation of benzene. The severity of benzene pollution was found to be considerably higher at the traffic monitoring station (TLV) than at the general air quality station (HIB), despite the location of the latter in an industrial area. Hourly, daily, monthly, seasonal, and annual patterns have been shown to coincide with anthropogenic activities (traffic), the day of the week, and atmospheric conditions. A strong correlation between NOx and benzene allowed the development of a prediction model for benzene rates, based on NOx, the day of the week, and the month. The model succeeded in predicting the benzene values throughout the year (except for September).
The severity of benzene pollution was found to be considerably higher at the traffic station (TLV) than at the general air quality station (HIB), despite the latter's location in an industrial area. Hourly, daily, seasonal, and annual patterns of benzene rates have been shown to coincide with anthropogenic activities (traffic), the day of the week, and atmospheric conditions. A prediction model for benzene rates was developed, based on NOx, the day of the week, and the month. The model suggested in this study might be useful for identifying the potential risk of benzene in other urban environments.
Study of the Mutual Interaction Between a Wing Wake and an Encountering Airplane
NASA Technical Reports Server (NTRS)
Walden, A. B.; vanDam, C. P.
1996-01-01
In an effort to increase airport productivity, several wind-tunnel and flight-test programs are currently underway to determine safe reductions in separation standards between aircraft. These programs are designed to study numerous concepts from the characteristics and detection of wake vortices to the wake-vortex encounter phenomenon. As part of this latter effort, computational tools are being developed and utilized as a means of modeling and verifying wake-vortex hazard encounters. The objective of this study is to assess the ability of PMARC, a low-order potential-flow panel method, to predict the forces and moments imposed on a following business-jet configuration by a vortex interaction. Other issues addressed include the investigation of several wake models and their ability to predict wake shape and trajectory, the validity of the velocity field imposed on the following configuration, modeling techniques and the effect of the high-lift system and the empennage. Comparisons with wind-tunnel data reveal that PMARC predicts the characteristics for the clean wing-body following configuration fairly well. Non-linear effects produced by the addition of the high-lift system and empennage, however, are not so well predicted.
Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E
2017-07-01
High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved, by 70% on average, in multivariate pedigree and genomic models that included secondary traits in both training and test populations. Additionally, (i) predictive abilities varied only slightly among the MT, RR, and SR models in this data set, (ii) including BLUPs of secondary traits from the MT model was best under severe drought, and (iii) the RR model was slightly better than the SR and MT models under the drought environment. Copyright © 2017 Crop Science Society of America.
Understanding reduced rotavirus vaccine efficacy in low socio-economic settings.
Lopman, Benjamin A; Pitzer, Virginia E; Sarkar, Rajiv; Gladstone, Beryl; Patel, Manish; Glasser, John; Gambhir, Manoj; Atchison, Christina; Grenfell, Bryan T; Edmunds, W John; Kang, Gagandeep; Parashar, Umesh D
2012-01-01
Rotavirus vaccine efficacy ranges from >90% in high socio-economic settings (SES) to 50% in low SES. With the imminent introduction of rotavirus vaccine in low SES countries, understanding reasons for reduced efficacy in these settings could identify strategies to improve vaccine performance. We developed a mathematical model to predict rotavirus vaccine efficacy in high, middle and low SES based on data specific for each setting on incidence, protection conferred by natural infection and immune response to vaccination. We then examined factors affecting efficacy. Vaccination was predicted to prevent 93%, 86% and 51% of severe rotavirus gastroenteritis in high, middle and low SES, respectively. Also predicted was that vaccines are most effective against severe disease and efficacy declines with age in low but not high SES. Reduced immunogenicity of vaccination and reduced protection conferred by natural infection are the main factors that compromise efficacy in low SES. The continued risk of severe disease in non-primary natural infections in low SES is a key factor underpinning reduced efficacy of rotavirus vaccines. Predicted efficacy was remarkably consistent with observed clinical trial results from different SES, validating the model. The phenomenon of reduced vaccine efficacy can be predicted by intrinsic immunological and epidemiological factors of low SES populations. Modifying aspects of the vaccine (e.g. improving immunogenicity in low SES) and vaccination program (e.g. additional doses) may bring improvements.
The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms
Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie
2009-01-01
The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. 
The operational capabilities of this modeling system are designed to provide emergency planners with the critical information they need to respond quickly and efficiently and to increase public safety and mitigate damage associated with powerful coastal storms. For instance, high resolution local models will predict detailed wave heights, breaking patterns, and current strengths for use in warning systems for harbor-mouth navigation and densely populated coastal regions where beach safety is threatened. The offline applications are intended to equip coastal managers with the information needed to manage and allocate their resources effectively to protect sections of coast that may be most vulnerable to future severe storms.
Predicting intensity ranks of peptide fragment ions.
Frank, Ari M
2009-05-01
Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal multiple reaction monitoring (MRM) transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html.
Predicting Intensity Ranks of Peptide Fragment Ions
Frank, Ari M.
2009-01-01
Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal MRM transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html. PMID:19256476
Thomas, Reuben; Thomas, Russell S.; Auerbach, Scott S.; Portier, Christopher J.
2013-01-01
Background: Several groups have employed genomic data from subchronic chemical toxicity studies in rodents (90 days) to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to belong to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. Objectives: To develop a molecular pathway-based prediction model of long-term hepatocarcinogenicity using 90-day gene expression data and to evaluate the performance of this model with respect to both intra-species, dose-dependent and cross-species predictions. Methods: Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 were positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. Results: Using 5-fold cross validation, the developed prediction model had reasonable predictive performance, with an area under the receiver-operator curve (AUC) of 0.66. The prediction model was then used to extrapolate the results to data associated with rat and human liver cancer. The extrapolated model worked well for both species (AUC of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Conclusions: Pathway-based prediction models estimated from subchronic data hold promise for predicting long-term carcinogenicity and for extrapolating results across multiple species. PMID:23737943
Thomas, Reuben; Thomas, Russell S; Auerbach, Scott S; Portier, Christopher J
2013-01-01
Several groups have employed genomic data from subchronic chemical toxicity studies in rodents (90 days) to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to belong to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. To develop a molecular pathway-based prediction model of long term hepatocarcinogenicity using 90-day gene expression data and to evaluate the performance of this model with respect to both intra-species, dose-dependent and cross-species predictions. Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 were positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. Using 5-fold cross validation, the developed prediction model had reasonable predictive performance with the area under receiver-operator curve (AUC) equal to 0.66. The developed prediction model was then used to extrapolate the results to data associated with rat and human liver cancer. The extrapolated model worked well for both extrapolated species (AUC value of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Pathway-based prediction models estimated from sub-chronic data hold promise for predicting long-term carcinogenicity and for extrapolating results across multiple species.
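The evaluation protocol described, 5-fold cross-validation scored by AUC, can be sketched as follows. The pathway features, tumour labels, and the stand-in scorer are all hypothetical; a real analysis would fit a random forest on the four training folds at each step rather than apply a fixed scoring rule:

```python
import random

def five_fold_indices(n):
    """Split n samples into five disjoint, interleaved test folds."""
    return [list(range(k, n, 5)) for k in range(5)]

def auc(scores, labels):
    """Rank-based AUC: probability a positive case outscores a negative one."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical pathway-response features for 26 chemicals (10 tumorigenic).
random.seed(1)
labels = [1] * 10 + [0] * 16
features = [[random.gauss(0.5 * y, 1.0) for _ in range(8)] for y in labels]

aucs = []
for fold in five_fold_indices(len(labels)):
    # Stand-in "model": mean pathway response. A random forest trained on
    # the other four folds would replace this line in the real workflow.
    scores = [sum(features[i]) / len(features[i]) for i in fold]
    aucs.append(auc(scores, [labels[i] for i in fold]))

print(round(sum(aucs) / len(aucs), 3))  # cross-validated AUC estimate
```

Averaging per-fold AUCs in this way is what yields a single cross-validated figure comparable to the 0.66 reported in the abstract.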
Topographies and dynamics on multidimensional potential energy surfaces
NASA Astrophysics Data System (ADS)
Ball, Keith Douglas
The stochastic master equation is a valuable tool for elucidating potential energy surface (PES) details that govern structural relaxation in clusters, bulk systems, and protein folding. This work develops a comprehensive framework for studying non-equilibrium relaxation dynamics using the master equation. Since our master equations depend upon accurate partition function models for use in Rice-Ramsperger-Kassel-Marcus (RRKM) transition state theory, this work introduces several such models employing various harmonic and anharmonic approximations and compares their predicted equilibrium population distributions with those determined from molecular dynamics. This comparison is performed for the fully-delineated surfaces (KCl)5 and Ar9 to evaluate model performance for potential surfaces with long- and short-range interactions, respectively. For each system, several models perform better than a simple harmonic approximation. While no model gives acceptable results for all minima, and optimal modeling strategies differ for (KCl)5 and Ar9, a particular one-parameter model gives the best agreement with simulation for both systems. We then construct master equations from these models and compare their isothermal relaxation predictions for (KCl)5 and Ar9 with molecular dynamics simulations. This is the first comprehensive test of the kinetic performance of partition function models of its kind. Our results show that accurate modeling of transition-state partition functions is more important for (KCl)5 than for Ar9 in reproducing simulation results, due to a marked stiffening anharmonicity in the transition-state normal modes of (KCl)5. For both systems, several models yield qualitative agreement with simulation over a large temperature range.
To examine the robustness of the master equation when applied to larger systems, for which full topographical descriptions would be either impossible or infeasible, we compute relaxation predictions for Ar11 using a master equation constructed from data representing the full PES, and compare these predictions to those of reduced master equations based on statistical samples of the full PES. We introduce a sampling method which generates random, Boltzmann-weighted, energetically 'downhill' sequences. The study reveals that, at moderate temperatures, the slowest relaxation timescale converges as the number of sequences in a sample grows to ~1000. Furthermore, the asymptotic timescale is comparable to the full-PES value.
Zhou, Jinzhe; Zhou, Yanbing; Cao, Shougen; Li, Shikuan; Wang, Hao; Niu, Zhaojian; Chen, Dong; Wang, Dongsheng; Lv, Liang; Zhang, Jian; Li, Yu; Jiao, Xuelong; Tan, Xiaojie; Zhang, Jianli; Wang, Haibo; Zhang, Bingyuan; Lu, Yun; Sun, Zhenqing
2016-01-01
Reporting of surgical complications is common, but few reports provide information about complication severity or estimate the associated risk factors, and those that do lack specificity. We retrospectively analyzed data on 2795 gastric cancer patients who underwent surgery at the Affiliated Hospital of Qingdao University between June 2007 and June 2012, and established a multivariate logistic regression model to identify risk factors for postoperative complications graded according to the Clavien-Dindo classification system. Twenty-four of 86 variables were statistically significant in univariate logistic regression analysis; the 11 significant variables that entered the multivariate analysis were used to produce the risk model. Liver cirrhosis, diabetes mellitus, Child classification, invasion of neighboring organs, combined resection, intraoperative transfusion, Billroth II anastomosis of reconstruction, malnutrition, surgical volume of surgeons, operating time and age were independent risk factors for postoperative complications after gastrectomy. Based on the logistic regression equation p = exp(ΣβiXi) / (1 + exp(ΣβiXi)), a multivariate logistic regression model that calculates the risk of postoperative morbidity was developed: p = 1/(1 + e^(4.810 - 1.287X1 - 0.504X2 - 0.500X3 - 0.474X4 - 0.405X5 - 0.318X6 - 0.316X7 - 0.305X8 - 0.278X9 - 0.255X10 - 0.138X11)). The accuracy, sensitivity and specificity of the model in predicting postoperative complications were 86.7%, 76.2% and 88.6%, respectively. This risk model, based on the Clavien-Dindo system for grading complication severity and on logistic regression analysis, can predict severe morbidity specific to an individual patient's risk factors, can estimate patients' risks and benefits of gastric surgery as an accurate decision-making tool, and may serve as a template for the development of risk models for other surgical groups.
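For illustration, the abstract's fitted equation can be applied directly to score a hypothetical patient. The mapping of X1 to X11 onto the eleven risk factors in the order listed is an assumption here, as is the 0/1 coding of each factor:

```python
import math

INTERCEPT = 4.810
# Coefficients from the equation above, in the listed order (X1 = liver
# cirrhosis ... X11 = age); this ordering is assumed, not stated explicitly.
COEFFS = [1.287, 0.504, 0.500, 0.474, 0.405,
          0.318, 0.316, 0.305, 0.278, 0.255, 0.138]

def complication_risk(x):
    """p = 1 / (1 + exp(4.810 - sum(b_i * x_i))) for 0/1 risk factors x."""
    z = INTERCEPT - sum(b * xi for b, xi in zip(COEFFS, x))
    return 1.0 / (1.0 + math.exp(z))

print(round(complication_risk([0] * 11), 3))  # no risk factors present
print(round(complication_risk([1] * 11), 3))  # all risk factors present
```

Under this reading, a patient with no risk factors has a predicted morbidity risk under 1%, while one positive for all eleven factors approaches 50%, with liver cirrhosis (X1) carrying the largest single weight.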
Age structure is critical to the population dynamics and survival of honeybee colonies
Betti, M. I.; Wahl, L. M.
2016-01-01
Age structure is an important feature of the division of labour within honeybee colonies, but its effects on colony dynamics have rarely been explored. We present a model of a honeybee colony that incorporates this key feature, and use this model to explore the effects of both winter and disease on the fate of the colony. The model offers a novel explanation for the frequently observed phenomenon of ‘spring dwindle’, which emerges as a natural consequence of the age-structured dynamics. Furthermore, the results indicate that a model taking age structure into account markedly affects the predicted timing and severity of disease within a bee colony. The timing of the onset of disease with respect to the changing seasons may also have a substantial impact on the fate of a honeybee colony. Finally, simulations predict that an infection may persist in a honeybee colony over several years, with effects that compound over time. Thus, the ultimate collapse of the colony may be the result of events several years past. PMID:28018627
Effort test failure: toward a predictive model.
Webb, James W; Batchelor, Jennifer; Meares, Susanne; Taylor, Alan; Marsh, Nigel V
2012-01-01
Predictors of effort test failure were examined in an archival sample of 555 traumatically brain-injured (TBI) adults. Logistic regression models were used to examine whether compensation-seeking, injury-related, psychological, demographic, and cultural factors predicted effort test failure (ETF). ETF was significantly associated with compensation-seeking (OR = 3.51, 95% CI [1.25, 9.79]), low education (OR: 0.83 [0.74, 0.94]), self-reported mood disorder (OR: 5.53 [3.10, 9.85]), exaggerated displays of behavior (OR: 5.84 [2.15, 15.84]), psychotic illness (OR: 12.86 [3.21, 51.44]), being foreign-born (OR: 5.10 [2.35, 11.06]), having sustained a workplace accident (OR: 4.60 [2.40, 8.81]), and mild traumatic brain injury severity compared with very severe traumatic brain injury severity (OR: 0.37 [0.13, 0.995]). ETF was associated with a broader range of statistical predictors than has previously been identified, and the relative importance of psychological and behavioral predictors of ETF was evident in the logistic regression model. Variables that might potentially extend the model of ETF are identified for future research efforts.
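The odds ratios and confidence intervals quoted above are exponentiated logistic regression coefficients. A small sketch of that mapping, back-calculating the standard error from the reported compensation-seeking interval (rounding in the published figures means the recovered bounds match only approximately):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """OR = exp(beta); 95% CI = exp(beta -/+ z * SE) for a logistic coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

beta = math.log(3.51)                    # coefficient implied by OR = 3.51
se = math.log(9.79 / 1.25) / (2 * 1.96)  # SE implied by the reported CI width
or_, lo, hi = odds_ratio_ci(beta, se)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The same transformation explains why intervals for large ORs (e.g. psychotic illness, 12.86) are so wide: the CI is symmetric on the log-odds scale, not on the OR scale.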
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, S.R.; Hoffman, F.O.; Koehler, H.
1996-08-01
A unique opportunity to test dose assessment models arose after the Chernobyl reactor accident. During the passage of the contaminated plume, concentrations of ¹³¹I and ¹³⁷Cs in air, pasture, and cow's milk were collected at various sites in the northern hemisphere. Afterwards, contaminated pasture and milk samples were analyzed over time. Under the auspices of the Biospheric Model Validation Study (BIOMOVS), data from 13 sites for ¹³¹I and 10 sites for ¹³⁷Cs were used to test model predictions for the air-pasture-cow milk pathway. Calculations were submitted for 23 models, 10 of which were quasi-steady state. The others were time-dependent. Daily predictions and predictions of time-integrated concentration of ¹³¹I and ¹³⁷Cs in pasture grass and milk for six months post-accident were calculated and compared with observed data. Testing against data from several locations over time for several steps in the air-to-milk pathway resulted in a better understanding of important processes and how they should be modeled. This model testing exercise showed both the strengths and weaknesses of the models and revealed the importance of testing all parts of dose assessment models whenever possible. 19 refs., 14 figs., 4 tabs.
Landscape modeling for Everglades ecosystem restoration
DeAngelis, D.L.; Gross, L.J.; Huston, M.A.; Wolff, W.F.; Fleming, D.M.; Comiskey, E.J.; Sylvester, S.M.
1998-01-01
A major environmental restoration effort is under way that will affect the Everglades and its neighboring ecosystems in southern Florida. Ecosystem and population-level modeling is being used to help in the planning and evaluation of this restoration. The specific objective of one of these modeling approaches, the Across Trophic Level System Simulation (ATLSS), is to predict the responses of a suite of higher trophic level species to several proposed alterations in Everglades hydrology. These include several species of wading birds, the snail kite, Cape Sable seaside sparrow, Florida panther, white-tailed deer, American alligator, and American crocodile. ATLSS is an ecosystem landscape-modeling approach and uses Geographic Information System (GIS) vegetation data and existing hydrology models for South Florida to provide the basic landscape for these species. A method of pseudotopography provides estimates of water depths through time at 28 × 28-m resolution across the landscape of southern Florida. Hydrologic model output drives models of habitat and prey availability for the higher trophic level species. Spatially explicit, individual-based computer models simulate these species. ATLSS simulations can compare the landscape dynamic spatial pattern of the species resulting from different proposed water management strategies. Here we compare the predicted effects of one possible change in water management in South Florida with the base case of no change. Preliminary model results predict substantial differences between these alternatives in some biotic spatial patterns. © 1998 Springer-Verlag.
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, an area that has recently experienced great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and can accommodate general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information into the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
Estimating wildfire risk on a Mojave Desert landscape using remote sensing and field sampling
Van Linn, Peter F.; Nussear, Kenneth E.; Esque, Todd C.; DeFalco, Lesley A.; Inman, Richard D.; Abella, Scott R.
2013-01-01
Predicting wildfires that affect broad landscapes is important for allocating suppression resources and guiding land management. Wildfire prediction in the south-western United States is of specific concern because of the increasing prevalence and severe effects of fire on desert shrublands and the current lack of accurate fire prediction tools. We developed a fire risk model to predict fire occurrence in a north-eastern Mojave Desert landscape. First we developed a spatial model using remote sensing data to predict fuel loads based on field estimates of fuels. We then modelled fire risk (interactions of fuel characteristics and environmental conditions conducive to wildfire) using satellite imagery, our model of fuel loads, and spatial data on ignition potential (lightning strikes and distance to roads), topography (elevation and aspect) and climate (maximum and minimum temperatures). The risk model was developed during a fire year at our study landscape and validated at a nearby landscape; model performance was accurate and similar at both sites. This study demonstrates that remote sensing techniques used in combination with field surveys can accurately predict wildfire risk in the Mojave Desert and may be applicable to other arid and semiarid lands where wildfires are prevalent.
Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain
Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises
2015-01-01
Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156
Comparison of Statistical Models for Analyzing Wheat Yield Time Series
Michel, Lucie; Makowski, David
2013-01-01
The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha−1 year−1 in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale. PMID:24205280
Keller, Roberta L; Feng, Rui; DeMauro, Sara B; Ferkol, Thomas; Hardie, William; Rogers, Elizabeth E; Stevens, Timothy P; Voynow, Judith A; Bellamy, Scarlett L; Shaw, Pamela A; Moore, Paul E
2017-08-01
To assess the utility of clinical predictors of persistent respiratory morbidity in extremely low gestational age newborns (ELGANs). We enrolled ELGANs (<29 weeks' gestation) at ≤7 postnatal days and collected antenatal and neonatal clinical data through 36 weeks' postmenstrual age. We surveyed caregivers at 3, 6, 9, and 12 months' corrected age to identify postdischarge respiratory morbidity, defined as hospitalization, home support (oxygen, tracheostomy, ventilation), medications, or symptoms (cough/wheeze). Infants were classified as having postprematurity respiratory disease (PRD, the primary study outcome) if respiratory morbidity persisted over ≥2 questionnaires. Infants were classified with severe respiratory morbidity if there were multiple hospitalizations, exposure to systemic steroids or pulmonary vasodilators, home oxygen after 3 months or mechanical ventilation, or symptoms despite inhaled corticosteroids. Mixed-effects models generated with data available at 1 day (perinatal) and 36 weeks' postmenstrual age were assessed for predictive accuracy. Of 724 infants (918 ± 234 g, 26.7 ± 1.4 weeks' gestational age) classified for the primary outcome, 68.6% had PRD; 245 of 704 (34.8%) were classified as severe. Male sex, intrauterine growth restriction, maternal smoking, race/ethnicity, intubation at birth, and public insurance were retained in perinatal and 36-week models for both PRD and respiratory morbidity severity. The perinatal model accurately predicted PRD (c-statistic 0.858). Neither the 36-week model nor the addition of bronchopulmonary dysplasia (BPD) to the perinatal model improved accuracy (0.856, 0.860); the c-statistic for BPD alone was 0.907. Both BPD and perinatal clinical data accurately identify ELGANs at risk for persistent and severe respiratory morbidity at 1 year. ClinicalTrials.gov: NCT01435187. Copyright © 2017 Elsevier Inc. All rights reserved.
Model for predicting the injury severity score.
Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi
2015-07-01
To determine the formula that predicts the injury severity score from parameters that are obtained in the emergency department at arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05. To select objective variables, the stepwise method was used. A total of 122 patients were included in this study. The formula for predicting the injury severity score (ISS) was as follows: ISS = 13.252 - 0.078(mean blood pressure) + 0.12(fibrin degradation products). The P-value of this formula from analysis of variance was <0.001, and the multiple correlation coefficient (R) was 0.739 (R² = 0.546). The multiple correlation coefficient adjusted for the degrees of freedom was 0.538. The Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because we can predict the injury severity score easily in the emergency department.
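The fitted regression equation above can be expressed as a small helper function. This is only a sketch: the coefficients are taken from the abstract, while the function name and the illustrative patient values are assumptions, and the input units (e.g. mmHg for mean blood pressure) are not stated in the abstract.

```python
def predict_iss(mean_bp: float, fdp: float) -> float:
    """Predicted injury severity score from the reported regression:
    ISS = 13.252 - 0.078*(mean blood pressure) + 0.12*(fibrin degradation products).
    Coefficients are from the abstract; input units are assumed, not stated.
    """
    return 13.252 - 0.078 * mean_bp + 0.12 * fdp

# Hypothetical patient: mean BP 90, FDP 50
print(round(predict_iss(90, 50), 3))  # → 12.232
```

Note the signs: a lower mean blood pressure and a higher fibrin degradation product level both raise the predicted ISS, consistent with the abstract's conclusion.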
Just-in-Time Correntropy Soft Sensor with Noisy Data for Industrial Silicon Content Prediction.
Chen, Kun; Liang, Yu; Gao, Zengliang; Liu, Yi
2017-08-08
Development of accurate data-driven quality prediction models for industrial blast furnaces encounters several challenges, mainly because the collected data are nonlinear, non-Gaussian, and unevenly distributed. A just-in-time correntropy-based local soft sensing approach is presented in this work to predict the silicon content. Without cumbersome efforts at outlier detection, a correntropy support vector regression (CSVR) modeling framework is proposed to handle soft sensor development and outlier detection simultaneously. Moreover, with a continuously updated database and a clustering strategy, a just-in-time CSVR (JCSVR) method is developed. Consequently, more accurate prediction and efficient implementations of JCSVR can be achieved. The better prediction performance of JCSVR is validated on online silicon content prediction, compared with traditional soft sensors.
Juth, Vanessa; Smyth, Joshua M; Santuzzi, Alecia M
2008-10-01
Self-esteem has been demonstrated to predict health and well-being in a number of samples and domains using retrospective reports, but little is known about the effect of self-esteem in daily life. A community sample with asthma (n = 97) or rheumatoid arthritis (n = 31) completed a self-esteem measure and collected Ecological Momentary Assessment (EMA) data 5x/day for one week using a palmtop computer. Low self-esteem predicted more negative affect, less positive affect, greater stress severity, and greater symptom severity in daily life. Naturalistic exploration of mechanisms relating self-esteem to physiological and/or psychological components in illness may clarify causal relationships and inform theoretical models of self-care, well-being, and disease management.
Seasonal Atmospheric and Oceanic Predictions
NASA Technical Reports Server (NTRS)
Roads, John; Rienecker, Michele (Technical Monitor)
2003-01-01
Several projects associated with dynamical, statistical, single column, and ocean models are presented. The projects include: 1) Regional Climate Modeling; 2) Statistical Downscaling; 3) Evaluation of SCM and NSIPP AGCM Results at the ARM Program Sites; and 4) Ocean Forecasts.
Owens, Robert L; Edwards, Bradley A; Eckert, Danny J; Jordan, Amy S; Sands, Scott A; Malhotra, Atul; White, David P; Loring, Stephen H; Butler, James P; Wellman, Andrew
2015-06-01
Both anatomical and nonanatomical traits are important in obstructive sleep apnea (OSA) pathogenesis. We have previously described a model combining these traits, but have not determined its diagnostic accuracy to predict OSA. A valid model, and knowledge of the published effect sizes of trait manipulation, would also allow us to predict the number of patients with OSA who might be effectively treated without using positive airway pressure (PAP). Fifty-seven subjects with and without OSA underwent standard clinical and research sleep studies to measure OSA severity and the physiological traits important for OSA pathogenesis, respectively. The traits were incorporated into a physiological model to predict OSA. The model validity was determined by comparing the model prediction of OSA to the clinical diagnosis of OSA. The effect of various trait manipulations was then simulated to predict the proportion of patients treated by each intervention. The model had good sensitivity (80%) and specificity (100%) for predicting OSA. A single intervention on one trait would be predicted to treat OSA in approximately one quarter of all patients. Combination therapy with two interventions was predicted to treat OSA in ∼50% of patients. An integrative model of physiological traits can be used to predict population-wide and individual responses to non-PAP therapy. Many patients with OSA would be expected to be treated based on known trait manipulations, making a strong case for the importance of non-anatomical traits in OSA pathogenesis and the effectiveness of non-PAP therapies. © 2015 Associated Professional Sleep Societies, LLC.
Dynamics and Predictability of The Eta Regional Model: The Role of Domain Size
NASA Astrophysics Data System (ADS)
Vannitsem, S.; Chomé, F.; Nicolis, C.
This paper investigates the dynamical properties of the Eta model, a state-of-the-art nested limited-area model, following the approach previously developed by the present authors. It is first shown that the intrinsic dynamics of the model depends crucially on the size of the domain, with a non-chaotic behavior for small domains, supporting earlier findings on the absence of sensitivity to the initial conditions in these models. The quality of the predictions of several Eta model versions differing by their domain size is next evaluated and compared with the Avn analyses on a targeted region, centered on France. Contrary to what is usually taken for granted, a non-trivial relation between predictability and domain size is found, the best model versions being the ones integrated on the smallest and the largest domain sizes. An explanation in connection with the intrinsic dynamics of the model is advanced.
Nonlinear modeling of chaotic time series: Theory and applications
NASA Astrophysics Data System (ADS)
Casdagli, M.; Eubank, S.; Farmer, J. D.; Gibson, J.; Desjardins, D.; Hunter, N.; Theiler, J.
We review recent developments in the modeling and prediction of nonlinear time series. In some cases, apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases, it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years, methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics, and human speech.
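The two-step recipe the review describes, reconstructing a state space by delay embedding and then approximating the dynamics locally, can be illustrated with a minimal nearest-neighbour forecaster. The embedding dimension, delay, and sine test signal below are arbitrary choices for the sketch, not values taken from the text:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Stack delay vectors [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}]
    (Takens-style state-space reconstruction)."""
    n = len(x) - (dim - 1) * tau
    return np.array([x[i : i + dim * tau : tau] for i in range(n)])

def nn_forecast(x, dim=3, tau=1):
    """One-step forecast: find the historical delay vector nearest to the
    current one and return the value that followed it."""
    vecs = delay_embed(np.asarray(x, dtype=float), dim, tau)
    query, history = vecs[-1], vecs[:-1]
    j = np.argmin(np.linalg.norm(history - query, axis=1))
    return x[j + (dim - 1) * tau + 1]   # observed successor of the match

# Deterministic toy signal: forecasting it this way beats assuming noise
x = np.sin(0.2 * np.arange(200))
pred = nn_forecast(x, dim=4, tau=2)     # close to the true next value sin(40)
```

For a truly chaotic series one would use many neighbours and a local polynomial fit rather than a single nearest neighbour, but the structure of the method is the same.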
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Lei; Yang Jinmin
Little Higgs theory naturally predicts a light Higgs boson whose most important discovery channel at the LHC is the diphoton signal pp → h → γγ. In this work, we perform a comparative study of this signal in some typical little Higgs models, namely, the littlest Higgs model, two littlest Higgs models with T-parity (named LHT-I and LHT-II), and the simplest little Higgs model. We find that, compared with the standard model prediction, the diphoton signal rate is always suppressed, and the extent of suppression can be quite different for different models. The suppression is mild (≲10%) in the littlest Higgs model but can be quite severe (≈90%) in the other three models. This means that discovering the light Higgs boson predicted by the little Higgs theory through the diphoton channel at the LHC will be more difficult than discovering the standard model Higgs boson.
The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring
ERIC Educational Resources Information Center
Haberman, Shelby J.; Sinharay, Sandip
2010-01-01
Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a…
From field to region yield predictions in response to pedo-climatic variations in Eastern Canada
NASA Astrophysics Data System (ADS)
JÉGO, G.; Pattey, E.; Liu, J.
2013-12-01
The increase in global population coupled with new pressures to produce energy and bioproducts from agricultural land requires an increase in crop productivity. However, the influence of climate and soil variations on crop production and environmental performance is not fully understood and accounted for in defining more sustainable and economical management strategies. Regional crop modeling can be a great tool for understanding the impact of climate variations on crop production, for planning grain handling and for assessing the impact of agriculture on the environment, but it is often limited by the availability of input data. The STICS ("Simulateur mulTIdisciplinaire pour les Cultures Standard") crop model, developed by INRA (France), is a functional crop model with a built-in module that optimizes several input parameters by minimizing the difference between calculated and measured output variables, such as Leaf Area Index (LAI). The STICS crop model was adapted to the short growing season of the Mixedwood Plains Ecozone using field experiment results, to predict biomass and yield of soybean, spring wheat and corn. To minimize the number of inferences required for regional applications, 'generic' cultivars rather than specific ones have been calibrated in STICS. After the calibration of several model parameters, the root mean square error (RMSE) of yield and biomass predictions ranged from 10% to 30% for the three crops. A bit more scattering was obtained for LAI (20%
Sandin, Bonifacio; Sánchez-Arribas, Carmen; Chorot, Paloma; Valiente, Rosa M
2015-04-01
The present study examined the contribution of three main cognitive factors (i.e., anxiety sensitivity, catastrophic misinterpretations of bodily symptoms, and panic self-efficacy) in predicting panic disorder (PD) severity in a sample of patients with a principal diagnosis of panic disorder. It was hypothesized that anxiety sensitivity (AS), catastrophic misinterpretation of bodily sensations, and panic self-efficacy are uniquely related to panic disorder severity. One hundred and sixty-eight participants completed measures of AS, catastrophic misinterpretations of panic-like sensations, and panic self-efficacy prior to receiving treatment. Results of multiple linear regression analyses indicated that AS, catastrophic misinterpretations and panic self-efficacy independently predicted panic disorder severity. Results of path analyses indicated that AS was directly and indirectly (mediated by catastrophic misinterpretations) related to panic severity. Results provide evidence for a tripartite cognitive account of panic disorder. Copyright © 2015 Elsevier Ltd. All rights reserved.
Pre-operative prediction of surgical morbidity in children: comparison of five statistical models.
Cooper, Jennifer N; Wei, Lai; Fernandez, Soledad A; Minneci, Peter C; Deans, Katherine J
2015-02-01
The accurate prediction of surgical risk is important to patients and physicians. Logistic regression (LR) models are typically used to estimate these risks. However, in the fields of data mining and machine-learning, many alternative classification and prediction algorithms have been developed. This study aimed to compare the performance of LR to several data mining algorithms for predicting 30-day surgical morbidity in children. We used the 2012 National Surgical Quality Improvement Program-Pediatric dataset to compare the performance of (1) a LR model that assumed linearity and additivity (simple LR model) (2) a LR model incorporating restricted cubic splines and interactions (flexible LR model) (3) a support vector machine, (4) a random forest and (5) boosted classification trees for predicting surgical morbidity. The ensemble-based methods showed significantly higher accuracy, sensitivity, specificity, PPV, and NPV than the simple LR model. However, none of the models performed better than the flexible LR model in terms of the aforementioned measures or in model calibration or discrimination. Support vector machines, random forests, and boosted classification trees do not show better performance than LR for predicting pediatric surgical morbidity. After further validation, the flexible LR model derived in this study could be used to assist with clinical decision-making based on patient-specific surgical risks. Copyright © 2014 Elsevier Ltd. All rights reserved.
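For reference, the "simple LR model" the study uses as a baseline (linear, additive, no splines or interactions) can be fit in a few lines by gradient ascent on the log-likelihood. This is a generic sketch, not the authors' code; the synthetic data, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain logistic regression via batch gradient ascent on the mean
    log-likelihood: a toy stand-in for the 'simple LR model'."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted P(y = 1)
        w += lr * Xb.T @ (y - p) / len(y)       # gradient of mean log-likelihood
    return w

def predict_proba(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Synthetic, linearly separable toy data (nothing from the NSQIP dataset)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = fit_logistic(X, y)
acc = np.mean((predict_proba(w, X) > 0.5) == y)
```

The "flexible LR model" of the study would replace the raw columns of `X` with restricted-cubic-spline bases and interaction terms before the same fit; the optimization step is unchanged.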
Oguz, Cihan; Sen, Shurjo K; Davis, Adam R; Fu, Yi-Ping; O'Donnell, Christopher J; Gibbons, Gary H
2017-10-26
One goal of personalized medicine is leveraging the emerging tools of data science to guide medical decision-making. Achieving this using disparate data sources is most daunting for polygenic traits. To this end, we employed random forests (RFs) and neural networks (NNs) for predictive modeling of coronary artery calcium (CAC), which is an intermediate endo-phenotype of coronary artery disease (CAD). Model inputs were derived from advanced cases in the ClinSeq® discovery cohort (n=16) and the FHS replication cohort (n=36) from the 89th-99th CAC score percentile range, and age-matched controls (ClinSeq® n=16, FHS n=36) with no detectable CAC (all subjects were Caucasian males). These inputs included clinical variables and genotypes of 56 single nucleotide polymorphisms (SNPs) ranked highest in terms of their nominal correlation with the advanced CAC state in the discovery cohort. Predictive performance was assessed by computing the areas under receiver operating characteristic curves (ROC-AUC). RF models trained and tested with clinical variables generated ROC-AUC values of 0.69 and 0.61 in the discovery and replication cohorts, respectively. In contrast, in both cohorts, the set of SNPs derived from the discovery cohort were highly predictive (ROC-AUC ≥0.85) with no significant change in predictive performance upon integration of clinical and genotype variables. Using the 21 SNPs that produced optimal predictive performance in both cohorts, we developed NN models trained with ClinSeq® data and tested with FHS data and obtained high predictive accuracy (ROC-AUC=0.80-0.85) with several topologies. Several CAD and "vascular aging" related biological processes were enriched in the network of genes constructed from the predictive SNPs. We identified a molecular network predictive of advanced coronary calcium using genotype data from ClinSeq® and FHS cohorts.
Our results illustrate that machine learning tools, which utilize complex interactions between disease predictors intrinsic to the pathogenesis of polygenic disorders, hold promise for deriving predictive disease models and networks.
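The ROC-AUC figure of merit used throughout the abstract above can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity. This is a generic sketch rather than the authors' code, and it assumes no tied scores (tie handling would need average ranks):

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC-AUC via the Mann-Whitney rank-sum identity (untied scores):
    AUC = (sum of positive-class ranks - n_pos*(n_pos+1)/2) / (n_pos*n_neg)."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    ranks = np.empty(len(s))
    ranks[s.argsort()] = np.arange(1, len(s) + 1)  # 1-based ranks by score
    n_pos = int(y.sum())
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

Equivalently, the AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, which is why it is a natural summary for case/control cohorts like those above.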
Olatunji, Bunmi O; Ebesutani, Chad; Kim, Jingu; Riemann, Bradley C; Jacobi, David M
2017-04-15
Although studies have linked disgust proneness to the etiology and maintenance of obsessive-compulsive disorder (OCD) in adults, there remains a paucity of research examining the specificity of this association among youth. The present study employed structural equation modeling to examine the association between disgust proneness, negative affect, and OCD symptom severity in a clinical sample of youth admitted to a residential treatment facility (N = 471). Results indicate that disgust proneness and negative affect latent factors independently predicted an OCD symptom severity latent factor. However, when both variables were modeled as predictors simultaneously, latent disgust proneness remained significantly associated with OCD symptom severity, whereas the association between latent negative affect and OCD symptom severity became nonsignificant. Tests of mediation converged in support of disgust proneness as a significant intervening variable between negative affect and OCD symptom severity. Subsequent analysis showed that the path from disgust proneness to OCD symptom severity in the structural model was significantly stronger among those without a primary diagnosis of OCD compared to those with a primary diagnosis of OCD. Given the cross-sectional design, the causal inferences that can be made are limited. The present study is also limited by the exclusive reliance on self-report measures. Disgust proneness may play a uniquely important role in OCD among youth. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ying Yibin; Liu Yande; Tao Yang
2005-09-01
This research evaluated the feasibility of using Fourier-transform near-infrared (FT-NIR) spectroscopy to quantify the soluble-solids content (SSC) and the available acidity (VA) in intact apples. Partial least-squares calibration models, obtained from several preprocessing techniques (smoothing, derivative, etc.) in several wave-number ranges, were compared. The best models were obtained with a high coefficient of determination (r²) of 0.940 for the SSC and a moderate r² of 0.801 for the VA, root-mean-square errors of prediction of 0.272% and 0.053%, and root-mean-square errors of calibration of 0.261% and 0.046%, respectively. The results indicate that FT-NIR spectroscopy yields good predictions of the SSC and also show the feasibility of using it to predict the VA of apples.
Iakova, Maria; Ballabeni, Pierluigi; Erhart, Peter; Seichert, Nikola; Luthi, François; Dériaz, Olivier
2012-12-01
This study aimed to identify self-perception variables that may predict return to work (RTW) in orthopedic trauma patients 2 years after rehabilitation. A prospective cohort study followed 1,207 orthopedic trauma inpatients hospitalised in rehabilitation clinics at admission, discharge, and 2 years after discharge. Information on potential predictors was obtained from self-administered questionnaires. Multiple logistic regression models were applied. In the final model, a higher likelihood of RTW was predicted by: better general health and lower pain at admission; health and pain improvements during hospitalisation; a lower Impact of Event Scale-Revised (IES-R) avoidance behaviour score; a higher IES-R hyperarousal score; a higher SF-36 mental score; and low perceived severity of the injury. RTW is not only predicted by perceived health, pain and severity of the accident at the beginning of a rehabilitation program, but also by the changes in pain and health perceptions observed during hospitalisation.
A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.
Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He
2018-01-01
Financial distress prediction is an important and challenging research topic in the financial field. There are currently many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods outperform traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) the proposed model differs from previous models, which lack the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate rules and mathematical formulas of financial distress, providing references for investors and decision makers. The results show that the proposed method outperforms the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.
Consensus models to predict endocrine disruption for all ...
Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data including ToxCast and Tox21 assays to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte
Prediction of SA 349/2 GV blade loads in high speed flight using several rotor analyses
NASA Technical Reports Server (NTRS)
Gaubert, Michel; Yamauchi, Gloria K.
1987-01-01
The influence of blade dynamics, dynamic stall, and transonic aerodynamics on predictions of rotor loads in high-speed flight is presented. Data were obtained from an Aerospatiale Gazelle SA 349/2 helicopter with three Grande Vitesse blades. Several analyses are used for this investigation. First, blade dynamics effects on the correlation are studied using three rotor analyses which differ mainly in the method of calculating the blade elastic response. Next, an ONERA dynamic stall model is used to predict retreating blade stall. Finally, advancing blade aerodynamic loads are calculated using a NASA-developed rotorcraft analysis coupled with two transonic finite-difference analyses.
NASA Technical Reports Server (NTRS)
Fiorino, Michael; Goerss, James S.; Jensen, Jack J.; Harrison, Edward J., Jr.
1993-01-01
The paper evaluates the meteorological quality and operational utility of the Navy Operational Global Atmospheric Prediction System (NOGAPS) in forecasting tropical cyclones. It is shown that the model can provide useful predictions of motion and formation on a real-time basis in the western North Pacific. The meteorological characteristics of the NOGAPS tropical cyclone predictions are evaluated by examining the formation of low-level cyclone systems in the tropics and the vortex structure in the NOGAPS analysis and verifying 72-h forecasts. The adjusted NOGAPS track forecasts showed skill comparable to the baseline aid and the dynamical model. NOGAPS successfully predicted unusual equatorward turns for several straight-running cyclones.
Carbon cycling at the tipping point: Does ecosystem structure predict resistance to disturbance?
NASA Astrophysics Data System (ADS)
Gough, C. M.; Bond-Lamberty, B. P.; Stuart-Haentjens, E.; Atkins, J.; Haber, L.; Fahey, R. T.
2017-12-01
Ecosystems worldwide are subjected to disturbances that reshape their physical and biological structure and modify biogeochemical processes, including carbon storage and cycling rates. Disturbances, including those from insect pests, pathogens, and extreme weather, span a continuum of severity and, accordingly, may have different effects on carbon cycling processes. Some ecosystems resist biogeochemical changes following disturbance, until a critical threshold of severity is exceeded. The ecosystem properties underlying such functional resistance, and signifying when a tipping point will occur, however, are almost entirely unknown. Here, we present observational and experimental results from forests in the Great Lakes region, showing that ecosystem structure is closely coupled with carbon cycling responses to disturbance, with shifts in structure predicting thresholds of, and in some cases increases in, carbon storage. We find, among forests in the region, that carbon storage regularly exhibits a non-linear threshold response to increasing disturbance levels, but the severity at which a threshold is reached varies among disturbed forests. More biologically and structurally complex forest ecosystems sometimes exhibit greater functional resistance than simpler forests, and consequently may have a higher disturbance severity threshold. Counter to model predictions but consistent with some theoretical frameworks, empirical data show moderate levels of disturbance may increase ecosystem complexity to a point, thereby increasing rates of carbon storage. Disturbances that increase complexity therefore may stimulate carbon storage, while severe disturbances at or beyond thresholds may simplify structure, leading to carbon storage declines.
We conclude that ecosystem structural attributes are closely coupled with biogeochemical thresholds across disturbance severity gradients, suggesting that improved predictions of disturbance-related changes in the carbon cycle require better representation of ecosystem structure in models.
[Spatial distribution prediction of surface soil Pb in a battery contaminated site].
Liu, Geng; Niu, Jun-Jie; Zhang, Chao; Zhao, Xin; Guo, Guan-Lin
2014-12-01
In order to enhance the reliability of risk estimation and to improve the accuracy of pollution scope determination in a battery contaminated site with the soil characteristic pollutant Pb, four spatial interpolation models, including a Combination Prediction Model (OK(LG) + TIN), a kriging model (OK(BC)), an Inverse Distance Weighting model (IDW), and a Spline model, were employed to compare their effects on the spatial distribution and pollution assessment of soil Pb. The results showed that Pb concentration varied significantly and the data were severely skewed. The coefficient of variation was higher in local regions of the site. OK(LG) + TIN was found to be more accurate than the other three models in predicting the actual pollution situation of the contaminated site. The prediction accuracy of the other models was lower, owing to the differing principles of the models and the characteristics of the data. The interpolation results of OK(BC), IDW and Spline could not reflect the detailed characteristics of seriously contaminated areas and were not suitable for mapping and spatial distribution prediction of soil Pb in this site. This study provides useful references for defining the remediation boundary and making remediation decisions for contaminated sites.
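Of the interpolators compared above, inverse distance weighting is the simplest to state: each unsampled point is estimated as a distance-weighted average of nearby samples. A minimal 2-D IDW sketch, with hypothetical sample coordinates and Pb concentrations:

```python
import math

def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from samples
    given as (xi, yi, value) triples."""
    num = den = 0.0
    for xi, yi, v in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v          # exactly on a sample point
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical soil-Pb samples (x, y, mg/kg):
pts = [(0, 0, 120.0), (10, 0, 300.0), (0, 10, 80.0), (10, 10, 500.0)]
pb_center = idw(5.0, 5.0, pts)   # equidistant, so a plain average
```

Raising `power` localizes the estimate, which is one reason IDW can smear the detail of hot spots that the study found.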
Parotid gland mean dose as a xerostomia predictor in low-dose domains.
Gabryś, Hubert Szymon; Buettner, Florian; Sterzing, Florian; Hauswald, Henrik; Bangert, Mark
2017-09-01
Xerostomia is a common side effect of radiotherapy resulting from excessive irradiation of salivary glands. Typically, xerostomia is modeled by the mean dose-response characteristic of the parotid glands and prevented by mean dose constraints to either the contralateral or both parotid glands. The aim of this study was to investigate whether normal tissue complication probability (NTCP) models based on the mean radiation dose to the parotid glands are suitable for the prediction of xerostomia in the highly conformal low-dose regime of modern intensity-modulated radiotherapy (IMRT) techniques. We present a retrospective analysis of 153 head and neck cancer patients treated with radiotherapy. The Lyman-Kutcher-Burman (LKB) model was used to evaluate the predictive power of the parotid gland mean dose with respect to xerostomia at 6 and 12 months after the treatment. The predictive performance of the model was evaluated by receiver operating characteristic (ROC) curves and precision-recall (PR) curves. Average mean doses to the ipsilateral and contralateral parotid glands were 25.4 Gy and 18.7 Gy, respectively. QUANTEC constraints were met in 74% of patients. Mild to severe (G1+) xerostomia prevalence at both 6 and 12 months was 67%. Moderate to severe (G2+) xerostomia prevalence at 6 and 12 months was 20% and 15%, respectively. G1+ xerostomia was predicted reasonably well, with the area under the ROC curve ranging from 0.69 to 0.76. The LKB model failed to provide reliable G2+ xerostomia predictions at both time points. Reduction of the mean dose to the parotid glands below QUANTEC guidelines resulted in low G2+ xerostomia rates. In this dose domain, the mean dose models predicted G1+ xerostomia fairly well but failed to recognize patients at risk of G2+ xerostomia. There is a need for the development of more flexible models able to capture the complexity of dose response in this dose regime.
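For a mean-dose organ (volume parameter n = 1), the LKB model reduces to a probit function of the mean dose: NTCP = Φ((D − TD50) / (m · TD50)), with Φ the standard normal CDF. A minimal sketch; the TD50 and m values below are illustrative placeholders, not parameters fitted in this study.

```python
import math

def lkb_ntcp(mean_dose, td50, m):
    """Lyman-Kutcher-Burman NTCP for a mean-dose (n = 1) organ:
    NTCP = Phi((D - TD50) / (m * TD50)), Phi = standard normal CDF."""
    t = (mean_dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative parameters (hypothetical, not this study's fit):
td50, m = 39.9, 0.40
p_low  = lkb_ntcp(18.7, td50, m)   # contralateral-gland average dose above
p_high = lkb_ntcp(25.4, td50, m)   # ipsilateral-gland average dose above
```

Both example doses sit well below TD50, which illustrates the low-dose regime where the study found the model's discrimination breaks down for G2+ endpoints.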
Prediction of brittleness based on anisotropic rock physics model for kerogen-rich shale
NASA Astrophysics Data System (ADS)
Qian, Ke-Ran; He, Zhi-Liang; Chen, Ye-Quan; Liu, Xi-Wu; Li, Xiang-Yang
2017-12-01
The construction of a shale rock physics model and the selection of an appropriate brittleness index (BI) are two significant steps that influence the accuracy of brittleness prediction. On one hand, the existing models of kerogen-rich shale are controversial, so a reasonable rock physics model needs to be built. On the other hand, several types of equations already exist for predicting the BI, whose feasibility needs to be carefully considered. This study constructed a kerogen-rich rock physics model by applying the self-consistent approximation and the differential effective medium theory to model intercoupled clay and kerogen mixtures. The feasibility of our model was confirmed by comparison with classical models, showing better accuracy. Templates were constructed based on our model to link physical properties and the BI. Different equations for the BI had different sensitivities, making them suitable for different types of formations. Equations based on Young's modulus were sensitive to variations in lithology, while those using Lamé coefficients were sensitive to porosity and pore fluids. Physical information must be considered to improve brittleness prediction.
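One widely used family of BI equations averages normalized elastic moduli, e.g. a Young's-modulus/Poisson's-ratio form. The sketch below assumes that form; the normalization bounds are hypothetical choices for illustration, not values from this study.

```python
def brittleness_index(E, nu, E_min=10.0, E_max=80.0, nu_min=0.15, nu_max=0.40):
    """Elastic-parameter brittleness index: the average of Young's modulus
    (GPa, normalized so high = brittle) and Poisson's ratio (normalized so
    low = brittle). The normalization bounds are illustrative."""
    e_norm = (E - E_min) / (E_max - E_min)
    nu_norm = (nu_max - nu) / (nu_max - nu_min)
    return 0.5 * (e_norm + nu_norm)

bi_stiff = brittleness_index(60.0, 0.20)   # stiff, low-Poisson rock
bi_soft  = brittleness_index(20.0, 0.35)   # compliant, ductile rock
```

The normalization bounds act like the "templates" mentioned above: changing them changes which formations score as brittle, which is why the choice of BI equation matters.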
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in the time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on the admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short-period dynamics of the F-16 is used for illustration.
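The estimation criterion, maximizing the smallest requirement-compliance margin, can be sketched for a toy one-parameter linear model with a single admissible-error requirement, solved by coarse grid search. The data, error limit, and grid below are hypothetical.

```python
def margins(theta, data, err_limit):
    """Requirement-compliance margin for each input-output pair:
    positive when |model prediction - observation| is inside the
    admissible error limit."""
    return [err_limit - abs(theta * x - y) for x, y in data]

def maxmin_estimate(data, err_limit, grid):
    """Pick the parameter whose *smallest* margin is largest."""
    return max(grid, key=lambda th: min(margins(th, data, err_limit)))

# Hypothetical validation data for a linear model y = theta * x:
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
grid = [i / 100.0 for i in range(150, 251)]   # theta in [1.5, 2.5]
theta_hat = maxmin_estimate(data, err_limit=0.5, grid=grid)
```

The set of grid points whose minimum margin stays positive is exactly the requirement-compliant parameter set whose bounding the abstract describes.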
S. A. Covert; P. R. Robichaud; W. J. Elliot; T. E. Link
2005-01-01
This study evaluates runoff predictions generated by GeoWEPP (Geo-spatial interface to the Water Erosion Prediction Project) and a modified version of WEPP v98.4 for forest soils. Three small (2 to 9 ha) watersheds in the mountains of the interior Northwest were monitored for several years following timber harvest and prescribed fires. Observed climate variables,...
Predictability and Coupled Dynamics of MJO During DYNAMO
2013-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. APPROACH: We are working as a team to study MJO dynamics and predictability using several models, as team members of the ONR DRI associated with the DYNAMO experiment. This is a fundamentally collaborative proposal that involves close collaboration with Dr. Hyodae Seo of the
ERIC Educational Resources Information Center
Marcovitch, Stuart; Zelazo, Philip David
2006-01-01
Age-appropriate modifications of the A-not-B task were used to examine 2-year-olds' search behavior. Several theories predict that A-not-B errors will increase as a function of number of A trials. However, the hierarchical competing systems model (Marcovitch & Zelazo, 1999) predicts that although the ratio of perseverative to nonperseverative…
Simpson, Helen Blair; Maher, Michael J; Wang, Yuanjia; Bao, Yuanyuan; Foa, Edna B; Franklin, Martin
2011-04-01
To examine the effects of patient adherence on outcome from exposure and response prevention (EX/RP) therapy in adults with obsessive-compulsive disorder (OCD). Thirty adults with OCD were randomized to EX/RP (n = 15) or EX/RP augmented by motivational interviewing strategies (n = 15). Both treatments included 3 introductory sessions and 15 exposure sessions. Because there were no significant group differences in adherence or outcome, the groups were combined to examine the effects of patient adherence on outcome. Independent evaluators assessed OCD severity using the Yale-Brown Obsessive Compulsive Scale. Therapists assessed patient adherence to between-session EX/RP assignments at each session using the Patient EX/RP Adherence Scale (PEAS). Linear regression models were used to examine the effects of PEAS scores on outcome, adjusting for baseline severity. The relationship between patient adherence and other predictors of outcome was explored using structural equation modeling. Higher average PEAS ratings significantly predicted lower posttreatment OCD severity in intent-to-treat and completer samples. PEAS ratings in early sessions (5-9) also significantly predicted posttreatment OCD severity. The effects of other significant predictors of outcome in this sample (baseline OCD severity, hoarding subtype, and working alliance) were fully mediated by patient adherence. Patient adherence to between-session EX/RP assignments significantly predicted treatment outcome, as did early patient adherence and change in early adherence. Patient adherence mediated the effects of other predictors of outcome. Future research should develop interventions that increase adherence and then test whether increasing adherence improves outcome. If effective, these interventions could then be used to personalize care. (c) 2011 APA, all rights reserved.
Viterbori, Paola; Usai, M Carmen; Traverso, Laura; De Franchis, Valentina
2015-12-01
This longitudinal study analyzes whether selected components of executive function (EF) measured during the preschool period predict several indices of math achievement in primary school. Six EF measures were assessed in a sample of 5-year-old children (N = 175). The math achievement of the same children was then tested in Grades 1 and 3 using both a composite math score and three single indices of written calculation, arithmetical facts, and problem solving. Using previous results obtained from the same sample of children, a confirmatory factor analysis examining the latent EF structure in kindergarten indicated that a two-factor model provided the best fit for the data. In this model, inhibition and working memory (WM)-flexibility were separate dimensions. A full structural equation model was then used to test the hypothesis that math achievement (the composite math score and single math scores) in Grades 1 and 3 could be explained by the two EF components comprising the kindergarten model. The results indicate that the WM-flexibility component measured during the preschool period substantially predicts mathematical achievement, especially in Grade 3. The math composite scores were predicted by the WM-flexibility factor at both grade levels. In Grade 3, both problem solving and arithmetical facts were predicted by the WM-flexibility component. The results empirically support interventions that target EF as an important component of early childhood mathematics education. Copyright © 2015 Elsevier Inc. All rights reserved.
Rhoden, John J.; Dyas, Gregory L.
2016-01-01
Despite the increasing number of multivalent antibodies, bispecific antibodies, fusion proteins, and targeted nanoparticles that have been generated and studied, the mechanism of multivalent binding to cell surface targets is not well understood. Here, we describe a conceptual and mathematical model of multivalent antibody binding to cell surface antigens. Our model predicts that properties beyond 1:1 antibody:antigen affinity to target antigens have a strong influence on multivalent binding. Predicted crucial properties include the structure and flexibility of the antibody construct, the target antigen(s) and binding epitope(s), and the density of antigens on the cell surface. For bispecific antibodies, the ratio of the expression levels of the two target antigens is predicted to be critical to target binding, particularly for the lower expressed of the antigens. Using bispecific antibodies of different valencies to cell surface antigens including MET and EGF receptor, we have experimentally validated our modeling approach and its predictions and observed several nonintuitive effects of avidity related to antigen density, target ratio, and antibody affinity. In some biological circumstances, the effect we have predicted and measured varied from the monovalent binding interaction by several orders of magnitude. Moreover, our mathematical framework affords us a mechanistic interpretation of our observations and suggests strategies to achieve the desired antibody-antigen binding goals. These mechanistic insights have implications in antibody engineering and structure/activity relationship determination in a variety of biological contexts. PMID:27022022
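A toy equilibrium model captures the qualitative avidity effect described above: after one arm binds from solution, the second arm binds intramolecularly with an effective concentration that grows with antigen density, lowering the apparent Kd. This is a sketch, not the paper's mathematical model; all parameter values and the density-to-concentration scaling are hypothetical.

```python
def fraction_bound_monovalent(conc_nM, kd_nM):
    """Single-site equilibrium occupancy."""
    return conc_nM / (kd_nM + conc_nM)

def fraction_bound_bivalent(conc_nM, kd_nM, antigen_density):
    """Toy avidity model: the first arm binds from solution (Kd), the
    second arm binds intramolecularly with an effective concentration
    proportional to local antigen density, lowering the apparent Kd."""
    ceff_nM = 1000.0 * antigen_density          # hypothetical scaling
    kd_apparent = kd_nM / (1.0 + ceff_nM / kd_nM)
    return conc_nM / (kd_apparent + conc_nM)

mono = fraction_bound_monovalent(1.0, 10.0)
avid = fraction_bound_bivalent(1.0, 10.0, 0.5)   # density-boosted binding
```

Even this crude model reproduces the paper's key dependence: at zero antigen density the bivalent binder behaves monovalently, while at high density the apparent affinity improves by orders of magnitude.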
NASA Astrophysics Data System (ADS)
Shen, B.; Tao, W.; Atlas, R.
2008-12-01
Very Severe Cyclonic Storm Nargis, the deadliest named tropical cyclone (TC) in the North Indian Ocean Basin, devastated Burma (Myanmar) in May 2008, causing tremendous damage and numerous fatalities. An increased lead time in the prediction of TC Nargis would have increased the warning time and may therefore have saved lives and reduced economic damage. Recent advances in high-resolution global models and supercomputers have shown the potential for improving TC track and intensity forecasts, presumably by improving multi-scale simulations. The key but challenging questions to be answered include: (1) whether, and how realistically in terms of timing, location, and general TC structure, the global mesoscale model (GMM) can simulate TC genesis, and (2) under what conditions the model can extend the lead time of TC genesis forecasts. In this study, we focus on genesis prediction for TCs in the Indian Ocean with the GMM. Preliminary real-data simulations show that the initial formation and intensity variations of TC Nargis can be realistically predicted at a lead time of up to 5 days. These simulations also suggest that accurate representations of a westerly wind burst (WWB) and an equatorial trough, associated with monsoon circulations and/or a Madden-Julian Oscillation (MJO), are important for predicting the formation of this kind of TC. In addition to the WWB and equatorial trough, other favorable environmental conditions will be examined, including enhanced monsoonal circulation, upper-level outflow, low- and middle-level moistening, and surface fluxes.
A feature-based approach to modeling protein-protein interaction hot spots.
Cho, Kyu-il; Kim, Dongsup; Lee, Doheon
2009-05-01
Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the types of protein complexes, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top 3, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to pi-related interactions, especially pi-pi interactions.
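Decision-tree feature selection ranks candidate features by a split criterion such as information gain. A minimal sketch of that criterion on toy hot-spot data with two hypothetical features (stand-ins for the paper's 54 features):

```python
import math

def entropy(labels):
    """Shannon entropy of a label list."""
    if not labels:
        return 0.0
    n = len(labels)
    out = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        out -= p * math.log2(p)
    return out

def info_gain(values, labels, threshold):
    """Information gain of a single binary split -- the criterion a
    decision tree uses to rank candidate features."""
    left  = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    return entropy(labels) - (len(left) / n) * entropy(left) \
                           - (len(right) / n) * entropy(right)

# Toy residues with two hypothetical features; 1 = hot spot.
packing = [0.9, 0.8, 0.85, 0.2, 0.3, 0.25]   # separates classes cleanly
burial  = [0.5, 0.4, 0.9,  0.6, 0.5, 0.45]   # uninformative here
hot     = [1, 1, 1, 0, 0, 0]
gain_packing = info_gain(packing, hot, 0.5)
gain_burial  = info_gain(burial,  hot, 0.5)
```

Features with the highest gain would be kept for the downstream classifier (an SVM in the paper), discarding uninformative ones.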
AEETES - A solar reflux receiver thermal performance numerical model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, R.E. Jr.
1994-02-01
Reflux solar receivers for dish-Stirling electric power generation systems are currently being investigated by several companies and laboratories. In support of these efforts, the AEETES thermal performance numerical model has been developed to predict the thermal performance of pool-boiler and heat-pipe reflux receivers. The formulation of the AEETES numerical model, which is applicable to axisymmetric geometries with asymmetric incident fluxes, is presented in detail. Thermal efficiency predictions agree to within 4.1% with test data from on-sun tests of a pool-boiler reflux receiver. Predicted absorber and sidewall temperatures agree with thermocouple data to within 3.3 and 7.3%, respectively. The importance of accounting for the asymmetric incident fluxes is demonstrated in comparisons with predictions using azimuthally averaged variables. The predicted receiver heat losses are characterized in terms of convective, solar radiative, infrared radiative, and conductive heat transfer mechanisms.
Prediction of blood pressure and blood flow in stenosed renal arteries using CFD
NASA Astrophysics Data System (ADS)
Jhunjhunwala, Pooja; Padole, P. M.; Thombre, S. B.; Sane, Atul
2018-04-01
In the present work, an attempt is made to develop an inexpensive, in-vitro diagnostic tool for renal artery stenosis (RAS). To analyse the effects of increasing stenosis severity on hypertension and blood flow, haemodynamic parameters are studied by performing numerical simulations. A total of 16 stenosed models with degrees of stenosis severity varying from 0-97.11% are assessed numerically. Blood is modelled as a shear-thinning, non-Newtonian fluid using the Carreau model. Computational Fluid Dynamics (CFD) analysis is carried out to compute flow parameters such as the maximum velocity and maximum pressure attained by blood due to stenosis under pulsatile flow. These values are further used to compute the increase in blood pressure and the decrease in blood flow available to the kidney. The computed available blood flow and secondary hypertension for varying extents of stenosis are mapped by a curve-fitting technique using MATLAB, and a mathematical model is developed. Based on these mathematical models, a quantification tool is developed for tentatively predicting the blood flow available to the kidney and the severity of stenosis when the secondary hypertension is known.
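The severity-to-flow mapping obtained by curve fitting can be sketched as a least-squares quadratic fit via the normal equations. The severity/flow pairs below are hypothetical illustrations, not the paper's CFD results.

```python
def polyfit2(xs, ys):
    """Least-squares quadratic fit y ~ a*x^2 + b*x + c via the 3x3
    normal equations, solved with Cramer's rule."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)
    t = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[s(4), s(3), s(2)],
         [s(3), s(2), s(1)],
         [s(2), s(1), n]]
    r = [t(2), t(1), t(0)]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    coef = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for j in range(3):
            Ai[j][i] = r[j]
        coef.append(det3(Ai) / d)
    return coef  # [a, b, c]

# Hypothetical (stenosis severity %, relative available renal blood flow):
severity = [0.0, 20.0, 40.0, 60.0, 80.0, 95.0]
flow     = [1.00, 0.98, 0.90, 0.72, 0.40, 0.12]
a, b, c = polyfit2(severity, flow)
flow_at_50 = a * 50.0 ** 2 + b * 50.0 + c   # interpolated availability
```

Once fitted, the quadratic can be evaluated (or inverted numerically) to estimate flow from severity, mirroring the quantification-tool idea.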
Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M
2015-02-01
Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. A complete checklist is available at http://www.tripod-statement.org. © 2015 American College of Physicians.
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses basic survival-model estimation to obtain the average predicted lamp failure time. The estimate is for a parametric model, the General Composite Hazard Rate Model. The random failure-time variable is modelled with the exponential distribution as the basis, which has a constant hazard function. In this case, we discuss an example of survival-model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative function. The resulting model is then used to predict the average failure time for the lamp type. The data are grouped into several intervals, the average failure value is taken in each interval, and the average failure time of the model is calculated on each interval; the p-value obtained from the test result is 0.3296.
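With an exponential basis, the hazard is a constant λ, the maximum-likelihood estimate from observed failure times is λ = n / Σt, and the mean predicted failure time is 1/λ. A minimal sketch with hypothetical failure times, not the study's lamp data:

```python
import math

def fit_exponential(times):
    """Maximum-likelihood rate for an exponential failure-time model:
    lambda_hat = n / sum(t); mean predicted failure time = 1 / lambda_hat."""
    return len(times) / sum(times)

def survival(t, lam):
    """S(t) = exp(-lambda * t): the exponential survival function,
    i.e. the probability a lamp is still working at time t."""
    return math.exp(-lam * t)

# Hypothetical lamp failure times (hundreds of hours):
times = [4.0, 7.5, 2.5, 10.0, 6.0]
lam = fit_exponential(times)
mean_failure_time = 1.0 / lam   # equals sum(times) / n
```

Comparing S(t) against the empirical cumulative function on interval-grouped data is the goodness-of-fit step the abstract's test (p = 0.3296) refers to.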
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
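The error-simulation procedure, randomizing the activities of part of the modeling set and watching predictive performance deteriorate, can be sketched with a toy 1-nearest-neighbour classifier. The data and classifier are illustrative stand-ins for the paper's QSAR models, not its actual setup.

```python
import random

def knn1_predict(train_x, train_y, x):
    """1-nearest-neighbour prediction on 1-D descriptors."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def accuracy_with_simulated_errors(error_ratio, seed=0):
    """Flip the activity label of `error_ratio` of the training compounds
    (simulated experimental error), then score clean test compounds --
    a toy version of the study's procedure."""
    rng = random.Random(seed)
    train_x = [i / 10.0 for i in range(20)]             # toy descriptors
    train_y = [0 if x < 1.0 else 1 for x in train_x]    # true classes
    n_flip = int(error_ratio * len(train_y))
    for i in rng.sample(range(len(train_y)), n_flip):
        train_y[i] = 1 - train_y[i]                     # simulated error
    test = [(0.05, 0), (0.55, 0), (1.05, 1), (1.55, 1)]
    hits = sum(knn1_predict(train_x, train_y, x) == y for x, y in test)
    return hits / len(test)

acc_clean = accuracy_with_simulated_errors(0.0)   # no simulated errors
acc_noisy = accuracy_with_simulated_errors(0.5)   # half the labels flipped
```

As in the study, errors injected into the modeling set degrade predictions of clean external compounds; the mislabeled training points are also the ones a cross-validation residual analysis would flag.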