Sample records for prediction type models

  1. An Arrhenius-type viscosity function to model sintering using the Skorohod Olevsky viscous sintering model within a finite element code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewsuk, Kevin Gregory; Arguello, Jose Guadalupe, Jr.; Reiterer, Markus W.

    2006-02-01

    The ease and ability to predict sintering shrinkage and densification with the Skorohod-Olevsky viscous sintering (SOVS) model within a finite-element (FE) code have been improved with the use of an Arrhenius-type viscosity function. The need for a better viscosity function was identified by evaluating SOVS model predictions made using a previously published polynomial viscosity function. Predictions made using the original, polynomial viscosity function do not accurately reflect experimentally observed sintering behavior. To more easily and better predict sintering behavior using FE simulations, a thermally activated viscosity function based on creep theory was used with the SOVS model. In comparison with the polynomial viscosity function, SOVS model predictions made using the Arrhenius-type viscosity function are more representative of experimentally observed viscosity and sintering behavior. Additionally, the effects of changes in heating rate on densification can easily be predicted with the Arrhenius-type viscosity function. Another attribute of the Arrhenius-type viscosity function is that it provides the potential to link different sintering models. For example, the apparent activation energy, Q, for densification used in the construction of the master sintering curve for a low-temperature cofire ceramic dielectric has been used as the apparent activation energy for material flow in the Arrhenius-type viscosity function to predict heating rate-dependent sintering behavior using the SOVS model.
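
    A generic Arrhenius-type viscosity has the form sketched below; this is the standard textbook expression, not the specific parameterization (or the polynomial baseline) used in the record above, and the symbols follow the usual convention.

```latex
% Generic Arrhenius-type (thermally activated) viscosity; parameter values not from the record.
% \eta_0: pre-exponential constant, Q: apparent activation energy for material flow,
% R: gas constant, T: absolute temperature.
\eta(T) = \eta_0 \exp\!\left(\frac{Q}{R\,T}\right)
```

    As the abstract notes, the same apparent activation energy Q obtained from master-sintering-curve analysis can be reused in this expression to link the two modeling approaches.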

  2. Comparison of human gastrocnemius forces predicted by Hill-type muscle models and estimated from ultrasound images

    PubMed Central

    Biewener, Andrew A.; Wakeling, James M.

    2017-01-01

    Hill-type models are ubiquitous in the field of biomechanics, providing estimates of a muscle's force as a function of its activation state and its assumed force–length and force–velocity properties. However, despite their routine use, the accuracy with which Hill-type models predict the forces generated by muscles during submaximal, dynamic tasks remains largely unknown. This study compared human gastrocnemius forces predicted by Hill-type models with the forces estimated from ultrasound-based measures of tendon length changes and stiffness during cycling, over a range of loads and cadences. We tested both a traditional model, with one contractile element, and a differential model, with two contractile elements that accounted for independent contributions of slow and fast muscle fibres. Both models were driven by subject-specific, ultrasound-based measures of fascicle lengths, velocities and pennation angles and by activation patterns of slow and fast muscle fibres derived from surface electromyographic recordings. The models predicted, on average, 54% of the time-varying gastrocnemius forces estimated from the ultrasound-based methods. However, differences between predicted and estimated forces were smaller under low speed–high activation conditions, with models able to predict nearly 80% of the gastrocnemius force over a complete pedal cycle. Additionally, the predictions from the Hill-type muscle models tested here showed that a similar pattern of force production could be achieved for most conditions with and without accounting for the independent contributions of different muscle fibre types. PMID:28202584
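
    As a rough illustration of the model class described above (not the study's specific one- or two-element implementation), a minimal Hill-type force estimate can be coded as follows; the Gaussian force-length curve, the hyperbolic force-velocity curve, the parameter values, and the function name hill_force are all illustrative assumptions.

```python
import numpy as np

def hill_force(act, lce_norm, vce_norm, f_max, pennation_rad,
               v_max=10.0, curvature=0.25, width=0.45):
    """Minimal generic Hill-type force sketch (not the paper's model).

    act            activation state in [0, 1]
    lce_norm       fascicle length normalised to optimal length
    vce_norm       fascicle velocity in optimal lengths/s (negative = shortening)
    f_max          maximum isometric force (N)
    pennation_rad  pennation angle (rad)
    """
    # Assumed Gaussian force-length relationship
    f_l = np.exp(-((lce_norm - 1.0) / width) ** 2)
    # Assumed hyperbolic (Hill) force-velocity curve for shortening,
    # simple linear extension for lengthening
    if vce_norm <= 0.0:
        f_v = (1.0 + vce_norm / v_max) / (1.0 - vce_norm / (curvature * v_max))
    else:
        f_v = 1.0 + 0.5 * vce_norm / v_max
    # Force along the tendon: activation * f_L * f_V * F_max, projected by pennation
    return act * f_l * f_v * f_max * np.cos(pennation_rad)

# Example: submaximal activation, near-optimal length, slow shortening
print(hill_force(act=0.4, lce_norm=0.95, vce_norm=-1.0,
                 f_max=1500.0, pennation_rad=np.radians(20)))
```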

  3. A4 flavour model for Dirac neutrinos: Type I and inverse seesaw

    NASA Astrophysics Data System (ADS)

    Borah, Debasish; Karmakar, Biswajit

    2018-05-01

    We propose two different seesaw models, namely type I and inverse seesaw, to realise light Dirac neutrinos within the framework of A4 discrete flavour symmetry. The additional fields and their transformations under the flavour symmetries are chosen in such a way that the hierarchies of different elements of the seesaw mass matrices in these two types of seesaw mechanisms arise naturally. For generic choices of flavon alignments, both the models predict normal hierarchical light neutrino masses with the atmospheric mixing angle in the lower octant. Apart from predicting interesting correlations between different neutrino parameters as well as between neutrino and model parameters, the model also predicts the leptonic Dirac CP phase to lie in a specific range, −π/3 to π/3. While the type I seesaw model predicts smaller values of absolute neutrino mass, the inverse seesaw predictions for the absolute neutrino masses can saturate the cosmological upper bound on the sum of absolute neutrino masses for certain choices of model parameters.

  4. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross-validation to assess prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96% (Type I error rate is 12.22%; Type II error rate is 7.50%), the prediction accuracy of the LASSO-CART model is 88.75% (Type I error rate is 13.61%; Type II error rate is 14.17%), and the prediction accuracy of the LASSO-SVM model is 89.79% (Type I error rate is 10.00%; Type II error rate is 15.83%).
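
    A minimal sketch of the LASSO-then-SVM stage, assuming a scikit-learn workflow; the financial variables, labels, and hyperparameters below are placeholders, and the paper's actual software, tuning, and variable set are not reproduced.

```python
# Sketch of a LASSO -> SVM going-concern classifier (assumed scikit-learn workflow).
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(172, 20))                              # placeholder financial ratios
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 1, 172) > 0).astype(int)  # fake GCD labels

model = make_pipeline(
    StandardScaler(),
    # L1-penalised logistic regression plays the LASSO variable-selection role
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=1.0)),
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)

# Fivefold cross-validated accuracy, mirroring the evaluation described above
print(cross_val_score(model, X, y, cv=5).mean())
```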

  5. Comparison of human gastrocnemius forces predicted by Hill-type muscle models and estimated from ultrasound images.

    PubMed

    Dick, Taylor J M; Biewener, Andrew A; Wakeling, James M

    2017-05-01

    Hill-type models are ubiquitous in the field of biomechanics, providing estimates of a muscle's force as a function of its activation state and its assumed force-length and force-velocity properties. However, despite their routine use, the accuracy with which Hill-type models predict the forces generated by muscles during submaximal, dynamic tasks remains largely unknown. This study compared human gastrocnemius forces predicted by Hill-type models with the forces estimated from ultrasound-based measures of tendon length changes and stiffness during cycling, over a range of loads and cadences. We tested both a traditional model, with one contractile element, and a differential model, with two contractile elements that accounted for independent contributions of slow and fast muscle fibres. Both models were driven by subject-specific, ultrasound-based measures of fascicle lengths, velocities and pennation angles and by activation patterns of slow and fast muscle fibres derived from surface electromyographic recordings. The models predicted, on average, 54% of the time-varying gastrocnemius forces estimated from the ultrasound-based methods. However, differences between predicted and estimated forces were smaller under low speed-high activation conditions, with models able to predict nearly 80% of the gastrocnemius force over a complete pedal cycle. Additionally, the predictions from the Hill-type muscle models tested here showed that a similar pattern of force production could be achieved for most conditions with and without accounting for the independent contributions of different muscle fibre types. © 2017. Published by The Company of Biologists Ltd.

  6. A generalized predictive model for direct gain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Givoni, B.

    In the correlational model for direct gain developed by the Los Alamos National Laboratory, a list of constants applicable to different types of buildings or passive solar systems was specified separately for each type. In its original form, the model was applicable only to buildings similar in their heat capacity, type of glazing, or night insulation to the types specified by the model. While maintaining the general form of the predictive equations, the new model, the predictive model for direct gain (PMDG), replaces the constants with functions dependent upon the thermal properties of the building, or the components of the solar system, or both. By this transformation, the LANL model for direct gain becomes a generalized one. The new model predicts the performance of buildings heated by direct gain with any heat capacity, glazing, and night insulation as functions of their thermophysical properties and climatic conditions.

  7. Multivariate poisson lognormal modeling of crashes by type and severity on rural two lane highways.

    PubMed

    Wang, Kai; Ivan, John N; Ravishanker, Nalini; Jackson, Eric

    2017-02-01

    In an effort to improve traffic safety, there has been considerable interest in estimating crash prediction models and identifying factors contributing to crashes. To account for crash frequency variations among crash types and severities, crash prediction models have been estimated by type and severity. The univariate crash count models have been used by researchers to estimate crashes by crash type or severity, in which the crash counts by type or severity are assumed to be independent of one another and modelled separately. When considering crash types and severities simultaneously, this may neglect the potential correlations between crash counts due to the presence of shared unobserved factors across crash types or severities for a specific roadway intersection or segment, and might lead to biased parameter estimation and reduce model accuracy. The focus of this study is to estimate crashes by both crash type and crash severity using the Integrated Nested Laplace Approximation (INLA) Multivariate Poisson Lognormal (MVPLN) model, and identify the different effects of contributing factors on different crash type and severity counts on rural two-lane highways. The INLA MVPLN model can simultaneously model crash counts by crash type and crash severity by accounting for the potential correlations among them and significantly decreases the computational time compared with a fully Bayesian fitting of the MVPLN model using the Markov Chain Monte Carlo (MCMC) method. This paper describes estimation of MVPLN models for three-way stop controlled (3ST) intersections, four-way stop controlled (4ST) intersections, four-way signalized (4SG) intersections, and roadway segments on rural two-lane highways. Annual Average Daily Traffic (AADT) and variables describing roadway conditions (including presence of lighting, presence of left-turn/right-turn lane, lane width and shoulder width) were used as predictors. A Univariate Poisson Lognormal (UPLN) model was estimated by crash type and severity for each highway facility, and their prediction results are compared with the MVPLN model based on the Average Predicted Mean Absolute Error (APMAE) statistic. A UPLN model for total crashes was also estimated to compare the coefficients of contributing factors with the models that estimate crashes by crash type and severity. The model coefficient estimates show that the signs of coefficients for presence of left-turn lane, presence of right-turn lane, lane width and speed limit are different across crash type or severity counts, which suggests that estimating crashes by crash type or severity might be more helpful in identifying crash contributing factors. The standard errors of covariates in the MVPLN model are slightly lower than the UPLN model when the covariates are statistically significant, and the crash counts by crash type and severity are significantly correlated. The model prediction comparisons illustrate that the MVPLN model outperforms the UPLN model in prediction accuracy. Therefore, when predicting crash counts by crash type and crash severity for rural two-lane highways, the MVPLN model should be considered to avoid estimation error and to account for the potential correlations among crash type counts and crash severity counts. Copyright © 2016 Elsevier Ltd. All rights reserved.
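
    The multivariate Poisson lognormal structure referred to above is commonly written as below; this is the generic textbook form with illustrative indexing, not the exact specification fitted in the paper.

```latex
% Generic MVPLN structure (sketch): i indexes sites, k indexes crash type/severity categories.
y_{ik} \sim \mathrm{Poisson}(\lambda_{ik}), \qquad
\log \lambda_{ik} = \beta_{0k} + \beta_{1k}\ln(\mathrm{AADT}_i)
  + \mathbf{x}_i^{\top}\boldsymbol{\beta}_k + \varepsilon_{ik}, \qquad
\boldsymbol{\varepsilon}_i = (\varepsilon_{i1},\dots,\varepsilon_{iK})^{\top}
  \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})
```

    The off-diagonal elements of Σ are what capture the shared unobserved factors across crash type and severity counts at a given site.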

  8. Effects of stimulus order on discrimination processes in comparative and equality judgements: data and models.

    PubMed

    Dyjas, Oliver; Ulrich, Rolf

    2014-01-01

    In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.

  9. Testing the generality of above-ground biomass allometry across plant functional types at the continent scale.

    PubMed

    Paul, Keryn I; Roxburgh, Stephen H; Chave, Jerome; England, Jacqueline R; Zerihun, Ayalsew; Specht, Alison; Lewis, Tom; Bennett, Lauren T; Baker, Thomas G; Adams, Mark A; Huxtable, Dan; Montagu, Kelvin D; Falster, Daniel S; Feller, Mike; Sochacki, Stan; Ritson, Peter; Bastin, Gary; Bartle, John; Wildy, Dan; Hobbs, Trevor; Larmour, John; Waterworth, Rob; Stewart, Hugh T L; Jonson, Justin; Forrester, David I; Applegate, Grahame; Mendham, Daniel; Bradford, Matt; O'Grady, Anthony; Green, Daryl; Sudmeyer, Rob; Rance, Stan J; Turner, John; Barton, Craig; Wenk, Elizabeth H; Grove, Tim; Attiwill, Peter M; Pinkard, Elizabeth; Butler, Don; Brooksbank, Kim; Spencer, Beren; Snowdon, Peter; O'Brien, Nick; Battaglia, Michael; Cameron, David M; Hamilton, Steve; McAuthur, Geoff; Sinclair, Jenny

    2016-06-01

    Accurate ground-based estimation of the carbon stored in terrestrial ecosystems is critical to quantifying the global carbon budget. Allometric models provide cost-effective methods for biomass prediction. But do such models vary with ecoregion or plant functional type? We compiled 15 054 measurements of individual tree or shrub biomass from across Australia to examine the generality of allometric models for above-ground biomass prediction. This provided a robust case study because Australia includes ecoregions ranging from arid shrublands to tropical rainforests, and has a rich history of biomass research, particularly in planted forests. Regardless of ecoregion, for five broad categories of plant functional type (shrubs; multistemmed trees; trees of the genus Eucalyptus and closely related genera; other trees of high wood density; and other trees of low wood density), relationships between biomass and stem diameter were generic. Simple power-law models explained 84-95% of the variation in biomass, with little improvement in model performance when other plant variables (height, bole wood density), or site characteristics (climate, age, management) were included. Predictions of stand-based biomass from allometric models of varying levels of generalization (species-specific, plant functional type) were validated using whole-plot harvest data from 17 contrasting stands (range: 9-356 Mg ha⁻¹). Losses in efficiency of prediction were <1% if generalized models were used in place of species-specific models. Furthermore, application of generalized multispecies models did not introduce significant bias in biomass prediction in 92% of the 53 species tested. Further, overall efficiency of stand-level biomass prediction was 99%, with a mean absolute prediction error of only 13%. Hence, for cost-effective prediction of biomass across a wide range of stands, we recommend use of generic allometric models based on plant functional types. Development of new species-specific models is only warranted when gains in accuracy of stand-based predictions are relatively high (e.g. high-value monocultures). © 2015 John Wiley & Sons Ltd.
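
    A minimal sketch of fitting a power-law allometric model B = a·D^b by ordinary least squares on log-transformed data, with a standard back-transformation bias correction; the diameter and biomass values are invented for illustration and are not from the study's dataset.

```python
# Sketch: fit a power-law allometric model B = a * D**b on the log-log scale.
# The diameters and biomass values below are made up for illustration only.
import numpy as np

D = np.array([5.0, 8.0, 12.0, 20.0, 35.0, 50.0])      # stem diameter (cm)
B = np.array([2.1, 7.5, 24.0, 110.0, 520.0, 1450.0])  # above-ground biomass (kg)

b, log_a = np.polyfit(np.log(D), np.log(B), 1)         # slope b, intercept ln(a)
resid = np.log(B) - (log_a + b * np.log(D))
cf = np.exp(resid.var(ddof=2) / 2.0)                   # log-back-transformation bias correction

def predict_biomass(diameter_cm):
    return cf * np.exp(log_a) * diameter_cm ** b

print(b, predict_biomass(25.0))
```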

  10. Prediction of morbidity and mortality in patients with type 2 diabetes.

    PubMed

    Wells, Brian J; Roth, Rachel; Nowacki, Amy S; Arrigain, Susana; Yu, Changhong; Rosenkrans, Wayne A; Kattan, Michael W

    2013-01-01

    Introduction. The objective of this study was to create a tool that accurately predicts the risk of morbidity and mortality in patients with type 2 diabetes according to an oral hypoglycemic agent. Materials and Methods. The model was based on a cohort of 33,067 patients with type 2 diabetes who were prescribed a single oral hypoglycemic agent at the Cleveland Clinic between 1998 and 2006. Competing risk regression models were created for coronary heart disease (CHD), heart failure, and stroke, while a Cox regression model was created for mortality. Propensity scores were used to account for possible treatment bias. A prediction tool was created and internally validated using tenfold cross-validation. The results were compared to a Framingham model and a model based on the United Kingdom Prospective Diabetes Study (UKPDS) for CHD and stroke, respectively. Results and Discussion. Median follow-up for the mortality outcome was 769 days. The numbers of patients experiencing events were as follows: CHD (3062), heart failure (1408), stroke (1451), and mortality (3661). The prediction tools demonstrated the following concordance indices (c-statistics) for the specific outcomes: CHD (0.730), heart failure (0.753), stroke (0.688), and mortality (0.719). The prediction tool was superior to the Framingham model at predicting CHD and was at least as accurate as the UKPDS model at predicting stroke. Conclusions. We created an accurate tool for predicting the risk of stroke, coronary heart disease, heart failure, and death in patients with type 2 diabetes. The calculator is available online at http://rcalc.ccf.org under the heading "Type 2 Diabetes" and entitled, "Predicting 5-Year Morbidity and Mortality." This may be a valuable tool to aid the clinician's choice of an oral hypoglycemic, to better inform patients, and to motivate dialogue between physician and patient.
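
    A minimal sketch of the Cox (mortality) component, assuming the lifelines Python package; the bundled load_rossi dataset is used purely as a stand-in so the snippet runs, and the study's cohort, covariates, competing-risk models, and propensity adjustment are not reproduced.

```python
# Sketch of a Cox proportional hazards model with lifelines (assumed package choice).
# load_rossi is a bundled example dataset used only as a stand-in for the diabetes cohort.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

data = load_rossi()                       # columns include 'week' (time) and 'arrest' (event)
cph = CoxPHFitter()
cph.fit(data, duration_col="week", event_col="arrest")

print(cph.concordance_index_)             # analogous to the c-statistics reported above
print(cph.predict_partial_hazard(data).head())  # relative risk scores for individuals
```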

  11. Predicting Student Academic Performance in an Engineering Dynamics Course: A Comparison of Four Types of Predictive Mathematical Models

    ERIC Educational Resources Information Center

    Huang, Shaobo; Fang, Ning

    2013-01-01

    Predicting student academic performance has long been an important research topic in many academic disciplines. The present study is the first study that develops and compares four types of mathematical models to predict student academic performance in engineering dynamics--a high-enrollment, high-impact, and core course that many engineering…

  12. Study on elevated-temperature flow behavior of Ni-Cr-Mo-B ultra-heavy-plate steel via experiment and modelling

    NASA Astrophysics Data System (ADS)

    Gao, Zhi-yu; Kang, Yu; Li, Yan-shuai; Meng, Chao; Pan, Tao

    2018-04-01

    Elevated-temperature flow behavior of a novel Ni-Cr-Mo-B ultra-heavy-plate steel was investigated by conducting hot compressive deformation tests on a Gleeble-3800 thermo-mechanical simulator over a temperature range of 1123 K to 1423 K, a strain rate range of 0.01 s⁻¹ to 10 s⁻¹, and a height reduction of 70%. Based on the experimental results, a classic strain-compensated Arrhenius-type model, a new revised strain-compensated Arrhenius-type model and a classic modified Johnson-Cook constitutive model were developed for predicting the high-temperature deformation behavior of the steel. The predictability of these models was comparatively evaluated in terms of statistical parameters including correlation coefficient (R), average absolute relative error (AARE), average root mean square error (RMSE), normalized mean bias error (NMBE) and relative error. The statistical results indicate that the new revised strain-compensated Arrhenius-type model could accurately predict the elevated-temperature flow stress of the steel under the entire range of process conditions. However, the values predicted by the classic modified Johnson-Cook model could not agree well with the experimental values, and the classic strain-compensated Arrhenius-type model could track the deformation behavior more accurately than the modified Johnson-Cook model, but less accurately than the new revised strain-compensated Arrhenius-type model. In addition, reasons for the differences in predictability of these models are discussed in detail.
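
    The classic strain-compensated Arrhenius-type model mentioned above is usually written through the Zener-Hollomon parameter as below; these are the standard forms, and the paper's fitted material constants (A, α, n, Q expressed as polynomials in strain in the strain-compensated variant) are not reproduced here.

```latex
% Classic sinh-Arrhenius constitutive form and its inversion via the Zener-Hollomon parameter.
\dot{\varepsilon} = A\,[\sinh(\alpha\sigma)]^{n}\exp\!\left(-\frac{Q}{RT}\right),
\qquad
Z = \dot{\varepsilon}\exp\!\left(\frac{Q}{RT}\right),
\qquad
\sigma = \frac{1}{\alpha}\ln\!\left\{\left(\frac{Z}{A}\right)^{1/n}
      + \left[\left(\frac{Z}{A}\right)^{2/n} + 1\right]^{1/2}\right\}
```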

  13. A risk score for in-hospital death in patients admitted with ischemic or hemorrhagic stroke.

    PubMed

    Smith, Eric E; Shobha, Nandavar; Dai, David; Olson, DaiWai M; Reeves, Mathew J; Saver, Jeffrey L; Hernandez, Adrian F; Peterson, Eric D; Fonarow, Gregg C; Schwamm, Lee H

    2013-01-28

    We aimed to derive and validate a single risk score for predicting death from ischemic stroke (IS), intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH). Data from 333 865 stroke patients (IS, 82.4%; ICH, 11.2%; SAH, 2.6%; uncertain type, 3.8%) in the Get With The Guidelines-Stroke database were used. In-hospital mortality varied greatly according to stroke type (IS, 5.5%; ICH, 27.2%; SAH, 25.1%; unknown type, 6.0%; P<0.001). The patients were randomly divided into derivation (60%) and validation (40%) samples. Logistic regression was used to determine the independent predictors of mortality and to assign point scores for a prediction model in the overall population and in the subset with the National Institutes of Health Stroke Scale (NIHSS) recorded (37.1%). The c statistic, a measure of how well the models discriminate the risk of death, was 0.78 in the overall validation sample and 0.86 in the model including NIHSS. The model with NIHSS performed nearly as well in each stroke type as in the overall model including all types (c statistics for IS alone, 0.85; for ICH alone, 0.83; for SAH alone, 0.83; uncertain type alone, 0.86). The calibration of the model was excellent, as demonstrated by plots of observed versus predicted mortality. A single prediction score for all stroke types can be used to predict risk of in-hospital death following stroke admission. Incorporation of NIHSS information substantially improves this predictive accuracy.
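
    A minimal sketch of deriving a logistic mortality model, its c statistic, and a crude integer point score, assuming a scikit-learn workflow on simulated data; the GWTG-Stroke variables, coefficients, and point assignments are not reproduced.

```python
# Sketch: logistic in-hospital mortality model, c statistic, and a crude point score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.normal(70, 12, n),          # age (placeholder)
    rng.integers(0, 42, n),         # NIHSS (placeholder)
    rng.integers(0, 2, n),          # hemorrhagic stroke indicator (placeholder)
])
logit = -7.0 + 0.03 * X[:, 0] + 0.12 * X[:, 1] + 1.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))          # simulated in-hospital death

# 60% derivation / 40% validation split, mirroring the study design
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# c statistic = area under the ROC curve in the validation sample
print(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))

# Crude integer point score: scale coefficients by the smallest one and round (illustrative)
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min())
print(points)
```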

  14. Individualized pharmacokinetic risk assessment for development of diabetes in high risk population.

    PubMed

    Gupta, N; Al-Huniti, N H; Veng-Pedersen, P

    2007-10-01

    The objective of this study is to propose a non-parametric pharmacokinetic prediction model that addresses the individualized risk of developing type-2 diabetes in subjects with a family history of type-2 diabetes. All 191 selected healthy subjects had both parents with type-2 diabetes. Glucose was administered intravenously (0.5 g/kg body weight) and 13 blood samples taken at specified times were analyzed for plasma insulin and glucose concentrations. All subjects were followed for an average of 13-14 years for diabetic or normal (non-diabetic) outcome. The new logistic regression model predicts the development of diabetes based on body mass index and only one blood sample at 90 min analyzed for insulin concentration. Our model correctly identified 4.5 times more subjects predicted to develop diabetes (54% versus 11.6%) and more than twice as many subjects predicted not to develop diabetes (99% versus 46.4%) compared with current non-pharmacokinetic probability estimates for development of type-2 diabetes. Our model can be useful for individualized prediction of development of type-2 diabetes in subjects with a family history of type-2 diabetes. This improved prediction may be an important mediating factor for better perception of risk and may result in an improved intervention.

  15. Incorporating Psychological Predictors of Treatment Response into Health Economic Simulation Models: A Case Study in Type 1 Diabetes.

    PubMed

    Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan

    2015-10-01

    Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies. © The Author(s) 2015.

  16. Developing a predictive tropospheric ozone model for Tabriz

    NASA Astrophysics Data System (ADS)

    Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi

    2013-04-01

    Predictive ozone models are becoming indispensable tools by providing a capability for pollution alerts to serve people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, by using the following five modeling strategies: three regression-type methods: Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models: Nonlinear Local Prediction (NLP) to implement chaos theory and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type modeling strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed, whereas the auto-regression-type strategies regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for MLR, ANN and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
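
    A minimal sketch of the ARIMA strategy, assuming the statsmodels package; the synthetic hourly series and the (2, 0, 1) order are placeholders rather than the orders identified for the Tabriz data.

```python
# Sketch of an ARIMA ozone forecast with statsmodels (assumed package choice).
# The synthetic diurnal series and (p, d, q) order are placeholders, not the paper's settings.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(24 * 200)
ozone = 40 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)  # fake hourly ozone (ppb)
series = pd.Series(ozone)

fit = ARIMA(series, order=(2, 0, 1)).fit()
print(fit.params)                  # fitted AR, MA, and noise parameters
print(fit.forecast(steps=24))      # next-day (24-hour-ahead) ozone forecast
```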

  17. A Quantitative Model of Expert Transcription Typing

    DTIC Science & Technology

    1993-03-08

    side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled...phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span, in fact, it is even...typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire

  18. Predicting NonInertial Effects with Algebraic Stress Models which Account for Dissipation Rate Anisotropies

    NASA Technical Reports Server (NTRS)

    Jongen, T.; Machiels, L.; Gatski, T. B.

    1997-01-01

    Three types of turbulence models which account for rotational effects in noninertial frames of reference are evaluated for the case of incompressible, fully developed rotating turbulent channel flow. The different types of models are a Coriolis-modified eddy-viscosity model, a realizable algebraic stress model, and an algebraic stress model which accounts for dissipation rate anisotropies. A direct numerical simulation of a rotating channel flow is used for the turbulent model validation. This simulation differs from previous studies in that significantly higher rotation numbers are investigated. Flows at these higher rotation numbers are characterized by a relaminarization on the cyclonic or suction side of the channel, and a linear velocity profile on the anticyclonic or pressure side of the channel. The predictive performance of the three types of models is examined in detail, and formulation deficiencies are identified which cause poor predictive performance for some of the models. Criteria are identified which allow for accurate prediction of such flows by algebraic stress models and their corresponding Reynolds stress formulations.

  19. Epidemiology of Mild Traumatic Brain Injury with Intracranial Hemorrhage: Focusing Predictive Models for Neurosurgical Intervention.

    PubMed

    Orlando, Alessandro; Levy, A Stewart; Carrick, Matthew M; Tanner, Allen; Mains, Charles W; Bar-Or, David

    2017-11-01

    To outline differences in neurosurgical intervention (NI) rates between intracranial hemorrhage (ICH) types in mild traumatic brain injuries and help identify which ICH types are most likely to benefit from creation of predictive models for NI. A multicenter retrospective study of adult patients spanning 3 years at 4 U.S. trauma centers was performed. Patients were included if they presented with mild traumatic brain injury (Glasgow Coma Scale score 13-15) with head CT scan positive for ICH. Patients were excluded for skull fractures, "unspecified hemorrhage," or coagulopathy. Primary outcome was NI. Stepwise multivariable logistic regression models were built to analyze the independent association between ICH variables and outcome measures. The study comprised 1876 patients. NI rate was 6.7%. There was a significant difference in rate of NI by ICH type. Subdural hematomas had the highest rate of NI (15.5%) and accounted for 78% of all NIs. Isolated subarachnoid hemorrhages had the lowest, nonzero, NI rate (0.19%). Logistic regression models identified ICH type as the most influential independent variable when examining NI. A model predicting NI for isolated subarachnoid hemorrhages would require 26,928 patients, but a model predicting NI for isolated subdural hematomas would require only 328 patients. This study highlighted disparate NI rates among ICH types in patients with mild traumatic brain injury and identified mild, isolated subdural hematomas as most appropriate for construction of predictive NI models. Increased health care efficiency will be driven by accurate understanding of risk, which can come only from accurate predictive models. Copyright © 2017 Elsevier Inc. All rights reserved.
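
    One way to read the two sample-size figures above (an inference of mine, not a statement from the record) is that both correspond to roughly the same number of expected neurosurgical-intervention events, consistent with a fixed events budget:

```latex
% Back-of-envelope check (my inference, not stated in the record):
% expected NI events = cohort size x NI rate for each ICH subgroup.
26\,928 \times 0.0019 \approx 51 \quad\text{(isolated SAH)}, \qquad
328 \times 0.155 \approx 51 \quad\text{(isolated SDH)}
```

    That is, about 50 expected events in either cohort, which would match the common rule of thumb of roughly 10 events per candidate predictor for a model with about five predictors; again, this reading is an assumption on my part rather than the authors' stated calculation.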

  20. Population-Level Prediction of Type 2 Diabetes From Claims Data and Analysis of Risk Factors.

    PubMed

    Razavian, Narges; Blecker, Saul; Schmidt, Ann Marie; Smith-McLallen, Aaron; Nigam, Somesh; Sontag, David

    2015-12-01

    We present a new approach to population health, in which data-driven predictive models are learned for outcomes such as type 2 diabetes. Our approach enables risk assessment from readily available electronic claims data on large populations, without additional screening cost. The proposed model uncovers early- and late-stage risk factors. Using administrative claims, pharmacy records, healthcare utilization, and laboratory results of 4.1 million individuals between 2005 and 2009, an initial set of 42,000 variables were derived that together describe the full health status and history of every individual. Machine learning was then used to methodically enhance the predictive variable set and fit models predicting onset of type 2 diabetes in 2009-2011, 2010-2012, and 2011-2013. We compared the enhanced model with a parsimonious model consisting of known diabetes risk factors in a real-world environment, where missing values are common. Furthermore, we analyzed novel and known risk factors emerging from the model at different age groups at different stages before the onset. The parsimonious model using 21 classic diabetes risk factors resulted in an area under the ROC curve (AUC) of 0.75 for diabetes prediction within a 2-year window following the baseline. The enhanced model increased the AUC to 0.80, with about 900 variables selected as predictive (p < 0.0001 for differences between AUCs). Similar improvements were observed for models predicting diabetes onset 1-3 years and 2-4 years after baseline. The enhanced model improved positive predictive value by at least 50% and identified novel surrogate risk factors for type 2 diabetes, such as chronic liver disease (odds ratio [OR] 3.71), high alanine aminotransferase (OR 2.26), esophageal reflux (OR 1.85), and history of acute bronchitis (OR 1.45). Liver risk factors emerge later in the process of diabetes development compared with obesity-related factors such as hypertension and high hemoglobin A1c. In conclusion, population-level risk prediction for type 2 diabetes using readily available administrative data is feasible and has better prediction performance than classical diabetes risk prediction algorithms on very large populations with missing data. The new model enables intervention allocation at national scale quickly and accurately and recovers potentially novel risk factors at different stages before the disease onset.

  1. Exchangeable lead from prediction models relates to vetiver lead uptake in different soil types.

    PubMed

    Andra, Syam S; Sarkar, Dibyendu; Saminathan, Sumathi K M; Datta, Rupali

    2011-12-01

    Prediction models for exchangeable soil lead, published earlier in this journal (Andra et al. 2010a), were developed using a suite of native lead (Pb) paint-contaminated residential soils from two US cities heavily populated with homes constructed prior to the Pb ban in paints. In this study, we tested the feasibility and practical applications of these prediction models for developing a phytoremediation design using vetiver grass (Vetiveria zizanioides), a Pb-tolerant plant. The models were used to estimate the exchangeable fraction of Pb available for vetiver uptake in four lead-spiked soil types, both acidic and alkaline, with varying physico-chemical properties that are different from those used to build the prediction models. Results indicate a strong correlation between the predicted and observed exchangeable Pb fractions, as well as between the predicted exchangeable Pb and the total Pb accumulated by vetiver grass grown in these soils. The correlation coefficient for the predicted vs. observed exchangeable Pb with p < 0.001 was 0.999, 0.996, 0.949, and 0.998 in the Immokalee, Millhopper, Pahokee Muck, and Tobosa soil type, respectively. Similarly, the correlation coefficient for the predicted exchangeable Pb vs. accumulated Pb in vetiver grass with p < 0.001 was 0.948, 0.983, 0.929, and 0.969 for each soil type, respectively. This study suggests that the success of a phytoremediation design could be assessed upfront by predicting the exchangeable Pb fraction in a given soil type based on its properties. This helps in modifying the soil conditions to enhance phytoextraction of Pb from contaminated soils.

  2. Developing a topographic model to predict the northern hardwood forest type within Carolina northern flying squirrel (Glaucomys sabrinus coloratus) recovery areas of the southern Appalachians

    USGS Publications Warehouse

    Evans, Andrew; Odom, Richard H.; Resler, Lynn M.; Ford, W. Mark; Prisley, Stephen

    2014-01-01

    The northern hardwood forest type is an important habitat component for the endangered Carolina northern flying squirrel (CNFS; Glaucomys sabrinus coloratus), providing den sites and corridor habitat between the boreo-montane conifer patches used as foraging areas. Our study related terrain data to presence of the northern hardwood forest type in the recovery areas of CNFS in the southern Appalachian Mountains of western North Carolina, eastern Tennessee, and southwestern Virginia. We recorded overstory species composition and terrain variables at 338 points to construct a robust, spatially predictive model. Terrain variables analyzed included elevation, aspect, slope gradient, site curvature, and topographic exposure. We used an information-theoretic approach to assess seven models based on associations noted in existing literature as well as an inclusive global model. Our results indicate that, on a regional scale, elevation, aspect, and topographic exposure index (TEI) are significant predictors of the presence of the northern hardwood forest type in the southern Appalachians. Our elevation + TEI model was the best approximating model (the lowest AICc score) for predicting northern hardwood forest type, correctly classifying approximately 78% of our sample points. We then used these data to create region-wide predictive maps of the distribution of the northern hardwood forest type within CNFS recovery areas.
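
    The information-theoretic comparison above rests on the small-sample corrected AIC; the standard definitions are shown below (the study-specific likelihoods and per-model values are not reproduced).

```latex
% Small-sample corrected AIC and Akaike weights used for information-theoretic model selection.
% k: number of estimated parameters, n: sample size, \mathcal{L}: maximized likelihood.
\mathrm{AIC}_c = -2\ln\mathcal{L} + 2k + \frac{2k(k+1)}{n-k-1},
\qquad
w_i = \frac{\exp(-\Delta_i/2)}{\sum_{j}\exp(-\Delta_j/2)},
\quad \Delta_i = \mathrm{AIC}_{c,i} - \mathrm{AIC}_{c,\min}
```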

  3. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine whether the covariance among the observed variables could yield a descriptive equation-based model or, better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger sticks required to monitor blood sugar levels. The personal history and analysis with resulting models are presented.

  4. Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.

    2017-08-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance measured by the area under the receiver-operating-curve (AUC). The models also performed well on the test data for presence and absence with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models), and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (~50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conforms well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated distributions and non-normal distributions such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
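
    A minimal sketch of this kind of model comparison, assuming scikit-learn stand-ins (LogisticRegression for the GLM, GradientBoostingClassifier for boosted regression trees, RandomForestClassifier, and an unweighted probability-averaging ensemble); a GAM is omitted because base scikit-learn does not provide one, and the simulated data stand in for the Gulf of Alaska survey data.

```python
# Sketch: compare presence/absence models by AUC and average their probabilities
# into a simple ensemble. Data are simulated placeholders, not survey data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                     # e.g. depth, slope, temperature, ...
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1] ** 2 + 0.5 * X[:, 2])))
y = rng.random(2000) < p                           # simulated coral/sponge presence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GLM": LogisticRegression(max_iter=1000),
    "BRT": GradientBoostingClassifier(),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
}
probs = {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    probs[name] = m.predict_proba(X_te)[:, 1]
    print(name, round(roc_auc_score(y_te, probs[name]), 3))

ensemble = np.mean(list(probs.values()), axis=0)   # unweighted average of predicted probabilities
print("Ensemble", round(roc_auc_score(y_te, ensemble), 3))
```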

  5. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model.

    PubMed

    Xianfang, Wang; Junmei, Wang; Xiaolei, Wang; Yue, Zhang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To solve the problem of information redundancy in current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance of each feature for the result, and attributes with small F values are filtered out in a rough selection step. Second, the degree of redundancy is calculated using the Pearson correlation coefficient, and a threshold is set to filter out attributes with weak independence, yielding the refined attribute set. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model can be accessed free of charge at our web server.
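
    A minimal sketch of an AVC-SVM-style pipeline, assuming scikit-learn stand-ins: an ANOVA F-value rough filter, a Pearson-correlation redundancy filter, then an SVM; the feature matrix, labels, k, and the |r| threshold are placeholders, not the paper's settings.

```python
# Sketch of an AVC-SVM-style pipeline: ANOVA F-value filter, correlation redundancy
# filter, then SVM. Features, labels, and thresholds are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(260, 400))                  # placeholder sequence-derived features
y = rng.integers(0, 3, size=260)                 # placeholder conotoxin type labels

# Step 1: keep the attributes with the largest ANOVA F values (rough selection)
X_f = SelectKBest(f_classif, k=120).fit_transform(X, y)

# Step 2: drop attributes highly correlated (|r| > 0.9) with an already-kept attribute
corr = np.corrcoef(X_f, rowvar=False)
keep = []
for j in range(X_f.shape[1]):
    if all(abs(corr[j, i]) <= 0.9 for i in keep):
        keep.append(j)
X_sel = X_f[:, keep]

# Step 3: SVM on the refined attribute set, evaluated by cross-validation
print(cross_val_score(SVC(kernel="rbf", gamma="scale"), X_sel, y, cv=5).mean())
```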

  6. Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model

    PubMed Central

    Xiaolei, Wang

    2017-01-01

    The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To solve the problem of information redundancy in current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance of each feature for the result, and attributes with small F values are filtered out in a rough selection step. Second, the degree of redundancy is calculated using the Pearson correlation coefficient, and a threshold is set to filter out attributes with weak independence, yielding the refined attribute set. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model can be accessed free of charge at our web server. PMID:28497044

  7. [Prediction of regional soil quality based on mutual information theory integrated with decision tree algorithm].

    PubMed

    Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu

    2012-02-01

    In this paper, to precisely obtain the spatial distribution characteristics of regional soil quality, the main factors that affect soil quality (such as soil type, land use pattern, lithology type, topography, road, and industry type) were considered: mutual information theory was adopted to select the main environmental factors, and the See5.0 decision tree algorithm was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model built with the variables selected by mutual information was clearly higher than that of the model built with all variables, and for the former model, whether expressed as a decision tree or as decision rules, the prediction accuracy was higher than 80%. Based on continuous and categorical data, the method of mutual information theory integrated with a decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.

  8. Predicting drug-target interactions using restricted Boltzmann machines.

    PubMed

    Wang, Yuhao; Zeng, Jianyang

    2013-07-01

    In silico prediction of drug-target interactions plays an important role toward identifying and developing new uses of existing or abandoned drugs. Network-based approaches have recently become a popular tool for discovering new drug-target interactions (DTIs). Unfortunately, most of these network-based approaches can only predict binary interactions between drugs and targets, and information about different types of interactions has not been well exploited for DTI prediction in previous studies. On the other hand, incorporating additional information about drug-target relationships or drug modes of action can improve prediction of DTIs. Furthermore, the predicted types of DTIs can broaden our understanding about the molecular basis of drug action. We propose a first machine learning approach to integrate multiple types of DTIs and predict unknown drug-target relationships or drug modes of action. We cast the new DTI prediction problem into a two-layer graphical model, called restricted Boltzmann machine, and apply a practical learning algorithm to train our model and make predictions. Tests on two public databases show that our restricted Boltzmann machine model can effectively capture the latent features of a DTI network and achieve excellent performance on predicting different types of DTIs, with the area under precision-recall curve up to 89.6. In addition, we demonstrate that integrating multiple types of DTIs can significantly outperform other predictions either by simply mixing multiple types of interactions without distinction or using only a single interaction type. Further tests show that our approach can infer a high fraction of novel DTIs that have been validated by known experiments in the literature or other databases. These results indicate that our approach can have highly practical relevance to DTI prediction and drug repositioning, and hence advance the drug discovery process. Software and datasets are available on request. Supplementary data are available at Bioinformatics online.
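
    As a highly simplified stand-in for the model described above, a plain Bernoulli RBM (scikit-learn) can learn latent features of a binary drug-target interaction matrix; the paper's multi-type, two-layer formulation that distinguishes interaction types is not reproduced here, and the data below are fake.

```python
# Simplified stand-in: learn latent features of a binary drug-target interaction
# matrix with a plain BernoulliRBM. The paper's multi-type RBM is not reproduced.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
n_drugs, n_targets = 200, 80
dti = (rng.random((n_drugs, n_targets)) < 0.05).astype(float)   # sparse fake DTI matrix

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=50, random_state=0)
rbm.fit(dti)

hidden = rbm.transform(dti)          # latent drug representations (hidden-unit activations)
reconstructed = rbm.gibbs(dti)       # one Gibbs step: a stochastic reconstruction of the matrix
print(hidden.shape, reconstructed.shape)
```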

  9. Inverse and Predictive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, Ellen Marie

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  10. The Lack of Utility of Circulating Biomarkers of Inflammation and Endothelial Dysfunction for Type 2 Diabetes Risk Prediction Among Postmenopausal Women

    PubMed Central

    Chao, Chun; Song, Yiqing; Cook, Nancy; Tseng, Chi-Hong; Manson, JoAnn E.; Eaton, Charles; Margolis, Karen L.; Rodriguez, Beatriz; Phillips, Lawrence S.; Tinker, Lesley F.; Liu, Simin

    2011-01-01

    Background Recent studies have linked plasma markers of inflammation and endothelial dysfunction to type 2 diabetes mellitus (DM) development. However, the utility of these novel biomarkers for type 2 DM risk prediction remains uncertain. Methods The Women’s Health Initiative Observational Study (WHIOS), a prospective cohort, and a nested case-control study within the WHIOS of 1584 incident type 2 DM cases and 2198 matched controls were used to evaluate the utility of plasma markers of inflammation and endothelial dysfunction for type 2 DM risk prediction. Between September 1994 and December 1998, 93 676 women aged 50 to 79 years were enrolled in the WHIOS. Fasting plasma levels of glucose, insulin, white blood cells, tumor necrosis factor receptor 2, interleukin 6, high-sensitivity C-reactive protein, E-selectin, soluble intercellular adhesion molecule 1, and vascular cell adhesion molecule 1 were measured using blood samples collected at baseline. A series of prediction models including traditional risk factors and novel plasma markers were evaluated on the basis of global model fit, model discrimination, net reclassification improvement, and positive and negative predictive values. Results Although white blood cell count and levels of interleukin 6, high-sensitivity C-reactive protein, and soluble intercellular adhesion molecule 1 significantly enhanced model fit, none of the inflammatory and endothelial dysfunction markers improved the ability of model discrimination (area under the receiver operating characteristic curve, 0.93 vs 0.93), net reclassification, or predictive values (positive, 0.22 vs 0.24; negative, 0.99 vs 0.99 [using 15% 6-year type 2 DM risk as the cutoff]) compared with traditional risk factors. Similar results were obtained in ethnic-specific analyses. Conclusion Beyond traditional risk factors, measurement of plasma markers of systemic inflammation and endothelial dysfunction contribute relatively little additional value in clinical type 2 DM risk prediction in a multiethnic cohort of postmenopausal women. PMID:20876407

  11. Predictive occurrence models for coastal wetland plant communities: Delineating hydrologic response surfaces with multinomial logistic regression

    NASA Astrophysics Data System (ADS)

    Snedden, Gregg A.; Steyer, Gregory D.

    2013-02-01

    Understanding plant community zonation along estuarine stress gradients is critical for effective conservation and restoration of coastal wetland ecosystems. We related the presence of plant community types to estuarine hydrology at 173 sites across coastal Louisiana. Percent relative cover by species was assessed at each site near the end of the growing season in 2008, and hourly water level and salinity were recorded at each site Oct 2007-Sep 2008. Nine plant community types were delineated with k-means clustering, and indicator species were identified for each of the community types with indicator species analysis. An inverse relation between salinity and species diversity was observed. Canonical correspondence analysis (CCA) effectively segregated the sites across ordination space by community type, and indicated that salinity and tidal amplitude were both important drivers of vegetation composition. Multinomial logistic regression (MLR) and Akaike's Information Criterion (AIC) were used to predict the probability of occurrence of the nine vegetation communities as a function of salinity and tidal amplitude, and probability surfaces obtained from the MLR model corroborated the CCA results. The weighted kappa statistic, calculated from the confusion matrix of predicted versus actual community types, was 0.7 and indicated good agreement between observed community types and model predictions. Our results suggest that models based on a few key hydrologic variables can be valuable tools for predicting vegetation community development when restoring and managing coastal wetlands.
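
    A minimal sketch of the multinomial logistic regression step, assuming a scikit-learn workflow; the hydrologic values and community labels are simulated placeholders, not the coastal Louisiana data.

```python
# Sketch: multinomial logistic regression of community type on salinity and tidal
# amplitude, then probability-of-occurrence surfaces over a hydrologic grid.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
salinity = rng.uniform(0, 30, 173)                 # mean salinity (ppt), placeholder
tide = rng.uniform(0.05, 0.6, 173)                 # tidal amplitude (m), placeholder
community = np.digitize(salinity + rng.normal(0, 4, 173), [5, 12, 20])  # fake classes 0-3

X = np.column_stack([salinity, tide])
mlr = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, community)

# Probability surfaces on a salinity x tidal-amplitude grid
s_grid, t_grid = np.meshgrid(np.linspace(0, 30, 61), np.linspace(0.05, 0.6, 23))
grid = np.column_stack([s_grid.ravel(), t_grid.ravel()])
prob_surfaces = mlr.predict_proba(grid).reshape(t_grid.shape + (-1,))
print(prob_surfaces.shape)                         # (23, 61, n_community_types)
```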

  12. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.

    1991-01-01

    Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results was used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were within ± one-sigma intervals of predicted accuracy for the most part, demonstrating the validity of the methodology and computer code.

  13. PREDICTING LEVELS OF STRESS FROM BIOLOGICAL ASSESSMENT DATA: EMPIRICAL MODELS FROM THE EASTERN CORN BELT PLAINS, OHIO, USA

    EPA Science Inventory

    Interest is increasing in using biological community data to provide information on the specific types of anthropogenic influences impacting streams. We built empirical models that predict the level of six different types of stress with fish and benthic macroinvertebrate data as...

  14. Models that predict standing crop of stream fish from habitat variables: 1950-85.

    Treesearch

    K.D. Fausch; C.L. Hawkes; M.G. Parsons

    1988-01-01

    We reviewed mathematical models that predict standing crop of stream fish (number or biomass per unit area or length of stream) from measurable habitat variables and classified them by the types of independent habitat variables found significant, by mathematical structure, and by model quality. Habitat variables were of three types and were measured on different scales...

  15. Myofiber metabolic type determination by mass spectrometry imaging.

    PubMed

    Centeno, Delphine; Vénien, Annie; Pujos-Guillot, Estelle; Astruc, Thierry; Chambon, Christophe; Théron, Laëtitia

    2017-08-01

    Matrix assisted laser desorption/ionization (MALDI) mass spectrometry imaging is a powerful tool that opens new research opportunities in the field of biology. In this work, a predictive model was developed to discriminate metabolic myofiber types using the MALDI spectral data. Rat skeletal muscles are composed of type I and type IIA fibers, which have an oxidative metabolism for glycogen degradation, and type IIX and type IIB fibers, which have a glycolytic metabolism; these fiber types are present in different proportions according to muscle function and physiological state. So far, myofiber type has been determined by histological methods that are time-consuming. Thanks to the predictive model, we were able to predict not only the metabolic fiber type but also its location on the same muscle section that was used for MALDI imaging. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Experimental evaluation of a recursive model identification technique for type 1 diabetes.

    PubMed

    Finan, Daniel A; Doyle, Francis J; Palerm, Cesar C; Bevier, Wendy C; Zisser, Howard C; Jovanovic, Lois; Seborg, Dale E

    2009-09-01

    A model-based controller for an artificial beta cell requires an accurate model of the glucose-insulin dynamics in type 1 diabetes subjects. To ensure the robustness of the controller for changing conditions (e.g., changes in insulin sensitivity due to illnesses, changes in exercise habits, or changes in stress levels), the model should be able to adapt to the new conditions by means of a recursive parameter estimation technique. Such an adaptive strategy will ensure that the most accurate model is used for the current conditions, and thus the most accurate model predictions are used in model-based control calculations. In a retrospective analysis, empirical dynamic autoregressive exogenous input (ARX) models were identified from glucose-insulin data for nine type 1 diabetes subjects in ambulatory conditions. Data sets consisted of continuous (5-minute) glucose concentration measurements obtained from a continuous glucose monitor, basal insulin infusion rates and times and amounts of insulin boluses obtained from the subjects' insulin pumps, and subject-reported estimates of the times and carbohydrate content of meals. Two identification techniques were investigated: nonrecursive, or batch methods, and recursive methods. Batch models were identified from a set of training data, whereas recursively identified models were updated at each sampling instant. Both types of models were used to make predictions of new test data. For the purpose of comparison, model predictions were compared to zero-order hold (ZOH) predictions, which were made by simply holding the current glucose value constant for p steps into the future, where p is the prediction horizon. Thus, the ZOH predictions are model free and provide a base case for the prediction metrics used to quantify the accuracy of the model predictions. In theory, recursive identification techniques are needed only when there are changing conditions in the subject that require model adaptation. Thus, the identification and validation techniques were performed with both "normal" data and data collected during conditions of reduced insulin sensitivity. The latter were achieved by having the subjects self-administer a medication, prednisone, for 3 consecutive days. The recursive models were allowed to adapt to this condition of reduced insulin sensitivity, while the batch models were only identified from normal data. Data from nine type 1 diabetes subjects in ambulatory conditions were analyzed; six of these subjects also participated in the prednisone portion of the study. For normal test data, the batch ARX models produced 30-, 45-, and 60-minute-ahead predictions that had average root mean square error (RMSE) values of 26, 34, and 40 mg/dl, respectively. For test data characterized by reduced insulin sensitivity, the batch ARX models produced 30-, 60-, and 90-minute-ahead predictions with average RMSE values of 27, 46, and 59 mg/dl, respectively; the recursive ARX models demonstrated similar performance with corresponding values of 27, 45, and 61 mg/dl, respectively. The identified ARX models (batch and recursive) produced more accurate predictions than the model-free ZOH predictions, but only marginally. For test data characterized by reduced insulin sensitivity, RMSE values for the predictions of the batch ARX models were 9, 5, and 5% more accurate than the ZOH predictions for prediction horizons of 30, 60, and 90 minutes, respectively. 
In terms of RMSE values, the 30-, 60-, and 90-minute predictions of the recursive models were more accurate than the ZOH predictions, by 10, 5, and 2%, respectively. In this experimental study, the recursively identified ARX models resulted in predictions of test data that were similar, but not superior, to the batch models. Even for the test data characteristic of reduced insulin sensitivity, the batch and recursive models demonstrated similar prediction accuracy. The predictions of the identified ARX models were only marginally more accurate than the model-free ZOH predictions. Given the simplicity of the ARX models and the computational ease with which they are identified, however, even modest improvements may justify the use of these models in a model-based controller for an artificial beta cell. 2009 Diabetes Technology Society.
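
    As an illustration of the modelling approach described above, the sketch below fits a batch ARX model by least squares and compares its p-step-ahead predictions against the zero-order hold (ZOH) baseline. The lag orders, the synthetic glucose/insulin series, and the 30-minute horizon are illustrative assumptions, not the study's data or identified model structure.

      import numpy as np

      # Illustrative ARX(2,2) model: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
      # y = glucose (mg/dl, 5-min samples), u = insulin input; the data here are synthetic.
      rng = np.random.default_rng(0)
      n = 500
      u = rng.normal(0.0, 1.0, n)                      # synthetic insulin signal
      y = np.zeros(n)
      for k in range(2, n):
          y[k] = 1.5 * y[k - 1] - 0.6 * y[k - 2] - 2.0 * u[k - 1] + 0.5 * u[k - 2] \
                 + rng.normal(0.0, 1.0)
      y += 120.0                                       # shift to a glucose-like level

      def fit_arx(y, u, na=2, nb=2):
          """Batch least-squares fit of an ARX model (glucose as deviation variable)."""
          yd = y - y.mean()
          rows, targets = [], []
          for k in range(max(na, nb), len(y)):
              rows.append(np.r_[yd[k - na:k][::-1], u[k - nb:k][::-1]])
              targets.append(yd[k])
          theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
          return theta, y.mean()

      def predict_p_steps(theta, y_mean, y, u, k, p, na=2, nb=2):
          """Iterate the ARX model p steps ahead from sample k (future insulin inputs
          are taken as known, as boluses and basal rates would be)."""
          hist = list(y[k - na:k] - y_mean)
          for step in range(p):
              regress = np.r_[hist[::-1][:na], u[k - nb + step:k + step][::-1][:nb]]
              hist.append(float(theta @ regress))
          return hist[-1] + y_mean

      theta, y_mean = fit_arx(y[:400], u[:400])
      p = 6                                            # 6 x 5 min = 30-minute horizon
      arx_err, zoh_err = [], []
      for k in range(400, n - p):
          arx_err.append(predict_p_steps(theta, y_mean, y, u, k, p) - y[k + p])
          zoh_err.append(y[k - 1] - y[k + p])          # ZOH: hold the current value
      print("30-min RMSE  ARX:", np.sqrt(np.mean(np.square(arx_err))))
      print("30-min RMSE  ZOH:", np.sqrt(np.mean(np.square(zoh_err))))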

  17. A new scheme for strain typing of methicillin-resistant Staphylococcus aureus on the basis of matrix-assisted laser desorption ionization time-of-flight mass spectrometry by using machine learning approach.

    PubMed

    Wang, Hsin-Yao; Lee, Tzong-Yi; Tseng, Yi-Ju; Liu, Tsui-Ping; Huang, Kai-Yao; Chang, Yung-Ta; Chen, Chun-Hsien; Lu, Jang-Jih

    2018-01-01

    Methicillin-resistant Staphylococcus aureus (MRSA), one of the most important clinical pathogens, causes increasing morbidity and mortality worldwide. Rapid and accurate strain typing of bacteria would facilitate epidemiological investigation and infection control in near real time. Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry is a rapid and cost-effective tool for presumptive strain typing. Machine learning (ML) is a promising approach for building robust predictive models for strain typing from MALDI-TOF spectra. In this study, a strategy of building templates of specific types was used to generate predictive models for MRSA strain typing through various ML methods. The strain types of the isolates were determined through multilocus sequence typing (MLST). The area under the receiver operating characteristic curve (AUC) and the predictive accuracy of the models were compared. ST5, ST59, and ST239 were the major MLST types, and ST45 was a minor type. For binary classification, the AUC values of the various ML methods ranged from 0.76 to 0.99 for the ST5, ST59, and ST239 types. In multiclass classification, the predictive accuracy of all generated models exceeded 0.83. This study demonstrates that ML methods can serve as a cost-effective and promising tool for providing preliminary strain typing information about major MRSA lineages on the basis of MALDI-TOF spectra.
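
    A minimal sketch of this kind of spectrum-based typing workflow is given below, using a random forest classifier, a cross-validated AUC for a one-type-versus-rest task, and overall multiclass accuracy. The binned peak-intensity features and synthetic labels are placeholders; the study's template-building strategy and specific ML methods are not reproduced here.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_predict, cross_val_score
      from sklearn.metrics import roc_auc_score

      # Hypothetical data: rows = isolates, columns = binned MALDI-TOF peak intensities,
      # labels = MLST types (ST5, ST59, ST239, ST45). Synthetic stand-ins shown here.
      rng = np.random.default_rng(1)
      X = rng.random((200, 300))
      y = rng.choice(["ST5", "ST59", "ST239", "ST45"], size=200, p=[0.35, 0.3, 0.25, 0.1])

      clf = RandomForestClassifier(n_estimators=300, random_state=0)

      # Binary task: ST239 versus the rest, scored by AUC.
      y_bin = (y == "ST239").astype(int)
      proba = cross_val_predict(clf, X, y_bin, cv=5, method="predict_proba")[:, 1]
      print("ST239 vs rest AUC:", roc_auc_score(y_bin, proba))

      # Multiclass task: overall predictive accuracy across all types.
      print("Multiclass accuracy:", cross_val_score(clf, X, y, cv=5).mean())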

  18. Psychopathy and Deviant Workplace Behavior: A Comparison of Two Psychopathy Models.

    PubMed

    Carre, Jessica R; Mueller, Steven M; Schleicher, Karly M; Jones, Daniel N

    2018-04-01

    Although psychopathy is an interpersonally harmful construct, few studies have compared different psychopathy models in predicting different types of workplace deviance. We examined how the Triarchic Psychopathy Model (TRI-PM) and the Self-Report Psychopathy-Short Form (SRP-SF) predicted deviant workplace behaviors in two forms: sexual harassment and deviant work behaviors. Using structural equation modeling, the latent factor of psychopathy was predictive of both types of deviant workplace behavior. Specifically, the SRP-SF significantly predicted both measures of deviant workplace behavior. With respect to the TRI-PM, meanness and disinhibition significantly predicted higher scores on the workplace deviance and workplace sexual harassment measures. Future research needs to investigate the influence of psychopathy on deviant workplace behaviors and to consider the measures used when investigating these constructs.

  19. Sasang constitutional types for the risk prediction of metabolic syndrome: a 14-year longitudinal prospective cohort study.

    PubMed

    Lee, Sunghee; Lee, Seung Ku; Kim, Jong Yeol; Cho, Namhan; Shin, Chol

    2017-09-02

    To examine whether the use of Sasang constitutional (SC) types, such as the Tae-yang (TY), Tae-eum (TE), So-yang (SY), and So-eum (SE) types, increases the accuracy of risk prediction for metabolic syndrome. From 2001 to 2014, 3529 individuals aged 40 to 69 years participated in a longitudinal prospective cohort. The Cox proportional hazards model was used to predict the risk of developing metabolic syndrome. During the 14-year follow-up, 1591 incident events of metabolic syndrome were observed. Individuals with the TE type had higher body mass indexes and waist circumferences than individuals with the SY and SE types. The risk of developing metabolic syndrome was highest among individuals with the TE type, followed by the SY type and the SE type. When the prediction risk models for incident metabolic syndrome were compared, the area under the curve for the model using SC types increased significantly, to 0.8173. Significant predictors for incident metabolic syndrome differed according to SC type. For individuals with the TE type, the significant predictors were age, sex, body mass index (BMI), education, smoking, drinking, fasting glucose level, high-density lipoprotein (HDL) cholesterol level, systolic and diastolic blood pressure, and triglyceride level. For individuals with the SE type, the predictors were sex, smoking, fasting glucose level, HDL cholesterol level, systolic and diastolic blood pressure, and triglyceride level, while the predictors for individuals with the SY type were age, sex, BMI, smoking, drinking, total cholesterol level, fasting glucose level, HDL cholesterol level, systolic and diastolic blood pressure, and triglyceride level. In this prospective cohort study of 3529 individuals, we observed that utilizing the SC types significantly increased the accuracy of risk prediction for the development of metabolic syndrome.

  20. A Feature Fusion Based Forecasting Model for Financial Time Series

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

    Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features that improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that it yields better predictions than two other similar models. PMID:24971455
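
    The three-stage pipeline described above can be sketched as follows with scikit-learn; the synthetic price series, the 39 placeholder technical variables, the number of components, and the SVR settings are all illustrative assumptions rather than the paper's configuration.

      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.cross_decomposition import CCA
      from sklearn.svm import SVR

      # Synthetic stand-ins: closing prices and a block of technical indicators.
      rng = np.random.default_rng(2)
      n_days = 600
      close = np.cumsum(rng.normal(0, 1, n_days)) + 100.0
      technical = rng.normal(0, 1, (n_days, 39))          # 39 technical variables (placeholder)

      # Feature blocks (lagged prices, technical block) and next-day closing price targets.
      X_price = np.column_stack([np.roll(close, k) for k in range(1, 6)])[5:-1]
      X_tech = technical[5:-1]
      y = close[6:]

      # Step 1: extract independent components from each feature block.
      ica_price = FastICA(n_components=4, random_state=0).fit_transform(X_price)
      ica_tech = FastICA(n_components=8, random_state=0).fit_transform(X_tech)

      # Step 2: fuse the two feature sets with canonical correlation analysis.
      cca = CCA(n_components=3)
      U, V = cca.fit_transform(ica_price, ica_tech)
      fused = np.hstack([U, V])

      # Step 3: regress the next-day close on the fused features with a support vector machine.
      split = int(0.8 * len(y))
      model = SVR(C=10.0, epsilon=0.1).fit(fused[:split], y[:split])
      pred = model.predict(fused[split:])
      print("Test RMSE:", float(np.sqrt(np.mean((pred - y[split:]) ** 2))))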

  1. Nucleosynthesis Predictions for Intermediate-Mass AGB Stars: Comparison to Observations of Type I Planetary Nebulae

    NASA Technical Reports Server (NTRS)

    Karakas, Amanda I.; vanRaai, Mark A.; Lugaro, Maria; Sterling, Nicholas C.; Dinerstein, Harriet L.

    2008-01-01

    Type I planetary nebulae (PNe) have high He/H and N/O ratios and are thought to be descendants of stars with initial masses of approximately 3-8 solar masses. These characteristics indicate that the progenitor stars experienced proton-capture nucleosynthesis at the base of the convective envelope, in addition to the slow neutron capture process operating in the He-shell (the s-process). We compare the predicted abundances of elements up to Sr from models of intermediate-mass asymptotic giant branch (AGB) stars to measured abundances in Type I PNe. In particular, we compare predictions and observations for the light trans-iron elements Se and Kr, in order to constrain convective mixing and the s-process in these stars. A partial mixing zone is included in selected models to explore the effect of a C-13 pocket on the s-process yields. The solar-metallicity models produce enrichments of [(Se, Kr)/Fe] ≲ 0.6, consistent with Galactic Type I PNe where the observed enhancements are typically ≲ 0.3 dex, while lower metallicity models predict larger enrichments of C, N, Se, and Kr. O destruction occurs in the most massive models but it is not efficient enough to account for the ≳ 0.3 dex O depletions observed in some Type I PNe. It is not possible to reach firm conclusions regarding the neutron source operating in massive AGB stars from Se and Kr abundances in Type I PNe; abundances for more s-process elements may help to distinguish between the two neutron sources. We predict that only the most massive (M ≳ 5 solar masses) models would evolve into Type I PNe, indicating that extra-mixing processes are active in lower-mass stars (3-4 solar masses), if these stars are to evolve into Type I PNe.

  2. Nucleosynthesis Predictions for Intermediate-Mass Asymptotic Giant Branch Stars: Comparison to Observations of Type I Planetary Nebulae

    NASA Astrophysics Data System (ADS)

    Karakas, Amanda I.; van Raai, Mark A.; Lugaro, Maria; Sterling, N. C.; Dinerstein, Harriet L.

    2009-01-01

    Type I planetary nebulae (PNe) have high He/H and N/O ratios and are thought to be descendants of stars with initial masses of ~3-8 M☉. These characteristics indicate that the progenitor stars experienced proton-capture nucleosynthesis at the base of the convective envelope, in addition to the slow neutron capture process operating in the He-shell (the s-process). We compare the predicted abundances of elements up to Sr from models of intermediate-mass asymptotic giant branch (AGB) stars to measured abundances in Type I PNe. In particular, we compare predictions and observations for the light trans-iron elements Se and Kr, in order to constrain convective mixing and the s-process in these stars. A partial mixing zone is included in selected models to explore the effect of a 13C pocket on the s-process yields. The solar-metallicity models produce enrichments of [(Se, Kr)/Fe] ≲ 0.6, consistent with Galactic Type I PNe where the observed enhancements are typically ≲ 0.3 dex, while lower metallicity models predict larger enrichments of C, N, Se, and Kr. O destruction occurs in the most massive models but it is not efficient enough to account for the ≳ 0.3 dex O depletions observed in some Type I PNe. It is not possible to reach firm conclusions regarding the neutron source operating in massive AGB stars from Se and Kr abundances in Type I PNe; abundances for more s-process elements may help to distinguish between the two neutron sources. We predict that only the most massive (M ≳ 5 M☉) models would evolve into Type I PNe, indicating that extra-mixing processes are active in lower-mass stars (3-4 M☉), if these stars are to evolve into Type I PNe. This paper includes data taken at The McDonald Observatory of The University of Texas at Austin.

  3. Large-scale optimization-based classification models in medicine and biology.

    PubMed

    Lee, Eva K

    2007-06-01

    We present novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multi-group prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneracy and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80 to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.

  4. Predicting Risk of Type 2 Diabetes Mellitus with Genetic Risk Models on the Basis of Established Genome-wide Association Markers: A Systematic Review

    PubMed Central

    Bao, Wei; Hu, Frank B.; Rong, Shuang; Rong, Ying; Bowers, Katherine; Schisterman, Enrique F.; Liu, Liegang; Zhang, Cuilin

    2013-01-01

    This study aimed to evaluate the predictive performance of genetic risk models based on risk loci identified and/or confirmed in genome-wide association studies for type 2 diabetes mellitus. A systematic literature search was conducted in the PubMed/MEDLINE and EMBASE databases through April 13, 2012, and published data relevant to the prediction of type 2 diabetes based on genome-wide association marker–based risk models (GRMs) were included. Of the 1,234 potentially relevant articles, 21 articles representing 23 studies were eligible for inclusion. The median area under the receiver operating characteristic curve (AUC) among eligible studies was 0.60 (range, 0.55–0.68), which did not differ appreciably by study design, sample size, participants’ race/ethnicity, or the number of genetic markers included in the GRMs. In addition, the AUCs for type 2 diabetes did not improve appreciably with the addition of genetic markers into conventional risk factor–based models (median AUC, 0.79 (range, 0.63–0.91) vs. median AUC, 0.78 (range, 0.63–0.90), respectively). A limited number of included studies used reclassification measures and yielded inconsistent results. In conclusion, GRMs showed a low predictive performance for risk of type 2 diabetes, irrespective of study design, participants’ race/ethnicity, and the number of genetic markers included. Moreover, the addition of genome-wide association markers into conventional risk models produced little improvement in predictive performance. PMID:24008910

  5. Predictive occurrence models for coastal wetland plant communities: delineating hydrologic response surfaces with multinomial logistic regression

    USGS Publications Warehouse

    Snedden, Gregg A.; Steyer, Gregory D.

    2013-01-01

    Understanding plant community zonation along estuarine stress gradients is critical for effective conservation and restoration of coastal wetland ecosystems. We related the presence of plant community types to estuarine hydrology at 173 sites across coastal Louisiana. Percent relative cover by species was assessed at each site near the end of the growing season in 2008, and hourly water level and salinity were recorded at each site Oct 2007–Sep 2008. Nine plant community types were delineated with k-means clustering, and indicator species were identified for each of the community types with indicator species analysis. An inverse relation between salinity and species diversity was observed. Canonical correspondence analysis (CCA) effectively segregated the sites across ordination space by community type, and indicated that salinity and tidal amplitude were both important drivers of vegetation composition. Multinomial logistic regression (MLR) and Akaike's Information Criterion (AIC) were used to predict the probability of occurrence of the nine vegetation communities as a function of salinity and tidal amplitude, and probability surfaces obtained from the MLR model corroborated the CCA results. The weighted kappa statistic, calculated from the confusion matrix of predicted versus actual community types, was 0.7 and indicated good agreement between observed community types and model predictions. Our results suggest that models based on a few key hydrologic variables can be valuable tools for predicting vegetation community development when restoring and managing coastal wetlands.
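
    A minimal sketch of the occurrence-modelling step is shown below: a multinomial logistic regression fitted to salinity and tidal amplitude, then evaluated over a grid to produce probability-of-occurrence surfaces. The synthetic site data and the three community labels are hypothetical placeholders for the nine Louisiana community types.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical site data: mean salinity (ppt), tidal amplitude (m), community type.
      # Values are illustrative, not the Louisiana survey data.
      rng = np.random.default_rng(3)
      n_sites = 173
      salinity = rng.uniform(0, 25, n_sites)
      tide = rng.uniform(0, 0.6, n_sites)
      community = np.where(salinity > 15, "saline", np.where(salinity > 5, "brackish", "fresh"))

      X = np.column_stack([salinity, tide])
      mlr = LogisticRegression(max_iter=1000).fit(X, community)   # multinomial fit

      # Probability-of-occurrence surfaces over the salinity / tidal-amplitude plane.
      sal_grid, tide_grid = np.meshgrid(np.linspace(0, 25, 50), np.linspace(0, 0.6, 50))
      grid = np.column_stack([sal_grid.ravel(), tide_grid.ravel()])
      prob = mlr.predict_proba(grid)                # one column per community type
      print(dict(zip(mlr.classes_, prob.mean(axis=0).round(3))))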

  6. The extension of total gain (TG) statistic in survival models: properties and applications.

    PubMed

    Choodari-Oskooei, Babak; Royston, Patrick; Parmar, Mahesh K B

    2015-07-01

    The results of multivariable regression models are usually summarized in the form of parameter estimates for the covariates, goodness-of-fit statistics, and the relevant p-values. These statistics do not inform us about whether covariate information will lead to any substantial improvement in prediction. Predictive ability measures can be used for this purpose since they provide important information about the practical significance of prognostic factors. R²-type indices are the most familiar forms of such measures in survival models, but they all have limitations and none is widely used. In this paper, we extend the total gain (TG) measure, proposed for a logistic regression model, to survival models and explore its properties using simulations and real data. TG is based on the binary regression quantile plot, otherwise known as the predictiveness curve. Standardised TG ranges from 0 (no explanatory power) to 1 ('perfect' explanatory power). The results of our simulations show that unlike many of the other R²-type predictive ability measures, TG is independent of random censoring. It increases as the effect of a covariate increases and can be applied to different types of survival models, including models with time-dependent covariate effects. We also apply TG to quantify the predictive ability of multivariable prognostic models developed in several disease areas. Overall, TG performs well in our simulation studies and can be recommended as a measure to quantify the predictive ability in survival models.

  7. Developing logistic regression models using purchase attributes and demographics to predict the probability of purchases of regular and specialty eggs.

    PubMed

    Bejaei, M; Wiseman, K; Cheng, K M

    2015-01-01

    Consumers' interest in specialty eggs appears to be growing in Europe and North America. The objective of this research was to develop logistic regression models that utilise purchaser attributes and demographics to predict the probability of a consumer purchasing a specific type of table egg, including regular (white and brown), non-caged (free-run, free-range and organic) or nutrient-enhanced eggs. These purchase prediction models, together with the purchasers' attributes, can be used to assess market opportunities for different egg types, specifically in British Columbia (BC). An online survey was used to gather data for the models. A total of 702 completed questionnaires were submitted by BC residents. Selected independent variables were included in the logistic regression models developed for each egg type to predict the probability of a consumer purchasing that type of table egg. The variables used in the models accounted for 54% and 49% of the variance in the purchase of regular and non-caged eggs, respectively. Research results indicate that consumers of different egg types exhibit a set of unique and statistically significant characteristics and/or demographics. For example, consumers of regular eggs were less educated, older, price sensitive, major chain store buyers, and store flyer users, and had lower awareness about different types of eggs and less concern regarding animal welfare issues. In contrast, most non-caged egg consumers were less concerned about price, had higher awareness about different types of table eggs, purchased their eggs from local/organic grocery stores, farm gates or farmers markets, and were more concerned about the care and feeding of hens compared with consumers of other egg types.

  8. The feasibility of using a universal Random Forest model to map tree height across different locations and vegetation types

    NASA Astrophysics Data System (ADS)

    Su, Y.; Guo, Q.; Jin, S.; Gao, S.; Hu, T.; Liu, J.; Xue, B. L.

    2017-12-01

    Tree height is an important forest structure parameter for understanding forest ecosystems and improving the accuracy of global carbon stock quantification. Light detection and ranging (LiDAR) can provide accurate tree height measurements, but its use in large-scale tree height mapping is limited by its spatial availability. Random Forest (RF) has been one of the most commonly used algorithms for mapping large-scale tree height through the fusion of LiDAR and other remotely sensed datasets. However, how variance in vegetation types, geolocations and spatial scales across study sites influences RF results is still a question that needs to be addressed. In this study, we selected 16 study sites across four vegetation types in the United States (U.S.), each fully covered by airborne LiDAR data and 100 km² in area. The LiDAR-derived canopy height models (CHMs) were used as the ground truth to train the RF algorithm to predict canopy height from other remotely sensed variables, such as Landsat TM imagery, terrain information and climate surfaces. To address the abovementioned question, 22 models were run under different combinations of vegetation types, geolocations and spatial scales. The results show that an RF model trained at one specific location or for one vegetation type cannot be used to predict tree height in other locations or vegetation types. However, by training the RF model using samples from all locations and vegetation types, a universal model can be achieved for predicting canopy height across different locations and vegetation types. Moreover, the number of training samples and the targeted spatial resolution of the canopy height product have a noticeable influence on the RF prediction accuracy.
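
    The pooled, "universal" training strategy can be sketched as below: a Random Forest regressor is trained on predictor stacks pooled across sites and evaluated on held-out pixels against LiDAR-derived canopy heights. The ten synthetic predictor layers and the error metric are illustrative assumptions only.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      # Hypothetical predictor stack per pixel: Landsat bands, elevation, slope, climate.
      # Response: LiDAR-derived canopy height model (CHM) value. All values are synthetic.
      rng = np.random.default_rng(4)
      n_pixels = 5000
      X = rng.random((n_pixels, 10))                      # spectral + terrain + climate layers
      height = 30 * X[:, 0] + 10 * X[:, 5] + rng.normal(0, 2, n_pixels)

      # "Universal" model idea: pool training pixels from all sites / vegetation types,
      # then evaluate on held-out pixels.
      X_tr, X_te, y_tr, y_te = train_test_split(X, height, test_size=0.3, random_state=0)
      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      rmse = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
      print("Held-out RMSE (m):", round(float(rmse), 2))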

  9. A predictive model for biomimetic plate type broadband frequency sensor

    NASA Astrophysics Data System (ADS)

    Ahmed, Riaz U.; Banerjee, Sourav

    2016-04-01

    In this work, a predictive model for a bio-inspired broadband frequency sensor is developed. Broadband frequency sensing is essential in many domains of science and technology. A prime example of such a sensor is the human cochlea, which senses a frequency band of 20 Hz to 20 kHz. Developing broadband sensors that adopt the physics of the human cochlea has attracted tremendous interest in recent years. Although a few experimental studies have been reported, a true predictive model for designing such sensors is missing. A predictive model is essential for the accurate design of selective broadband sensors that are capable of sensing very selective bands of frequencies. Hence, in this study, we propose a novel predictive model for the cochlea-inspired broadband sensor, aiming to select the frequency band and model parameters predictively. A tapered plate geometry is considered, mimicking the real shape of the basilar membrane in the human cochlea. The predictive model is intended to be flexible enough that it can be employed in a wide variety of scientific domains. To that end, the model is developed in such a way that it can handle not only homogeneous but also functionally graded model parameters. Additionally, the predictive model is capable of managing various types of boundary conditions. It has been found that, using homogeneous model parameters, it is possible to sense a specific frequency band from a specific portion (B) of the model length (L). It is also possible to alter the attributes of B using functionally graded model parameters, which confirms the predictive frequency selection ability of the developed model.

  10. Who will have Sustainable Employment After a Back Injury? The Development of a Clinical Prediction Model in a Cohort of Injured Workers.

    PubMed

    Shearer, Heather M; Côté, Pierre; Boyle, Eleanor; Hayden, Jill A; Frank, John; Johnson, William G

    2017-09-01

    Purpose: Our objective was to develop a clinical prediction model to identify workers with sustainable employment following an episode of work-related low back pain (LBP). Methods: We used data from a cohort study of injured workers with incident LBP claims in the USA to predict employment patterns 1 and 6 months following a workers' compensation claim. We developed three sequential models to determine the contribution of three domains of variables: (1) basic demographic/clinical variables; (2) health-related variables; and (3) work-related factors. Multivariable logistic regression was used to develop the predictive models. We constructed receiver operating characteristic curves and used the c-index to measure predictive accuracy. Results: Seventy-nine percent and 77% of workers had sustainable employment at 1 and 6 months, respectively. Sustainable employment at 1 month was predicted by initial back pain intensity, mental health-related quality of life, claim litigation and employer type (c-index = 0.77). At 6 months, sustainable employment was predicted by physical and mental health-related quality of life, claim litigation and employer type (c-index = 0.77). Adding health-related and work-related variables to the models improved predictive accuracy by 8.5% and 10% at 1 and 6 months, respectively. Conclusion: We developed clinically relevant models to predict sustainable employment in injured workers who made a workers' compensation claim for LBP. Inquiring about back pain intensity, physical and mental health-related quality of life, claim litigation and employer type may be beneficial in developing programs of care. Our models need to be validated in other populations.
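
    The sequential-model idea can be sketched as below: logistic regressions are fitted with successively larger variable domains and compared by the c-index (which, for a binary outcome, equals the area under the ROC curve). The variable names and simulated cohort are hypothetical stand-ins for the study's data, and the c-index here is computed in-sample for illustration only.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      # Hypothetical cohort table; variable names mirror the domains described above.
      rng = np.random.default_rng(5)
      n = 1000
      df = pd.DataFrame({
          "age": rng.integers(18, 65, n),
          "pain_intensity": rng.uniform(0, 10, n),
          "mental_hrqol": rng.normal(50, 10, n),
          "physical_hrqol": rng.normal(50, 10, n),
          "claim_litigation": rng.integers(0, 2, n),
          "employer_type": rng.integers(0, 3, n),
      })
      logit = -1.5 + 0.15 * df.pain_intensity - 0.03 * (df.mental_hrqol - 50)
      df["sustainable"] = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic outcome

      domains = {
          "1 demographic/clinical": ["age", "pain_intensity"],
          "2 + health-related":     ["age", "pain_intensity", "mental_hrqol", "physical_hrqol"],
          "3 + work-related":       ["age", "pain_intensity", "mental_hrqol", "physical_hrqol",
                                     "claim_litigation", "employer_type"],
      }
      for name, cols in domains.items():
          model = LogisticRegression(max_iter=1000).fit(df[cols], df.sustainable)
          c_index = roc_auc_score(df.sustainable, model.predict_proba(df[cols])[:, 1])
          print(name, "c-index:", round(c_index, 3))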

  11. LMethyR-SVM: Predict Human Enhancers Using Low Methylated Regions based on Weighted Support Vector Machines.

    PubMed

    Xu, Jingting; Hu, Hong; Dai, Yang

    The identification of enhancers is a challenging task. Various types of epigenetic information, including histone modification, have been utilized in the construction of enhancer prediction models based on a diverse panel of machine learning schemes. However, DNA methylation profiles generated from whole-genome bisulfite sequencing (WGBS) have not been fully explored for their potential in enhancer prediction, despite the fact that low methylated regions (LMRs) have been implicated as distal active regulatory regions. In this work, we propose a prediction framework, LMethyR-SVM, using LMRs identified from cell-type-specific WGBS DNA methylation profiles and a weighted support vector machine learning framework. In LMethyR-SVM, the set of cell-type-specific LMRs is further divided into three sets: reliable positive, likely positive and likely negative, according to their resemblance to a small set of experimentally validated enhancers in the VISTA database based on an estimated non-parametric density distribution. Then, the prediction model is obtained by solving a weighted support vector machine. We demonstrate the performance of LMethyR-SVM by using the WGBS DNA methylation profiles derived from the human embryonic stem cell type (H1) and the fetal lung fibroblast cell type (IMR90). The predicted enhancers are highly conserved with a reasonable validation rate based on a set of commonly used positive markers including transcription factors, p300 binding and DNase-I hypersensitive sites. In addition, we show evidence that a large fraction of the LMethyR-SVM-predicted enhancers are not predicted by ChromHMM in the H1 cell type and that they are more enriched for the FANTOM5 enhancers. Our work suggests that low methylated regions detected from WGBS data are useful as complementary resources to histone modification marks in developing models for the prediction of cell-type-specific enhancers.
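
    A minimal sketch of the weighted support vector machine step is shown below, with per-sample weights standing in for the reliability of the "reliable positive", "likely positive" and "likely negative" LMR sets; the feature vectors and the specific weights are illustrative assumptions, not the LMethyR-SVM settings.

      import numpy as np
      from sklearn.svm import SVC

      # Synthetic stand-ins for LMR feature vectors (e.g. sequence/epigenetic features).
      rng = np.random.default_rng(6)
      X_pos_reliable = rng.normal(1.0, 1.0, (50, 20))   # resemble validated enhancers
      X_pos_likely = rng.normal(0.7, 1.0, (150, 20))    # likely positive LMRs
      X_neg_likely = rng.normal(-0.7, 1.0, (300, 20))   # likely negative LMRs

      X = np.vstack([X_pos_reliable, X_pos_likely, X_neg_likely])
      y = np.array([1] * 200 + [0] * 300)

      # Down-weight the "likely" examples relative to the reliable positives,
      # reflecting confidence in each label (the weights here are arbitrary choices).
      weights = np.array([1.0] * 50 + [0.5] * 150 + [0.5] * 300)

      clf = SVC(kernel="rbf", C=1.0, probability=True)
      clf.fit(X, y, sample_weight=weights)
      print("Predicted enhancer probability of first LMR:",
            round(float(clf.predict_proba(X[:1])[0, 1]), 3))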

  12. Application of an Integrated HPC Reliability Prediction Framework to HMMWV Suspension System

    DTIC Science & Technology

    2010-09-13

    model number M966 (TOW Missile Carrier, Basic Armor without weapons), since they were available. Tires used for all simulations were the bias-type...vehicle fleet, including consideration of all kinds of uncertainty, especially including model uncertainty. The end result will be a tool to use...building an adequate vehicle reliability prediction framework for military vehicles is the accurate modeling of the integration of various types of

  13. Exploration of Machine Learning Approaches to Predict Pavement Performance

    DOT National Transportation Integrated Search

    2018-03-23

    Machine learning (ML) techniques were used to model and predict pavement condition index (PCI) for various pavement types using a variety of input variables. The primary objective of this research was to develop and assess PCI predictive models for t...

  14. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    In order to select effective samples from the many years of PV power generation data and improve the accuracy of the PV power generation forecasting model, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on a neural network. Based on three different weather types (sunny, cloudy and rainy days), this research screens samples of historical data using the clustering analysis method. After screening, it establishes BP neural network prediction models using the screened data as training data. The six types of photovoltaic power generation prediction models are then compared before and after the data screening. Results show that a prediction model combining clustering analysis and BP neural networks is an effective method for improving the precision of photovoltaic power generation forecasts.
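
    The screen-then-train idea can be sketched as below: days are clustered into weather regimes with k-means, one neural-network regressor is trained per cluster, and a new day is forecast with the model of its nearest cluster. The synthetic weather features and network size are illustrative assumptions; scikit-learn's MLPRegressor stands in for the BP neural network.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neural_network import MLPRegressor

      # Synthetic history: daily weather features (irradiance, temperature, humidity)
      # and the corresponding PV output; the values are placeholders.
      rng = np.random.default_rng(7)
      n_days = 900
      weather = np.column_stack([rng.uniform(0, 1000, n_days),     # irradiance proxy
                                 rng.uniform(-5, 35, n_days),      # temperature
                                 rng.uniform(20, 100, n_days)])    # humidity
      pv_output = 0.02 * weather[:, 0] + rng.normal(0, 1, n_days)

      # Step 1: cluster days into weather regimes (sunny / cloudy / rainy analogue).
      kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(weather)
      labels = kmeans.labels_

      # Step 2: train one neural-network regressor per cluster, on that cluster's days only.
      models = {}
      for c in range(3):
          idx = labels == c
          models[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0).fit(weather[idx], pv_output[idx])

      # Forecast: assign a new day to its nearest cluster, then use that cluster's model.
      new_day = np.array([[800.0, 25.0, 40.0]])
      cluster = int(kmeans.predict(new_day)[0])
      print("Forecast PV output:", round(float(models[cluster].predict(new_day)[0]), 2))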

  15. Meta-path based heterogeneous combat network link prediction

    NASA Astrophysics Data System (ADS)

    Li, Jichao; Ge, Bingfeng; Yang, Kewei; Chen, Yingwu; Tan, Yuejin

    2017-09-01

    The combat system-of-systems in high-tech informative warfare, composed of many interconnected combat systems of different types, can be regarded as a type of complex heterogeneous network. Link prediction for heterogeneous combat networks (HCNs) is of significant military value, as it facilitates reconfiguring combat networks to represent the complex real-world network topology as appropriate given the observed information. This paper proposes a novel integrated methodology framework called HCNMP (HCN link prediction based on meta-path) to predict multiple types of links simultaneously for an HCN. More specifically, the concept of HCN meta-paths is introduced, through which the HCNMP can accumulate information by extracting different features of HCN links for all six defined types. Next, an HCN link prediction model, based on meta-path features, is built to predict all types of links of the HCN simultaneously. Then, the solution algorithm for the HCN link prediction model is proposed, in which the prediction results are obtained by iteratively updating with the newly predicted results until the results in the HCN converge or a maximum number of iterations is reached. Finally, numerical experiments on the dataset of a real HCN are conducted to demonstrate the feasibility and effectiveness of the proposed HCNMP, in comparison with 30 baseline methods. The results show that the performance of the HCNMP is superior to those of the baseline methods.

  16. Predicting turns in proteins with a unified model.

    PubMed

    Song, Qi; Li, Tonghua; Cong, Peisheng; Sun, Jiangming; Li, Dapeng; Tang, Shengnan

    2012-01-01

    Turns are a critical element of the structure of a protein; turns play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for the prediction of individual turn types, including α-turn, β-turn, and γ-turn, etc. However, for further protein structure and function prediction it is necessary to develop a uniform model that can accurately predict all types of turns simultaneously. In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) using newly exploited features of structural evolution information (secondary structure and shape string of protein) based on structure homologies, (ii) considering all types of turns in a unified model, and (iii) practical capability of accurate prediction of all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both of which have greater accuracy, based on innovative technologies which were both developed by our group. Then, sequence and structural evolution features, which are profile of sequence, profile of secondary structures and profile of shape strings are generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeded the most state-of-the-art predictors of certain type of turn. Newly determined sequences, the EVA and CASP9 datasets were used as independent tests and the results we achieved were outstanding for turn predictions and confirmed the good performance of TurnP for practical applications.

  17. Predicting Turns in Proteins with a Unified Model

    PubMed Central

    Song, Qi; Li, Tonghua; Cong, Peisheng; Sun, Jiangming; Li, Dapeng; Tang, Shengnan

    2012-01-01

    Motivation Turns are a critical element of the structure of a protein; turns play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for the prediction of individual turn types, including α-turn, β-turn, and γ-turn, etc. However, for further protein structure and function prediction it is necessary to develop a uniform model that can accurately predict all types of turns simultaneously. Results In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) using newly exploited features of structural evolution information (secondary structure and shape string of protein) based on structure homologies, (ii) considering all types of turns in a unified model, and (iii) practical capability of accurate prediction of all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both of which have greater accuracy, based on innovative technologies which were both developed by our group. Then, sequence and structural evolution features, which are profile of sequence, profile of secondary structures and profile of shape strings are generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeded the most state-of-the-art predictors of certain type of turn. Newly determined sequences, the EVA and CASP9 datasets were used as independent tests and the results we achieved were outstanding for turn predictions and confirmed the good performance of TurnP for practical applications. PMID:23144872

  18. Innate biology versus lifestyle behaviour in the aetiology of obesity and type 2 diabetes: the GLACIER Study.

    PubMed

    Poveda, Alaitz; Koivula, Robert W; Ahmad, Shafqat; Barroso, Inês; Hallmans, Göran; Johansson, Ingegerd; Renström, Frida; Franks, Paul W

    2016-03-01

    We compared the ability of genetic (established type 2 diabetes, fasting glucose, 2 h glucose and obesity variants) and modifiable lifestyle (diet, physical activity, smoking, alcohol and education) risk factors to predict incident type 2 diabetes and obesity in a population-based prospective cohort of 3,444 Swedish adults studied sequentially at baseline and 10 years later. Multivariable logistic regression analyses were used to assess the predictive ability of genetic and lifestyle risk factors on incident obesity and type 2 diabetes by calculating the AUC. The predictive accuracy of lifestyle risk factors was similar to that yielded by genetic information for incident type 2 diabetes (AUC 75% and 74%, respectively) and obesity (AUC 68% and 73%, respectively) in models adjusted for age, age² and sex. The addition of genetic information to the lifestyle model significantly improved the prediction of type 2 diabetes (AUC 80%; p = 0.0003) and obesity (AUC 79%; p < 0.0001) and resulted in a net reclassification improvement of 58% for type 2 diabetes and 64% for obesity. These findings illustrate that lifestyle and genetic information separately provide a similarly high degree of long-range predictive accuracy for obesity and type 2 diabetes.

  19. Optimal foraging on the roof of the world: Himalayan langurs and the classical prey model

    PubMed Central

    Sayers, Ken; Norconk, Marilyn A.; Conklin-Brittain, Nancy L.

    2009-01-01

    Optimal foraging theory has only been sporadically applied to nonhuman primates. The classical prey model, modified for patch choice, predicts a sliding “profitability threshold” for dropping patch types from the diet, preference for profitable foods, dietary niche breadth reduction as encounter rates increase, and that exploitation of a patch type is unrelated to its own abundance. We present results from a one-year study testing these predictions with Himalayan langurs (Semnopithecus entellus) at Langtang National Park, Nepal. Behavioral data included continuous recording of feeding bouts and between-patch travel times. Encounter rates were estimated for 55 food types, which were analyzed for crude protein, lipid, free simple sugar, and fibers. Patch types were entered into the prey model algorithm for eight seasonal time periods and differing age-sex classes and nutritional currencies. Although the model consistently underestimated diet breadth, the majority of non-predicted patch types represented rare foods. Profitability was positively related to annual/seasonal dietary contribution by organic matter estimates, while time estimates provided weaker relationships. Patch types utilized did not decrease with increasing encounter rates involving profitable foods, although low-ranking foods available year-round were taken predominantly when high-ranking foods were scarce. High-ranking foods were taken in close relation to encounter rates, while low-ranking foods were not. The utilization of an energetic currency generally resulted in closest conformation to model predictions, and it performed best when assumptions were most closely approximated. These results suggest that even simple models from foraging theory can provide a useful framework for the study of primate feeding behavior. PMID:19844998
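
    The patch-choice algorithm tested above can be written down compactly: rank patch types by profitability (energy gain divided by handling time) and add them to the diet until the next type's profitability falls below the intake rate of the diet already chosen; note that a type's own encounter rate never enters its inclusion criterion. The sketch below implements this rule; the patch names and numbers are illustrative, not the Langtang data.

      # Classical prey/patch-choice algorithm: add patch types to the diet in order of
      # profitability (e/h) until the next type's profitability falls below the mean
      # intake rate of the diet already chosen. Numbers below are illustrative only.
      patch_types = [
          # (name, energy gain e, handling time h, encounter rate lambda)
          ("young leaves",  40.0, 5.0, 0.030),
          ("ripe fruit",    90.0, 8.0, 0.004),
          ("mature leaves", 25.0, 6.0, 0.050),
          ("bark",          15.0, 9.0, 0.020),
      ]

      # Rank by profitability e/h (highest first).
      ranked = sorted(patch_types, key=lambda t: t[1] / t[2], reverse=True)

      diet, num, den = [], 0.0, 1.0
      for name, e, h, lam in ranked:
          rate_without = num / den                       # intake rate of the current diet
          if e / h < rate_without:                       # profitability threshold reached
              break
          diet.append(name)
          num += lam * e                                 # sum(lambda_i * e_i)
          den += lam * h                                 # 1 + sum(lambda_i * h_i)

      print("Predicted diet:", diet, "| long-term intake rate:", round(num / den, 3))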

  20. Topographic models for predicting malaria vector breeding habitats: potential tools for vector control managers.

    PubMed

    Nmor, Jephtha C; Sunahara, Toshihiko; Goto, Kensuke; Futami, Kyoko; Sonye, George; Akweywa, Peter; Dida, Gabriel; Minakawa, Noboru

    2013-01-16

    Identification of malaria vector breeding sites can enhance control activities. Although associations between malaria vector breeding sites and topography are well recognized, practical models that predict breeding sites from topographic information are lacking. We used topographic variables derived from remotely sensed Digital Elevation Models (DEMs) to model the breeding sites of malaria vectors. We further compared the predictive strength of two different DEMs and evaluated the predictability of various habitat types inhabited by Anopheles larvae. Using GIS techniques, topographic variables were extracted from two DEMs: 1) Shuttle Radar Topography Mission 3 (SRTM3, 90-m resolution) and 2) the Advanced Spaceborne Thermal Emission Reflection Radiometer Global DEM (ASTER, 30-m resolution). We used data on breeding sites from an extensive field survey conducted on an island in western Kenya in 2006. Topographic variables were extracted for 826 breeding sites and for 4520 negative points that were randomly assigned. Logistic regression modelling was applied to characterize topographic features of the malaria vector breeding sites and predict their locations. Model accuracy was evaluated using the area under the receiver operating characteristics curve (AUC). All topographic variables derived from both DEMs were significantly correlated with breeding habitats except for the aspect of SRTM. The magnitude and direction of correlation for each variable were similar in the two DEMs. Multivariate models for SRTM and ASTER showed similar levels of fit indicated by Akaike information criterion (3959.3 and 3972.7, respectively), though the former was slightly better than the latter. The accuracy of prediction indicated by AUC was also similar in SRTM (0.758) and ASTER (0.755) in the training site. In the testing site, both SRTM and ASTER models showed higher AUC in the testing sites than in the training site (0.829 and 0.799, respectively). The predictability of habitat types varied. Drains, foot-prints, puddles and swamp habitat types were most predictable. Both SRTM and ASTER models had similar predictive potentials, which were sufficiently accurate to predict vector habitats. The free availability of these DEMs suggests that topographic predictive models could be widely used by vector control managers in Africa to complement malaria control strategies.

  1. Incorporation of lysosomal sequestration in the mechanistic model for prediction of tissue distribution of basic drugs.

    PubMed

    Assmus, Frauke; Houston, J Brian; Galetin, Aleksandra

    2017-11-15

    The prediction of tissue-to-plasma water partition coefficients (Kpu) from in vitro and in silico data using the tissue-composition based model (Rodgers & Rowland, J Pharm Sci. 2005, 94(6):1237-48) is well established. However, distribution of basic drugs, in particular into lysosome-rich lung tissue, tends to be under-predicted by this approach. The aim of this study was to develop an extended mechanistic model for the prediction of Kpu which accounts for lysosomal sequestration and the contribution of different cell types in the tissue of interest. The extended model is based on compound-specific physicochemical properties and tissue composition data to describe drug ionization, distribution into tissue water and drug binding to neutral lipids, neutral phospholipids and acidic phospholipids in tissues, including lysosomes. Physiological data on the types of cells contributing to lung, kidney and liver, their lysosomal content and lysosomal pH were collated from the literature. The predictive power of the extended mechanistic model was evaluated using a dataset of 28 basic drugs (pKa ≥ 7.8; 17 β-blockers, 11 structurally diverse drugs) for which experimentally determined Kpu data in rat tissue have been reported. Accounting for the lysosomal sequestration in the extended mechanistic model improved the accuracy of Kpu predictions in lung compared to the original Rodgers model (56% of drugs within 2-fold or 88% within 3-fold of observed values). Reduction in the extent of Kpu under-prediction was also evident in liver and kidney. However, consideration of lysosomal sequestration increased the occurrence of over-predictions, yielding overall comparable model performances for kidney and liver, with 68% and 54% of Kpu values within 2-fold error, respectively. High lysosomal concentration ratios relative to cytosol (>1000-fold) were predicted for the drugs investigated; the extent differed depending on the lysosomal pH and concentration of acidic phospholipids among cell types. Despite this extensive lysosomal sequestration in the individual cell types, the maximal change in the overall predicted tissue Kpu was <3-fold for the lysosome-rich tissues investigated here. Accounting for the variability in cellular physiological model input parameters, in particular lysosomal pH and the fraction of the cellular volume occupied by the lysosomes, only partially explained discrepancies between observed and predicted Kpu data in the lung. Improved understanding of the system properties, e.g., cell/organelle composition, is required to support further development of mechanistic equations for the prediction of drug tissue distribution. Application of this revised mechanistic model is recommended for prediction of Kpu in lysosome-rich tissue to facilitate the advancement of physiologically-based prediction of volume of distribution and drug exposure in the tissues. Copyright © 2017 Elsevier B.V. All rights reserved.
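
    The lysosomal-sequestration term rests on pH-partition (ion-trapping) reasoning, which the short sketch below illustrates for a monoprotic base, assuming only the neutral species permeates the lysosomal membrane; the pKa values and compartment pHs are illustrative, and the full model additionally accounts for binding to acidic phospholipids and the mix of cell types.

      def lysosome_to_cytosol_ratio(pKa, pH_lysosome=5.0, pH_cytosol=7.2):
          """pH-partition (ion-trapping) ratio for a monoprotic base, assuming only the
          neutral species crosses the lysosomal membrane and ionisation is at equilibrium."""
          return (1 + 10 ** (pKa - pH_lysosome)) / (1 + 10 ** (pKa - pH_cytosol))

      # Example: strong bases with pKa around 8-10 (values are illustrative, not from the paper).
      for pKa in (8.0, 9.0, 10.0):
          print(f"pKa {pKa}: lysosome/cytosol concentration ratio "
                f"{lysosome_to_cytosol_ratio(pKa):.0f}-fold")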

  2. Predictive models for radial sap flux variation in coniferous, diffuse-porous and ring-porous temperate trees.

    PubMed

    Berdanier, Aaron B; Miniat, Chelcy F; Clark, James S

    2016-08-01

    Accurately scaling sap flux observations to tree or stand levels requires accounting for variation in sap flux between wood types and by depth into the tree. However, existing models for radial variation in axial sap flux are rarely used because they are difficult to implement, there is uncertainty about their predictive ability and calibration measurements are often unavailable. Here we compare different models with a diverse sap flux data set to test the hypotheses that radial profiles differ by wood type and tree size. We show that radial variation in sap flux is dependent on wood type but independent of tree size for a range of temperate trees. The best-fitting model predicted out-of-sample sap flux observations and independent estimates of sapwood area with small errors, suggesting robustness in the new settings. We develop a method for predicting whole-tree water use with this model and include computer code for simple implementation in other studies. Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  3. A comparative study on improved Arrhenius-type and artificial neural network models to predict high-temperature flow behaviors in 20MnNiMo alloy.

    PubMed

    Quan, Guo-zheng; Yu, Chun-tang; Liu, Ying-ying; Xia, Yu-feng

    2014-01-01

    The stress-strain data of 20MnNiMo alloy were collected from a series of hot compressions on a Gleeble-1500 thermal-mechanical simulator in the temperature range of 1173-1473 K and the strain rate range of 0.01-10 s⁻¹. Based on the experimental data, an improved Arrhenius-type constitutive model and an artificial neural network (ANN) model were established to predict the high-temperature flow stress of as-cast 20MnNiMo alloy. The accuracy and reliability of the improved Arrhenius-type model and the trained ANN model were further evaluated in terms of the correlation coefficient (R), the average absolute relative error (AARE), and the relative error (η). For the former, R and AARE were found to be 0.9954 and 5.26%, respectively, while for the latter they were 0.9997 and 1.02%, respectively. The relative errors (η) of the improved Arrhenius-type model and the ANN model were in the ranges of -39.99% to 35.05% and -3.77% to 16.74%, respectively. For the former, only 16.3% of the test data set possesses η-values within ±1%, whereas for the latter more than 79% does. The results indicate that the ANN model has a higher predictive ability than the improved Arrhenius-type constitutive model.
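
    The two evaluation statistics used above are straightforward to compute; the sketch below shows the correlation coefficient R and the average absolute relative error AARE for a pair of hypothetical predicted-versus-measured flow-stress sets (the values are illustrative, not the Gleeble data).

      import numpy as np

      def correlation_r(measured, predicted):
          """Pearson correlation coefficient R between measured and predicted flow stress."""
          m, p = np.asarray(measured), np.asarray(predicted)
          return float(np.corrcoef(m, p)[0, 1])

      def aare(measured, predicted):
          """Average absolute relative error (%), as used to compare the two models."""
          m, p = np.asarray(measured), np.asarray(predicted)
          return float(np.mean(np.abs((m - p) / m)) * 100)

      # Illustrative flow-stress values (MPa); not the study's measurements.
      measured = np.array([120.0, 150.0, 180.0, 210.0, 240.0])
      pred_arrhenius = np.array([112.0, 158.0, 171.0, 222.0, 231.0])
      pred_ann = np.array([119.0, 151.0, 179.0, 211.0, 239.0])

      for name, pred in [("Arrhenius-type", pred_arrhenius), ("ANN", pred_ann)]:
          print(f"{name}: R = {correlation_r(measured, pred):.4f}, "
                f"AARE = {aare(measured, pred):.2f}%")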

  4. The development and evaluation of accident predictive models

    NASA Astrophysics Data System (ADS)

    Maleck, T. L.

    1980-12-01

    A mathematical model is developed that predicts the incremental change in the dependent variables (accident types) resulting from changes in the independent variables. The end product is a tool for estimating the expected number and type of accidents for a given highway segment. The data segments (accidents) are separated into exclusive groups via a branching process, and variance is further reduced using stepwise multiple regression. The standard error of the estimate is calculated for each model. The dependent variables are the frequency, density, and rate of 18 types of accidents; the independent variables include district, county, highway geometry, land use, type of zone, speed limit, signal code, type of intersection, number of intersection legs, number of turn lanes, left-turn control, all-red interval, average daily traffic, and outlier code. Models for nonintersectional accidents did not fit or validate as well as models for intersectional accidents.

  5. Predictability of two types of El Niño and their climate impacts in boreal spring to summer in coupled models

    NASA Astrophysics Data System (ADS)

    Lee, Ray Wai-Ki; Tam, Chi-Yung; Sohn, Soo-Jin; Ahn, Joong-Bae

    2017-12-01

    The predictability of the two El Niño types and their different impacts on the East Asian climate from boreal spring to summer have been studied, based on coupled general circulation models (CGCM) simulations from the APEC Climate Center (APCC) multi-model ensemble (MME) hindcast experiments. It was found that both the spatial pattern and temporal persistence of canonical (eastern Pacific type) El Niño sea surface temperature (SST) are much better simulated than those for El Niño Modoki (central Pacific type). In particular, most models tend to have El Niño Modoki events that decay too quickly, in comparison to those observed. The ability of these models in distinguishing between the two types of ENSO has also been assessed. Based on the MME average, the two ENSO types become less and less differentiated in the model environment as the forecast leadtime increases. Regarding the climate impact of ENSO, in spring during canonical El Niño, coupled models can reasonably capture the anomalous low-level anticyclone over the western north Pacific (WNP)/Philippine Sea area, as well as rainfall over coastal East Asia. However, most models have difficulties in predicting the springtime dry signal over Indochina to South China Sea (SCS) when El Niño Modoki occurs. This is related to the location of the simulated anomalous anticyclone in this region, which is displaced eastward over SCS relative to the observed. In boreal summer, coupled models still exhibit some skills in predicting the East Asian rainfall during canonical El Nino, but not for El Niño Modoki. Overall, models' performance in spring to summer precipitation forecasts is dictated by their ability in capturing the low-level anticyclonic feature over the WNP/SCS area. The latter in turn is likely to be affected by the realism of the time mean monsoon circulation in models.

  6. Predictable quantum efficient detector based on n-type silicon photodiodes

    NASA Astrophysics Data System (ADS)

    Dönsberg, Timo; Manoocheri, Farshid; Sildoja, Meelis; Juntunen, Mikko; Savin, Hele; Tuovinen, Esa; Ronkainen, Hannu; Prunnila, Mika; Merimaa, Mikko; Tang, Chi Kwong; Gran, Jarle; Müller, Ingmar; Werner, Lutz; Rougié, Bernard; Pons, Alicia; Smîd, Marek; Gál, Péter; Lolli, Lapo; Brida, Giorgio; Rastello, Maria Luisa; Ikonen, Erkki

    2017-12-01

    The predictable quantum efficient detector (PQED) consists of two custom-made induced junction photodiodes that are mounted in a wedged trap configuration for the reduction of reflectance losses. Until now, all manufactured PQED photodiodes have been based on a structure where a SiO2 layer is thermally grown on top of p-type silicon substrate. In this paper, we present the design, manufacturing, modelling and characterization of a new type of PQED, where the photodiodes have an Al2O3 layer on top of n-type silicon substrate. Atomic layer deposition is used to deposit the layer to the desired thickness. Two sets of photodiodes with varying oxide thicknesses and substrate doping concentrations were fabricated. In order to predict recombination losses of charge carriers, a 3D model of the photodiode was built into Cogenda Genius semiconductor simulation software. It is important to note that a novel experimental method was developed to obtain values for the 3D model parameters. This makes the prediction of the PQED responsivity a completely autonomous process. Detectors were characterized for temperature dependence of dark current, spatial uniformity of responsivity, reflectance, linearity and absolute responsivity at the wavelengths of 488 nm and 532 nm. For both sets of photodiodes, the modelled and measured responsivities were generally in agreement within the measurement and modelling uncertainties of around 100 parts per million (ppm). There is, however, an indication that the modelled internal quantum deficiency may be underestimated by a similar amount. Moreover, the responsivities of the detectors were spatially uniform within 30 ppm peak-to-peak variation. The results obtained in this research indicate that the n-type induced junction photodiode is a very promising alternative to the existing p-type detectors, and thus give additional credibility to the concept of modelled quantum detector serving as a primary standard. Furthermore, the manufacturing of PQEDs is no longer dependent on the availability of a certain type of very lightly doped p-type silicon wafers.

  7. Classification and disease prediction via mathematical programming

    NASA Astrophysics Data System (ADS)

    Lee, Eva K.; Wu, Tsung-Lin

    2007-11-01

    In this chapter, we present classification models based on mathematical programming approaches. We first provide an overview of various mathematical programming approaches, including linear programming, mixed integer programming, nonlinear programming and support vector machines. Next, we present our effort to develop novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multigroup prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; multistage discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80% to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
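
    As a rough illustration of the linear-programming side of this family of methods (not the authors' multigroup mixed-integer model with a reserved-judgment region), the following Python sketch fits a two-group linear discriminant by minimizing total misclassification slack with scipy's linprog on synthetic data.

        # Minimal LP discriminant sketch: find weights w, offset b and slacks e >= 0
        # minimizing the total slack sum(e) subject to y_i*(w.x_i + b) >= 1 - e_i.
        # A simplified two-group linear program, not the multigroup model above.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
        y = np.r_[np.ones(30), -np.ones(30)]           # +1 / -1 group labels
        n, d = X.shape

        # Decision vector z = [w (d), b (1), e (n)]
        c = np.r_[np.zeros(d + 1), np.ones(n)]          # minimize sum of slacks
        # Constraint rewritten as  -y_i*(w.x_i + b) - e_i <= -1
        A_ub = np.hstack([-y[:, None] * X, -y[:, None], -np.eye(n)])
        b_ub = -np.ones(n)
        bounds = [(-10, 10)] * (d + 1) + [(0, None)] * n

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        w, b = res.x[:d], res.x[d]
        pred = np.sign(X @ w + b)
        print("training accuracy:", (pred == y).mean())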

  8. Studying the highly bent spectra of FR II-type radio galaxies with the KDA EXT model

    NASA Astrophysics Data System (ADS)

    Kuligowska, Elżbieta

    2018-04-01

    Context: The Kaiser, Dennett-Thorpe & Alexander (KDA, 1997, MNRAS, 292, 723) EXT model, that is, the extension of the KDA model of Fanaroff & Riley (FR) II-type source evolution, is applied and confronted with observational data for selected FR II-type radio sources with significantly aged radio spectra. Aim: A sample of FR II-type radio galaxies with radio spectra strongly bent at the highest frequencies is used to test the usefulness of the KDA EXT model. Methods: The dynamical evolution of FR II-type sources predicted by the KDA EXT model is briefly presented and discussed. The results are then compared to those obtained with the classical KDA approach, which assumes continuous injection and self-similar growth of the source. Results: The results and corresponding diagrams obtained for the eight sample sources indicate that the KDA EXT model predicts the observed radio spectra significantly better than the best spectral fit provided by the original KDA model.

  9. Posterior Predictive Model Checking in Bayesian Networks

    ERIC Educational Resources Information Center

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  10. Development of a screening tool using electronic health records for undiagnosed Type 2 diabetes mellitus and impaired fasting glucose detection in the Slovenian population.

    PubMed

    Štiglic, G; Kocbek, P; Cilar, L; Fijačko, N; Stožer, A; Zaletel, J; Sheikh, A; Povalej Bržan, P

    2018-05-01

    To develop and validate a simplified screening test for undiagnosed Type 2 diabetes mellitus and impaired fasting glucose for the Slovenian population (SloRisk) to be used in the general population. Data on 11 391 people were collected from the electronic health records of comprehensive medical examinations in five Slovenian healthcare centres. Fasting plasma glucose as well as information related to the Finnish Diabetes Risk Score questionnaire, FINDRISC, were collected for 2073 people to build predictive models. Bootstrapping-based evaluation was used to estimate the area under the receiver-operating characteristic curve performance metric of two proposed logistic regression models as well as the Finnish Diabetes Risk Score model both at recommended and at alternative cut-off values. The final model contained five questions for undiagnosed Type 2 diabetes prediction and achieved an area under the receiver-operating characteristic curve of 0.851 (95% CI 0.850-0.853). The impaired fasting glucose prediction model included six questions and achieved an area under the receiver-operating characteristic curve of 0.840 (95% CI 0.839-0.840). There were four questions that were included in both models (age, sex, waist circumference and blood sugar history), with physical activity selected only for undiagnosed Type 2 diabetes and questions on family history and hypertension drug use selected only for the impaired fasting glucose prediction model. This study proposes two simplified models based on FINDRISC questions for screening of undiagnosed Type 2 diabetes and impaired fasting glucose in the Slovenian population. A significant improvement in performance was achieved compared with the original FINDRISC questionnaire. Both models include waist circumference instead of BMI. © 2018 Diabetes UK.
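
    As a loose illustration of this kind of questionnaire-based screening model (not the SloRisk model itself), the sketch below fits a logistic regression on a handful of hypothetical FINDRISC-style items and bootstraps the area under the ROC curve; all column names and data are made up.

        # Sketch of a FINDRISC-style screening model: logistic regression on a few
        # questionnaire items, with a bootstrap estimate of the ROC AUC.
        # Column names and data are hypothetical, not the SloRisk variables.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 2000
        df = pd.DataFrame({
            "age": rng.integers(30, 80, n),
            "sex": rng.integers(0, 2, n),
            "waist_cm": rng.normal(95, 12, n),
            "glucose_history": rng.integers(0, 2, n),
            "physical_activity": rng.integers(0, 2, n),
        })
        # Synthetic outcome: undiagnosed T2DM, loosely driven by age and waist size
        logit = (-12 + 0.06 * df["age"] + 0.07 * df["waist_cm"]
                 + 0.8 * df["glucose_history"]).to_numpy()
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        model = LogisticRegression(max_iter=1000).fit(df, y)
        score = model.predict_proba(df)[:, 1]

        aucs = []
        for _ in range(200):                       # bootstrap the AUC
            idx = rng.integers(0, n, n)
            if y[idx].min() == y[idx].max():       # skip degenerate resamples
                continue
            aucs.append(roc_auc_score(y[idx], score[idx]))
        print("AUC %.3f (95%% CI %.3f-%.3f)" % (np.mean(aucs),
              np.percentile(aucs, 2.5), np.percentile(aucs, 97.5)))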

  11. Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.

    PubMed

    Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong

    2007-09-01

    Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity of estimating this kind of model, as well as to the problem of "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and negative binomial (NB) regression models. The results of this study show that, in general, both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than that of the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than that of the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and the improvement of prediction capabilities for evaluating different highway design alternatives.
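
    The negative binomial baseline that the neural networks are compared against can be sketched as follows; the covariates (traffic volume, segment length) and coefficients are hypothetical placeholders, and the BNN/BPNN side is omitted.

        # Sketch of a negative binomial (NB) crash-frequency baseline model.
        # Covariate names are hypothetical; segment length enters as an exposure
        # offset, a common choice in safety models.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 500
        aadt = rng.uniform(500, 5000, n)           # average annual daily traffic
        length_mi = rng.uniform(0.1, 2.0, n)       # segment length in miles
        mu = np.exp(-6.0 + 0.8 * np.log(aadt)) * length_mi
        crashes = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0))  # overdispersed counts

        X = sm.add_constant(np.log(aadt))
        nb = sm.GLM(crashes, X,
                    family=sm.families.NegativeBinomial(alpha=0.5),
                    offset=np.log(length_mi)).fit()
        print(nb.summary())
        print("predicted crashes, first 5 segments:",
              nb.predict(X, offset=np.log(length_mi))[:5])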

  12. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus.

    PubMed

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-07-03

    Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body, or from the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they cannot handle the dimensionality and vagueness of patients' data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians in managing diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.

  13. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus

    PubMed Central

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-01-01

    Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body, or from the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they cannot handle the dimensionality and vagueness of patients' data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians in managing diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies. PMID:26151207

  14. A Muscle’s Force Depends on the Recruitment Patterns of Its Fibers

    PubMed Central

    Wakeling, James M.; Lee, Sabrina S. M.; Arnold, Allison S.; de Boef Miara, Maria; Biewener, Andrew A.

    2012-01-01

    Biomechanical models of whole muscles commonly used in simulations of musculoskeletal function and movement typically assume that the muscle generates force as a scaled-up muscle fiber. However, muscles are composed of motor units that have different intrinsic properties and that can be activated at different times. This study tested whether a muscle model composed of motor units that could be independently activated resulted in more accurate predictions of force than traditional Hill-type models. Forces predicted by the models were evaluated by direct comparison with the muscle forces measured in situ from the gastrocnemii in goats. The muscle was stimulated tetanically at a range of frequencies, muscle fiber strains were measured using sonomicrometry, and the activation patterns of the different types of motor unit were calculated from electromyographic recordings. Activation patterns were input into five different muscle models. Four models were traditional Hill-type models with different intrinsic speeds and fiber-type properties. The fifth model incorporated differential groups of fast and slow motor units. For all goats, muscles, and stimulation frequencies, the differential model resulted in the best predictions of muscle force. The in situ muscle output was shown to depend on the recruitment of different motor units within the muscle. PMID:22350666

  15. Development of a Land Use Mapping and Monitoring Protocol for the High Plains Region: A Multitemporal Remote Sensing Application

    NASA Technical Reports Server (NTRS)

    Price, Kevin P.; Nellis, M. Duane

    1996-01-01

    The purpose of this project was to develop a practical protocol that employs multitemporal remotely sensed imagery, integrated with environmental parameters, to model and monitor agricultural and natural resources in the High Plains Region of the United States. The value of this project would be extended throughout the region via workshops targeted at carefully selected audiences and designed to transfer remote sensing technology and the methods and applications developed. Implementation of such a protocol using remotely sensed satellite imagery is critical for addressing many issues of regional importance, including: (1) Prediction of rural land use/land cover (LULC) categories within a region; (2) Use of rural LULC maps for successive years to monitor change; (3) Crop types derived from LULC maps as important inputs to water consumption models; (4) Early prediction of crop yields; (5) Multi-date maps of crop types to monitor patterns related to crop change; (6) Knowledge of crop types to monitor condition and improve prediction of crop yield; (7) More precise models of crop types and conditions to improve agricultural economic forecasts; (8) Prediction of biomass for estimating vegetation production, soil protection from erosion forces, nonpoint source pollution, wildlife habitat quality and other related factors; (9) Crop type and condition information to more accurately predict production of biogeochemicals such as CO2, CH4, and other greenhouse gases that are inputs to global climate models; (10) Provide information regarding limiting factors (i.e., economic constraints of pumping, fertilizing, etc.) used in conjunction with other factors, such as changes in climate for predicting changes in rural LULC; (11) Accurate prediction of rural LULC used to assess the effectiveness of government programs such as the U.S. Soil Conservation Service (SCS) Conservation Reserve Program; and (12) Prediction of water demand based on rural LULC that can be related to rates of draw-down of underground water supplies.

  16. Prediction of First Cardiovascular Disease Event in Type 1 Diabetes Mellitus: The Steno Type 1 Risk Engine.

    PubMed

    Vistisen, Dorte; Andersen, Gregers Stig; Hansen, Christian Stevns; Hulman, Adam; Henriksen, Jan Erik; Bech-Nielsen, Henning; Jørgensen, Marit Eika

    2016-03-15

    Patients with type 1 diabetes mellitus are at increased risk of developing cardiovascular disease (CVD), but they are currently undertreated. There are no risk scores used on a regular basis in clinical practice for assessing the risk of CVD in type 1 diabetes mellitus. From 4306 clinically diagnosed adult patients with type 1 diabetes mellitus, we developed a prediction model for estimating the risk of a first fatal or nonfatal CVD event (ischemic heart disease, ischemic stroke, heart failure, and peripheral artery disease). Detailed clinical data including lifestyle factors were linked to event data from validated national registers. The risk prediction model was developed by using a 2-stage approach. First, a nonparametric, data-driven approach was used to identify potentially informative risk factors and interactions (random forest and survival tree analysis). Second, based on results from the first step, Poisson regression analysis was used to derive the final model. The final CVD prediction model was externally validated in a different population of 2119 patients with type 1 diabetes mellitus. During a median follow-up of 6.8 years (interquartile range, 2.9-10.9), a total of 793 (18.4%) patients developed CVD. The final prediction model included age, sex, diabetes duration, systolic blood pressure, low-density lipoprotein cholesterol, hemoglobin A1c, albuminuria, glomerular filtration rate, smoking, and exercise. Discrimination was excellent for a 5-year CVD event with a C-statistic of 0.826 (95% confidence interval, 0.807-0.845) in the derivation data and a C-statistic of 0.803 (95% confidence interval, 0.767-0.839) in the validation data. The Hosmer-Lemeshow test showed good calibration (P>0.05) in both cohorts. This high-performing CVD risk model allows for the implementation of decision rules in a clinical setting. © 2016 American Heart Association, Inc.

  17. Development of an in Silico Model of DPPH• Free Radical Scavenging Capacity: Prediction of Antioxidant Activity of Coumarin Type Compounds.

    PubMed

    Goya Jorge, Elizabeth; Rayar, Anita Maria; Barigye, Stephen J; Jorge Rodríguez, María Elisa; Sylla-Iyarreta Veitía, Maité

    2016-06-07

    A quantitative structure-activity relationship (QSAR) study of the 2,2-diphenyl-1-picrylhydrazyl (DPPH•) radical scavenging ability of 1373 chemical compounds was developed using DRAGON molecular descriptors (MD) and a neural network technique based on the multilayer perceptron (MLP). The built model demonstrated satisfactory performance for the training set (R² = 0.713) and the test set (Q²ext = 0.654). To gain greater insight into the relevance of the MD contained in the MLP model, sensitivity and principal component analyses were performed. Moreover, structural and mechanistic interpretation was carried out to understand the relationship of the variables in the model with the modeled property. The constructed MLP model was employed to predict the radical scavenging ability of a group of coumarin-type compounds. Finally, in order to validate the model's predictions, an in vitro assay for one of the compounds (4-hydroxycoumarin) was performed, showing satisfactory proximity between the experimental and predicted pIC50 values.
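
    A minimal sketch of an MLP-based QSAR workflow of this kind is shown below, with random placeholder descriptors standing in for the DRAGON MD and a synthetic activity; it is not the authors' model.

        # Sketch of an MLP-based QSAR model for pIC50 prediction.  The descriptor
        # matrix is random placeholder data standing in for DRAGON descriptors.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(1373, 50))                # molecular descriptors (placeholder)
        pic50 = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 1373)   # synthetic activity

        X_tr, X_te, y_tr, y_te = train_test_split(X, pic50, test_size=0.2, random_state=0)
        qsar = make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                          random_state=0))
        qsar.fit(X_tr, y_tr)
        print("R2 (train):", round(r2_score(y_tr, qsar.predict(X_tr)), 3))
        print("Q2ext (test):", round(r2_score(y_te, qsar.predict(X_te)), 3))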

  18. The importance of different frequency bands in predicting subcutaneous glucose concentration in type 1 diabetic patients.

    PubMed

    Lu, Yinghui; Gribok, Andrei V; Ward, W Kenneth; Reifman, Jaques

    2010-08-01

    We investigated the relative importance and predictive power of different frequency bands of subcutaneous glucose signals for the short-term (0-50 min) forecasting of glucose concentrations in type 1 diabetic patients with data-driven autoregressive (AR) models. The study data consisted of minute-by-minute glucose signals collected from nine deidentified patients over a five-day period using continuous glucose monitoring devices. AR models were developed using single and pairwise combinations of frequency bands of the glucose signal and compared with a reference model including all bands. The results suggest that: for open-loop applications, there is no need to explicitly represent exogenous inputs, such as meals and insulin intake, in AR models; models based on a single-frequency band, with periods between 60-120 min and 150-500 min, yield good predictive power (error <3 mg/dL) for prediction horizons of up to 25 min; models based on pairs of bands produce predictions that are indistinguishable from those of the reference model as long as the 60-120 min period band is included; and AR models can be developed on signals of short length (approximately 300 min), i.e., ignoring long circadian rhythms, without any detriment in prediction accuracy. Together, these findings provide insights into efficient development of more effective and parsimonious data-driven models for short-term prediction of glucose concentrations in diabetic patients.

  19. Ensemble predictive model for more accurate soil organic carbon spectroscopic estimation

    NASA Astrophysics Data System (ADS)

    Vašát, Radim; Kodešová, Radka; Borůvka, Luboš

    2017-07-01

    A myriad of signal pre-processing strategies and multivariate calibration techniques have been explored over the last few decades in an attempt to improve the spectroscopic prediction of soil organic carbon (SOC). Coming up with a novel, more powerful and accurate predictive approach has therefore become a challenging task. One promising option is to combine several individual predictions into a single final one, following ensemble learning theory. As this approach performs best when it combines inherently different predictive algorithms calibrated with structurally different predictor variables, we tested predictors of two different kinds: 1) reflectance values (or transforms) at each wavelength and 2) absorption feature parameters. Accordingly, we applied four different calibration techniques, two per type of predictor: a) partial least squares regression and support vector machines for type 1, and b) multiple linear regression and random forest for type 2. The weights assigned to the individual predictions within the ensemble model (constructed as a weighted average) were determined by an automated procedure that ensured the best of all possible solutions was selected. The approach was tested on soil samples taken from the surface horizons of four sites differing in the prevailing soil units. By employing the ensemble predictive model, the prediction accuracy of SOC improved at all four sites. The coefficient of determination in cross-validation (R²cv) increased from 0.849, 0.611, 0.811 and 0.644 (the best individual predictions) to 0.864, 0.650, 0.824 and 0.698 for Sites 1, 2, 3 and 4, respectively. Generally, the ensemble model reduced the maximal deviations between predicted and observed values relative to the individual predictions, so the correlation cloud became thinner, as desired.
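
    A minimal sketch of the weighted-average ensemble idea follows: four structurally different models are cross-validated and their predictions combined with weights chosen by a coarse grid search over the simplex. The spectra, absorption-feature parameters and SOC values are random placeholders, and the grid search merely stands in for the authors' automated weight-selection procedure.

        # Weighted-average ensemble sketch: combine cross-validated predictions of
        # four different regressors and pick the weights that maximise R2.
        import itertools
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.svm import SVR
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(4)
        n = 150
        spectra = rng.normal(size=(n, 200))          # reflectance predictors (type 1)
        features = rng.normal(size=(n, 6))           # absorption-feature parameters (type 2)
        soc = spectra[:, :3].sum(axis=1) + features[:, 0] + rng.normal(0, 0.3, n)

        preds = np.column_stack([
            cross_val_predict(PLSRegression(n_components=10), spectra, soc, cv=5).ravel(),
            cross_val_predict(SVR(), spectra, soc, cv=5),
            cross_val_predict(LinearRegression(), features, soc, cv=5),
            cross_val_predict(RandomForestRegressor(random_state=0), features, soc, cv=5),
        ])

        best = (None, -np.inf)
        grid = np.arange(0, 1.01, 0.1)
        for w in itertools.product(grid, repeat=4):  # coarse search over weight combinations
            if not np.isclose(sum(w), 1.0):
                continue
            r2 = r2_score(soc, preds @ np.array(w))
            if r2 > best[1]:
                best = (w, r2)
        print("best weights:", best[0], "ensemble R2cv: %.3f" % best[1])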

  20. Development and validation of a predictive risk model for all-cause mortality in type 2 diabetes.

    PubMed

    Robinson, Tom E; Elley, C Raina; Kenealy, Tim; Drury, Paul L

    2015-06-01

    Type 2 diabetes is common and is associated with an approximate 80% increase in the rate of mortality. Management decisions may be assisted by an estimate of the patient's absolute risk of adverse outcomes, including death. This study aimed to derive a predictive risk model for all-cause mortality in type 2 diabetes. We used primary care data from a large national multi-ethnic cohort of patients with type 2 diabetes in New Zealand and linked mortality records to develop a predictive risk model for 5-year risk of mortality. We then validated this model using information from a separate cohort of patients with type 2 diabetes. 26,864 people were included in the development cohort with a median follow-up time of 9.1 years. We developed three models, initially using demographic information and then progressively more clinical detail. The final model, which also included markers of renal disease, proved to give the best prediction of all-cause mortality with a C-statistic of 0.80 in the development cohort and 0.79 in the validation cohort (7610 people) and was well calibrated. Ethnicity was a major factor with hazard ratios of 1.37 for indigenous Maori, 0.41 for East Asian and 0.55 for Indo Asian compared with European (P<0.001). We have developed a model using information usually available in primary care that provides a good assessment of a patient's risk of death. Results are similar to models previously published from smaller cohorts in other countries and apply to a wider range of patient ethnic groups. Copyright © 2015. Published by Elsevier Ireland Ltd.

  1. Prediction of height increment for models of forest growth

    Treesearch

    Albert R. Stage

    1975-01-01

    Functional forms of equations were derived for predicting 10-year periodic height increment of forest trees from height, diameter, diameter increment, and habitat type. Crown ratio was considered as an additional variable for prediction, but its contribution was negligible. Coefficients of the function were estimated for 10 species of trees growing in 10 habitat types...

  2. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  3. Evapotranspiration and canopy resistance at an undeveloped prairie in a humid subtropical climate

    USGS Publications Warehouse

    Bidlake, W.R.

    2002-01-01

    Reliable estimates of evapotranspiration from areas of wildland vegetation are needed for many types of water-resource investigations. However, little is known about surface fluxes from many areally important vegetation types, and relatively few comparisons have been made to examine how well evapotranspiration models can predict evapotranspiration for soil-, climate-, or vegetation-types that differ from those under which the models have been calibrated. In this investigation at a prairie site in west-central Florida, latent heat flux (λE) computed from the energy balance and alternatively by eddy covariance during a 15-month period differed by 4 percent and 7 percent on hourly and daily time scales, respectively. Annual evapotranspiration computed from the energy balance and by eddy covariance were 978 and 944 mm, respectively. An hourly Penman-Monteith (PM) evapotranspiration model with stomatal control predicated on water-vapor-pressure deficit at canopy level, incoming solar radiation intensity, and soil water deficit was developed and calibrated using surface fluxes from eddy covariance. Model-predicted λE agreed closely with λE computed from the energy balance except when moisture from dew or precipitation covered vegetation surfaces. Finally, an hourly PM model developed for an Amazonian pasture predicted λE for the Florida prairie with unexpected reliability. Additional comparisons of PM-type models that have been developed for differing types of short vegetation could aid in assessing interchangeability of such models.

  4. Predictive Monitoring for Improved Management of Glucose Levels

    PubMed Central

    Reifman, Jaques; Rajaraman, Srinivasan; Gribok, Andrei; Ward, W. Kenneth

    2007-01-01

    Background Recent developments and expected near-future improvements in continuous glucose monitoring (CGM) devices provide opportunities to couple them with mathematical forecasting models to produce predictive monitoring systems for early, proactive glycemia management of diabetes mellitus patients before glucose levels drift to undesirable levels. This article assesses the feasibility of data-driven models to serve as the forecasting engine of predictive monitoring systems. Methods We investigated the capabilities of data-driven autoregressive (AR) models to (1) capture the correlations in glucose time-series data, (2) make accurate predictions as a function of prediction horizon, and (3) be made portable from individual to individual without any need for model tuning. The investigation is performed by employing CGM data from nine type 1 diabetic subjects collected over a continuous 5-day period. Results With CGM data serving as the gold standard, AR model-based predictions of glucose levels assessed over nine subjects with Clarke error grid analysis indicated that, for a 30-minute prediction horizon, individually tuned models yield 97.6 to 100.0% of data in the clinically acceptable zones A and B, whereas cross-subject, portable models yield 95.8 to 99.7% of data in zones A and B. Conclusions This study shows that, for a 30-minute prediction horizon, data-driven AR models provide sufficiently accurate and clinically acceptable estimates of glucose levels for timely, proactive therapy and should be considered as the modeling engine for predictive monitoring of patients with type 1 diabetes mellitus. It also suggests that AR models can be made portable from individual to individual with minor performance penalties, while greatly reducing the burden associated with model tuning and data collection for model development. PMID:19885110
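
    A minimal sketch of such a data-driven AR predictor is given below: the coefficients are fitted by ordinary least squares on a synthetic minute-by-minute CGM trace and iterated forward for a 30-minute horizon; the model order and the signal are illustrative only.

        # Data-driven AR(p) glucose predictor sketch: fit by least squares on
        # minute-by-minute CGM data, then iterate forward 30 minutes ahead.
        import numpy as np

        rng = np.random.default_rng(5)
        t = np.arange(1440)                                   # one day, 1-min samples
        glucose = 120 + 30 * np.sin(2 * np.pi * t / 300) + rng.normal(0, 2, t.size)

        p, horizon = 30, 30                                   # model order, prediction horizon (min)
        # Lagged design matrix: each row holds [1, p preceding samples]
        rows = np.array([np.r_[1.0, glucose[i - p:i]] for i in range(p, t.size)])
        coef, *_ = np.linalg.lstsq(rows, glucose[p:], rcond=None)

        history = list(glucose[-p:])                          # most recent p minutes
        forecast = []
        for _ in range(horizon):                              # recursive multi-step prediction
            nxt = coef[0] + np.dot(coef[1:], history[-p:])
            forecast.append(nxt)
            history.append(nxt)
        print("predicted glucose 30 min ahead: %.1f mg/dL" % forecast[-1])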

  5. A Comparative Study on Improved Arrhenius-Type and Artificial Neural Network Models to Predict High-Temperature Flow Behaviors in 20MnNiMo Alloy

    PubMed Central

    Yu, Chun-tang; Liu, Ying-ying; Xia, Yu-feng

    2014-01-01

    The stress-strain data of 20MnNiMo alloy were collected from a series of hot compressions on a Gleeble-1500 thermal-mechanical simulator in the temperature range of 1173∼1473 K and strain rate range of 0.01∼10 s⁻¹. Based on the experimental data, the improved Arrhenius-type constitutive model and the artificial neural network (ANN) model were established to predict the high-temperature flow stress of as-cast 20MnNiMo alloy. The accuracy and reliability of the improved Arrhenius-type model and the trained ANN model were further evaluated in terms of the correlation coefficient (R), the average absolute relative error (AARE), and the relative error (η). For the former, R and AARE were found to be 0.9954 and 5.26%, respectively, while, for the latter, 0.9997 and 1.02%, respectively. The relative errors (η) of the improved Arrhenius-type model and the ANN model were, respectively, in the range of −39.99%∼35.05% and −3.77%∼16.74%. For the former, only 16.3% of the test data set has η-values within ±1%, while, for the latter, more than 79% does. The results indicate that the ANN model presents a higher predictive ability than the improved Arrhenius-type constitutive model. PMID:24688358
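
    For reference, a sinh-type Arrhenius constitutive equation of the kind referred to above can be sketched as follows; the material constants are illustrative placeholders, not the values fitted for 20MnNiMo.

        # Sketch of the sinh-type Arrhenius constitutive equation commonly used for
        # hot-deformation flow stress:  Z = strain_rate * exp(Q / (R*T)),
        # sigma = (1/alpha) * asinh((Z/A)**(1/n)).
        import numpy as np

        R = 8.314            # gas constant, J/(mol*K)
        Q = 380e3            # apparent activation energy, J/mol (placeholder)
        A = 1.2e14           # structure factor, 1/s (placeholder)
        alpha = 0.012        # stress multiplier, 1/MPa (placeholder)
        n = 5.0              # stress exponent (placeholder)

        def flow_stress(strain_rate, temperature_K):
            """Flow stress (MPa) from the Zener-Hollomon parameter."""
            Z = strain_rate * np.exp(Q / (R * temperature_K))
            return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

        for T in (1173, 1323, 1473):
            print(T, "K:", [round(flow_stress(sr, T), 1) for sr in (0.01, 1.0, 10.0)], "MPa")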

  6. Predictive modeling of complications.

    PubMed

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  7. Examination of multi-model ensemble seasonal prediction methods using a simple climate system

    NASA Astrophysics Data System (ADS)

    Kang, In-Sik; Yoo, Jin Ho

    2006-02-01

    A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions was performed with various prediction models, which are used to examine various issues of multi-model ensemble seasonal prediction, such as the best ways of blending multi-models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
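
    The three combination schemes can be sketched on synthetic data as follows; the toy "observations" and model anomalies below merely stand in for the simple-model hindcasts.

        # Sketch of three multi-model combinations: simple composite (plain mean),
        # corrected composite (mean of bias/amplitude-corrected predictions) and
        # superensemble (regression of observations on all model predictions).
        import numpy as np

        rng = np.random.default_rng(6)
        years, n_models = 240, 5
        obs = rng.normal(0, 1, years)                                  # "observed" anomaly
        models = np.array([0.6 * obs + rng.normal(0, 1, years) + b     # imperfect models
                           for b in rng.normal(0, 0.5, n_models)])

        train, test = slice(0, 200), slice(200, years)                 # training / verification

        simple = models[:, test].mean(axis=0)

        corrected = np.mean([(m[test] - m[train].mean()) / m[train].std() * obs[train].std()
                             + obs[train].mean() for m in models], axis=0)

        X = np.column_stack([models[:, train].T, np.ones(200)])
        w, *_ = np.linalg.lstsq(X, obs[train], rcond=None)             # regression weights
        super_ens = np.column_stack([models[:, test].T, np.ones(years - 200)]) @ w

        for name, pred in [("simple", simple), ("corrected", corrected),
                           ("superensemble", super_ens)]:
            print(name, "correlation with obs: %.2f" % np.corrcoef(obs[test], pred)[0, 1])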

  8. Development of Risk Score for Predicting 3-Year Incidence of Type 2 Diabetes: Japan Epidemiology Collaboration on Occupational Health Study

    PubMed Central

    Nanri, Akiko; Nakagawa, Tohru; Kuwahara, Keisuke; Yamamoto, Shuichiro; Honda, Toru; Okazaki, Hiroko; Uehara, Akihiko; Yamamoto, Makoto; Miyamoto, Toshiaki; Kochi, Takeshi; Eguchi, Masafumi; Murakami, Taizo; Shimizu, Chii; Shimizu, Makiko; Tomita, Kentaro; Nagahama, Satsue; Imai, Teppei; Nishihara, Akiko; Sasaki, Naoko; Hori, Ai; Sakamoto, Nobuaki; Nishiura, Chihiro; Totsuzaki, Takafumi; Kato, Noritada; Fukasawa, Kenji; Huanhuan, Hu; Akter, Shamima; Kurotani, Kayo; Kabe, Isamu; Mizoue, Tetsuya; Sone, Tomofumi; Dohi, Seitaro

    2015-01-01

    Objective Risk models and scores have been developed to predict incidence of type 2 diabetes in Western populations, but their performance may differ when applied to non-Western populations. We developed and validated a risk score for predicting 3-year incidence of type 2 diabetes in a Japanese population. Methods Participants were 37,416 men and women, aged 30 or older, who received periodic health checkup in 2008–2009 in eight companies. Diabetes was defined as fasting plasma glucose (FPG) ≥126 mg/dl, random plasma glucose ≥200 mg/dl, glycated hemoglobin (HbA1c) ≥6.5%, or receiving medical treatment for diabetes. Risk scores on non-invasive and invasive models including FPG and HbA1c were developed using logistic regression in a derivation cohort and validated in the remaining cohort. Results The area under the curve (AUC) for the non-invasive model including age, sex, body mass index, waist circumference, hypertension, and smoking status was 0.717 (95% CI, 0.703–0.731). In the invasive model in which both FPG and HbA1c were added to the non-invasive model, AUC was increased to 0.893 (95% CI, 0.883–0.902). When the risk scores were applied to the validation cohort, AUCs (95% CI) for the non-invasive and invasive model were 0.734 (0.715–0.753) and 0.882 (0.868–0.895), respectively. Participants with a non-invasive score of ≥15 and invasive score of ≥19 were projected to have >20% and >50% risk, respectively, of developing type 2 diabetes within 3 years. Conclusions The simple risk score of the non-invasive model might be useful for predicting incident type 2 diabetes, and its predictive performance may be markedly improved by incorporating FPG and HbA1c. PMID:26558900

  9. Development of Risk Score for Predicting 3-Year Incidence of Type 2 Diabetes: Japan Epidemiology Collaboration on Occupational Health Study.

    PubMed

    Nanri, Akiko; Nakagawa, Tohru; Kuwahara, Keisuke; Yamamoto, Shuichiro; Honda, Toru; Okazaki, Hiroko; Uehara, Akihiko; Yamamoto, Makoto; Miyamoto, Toshiaki; Kochi, Takeshi; Eguchi, Masafumi; Murakami, Taizo; Shimizu, Chii; Shimizu, Makiko; Tomita, Kentaro; Nagahama, Satsue; Imai, Teppei; Nishihara, Akiko; Sasaki, Naoko; Hori, Ai; Sakamoto, Nobuaki; Nishiura, Chihiro; Totsuzaki, Takafumi; Kato, Noritada; Fukasawa, Kenji; Huanhuan, Hu; Akter, Shamima; Kurotani, Kayo; Kabe, Isamu; Mizoue, Tetsuya; Sone, Tomofumi; Dohi, Seitaro

    2015-01-01

    Risk models and scores have been developed to predict incidence of type 2 diabetes in Western populations, but their performance may differ when applied to non-Western populations. We developed and validated a risk score for predicting 3-year incidence of type 2 diabetes in a Japanese population. Participants were 37,416 men and women, aged 30 or older, who received periodic health checkup in 2008-2009 in eight companies. Diabetes was defined as fasting plasma glucose (FPG) ≥ 126 mg/dl, random plasma glucose ≥ 200 mg/dl, glycated hemoglobin (HbA1c) ≥ 6.5%, or receiving medical treatment for diabetes. Risk scores on non-invasive and invasive models including FPG and HbA1c were developed using logistic regression in a derivation cohort and validated in the remaining cohort. The area under the curve (AUC) for the non-invasive model including age, sex, body mass index, waist circumference, hypertension, and smoking status was 0.717 (95% CI, 0.703-0.731). In the invasive model in which both FPG and HbA1c were added to the non-invasive model, AUC was increased to 0.893 (95% CI, 0.883-0.902). When the risk scores were applied to the validation cohort, AUCs (95% CI) for the non-invasive and invasive model were 0.734 (0.715-0.753) and 0.882 (0.868-0.895), respectively. Participants with a non-invasive score of ≥ 15 and invasive score of ≥ 19 were projected to have >20% and >50% risk, respectively, of developing type 2 diabetes within 3 years. The simple risk score of the non-invasive model might be useful for predicting incident type 2 diabetes, and its predictive performance may be markedly improved by incorporating FPG and HbA1c.

  10. A Dynamic Bayesian Network model for long-term simulation of clinical complications in type 1 diabetes.

    PubMed

    Marini, Simone; Trifoglio, Emanuele; Barbarini, Nicola; Sambo, Francesco; Di Camillo, Barbara; Malovini, Alberto; Manfrini, Marco; Cobelli, Claudio; Bellazzi, Riccardo

    2015-10-01

    The increasing prevalence of diabetes and its related complications is raising the need for effective methods to predict patient evolution and to stratify cohorts in terms of their risk of developing diabetes-related complications. In this paper, we present a novel approach to the simulation of a type 1 diabetes population, based on Dynamic Bayesian Networks, which combines literature knowledge with data mining of a rich longitudinal cohort of type 1 diabetes patients, the DCCT/EDIC study. In particular, in our approach we simulate the patient health state and complications through discretized variables. Two types of models are presented, one entirely learned from the data and the other partially driven by literature-derived knowledge. The whole cohort is simulated for fifteen years, and the simulation error (i.e. for each variable, the percentage of patients predicted in the wrong state) is calculated every year on independent test data. For each variable, the percentage of patients predicted in the wrong state remains below 10% for both models over time. Furthermore, the distributions of real vs. simulated patients greatly overlap. Thus, the proposed models are viable tools to support decision making in type 1 diabetes. Copyright © 2015 Elsevier Inc. All rights reserved.
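
    A hand-rolled sketch of the general idea, with a discretized control variable driving the yearly transition probability of a complication state, is shown below; the states and transition tables are invented for illustration and are not the DCCT/EDIC-learned parameters.

        # Two-node dynamic Bayesian network sketch: a discretized HbA1c state
        # influences the yearly transition probability of a retinopathy state.
        import numpy as np

        rng = np.random.default_rng(7)

        # HbA1c states: 0 = good control, 1 = poor control (first-order Markov chain)
        P_hba1c = np.array([[0.85, 0.15],
                            [0.30, 0.70]])
        # P(retinopathy develops this year | HbA1c state); complication is absorbing
        p_retino_given_hba1c = np.array([0.02, 0.08])

        def simulate(n_patients=1000, years=15):
            hba1c = rng.integers(0, 2, n_patients)
            retino = np.zeros(n_patients, dtype=bool)
            prevalence = []
            for _ in range(years):
                # sample next HbA1c state from its transition row
                hba1c = np.array([rng.choice(2, p=P_hba1c[s]) for s in hba1c])
                # complication can only be acquired, never lost
                retino |= rng.random(n_patients) < p_retino_given_hba1c[hba1c]
                prevalence.append(retino.mean())
            return prevalence

        print(["%.2f" % p for p in simulate()])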

  11. Prediction of brittleness based on anisotropic rock physics model for kerogen-rich shale

    NASA Astrophysics Data System (ADS)

    Qian, Ke-Ran; He, Zhi-Liang; Chen, Ye-Quan; Liu, Xi-Wu; Li, Xiang-Yang

    2017-12-01

    The construction of a shale rock physics model and the selection of an appropriate brittleness index (BI) are two significant steps that can influence the accuracy of brittleness prediction. On one hand, the existing models of kerogen-rich shale are controversial, so a reasonable rock physics model needs to be built. On the other hand, several types of equations already exist for predicting the BI, whose feasibility needs to be carefully considered. This study constructed a kerogen-rich rock physics model by applying the self-consistent approximation and the differential effective medium theory to model intercoupled clay and kerogen mixtures. The feasibility of our model was confirmed by comparison with classical models, showing better accuracy. Templates were constructed based on our model to link physical properties and the BI. Different equations for the BI had different sensitivities, making them suitable for different types of formations. Equations based on Young's modulus were sensitive to variations in lithology, while those using Lamé's coefficients were sensitive to porosity and pore fluids. Physical information must be considered to improve brittleness prediction.

  12. Is the Ratio of Observed X-ray Luminosity to Bolometric Luminosity in Early-type Stars Really a Constant?

    NASA Technical Reports Server (NTRS)

    Waldron, W. L.

    1985-01-01

    The observed X-ray emission from early-type stars can be explained by the recombination stellar wind model (or base coronal model). The model predicts that the true X-ray luminosity from the base coronal zone can be 10 to 1000 times greater than the observed X-ray luminosity. From the models, scaling laws were found for the true and observed X-ray luminosities. These scaling laws predict that the ratio of the observed X-ray luminosity to the bolometric luminosity is functionally dependent on several stellar parameters. When applied to several other O and B stars, it is found that the values of the predicted ratio agree very well with the observed values.

  13. A Theoretical Model to Predict Both Horizontal Displacement and Vertical Displacement for Electromagnetic Induction-Based Deep Displacement Sensors

    PubMed Central

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

    Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary markedly and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal displacements and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, the NIELA-based mutual inductance and the EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for the characterization of II-type sensors. The NIELA model is widely applicable to II-type sensors for monitoring all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467

  14. A theoretical model to predict both horizontal displacement and vertical displacement for electromagnetic induction-based deep displacement sensors.

    PubMed

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

    Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary markedly and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal displacements and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, the NIELA-based mutual inductance and the EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for the characterization of II-type sensors. The NIELA model is widely applicable to II-type sensors for monitoring all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency.
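
    To illustrate the kind of quantity an equivalent-loop approach integrates numerically, the sketch below evaluates Neumann's formula for the mutual inductance of two circular loops as one loop is displaced; the geometry and discretization are assumptions, and this is not the NIELA formulation itself.

        # Numerical integration of Neumann's formula for the mutual inductance of
        # two circular loops, as a stand-in for an equivalent-loop calculation.
        import numpy as np

        MU0 = 4e-7 * np.pi

        def mutual_inductance(a, b, axial_gap, lateral_offset, n=400):
            """Neumann double integral, discretized with n points per loop."""
            phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
            theta = phi
            dphi = dtheta = 2 * np.pi / n
            P, T = np.meshgrid(phi, theta, indexing="ij")
            # separation between the two current elements
            dx = a * np.cos(P) - (lateral_offset + b * np.cos(T))
            dy = a * np.sin(P) - b * np.sin(T)
            dz = -axial_gap
            dist = np.sqrt(dx**2 + dy**2 + dz**2)
            # dl1 . dl2 = a*b*cos(phi - theta) dphi dtheta
            integrand = a * b * np.cos(P - T) / dist
            return MU0 / (4 * np.pi) * integrand.sum() * dphi * dtheta

        # mutual inductance falls off as the receiving loop is displaced
        for offset_mm in (0, 10, 20, 40):
            M = mutual_inductance(a=0.05, b=0.05, axial_gap=0.10,
                                  lateral_offset=offset_mm / 1000)
            print("offset %2d mm -> M = %.1f nH" % (offset_mm, M * 1e9))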

  15. Characterization of Used Nuclear Fuel with Multivariate Analysis for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dayman, Kenneth J.; Coble, Jamie B.; Orton, Christopher R.

    2014-01-01

    The Multi-Isotope Process (MIP) Monitor combines gamma spectroscopy and multivariate analysis to detect anomalies in various process streams in a nuclear fuel reprocessing system. Measured spectra are compared to models of nominal behavior at each measurement location to detect unexpected changes in system behavior. In order to improve the accuracy and specificity of process monitoring, fuel characterization may be used to more accurately train subsequent models in a full analysis scheme. This paper presents initial development of a reactor-type classifier that is used to select a reactor-specific partial least squares model to predict fuel burnup. Nuclide activities for prototypic used fuel samples were generated in ORIGEN-ARP and used to investigate techniques to characterize used nuclear fuel in terms of reactor type (pressurized or boiling water reactor) and burnup. A variety of reactor type classification algorithms, including k-nearest neighbors, linear and quadratic discriminant analyses, and support vector machines, were evaluated to differentiate used fuel from pressurized and boiling water reactors. Then, reactor type-specific partial least squares models were developed to predict the burnup of the fuel. Using these reactor type-specific models instead of a model trained for all light water reactors improved the accuracy of burnup predictions. The developed classification and prediction models were combined and applied to a large dataset that included eight fuel assembly designs, two of which were not used in training the models, and spanned the range of the initial 235U enrichment, cooling time, and burnup values expected of future commercial used fuel for reprocessing. Error rates were consistent across the range of considered enrichment, cooling time, and burnup values. Average absolute relative errors in burnup predictions for validation data both within and outside the training space were 0.0574% and 0.0597%, respectively. The errors seen in this work are artificially low, because the models were trained, optimized, and tested on simulated, noise-free data. However, these results indicate that the developed models may generalize well to new data and that the proposed approach constitutes a viable first step in developing a fuel characterization algorithm based on gamma spectra.
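
    A minimal sketch of the two-stage scheme (classify reactor type, then apply a type-specific partial least squares burnup model) is given below; the "spectral" features are random placeholders rather than ORIGEN-ARP nuclide activities.

        # Two-stage sketch: k-nearest-neighbors reactor-type classifier followed by
        # a reactor-type-specific PLS regression for burnup.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(8)
        n = 400
        reactor = rng.integers(0, 2, n)                       # 0 = PWR, 1 = BWR
        burnup = rng.uniform(10, 60, n)                       # GWd/tU
        # placeholder spectral features that depend on both burnup and reactor type
        X = np.column_stack([burnup + rng.normal(0, 2, n),
                             0.5 * burnup + 3 * reactor + rng.normal(0, 2, n),
                             rng.normal(0, 1, (n, 8))])

        clf = KNeighborsClassifier(n_neighbors=5).fit(X, reactor)

        pls_models = {r: PLSRegression(n_components=3).fit(X[reactor == r],
                                                           burnup[reactor == r])
                      for r in (0, 1)}

        def predict_burnup(x):
            r = int(clf.predict(x.reshape(1, -1))[0])         # stage 1: reactor type
            return r, pls_models[r].predict(x.reshape(1, -1))[0, 0]  # stage 2: burnup

        r, bu = predict_burnup(X[0])
        print("predicted reactor type:", "BWR" if r else "PWR", "- burnup %.1f GWd/tU" % bu)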

  16. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses basic survival model estimation to obtain the predicted mean failure time of a type of lamp. The estimate is for a parametric model, the general composite hazard rate model. The base model for the random failure-time variable is the exponential distribution, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function with an exponential basis. The model is fitted by estimating its parameters through the construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the mean failure time for this type of lamp. By grouping the data into several intervals, taking the average failure value in each interval, and then calculating the model's average failure time on each interval, the p-value obtained from the test is 0.3296.
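
    For the exponential base model, the constant hazard rate and the mean failure time can be estimated as in the sketch below; the failure times are synthetic placeholders rather than the lamp data used in the paper.

        # Exponential survival sketch: the hazard-rate MLE is
        # (number of failures) / (total time on test), and the predicted mean
        # failure time is 1/lambda.
        import numpy as np

        rng = np.random.default_rng(9)
        true_rate = 1.0 / 8000.0                       # failures per hour (placeholder)
        failure_hours = rng.exponential(1.0 / true_rate, 200)

        lam = failure_hours.size / failure_hours.sum() # MLE of the constant hazard rate
        mean_ttf = 1.0 / lam
        print("estimated hazard rate: %.2e per hour" % lam)
        print("predicted mean failure time: %.0f hours" % mean_ttf)

        # empirical survival grouped into intervals vs the fitted S(t) = exp(-lambda*t)
        edges = np.linspace(0, failure_hours.max(), 6)
        for lo, hi in zip(edges[:-1], edges[1:]):
            emp = (failure_hours > hi).mean()
            fit = np.exp(-lam * hi)
            print("S(%6.0f h): empirical %.2f, model %.2f" % (hi, emp, fit))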

  17. Ngram time series model to predict activity type and energy cost from wrist, hip and ankle accelerometers: implications of age

    PubMed Central

    Strath, Scott J; Kate, Rohit J; Keenan, Kevin G; Welch, Whitney A; Swartz, Ann M

    2016-01-01

    To develop and test time series single site and multi-site placement models, we used wrist, hip and ankle processed accelerometer data to estimate energy cost and type of physical activity in adults. Ninety-nine subjects in three age groups (18–39, 40–64, 65 + years) performed 11 activities while wearing three triaxial accelerometers: one each on the non-dominant wrist, hip, and ankle. During each activity, net oxygen cost (METs) was assessed. The time series of accelerometer signals were represented in terms of uniformly discretized values called bins. A support vector machine was used for activity classification, with bins and every pair of bins used as features. Bagged decision tree regression was used for net metabolic cost prediction. To evaluate model performance, we employed the jackknife leave-one-out cross validation method. Single accelerometer and multi-accelerometer site model estimates across and within age groups revealed similar accuracy, with a bias range of −0.03 to 0.01 METs, bias percent of −0.8 to 0.3%, and an rMSE range of 0.81–1.04 METs. Multi-site accelerometer location models improved activity type classification over single site location models from a low of 69.3% to a maximum of 92.8% accuracy. For each accelerometer site location model, or combined site location model, percent accuracy classification decreased as a function of age group, or when young age group models were generalized to older age groups. Specific age group models on average performed better than when all age groups were combined. The time series computation shows promising results for predicting energy cost and activity type. Differences in prediction across age groups, the lack of generalizability across age groups, and the fact that age-group-specific models perform better than combined-age models need to be considered as analytic calibration procedures to detect energy cost and activity type are further developed. PMID:26449155

  18. Operational, Real-Time, Sun-to-Earth Interplanetary Shock Predictions During Solar Cycle 23

    NASA Astrophysics Data System (ADS)

    Fry, C. D.; Dryer, M.; Sun, W.; Deehr, C. S.; Smith, Z.; Akasofu, S.

    2002-05-01

    We report on our progress in predicting interplanetary shock arrival time (SAT) in real time, using three forecast models: the Hakamada-Akasofu-Fry (HAF) modified kinematic model, the Interplanetary Shock Propagation Model (ISPM) and the Shock Time of Arrival (STOA) model. These models are run concurrently to provide real-time predictions of the arrival time at Earth of interplanetary shocks caused by solar events. These "fearless forecasts" are the first, and presently only, publicly distributed predictions of SAT and are undergoing quantitative evaluation for operational utility and scientific benchmarking. All three models predict SAT, but the HAF model also provides a global view of the propagation of interplanetary shocks through the pre-existing, non-uniform heliospheric structure. This allows the forecaster to track the propagation of the shock and to differentiate between shocks caused by solar events and those associated with co-rotating interaction regions (CIRs). This study includes 173 events during the period February 1997 to October 2000. Shock predictions were compared with spacecraft observations at the L1 location to determine how well the models perform. Sixty-eight shocks were observed at L1 within 120 hours of an event. We concluded that 6 of these observed shocks were caused by CIRs, and the remainder were caused by solar events. The forecast skill of the models is presented in terms of RMS errors, contingency tables and skill scores commonly used by the weather forecasting community. The false alarm rate for HAF was higher than for ISPM or STOA but much lower than for predictions based upon empirical studies or climatology. Of the parameters used to characterize a shock source at the Sun, the initial speed of the coronal shock, as represented by the observed metric type II speed, has the largest influence on the predicted SAT. We also found that HAF model predictions based upon type II speed are generally better for shocks originating from sites near central meridian, and worse for limb events. This tendency suggests that the observed type II speed is more representative of the interplanetary shock speed for events occurring near central meridian. In particular, the type II speed appears to underestimate the actual Earth-directed IP shock speed when the source of the event is near the limb. Several of the most interesting events (Bastille Day epoch (2000), April Fools Day epoch (2001)) will be discussed in more detail with the use of real-time animations.

  19. Contemporary model for cardiovascular risk prediction in people with type 2 diabetes.

    PubMed

    Kengne, Andre Pascal; Patel, Anushka; Marre, Michel; Travert, Florence; Lievre, Michel; Zoungas, Sophia; Chalmers, John; Colagiuri, Stephen; Grobbee, Diederick E; Hamet, Pavel; Heller, Simon; Neal, Bruce; Woodward, Mark

    2011-06-01

    Existing cardiovascular risk prediction equations perform non-optimally in different populations with diabetes. Thus, there is a continuing need to develop new equations that will reliably estimate cardiovascular disease (CVD) risk and offer flexibility for adaptation in various settings. This report presents a contemporary model for predicting cardiovascular risk in people with type 2 diabetes mellitus. A 4.5-year follow-up of the Action in Diabetes and Vascular disease: preterax and diamicron-MR controlled evaluation (ADVANCE) cohort was used to estimate coefficients for significant predictors of CVD using Cox models. Similar Cox models were used to fit the 4-year risk of CVD in 7168 participants without previous CVD. The model's applicability was tested on the same sample and another dataset. A total of 473 major cardiovascular events were recorded during follow-up. Age at diagnosis, known duration of diabetes, sex, pulse pressure, treated hypertension, atrial fibrillation, retinopathy, HbA1c, urinary albumin/creatinine ratio and non-HDL cholesterol at baseline were significant predictors of cardiovascular events. The model developed using these predictors displayed an acceptable discrimination (c-statistic: 0.70) and good calibration during internal validation. The external applicability of the model was tested on an independent cohort of individuals with type 2 diabetes, where similar discrimination was demonstrated (c-statistic: 0.69). Major cardiovascular events in contemporary populations with type 2 diabetes can be predicted on the basis of routinely measured clinical and biological variables. The model presented here can be used to quantify risk and guide the intensity of treatment in people with diabetes.

  20. Measuring pedestrian volumes and conflicts. Volume 2, Accident prediction model

    DOT National Transportation Integrated Search

    1987-12-01

    This final report presents the findings, conclusions, and recommendations of the study conducted to model pedestrian/vehicle accidents. A group-type analysis approach for the prediction of pedestrian/vehicle accidents using pedestrian/vehicle conflic...

  1. Modeling individual tree survival

    Treesearch

    Quang V. Cao

    2016-01-01

    Information provided by growth and yield models is the basis for forest managers to make decisions on how to manage their forests. Among different types of growth models, whole-stand models offer predictions at stand level, whereas individual-tree models give detailed information at tree level. The well-known logistic regression is commonly used to predict tree...

  2. A Demonstration of Regression False Positive Selection in Data Mining

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2014-01-01

    Business analytics courses, such as marketing research, data mining, forecasting, and advanced financial modeling, have substantial predictive modeling components. The predictive modeling in these courses requires students to estimate and test many linear regressions. As a result, false positive variable selection ("type I errors") is…

  3. Point-Mass Aircraft Trajectory Prediction Using a Hierarchical, Highly-Adaptable Software Design

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Woods, Sharon E.; Wing, David J.

    2017-01-01

    A highly adaptable and extensible method for predicting four-dimensional trajectories of civil aircraft has been developed. This method, Behavior-Based Trajectory Prediction, is based on taxonomic concepts developed for the description and comparison of trajectory prediction software. A hierarchical approach to the "behavioral" layer of a point-mass model of aircraft flight, a clear separation between the "behavioral" and "mathematical" layers of the model, and an abstraction of the methods of integrating differential equations in the "mathematical" layer have been demonstrated to support aircraft models of different types (in particular, turbojet vs. turboprop aircraft) using performance models at different levels of detail and in different formats, and promise to be easily extensible to other aircraft types and sources of data. The resulting trajectories predict location, altitude, lateral and vertical speeds, and fuel consumption along the flight path of the subject aircraft accurately and quickly, accounting for local conditions of wind and outside air temperature. The Behavior-Based Trajectory Prediction concept was implemented in NASA's Traffic Aware Planner (TAP) flight-optimizing cockpit software application.

  4. An Efficient Interval Type-2 Fuzzy CMAC for Chaos Time-Series Prediction and Synchronization.

    PubMed

    Lee, Ching-Hung; Chang, Feng-Yu; Lin, Chih-Min

    2014-03-01

    This paper proposes a more efficient control algorithm for chaos time-series prediction and synchronization. A novel type-2 fuzzy cerebellar model articulation controller (T2FCMAC) is proposed. In special cases, the T2FCMAC can be reduced to an interval type-2 fuzzy neural network, a fuzzy neural network, or a fuzzy cerebellar model articulation controller (CMAC); it is therefore a more generalized network with better learning ability, and it is used here for chaos time-series prediction and synchronization. Moreover, the T2FCMAC realizes an un-normalized interval type-2 fuzzy logic system based on the structure of the CMAC, which provides better capability for handling uncertainty and more design degrees of freedom than a traditional type-1 fuzzy CMAC. Unlike most interval type-2 fuzzy systems, the type-reduction step of the T2FCMAC is bypassed owing to the un-normalized interval type-2 fuzzy logic system, so the T2FCMAC has lower computational complexity and is more practical. For chaos time-series prediction and synchronization applications, training architectures with corresponding convergence analyses and optimal learning rates based on a Lyapunov stability approach are introduced. Finally, two illustrative examples demonstrate the performance of the proposed T2FCMAC.

  5. A Semi-Supervised Learning Algorithm for Predicting Four Types MiRNA-Disease Associations by Mutual Information in a Heterogeneous Network.

    PubMed

    Zhang, Xiaotian; Yin, Jian; Zhang, Xu

    2018-03-02

    Increasing evidence suggests that dysregulation of microRNAs (miRNAs) may lead to a variety of diseases. Therefore, identifying disease-related miRNAs is a crucial problem. Currently, many computational approaches have been proposed to predict binary miRNA-disease associations. In this study, in order to predict underlying miRNA-disease association types, a semi-supervised model called the network-based label propagation algorithm is proposed to infer multiple types of miRNA-disease associations (NLPMMDA) by mutual information derived from the heterogeneous network. The NLPMMDA method integrates disease semantic similarity, miRNA functional similarity, and Gaussian interaction profile kernel similarity information of miRNAs and diseases to construct a heterogeneous network. NLPMMDA is a semi-supervised model which does not require verified negative samples. Leave-one-out cross validation (LOOCV) was implemented for four known types of miRNA-disease associations and demonstrated the reliable performance of our method. Moreover, case studies of lung cancer and breast cancer confirmed effective performance of NLPMMDA to predict novel miRNA-disease associations and their association types.
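
    The sketch below shows the generic form of network-based label propagation on a similarity graph, with toy data standing in for the heterogeneous miRNA-disease network; the actual NLPMMDA update rule, similarity integration and mutual-information weighting may differ from this simplified version.

        # Hedged sketch: iterative label propagation on a similarity network (toy data).
        import numpy as np

        def label_propagation(W, Y, alpha=0.6, n_iter=100, tol=1e-6):
            """W: (n, n) symmetric similarity matrix; Y: (n, k) initial labels,
            one column per association type. Returns propagated scores F."""
            d = W.sum(axis=1)
            d[d == 0] = 1.0
            S = W / np.sqrt(np.outer(d, d))        # symmetric normalisation D^-1/2 W D^-1/2
            F = Y.astype(float).copy()
            for _ in range(n_iter):
                F_new = alpha * S @ F + (1 - alpha) * Y
                if np.abs(F_new - F).max() < tol:
                    break
                F = F_new
            return F

        rng = np.random.default_rng(0)
        W = rng.random((20, 20)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
        Y = np.zeros((20, 4)); Y[rng.integers(0, 20, 6), rng.integers(0, 4, 6)] = 1
        print(label_propagation(W, Y)[:3])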

  6. Predictors of mortality in hospital survivors with type 2 diabetes mellitus and acute coronary syndromes.

    PubMed

    Savonitto, Stefano; Morici, Nuccia; Nozza, Anna; Cosentino, Francesco; Perrone Filardi, Pasquale; Murena, Ernesto; Morocutti, Giorgio; Ferri, Marco; Cavallini, Claudio; Eijkemans, Marinus Jc; Stähli, Barbara E; Schrieks, Ilse C; Toyama, Tadashi; Lambers Heerspink, H J; Malmberg, Klas; Schwartz, Gregory G; Lincoff, A Michael; Ryden, Lars; Tardif, Jean Claude; Grobbee, Diederick E

    2018-01-01

    To define the predictors of long-term mortality in patients with type 2 diabetes mellitus and recent acute coronary syndrome. A total of 7226 patients from a randomized trial, testing the effect on cardiovascular outcomes of the dual peroxisome proliferator-activated receptor agonist aleglitazar in patients with type 2 diabetes mellitus and recent acute coronary syndrome (AleCardio trial), were analysed. Median follow-up was 2 years. The independent mortality predictors were defined using Cox regression analysis. The predictive information provided by each variable was calculated as percent of total chi-square of the model. All-cause mortality was 4.0%, with cardiovascular death contributing for 73% of mortality. The mortality prediction model included N-terminal proB-type natriuretic peptide (adjusted hazard ratio = 1.68; 95% confidence interval = 1.51-1.88; 27% of prediction), lack of coronary revascularization (hazard ratio = 2.28; 95% confidence interval = 1.77-2.93; 18% of prediction), age (hazard ratio = 1.04; 95% confidence interval = 1.02-1.05; 15% of prediction), heart rate (hazard ratio = 1.02; 95% confidence interval = 1.01-1.03; 10% of prediction), glycated haemoglobin (hazard ratio = 1.11; 95% confidence interval = 1.03-1.19; 8% of prediction), haemoglobin (hazard ratio = 1.01; 95% confidence interval = 1.00-1.02; 8% of prediction), prior coronary artery bypass (hazard ratio = 1.61; 95% confidence interval = 1.11-2.32; 7% of prediction) and prior myocardial infarction (hazard ratio = 1.40; 95% confidence interval = 1.05-1.87; 6% of prediction). In patients with type 2 diabetes mellitus and recent acute coronary syndrome, mortality prediction is largely dominated by markers of cardiac, rather than metabolic, dysfunction.

  7. Are more complex physiological models of forest ecosystems better choices for plot and regional predictions?

    Treesearch

    Wenchi Jin; Hong S. He; Frank R. Thompson

    2016-01-01

    Process-based forest ecosystem models vary from simple physiological and complex physiological to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best predictions at the plot scale over temporal extents of less than 10 years; however, it is largely untested whether complex models outperform the other two types of models...

  8. Identifying type 1 and type 2 diabetic cases using administrative data: a tree-structured model.

    PubMed

    Lo-Ciganic, Weihsuan; Zgibor, Janice C; Ruppert, Kristine; Arena, Vincent C; Stone, Roslyn A

    2011-05-01

    To date, few administrative diabetes mellitus (DM) registries have distinguished type 1 diabetes mellitus (T1DM) from type 2 diabetes mellitus (T2DM). Using a classification tree model, a prediction rule was developed to distinguish T1DM from T2DM in a large administrative database. The Medical Archival Retrieval System at the University of Pittsburgh Medical Center included administrative and clinical data from January 1, 2000, through September 30, 2009, for 209,647 DM patients aged ≥18 years. Probable cases (8,173 T1DM and 125,111 T2DM) were identified by applying clinical criteria to administrative data. Nonparametric classification tree models were fit using TIBCO Spotfire S+ 8.1 (TIBCO Software), with model size based on 10-fold cross validation. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of T1DM were estimated. The main predictors that distinguished T1DM from T2DM are age <40 years; International Classification of Disease, 9th revision, codes of T1DM or T2DM diagnosis; inpatient oral hypoglycemic agent use; inpatient insulin use; and episode(s) of diabetic ketoacidosis diagnosis. Compared with a complex clinical algorithm, the tree-structured model to predict T1DM had 92.8% sensitivity, 99.3% specificity, 89.5% PPV, and 99.5% NPV. The preliminary predictive rule appears to be promising. Being able to distinguish between DM subtypes in administrative databases will allow large-scale subtype-specific analyses of medical care costs, morbidity, and mortality. © 2011 Diabetes Technology Society.
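
    A minimal sketch of fitting a classification tree with the model size chosen by 10-fold cross-validation, as described above. It uses scikit-learn rather than the TIBCO Spotfire S+ software named in the abstract, and the file and feature names are hypothetical stand-ins for the listed predictors.

        # Hedged sketch: tree-structured T1DM vs T2DM classifier, tree size chosen by 10-fold CV.
        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import GridSearchCV

        df = pd.read_csv("dm_registry.csv")  # assumed: one row per patient, label column "t1dm"
        X = df[["age_lt_40", "icd9_t1dm_code", "icd9_t2dm_code",
                "inpatient_oral_agent", "inpatient_insulin", "dka_episode"]]
        y = df["t1dm"]

        grid = GridSearchCV(DecisionTreeClassifier(random_state=0),
                            {"max_leaf_nodes": [4, 8, 16, 32, 64]},
                            cv=10, scoring="accuracy")
        grid.fit(X, y)
        tree = grid.best_estimator_
        print("cross-validated accuracy:", grid.best_score_)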

  9. Predictive Computational Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.

    In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of loop forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning the model was able to predict accurately the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of chromatin of type A to sit at the periphery of the chromosomes.

  10. Hydrological-niche models predict water plant functional group distributions in diverse wetland types.

    PubMed

    Deane, David C; Nicol, Jason M; Gehrig, Susan L; Harding, Claire; Aldridge, Kane T; Goodman, Abigail M; Brookes, Justin D

    2017-06-01

    Human use of water resources threatens environmental water supplies. If resource managers are to develop policies that avoid unacceptable ecological impacts, some means to predict ecosystem response to changes in water availability is necessary. This is difficult to achieve at spatial scales relevant for water resource management because of the high natural variability in ecosystem hydrology and ecology. Water plant functional groups classify species with similar hydrological niche preferences together, allowing a qualitative means to generalize community responses to changes in hydrology. We tested the potential for functional groups in making quantitative prediction of water plant functional group distributions across diverse wetland types over a large geographical extent. We sampled wetlands covering a broad range of hydrogeomorphic and salinity conditions in South Australia, collecting both hydrological and floristic data from 687 quadrats across 28 wetland hydrological gradients. We built hydrological-niche models for eight water plant functional groups using a range of candidate models combining different surface inundation metrics. We then tested the predictive performance of top-ranked individual and averaged models for each functional group. Cross validation showed that models achieved acceptable predictive performance, with correct classification rates in the range 0.68-0.95. Model predictions can be made at any spatial scale that hydrological data are available and could be implemented in a geographical information system. We show the response of water plant functional groups to inundation is consistent enough across diverse wetland types to quantify the probability of hydrological impacts over regional spatial scales. © 2017 by the Ecological Society of America.

  11. Aqueous and Tissue Residue-Based Interspecies Correlation Estimation Models Provide Conservative Hazard Estimates for Aromatic Compounds

    EPA Science Inventory

    Interspecies correlation estimation (ICE) models were developed for 30 nonpolar aromatic compounds to allow comparison of prediction accuracy between 2 data compilation approaches. Type 1 models used data combined across studies, and type 2 models used data combined only within s...

  12. Comparing Two Types of Model Progression in an Inquiry Learning Environment with Modelling Facilities

    ERIC Educational Resources Information Center

    Mulder, Yvonne G.; Lazonder, Ard W.; de Jong, Ton

    2011-01-01

    The educational advantages of inquiry learning environments that incorporate modelling facilities are often challenged by students' poor inquiry skills. This study examined two types of model progression as means to compensate for these skill deficiencies. Model order progression (MOP), the predicted optimal variant, gradually increases the…

  13. [Discrimination of types of polyacrylamide based on near infrared spectroscopy coupled with least square support vector machine].

    PubMed

    Zhang, Hong-Guang; Yang, Qin-Min; Lu, Jian-Gang

    2014-04-01

    In this paper, a novel discriminant methodology based on near infrared spectroscopic analysis and least square support vector machine was proposed for rapid and nondestructive discrimination of different types of Polyacrylamide. The diffuse reflectance spectra of samples of Non-ionic Polyacrylamide, Anionic Polyacrylamide and Cationic Polyacrylamide were measured. Then principal component analysis was applied to reduce the dimension of the spectral data and extract the principal components. The first three principal components were used for cluster analysis of the three different types of Polyacrylamide. Those principal components were also used as inputs of the least square support vector machine model. The parameters and the number of principal components used as inputs of the least square support vector machine model were optimized through cross validation based on grid search. 60 samples of each type of Polyacrylamide were collected, giving a total of 180 samples. 135 samples, 45 for each type of Polyacrylamide, were randomly selected as a training set to build the calibration model, and the remaining 45 samples were used as a test set to evaluate the performance of the developed model. In addition, 5 Cationic Polyacrylamide samples and 5 Anionic Polyacrylamide samples adulterated with different proportions of Non-ionic Polyacrylamide were also prepared to show the feasibility of the proposed method to discriminate adulterated Polyacrylamide samples. The prediction error threshold for each type of Polyacrylamide was determined by an F statistical significance test based on the cross-validation prediction errors of the training set for the corresponding type of Polyacrylamide. The discrimination accuracy of the built model was 100% in prediction of the test set. The predictions for the 10 mixed samples are also presented, and all mixed samples were accurately discriminated as adulterated samples. The overall results demonstrate that the discrimination method proposed in the present paper can rapidly and nondestructively discriminate the different types of Polyacrylamide and the adulterated Polyacrylamide samples, and offers a new approach to discriminating the types of Polyacrylamide.
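
    The pipeline described above (PCA for dimension reduction, a kernel classifier tuned by cross-validated grid search) can be sketched as follows. The spectra file, labels and split sizes are assumptions, and scikit-learn's standard SVC is used as a stand-in because a least-squares SVM is not part of that library.

        # Hedged sketch: PCA + SVM classifier for NIR spectra, tuned by grid-search CV.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.model_selection import GridSearchCV, train_test_split

        X = np.load("nir_spectra.npy")      # assumed shape (180, n_wavelengths)
        y = np.load("pam_type_labels.npy")  # assumed: 0 = non-ionic, 1 = anionic, 2 = cationic

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=135,
                                                  stratify=y, random_state=0)

        pipe = make_pipeline(StandardScaler(), PCA(), SVC(kernel="rbf"))
        # Grid-search the number of principal components and the SVM parameters,
        # echoing the cross-validated grid search in the abstract.
        grid = GridSearchCV(pipe, {"pca__n_components": [3, 5, 10],
                                   "svc__C": [1, 10, 100],
                                   "svc__gamma": ["scale", 0.01, 0.001]}, cv=5)
        grid.fit(X_tr, y_tr)
        print("test-set accuracy:", grid.score(X_te, y_te))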

  14. Development and validation of a novel predictive scoring model for microvascular invasion in patients with hepatocellular carcinoma.

    PubMed

    Zhao, Hui; Hua, Ye; Dai, Tu; He, Jian; Tang, Min; Fu, Xu; Mao, Liang; Jin, Huihan; Qiu, Yudong

    2017-03-01

    Microvascular invasion (MVI) in patients with hepatocellular carcinoma (HCC) cannot be accurately predicted preoperatively. This study aimed to establish a predictive scoring model of MVI in solitary HCC patients without macroscopic vascular invasion. A total of 309 consecutive HCC patients who underwent curative hepatectomy were divided into the derivation (n=206) and validation cohort (n=103). A predictive scoring model of MVI was established according to the valuable predictors in the derivation cohort based on multivariate logistic regression analysis. The performance of the predictive model was evaluated in the derivation and validation cohorts. Preoperative imaging features on CECT, such as intratumoral arteries, non-nodular type of HCC and absence of radiological tumor capsule were independent predictors for MVI. The predictive scoring model was established according to the β coefficients of the 3 predictors. Area under receiver operating characteristic (AUROC) of the predictive scoring model was 0.872 (95% CI, 0.817-0.928) and 0.856 (95% CI, 0.771-0.940) in the derivation and validation cohorts. The positive and negative predictive values were 76.5% and 88.0% in the derivation cohort and 74.4% and 88.3% in the validation cohort. The performance of the model was similar between the patients with tumor size ≤5cm and >5cm in AUROC (P=0.910). The predictive scoring model based on intratumoral arteries, non-nodular type of HCC, and absence of the radiological tumor capsule on preoperative CECT is of great value in the prediction of MVI regardless of tumor size. Copyright © 2017 Elsevier B.V. All rights reserved.
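
    A minimal sketch of turning logistic-regression β coefficients for the three imaging predictors into a simple additive score and checking its discrimination with the AUROC; the data file and column names are hypothetical, and the coefficients are assumed to be positive risk factors, as reported above.

        # Hedged sketch: scoring model from logistic-regression betas (hypothetical data).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from sklearn.metrics import roc_auc_score

        df = pd.read_csv("hcc_derivation.csv")  # assumed binary 0/1 columns
        X = sm.add_constant(df[["intratumoral_arteries", "non_nodular_type", "no_tumor_capsule"]])
        fit = sm.Logit(df["mvi"], X).fit(disp=0)

        # Points proportional to each beta coefficient (rounded for bedside use);
        # assumes all three betas are positive risk factors, as in the paper.
        betas = fit.params.drop("const")
        points = np.round(betas / betas.min()).astype(int)
        df["score"] = df[points.index].to_numpy() @ points.to_numpy()
        print(points.to_dict())
        print("AUROC of the score:", roc_auc_score(df["mvi"], df["score"]))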

  15. Accuracy of gastrocnemius muscles forces in walking and running goats predicted by one-element and two-element Hill-type models.

    PubMed

    Lee, Sabrina S M; Arnold, Allison S; Miara, Maria de Boef; Biewener, Andrew A; Wakeling, James M

    2013-09-03

    Hill-type models are commonly used to estimate muscle forces during human and animal movement-yet the accuracy of the forces estimated during walking, running, and other tasks remains largely unknown. Further, most Hill-type models assume a single contractile element, despite evidence that faster and slower motor units, which have different activation-deactivation dynamics, may be independently or collectively excited. This study evaluated a novel, two-element Hill-type model with "differential" activation of fast and slow contractile elements. Model performance was assessed using a comprehensive data set (including measures of EMG intensity, fascicle length, and tendon force) collected from the gastrocnemius muscles of goats during locomotor experiments. Muscle forces predicted by the new two-element model were compared to the forces estimated using traditional one-element models and to the forces measured in vivo using tendon buckle transducers. Overall, the two-element model resulted in the best predictions of in vivo gastrocnemius force. The coefficient of determination, r(2), was up to 26.9% higher and the root mean square error, RMSE, was up to 37.4% lower for the two-element model than for the one-element models tested. All models captured salient features of the measured muscle force during walking, trotting, and galloping (r(2)=0.26-0.51), and all exhibited some errors (RMSE=9.63-32.2% of the maximum in vivo force). These comparisons provide important insight into the accuracy of Hill-type models. The results also show that incorporation of fast and slow contractile elements within muscle models can improve estimates of time-varying, whole muscle force during locomotor tasks. Copyright © 2013 Elsevier Ltd. All rights reserved.
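
    The general shape of a two-element Hill-type estimate can be sketched as below: slow and fast contractile elements share the measured fascicle kinematics but carry their own activations and force-velocity curves, and their contributions are summed. The curve shapes and constants here are generic placeholders rather than the calibrated relations used in the study.

        # Hedged sketch: two-element Hill-type force estimate with generic curves.
        import numpy as np

        def f_length(l_norm):                  # generic force-length curve
            return np.exp(-((l_norm - 1.0) / 0.45) ** 2)

        def f_velocity(v_norm, v_max):         # generic force-velocity curve (shortening < 0)
            return np.where(v_norm < 0,
                            (1 + v_norm / v_max) / (1 - 4 * v_norm / v_max),
                            1.5 - 0.5 * (1 - v_norm / v_max) / (1 + 20 * v_norm / v_max))

        def two_element_force(a_slow, a_fast, l_norm, v_norm, F_max, pennation,
                              vmax_slow=4.0, vmax_fast=10.0):
            # a_slow and a_fast are the activations of the slow and fast elements,
            # treated here as fractions of the whole muscle (a_slow + a_fast <= 1).
            f_slow = a_slow * f_length(l_norm) * f_velocity(v_norm, vmax_slow)
            f_fast = a_fast * f_length(l_norm) * f_velocity(v_norm, vmax_fast)
            return F_max * (f_slow + f_fast) * np.cos(pennation)

        print(two_element_force(a_slow=0.3, a_fast=0.1, l_norm=0.95,
                                v_norm=-1.0, F_max=800.0, pennation=np.radians(15)))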

  16. Accuracy of gastrocnemius muscles forces in walking and running goats predicted by one-element and two-element Hill-type models

    PubMed Central

    Lee, Sabrina S.M.; Arnold, Allison S.; Miara, Maria de Boef; Biewener, Andrew A.; Wakeling, James M.

    2013-01-01

    Hill-type models are commonly used to estimate muscle forces during human and animal movement —yet the accuracy of the forces estimated during walking, running, and other tasks remains largely unknown. Further, most Hill-type models assume a single contractile element, despite evidence that faster and slower motor units, which have different activation-deactivation dynamics, may be independently or collectively excited. This study evaluated a novel, two-element Hill-type model with “differential” activation of fast and slow contractile elements. Model performance was assessed using a comprehensive data set (including measures of EMG intensity, fascicle length, and tendon force) collected from the gastrocnemius muscles of goats during locomotor experiments. Muscle forces predicted by the new two-element model were compared to the forces estimated using traditional one-element models and to the forces measured in vivo using tendon buckle transducers. Overall, the two-element model resulted in the best predictions of in vivo gastrocnemius force. The coefficient of determination, r2, was up to 26.9% higher and the root mean square error, RMSE, was up to 37.4% lower for the two-element model than for the one-element models tested. All models captured salient features of the measured muscle force during walking, trotting, and galloping (r2 = 0.26 to 0.51), and all exhibited some errors (RMSE = 9.63 to 32.2% of the maximum in vivo force). These comparisons provide important insight into the accuracy of Hill-type models. The results also show that incorporation of fast and slow contractile elements within muscle models can improve estimates of time-varying, whole muscle force during locomotor tasks. PMID:23871235

  17. Legume Diversity Patterns in West Central Africa: Influence of Species Biology on Distribution Models

    PubMed Central

    de la Estrella, Manuel; Mateo, Rubén G.; Wieringa, Jan J.; Mackinder, Barbara; Muñoz, Jesús

    2012-01-01

    Objectives Species Distribution Models (SDMs) are used to produce predictions of potential Leguminosae diversity in West Central Africa. Those predictions are evaluated subsequently using expert opinion. The established methodology of combining all SDMs is refined to assess species diversity within five defined vegetation types. Potential species diversity is thus predicted for each vegetation type separately. The primary aim of the new methodology is to define, in more detail, areas of species richness for conservation planning. Methodology Using Maxent, SDMs based on a suite of 14 environmental predictors were generated for 185 West Central African Leguminosae species, each categorised according to one of five vegetation types: Afromontane, coastal, non-flooded forest, open formations, or riverine forest. The relative contribution of each environmental variable was compared between different vegetation types using a nonparametric Kruskal-Wallis analysis followed by a post-hoc Kruskal-Wallis Paired Comparison contrast. Legume species diversity patterns were explored initially using the typical method of stacking all SDMs. Subsequently, five different ensemble models were generated by partitioning SDMs according to vegetation category. Ecological modelers worked with legume specialists to improve data integrity and integrate expert opinion in the interpretation of individual species models and potential species richness predictions for different vegetation types. Results/Conclusions Of the 14 environmental predictors used, five showed no difference in their relative contribution to the different vegetation models. Of the nine discriminating variables, the majority were related to temperature variation. The set of variables that played a major role in the Afromontane species diversity model differed significantly from the sets of variables of greatest relative importance in other vegetation categories. The traditional approach of stacking all SDMs indicated overall centers of diversity in the region, but the maps indicating potential species richness by vegetation type offered more detailed information on which conservation efforts can be focused. PMID:22911808

  18. B-type natriuretic peptide and C-reactive protein in the prediction of atrial fibrillation risk: the CHARGE-AF Consortium of community-based cohort studies

    PubMed Central

    Sinner, Moritz F.; Stepas, Katherine A.; Moser, Carlee B.; Krijthe, Bouwe P.; Aspelund, Thor; Sotoodehnia, Nona; Fontes, João D.; Janssens, A. Cecile J.W.; Kronmal, Richard A.; Magnani, Jared W.; Witteman, Jacqueline C.; Chamberlain, Alanna M.; Lubitz, Steven A.; Schnabel, Renate B.; Vasan, Ramachandran S.; Wang, Thomas J.; Agarwal, Sunil K.; McManus, David D.; Franco, Oscar H.; Yin, Xiaoyan; Larson, Martin G.; Burke, Gregory L.; Launer, Lenore J.; Hofman, Albert; Levy, Daniel; Gottdiener, John S.; Kääb, Stefan; Couper, David; Harris, Tamara B.; Astor, Brad C.; Ballantyne, Christie M.; Hoogeveen, Ron C.; Arai, Andrew E.; Soliman, Elsayed Z.; Ellinor, Patrick T.; Stricker, Bruno H.C.; Gudnason, Vilmundur; Heckbert, Susan R.; Pencina, Michael J.; Benjamin, Emelia J.; Alonso, Alvaro

    2014-01-01

    Aims B-type natriuretic peptide (BNP) and C-reactive protein (CRP) predict atrial fibrillation (AF) risk. However, their risk stratification abilities in the broad community remain uncertain. We sought to improve risk stratification for AF using biomarker information. Methods and results We ascertained AF incidence in 18 556 Whites and African Americans from the Atherosclerosis Risk in Communities Study (ARIC, n=10 675), Cardiovascular Health Study (CHS, n = 5043), and Framingham Heart Study (FHS, n = 2838), followed for 5 years (prediction horizon). We added BNP (ARIC/CHS: N-terminal pro-B-type natriuretic peptide; FHS: BNP), CRP, or both to a previously reported AF risk score, and assessed model calibration and predictive ability [C-statistic, integrated discrimination improvement (IDI), and net reclassification improvement (NRI)]. We replicated models in two independent European cohorts: Age, Gene/Environment Susceptibility Reykjavik Study (AGES), n = 4467; Rotterdam Study (RS), n = 3203. B-type natriuretic peptide and CRP were significantly associated with AF incidence (n = 1186): hazard ratio per 1-SD ln-transformed biomarker 1.66 [95% confidence interval (CI), 1.56–1.76], P < 0.0001 and 1.18 (95% CI, 1.11–1.25), P < 0.0001, respectively. Model calibration was sufficient (BNP, χ2 = 17.0; CRP, χ2 = 10.5; BNP and CRP, χ2 = 13.1). B-type natriuretic peptide improved the C-statistic from 0.765 to 0.790, yielded an IDI of 0.027 (95% CI, 0.022–0.032), a relative IDI of 41.5%, and a continuous NRI of 0.389 (95% CI, 0.322–0.455). The predictive ability of CRP was limited (C-statistic increment 0.003). B-type natriuretic peptide consistently improved prediction in AGES and RS. Conclusion B-type natriuretic peptide, not CRP, substantially improved AF risk prediction beyond clinical factors in an independently replicated, heterogeneous population. B-type natriuretic peptide may serve as a benchmark to evaluate novel putative AF risk biomarkers. PMID:25037055

  19. Turbulence modeling in simulation of gas-turbine flow and heat transfer.

    PubMed

    Brereton, G; Shih, T I

    2001-05-01

    The popular k-epsilon type two-equation turbulence models, which are calibrated by experimental data from simple shear flows, are analyzed for their ability to predict flows involving shear and an extra strain: flow with shear and rotation, and flow with shear and streamline curvature. The analysis is based on comparisons between model predictions and those from measurements and large-eddy simulations of homogeneous flows involving shear and an extra strain, either from rotation or from streamline curvature. Parameters are identified which show the conditions under which the performance of k-epsilon type models can be expected to be poor.

  20. ACToR: Aggregated Computational Toxicology Resource ...

    EPA Pesticide Factsheets

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  1. ACToR: Aggregated Computational Toxicology Resource (S) ...

    EPA Pesticide Factsheets

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  2. Regression models for predicting peak and continuous three-dimensional spinal loads during symmetric and asymmetric lifting tasks.

    PubMed

    Fathallah, F A; Marras, W S; Parnianpour, M

    1999-09-01

    Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moments variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moments variables (combined models). The results showed that peak 3D spinal loading during symmetric and asymmetric lifting were predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.

  3. Semiparametric Identification of Human Arm Dynamics for Flexible Control of a Functional Electrical Stimulation Neuroprosthesis

    PubMed Central

    Schearer, Eric M.; Liao, Yu-Wei; Perreault, Eric J.; Tresch, Matthew C.; Memberg, William D.; Kirsch, Robert F.; Lynch, Kevin M.

    2016-01-01

    We present a method to identify the dynamics of a human arm controlled by an implanted functional electrical stimulation neuroprosthesis. The method uses Gaussian process regression to predict shoulder and elbow torques given the shoulder and elbow joint positions and velocities and the electrical stimulation inputs to muscles. We compare the accuracy of torque predictions of nonparametric, semiparametric, and parametric model types. The most accurate of the three model types is a semiparametric Gaussian process model that combines the flexibility of a black box function approximator with the generalization power of a parameterized model. The semiparametric model predicted torques during stimulation of multiple muscles with errors less than 20% of the total muscle torque and passive torque needed to drive the arm. The identified model allows us to define an arbitrary reaching trajectory and approximately determine the muscle stimulations required to drive the arm along that trajectory. PMID:26955041
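
    A minimal sketch of the semiparametric idea on toy one-dimensional data: a parametric linear model captures the mean trend and a Gaussian process is fitted to its residuals, so predictions combine both parts. The paper's model has multiple inputs (joint states and stimulation commands) and different kernels, so treat this only as an illustration of the structure.

        # Hedged sketch: semiparametric regression = linear mean model + GP on residuals.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)
        X = np.sort(rng.uniform(-3, 3, 80)).reshape(-1, 1)
        y = 1.5 * X.ravel() + np.sin(3 * X.ravel()) + 0.1 * rng.standard_normal(80)

        lin = LinearRegression().fit(X, y)                   # parametric part
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(X, y - lin.predict(X))                        # nonparametric residual part

        X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
        torque_pred = lin.predict(X_new) + gp.predict(X_new) # combined prediction
        print(torque_pred)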

  4. Agreement between gamma passing rates using computed tomography in radiotherapy and secondary cancer risk prediction from more advanced dose calculated models

    PubMed Central

    Balosso, Jacques

    2017-01-01

    Background During the past decades in radiotherapy, dose distributions were calculated using density correction methods with a pencil beam, type 'a', algorithm. The objectives of this study are to assess and evaluate the impact of the dose distribution shift on the predicted secondary cancer risk (SCR) when using modern advanced dose calculation algorithms (point kernel, type 'b'), which account for changes in lateral electron transport. Methods Clinical examples of pediatric cranio-spinal irradiation patients were evaluated. For each case, two radiotherapy treatment plans were generated using the same prescribed dose to the target, resulting in different numbers of monitor units (MUs) per field. The dose distributions were calculated using both algorithm types. A gamma index (γ) analysis was used to compare the dose distributions in the lung. The organ equivalent dose (OED) was calculated with three different models: the linear, the linear-exponential and the plateau dose-response curves. The excess absolute risk ratio (EAR) was also evaluated as EAR = OED type 'b' / OED type 'a'. Results The γ analysis indicated an acceptable dose distribution agreement of 95% with 3%/3 mm. However, the γ-maps displayed dose displacement >1 mm around the healthy lungs. Compared to type 'a', the OED values from type 'b' dose distributions were about 8% to 16% higher, leading to an EAR ratio >1, ranging from 1.08 to 1.13 depending on the SCR model. Conclusions The shift of dose calculation in radiotherapy, according to the algorithm, can significantly influence the SCR prediction and the plan optimization, since OEDs are calculated from the DVH for a specific treatment. The agreement between dose distribution and SCR prediction depends on the dose-response models and epidemiological data. In addition, a γ passing rate of 3%/3 mm does not capture the difference, up to 15%, in the predicted SCR resulting from alternative algorithms. Considering that modern algorithms are more accurate, showing the dose distributions more precisely, but that the prediction of absolute SCR is still very imprecise, only the EAR ratio could be used to rank radiotherapy plans. PMID:28811995
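
    The OED and EAR-ratio calculation can be sketched as below for a toy differential DVH, using the commonly cited Schneider-type linear, linear-exponential and plateau dose-response forms; the exact formulas and parameter values used in the paper are not quoted here, and the defaults below are illustrative assumptions only.

        # Hedged sketch: OED under three dose-response models and the EAR ratio (toy DVH).
        import numpy as np

        def oed(dose_bins, volume_fractions, model="linear", alpha=0.044, delta=0.317):
            # alpha, delta: illustrative placeholder values, not the paper's parameters.
            d, v = np.asarray(dose_bins), np.asarray(volume_fractions)
            v = v / v.sum()
            if model == "linear":
                return np.sum(v * d)
            if model == "linear-exponential":
                return np.sum(v * d * np.exp(-alpha * d))
            if model == "plateau":
                return np.sum(v * (1 - np.exp(-delta * d)) / delta)
            raise ValueError(model)

        dose = np.linspace(0, 30, 31)        # Gy, toy DVH bins
        dvh_a = np.exp(-dose / 8)            # toy type 'a' dose distribution
        dvh_b = np.exp(-dose / 9)            # toy type 'b' dose distribution
        for m in ("linear", "linear-exponential", "plateau"):
            ear_ratio = oed(dose, dvh_b, m) / oed(dose, dvh_a, m)
            print(m, round(ear_ratio, 3))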

  5. Next-generation genome-scale models for metabolic engineering.

    PubMed

    King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O

    2015-12-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
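
    At the core of COBRA methods is a flux balance analysis (FBA) linear program: maximise an objective flux subject to steady-state mass balance and flux bounds. The sketch below solves a toy three-metabolite network with scipy rather than a genome-scale model or a dedicated COBRA toolbox, purely to illustrate the type of prediction involved.

        # Hedged sketch: toy FBA as a linear program, maximising an objective flux.
        import numpy as np
        from scipy.optimize import linprog

        # Columns: uptake, reaction A->B, reaction B->C, secretion/objective ("biomass")
        S = np.array([[ 1, -1,  0,  0],   # metabolite A
                      [ 0,  1, -1,  0],   # metabolite B
                      [ 0,  0,  1, -1]])  # metabolite C
        bounds = [(0, 10), (0, 8), (0, 8), (0, None)]
        c = np.array([0, 0, 0, -1])       # linprog minimises, so negate the objective flux

        res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
        print("optimal flux distribution:", res.x)  # objective limited to 8 by the A->B step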

  6. 47 annual records of allergenic fungi spore: predictive models from the NW Iberian Peninsula.

    PubMed

    Aira, M Jesus; Rodriguez-Rajo, F; Jato, Victoria

    2008-01-01

    An analysis was carried out of the atmospheric representativeness of Cladosporium and Alternaria spores in the north-western Iberian Peninsula, where mean annual concentrations in excess of 300,000 spores/m(3) were registered. During the main sporulation period, the highest average daily concentrations corresponded to the Cladosporium herbarum type (1,197 spores/m(3)), while the highest daily value was 7,556 spores/m(3) (Cladosporium cladosporioides type). Alternaria represents only 0.1-1% of the total spores identified. For these spore types, the intraday variation was more acute inland than along the coastline, owing to the oceanic influence. The proposed predictive models use as predictor variables the meteorological parameters with which the highest correlations were obtained (mean and maximum temperature); the predicted values showed no significant differences from those observed in 2006, data used only for verification purposes.

  7. Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.

    PubMed

    Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui

    2017-07-15

    New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically-feasible manner. Developing computational methods and tools for analysis and translation of such genomic data into clinically-relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis by using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while penalty could be added for feature selection. A more powerful approach, however, would be to incorporate data by considering relationships among omics datatypes. Here we developed two methods: a SKI-Cox method and a wLASSO-Cox method to incorporate the association among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox models select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma. Copyright © 2017. Published by Elsevier Inc.

  8. Dynamic Models of Learning That Characterize Parent-Child Exchanges Predict Vocabulary Growth

    ERIC Educational Resources Information Center

    Ober, David R.; Beekman, John A.

    2016-01-01

    Cumulative vocabulary models for infants and toddlers were developed from models of learning that predict trajectories associated with low, average, and high vocabulary growth rates (14 to 46 months). It was hypothesized that models derived from rates of learning mirror the type of exchanges provided to infants and toddlers by parents and…

  9. An evaluation of the predictive performance of distributional models for flora and fauna in north-east New South Wales.

    PubMed

    Pearce, J; Ferrier, S; Scotts, D

    2001-06-01

    To use models of species distributions effectively in conservation planning, it is important to determine the predictive accuracy of such models. Extensive modelling of the distribution of vascular plant and vertebrate fauna species within north-east New South Wales has been undertaken by linking field survey data to environmental and geographical predictors using logistic regression. These models have been used in the development of a comprehensive and adequate reserve system within the region. We evaluate the predictive accuracy of models for 153 small reptile, arboreal marsupial, diurnal bird and vascular plant species for which independent evaluation data were available. The predictive performance of each model was evaluated using the relative operating characteristic curve to measure discrimination capacity. Good discrimination ability implies that a model's predictions provide an acceptable index of species occurrence. The discrimination capacity of 89% of the models was significantly better than random, with 70% of the models providing high levels of discrimination. Predictions generated by this type of modelling therefore provide a reasonably sound basis for regional conservation planning. The discrimination ability of models was highest for the less mobile biological groups, particularly the vascular plants and small reptiles. In the case of diurnal birds, poor performing models tended to be for species which occur mainly within specific habitats not well sampled by either the model development or evaluation data, highly mobile species, species that are locally nomadic or those that display very broad habitat requirements. Particular care needs to be exercised when employing models for these types of species in conservation planning.

  10. Improved Rubin-Bodner Model for the Prediction of Soft Tissue Deformations

    PubMed Central

    Zhang, Guangming; Xia, James J.; Liebschner, Michael; Zhang, Xiaoyan; Kim, Daeseung; Zhou, Xiaobo

    2016-01-01

    In craniomaxillofacial (CMF) surgery, a reliable way of simulating the soft tissue deformation resulting from skeletal reconstruction is vitally important for preventing the risk of facial distortion postoperatively. However, it is difficult to simulate the soft tissue behaviors produced by different types of CMF surgery. This study presents an integrated biomechanical and statistical learning model to improve the accuracy and reliability of predictions of soft facial tissue behavior. The Rubin-Bodner (RB) model is initially used to describe the biomechanical behavior of the soft facial tissue. Subsequently, a finite element model (FEM) computes the stress at each node of the soft facial tissue mesh resulting from bone displacement. Next, the generalized regression neural network (GRNN) method is implemented to obtain the relationship between the facial soft tissue deformation and the stress distribution corresponding to different CMF surgical types, and to improve the evaluation of the elastic parameters included in the RB model. The soft facial tissue deformation can therefore be predicted from both biomechanical properties and the statistical model. Leave-one-out cross-validation is used on eleven patients. As a result, the average prediction error of our model (0.7035 mm) is lower than those resulting from other approaches. This also demonstrates that the more accurate the biomechanical information available to the model, the better the prediction performance it can achieve. PMID:27717593
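
    A generalized regression neural network is essentially a Nadaraya-Watson kernel regressor, and the mapping from FEM stress features to soft-tissue displacements described above can be sketched in that form on toy data; the inputs, outputs and smoothing parameter of the actual model are different.

        # Hedged sketch: GRNN (Nadaraya-Watson kernel regression) on toy stress features.
        import numpy as np

        def grnn_predict(X_train, Y_train, X_query, sigma=0.5):
            """X_*: (n, d) feature arrays, Y_train: (n, k) target displacements."""
            d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2 * sigma ** 2))        # kernel weight per training sample
            w /= w.sum(axis=1, keepdims=True)
            return w @ Y_train                        # weighted average of training targets

        rng = np.random.default_rng(2)
        X_train = rng.standard_normal((200, 6))       # e.g. stress features at a mesh node
        Y_train = X_train[:, :3] * 0.7 + 0.05 * rng.standard_normal((200, 3))
        print(grnn_predict(X_train, Y_train, X_train[:2]))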

  11. Particulate Matter Emissions for Fires in the Palmetto-Gallberry Fuel Type

    Treesearch

    Darold E. Ward

    1983-01-01

    Fire management specialists in the southeastern United States needing guides for predicting or assessing particulate matter emission factors, emission rates, and heat release rate can use the models presented in this paper for making these predictions as a function of flame length in the palmetto-gallberry fuel type.

  12. Models to predict emissions of health-damaging pollutants and global warming contributions of residential fuel/stove combinations in China.

    PubMed

    Edwards, Rufus D; Smith, Kirk R; Zhang, Junfeng; Ma, Yuqing

    2003-01-01

    Residential energy use in developing countries has traditionally been associated with combustion devices of poor energy efficiency, which have been shown to produce substantial health-damaging pollution, contributing significantly to the global burden of disease, as well as greenhouse gas (GHG) emissions. Precision of these estimates in China has been hampered by limited data on stove use and fuel consumption in residences. In addition, limited information is available on variability of emissions of pollutants from different stove/fuel combinations in typical use, as measurement of emission factors requires measurement of multiple chemical species in complex burn cycle tests. Such measurements are too costly and time consuming for application in conjunction with national surveys. Emissions of most of the major health-damaging pollutants (HDP) and many of the gases that contribute to GHG emissions from cooking stoves are the result of the significant portion of fuel carbon that is diverted to products of incomplete combustion (PIC) as a result of poor combustion efficiencies. The approximately linear increase in emissions of PIC with decreasing combustion efficiencies allows development of linear models to predict emissions of GHG and HDP intrinsically linked to CO2 and PIC production, and ultimately allows the prediction of global warming contributions from residential stove emissions. A comprehensive emissions database of three burn cycles of 23 typical fuel/stove combinations tested in a simulated village house in China has been used to develop models to predict emissions of HDP and global warming commitment (GWC) from cooking stoves in China that rely on simple survey information on stove and fuel use that may be incorporated into national surveys. Stepwise regression models predicted 66% of the variance in global warming commitment (CO2, CO, CH4, NOx, TNMHC) per 1 MJ delivered energy due to emissions from these stoves if survey information on fuel type was available. If stove type is also known, the models predicted 73% of the variance. Integrated assessment of policies to change stove or fuel type requires that implications for environmental impacts, energy efficiency, global warming and human exposures to HDP emissions can be evaluated. Frequently, this involves measurement of TSP or CO as the major HDPs. Incorporating this information into the GWC models allowed them to predict 79% and 78% of the variance, respectively. Clearly, however, the complexity of making multiple measurements in conjunction with a national survey would be both expensive and time consuming. Thus, models have been derived that predict HDP from simple survey information together with measurement of either the CO/CO2 or the TSP/CO2 ratio, which is then used to predict emission factors for the other HDP. Stepwise regression models predicted 65% of the variance in emissions of total suspended particulate as grams of carbon (TSPC) per 1 MJ delivered if survey information on fuel and stove type was available and 74% if the CO/CO2 ratio was measured. Similarly, stepwise regression models predicted 76% of the variance in COC emissions per MJ delivered with survey information on stove and fuel type and 85% if the TSPC/CO2 ratio was measured.
Ultimately, with international agreements on emissions trading frameworks, similar models based on extensive databases of the fate of fuel carbon during combustion from representative household stoves would provide a mechanism for computing greenhouse credits in the residential sector as part of clean development mechanism frameworks and monitoring compliance to control regimes.

  13. Modeling Seasonality in Carbon Dioxide Emissions From Fossil Fuel Consumption

    NASA Astrophysics Data System (ADS)

    Gregg, J. S.; Andres, R. J.

    2004-05-01

    Using United States data, a method is developed to estimate the monthly consumption of solid, liquid and gaseous fossil fuels using monthly sales data to estimate the relative monthly proportions of the total annual national fossil fuel use. These proportions are then used to estimate the total monthly carbon dioxide emissions for each state. From these data, the goal is to develop mathematical models that describe the seasonal flux in consumption for each type of fuel, as well as the total emissions for the nation. The time series models have two components. First, the general long-term yearly trend is determined with regression models for the annual totals. After removing the general trend, two alternatives are considered for modeling the seasonality. The first alternative uses the mean of the monthly proportions to predict the seasonal distribution. Because the seasonal patterns are fairly consistent in the United States, this is an effective modeling technique. Such regularity, however, may not be present with data from other nations. Therefore, as a second alternative, an ordinary least squares autoregressive model is used. This model is chosen for its ability to accurately describe dependent data and for its predictive capacity. It also has a meaningful interpretation, as each coefficient in the model quantifies the dependency for each corresponding time lag. Most importantly, it is dynamic, and able to adapt to anomalies and changing patterns. The order of the autoregressive model is chosen by the Akaike Information Criterion (AIC), which minimizes the predicted variance for all models of increasing complexity. To model the monthly fuel consumption, the annual trend is combined with the seasonal model. The models for each fuel type are then summed together to predict the total carbon dioxide emissions. The prediction error is estimated with the root mean square error (RMSE) from the actual estimated emission values. Overall, the models perform very well, with relative RMSE less than 10% for all fuel types, and under 5% for the national total emissions. Development of successful models is important to better understand and predict global environmental impacts from fossil fuel consumption.
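
    The two-stage approach described above (remove the long-term annual trend, then model the monthly residual with an autoregression whose order is chosen by AIC) can be sketched with statsmodels on a synthetic monthly series; the real analysis uses state-level fuel sales data rather than the toy series below.

        # Hedged sketch: detrend a monthly series, then fit an AR model with AIC-selected order.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.ar_model import AutoReg, ar_select_order

        rng = np.random.default_rng(3)
        t = np.arange(240)                                        # 20 years of months
        y = 100 + 0.1 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(240)
        series = pd.Series(y, index=pd.date_range("1984-01", periods=240, freq="MS"))

        trend = np.polyval(np.polyfit(t, series.values, 1), t)    # long-term yearly trend
        residual = series - trend                                 # seasonal flux to model

        sel = ar_select_order(residual, maxlag=24, ic="aic")
        res = sel.model.fit()
        print("AIC-selected lags:", sel.ar_lags)
        # 12-month forecast of the seasonal residual; the annual trend is added back separately.
        print(res.predict(start=len(residual), end=len(residual) + 11))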

  14. Fuel consumption models for pine flatwoods fuel types in the southeastern United States

    Treesearch

    Clinton S. Wright

    2013-01-01

    Modeling fire effects, including terrestrial and atmospheric carbon fluxes and pollutant emissions during wildland fires, requires accurate predictions of fuel consumption. Empirical models were developed for predicting fuel consumption from fuel and environmental measurements on a series of operational prescribed fires in pine flatwoods ecosystems in the southeastern...

  15. Model identification using stochastic differential equation grey-box models in diabetes.

    PubMed

    Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik

    2013-03-01

    The acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.

  16. Modeling the Distribution and Type of High-Latitude Natural Wetlands for Methane Studies

    NASA Astrophysics Data System (ADS)

    Romanski, J.; Matthews, E.

    2017-12-01

    High latitude (>50N) natural wetlands emit a substantial amount of methane to the atmosphere, and are located in a region of amplified warming. Northern hemisphere high latitudes are characterized by cold climates, extensive permafrost, poor drainage, short growing seasons, and slow decay rates. Under these conditions, organic carbon accumulates in the soil, sequestering CO2 from the atmosphere. Methanogens produce methane from this carbon reservoir, converting stored carbon into a powerful greenhouse gas. Methane emission from wetland ecosystems depends on vegetation type, climate characteristics (e.g., precipitation amount and seasonality, temperature, snow cover, etc.), and geophysical variables (e.g., permafrost, soil type, and landscape slope). To understand how wetland methane dynamics in this critical region will respond to climate change, we first have to understand how wetlands themselves will change and, therefore, what the primary controllers of wetland distribution and type are. Understanding these relationships permits data-anchored, physically-based modeling of wetland distribution and type in other climate scenarios, such as paleoclimates or future climates, a necessary first step toward modeling wetland methane emissions in these scenarios. We investigate techniques and datasets for predicting the distribution and type of high latitude (>50N) natural wetlands from a suite of geophysical and climate predictors. Hierarchical clustering is used to derive an empirical methane-centric wetland model. The model is applied in a multistep process - first to predict the distribution of wetlands from relevant geophysical parameters, and then, given the predicted wetland distribution, to classify the wetlands into methane-relevant types using an expanded suite of climate and biogeophysical variables. As the optimum set of predictor variables is not known a priori, the model is applied iteratively, and each simulation is evaluated with respect to observed high-latitude wetlands.
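
    The clustering step can be sketched as below: grid cells described by standardised geophysical and climate predictors are grouped by hierarchical (Ward) clustering and the tree is cut into candidate wetland classes. The variables, the linkage choice and the number of classes are placeholders, not the configuration used in the actual model.

        # Hedged sketch: hierarchical clustering of grid cells into candidate wetland classes.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.stats import zscore

        rng = np.random.default_rng(4)
        # columns stand in for e.g. temperature, precipitation, slope, soil drainage index
        cells = rng.standard_normal((500, 4))
        Z = linkage(zscore(cells, axis=0), method="ward")
        wetland_class = fcluster(Z, t=6, criterion="maxclust")  # six candidate classes
        print(np.bincount(wetland_class)[1:])                   # class sizes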

  17. Uncertainty analysis of a groundwater flow model in East-central Florida.

    PubMed

    Sepúlveda, Nicasio; Doherty, John

    2015-01-01

    A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan Aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The "Null Space Monte Carlo" method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model's capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial or temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context. © 2014, National Ground Water Association.

  18. Modeling diffusion and reaction in soils: 9. The Buckingham-Burdine-Campbell equation for gas diffusivity in undisturbed soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moldrup, P.; Olesen, T.; Yamaguchi, T.

    1999-08-01

    Accurate description of gas diffusivity (the ratio of gas diffusion coefficients in soil and free air, D_s/D_0) in undisturbed soils is a prerequisite for predicting in situ transport and fate of volatile organic chemicals and greenhouse gases. Reference point gas diffusivities (R_p) in completely dry soil were estimated for 20 undisturbed soils by assuming a power-function relation between gas diffusivity and air-filled porosity (ε). Among the classical gas diffusivity models, the Buckingham (1904) expression, equal to the soil total porosity squared, best described R_p. Inasmuch as previous work implied a soil-type dependency of D_s/D_0(ε) in undisturbed soils, the Buckingham R_p expression was inserted into two soil-type-dependent D_s/D_0(ε) models. One D_s/D_0(ε) model is a function of pore-size distribution (the Campbell water retention parameter used in a modified Burdine capillary tube model), and the other is a calibrated, empirical function of soil texture (silt + sand fraction). Both the Buckingham-Burdine-Campbell (BBC) and the Buckingham/soil-texture-based D_s/D_0(ε) models described the observed soil-type effects on gas diffusivity well and gave improved predictions compared with soil-type-independent models when tested against an independent data set for six undisturbed surface soils. This study emphasizes that simple but soil-type-dependent power-function D_s/D_0(ε) models can adequately describe and predict gas diffusivity in undisturbed soil. The authors recommend the new BBC model as a basis for modeling gas transport and reactions in undisturbed soil systems.
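
    A small numerical sketch of a BBC-style relation in the spirit of the model described above, assuming it takes the form of the Buckingham reference point (total porosity squared) multiplied by a Campbell-parameter-dependent power of relative air-filled porosity; the exponent form and the example soil values are assumptions for illustration, not the paper's calibrated expression.

```python
import numpy as np

def bbc_gas_diffusivity(eps, phi, b):
    """Assumed BBC-style relative gas diffusivity Ds/D0.

    eps : air-filled porosity (cm^3 air per cm^3 soil)
    phi : total porosity (cm^3 pores per cm^3 soil)
    b   : Campbell water-retention parameter (dimensionless)

    The Buckingham reference point (phi**2) is scaled by a power of the
    relative air-filled porosity whose exponent depends on the Campbell
    parameter; this functional form is an illustrative assumption.
    """
    eps, phi = np.asarray(eps, float), np.asarray(phi, float)
    return phi**2 * (eps / phi) ** (2.0 + 3.0 / b)

# Example: a soil with total porosity 0.45 and Campbell b = 5 (made-up values).
for eps in (0.10, 0.20, 0.30, 0.45):
    print(f"eps = {eps:.2f}  Ds/D0 = {bbc_gas_diffusivity(eps, 0.45, 5.0):.4f}")
```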

  19. Forecasting of primary energy consumption data in the United States: A comparison between ARIMA and Holt-Winters models

    NASA Astrophysics Data System (ADS)

    Rahman, A.; Ahmar, A. S.

    2017-09-01

    This research compares the ARIMA and Holt-Winters models, based on MAE, RSS, MSE, and RMS criteria, for predicting total primary energy consumption in the US. The data range from January 1973 to December 2016 and were processed using R software. The analysis shows that the additive Holt-Winters model (MSE: 258350.1) is the most appropriate model for predicting total primary energy consumption in the US; it outperforms the multiplicative Holt-Winters model (MSE: 262260.4) and the seasonal ARIMA model (MSE: 723502.2).
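
    A sketch of the model comparison described above using statsmodels, with a synthetic monthly series standing in for the US primary energy consumption data; the ARIMA orders, smoothing options, and in-sample MSE comparison are illustrative choices, not the study's exact specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly series (strictly positive, with a seasonal cycle) standing
# in for total primary energy consumption, Jan 1973 - Dec 2016.
np.random.seed(0)
idx = pd.date_range("1973-01-01", "2016-12-01", freq="MS")
y = pd.Series(8000 + 400 * np.sin(2 * np.pi * np.arange(len(idx)) / 12)
              + np.random.normal(0, 100, len(idx)), index=idx)

fits = {
    "Holt-Winters additive": ExponentialSmoothing(
        y, trend="add", seasonal="add", seasonal_periods=12).fit(),
    "Holt-Winters multiplicative": ExponentialSmoothing(
        y, trend="add", seasonal="mul", seasonal_periods=12).fit(),
    "Seasonal ARIMA": ARIMA(y, order=(1, 1, 1),
                            seasonal_order=(1, 1, 1, 12)).fit(),
}

# Compare in-sample mean squared error, mirroring the abstract's MSE criterion.
for name, res in fits.items():
    mse = float(np.mean((y - res.fittedvalues) ** 2))
    print(f"{name:28s} MSE = {mse:10.1f}")
```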

  20. Testing projected wild bee distributions in agricultural habitats: predictive power depends on species traits and habitat type.

    PubMed

    Marshall, Leon; Carvalheiro, Luísa G; Aguirre-Gutiérrez, Jesús; Bos, Merijn; de Groot, G Arjen; Kleijn, David; Potts, Simon G; Reemer, Menno; Roberts, Stuart; Scheper, Jeroen; Biesmeijer, Jacobus C

    2015-10-01

    Species distribution models (SDM) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data and it is unclear whether SDM performance is maintained across distinct habitats and for species with different functional traits. Highly mobile species, such as bees, can be particularly challenging to model. Here, we use independent sets of occurrence data collected systematically in several agricultural habitats to test how the predictive performance of SDMs for wild bee species depends on species traits, habitat type, and sampling technique. We used a species distribution modeling approach parametrized for the Netherlands, with presence records from 1990 to 2010 for 193 Dutch wild bees. For each species, we built a Maxent model based on 13 climate and landscape variables. We tested the predictive performance of the SDMs with independent datasets collected from orchards and arable fields across the Netherlands from 2010 to 2013, using transect surveys or pan traps. Model predictive performance depended on species traits and habitat type. Occurrence of bee species specialized in habitat and diet was better predicted than generalist bees. Predictions of habitat suitability were also more precise for habitats that are temporally more stable (orchards) than for habitats that suffer regular alterations (arable), particularly for small, solitary bees. As a conservation tool, SDMs are best suited to modeling rarer, specialist species than more generalist and will work best in long-term stable habitats. The variability of complex, short-term habitats is difficult to capture in such models and historical land use generally has low thematic resolution. To improve SDMs' usefulness, models require explanatory variables and collection data that include detailed landscape characteristics, for example, variability of crops and flower availability. Additionally, testing SDMs with field surveys should involve multiple collection techniques.

  1. The Application of FIA-based Data to Wildlife Habitat Modeling: A Comparative Study

    Treesearch

    Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Randall J. Schultz

    2005-01-01

    We evaluated the capability of two types of models, one based on spatially explicit variables derived from FIA data and one using so-called traditional habitat evaluation methods, for predicting the presence of cavity-nesting bird habitat in Fishlake National Forest, Utah. Both models performed equally well, in measures of predictive accuracy, with the FIA-based model...

  2. A Stochastic Framework for Evaluating Seizure Prediction Algorithms Using Hidden Markov Models

    PubMed Central

    Wong, Stephen; Gardner, Andrew B.; Krieger, Abba M.; Litt, Brian

    2007-01-01

    Responsive, implantable stimulation devices to treat epilepsy are now in clinical trials. New evidence suggests that these devices may be more effective when they deliver therapy before seizure onset. Despite years of effort, prospective seizure prediction, which could improve device performance, remains elusive. In large part, this is explained by a lack of agreement on a statistical framework for modeling seizure generation and a method for validating algorithm performance. We present a novel stochastic framework based on a three-state hidden Markov model (HMM) (representing interictal, preictal, and seizure states) with the feature that periods of increased seizure probability can transition back to the interictal state. This notion reflects clinical experience and may enhance interpretation of published seizure prediction studies. Our model accommodates clipped EEG segments and formalizes intuitive notions regarding statistical validation. We derive equations for type I and type II errors as a function of the number of seizures, duration of interictal data, and prediction horizon length, and we demonstrate the model’s utility with a novel seizure detection algorithm that appeared to predict seizure onset. We propose this framework as a vital tool for designing and validating prediction algorithms and for facilitating collaborative research in this area. PMID:17021032
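
    A toy simulation of the three-state Markov chain underlying the HMM described above; all transition probabilities are invented for illustration. It only demonstrates the key structural feature that a preictal period can relapse to the interictal state without a seizure.

```python
import numpy as np

# States: 0 = interictal, 1 = preictal, 2 = seizure.  Per-epoch transition
# probabilities are hypothetical; note row 1 allows preictal -> interictal.
P = np.array([
    [0.990, 0.010, 0.000],
    [0.050, 0.930, 0.020],
    [0.800, 0.000, 0.200],
])

rng = np.random.default_rng(42)
state, trajectory = 0, []
for _ in range(5000):                      # simulate 5000 EEG epochs
    trajectory.append(state)
    state = rng.choice(3, p=P[state])

trajectory = np.array(trajectory)
onsets = np.sum((trajectory[1:] == 2) & (trajectory[:-1] != 2))
print("fraction of epochs per state:",
      np.bincount(trajectory, minlength=3) / len(trajectory))
print("number of simulated seizure onsets:", int(onsets))
```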

  3. Predicting the Direction of Stock Market Index Movement Using an Optimized Artificial Neural Network Model.

    PubMed

    Qiu, Mingyue; Song, Yu

    2016-01-01

    In the business sector, it has always been a difficult task to predict the exact daily price of the stock market index; hence, there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement. Many factors such as political events, general economic conditions, and traders' expectations may have an influence on the stock market index. There are numerous research studies that use similar indicators to forecast the direction of the stock market index. In this study, we compare two basic types of input variables to predict the direction of the daily stock market index. The main contribution of this study is the ability to predict the direction of the next day's price of the Japanese stock market index by using an optimized artificial neural network (ANN) model. To improve the prediction accuracy of the trend of the stock market index in the future, we optimize the ANN model using genetic algorithms (GA). We demonstrate and verify the predictability of stock price direction by using the hybrid GA-ANN model and then compare the performance with prior studies. Empirical results show that the Type 2 input variables can generate a higher forecast accuracy and that it is possible to enhance the performance of the optimized ANN model by selecting input variables appropriately.

  4. Predicting the Direction of Stock Market Index Movement Using an Optimized Artificial Neural Network Model

    PubMed Central

    Qiu, Mingyue; Song, Yu

    2016-01-01

    In the business sector, it has always been a difficult task to predict the exact daily price of the stock market index; hence, there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement. Many factors such as political events, general economic conditions, and traders’ expectations may have an influence on the stock market index. There are numerous research studies that use similar indicators to forecast the direction of the stock market index. In this study, we compare two basic types of input variables to predict the direction of the daily stock market index. The main contribution of this study is the ability to predict the direction of the next day’s price of the Japanese stock market index by using an optimized artificial neural network (ANN) model. To improve the prediction accuracy of the trend of the stock market index in the future, we optimize the ANN model using genetic algorithms (GA). We demonstrate and verify the predictability of stock price direction by using the hybrid GA-ANN model and then compare the performance with prior studies. Empirical results show that the Type 2 input variables can generate a higher forecast accuracy and that it is possible to enhance the performance of the optimized ANN model by selecting input variables appropriately. PMID:27196055
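
    A hypothetical sketch of GA-based input-variable selection for an ANN in the spirit of the two records above, with synthetic data in place of the Japanese stock market indicators; the population size, mutation rate, network architecture, and fitness definition are arbitrary illustrative choices, and scikit-learn's MLPClassifier stands in for the authors' ANN.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for candidate input indicators (columns) versus the
# next-day up/down direction (y).
X, y = make_classification(n_samples=400, n_features=12, n_informative=5,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a small ANN trained on the selected inputs."""
    if not mask.any():
        return 0.0
    ann = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                                      random_state=0))
    return cross_val_score(ann, X[:, mask], y, cv=3).mean()

# A very small GA over binary feature masks: truncation selection,
# one-point crossover, and bit-flip mutation.
pop = rng.integers(0, 2, size=(12, X.shape[1])).astype(bool)
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents] + children)

best = pop[int(np.argmax([fitness(ind) for ind in pop]))]
print("selected inputs:", np.flatnonzero(best), " CV accuracy:", fitness(best))
```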

  5. Uncertainty analysis of a groundwater flow model in east-central Florida

    USGS Publications Warehouse

    Sepúlveda, Nicasio; Doherty, John E.

    2014-01-01

    A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The “Null Space Monte Carlo” method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model’s capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial/temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context.

  6. Correlation of predicted and measured thermal stresses on a truss-type aircraft structure

    NASA Technical Reports Server (NTRS)

    Jenkins, J. M.; Schuster, L. S.; Carter, A. L.

    1978-01-01

    A test structure representing a portion of a hypersonic vehicle was instrumented with strain gages and thermocouples. This test structure was then subjected to laboratory heating representative of supersonic and hypersonic flight conditions. A finite element computer model of this structure was developed using several types of elements with the NASA structural analysis (NASTRAN) computer program. Temperature inputs from the test were used to generate predicted model thermal stresses and these were correlated with the test measurements.

  7. Study of a Terrain-Based Motion Estimation Model to Predict the Position of a Moving Target to Enhance Weapon Probability of Kill

    DTIC Science & Technology

    2017-09-01

    The discrete-time position of a moving target is modeled based on the kinematic constraints for the type of vehicle and the type of path on which it is traveling.

  8. Effect of horizontal curves on urban arterial crashes.

    PubMed

    Banihashemi, Mohamadreza

    2016-10-01

    The crash prediction models of the Highway Safety Manual (HSM), 2010 estimate the expected number of crashes for different facility types. Models in Part C, Chapter 12 of the first edition of the HSM include crash prediction models for divided and undivided urban arterials. Each of the HSM crash prediction models for highway segments comprises a "Safety Performance Function" (SPF), a function of AADT and segment length, plus a series of "Crash Modification Factors" (CMFs). The SPF estimates the expected number of crashes for a site whose features are at base conditions. The effects of the other features of the site, if their values differ from the base conditions, are accounted for through CMFs. The existing models for urban arterials do not have any CMF for horizontal curvature. The goal of this research is to investigate whether horizontal alignment has any significant effect on crashes on any of these types of facilities and, if so, to develop a CMF for this feature. Washington State cross-sectional data from the Highway Safety Information System (HSIS), 2014 were used in this research. Data from 2007 to 2009 were used to conduct the investigation, and the 2010 data were used to validate the results. The results show that horizontal curvature has a significant safety effect on two-lane undivided urban arterials with speed limits of 35 mph and higher, and that using a CMF for horizontal curvature in the crash prediction model for this type of facility significantly improves the prediction of crashes for both tangent and curve segments. Copyright © 2016 Elsevier Ltd. All rights reserved.
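
    A hedged sketch of how an HSM-style segment prediction combines an SPF with CMFs; the exponential SPF form and every coefficient and CMF value below are placeholders for illustration, not the published HSM values or the CMF developed in this study.

```python
import math

def predicted_crashes(aadt, length_mi, cmfs, a=-15.22, b=1.68):
    """Illustrative HSM-style segment crash prediction.

    Assumed SPF form (coefficients a, b are placeholders, not published values):
        N_spf = exp(a + b * ln(AADT)) * L
    The SPF estimate is then multiplied by the crash modification factors.
    """
    n_spf = math.exp(a + b * math.log(aadt)) * length_mi
    for cmf in cmfs:
        n_spf *= cmf
    return n_spf

# e.g. a 0.5 mi two-lane undivided urban arterial segment with AADT 15,000,
# a hypothetical horizontal-curvature CMF of 1.25, and a lighting CMF of 0.92.
print(predicted_crashes(15000, 0.5, cmfs=[1.25, 0.92]))
```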

  9. Voxel inversion of airborne electromagnetic data for improved groundwater model construction and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Kruse Christensen, Nikolaj; Ferre, Ty Paul A.; Fiandaca, Gianluca; Christensen, Steen

    2017-03-01

    We present a workflow for efficient construction and calibration of large-scale groundwater models that includes the integration of airborne electromagnetic (AEM) data and hydrological data. In the first step, the AEM data are inverted to form a 3-D geophysical model. In the second step, the 3-D geophysical model is translated, using a spatially dependent petrophysical relationship, to form a 3-D hydraulic conductivity distribution. The geophysical models and the hydrological data are used to estimate spatially distributed petrophysical shape factors. The shape factors primarily work as translators between resistivity and hydraulic conductivity, but they can also compensate for structural defects in the geophysical model. The method is demonstrated for a synthetic case study with sharp transitions among various types of deposits. Besides demonstrating the methodology, we demonstrate the importance of using geophysical regularization constraints that conform well to the depositional environment. This is done by inverting the AEM data using either smoothness (smooth) constraints or minimum gradient support (sharp) constraints, where the use of sharp constraints conforms best to the environment. The dependency on AEM data quality is also tested by inverting the geophysical model using data corrupted with four different levels of background noise. Subsequently, the geophysical models are used to construct competing groundwater models for which the shape factors are calibrated. The performance of each groundwater model is tested with respect to four types of prediction that are beyond the calibration base: a pumping well's recharge area and groundwater age, respectively, are predicted by applying the same stress as for the hydrologic model calibration; and head and stream discharge are predicted for a different stress situation. As expected, in this case the predictive capability of a groundwater model is better when it is based on a sharp geophysical model instead of a smoothness constraint. This is true for predictions of recharge area, head change, and stream discharge, while we find no improvement for prediction of groundwater age. Furthermore, we show that the model prediction accuracy improves with AEM data quality for predictions of recharge area, head change, and stream discharge, while there appears to be no accuracy improvement for the prediction of groundwater age.

  10. Tachyon cosmology, supernovae data, and the big brake singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keresztes, Z.; Gergely, L. A.; Gorini, V.

    2009-04-15

    We compare the existing observational data on type Ia supernovae with the evolutions of the Universe predicted by a one-parameter family of tachyon models which we have introduced recently [Phys. Rev. D 69, 123512 (2004)]. Among the set of the trajectories of the model which are compatible with the data there is a consistent subset for which the Universe ends up in a new type of soft cosmological singularity dubbed big brake. This opens up yet another scenario for the future history of the Universe besides the one predicted by the standard ΛCDM model.

  11. Swash saturation: an assessment of available models

    NASA Astrophysics Data System (ADS)

    Hughes, Michael G.; Baldock, Tom E.; Aagaard, Troels

    2018-06-01

    An extensive previously published (Hughes et al. Mar Geol 355, 88-97, 2014) field data set representing the full range of micro-tidal beach states (reflective, intermediate and dissipative) is used to investigate swash saturation. Two models that predict the behavior of saturated swash are tested: one driven by standing waves and the other driven by bores. Despite being based on entirely different premises, they predict similar trends in the limiting (saturated) swash height with respect to dependency on frequency and beach gradient. For a given frequency and beach gradient, however, the bore-driven model predicts a larger saturated swash height by a factor 2.5. Both models broadly predict the general behavior of swash saturation evident in the data, but neither model is accurate in detail. While swash saturation in the short-wave frequency band is common on some beach types, it does not always occur across all beach types. Further work is required on wave reflection/breaking and the role of wave-wave and wave-swash interactions to determine limiting swash heights on natural beaches.

  12. An integrated approach utilising chemometrics and GC/MS for classification of chamomile flowers, essential oils and commercial products.

    PubMed

    Wang, Mei; Avula, Bharathi; Wang, Yan-Hong; Zhao, Jianping; Avonto, Cristina; Parcher, Jon F; Raman, Vijayasankar; Zweigenbaum, Jerry A; Wylie, Philip L; Khan, Ikhlas A

    2014-01-01

    As part of an ongoing research program on the authentication, safety and biological evaluation of phytochemicals and dietary supplements, an in-depth chemical investigation of different types of chamomile was performed. A collection of chamomile samples including authenticated plants, commercial products and essential oils was analysed by GC/MS. Twenty-seven authenticated plant samples representing three types of chamomile, viz. German chamomile, Roman chamomile and Juhua, were analysed. This set of data was employed to construct a sample class prediction (SCP) model based on stepwise reduction of data dimensionality followed by principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA). The model was cross-validated with samples including authenticated plants and commercial products, and demonstrated 100.0% accuracy for both recognition and prediction abilities. In addition, 35 commercial products and 11 essential oils purported to contain chamomile were subsequently classified with the validated PLS-DA model. Furthermore, tentative identification of the marker compounds correlated with the different types of chamomile was explored. Copyright © 2013 Elsevier Ltd. All rights reserved.
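
    A compact sketch of a PCA-then-PLS-DA workflow of the kind named above, run on synthetic data standing in for GC/MS peak areas of the three chamomile types; the component counts, class labels, and the one-hot PLS-DA construction are illustrative assumptions rather than the paper's exact chemometric pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Placeholder GC/MS-like feature matrix: rows = samples, columns = peak areas,
# y = chamomile type (0, 1, 2); stands in for the authenticated-sample data.
X, y = make_classification(n_samples=90, n_features=60, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Step 1: reduce dimensionality with PCA on standardized data.
scaler = StandardScaler().fit(X_tr)
pca = PCA(n_components=10).fit(scaler.transform(X_tr))
T_tr = pca.transform(scaler.transform(X_tr))
T_te = pca.transform(scaler.transform(X_te))

# Step 2: PLS-DA = PLS regression on one-hot class membership; predict the
# class with the largest fitted response.
Y_tr = np.eye(3)[y_tr]
pls = PLSRegression(n_components=5).fit(T_tr, Y_tr)
pred = pls.predict(T_te).argmax(axis=1)
print("prediction accuracy:", (pred == y_te).mean())
```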

  13. Development of a web-based liver cancer prediction model for type II diabetes patients by using an artificial neural network.

    PubMed

    Rau, Hsiao-Hsien; Hsu, Chien-Yeh; Lin, Yu-An; Atique, Suleman; Fuad, Anis; Wei, Li-Ming; Hsu, Ming-Huei

    2016-03-01

    Diabetes mellitus is associated with an increased risk of liver cancer, and these two diseases are among the most common and important causes of morbidity and mortality in Taiwan. The objective was to use data mining techniques to develop a model for predicting the development of liver cancer within 6 years of a diagnosis of type II diabetes. Data were obtained from the National Health Insurance Research Database (NHIRD) of Taiwan, which covers approximately 22 million people. In this study, we selected patients who were newly diagnosed with type II diabetes during 2000-2003 and had no prior cancer diagnosis. We then used encrypted personal IDs to perform data linkage with the cancer registry database to identify whether these patients were later diagnosed with liver cancer. Finally, we identified 2060 cases and assigned them to a case group (patients diagnosed with liver cancer after diabetes) and a control group (patients with diabetes but no liver cancer). Candidate risk factors were identified from the literature and physicians' suggestions; a chi-square test was then conducted on each independent variable (or potential risk factor) to compare patients with liver cancer and those without, and the significant variables were selected as predictors. We subsequently performed data training and testing to construct artificial neural network (ANN) and logistic regression (LR) prediction models. The dataset was randomly divided into 2 groups: a training group and a test group. The training group consisted of 1442 cases (70% of the entire dataset), and the prediction model was developed on the basis of the training group. The remaining 30% (618 cases) were assigned to the test group for model validation. The following 10 variables were used to develop the ANN and LR models: sex, age, alcoholic cirrhosis, nonalcoholic cirrhosis, alcoholic hepatitis, viral hepatitis, other types of chronic hepatitis, alcoholic fatty liver disease, other types of fatty liver disease, and hyperlipidemia. The performance of the ANN was superior to that of LR, according to the sensitivity (0.757), specificity (0.755), and the area under the receiver operating characteristic curve (0.873). After developing the optimal prediction model, we used it to construct a web-based application system for liver cancer prediction, which can support physicians during consultations with diabetes patients. In the original dataset (n=2060), 33% of diabetes patients were diagnosed with liver cancer (n=515). After using 70% of the original data to train the model and the other 30% for testing, the sensitivity and specificity of our model were 0.757 and 0.755, respectively; this means that 75.7% of diabetes patients who would later receive a liver cancer diagnosis, and 75.5% of those who would not, were predicted correctly. These results indicate that the model can serve as an effective predictor of liver cancer for diabetes patients. The physicians consulted agreed that the model can assist them in advising patients at risk of liver cancer and may help reduce the future costs incurred by cancer treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
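
    A minimal sketch of the ANN-versus-logistic-regression comparison described above, using synthetic data with roughly the same class balance; the network size, train/test split, and 0.5 threshold are illustrative, and scikit-learn models stand in for the authors' implementations.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: 10 risk-factor columns versus a liver-cancer outcome,
# with roughly one third positives as in the abstract's dataset.
X, y = make_classification(n_samples=2060, n_features=10, weights=[0.67, 0.33],
                           random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y,
                                          random_state=7)

models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                       random_state=7)),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, (prob >= 0.5).astype(int)).ravel()
    print(f"{name}: sensitivity={tp/(tp+fn):.3f} "
          f"specificity={tn/(tn+fp):.3f} AUC={roc_auc_score(y_te, prob):.3f}")
```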

  14. Predictors of outcome after elective endovascular abdominal aortic aneurysm repair and external validation of a risk prediction model.

    PubMed

    Wisniowski, Brendan; Barnes, Mary; Jenkins, Jason; Boyne, Nicholas; Kruger, Allan; Walker, Philip J

    2011-09-01

    Endovascular abdominal aortic aneurysm (AAA) repair (EVAR) has been associated with lower operative mortality and morbidity than open surgery but comparable long-term mortality and higher delayed complication and reintervention rates. Attention has therefore been directed to identifying preoperative and operative variables that influence outcomes after EVAR. Risk-prediction models, such as the EVAR Risk Assessment (ERA) model, have also been developed to help surgeons plan EVAR procedures. The aims of this study were (1) to describe outcomes of elective EVAR at the Royal Brisbane and Women's Hospital (RBWH), (2) to identify preoperative and operative variables predictive of outcomes after EVAR, and (3) to externally validate the ERA model. All elective EVAR procedures at the RBWH before July 1, 2009, were reviewed. Descriptive analyses were performed to determine the outcomes. Univariate and multivariate analyses were performed to identify preoperative and operative variables predictive of outcomes after EVAR. Binomial logistic regression analyses were used to externally validate the ERA model. Before July 1, 2009, 197 patients (172 men), who were a mean age of 72.8 years, underwent elective EVAR at the RBWH. Operative mortality was 1.0%. Survival was 81.1% at 3 years and 63.2% at 5 years. Multivariate analysis showed predictors of survival were age (P = .0126), American Society of Anesthesiologists (ASA) score (P = .0180), and chronic obstructive pulmonary disease (P = .0348) at 3 years and age (P = .0103), ASA score (P = .0006), renal failure (P = .0048), and serum creatinine (P = .0022) at 5 years. Aortic branch vessel score was predictive of initial (30-day) type II endoleak (P = .0015). AAA tortuosity was predictive of midterm type I endoleak (P = .0251). Female sex was associated with lower rates of initial clinical success (P = .0406). The ERA model fitted RBWH data well for early death (C statistic = .906), 3-year survival (C statistic = .735), 5-year survival (C statistic = .800), and initial type I endoleak (C statistic = .850). The outcomes of elective EVAR at the RBWH are broadly consistent with those of a nationwide Australian audit and recent randomized trials. Age and ASA score are independent predictors of midterm survival after elective EVAR. The ERA model predicts mortality-related outcomes and initial type I endoleak well for RBWH elective EVAR patients. Copyright © 2011 Society for Vascular Surgery. All rights reserved.

  15. Analysis of a novel class of predictive microbial growth models and application to coculture growth.

    PubMed

    Poschet, F; Vereecken, K M; Geeraerd, A H; Nicolaï, B M; Van Impe, J F

    2005-04-15

    In this paper, a novel class of microbial growth models is analysed. In contrast with the currently used logistic type models (e.g., the model of Baranyi and Roberts [Baranyi, J., Roberts, T.A., 1994. A dynamic approach to predicting bacterial growth in food. International Journal of Food Microbiology 23, 277-294]), the novel model class, presented in Van Impe et al. (Van Impe, J.F., Poschet, F., Geeraerd, A.H., Vereecken, K.M., 2004. Towards a novel class of predictive microbial growth models. International Journal of Food Microbiology, this issue), explicitly incorporates nutrient exhaustion and/or metabolic waste product effects inducing stationary phase behaviour. As such, these novel model types can be extended in a natural way towards microbial interactions in cocultures and microbial growth in structured foods. Two illustrative case studies of the novel model types are thoroughly analysed and compared to the widely used model of Baranyi and Roberts. In a first case study, the stationary phase is assumed to be solely resulting from toxic product inhibition and is described as a function of the pH-evolution. In the second case study, substrate exhaustion is the sole cause of the stationary phase. Finally, a more complex case study of a so-called P-model is presented, dealing with a coculture inhibition of Listeria innocua mediated by lactic acid production of Lactococcus lactis.
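
    A sketch in the spirit of the model class described above: growth coupled to an explicit substrate balance so that the stationary phase emerges from substrate exhaustion rather than from a logistic carrying capacity. The Monod rate law and all parameter values are illustrative assumptions, not the published Van Impe et al. equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s, Y = 0.8, 0.5, 0.4   # 1/h, g/L, cells per g substrate (illustrative)

def rhs(t, z):
    n, s = z                              # cell density, substrate concentration
    mu = mu_max * s / (K_s + s)           # growth slows as substrate is exhausted
    return [mu * n, -mu * n / Y]

# Start with a small inoculum and 5 g/L of substrate; the stationary phase
# appears once the substrate balance reaches zero.
sol = solve_ivp(rhs, (0.0, 30.0), [1e-3, 5.0], dense_output=True)
for ti, (n, s) in zip(np.linspace(0, 30, 7), sol.sol(np.linspace(0, 30, 7)).T):
    print(f"t={ti:5.1f} h  N={n:8.3f}  S={s:6.3f}")
```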

  16. Biomarkers for predicting type 2 diabetes development-Can metabolomics improve on existing biomarkers?

    PubMed

    Savolainen, Otto; Fagerberg, Björn; Vendelbo Lind, Mads; Sandberg, Ann-Sofie; Ross, Alastair B; Bergström, Göran

    2017-01-01

    The aim was to determine if metabolomics could be used to build a predictive model for type 2 diabetes (T2D) risk that would improve prediction of T2D over current risk markers. Gas chromatography-tandem mass spectrometry metabolomics was used in a nested case-control study based on a screening sample of 64-year-old Caucasian women (n = 629). Candidate metabolic markers of T2D were identified in plasma obtained at baseline and the power to predict diabetes was tested in 69 incident cases occurring during 5.5 years of follow-up. The metabolomics results were used as a standalone prediction model and in combination with established T2D predictive biomarkers for building eight T2D prediction models that were compared with each other based on their sensitivity and selectivity for predicting T2D. Established markers of T2D (impaired fasting glucose, impaired glucose tolerance, insulin resistance (HOMA), smoking, serum adiponectin) alone, and in combination with metabolomics, had the largest areas under the curve (AUC) (0.794 (95% confidence interval [0.738-0.850]) and 0.808 [0.749-0.867], respectively), with the standalone metabolomics model based on nine fasting plasma markers having lower predictive power (0.657 [0.577-0.736]). Prediction based on non-blood-based measures gave an AUC of 0.638 [0.565-0.711]. Established measures of T2D risk remain the best predictor of T2D risk in this population. Additional markers detected using metabolomics are likely related to these measures as they did not enhance the overall prediction in a combined model.

  17. Biomarkers for predicting type 2 diabetes development—Can metabolomics improve on existing biomarkers?

    PubMed Central

    Savolainen, Otto; Fagerberg, Björn; Vendelbo Lind, Mads; Sandberg, Ann-Sofie; Ross, Alastair B.; Bergström, Göran

    2017-01-01

    Aim The aim was to determine if metabolomics could be used to build a predictive model for type 2 diabetes (T2D) risk that would improve prediction of T2D over current risk markers. Methods Gas chromatography-tandem mass spectrometry metabolomics was used in a nested case-control study based on a screening sample of 64-year-old Caucasian women (n = 629). Candidate metabolic markers of T2D were identified in plasma obtained at baseline and the power to predict diabetes was tested in 69 incident cases occurring during 5.5 years of follow-up. The metabolomics results were used as a standalone prediction model and in combination with established T2D predictive biomarkers for building eight T2D prediction models that were compared with each other based on their sensitivity and selectivity for predicting T2D. Results Established markers of T2D (impaired fasting glucose, impaired glucose tolerance, insulin resistance (HOMA), smoking, serum adiponectin) alone, and in combination with metabolomics, had the largest areas under the curve (AUC) (0.794 (95% confidence interval [0.738–0.850]) and 0.808 [0.749–0.867], respectively), with the standalone metabolomics model based on nine fasting plasma markers having lower predictive power (0.657 [0.577–0.736]). Prediction based on non-blood-based measures gave an AUC of 0.638 [0.565–0.711]. Conclusions Established measures of T2D risk remain the best predictor of T2D risk in this population. Additional markers detected using metabolomics are likely related to these measures as they did not enhance the overall prediction in a combined model. PMID:28692646

  18. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    NASA Astrophysics Data System (ADS)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
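
    A self-contained sketch of the binarization-plus-rule-creation steps described above, on a hypothetical machine-event table; frequent itemsets are found by brute-force enumeration standing in for Apriori's candidate pruning, and the support/confidence thresholds and column names are illustrative.

```python
from itertools import combinations

import pandas as pd

# Binarized machine-event log: each row is a time window, each column flags
# whether a condition or failure type was observed (hypothetical columns).
events = pd.DataFrame({
    "overheat":        [1, 1, 0, 1, 0, 1, 1, 0],
    "vibration_high":  [1, 0, 0, 1, 1, 1, 0, 0],
    "coolant_low":     [1, 1, 0, 1, 0, 1, 1, 1],
    "failure_spindle": [1, 1, 0, 1, 0, 1, 0, 0],
}).astype(bool)

MIN_SUPPORT, MIN_CONFIDENCE = 0.3, 0.8

def support(itemset):
    """Fraction of rows in which every item of the itemset is present."""
    return events[list(itemset)].all(axis=1).mean()

# Frequent itemsets of size 1..3 (brute-force counterpart of Apriori pruning).
items = list(events.columns)
frequent = [fs for k in range(1, 4)
            for fs in combinations(items, k) if support(fs) >= MIN_SUPPORT]

# IF-THEN rules: antecedent -> single consequent, filtered by confidence.
for itemset in frequent:
    if len(itemset) < 2:
        continue
    for consequent in itemset:
        antecedent = tuple(i for i in itemset if i != consequent)
        conf = support(itemset) / support(antecedent)
        if conf >= MIN_CONFIDENCE:
            print(f"IF {antecedent} THEN {consequent}: "
                  f"support={support(itemset):.2f}, confidence={conf:.2f}")
```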

  19. Land Covers Classification Based on Random Forest Method Using Features from Full-Waveform LIDAR Data

    NASA Astrophysics Data System (ADS)

    Ma, L.; Zhou, M.; Li, C.

    2017-09-01

    In this study, a Random Forest (RF) based land-cover classification method is presented to predict the types of land cover in the Miyun area. The returned full waveforms, which were acquired by a LiteMapper 5600 airborne LiDAR system, were processed, including waveform filtering, waveform decomposition and feature extraction. The commonly used features of distance, intensity, Full Width at Half Maximum (FWHM), skewness and kurtosis were extracted. These waveform features were used as attributes of the training data for generating the RF prediction model. The RF prediction model was applied to predict the types of land cover in the Miyun area as trees, buildings, farmland and ground. The classification results for these four land-cover types were evaluated against ground-truth information acquired from CCD image data of the same region. The RF classification results were compared with those of an SVM method and showed better performance. The RF classification accuracy reached 89.73% and the kappa coefficient was 0.8631.
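
    A small sketch of the RF-versus-SVM comparison reported above, using synthetic stand-ins for the five waveform features and the four land-cover classes; accuracy and Cohen's kappa are computed as in the abstract, but all data and hyperparameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder waveform features (distance, intensity, FWHM, skewness, kurtosis)
# versus four land-cover classes (trees, buildings, farmland, ground).
X, y = make_classification(n_samples=2000, n_features=5, n_informative=5,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

for name, clf in {
    "RF": RandomForestClassifier(n_estimators=200, random_state=3),
    "SVM": make_pipeline(StandardScaler(), SVC()),
}.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.4f} "
          f"kappa={cohen_kappa_score(y_te, pred):.4f}")
```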

  20. Predicting crystal growth via a unified kinetic three-dimensional partition model

    NASA Astrophysics Data System (ADS)

    Anderson, Michael W.; Gebbie-Rayet, James T.; Hill, Adam R.; Farida, Nani; Attfield, Martin P.; Cubillas, Pablo; Blatov, Vladislav A.; Proserpio, Davide M.; Akporiaye, Duncan; Arstad, Bjørnar; Gale, Julian D.

    2017-04-01

    Understanding and predicting crystal growth is fundamental to the control of functionality in modern materials. Despite investigations for more than one hundred years, it is only recently that the molecular intricacies of these processes have been revealed by scanning probe microscopy. To organize and understand this large amount of new information, new rules for crystal growth need to be developed and tested. However, because of the complexity and variety of different crystal systems, attempts to understand crystal growth in detail have so far relied on developing models that are usually applicable to only one system. Such models cannot be used to achieve the wide scope of understanding that is required to create a unified model across crystal types and crystal structures. Here we describe a general approach to understanding and, in theory, predicting the growth of a wide range of crystal types, including the incorporation of defect structures, by simultaneous molecular-scale simulation of crystal habit and surface topology using a unified kinetic three-dimensional partition model. This entails dividing the structure into ‘natural tiles’ or Voronoi polyhedra that are metastable and, consequently, temporally persistent. As such, these units are then suitable for re-construction of the crystal via a Monte Carlo algorithm. We demonstrate our approach by predicting the crystal growth of a diverse set of crystal types, including zeolites, metal-organic frameworks, calcite, urea and L-cystine.

  1. Predictive modeling of structured electronic health records for adverse drug event detection.

    PubMed

    Zhao, Jing; Henriksson, Aron; Asker, Lars; Boström, Henrik

    2015-01-01

    The digitization of healthcare data, resulting from the increasingly widespread adoption of electronic health records, has greatly facilitated its analysis by computational methods and thereby enabled large-scale secondary use thereof. This can be exploited to support public health activities such as pharmacovigilance, wherein the safety of drugs is monitored to inform regulatory decisions about sustained use. To that end, electronic health records have emerged as a potentially valuable data source, providing access to longitudinal observations of patient treatment and drug use. A nascent line of research concerns predictive modeling of healthcare data for the automatic detection of adverse drug events, which presents its own set of challenges: it is not yet clear how to represent the heterogeneous data types in a manner conducive to learning high-performing machine learning models. Datasets from an electronic health record database are used for learning predictive models with the purpose of detecting adverse drug events. The use and representation of two data types, as well as their combination, are studied: clinical codes, describing prescribed drugs and assigned diagnoses, and measurements. Feature selection is conducted on the various types of data to reduce dimensionality and sparsity, while allowing for an in-depth feature analysis of the usefulness of each data type and representation. Within each data type, combining multiple representations yields better predictive performance compared to using any single representation. The use of clinical codes for adverse drug event detection significantly outperforms the use of measurements; however, there is no significant difference over datasets between using only clinical codes and their combination with measurements. For certain adverse drug events, the combination does, however, outperform using only clinical codes. Feature selection leads to increased predictive performance for both data types, in isolation and combined. We have demonstrated how machine learning can be applied to electronic health records for the purpose of detecting adverse drug events and proposed solutions to some of the challenges this presents, including how to represent the various data types. Overall, clinical codes are more useful than measurements and, in specific cases, it is beneficial to combine the two.

  2. Predictive modeling of structured electronic health records for adverse drug event detection

    PubMed Central

    2015-01-01

    Background The digitization of healthcare data, resulting from the increasingly widespread adoption of electronic health records, has greatly facilitated its analysis by computational methods and thereby enabled large-scale secondary use thereof. This can be exploited to support public health activities such as pharmacovigilance, wherein the safety of drugs is monitored to inform regulatory decisions about sustained use. To that end, electronic health records have emerged as a potentially valuable data source, providing access to longitudinal observations of patient treatment and drug use. A nascent line of research concerns predictive modeling of healthcare data for the automatic detection of adverse drug events, which presents its own set of challenges: it is not yet clear how to represent the heterogeneous data types in a manner conducive to learning high-performing machine learning models. Methods Datasets from an electronic health record database are used for learning predictive models with the purpose of detecting adverse drug events. The use and representation of two data types, as well as their combination, are studied: clinical codes, describing prescribed drugs and assigned diagnoses, and measurements. Feature selection is conducted on the various types of data to reduce dimensionality and sparsity, while allowing for an in-depth feature analysis of the usefulness of each data type and representation. Results Within each data type, combining multiple representations yields better predictive performance compared to using any single representation. The use of clinical codes for adverse drug event detection significantly outperforms the use of measurements; however, there is no significant difference over datasets between using only clinical codes and their combination with measurements. For certain adverse drug events, the combination does, however, outperform using only clinical codes. Feature selection leads to increased predictive performance for both data types, in isolation and combined. Conclusions We have demonstrated how machine learning can be applied to electronic health records for the purpose of detecting adverse drug events and proposed solutions to some of the challenges this presents, including how to represent the various data types. Overall, clinical codes are more useful than measurements and, in specific cases, it is beneficial to combine the two. PMID:26606038

  3. Why does a cleavage plane develop parallel to the spindle axis in conical sand dollar eggs? A key question for clarifying the mechanism of contractile ring positioning.

    PubMed

    Yoshigaki, Tomoyoshi

    2003-03-21

    Three types of models have been proposed about how the mitotic apparatus determines the position of the cleavage furrow in animal cells. In the first and second types, the contractile ring appears in a cortical region that least and most astral microtubules reach, respectively. The third type is that the spindle midzone positions the contractile ring. In the previous study, a new model was proposed through analyses of cytokinesis in sand dollar and sea urchin eggs. Gradients of the surface density of microtubule plus ends are assumed to drive membrane proteins whose accumulation causes the formation of contractile-ring microfilaments. In the present study, the validity of each model is examined by simulating the furrow formation in conical sand dollar eggs with the mitotic apparatus oriented perpendicular to the cone axis. The new model predicts that unilateral furrows with cleavage planes roughly parallel to the spindle axis appear between the mitotic apparatus and the vertex besides the normally positioned furrow. The predictions are consistent with the observations by Rappaport & Rappaport (1994, Dev. Biol.164, 258-266). The other three types of models do not predict the formation of the ectopic furrows. Furthermore, it is pointed out that only the new model has the ability to explain the geometrical relationship between the mitotic apparatus and the contractile ring under various experimental conditions. These results strongly suggest the real existence of the membrane proteins postulated in the model.

  4. Validation of the Economic and Health Outcomes Model of Type 2 Diabetes Mellitus (ECHO-T2DM).

    PubMed

    Willis, Michael; Johansen, Pierre; Nilsson, Andreas; Asseburg, Christian

    2017-03-01

    The Economic and Health Outcomes Model of Type 2 Diabetes Mellitus (ECHO-T2DM) was developed to address study questions pertaining to the cost-effectiveness of treatment alternatives in the care of patients with type 2 diabetes mellitus (T2DM). Naturally, the usefulness of a model is determined by the accuracy of its predictions. A previous version of ECHO-T2DM was validated against actual trial outcomes and the model predictions were generally accurate. However, there have been recent upgrades to the model, which modify model predictions and necessitate an update of the validation exercises. The objectives of this study were to extend the methods available for evaluating model validity, to conduct a formal model validation of ECHO-T2DM (version 2.3.0) in accordance with the principles espoused by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making (SMDM), and secondarily to evaluate the relative accuracy of four sets of macrovascular risk equations included in ECHO-T2DM. We followed the ISPOR/SMDM guidelines on model validation, evaluating face validity, verification, cross-validation, and external validation. Model verification involved 297 'stress tests', in which specific model inputs were modified systematically to ascertain correct model implementation. Cross-validation consisted of a comparison between ECHO-T2DM predictions and those of the seminal National Institutes of Health model. In external validation, study characteristics were entered into ECHO-T2DM to replicate the clinical results of 12 studies (including 17 patient populations), and model predictions were compared to observed values using established statistical techniques as well as measures of average prediction error, separately for the four sets of macrovascular risk equations supported in ECHO-T2DM. Sub-group analyses were conducted for dependent vs. independent outcomes and for microvascular vs. macrovascular vs. mortality endpoints. All stress tests were passed. ECHO-T2DM replicated the National Institutes of Health cost-effectiveness application with numerically similar results. In external validation of ECHO-T2DM, model predictions agreed well with observed clinical outcomes. For all sets of macrovascular risk equations, the results were close to the intercept and slope coefficients corresponding to a perfect match, resulting in high R 2 and failure to reject concordance using an F test. The results were similar for sub-groups of dependent and independent validation, with some degree of under-prediction of macrovascular events. ECHO-T2DM continues to match health outcomes in clinical trials in T2DM, with prediction accuracy similar to other leading models of T2DM.
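
    A minimal sketch of the external-validation statistics mentioned above: regress observed trial outcomes on model predictions and jointly test intercept = 0 and slope = 1 with an F test, where failure to reject is consistent with concordance. The observed/predicted values below are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical external-validation pairs: trial-reported event rates (observed)
# versus model-simulated rates (predicted), in events per 100 patient-years.
observed  = np.array([2.1, 3.4, 1.8, 5.0, 4.2, 2.9, 6.1, 3.7])
predicted = np.array([2.3, 3.1, 1.6, 5.4, 4.0, 3.2, 5.8, 3.5])

res = sm.OLS(observed, sm.add_constant(predicted)).fit()
print("intercept, slope:", res.params, " R^2:", round(res.rsquared, 3))

# Joint F test of perfect concordance: intercept = 0 and slope = 1.
# A large p-value (failure to reject) supports external validity.
print(res.f_test("const = 0, x1 = 1"))
```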

  5. Predicted 25-hydroxyvitamin D Score and incident type 2 diabetes in the Framingham Offspring Study

    USDA-ARS?s Scientific Manuscript database

    Accumulating evidence suggests that vitamin D is involved in the development of type 2 diabetes (T2D). Our objective was to examine the relation between vitamin D status and incidence of T2D. We used a subsample of 1972 Framingham Offspring Study participants to develop a regression model to predict...

  6. Predicted 25-hydroxyvitamin D score and incident type 2 diabetes in the Framingham Offspring Study

    USDA-ARS?s Scientific Manuscript database

    Accumulating evidence suggests that vitamin D is involved in the development of type 2 diabetes (T2D). Our objective was to examine the relation between vitamin D status and incidence of T2D. We used a subsample of 1972 Framingham Offspring Study participants to develop a regression model to predict...

  7. Independent data validation of an in vitro method for ...

    EPA Pesticide Factsheets

    In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must be shown to reliably predict in vivo RBA that is determined in an established animal model. Previous studies correlating soil As IVBA with RBA have been limited by the use of few soil types as the source of As. Furthermore, the predictive value of As IVBA assays has not been validated using an independent set of As-contaminated soils. Therefore, the current study was undertaken to develop a robust linear model to predict As RBA in mice using an IVBA assay and to independently validate the predictive capability of this assay using a unique set of As-contaminated soils. Thirty-six As-contaminated soils varying in soil type, As contaminant source, and As concentration were included in this study, with 27 soils used for initial model development and nine soils used for independent model validation. The initial model reliably predicted As RBA values in the independent data set, with a mean As RBA prediction error of 5.3% (range 2.4 to 8.4%). Following validation, all 36 soils were used for final model development, resulting in a linear model with the equation: RBA = 0.59 * IVBA + 9.8 and R2 of 0.78. The in vivo-in vitro correlation and independent data validation presented here provide
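
    A trivial sketch applying the final linear model quoted above (RBA = 0.59 * IVBA + 9.8) to example bioaccessibility values; the IVBA inputs are hypothetical and the function is only a transcription of the reported equation, not a validated risk-assessment tool.

```python
import numpy as np

def predicted_rba(ivba_percent):
    """Arsenic relative bioavailability (%) from in vitro bioaccessibility (%),
    using the final linear model reported above: RBA = 0.59 * IVBA + 9.8."""
    return 0.59 * np.asarray(ivba_percent, dtype=float) + 9.8

# Example IVBA measurements (hypothetical values, in percent).
for ivba in (10, 30, 50, 80):
    print(f"IVBA = {ivba:3d}%  ->  predicted RBA = {predicted_rba(ivba):.1f}%")
```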

  8. Hierarchical Models for Type Ia Supernova Light Curves in the Optical and Near Infrared

    NASA Astrophysics Data System (ADS)

    Mandel, Kaisey; Narayan, G.; Kirshner, R. P.

    2011-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova optical and near infrared light curves. Since the near infrared light curves are excellent standard candles and are less sensitive to dust extinction and reddening, the combination of near infrared and optical data better constrains the host galaxy extinction and improves the precision of distance predictions to SN Ia. A hierarchical probabilistic model coherently accounts for multiple random and uncertain effects, including photometric error, intrinsic supernova light curve variations and correlations across phase and wavelength, dust extinction and reddening, peculiar velocity dispersion and distances. An improved BayeSN MCMC code is implemented for computing probabilistic inferences for individual supernovae and the SN Ia and host galaxy dust populations. I use this hierarchical model to analyze nearby Type Ia supernovae with optical and near infrared data from the PAIRITEL, CfA3, and CSP samples and the literature. Using cross-validation to test the robustness of the model predictions, I find that the rms Hubble diagram scatter of predicted distance moduli is 0.11 mag for SN with optical and near infrared data versus 0.15 mag for SN with only optical data. Accounting for the dispersion expected from random peculiar velocities, the rms intrinsic prediction error is 0.08-0.10 mag for SN with both optical and near infrared light curves. I discuss results for the inferred intrinsic correlation structures of the optical-NIR SN Ia light curves and the host galaxy dust distribution captured by the hierarchical model. The continued observation and analysis of Type Ia SN in the optical and near infrared is important for improving their utility as precise and accurate cosmological distance indicators.

  9. Multiplex proteomics for prediction of major cardiovascular events in type 2 diabetes.

    PubMed

    Nowak, Christoph; Carlsson, Axel C; Östgren, Carl Johan; Nyström, Fredrik H; Alam, Moudud; Feldreich, Tobias; Sundström, Johan; Carrero, Juan-Jesus; Leppert, Jerzy; Hedberg, Pär; Henriksen, Egil; Cordeiro, Antonio C; Giedraitis, Vilmantas; Lind, Lars; Ingelsson, Erik; Fall, Tove; Ärnlöv, Johan

    2018-05-24

    Multiplex proteomics could improve understanding and risk prediction of major adverse cardiovascular events (MACE) in type 2 diabetes. This study assessed 80 cardiovascular and inflammatory proteins for biomarker discovery and prediction of MACE in type 2 diabetes. We combined data from six prospective epidemiological studies of 30-77-year-old individuals with type 2 diabetes in whom 80 circulating proteins were measured by proximity extension assay. Multivariable-adjusted Cox regression was used in a discovery/replication design to identify biomarkers for incident MACE. We used gradient-boosted machine learning and lasso regularised Cox regression in a random 75% training subsample to assess whether adding proteins to risk factors included in the Swedish National Diabetes Register risk model would improve the prediction of MACE in the separate 25% test subsample. Of 1211 adults with type 2 diabetes (32% women), 211 experienced a MACE over a mean (±SD) of 6.4 ± 2.3 years. We replicated associations (<5% false discovery rate) between risk of MACE and eight proteins: matrix metalloproteinase (MMP)-12, IL-27 subunit α (IL-27a), kidney injury molecule (KIM)-1, fibroblast growth factor (FGF)-23, protein S100-A12, TNF receptor (TNFR)-1, TNFR-2 and TNF-related apoptosis-inducing ligand receptor (TRAIL-R)2. Addition of the 80-protein assay to established risk factors improved discrimination in the separate test sample from 0.686 (95% CI 0.682, 0.689) to 0.748 (95% CI 0.746, 0.751). A sparse model of 20 added proteins achieved a C statistic of 0.747 (95% CI 0.653, 0.842) in the test sample. We identified eight protein biomarkers, four of which are novel, for risk of MACE in community residents with type 2 diabetes, and found improved risk prediction by combining multiplex proteomics with an established risk model. Multiprotein arrays could be useful in identifying individuals with type 2 diabetes who are at highest risk of a cardiovascular event.
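
    A hedged sketch of the lasso-regularised Cox step described above, using the lifelines library on synthetic protein data (all column names, the penalizer value, and the censoring scheme are assumptions); the original analysis also used gradient-boosted machine learning and an established risk model, which are not reproduced here.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n, p = 500, 20                                  # patients, candidate proteins

# Placeholder data: standardized protein levels plus follow-up time (years)
# and a MACE indicator; stands in for the proximity-extension-assay panel.
X = pd.DataFrame(rng.normal(size=(n, p)),
                 columns=[f"protein_{i}" for i in range(p)])
risk = 0.8 * X["protein_0"] - 0.5 * X["protein_1"]
time = rng.exponential(scale=6.0 * np.exp(-risk))
event = (time < 8.0).astype(int)                # administrative censoring at 8 y
df = X.assign(time=np.minimum(time, 8.0), event=event)

# Lasso-regularised Cox regression (l1_ratio=1.0); the penalizer value is
# illustrative, not tuned.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")
print("C statistic (training):", round(cph.concordance_index_, 3))
print(cph.params_[cph.params_.abs() > 1e-6])    # proteins retained by the lasso
```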

  10. Predicting locations of rare aquatic species’ habitat with a combination of species-specific and assemblage-based models

    USGS Publications Warehouse

    McKenna, James E.; Carlson, Douglas M.; Payne-Wynne, Molly L.

    2013-01-01

    Aim: Rare aquatic species are a substantial component of biodiversity, and their conservation is a major objective of many management plans. However, they are difficult to assess, and their optimal habitats are often poorly known. Methods to effectively predict the likely locations of suitable rare aquatic species habitats are needed. We combine two modelling approaches to predict occurrence and general abundance of several rare fish species. Location: Allegheny watershed of western New York State (USA). Methods: Our method used two empirical neural network modelling approaches (species specific and assemblage based) to predict stream-by-stream occurrence and general abundance of rare darters, based on broad-scale habitat conditions. Species-specific models were developed for longhead darter (Percina macrocephala), spotted darter (Etheostoma maculatum) and variegate darter (Etheostoma variatum) in the Allegheny drainage. An additional model predicted the type of rare darter-containing assemblage expected in each stream reach. Predictions from both models were then combined inclusively and exclusively and compared with additional independent data. Results: Example rare darter predictions demonstrate the method's effectiveness. Models performed well (R² ≥ 0.79), identified where suitable darter habitat was most likely to occur, and predictions matched well to those of collection sites. Additional independent data showed that the most conservative (exclusive) model slightly underestimated the distributions of these rare darters or predictions were displaced by one stream reach, suggesting that new darter habitat types were detected in the later collections. Main conclusions: Broad-scale habitat variables can be used to effectively identify rare species' habitats. Combining species-specific and assemblage-based models enhances our ability to make use of the sparse data on rare species and to identify habitat units most likely and least likely to support those species. This hybrid approach may assist managers with the prioritization of habitats to be examined or conserved for rare species.

  11. Are Regional Habitat Models Useful at a Local-Scale? A Case Study of Threatened and Common Insectivorous Bats in South-Eastern Australia

    PubMed Central

    McConville, Anna; Law, Bradley S.; Mahony, Michael J.

    2013-01-01

    Habitat modelling and predictive mapping are important tools for conservation planning, particularly for lesser known species such as many insectivorous bats. However, the scale at which modelling is undertaken can affect the predictive accuracy and restrict the use of the model at different scales. We assessed the validity of existing regional-scale habitat models at a local-scale and contrasted the habitat use of two morphologically similar species with differing conservation status (Mormopterus norfolkensis and Mormopterus species 2). We used negative binomial generalised linear models created from indices of activity and environmental variables collected from systematic acoustic surveys. We found that habitat type (based on vegetation community) best explained activity of both species, which were more active in floodplain areas, with most foraging activity recorded in the freshwater wetland habitat type. The threatened M. norfolkensis avoided urban areas, which contrasts with M. species 2 which occurred frequently in urban bushland. We found that the broad habitat types predicted from local-scale models were generally consistent with those from regional-scale models. However, threshold-dependent accuracy measures indicated a poor fit and we advise caution be applied when using the regional models at a fine scale, particularly when the consequences of false negatives or positives are severe. Additionally, our study illustrates that habitat type classifications can be important predictors and we suggest they are more practical for conservation than complex combinations of raw variables, as they are easily communicated to land managers. PMID:23977296
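    A minimal sketch of the kind of negative binomial GLM described above is shown here, assuming a data frame of nightly bat-pass counts with a categorical habitat predictor. The habitat labels, column names and simulated counts are placeholders, not the study's data.

```python
# Illustrative negative binomial GLM relating nightly bat activity (call counts)
# to habitat type, in the spirit of the study above; data and column names are
# invented for the sketch.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
habitats = ["freshwater_wetland", "floodplain_forest", "dry_forest", "urban"]
df = pd.DataFrame({
    "habitat": rng.choice(habitats, size=200),
    "temperature": rng.normal(18, 4, size=200),
})
# Over-dispersed counts with higher activity in wetland/floodplain habitats.
effect = df["habitat"].map({"freshwater_wetland": 1.2, "floodplain_forest": 0.8,
                            "dry_forest": 0.1, "urban": -0.5})
mu = np.exp(1.0 + effect + 0.03 * df["temperature"])
df["passes"] = rng.negative_binomial(n=2, p=2 / (2 + mu))

model = smf.glm("passes ~ C(habitat) + temperature", data=df,
                family=sm.families.NegativeBinomial(alpha=0.5))
print(model.fit().summary())
```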

  12. Predicting The Type Of Pregnancy Using Flexible Discriminate Analysis And Artificial Neural Networks: A Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooman, A.; Mohammadzadeh, M

    Some medical and epidemiological surveys are designed to predict a nominal response variable with several levels. With regard to the type of pregnancy, there are four possible states: wanted, unwanted by wife, unwanted by husband and unwanted by couple. In this paper, we predicted the type of pregnancy, as well as the factors influencing it, using three different models and compared them. Because the type of pregnancy has several levels, we developed a multinomial logistic regression, a neural network and a flexible discriminant analysis model on the data and compared their results using two statistical indices: the area under the ROC curve and the kappa coefficient. Based on these two indices, flexible discriminant analysis proved to be a better fit for prediction on these data than the other methods. When the relations among variables are complex, flexible discriminant analysis can be used instead of multinomial logistic regression or a neural network to predict nominal response variables with several levels and obtain more accurate predictions.
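    The comparison described above can be mimicked with a short sketch that scores a multinomial logistic regression and a small neural network on a synthetic four-class outcome using multiclass ROC AUC and Cohen's kappa. Flexible discriminant analysis is omitted because scikit-learn has no built-in implementation; the data, class structure and model settings are all assumed for illustration.

```python
# Sketch comparing a multinomial logistic regression with a small neural network
# on a synthetic four-class outcome, scored with multiclass ROC AUC and kappa.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "multinomial logit": LogisticRegression(max_iter=2000),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                    random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te), multi_class="ovr")
    kappa = cohen_kappa_score(y_te, clf.predict(X_te))
    print(f"{name}: AUC = {auc:.3f}, kappa = {kappa:.3f}")
```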

  13. Leveraging ISI Multi-Model Prediction for Navy Operations: Proposal to the Office of Naval Research

    DTIC Science & Technology

    2014-09-30

    Leveraging ISI Multi-Model Prediction for Navy Operations: Proposal to the Office of Naval Research. PI: James L. Kinter III, Director, Center for Ocean-Land... Dates covered: 00-00-2014 to 00-00-2014. (The remainder of the record is report-documentation-page boilerplate with no abstract text.)

  14. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.

  15. Modelling of OGTT curve identifies 1 h plasma glucose level as a strong predictor of incident type 2 diabetes: results from two prospective cohorts.

    PubMed

    Alyass, Akram; Almgren, Peter; Akerlund, Mikael; Dushoff, Jonathan; Isomaa, Bo; Nilsson, Peter; Tuomi, Tiinamaija; Lyssenko, Valeriya; Groop, Leif; Meyre, David

    2015-01-01

    The relevance of the OGTT in predicting type 2 diabetes is unclear. We assessed the performance of 14 OGTT glucose traits in type 2 diabetes prediction. We studied 2,603 and 2,386 Europeans from the Botnia study and Malmö Prevention Project (MPP) cohorts with baseline OGTT data. Over follow-up periods of 4.94 years and 23.5 years, 155 (5.95%) and 467 (19.57%) participants, respectively, developed type 2 diabetes. The main outcome was incident type 2 diabetes. One-hour plasma glucose (1h-PG) was a fair/good predictor of incident type 2 diabetes in the Botnia study and MPP (area under the receiver operating characteristic curve [AUCROC] 0.80 [0.77, 0.84] and 0.70 [0.68, 0.73]). 1h-PG alone outperformed the prediction model of multiple clinical risk factors (age, sex, BMI, family history of type 2 diabetes) in the Botnia study and MPP (AUCROC 0.75 [0.72, 0.79] and 0.67 [0.64, 0.70]). The same clinical risk factors added to 1h-PG modestly increased prediction for incident type 2 diabetes (Botnia, AUCROC 0.83 [0.80, 0.86]; MPP, AUCROC 0.74 [0.72, 0.77]). 1h-PG also outperformed HbA1c in predicting type 2 diabetes in the Botnia cohort. 1h-PG values of 8.9 mmol/l and 8.4 mmol/l were the optimal cut-points for initial screening and selection of high-risk individuals in the Botnia study and MPP, respectively, and represented 30% and 37% of all participants in these cohorts. High-risk individuals had a substantially increased risk of incident type 2 diabetes (OR 8.0 [5.5, 11.6] and 3.8 [3.1, 4.7]) and captured 75% and 62% of all incident type 2 diabetes in the Botnia study and MPP. 1h-PG is a valuable prediction tool for identifying adults at risk for future type 2 diabetes.

  16. A Comparative Study on Johnson Cook, Modified Zerilli-Armstrong and Arrhenius-Type Constitutive Models to Predict High-Temperature Flow Behavior of Ti-6Al-4V Alloy in α + β Phase

    NASA Astrophysics Data System (ADS)

    Cai, Jun; Wang, Kuaishe; Han, Yingying

    2016-03-01

    True stress and true strain values obtained from isothermal compression tests over a wide temperature range from 1,073 to 1,323 K and a strain rate range from 0.001 to 1 s-1 were employed to establish constitutive equations based on the Johnson Cook, modified Zerilli-Armstrong (ZA) and strain-compensated Arrhenius-type models, respectively, to predict the high-temperature flow behavior of Ti-6Al-4V alloy in the α + β phase. Furthermore, a comparative study was made of the capability of the three models to represent the elevated-temperature flow behavior of Ti-6Al-4V alloy. Suitability of the three models was evaluated by comparing both the correlation coefficient R and the average absolute relative error (AARE). The results showed that the Johnson Cook model is inadequate to provide a good description of the flow behavior of Ti-6Al-4V alloy in the α + β phase domain, while the values predicted by the modified ZA model and the strain-compensated Arrhenius-type model agree well with the experimental values except under some deformation conditions. Meanwhile, the modified ZA model could track the deformation behavior more accurately than the other models throughout the entire temperature and strain rate range.
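    For reference, the strain-compensated Arrhenius-type model mentioned above is commonly written through the Zener-Hollomon parameter; the equations below show the standard hyperbolic-sine form. The material constants A, α and n and the activation energy Q are fitted (strain-dependent) parameters, not values from this study.

```latex
% Arrhenius-type (hyperbolic-sine) constitutive relation via the Zener-Hollomon parameter
\begin{align}
  Z &= \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right)
     = A\,[\sinh(\alpha\sigma)]^{n}, \\
  \sigma &= \frac{1}{\alpha}\,
     \ln\!\left\{ \left(\frac{Z}{A}\right)^{1/n}
       + \left[\left(\frac{Z}{A}\right)^{2/n} + 1\right]^{1/2} \right\}.
\end{align}
```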

  17. Route prediction model of infectious diseases for 2018 Winter Olympics in Korea

    NASA Astrophysics Data System (ADS)

    Kim, Eungyeong; Lee, Seok; Byun, Young Tae; Kim, Jae Hun; Lee, Hyuk-jae; Lee, Taikjin

    2014-03-01

    There are many types of respiratory infectious diseases caused by bacteria, viruses, fungi and parasites. Researchers have recently tried to develop mathematical models to predict epidemics of infectious diseases. However, with the development of ground transportation systems in modern society, the spread of infectious diseases has become faster and more complicated in terms of both speed and pathways. The route of infectious diseases during the Vancouver Olympics was predicted based on the Susceptible-Infectious-Recovered (SIR) model, which considered only air traffic as the essential factor for the intercity migration of infectious diseases. Here, we propose a multi-city transmission model to predict the infection route during the 2018 Winter Olympics in Korea based on the pre-existing SIR model. Various types of transportation, such as trains, cars, buses and airplanes, are considered for interpersonal contact both between and within cities. Simulations are performed with assumptions and scenarios based on realistic factors, including demographic, transportation and disease data for Korea. Finally, we analyze the economic profit and loss caused by the variation in the number of tourists during the Olympics.
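    A toy version of the multi-city idea is sketched below: two coupled SIR compartments whose infection pressure mixes across cities through a single travel fraction. The rates, populations and coupling are invented, and the sketch omits the multiple transport modes and the economic analysis of the actual model.

```python
# Toy two-city SIR model with a symmetric travel coupling term, a minimal
# stand-in for the multi-city transmission model described above.
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.2          # transmission and recovery rates (per day)
travel = 0.05                   # fraction of each population mixing across cities

def sir_two_city(t, y):
    S1, I1, R1, S2, I2, R2 = y
    N1, N2 = S1 + I1 + R1, S2 + I2 + R2
    # Effective infection pressure includes infecteds visiting from the other city.
    lam1 = beta * ((1 - travel) * I1 / N1 + travel * I2 / N2)
    lam2 = beta * ((1 - travel) * I2 / N2 + travel * I1 / N1)
    return [-lam1 * S1, lam1 * S1 - gamma * I1, gamma * I1,
            -lam2 * S2, lam2 * S2 - gamma * I2, gamma * I2]

y0 = [9_990, 10, 0, 20_000, 0, 0]          # city 2 starts infection-free
sol = solve_ivp(sir_two_city, (0, 120), y0, dense_output=True)
print("Peak infections in city 2:", round(sol.y[4].max()))
```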

  18. Intelligent path loss prediction engine design using machine learning in the urban outdoor environment

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Lu, Jingyang; Xu, Yiran; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2018-05-01

    Due to the progressive expansion of public mobile networks and the dramatic growth in the number of wireless users in recent years, researchers are motivated to study radio propagation in urban environments and to develop reliable and fast path loss prediction models. Over the last decades, different types of propagation models have been developed for urban path loss prediction, such as the Hata model and the COST 231 model. In this paper, the path loss prediction model is thoroughly investigated using machine learning approaches. Different non-linear feature selection methods are deployed and investigated to reduce the computational complexity. Simulation results are provided to demonstrate the validity of the machine learning based path loss prediction engine, which can correctly determine the signal propagation in a wireless urban setting.
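    The workflow described above (feature selection followed by a machine-learning regressor for path loss) can be sketched as follows. The synthetic measurements are generated from a simple log-distance-plus-shadowing relation, and all feature names and parameter values are assumptions, not the paper's data or prediction engine.

```python
# Sketch of a machine-learning path loss predictor: synthetic measurements from a
# log-distance model with shadowing, then feature selection and a random forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n = 1000
X = pd.DataFrame({
    "log_distance_km": np.log10(rng.uniform(0.05, 5.0, n)),
    "frequency_mhz": rng.uniform(800, 2600, n),
    "tx_height_m": rng.uniform(20, 60, n),
    "building_density": rng.uniform(0, 1, n),     # weakly informative
    "noise_feature": rng.normal(size=n),          # irrelevant on purpose
})
path_loss_db = (120 + 35 * X["log_distance_km"] + 0.02 * X["frequency_mhz"]
                - 0.1 * X["tx_height_m"] + rng.normal(0, 6, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, path_loss_db, random_state=0)
model = make_pipeline(SelectKBest(mutual_info_regression, k=3),
                      RandomForestRegressor(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(model.score(X_te, y_te), 3))
```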

  19. Applying decision tree for identification of a low risk population for type 2 diabetes. Tehran Lipid and Glucose Study.

    PubMed

    Ramezankhani, Azra; Pournik, Omid; Shahrabi, Jamal; Khalili, Davood; Azizi, Fereidoun; Hadaegh, Farzad

    2014-09-01

    The aim of this study was to create a prediction model using a data mining approach to identify individuals at low risk of incident type 2 diabetes, using the Tehran Lipid and Glucose Study (TLGS) database. For a population of 6647 individuals without diabetes, aged ≥20 years and followed for 12 years, a prediction model was developed using classification by the decision tree technique. Seven hundred and twenty-nine (11%) diabetes cases occurred during the follow-up. Predictor variables were selected from demographic characteristics, smoking status, medical and drug history and laboratory measures. We developed the predictive models by decision tree using 60 input variables and one output variable. The overall classification accuracy was 90.5%, with 31.1% sensitivity and 97.9% specificity; for the subjects without diabetes, precision and F-measure were 92% and 0.95, respectively. The identified variables included fasting plasma glucose, body mass index, triglycerides, mean arterial blood pressure, family history of diabetes, educational level and job status. In conclusion, decision tree analysis, using routine demographic, clinical, anthropometric and laboratory measurements, created a simple tool to predict individuals at low risk for type 2 diabetes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
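    A stripped-down version of the decision-tree step is sketched below using scikit-learn rather than the software used in the study; the simulated records reuse a few of the predictors identified above (fasting plasma glucose, BMI, triglycerides, family history), with made-up coefficients and prevalence.

```python
# Minimal decision-tree classifier for incident type 2 diabetes on simulated data.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "fpg": rng.normal(5.2, 0.7, n),              # fasting plasma glucose, mmol/l
    "bmi": rng.normal(27, 4, n),
    "triglycerides": rng.lognormal(0.4, 0.4, n),
    "family_history": rng.integers(0, 2, n),
})
# Hypothetical logistic risk used only to label the synthetic records.
risk = 1 / (1 + np.exp(-(-11 + 1.2 * df["fpg"] + 0.08 * df["bmi"]
                         + 0.8 * df["family_history"])))
df["diabetes"] = rng.random(n) < risk

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="diabetes"),
                                          df["diabetes"], random_state=0)
tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced",
                              random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=list(X_tr.columns)))
print("Held-out accuracy:", round(tree.score(X_te, y_te), 3))
```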

  20. Multi-dimensional classification of GABAergic interneurons with Bayesian network-modeled label uncertainty.

    PubMed

    Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro

    2014-01-01

    Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.

  1. Modeling changes in biomass composition during microwave-based alkali pretreatment of switchgrass.

    PubMed

    Keshwani, Deepak R; Cheng, Jay J

    2010-01-01

    This study used two different approaches to model changes in biomass composition during microwave-based pretreatment of switchgrass: kinetic modeling using a time-dependent rate coefficient, and a Mamdani-type fuzzy inference system. In both modeling approaches, the dielectric loss tangent of the alkali reagent and pretreatment time were used as predictors for changes in amounts of lignin, cellulose, and xylan during the pretreatment. Training and testing data sets for development and validation of the models were obtained from pretreatment experiments conducted using 1-3% w/v NaOH (sodium hydroxide) and pretreatment times ranging from 5 to 20 min. The kinetic modeling approach for lignin and xylan gave comparable results for training and testing data sets, and the differences between the predictions and experimental values were within 2%. The kinetic modeling approach for cellulose was not as effective, and the differences were within 5-7%. The time-dependent rate coefficients of the kinetic models estimated from experimental data were consistent with the heterogeneity of individual biomass components. The Mamdani-type fuzzy inference was shown to be an effective approach to model the pretreatment process and yielded predictions with less than 2% deviation from the experimental values for lignin and with less than 3% deviation from the experimental values for cellulose and xylan. The entropies of the fuzzy outputs from the Mamdani-type fuzzy inference system were calculated to quantify the uncertainty associated with the predictions. Results indicate that there is no significant difference between the entropies associated with the predictions for lignin, cellulose, and xylan. It is anticipated that these models could be used in process simulations of bioethanol production from lignocellulosic materials.
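    The first modelling approach above, a first-order kinetic model with a time-dependent rate coefficient, can be sketched as a small ODE integration. The power-law form k(t) = k0·t^(-m) and the parameter values are assumptions chosen only to illustrate a rate coefficient that decays as the more reactive fraction is consumed; they are not taken from the paper.

```python
# Sketch of a first-order kinetic model with a time-dependent rate coefficient.
import numpy as np
from scipy.integrate import solve_ivp

k0, m = 0.12, 0.4        # hypothetical rate constant and time-decay exponent
c0 = 100.0               # initial component content (% of original)

def dcdt(t, c):
    k = k0 * t ** (-m)   # rate coefficient decays over time (heterogeneous substrate)
    return -k * c

# Start slightly after t = 0 to avoid the power-law singularity at t = 0.
sol = solve_ivp(dcdt, (0.1, 20.0), [c0], t_eval=np.linspace(0.1, 20.0, 5))
for t, c in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f} min, remaining = {c:6.2f} %")
```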

  2. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
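    The gap between resubstitution and cross-validated accuracy that the study emphasises is easy to reproduce; the sketch below fits a classification tree to simulated occurrence data and reports both estimates. The data and tree settings are arbitrary.

```python
# Sketch contrasting resubstitution accuracy with cross-validated accuracy for a
# classification tree, illustrating the optimism of resubstitution estimates.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=15, n_informative=4,
                           flip_y=0.15, random_state=0)
tree = DecisionTreeClassifier(random_state=0)

resub = tree.fit(X, y).score(X, y)              # evaluated on the training data
cv = cross_val_score(tree, X, y, cv=10).mean()  # 10-fold cross-validation
print(f"Resubstitution accuracy: {resub:.2f}")  # typically near 1.0 for a deep tree
print(f"Cross-validated accuracy: {cv:.2f}")    # a more honest estimate
```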

  3. Predicting forest insect flight activity: A Bayesian network approach

    PubMed Central

    Pawson, Stephen M.; Marcot, Bruce G.; Woodberry, Owen G.

    2017-01-01

    Daily flight activity patterns of forest insects are influenced by temporal and meteorological conditions. Temperature and time of day are frequently cited as key drivers of activity; however, complex interactions between multiple contributing factors have also been proposed. Here, we report individual Bayesian network models to assess the probability of flight activity of three exotic insects, Hylurgus ligniperda, Hylastes ater, and Arhopalus ferus in a managed plantation forest context. Models were built from 7,144 individual hours of insect sampling, temperature, wind speed, relative humidity, photon flux density, and temporal data. Discretized meteorological and temporal variables were used to build naïve Bayes tree augmented networks. Calibration results suggested that the H. ater and A. ferus Bayesian network models had the best fit for low Type I and overall errors, and H. ligniperda had the best fit for low Type II errors. Maximum hourly temperature and time since sunrise had the largest influence on H. ligniperda flight activity predictions, whereas time of day and year had the greatest influence on H. ater and A. ferus activity. Type II model errors for the prediction of no flight activity are improved by increasing the model's predictive threshold. Improvements in model performance can be made by further sampling, increasing the sensitivity of the flight intercept traps, and replicating sampling in other regions. Predicting insect flight informs an assessment of the potential phytosanitary risks of wood exports. Quantifying this risk allows mitigation treatments to be targeted to prevent the spread of invasive species via international trade pathways. PMID:28953904

  4. De novo prediction of human chromosome structures: Epigenetic marking patterns encode genome architecture.

    PubMed

    Di Pierro, Michele; Cheng, Ryan R; Lieberman Aiden, Erez; Wolynes, Peter G; Onuchic, José N

    2017-11-14

    Inside the cell nucleus, genomes fold into organized structures that are characteristic of cell type. Here, we show that this chromatin architecture can be predicted de novo using epigenetic data derived from chromatin immunoprecipitation-sequencing (ChIP-Seq). We exploit the idea that chromosomes encode a 1D sequence of chromatin structural types. Interactions between these chromatin types determine the 3D structural ensemble of chromosomes through a process similar to phase separation. First, a neural network is used to infer the relation between the epigenetic marks present at a locus, as assayed by ChIP-Seq, and the genomic compartment in which those loci reside, as measured by DNA-DNA proximity ligation (Hi-C). Next, types inferred from this neural network are used as an input to an energy landscape model for chromatin organization [Minimal Chromatin Model (MiChroM)] to generate an ensemble of 3D chromosome conformations at a resolution of 50 kilobases (kb). After training the model, dubbed Maximum Entropy Genomic Annotation from Biomarkers Associated to Structural Ensembles (MEGABASE), on odd-numbered chromosomes, we predict the sequences of chromatin types and the subsequent 3D conformational ensembles for the even chromosomes. We validate these structural ensembles by using ChIP-Seq tracks alone to predict Hi-C maps, as well as distances measured using 3D fluorescence in situ hybridization (FISH) experiments. Both sets of experiments support the hypothesis of phase separation being the driving process behind compartmentalization. These findings strongly suggest that epigenetic marking patterns encode sufficient information to determine the global architecture of chromosomes and that de novo structure prediction for whole genomes may be increasingly possible. Copyright © 2017 the Author(s). Published by PNAS.

  5. Prediction of Airfoil Characteristics With Higher Order Turbulence Models

    NASA Technical Reports Server (NTRS)

    Gatski, Thomas B.

    1996-01-01

    This study focuses on the prediction of airfoil characteristics, including lift and drag over a range of Reynolds numbers. Two different turbulence models, which represent two different types of models, are tested. The first is a standard isotropic eddy-viscosity two-equation model, and the second is an explicit algebraic stress model (EASM). The turbulent flow field over a general-aviation airfoil (GA(W)-2) at three Reynolds numbers is studied. At each Reynolds number, predicted lift and drag values at different angles of attack are compared with experimental results, and predicted variations of stall locations with Reynolds number are compared with experimental data. Finally, the size of the separation zone predicted by each model is analyzed, and correlated with the behavior of the lift coefficient near stall. In summary, the EASM model is able to predict the lift and drag coefficients over a wider range of angles of attack than the two-equation model for the three Reynolds numbers studied. However, both models are unable to predict the correct lift and drag behavior near the stall angle, and for the lowest Reynolds number case, the two-equation model did not predict separation on the airfoil near stall.

  6. Fuzzy time series forecasting model with natural partitioning length approach for predicting the unemployment rate under different degree of confidence

    NASA Astrophysics Data System (ADS)

    Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud

    2017-08-01

    Fuzzy time series forecasting models have been proposed since 1993 to cater for data in linguistic values. Many improvements and modifications have been made to the model, such as enhancements to the length of intervals and the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms in the form of discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study: first-order and second-order fuzzy relations. The proposed model can produce forecasted values under different degrees of confidence.

  7. Validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, X. F.; Oswald, Fred B.

    1992-01-01

    Analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM modeling using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies that are due to limitations in the FEM model.

  8. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2011-01-01

    Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard (rather than the widely used default of 235 E. coli CFU/100 ml), together with predictive modeling, resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.

  9. Soil- and crop-dependent variation in correlation lag between precipitation and agricultural drought indices as predicted by the SWAP model

    NASA Astrophysics Data System (ADS)

    Wright, Azin; Cloke, Hannah; Verhoef, Anne

    2017-04-01

    Droughts have a devastating impact on agriculture and the economy. The risk of more frequent and more severe droughts is increasing due to global warming and certain anthropogenic activities. At the same time, the global population continues to rise and the need for sustainable food production is becoming more and more pressing. In light of this, drought prediction can be of great value in the context of early warning, preparedness and mitigation of drought impacts. Prediction of meteorological drought is associated with uncertainties around precipitation variability. As meteorological drought propagates, it can transform into agricultural drought. Determination of the maximum-correlation lag between precipitation and agricultural drought indices can therefore be useful for prediction of agricultural drought. However, the influence of soil and crop type on this lag needs to be considered, which we explored using the 1-D Soil-Vegetation-Atmosphere-Transfer model SWAP (http://www.swap.alterra.nl/) with the following configurations, all forced with ERA-Interim weather data (1979 to 2014): (i) different crop types in the UK; and (ii) three generic soil types (clay, loam and sand). A Sobol sensitivity analysis was carried out (perturbing the SWAP van Genuchten soil hydraulic parameters) to study the effect of soil type uncertainty on the water balance variables. Based on the sensitivity analysis results, a few variations of each soil type were selected. Agricultural drought indices, including the Soil Moisture Deficit Index (SMDI) and the Evapotranspiration Deficit Index (ETDI), were calculated. The maximum-correlation lag between precipitation and these drought indices was calculated and analysed in the context of crop and soil model parameters. The findings of this research can be useful to UK farming, by guiding government bodies such as the Environment Agency when issuing drought warnings and implementing drought measures.

  10. Modeling long-term human activeness using recurrent neural networks for biometric data.

    PubMed

    Kim, Zae Myung; Oh, Hyungrai; Kim, Han-Gyu; Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin

    2017-05-18

    With the invention of fitness trackers, it has been possible to continuously monitor a user's biometric data such as heart rate, number of footsteps taken, and amount of calories burned. This paper names the time series of these three types of biometric data the user's "activeness", and investigates the feasibility of modeling and predicting the long-term activeness of the user. The dataset used in this study consisted of several months of biometric time-series data gathered by seven users independently. Four recurrent neural network (RNN) architectures, as well as a deep neural network and a simple regression model, were proposed to investigate the performance on predicting the activeness of the user under various length-related hyper-parameter settings. In addition, the learned model was tested to predict the time period when the user's activeness falls below a certain threshold. A preliminary experimental result shows that each type of activeness data exhibited a short-term autocorrelation; and among the three types of data, the consumed calories and the number of footsteps were positively correlated, while the heart rate data showed almost no correlation with either of them. It is probably due to this characteristic of the dataset that although the RNN models produced the best results on modeling the user's activeness, the difference was marginal; other baseline models, especially the linear regression model, performed quite admirably as well. Further experimental results show that it is feasible to predict a user's future activeness with precision; for example, a trained RNN model could predict, with a precision of 84%, when the user would be less active within the next hour given the latest 15 min of his activeness data. This paper defines and investigates the notion of a user's "activeness", and shows that forecasting the long-term activeness of the user is indeed possible. Such information can be utilized by a health-related application to proactively recommend suitable events or services to the user.

  11. Population Dynamics and Flight Phenology Model of Codling Moth Differ between Commercial and Abandoned Apple Orchard Ecosystems.

    PubMed

    Joshi, Neelendra K; Rajotte, Edwin G; Naithani, Kusum J; Krawczyk, Greg; Hull, Larry A

    2016-01-01

    Apple orchard management practices may affect development and phenology of arthropod pests, such as the codling moth (CM), Cydia pomonella (L.) (Lepidoptera: Tortricidae), which is a serious internal fruit-feeding pest of apples worldwide. Estimating population dynamics and accurately predicting the timing of CM development and phenology events (for instance, adult flight, and egg-hatch) allows growers to understand and control local populations of CM. Studies were conducted to compare the CM flight phenology in commercial and abandoned apple orchard ecosystems using a logistic function model based on degree-days accumulation. The flight models for these orchards were derived from the cumulative percent moth capture using two types of commercially available CM lure baited traps. Models from both types of orchards were also compared to another model known as PETE (prediction extension timing estimator) that was developed in the 1970s to predict life cycle events for many fruit pests including CM across different fruit growing regions of the United States. We found that the flight phenology of CM was significantly different in commercial and abandoned orchards. CM male flight patterns for first and second generations as predicted by the constrained and unconstrained PCM (Pennsylvania Codling Moth) models in commercial and abandoned orchards were different than the flight patterns predicted by the currently used CM model (i.e., PETE model). In commercial orchards, during the first and second generations, the PCM unconstrained model predicted delays in moth emergence compared to the current model. In addition, the flight patterns of females were different between commercial and abandoned orchards. Such differences in CM flight phenology between commercial and abandoned orchard ecosystems suggest a potential impact of orchard environment and crop management practices on CM biology.
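    The degree-day-driven logistic flight model used above can be illustrated with a small curve fit: cumulative proportion of moth capture as a logistic function of accumulated degree-days. The degree-day values, capture fractions and starting parameters below are fabricated for the sketch.

```python
# Illustrative fit of a logistic flight-phenology curve to cumulative moth capture.
import numpy as np
from scipy.optimize import curve_fit

def logistic(dd, k, dd50):
    """Cumulative proportion of total flight at accumulated degree-days `dd`."""
    return 1.0 / (1.0 + np.exp(-k * (dd - dd50)))

degree_days = np.array([100, 200, 300, 400, 500, 600, 700, 800], dtype=float)
cum_capture = np.array([0.02, 0.08, 0.22, 0.45, 0.70, 0.88, 0.96, 0.99])

(k, dd50), _ = curve_fit(logistic, degree_days, cum_capture, p0=[0.01, 450])
print(f"Slope k = {k:.4f}, 50% flight at {dd50:.0f} degree-days")
# Invert the logistic to locate 25% cumulative flight: dd = dd50 - ln(3)/k.
print("Predicted 25% flight at", round(dd50 - np.log(3) / k), "degree-days")
```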

  12. Population Dynamics and Flight Phenology Model of Codling Moth Differ between Commercial and Abandoned Apple Orchard Ecosystems

    PubMed Central

    Joshi, Neelendra K.; Rajotte, Edwin G.; Naithani, Kusum J.; Krawczyk, Greg; Hull, Larry A.

    2016-01-01

    Apple orchard management practices may affect development and phenology of arthropod pests, such as the codling moth (CM), Cydia pomonella (L.) (Lepidoptera: Tortricidae), which is a serious internal fruit-feeding pest of apples worldwide. Estimating population dynamics and accurately predicting the timing of CM development and phenology events (for instance, adult flight, and egg-hatch) allows growers to understand and control local populations of CM. Studies were conducted to compare the CM flight phenology in commercial and abandoned apple orchard ecosystems using a logistic function model based on degree-days accumulation. The flight models for these orchards were derived from the cumulative percent moth capture using two types of commercially available CM lure baited traps. Models from both types of orchards were also compared to another model known as PETE (prediction extension timing estimator) that was developed in the 1970s to predict life cycle events for many fruit pests including CM across different fruit growing regions of the United States. We found that the flight phenology of CM was significantly different in commercial and abandoned orchards. CM male flight patterns for first and second generations as predicted by the constrained and unconstrained PCM (Pennsylvania Codling Moth) models in commercial and abandoned orchards were different than the flight patterns predicted by the currently used CM model (i.e., PETE model). In commercial orchards, during the first and second generations, the PCM unconstrained model predicted delays in moth emergence compared to the current model. In addition, the flight patterns of females were different between commercial and abandoned orchards. Such differences in CM flight phenology between commercial and abandoned orchard ecosystems suggest a potential impact of orchard environment and crop management practices on CM biology. PMID:27713702

  13. Application of ride quality technology to predict ride satisfaction for commuter-type aircraft

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.; Kuhlthau, A. R.; Richards, L. G.

    1975-01-01

    A method was developed to predict passenger satisfaction with the ride environment of a transportation vehicle. This general approach was applied to a commuter-type aircraft for illustrative purposes, and the effects of terrain, altitude and seat location were examined. The method predicts the proportion of passengers satisfied for any set of flight conditions. In addition, several non-commuter aircraft were analyzed for comparison, and other uses of the model are described. The method has advantages for design, evaluation, and operating decisions.

  14. Predicting ground contact events for a continuum of gait types: An application of targeted machine learning using principal component analysis.

    PubMed

    Osis, Sean T; Hettinga, Blayne A; Ferber, Reed

    2016-05-01

    An ongoing challenge in the application of gait analysis to clinical settings is the standardized detection of temporal events, with unobtrusive and cost-effective equipment, for a wide range of gait types. The purpose of the current study was to investigate a targeted machine learning approach for the prediction of timing for foot strike (or initial contact) and toe-off, using only kinematics for walking, forefoot running, and heel-toe running. Data were categorized by gait type and split into a training set (∼30%) and a validation set (∼70%). A principal component analysis was performed, and separate linear models were trained and validated for foot strike and toe-off, using ground reaction force data as a gold-standard for event timing. Results indicate the model predicted both foot strike and toe-off timing to within 20ms of the gold-standard for more than 95% of cases in walking and running gaits. The machine learning approach continues to provide robust timing predictions for clinical use, and may offer a flexible methodology to handle new events and gait types. Copyright © 2016 Elsevier B.V. All rights reserved.
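    In the spirit of the targeted approach above, the sketch below compresses a kinematic feature matrix with PCA and fits a linear model to predict an event-timing offset, then reports how often the prediction lands within 20 ms. The kinematic matrix, targets and component count are placeholders rather than the study's data or tuned settings.

```python
# PCA + linear model sketch for predicting gait-event timing from kinematics.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 60))                  # e.g. stacked joint angles per stride
y = X[:, :5] @ rng.normal(size=5) * 5 + rng.normal(0, 5, 400)   # event offset in ms

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=10), LinearRegression()).fit(X_tr, y_tr)

err_ms = np.abs(model.predict(X_te) - y_te)
print("Share of events predicted within 20 ms:", np.mean(err_ms <= 20).round(3))
```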

  15. Predicting crash frequency for multi-vehicle collision types using multivariate Poisson-lognormal spatial model: A comparative analysis.

    PubMed

    Hosseinpour, Mehdi; Sahebi, Sina; Zamzuri, Zamira Hasanah; Yahaya, Ahmad Shukri; Ismail, Noriszura

    2018-06-01

    According to crash configuration and pre-crash conditions, traffic crashes are classified into different collision types. Based on the literature, multi-vehicle crashes, such as head-on, rear-end, and angle crashes, are more frequent than single-vehicle crashes, and most often result in serious consequences. From a methodological point of view, the majority of prior studies focused on multivehicle collisions have employed univariate count models to estimate crash counts separately by collision type. However, univariate models fail to account for correlations which may exist between different collision types. Among others, multivariate Poisson lognormal (MVPLN) model with spatial correlation is a promising multivariate specification because it not only allows for unobserved heterogeneity (extra-Poisson variation) and dependencies between collision types, but also spatial correlation between adjacent sites. However, the MVPLN spatial model has rarely been applied in previous research for simultaneously modelling crash counts by collision type. Therefore, this study aims at utilizing a MVPLN spatial model to estimate crash counts for four different multi-vehicle collision types, including head-on, rear-end, angle, and sideswipe collisions. To investigate the performance of the MVPLN spatial model, a two-stage model and a univariate Poisson lognormal model (UNPLN) spatial model were also developed in this study. Detailed information on roadway characteristics, traffic volume, and crash history were collected on 407 homogeneous segments from Malaysian federal roads. The results indicate that the MVPLN spatial model outperforms the other comparing models in terms of goodness-of-fit measures. The results also show that the inclusion of spatial heterogeneity in the multivariate model significantly improves the model fit, as indicated by the Deviance Information Criterion (DIC). The correlation between crash types is high and positive, implying that the occurrence of a specific collision type is highly associated with the occurrence of other crash types on the same road segment. These results support the utilization of the MVPLN spatial model when predicting crash counts by collision manner. In terms of contributing factors, the results show that distinct crash types are attributed to different subsets of explanatory variables. Copyright © 2018 Elsevier Ltd. All rights reserved.
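    For readers unfamiliar with the specification, a generic form of a multivariate Poisson-lognormal model with a spatial random effect is written below; the notation is chosen for this sketch and may differ from the paper's exact formulation.

```latex
% Generic multivariate Poisson-lognormal model with a spatial random effect
\begin{align}
  y_{ik} &\sim \mathrm{Poisson}(\lambda_{ik}), \qquad k = 1,\dots,K \ \text{collision types},\\
  \ln \lambda_{ik} &= \mathbf{x}_i^{\top}\boldsymbol{\beta}_k + \varepsilon_{ik} + \phi_{ik},\\
  \boldsymbol{\varepsilon}_i &= (\varepsilon_{i1},\dots,\varepsilon_{iK})^{\top} \sim \mathrm{MVN}(\mathbf{0},\boldsymbol{\Sigma}),
\end{align}
where $\phi_{ik}$ is a spatially structured (e.g.\ conditional autoregressive) effect that induces correlation between adjacent road segments.
```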

  16. Incorporating uncertainty in predictive species distribution modelling.

    PubMed

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  17. FUN-LDA: A Latent Dirichlet Allocation Model for Predicting Tissue-Specific Functional Effects of Noncoding Variation: Methods and Applications.

    PubMed

    Backenroth, Daniel; He, Zihuai; Kiryluk, Krzysztof; Boeva, Valentina; Pethukova, Lynn; Khurana, Ekta; Christiano, Angela; Buxbaum, Joseph D; Ionita-Laza, Iuliana

    2018-05-03

    We describe a method based on a latent Dirichlet allocation model for predicting functional effects of noncoding genetic variants in a cell-type- and/or tissue-specific way (FUN-LDA). Using this unsupervised approach, we predict tissue-specific functional effects for every position in the human genome in 127 different tissues and cell types. We demonstrate the usefulness of our predictions by using several validation experiments. Using eQTL data from several sources, including the GTEx project, Geuvadis project, and TwinsUK cohort, we show that eQTLs in specific tissues tend to be most enriched among the predicted functional variants in relevant tissues in Roadmap. We further show how these integrated functional scores can be used for (1) deriving the most likely cell or tissue type causally implicated for a complex trait by using summary statistics from genome-wide association studies and (2) estimating a tissue-based correlation matrix of various complex traits. We found large enrichment of heritability in functional components of relevant tissues for various complex traits, and FUN-LDA yielded higher enrichment estimates than existing methods. Finally, using experimentally validated functional variants from the literature and variants possibly implicated in disease by previous studies, we rigorously compare FUN-LDA with state-of-the-art functional annotation methods and show that FUN-LDA has better prediction accuracy and higher resolution than these methods. In particular, our results suggest that tissue- and cell-type-specific functional prediction methods tend to have substantially better prediction accuracy than organism-level prediction methods. Scores for each position in the human genome and for each ENCODE and Roadmap tissue are available online (see Web Resources). Copyright © 2018 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  18. Response of Douglas-fir advance regeneration to overstory removal

    Treesearch

    J. Chris Maranto; Dennis E. Ferguson; David L. Adams

    2008-01-01

    A statistical model is presented that predicts periodic height growth for released Pseudotsuga menziesii var. glauca [Beissn.] Franco advance regeneration in central Idaho. Individual tree and site variables were used to construct a model that predicts 5-year height growth for years 6 through 10 after release. Habitat type and height growth prior to...

  19. How predictable is the anomaly pattern of the Indian summer rainfall?

    NASA Astrophysics Data System (ADS)

    Li, Juan; Wang, Bin

    2016-05-01

    Century-long efforts have been devoted to seasonal forecast of Indian summer monsoon rainfall (ISMR). Most studies of seasonal forecast so far have focused on predicting the total amount of summer rainfall averaged over all of India (i.e., the all-Indian rainfall index, AIRI). However, it is practically more useful to forecast the anomalous seasonal rainfall distribution (anomaly pattern) across India. The unknown science question is to what extent the anomalous rainfall pattern is predictable. This study attempted to address this question. Assessment of the 46-year (1960-2005) hindcast made by the five state-of-the-art ENSEMBLE coupled dynamic models' multi-model ensemble (MME) prediction reveals that the temporal correlation coefficient (TCC) skill for prediction of AIRI is 0.43, while the area-averaged TCC skill for prediction of the anomalous rainfall pattern is only 0.16. The present study aims to estimate the predictability of ISMR on regional scales by using the Predictable Mode Analysis method and to develop a set of physics-based empirical (P-E) models for prediction of the ISMR anomaly pattern. We show that the first three observed empirical orthogonal function (EOF) patterns of the ISMR have their distinct dynamical origins rooted in an eastern Pacific-type La Niña, a central Pacific-type La Niña, and a cooling center near the dateline, respectively. These equatorial Pacific sea surface temperature anomalies, while located at different longitudes, can all set up a specific teleconnection pattern that affects the Indian monsoon and results in different rainfall EOF patterns. Furthermore, the dynamical models' skill for predicting the ISMR distribution comes primarily from these three modes. Therefore, these modes can be regarded as potentially predictable modes. If these modes are perfectly predicted, about 51% of the total observed variability is potentially predictable. Based on understanding the lead-lag relationships between the lower boundary anomalies and the predictable modes, a set of P-E models is established to predict the principal component of each predictable mode, so that the ISMR anomaly pattern can be predicted using the sum of the predictable modes. Three validation schemes are used to assess the performance of the P-E models' hindcast and independent forecast. The validated TCC skills of the P-E models here are more than double those of the dynamical models' MME hindcast, suggesting large room for improvement of current dynamical prediction. The methodology proposed here can be applied to a wide range of climate prediction and predictability studies. The limitations and future improvements are also discussed.

  20. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean's memory, owing to its large heat capacity, holds considerable potential for skill. In recent years, more precise initialization techniques of coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble, and evaluating the whole ensemble and its ensemble average, rather than a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, which applies the ensemble average during the model run and is called the ensemble dispersion filter, yields more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.
Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_8");'>8</a></li> <li><a href="#" onclick='return showDiv("page_9");'>9</a></li> <li class="active"><span>10</span></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_10 --> <div id="page_11" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_9");'>9</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li class="active"><span>11</span></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="201"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27396932','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27396932"><span>Predictive models in cancer management: A guide for clinicians.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kazem, Mohammed Ali</p> <p>2017-04-01</p> <p>Predictive tools in cancer management are used to predict different outcomes including survival probability or risk of recurrence. The uptake of these tools by clinicians involved in cancer management has not been as common as other clinical tools, which may be due to the complexity of some of these tools or a lack of understanding of how they can aid decision-making in particular clinical situations. The aim of this article is to improve clinicians' knowledge and understanding of predictive tools used in cancer management, including how they are built, how they can be applied to medical practice, and what their limitations may be. Literature review was conducted to investigate the role of predictive tools in cancer management. All predictive models share similar characteristics, but depending on the type of the tool its ability to predict an outcome will differ. Each type has its own pros and cons, and its generalisability will depend on the cohort used to build the tool. These factors will affect the clinician's decision whether to apply the model to their cohort or not. Before a model is used in clinical practice, it is important to appreciate how the model is constructed, what its use may add over and above traditional decision-making tools, and what problems or limitations may be associated with it. Understanding all the above is an important step for any clinician who wants to decide whether or not use predictive tools in their practice. Copyright © 2016 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. 
  202. A link prediction method for heterogeneous networks based on BP neural network

    NASA Astrophysics Data System (ADS)

    Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu

    2018-04-01

    Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as various complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendations and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links for heterogeneous networks. More specifically, the concept of meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, the solution algorithm of the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on example datasets of a gene-disease network and a combat network are conducted to verify the effectiveness and feasibility of the proposed MPBP. The results show that MPBP achieves very good performance and is superior to the baseline methods.
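A rough sketch of the supervised step in a meta-path-based link predictor follows. It assumes meta-path counts have already been extracted into a feature matrix; the single hidden layer of scikit-learn's MLPClassifier stands in for the paper's three-layer BP network, and all data below are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in for meta-path features: each row describes one candidate
# node pair, each column counts paths of one meta-path type between them.
n_pairs, n_meta_paths = 2000, 6
X = rng.poisson(lam=2.0, size=(n_pairs, n_meta_paths)).astype(float)
# Toy labelling rule: pairs connected by many short meta-paths tend to link.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n_pairs) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One hidden layer as a stand-in for a small BP (back-propagation) network.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print("link-prediction AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```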
  203. African American Female Offender's Use of Alternative and Traditional Health Services After Re-Entry: Examining the Behavioral Model for Vulnerable Populations.

    PubMed

    Oser, Carrie B; Bunting, Amanda M; Pullen, Erin; Stevens-Watkins, Danelle

    2016-01-01

    This is the first known study to use the Gelberg-Andersen Behavioral Model for Vulnerable Populations to predict African American women's use of three types of health services (alternative, hospitalization, and ambulatory) in the 18 months after release from prison. In the multivariate models, the most robust predictors of all three types of service utilization were in the vulnerable theoretical domains. Alternative health services were predicted by ethnic community membership, higher religiosity, and HIV/HCV. Hospitalizations were predicted by the lack of barriers to health care and disability. Ambulatory office visits were predicted by more experiences of gendered racism, a greater number of physical health problems, and HIV/HCV. Findings highlight the importance of cultural factors and HIV/HCV in obtaining both alternative and formal health care during community re-entry. Clinicians and policymakers should consider the salient role that the vulnerable domain plays in offenders' access to health services.

  205. A novel method for predicting kidney stone type using ensemble learning.

    PubMed

    Kazemi, Yassaman; Mirroshandel, Seyed Abolghasem

    2018-01-01

    The high morbidity rate associated with kidney stone disease, which is a silent killer, is one of the main concerns in healthcare systems all over the world. Advanced data mining techniques such as classification can help in the early prediction of this disease and reduce its incidence and associated costs. The objective of the present study is to derive a model for the early detection of kidney stone type and the most influential parameters, with the aim of providing a decision-support system. Information was collected from 936 patients with nephrolithiasis at the kidney center of the Razi Hospital in Rasht from 2012 through 2016. The prepared dataset included 42 features. Data pre-processing was the first step toward extracting the relevant features. The collected data were analyzed with Weka software, and various data mining models were used to prepare a predictive model. Various data mining algorithms such as the Bayesian model, different types of Decision Trees, Artificial Neural Networks, and Rule-based classifiers were used in these models.
We also proposed four models based on ensemble learning to improve the accuracy of each learning algorithm. In addition, a novel technique for combining individual classifiers in ensemble learning was proposed, in which a weight is assigned to each individual classifier using our proposed genetic algorithm based method. The generated knowledge was evaluated using a 10-fold cross-validation technique based on standard measures. However, the assessment of each feature for building a predictive model was another significant challenge, and the predictive strength of each feature for creating a reproducible outcome was also investigated. Regarding the applied models, parameters such as sex, uric acid condition, calcium level, hypertension, diabetes, nausea and vomiting, flank pain, and urinary tract infection (UTI) were the most vital parameters for predicting the chance of nephrolithiasis. The final ensemble-based model (with an accuracy of 97.1%) was a robust one and could be safely applied in future studies to predict the chance of developing nephrolithiasis. This model provides a novel way to study stone disease by deciphering the complex interaction among different biological variables, thus helping in early identification and a reduction in diagnosis time. Copyright © 2017 Elsevier B.V. All rights reserved.
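The weighted combination of heterogeneous classifiers described above can be sketched with scikit-learn's soft-voting ensemble. The fixed weights below stand in for the genetic-algorithm search used in the paper, and the dataset is synthetic rather than the clinical data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 42-feature kidney stone dataset.
X, y = make_classification(n_samples=900, n_features=42, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Base learners of the kinds named in the abstract (Bayesian, tree, neural network).
ensemble = VotingClassifier(
    estimators=[
        ("bayes", GaussianNB()),
        ("tree", DecisionTreeClassifier(max_depth=6, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
    ],
    voting="soft",
    weights=[1.0, 2.0, 3.0],  # placeholder weights; the paper tunes these with a genetic algorithm
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```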
  206. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    PubMed

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

    In this article, we introduce an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and were also influenced by the number of hidden neurons. The order of the solar generation power output influenced by the external conditions, from smallest to largest, is: multi-, mono-, and amor- crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For multi- and amorphous crystalline cells, three or four hidden layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with eight hidden layer units.

  207. A review of predictive coding algorithms.

    PubMed

    Spratling, M W

    2017-03-01

    Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding, which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology. Copyright © 2016 Elsevier Inc. All rights reserved.

  208. Prediction of five-year all-cause mortality in Chinese patients with type 2 diabetes mellitus - A population-based retrospective cohort study.

    PubMed

    Wan, Eric Yuk Fai; Fong, Daniel Yee Tak; Fung, Colman Siu Cheung; Yu, Esther Yee Tak; Chin, Weng Yee; Chan, Anca Ka Chun; Lam, Cindy Lo Kuen

    2017-06-01

    This study aimed to develop and validate an all-cause mortality risk prediction model for Chinese primary care patients with type 2 diabetes mellitus (T2DM) in Hong Kong. A population-based retrospective cohort study was conducted on 132,462 Chinese patients who had received public primary care services during 2010. Each gender sample was randomly split on a 2:1 basis into derivation and validation cohorts and was followed up for a median period of 5 years. Gender-specific mortality risk prediction models showing the interaction effect between predictors and age were derived using Cox proportional hazards regression with a forward stepwise approach. The developed models were compared with pre-existing models by Harrell's C-statistic and calibration plots using the validation cohort. Common predictors of increased mortality risk in both genders included: age; smoking habit; diabetes duration; use of anti-hypertensive agents, insulin and lipid-lowering drugs; body mass index; hemoglobin A1c; systolic blood pressure (BP); total cholesterol to high-density lipoprotein-cholesterol ratio; urine albumin to creatinine ratio (urine ACR); and estimated glomerular filtration rate (eGFR). The prediction models showed better discrimination, with Harrell's C-statistics of 0.768 (males) and 0.782 (females), and better calibration in the plots than previously established models. Our newly developed gender-specific models provide a more accurate predicted 5-year mortality risk for Chinese diabetic patients than other established models. Copyright © 2017 Elsevier Inc. All rights reserved.
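A minimal sketch of the modelling step in a study like the one above: fit a gender-specific Cox proportional hazards model and report Harrell's C-statistic. It assumes the lifelines package and uses synthetic data, so the column names, effect sizes, and follow-up scheme are illustrative only, not the published model.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000

# Synthetic cohort with a few of the predictors named in the abstract.
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "hba1c": rng.normal(7.5, 1.2, n),
    "sbp": rng.normal(135, 15, n),
    "smoker": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
})
# Toy survival times whose hazard rises with age, HbA1c, and smoking.
risk = 0.03 * (df["age"] - 65) + 0.2 * (df["hba1c"] - 7.5) + 0.3 * df["smoker"]
df["time"] = rng.exponential(scale=5 * np.exp(-risk))
df["event"] = (df["time"] < 5).astype(int)   # deaths observed within 5 years
df["time"] = df["time"].clip(upper=5)        # administrative censoring at 5 years

# Gender-specific models, as in the abstract.
for sex, group in df.groupby("female"):
    cph = CoxPHFitter()
    cph.fit(group.drop(columns="female"), duration_col="time", event_col="event")
    label = "female" if sex == 1 else "male"
    print(label, "Harrell's C:", round(cph.concordance_index_, 3))
```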
  209. The CHOP postnatal weight gain, birth weight, and gestational age retinopathy of prematurity risk model.

    PubMed

    Binenbaum, Gil; Ying, Gui-Shuang; Quinn, Graham E; Huang, Jiayan; Dreiseitl, Stephan; Antigua, Jules; Foroughi, Negar; Abbasi, Soraya

    2012-12-01

    To develop a birth weight (BW), gestational age (GA), and postnatal-weight-gain retinopathy of prematurity (ROP) prediction model in a cohort of infants meeting current screening guidelines. Multivariate logistic regression was applied retrospectively to data from infants born with BW less than 1501 g or GA of 30 weeks or less at a single Philadelphia hospital between January 1, 2004, and December 31, 2009. In the model, BW, GA, and daily weight gain rate were used repeatedly each week to predict the risk of Early Treatment of Retinopathy of Prematurity type 1 or 2 ROP. If the risk was above a cut-point level, examinations would be indicated. Of 524 infants, 20 (4%) had type 1 ROP and received laser treatment; 28 (5%) had type 2 ROP. The model (Children's Hospital of Philadelphia [CHOP]) accurately predicted all infants with type 1 ROP; missed 1 infant with type 2 ROP, who did not require laser treatment; and would have reduced the number of infants requiring examinations by 49%. Raising the cut point to miss one type 1 ROP case would have reduced the need for examinations by 79%. Using daily weight measurements to calculate the weight gain rate resulted in a slightly higher reduction in examinations than weekly measurements. The BW-GA-weight gain CHOP ROP model demonstrated accurate ROP risk assessment and a large reduction in the number of ROP examinations compared with current screening guidelines. As a simple logistic equation, it can be calculated by hand or represented as a nomogram for easy clinical use. However, larger studies are needed to achieve a highly precise estimate of sensitivity prior to clinical application.
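Because the model above is "a simple logistic equation", its general form is easy to illustrate. The intercept, coefficients, and cut point in this sketch are placeholders chosen for illustration, not the published CHOP coefficients.

```python
import math

def rop_risk(birth_weight_g: float, gest_age_wk: float, weight_gain_g_per_day: float) -> float:
    """Logistic risk score of the general BW-GA-weight-gain form.

    The intercept and coefficients below are hypothetical placeholders,
    NOT the published CHOP model coefficients.
    """
    b0, b_bw, b_ga, b_gain = 4.0, -0.003, -0.15, -0.05   # hypothetical values
    z = b0 + b_bw * birth_weight_g + b_ga * gest_age_wk + b_gain * weight_gain_g_per_day
    return 1.0 / (1.0 + math.exp(-z))

RISK_CUT_POINT = 0.014   # hypothetical alarm threshold

risk = rop_risk(birth_weight_g=750, gest_age_wk=26, weight_gain_g_per_day=12)
print(f"predicted ROP risk: {risk:.3f}",
      "-> examine" if risk >= RISK_CUT_POINT else "-> defer")
```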
  210. The Potential for Predicting Precipitation on Seasonal-to-Interannual Timescales

    NASA Technical Reports Server (NTRS)

    Koster, R. D.

    1999-01-01

    The ability to predict precipitation several months in advance would have a significant impact on water resource management. This talk provides an overview of a project aimed at developing this prediction capability. NASA's Seasonal-to-Interannual Prediction Project (NSIPP) will generate seasonal-to-interannual sea surface temperature predictions through detailed ocean circulation modeling and will then translate these SST forecasts into forecasts of continental precipitation through the application of an atmospheric general circulation model and a "SVAT"-type land surface model. As part of the process, ocean variables (e.g., height) and land variables (e.g., soil moisture) will be updated regularly via data assimilation. The overview will include a discussion of the variability inherent in such a modeling system and will provide some quantitative estimates of the absolute upper limits of seasonal-to-interannual precipitation predictability.

  211. The Use of Twitter to Predict the Level of Influenza Activity in the United States

    DTIC Science & Technology

    2014-09-01

    [No abstract available; only table-of-contents fragments were indexed, covering positivity for influenza type A or B, influenza-associated hospitalizations, a model for predicting the number of influenza-associated hospitalizations, and predicted versus actual rates of influenza-associated hospitalizations per 100,000 population.]

  212. The statistical geometry of transcriptome divergence in cell-type evolution and cancer.

    PubMed

    Liang, Cong; Forrest, Alistair R R; Wagner, Günter P

    2015-01-14

    In evolution, body plan complexity increases due to an increase in the number of individualized cell types. Yet, there is very little understanding of the mechanisms that produce this form of organismal complexity. One model for the origin of novel cell types is the sister cell-type model. According to this model, each cell type arises together with a sister cell type through specialization from an ancestral cell type. A key prediction of the sister cell-type model is that gene expression profiles of cell types exhibit tree structure. Here we present a statistical model for detecting tree structure in transcriptomic data and apply it to transcriptomes from ENCODE and FANTOM5. We show that transcriptomes of normal cells harbour substantial amounts of hierarchical structure. In contrast, cancer cell lines have less tree structure, suggesting that the emergence of cancer cells follows different principles from that of evolutionary cell-type origination.
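The paper above uses its own statistical model for detecting tree structure; as a much simpler, generic proxy for the same question, the sketch below builds a hierarchical clustering of synthetic expression profiles and reports the cophenetic correlation, which measures how well a tree reproduces the pairwise distances. All data and the choice of proxy statistic are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import cophenet, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

def simulate_tree_profiles(depth=4, n_genes=200, noise=0.5):
    """Recursively split an ancestral profile so that a tree structure exists."""
    profiles = [rng.normal(size=n_genes)]
    for _ in range(depth):
        profiles = [p + rng.normal(scale=noise, size=n_genes)
                    for p in profiles for _ in range(2)]
    return np.array(profiles)

expr = simulate_tree_profiles(depth=4)[:12]   # 12 synthetic "cell types" x 200 genes

# Cophenetic correlation: how faithfully a hierarchical clustering (a tree)
# reproduces the pairwise distances -- a crude proxy for "tree-ness".
Z = linkage(expr, method="average")
c, _ = cophenet(Z, pdist(expr))
print("cophenetic correlation (tree-structure proxy):", round(c, 3))
```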
  213. Knowledge-driven genomic interactions: an application in ovarian cancer.

    PubMed

    Kim, Dokyoon; Li, Ruowang; Dudek, Scott M; Frase, Alex T; Pendergrass, Sarah A; Ritchie, Marylyn D

    2014-01-01

    Effective cancer clinical outcome prediction for understanding the mechanisms of various types of cancer has been pursued using molecular-based data such as gene expression profiles, an approach that has promise for providing better diagnostics and supporting further therapies. However, clinical outcome prediction based on gene expression profiles varies between independent data sets. Further, single-gene expression outcome prediction is limited for cancer evaluation, since genes do not act in isolation but rather interact with other genes in complex signaling or regulatory networks. In addition, since pathways are more likely to co-operate together, it would be desirable to incorporate expert knowledge to combine pathways in a useful and informative manner. Thus, we propose a novel approach for identifying knowledge-driven genomic interactions and apply it to discover models associated with cancer clinical phenotypes using grammatical evolution neural networks (GENN). To demonstrate the utility of the proposed approach, ovarian cancer data from the Cancer Genome Atlas (TCGA) was used to predict clinical stage as a pilot project. We identified knowledge-driven genomic interactions associated with cancer stage not only from single knowledge bases, such as sources of pathway-pathway interaction, but also across different sets of knowledge bases, such as pathway-protein family interactions, by integrating different types of information. Notably, an integration model from different sources of biological knowledge achieved 78.82% balanced accuracy and outperformed the top models with gene expression or single knowledge-based data types alone. Furthermore, the results from the models are more interpretable because they are framed in the context of specific biological pathways or other expert knowledge. The success of the pilot study presented herein will allow us to pursue further identification of models predictive of clinical cancer survival and recurrence. Understanding the underlying tumorigenesis and progression in ovarian cancer through a global view of interactions within and between different biological knowledge sources has the potential to provide more effective screening strategies and therapeutic targets for many types of cancer.
  214. Using a prescribed fire to test custom and standard fuel models for fire behaviour prediction in a non-native, grass-invaded tropical dry shrubland

    Treesearch

    Andrew D. Pierce; Sierra McDaniel; Mark Wasser; Alison Ainsworth; Creighton M. Litton; Christian P. Giardina; Susan Cordell; Ralf Ohlemuller

    2014-01-01

    Questions: Do fuel models developed for North American fuel types accurately represent fuel beds found in grass-invaded tropical shrublands? Do standard or custom fuel models for fire-behavior models, with in situ or RAWS-measured fuel moistures, affect the accuracy of predicted fire behavior in grass-invaded tropical shrublands? Location: Hawai’i Volcanoes National...

  215. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  216. Testing models of parental investment strategy and offspring size in ants.

    PubMed

    Gilboa, Smadar; Nonacs, Peter

    2006-01-01

    Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions. Dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size.
To test the latter prediction, we used simulations to detect significant departures from normality and categorize distribution types. Data from three species of ants strongly support the predicted patterns for dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.

  217. Mass gathering medicine: a predictive model for patient presentation and transport rates.

    PubMed

    Arbon, P; Bridgewater, F H; Smith, C

    2001-01-01

    This paper reports on research into the influence of environmental factors (including crowd size, temperature, humidity, and venue type) on the number of patients and the types of patient problems presenting to first-aid services at large public events in Australia. Regression models were developed to predict rates of patient presentation and of transportation to a hospital for future mass gatherings. The aim was to develop a data set and predictive model that can be applied across venues and types of mass gathering events and that is not venue or event specific. The data collected allow informed event planning for future mass gatherings for which health care services are required. Mass gatherings were defined as public events attended by in excess of 25,000 people. Over a period of 12 months, 201 mass gatherings attended by a combined audience in excess of 12 million people were surveyed throughout Australia. The survey was undertaken by St. John Ambulance Australia personnel. The researchers collected data on the incidence and type of patients presenting for treatment and on the environmental factors that may influence these presentations. A standard reporting format and definition of event geography were employed to overcome the event-specific nature of many previous surveys. There were 11,956 patients in the sample. The patient presentation rate across all event types was 0.992/1,000 attendees, and the transportation-to-hospital rate was 0.027/1,000 persons in attendance. The rates of patient presentation declined slightly as crowd sizes increased. The weather (particularly the relative humidity) was positively related to an increase in the rates of presentation. Other factors that influenced the number and type of patients presenting were the mobility of the crowd, the availability of alcohol, the enclosure of the event by a boundary, and the number of patient-care personnel on duty. Three regression models were developed to predict presentation rates at future events. Several features of the event environment influence patient presentation rates, and the prediction of patient load at these events is complex and multifactorial. The use of regression modeling and close attention to existing historical data for an event can improve planning and the provision of health care services at mass gatherings.
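A sketch of the kind of regression modelling described above follows, fitting an ordinary least squares model of presentation rate on event-level predictors and using it to forecast an upcoming event. The data, variable names, and coefficients are synthetic placeholders; the study's actual models are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 201   # one row per surveyed event

# Synthetic event-level data loosely mirroring the predictors in the abstract.
events = pd.DataFrame({
    "crowd_size": rng.integers(25_000, 120_000, n),
    "humidity": rng.uniform(20, 95, n),
    "temperature": rng.uniform(10, 38, n),
    "bounded": rng.integers(0, 2, n),          # event enclosed by a boundary
    "alcohol": rng.integers(0, 2, n),          # alcohol available
})
# Toy outcome: presentations per 1,000 attendees.
events["presentation_rate"] = (
    1.0 + 0.004 * events["humidity"] - 0.000002 * events["crowd_size"]
    + 0.1 * events["alcohol"] + rng.normal(scale=0.15, size=n)
)

model = smf.ols(
    "presentation_rate ~ crowd_size + humidity + temperature + bounded + alcohol",
    data=events,
).fit()

upcoming = pd.DataFrame({"crowd_size": [60_000], "humidity": [80.0],
                         "temperature": [30.0], "bounded": [1], "alcohol": [1]})
print("predicted presentations per 1,000 attendees:",
      round(float(model.predict(upcoming).iloc[0]), 3))
```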
  218. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model, which is to estimate the error field in EXTRAP T2R. Two potential methods that can estimate the error field are discussed. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.

  219. Exploring unobserved heterogeneity in bicyclists' red-light running behaviors at different crossing facilities.

    PubMed

    Guo, Yanyong; Li, Zhibin; Wu, Yao; Xu, Chengcheng

    2018-06-01

    Bicyclists running the red light at crossing facilities increase the potential of colliding with motor vehicles. Exploring the contributing factors could improve the prediction of red-light running probability and help develop countermeasures to reduce such behaviors. However, individuals can have unobserved heterogeneity in running a red light, which makes accurate prediction more challenging. Traditional models assume that factor parameters are fixed and cannot capture the varying impacts on red-light running behaviors. In this study, we employed a full Bayesian random-parameters logistic regression approach to account for the unobserved heterogeneous effects. Two types of crossing facilities were considered: signalized intersection crosswalks and road segment crosswalks. Electric and conventional bikes were distinguished in the modeling. Data were collected from 16 crosswalks in the urban area of Nanjing, China. Factors such as individual characteristics, road geometric design, environmental features, and traffic variables were examined.
Model comparison indicates that the full Bayesian random-parameters logistic regression approach is statistically superior to the standard logistic regression model. More red-light runners are predicted at signalized intersection crosswalks than at road segment crosswalks. Factors affecting red-light running behaviors are gender, age, bike type, road width, presence of a raised median, separation width, signal type, green ratio, bike and vehicle volume, and average vehicle speed. Factors associated with the unobserved heterogeneity are gender, bike type, signal type, separation width, and bike volume. Copyright © 2018 Elsevier Ltd. All rights reserved.

  220. University of North Carolina Caries Risk Assessment Study: comparisons of high risk prediction, any risk prediction, and any risk etiologic models.

    PubMed

    Beck, J D; Weintraub, J A; Disney, J A; Graves, R C; Stamm, J W; Kaste, L M; Bohannan, H M

    1992-12-01

    The purpose of this analysis is to compare three different statistical models for predicting children likely to be at risk of developing dental caries over a 3-yr period. Data are based on 4117 children who participated in the University of North Carolina Caries Risk Assessment Study, a longitudinal study conducted in the Aiken, South Carolina, and Portland, Maine areas. The three models differed with respect to either the types of variables included or the definition of disease outcome. The two "Prediction" models included both risk factor variables thought to cause dental caries and indicator variables that are associated with dental caries but are not thought to be causal for the disease. The "Etiologic" model included only etiologic factors as variables. A dichotomous outcome measure--none or any 3-yr increment--was used in the "Any Risk Etiologic" model and the "Any Risk Prediction" model. Another outcome, based on a gradient measure of disease, was used in the "High Risk Prediction" model. The variables that are significant in these models vary across grades and sites, but are more consistent in the Etiologic model than in the Prediction models. However, among the three sets of models, the Any Risk Prediction models have the highest sensitivity and positive predictive values, whereas the High Risk Prediction models have the highest specificity and negative predictive values. Considerations in determining model preference are discussed.
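The comparison above hinges on sensitivity, specificity, and predictive values. A small helper like the following shows how these figures are computed from a model's predictions against observed 3-yr caries increments; the counts are entirely hypothetical and are not results from the study.

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening metrics from a 2x2 table of predicted vs observed caries increment."""
    return {
        "sensitivity": tp / (tp + fn),   # true cases correctly flagged as at risk
        "specificity": tn / (tn + fp),   # non-cases correctly classified as low risk
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Entirely hypothetical 2x2 counts for one risk model applied to 4117 children.
metrics = screening_metrics(tp=820, fp=610, fn=180, tn=2507)
print({k: round(v, 2) for k, v in metrics.items()})
```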
  221. Prediction of biodiversity hotspots in the Anthropocene: The case of veteran oaks.

    PubMed

    Skarpaas, Olav; Blumentrath, Stefan; Evju, Marianne; Sverdrup-Thygeson, Anne

    2017-10-01

    Over the past centuries, humans have transformed large parts of the biosphere, and there is a growing need to understand and predict the distribution of biodiversity hotspots influenced by the presence of humans. Our basic hypothesis is that human influence in the Anthropocene is ubiquitous, and we predict that biodiversity hotspot modeling can be improved by addressing three challenges raised by the increasing ecological influence of humans: (i) anthropogenically modified responses to individual ecological factors, (ii) fundamentally different processes and predictors in landscape types shaped by different land use histories and (iii) a multitude and complexity of natural and anthropogenic processes that may require many predictors and even multiple models in different landscape types. We modeled the occurrence of veteran oaks in Norway and found, in accordance with our basic hypothesis and predictions, that humans influence the distribution of veteran oaks throughout its range, but in different ways in forests and open landscapes. In forests, geographical and topographic variables related to the oak niche are still important, but the occurrence of veteran oaks is shifted toward steeper slopes, where logging is difficult. In open landscapes, land cover variables are more important, and veteran oaks are more common toward the north than expected from the fundamental oak niche. In both landscape types, multiple predictor variables representing ecological and human-influenced processes were needed to build a good model, and several models performed almost equally well.
Models accounting for the different anthropogenic influences on landscape structure and processes consistently performed better than models based exclusively on natural biogeographical and ecological predictors. Thus, our results for veteran oaks clearly illustrate the challenges to distribution modeling raised by the ubiquitous influence of humans, even in a moderately populated region, but also show that predictions can be improved by explicitly addressing these anthropogenic complexities.

  222. Determination of Highly Sensitive Biological Cell Model Systems to Screen BPA-Related Health Hazards Using Pathway Studio.

    PubMed

    Ryu, Do-Yeal; Rahman, Md Saidur; Pang, Myung-Geol

    2017-09-06

    Bisphenol-A (BPA) is a ubiquitous endocrine-disrupting chemical. Recently, many issues have arisen surrounding the disease pathogenesis of BPA. Therefore, several studies have been conducted to investigate the proteomic biomarkers of BPA that are associated with disease processes. However, studies on identifying highly sensitive biological cell model systems for determining BPA health risk are lacking. Here, we determined suitable cell model systems and potential biomarkers for predicting BPA-mediated disease using the bioinformatics tool Pathway Studio. We compiled known BPA-mediated diseases in humans, which were categorized into five major types. Subsequently, we investigated the differentially expressed proteins following BPA exposure in several cell types, and analyzed the efficacy of the altered proteins to investigate their associations with BPA-mediated diseases. Our results demonstrated that colon cancer cells (SW480), mammary gland cells, and Sertoli cells were highly sensitive biological model systems, because of their efficacy in predicting the majority of BPA-mediated diseases. We selected glucose-6-phosphate dehydrogenase (G6PD), cytochrome b-c1 complex subunit 1 (UQCRC1), and voltage-dependent anion-selective channel protein 2 (VDAC2) as highly sensitive biomarkers to predict BPA-mediated diseases. Furthermore, we summarized proteomic studies in spermatozoa following BPA exposure, which have recently been considered another suitable cell type for predicting BPA-mediated diseases.
  224. Modeling the effect of 3 missense AGXT mutations on dimerization of the AGT enzyme in primary hyperoxaluria type 1.

    PubMed

    Robbiano, Angela; Frecer, Vladimir; Miertus, Jan; Zadro, Cristina; Ulivi, Sheila; Bevilacqua, Elena; Mandrile, Giorgia; De Marchi, Mario; Miertus, Stanislav; Amoroso, Antonio

    2010-01-01

    Mutations of the AGXT gene encoding the alanine:glyoxylate aminotransferase liver enzyme (AGT) cause primary hyperoxaluria type 1 (PH1). Here we report a molecular modeling study of selected missense AGXT mutations: the common Gly170Arg variant and the recently described Gly47Arg and Ser81Leu variants, predicted to be pathogenic using standard criteria. Taking advantage of the refined 3D structure of AGT, we computed the dimerization energy of the wild-type and mutated proteins. Molecular modeling predicted that Gly47Arg affects dimerization, with an effect similar to that shown previously for Gly170Arg through classical biochemical approaches. In contrast, no effect on dimerization was predicted for Ser81Leu; this variant therefore probably exerts its pathogenic properties via a different mechanism, similar to that described for the adjacent Gly82Glu mutation, which affects pyridoxine binding. This study shows that the molecular modeling approach can contribute to evaluating the pathogenicity of some missense variants that affect dimerization. However, in silico studies aimed at assessing the relationship between structural change and biological effects require the integrated use of more than one tool.
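The quantity compared in the study above is a dimerization energy of the general form E_dim = E(dimer) - 2·E(monomer). The toy calculation below, with entirely hypothetical energy values rather than results from the study, shows how a destabilizing mutation would appear as a less negative dimerization energy.

```python
def dimerization_energy(e_dimer: float, e_monomer: float) -> float:
    """Dimerization energy for a homodimer: E_dim = E(dimer) - 2 * E(monomer).

    More negative values indicate a more stable dimer. All energies are in
    kcal/mol and purely illustrative.
    """
    return e_dimer - 2.0 * e_monomer

# Hypothetical single-point energies for a wild-type and a mutant protein.
wild_type = dimerization_energy(e_dimer=-1250.0, e_monomer=-600.0)   # -> -50.0
mutant = dimerization_energy(e_dimer=-1228.0, e_monomer=-600.0)      # -> -28.0

print("wild-type E_dim:", wild_type, "kcal/mol")
print("mutant    E_dim:", mutant, "kcal/mol")
print("destabilization (delta-delta E):", mutant - wild_type, "kcal/mol")  # positive = weaker dimer
```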
  225. Data Prediction for Public Events in Professional Domains Based on Improved RNN-LSTM

    NASA Astrophysics Data System (ADS)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    Traditional data services for predicting emergency or non-periodic events usually cannot generate satisfying results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model--an LSTM (Long Short-term Memory) dynamic prediction and a priori information sequence generation model--by combining RNN-LSTM with a priori information about public events. In prediction tasks, the model is capable of determining trends, and its accuracy has been validated. This model generates better performance and prediction results than the previous one. Using a priori information can increase the accuracy of prediction; LSTM can better adapt to changes in a time sequence; and LSTM can be widely applied to the same type of prediction tasks as well as other prediction tasks related to time sequences.
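A compact sketch of the core idea above: an LSTM fed with both the historical series and an extra channel of a priori event information (here just a synthetic indicator). TensorFlow/Keras, the window length, and all shapes are assumptions chosen for illustration, not the paper's architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

rng = np.random.default_rng(0)

# Synthetic daily series plus an "a priori" indicator of upcoming public events.
T = 400
events = (rng.random(T) < 0.05).astype(float)          # known-in-advance event flags
series = np.sin(np.arange(T) / 10.0) + 2.0 * events + rng.normal(scale=0.1, size=T)

window = 14
X = np.stack([np.stack([series[t:t + window], events[t:t + window]], axis=-1)
              for t in range(T - window)])              # (samples, window, 2 channels)
y = series[window:]                                     # next-day value

model = tf.keras.Sequential([
    layers.LSTM(32, input_shape=(window, 2)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("next-day forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```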
  226. Quebec's Child Care Services: What Are the Mechanisms Influencing Children's Behaviors across Quantity, Type, and Quality of Care Experienced?

    ERIC Educational Resources Information Center

    Lemay, Lise; Bigras, Nathalie; Bouchard, Caroline

    2015-01-01

    The objective of this study was to examine how quantity, type, and quality of care interact in predicting externalizing and internalizing behaviors of 36-month-old children attending Quebec's educational child care from their first years of life. To do so, the authors examined two hypothesized models: (1) a mediation model where quantity, type,…

  227. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, the structural analysis model, and the material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method that integrates Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.

  228. The Relationship Between Social Support and Subjective Well-Being Across Age

    PubMed Central

    Salthouse, Timothy A.; Oishi, Shigehiro; Jeswani, Sheena

    2014-01-01

    The relationships among types of social support and different facets of subjective well-being (i.e., life satisfaction, positive affect, and negative affect) were examined in a sample of 1,111 individuals between the ages of 18 and 95. Using structural equation modeling, we found that life satisfaction was predicted by enacted and perceived support, positive affect was predicted by family embeddedness and provided support, and negative affect was predicted by perceived support. When personality variables were included in a subsequent model, the influence of the social support variables was generally reduced. Invariance analyses conducted across age groups indicated that there were no substantial differences in predictors of the different types of subjective well-being across age. PMID:25045200

  229. Modeling and predictions of biphasic mechanosensitive cell migration altered by cell-intrinsic properties and matrix confinement.

    PubMed

    Pathak, Amit

    2018-04-12

    Motile cells sense the stiffness of their extracellular matrix (ECM) through adhesions and respond by modulating the generated forces, which in turn leads to varying mechanosensitive migration phenotypes.
Through modeling and experiments, cell migration speed is known to vary with matrix stiffness in a biphasic manner, with optimal motility at an intermediate stiffness. Here, we present a two-dimensional cell model defined by nodes and elements, integrated with subcellular modeling components corresponding to mechanotransductive adhesion formation, force generation, protrusions, and node displacement. On 2D matrices, our calculations reproduce the classic biphasic dependence of migration speed on matrix stiffness and predict that cell types with higher force-generating ability do not slow down on very stiff matrices, thus disabling the biphasic response. We also predict that cell types defined by a lower number of total receptors require stiffer matrices for optimal motility, which also limits the biphasic response. For a cell type with robust biphasic migration on a 2D surface, simulations in channel-like confined environments of varying width and height predict faster migration in more confined matrices. Simulations performed in shallower channels predict that the biphasic mechanosensitive cell migration response is more robust on 2D micro-patterns as compared to channel-like 3D confinement. Thus, variations in the dimensionality of matrix confinement alter the way migratory cells sense and respond to matrix stiffness. Our calculations reveal new phenotypes of stiffness- and topography-sensitive cell migration that critically depend on both cell-intrinsic and matrix properties. These predictions may inform our understanding of various mechanosensitive modes of cell motility that could enable tumor invasion through topographically heterogeneous microenvironments. © 2018 IOP Publishing Ltd.

  230. Serum peroxiredoxin 4: a marker of oxidative stress associated with mortality in type 2 diabetes (ZODIAC-28).

    PubMed

    Gerrits, Esther G; Alkhalaf, Alaa; Landman, Gijs W D; van Hateren, Kornelis J J; Groenier, Klaas H; Struck, Joachim; Schulte, Janin; Gans, Reinold O B; Bakker, Stephan J L; Kleefstra, Nanne; Bilo, Henk J G

    2014-01-01

    Oxidative stress plays an underlying pathophysiologic role in the development of diabetes complications. The aim of this study was to investigate peroxiredoxin 4 (Prx4), a proposed novel biomarker of oxidative stress, and its association with and capability as a biomarker for predicting (cardiovascular) mortality in type 2 diabetes mellitus. Prx4 was assessed in baseline serum samples of 1161 type 2 diabetes patients. Cox proportional hazards models were used to evaluate the relationship between Prx4 and (cardiovascular) mortality. The risk prediction capability of Prx4 for (cardiovascular) mortality was assessed with Harrell's C statistic, the integrated discrimination improvement, and the net reclassification improvement. Mean age was 67 years and the median diabetes duration was 4.0 years. After a median follow-up period of 5.8 years, 327 patients died, including 137 cardiovascular deaths. Prx4 was associated with (cardiovascular) mortality.
The Cox proportional hazards models successively added the following variables: Prx4 (model 1); age and gender (model 2); and BMI, creatinine, smoking, diabetes duration, systolic blood pressure, cholesterol-HDL ratio, history of macrovascular complications, and albuminuria (model 3). Hazard ratios (HR) (95% CI) for cardiovascular mortality were 1.93 (1.57 - 2.38), 1.75 (1.39 - 2.20), and 1.63 (1.28 - 2.09) for models 1, 2 and 3, respectively. HRs for all-cause mortality were 1.73 (1.50 - 1.99), 1.50 (1.29 - 1.75), and 1.44 (1.23 - 1.67) for models 1, 2 and 3, respectively. Addition of Prx4 to the traditional risk factors slightly improved risk prediction of (cardiovascular) mortality. Prx4 is independently associated with (cardiovascular) mortality in type 2 diabetes patients. After addition of Prx4 to the traditional risk factors, there was a slight improvement in risk prediction of (cardiovascular) mortality in this patient group.

  231. Personalized Modeling for Prediction with Decision-Path Models

    PubMed Central

    Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.

    2015-01-01

    Deriving predictive models in medicine typically relies on a population approach, where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model, called a decision-path model, that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Tree (CART), which is a population decision tree, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision-path models is a new approach for predictive modeling that can perform better than a population approach. PMID:26098570

  232. Global Quantitative Modeling of Chromatin Factor Interactions

    PubMed Central

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin.
232. Global Quantitative Modeling of Chromatin Factor Interactions

    PubMed Central

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles; we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896

233. Validation of Hill-Type Muscle Models in Relation to Neuromuscular Recruitment and Force–Velocity Properties: Predicting Patterns of In Vivo Muscle Force

    PubMed Central

    Biewener, Andrew A.; Wakeling, James M.; Lee, Sabrina S.; Arnold, Allison S.

    2014-01-01

    We review here the use and reliability of Hill-type muscle models to predict muscle performance under varying conditions, ranging from in situ production of isometric force to in vivo dynamics of muscle length change and force in response to activation. Muscle models are frequently used in musculoskeletal simulations of movement, particularly when applied to studies of human motor performance in which surgically implanted transducers have limited use. Musculoskeletal simulations of different animal species also are being developed to evaluate comparative and evolutionary aspects of locomotor performance. However, such models are rarely validated against direct measures of fascicle strain or recordings of muscle–tendon force. Historically, Hill-type models simplify properties of whole muscle by scaling salient properties of single fibers to whole muscles, typically accounting for a muscle's architecture and series elasticity. Activation of the model's single contractile element (assigned the properties of homogeneous fibers) is also simplified and is often based on temporal features of myoelectric (EMG) activation recorded from the muscle. Comparison of standard one-element models with a novel two-element model and with in situ and in vivo measures of EMG, fascicle strain, and force recorded from the gastrocnemius muscles of goats shows that a two-element Hill-type model, which allows independent recruitment of slow and fast units, better predicts temporal patterns of in situ and in vivo force. Recruitment patterns of slow/fast units based on wavelet decomposition of EMG activity in frequency–time space are generally correlated with the intensity spectra of the EMG signals, the strain rates of the fascicles, and the muscle–tendon forces measured in vivo, with faster units linked to greater strain rates and to more rapid forces. Using direct measures of muscle performance to further test Hill-type models, whether traditional or more complex, remains critical for establishing their accuracy and essential for verifying their applicability to scientific and clinical studies of musculoskeletal function. PMID:24928073
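    The single-contractile-element force estimate that such models rest on can be written as F = a(t) · fL(l) · fV(v) · Fmax. The sketch below implements that product with generic textbook force-length and force-velocity curve shapes; the curve constants and the maximum isometric force are placeholders rather than values fitted to any muscle discussed above.

        import numpy as np

        def hill_force(activation, lce_norm, vce_norm, f_max=1000.0):
            """One-element Hill-type estimate: F = a * fL(l) * fV(v) * Fmax.
            lce_norm: fibre length / optimal length; vce_norm: velocity / max shortening
            velocity (negative = shortening). Curve shapes are generic, not fitted values."""
            f_l = np.exp(-((lce_norm - 1.0) / 0.45) ** 2)        # Gaussian force-length curve
            v = np.clip(vce_norm, -1.0, 1.0)
            f_v = np.where(v <= 0.0,
                           (1.0 + v) / (1.0 - v / 0.25),         # hyperbolic shortening limb
                           1.5 - 0.5 * (1.0 - v) / (1.0 + 7.56 * v))  # lengthening limb
            return activation * f_l * f_v * f_max

        # Example: 40% activation, fibre slightly short of optimum, slow shortening.
        print(hill_force(0.4, 0.95, -0.1))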
234. Forecasting model for Pea seed-borne mosaic virus epidemics in field pea crops in a Mediterranean-type environment.

    PubMed

    Congdon, B S; Coutts, B A; Jones, R A C; Renton, M

    2017-09-15

    An empirical model was developed to forecast Pea seed-borne mosaic virus (PSbMV) incidence at a critical phase of the annual growing season to predict yield loss in field pea crops sown under Mediterranean-type conditions. The model uses pre-growing-season rainfall to calculate an index of aphid abundance in early August which, in combination with the PSbMV infection level in the seed sown, is used to forecast virus incidence in the crop. Using predicted PSbMV crop incidence in early August and day of sowing, PSbMV transmission from harvested seed was also predicted, albeit less accurately. The model was developed so it provides forecasts before sowing to allow sufficient time to implement control recommendations, such as having representative seed samples tested for PSbMV transmission rate to seedlings, obtaining seed with minimal PSbMV infection or of a PSbMV-resistant cultivar, and implementation of cultural management strategies. The model provides a disease risk forecast, taking into account predicted percentage yield loss to PSbMV infection and economic factors involved in field pea production. This disease risk forecast delivers location-specific recommendations regarding PSbMV management to end-users. These recommendations will be delivered directly to end-users via SMS alerts with links to web support that provide information on PSbMV management options. This modelling and decision support system approach would likely be suitable for use in other world regions where field pea is grown in similar Mediterranean-type environments. Copyright © 2017 Elsevier B.V. All rights reserved.
235. EP-DNN: A Deep Neural Network-Based Global Enhancer Prediction Algorithm.

    PubMed

    Kim, Seong Gon; Harwani, Mrudul; Grama, Ananth; Chaterji, Somali

    2016-12-08

    We present EP-DNN, a protocol for predicting enhancers based on chromatin features, in different cell types. Specifically, we use a deep neural network (DNN)-based architecture to extract enhancer signatures in a representative human embryonic stem cell type (H1) and a differentiated lung cell type (IMR90). We train EP-DNN using p300 binding sites, as enhancers, and TSS and random non-DHS sites, as non-enhancers. We perform same-cell and cross-cell predictions to quantify the validation rate and compare against two state-of-the-art methods, DEEP-ENCODE and RFECS. We find that EP-DNN has superior accuracy with a validation rate of 91.6%, relative to 85.3% for DEEP-ENCODE and 85.5% for RFECS, for a given number of enhancer predictions, and also scales better for a larger number of enhancer predictions. Moreover, our H1 → IMR90 predictions turn out to be more accurate than IMR90 → IMR90, potentially because H1 exhibits a richer signature set and our EP-DNN model is expressive enough to extract these subtleties. Our work shows how to leverage the full expressivity of deep learning models, using multiple hidden layers, while avoiding overfitting on the training data. We also lay the foundation for exploration of cross-cell enhancer predictions, potentially reducing the need for expensive experimentation.

236. EP-DNN: A Deep Neural Network-Based Global Enhancer Prediction Algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Seong Gon; Harwani, Mrudul; Grama, Ananth; Chaterji, Somali

    2016-12-01

    We present EP-DNN, a protocol for predicting enhancers based on chromatin features, in different cell types. Specifically, we use a deep neural network (DNN)-based architecture to extract enhancer signatures in a representative human embryonic stem cell type (H1) and a differentiated lung cell type (IMR90). We train EP-DNN using p300 binding sites, as enhancers, and TSS and random non-DHS sites, as non-enhancers. We perform same-cell and cross-cell predictions to quantify the validation rate and compare against two state-of-the-art methods, DEEP-ENCODE and RFECS. We find that EP-DNN has superior accuracy with a validation rate of 91.6%, relative to 85.3% for DEEP-ENCODE and 85.5% for RFECS, for a given number of enhancer predictions, and also scales better for a larger number of enhancer predictions. Moreover, our H1 → IMR90 predictions turn out to be more accurate than IMR90 → IMR90, potentially because H1 exhibits a richer signature set and our EP-DNN model is expressive enough to extract these subtleties. Our work shows how to leverage the full expressivity of deep learning models, using multiple hidden layers, while avoiding overfitting on the training data. We also lay the foundation for exploration of cross-cell enhancer predictions, potentially reducing the need for expensive experimentation.
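    The general enhancer-versus-non-enhancer setup described above (a feed-forward network trained on chromatin-feature vectors and scored on held-out sites) can be illustrated as follows; the synthetic features, labels and the small scikit-learn MLP are stand-ins, and the actual EP-DNN architecture and data are not reproduced here.

        # Generic enhancer-vs-non-enhancer classifier on synthetic chromatin-mark features;
        # the real EP-DNN architecture and feature extraction are not reproduced here.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n, n_marks = 4000, 24                  # hypothetical histone-mark signal bins
        X = rng.normal(size=(n, n_marks))
        w = rng.normal(size=n_marks)
        y = (X @ w + rng.normal(scale=2.0, size=n) > 0).astype(int)   # synthetic labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), alpha=1e-3, max_iter=500,
                            random_state=1).fit(X_tr, y_tr)
        print("held-out AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))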
237. Using Digital Terrain Modeling to Predict Ecological Types in the Balsam Mountains of Western North Carolina

    Treesearch

    Richard H. Odom; W. Henry McNab

    2000-01-01

    Relationships between overstory composition and topographic conditions were studied in high-elevation (>1300 meters) forests in the Balsam Mountains of western North Carolina to determine whether models could be developed to predict the occurrence of vegetative communities in relation to topographic variables (elevation, landscape position, surface geometry,...

238. Validation of an internal hardwood log defect prediction model

    Treesearch

    R. Edward Thomas

    2011-01-01

    The type, size, and location of internal defects dictate the grade and value of lumber sawn from hardwood logs. However, acquiring internal defect knowledge with x-ray/computed-tomography or magnetic-resonance imaging technology can be expensive both in time and cost. An alternative approach uses prediction models based on correlations among external defect indicators...

239. Different types of employee well-being across time and their relationships with job crafting.

    PubMed

    Hakanen, Jari J; Peeters, Maria C W; Schaufeli, Wilmar B

    2018-04-01

    We used and integrated the circumplex model of affect (Russell, 1980) and the conservation of resources theory (Hobfoll, 1998) to hypothesize how various types of employee well-being, which can be differentiated on theoretical grounds (i.e., work engagement, job satisfaction, burnout, and workaholism), may differently predict various job crafting behaviors (i.e., increasing structural and social resources and challenging demands, and decreasing hindering demands) and each other over time. At Time 1, we measured employee well-being, and 4 years later at Time 2, job crafting and well-being, using a large sample of Finnish dentists (N = 1,877). The results of structural equation modeling showed that (a) work engagement positively predicted both types of increasing resources and challenging demands and negatively predicted decreasing hindering demands; (b) workaholism positively predicted increasing structural resources and challenging demands; (c) burnout positively predicted decreasing hindering demands and negatively predicted increasing structural resources, whereas (d) job satisfaction did not relate to job crafting over time; and (e) work engagement positively influenced job satisfaction and negatively influenced burnout, whereas (f) workaholism predicted burnout after controlling for baseline levels. Thus, work engagement was a stronger predictor of future job crafting and other types of employee well-being than job satisfaction. Although workaholism was positively associated with job crafting, it also predicted burnout. We conclude that the relationship between job crafting and employee well-being may be more complex than assumed, because the way in which employees will craft their jobs in the future seems to depend on how they currently feel. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
240. Predicting Academic Success of First-Time College-Bound African American Students at a Predominantly White Four-Year Public Institution: A Preadmission Model

    ERIC Educational Resources Information Center

    Redmond, M. William, Jr.

    2011-01-01

    The purpose of this study is to develop a preadmission predictive model of student success for prospective first-time African American college applicants at a predominately White four-year public institution within the Pennsylvania State System of Higher Education. This model will use two types of variables. They are (a) cognitive variables (i.e.,…

241. Theoretical Analysis of Fas Ligand-Induced Apoptosis with an Ordinary Differential Equation Model.

    PubMed

    Shi, Zhimin; Li, Yan; Liu, Zhihai; Mi, Jun; Wang, Renxiao

    2012-12-01

    Upon treatment with Fas ligand, different types of cells exhibit different apoptotic mechanisms, which are determined by a complex network of biological pathways. In order to derive a quantitative interpretation of the cell sensitivity and apoptosis pathways, we have developed an ordinary differential equation model. Our model is intended to include all of the known major components in apoptosis pathways mediated by the Fas receptor. It is composed of 29 equations using a total of 49 rate constants and 13 protein concentrations. All parameters used in our model were derived through nonlinear fitting to experimentally measured concentrations of four selected proteins in Jurkat T-cells, including caspase-3, caspase-8, caspase-9, and Bid. Our model is able to correctly interpret the role of kinetic parameters and protein concentrations in cell sensitivity to FasL. It reveals the possible reasons for the transition between type-I and type-II pathways and also provides some interesting predictions, such as the more decisive role of Fas over Bax in the apoptosis pathway and a possible feedback mechanism between type-I and type-II pathways. However, our model failed to predict the FasL-induced apoptotic mechanism of NCI-60 cells from their gene-expression levels. Limitations of our model are also discussed. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
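    As a toy illustration of the modeling approach just described, the sketch below integrates a two-equation caspase cascade with SciPy's ODE solver; the species, rate constants and stimulus level are invented for the example and bear no relation to the 29-equation model or its fitted parameters.

        # Toy two-step caspase cascade (FasL -> active caspase-8 -> active caspase-3);
        # a drastic reduction of the published model, with invented rate constants.
        import numpy as np
        from scipy.integrate import solve_ivp

        k_act8, k_act3, k_deg = 0.05, 0.1, 0.01      # hypothetical rate constants (1/min)

        def rhs(t, y, fasl):
            c8, c3 = y                               # active caspase-8, active caspase-3
            dc8 = k_act8 * fasl * (1.0 - c8) - k_deg * c8
            dc3 = k_act3 * c8 * (1.0 - c3) - k_deg * c3
            return [dc8, dc3]

        sol = solve_ivp(rhs, (0, 600), [0.0, 0.0], args=(0.5,), dense_output=True)
        t = np.linspace(0, 600, 7)
        print(np.round(sol.sol(t)[1], 3))            # fraction of caspase-3 activated over time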
242. Detecting Protected Health Information in Heterogeneous Clinical Notes.

    PubMed

    Henriksson, Aron; Kvist, Maria; Dalianis, Hercules

    2017-01-01

    To enable secondary use of healthcare data in a privacy-preserving manner, there is a need for methods capable of automatically identifying protected health information (PHI) in clinical text. To that end, learning predictive models from labeled examples has emerged as a promising alternative to rule-based systems. However, little is known about differences with respect to PHI prevalence in different types of clinical notes and how potential domain differences may affect the performance of predictive models trained on one particular type of note and applied to another. In this study, we analyze the performance of a predictive model trained on an existing PHI corpus of Swedish clinical notes and applied to a variety of clinical notes: written (i) in different clinical specialties, (ii) under different headings, and (iii) by persons in different professions. The results indicate that domain adaptation is needed for effective detection of PHI in heterogeneous clinical notes.

243. The predictive content of CBOE crude oil volatility index

    NASA Astrophysics Data System (ADS)

    Chen, Hongtao; Liu, Li; Li, Xiaolei

    2018-02-01

    Volatility forecasting is an important issue in the area of econophysics. The information content of implied volatility for financial return volatility has been well documented in the literature, but very few studies focus on oil volatility. In this paper, we show that the CBOE crude oil volatility index (OVX) has predictive ability for the spot volatility of WTI and Brent oil returns, from both in-sample and out-of-sample perspectives. Including OVX-based implied volatility in GARCH-type volatility models can improve forecasting accuracy most of the time. The predictability from OVX to spot volatility is also found for longer forecasting horizons of 5 days and 20 days. The simple GARCH(1,1) and fractionally integrated GARCH with OVX perform significantly better than the other OVX models and all six univariate GARCH-type models without OVX. Robustness test results suggest that OVX provides information different from that of the short-term interest rate.
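    The idea of adding an implied-volatility index as an extra predictor of future realized volatility can be illustrated outside a GARCH framework with a plain regression, as in the hedged sketch below; the simulated return and index series and the OLS specification are assumptions for illustration, not the paper's GARCH-type models.

        # Synthetic illustration of augmenting a volatility forecast with an implied-volatility
        # index; a plain OLS benchmark, not the paper's GARCH-type specifications.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 1500
        true_vol = np.empty(n)
        true_vol[0] = 0.01
        for t in range(1, n):                        # slowly varying "true" volatility
            true_vol[t] = 0.95 * true_vol[t - 1] + 0.05 * 0.01 + 0.001 * rng.normal()
        ret = rng.normal(scale=np.abs(true_vol))
        ovx = np.abs(true_vol) * (1 + 0.2 * rng.normal(size=n))   # noisy implied-vol proxy

        rv_next = ret[1:] ** 2                       # next-day squared return as realized variance
        lag_rv = ret[:-1] ** 2
        X = sm.add_constant(np.column_stack([lag_rv, ovx[:-1] ** 2]))
        fit = sm.OLS(rv_next, X).fit()
        print(fit.params)                            # weight on the implied-volatility term
        print("R^2 with the OVX-like term:", round(fit.rsquared, 3))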
244. Use of mobile and passive badge air monitoring data for NOx and ozone air pollution spatial exposure prediction models.

    PubMed

    Xu, Wei; Riley, Erin A; Austin, Elena; Sasakura, Miyoko; Schaal, Lanae; Gould, Timothy R; Hartin, Kris; Simpson, Christopher D; Sampson, Paul D; Yost, Michael G; Larson, Timothy V; Xiu, Guangli; Vedal, Sverre

    2017-03-01

    Air pollution exposure prediction models can make use of many types of air monitoring data. Fixed location passive samples typically measure concentrations averaged over several days to weeks. Mobile monitoring data can generate near continuous concentration measurements. It is not known whether mobile monitoring data are suitable for generating well-performing exposure prediction models or how they compare with other types of monitoring data in generating exposure models. Measurements from fixed site passive samplers and a mobile monitoring platform were made over a 2-week period in Baltimore in the summer and winter months in 2012. Performance of exposure prediction models for long-term nitrogen oxides (NOx) and ozone (O3) concentrations were compared using a state-of-the-art approach for model development based on land use regression (LUR) and geostatistical smoothing. Model performance was evaluated using leave-one-out cross-validation (LOOCV). Models performed well using the mobile peak traffic monitoring data for both NOx and O3, with LOOCV R2 values of 0.70 and 0.71, respectively, in the summer, and 0.90 and 0.58, respectively, in the winter. Models using 2-week passive samples for NOx had LOOCV R2 values of 0.60 and 0.65 in the summer and winter months, respectively. The passive badge sampling data were not adequate for developing models for O3. Mobile air monitoring data can be used to successfully build well-performing LUR exposure prediction models for NOx and O3 and are a better source of data for these models than 2-week passive badge data.
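    A land-use-regression workflow with leave-one-out cross-validation can be outlined as below; the covariates, site counts and coefficients are invented placeholders rather than the study's GIS variables or monitoring data.

        # Land-use-regression style sketch with leave-one-out cross-validation on synthetic sites.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(3)
        n_sites = 40
        X = np.column_stack([
            rng.uniform(0, 5000, n_sites),   # e.g. length of major roads within 500 m (assumed)
            rng.uniform(0, 1, n_sites),      # e.g. fraction of industrial land use (assumed)
            rng.uniform(0, 30, n_sites),     # e.g. distance-weighted traffic intensity (assumed)
        ])
        nox = 20 + 0.004 * X[:, 0] + 15 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 4, n_sites)

        pred = cross_val_predict(LinearRegression(), X, nox, cv=LeaveOneOut())
        print("LOOCV R2:", round(r2_score(nox, pred), 2))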
245. An enhanced beam model for constrained layer damping and a parameter study of damping contribution

    NASA Astrophysics Data System (ADS)

    Xie, Zhengchao; Shepard, W. Steve, Jr.

    2009-01-01

    An enhanced analytical model is presented based on an extension of previous models for constrained layer damping (CLD) in beam-like structures. Most existing CLD models are based on the assumption that shear deformation in the core layer is the only source of damping in the structure. However, previous research has shown that other types of deformation in the core layer, such as deformations from longitudinal extension and transverse compression, can also be important. In the enhanced analytical model developed here, shear, extension, and compression deformations are all included. This model can be used to predict the natural frequencies and modal loss factors. The numerical study shows that compared to other models, this enhanced model is accurate in predicting the dynamic characteristics. As a result, the model can be accepted as a general computation model. With all three types of damping included and the formulation used here, it is possible to study the impact of the structure's geometry and boundary conditions on the relative contribution of each type of damping. To that end, the relative contributions in the frequency domain for a few sample cases are presented.

246. Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture

    PubMed Central

    Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang

    2016-01-01

    The remaining useful life (RUL) prediction of Lithium-ion batteries is closely related to the capacity degeneration trajectories. Due to the self-charging and the capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as the support vector machines (SVM) or the Gaussian Process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM). It can process multimodality by fitting different segments of trajectories with different GPR models separately, such that the tiny differences among these segments can be revealed. The method is demonstrated to be effective for prediction by the excellent predictive result of the experiments on the two commercial and chargeable Type 1850 Lithium-ion batteries, provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, GPM can yield the predictive confidence interval, which makes the prediction more reliable than that of traditional models. PMID:27632176

247. The development and validation of a clinical prediction model to determine the probability of MODY in patients with young-onset diabetes.

    PubMed

    Shields, B M; McDonald, T J; Ellard, S; Campbell, M J; Hyde, C; Hattersley, A T

    2012-05-01

    Diagnosing MODY is difficult. To date, selection for molecular genetic testing for MODY has used discrete cut-offs of limited clinical characteristics with varying sensitivity and specificity. We aimed to use multiple, weighted, clinical criteria to determine an individual's probability of having MODY, as a crucial tool for rational genetic testing. We developed prediction models using logistic regression on data from 1,191 patients with MODY (n = 594), type 1 diabetes (n = 278) and type 2 diabetes (n = 319). Model performance was assessed by receiver operating characteristic (ROC) curves, cross-validation and validation in a further 350 patients. The models defined an overall probability of MODY using a weighted combination of the most discriminative characteristics. For MODY, compared with type 1 diabetes, these were: lower HbA(1c), parent with diabetes, female sex and older age at diagnosis. MODY was discriminated from type 2 diabetes by: lower BMI, younger age at diagnosis, female sex, lower HbA(1c), parent with diabetes, and not being treated with oral hypoglycaemic agents or insulin. Both models showed excellent discrimination (c-statistic = 0.95 and 0.98, respectively), low rates of cross-validated misclassification (9.2% and 5.3%), and good performance on the external test dataset (c-statistic = 0.95 and 0.94). Using the optimal cut-offs, the probability models improved the sensitivity (91% vs 72%) and specificity (94% vs 91%) for identifying MODY compared with standard criteria of diagnosis <25 years and an affected parent. The models are now available online at www.diabetesgenes.org. We have developed clinical prediction models that calculate an individual's probability of having MODY. This allows an improved and more rational approach to determine who should have molecular genetic testing.
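    A weighted combination of clinical characteristics that yields an individual probability is, in essence, a logistic regression. The sketch below fits such a model on synthetic patients and reports a c-statistic; the variables echo those named in the abstract, but the data, coefficients and cut-offs are invented and the published calculator at www.diabetesgenes.org is not reproduced.

        # Logistic-regression probability model on synthetic patients (illustrative only).
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        n = 900
        df = pd.DataFrame({
            "hba1c": rng.normal(7.5, 1.5, n),
            "age_at_dx": rng.uniform(10, 45, n),
            "parent_with_diabetes": rng.integers(0, 2, n),
            "female": rng.integers(0, 2, n),
        })
        # Invented generating model: lower HbA1c, older diagnosis age, affected parent,
        # and female sex raise the (synthetic) probability of the outcome.
        logit = (-0.8 * (df.hba1c - 7.5) + 0.05 * (df.age_at_dx - 25)
                 + 1.2 * df.parent_with_diabetes + 0.3 * df.female - 1.0)
        df["mody"] = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

        X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="mody"), df["mody"],
                                                  test_size=0.3, random_state=4)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        prob = model.predict_proba(X_te)[:, 1]      # individual predicted probability
        print("c-statistic:", round(roc_auc_score(y_te, prob), 2))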
248. Sci-Fri AM: Quality, Safety, and Professional Issues 04: Predicting waiting times in Radiation Oncology using machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, Ackeem; Herrera, David; Hijal, Tarek

    We describe a method for predicting waiting times in radiation oncology. Machine learning is a powerful predictive modelling tool that benefits from large, potentially complex, datasets. The essence of machine learning is to predict future outcomes by learning from previous experience. The patient waiting experience remains one of the most vexing challenges facing healthcare. Waiting time uncertainty can cause patients, who are already sick and in pain, to worry about when they will receive the care they need. In radiation oncology, patients typically experience three types of waiting: (1) waiting at home for their treatment plan to be prepared; (2) waiting in the waiting room for daily radiotherapy; and (3) waiting in the waiting room to see a physician in consultation or follow-up. These waiting periods are difficult for staff to predict and only rough estimates are typically provided, based on personal experience. In the present era of electronic health records, waiting times need not be so uncertain. At our centre, we have incorporated the electronic treatment records of all previously-treated patients into our machine learning model. We found that the Random Forest Regression model provides the best predictions for daily radiotherapy treatment waiting times (type 2). Using this model, we achieved a median residual (actual minus predicted value) of 0.25 minutes and a standard deviation of the residuals of 6.5 minutes. The main features that generated the best fit model (from most to least significant) are: allocated time, median past duration, fraction number and the number of treatment fields.
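    A random-forest regression of treatment waiting times on scheduling features, summarized by the median and spread of its residuals, can be sketched as follows; the synthetic appointment records and model settings are assumptions for illustration only, not the centre's data.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        n = 5000
        df = pd.DataFrame({
            "allocated_time_min": rng.choice([12, 15, 20, 30], n),
            "median_past_duration_min": rng.normal(14, 3, n),
            "fraction_number": rng.integers(1, 35, n),
            "n_treatment_fields": rng.integers(1, 10, n),
        })
        # Invented waiting-time generating process for the synthetic records.
        wait = (0.6 * df.allocated_time_min + 0.8 * df.median_past_duration_min
                - 0.1 * df.fraction_number + 1.5 * df.n_treatment_fields
                + rng.normal(0, 6, n))

        X_tr, X_te, y_tr, y_te = train_test_split(df, wait, test_size=0.25, random_state=5)
        rf = RandomForestRegressor(n_estimators=200, random_state=5).fit(X_tr, y_tr)
        resid = y_te - rf.predict(X_te)
        print("median residual (min):", round(np.median(resid), 2))
        print("residual SD (min):   ", round(resid.std(), 2))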
249. Surgery on spinal epidural metastases (SEM) in renal cell carcinoma: a plea for a new paradigm.

    PubMed

    Bakker, Nicolaas A; Coppes, Maarten H; Vergeer, Rob A; Kuijlen, Jos M A; Groen, Rob J M

    2014-09-01

    Prediction models for outcome of decompressive surgical resection of spinal epidural metastases (SEM) have in common that they have been developed for all types of SEM, irrespective of the type of primary tumor. It is our experience in clinical practice, however, that these models often fail to accurately predict outcome in the individual patient. To investigate whether decision making could be optimized by applying tumor-specific prediction models. For the proof of concept, we analyzed patients with SEM from renal cell carcinoma that we have operated on. Retrospective chart analysis 2006 to 2012. Twenty-one consecutive patients with symptomatic SEM of renal cell carcinoma. Predictive factors for survival. Next to established predictive factors for survival, we analyzed the predictive value of the Motzer criteria in these patients. The Motzer criteria comprise a specific and validated risk model for survival in patients with renal cell carcinoma. After multivariable analysis, only Motzer intermediate (hazard ratio [HR] 17.4, 95% confidence interval [CI] 1.82-166, p=.01) and high risk (HR 39.3, 95% CI 3.10-499, p=.005) turned out to be significantly associated with survival in patients with renal cell carcinoma that we have operated on. In this study, we have demonstrated that decision making could have been optimized by implementing the Motzer criteria next to established prediction models. We, therefore, suggest that in future, in patients with SEM from renal cell carcinoma, the Motzer criteria are also taken into account. Copyright © 2014 Elsevier Inc. All rights reserved.
250. Recapitulation of Ayurveda constitution types by machine learning of phenotypic traits.

    PubMed

    Tiwari, Pradeep; Kutum, Rintu; Sethi, Tavpritesh; Shrivastava, Ankita; Girase, Bhushan; Aggarwal, Shilpi; Patil, Rutuja; Agarwal, Dhiraj; Gautam, Pramod; Agrawal, Anurag; Dash, Debasis; Ghosh, Saurabh; Juvekar, Sanjay; Mukerji, Mitali; Prasher, Bhavana

    2017-01-01

    In the Ayurveda system of medicine, individuals are classified into seven constitution types, "Prakriti", for assessing disease susceptibility and drug responsiveness. Prakriti evaluation involves clinical examination including questions about physiological and behavioural traits. A need was felt to develop models for accurately predicting Prakriti classes that have been shown to exhibit molecular differences. The present study was carried out on data of phenotypic attributes in 147 healthy individuals of three extreme Prakriti types, from a genetically homogeneous population of Western India. Unsupervised and supervised machine learning approaches were used to infer the inherent structure of the data, and for feature selection and building classification models for Prakriti, respectively. These models were validated in a North Indian population. Unsupervised clustering led to the emergence of three natural clusters corresponding to the three extreme Prakriti classes. The supervised modelling approaches could classify individuals, with distinct Prakriti types, in the training and validation sets. This study is the first to demonstrate that Prakriti types are distinct verifiable clusters within a multidimensional space of multiple interrelated phenotypic traits. It also provides a computational framework for predicting Prakriti classes from phenotypic attributes. This approach may be useful in precision medicine for stratification of endophenotypes in healthy and diseased populations.

251. Using models for the optimization of hydrologic monitoring

    USGS Publications Warehouse

    Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.

    2011-01-01

    Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations often are selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well-suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter generally is not widely requested as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort, based on an old prediction, may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which then can be ranked for their effectiveness in reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010). The techniques briefly described earlier are described in detail in a U.S. Geological Survey Scientific Investigations Report available on the Internet (Fienen and others, 2010; http://pubs.usgs.gov/sir/2010/5159/). This fact sheet presents a synopsis of the techniques as applied to a synthetic model based on a model constructed using properties from the Lake Michigan Basin (Hoard, 2010).
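    The underlying calculation (how much a candidate observation reduces the uncertainty of a specified prediction under a linear Bayesian analysis) can be written in a few lines, in the spirit of the PREDUNC utility mentioned above; the toy sensitivities and covariances below are invented, not output from PEST or any groundwater model.

        import numpy as np

        def pred_variance(Cp, X, Ce, y):
            """Prediction variance after assimilating observations with sensitivity matrix X."""
            G = X @ Cp @ X.T + Ce                    # data covariance under the linear model
            Cpost = Cp - Cp @ X.T @ np.linalg.solve(G, X @ Cp)
            return float(y @ Cpost @ y)

        Cp = np.diag([1.0, 4.0, 0.25])               # prior parameter variances (toy values)
        y = np.array([0.5, 1.0, 2.0])                # sensitivity of the prediction to parameters
        existing = np.array([[1.0, 0.2, 0.0]])       # one existing observation's sensitivities
        candidates = {"well_A": [0.1, 0.9, 0.0],     # hypothetical new monitoring locations
                      "well_B": [0.0, 0.1, 1.5]}

        base = pred_variance(Cp, existing, 0.1 * np.eye(1), y)
        print("variance with existing network:", round(base, 3))
        for name, row in candidates.items():
            X = np.vstack([existing, row])
            v = pred_variance(Cp, X, 0.1 * np.eye(2), y)
            print(f"{name}: variance {v:.3f}, reduction {base - v:.3f}")

    Ranking candidate observations by the printed variance reduction mirrors the data-worth ranking described in the fact sheet, under the stated linear assumptions.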
252. Alternative approaches to predicting methane emissions from dairy cows.

    PubMed

    Mills, J A N; Kebreab, E; Yates, C M; Crompton, L A; Cammell, S B; Dhanoa, M S; Agnew, R E; France, J

    2003-12-01

    Previous attempts to apply statistical models, which correlate nutrient intake with methane production, have been of limited value where predictions are obtained for nutrient intakes and diet types outside those used in model construction. Dynamic mechanistic models have proved more suitable for extrapolation, but they remain computationally expensive and are not applied easily in practical situations. The first objective of this research focused on employing conventional techniques to generate statistical models of methane production appropriate to United Kingdom dairy systems. The second objective was to evaluate these models and a model published previously using both United Kingdom and North American data sets. Thirdly, nonlinear models were considered as alternatives to the conventional linear regressions. The United Kingdom calorimetry data used to construct the linear models also were used to develop the three nonlinear alternatives, which were all of modified Mitscherlich (monomolecular) form. Of the linear models tested, an equation from the literature proved most reliable across the full range of evaluation data (root mean square prediction error = 21.3%). However, the Mitscherlich models demonstrated the greatest degree of adaptability across diet types and intake level. The most successful model for simulating the independent data was a modified Mitscherlich equation with the steepness parameter set to represent dietary starch-to-ADF ratio (root mean square prediction error = 20.6%). However, when such data were unavailable, simpler Mitscherlich forms relating dry matter or metabolizable energy intake to methane production remained better alternatives relative to their linear counterparts.
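    Fitting a monomolecular (Mitscherlich-type) curve to intake-methane data is a standard nonlinear least-squares exercise, sketched below on synthetic observations; the functional form shown is a simplified variant and the parameter values are not the paper's fitted coefficients.

        # Monomolecular (Mitscherlich-type) fit of methane output against intake on
        # synthetic data; parameter values are illustrative only.
        import numpy as np
        from scipy.optimize import curve_fit

        def mitscherlich(mei, a, c):
            """Methane (MJ/d) approaching asymptote a as metabolizable-energy intake rises."""
            return a * (1.0 - np.exp(-c * mei))

        rng = np.random.default_rng(6)
        mei = rng.uniform(50, 350, 60)               # MJ/d metabolizable energy intake (assumed)
        ch4 = mitscherlich(mei, 45.0, 0.01) + rng.normal(0, 1.5, 60)

        (a_hat, c_hat), _ = curve_fit(mitscherlich, mei, ch4, p0=[40.0, 0.01])
        print(f"asymptote ~{a_hat:.1f} MJ/d, rate ~{c_hat:.4f} per MJ intake")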
253. Modeling of drop breakup in the bag breakup regime

    NASA Astrophysics Data System (ADS)

    Wang, C.; Chang, S.; Wu, H.; Xu, J.

    2014-04-01

    Several analytic models for predicting drop deformation and breakup have been developed over the last three decades, but modeling drop breakup in the bag-type regime is less reported. In this Letter, a breakup model is proposed to predict the drop deformation length and breakup time in the bag-type breakup regime in a more accurate manner. In the present model, the drop deformation is approximated as the displacement of the centre of mass (c.m.) along the axis located at the centre of the drop, and the movement of the c.m. is obtained by solving the pressure balance equation. The effects of the drop deformation on the external aerodynamic force on the drop are considered in this model. Drop breakup occurs when the deformation length reaches its maximum value, and the maximum deformation length is a function of the Weber number. The performance and applicability of the proposed breakup model are tested against published experimental data.

254. A mathematical model for the interactive behavior of sulfate-reducing bacteria and methanogens during anaerobic digestion.

    PubMed

    Ahammad, S Ziauddin; Gomes, James; Sreekrishnan, T R

    2011-09-01

    Anaerobic degradation of waste involves different classes of microorganisms, and there are different types of interactions among them for substrates, terminal electron acceptors, and so on. A mathematical model is developed based on the mass balance of different substrates, products, and microbes present in the system to study the interaction between methanogens and sulfate-reducing bacteria (SRB). The performance of the major microbial consortia present in the system, such as propionate-utilizing acetogens, butyrate-utilizing acetogens, acetoclastic methanogens, hydrogen-utilizing methanogens, and SRB, were considered and analyzed in the model. Different substrates consumed and products formed during the process also were considered in the model. The experimental observations and model predictions showed very good prediction capabilities of the model. Model prediction was validated statistically. It was observed that the model-predicted values matched the experimental data very closely, with an average error of 3.9%.
255. Analysis of buoyant surface jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirazi, M.A.; Davis, L.R.

    To obtain improved prediction of heated plume characteristics from a surface jet, an integral analysis computer model was modified and a comprehensive set of field and laboratory data available from the literature was gathered, analyzed, and correlated for estimating the magnitude of certain coefficients that are normally introduced in these analyses to achieve closure. The parameters so estimated include the coefficients for entrainment, turbulent exchange, drag, and shear. Since considerable scatter appeared in the data, even after appropriate subgrouping to narrow the influence of various flow conditions on the data, only statistical procedures could be applied to find the best fit. This and other analyses of its type have been widely used in industry and government for the prediction of thermal plumes from steam power plants. Although the present model has many shortcomings, a recent independent and exhaustive assessment of such predictions revealed that in comparison with other analyses of its type the present analysis predicts the field situations more successfully.

256. Analysis of Piston Slap Motion

    NASA Astrophysics Data System (ADS)

    Narayan, S.

    2015-05-01

    Piston slap is the major force contributing towards noise levels in combustion engines. This type of noise depends upon a number of factors such as the piston-liner gap, type of lubricant used, number of piston pins, as well as the geometry of the piston. In this work the lateral and rotary motion of the piston in the gap between the cylinder liner and piston has been analyzed. A model that can predict the forces and response of the engine block due to slap has been discussed. Parameters such as mass, spring and damping constants have been predicted using a vibrational mobility model.
257. Building and validating a prediction model for paediatric type 1 diabetes risk using next generation targeted sequencing of class II HLA genes.

    PubMed

    Zhao, Lue Ping; Carlsson, Annelie; Larsson, Helena Elding; Forsander, Gun; Ivarsson, Sten A; Kockum, Ingrid; Ludvigsson, Johnny; Marcus, Claude; Persson, Martina; Samuelsson, Ulf; Örtqvist, Eva; Pyo, Chul-Woo; Bolouri, Hamid; Zhao, Michael; Nelson, Wyatt C; Geraghty, Daniel E; Lernmark, Åke

    2017-11-01

    It is of interest to predict possible lifetime risk of type 1 diabetes (T1D) in young children for recruiting high-risk subjects into longitudinal studies of effective prevention strategies. Utilizing a case-control study in Sweden, we applied a recently developed next generation targeted sequencing technology to genotype class II genes and applied an object-oriented regression to build and validate a prediction model for T1D. In the training set, estimated risk scores were significantly different between patients and controls (P = 8.12 × 10⁻⁹²), and the area under the curve (AUC) from the receiver operating characteristic (ROC) analysis was 0.917. Using the validation data set, we validated the result with an AUC of 0.886. Combining both training and validation data resulted in a predictive model with an AUC of 0.903. Further, we performed a "biological validation" by correlating risk scores with 6 islet autoantibodies, and found that the risk score was significantly correlated with IA-2A (Z-score = 3.628, P < 0.001). When applying this prediction model to the Swedish population, where the lifetime T1D risk ranges from 0.5% to 2%, we anticipate identifying approximately 20 000 high-risk subjects after testing all newborns, and this calculation would identify approximately 80% of all patients expected to develop T1D in their lifetime. Through both empirical and biological validation, we have established a prediction model for estimating lifetime T1D risk, using class II HLA. This prediction model should prove useful for future investigations to identify high-risk subjects for prevention research in high-risk populations. Copyright © 2017 John Wiley & Sons, Ltd.

258. Analysis of crash proportion by vehicle type at traffic analysis zone level: A mixed fractional split multinomial logit modeling approach with spatial effects.

    PubMed

    Lee, Jaeyoung; Yasmin, Shamsunnahar; Eluru, Naveen; Abdel-Aty, Mohamed; Cai, Qing

    2018-02-01

    In the traffic safety literature, crash frequency variables are analyzed using univariate count models or multivariate count models. In this study, we propose an alternative approach to modeling multiple crash frequency dependent variables. Instead of modeling the frequency of crashes, we propose to analyze the proportion of crashes by vehicle type. A flexible mixed multinomial logit fractional split model is employed for analyzing the proportions of crashes by vehicle type at the macro-level. In this model, the proportion allocated to an alternative is probabilistically determined based on the alternative propensity as well as the propensity of all other alternatives. Thus, exogenous variables directly affect all alternatives. The approach is well suited to accommodate a large number of alternatives without a sizable increase in computational burden. The model was estimated using crash data at the Traffic Analysis Zone (TAZ) level from Florida. The modeling results clearly illustrate the applicability of the proposed framework for crash proportion analysis. Further, the Excess Predicted Proportion (EPP), a screening performance measure analogous to the Highway Safety Manual (HSM) Excess Predicted Average Crash Frequency, is proposed for hot zone identification. Using EPP, a statewide screening exercise by the various vehicle types considered in our analysis was undertaken. The screening results revealed that the spatial pattern of hot zones is substantially different across the various vehicle types considered. Copyright © 2017 Elsevier Ltd. All rights reserved.
259. Adolescent Psychosocial Development: A Review of Longitudinal Models and Research

    ERIC Educational Resources Information Center

    Meeus, Wim

    2016-01-01

    This review used 4 types of longitudinal models (descriptive models, prediction models, developmental sequence models and longitudinal mediation models) to identify regular patterns of psychosocial development in adolescence. Eight patterns of adolescent development were observed across countries: (1) adolescent maturation in multiple…

260. Uncertainty assessment and implications for data acquisition in support of integrated hydrologic models

    NASA Astrophysics Data System (ADS)

    Brunner, Philip; Doherty, J.; Simmons, Craig T.

    2012-07-01

    The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected, therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are: (1) linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types, and its use can be supplemented by nonlinear methods; (2) measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree; (3) the benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater; and (4) measurements of groundwater levels, vadose zone ET, or soil moisture poorly constrain regional groundwater system forcing functions.
261. An experimental and theoretical study to relate uncommon rock/fluid properties to oil recovery. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, R.

    Waterflooding is the most commonly used secondary oil recovery technique. One of the requirements for understanding waterflood performance is a good knowledge of the basic properties of the reservoir rocks. This study is aimed at correlating rock-pore characteristics to oil recovery from various reservoir rock types and incorporating these properties into empirical models for predicting oil recovery. For that reason, this report deals with the analyses and interpretation of experimental data collected from core floods and correlated against measurements of absolute permeability, porosity, wettability index, mercury porosimetry properties and irreducible water saturation. The results of the radial-core and linear-core flow investigations and the other associated experimental analyses are presented and incorporated into empirical models to improve the predictions of oil recovery resulting from waterflooding, for sandstone and limestone reservoirs. For the radial-core case, the standardized regression model selected, based on a subset of the variables, predicted oil recovery by waterflooding with a standard deviation of 7%. For the linear-core case, separate models are developed using common, uncommon and a combination of both types of rock properties. It was observed that residual oil saturation and oil recovery are better predicted with the inclusion of both common and uncommon rock/fluid properties in the predictive models.
262. Evaluating cessation of the type 2 oral polio vaccine by modeling pre- and post-cessation detection rates.

    PubMed

    Kroiss, Steve J; Famulare, Michael; Lyons, Hil; McCarthy, Kevin A; Mercer, Laina D; Chabot-Couture, Guillaume

    2017-10-09

    The globally synchronized removal of the attenuated Sabin type 2 strain from the oral polio vaccine (OPV) in April 2016 marked a major change in polio vaccination policy. This change will provide a significant reduction in the burden of vaccine-associated paralytic polio (VAPP), but may increase the risk of circulating vaccine-derived poliovirus (cVDPV2) outbreaks during the transition period. This risk can be monitored by tracking the disappearance of Sabin-like type 2 (SL2) using data from the polio surveillance system. We studied SL2 prevalence in 17 countries in Africa and Asia, from 2010 to 2016, using acute flaccid paralysis surveillance data. We modeled the peak and decay of SL2 prevalence following mass vaccination events using a beta-binomial model for the detection rate and a Ricker function for the temporal dependence. We found that type 2 circulated the longest of all serotypes after a vaccination campaign, but that SL2 prevalence returned to baseline levels in approximately 50 days. Post-cessation model predictions identified 19 anomalous SL2 detections outside of model predictions in Afghanistan, India, Nigeria, Pakistan, and western Africa. Our models established benchmarks for the duration of SL2 detection after OPV2 cessation. As predicted, SL2 detection rates have plummeted, except in Nigeria where OPV2 use continued for some time in response to recent cVDPV2 detections. However, the anomalous SL2 detections suggest specific areas that merit enhanced monitoring for signs of cVDPV2 outbreaks. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
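The peak-and-decay temporal model named in this record can be sketched with a Ricker-shaped detection rate fitted to simulated post-campaign surveillance counts. The sketch below uses a plain binomial simulation and a least-squares fit via curve_fit rather than the full beta-binomial model, and every count and parameter value is made up for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    # Ricker-shaped detection probability after a vaccination campaign:
    # rises to a peak and then decays back toward zero.
    def ricker(t, a, b):
        return a * t * np.exp(-b * t)

    rng = np.random.default_rng(2)
    t = np.arange(1, 16)                       # weeks since campaign (assumed)
    n_samples = rng.integers(80, 150, t.size)  # surveillance samples tested each week (assumed)

    true_p = ricker(t, a=0.12, b=0.35)         # hypothetical "true" SL2 detection rate
    detections = rng.binomial(n_samples, true_p)
    observed_rate = detections / n_samples

    # Fit the Ricker function to the observed weekly detection rates.
    (a_hat, b_hat), _ = curve_fit(ricker, t, observed_rate, p0=(0.1, 0.3))
    print(f"fitted a={a_hat:.3f}, b={b_hat:.3f}; detection rate peaks near week {1.0 / b_hat:.1f}")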
263. Generalized time-dependent model of radiation-induced chromosomal aberrations in normal and repair-deficient human cells.

    PubMed

    Ponomarev, Artem L; George, Kerry; Cucinotta, Francis A

    2014-03-01

    We have developed a model that can simulate the yield of radiation-induced chromosomal aberrations (CAs) and unrejoined chromosome breaks in normal and repair-deficient cells. The model predicts the kinetics of chromosomal aberration formation after exposure in the G₀/G₁ phase of the cell cycle to either low- or high-LET radiation. A previously formulated model based on a stochastic Monte Carlo approach was updated to consider the time dependence of DNA double-strand break (DSB) repair (proper or improper), and different cell types were assigned different kinetics of DSB repair. The distribution of the DSB free ends was derived from a mechanistic model that takes into account the structure of chromatin and DSB clustering from high-LET radiation. The kinetics of chromosomal aberration formation were derived from experimental data on DSB repair kinetics in normal and repair-deficient cell lines. We assessed different types of chromosomal aberrations with the focus on simple and complex exchanges, and predicted the DSB rejoining kinetics and misrepair probabilities for different cell types. The results identify major cell-dependent factors, such as a greater yield of chromosome misrepair in ataxia telangiectasia (AT) cells and slower rejoining in Nijmegen (NBS) cells relative to the wild-type. The model's predictions suggest that two mechanisms could exist for the inefficiency of DSB repair in AT and NBS cells, one that depends on the overall speed of joining (either proper or improper) of DNA broken ends, and another that depends on geometric factors, such as the Euclidian distance between DNA broken ends, which influences the relative frequency of misrepair.

264. A data-driven prediction method for fast-slow systems

    NASA Astrophysics Data System (ADS)

    Groth, Andreas; Chekroun, Mickael; Kondrashov, Dmitri; Ghil, Michael

    2016-04-01

    In this work, we present a prediction method for processes that exhibit a mixture of variability on slow and fast scales. The method relies on combining empirical model reduction (EMR) with singular spectrum analysis (SSA). EMR is a data-driven methodology for constructing stochastic low-dimensional models that account for nonlinearity and serial correlation in the estimated noise, while SSA provides a decomposition of the complex dynamics into low-order components that capture spatio-temporal behavior on different time scales. Our study focuses on the data-driven modeling of partial observations from dynamical systems that exhibit power spectra with broad peaks. The main result in this talk is that the combination of SSA pre-filtering with EMR modeling improves, under certain circumstances, the modeling and prediction skill of such a system, as compared to a standard EMR prediction based on raw data. Specifically, it is the separation into "fast" and "slow" temporal scales by the SSA pre-filtering that achieves the improvement. We show, in particular, that the resulting EMR-SSA emulators help predict intermittent behavior such as rapid transitions between specific regions of the system's phase space. This capability of the EMR-SSA prediction will be demonstrated on two low-dimensional models: the Rössler system and a Lotka-Volterra model for interspecies competition. In either case, the chaotic dynamics is produced through a Shilnikov-type mechanism, and we argue that the latter seems to be an important ingredient for the good prediction skills of EMR-SSA emulators. Shilnikov-type behavior has been shown to arise in various complex geophysical fluid models, such as baroclinic quasi-geostrophic flows in the mid-latitude atmosphere and wind-driven double-gyre ocean circulation models. This pervasiveness of the Shilnikov mechanism of fast-slow transition opens interesting perspectives for the extension of the proposed EMR-SSA approach to more realistic situations.
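The SSA pre-filtering step that this record credits for the improved skill can be sketched in a few lines: embed the series in a trajectory (Hankel) matrix, take an SVD, and reconstruct the leading "slow" components by diagonal averaging. The series below is synthetic and the window length is an arbitrary choice, not a value from the study; the EMR modeling step itself is not reproduced.

    import numpy as np

    def ssa_reconstruct(x, window, n_components):
        """Reconstruct a series from its leading SSA components (diagonal averaging)."""
        n = len(x)
        k = n - window + 1
        # Trajectory (Hankel) matrix: each column is a lagged window of the series.
        X = np.column_stack([x[i:i + window] for i in range(k)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Keep only the leading components (the high-variance, "slow" part).
        X_low = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components, :]
        # Diagonal averaging maps the rank-reduced matrix back to a time series.
        recon = np.zeros(n)
        counts = np.zeros(n)
        for i in range(window):
            for j in range(k):
                recon[i + j] += X_low[i, j]
                counts[i + j] += 1
        return recon / counts

    # Synthetic fast-slow signal: slow oscillation + fast oscillation + noise.
    t = np.linspace(0, 20, 1000)
    x = np.sin(0.5 * t) + 0.3 * np.sin(7.0 * t) \
        + 0.1 * np.random.default_rng(3).normal(size=t.size)

    slow = ssa_reconstruct(x, window=100, n_components=2)
    print("variance of raw series     :", round(np.var(x), 3))
    print("variance of SSA 'slow' part:", round(np.var(slow), 3))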
265. Predictive Control of the Blood Glucose Level in Type I Diabetic Patient Using Delay Differential Equation Wang Model.

    PubMed

    Esna-Ashari, Mojgan; Zekri, Maryam; Askari, Masood; Khalili, Noushin

    2017-01-01

    Because of the increasing risk of diabetes, the measurement and control of blood sugar have been of great importance in recent decades. In type I diabetes, because of the lack of insulin secretion, the cells cannot absorb glucose, which leads to an elevated blood glucose level. To control blood glucose (BG), insulin must be injected into the body. This paper proposes a method for BG level regulation in type I diabetes. The control strategy is based on nonlinear model predictive control. The aim of the proposed controller, optimized with genetic algorithms, is to measure the BG level at each time step and predict it for the next time interval. This results in a smaller control effort, i.e. the rate of insulin delivered to the patient's body. Consequently, this method can decrease the risk of hypoglycemia, a dangerous phenomenon in BG regulation in diabetes caused by a low BG level. Two delay differential equation models, namely the Wang model and the Enhanced Wang model, are applied as the controller model and the plant, respectively. The simulation results exhibit an acceptable performance of the proposed controller in meal disturbance rejection and robustness against parameter changes. As a result, if the patient's food intake suddenly decreases, hypoglycemia will not occur. Furthermore, comparison with other works showed that the new method outperforms previous studies.
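The receding-horizon idea behind the controller in this record can be illustrated with a deliberately simplified sketch: at each step an insulin sequence is chosen that minimizes predicted deviation from a glucose target over a short horizon, and only the first dose is applied. The discrete-time two-state model below is a generic toy, not the Wang delay differential equation model, and every coefficient, bound and disturbance value is an assumption.

    import numpy as np
    from scipy.optimize import minimize

    # Toy discrete-time glucose/insulin model (NOT the Wang DDE model):
    # g = glucose deviation from target (mg/dL), i = accumulated insulin effect.
    def step(g, i, u, meal):
        g_next = g - 0.05 * i * (g + 100) + meal   # insulin action lowers glucose
        i_next = 0.8 * i + u                       # injected insulin u builds up effect
        return g_next, i_next

    def predicted_cost(u_seq, g, i, meals):
        """Cost of a candidate insulin sequence over the prediction horizon."""
        cost = 0.0
        for u, meal in zip(u_seq, meals):
            g, i = step(g, i, u, meal)
            cost += g**2 + 10.0 * u**2             # track target, penalize insulin use
        return cost

    horizon = 6
    g, i = 60.0, 0.0                               # start 60 mg/dL above target
    meal_forecast = [0, 0, 30, 0, 0, 0]            # assumed meal disturbance profile

    for t in range(12):
        meals = (meal_forecast[t:] + [0] * horizon)[:horizon]
        res = minimize(predicted_cost, x0=np.zeros(horizon),
                       args=(g, i, meals),
                       bounds=[(0.0, 2.0)] * horizon)  # insulin rate is non-negative
        u_now = res.x[0]                           # receding horizon: apply first move only
        g, i = step(g, i, u_now, meals[0])
        print(f"t={t:2d}  insulin={u_now:4.2f}  glucose deviation={g:6.1f}")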
266. Predictive Control of the Blood Glucose Level in Type I Diabetic Patient Using Delay Differential Equation Wang Model

    PubMed Central

    Esna-Ashari, Mojgan; Zekri, Maryam; Askari, Masood; Khalili, Noushin

    2017-01-01

    Because of the increasing risk of diabetes, the measurement and control of blood sugar have been of great importance in recent decades. In type I diabetes, because of the lack of insulin secretion, the cells cannot absorb glucose, which leads to an elevated blood glucose level. To control blood glucose (BG), insulin must be injected into the body. This paper proposes a method for BG level regulation in type I diabetes. The control strategy is based on nonlinear model predictive control. The aim of the proposed controller, optimized with genetic algorithms, is to measure the BG level at each time step and predict it for the next time interval. This results in a smaller control effort, i.e. the rate of insulin delivered to the patient's body. Consequently, this method can decrease the risk of hypoglycemia, a dangerous phenomenon in BG regulation in diabetes caused by a low BG level. Two delay differential equation models, namely the Wang model and the Enhanced Wang model, are applied as the controller model and the plant, respectively. The simulation results exhibit an acceptable performance of the proposed controller in meal disturbance rejection and robustness against parameter changes. As a result, if the patient's food intake suddenly decreases, hypoglycemia will not occur. Furthermore, comparison with other works showed that the new method outperforms previous studies. PMID:28487828

267. Use of the Biotic Ligand Model to predict metal toxicity to aquatic biota in areas of differing geology

    USGS Publications Warehouse

    Smith, Kathleen S.

    2005-01-01

    This work evaluates the use of the biotic ligand model (BLM), an aquatic toxicity model, to predict toxic effects of metals on aquatic biota in areas underlain by different rock types. The chemical composition of water, soil, and sediment is largely derived from the composition of the underlying rock. Geologic source materials control key attributes of water chemistry that affect metal toxicity to aquatic biota, including: 1) potentially toxic elements, 2) alkalinity, 3) total dissolved solids, and 4) soluble major elements, such as Ca and Mg, which contribute to water hardness. Miller (2002) compiled chemical data for water samples collected in watersheds underlain by ten different rock types, and in a mineralized area in western Colorado. He found that each rock type has a unique range of water chemistry. In this study, the ten rock types were grouped into two general categories, igneous and sedimentary. Water collected in watersheds underlain by sedimentary rock has higher mean pH, alkalinity, and calcium concentrations than water collected in watersheds underlain by igneous rock. Water collected in the mineralized area had elevated concentrations of calcium and sulfate in addition to other chemical constituents. Miller's water-chemistry data were used in the BLM (computer program) to determine copper and zinc toxicity to Daphnia magna. Modeling results show that waters from watersheds underlain by different rock types have characteristic ranges of predicted LC50 values (a measurement of aquatic toxicity) for copper and zinc, with watersheds underlain by igneous rock having lower predicted LC50 values than watersheds underlain by sedimentary rock. Lower predicted LC50 values suggest that aquatic biota in watersheds underlain by igneous rock may be more vulnerable to copper and zinc inputs than aquatic biota in watersheds underlain by sedimentary rock. For both copper and zinc, there is a trend of increasing predicted LC50 values with increasing dissolved organic carbon (DOC) concentrations. Predicted copper LC50 values are extremely sensitive to DOC concentrations, whereas alkalinity appears to have an influence on zinc toxicity at alkalinities in excess of about 100 mg/L CaCO3. These findings show promise for coupling the BLM (computer program) with measured water-chemistry data to predict metal toxicity to aquatic biota in different geologic settings and under different scenarios. This approach may ultimately be a useful tool for mine-site planning, mitigation and remediation strategies, and ecological risk assessment.
268. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05

    The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is a key problem in precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-processed based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by the IGS (International Global Navigation Satellite System Service) to carry out short-term prediction experiments, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.
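A first step toward the model named in this record is standard Takagi-Sugeno fuzzy inference: Gaussian membership functions fire a small set of rules, and the output is a firing-strength-weighted average of linear consequents. The sketch below is a generic TS evaluator on a made-up clock-bias-like quantity; the rule centres, widths and consequent coefficients are illustrative assumptions and are not fitted the way a fuzzy neural network would fit them.

    import numpy as np

    # Generic first-order Takagi-Sugeno fuzzy system with Gaussian antecedents.
    # Each rule r: IF t is close to c_r THEN y_r = a_r * t + b_r
    centers = np.array([0.0, 12.0, 24.0])   # rule centres (hours), assumed
    widths  = np.array([6.0, 6.0, 6.0])     # membership widths, assumed
    a = np.array([0.50, 0.35, 0.20])        # consequent slopes (ns/hour), assumed
    b = np.array([0.0, 2.0, 5.5])           # consequent offsets (ns), assumed

    def ts_predict(t):
        """Weighted average of the rule consequents (standard TS defuzzification)."""
        w = np.exp(-0.5 * ((t - centers) / widths) ** 2)   # firing strengths
        w = w / w.sum()                                    # normalized firing strengths
        return float(np.sum(w * (a * t + b)))

    # Predict a clock-bias-like quantity a few hours ahead.
    for t in (6, 12, 18, 30):
        print(f"t = {t:2d} h  ->  predicted bias = {ts_predict(t):.2f} ns")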
269. A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.

    PubMed

    Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger

    2018-04-19

    Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take into account the fine scale of the explanatory variables, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate for both regional based models and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than regional based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors for the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the regional based models. The possibility of including genomic variables on different scales and patient specific variables makes it a versatile framework for studying different mutational mechanisms. Our model can serve as the neutral null model for the mutational process; regions that deviate from the null model are candidates for elements that drive cancer development.

270. Dam-Break Flooding and Structural Damage in a Residential Neighborhood: Performance of a coupled hydrodynamic-damage model

    NASA Astrophysics Data System (ADS)

    Sanders, B. F.; Gallegos, H. A.; Schubert, J. E.

    2011-12-01

    The Baldwin Hills dam-break flood and associated structural damage are investigated in this study. The flood caused high-velocity flows exceeding 5 m/s that destroyed 41 wood-framed residential structures, 16 of which were completely washed out. Damage is predicted by coupling a calibrated hydrodynamic flood model based on the shallow-water equations to structural damage models. The hydrodynamic and damage models are two-way coupled so that building failure is predicted upon exceedance of a hydraulic intensity parameter, which in turn triggers a localized reduction in flow resistance that affects flood intensity predictions. Several established damage models and damage correlations reported in the literature are tested to evaluate the predictive skill for two damage states defined by destruction (Level 2) and washout (Level 3). Results show that high-velocity structural damage can be predicted with a remarkable level of skill using established damage models, but only with two-way coupling of the hydrodynamic and damage models. In contrast, when structural failure predictions have no influence on flow predictions, there is a significant reduction in predictive skill. Force-based damage models compare well with a subset of the damage models which were devised for similar types of structures. Implications for emergency planning and preparedness as well as monetary damage estimation are discussed.
271. A burnout prediction model based around char morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao Wu; Edward Lester; Michael Cloke

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300 °C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300 °C, 5% oxygen, and residence times of 200, 400, and 600 ms. Good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.
272. The feasibility of using explicit method for linear correction of the particle size variation using NIR Spectroscopy combined with PLS2 regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03

    NIR spectra obtained from a spectral data acquisition system contain both chemical and physical information about the samples, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in the physical properties of samples. One common approach is to include physical information variation in the calibration model either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using an NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination analysis based on PLS-DA was conducted and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, the particle size is added directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing the particle size and type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that, with the explicit method, the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R²) of 0.99 and a root mean square error of cross-validation (RMSECV) of 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good, and the model was able to predict the type of coffee at two different particle sizes with relatively high R² prediction values. The prediction also resulted in low bias and RMSEP values.

273. Performance of Reclassification Statistics in Comparing Risk Prediction Models

    PubMed Central

    Paynter, Nina P.

    2012-01-01

    Concerns have been raised about the use of traditional measures of model fit in evaluating risk prediction models for clinical use, and reclassification tables have been suggested as an alternative means of assessing the clinical utility of a model. Several measures based on the table have been proposed, including the reclassification calibration (RC) statistic, the net reclassification improvement (NRI), and the integrated discrimination improvement (IDI), but the performance of these in practical settings has not been fully examined. We used simulations to estimate the type I error and power for these statistics in a number of scenarios, as well as the impact of the number and type of categories, when adding a new marker to an established or reference model. The type I error was found to be reasonable in most settings, and power was highest for the IDI, which was similar to the test of association. The relative power of the RC statistic, a test of calibration, and the NRI, a test of discrimination, varied depending on the model assumptions. These tools provide unique but complementary information. PMID:21294152
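The two discrimination-oriented measures named in the record above, the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI), are straightforward to compute once predicted risks from a reference model and an extended model are available. The sketch below applies their standard definitions to synthetic risk vectors; the risk-category cut-points and all numbers are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 1000
    event = rng.binomial(1, 0.2, n).astype(bool)

    # Synthetic predicted risks from a reference model and from a model with a new marker.
    p_old = np.clip(rng.beta(2, 8, n) + 0.10 * event, 0, 1)
    p_new = np.clip(p_old + rng.normal(0.04, 0.05, n) * np.where(event, 1, -1), 0, 1)

    def category(p, cuts=(0.1, 0.2)):
        """Assign each predicted risk to a clinical risk category (assumed cut-points)."""
        return np.digitize(p, cuts)

    def nri(p_old, p_new, event):
        up = category(p_new) > category(p_old)
        down = category(p_new) < category(p_old)
        nri_events = up[event].mean() - down[event].mean()
        nri_nonevents = down[~event].mean() - up[~event].mean()
        return nri_events + nri_nonevents

    def idi(p_old, p_new, event):
        return ((p_new[event].mean() - p_old[event].mean())
                - (p_new[~event].mean() - p_old[~event].mean()))

    print(f"categorical NRI: {nri(p_old, p_new, event):.3f}")
    print(f"IDI            : {idi(p_old, p_new, event):.3f}")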
274. Pressure prediction model for compression garment design.

    PubMed

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

    Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burn scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garment manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
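The core relationship in this record is Laplace's law applied to a fabric tube around a limb: interface pressure equals fabric tension per unit width divided by the limb radius. The short sketch below computes a pressure estimate from circumference, fabric Young's modulus, strain and fabric cross-section; the specific expression for tension per unit width, the strain-from-reduction-factor relation and every numeric value are assumptions for illustration, not the calibrated model from the paper.

    import math

    def garment_pressure(circumference_m, youngs_modulus_pa, strain,
                         fabric_cross_section_m2, fabric_width_m):
        """Estimate interface pressure (Pa) from Laplace's law, P = T / r.

        T is approximated as fabric tension per unit garment width:
        stress (E * strain) times cross-sectional area, divided by the strip width.
        """
        radius = circumference_m / (2 * math.pi)
        tension_per_width = youngs_modulus_pa * strain * fabric_cross_section_m2 / fabric_width_m
        return tension_per_width / radius

    # Illustrative numbers (assumed): a 26 cm limb circumference, 20% reduction factor.
    strain = 0.20 / (1 - 0.20)         # strain implied by a 20% reduction factor (assumed relation)
    p_pa = garment_pressure(circumference_m=0.26,
                            youngs_modulus_pa=1.1e6,        # assumed fabric modulus at this strain
                            strain=strain,
                            fabric_cross_section_m2=5e-6,   # assumed 1 cm wide x 0.5 mm thick strip
                            fabric_width_m=0.01)
    print(f"predicted interface pressure ≈ {p_pa / 133.322:.1f} mmHg")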
275. Biomine: predicting links between biological entities using network models of heterogeneous databases.

    PubMed

    Eronen, Lauri; Toivonen, Hannu

    2012-06-06

    Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable conditions, Biomine can also perform well when no such information is available. The Biomine system is a proof of concept. Its current version contains 1.1 million entities and 8.1 million relations between them, with focus on human genetics. Some of its functionalities are available in a public query interface at http://biomine.cs.helsinki.fi, allowing searching for and visualizing connections between given biological entities.
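The record's link-prediction idea, scoring a candidate pair of nodes by a proximity measure on a weighted, typed graph, can be sketched with a small example. Below, each edge carries an assumed probability-like weight, and the proximity of two nodes is taken as the best-path product of edge weights (a shortest path over negative log-weights). The graph, edge types and weights are invented; this is not the Biomine scoring function itself, just one simple proximity measure of the same flavour.

    import math
    import networkx as nx

    # Toy heterogeneous graph: nodes are genes/proteins/diseases, edges carry a
    # reliability-style weight in (0, 1]. All entities and weights are invented.
    edges = [
        ("geneA", "protein1", 0.9),    # codes-for
        ("protein1", "protein2", 0.7), # interacts-with
        ("protein2", "geneB", 0.8),    # coded-by
        ("geneB", "diseaseX", 0.6),    # associated-with
        ("geneA", "go:0006915", 0.5),  # gene ontology annotation
        ("geneC", "diseaseX", 0.4),    # weaker association
    ]

    G = nx.Graph()
    for u, v, w in edges:
        # Convert the probability-like weight to an additive cost for shortest paths.
        G.add_edge(u, v, weight=w, cost=-math.log(w))

    def proximity(u, v):
        """Best-path product of edge weights between u and v (0 if disconnected)."""
        try:
            cost = nx.shortest_path_length(G, u, v, weight="cost")
        except nx.NetworkXNoPath:
            return 0.0
        return math.exp(-cost)

    # Rank candidate gene-disease links by proximity.
    for gene in ("geneA", "geneC"):
        print(f"{gene} -- diseaseX : proximity {proximity(gene, 'diseaseX'):.3f}")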
276. Automatic land cover classification of geo-tagged field photos using deep learning method

    NASA Astrophysics Data System (ADS)

    Xu, G.; Zhu, X.; Fu, D.; Dong, J.; Xiao, X.

    2016-12-01

    With the popularity of smartphones, more and more crowdsourced geo-tagged field photos have been shared by the public online. They are becoming a potentially valuable information source for environmental studies. However, the labelling and recognition of these photos are time-consuming. To utilise and exploit such information, this research aims to propose a land cover type recognition model for geo-tagged field photos based on the deep learning technique. This model combines a pre-trained convolutional neural network (CNN) as the image feature extractor and a softmax regression model as the feature classifier. The pre-trained CNN model Inception-v3 is used in this study. Previously labelled field photos from the Global Geo-Referenced Field Photo Library (http://eomf.ou.edu/photos) were chosen for model training and validation. The results indicate that our field photo recognition model achieves an acceptable accuracy (50.34% for top-1 prediction and 78.20% for top-3 prediction) of land cover classification. More importantly, this model can provide probabilities for its predictions as a self-assessment of uncertainty. After filtering out the predictions with a certainty of less than 75%, the overall accuracy increases to 80.14%, which implies that the model is aware of its prediction uncertainty and can quantitatively assess it. By demonstrating the feasibility of this type of research, we hope that similar studies, such as geological and atmospheric information extraction from field photos, can be conducted. This research could be a critical exploration of how artificial intelligence and crowd-sourced data can support earth studies.

277. Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.

    2009-01-01

    This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation is presented, originally proposed by Tomczak, that offsets the distortions caused by the trend. This article reports on numerical experiments that compare predictive and classification performance of the local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance in the presence of trends are reasonably robust. The tests based on other model performance metrics, such as prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes, also demonstrate the robustness of both local modeling approaches. © International Association for Mathematical Geology 2009.
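The local nonparametric prediction evaluated in the record above can be approximated by a simple distance-weighted k-nearest-neighbour estimator. The sketch below also shows the general idea of transforming coordinates before computing distances so that a directional trend does not distort neighbourhoods; the specific transformation used in the study (after Tomczak) is not reproduced here, and the anisotropy factor, the data and the choice of k are all assumptions.

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic tract data: coordinates (km) and recoverable-gas volumes with a
    # north-south trend (values increase with y). Purely illustrative.
    xy = rng.uniform(0, 100, size=(300, 2))
    gas = 5.0 + 0.08 * xy[:, 1] + rng.normal(0, 1.0, 300)

    def transform(xy, anisotropy=3.0):
        """Shrink distances along the (assumed) trend direction before neighbour search."""
        out = xy.copy()
        out[:, 1] = out[:, 1] / anisotropy
        return out

    def knn_predict(train_xy, train_val, query_xy, k=8):
        d = np.linalg.norm(train_xy - query_xy, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)          # inverse-distance weights
        return np.sum(w * train_val[idx]) / np.sum(w)

    query = np.array([50.0, 80.0])
    plain = knn_predict(xy, gas, query)
    trended = knn_predict(transform(xy), gas, transform(query[None, :])[0])
    print(f"Euclidean-distance estimate      : {plain:.2f}")
    print(f"trend-adjusted distance estimate : {trended:.2f}")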
278. Prediction of 222Rn in Danish dwellings using geology and house construction information from central databases.

    PubMed

    Andersen, Claus E; Raaschou-Nielsen, Ole; Andersen, Helle Primdal; Lind, Morten; Gravesen, Peter; Thomsen, Birthe L; Ulbak, Kaare

    2007-01-01

    A linear regression model has been developed for the prediction of indoor 222Rn in Danish houses. The model provides proxy radon concentrations for about 21,000 houses in a Danish case-control study on the possible association between residential radon and childhood cancer (primarily leukaemia). The model was calibrated against radon measurements in 3116 houses. An independent dataset with 788 house measurements was used for model performance assessment. The model includes nine explanatory variables, of which the most important ones are house type and geology. All explanatory variables are available from central databases. The model was fitted to log-transformed radon concentrations and it has an R² of 40%. The uncertainty associated with individual predictions of (untransformed) radon concentrations is about a factor of 2.0 (one standard deviation). The comparison with the independent test data shows that the model makes sound predictions and that errors of radon predictions are only weakly correlated with the estimates themselves (R² = 10%).

279. Neural network based glucose-insulin metabolism models for children with Type 1 diabetes.

    PubMed

    Mougiakakou, Stavroula G; Prountzou, Aikaterini; Iliopoulou, Dimitra; Nikita, Konstantina S; Vazeou, Andriani; Bartsocas, Christos S

    2006-01-01

    In this paper two models for the simulation of glucose-insulin metabolism of children with Type 1 diabetes are presented. The models are based on the combined use of Compartmental Models (CMs) and artificial Neural Networks (NNs). Data from children with Type 1 diabetes, stored in a database, have been used as input to the models. The data are taken from four children with Type 1 diabetes and contain information about glucose levels taken from a continuous glucose monitoring system, insulin intake and food intake, along with the corresponding times. The influence of the injected insulin on plasma insulin concentration, as well as the effect of food intake on glucose input into the blood from the gut, are estimated from the CMs. The outputs of the CMs, along with previous glucose measurements, are fed to a NN, which provides short-term prediction of glucose values. For comparative reasons two different NN architectures have been tested: a Feed-Forward NN (FFNN) trained with the back-propagation algorithm with adaptive learning rate and momentum, and a Recurrent NN (RNN), trained with the Real Time Recurrent Learning (RTRL) algorithm. The results indicate that the best prediction performance can be achieved by the use of the RNN.
280. VHSIC/VHSIC-Like Reliability Prediction Modeling

    DTIC Science & Technology

    1989-10-01

    ... prediction would require knowledge of event statistics as well as device robustness. Additionally, although this is primarily a theoretical, bottom... Degradation in Section 5.3. P = Power; PDIP = Plastic DIP; P(f) = Probability of Failure due to EOS or ESD; P(f|c) = Probability of Failure given Contact from an... the results of those stresses: Device, Stress, Part Number, Power Dissipation, Manufacturer, Test Type, Part Description, Junction Temperature, Package Type.

281. Predictive models of poly(ethylene-terephthalate) film degradation under multi-factor accelerated weathering exposures

    PubMed Central

    Ngendahimana, David K.; Fagerholm, Cara L.; Sun, Jiayang; Bruckman, Laura S.

    2017-01-01

    Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grades and exposure types were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples' responses, the change in haze (%) depended on individual samples' responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R² values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enables cross-correlation of these lab-based exposures with real-world conditions for service life prediction. PMID:28498875
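The contrast this record draws between fixed-effects and mixed-effects fits can be reproduced with statsmodels, which provides a linear mixed-effects model with a random intercept per sample. The data frame below is synthetic (made-up haze measurements over exposure time for a handful of hypothetical samples); only the general modeling pattern is meant to match the record.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)

    # Synthetic degradation data: haze (%) versus exposure time for 8 hypothetical samples,
    # each with its own random offset and slope perturbation.
    rows = []
    for sample in range(8):
        offset = rng.normal(0, 1.0)
        slope = 0.8 + rng.normal(0, 0.15)
        for t in range(0, 11):
            rows.append({"sample": f"s{sample}", "time": t,
                         "haze": 2.0 + offset + slope * t + rng.normal(0, 0.4)})
    df = pd.DataFrame(rows)

    # Fixed-effects-only fit (ordinary least squares).
    ols_fit = smf.ols("haze ~ time", data=df).fit()

    # Mixed-effects fit with a random intercept for each sample.
    mixed_fit = smf.mixedlm("haze ~ time", data=df, groups=df["sample"]).fit()

    print("OLS slope estimate   :", round(ols_fit.params["time"], 3))
    print("Mixed-model slope    :", round(mixed_fit.params["time"], 3))
    print("Random-intercept var :", round(float(mixed_fit.cov_re.iloc[0, 0]), 3))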
282. Achieving Maximum Crack Remediation Effect from Optimized Hydrotesting

    DOT National Transportation Integrated Search

    2011-06-15

    This project developed and validated models that will allow the industry to predict the overall benefits of hydrotests. Such a prediction is made with a consideration of various characteristics of a pipeline including the type of operation, stage of ...

283. Modeling and predicting vegetation response of western USA grasslands, shrublands, and deserts to climate change (Chapter 1)

    Treesearch

    Megan M. Friggens; Marcus V. Warwell; Jeanne C. Chambers; Stanley G. Kitchen

    2012-01-01

    Experimental research and species distribution modeling predict large changes in the distributions of species and vegetation types in the Interior West due to climate change. Species' responses will depend not only on their physiological tolerances but also on their phenology, establishment properties, biotic interactions, and capacity to evolve and migrate. Because...
284. Predictive models for radial sap flux variation in coniferous, diffuse-porous and ring-porous temperate trees

    Treesearch

    Aaron B. Berdanier; Chelcy F. Miniat; James S. Clark

    2016-01-01

    Accurately scaling sap flux observations to tree or stand levels requires accounting for variation in sap flux between wood types and by depth into the tree. However, existing models for radial variation in axial sap flux are rarely used because they are difficult to implement, there is uncertainty about their predictive ability and calibration measurements...

285. Climate change in grasslands, shrublands, and deserts of the interior American West: a review and needs assessment

    Treesearch

    Deborah M. Finch

    2012-01-01

    Recent research and species distribution modeling predict large changes in the distributions of species and vegetation types in the western interior of the United States in response to climate change. This volume reviews existing climate models that predict species and vegetation changes in the western United States, and it synthesizes knowledge about climate change...

286. Parenting stress and depressive symptoms in postpartum mothers: Bidirectional or unidirectional effects?

    PubMed Central

    Thomason, Elizabeth; Volling, Brenda L.; Flynn, Heather A.; McDonough, Susan C.; Marcus, Sheila M.; Lopez, Juan F.; Vazquez, Delia M.

    2015-01-01

    Despite the consistent link between parenting stress and postpartum depressive symptoms, few studies have explored the relationships longitudinally. The purpose of this study was to test bidirectional and unidirectional models of depressive symptoms and parenting stress. Uniquely, three specific domains of parenting stress were examined: parental distress, difficult child stress, and parent-child dysfunctional interaction (PCDI). One hundred and five women completed the Beck Depression Inventory and the Parenting Stress Index-Short Form at 3, 7, and 14 months after giving birth. Structural equation modeling revealed that total parenting stress predicted later depressive symptoms; however, there were different patterns between postpartum depressive symptoms and different types of parenting stress. A unidirectional model of parental distress predicting depressive symptoms best fit the data, with significant stability paths but non-significant cross-lagged paths. A unidirectional model in which depressive symptoms predicted later difficult child stress was also significant. No model fit well with PCDI. Future research should continue to explore the specific nature of the associations of postpartum depression and different types of parenting stress on infant development and the infant-mother relationship. PMID:24956500
287. Role of B-Type Natriuretic Peptide and N-Terminal Prohormone BNP as Predictors of Cardiovascular Morbidity and Mortality in Patients With a Recent Coronary Event and Type 2 Diabetes Mellitus.

    PubMed

    Wolsk, Emil; Claggett, Brian; Pfeffer, Marc A; Diaz, Rafael; Dickstein, Kenneth; Gerstein, Hertzel C; Lawson, Francesca C; Lewis, Eldrin F; Maggioni, Aldo P; McMurray, John J V; Probstfield, Jeffrey L; Riddle, Matthew C; Solomon, Scott D; Tardif, Jean-Claude; Køber, Lars

    2017-05-29

    Natriuretic peptides are recognized as important predictors of cardiovascular events in patients with heart failure, but less is known about their prognostic importance in patients with acute coronary syndrome. We sought to determine whether B-type natriuretic peptide (BNP) and N-terminal prohormone B-type natriuretic peptide (NT-proBNP) could enhance risk prediction of a broad range of cardiovascular outcomes in patients with acute coronary syndrome and type 2 diabetes mellitus. Patients with a recent acute coronary syndrome and type 2 diabetes mellitus were prospectively enrolled in the ELIXA trial (n=5525, follow-up time 26 months). Best risk models were constructed from relevant baseline variables with and without BNP/NT-proBNP. C statistics, Net Reclassification Index, and Integrated Discrimination Index were analyzed to estimate the value of adding BNP or NT-proBNP to best risk models. Overall, BNP and NT-proBNP were the most important predictors of all outcomes examined, irrespective of history of heart failure or any prior cardiovascular disease. BNP significantly improved C statistics when added to risk models for each outcome examined, the strongest increments being in death (0.77-0.82, P <0.001), cardiovascular death (0.77-0.83, P <0.001), and heart failure (0.84-0.87, P <0.001). BNP or NT-proBNP alone predicted death as well as all other variables combined (0.77 versus 0.77). In patients with a recent acute coronary syndrome and type 2 diabetes mellitus, BNP and NT-proBNP were powerful predictors of cardiovascular outcomes beyond heart failure and death, ie, were also predictive of MI and stroke. Natriuretic peptides added as much predictive information about death as all other conventional variables combined. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01147250. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
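The C-statistic comparison reported in this record (risk model with versus without a natriuretic peptide) follows a standard pattern: fit a baseline logistic model, fit an extended model including the biomarker, and compare areas under the ROC curve. The sketch below does this on synthetic data with scikit-learn; the covariates, effect sizes and sample are assumptions, not the ELIXA data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)
    n = 4000

    # Synthetic baseline covariates and a log-transformed biomarker (BNP-like).
    age = rng.normal(64, 9, n)
    prior_hf = rng.binomial(1, 0.2, n)
    log_bnp = rng.normal(4.5, 1.0, n) + 0.8 * prior_hf

    # Hypothetical outcome model: the biomarker carries real prognostic signal.
    lin = -9.0 + 0.06 * age + 0.5 * prior_hf + 0.7 * log_bnp
    y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

    X_base = np.column_stack([age, prior_hf])
    X_full = np.column_stack([age, prior_hf, log_bnp])

    auc_base = roc_auc_score(
        y, LogisticRegression(max_iter=1000).fit(X_base, y).predict_proba(X_base)[:, 1])
    auc_full = roc_auc_score(
        y, LogisticRegression(max_iter=1000).fit(X_full, y).predict_proba(X_full)[:, 1])

    print(f"C statistic, baseline model       : {auc_base:.3f}")
    print(f"C statistic, baseline + biomarker : {auc_full:.3f}")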
288. Use of predictive models and rapid methods to nowcast bacteria levels at coastal beaches

    USGS Publications Warehouse

    Francy, Donna S.

    2009-01-01

    The need for rapid assessments of recreational water quality to better protect public health is well accepted throughout the research and regulatory communities. Rapid analytical methods, such as quantitative polymerase chain reaction (qPCR) and immunomagnetic separation/adenosine triphosphate (ATP) analysis, are being tested but are not yet ready for widespread use. Another solution is the use of predictive models, wherein variable(s) that are easily and quickly measured are surrogates for concentrations of fecal-indicator bacteria. Rainfall-based alerts, the simplest type of model, have been used by several communities for a number of years. Deterministic models use mathematical representations of the processes that affect bacteria concentrations; this type of model is being used for beach-closure decisions at one location in the USA. Multivariable statistical models are being developed and tested in many areas of the USA; however, they are only used in three areas of the Great Lakes to aid in notifications of beach advisories or closings. These "operational" statistical models can result in more accurate assessments of recreational water quality than use of the previous day's Escherichia coli (E. coli) concentration as determined by traditional culture methods. The Ohio Nowcast, at Huntington Beach, Bay Village, Ohio, is described in this paper as an example of an operational statistical model. Because predictive modeling is a dynamic process, water-resource managers continue to collect additional data to improve the predictive ability of the nowcast and expand the nowcast to other Ohio beaches and a recreational river. Although predictive models have been shown to work well at some beaches and are becoming more widely accepted, implementation in many areas is limited by funding, lack of coordinated technical leadership, and lack of supporting epidemiological data.

289. Students' online collaborative intention for group projects: Evidence from an extended version of the theory of planned behaviour.

    PubMed

    Cheng, Eddie W L; Chu, Samuel K W

    2016-08-01

    Given the increasing use of web technology for teaching and learning, this study developed and examined an extended version of the theory of planned behaviour (TPB) model, which explained students' intention to collaborate online for their group projects. Results indicated that past experience predicted the three antecedents of intention, while past behaviour was predictive of subjective norm and perceived behavioural control. Moreover, the three antecedents (attitude towards e-collaboration, subjective norm and perceived behavioural control) were found to significantly predict e-collaborative intention. This study explored the use of the "remember" type of awareness (i.e. past experience) and evaluated the value of the "know" type of awareness (i.e. past behaviour) in the TPB model. © 2015 International Union of Psychological Science.
290. Using Latent Class Analysis to Model Temperament Types.

    PubMed

    Loken, Eric

    2004-10-01

    Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.

291. Model of land cover change prediction in West Java using cellular automata-Markov chain (CA-MC)

    NASA Astrophysics Data System (ADS)

    Virtriana, Riantini; Sumarto, Irawan; Deliar, Albertus; Pasaribu, Udjianna S.; Taufik, Moh.

    2015-04

    Land is a fundamental factor that is closely related to economic growth and supports the needs of human life. Land-use activity is a major issue and challenge for planners. Changes in land-use activity may be due to socio-economic development, to changes in the environment, or to both. The phenomenon of land cover change can be approached through land cover change modelling. The available data show that West Java has a high level of economic activity, which will have an impact on land cover change. CA-MC is a model used to determine the statistical change probabilities for each land cover type from land cover data at different time periods. CA-MC is able to predict the land cover types expected to occur. The CA-MC model predicted land cover changes with an accuracy of 95.42%.
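The Markov-chain half of the CA-MC approach in this record amounts to estimating a transition probability matrix between land cover classes from two dates and projecting class proportions forward; the cellular-automata half then allocates those transitions in space using neighbourhood rules. The sketch below covers only the Markov part on two tiny synthetic land cover grids; the class labels and maps are invented and the spatial CA allocation is omitted.

    import numpy as np

    rng = np.random.default_rng(8)
    classes = ["forest", "agriculture", "urban"]          # assumed land cover classes
    K = len(classes)

    # Two synthetic co-registered land cover maps at times t1 and t2.
    map_t1 = rng.integers(0, K, size=(50, 50))
    map_t2 = map_t1.copy()
    change = rng.random(map_t1.shape) < 0.15              # ~15% of cells change class
    map_t2[change] = rng.integers(0, K, size=change.sum())

    # Estimate the Markov transition matrix P[i, j] = Pr(class j at t2 | class i at t1).
    P = np.zeros((K, K))
    for i in range(K):
        mask = map_t1 == i
        for j in range(K):
            P[i, j] = np.mean(map_t2[mask] == j)

    # Project class proportions one step further ahead (t3 = proportions at t2 times P).
    prop_t2 = np.array([(map_t2 == j).mean() for j in range(K)])
    prop_t3 = prop_t2 @ P

    for name, p2, p3 in zip(classes, prop_t2, prop_t3):
        print(f"{name:12s} t2: {p2:.3f}  projected t3: {p3:.3f}")
    print("rows of P sum to", P.sum(axis=1).round(3))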
Model of land cover change prediction in West Java using cellular automata-Markov chain (CA-MC)

NASA Astrophysics Data System (ADS)

Virtriana, Riantini; Sumarto, Irawan; Deliar, Albertus; Pasaribu, Udjianna S.; Taufik, Moh.

2015-04-01

Land is a fundamental factor that is closely related to economic growth and supports the needs of human life. Land-use activity is a major issue and challenge for country planners. Changes in land-use activity may be driven by socio-economic development, by changes in the environment, or by both. The phenomenon of land cover change can be studied through land cover change modelling. The available facts and data show that West Java has high economic activity, which has an impact on land cover change. CA-MC is a model used to determine the statistical change probability for each land cover type from land cover data at different time periods, and it provides as output the land cover types expected to occur. Results from CA-MC modelling in predicting land cover changes showed an accuracy rate of 95.42%.

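A minimal sketch of the Markov-chain half of a CA-MC workflow follows: a transition-probability matrix is estimated from two land-cover rasters at different dates and class areas are projected one step ahead. The rasters and class codes are invented, and the cellular-automaton spatial allocation step is only indicated by a comment.

    # Estimate a land-cover transition matrix from two dates and project one step.
    import numpy as np

    # Hypothetical land-cover rasters (codes: 0 = forest, 1 = agriculture, 2 = built-up).
    lc_t1 = np.array([[0, 0, 1], [0, 1, 1], [2, 2, 1]])
    lc_t2 = np.array([[0, 1, 1], [0, 1, 2], [2, 2, 2]])
    n_classes = 3

    # Cross-tabulate transitions and normalise rows to probabilities.
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(lc_t1.ravel(), lc_t2.ravel()):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)

    # Markov projection of class proportions to the next period.
    area_t2 = np.bincount(lc_t2.ravel(), minlength=n_classes) / lc_t2.size
    area_t3 = area_t2 @ P
    print(P.round(2), area_t3.round(2))
    # A full CA-MC run would then allocate area_t3 spatially, weighting each cell
    # by suitability and by the classes of its neighbours (the CA step).
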
Predicting the Responses of Soil Nitrite-Oxidizers to Multi-Factorial Global Change: A Trait-Based Approach

DOE PAGES

Le Roux, Xavier; Bouskill, Nicholas J.; Niboyet, Audrey; ...

2016-05-17

Soil microbial diversity is huge and a few grams of soil contain more bacterial taxa than there are bird species on Earth. This high diversity often makes predicting the responses of soil bacteria to environmental change intractable and restricts our capacity to predict the responses of soil functions to global change. Here, using a long-term field experiment in a California grassland, we studied the main and interactive effects of three global change factors (increased atmospheric CO2 concentration, precipitation and nitrogen addition, and all their factorial combinations, based on global change scenarios for central California) on the potential activity, abundance and dominant taxa of soil nitrite-oxidizing bacteria (NOB). Using a trait-based model, we then tested whether categorizing NOB into a few functional groups unified by physiological traits enables understanding and predicting how soil NOB respond to global environmental change. Contrasting responses to global change treatments were observed between three main NOB functional types. In particular, putatively mixotrophic Nitrobacter, rare under most treatments, became dominant under the 'High CO2 + Nitrogen + Precipitation' treatment. The mechanistic trait-based model, which simulated ecological niches of NOB types consistent with previous ecophysiological reports, helped predict the observed effects of global change on NOB and elucidate the underlying biotic and abiotic controls. Our results are a starting point for representing the overwhelming diversity of soil bacteria by a few functional types that can be incorporated into models of terrestrial ecosystems and biogeochemical processes.

The application of improved neural network in hydrocarbon reservoir prediction

NASA Astrophysics Data System (ADS)

Peng, Xiaobo

2013-03-01

This paper uses BP neural network techniques to make hydrocarbon reservoir prediction easier and faster for oil wells in the Tarim Basin. A grey-cascade neural network model is proposed that converges faster and has a lower error rate. The new method overcomes the shortcomings of the traditional BP neural network, namely slow convergence and a tendency to become trapped in local minima. The study used 220 sets of measured logging data as training samples. By changing the number of neurons and the types of transfer function in the hidden layers, the best-performing prediction model is identified. The resulting model produces good prediction results in general and can be used for hydrocarbon reservoir prediction.

Operational Dust Prediction

NASA Technical Reports Server (NTRS)

Benedetti, Angela; Baldasano, Jose M.; Basart, Sara; Benincasa, Francesco; Boucher, Olivier; Brooks, Malcolm E.; Chen, Jen-Ping; Colarco, Peter R.; Gong, Sunlin; Huneeus, Nicolas

2014-01-01

Over the last few years, numerical prediction of dust aerosol concentration has become prominent at several research and operational weather centres due to growing interest from diverse stakeholders, such as solar energy plant managers, health professionals, aviation and military authorities and policymakers. Dust prediction in numerical weather prediction-type models faces a number of challenges owing to the complexity of the system. At the centre of the problem is the vast range of scales required to fully account for all of the physical processes related to dust. Another limiting factor is the paucity of suitable dust observations available for model evaluation and assimilation. This chapter discusses in detail numerical prediction of dust with examples from systems that are currently providing dust forecasts in near real-time or are part of international efforts to establish daily provision of dust forecasts based on multi-model ensembles. The various models are introduced and described along with an overview on the importance of dust prediction activities and a historical perspective. Assimilation and evaluation aspects in dust prediction are also discussed.

Predicting Insulin Absorption and Glucose Uptake during Exercise in Type 1 Diabetes

NASA Astrophysics Data System (ADS)

Frank, Spencer; Hinshaw, Ling; Basu, Rita; Szeri, Andrew; Basu, Ananda

2017-11-01

A dose of insulin infused into subcutaneous tissue has been shown to absorb more quickly during exercise, potentially causing hypoglycemia in persons with type 1 diabetes. We develop a model that relates exercise-induced physiological changes to enhanced insulin absorption (k) and glucose uptake (GU). Drawing on concepts of the microcirculation, we derive a relationship that reveals that k and GU are mainly determined by two physiological parameters that characterize the tissue: the tissue perfusion rate (Q) and the capillary permeability surface area (PS). Independently measured values of Q and PS from the literature are used in the model to make predictions of k and GU. We compare these predictions to experimental observations of healthy and diabetic patients that are given a meal followed by rest or exercise. The experiments show that during exercise insulin concentrations significantly increase and that glucose levels fall rapidly. The model predictions are consistent with the experiments and show that increases in Q and PS directly increase k and GU. This mechanistic understanding provides a basis for handling exercise in control algorithms for an artificial pancreas. Now at University of British Columbia.

The balanced ideological antipathy model: explaining the effects of ideological attitudes on inter-group antipathy across the political spectrum.

PubMed

Crawford, Jarret T; Mallinas, Stephanie R; Furman, Bryan J

2015-12-01

We introduce the balanced ideological antipathy (BIA) model, which challenges assumptions that right-wing authoritarianism (RWA) and social dominance orientation (SDO) predict inter-group antipathy per se. Rather, the effects of RWA and SDO on antipathy should depend on the target's political orientation and political objectives, the specific components of RWA, and the type of antipathy expressed. Consistent with the model, two studies (N = 585) showed that the Traditionalism component of RWA positively and negatively predicted both political intolerance and prejudice toward tradition-threatening and -reaffirming groups, respectively, whereas SDO positively and negatively predicted prejudice (and to some extent political intolerance) toward hierarchy-attenuating and -enhancing groups, respectively. Critically, the Conservatism component of RWA positively predicted political intolerance (but not prejudice) toward each type of target group, suggesting it captures the anti-democratic impulse at the heart of authoritarianism. Recommendations for future research on the relationship between ideological attitudes and inter-group antipathy are discussed. © 2015 by the Society for Personality and Social Psychology, Inc.

Probabilistic framework for product design optimization and risk management

NASA Astrophysics Data System (ADS)

Keski-Rahkonen, J. K.

2018-05-01

Probabilistic methods have gradually gained ground within engineering practices, but it is still the industry standard to use deterministic safety margin approaches to dimensioning components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.

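A small sketch of the Monte Carlo load-resistance calculation named in the record above: failure probability is estimated as the fraction of sampled (load, resistance) pairs in which load exceeds resistance. The distributions, parameters, and cost figure below are placeholders, not values from the paper.

    # Monte Carlo estimate of P(failure) = P(load > resistance) for assumed distributions.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000

    # Placeholder stress/strength models (units arbitrary): lognormal load, normal resistance.
    load = rng.lognormal(mean=np.log(300.0), sigma=0.25, size=n)
    resistance = rng.normal(loc=500.0, scale=50.0, size=n)

    p_fail = np.mean(load > resistance)
    print(f"estimated failure probability ~ {p_fail:.2e}")

    # Expected failure cost for a simple life-cycle comparison (placeholder cost figure).
    cost_of_failure = 1.0e5
    print(f"expected failure cost ~ {p_fail * cost_of_failure:.0f}")
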
Overview of the 1986--1987 atomic mass predictions

DOE Office of Scientific and Technical Information (OSTI.GOV)

Haustein, P.E.

1988-07-01

The need for a comprehensive update of earlier sets of atomic mass predictions is documented. A project that grew from this need and which resulted in the preparation of the 1986--1987 Atomic Mass Predictions is summarized. Ten sets of new mass predictions and expository text from a variety of types of mass models are combined with the latest evaluation of experimentally determined atomic masses. The methodology employed in constructing these mass predictions is outlined. The models are compared with regard to their reproduction of the experimental mass surface and their use of varying numbers of adjustable parameters. Plots are presented, for each set of predictions, of differences between model calculations and the measured masses. These plots may be used to estimate the reliability of the new mass predictions in unmeasured regions that border the experimentally known mass surface. Copyright 1988 Academic Press, Inc.

Using Flux Site Observations to Calibrate Root System Architecture Stencils for Water Uptake of Plant Functional Types in Land Surface Models.

NASA Astrophysics Data System (ADS)

Bouda, M.

2017-12-01

Root system architecture (RSA) can significantly affect plant access to water, total transpiration, as well as its partitioning by soil depth, with implications for surface heat, water, and carbon budgets. Despite recent advances in land surface model (LSM) descriptions of plant hydraulics, RSA has not been included because of its three-dimensional complexity, which makes RSA modelling generally too computationally costly. This work builds upon the recently introduced "RSA stencil," a process-based 1D layered model that captures the dynamic shifts in water potential gradients of 3D RSA in response to heterogeneous soil moisture profiles. In validations using root systems calibrated to the rooting profiles of four plant functional types (PFT) of the Community Land Model, the RSA stencil predicts plant water potentials within 2% of the outputs of full 3D models, despite its trivial computational cost. In transient simulations, the RSA stencil yields improved predictions of water uptake and soil moisture profiles compared to a 1D model based on root fraction alone. Here I show how the RSA stencil can be calibrated to time-series observations of soil moisture and transpiration to yield a water uptake PFT definition for use in terrestrial models. This model-data integration exercise aims to improve LSM predictions of soil moisture dynamics and, under water-limiting conditions, surface fluxes. These improvements can be expected to significantly impact predictions of downstream variables, including surface fluxes, climate-vegetation feedbacks and soil nutrient cycling.

De Novo Chromosome Structure Prediction

NASA Astrophysics Data System (ADS)

di Pierro, Michele; Cheng, Ryan R.; Lieberman-Aiden, Erez; Wolynes, Peter G.; Onuchic, José N.

Chromatin consists of DNA and hundreds of proteins that interact with the genetic material. In vivo, chromatin folds into nonrandom structures. The physical mechanism leading to these characteristic conformations, however, remains poorly understood. We recently introduced MiChroM, a model that generates chromosome conformations by using the idea that chromatin can be subdivided into types based on its biochemical interactions. Here we extend and complete our previous finding by showing that structural chromatin types can be inferred from ChIP-Seq data. Chromatin types, which are distinct from DNA sequence, are partially epigenetically controlled and change during cell differentiation, thus constituting a link between epigenetics, chromosomal organization, and cell development. We show that, for GM12878 lymphoblastoid cells, we are able to predict accurate chromosome structures with the only input of genomic data. The degree of accuracy achieved by our prediction supports the viability of the proposed physical mechanism of chromatin folding and makes the computational model a powerful tool for future investigations.

Epidemiological Features and Forecast Model Analysis for the Morbidity of Influenza in Ningbo, China, 2006-2014.

PubMed

Wang, Chunli; Li, Yongdong; Feng, Wei; Liu, Kui; Zhang, Shu; Hu, Fengjiao; Jiao, Suli; Lao, Xuying; Ni, Hongxia; Xu, Guozhang

2017-05-25

This study aimed to identify circulating influenza virus strains and vulnerable population groups and investigate the distribution and seasonality of influenza viruses in Ningbo, China. Then, an autoregressive integrated moving average (ARIMA) model for prediction was established. Influenza surveillance data for 2006-2014 were obtained for cases of influenza-like illness (ILI) (n = 129,528) from the municipal Centers for Disease Control and virus surveillance systems of Ningbo, China. The ARIMA model was proposed to predict the expected morbidity cases from January 2015 to December 2015. Of the 13,294 specimens, influenza virus was detected in 1148 (8.64%) samples, including 951 (82.84%) influenza type A and 197 (17.16%) influenza type B viruses; the influenza virus isolation rate was strongly correlated with the rate of ILI during the overall study period (r = 0.20, p < 0.05). The ARIMA(1,1,1)(1,1,0)12 model could be used to predict the ILI incidence in Ningbo. The seasonal pattern of influenza activity in Ningbo tended to peak during the rainy season and winter. Given those results, the model we established could effectively predict the trend of influenza-related morbidity, providing a methodological basis for future influenza monitoring and control strategies in the study area.

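Purely as an illustration of the model class named above, the sketch below fits a seasonal ARIMA of the reported order, ARIMA(1,1,1)(1,1,0)12, to a synthetic monthly incidence-like series with statsmodels and forecasts the next twelve months; the Ningbo surveillance data themselves are not used.

    # Fit ARIMA(1,1,1)(1,1,0)12 to a synthetic monthly ILI-like series and forecast one year.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(1)
    months = pd.date_range("2006-01", periods=108, freq="MS")
    seasonal = 50 + 30 * np.sin(2 * np.pi * months.month / 12)   # winter/rainy-season peaks
    y = pd.Series(seasonal + rng.normal(0, 5, len(months)), index=months)

    model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 0, 12))
    res = model.fit(disp=False)
    print(res.aic)
    print(res.forecast(steps=12).round(1))   # expected morbidity for the next 12 months
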
Development of a wound healing index for patients with chronic wounds.

PubMed

Horn, Susan D; Fife, Caroline E; Smout, Randall J; Barrett, Ryan S; Thomson, Brett

2013-01-01

Randomized controlled trials in wound care generalize poorly because they exclude patients with significant comorbid conditions. Research using real-world wound care patients is hindered by lack of validated methods to stratify patients according to severity of underlying illnesses. We developed a comprehensive stratification system for patients with wounds that predicts healing likelihood. Complete medical record data on 50,967 wounds from the United States Wound Registry were assigned a clear outcome (healed, amputated, etc.). Factors known to be associated with healing were evaluated using logistic regression models. Significant variables (p < 0.05) were determined and subsequently tested on a holdout sample of data. A different model predicted healing for each wound type. Some variables predicted significantly in nearly all models: wound size, wound age, number of wounds, evidence of bioburden, tissue type exposed (Wagner grade or stage), being nonambulatory, and requiring hospitalization during the course of care. Variables significant in some models included renal failure, renal transplant, malnutrition, autoimmune disease, and cardiovascular disease. All models validated well when applied to the holdout sample. The "Wound Healing Index" can validly predict likelihood of wound healing among real-world patients and can facilitate comparative effectiveness research to identify patients needing advanced therapeutics. © 2013 by the Wound Healing Society.

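A schematic version of the kind of per-wound-type logistic model described above follows, using a handful of the predictors the abstract names (wound size, wound age, Wagner grade, hospitalization) and a held-out split standing in for the registry's holdout validation. All column names, coefficients, and data are fabricated for illustration; the real Wound Healing Index is far richer.

    # Logistic regression of healed (1) vs. not healed (0) on a few wound covariates.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)
    n = 500
    df = pd.DataFrame({
        "wound_area_cm2": rng.lognormal(1.0, 0.8, n),
        "wound_age_days": rng.integers(5, 400, n),
        "wagner_grade":   rng.integers(1, 5, n),
        "hospitalized":   rng.integers(0, 2, n),
    })
    # Simulated outcome: larger, older, deeper wounds and hospitalization reduce healing odds.
    logit = (3.0 - 0.02 * df.wound_area_cm2 - 0.004 * df.wound_age_days
             - 0.5 * df.wagner_grade - 0.8 * df.hospitalized)
    df["healed"] = rng.random(n) < 1 / (1 + np.exp(-logit))

    X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="healed"), df["healed"],
                                              random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("holdout AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
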
Toward Enhancing Automated Credibility Assessment: A Model for Question Type Classification and Tools for Linguistic Analysis

ERIC Educational Resources Information Center

Moffitt, Kevin Christopher

2011-01-01

The three objectives of this dissertation were to develop a question type model for predicting linguistic features of responses to interview questions, create a tool for linguistic analysis of documents, and use lexical bundle analysis to identify linguistic differences between fraudulent and non-fraudulent financial reports. First, The Moffitt…

ProbFold: a probabilistic method for integration of probing data in RNA secondary structure prediction.

PubMed

Sahoo, Sudhakar; Świtnicki, Michał P; Pedersen, Jakob Skou

2016-09-01

Recently, new RNA secondary structure probing techniques have been developed, including Next Generation Sequencing based methods capable of probing transcriptome-wide. These techniques hold great promise for improving structure prediction accuracy. However, each new data type comes with its own signal properties and biases, which may even be experiment specific. There is therefore a growing need for RNA structure prediction methods that can be automatically trained on new data types and readily extended to integrate and fully exploit multiple types of data. Here, we develop and explore a modular probabilistic approach for integrating probing data in RNA structure prediction. It can be automatically trained given a set of known structures with probing data. The approach is demonstrated on SHAPE datasets, where we evaluate and selectively model specific correlations. The approach often makes superior use of the probing data signal compared to other methods. We illustrate the use of ProbFold on multiple data types using both simulations and a small set of structures with SHAPE, DMS and CMCT data. Technically, the approach combines stochastic context-free grammars (SCFGs) with probabilistic graphical models. This approach allows rapid adaptation and integration of new probing data types. ProbFold is implemented in C++. Models are specified using simple textual formats. Data reformatting is done using separate C++ programs. Source code, statically compiled binaries for x86 Linux machines, C++ programs, example datasets and a tutorial are available from http://moma.ki.au.dk/prj/probfold/. Contact: jakob.skou@clin.au.dk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

Multivariate functions for predicting the sorption of 2,4,6-trinitrotoluene (TNT) and 1,3,5-trinitro-1,3,5-tricyclohexane (RDX) among taxonomically distinct soils.

PubMed

Katseanes, Chelsea K; Chappell, Mark A; Hopkins, Bryan G; Durham, Brian D; Price, Cynthia L; Porter, Beth E; Miller, Lesley F

2016-11-01

After nearly a century of use in numerous munition platforms, TNT and RDX contamination has turned up largely in the environment due to ammunition manufacturing or as part of releases from low-order detonations during training activities. Although the basic knowledge governing the environmental fate of TNT and RDX is known, accurate predictions of TNT and RDX persistence in soil remain elusive, particularly given the universal heterogeneity of pedomorphic soil types. In this work, we proposed a new solution for modeling the sorption and persistence of these munition constituents as multivariate mathematical functions correlating soil attribute data over a variety of taxonomically distinct soil types to contaminant behavior, instead of a single constant or parameter of a specific absolute value. To test this idea, we conducted experiments measuring the sorption of TNT and RDX on taxonomically different soil types that were extensively physically and chemically characterized. Statistical decomposition of the log-transformed and autoscaled soil characterization data using the dimension-reduction technique PCA (principal component analysis) revealed a strong latent structure based in the multiple pairwise correlations among the soil properties. TNT and RDX sorption partitioning coefficients (KD-TNT and KD-RDX) were regressed against this latent structure using partial least squares regression (PLSR), generating 3-factor, multivariate linear functions. Here, the PLSR models predicted KD-TNT and KD-RDX values based on attributes contributing to endogenous alkaline/calcareous and soil fertility criteria, respectively, exhibited among the different soil types. We hypothesized that the latent structure arising from the strong covariance of the full multivariate geochemical matrix describing taxonomically distinguished soil types may provide the means for potentially predicting complex phenomena in soils. The development of predictive multivariate models tuned to a local soil's taxonomic designation would have direct benefit to military range managers seeking to anticipate the environmental risks of training activities on impact sites. Published by Elsevier Ltd.

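A condensed sketch of the PCA/PLSR workflow described in the record above: log-transform and autoscale a soil-property matrix, then regress log-scale KD values on a small number of latent factors with partial least squares. The soil properties, their values, and the simulated KD responses below are placeholders, not the study's data.

    # Partial least squares regression of sorption coefficients on autoscaled soil properties.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    n_soils = 30
    # Hypothetical soil characterization matrix (e.g., pH, CEC, organic C, clay, carbonate ...).
    soil_props = np.abs(rng.normal(5, 2, size=(n_soils, 8))) + 0.1
    kd = np.column_stack([
        0.6 * np.log(soil_props[:, 2]) + rng.normal(0, 0.1, n_soils),   # simulated "KD-TNT"
        0.3 * np.log(soil_props[:, 0]) + rng.normal(0, 0.1, n_soils),   # simulated "KD-RDX"
    ])

    X = StandardScaler().fit_transform(np.log(soil_props))   # log-transform, then autoscale
    pls = PLSRegression(n_components=3).fit(X, kd)           # 3-factor PLSR, as in the abstract
    print("overall R^2:", round(pls.score(X, kd), 3))
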
Predicting Football Matches Results using Bayesian Networks for English Premier League (EPL)

NASA Astrophysics Data System (ADS)

Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab

2017-08-01

The issue of modeling association football prediction models has become increasingly popular in the last few years, and many different approaches have been proposed with the aim of evaluating the attributes that lead a football team to lose, draw or win a match. Three types of approaches have been considered for predicting football match results: statistical approaches, machine learning approaches and Bayesian approaches. Lately, many studies on football prediction models have used Bayesian approaches. This paper proposes a Bayesian Network (BN) to predict the results of football matches in terms of home win (H), away win (A) and draw (D). The English Premier League (EPL) for the three seasons 2010-2011, 2011-2012 and 2012-2013 has been selected and reviewed. K-fold cross validation has been used for testing the accuracy of the prediction model. The required information about the football data is sourced from a legitimate site at http://www.football-data.co.uk. BNs achieved a predictive accuracy of 75.09% on average across the three seasons. It is hoped that the results could be used as the benchmark output for future research in predicting football match results.

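The abstract does not give the network structure, so the sketch below uses a deliberately simple stand-in rather than the authors' Bayesian network: a naive-Bayes-style classifier over two discretized match features predicting home win, draw or away win, evaluated with k-fold cross-validation as in the study. The features, data, and labels are invented.

    # Naive-Bayes-style baseline for H/D/A match outcomes with 10-fold cross-validation.
    import numpy as np
    from sklearn.naive_bayes import CategoricalNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(11)
    n = 300
    # Invented discretized features: recent home-team form (0-2) and away-team form (0-2).
    X = rng.integers(0, 3, size=(n, 2))
    # Outcomes 0 = away win, 1 = draw, 2 = home win, loosely tied to the form difference.
    p_home = 0.3 + 0.1 * (X[:, 0] - X[:, 1])
    y = np.where(rng.random(n) < p_home, 2, np.where(rng.random(n) < 0.3, 1, 0))

    clf = CategoricalNB()
    scores = cross_val_score(clf, X, y, cv=10)   # k-fold CV, as in the study's evaluation
    print("mean CV accuracy:", scores.mean().round(3))
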
Large eddy simulation on buoyant gas diffusion near building

NASA Astrophysics Data System (ADS)

Tominaga, Yoshihide; Murakami, Shuzo; Mochida, Akashi

1992-12-01

Large eddy simulations on turbulent diffusion of buoyant gases near a building model are carried out for three cases in which the densimetric Froude number (Frd) was specified at -8.6, zero and 8.6, respectively. The accuracy of these simulations is examined by comparing the numerically predicted results with wind tunnel experiments. Two types of sub-grid scale models, the standard Smagorinsky model (type 1) and the modified Smagorinsky model (type 2), are compared. The former does not take account of the production of subgrid energy by buoyancy force, but the latter incorporates this effect. The latter model (type 2) gives more accurate results than those given by the standard Smagorinsky model (type 1) in terms of the distributions of k, <C> and <C²>.

Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM) III: Scenario analysis

USGS Publications Warehouse

Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.

2009-01-01

An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.

RF model of the distribution system as a communication channel, phase 2. Volume 2: Task reports

NASA Technical Reports Server (NTRS)

Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.

1982-01-01

Based on the established feasibility of predicting, via a model, the propagation of Power Line Frequency on radial type distribution feeders, verification studies comparing model predictions against measurements were undertaken using more complicated feeder circuits and situations. Detailed accounts of the major tasks are presented. These include: (1) verification of the model; (2) extension, implementation, and verification of perturbation theory; (3) parameter sensitivity; (4) transformer modeling; and (5) compensation of power distribution systems for enhancement of power line carrier communication reliability.

Spatiotemporal Bayesian networks for malaria prediction.

PubMed

Haddawy, Peter; Hasan, A H M Imrul; Kasantikul, Rangwan; Lawpoolsri, Saranath; Sa-Angchai, Patiwat; Kaewkungwal, Jaranit; Singhasivanon, Pratap

2018-01-01

Targeted intervention and resource allocation are essential for effective malaria control, particularly in remote areas, with predictive models providing important information for decision making. While a diversity of modeling techniques have been used to create predictive models of malaria, no work has made use of Bayesian networks. Bayes nets are attractive due to their ability to represent uncertainty, model time-lagged and nonlinear relations, and provide explanations. This paper explores the use of Bayesian networks to model malaria, demonstrating the approach by creating village-level models with weekly temporal resolution for Tha Song Yang district in northern Thailand. The networks are learned using data on cases and environmental covariates. Three types of networks are explored: networks for numeric prediction, networks for outbreak prediction, and networks that incorporate spatial autocorrelation. Evaluation of the numeric prediction network shows that the Bayes net has prediction accuracy in terms of mean absolute error of about 1.4 cases for 1-week prediction and 1.7 cases for 6-week prediction. The network for outbreak prediction has an ROC AUC above 0.9 for all prediction horizons. Comparison of prediction accuracy of both Bayes nets against several traditional modeling approaches shows the Bayes nets to outperform the other models for longer time horizon prediction of high incidence transmission. To model spread of malaria over space, we elaborate the models with links between the village networks. This results in some very large models which would be far too laborious to build by hand, so we represent the models as collections of probability logic rules and automatically generate the networks. Evaluation of the models shows that the autocorrelation links significantly improve prediction accuracy for some villages in regions of high incidence. We conclude that spatiotemporal Bayesian networks are a highly promising modeling alternative for prediction of malaria and other vector-borne diseases. Copyright © 2017 Elsevier B.V. All rights reserved.

Prediction of Injuries and Injury Types in Army Basic Training, Infantry, Armor, and Cavalry Trainees Using a Common Fitness Screen.

PubMed

Sefton, JoEllen M; Lohse, K R; McAdam, J S

2016-11-01

Context: Musculoskeletal injuries (MSIs) are among the most important challenges facing our military. They influence career success and directly affect military readiness. Several methods of screening initial entry training (IET) soldiers are being tested in an effort to predict which soldiers will sustain an MSI and to develop injury-prevention programs. The Army 1-1-1 Fitness Assessment was examined to determine if it could be used as a screening and MSI prediction mechanism in male IET soldiers. Objective: To determine if a relationship existed among the Army 1-1-1 Fitness Assessment results and MSI, MSI type, and program of instruction (POI) in male IET soldiers. Design: Retrospective cohort study. Setting: Fort Benning, Georgia. Patients or Other Participants: Male Army IET soldiers (N = 1788). Main Outcome Measure(s): The likelihood of sustaining acute and overuse MSI was modelled using separate logistic regression analyses. The POI, run time, push-ups and sit-ups (combined into a single score), and IET soldier age were tested as predictors in a series of linear models. Results: With POI controlled, slower run time, fewer push-ups and sit-ups, and older age were positively correlated with acute MSI; only slower run time was correlated with overuse MSI. For both MSI types, cavalry POIs had a higher risk of acute and overuse MSIs than did basic combat training, armor, or infantry POIs. Conclusions: The 1-1-1 Fitness Assessment predicted both the likelihood of MSI occurrence and type of MSI (acute or overuse). One-mile (1.6-km) run time predicted both overuse and acute MSIs, whereas the combined push-up and sit-up score predicted only acute MSIs. The MSIs varied by type of training (infantry, basic, armor, cavalry), which allowed the development of prediction equations by POI. We determined 1-1-1 Fitness Assessment cutoff scores for each event, thereby allowing the evaluation to be used as an MSI screening mechanism for IET soldiers.

Nonlinear autoregressive neural networks with external inputs for forecasting of typhoon inundation level.

PubMed

Ouyang, Huei-Tau

2017-08-01

Accurate inundation level forecasting during typhoon invasion is crucial for organizing response actions such as the evacuation of people from areas that could potentially flood. This paper explores the ability of nonlinear autoregressive neural networks with exogenous inputs (NARX) to predict inundation levels induced by typhoons. Two types of NARX architecture were employed: series-parallel (NARX-S) and parallel (NARX-P). Based on cross-correlation analysis of rainfall and water-level data from historical typhoon records, 10 NARX models (five of each architecture type) were constructed. The forecasting ability of each model was assessed by considering coefficient of efficiency (CE), relative time shift error (RTS), and peak water-level error (PE). The results revealed that high CE performance could be achieved by employing more model input variables. Comparisons of the two types of model demonstrated that the NARX-S models outperformed the NARX-P models in terms of CE and RTS, whereas both performed exceptionally in terms of PE, without significant difference. The NARX-S and NARX-P models with the highest overall performance were identified and their predictions were compared with those of traditional ARX-based models. The NARX-S model outperformed the ARX-based models in all three indexes, whereas the NARX-P model exhibited comparable CE performance and superior RTS and PE performance.

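A rough sketch of the series-parallel (NARX-S) idea described above: lagged observed water levels plus lagged exogenous rainfall feed a feed-forward network that predicts the next level. A generic scikit-learn MLP stands in for the authors' NARX implementation, and the lag order, watershed response, and data are arbitrary assumptions.

    # Series-parallel NARX-style forecaster: next water level from lagged levels and rainfall.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)
    T = 500
    rain = rng.gamma(0.5, 5.0, T)
    level = np.zeros(T)
    for t in range(1, T):        # toy watershed: level decays and responds to rainfall
        level[t] = 0.9 * level[t - 1] + 0.05 * rain[t - 1] + rng.normal(0, 0.02)

    lags = 3
    X = np.column_stack([level[i:T - lags + i] for i in range(lags)] +
                        [rain[i:T - lags + i] for i in range(lags)])
    y = level[lags:]

    narx_s = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                          random_state=0).fit(X[:-50], y[:-50])
    print("holdout R^2:", round(narx_s.score(X[-50:], y[-50:]), 3))
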
Fire frequency in the Interior Columbia River Basin: Building regional models from fire history data

USGS Publications Warehouse

McKenzie, D.; Peterson, D.L.; Agee, James K.

2000-01-01

Fire frequency affects vegetation composition and successional pathways; thus it is essential to understand fire regimes in order to manage natural resources at broad spatial scales. Fire history data are lacking for many regions for which fire management decisions are being made, so models are needed to estimate past fire frequency where local data are not yet available. We developed multiple regression models and tree-based (classification and regression tree, or CART) models to predict fire return intervals across the interior Columbia River basin at 1-km resolution, using georeferenced fire history, potential vegetation, cover type, and precipitation databases. The models combined semiqualitative methods and rigorous statistics. The fire history data are of uneven quality; some estimates are based on only one tree, and many are not cross-dated. Therefore, we weighted the models based on data quality and performed a sensitivity analysis of the effects on the models of estimation errors that are due to lack of cross-dating. The regression models predict fire return intervals from 1 to 375 yr for forested areas, whereas the tree-based models predict a range of 8 to 150 yr. Both types of models predict latitudinal and elevational gradients of increasing fire return intervals. Examination of regional-scale output suggests that, although the tree-based models explain more of the variation in the original data, the regression models are less likely to produce extrapolation errors. Thus, the models serve complementary purposes in elucidating the relationships among fire frequency, the predictor variables, and spatial scale. The models can provide local managers with quantitative information and provide data to initialize coarse-scale fire-effects models, although predictions for individual sites should be treated with caution because of the varying quality and uneven spatial coverage of the fire history database. The models also demonstrate the integration of qualitative and quantitative methods when requisite data for fully quantitative models are unavailable. They can be tested by comparing new, independent fire history reconstructions against their predictions and can be continually updated, as better fire history data become available.

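As a toy comparison of the two model families used in the record above, the sketch below fits a multiple linear regression and a regression tree to the same invented predictors (elevation, precipitation, latitude) of fire return interval. The variables and data are placeholders, not the Columbia basin fire-history database.

    # Multiple regression vs. regression tree for fire return interval (years).
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(9)
    n = 200
    df = pd.DataFrame({
        "elevation_m":   rng.uniform(300, 2500, n),
        "annual_precip": rng.uniform(250, 2000, n),
        "latitude":      rng.uniform(42, 49, n),
    })
    df["fire_return_yr"] = (0.05 * df.elevation_m + 0.1 * df.annual_precip
                            + 5 * (df.latitude - 42) + rng.normal(0, 20, n)).clip(1, 375)

    X, y = df.drop(columns="fire_return_yr"), df["fire_return_yr"]
    ols = LinearRegression().fit(X, y)
    cart = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
    print("OLS R^2:", round(ols.score(X, y), 2), " CART R^2:", round(cart.score(X, y), 2))
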
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

PubMed

Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

2014-08-01

Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

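A skeletal version of the comparison described above: one Bernoulli naive Bayes classifier trained on binary structural fingerprints and another trained on activity profiles against other targets, both predicting activity against a new target. The fingerprints, profiles, and labels below are simulated, and the simple correlation built into the labels is an assumption made only to give the profile-based model something to exploit.

    # Structure-based vs. activity-profile-based naive Bayes for a new target.
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(13)
    n_cpds = 400
    fingerprints = rng.integers(0, 2, size=(n_cpds, 64))   # binary structural features
    profiles = rng.integers(0, 2, size=(n_cpds, 30))        # activity vs. 30 other targets
    # Simulated activity against the new target, correlated with part of the profile.
    y = (profiles[:, :5].sum(axis=1) + rng.integers(0, 2, n_cpds) >= 3).astype(int)

    for name, X in [("structure NB", fingerprints), ("profile NB", profiles)]:
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        acc = BernoulliNB().fit(X_tr, y_tr).score(X_te, y_te)
        print(name, round(acc, 3))
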
Standard Model and New physics for ε′K/εK

NASA Astrophysics Data System (ADS)

Kitahara, Teppei

2018-05-01

The first lattice result and improved perturbative calculations have pointed to a discrepancy between data on ε′K/εK and the standard-model (SM) prediction. Several new physics (NP) models can explain this discrepancy, and such NP models are likely to predict deviations of ℬ(K → πνν̄) from the SM predictions, which can be probed precisely in the near future by the NA62 and KOTO experiments. We present correlations between ε′K/εK and ℬ(K → πνν̄) in two types of NP scenarios: a box-dominated scenario and a Z-penguin-dominated one. It is shown that different correlations are predicted and that future precision measurements of K → πνν̄ can distinguish the two scenarios.

Modeled changes of cerebellar activity in mutant mice are predictive of their learning impairments

NASA Astrophysics Data System (ADS)

Badura, Aleksandra; Clopath, Claudia; Schonewille, Martijn; de Zeeuw, Chris I.

2016-11-01

Translating neuronal activity to measurable behavioral changes has been a long-standing goal of systems neuroscience. Recently, we have developed a model of phase-reversal learning of the vestibulo-ocular reflex, a well-established, cerebellar-dependent task. The model, comprising both the cerebellar cortex and vestibular nuclei, reproduces behavioral data and accounts for the changes in neural activity during learning in wild type mice. Here, we used our model to predict Purkinje cell spiking as well as behavior before and after learning of five different lines of mutant mice with distinct cell-specific alterations of the cerebellar cortical circuitry. We tested these predictions by obtaining electrophysiological data depicting changes in neuronal spiking. We show that our data is largely consistent with the model predictions for simple spike modulation of Purkinje cells and concomitant behavioral learning in four of the mutants. In addition, our model accurately predicts a shift in simple spike activity in a mutant mouse with a brainstem-specific mutation. This combination of electrophysiological and computational techniques opens a possibility of predicting behavioral impairments from neural activity.

The perfect family: decision making in biparental care.

PubMed

Akçay, Erol; Roughgarden, Joan

2009-10-13

Previous theoretical work on parental decisions in biparental care has emphasized the role of the conflict between evolutionary interests of parents in these decisions. A prominent prediction from this work is that parents should compensate for decreases in each other's effort, but only partially so. However, experimental tests that manipulate parents and measure their responses fail to confirm this prediction. At the same time, the process of parental decision making has remained unexplored theoretically. We develop a model to address the discrepancy between experiments and the theoretical prediction, and explore how assuming different decision making processes changes the prediction from the theory. We assume that parents make decisions in behavioral time. They have a fixed time budget, and allocate it between two parental tasks: provisioning the offspring and defending the nest. The proximate determinants of the allocation decisions are the parents' behavioral objectives. We assume both parents aim to maximize the offspring production from the nest. Experimental manipulations change the shape of the nest production function. We consider two different scenarios for how parents make decisions: one where parents communicate with each other and act together (the perfect family), and one where they do not communicate, and act independently (the almost perfect family). The perfect family model is able to generate all the types of responses seen in experimental studies. The kind of response predicted depends on the nest production function, i.e. how parents' allocations affect offspring production, and the type of experimental manipulation. In particular, we find that complementarity of parents' allocations promotes matching responses. In contrast, the relative responses do not depend on the type of manipulation in the almost perfect family model. These results highlight the importance of the interaction between nest production function and how parents make decisions, factors that have largely been overlooked in previous models.

  320. The perfect family: decision making in biparental care.

    PubMed

    Akçay, Erol; Roughgarden, Joan

    2009-10-13

    Previous theoretical work on parental decisions in biparental care has emphasized the role of the conflict between the evolutionary interests of parents in these decisions. A prominent prediction from this work is that parents should compensate for decreases in each other's effort, but only partially so. However, experimental tests that manipulate parents and measure their responses fail to confirm this prediction. At the same time, the process of parental decision making has remained unexplored theoretically. We develop a model to address the discrepancy between experiments and the theoretical prediction, and we explore how assuming different decision-making processes changes the prediction from the theory. We assume that parents make decisions in behavioral time. They have a fixed time budget and allocate it between two parental tasks: provisioning the offspring and defending the nest. The proximate determinants of the allocation decisions are the parents' behavioral objectives. We assume both parents aim to maximize the offspring production from the nest. Experimental manipulations change the shape of the nest production function. We consider two different scenarios for how parents make decisions: one where parents communicate with each other and act together (the perfect family), and one where they do not communicate and act independently (the almost perfect family). The perfect family model is able to generate all the types of responses seen in experimental studies. The kind of response predicted depends on the nest production function, i.e., how parents' allocations affect offspring production, and on the type of experimental manipulation. In particular, we find that complementarity of parents' allocations promotes matching responses. In contrast, the relative responses do not depend on the type of manipulation in the almost perfect family model. These results highlight the importance of the interaction between the nest production function and how parents make decisions, factors that have largely been overlooked in previous models.

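    The role that complementarity plays in producing matching rather than compensating responses can be illustrated numerically. The sketch below is a toy allocation model written for this summary, not the authors' model: the production function, the CES-style parameter rho and all numbers are invented, and a simple grid search stands in for the behavioral dynamics.

        # Hypothetical sketch (not the authors' model): a parent splits a fixed time
        # budget between provisioning (p) and nest defense (1 - p).  Offspring
        # production combines the two parents' provisioning with a CES-style term,
        # so the parameter rho controls how complementary the allocations are.

        import numpy as np

        def production(p1, p2, rho):
            """Toy nest production: CES aggregate of provisioning times defense effort."""
            provisioning = (0.5 * p1**rho + 0.5 * p2**rho) ** (1.0 / rho)
            defense = (1.0 - p1) + (1.0 - p2)            # leftover time guards the nest
            return provisioning * (0.1 + defense)

        def best_response(p_partner, rho, grid=np.linspace(0.01, 0.99, 99)):
            """Provisioning share that maximizes production given the partner's share."""
            return grid[np.argmax([production(p, p_partner, rho) for p in grid])]

        for rho in (0.5, -2.0):                          # weak vs. strong complementarity
            baseline = best_response(0.6, rho)
            handicapped = best_response(0.3, rho)        # partner's provisioning reduced
            change = "matches (reduces)" if handicapped < baseline else "compensates (increases)"
            print(f"rho={rho:+.1f}: response to partner 0.6 -> {baseline:.2f}, "
                  f"to handicapped partner 0.3 -> {handicapped:.2f}  => {change}")

    With the substitutable production function the focal parent raises its provisioning when its partner is handicapped (partial compensation), while strong complementarity makes reducing effort the better response (matching), mirroring the qualitative pattern described above.
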
  321. A Bayesian network for modelling blood glucose concentration and exercise in type 1 diabetes.

    PubMed

    Ewings, Sean M; Sahu, Sujit K; Valletta, John J; Byrne, Christopher D; Chipperfield, Andrew J

    2015-06-01

    This article presents a new statistical approach to analysing the effects of everyday physical activity on blood glucose concentration in people with type 1 diabetes. A physiologically based model of blood glucose dynamics is developed to cope with frequently sampled data on food, insulin and habitual physical activity; the model is then converted to a Bayesian network to account for measurement error and variability in the physiological processes. A simulation study is conducted to determine the feasibility of using Markov chain Monte Carlo methods for simultaneous estimation of all model parameters and prediction of blood glucose concentration. Although there are problems with parameter identification in a minority of cases, most parameters can be estimated without bias. Predictive performance is unaffected by parameter misspecification and is insensitive to misleading prior distributions. This article highlights important practical and theoretical issues not previously addressed in the quest for an artificial pancreas as a treatment for type 1 diabetes. The proposed methods represent a new paradigm for the analysis of deterministic mathematical models of blood glucose concentration.

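    As an illustration of the estimation idea, the sketch below fits a single parameter of a deliberately simplified one-compartment glucose model with a random-walk Metropolis sampler. The dynamics, prior, noise level and variable names are all assumptions made for this sketch; the published model is a far richer physiologically based Bayesian network in which all parameters are estimated jointly.

        # Minimal sketch of the estimation idea (not the published model): fit the decay
        # rate of a one-compartment glucose model G' = -k*(G - Gb), plus a meal impulse,
        # with a random-walk Metropolis sampler and explicit measurement noise.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate(k, g0=9.0, gb=5.5, meal=2.0, t_meal=30, n=120, dt=1.0):
            """Euler-integrate glucose (mmol/L) over n minutes with one meal impulse."""
            g = np.empty(n)
            g[0] = g0
            for t in range(1, n):
                bump = meal if t == t_meal else 0.0
                g[t] = g[t - 1] + dt * (-k * (g[t - 1] - gb)) + bump
            return g

        true_k, sigma = 0.05, 0.3
        obs = simulate(true_k) + rng.normal(0.0, sigma, 120)     # synthetic CGM-like data

        def log_post(k):
            if k <= 0:                                           # flat prior on k > 0
                return -np.inf
            resid = obs - simulate(k)
            return -0.5 * np.sum(resid**2) / sigma**2

        samples, k = [], 0.1
        lp = log_post(k)
        for _ in range(5000):                                    # random-walk Metropolis
            prop = k + rng.normal(0.0, 0.01)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                k, lp = prop, lp_prop
            samples.append(k)

        print(f"posterior mean k = {np.mean(samples[1000:]):.3f} (true {true_k})")
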
  322. A new approach for predicting drought-related vegetation stress: Integrating satellite, climate, and biophysical data over the U.S. central plains

    USGS Publications Warehouse

    Tadesse, Tsegaye; Brown, Jesslyn F.; Hayes, M.J.

    2005-01-01

    Droughts are normal climate episodes, yet they are among the most expensive natural disasters in the world. Knowledge about the timing, severity, and pattern of droughts on the landscape can be incorporated into effective planning and decision-making. In this study, we present a data mining approach to modeling vegetation stress due to drought and mapping its spatial extent during the growing season. Rule-based regression tree models were generated that identify relationships between satellite-derived vegetation conditions, climatic drought indices, and biophysical data, including land-cover type, available soil water capacity, percent of irrigated farmland, and ecological type. The data mining method builds numerical rule-based models that find relationships among the input variables. Because the models can be applied iteratively with input data from previous time periods, the method enables predictions of vegetation conditions farther into the growing season based on earlier conditions. Visualizing the model outputs as mapped information (called VegPredict) provides a means to evaluate the model. We present prototype maps for the 2002 drought year for Nebraska and South Dakota and discuss potential uses for these maps.

  323. A phenomenological model of muscle fatigue and the power-endurance relationship.

    PubMed

    James, A; Green, S

    2012-11-01

    The relationship between power output and the time that it can be sustained during exercise (i.e., endurance) at high intensities is curvilinear. Although fatigue is implicit in this relationship, there is little evidence pertaining to it. To address this, we developed a phenomenological model that predicts the temporal response of muscle power during submaximal and maximal exercise and that is based on the type, contractile properties (e.g., fatiguability) and recruitment of motor units (MUs) during exercise. The model was first used to predict power outputs during all-out exercise, when fatigue is clearly manifest, for several distributions of MU type. The model was then used to predict the times that different submaximal power outputs could be sustained for several MU distributions, from which several power-endurance curves were obtained. The model was simultaneously fitted to two sets of human data pertaining to all-out exercise (power-time profile) and submaximal exercise (power-endurance relationship), yielding a high goodness of fit (R^2 = 0.96-0.97). This suggests that this simple model provides an accurate description of human power output during submaximal and maximal exercise and that the fatigue-related processes inherent in it account for the curvilinearity of the power-endurance relationship.

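    For context, the curvilinear power-endurance relationship referred to above is most often summarized by the classic two-parameter critical-power hyperbola. The display below gives that standard form only as background; it is not the motor-unit-based model developed in the paper, and CP and W' are constants fitted to an individual's data.

        t_{\mathrm{lim}} \;=\; \frac{W'}{P - \mathrm{CP}}, \qquad P > \mathrm{CP},
        \qquad\text{equivalently}\qquad (P - \mathrm{CP})\, t_{\mathrm{lim}} \;=\; W'

    Here P is the sustained power output, t_lim the time it can be sustained, CP ("critical power") the asymptote of the curve, and W' the fixed quantity of work that can be performed above CP, which is what gives the relationship its curvature.
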
  324. Personalized Vehicle Energy Efficiency & Range Predictor/MyGreenCar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saxena, Samveg

    MyGreenCar provides users with the ability to predict range, fuel economy, and operating costs for any vehicle under their individual driving patterns. Users launch the MyGreenCar mobile app on their smartphones to collect their driving patterns over any duration (e.g., several days, weeks, or months) using the phone's location capabilities. Using vehicle powertrain models for any user-specified vehicle type, MyGreenCar calculates the component-level energy and power interactions for the chosen vehicle to predict several important quantities, including: (1) for EVs, alleviating range anxiety; and (2) comparing fuel economy, operating costs, and payback time across vehicle models and types.

  325. Developing models to predict the number of fire hotspots from an accumulated fuel dryness index by vegetation type and region in Mexico

    Treesearch

    D. Vega-Nieva; J. Briseño-Reyes; M. Nava-Miranda; E. Calleros-Flores; P. López-Serrano; J. Corral-Rivas; E. Montiel-Antuna; M. Cruz-López; M. Cuahutle; R. Ressl; E. Alvarado-Celestino; A. González-Cabán; E. Jiménez; J. Álvarez-González; A. Ruiz-González; R. Burgan; H. Preisler

    2018-01-01

    Understanding the linkage between accumulated fuel dryness and temporal fire occurrence risk is key for improving decision-making in forest fire management, especially under growing conditions of vegetation stress associated with climate change. This study addresses the development of models to predict the number of 10-day observed Moderate-Resolution Imaging...

  326. Estimation of crown biomass of Pinus pinaster stands and shrubland above-ground biomass using forest inventory data, remotely sensed imagery and spatial prediction models

    Treesearch

    H. Viana; J. Aranha; D. Lopes; Warren B. Cohen

    2012-01-01

    Crown biomass of Pinus pinaster stands and shrubland above-ground biomass (AGB) were estimated spatially in a region located in Centre-North Portugal by means of different approaches, including forest inventory data, remotely sensed imagery and spatial prediction models. Two cover types (pine stands and shrubland) were inventoried and...

  327. Traffic accident reconstruction and an approach for prediction of fault rates using artificial neural networks: A case study in Turkey.

    PubMed

    Can Yilmaz, Ali; Aci, Cigdem; Aydin, Kadir

    2016-08-17

    Currently, in Turkey, fault rates in traffic accidents are determined according to the initiative of accident experts (with no speed analysis of the vehicles, just consideration of the accident type), and there are no specific quantitative instructions on fault rates related to the processing of accidents beyond the type of collision (side impact, head to head, rear end, etc.) in the No. 2918 Turkish Highway Traffic Act (THTA 1983). The aim of this study is to introduce a scientific and systematic approach for the determination of fault rates in the most frequent property damage-only (PDO) traffic accidents in Turkey. In this study, data (police reports, skid marks, deformation, crush depth, etc.) collected from the most frequent and controversial accident types (4 sample vehicle-vehicle scenarios) that consist of PDO were inserted into reconstruction software called vCrash. Sample real-world scenarios were simulated in the software to generate different vehicle deformations that also correspond to energy-equivalent speed data just before the crash. These values were used to train multilayer feedforward artificial neural network (MFANN), function fitting neural network (FITNET, a specialized version of MFANN), and generalized regression neural network (GRNN) models within 10-fold cross-validation to predict fault rates without using the software. The performance of the artificial neural network (ANN) prediction models was evaluated using the mean square error (MSE) and the multiple correlation coefficient (R). It was shown that the MFANN model performed better for predicting fault rates (i.e., lower MSE and higher R) than the FITNET and GRNN models for accident scenarios 1, 2, and 3, whereas FITNET performed the best for scenario 4. The FITNET model showed the second-best results for prediction for the first 3 scenarios. Because there is no training phase in GRNN, the GRNN model produced results much faster than the MFANN and FITNET models; however, the GRNN model had the worst prediction results. The R values for prediction of fault rates were close to 1 for all folds and scenarios. This study focuses on exhibiting new aspects and scientific approaches for determining fault rates of involvement in the most frequent PDO accidents occurring in Turkey by discussing some deficiencies in the THTA, without regard to the initiative and/or experience of experts. The approach allows judicious decisions to be made, especially in forensic investigations and events involving insurance companies. Building on this approach, injury/fatal and/or pedestrian-related accidents may be analyzed in future work by developing new scientific models.

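    A minimal sketch of this kind of evaluation protocol is shown below: a feed-forward network maps crash descriptors to a fault rate and is scored with 10-fold cross-validation using MSE and R. The feature names, the synthetic data and the use of scikit-learn's MLPRegressor are assumptions of the sketch, not the study's implementation.

        # Hypothetical sketch of the protocol described above: train a multilayer
        # feed-forward network to map crash descriptors (e.g., crush depth,
        # energy-equivalent speeds) to a fault rate, scored by 10-fold CV (MSE and R).
        # The features and synthetic data are illustrative, not the study's dataset.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import KFold, cross_val_predict

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(200, 3))          # [crush_depth, ees_v1, ees_v2] (scaled)
        y = 100 * (0.6 * X[:, 1] / (X[:, 1] + X[:, 2] + 1e-9))   # toy "fault rate" in percent
        y += rng.normal(0, 2, size=200)

        model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
        cv = KFold(n_splits=10, shuffle=True, random_state=0)
        pred = cross_val_predict(model, X, y, cv=cv)

        mse = np.mean((y - pred) ** 2)
        r = np.corrcoef(y, pred)[0, 1]                # multiple correlation for a single output
        print(f"10-fold CV: MSE = {mse:.2f}, R = {r:.3f}")
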
  328. Data integration of structured and unstructured sources for assigning clinical codes to patient stays

    PubMed Central

    Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim

    2016-01-01

    Objective: Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods: Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results: When compared with the best individual prediction source, late data integration leads to improvements in predictive power (e.g., overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion: Structured data provide complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions: We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458

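    The late-integration idea can be sketched in a few lines: one base model per source produces out-of-fold probabilities, and a meta-learner combines them. The synthetic "structured" and "text" feature blocks, the logistic regression learners and the in-sample F-measure below are illustrative choices for this sketch, not the paper's pipeline.

        # Sketch of "late integration": fit one model per data source, then let a
        # meta-learner combine their out-of-fold probabilities.  Sources and labels
        # are synthetic stand-ins for the structured and text-derived features.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import f1_score

        rng = np.random.default_rng(2)
        n = 500
        y = rng.integers(0, 2, n)                                  # toy "code assigned?" label
        X_struct = np.c_[y + rng.normal(0, 1.0, n), rng.normal(0, 1, (n, 4))]   # structured
        X_text = np.c_[y + rng.normal(0, 1.5, n), rng.normal(0, 1, (n, 9))]     # unstructured

        def oof_probs(X):
            """Out-of-fold positive-class probabilities from a per-source base model."""
            base = LogisticRegression(max_iter=1000)
            return cross_val_predict(base, X, y, cv=5, method="predict_proba")[:, 1]

        meta_X = np.c_[oof_probs(X_struct), oof_probs(X_text)]     # late-integration features
        meta = LogisticRegression().fit(meta_X, y)
        print("stacked F1:", round(f1_score(y, meta.predict(meta_X)), 3))   # in-sample, for brevity
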
  329. Comprehensive and critical review of the predictive properties of the various mass models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haustein, P.E.

    1984-01-01

    Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques, and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix, Monahan, Serduke, Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only rough correlation between the ability of a particular model to reproduce the measured mass surface which had been used to refine its adjustable parameters and that model's ability to predict correctly the new masses. For some models distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of the types of analysis performed with individual mass models.

  330. Two risk score models for predicting incident Type 2 diabetes in Japan.

    PubMed

    Doi, Y; Ninomiya, T; Hata, J; Hirakawa, Y; Mukai, N; Iwase, M; Kiyohara, Y

    2012-01-01

    Risk scoring methods are effective for identifying persons at high risk of Type 2 diabetes mellitus, but such approaches have not yet been established in Japan. A total of 1935 subjects of a derivation cohort were followed up for 14 years from 1988, and 1147 subjects of a validation cohort independent of the derivation cohort were followed up for 5 years from 2002. Risk scores were estimated based on the coefficients (β) of a Cox proportional hazards model in the derivation cohort and were verified in the validation cohort. In the derivation cohort, the non-invasive risk model was established using significant risk factors, namely age, sex, family history of diabetes, abdominal circumference, body mass index, hypertension, regular exercise and current smoking. We also created another scoring risk model by adding fasting plasma glucose levels to the non-invasive model (the plus-fasting plasma glucose model). The area under the curve of the non-invasive model was 0.700, and it increased significantly to 0.772 (P < 0.001) in the plus-fasting plasma glucose model. The ability of the non-invasive model to predict Type 2 diabetes was comparable with that of impaired glucose tolerance, and the plus-fasting plasma glucose model was superior to it. The cumulative incidence of Type 2 diabetes increased significantly with elevating quintiles of the sum scores of both models in the validation cohort (P for trend < 0.001). We developed two practical risk score models for easily identifying individuals at high risk of incident Type 2 diabetes without an oral glucose tolerance test in the Japanese population.

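    Risk scores of this kind are typically built by converting the Cox coefficients into integer points. The sketch below shows that usual conversion on made-up coefficients; none of the variables, β values or cut-offs are taken from the published models.

        # Illustrative sketch of turning Cox model coefficients into an integer risk
        # score.  The coefficients and the example profile are invented; they are not
        # the published Japanese models.

        hypothetical_betas = {              # log hazard ratios from a fitted Cox model
            "age_55_or_older": 0.35,
            "family_history": 0.55,
            "bmi_25_or_more": 0.40,
            "hypertension": 0.30,
            "current_smoker": 0.25,
            "regular_exercise": -0.20,      # protective factors get negative points
        }

        unit = min(abs(b) for b in hypothetical_betas.values())   # smallest effect = 1 point

        def score(profile):
            """Sum rounded beta/unit points over the risk factors present in a profile."""
            return sum(round(b / unit) for name, b in hypothetical_betas.items()
                       if profile.get(name, False))

        example = {"age_55_or_older": True, "family_history": True, "hypertension": True}
        print("risk points:", score(example))   # higher totals map to higher predicted incidence
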
  331. Thermal Cycling Life Prediction of Sn-3.0Ag-0.5Cu Solder Joint Using Type-I Censored Data

    PubMed Central

    Mi, Jinhua; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-01-01

    Because solder joint interconnections are the weak points of microelectronic packaging, their reliability has a great influence on the reliability of the entire packaging structure. Based on an accelerated life test, the reliability assessment and life prediction of lead-free solder joints using the Weibull distribution are investigated. The type-I interval-censored lifetime data were collected from a thermal cycling test, which was implemented on microelectronic packaging with lead-free ball grid array (BGA) and fine-pitch ball grid array (FBGA) interconnection structures. The number of cycles to failure of the lead-free solder joints is predicted by using a modified Engelmaier fatigue life model and a type-I censored data processing method. Then, the Pan model is employed to calculate the acceleration factor of this test. A comparison of life predictions between the proposed method and the ones calculated directly by Matlab and Minitab is conducted to demonstrate the practicability and effectiveness of the proposed method. Finally, failure analysis and microstructure evolution of the lead-free solders are carried out to provide useful guidance for regular maintenance, replacement of substructures, and subsequent processing of electronic products. PMID:25121138

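    The core statistical step, fitting a Weibull life distribution to interval-censored (inspection-based) failure data by maximum likelihood, can be sketched as below. The inspection intervals, starting values and plain two-parameter formulation are assumptions for this sketch rather than the paper's data or its modified Engelmaier model.

        # Sketch of fitting a Weibull life distribution to type-I interval-censored
        # thermal-cycling data: each part is only known to have failed between two
        # inspections (left, right], or to have survived past the last inspection.
        # The inspection data below are synthetic, not the paper's measurements.

        import numpy as np
        from scipy.optimize import minimize

        # (left, right) cycle counts bracketing each failure; right=None means survived
        intervals = [(0, 500), (500, 1000), (500, 1000), (1000, 1500), (1500, None),
                     (1000, 1500), (1500, None), (500, 1000), (1000, 1500), (1500, None)]

        def weibull_cdf(t, beta, eta):
            return 1.0 - np.exp(-(t / eta) ** beta)

        def neg_log_lik(params):
            beta, eta = np.exp(params)                 # optimize in log space for positivity
            ll = 0.0
            for left, right in intervals:
                if right is None:                      # right-censored: survived past `left`
                    ll += np.log(1.0 - weibull_cdf(left, beta, eta) + 1e-12)
                else:                                  # failed somewhere inside (left, right]
                    p = weibull_cdf(right, beta, eta) - weibull_cdf(left, beta, eta)
                    ll += np.log(p + 1e-12)
            return -ll

        fit = minimize(neg_log_lik, x0=np.log([2.0, 1000.0]), method="Nelder-Mead")
        beta_hat, eta_hat = np.exp(fit.x)
        print(f"Weibull shape ~ {beta_hat:.2f}, characteristic life ~ {eta_hat:.0f} cycles")
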
  332. Prediction of Scar Size in Rats Six Months after Burns Based on Early Post-injury Polarization-Sensitive Optical Frequency Domain Imaging

    PubMed Central

    Kravez, Eli; Villiger, Martin; Bouma, Brett; Yarmush, Martin; Yakhini, Zohar; Golberg, Alexander

    2017-01-01

    Hypertrophic scars remain a major clinical problem in the rehabilitation of burn survivors and lead to physical, aesthetic, functional, psychological, and social stresses. Prediction of healing outcome and scar formation is critical for deciding on the best treatment plan. Both subjective and objective scales have been devised to assess scar severity. Whereas scales of the first type preclude cross-comparison between observers, those of the second type are based on imaging modalities that either lack the ability to image individual layers of the scar or only provide very limited fields of view. To overcome these deficiencies, this work aimed at developing a predictive model of scar formation based on polarization-sensitive optical frequency domain imaging (PS-OFDI), which offers comprehensive subsurface imaging. We report on a linear regression model that predicts the size of a scar 6 months after third-degree burn injuries in rats based on early post-injury PS-OFDI and measurements of scar area. When predicting the scar area at month 6 based on the homogeneity and the degree of polarization (DOP), which are signatures derived from the PS-OFDI signal, together with the scar area measured at months 2 and 3, we achieved predictions with a Pearson coefficient of 0.57 (p < 10^-4) and a Spearman coefficient of 0.66 (p < 10^-5), which were significant in comparison to prediction models trained on randomly shuffled data. As the model in this study was developed on the rat burn model, the methodology can be used in larger studies that are more relevant to humans; however, the actual model inferred herein is not translatable. Nevertheless, our analysis and modeling methodology can be extended to perform larger wound healing studies in different contexts. This study opens new possibilities for quantitative and objective assessment of scar severity that could help to determine the optimal course of therapy. PMID:29249978

  333. Correlation of predicted and measured thermal stresses on an advanced aircraft structure with similar materials

    NASA Technical Reports Server (NTRS)

    Jenkins, J. M.

    1979-01-01

    A laboratory heating test simulating hypersonic heating was conducted on a heat-sink type structure to provide basic thermal stress measurements. Six NASTRAN models utilizing various combinations of bar, shear panel, membrane, and plate elements were used to develop calculated thermal stresses. Thermal stresses were also calculated using a beam model. For a given temperature distribution there was very little variation in the NASTRAN-calculated thermal stresses when element types were interchanged for a given grid system. Thermal stresses calculated for the beam model compared similarly to the values obtained for the NASTRAN models. The calculated thermal stresses generally compared well to the laboratory-measured thermal stresses. A discrepancy of significance occurred between the measured and predicted thermal stresses in the skin areas. A minor anomaly in the uniformity of the laboratory skin heating resulted in inadequate temperature input data for the structural models.

  334. Method for Predicting Thermal Buckling in Rails

    DOT National Transportation Integrated Search

    2018-01-01

    A method is proposed herein for predicting the onset of thermal buckling in rails in such a way as to provide a means of avoiding this type of potentially devastating failure. The method consists of the development of a thermomechanical model of rail...

  335. Examining INM Accuracy Using Empirical Sound Monitoring and Radar Data

    NASA Technical Reports Server (NTRS)

    Miller, Nicholas P.; Anderson, Grant S.; Horonjeff, Richard D.; Kimura, Sebastian; Miller, Jonathan S.; Senzig, David A.; Thompson, Richard H.; Shepherd, Kevin P. (Technical Monitor)

    2000-01-01

    Aircraft noise measurements were made using noise monitoring systems at Denver International and Minneapolis-St. Paul Airports. Measured sound exposure levels for a large number of operations of a wide range of aircraft types were compared with predictions made using the FAA's Integrated Noise Model. In general, it was observed that measured levels exceeded the predicted levels by a significant margin. These differences varied according to the type of aircraft and also depended on the distance from the aircraft. Many of the assumptions that affect the predicted sound levels were examined, but none were able to fully explain the observed differences.

  336. Predicting the Emergence of Sexual Violence in Adolescence.

    PubMed

    Ybarra, Michele L; Thompson, Richard E

    2018-05-01

    This study aims to report the epidemiology of sexual violence (SV) perpetration for both female and male youth across a broad age spectrum. Additionally, the etiology of SV perpetration is examined by identifying prior exposures that predict a first SV perpetration. Six waves of data were collected nationally online, between 2006 and 2012, from 1586 youth between 10 and 21 years of age. Five types of SV were assessed: sexual harassment, sexual assault, coercive sex, attempted rape, and rape. To identify how prior exposures may predict the emergence of SV in adolescence, parsimonious lagged multivariable logistic regression models estimated the odds of first perpetrating each of the five types of SV within the context of other variables (e.g., rape attitudes). The average age at first perpetration was between 15 and 16 years, depending on SV type. Several characteristics were more commonly reported by perpetrators than non-perpetrators (e.g., alcohol use, other types of SV perpetration and victimization). After adjusting for potentially influential characteristics, prior exposure to parental spousal abuse and current exposure to violent pornography were each strongly associated with the emergence of SV perpetration, with attempted rape being the exception for violent pornography. Current aggressive behavior was also significantly implicated in all types of first SV perpetration except rape. Previous victimization by sexual harassment and current victimization by psychological abuse in relationships were additionally predictive of one's first SV perpetration, albeit in various patterns. In this national longitudinal study of different types of SV perpetration among adolescent men and women, the findings suggest several malleable factors that need to be targeted, especially scripts of inter-personal violence that are being modeled by abusive parents in youths' homes and also reinforced by violent pornography. The predictive value of victimization for a subsequent first SV perpetration highlights the inter-relatedness of different types of violence involvement. Universal and holistic prevention programming that targets aggressive behaviors and violent scripts in inter-personal relationships is needed well before the age of 15 years.

  337. Predictive aging results for cable materials in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, K.T.; Clough, R.L.

    1990-11-01

    In this report, we provide a detailed discussion of the methodology for predicting cable degradation versus dose rate, temperature, and exposure time, and its application to data obtained on a number of additional nuclear power plant cable insulation (a hypalon, a silicone rubber and two ethylenetetrafluoroethylenes) and jacket (a hypalon) materials. We then show that the predicted, low-dose-rate results for our materials are in excellent agreement with long-term (7 to 9 years), low-dose-rate results recently obtained for the same material types actually aged under nuclear power plant conditions. Based on a combination of the modelling and long-term results, we find indications of reasonably similar degradation responses among several different commercial formulations for each of the following "generic" materials: hypalon, ethylenetetrafluoroethylene, silicone rubber and PVC. If such "generic" behavior can be further substantiated through modelling and long-term results on additional formulations, predictions of cable life for other commercial materials of the same generic types would be greatly facilitated. Finally, to aid utilities in their cable life extension decisions, we utilize our modelling results to generate lifetime prediction curves for the materials modelled to date. These curves plot expected material lifetime versus dose rate and temperature down to the levels of interest for nuclear power plant aging.

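    Extrapolations of this kind usually rest on time-temperature (and dose-rate) superposition built around an Arrhenius shift factor; the generic relation is shown below for orientation. The activation energy E_a and the reference temperature are material-specific fitted quantities, and the display is a textbook form rather than an expression or value taken from this report.

        a_T \;=\; \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_{\mathrm{ref}}} - \frac{1}{T}\right)\right],
        \qquad t_{\mathrm{equiv}}(T_{\mathrm{ref}}) \;=\; a_T\, t(T)

    Degradation accumulated over a time t(T) at an elevated aging temperature T is treated as equivalent to a_T times that duration at the service temperature T_ref, which is how accelerated-aging data are projected down to the dose rates and temperatures of interest in plant environments.
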
  338. Genetic prediction of type 2 diabetes using deep neural network.

    PubMed

    Kim, J; Kim, J; Kwak, M J; Bajaj, M

    2018-04-01

    Type 2 diabetes (T2DM) has strong heritability, but genetic models to explain heritability have been challenging. We tested a deep neural network (DNN) to predict T2DM using the nested case-control studies of the Nurses' Health Study (3326 females, 45.6% T2DM) and the Health Professionals Follow-up Study (2502 males, 46.5% T2DM). We selected 96, 214, 399, and 678 single-nucleotide polymorphisms (SNPs) through Fisher's exact test and L1-penalized logistic regression. We split each dataset randomly in a 4:1 ratio to train prediction models and test their performance. The DNN and logistic regressions showed a better area under the curve (AUC) of the ROC curve than the clinical model when 399 or more SNPs were included. The DNN was superior to logistic regressions in AUC with 399 or more SNPs in males and with 678 SNPs in females. Addition of clinical factors consistently increased the AUC of the DNN but failed to improve the logistic regressions with 214 or more SNPs. In conclusion, we show that a DNN can be a versatile tool to predict T2DM incorporating large numbers of SNPs and clinical information. Limitations include the relatively small number of subjects, mostly of European ethnicity. Further studies are warranted to confirm and improve the performance of genetic prediction models using DNNs in different ethnic groups.

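    The pipeline can be sketched compactly: screen SNPs with an L1-penalized logistic regression, then feed the surviving features to a neural network and compare held-out AUCs. The simulated genotype matrix, the regularization strength and the use of scikit-learn's MLPClassifier are assumptions of this sketch, not the study's data or software.

        # Sketch of the genetic-prediction pipeline: L1-penalized screening of SNPs,
        # then a neural network on the selected features, evaluated by test-set AUC.
        # The genotype matrix is simulated; it is not the NHS/HPFS data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.feature_selection import SelectFromModel
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        n, p = 2000, 500
        X = rng.integers(0, 3, size=(n, p)).astype(float)        # 0/1/2 minor-allele counts
        risk = X[:, :20] @ rng.normal(0.25, 0.05, 20)            # 20 truly causal SNPs
        y = (risk + rng.normal(0, 1.5, n) > np.median(risk)).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X_tr, y_tr)
        selector = SelectFromModel(l1, prefit=True)              # keep SNPs with nonzero weights
        X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

        dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
        dnn.fit(X_tr_sel, y_tr)
        auc = roc_auc_score(y_te, dnn.predict_proba(X_te_sel)[:, 1])
        print(f"{X_tr_sel.shape[1]} SNPs selected, test AUC = {auc:.3f}")
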
  339. Using Predictive Analytics to Predict Power Outages from Severe Weather

    NASA Astrophysics Data System (ADS)

    Wanik, D. W.; Anagnostou, E. N.; Hartman, B.; Frediani, M. E.; Astitha, M.

    2015-12-01

    The distribution of reliable power is essential to businesses, public services, and our daily lives. With the growing abundance of data being collected and created by industry (i.e., outage data), government agencies (i.e., land cover), and academia (i.e., weather forecasts), we can begin to tackle problems that previously seemed too complex to solve. In this session, we will present newly developed tools to aid decision-support challenges at electric distribution utilities that must mitigate, prepare for, respond to and recover from severe weather. We will show a performance evaluation of outage predictive models built for Eversource Energy (formerly Connecticut Light & Power) for storms of all types (i.e., blizzards, thunderstorms and hurricanes) and magnitudes (from 20 to >15,000 outages). High-resolution weather simulations (generated with the Weather Research and Forecasting model) were joined with utility outage data to calibrate four types of models: a decision tree (DT), a random forest (RF), a boosted gradient tree (BT) and an ensemble (ENS) decision tree regression that combined predictions from the DT, RF and BT models. The study shows that the ENS model forced with weather, infrastructure and land cover data was superior to the other models we evaluated, especially in terms of predicting the spatial distribution of outages. This research has the potential to be used for other critical infrastructure systems (such as telecommunications, drinking water and gas distribution networks), and can be readily expanded to the entire New England region to facilitate better planning and coordination among decision-makers when severe weather strikes.

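    The ensemble idea can be sketched as below: fit a decision tree, a random forest and a gradient-boosted model on storm predictors, then blend their predictions. Simple averaging is used here as a stand-in for the paper's ensemble regression, and the storm predictors and data are synthetic.

        # Sketch of blending tree-based outage models; the predictors, the synthetic
        # relationship and the equal-weight blend are assumptions of this sketch.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        n = 1000
        wind_gust = rng.uniform(5, 35, n)            # m/s, from a weather forecast
        precip = rng.uniform(0, 60, n)               # mm
        tree_density = rng.uniform(0, 1, n)          # canopy fraction along the lines
        outages = np.exp(0.12 * wind_gust) * (0.5 + tree_density) + 0.05 * precip
        outages += rng.normal(0, 2, n)

        X = np.c_[wind_gust, precip, tree_density]
        X_tr, X_te, y_tr, y_te = train_test_split(X, outages, test_size=0.25, random_state=0)

        models = [DecisionTreeRegressor(max_depth=6, random_state=0),
                  RandomForestRegressor(n_estimators=200, random_state=0),
                  GradientBoostingRegressor(random_state=0)]
        blend = np.mean([m.fit(X_tr, y_tr).predict(X_te) for m in models], axis=0)

        mae = np.mean(np.abs(blend - y_te))
        print(f"blended ensemble MAE: {mae:.2f} outages per grid cell")
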
  340. A methodology for reduced order modeling and calibration of the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Linares, Richard

    2017-10-01

    Atmospheric drag is the largest source of uncertainty in accurately predicting the orbits of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized either as empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Calibration with accurate data is also required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition for the development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.

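    Proper orthogonal decomposition itself reduces to a singular value decomposition of a mean-subtracted snapshot matrix. The sketch below builds synthetic "density" snapshots with a dominant diurnal variation and a weaker seasonal-like variation, then keeps the leading modes; the grid, signals and mode count are illustrative choices, not the NRLMSISE-based setup of the paper.

        # Sketch of POD on a snapshot matrix: stack fields as columns, subtract the
        # mean, and keep the leading SVD modes so each snapshot is described by a few
        # coefficients.  The synthetic fields stand in for physics-based model output.

        import numpy as np

        rng = np.random.default_rng(5)
        n_grid, n_snap = 400, 120                         # flattened grid, hourly snapshots
        t = np.arange(n_snap)
        diurnal = np.sin(2 * np.pi * t / 24.0)            # dominant daily variation
        seasonal = np.sin(2 * np.pi * t / (24.0 * 90))    # slower seasonal-like variation

        basis1, basis2 = rng.normal(size=(2, n_grid))
        snapshots = (np.outer(basis1, diurnal) + 0.3 * np.outer(basis2, seasonal)
                     + 0.01 * rng.normal(size=(n_grid, n_snap)))

        mean_field = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

        k = 2                                             # retained POD modes
        energy = np.cumsum(s**2) / np.sum(s**2)
        coeffs = U[:, :k].T @ (snapshots - mean_field)    # reduced-order state per snapshot
        recon = mean_field + U[:, :k] @ coeffs

        err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
        print(f"{k} modes capture {energy[k-1]:.1%} of variance; relative error {err:.2%}")
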
  341. Flow behaviour and constitutive modelling of a ferritic stainless steel at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Zhao, Jingwei; Jiang, Zhengyi; Zu, Guoqing; Du, Wei; Zhang, Xin; Jiang, Laizhu

    2016-05-01

    The flow behaviour of a ferritic stainless steel (FSS) was investigated with a Gleeble 3500 thermal-mechanical test simulator over the temperature range of 900-1100 °C and the strain rate range of 1-50 s^-1. Empirical and phenomenological constitutive models were established, and a comparative study was made of their predictability. The results indicate that the flow stress decreases with increasing temperature and decreasing strain rate. A high strain rate may cause a drop in flow stress after the peak value due to adiabatic heating. The Zener-Hollomon parameter depends linearly on the flow stress, and decreases with raising the temperature and reducing the strain rate. Significant deviations occur in the prediction of flow stress by the Johnson-Cook (JC) model, indicating that the JC model cannot accurately track the flow behaviour of the FSS during hot deformation. Both the multiple-linear and the Arrhenius-type models track the flow behaviour very well over the whole range of hot working conditions, and have much higher accuracy in predicting the flow behaviour than the JC model. The multiple-linear model is recommended in the current work due to its simpler structure and the shorter time needed for solving the equations relative to the Arrhenius-type model.

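    Arrhenius-type constitutive models of this kind are conventionally written in terms of the Zener-Hollomon parameter and a hyperbolic-sine stress function; the generic form is reproduced below for orientation. The constants A, α and n and the activation energy Q are obtained by fitting and are not values reported for this steel.

        Z \;=\; \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right) \;=\; A\,[\sinh(\alpha\sigma)]^{n}
        \quad\Longrightarrow\quad
        \sigma \;=\; \frac{1}{\alpha}\,
        \ln\!\left\{\left(\frac{Z}{A}\right)^{1/n} + \left[\left(\frac{Z}{A}\right)^{2/n} + 1\right]^{1/2}\right\}

    Here the strain rate and temperature enter only through Z, which is why Z decreases as the temperature is raised or the strain rate is reduced, and why the flow stress tracks Z almost linearly over the tested conditions.
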
  342. Estimating Setup of Driven Piles into Louisiana Clayey Soils

    DOT National Transportation Integrated Search

    2009-11-15

    Two types of mathematical models for pile setup prediction, the Skov-Denver model and the newly developed rate-based model, have been established from all the dynamic and static testing data, including restrikes of the production piles, restrikes, st...

  343. Estimating setup of driven piles into Louisiana clayey soils.

    DOT National Transportation Integrated Search

    2010-11-15

    Two types of mathematical models for pile setup prediction, the Skov-Denver model and the newly developed rate-based model, have been established from all the dynamic and static testing data, including restrikes of the production piles, restrikes, st...

  344. Influence of disturbance on temperate forest productivity

    USGS Publications Warehouse

    Peters, Emily B.; Wythers, Kirk R.; Bradford, John B.; Reich, Peter B.

    2013-01-01

    Climate, tree species traits, and soil fertility are key controls on forest productivity. However, in most forest ecosystems, natural and human disturbances, such as wind throw, fire, and harvest, can also exert important and lasting direct and indirect influence over productivity. We used an ecosystem model, PnET-CN, to examine how disturbance type, intensity, and frequency influence net primary production (NPP) across a range of forest types from Minnesota and Wisconsin, USA. We assessed the importance of past disturbances on NPP, net N mineralization, foliar N, and leaf area index at 107 forest stands of differing types (aspen, jack pine, northern hardwood, black spruce) and disturbance history (fire, harvest) by comparing model simulations with observations. The model reasonably predicted differences among forest types in productivity, foliar N, leaf area index, and net N mineralization. Model simulations that included past disturbances minimally improved predictions compared to simulations without disturbance, suggesting the legacy of past disturbances played a minor role in influencing current forest productivity rates. Modeled NPP was more sensitive to the intensity of soil removal during a disturbance than to the fraction of stand mortality or wood removal. Increasing crown fire frequency resulted in lower NPP, particularly for conifer forest types with longer leaf life spans and longer recovery times. These findings suggest that, over long time periods, moderate-frequency disturbances are a relatively less important control on productivity than climate, soil, and species traits.

  345. Modeling environmental contamination in hospital single- and four-bed rooms.

    PubMed

    King, M-F; Noakes, C J; Sleigh, P A

    2015-12-01

    Aerial dispersion of pathogens is recognized as a potential transmission route for hospital-acquired infections; however, little is known about the link between healthcare workers' (HCW) contacts with contaminated surfaces, the transmission of infections and hospital room design. We combine computational fluid dynamics (CFD) simulations of bioaerosol deposition with a validated probabilistic HCW-surface contact model to estimate the relative quantity of pathogens accrued on hands during six types of care procedures in two room types. Results demonstrate that care type is most influential (P < 0.001), followed by the number of surface contacts (P < 0.001) and the distribution of surface pathogens (P = 0.05). The highest hand contamination was predicted during personal care, despite the highest levels of hand hygiene. Ventilation rates of 6 ac/h vs. 4 ac/h showed only minor reductions in predicted hand colonization. Pathogens accrued on hands decreased monotonically after patient care in single rooms due to the physical barrier to bioaerosol transmission between rooms and subsequent hand sanitation. Conversely, contamination was predicted to increase during contact with patients in four-bed rooms due to the spatial spread of pathogens. The location of the infectious patient with respect to ventilation played a key role in determining pathogen loadings (P = 0.05). We present the first quantitative model predicting the surface contacts made by HCWs and the subsequent accretion of pathogenic material as they perform standard patient care. The model indicates that single rooms may significantly reduce the risk of cross-contamination due to indirect infection transmission. Not all care types pose the same risks to patients, and housekeeping performed by HCWs may be an important contribution to the transmission of pathogens between patients. Ventilation rates and the positioning of infectious patients within four-bed rooms can mitigate the accretion of pathogens, thereby reducing the risk posed by missed hand hygiene opportunities. The model provides a tool to quantitatively evaluate the influence of hospital room design on infection risk.

  346. Iranian risk model as a predictive tool for retinopathy in patients with type 2 diabetes.

    PubMed

    Azizi-Soleiman, Fatemeh; Heidari-Beni, Motahar; Ambler, Gareth; Omar, Rumana; Amini, Masoud; Hosseini, Sayed-Mohsen

    2015-10-01

    Diabetic retinopathy (DR) is the leading cause of blindness in patients with type 1 or type 2 diabetes. The gold standard for the detection of DR requires expensive equipment. This study was undertaken to develop a simple and practical scoring system to predict the probability of DR. A total of 1782 patients who had first-degree relatives with type II diabetes were selected. Eye examinations were performed by an expert ophthalmologist. Biochemical and anthropometric predictors of DR were measured. Logistic regression was used to develop a statistical model that can be used to predict DR. Goodness of fit was examined using the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. The risk model demonstrated good calibration and discrimination (ROC area = 0.76) in the validation sample. Factors associated with DR in our model were duration of diabetes (odds ratio [OR] = 2.14, 95% CI = 1.87 to 2.45); glycated hemoglobin (A1C) (OR = 1.21, 95% CI = 1.13 to 1.30); fasting plasma glucose (OR = 1.83, 95% CI = 1.28 to 2.62); systolic blood pressure (OR = 1.01, 95% CI = 1.00 to 1.02); and proteinuria (OR = 1.37, 95% CI = 1.01 to 1.85). The factors that had a protective effect against DR were body mass index and education level (OR = 0.95, 95% CI = 0.92 to 0.98). The good performance of our risk model suggests that it may be a useful risk-prediction tool for DR. It consists of positive predictors (A1C, diabetes duration, male sex, fasting plasma glucose, systolic blood pressure and proteinuria) as well as negative risk factors (body mass index and education level).

  347. Cost prediction following traumatic brain injury: model development and validation.

    PubMed

    Spitz, Gershon; McKenzie, Dean; Attwood, David; Ponsford, Jennie L

    2016-02-01

    The ability to predict costs following a traumatic brain injury (TBI) would assist in planning treatment and support services by healthcare providers, insurers and other agencies. The objective of the current study was to develop predictive models of hospital, medical, paramedical, and long-term care (LTC) costs for the first 10 years following a TBI. The sample comprised 798 participants with TBI, the majority of whom were male and aged between 15 and 34 at the time of injury. Costing information was obtained for hospital, medical, paramedical, and LTC costs up to 10 years post-injury. Demographic and injury-severity variables were collected at the time of admission to the rehabilitation hospital. Duration of post-traumatic amnesia (PTA) was the most important single predictor for each cost type. The final models predicted 44% of hospital costs, 26% of medical costs, 23% of paramedical costs, and 34% of LTC costs. Greater costs were incurred, depending on cost type, by individuals with longer PTA duration, a limb or chest injury, a lower GCS score, older age at injury, not being married or in a de facto relationship prior to injury, living in metropolitan areas, and reporting premorbid excessive or problem alcohol use. This study has provided a comprehensive analysis of factors predicting various types of costs following TBI, with the combination of injury-related and demographic variables predicting 23-44% of costs. PTA duration was the strongest predictor across all cost categories. These factors may be used for the planning and case management of individuals following TBI.

  348. Prediction of Hematopoietic Stem Cell Transplantation Related Mortality - Lessons Learned from the In-Silico Approach: A European Society for Blood and Marrow Transplantation Acute Leukemia Working Party Data Mining Study.

    PubMed

    Shouval, Roni; Labopin, Myriam; Unger, Ron; Giebel, Sebastian; Ciceri, Fabio; Schmid, Christoph; Esteve, Jordi; Baron, Frederic; Gorin, Norbert Claude; Savani, Bipin; Shimoni, Avichai; Mohty, Mohamad; Nagler, Arnon

    2016-01-01

    Models for prediction of allogeneic hematopoietic stem cell transplantation (HSCT) related mortality partially account for transplant risk. Improving predictive accuracy requires understanding of prediction-limiting factors, such as the statistical methodology used, the number and quality of features collected, or simply the population size. Using an in-silico approach (i.e., iterative computerized simulations) based on machine learning (ML) algorithms, we set out to analyze these factors. A cohort of 25,923 adult acute leukemia patients from the European Society for Blood and Marrow Transplantation (EBMT) registry was analyzed. The predictive objective was non-relapse mortality (NRM) 100 days following HSCT. Thousands of prediction models were developed under varying conditions: increasing sample size, specific subpopulations, and an increasing number of variables, which were selected and ranked by separate feature selection algorithms. Depending on the algorithm, predictive performance plateaued at a population size of 6,611-8,814 patients, reaching a maximal area under the receiver operating characteristic curve (AUC) of 0.67. AUCs of models developed on specific subpopulations ranged from 0.59 to 0.67, for patients in second complete remission and patients receiving reduced-intensity conditioning, respectively. Only 3-5 variables were necessary to achieve near-maximal AUCs. The top 3 ranking variables, shared by all algorithms, were disease stage, donor type, and conditioning regimen. Our findings empirically demonstrate that, with regard to NRM prediction, few variables "carry the weight" and that traditional HSCT data have been "worn out". "Breaking through" the predictive boundaries will likely require additional types of inputs.

  349. Basin-scale availability of salmonid spawning gravel as influenced by channel type and hydraulic roughness in mountain catchments.

    Treesearch

    John M. Buffington; David R. Montgomery; Harvey M. Greenberg

    2004-01-01

    A general framework is presented for examining the effects of channel type and associated hydraulic roughness on salmonid spawning-gravel availability in mountain catchments. Digital elevation models are coupled with grain-size predictions to provide basin-scale assessments of the potential extent and spatial pattern of spawning gravels. To demonstrate both the model...

  351. A Bayesian network model for predicting type 2 diabetes risk based on electronic health records

    NASA Astrophysics Data System (ADS)

    Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen

    2017-07-01

    An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). Accumulation of electronic health records (EHRs) makes it possible to build nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analyses, and the inconsistency of physical examination items means that risk factors are easily lost, which motivates the study of novel machine learning approaches for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with the quantitative analyses of DBRF, we adopt EHRs and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.
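    As an illustrative aside, the kind of Bayesian-network risk quantification this record describes can be sketched by hand for a toy network. The structure (Age and BMI as parents of T2D, with T2D driving a glucose finding), the discretization, and the records are all assumed for illustration; the paper's network is learned from real EHR data.

        # Sketch: a tiny discrete Bayesian network for T2D risk, fitted by counting.
        # Assumed structure: Age -> T2D <- BMI, and T2D -> Glucose.
        import pandas as pd

        # Hypothetical, already-discretized EHR records.
        df = pd.DataFrame({
            "age":     ["old", "old", "young", "old", "young", "young", "old", "young"],
            "bmi":     ["high", "high", "low", "low", "high", "low", "high", "low"],
            "glucose": ["high", "high", "low", "high", "low", "low", "low", "low"],
            "t2d":     [1, 1, 0, 1, 0, 0, 0, 0],
        })

        # Conditional probability tables via Laplace-smoothed counts.
        def cpt(child, parents):
            counts = df.groupby(parents + [child]).size().unstack(fill_value=0) + 1
            return counts.div(counts.sum(axis=1), axis=0)

        p_t2d = cpt("t2d", ["age", "bmi"])        # P(T2D | Age, BMI)
        p_glu = cpt("glucose", ["t2d"])           # P(Glucose | T2D)

        def risk(age, bmi, glucose):
            # P(T2D=1 | age, bmi, glucose) from the network factorization.
            num = p_t2d.loc[(age, bmi), 1] * p_glu.loc[1, glucose]
            den = sum(p_t2d.loc[(age, bmi), t] * p_glu.loc[t, glucose] for t in (0, 1))
            return num / den

        print(round(risk("old", "high", "high"), 3))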
  352. Pre-clinical methods for detecting the hypersensitivity potential of pharmaceuticals: regulatory considerations.

    PubMed

    Hastings, K L

    2001-02-02

    Immune-based systemic hypersensitivities account for a significant number of adverse drug reactions. There appear to be no adequate nonclinical models to predict systemic hypersensitivity to small molecular weight drugs. Although there are very good methods for detecting drugs that can induce contact sensitization, these have not been successfully adapted for prediction of systemic hypersensitivity. Several factors have made the development of adequate models difficult. The term systemic hypersensitivity encompasses many discrete immunopathologies. Each type of immunopathology presumably is the result of a specific cluster of immunologic and biochemical phenomena. Certainly other factors, such as genetic predisposition, metabolic idiosyncrasies, and concomitant diseases, further complicate the problem. Therefore, it may be difficult to find common mechanisms upon which to construct adequate models to predict specific types of systemic hypersensitivity reactions. There is some reason to hope, however, that adequate methods could be developed for at least identifying drugs that have the potential to produce signs indicative of a general hazard for immune-based reactions.

  353. Detection and quantification of adulteration of sesame oils with vegetable oils using gas chromatography and multivariate data analysis.

    PubMed

    Peng, Dan; Bi, Yanlan; Ren, Xiaona; Yang, Guolong; Sun, Shangde; Wang, Xuede

    2015-12-01

    This study was performed to develop a hierarchical approach for the detection and quantification of adulteration of sesame oil with vegetable oils using gas chromatography (GC). First, a model was constructed to discriminate between authentic sesame oils and adulterated sesame oils using the support vector machine (SVM) algorithm. Then, another SVM-based model was developed to identify the type of adulterant in the mixed oil. Finally, prediction models of sesame oil content were built for each kind of adulterant oil using the partial least squares method. To validate this approach, 746 samples were prepared by mixing authentic sesame oils with five types of vegetable oil. The prediction results show that the detection limit for authentication is as low as 5% in mixing ratio and the root-mean-square errors of prediction range from 1.19% to 4.29%, meaning that this approach is a valuable tool to detect and quantify the adulteration of sesame oil. Copyright © 2015 Elsevier Ltd. All rights reserved.
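    As an illustrative aside, a hierarchical screen of this kind (authentic-vs-adulterated SVM, adulterant-type SVM, then per-adulterant PLS quantification) might be assembled roughly as below. The synthetic GC profiles, labels, and component counts are placeholders, not the published calibration.

        # Sketch: hierarchical adulteration screen, assuming GC peak areas as features.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 12))                  # stand-in GC profiles
        is_adulterated = rng.integers(0, 2, 300)        # stage 1 label
        adulterant = rng.integers(0, 5, 300)            # stage 2 label (5 oil types)
        sesame_pct = rng.uniform(50, 100, 300)          # stage 3 target

        stage1 = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, is_adulterated)
        mask = is_adulterated == 1
        stage2 = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X[mask], adulterant[mask])
        # One PLS quantification model per adulterant type.
        stage3 = {k: PLSRegression(n_components=4).fit(X[mask & (adulterant == k)],
                                                       sesame_pct[mask & (adulterant == k)])
                  for k in range(5)}

        def predict(sample):
            sample = sample.reshape(1, -1)
            if stage1.predict(sample)[0] == 0:
                return "authentic", None
            k = stage2.predict(sample)[0]
            return f"adulterated with oil type {k}", float(stage3[k].predict(sample)[0, 0])

        print(predict(X[0]))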
  354. Is the Factor-of-2 Rule Broadly Applicable for Evaluating the Prediction Accuracy of Metal-Toxicity Models?

    PubMed

    Meyer, Joseph S; Traudt, Elizabeth M; Ranville, James F

    2018-01-01

    In aquatic toxicology, a toxicity-prediction model is generally deemed acceptable if its predicted median lethal concentrations (LC50 values) or median effect concentrations (EC50 values) are within a factor of 2 of their paired, observed LC50 or EC50 values. However, that rule of thumb is based on results from only two studies: multiple LC50 values for the fathead minnow (Pimephales promelas) exposed to Cu in one type of exposure water, and multiple EC50 values for Daphnia magna exposed to Zn in another type of exposure water. We tested whether the factor-of-2 rule of thumb is also supported in a different dataset in which D. magna were exposed separately to Cd, Cu, Ni, or Zn. Overall, the factor-of-2 rule of thumb appeared to be a good guide for evaluating the acceptability of a toxicity model's underprediction or overprediction of observed LC50 or EC50 values in these acute toxicity tests.
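    As an illustrative aside, the factor-of-2 screen itself reduces to a one-line ratio test; the paired values below are made up.

        # Sketch: the factor-of-2 screen applied to paired predicted/observed EC50s (toy values).
        def within_factor_of_2(predicted, observed):
            # Acceptable if the ratio lies between 0.5x and 2x of the observed value.
            ratio = predicted / observed
            return 0.5 <= ratio <= 2.0

        pairs = [(120.0, 95.0), (40.0, 110.0), (8.0, 5.5)]   # (predicted, observed), e.g. in ug/L
        share = sum(within_factor_of_2(p, o) for p, o in pairs) / len(pairs)
        print(f"{share:.0%} of predictions within a factor of 2")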
  355. Early Detection of Heart Failure Using Electronic Health Records: Practical Implications for Time Before Diagnosis, Data Diversity, Data Quantity, and Data Density.

    PubMed

    Ng, Kenney; Steinhubl, Steven R; deFilippi, Christopher; Dey, Sanjoy; Stewart, Walter F

    2016-11-01

    Using electronic health records data to predict events and onset of diseases is increasingly common. Relatively little is known, however, about the tradeoffs between data requirements and model utility. We examined the performance of machine learning models trained to detect prediagnostic heart failure in primary care patients using longitudinal electronic health records data. Model performance was assessed in relation to data requirements defined by the prediction window length (time before clinical diagnosis), the observation window length (duration of observation before the prediction window), the number of different data domains (data diversity), the number of patient records in the training data set (data quantity), and the density of patient encounters (data density). A total of 1684 incident heart failure cases and 13 525 sex-, age-category-, and clinic-matched controls were used for modeling. Model performance improved as (1) the prediction window length decreases, especially when <2 years; (2) the observation window length increases, but then levels off after 2 years; (3) the training data set size increases, but then levels off after 4000 patients; (4) more diverse data types are used, although, in order, the combination of diagnosis, medication order, and hospitalization data was most important; and (5) data were confined to patients who had ≥10 phone or face-to-face encounters in 2 years. These empirical findings suggest possible guidelines for the minimum amount and type of data needed to train effective disease onset predictive models using longitudinal electronic health records data. © 2016 American Heart Association, Inc.

  356. Predicting emergency department volume using forecasting methods to create a "surge response" for noncrisis events.

    PubMed

    Chase, Valerie J; Cohn, Amy E M; Peterson, Timothy A; Lavieri, Mariel S

    2012-05-01

    This study investigated whether emergency department (ED) variables could be used in mathematical models to predict a future surge in ED volume based on recent levels of use of physician capacity. The models may be used to guide decisions related to on-call staffing in non-crisis-related surges of patient volume. A retrospective analysis was conducted using information spanning July 2009 through June 2010 from a large urban teaching hospital with a Level I trauma center. A comparison of significance was used to assess the impact of multiple patient-specific variables on the state of the ED. Physician capacity was modeled based on historical physician treatment capacity and productivity. Binary logistic regression analysis was used to determine the probability that the available physician capacity would be sufficient to treat all patients forecasted to arrive in the next time period. The prediction horizons used were 15 minutes, 30 minutes, 1 hour, 2 hours, 4 hours, 8 hours, and 12 hours. Five consecutive months of patient data from July 2010 through November 2010, similar to the data used to generate the models, were used to validate the models. Positive predictive values, Type I and Type II errors, and real-time accuracy in predicting noncrisis surge events were used to evaluate the forecast accuracy of the models. The ratio of new patients requiring treatment to total physician capacity (termed the care utilization ratio [CUR]) was deemed a robust predictor of the state of the ED (with a CUR greater than 1 indicating that the physician capacity would not be sufficient to treat all patients forecasted to arrive). Prediction intervals of 30 minutes, 8 hours, and 12 hours performed best of all models analyzed, with deviances of 1.000, 0.951, and 0.864, respectively. A 95% significance level was used to validate the models against the July 2010 through November 2010 data set. Positive predictive values ranged from 0.738 to 0.872, true positives ranged from 74% to 94%, and true negatives ranged from 70% to 90%, depending on the threshold used to determine the state of the ED with the 30-minute prediction model. The CUR is a new and robust indicator of an ED system's performance. By investigating different prediction intervals, the study was able to model the tradeoff between longer time to respond and shorter but more accurate predictions. Current practice would have been improved by using the proposed models, which would have identified the surge in patient volume earlier on noncrisis days. © 2012 by the Society for Academic Emergency Medicine.
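    As an illustrative aside, the care utilization ratio and a logistic surge classifier built on it might look roughly like the sketch below; the arrival, capacity, and label definitions are simplified stand-ins for the study's forecasting setup.

        # Sketch: care utilization ratio (CUR) and a surge classifier, with made-up fields.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def care_utilization_ratio(forecast_arrivals, physician_capacity):
            # CUR > 1 means forecasted demand exceeds available treatment capacity.
            return forecast_arrivals / physician_capacity

        rng = np.random.default_rng(1)
        arrivals_forecast = rng.poisson(12, 500).astype(float)   # patients expected in next 30 min
        capacity = rng.uniform(8, 18, 500)                       # patients treatable in next 30 min
        realized_arrivals = rng.poisson(arrivals_forecast)       # what actually showed up
        surge = (realized_arrivals > capacity).astype(int)       # label: capacity was insufficient

        cur = care_utilization_ratio(arrivals_forecast, capacity)
        X = np.column_stack([arrivals_forecast, capacity, cur])
        model = LogisticRegression(max_iter=1000).fit(X, surge)
        print(model.predict_proba(X[:3])[:, 1].round(2))         # surge probabilities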
  357. Urinary Liver-Type Fatty Acid–Binding Protein and Progression of Diabetic Nephropathy in Type 1 Diabetes

    PubMed Central

    Panduru, Nicolae M.; Forsblom, Carol; Saraheimo, Markku; Thorn, Lena; Bierhaus, Angelika; Humpert, Per M.; Groop, Per-Henrik

    2013-01-01

    OBJECTIVE Diabetic nephropathy (DN) has mainly been considered a glomerular disease, although tubular dysfunction may also play a role. This study assessed the predictive value for progression of a tubular marker, urinary liver-type fatty acid–binding protein (L-FABP), at all stages of DN. RESEARCH DESIGN AND METHODS At baseline, 1,549 patients with type 1 diabetes had an albumin excretion rate (AER) within normal reference ranges, 334 had microalbuminuria, and 363 had macroalbuminuria. Patients were monitored for a median of 5.8 years (95% CI 5.7–5.9). In addition, 208 nondiabetic subjects were studied. L-FABP was measured by ELISA and normalized to urinary creatinine. Separate Cox proportional hazards models for progression at every stage of DN were used to evaluate the predictive value of L-FABP. The potential benefit of using L-FABP alone or together with AER was assessed by receiver operating characteristic curve analyses. RESULTS L-FABP was an independent predictor of progression at all stages of DN. As would be expected, receiver operating characteristic curves for the prediction of progression were significantly larger for AER than for L-FABP, except for patients with baseline macroalbuminuria, in whom the areas were similar. Adding L-FABP to AER in the models did not significantly improve risk prediction of progression in favor of the combination of L-FABP plus AER compared with AER alone. CONCLUSIONS L-FABP is an independent predictor of progression of DN irrespective of disease stage. L-FABP used alone or together with AER may not improve the risk prediction of DN progression in patients with type 1 diabetes, but further studies are needed in this regard. PMID:23378622
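    As an illustrative aside, a Cox proportional hazards analysis of progression with a marker such as L-FABP can be sketched with the lifelines package (one possible tool choice); the column names and the handful of records below are hypothetical.

        # Sketch: Cox proportional hazards model of progression, using the lifelines package.
        import pandas as pd
        from lifelines import CoxPHFitter

        # Hypothetical per-patient records: follow-up years, progression event (1/0),
        # and log-transformed baseline L-FABP/creatinine and AER.
        df = pd.DataFrame({
            "followup_years": [5.9, 4.2, 6.1, 3.0, 5.5, 2.8, 6.0, 4.9, 3.6, 5.1, 2.2, 4.4],
            "progressed":     [0,   1,   0,   1,   0,   1,   0,   0,   1,   0,   1,   1],
            "log_lfabp":      [0.2, 1.4, 0.1, 1.1, 0.4, 0.9, 0.3, 1.2, 1.6, 0.8, 0.7, 1.0],
            "log_aer":        [1.0, 2.2, 0.8, 2.5, 1.1, 1.4, 0.9, 2.0, 2.9, 1.6, 2.4, 1.8],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="followup_years", event_col="progressed")
        print(cph.summary[["coef", "exp(coef)", "p"]])   # hazard ratios per covariate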
  358. Metabolic network modeling with model organisms.

    PubMed

    Yilmaz, L Safak; Walhout, Albertha Jm

    2017-02-01

    Flux balance analysis (FBA) with genome-scale metabolic network models (GSMNM) allows systems-level predictions of metabolism in a variety of organisms. Different types of predictions, with different accuracy levels, can be made depending on the applied experimental constraints, which range from measurement of exchange fluxes to the integration of gene expression data. Metabolic network modeling with model organisms has pioneered method development in this field. In addition, model organism GSMNMs are useful for a basic understanding of metabolism and, in the case of animal models, for the study of metabolic human diseases. Here, we discuss GSMNMs of the most highly used model organisms with an emphasis on recent reconstructions. Published by Elsevier Ltd.

  359. Metabolic network modeling with model organisms

    PubMed Central

    Yilmaz, L. Safak; Walhout, Albertha J.M.

    2017-01-01

    Flux balance analysis (FBA) with genome-scale metabolic network models (GSMNM) allows systems-level predictions of metabolism in a variety of organisms. Different types of predictions, with different accuracy levels, can be made depending on the applied experimental constraints, which range from measurement of exchange fluxes to the integration of gene expression data. Metabolic network modeling with model organisms has pioneered method development in this field. In addition, model organism GSMNMs are useful for a basic understanding of metabolism and, in the case of animal models, for the study of metabolic human diseases. Here, we discuss GSMNMs of the most highly used model organisms with an emphasis on recent reconstructions. PMID:28088694

  360. Prediction of Microbial Infection of Cultured Cells Using DNA Microarray Gene-Expression Profiles of Host Responses

    PubMed Central

    Park, Yu Rang; Chung, Tae Su; Lee, Young Joo; Song, Yeong Wook; Lee, Eun Young; Sohn, Yeo Won; Song, Sukgil; Park, Woong Yang

    2012-01-01

    Infection by microorganisms may cause fatally erroneous interpretations in biologic research based on cell culture. Contamination by microorganisms in cell culture is quite frequent (5% to 35%). However, current approaches to identify the presence of contamination have many limitations, such as the high cost of time and labor and difficulty in interpreting the results. In this paper, we propose a model to predict cell infection using a microarray technique, which gives an overview of the whole-genome profile. By analysis of 62 microarray expression profiles under various experimental conditions altering cell type, source of infection, and collection time, we discovered 5 marker genes: NM_005298, NM_016408, NM_014588, S76389, and NM_001853. In addition, we discovered that two of these genes, S76389 and NM_001853, are involved in a Mycoplasma-specific infection process. We also suggest models to predict the source of infection, cell type, or time after infection. We implemented a web-based prediction tool for microarray data, named Prediction of Microbial Infection (http://www.snubi.org/software/PMI). PMID:23091307
  361. Silicon in Mars' Core: A Prediction Based on Mars Model Using Nitrogen and Oxygen Isotopes in SNC Meteorites

    NASA Technical Reports Server (NTRS)

    Mohapatra, R. K.; Murty, S. V. S.

    2002-01-01

    Chemical and (oxygen) isotopic compositions of SNC meteorites have been used by a number of workers to infer the nature of precursor materials for the accretion of Mars. The idea that chondritic materials played a key role in the formation of Mars has been the central assumption in these works. Wanke and Dreibus have proposed a mixture of two types of chondritic materials, differing in oxygen fugacity but having a CI-type bulk chemical composition for the nonvolatile elements, for Mars' precursor. However, a number of studies based on high pressure and temperature melting experiments do not favor a CI-type bulk planet composition for Mars, as it predicts a bulk planet Fe/Si ratio much higher than that reported from the recent Pathfinder data. Oxygen forms the bulk of Mars (approximately 40% by wt.) and might provide clues to the type of materials that formed Mars. However, models based on the oxygen isotopic compositions of SNC meteorites predict three different mixtures of precursor materials for Mars: 90% H + 10% CM, 85% H + 11% CV + 4% CI, and 45% EH + 55% H.
    As each of these models has been shown to be consistent with the bulk geophysical properties (such as mean density and moment of inertia factor) of Mars, the nature of the material that accreted to form Mars remains ambiguous.

  362. A GRID OF THREE-DIMENSIONAL STELLAR ATMOSPHERE MODELS OF SOLAR METALLICITY. I. GENERAL PROPERTIES, GRANULATION, AND ATMOSPHERIC EXPANSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trampedach, Regner; Asplund, Martin; Collet, Remo

    2013-05-20

    Present grids of stellar atmosphere models are the workhorses in interpreting stellar observations and determining their fundamental parameters. These models rely on greatly simplified models of convection, however, lending less predictive power to such models of late-type stars. We present a grid of improved and more reliable stellar atmosphere models of late-type stars, based on deep, three-dimensional (3D), convective, stellar atmosphere simulations. This grid is to be used in general for interpreting observations and improving stellar and asteroseismic modeling. We solve the Navier-Stokes equations in 3D, concurrently with the radiative transfer equation, for a range of atmospheric parameters covering most of stellar evolution with convection at the surface. We emphasize the use of the best available atomic physics for quantitative predictions and comparisons with observations. We present granulation size, convective expansion of the acoustic cavity, and asymptotic adiabat as functions of atmospheric parameters.

  363. RAYLEIGH–TAYLOR UNSTABLE FLAMES—FAST OR FASTER?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicks, E. P., E-mail: eph2001@columbia.edu

    2015-04-20

    Rayleigh–Taylor (RT) unstable flames play a key role in the explosions of Type Ia supernovae. However, the dynamics of these flames are still not well understood. RT unstable flames are affected both by the RT instability of the flame front and by RT-generated turbulence. The coexistence of these factors complicates the choice of flame speed subgrid models for full-star Type Ia simulations. Both processes can stretch and wrinkle the flame surface, increasing its area and, therefore, the burning rate. In past research, subgrid models have been based on either the RT instability or turbulence setting the flame speed. We evaluate both models, checking their assumptions and their ability to correctly predict the turbulent flame speed. Specifically, we analyze a large parameter study of 3D direct numerical simulations of RT unstable model flames.
    This study varies both the simulation domain width and the gravity in order to probe a wide range of flame behaviors. We show that RT unstable flames are different from traditional turbulent flames: they are thinner rather than thicker when turbulence is stronger. We also show that none of the several different types of turbulent flame speed models accurately predicts measured flame speeds. In addition, we find that the RT flame speed model only correctly predicts the measured flame speed in a certain parameter regime. Finally, we propose that the formation of cusps may be the factor causing the flame to propagate more quickly than predicted by the RT model.

  364. Rayleigh-Taylor Unstable Flames -- Fast or Faster?

    NASA Astrophysics Data System (ADS)

    Hicks, E. P.

    2015-04-01

    Rayleigh-Taylor (RT) unstable flames play a key role in the explosions of Type Ia supernovae. However, the dynamics of these flames are still not well understood. RT unstable flames are affected both by the RT instability of the flame front and by RT-generated turbulence. The coexistence of these factors complicates the choice of flame speed subgrid models for full-star Type Ia simulations. Both processes can stretch and wrinkle the flame surface, increasing its area and, therefore, the burning rate. In past research, subgrid models have been based on either the RT instability or turbulence setting the flame speed. We evaluate both models, checking their assumptions and their ability to correctly predict the turbulent flame speed. Specifically, we analyze a large parameter study of 3D direct numerical simulations of RT unstable model flames. This study varies both the simulation domain width and the gravity in order to probe a wide range of flame behaviors. We show that RT unstable flames are different from traditional turbulent flames: they are thinner rather than thicker when turbulence is stronger. We also show that none of the several different types of turbulent flame speed models accurately predicts measured flame speeds. In addition, we find that the RT flame speed model only correctly predicts the measured flame speed in a certain parameter regime. Finally, we propose that the formation of cusps may be the factor causing the flame to propagate more quickly than predicted by the RT model.

  365. Disseminated flake graphite and amorphous graphite deposit types. An analysis using grade and tonnage models

    USGS Publications Warehouse

    Sutphin, David M.; Bliss, James D.

    1990-01-01

    On the basis of differences derived from genetic, descriptive, and grade-tonnage data, graphite deposits are classified here into three deposit types: disseminated flake, amorphous (microcrystalline), or graphite veins.
    Descriptive models have been constructed for each of these deposit types, and grade-tonnage models are constructed for the disseminated flake and amorphous deposit types. Grade and tonnage data are also used to construct grade-tonnage models that assist in predicting the size and grade of undiscovered graphite deposits. The median tonnage and carbon grade of disseminated flake deposits are 240 000 tonnes and 9% carbon, and for amorphous deposits, 130 000 tonnes and 40% carbon. The differences in grade between disseminated flake and amorphous deposit types are statistically significant, whereas the differences in amount of contained carbon are not.

  366. Prediction and control of neural responses to pulsatile electrical stimulation

    NASA Astrophysics Data System (ADS)

    Campbell, Luke J.; Sly, David James; O'Leary, Stephen John

    2012-04-01

    This paper aims to predict and control the probability of firing of a neuron in response to pulsatile electrical stimulation of the type delivered by neural prostheses such as the cochlear implant, the bionic eye, or deep brain stimulation. Using the cochlear implant as a model, we developed an efficient computational model that predicts the responses of auditory nerve fibers to electrical stimulation and evaluated the model's accuracy by comparing the model output with pooled responses from a group of guinea pig auditory nerve fibers. It was found that the model accurately predicted the changes in neural firing probability over time for constant and variable amplitude electrical pulse trains, including speech-derived signals, delivered at rates up to 889 pulses s-1. A simplified version of the model that did not incorporate adaptation was used to adaptively predict, within its limitations, the pulsatile electrical stimulus required to cause a desired response from neurons at up to 250 pulses s-1. Future stimulation strategies for cochlear implants and other neural prostheses may be enhanced using similar models that account for the way that neural responses are altered by previous stimulation.
  367. Model predictive control of the solid oxide fuel cell stack temperature with models based on experimental data

    NASA Astrophysics Data System (ADS)

    Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari

    2015-03-01

    Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method, and the models utilized in this work are ARX-type (autoregressive with extra input), multiple-input multiple-output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and the maximum temperature need to be controlled, and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
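    As an illustrative aside, the ARX-type models underlying GPC can be identified by ordinary least squares. The sketch below fits a single-input, single-output second-order ARX model to toy data; the "true" coefficients and signal names are invented, and the authors' actual models are multiple-input multiple-output and identified from SOFC experiments.

        # Sketch: identify y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + e[k] by least squares.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 500
        u = rng.normal(size=N)                      # manipulated input (e.g., cathode air flow)
        y = np.zeros(N)                             # controlled output (e.g., stack max temperature)
        for k in range(2, N):                       # "true" system used to generate toy data
            y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.05 * rng.normal()

        # Build the regressor matrix and solve for the ARX parameters.
        Phi = np.column_stack([y[1:N-1], y[0:N-2], u[1:N-1]])   # [y[k-1], y[k-2], u[k-1]]
        theta, *_ = np.linalg.lstsq(Phi, y[2:N], rcond=None)
        a1, a2, b1 = theta
        print(f"a1={a1:.2f}  a2={a2:.2f}  b1={b1:.2f}")         # close to 1.5, -0.7, 0.5

        # One-step-ahead prediction, the building block a GPC controller optimizes over.
        y_pred = Phi @ theta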
  368. Determination of protein folding kinetic types using sequence and predicted secondary structure and solvent accessibility.

    PubMed

    Zhang, Hua; Zhang, Tuo; Gao, Jianzhao; Ruan, Jishou; Shen, Shiyi; Kurgan, Lukasz

    2012-01-01

    Proteins fold through either a two-state (TS) process, with no visible intermediates, or a multi-state (MS) process, via at least one intermediate. We analyze sequence-derived factors that determine folding types by introducing a novel sequence-based folding type predictor called FOKIT. This method implements a logistic regression model with six input features which combine information concerning amino acid composition and predicted secondary structure and solvent accessibility. FOKIT provides predictions with an average Matthews correlation coefficient (MCC) between 0.58 and 0.91, measured using out-of-sample tests on four benchmark datasets. These results are shown to be competitive with or better than the results of four modern predictors. We also show that FOKIT outperforms these methods when predicting chains that share low similarity with the chains used to build the model, which is an important advantage given the limited number of annotated chains. We demonstrate that inclusion of solvent accessibility helps in discrimination of the folding kinetic types and that three of the features constitute statistically significant markers that differentiate TS and MS folders. We found that increased content of exposed Trp and buried Leu is indicative of MS folding, which implies that the exposure/burial of certain hydrophobic residues may play an important role in the formation of folding intermediates. Our conclusions are supported by two case studies.
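    As an illustrative aside, a FOKIT-style predictor is essentially a logistic regression on a few sequence-derived features scored with the Matthews correlation coefficient; the features and labels below are synthetic placeholders.

        # Sketch: logistic regression on six sequence-derived features, scored by MCC.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import matthews_corrcoef

        rng = np.random.default_rng(0)
        n = 200
        # Six hypothetical features, e.g. exposed-Trp content, buried-Leu content,
        # predicted helix/strand fractions, mean solvent accessibility, chain length.
        X = rng.normal(size=(n, 6))
        y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)  # 1 = MS folder

        clf = LogisticRegression()
        y_hat = cross_val_predict(clf, X, y, cv=5)
        print("MCC:", round(matthews_corrcoef(y, y_hat), 2))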
  369. Effects of regionalization decisions on an O/E index for the US national assessment

    EPA Science Inventory

    We examined the effects of different regionalization schemes on the performance of River Invertebrate Prediction and Classification System (RIVPACS)-type predictive models in assessing the biological conditions of streams of the US for the National Wadeable Streams Assessment (WS...

  370. A Final Approach Trajectory Model for Current Operations

    NASA Technical Reports Server (NTRS)

    Gong, Chester; Sadovsky, Alexander

    2010-01-01

    Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system, which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one is based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study the effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look-ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look-ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
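    As an illustrative aside, the polynomial-interpolation idea can be contrasted with dead reckoning on a toy along-track history; the track values and the quadratic degree are assumptions for illustration, not the paper's model.

        # Sketch: fit a polynomial to recent along-track positions and extrapolate,
        # versus constant-velocity dead reckoning (synthetic track data).
        import numpy as np

        t = np.arange(0.0, 60.0, 5.0)     # seconds of track history
        s = (180.0 * t - 0.4 * t**2       # along-track position (m) for a decelerating arrival
             + np.random.default_rng(0).normal(0, 30, t.size))

        coeffs = np.polyfit(t, s, deg=2)                   # polynomial trajectory model
        t_future = 120.0
        s_poly = np.polyval(coeffs, t_future)

        v_last = (s[-1] - s[-2]) / (t[-1] - t[-2])         # dead reckoning from the last two hits
        s_dead = s[-1] + v_last * (t_future - t[-1])

        print(f"polynomial: {s_poly:.0f} m   dead reckoning: {s_dead:.0f} m")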
  371. Development and validation of a cost-utility model for Type 1 diabetes mellitus.

    PubMed

    Wolowacz, S; Pearson, I; Shannon, P; Chubb, B; Gundgaard, J; Davies, M; Briggs, A

    2015-08-01

    To develop a health economic model to evaluate the cost-effectiveness of new interventions for Type 1 diabetes mellitus by their effects on long-term complications (measured through mean HbA1c) while capturing the impact of treatment on hypoglycaemic events. Through a systematic review, we identified complications associated with Type 1 diabetes mellitus and data describing the long-term incidence of these complications. An individual patient simulation model was developed and included the following complications: cardiovascular disease, peripheral neuropathy, microalbuminuria, end-stage renal disease, proliferative retinopathy, ketoacidosis, cataract, hypoglycaemia, and adverse birth outcomes. Risk equations were developed from published cumulative incidence data and hazard ratios for the effect of HbA1c, age, and duration of diabetes. We validated the model by comparing model predictions with observed outcomes from studies used to build the model (internal validation) and from other published data (external validation). We performed illustrative analyses for typical patient cohorts and a hypothetical intervention. Model predictions were within 2% of expected values in the internal validation and within 8% of observed values in the external validation (percentages represent absolute differences in cumulative incidence). The model utilized high-quality, recent data specific to people with Type 1 diabetes mellitus. In the model validation, results deviated less than 8% from expected values. © 2014 Research Triangle Institute d/b/a RTI Health Solutions. Diabetic Medicine © 2014 Diabetes UK.

  372. Predicting in vivo effect levels for repeat-dose systemic toxicity using chemical, biological, kinetic and study covariates.

    PubMed

    Truong, Lisa; Ouedraogo, Gladys; Pham, LyLy; Clouzeau, Jacques; Loisel-Joubert, Sophie; Blanchet, Delphine; Noçairi, Hicham; Setzer, Woodrow; Judson, Richard; Grulke, Chris; Mansouri, Kamel; Martin, Matthew

    2018-02-01

    In an effort to address a major challenge in chemical safety assessment, the development of alternative approaches for characterizing systemic effect levels, a predictive model was developed. Systemic effect levels were curated from ToxRefDB, HESS-DB, and COSMOS-DB from numerous study types, totaling 4379 in vivo studies for 1247 chemicals. Observed systemic effects in mammalian models are a complex function of chemical dynamics, kinetics, and inter- and intra-individual variability. To address this complex problem, systemic effect levels were modeled at the study level by leveraging study covariates (e.g., study type, strain, administration route) in addition to multiple descriptor sets, including chemical (ToxPrint, PaDEL, and Physchem), biological (ToxCast), and kinetic descriptors. Using random forest modeling with cross-validation and external validation procedures, study-level covariates alone accounted for approximately 15% of the variance, reducing the root mean squared error (RMSE) from 0.96 log10 to 0.85 log10 mg/kg/day and providing a baseline performance metric (lower expectation of model performance). A consensus model developed using a combination of study-level covariates and chemical, biological, and kinetic descriptors explained a total of 43% of the variance, with an RMSE of 0.69 log10 mg/kg/day. A benchmark model (upper expectation of model performance) was also developed, with an RMSE of 0.5 log10 mg/kg/day, by incorporating study-level covariates and the mean effect level per chemical. To achieve a representative chemical-level prediction, the minimum study-level predicted and observed effect levels per chemical were compared, reducing the RMSE from 1.0 to 0.73 log10 mg/kg/day, equivalent to 87% of predictions falling within an order of magnitude of the observed value. Although biological descriptors did not improve model performance, the final model was enriched for biological descriptors that indicated xenobiotic metabolism gene expression, oxidative stress, and cytotoxicity, demonstrating the importance of accounting for kinetics and non-specific bioactivity in predicting systemic effect levels. Herein, we generated an externally predictive model of systemic effect levels for use as a safety assessment tool and have generated forward predictions for over 30,000 chemicals.
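    As an illustrative aside, the study-covariate-plus-descriptor regression could be sketched as a random-forest model evaluated by cross-validated RMSE in log10 units; the features and response below are synthetic, not the curated ToxRefDB/HESS-DB/COSMOS-DB data.

        # Sketch: random-forest regression of log10 effect levels on mixed covariates/descriptors.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        n = 1000
        study_type = rng.integers(0, 4, n)            # categorical study covariate, already encoded
        admin_route = rng.integers(0, 3, n)
        descriptors = rng.normal(size=(n, 10))        # stand-in chemical/kinetic descriptors
        X = np.column_stack([study_type, admin_route, descriptors])
        y = 1.5 + 0.3 * study_type - 0.5 * descriptors[:, 0] + rng.normal(0, 0.7, n)  # log10 mg/kg/day

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        y_hat = cross_val_predict(model, X, y, cv=5)
        rmse = np.sqrt(np.mean((y - y_hat) ** 2))
        print(f"cross-validated RMSE: {rmse:.2f} log10 mg/kg/day")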
  373. A system structure for predictive relations in penetration mechanics

    NASA Astrophysics Data System (ADS)

    Korjack, Thomas A.

    1992-02-01

    The availability of a software system yielding quick numerical models to predict ballistic behavior is a requisite for any research laboratory engaged in the study of material behavior. Especially important for rapid prototyping of terminal impact problems is a system structure that directs a specific material and impact situation toward a specific predictive model. This is of particular importance when the ranges of validity are at stake and the pertinent constraints associated with the impact are unknown. Hence, a compilation of semiempirical predictive penetration relations for various physical phenomena has been organized into a data structure for the purpose of developing a knowledge-based, decision-aided expert system to predict the terminal ballistic behavior of projectiles and targets. The ranges of validity and constraints of operation of each model were examined and cast into a decision tree structure that includes target type, target material, projectile type, projectile material, attack configuration, and performance or damage measures. This decision system implements many penetration relations, identifies formulas that match user-given conditions, and displays the predictive relation coincident with the match, in addition to a numerical solution. The physical regimes under consideration encompass the hydrodynamic, transitional, and solid; the targets are either semi-infinite or plates, and the projectiles include kinetic and chemical energy types. A preliminary database has been constructed to allow further development of inductive and deductive reasoning techniques applied to ballistic situations involving terminal mechanics.

  374. Predictive power of the GRACE score in a population with diabetes.

    PubMed

    Baeza-Román, Anna; de Miguel-Balsa, Eva; Latour-Pérez, Jaime; Carrillo-López, Andrés

    2017-12-01

    Current clinical practice guidelines recommend risk stratification in patients with acute coronary syndrome (ACS) upon admission to hospital. Diabetes mellitus (DM) is widely recognized as an independent predictor of mortality in these patients, although it is not included in the GRACE risk score. The objective of this study is to validate the GRACE risk score in a contemporary population, particularly in the subgroup of patients with diabetes, and to test the effect of including the DM variable in the model. Retrospective cohort study in patients included in the ARIAM-SEMICYUC registry with a diagnosis of ACS and with available in-hospital mortality data. We tested the predictive power of the GRACE score by calculating the area under the ROC curve. We assessed the calibration of the score and its predictive ability based on type of ACS and the presence of DM. Finally, we evaluated the effect of including the DM variable in the model by calculating the net reclassification improvement. The GRACE score shows good predictive power for hospital mortality in the study population, with a moderate degree of calibration and no significant differences based on ACS type or the presence of DM. Including DM as a variable did not add any predictive value to the GRACE model. The GRACE score has appropriate predictive power, with good calibration and clinical applicability in the subgroup of diabetic patients. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
  375. Specificity of disgust domains in the prediction of contamination anxiety and avoidance: a multimodal examination.

    PubMed

    Olatunji, Bunmi O; Ebesutani, Chad; Haidt, Jonathan; Sawchuk, Craig N

    2014-07-01

    Although core, animal-reminder, and contamination disgust are viewed as distinct "types" of disgust vulnerabilities, the extent to which individual differences in the three disgust domains uniquely predict contamination-related anxiety and avoidance remains unclear. Three studies were conducted to fill this important gap in the literature. Study 1 was conducted to first determine whether the three types of disgust could be replicated in a larger and more heterogeneous sample. Confirmatory factor analysis revealed that a bifactor model consisting of a "general disgust" dimension and the three distinct disgust dimensions yielded a better fit than a one-factor model. Structural equation modeling in Study 2 showed that while latent core, animal-reminder, and contamination disgust factors each uniquely predicted a latent "contamination anxiety" factor above and beyond general disgust, only animal-reminder disgust uniquely predicted a latent "non-contamination anxiety" factor above and beyond general disgust. However, Study 3 found that only contamination disgust uniquely predicted behavioral avoidance in a public restroom where contamination concerns are salient. These findings suggest that although the three disgust domains are associated with contamination anxiety and avoidance, individual differences in contamination disgust sensitivity appear to be most uniquely predictive of contamination-related distress. The implications of these findings for the development and maintenance of anxiety-related disorders marked by excessive contamination concerns are discussed. Copyright © 2014. Published by Elsevier Ltd.

  376. Recursive formulae and performance comparisons for first mode dynamics of periodic structures

    NASA Astrophysics Data System (ADS)

    Hobeck, Jared D.; Inman, Daniel J.

    2017-05-01

    Periodic structures are growing in popularity, especially in the energy harvesting and metastructures communities. Common types of these unique structures are referred to in the literature as zigzag, orthogonal spiral, fan-folded, and longitudinal zigzag structures. Many studies of periodic structures have two competing goals in common: (a) minimizing natural frequency, and (b) minimizing mass or volume. These goals suggest that no single design is best for all applications; therefore, there is a need for design optimization and comparison tools, which first require efficient, easy-to-implement models. The structural dynamics models available for these types of structures do provide exact analytical solutions; however, they are complex, require tedious implementation, and provide more information than is necessary for practical applications, making them computationally inefficient. This paper presents experimentally validated recursive models that are able to very accurately and efficiently predict the dynamics of the four most common types of periodic structures. The proposed modeling technique employs a combination of static deflection formulae and Rayleigh's quotient to estimate the first mode shape and natural frequency of periodic structures having any number of beams. Also included in this paper are the results of an extensive experimental validation study which show excellent agreement between model predictions and measurements. Lastly, the proposed models are used to evaluate the performance of each type of structure. Results of this performance evaluation reveal key advantages and disadvantages associated with each type of structure.
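    As an illustrative aside, Rayleigh's quotient with a static-deflection trial shape can be demonstrated on the simplest possible case, a uniform cantilever with lumped masses; the material and geometry values below are arbitrary, and the paper's recursive formulae for zigzag-type structures are not reproduced here.

        # Sketch: first natural frequency from Rayleigh's quotient with a static-deflection
        # trial shape, for a uniform cantilever discretized into lumped masses.
        import numpy as np

        E, I = 69e9, 8.33e-13          # aluminium, 10 mm x 1 mm cross-section (I = b*h^3/12)
        L, rho_A = 0.10, 0.027         # length (m) and mass per unit length (kg/m)
        g = 9.81
        n = 200
        x = np.linspace(L / n, L, n)   # lumped-mass stations along the beam
        m = np.full(n, rho_A * L / n)  # equal lumped masses

        # Static deflection of a uniform cantilever under its own weight, used as the trial shape.
        q = rho_A * g
        y = q * (x**4 - 4 * L * x**3 + 6 * L**2 * x**2) / (24 * E * I)

        # Rayleigh's quotient: omega^2 = g * sum(m*y) / sum(m*y^2).
        omega = np.sqrt(g * np.sum(m * y) / np.sum(m * y**2))
        print(f"Rayleigh estimate: {omega / (2 * np.pi):.1f} Hz")

        # Check against the exact uniform-cantilever result, (1.875^2) * sqrt(EI / (rho_A * L^4)).
        omega_exact = 1.875**2 * np.sqrt(E * I / (rho_A * L**4))
        print(f"exact:             {omega_exact / (2 * np.pi):.1f} Hz")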
  377. Use of occupancy models to evaluate expert knowledge-based species-habitat relationships

    USGS Publications Warehouse

    Iglecia, Monica N.; Collazo, Jaime A.; McKerrow, Alexa

    2012-01-01

    Expert knowledge-based species-habitat relationships are used extensively to guide conservation planning, particularly when data are scarce. Purported relationships describe the initial state of knowledge, but are rarely tested. We assessed support in the data for suitability rankings of vegetation types based on expert knowledge for three terrestrial avian species in the South Atlantic Coastal Plain of the United States. Experts used published studies, natural history, survey data, and field experience to rank vegetation types as optimal, suitable, and marginal. We used single-season occupancy models, coupled with land cover and Breeding Bird Survey data, to examine the hypothesis that patterns of occupancy conformed to the species-habitat suitability rankings purported by experts. Purported habitat suitability was validated for two of the three species. As predicted for the Eastern Wood-Pewee (Contopus virens) and Brown-headed Nuthatch (Sitta pusilla), occupancy was strongly influenced by vegetation types classified as "optimal habitat" by the species suitability rankings for nuthatches and wood-pewees. Contrary to predictions, Red-headed Woodpecker (Melanerpes erythrocephalus) models that included vegetation types as covariates received similar support in the data as models without vegetation types. For all three species, occupancy was also related to sampling latitude. Our results suggest that covariates representing other habitat requirements might be necessary to model the occurrence of generalist species like the woodpecker. The modeling approach described herein provides a means to test expert knowledge-based species-habitat relationships and, hence, help guide conservation planning.
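    As an illustrative aside, a constant-probability single-season occupancy model can be fitted by maximum likelihood in a few lines; the detection histories below are invented, and the actual analysis includes covariates such as vegetation type and latitude.

        # Sketch: maximum-likelihood fit of a constant-psi, constant-p single-season occupancy
        # model from detection histories (rows = sites, columns = repeat visits; toy data).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        histories = np.array([
            [1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0], [0, 1, 0],
            [0, 0, 0], [1, 1, 1], [0, 0, 0], [0, 0, 1], [0, 0, 0],
        ])
        K = histories.shape[1]
        detections = histories.sum(axis=1)

        def negloglik(params):
            psi, p = expit(params)             # logit-scale parameters keep probabilities in (0, 1)
            lik_occupied = psi * p**detections * (1 - p)**(K - detections)
            lik_never_detected = (1 - psi) * (detections == 0)
            return -np.sum(np.log(lik_occupied + lik_never_detected))

        fit = minimize(negloglik, x0=[0.0, 0.0])
        psi_hat, p_hat = expit(fit.x)
        print(f"occupancy psi ~ {psi_hat:.2f}, detection p ~ {p_hat:.2f}")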
  378. A new spatial multiple discrete-continuous modeling approach to land use change analysis.

    DOT National Transportation Integrated Search

    2013-09-01

    This report formulates a multiple discrete-continuous probit (MDCP) land-use model within a spatially explicit economic structural framework for land-use change decisions. The spatial MDCP model is capable of predicting both the type and intensit...

  379. New Zealand Diabetes Cohort Study cardiovascular risk score for people with Type 2 diabetes: validation in the PREDICT cohort.

    PubMed

    Robinson, Tom; Elley, C Raina; Wells, Sue; Robinson, Elizabeth; Kenealy, Tim; Pylypchuk, Romana; Bramley, Dale; Arroll, Bruce; Crengle, Sue; Riddell, Tania; Ameratunga, Shanthi; Metcalf, Patricia; Drury, Paul L

    2012-09-01

    New Zealand (NZ) guidelines recommend treating people for cardiovascular disease (CVD) risk on the basis of five-year absolute risk, using a NZ adaptation of the Framingham risk equation. A diabetes-specific Diabetes Cohort Study (DCS) CVD predictive risk model has been developed and validated using NZ Get Checked data. The aim was to revalidate the DCS model with an independent cohort of people routinely assessed using PREDICT, a web-based CVD risk assessment and management programme. People with Type 2 diabetes without pre-existing CVD were identified amongst people who had a PREDICT risk assessment between 2002 and 2005. From this group we identified those with sufficient data to allow estimation of CVD risk with the DCS models. We compared the DCS models with the NZ Framingham risk equation in terms of discrimination, calibration, and reclassification implications. Of 3044 people in our study cohort, 1829 had complete data and therefore had CVD risks calculated. Of this group, 12.8% (235) had a cardiovascular event during the five-year follow-up. The DCS models had better discrimination than the currently used equation, with C-statistics of 0.68 for the two DCS models and 0.65 for the NZ Framingham model. The DCS models were superior to the NZ Framingham equation at discriminating which people with diabetes will have a cardiovascular event. The adoption of a DCS model would lead to a small increase in the number of people with diabetes who are treated with medication, but potentially more CVD events would be avoided.
  380. Personalized long-term prediction of cognitive function: Using sequential assessments to improve model performance.

    PubMed

    Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J

    2017-12-01

    Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically accessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models, the first using a sequential estimation of risk factors originally obtained from 8 years prior, then improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can make predictions when newly observed data are acquired in a follow-up visit. Performances of both models, evaluated in 10-fold cross-validation and in various patient subgroups, show supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, the two types of data provide upper and lower bounds for predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.
  381. [Development of an analyzing system for soil parameters based on NIR spectroscopy].

    PubMed

    Zheng, Li-Hua; Li, Min-Zan; Sun, Hong

    2009-10-01

    A rapid estimation system for soil parameters based on spectral analysis was developed using object-oriented (OO) technology. A SOIL class was designed; an instance of the SOIL class is a soil-sample object with a particular type, specific physical properties, and spectral characteristics. By extracting the effective information from the spectral data of the soil samples used for modeling, a mapping model was established between the soil parameters and their spectral data, and the mapping model parameters could be saved in the model database. When forecasting the content of any soil parameter, the corresponding prediction model can be selected for objects with the same soil type and similar soil physical properties. After the target soil-sample object is passed into the prediction model and processed by the system, an accurate forecast of the content of the target soil samples can be obtained. The system includes modules such as file operations, spectra pretreatment, sample analysis, calibrating and validating, and sample content forecasting. The system was designed to run independently of the measurement equipment. The parameters and spectral data files (*.xls) of the known soil samples can be input into the system. Various data pretreatments can be selected according to the conditions at hand; the predicted contents appear in the terminal and the forecasting model can be stored in the model database. The system reads the prediction models and their parameters saved in the model database from the module interface, and then the data of the tested samples are transferred into the selected model. Finally, the content of soil parameters can be predicted by the developed system. The system was programmed in Visual C++ 6.0 and Matlab 7.0, and Access XP was used to create and manage the model database.
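The calibration step at the heart of such a system (mapping pretreated spectra to a soil parameter, then storing the model for reuse on samples of the same soil type) can be sketched as follows. Partial least squares is used here only as a stand-in, because the paper does not specify the regression method, and all data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 120, 256
spectra = rng.normal(size=(n_samples, n_wavelengths))            # pretreated NIR spectra (synthetic)
soil_parameter = spectra[:, :10].sum(axis=1) + rng.normal(0, 0.5, n_samples)

# Calibrate a mapping model between spectra and one soil parameter.
model = PLSRegression(n_components=8)
r2 = cross_val_score(model, spectra, soil_parameter, cv=5, scoring="r2")
print("cross-validated R^2:", round(float(r2.mean()), 2))

model.fit(spectra, soil_parameter)
# A new sample with the same soil type / physical properties would reuse this stored model.
print("predicted content for a new sample:", model.predict(spectra[:1]).ravel())
```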
  382. The Effect of Major Organizational Policy on Employee Attitudes Toward Graduate Degrees

    DTIC Science & Technology

    2006-03-01

    ... on the type of intention being assessed - measure of intention and measure of estimate (Fishbein & Ajzen, 1975). The former is used to predict ... motivated to pursue graduate degrees. Therefore, the Model of Reasoned Action's measurement of estimate for goal achievement (Fishbein & Ajzen, 1975) ... Five Years. The measurement of intention from the Model of Reasoned Action for predicting the performance of a behavior (Fishbein & Ajzen, 1975) was ...

  383. Evaluating the performance of a new model for predicting the growth of Clostridium perfringens in cooked, uncured meat and poultry products under isothermal, heating, and dynamically cooling conditions

    USDA-ARS Scientific Manuscript database

    Clostridium perfringens Type A is a significant public health threat and may germinate, outgrow, and multiply during cooling of cooked meats. This study evaluates a new C. perfringens growth model in IPMP Dynamic Prediction using the same criteria and cooling data as in Mohr and others (2015), but inc...

  384. Progress in Finite Element Modeling of the Lower Extremities

    DTIC Science & Technology

    2015-06-01

    ... bending and subsequent injury, e.g., the distal tibia motion results in bending of the tibia rather than the tibia rotating about the knee joint ... layers, rich anisotropy, and wide variability. Developing a model for predictive injury capability, therefore, needs to be versatile and flexible to ... injury capability presents many challenges, the first of which is identifying the types of conditions where injury prediction is needed. Our focus ...

  385. Tracing the source of numerical climate model uncertainties in precipitation simulations using a feature-oriented statistical model

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Jones, A. D.; Rhoades, A.
    2017-12-01

    Precipitation is a key component in hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite the continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. But a numerical model cannot exactly replicate the true world. Therefore, the other type of model uncertainty is related to errors in the model physics, such as the parameterization of sub-grid scale processes, i.e., given precise input conditions, how much error could be generated by the imprecise model. Here, we build two statistical models based on a neural network algorithm to predict long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as the predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical models derived from true world information and modeled world information, we assess the errors lying in the physics of the numerical models. This work provides a unique insight to assess the performance of numerical climate models, and can be used to guide improvement of precipitation prediction.

  386. A neural network based model for urban noise prediction.

    PubMed

    Genaro, N; Torija, A; Ramos-Ridao, A; Requena, I; Ruiz, D P; Zamorano, M

    2010-10-01

    Noise is a global problem. In 1972 the World Health Organization (WHO) classified noise as a pollutant. Since then, most industrialized countries have enacted laws and local regulations to prevent and reduce acoustic environmental pollution. A further aim is to alert people to the dangers of this type of pollution. In this context, urban planners need tools that allow them to evaluate the degree of acoustic pollution. Scientists in many countries have modeled urban noise, using a wide range of approaches, but their results have not been as good as expected.
    This paper describes a model developed for the prediction of environmental urban noise using Soft Computing techniques, namely Artificial Neural Networks (ANN). The model is based on the analysis of variables regarded as influential by experts in the field and was applied to data collected on different types of streets. The results were compared to those obtained with other models. The study found that the ANN system was able to predict urban noise with greater accuracy, and thus was an improvement over those models. Principal component analysis (PCA) was also used to try to simplify the model. Although there was a slight decline in the accuracy of the results, the values obtained were also quite acceptable.

  387. Chemically Aware Model Builder (camb): an R package for property and bioactivity modelling of small molecules.

    PubMed

    Murrell, Daniel S; Cortes-Ciriano, Isidro; van Westen, Gerard J P; Stott, Ian P; Bender, Andreas; Malliavin, Thérèse E; Glen, Robert C

    2015-01-01

    In silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process. camb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), as well as techniques to create model ensembles (using the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2). Overall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, in order to provide reproducibility and tests of robustness.
    QSPR and proteochemometric case studies are included which demonstrate camb's application. Graphical abstract: From compounds and data to models: a complete model building workflow in one package.

  388. Predicting electromyographic signals under realistic conditions using a multiscale chemo-electro-mechanical finite element model.

    PubMed

    Mordhorst, Mylena; Heidlauf, Thomas; Röhrle, Oliver

    2015-04-06

    This paper presents a novel multiscale finite element-based framework for modelling electromyographic (EMG) signals. The framework combines (i) a biophysical description of the excitation-contraction coupling at the half-sarcomere level, (ii) a model of the action potential (AP) propagation along muscle fibres, (iii) a continuum-mechanical formulation of force generation and deformation of the muscle, and (iv) a model for predicting the intramuscular and surface EMG. Owing to the biophysical description of the half-sarcomere, the model inherently accounts for physiological properties of skeletal muscle. To demonstrate this, the influence of membrane fatigue on the EMG signal during sustained contractions is investigated. During a stimulation period of 500 ms at 100 Hz, the predicted EMG amplitude decreases by 40% and the AP propagation velocity decreases by 15%. Further, the model can take into account contraction-induced deformations of the muscle. This is demonstrated by simulating fixed-length contractions of an idealized geometry and a model of the human tibialis anterior muscle (TA). The model of the TA furthermore demonstrates that the proposed finite element model is capable of simulating realistic geometries, complex fibre architectures, and can include different types of heterogeneities. In addition, the TA model accounts for a distributed innervation zone, different fibre types and appeals to motor unit discharge times that are based on a biophysical description of the α motor neurons.

  389. Predicting electromyographic signals under realistic conditions using a multiscale chemo–electro–mechanical finite element model

    PubMed Central

    Mordhorst, Mylena; Heidlauf, Thomas; Röhrle, Oliver

    2015-01-01

    This paper presents a novel multiscale finite element-based framework for modelling electromyographic (EMG) signals. The framework combines (i) a biophysical description of the excitation–contraction coupling at the half-sarcomere level, (ii) a model of the action potential (AP) propagation along muscle fibres, (iii) a continuum-mechanical formulation of force generation and deformation of the muscle, and (iv) a model for predicting the intramuscular and surface EMG. Owing to the biophysical description of the half-sarcomere, the model inherently accounts for physiological properties of skeletal muscle.
    To demonstrate this, the influence of membrane fatigue on the EMG signal during sustained contractions is investigated. During a stimulation period of 500 ms at 100 Hz, the predicted EMG amplitude decreases by 40% and the AP propagation velocity decreases by 15%. Further, the model can take into account contraction-induced deformations of the muscle. This is demonstrated by simulating fixed-length contractions of an idealized geometry and a model of the human tibialis anterior muscle (TA). The model of the TA furthermore demonstrates that the proposed finite element model is capable of simulating realistic geometries, complex fibre architectures, and can include different types of heterogeneities. In addition, the TA model accounts for a distributed innervation zone, different fibre types and appeals to motor unit discharge times that are based on a biophysical description of the α motor neurons. PMID:25844148

  390. LRSSLMDA: Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction

    PubMed Central

    Huang, Li

    2017-01-01

    Predicting novel microRNA (miRNA)-disease associations is clinically significant due to miRNAs' potential roles as diagnostic biomarkers and therapeutic targets for various human diseases. Previous studies have demonstrated the viability of utilizing different types of biological data to computationally infer new disease-related miRNAs. Yet researchers face the challenge of how to effectively integrate diverse datasets and make reliable predictions. In this study, we presented a computational model named Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction (LRSSLMDA), which projected miRNAs'/diseases' statistical feature profile and graph theoretical feature profile to a common subspace. It used Laplacian regularization to preserve the local structures of the training data and an L1-norm constraint to select important miRNA/disease features for prediction. The strength of dimensionality reduction enabled the model to be easily extended to much higher dimensional datasets than those exploited in this study. Experimental results showed that LRSSLMDA outperformed ten previous models: the AUC of 0.9178 in global leave-one-out cross validation (LOOCV) and the AUC of 0.8418 in local LOOCV indicated the model's superior prediction accuracy, and the average AUC of 0.9181+/-0.0004 in 5-fold cross validation justified its accuracy and stability. In addition, three types of case studies further demonstrated its predictive power. Potential miRNAs related to Colon Neoplasms, Lymphoma, Kidney Neoplasms, Esophageal Neoplasms and Breast Neoplasms were predicted by LRSSLMDA. Respectively, 98%, 88%, 96%, 98% and 98% of the top 50 predictions were validated by experimental evidence. Therefore, we conclude that LRSSLMDA would be a valuable computational tool for miRNA-disease association prediction. PMID:29253885
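The Laplacian and L1 terms described above can be written down schematically. The snippet below only builds a graph Laplacian from a similarity matrix and evaluates a generic "reconstruction + Laplacian smoothness + sparsity" objective for a random projection; it illustrates the idea, not the published LRSSLMDA algorithm or its optimizer.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 50, 30, 5                 # samples, features, subspace dimension (all assumed)
X = rng.normal(size=(n, d))         # feature profiles
Y = rng.normal(size=(n, k))         # target coordinates in the common subspace
P = rng.normal(size=(d, k))         # projection matrix to be learned

# Graph Laplacian L = D - W of a similarity matrix W preserves local structure.
W = np.exp(-np.square(X[:, None, :] - X[None, :, :]).sum(-1) / d)
L = np.diag(W.sum(axis=1)) - W

def objective(P, lam=0.1, beta=0.01):
    fit = np.linalg.norm(X @ P - Y) ** 2           # reconstruction in the subspace
    laplacian = np.trace(P.T @ X.T @ L @ X @ P)    # smoothness over the similarity graph
    sparsity = np.abs(P).sum()                     # L1 term for feature selection
    return fit + lam * laplacian + beta * sparsity

print(f"objective value for a random projection: {objective(P):.2f}")
```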
  391. The Behaviour of Naturally Debonded Composites Due to Bending Using a Meso-Level Model

    NASA Astrophysics Data System (ADS)

    Lord, C. E.; Rongong, J. A.; Hodzic, A.

    2012-06-01

    Numerical simulations and analytical models are increasingly being sought for the design and behaviour prediction of composite materials. The use of high-performance composite materials is growing in both civilian and defence related applications. With this growth comes the necessity to understand and predict how these new materials will behave under their exposed environments. In this study, the displacement behaviour of naturally debonded composites under out-of-plane bending conditions has been investigated. An analytical approach has been developed to predict the displacement response behaviour. The analytical model supports multi-layered composites with full and partial delaminations. The model can be used to extract bulk effective material properties, which can later be represented as an ESL (Equivalent Single Layer). The friction between each of the layers is included in the analytical model and is shown to have distinct behaviour for these types of composites. Acceptable agreement was observed between the model predictions, the ANSYS finite element model, and the experiments.

  392. Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.

    PubMed

    Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L

    2016-01-01

    The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.
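A rough sense of how environmental uncertainty propagates into exposure estimates can be had from the elementary spreading-plus-absorption transmission-loss formula, TL(r) = 20 log10(r) + alpha r. The sketch below perturbs the absorption coefficient over an assumed band; it is a generic illustration, not one of the propagation models reviewed in the paper, and the source level is a placeholder.

```python
import numpy as np

ranges_m = np.array([100.0, 1_000.0, 10_000.0])
alpha_db_per_km = np.array([0.04, 0.06, 0.08])      # assumed uncertainty band for absorption

# Spherical spreading plus absorption: TL(r) = 20*log10(r) + alpha*r.
tl = 20 * np.log10(ranges_m)[None, :] + alpha_db_per_km[:, None] * ranges_m[None, :] / 1000.0

source_level_db = 230.0                             # illustrative impulsive source level (assumed)
received_level = source_level_db - tl
for a, rl in zip(alpha_db_per_km, received_level):
    print(f"alpha = {a:.2f} dB/km -> received level at 10 km: {rl[-1]:.1f} dB re 1 uPa")
```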
  393. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    NASA Astrophysics Data System (ADS)

    White, Jeremy T.

    The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock.
    The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.

  394. Evolution of a detailed physiological model to simulate the gastrointestinal transit and absorption process in humans, part II: extension to describe performance of solid dosage forms.

    PubMed

    Thelen, Kirstin; Coboeken, Katrin; Willmann, Stefan; Dressman, Jennifer B; Lippert, Jörg

    2012-03-01

    The physiological absorption model presented in part I of this work is now extended to account for dosage-form-dependent gastrointestinal (GI) transit as well as disintegration and dissolution processes of various immediate-release and modified-release dosage forms. Empirical functions of the Weibull type were fitted to experimental in vitro dissolution profiles of solid dosage forms for eight test compounds (aciclovir, caffeine, cimetidine, diclofenac, furosemide, paracetamol, phenobarbital, and theophylline). The Weibull functions were then implemented into the model to predict mean plasma concentration-time profiles of the various dosage forms. On the basis of these dissolution functions, the pharmacokinetics (PK) of six model drugs was predicted well. In the case of diclofenac, deviations between predicted and observed plasma concentrations were attributable to the large variability in gastric emptying time of the enteric-coated tablets. Likewise, oral PK of furosemide was found to be predominantly governed by the gastric emptying patterns. It is concluded that the revised model for GI transit and absorption was successfully integrated with dissolution functions of the Weibull type, enabling prediction of in vivo PK profiles from in vitro dissolution data. It facilitates a comparative analysis of the parameters contributing to oral drug absorption and is thus a powerful tool for formulation design. Copyright © 2011 Wiley Periodicals, Inc.

  395. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons are made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
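Eigenvalue and eigenvector differences of the kind that feed such an error model are conventionally summarized as frequency errors and the modal assurance criterion (MAC). The sketch below computes both for a pair of hypothetical mode sets; the frequencies and mode shapes are placeholders, not data from the four structures analysed.

```python
import numpy as np

freq_pred = np.array([12.1, 19.8, 33.5, 41.0])     # Hz, analytical model (assumed values)
freq_meas = np.array([11.6, 20.4, 31.9, 42.8])     # Hz, modal test (assumed values)

rng = np.random.default_rng(4)
phi_pred = rng.normal(size=(10, 4))                # mode shapes: 10 DOFs x 4 modes
phi_meas = phi_pred + 0.1 * rng.normal(size=(10, 4))

freq_error_pct = 100 * (freq_pred - freq_meas) / freq_meas

def mac(a, b):
    """Modal assurance criterion between two mode-shape vectors."""
    return (a @ b) ** 2 / ((a @ a) * (b @ b))

mac_diag = [mac(phi_pred[:, i], phi_meas[:, i]) for i in range(4)]
print("frequency errors (%):", freq_error_pct.round(1))
print("MAC of paired modes: ", np.round(mac_diag, 3))
```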
  396. A climatology and preliminary investigation of predictability of pristine nocturnal convective initiation in the central United States

    NASA Astrophysics Data System (ADS)

    Stelten, Sean; Gallus, William

    2017-04-01

    The prediction of convective initiation remains a challenge to forecasters in the central United States, especially for elevated events at night. This study examines a subset of 287 nocturnal elevated convective initiation events that occurred without direct influence from surface boundaries or pre-existing convection over a four-month period during the summer of 2015 (May, June, July, and August). Events were first classified into one of four types based on apparent formation mechanisms and location relative to any low-level jet. A climatology of each of the four types was performed, focusing on general spatial tendencies over the central United States and initiation timing trends. Additionally, an analysis of initiation elevation was performed. Simulations from five convection-allowing models available during the Plains Elevated Convection At Night (PECAN) field campaign, along with four versions of a 4-km horizontal grid spacing Weather Research and Forecasting (WRF) model using different planetary boundary layer (PBL) parameterizations, were used to examine the predictability of these types of convective initiation. The climatology revealed a dual-peak pattern for initiation timing, with one peak near 0400 UTC and another near 0700 UTC, and it was found that the dual-peak structure was present for all four types of events, suggesting that the evolution of the low-level jet was not directly responsible for the twin peaks. Subtle differences in location and elevation of the initiation for the different types were identified. The convection-allowing models run during the PECAN project were found to be more deficient with location than timing. Threat scores typically averaged around 0.3, with false alarm ratios and hit rates both averaging around 0.5 to 0.6 for the various models. Initiation occurring within the low-level jet but far from a surface front was the one type that was occasionally missed by all five models examined. One case for each of the four types was then simulated with four different configurations of a 4-km horizontal grid spacing WRF model.
    These WRF runs showed similar location errors and problems with initiating convection at a lower altitude than observed, as was found in the simulations performed during PECAN. Three of the four PBL schemes behaved similarly, but one, the ACM2, was often an outlier, failing to indicate the convective initiation.

  397. Using NASTRAN to model missile inertia loads

    NASA Technical Reports Server (NTRS)

    Marvin, R.; Porter, C.

    1985-01-01

    An important use of NASTRAN is in the area of structural loads analysis on weapon systems carried aboard aircraft. The program is used to predict bending moments and shears in missile bodies when subjected to aircraft-induced accelerations. The missile, launcher and aircraft wing are idealized using rod and beam type elements for solution economy. Using the inertia relief capability of NASTRAN, the model is subjected to various acceleration combinations. It is found to be difficult to model the launcher sway braces and hooks, which transmit compression-only or tension-only type forces, respectively. A simple, iterative process was developed to overcome this modeling difficulty. A proposed code modification would help model compression- or tension-only contact type problems.

  398. Predicting No-Shows in Radiology Using Regression Modeling of Data Available in the Electronic Medical Record.

    PubMed

    Harvey, H Benjamin; Liu, Catherine; Ai, Jing; Jaworsky, Cristina; Guerrier, Claude Emmanuel; Flores, Efren; Pianykh, Oleg

    2017-10-01

    The aim was to test whether data elements available in the electronic medical record (EMR) can be effectively leveraged to predict failure to attend a scheduled radiology examination. Using data from a large academic medical center, we identified all patients with a diagnostic imaging examination scheduled from January 1, 2016, to April 1, 2016, and determined whether the patient successfully attended the examination. Demographic, clinical, and health services utilization variables available in the EMR potentially relevant to examination attendance were recorded for each patient. We used descriptive statistics and logistic regression models to test whether these data elements could predict failure to attend a scheduled radiology examination. The predictive accuracy of the regression models was determined by calculating the area under the receiver operator curve. Among the 54,652 patient appointments with radiology examinations scheduled during the study period, 6.5% were no-shows. No-show rates were highest for the modalities of mammography and CT and lowest for PET and MRI. Logistic regression indicated that 16 of the 27 demographic, clinical, and health services utilization factors were significantly associated with failure to attend a scheduled radiology examination (P ≤ .05). Stepwise logistic regression analysis demonstrated that previous no-shows, days between scheduling and appointment, modality type, and insurance type were most strongly predictive of no-show. A model considering all 16 data elements had good ability to predict radiology no-shows (area under the receiver operator curve = 0.753). The predictive ability was similar or improved when these models were analyzed by modality. Patient and examination information readily available in the EMR can be successfully used to predict radiology no-shows. Moving forward, this information can be proactively leveraged to identify patients who might benefit from additional patient engagement through appointment reminders or other targeted interventions to avoid no-shows. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
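The modelling recipe described above (logistic regression on EMR-derived features, evaluated by the area under the ROC curve) is straightforward to prototype. The sketch below uses synthetic appointments with assumed feature names; only the general workflow mirrors the study, not its data or coefficients.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 5000
X = pd.DataFrame({
    "prior_no_shows": rng.poisson(0.3, n),     # hypothetical EMR-derived features
    "lead_time_days": rng.integers(0, 90, n),
    "is_mri": rng.integers(0, 2, n),
    "public_insurance": rng.integers(0, 2, n),
})
logit = -3.0 + 0.8 * X["prior_no_shows"] + 0.02 * X["lead_time_days"] - 0.4 * X["is_mri"]
no_show = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic outcomes

X_tr, X_te, y_tr, y_te = train_test_split(X, no_show, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("AUC on held-out appointments:", round(auc, 3))
```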
  399. Optogenetic Stimulation Shifts the Excitability of Cerebral Cortex from Type I to Type II: Oscillation Onset and Wave Propagation.

    PubMed

    Heitmann, Stewart; Rule, Michael; Truccolo, Wilson; Ermentrout, Bard

    2017-01-01

    Constant optogenetic stimulation targeting both pyramidal cells and inhibitory interneurons has recently been shown to elicit propagating waves of gamma-band (40-80 Hz) oscillations in the local field potential of non-human primate motor cortex. The oscillations emerge with non-zero frequency and small amplitude (the hallmark of a type II excitable medium), yet they also propagate far beyond the stimulation site in the manner of a type I excitable medium. How can neural tissue exhibit both type I and type II excitability? We investigated the apparent contradiction by modeling the cortex as a Wilson-Cowan neural field in which optogenetic stimulation was represented by an external current source. In the absence of any external current, the model operated as a type I excitable medium that supported propagating waves of gamma oscillations similar to those observed in vivo. Applying an external current to the population of inhibitory neurons transformed the model into a type II excitable medium. The findings suggest that cortical tissue normally operates as a type I excitable medium but is locally transformed into a type II medium by optogenetic stimulation, which predominantly targets inhibitory neurons. The proposed mechanism accounts for the graded emergence of gamma oscillations at the stimulation site while retaining propagating waves of gamma oscillations in the non-stimulated tissue. It also predicts that gamma waves can be emitted on every second cycle of a 100 Hz oscillation. That prediction was subsequently confirmed by re-analysis of the neurophysiological data. The model thus offers a theoretical account of how optogenetic stimulation alters the excitability of cortical neural fields.
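The proposed mechanism, a constant external current injected into the inhibitory population of a Wilson-Cowan system, can be explored with a space-clamped toy model. The sketch below is a plain Euler integration with generic parameters that are assumptions rather than values fitted to the primate recordings; whether a limit cycle appears, and at what drive level, depends on those choices.

```python
import numpy as np

def simulate(q_i, t_end=1.0, dt=1e-4):
    """Integrate a space-clamped Wilson-Cowan E-I pair with external drive q_i to the I cells."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))          # sigmoidal firing-rate function
    tau_e, tau_i = 0.010, 0.020                     # time constants, s (assumed)
    w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0  # coupling weights (assumed)
    e, i = 0.1, 0.1
    trace = []
    for _ in range(int(t_end / dt)):
        de = (-e + f(w_ee * e - w_ei * i + 1.5)) / tau_e
        di = (-i + f(w_ie * e - w_ii * i + q_i)) / tau_i
        e, i = e + dt * de, i + dt * di
        trace.append(e)
    return np.array(trace)

for q_i in (0.0, 4.0):
    e_trace = simulate(q_i)
    swing = e_trace[-2000:].max() - e_trace[-2000:].min()   # late-time excursion of the E rate
    print(f"drive to I population = {q_i}: late E-rate swing = {swing:.3f}")
```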
  400. Development of In Vitro Co-Culture Model in Anti-Cancer Drug Development Cascade.

    PubMed

    Xu, Ruiling; Richards, Frances M

    2017-01-01

    The tumour microenvironment is recognized as a major determinant of intrinsic resistance to anticancer therapies. In solid tumour types, such as breast cancer, lung cancer and pancreatic cancer, stromal components provide a fibrotic niche, which promotes stemness, EMT, and chemo- and radioresistance of the tumour. However, this microenvironment is not recapitulated in conventional cell monoculture or xenografts, hence these in vitro and in vivo preclinical models are unlikely to be predictive of clinical response, which might contribute to the poor predictivity of these preclinical drug-screening models. In this review, we summarize recently developed co-culture platforms in various tumour types that incorporate different stromal cell types and/or extracellular matrix (ECM), in the context of investigating potential mechanisms of stroma-mediated chemoresistance and evaluating novel agents and combinations. Some of these platforms will have great utility in the assessment of novel drug combinations and mechanistic understanding of the tumour-stroma interactions. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.

  401. UNRES server for physics-based coarse-grained simulations and prediction of protein structure, dynamics and thermodynamics.

    PubMed

    Czaplewski, Cezary; Karczynska, Agnieszka; Sieradzan, Adam K; Liwo, Adam
    2018-04-30

    A server implementation of the UNRES package (http://www.unres.pl) for coarse-grained simulations of protein structures with the physics-based UNRES model, coined the UNRES server, is presented. In contrast to most protein coarse-grained models, owing to its physics-based origin, the UNRES force field can be used in simulations, including those aimed at protein-structure prediction, without ancillary information from structural databases; however, the implementation includes the possibility of using restraints. Local energy minimization, canonical molecular dynamics simulations, replica exchange and multiplexed replica exchange molecular dynamics simulations can be run with the current UNRES server; the latter are suitable for protein-structure prediction. The user-supplied input includes the protein sequence and, optionally, restraints from secondary-structure prediction or small-angle X-ray scattering data, as well as the simulation type and parameters, which are selected or typed in. Oligomeric proteins, as well as those containing D-amino-acid residues and disulfide links, can be treated. The output is displayed graphically (minimized structures, trajectories, final models, analysis of trajectory/ensembles); however, all output files can be downloaded by the user. The UNRES server can be freely accessed at http://unres-server.chem.ug.edu.pl.

  402. Feed-forward neural network model for hunger and satiety related VAS score prediction.

    PubMed

    Krishnan, Shaji; Hendriks, Henk F J; Hartvigsen, Merete L; de Graaf, Albert A

    2016-07-07

    An artificial neural network approach was chosen to model the outcome of the complex signaling pathways in the gastro-intestinal tract and other peripheral organs that eventually produce the satiety feeling in the brain upon feeding. A multilayer feed-forward neural network was trained with sets of experimental data relating concentration-time courses of plasma satiety hormones to Visual Analog Scale (VAS) scores. The network successfully predicted VAS responses from sets of satiety hormone data obtained in experiments using different food compositions. The correlation coefficients for the predicted VAS responses for test sets having i) a full set of three satiety hormones, ii) a set of only two satiety hormones, and iii) a set of only one satiety hormone were 0.96, 0.96, and 0.89, respectively. The predicted VAS responses discriminated the satiety effects of high satiating food types from less satiating food types, both in orally fed and ileal-infused forms. From this application of artificial neural networks, one may conclude that neural network models are very suitable for describing situations where behavior is complex and incompletely understood. However, training data sets that fit the experimental conditions need to be available.
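A feed-forward network of the kind described, taking summaries of plasma satiety-hormone time courses as inputs and producing a VAS score, can be prototyped with scikit-learn's MLPRegressor. The features, network size and data below are assumptions for illustration, not the trained model from the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 400
# Synthetic features standing in for concentration-time summaries of three satiety hormones.
X = rng.normal(size=(n, 6))
vas = 40 + 10 * X[:, 0] - 8 * X[:, 2] + 5 * X[:, 4] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, vas, test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)

corr = np.corrcoef(net.predict(X_te), y_te)[0, 1]
print("correlation between predicted and synthetic 'observed' VAS:", round(corr, 2))
```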
  403. Novel Method of Production Decline Analysis

    NASA Astrophysics Data System (ADS)

    Xie, Shan; Lan, Yifei; He, Lei; Jiao, Yang; Wu, Yong

    2018-02-01

    ARPS decline curves are the most commonly used tool in oil and gas fields due to their minimal data requirements and ease of application. Prediction of production decline based on ARPS analysis relies on a known decline type. However, when the coefficient indices are very close under different decline types, it is difficult to directly recognize the decline trend of the matched curves. To address these difficulties, a new dynamic decline prediction model is introduced, based on the simulation results of multi-factor response experiments and using multiple linear regression of the influencing factors. First, interaction experimental schemes are designed according to a study of the factors affecting production decline. Based on the simulated results, the annual decline rate is predicted by the decline model. The new method is then applied to the A gas field of the Ordos Basin as an example to illustrate its reliability. The results show that the new model can directly predict the decline tendency without needing to recognize the decline type beforehand, and it also achieves high accuracy. Finally, the new method improves the evaluation of gas well production decline in low-permeability gas reservoirs and provides technical support for further understanding of tight gas field development behaviour.

  404. Complete Genome Sequence of the Methanococcus maripaludis Type Strain JJ (DSM 2067), a Model for Selenoprotein Synthesis in Archaea.

    PubMed

    Poehlein, Anja; Heym, Daniel; Quitzke, Vivien; Fersch, Julia; Daniel, Rolf; Rother, Michael

    2018-04-05

    Methanococcus maripaludis type strain JJ (DSM 2067) is an important organism because it serves as a model for primary energy metabolism and hydrogenotrophic methanogenesis and is amenable to genetic manipulation. The complete genome (1.7 Mb) harbors 1,815 predicted protein-encoding genes, including 9 encoding selenoproteins. Copyright © 2018 Poehlein et al.

  405. Modeling regulation of cardiac KATP and L-type Ca2+ currents by ATP, ADP, and Mg2+.

    PubMed

    Michailova, Anushka; Saucerman, Jeffrey; Belik, Mary Ellen; McCulloch, Andrew D

    2005-03-01

    Changes in cytosolic free Mg(2+) and adenosine nucleotide phosphates affect cardiac excitability and contractility.
    To investigate how modulation by Mg(2+), ATP, and ADP of K(ATP) and L-type Ca(2+) channels influences excitation-contraction coupling, we incorporated equations for intracellular ATP and MgADP regulation of the K(ATP) current and MgATP regulation of the L-type Ca(2+) current in an ionic-metabolic model of the canine ventricular myocyte. The new model: 1) quantitatively reproduces a dose-response relationship for the effects of changes in ATP on K(ATP) current, 2) simulates effects of ADP in modulating ATP sensitivity of the K(ATP) channel, 3) predicts activation of Ca(2+) current during rapid increase in MgATP, and 4) demonstrates that decreased ATP/ADP ratio with normal total Mg(2+), or increased free Mg(2+) with normal ATP and ADP, activates K(ATP) current, shortens the action potential, and alters ionic currents and intracellular Ca(2+) signals. The model predictions are in agreement with experimental data measured under normal and a variety of pathological conditions.

  406. Modeling regulation of cardiac KATP and L-type Ca2+ currents by ATP, ADP, and Mg2+

    NASA Technical Reports Server (NTRS)

    Michailova, Anushka; Saucerman, Jeffrey; Belik, Mary Ellen; McCulloch, Andrew D.

    2005-01-01

    Changes in cytosolic free Mg(2+) and adenosine nucleotide phosphates affect cardiac excitability and contractility. To investigate how modulation by Mg(2+), ATP, and ADP of K(ATP) and L-type Ca(2+) channels influences excitation-contraction coupling, we incorporated equations for intracellular ATP and MgADP regulation of the K(ATP) current and MgATP regulation of the L-type Ca(2+) current in an ionic-metabolic model of the canine ventricular myocyte. The new model: 1) quantitatively reproduces a dose-response relationship for the effects of changes in ATP on K(ATP) current, 2) simulates effects of ADP in modulating ATP sensitivity of the K(ATP) channel, 3) predicts activation of Ca(2+) current during rapid increase in MgATP, and 4) demonstrates that decreased ATP/ADP ratio with normal total Mg(2+), or increased free Mg(2+) with normal ATP and ADP, activates K(ATP) current, shortens the action potential, and alters ionic currents and intracellular Ca(2+) signals. The model predictions are in agreement with experimental data measured under normal and a variety of pathological conditions.
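The ATP dose-response behaviour that such models reproduce for the K(ATP) current is often expressed as Hill-type inhibition. The sketch below evaluates one such curve; the half-maximal concentration and Hill coefficient are illustrative assumptions, not parameters of the cited canine ventricular myocyte model.

```python
import numpy as np

def katp_open_fraction(atp_mM, k_half_mM=0.1, hill=2.0):
    """Fraction of K(ATP) channels not blocked by ATP (Hill-type inhibition, assumed parameters)."""
    return 1.0 / (1.0 + (atp_mM / k_half_mM) ** hill)

for atp in (0.01, 0.1, 1.0, 5.0):   # from metabolic stress toward normal cytosolic ATP (mM)
    print(f"[ATP] = {atp:4.2f} mM -> relative K(ATP) conductance {katp_open_fraction(atp):.3f}")
```

In the full ionic-metabolic setting, a shift of this curve by ADP and Mg(2+) is what links metabolic state to action potential shortening.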
  407. Assessment of Protein Side-Chain Conformation Prediction Methods in Different Residue Environments

    PubMed Central

    Peterson, Lenna X.; Kang, Xuejiao; Kihara, Daisuke

    2016-01-01

    Computational prediction of side-chain conformation is an important component of protein structure prediction. Accurate side-chain prediction is crucial for practical applications of protein structure models that need atomic detailed resolution, such as protein and ligand design. We evaluated the accuracy of eight side-chain prediction methods in reproducing the side-chain conformations of experimentally solved structures deposited in the Protein Data Bank. Prediction accuracy was evaluated for a total of four different structural environments (buried, surface, interface, and membrane-spanning) in three different protein types (monomeric, multimeric, and membrane). Overall, the highest accuracy was observed for buried residues in monomeric and multimeric proteins. Notably, side-chains at protein interfaces and membrane-spanning regions were better predicted than surface residues even though the methods did not all use multimeric and membrane proteins for training. Thus, we conclude that the current methods are as practically useful for modeling protein docking interfaces and membrane-spanning regions as for modeling monomers. PMID:24619909

  408. Development of a modal emissions model using data from the Cooperative Industry/Government Exhaust Emission test program

    DOT National Transportation Integrated Search

    2003-06-22

    The Environmental Protection Agency's (EPA's) recommended model, MOBILE5a, has been used extensively to predict emission factors based on average speeds for each fleet type. Because average speeds are not appropriate in modeling intersections...

  409. Predicting wetland plant community responses to proposed water-level-regulation plans for Lake Ontario: GIS-based modeling

    USGS Publications Warehouse

    Wilcox, D.A.; Xie, Y.

    2007-01-01

    Integrated, GIS-based, wetland predictive models were constructed to assist in predicting the responses of wetland plant communities to proposed new water-level regulation plans for Lake Ontario.
The modeling exercise consisted of four major components: 1) building individual site wetland geometric models; 2) constructing generalized wetland geometric models representing specific types of wetlands (a rectangle model for drowned river mouth wetlands, a half-ring model for open embayment wetlands, a half-ellipse model for protected embayment wetlands, and an ellipse model for barrier beach wetlands); 3) assigning wetland plant profiles to the generalized wetland geometric models that identify associations between past flooding/dewatering events and the regulated water-level changes of a proposed water-level-regulation plan; and 4) predicting the relevant proportions of wetland plant communities and the time durations during which they would be affected under proposed regulation plans. Based on this conceptual foundation, the predictive models were constructed using bathymetric and topographic wetland models and technical procedures operating on the ArcGIS platform. An example of the model processes and outputs for the drowned river mouth wetland model using a test regulation plan illustrates the four components and, when compared against other test regulation plans, provided results that met ecological expectations. The model results were also compared to independent data collected by photointerpretation. Although the data collections were not directly comparable, the predicted extent of meadow marsh in years in which photographs were taken was significantly correlated with the extent of mapped meadow marsh in all but barrier beach wetlands. The predictive model for wetland plant communities provided valuable input into International Joint Commission deliberations on new regulation plans and was also incorporated into faunal predictive models used for that purpose.

Mathematical analysis of antiretroviral therapy aimed at HIV-1 eradication or maintenance of low viral loads.
PubMed
Wein, L M; D'Amato, R M; Perelson, A S
1998-05-07

Motivated by the ability of combinations of antiretroviral agents to sustain viral suppression in HIV-1-infected individuals, we analyse the transient and steady-state behavior of a mathematical model of HIV-1 dynamics in vivo in order to predict whether these drug regimens can eradicate HIV-1 or maintain viral loads at low levels. The model incorporates two cell types (CD4+ T cells and a long-lived pool of cells), two strains of virus (drug-sensitive wild type and drug-resistant mutant) and two types of antiretroviral agents (reverse transcriptase and protease inhibitors). The transient behavior of the cells and virus and the eventual eradication of the virus are determined primarily by the strength of the combination therapy against the mutant strain and the maximum achievable increase in the uninfected CD4+ T cell concentration. We also predict, if the parameters of the model remain constant during therapy, that less intensive maintenance regimens will be unable to maintain low viral loads for extensive periods of time. However, if the reduction in viral load produced by therapy reduces the state of activation of the immune system, the number of cells susceptible to HIV-1 infection may decrease even though total CD4+ T cells increase. Our model predicts that if this occurs, strong induction therapy that reduces viral load, followed by weaker maintenance regimens, may succeed.
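The structure of such a two-strain model is easy to reproduce in a few lines. The Python sketch below is a generic target-cell model with one lumped drug efficacy per viral strain; it omits the long-lived cell pool and the separate reverse transcriptase/protease inhibitor terms of the published analysis, and all rate constants are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Generic two-strain target-cell model with a lumped drug efficacy per strain.
    lam, d = 10.0, 0.01            # target-cell production (cells/uL/day) and death rate (1/day)
    beta_w, beta_m = 2e-4, 1e-4    # infection rates for wild-type and mutant virus
    delta, p, c = 0.7, 100.0, 3.0  # infected-cell death, virion production, virion clearance
    eps_w, eps_m = 0.95, 0.60      # combined drug efficacy against each strain (illustrative)

    def rhs(t, y):
        T, Iw, Im, Vw, Vm = y
        inf_w = (1.0 - eps_w) * beta_w * Vw * T
        inf_m = (1.0 - eps_m) * beta_m * Vm * T
        return [lam - d * T - inf_w - inf_m,
                inf_w - delta * Iw,
                inf_m - delta * Im,
                p * Iw - c * Vw,
                p * Im - c * Vm]

    y0 = [1000.0, 1.0, 0.1, 50.0, 5.0]   # pre-therapy state (illustrative)
    sol = solve_ivp(rhs, (0.0, 365.0), y0, t_eval=[30, 90, 180, 365], rtol=1e-6)
    for t, Vw, Vm in zip(sol.t, sol.y[3], sol.y[4]):
        print(f"day {t:5.0f}: wild-type virus = {Vw:10.3e}, mutant virus = {Vm:10.3e}")

With these placeholder numbers the wild-type reproduction number under therapy falls below one while the mutant's stays above one, so the mutant rebounds, which is the same qualitative point the abstract makes about the strength of therapy against the resistant strain.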
Modeling Kelvin Wave Cascades in Superfluid Helium
NASA Astrophysics Data System (ADS)
Boffetta, G.; Celani, A.; Dezzani, D.; Laurie, J.; Nazarenko, S.
2009-09-01

We study two different types of simplified models for Kelvin wave turbulence on quantized vortex lines in superfluids near zero temperature. Our first model is obtained from a truncated expansion of the Local Induction Approximation (Truncated-LIA) and it is shown to possess the same scalings and the essential behaviour as the full Biot-Savart model, being much simpler than the latter and, therefore, more amenable to theoretical and numerical investigations. The Truncated-LIA model supports six-wave interactions and dual cascades, which are clearly demonstrated via the direct numerical simulation of this model in the present paper. In particular, our simulations confirm the presence of the weak turbulence regime and the theoretically predicted spectra for the direct energy cascade and the inverse wave action cascade. The second type of model we study, the Differential Approximation Model (DAM), takes a further drastic simplification by assuming locality of interactions in k-space via a differential closure that preserves the main scalings of the Kelvin wave dynamics. DAMs are even more amenable to study and they form a useful tool by providing simple analytical solutions in the cases when extra physical effects are present, e.g. forcing by reconnections, friction dissipation and phonon radiation. We study these models numerically and test their theoretical predictions, in particular the formation of the stationary spectra, and the closeness of numerics for the higher-order DAM to the analytical predictions for the lower-order DAM.

Predicting the Accuracy of Unguided Artillery Projectiles
DTIC Science & Technology
2016-09-01

...metrics using error models. Chapter I provides an introduction to artillery and briefly describes the types of artillery fire. (Only table-of-contents fragments of this record survive: Methodology; Outline of Thesis; Modified Point Mass Trajectory Model (MPMTM).)
Social relevance: toward understanding the impact of the individual in an information cascade
NASA Astrophysics Data System (ADS)
Hall, Robert T.; White, Joshua S.; Fields, Jeremy
2016-05-01

Information Cascades (IC) through a social network occur due to the decision of users to disseminate content. We define this decision process as User Diffusion (UD). IC models typically describe an information cascade by treating a user as a node within a social graph, where a node's reception of an idea is represented by some activation state. The probability of activation then becomes a function of a node's connectedness to other activated nodes as well as, potentially, the history of activation attempts. We enrich this Coarse-Grained User Diffusion (CGUD) model by applying actor type logics to the nodes of the graph. The resulting Fine-Grained User Diffusion (FGUD) model utilizes prior research in actor typing to generate a predictive model regarding the future influence a user will have on an Information Cascade. Furthermore, we introduce a measure of Information Resonance that is used to aid in predictions regarding user behavior.

Characterization of YBa2Cu3O7, including critical current density Jc, by trapped magnetic field
NASA Technical Reports Server (NTRS)
Chen, In-Gann; Liu, Jianxiong; Weinstein, Roy; Lau, Kwong
1992-01-01

Spatial distributions of the persistent magnetic field trapped by sintered and melt-textured ceramic-type high-temperature superconductor (HTS) samples have been studied. The trapped field can be reproduced by a model of the current consisting of two components: (1) a surface current Js and (2) a uniform volume current Jv. This Js + Jv model gives a satisfactory account of the spatial distribution of the magnetic field trapped by different types of HTS samples. The magnetic moment can be calculated based on the Js + Jv model, and the result agrees well with that measured by a standard vibrating sample magnetometer (VSM). As a consequence, Jc predicted by VSM methods agrees with Jc predicted from the Js + Jv model. The field mapping method described is also useful for revealing the granular structure of large HTS samples and regions of weak links.
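The Js + Jv decomposition can be illustrated with a very small calculation. The Python sketch below superposes the on-axis field of a single edge loop (standing in for the surface current) and of nested loops filling the sample (standing in for the uniform volume current); the geometry and current values are invented for illustration, and the published work reconstructs full two-dimensional field maps rather than an on-axis profile.

    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

    def loop_bz(current, radius, z):
        """On-axis axial field of a circular current loop (standard Biot-Savart result)."""
        return MU0 * current * radius**2 / (2.0 * (radius**2 + z**2) ** 1.5)

    # Illustrative sample: a disk of radius 5 mm with an edge loop for the surface
    # current and nested loops standing in for the uniform volume current.
    R = 5e-3                           # sample radius (m)
    I_surface = 20.0                   # total surface current (A), placeholder value
    I_volume_per_loop = 2.0            # current per volume loop (A), placeholder value
    radii = np.linspace(0.5e-3, R, 10)

    z = np.linspace(0.0, 10e-3, 6)     # height above the sample centre (m)
    b_surface = loop_bz(I_surface, R, z)
    b_volume = sum(loop_bz(I_volume_per_loop, r, z) for r in radii)
    for zi, bs, bv in zip(z, b_surface, b_volume):
        print(f"z = {zi*1e3:4.1f} mm: B_surface = {bs*1e3:6.3f} mT, "
              f"B_volume = {bv*1e3:6.3f} mT, total = {(bs+bv)*1e3:6.3f} mT")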
Spatially distributed modeling of soil organic carbon across China with improved accuracy
NASA Astrophysics Data System (ADS)
Li, Qi-quan; Zhang, Hao; Jiang, Xin-ye; Luo, Youlin; Wang, Chang-quan; Yue, Tian-xiang; Li, Bing; Gao, Xue-song
2017-06-01

There is a need for more detailed spatial information on soil organic carbon (SOC) for the accurate estimation of SOC stock and for earth system models. As it is effective to use environmental factors as auxiliary variables to improve the prediction accuracy of spatially distributed modeling, a combined method (HASM_EF) was developed to predict the spatial pattern of SOC across China using high accuracy surface modeling (HASM), an artificial neural network (ANN), and principal component analysis (PCA) to introduce land uses, soil types, climatic factors, topographic attributes, and vegetation cover as predictors. The performance of HASM_EF was compared with ordinary kriging (OK), with OK and HASM each combined with land uses and soil types (OK_LS and HASM_LS), and with regression kriging combined with land uses and soil types (RK_LS). Results showed that HASM_EF obtained the lowest prediction errors, and the ratio of performance to deviation (RPD) showed relative improvements of 89.91%, 63.77%, 55.86%, and 42.14%, respectively, compared to the other four methods. Furthermore, HASM_EF generated more detailed and more realistic spatial information on SOC. The improved performance of HASM_EF can be attributed to the introduction of more environmental factors, to explicit consideration of the multicollinearity of the selected factors and of the spatial nonstationarity and nonlinearity of relationships between SOC and the selected factors, and to the performance of HASM and ANN. This method may serve as a useful tool for providing more precise spatial information on soil parameters for global modeling across large areas.

Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind

Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials to design intervention strategies in the case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions of the disease process, and the choice of time advance.
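The point about the choice of time advance can be reproduced with even the simplest compartmental model. The Python sketch below integrates a basic SIR system as a continuous-time EBM and then advances the same equations with coarse discrete steps; parameter values are illustrative, and the comparison is only meant to show how the simulated epidemic's magnitude shifts with the numerical choices discussed above.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Basic SIR parameters (illustrative values, not tied to any particular disease)
    beta, gamma = 0.3, 0.1          # transmission and recovery rates (1/day)
    N, I0 = 10_000, 10              # population size and initial infections
    t_end = 160.0

    def sir_rhs(t, y):
        s, i, r = y
        return [-beta * s * i / N, beta * s * i / N - gamma * i, gamma * i]

    # Continuous-time EBM: integrate the ODEs on a fine output grid
    t_grid = np.linspace(0.0, t_end, 1601)
    sol = solve_ivp(sir_rhs, (0.0, t_end), [N - I0, I0, 0.0], t_eval=t_grid)
    peak_ebm = sol.y[1].max()

    # Same equations advanced with a coarse discrete time step (Euler updates),
    # mimicking the "discrete time advance" choice discussed in the abstract
    def discrete_sir(dt):
        s, i, r = N - I0, I0, 0.0
        peak = i
        for _ in range(int(t_end / dt)):
            new_inf = beta * s * i / N * dt
            new_rec = gamma * i * dt
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            peak = max(peak, i)
        return peak

    for dt in (0.1, 0.5, 2.0):
        print(f"dt = {dt:3.1f} d: peak infections = {discrete_sir(dt):8.1f} "
              f"(continuous-time peak = {peak_ebm:8.1f})")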
Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models
DOE PAGES
Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind; ...
2016-05-01

Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials to design intervention strategies in the case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread.
Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions of the disease process, and the choice of time advance.

Compound Stimulus Presentation Does Not Deepen Extinction in Human Causal Learning
PubMed Central
Griffiths, Oren; Holmes, Nathan; Westbrook, R. Fred
2017-01-01

Models of associative learning have proposed that cue-outcome learning critically depends on the degree of prediction error encountered during training. Two experiments examined the role of error-driven extinction learning in a human causal learning task. Target cues underwent extinction in the presence of additional cues, which differed in the degree to which they predicted the outcome, thereby manipulating outcome expectancy and, in the absence of any change in reinforcement, prediction error. These prediction error manipulations have each been shown to modulate extinction learning in aversive conditioning studies. While both manipulations resulted in increased prediction error during training, neither enhanced extinction in the present human learning task (one manipulation resulted in less extinction at test). The results are discussed with reference to the types of associations that are regulated by prediction error, the types of error terms involved in their regulation, and how these interact with parameters involved in training. PMID:28232809

Integrating data from randomized controlled trials and observational studies to predict the response to pregabalin in patients with painful diabetic peripheral neuropathy.
PubMed
Alexander, Joe; Edwards, Roger A; Savoldelli, Alberto; Manca, Luigi; Grugni, Roberto; Emir, Birol; Whalen, Ed; Watt, Stephen; Brodsky, Marina; Parsons, Bruce
2017-07-20

More patient-specific medical care is expected as more is learned about variations in patient responses to medical treatments. Analytical tools enable insights by linking treatment responses from different types of studies, such as randomized controlled trials (RCTs) and observational studies. Given the importance of evidence from both types of studies, our goal was to integrate these types of data into a single predictive platform to help predict response to pregabalin in individual patients with painful diabetic peripheral neuropathy (pDPN). We utilized three pivotal RCTs of pregabalin (398 North American patients) and the largest observational study of pregabalin (3159 German patients). We implemented a hierarchical cluster analysis to identify patient clusters in the Observational Study to which RCT patients could be matched using the coarsened exact matching (CEM) technique, thereby creating a matched dataset. We then developed autoregressive moving average models with exogenous variables (ARMAX) to estimate weekly pain scores for pregabalin-treated patients in each cluster in the matched dataset using the maximum likelihood method. Finally, we validated the ARMAX models using Observational Study patients who had not matched with RCT patients, using t tests between observed and predicted pain scores. Cluster analysis yielded six clusters (287-777 patients each) with the following clustering variables: gender, age, pDPN duration, body mass index, depression history, pregabalin monotherapy, prior gabapentin use, baseline pain score, and baseline sleep interference. CEM yielded 1528 unique patients in the matched dataset. The reduction in global imbalance scores for the clusters after adding the RCT patients (ranging from 6 to 63% depending on the cluster) demonstrated that the process reduced the bias of covariates in five of the six clusters. ARMAX models of pain score performed well (R2: 0.85-0.91; root mean square errors: 0.53-0.57). t tests did not show differences between observed and predicted pain scores in the 1955 patients who had not matched with RCT patients. The combination of cluster analyses, CEM, and ARMAX modeling enabled strong predictive capabilities with respect to pain scores. Integrating RCT and Observational Study data using CEM enabled effective use of Observational Study data to predict patient responses.
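The ARMAX step can be sketched with statsmodels' state-space tools. The example below fits an ARMA(1,1) model with a single exogenous regressor to a synthetic weekly pain-score series by maximum likelihood; the data, cluster structure, and covariates of the actual study are not reproduced here.

    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(0)

    # Synthetic stand-in for one cluster: 13 weekly pain scores with the week number
    # as a single exogenous regressor (purely illustrative, not the trial data).
    weeks = np.arange(1, 14, dtype=float)
    exog = weeks.reshape(-1, 1)
    pain = 6.5 - 0.18 * weeks + rng.normal(0.0, 0.3, weeks.size)

    # ARMA(1,1) terms plus the exogenous regressor, estimated by maximum likelihood
    model = SARIMAX(pain, exog=exog, order=(1, 0, 1), trend="c")
    result = model.fit(disp=False)

    pred = result.fittedvalues
    print("estimated parameters:", dict(zip(model.param_names, np.round(result.params, 3))))
    print("in-sample RMSE:", round(float(np.sqrt(np.mean((pain - pred) ** 2))), 3))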
Predicted space motions for hypervelocity and runaway stars: proper motions and radial velocities for the Gaia Era
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenyon, Scott J.; Brown, Warren R.; Geller, Margaret J.

We predict the distinctive three-dimensional space motions of hypervelocity stars (HVSs) and runaway stars moving in a realistic Galactic potential. For nearby stars with distances less than 10 kpc, unbound stars are rare; proper motions alone rarely isolate bound HVSs and runaways from indigenous halo stars. At large distances of 20-100 kpc, unbound HVSs are much more common than runaways; radial velocities easily distinguish both from indigenous halo stars. Comparisons of the predictions with existing observations are encouraging. Although the models fail to match observations of solar-type HVS candidates from SEGUE, they agree well with data for B-type HVSs and runaways from other surveys.
Complete samples of g ≲ 20 stars with Gaia should provide clear tests of formation models for HVSs and runaways and will enable accurate probes of the shape of the Galactic potential.

Minimum-dissipation scalar transport model for large-eddy simulation of turbulent flows
NASA Astrophysics Data System (ADS)
Abkar, Mahdi; Bae, Hyun J.; Moin, Parviz
2016-08-01

Minimum-dissipation models are a simple alternative to Smagorinsky-type approaches for parametrizing the subfilter turbulent fluxes in large-eddy simulation. A recently derived model of this type for the subfilter stress tensor is the anisotropic minimum-dissipation (AMD) model [Rozema et al., Phys. Fluids 27, 085107 (2015), 10.1063/1.4928700], which has many desirable properties. It is more cost-effective than the dynamic Smagorinsky model, it appropriately switches off in laminar and transitional flows, and it is consistent with the exact subfilter stress tensor on both isotropic and anisotropic grids. In this study, an extension of this approach to modeling the subfilter scalar flux is proposed. The performance of the AMD model is tested in the simulation of a high-Reynolds-number rough-wall boundary-layer flow with a constant and uniform surface scalar flux. The simulation results obtained from the AMD model show good agreement with well-established empirical correlations and theoretical predictions of the resolved flow statistics.
In particular, the AMD model is capable of accurately predicting the expected surface-layer similarity profiles and power spectra for both velocity and scalar concentration.

Advances and challenges in logical modeling of cell cycle regulation: perspective for multi-scale, integrative yeast cell models
PubMed Central
Todd, Robert G.; van der Zee, Lucas
2016-01-01

The eukaryotic cell cycle is robustly designed, with interacting molecules organized within a definite topology that ensures temporal precision of its phase transitions. Its underlying dynamics are regulated by molecular switches, for which remarkable insights have been provided by genetic and molecular biology efforts. In a number of cases, this information has been made predictive through computational models. These models have allowed for the identification of novel molecular mechanisms, later validated experimentally. Logical modeling represents one of the youngest approaches to addressing cell cycle regulation. We summarize the advances that this type of modeling has achieved in reproducing and predicting cell cycle dynamics. Furthermore, we present the challenge that this type of modeling is now ready to tackle: its integration with intracellular networks, and its formalisms, to understand the crosstalks underlying systems-level properties, the ultimate aim of multi-scale models. Specifically, we discuss and illustrate how such an integration may be realized by integrating a minimal logical model of the cell cycle with a metabolic network. PMID:27993914

Biomarkers and mortality after transient ischemic attack and minor ischemic stroke: population-based study.
PubMed
Greisenegger, Stefan; Segal, Helen C; Burgess, Annette I; Poole, Debbie L; Mehta, Ziyah; Rothwell, Peter M
2015-03-01

Premature death after transient ischemic attack or stroke is more often because of heart disease or cancer than stroke. Previous studies found blood biomarkers not usefully predictive of nonfatal stroke but possibly of all-cause death. This association might be explained by potentially treatable occult cardiac disease or cancer. We therefore aimed to validate the association of a panel of biomarkers with all-cause death, particularly cardiac death and cancer death, despite the absence of associations with risk of nonfatal vascular events. Fifteen biomarkers were measured in 929 consecutive patients in a population-based study (Oxford Vascular Study), recruited from 2002 and followed up to 2013. Associations were determined by Cox regression. Model discrimination was assessed by the c-statistic and the integrated discrimination improvement. During 5560 patient-years of follow-up, none of the biomarkers predicted risk of nonfatal vascular events. However, soluble tumor necrosis factor α receptor-1, von Willebrand factor, heart-type fatty-acid-binding protein, and N-terminal pro-B-type natriuretic peptide were independently predictive of all-cause death (n=361; adjusted hazard ratio per SD, 95% confidence interval: heart-type fatty-acid-binding protein: 1.31, 1.12-1.56, P=0.002; N-terminal pro-B-type natriuretic peptide: 1.34, 1.11-1.62, P=0.002; soluble tumor necrosis factor α receptor-1: 1.45, 1.26-1.66, P=0.02; von Willebrand factor: 1.19, 1.04-1.36, P=0.01). The independent contribution of the four biomarkers taken together added prognostic information and improved model discrimination (integrated discrimination improvement=0.028, P=0.0001). N-terminal pro-B-type natriuretic peptide was most predictive of vascular death (adjusted hazard ratio=1.80, 95% confidence interval, 1.34-2.41, P<0.0001), whereas heart-type fatty-acid-binding protein predicted cancer deaths (1.64, 1.26-2.12, P=0.0002). Associations were strongest in patients without known prior cardiac disease or cancer. Several biomarkers predicted death of any cause after transient ischemic attack and minor stroke. N-terminal pro-B-type natriuretic peptide and heart-type fatty-acid-binding protein might improve patient selection for additional screening for occult cardiac disease or cancer, respectively. However, our results require validation in future studies. © 2015 American Heart Association, Inc.
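The Cox-regression step translates directly into code. The sketch below uses the lifelines package on a synthetic cohort with two standardized biomarkers, reporting hazard ratios per SD and a concordance (c-) statistic; the biomarker names are borrowed from the abstract, but the data and effect sizes are simulated.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 500

    # Synthetic cohort with two standardized (per-SD) biomarkers; higher values
    # are generated to shorten survival, loosely mimicking the reported direction.
    ntprobnp_sd = rng.normal(0.0, 1.0, n)
    hfabp_sd = rng.normal(0.0, 1.0, n)
    risk = np.exp(0.30 * ntprobnp_sd + 0.25 * hfabp_sd)
    time_years = rng.exponential(8.0 / risk)
    observed = (time_years < 11.0).astype(int)        # administrative censoring at 11 years
    time_years = np.minimum(time_years, 11.0)

    df = pd.DataFrame({"time": time_years, "death": observed,
                       "ntprobnp_sd": ntprobnp_sd, "hfabp_sd": hfabp_sd})

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="death")
    cph.print_summary(decimals=2)                     # hazard ratios per SD with 95% CIs
    print("c-statistic:", round(cph.concordance_index_, 3))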
Prediction of Flow Stress in Cadmium Using Constitutive Equation and Artificial Neural Network Approach
NASA Astrophysics Data System (ADS)
Sarkar, A.; Chakravartty, J. K.
2013-10-01

A model is developed to predict the constitutive flow behavior of cadmium during compression tests using an artificial neural network (ANN). The inputs of the neural network are strain, strain rate, and temperature, whereas flow stress is the output. Experimental data obtained from compression tests in the temperature range -30 to 70 °C, strain range 0.1 to 0.6, and strain rate range 10(-3) to 1 s(-1) are employed to develop the model. A three-layer feed-forward ANN is trained with the Levenberg-Marquardt training algorithm. It has been shown that the developed ANN model can efficiently and accurately predict the deformation behavior of cadmium. This trained network could predict the flow stress better than a constitutive equation of the type.
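A three-layer feed-forward network trained with Levenberg-Marquardt can be reproduced compactly by treating training as a nonlinear least-squares problem. The sketch below fits a small tanh network to synthetic flow-stress data with scipy's Levenberg-Marquardt solver; the data generator, network size, and units are all assumptions, not the paper's experimental dataset or network.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)

    # Synthetic stand-in for the compression-test data: inputs are strain,
    # log10(strain rate), and temperature; the target is a made-up flow stress.
    n = 300
    strain = rng.uniform(0.1, 0.6, n)
    log_rate = rng.uniform(-3.0, 0.0, n)          # log10 of strain rate in 1/s
    temp_c = rng.uniform(-30.0, 70.0, n)
    stress = (60.0 + 40.0 * strain) * np.exp(0.08 * log_rate) * (1.0 - 0.004 * temp_c)
    stress += rng.normal(0.0, 0.5, n)

    X = np.column_stack([strain, log_rate, temp_c])
    X = (X - X.mean(axis=0)) / X.std(axis=0)       # normalize inputs

    n_in, n_hidden = 3, 6                          # one hidden layer (three-layer net)

    def unpack(p):
        i = 0
        w1 = p[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
        b1 = p[i:i + n_hidden]; i += n_hidden
        w2 = p[i:i + n_hidden]; i += n_hidden
        b2 = p[i]
        return w1, b1, w2, b2

    def predict(p, x):
        w1, b1, w2, b2 = unpack(p)
        return np.tanh(x @ w1 + b1) @ w2 + b2

    def residuals(p):
        return predict(p, X) - stress

    p0 = rng.normal(0.0, 0.5, n_in * n_hidden + n_hidden + n_hidden + 1)
    fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt training
    rmse = np.sqrt(np.mean(fit.fun ** 2))
    print(f"training RMSE of the small LM-trained network: {rmse:.3f} (synthetic stress units)")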
Prediction of gestational age based on genome-wide differentially methylated regions.
PubMed
Bohlin, J; Håberg, S E; Magnus, P; Reese, S E; Gjessing, H K; Magnus, M C; Parr, C L; Page, C M; London, S J; Nystad, W
2016-10-07

We explored the association between gestational age and cord blood DNA methylation at birth and whether DNA methylation could be effective in predicting gestational age, given the limitations of the presently used methods. We used data from the Norwegian Mother and Child Birth Cohort study (MoBa) with Illumina HumanMethylation450 data measured for 1753 newborns in two batches: MoBa 1, n = 1068; and MoBa 2, n = 685. Gestational age was computed using both ultrasound and the last menstrual period. We evaluated associations between DNA methylation and gestational age and developed a statistical model for predicting gestational age using MoBa 1 for training and MoBa 2 for predictions. The prediction model was additionally used to compare ultrasound and last menstrual period-based gestational age predictions. Furthermore, both the CpGs and the associated genes detected in the training models were compared to those detected in a published prediction model for chronological age. There were 5474 CpGs associated with ultrasound gestational age after adjustment for a set of covariates, including estimated cell type proportions, and Bonferroni correction for multiple testing. Our model predicted ultrasound gestational age more accurately than it predicted last menstrual period gestational age. DNA methylation at birth appears to be a good predictor of gestational age. Ultrasound gestational age is more strongly associated with methylation than last menstrual period gestational age. The CpGs linked with our gestational age prediction model, and their associated genes, differed substantially from the corresponding CpGs and genes associated with a chronological age prediction model.

The world's biomes and primary production as a triple tragedy of the commons foraging game played among plants
PubMed Central
Gonzalez-Meler, Miquel A.; Lynch, Douglas J.; Baltzer, Jennifer L.
2016-01-01

Plants appear to produce an excess of leaves, stems and roots beyond what would provide the most efficient harvest of available resources. One way to understand this overproduction of tissues is that excess tissue production provides a competitive advantage. Game theoretic models predict overproduction of all tissues compared with non-game theoretic models because they explicitly account for this indirect competitive benefit. Here, we present a simple game theoretic model of plants simultaneously competing to harvest carbon and nitrogen. In the model, a plant's fitness is influenced by its own leaf, stem and root production, and the tissue production of others, which produces a triple tragedy of the commons. Our model predicts (i) absolute net primary production when compared with two independent global datasets; (ii) the allocation relationships to leaf, stem and root tissues in one dataset; (iii) the global distribution of biome types and the plant functional types found within each biome; and (iv) ecosystem responses to nitrogen or carbon fertilization. Our game theoretic approach removes the need to define allocation or vegetation type a priori but instead lets these emerge from the model as evolutionarily stable strategies. We believe this to be the simplest possible model that can describe plant production. PMID:28120794
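The tragedy-of-the-commons logic behind such games can be demonstrated with a stripped-down, single-resource root game rather than the authors' coupled carbon-nitrogen model. In the sketch below, two identical plants choose root mass, uptake saturates with total root mass, and the Nash equilibrium found by best-response iteration overproduces roots relative to the joint optimum; every functional form and constant is an illustrative assumption.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Two identical plants compete for one soil resource pool R. A plant building
    # root mass r_i captures a share r_i/(r_i + r_j) of the total uptake, uptake
    # saturates with combined root mass, and roots cost c per unit to build.
    R, k, c = 100.0, 0.05, 1.0

    def payoff(r_i, r_j):
        total = r_i + r_j
        uptake = R * (1.0 - np.exp(-k * total))      # diminishing returns on uptake
        share = r_i / total if total > 0 else 0.0
        return share * uptake - c * r_i

    def best_response(r_j):
        res = minimize_scalar(lambda r: -payoff(r, r_j),
                              bounds=(1e-6, 200.0), method="bounded")
        return res.x

    # Nash equilibrium of the symmetric game by best-response iteration
    r = 10.0
    for _ in range(50):
        r = best_response(r)

    # "Cooperative" allocation that maximizes the summed payoff of both plants
    coop = minimize_scalar(lambda r: -2.0 * payoff(r, r),
                           bounds=(1e-6, 200.0), method="bounded").x

    print(f"Nash root mass per plant:          {r:6.2f}")
    print(f"Joint-optimum root mass per plant: {coop:6.2f}  (less than the Nash value)")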
The world's biomes and primary production as a triple tragedy of the commons foraging game played among plants.
PubMed
McNickle, Gordon G; Gonzalez-Meler, Miquel A; Lynch, Douglas J; Baltzer, Jennifer L; Brown, Joel S
2016-11-16

Plants appear to produce an excess of leaves, stems and roots beyond what would provide the most efficient harvest of available resources. One way to understand this overproduction of tissues is that excess tissue production provides a competitive advantage. Game theoretic models predict overproduction of all tissues compared with non-game theoretic models because they explicitly account for this indirect competitive benefit. Here, we present a simple game theoretic model of plants simultaneously competing to harvest carbon and nitrogen. In the model, a plant's fitness is influenced by its own leaf, stem and root production, and the tissue production of others, which produces a triple tragedy of the commons. Our model predicts (i) absolute net primary production when compared with two independent global datasets; (ii) the allocation relationships to leaf, stem and root tissues in one dataset; (iii) the global distribution of biome types and the plant functional types found within each biome; and (iv) ecosystem responses to nitrogen or carbon fertilization. Our game theoretic approach removes the need to define allocation or vegetation type a priori but instead lets these emerge from the model as evolutionarily stable strategies. We believe this to be the simplest possible model that can describe plant production.
© 2016 The Author(s).

Developing models for the prediction of hospital healthcare waste generation rate.
PubMed
Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe
2016-01-01

An increase in the number of health institutions, along with the frequent use of disposable medical products, has contributed to an increase in the healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, no mathematical model has been developed for Ethiopian hospitals to predict the healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of the healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R(2) = 0.965), and a weak one with the number of outpatients (R(2) = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital type (public, teaching and private). In these models, the numbers of inpatients and outpatients were revealed to be significant factors for the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the types of hospitals. © The Author(s) 2015.

Modeling and prediction of relaxation of polar order in high-activity nonlinear optical polymers
NASA Astrophysics Data System (ADS)
Guenthner, Andrew J.; Lindsay, Geoffrey A.; Wright, Michael E.; Fallis, Stephen; Ashley, Paul R.; Sanghadasa, Mohan
2007-09-01

Mach-Zehnder optical modulators were fabricated using the CLD and FTC chromophores in polymer-on-silicon optical waveguides. Up to 17 months of oven-ageing stability are reported for the poled polymer films. Modulators containing an FTC-polyimide had the best overall aging performance. To model and extrapolate the ageing data, a relaxation correlation function attributed to A. K. Jonscher was compared to the well-established stretched exponential correlation function. Both models gave a good fit to the data. The Jonscher model predicted a slower relaxation rate in the out years. Analysis showed that collecting data for a longer period relative to the relaxation time was more important for generating useful predictions than the precision with which individual model parameters could be estimated. Thus, from a practical standpoint, time-temperature superposition must be assumed in order to generate meaningful predictions. For this purpose, Arrhenius-type expressions were found to relate the model time constants to the ageing temperatures.
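The stretched-exponential fit and the Arrhenius-type shift of its time constant are easy to prototype. The sketch below fits a Kohlrausch-Williams-Watts (KWW) decay to synthetic ageing data with scipy and then rescales the fitted time constant to a second temperature under an assumed single activation energy; the data, activation energy, and temperatures are placeholders, and the Jonscher alternative mentioned above is not implemented.

    import numpy as np
    from scipy.optimize import curve_fit

    def kww(t, tau, beta):
        """Stretched-exponential (KWW) decay of the poling-order correlation function."""
        return np.exp(-(t / tau) ** beta)

    rng = np.random.default_rng(3)

    # Synthetic ageing data: normalized electro-optic signal sampled over 500 days.
    t_days = np.linspace(1.0, 500.0, 40)
    true_tau, true_beta = 900.0, 0.45
    signal = kww(t_days, true_tau, true_beta) + rng.normal(0.0, 0.01, t_days.size)

    (tau_fit, beta_fit), cov = curve_fit(kww, t_days, signal, p0=(300.0, 0.5),
                                         bounds=([1.0, 0.05], [1e5, 1.0]))
    print(f"fitted tau = {tau_fit:7.1f} days, beta = {beta_fit:.2f}")

    # Arrhenius-type shift of the fitted time constant to another ageing temperature,
    # assuming a single activation energy (illustrative value only).
    E_A = 1.0e5          # J/mol, placeholder activation energy
    R_GAS = 8.314        # J/(mol*K)
    T_fit, T_new = 358.0, 338.0   # ageing temperatures in kelvin
    tau_new = tau_fit * np.exp(E_A / R_GAS * (1.0 / T_new - 1.0 / T_fit))
    print(f"tau extrapolated to {T_new:.0f} K: {tau_new:9.1f} days")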
Numerical Simulation for Predicting Fatigue Damage Progress in Notched CFRP Laminates by Using Cohesive Elements
NASA Astrophysics Data System (ADS)
Okabe, Tomonaga; Yashiro, Shigeki

This study proposes a cohesive zone model (CZM) for predicting fatigue damage growth in notched carbon-fiber-reinforced plastic (CFRP) cross-ply laminates. In this model, damage growth in the fracture process of cohesive elements due to cyclic loading is represented by a conventional damage mechanics model. We preliminarily investigated whether this model can appropriately express fatigue damage growth for a circular crack embedded in an isotropic solid material. This investigation demonstrated that the model could reproduce the results of the well-established fracture mechanics model plus Paris' law by tuning adjustable parameters. We then numerically investigated the damage process in notched CFRP cross-ply laminates under tensile cyclic loading and compared the predicted damage patterns with those in experiments reported by Spearing et al. (Compos. Sci. Technol. 1992). The predicted damage patterns agreed with the experimental results, which exhibited the extension of multiple types of damage (e.g., splits, transverse cracks and delaminations) near the notches.

Predicting driver from front passenger using only the postmortem pattern of injury following a motor vehicle collision.
PubMed
Curtin, Eleanor; Langlois, Neil E I
2007-10-01

This study aimed to establish whether post-mortem injury patterns can assist in distinguishing drivers from front seat passengers among victims of motor vehicle collisions, without regard to collision type, vehicle type or whether safety equipment had been used. Injuries sustained by 206 drivers and 91 front seat passengers were catalogued from post-mortem reports. Injuries were coded for the body region, depth and location of the injury. Statistical analysis was used to detect injuries capable of discriminating between driver and passenger. Drivers were more likely to sustain the following injuries: brain injury; fractures to the right femur, right posterior ribs, base of skull, right humerus and right shoulder; and superficial wounds at the right lateral and posterior thigh, right face, right and left anterior knee, right anterior shoulder, lateral right arm and forearm and left anterior thigh. Front passengers were more vulnerable to splenic injury; fractures to the left posterior and anterior ribs, left shoulder and left femur; and superficial wounds at the left anterior shoulder region and left lateral neck. Linear discriminant analysis generated a model for predicting seating position based on the presence of injury to certain regions of the body; the overall predictive accuracy of the model was 69.3%. It was found that driver and front passenger fatalities receive different injury patterns from motor vehicle collisions, regardless of collision type. A larger study is required to improve the predictive accuracy of this model and to ascertain its value to forensic medicine.
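The discriminant-analysis step has a direct scikit-learn counterpart. The sketch below builds synthetic binary injury indicators for 206 drivers and 91 passengers and cross-validates a linear discriminant classifier; the feature set and prevalences are invented, so only the workflow, not the reported 69.3% accuracy, should be expected.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    # Synthetic binary injury indicators (1 = injury present) for 206 drivers and
    # 91 front passengers; feature names and prevalences are purely illustrative.
    is_driver = np.concatenate([np.ones(206, dtype=int), np.zeros(91, dtype=int)])
    p_driver = np.array([0.45, 0.35, 0.30, 0.10, 0.08])     # e.g. right-side injuries more common
    p_passenger = np.array([0.15, 0.10, 0.12, 0.40, 0.35])  # e.g. left-side/splenic injuries
    probs = np.where(is_driver[:, None] == 1, p_driver, p_passenger)
    X = rng.binomial(1, probs)

    lda = LinearDiscriminantAnalysis()
    acc = cross_val_score(lda, X, is_driver, cv=5, scoring="accuracy")
    print(f"cross-validated accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")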
Enhanced seasonal predictability of the summer mean temperature in Central Europe favored by new dominant weather patterns
NASA Astrophysics Data System (ADS)
Hoffmann, P.
2018-04-01

In this study, two complementary approaches have been combined to estimate the reliability of the data-driven seasonal predictability of the meteorological summer mean temperature (T_{JJA}) over Europe. The developed model is based on linear regressions and uses early-season predictors to estimate the target value T_{JJA}. We found for the Potsdam (Germany) climate station that the monthly standard deviations (σ) from January to April and the temperature mean (m) in April are good predictors of T_{JJA} after 1990. However, before 1990 the model failed. The core region where this model works is the north-eastern part of Central Europe. We also analyzed long-term trends of monthly Hess/Brezowsky weather types as possible causes of the dynamical changes. In spring, a significant increase in the occurrence of two opposite weather patterns was found: Zonal Ridge across Central Europe (BM) and Trough over Central Europe (TRM). Both currently make up about 30% of the total alternating weather systems over Europe. Other weather types are predominantly decreasing or their trends are not significant. Thus, the predictability may be attributed to these two weather types, for which the difference between the two Z500 composite patterns is large. This also applies to the north-eastern part of Central Europe. Finally, the detected enhanced seasonal predictability over Europe is alarming, because severe side effects may occur. One of these is more frequent climate extremes in the summer half-year.

Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1994-01-01

A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary-stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of the model's empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
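The abstract describes the empirical model only qualitatively, as separable functions of time, temperature, and stress. One commonly used separable form with that structure, written here purely as an illustration of what such a primary-creep law can look like and not as the paper's actual parameterization, is

    \varepsilon_c(t, \sigma, T) \;=\; A\,\sigma^{\,n}\,t^{\,p}\,\exp\!\left(-\frac{Q}{RT}\right)

with A, n, p, and the apparent activation energy Q as the empirical parameters to be measured; stress relaxation at fixed total strain then follows implicitly from \sigma(t)/E + \varepsilon_c(t, \sigma(t), T) = \varepsilon_0.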
Comparison of empirical and data driven hydrometeorological hazard models on coastal cities of São Paulo, Brazil
NASA Astrophysics Data System (ADS)
Koga-Vicente, A.; Friedel, M. J.
2010-12-01

Every year thousands of people are affected by flood and landslide hazards caused by rainstorms. The problem is more serious in tropical developing countries because of high susceptibility, resulting from the large amount of energy available to form storms, and high vulnerability, due to poor economic and social conditions. Predictive models of hazards are important tools for managing this kind of risk. In this study, a comparison of two different modeling approaches was made for predicting hydrometeorological hazards in 12 cities on the coast of São Paulo, Brazil, from 1994 to 2003. In the first approach, an empirical multiple linear regression (MLR) model was developed and used; the second approach used a type of unsupervised nonlinear artificial neural network called a self-organizing map (SOM). By using twenty-three independent variables of susceptibility (precipitation, soil type, slope, elevation, and regional atmospheric system scale) and vulnerability (distribution and total population, income and educational characteristics, poverty intensity, human development index), binary hazard responses were obtained. Model performance assessed by cross-validation indicated that the respective MLR and SOM model accuracy was about 67% and 80%. Prediction accuracy can be improved by the addition of information, but the SOM approach is preferred because of sparse data and highly nonlinear relations among the independent variables.
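A self-organizing map of the kind used in the second approach can be written in a few lines of numpy. The sketch below trains a small rectangular map with a Gaussian neighborhood on synthetic standardized predictors; the grid size, learning-rate schedule, and data are illustrative choices, not the study's configuration.

    import numpy as np

    rng = np.random.default_rng(5)

    def train_som(data, grid=(6, 6), epochs=30, lr0=0.5, sigma0=2.0):
        """Minimal self-organizing map: rectangular grid, Gaussian neighborhood."""
        n_features = data.shape[1]
        weights = rng.random((grid[0], grid[1], n_features))
        gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
        for epoch in range(epochs):
            lr = lr0 * np.exp(-epoch / epochs)
            sigma = sigma0 * np.exp(-epoch / epochs)
            for x in rng.permutation(data):
                # locate the best-matching unit (BMU)
                dist = np.linalg.norm(weights - x, axis=2)
                by, bx = np.unravel_index(np.argmin(dist), dist.shape)
                # pull the BMU's neighborhood toward the sample
                h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2.0 * sigma ** 2))
                weights += lr * h[:, :, None] * (x - weights)
        return weights

    # Synthetic stand-in for standardized susceptibility/vulnerability predictors
    data = rng.normal(size=(300, 8))
    som = train_som(data)
    print("trained SOM weight grid:", som.shape)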
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1991-01-01

A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary-stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of the model's empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.

Subseasonal-to-Seasonal Science and Prediction Initiatives of the NOAA MAPP Program
NASA Astrophysics Data System (ADS)
Archambault, H. M.; Barrie, D.; Mariotti, A.
2016-12-01

There is great practical interest in developing predictions beyond the 2-week weather timescale. Scientific communities have historically organized themselves around the weather and climate problems, but the subseasonal-to-seasonal (S2S) timescale range overall is recognized as new territory for which a concerted shared effort is needed. For instance, the climate community, as part of programs like CLIVAR, has historically tackled coupled phenomena and modeling, keys to harnessing predictability on longer timescales.
In contrast, the weather community has focused on synoptic dynamics, higher-resolution modeling, and enhanced model initialization, which are of importance at the shorter timescales and especially for the prediction of extremes. The processes and phenomena specific to timescales between weather and climate require a unified approach to science, modeling, and predictions. Internationally, the WWRP/WCRP S2S Prediction Project is a promising catalyst for these types of activities. Among the various contributing U.S. research programs, the Modeling, Analysis, Predictions and Projections (MAPP) program, as part of the NOAA Climate Program Office, has launched coordinated research and transition activities that help to meet the agency's goals to fill the weather-to-climate prediction gap and will contribute to advancing international goals. This presentation will describe ongoing MAPP program S2S science and prediction initiatives, specifically the MAPP S2S Task Force and the SubX prediction experiment.

Fatigue crack growth in fiber reinforced plastics
NASA Technical Reports Server (NTRS)
Mandell, J. F.
1979-01-01

Fatigue crack growth in fiber composites occurs by such complex modes as to frustrate efforts at developing comprehensive theories and models. Under certain loading conditions and with certain types of reinforcement, simpler modes of fatigue crack growth are observed. These modes are more amenable to modeling efforts, and the fatigue crack growth rate can be predicted in some cases. Thus, a formula for prediction of the ligamented-mode fatigue crack growth rate is available.

Pre-launch Optical Characteristics of the Oculus-ASR Nanosatellite for Attitude and Shape Recognition Experiments
DTIC Science & Technology
2011-12-02

...construction and validation of predictive computer models such as those used in Time-domain Analysis Simulation for Advanced Tracking (TASAT)... characterization data, successful construction and validation of predictive computer models was accomplished. And an investigation in pose determination from... (Only fragments of this record survive.)
DATES</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20040040265&hterms=State+flow&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DState%2Bflow','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20040040265&hterms=State+flow&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DState%2Bflow"><span>Prediction of Flows about Forebodies at High-Angle-of-Attack Dynamic Conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Fremaux, C. M.; vanDam, C. P.; Saephan, S.; DalBello, T.</p> <p>2003-01-01</p> <p>A Reynolds-average Navier Stokes method developed for rotorcraft type of flow problems is applied for predicting the forces and moments of forebody models at high-angle-of-attack dynamic conditions and for providing insight into the flow characteristics at these conditions. Wind-tunnel results from rotary testing on generic forebody models conducted by NASA Langley and DERA are used for comparison. This paper focuses on the steady-state flow problem.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=multilocation&id=EJ830984','ERIC'); return false;" href="https://eric.ed.gov/?q=multilocation&id=EJ830984"><span>The Influence of Number of A Trials on 2-Year-Olds' Behavior in Two A-Not-B-Type Search Tasks: A Test of the Hierarchical Competing Systems Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Marcovitch, Stuart; Zelazo, Philip David</p> <p>2006-01-01</p> <p>Age-appropriate modifications of the A-not-B task were used to examine 2-year-olds' search behavior. Several theories predict that A-not-B errors will increase as a function of number of A trials. 
However, the hierarchical competing systems model (Marcovitch & Zelazo, 1999) predicts that although the ratio of perseverative to nonperseverative…</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li class="active"><span>22</span></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li class="active"><span>23</span></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AdSpR..61.2628P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AdSpR..61.2628P"><span>Improving orbit prediction accuracy through supervised machine learning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Peng, Hao; Bai, Xiaoli</p> <p>2018-05-01</p> <p>Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. 
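    The hybrid physics-plus-learning idea summarized above can be sketched in a few lines: a regressor is trained on the historical difference between a physics-based propagation and later observations, and the learned error model then corrects new physics-based predictions. The toy propagator, the single time feature, and the use of scikit-learn's GradientBoostingRegressor below are illustrative assumptions, not the authors' implementation.

      # Minimal sketch of ML-corrected orbit prediction (illustrative; not the authors' code).
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      def physics_prediction(t):
          """Stand-in for a physics-based propagator: position along one axis (km)."""
          return 7000.0 * np.cos(2 * np.pi * t / 5400.0)  # crude circular-orbit proxy

      # Historical data: propagation horizon (s) and the observed error of the physics model.
      t_train = np.linspace(0, 86400, 500)
      observed = physics_prediction(t_train) + 0.002 * t_train + np.random.normal(0, 1.0, t_train.size)
      error = observed - physics_prediction(t_train)

      # Learn the error as a function of the propagation horizon (real features could also
      # include drag proxies, solar indices, etc. -- assumptions for illustration).
      model = GradientBoostingRegressor().fit(t_train.reshape(-1, 1), error)

      # Correct a new physics-based prediction with the learned error model.
      t_new = np.linspace(86400, 2 * 86400, 200)
      corrected = physics_prediction(t_new) + model.predict(t_new.reshape(-1, 1))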
  Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions

    NASA Astrophysics Data System (ADS)

    Van Hooidonk, R. J.

    2011-12-01

    Future widespread coral bleaching and subsequent mortality has been projected with sea surface temperature (SST) data from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. These model weaknesses likely reduce the skill of coral bleaching predictions, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends and their propagation in predictions. To analyze the relative importance of various types of model errors and biases on coral reef bleaching predictive skill, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from GCM 20th century simulations to be included in the Intergovernmental Panel on Climate Change (IPCC) 5th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate skill using an objective measure of forecast quality, the Peirce Skill Score (PSS). This methodology will identify frequency bands that are important to predicting coral bleaching and it will highlight deficiencies in these bands in models. The methodology we describe can be used to improve future climate model derived predictions of coral reef bleaching and it can be used to better characterize the errors and uncertainty in predictions.
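    The Peirce Skill Score used above as the objective measure of forecast quality is computed from a 2x2 contingency table of predicted versus observed bleaching events; a minimal sketch with invented counts:

      # Peirce Skill Score (true skill statistic) from a 2x2 contingency table.
      # Counts below are invented for illustration, not the study's data.
      def peirce_skill_score(hits, misses, false_alarms, correct_negatives):
          hit_rate = hits / (hits + misses)                          # probability of detection
          false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
          return hit_rate - false_alarm_rate

      print(peirce_skill_score(hits=42, misses=18, false_alarms=25, correct_negatives=215))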
  Multidimensional Models of Type Ia Supernova Nebular Spectra: Strong Emission Lines from Stripped Companion Gas Rule Out Classic Single-degenerate Systems

    NASA Astrophysics Data System (ADS)

    Botyánszki, János; Kasen, Daniel; Plewa, Tomasz

    2018-01-01

    The classic single-degenerate model for the progenitors of Type Ia supernovae (SNe Ia) predicts that the supernova ejecta should be enriched with solar-like abundance material stripped from the companion star. Spectroscopic observations of normal SNe Ia at late times, however, have not resulted in definite detection of hydrogen. In this Letter, we study line formation in SNe Ia at nebular times using non-LTE spectral modeling. We present, for the first time, multidimensional radiative transfer calculations of SNe Ia with stripped material mixed in the ejecta core, based on hydrodynamical simulations of ejecta-companion interaction. We find that interaction models with main-sequence companions produce significant Hα emission at late times, ruling out these types of binaries as viable progenitors of SNe Ia. We also predict significant He I line emission at optical and near-infrared wavelengths for both hydrogen-rich and helium-rich material, providing an additional observational probe of stripped ejecta. We produce models with reduced stripped masses and find a more stringent mass limit of M_st ≲ 1 × 10⁻⁴ M_⊙ of stripped companion material for SN 2011fe.

  Indoor air quality of low and middle income urban households in Durban, South Africa

    PubMed

    Jafta, Nkosana; Barregard, Lars; Jeena, Prakash M; Naidoo, Rajen N

    2017-07-01

    Elevated levels of indoor air pollutants may cause cardiopulmonary disease such as lower respiratory infection, chronic obstructive lung disease and lung cancer, but the association with tuberculosis (TB) is unclear. So far, the risk estimates of TB infection and/or disease due to indoor air pollution (IAP) exposure are based on self-reported exposures rather than direct measurements of IAP, and these exposures have not been validated. The aim of this paper was to characterize and develop predictive models for concentrations of three air pollutants (PM10, NO2 and SO2) in homes of children participating in a childhood TB study. Children younger than 15 years living within the eThekwini Municipality in South Africa were recruited for a childhood TB case control study. The homes of these children (n=246) were assessed using a walkthrough checklist, and in 114 of them monitoring of three indoor pollutants was also performed (sampling period: 24 h for PM10, and 2-3 weeks for NO2 and SO2). Linear regression models were used to predict PM10 and NO2 concentrations from household characteristics, and these models were validated using leave-one-out cross-validation (LOOCV). SO2 was not modeled as its concentrations were very low. Mean indoor concentrations of PM10 (n=105), NO2 (n=82) and SO2 (n=82) were 64 μg/m³ (range 6.6-241), 19 μg/m³ (range 4.5-55) and 0.6 μg/m³ (range 0.005-3.4) respectively, with the distributions for all three pollutants skewed to the right. Spearman correlations showed weak positive correlations between the three pollutants. The largest contributors to the PM10 predictive model were type of housing structure (formal or informal), number of smokers in the household, and type of primary fuel used in the household. The NO2 predictive model was influenced mostly by the primary fuel type and by distance from the major roadway. The coefficients of determination (R²) for the models were 0.41 for PM10 and 0.31 for NO2. Spearman correlations between measured and predicted PM10 and NO2 were significant, with coefficients of 0.66 and 0.55 respectively. Indoor PM10 levels were relatively high in these households. Both PM10 and NO2 can be modeled with reasonable validity, and these predictive models can decrease the necessary number of direct measurements, which are expensive and time consuming. Copyright © 2017 Elsevier Inc. All rights reserved.
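    A regression-with-LOOCV workflow of the kind used for the PM10 and NO2 models above can be sketched as follows; the household variables, coefficients, and synthetic data are placeholders, and scikit-learn/scipy are assumed only for illustration.

      # Sketch: predict indoor PM10 from household characteristics, validated by LOOCV.
      import numpy as np
      import pandas as pd
      from scipy.stats import spearmanr
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      # Placeholder predictors; the real study used housing type, smokers, fuel type, etc.
      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "informal_housing": rng.integers(0, 2, 105),
          "n_smokers": rng.integers(0, 4, 105),
          "solid_fuel": rng.integers(0, 2, 105),
      })
      pm10 = (30 + 40 * df["informal_housing"] + 10 * df["n_smokers"]
              + 25 * df["solid_fuel"] + rng.normal(0, 20, 105))

      # Leave-one-out cross-validated predictions from a linear model.
      predicted = cross_val_predict(LinearRegression(), df, pm10, cv=LeaveOneOut())

      # Agreement between measured and LOOCV-predicted concentrations.
      rho, p = spearmanr(pm10, predicted)
      print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")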
  Hot Deformation Behavior and Flow Stress Prediction of TC4-DT Alloy in Single-Phase Region and Dual-Phase Regions

    NASA Astrophysics Data System (ADS)

    Liu, Jianglin; Zeng, Weidong; Zhu, Yanchun; Yu, Hanqing; Zhao, Yongqing

    2015-05-01

    Isothermal compression tests of TC4-DT titanium alloy were conducted on a Gleeble-3500 thermo-mechanical simulator at deformation temperatures ranging from 1181 to 1341 K, covering the α + β phase field and the β phase field, at strain rates ranging from 0.01 to 10.0 s⁻¹ and with a height reduction of 70%. The experimental true stress-true strain data were employed to develop a strain-compensated Arrhenius-type flow stress model and an artificial neural network (ANN) model; the predictability of the two models was quantified in terms of the correlation coefficient (R) and the average absolute relative error (AARE). The R and AARE for the Arrhenius-type flow stress model were 0.9952 and 5.78%, a weaker linear correlation and a larger deviation than the 0.9997 and 1.04% obtained for the feed-forward back-propagation ANN model. The results indicated that the trained ANN model was more efficient and accurate in predicting the flow behavior of TC4-DT titanium alloy during elevated-temperature deformation than the strain-compensated Arrhenius-type constitutive equations. The strain-compensated constitutive relationship could track the experimental data across the whole hot working domain except at high strain rates (≥1 s⁻¹). The microstructure analysis illustrated that the deformation mechanisms at low strain rates (≤0.1 s⁻¹), where dynamic recrystallization occurred, were far different from those at high strain rates (≥1 s⁻¹), which presented bands of flow localization and cracking along grain boundaries.
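    The two goodness-of-fit measures quoted above, the correlation coefficient (R) and the average absolute relative error (AARE), can be computed directly from paired measured and predicted flow stresses; the values below are placeholders, not the alloy data.

      # Correlation coefficient (R) and average absolute relative error (AARE)
      # between measured and predicted flow stress (placeholder values, MPa).
      import numpy as np

      measured = np.array([120.0, 95.0, 80.0, 60.0, 45.0])
      predicted = np.array([118.0, 97.5, 78.0, 62.0, 44.0])

      r = np.corrcoef(measured, predicted)[0, 1]
      aare = np.mean(np.abs((measured - predicted) / measured)) * 100.0

      print(f"R = {r:.4f}, AARE = {aare:.2f}%")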
  Analytical solutions of hypersonic type IV shock - shock interactions

    NASA Astrophysics Data System (ADS)

    Frame, Michael John

    An analytical model has been developed to predict the effects of a type IV shock interaction at high Mach numbers. This interaction occurs when an impinging oblique shock wave intersects the most normal portion of a detached bow shock. The flowfield which develops is complicated and contains an embedded jet of supersonic flow, which may be unsteady. The jet impinges on the blunt body surface causing very high pressure and heating loads. Understanding this type of interaction is vital to the designers of cowl lips and leading edges on air-breathing hypersonic vehicles. This analytical model represents the first known attempt at predicting the geometry of the interaction explicitly, without knowing beforehand the jet dimensions, including the length of the transmitted shock where the jet originates. The model uses a hyperbolic equation for the bow shock and, by matching mass continuity, flow directions and pressure throughout the flowfield, a prediction of the interaction geometry can be derived. The model has been shown to agree well with the flowfield patterns and properties of experiments and CFD, but the prediction for where the peak pressure is located, and its value, can be significantly in error due to a lack of sophistication in the model of the jet fluid stagnation region. Therefore it is recommended that this region of the flowfield be modeled in more detail and that more accurate experimental and CFD measurements be used for validation. However, the analytical model has been shown to be a fast and economic prediction tool, suitable for preliminary design or for understanding the interaction's effects, including the basic physics of the interaction, such as the jet unsteadiness. The model has been used to examine a wide parametric space of possible interactions, including different Mach number, impinging shock strength and location, and cylinder radius. It has also been used to examine the interaction on power-law shaped blunt bodies, a possible candidate for hypersonic leading edges. The formation of vortices at the termination shock of the supersonic jet has been modeled using the analytical method. The vortices lead to deflections in the jet terminating flow, and the presence of the cylinder surface seems to cause the vortices to break off the jet, resulting in an oscillation in the jet flow.

  Mathematical and computational modeling simulation of solar drying systems

    USDA-ARS Scientific Manuscript database

    Mathematical modeling of solar drying systems has the primary aim of predicting the required drying time for a given commodity, dryer type, and environment. Both fundamental (Fickian diffusion) and semi-empirical drying models have been applied to the solar drying of a variety of agricultural commo…

  U.S. GODAE: Global Ocean Prediction With the HYbrid Coordinate Ocean Model (HYCOM)

    DTIC Science & Technology

    2009-06-01

    …the performance and application of eddy-resolving, real-time global- and basin-scale ocean prediction systems using the HYbrid Coordinate Ocean… prediction system outputs. In addition to providing real-time, eddy-resolving global- and basin-scale ocean prediction systems for the US Navy and NOAA, this…

  Electrical Resistance Based Damage Modeling of Multifunctional Carbon Fiber Reinforced Polymer Matrix Composites

    NASA Astrophysics Data System (ADS)

    Hart, Robert James

    In the current thesis, the 4-probe electrical resistance of carbon fiber-reinforced polymer (CFRP) composites is utilized as a metric for sensing low-velocity impact damage. A robust method has been developed for recovering the directionally dependent electrical resistivities using an experimental line-type 4-probe resistance method. Next, the concept of effective conducting thickness was uniquely applied in the development of a brand new point-type 4-probe method for applications with electrically anisotropic materials. An extensive experimental study was completed to characterize the 4-probe electrical resistance of CFRP specimens using both the traditional line-type and new point-type methods. Leveraging the concept of effective conducting thickness, a novel method was developed for building 4-probe electrical finite element (FE) models in COMSOL. The electrical models were validated against experimental resistance measurements, and the FE models demonstrated predictive capabilities when applied to CFRP specimens with varying thickness and layup. These new models demonstrated a significant improvement in accuracy compared to previous literature and could provide a framework for future advancements in FE modeling of electrically anisotropic materials. FE models were then developed in ABAQUS for evaluating the influence of prescribed localized damage on the 4-probe resistance. Experimental data were compiled on the impact response of various CFRP laminates and used in the development of quasi-static FE models for predicting the presence of impact-induced delamination. The simulation-based delamination predictions were then integrated into the electrical FE models for the purpose of studying the influence of realistic damage patterns on electrical resistance. When the size of the delamination damage was moderate compared to the electrode spacing, the electrical resistance increased by less than 1% due to the delamination damage. However, for a specimen with large delamination extending beyond the electrode locations, the oblique resistance increased by 30%. This result suggests that for damage sensing applications, the spacing of electrodes relative to the size of the delamination is important. Finally, CT image data were used to model 3-D void distributions, and the electrical response of such specimens was compared to models with no voids. As the void content increased, the electrical resistance increased non-linearly. The relationship between void content and electrical resistance was attributed to a combination of three factors: (i) size and shape, (ii) orientation, and (iii) distribution of voids. As a whole, the current thesis provides a comprehensive framework for developing predictive, resistance-based damage sensing models for CFRP laminates of various layup and thickness.

  Using the theory of reasoned action to predict organizational misbehavior

    PubMed

    Vardi, Yoav; Weitz, Ely

    2002-12-01

    A review of the organizational behavior and management literature on predicting work behavior indicated that most reported studies emphasize positive work outcomes, e.g., attachment, performance, and satisfaction, while job-related misbehaviors have received relatively less systematic research attention. Yet, forms of employee misconduct in organizations are pervasive and quite costly for both individuals and organizations. We selected two conceptual frameworks for the present investigation: Vardi and Wiener's model of organizational misbehavior and Fishbein and Ajzen's Theory of Reasoned Action. The latter views individual behavior as intentional, a function of rationally based attitudes toward the behavior and internalized normative pressures concerning such behavior. The former model posits that different (normative and instrumental) internal forces lead to the intention to engage in job-related misbehavior. In this paper we report a scenario-based quasi-experimental study especially designed to test the utility of the Theory of Reasoned Action in predicting employee intentions to engage in self-benefitting (Type S), organization-benefitting (Type O), or damaging (Type D) organizational misbehavior. Results support the Theory of Reasoned Action in predicting negative workplace behaviors. Both attitude and subjective norm are useful in explaining organizational misbehavior. We discuss some theoretical and methodological implications for the study of misbehavior intentions in organizations.

  An international model validation exercise on radionuclide transfer and doses to freshwater biota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yankovich, T. L.; Vives i Batlle, J.; Vives-Lynch, S.

    2010-06-09

    Under the International Atomic Energy Agency (IAEA)'s EMRAS (Environmental Modelling for Radiation Safety) program, activity concentrations of ⁶⁰Co, ⁹⁰Sr, ¹³⁷Cs and ³H in Perch Lake at Atomic Energy of Canada Limited's Chalk River Laboratories site were predicted in freshwater primary producers, invertebrates, fishes, herpetofauna and mammals using eleven modelling approaches. Comparison of predicted radionuclide concentrations in the different species types with measured values highlighted a number of areas where additional work and understanding is required to improve the predictions of radionuclide transfer. For some species, the differences could be explained by ecological factors such as trophic level or the influence of stable analogues. Model predictions were relatively poor for mammalian species and herpetofauna compared with measured values, partly due to a lack of relevant data. In addition, concentration ratios are sometimes under-predicted when derived from experiments performed under controlled laboratory conditions rather than conditions representative of other water bodies.

  Validation of catchment models for predicting land-use and climate change impacts. 2. Case study for a Mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.

    1996-02-01

    Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.

  Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.
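    A common recipe for proton-induced upset-rate prediction of the kind discussed above (assumed here, since the abstract does not give the exact procedure) is to fold the measured energy-dependent SEU cross section into an orbit-averaged proton flux spectrum and integrate over energy; both curves below are invented placeholders, not MPTB data.

      # Sketch: proton-induced upset rate as the energy integral of cross section x flux.
      import numpy as np

      energy = np.linspace(20, 500, 200)                  # proton energy, MeV
      # Weibull-like saturation of the per-bit SEU cross section (cm^2/bit), invented.
      sigma = 1e-14 * (1.0 - np.exp(-((energy - 20.0) / 60.0) ** 2))
      # Orbit-averaged differential proton flux (protons / (cm^2 s MeV)), invented.
      flux = 5.0 * np.exp(-energy / 80.0)

      rate_per_bit = np.trapz(sigma * flux, energy)        # upsets / (bit s)
      rate_per_day_16Mb = rate_per_bit * 16e6 * 86400.0    # upsets/day for a 16 Mb device
      print(f"{rate_per_day_16Mb:.3e} upsets/day")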
  Thematic and spatial resolutions affect model-based predictions of tree species distribution

    PubMed; PubMed Central

    Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei

    2013-01-01

    Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction, in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different combinations of thematic resolution (different numbers of land types) and spatial resolution, and then statistically examined the differences of species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution. PMID: 23861828

  Mathematical modeling and experimental validation of the spatial distribution of boron in the root of Arabidopsis thaliana identify high boron accumulation in the tip and predict a distinct root tip uptake function

    PubMed; PubMed Central

    Shimotohno, Akie; Sotta, Naoyuki; Sato, Takafumi; De Ruvo, Micol; Marée, Athanasius F. M.; Grieneisen, Verônica A.; Fujiwara, Toru

    2015-04-01

    Boron, an essential micronutrient, is transported in roots of Arabidopsis thaliana mainly by two different types of transporters, BORs and NIPs (nodulin26-like intrinsic proteins). Both are plasma membrane localized, but have distinct transport properties and patterns of cell type-specific accumulation with different polar localizations, which are likely to affect boron distribution. Here, we used mathematical modeling and experimental determination to address boron distributions in the root. A computational model of the root is created at the cellular level, describing the boron transporters as observed experimentally. Boron is allowed to diffuse into roots, in cells and cell walls, and to be transported over plasma membranes, reflecting the properties of the different transporters. The model predicts that a region around the quiescent center has a higher concentration of soluble boron than other portions. To evaluate this prediction experimentally, we determined the boron distribution in roots using laser ablation-inductively coupled plasma-mass spectrometry. The analysis indicated that the boron concentration is highest near the tip and is lower in the more proximal region of the meristem zone, similar to the pattern of soluble boron distribution predicted by the model. Our model also predicts that upward boron flux does not continuously increase from the root tip toward the mature region, indicating that boron taken up in the root tip is not efficiently transported to shoots. This suggests that root tip-absorbed boron is probably used for local root growth, and that instead it is the more mature root regions which have a greater role in transporting boron toward the shoots. © The Author 2015. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. PMID: 25670713

  Computational Simulation of the High Strain Rate Tensile Response of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2002-01-01

    A research program is underway to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. Under these types of loading conditions, the material response can be highly strain rate dependent and nonlinear. State variable constitutive equations based on a viscoplasticity approach have been developed to model the deformation of the polymer matrix. The constitutive equations are then combined with a mechanics of materials based micromechanics model which utilizes fiber substructuring to predict the effective mechanical and thermal response of the composite. To verify the analytical model, tensile stress-strain curves are predicted for a representative composite over strain rates ranging from around 1 × 10⁻⁵/s to approximately 400/s. The analytical predictions compare favorably to experimentally obtained values both qualitatively and quantitatively. Effective elastic and thermal constants are predicted for another composite and compared to finite element results.

  The Mechanisms for Within-Host Influenza Virus Control Affect Model-Based Assessment and Prediction of Antiviral Treatment

    PubMed Central

    Cao, Pengxing

    2017-01-01

    Models of within-host influenza viral dynamics have contributed to an improved understanding of viral dynamics and antiviral effects over the past decade. Existing models can be classified into two broad types based on the mechanism of viral control: models utilising target cell depletion to limit the progress of infection and models which rely on timely activation of innate and adaptive immune responses to control the infection. In this paper, we compare how two exemplar models based on these different mechanisms behave and investigate how the mechanistic difference affects the assessment and prediction of antiviral treatment. We find that the assumed mechanism for viral control strongly influences the predicted outcomes of treatment. Furthermore, we observe that for the target cell-limited model the assumed drug efficacy strongly influences the predicted treatment outcomes. The area under the viral load curve is identified as the most reliable predictor of drug efficacy, and is robust to model selection. Moreover, with support from previous clinical studies, we suggest that the target cell-limited model is more suitable for modelling in vitro assays or infection in some immunocompromised/immunosuppressed patients, while the immune response model is preferred for predicting the infection/antiviral effect in immunocompetent animals/patients. PMID: 28933757
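    The target cell-limited model referred to above is commonly written as a three-compartment ODE system for target cells T, infected cells I, and virus V; the sketch below integrates one such system and reports the area under the log viral load curve, the quantity identified above as the most reliable predictor of drug efficacy. Parameter values and the way antiviral efficacy is applied are illustrative assumptions, not the paper's fitted model.

      # Sketch of a target cell-limited influenza model with an antiviral of efficacy eps
      # reducing viral production; parameters are illustrative, not fitted values.
      import numpy as np
      from scipy.integrate import odeint

      def target_cell_limited(y, t, beta, delta, p, c, eps):
          T, I, V = y
          dT = -beta * T * V
          dI = beta * T * V - delta * I
          dV = (1.0 - eps) * p * I - c * V   # treatment scales down viral production
          return [dT, dI, dV]

      t = np.linspace(0, 10, 1000)                      # days
      y0 = [4e8, 0.0, 10.0]                             # initial target cells, infected cells, virus
      params = (2.7e-5, 4.0, 1.2e-2, 3.0)               # beta, delta, p, c (assumed)

      for eps in (0.0, 0.9):
          sol = odeint(target_cell_limited, y0, t, args=params + (eps,))
          V = np.clip(sol[:, 2], 1.0, None)             # floor to keep the log finite
          auc = np.trapz(np.log10(V), t)                # area under the log10 viral load curve
          print(f"eps = {eps:.1f}: AUC of log10 V = {auc:.1f}")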
  Predicting silicon pore optics

    NASA Astrophysics Data System (ADS)

    Vacanti, Giuseppe; Barriére, Nicolas; Bavdaz, Marcos; Chatbi, Abdelhakim; Collon, Maximilien; Dekker, Danielle; Girou, David; Günther, Ramses; van der Hoeven, Roy; Landgraf, Boris; Sforzini, Jessica; Vervest, Mark; Wille, Eric

    2017-09-01

    Continuing improvement of Silicon Pore Optics (SPO) calls for regular extension and validation of the tools used to model and predict their X-ray performance. In this paper we present an updated geometrical model for the SPO optics and describe how we make use of the surface metrology collected during each of the SPO manufacturing runs. The new geometrical model affords the user a finer degree of control over the mechanical details of the SPO stacks, while a standard interface has been developed to make use of any type of metrology that can return changes in the local surface normal of the reflecting surfaces. Comparisons between the predicted and actual performance of sample optics will be shown and discussed.

  The prediction of type 1 diabetes by multiple autoantibody levels and their incorporation into an autoantibody risk score in relatives of type 1 diabetic patients

    PubMed; PubMed Central

    Sosenko, Jay M; Skyler, Jay S; Palmer, Jerry P; Krischer, Jeffrey P; Yu, Liping; Mahon, Jeffrey; Beam, Craig A; Boulware, David C; Rafkin, Lisa; Schatz, Desmond; Eisenbarth, George

    2013-09-01

    We assessed whether a risk score that incorporates levels of multiple islet autoantibodies could enhance the prediction of type 1 diabetes (T1D). TrialNet Natural History Study participants (n = 784) were tested for three autoantibodies (GADA, IA-2A, and mIAA) at their initial screening. Samples from those positive for at least one autoantibody were subsequently tested for ICA and ZnT8A. An autoantibody risk score (ABRS) was developed from a proportional hazards model that combined autoantibody levels from each autoantibody along with their designations of positivity and negativity. The ABRS was strongly predictive of T1D (hazard ratio [with 95% CI] 2.72 [2.23-3.31], P < 0.001). Receiver operating characteristic curve areas (with 95% CI) for the ABRS revealed good predictability (0.84 [0.78-0.90] at 2 years, 0.81 [0.74-0.89] at 3 years, P < 0.001 for both). The composite of levels from the five autoantibodies was predictive of T1D before and after an adjustment for the positivity or negativity of autoantibodies (P < 0.001). The findings were almost identical when ICA was excluded from the risk score model. The combination of the ABRS and the previously validated Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) predicted T1D more accurately (0.93 [0.88-0.98] at 2 years, 0.91 [0.83-0.99] at 3 years) than either the DPTRS or the ABRS alone (P ≤ 0.01 for all comparisons). These findings show the importance of considering autoantibody levels in assessing the risk of T1D. Moreover, levels of multiple autoantibodies can be incorporated into an ABRS that accurately predicts T1D. PMID: 23818528
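    A proportional hazards risk score of the kind described above can be sketched with the lifelines package (assumed here for illustration): autoantibody levels and positivity flags enter a Cox model, and the fitted linear predictor serves as the risk score. Column names and data are placeholders, not TrialNet data.

      # Sketch: Cox proportional hazards model combining autoantibody levels and a
      # positivity flag into a single risk score (placeholder data, not the study's).
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(1)
      n = 300
      df = pd.DataFrame({
          "gada_level": rng.lognormal(0.0, 1.0, n),
          "ia2a_level": rng.lognormal(0.0, 1.0, n),
          "miaa_level": rng.lognormal(0.0, 1.0, n),
      })
      df["gada_pos"] = (df["gada_level"] > 1.5).astype(int)
      df["years_followup"] = rng.exponential(4.0, n)
      df["progressed_to_t1d"] = rng.integers(0, 2, n)

      cph = CoxPHFitter()
      cph.fit(df, duration_col="years_followup", event_col="progressed_to_t1d")

      # The log partial hazard (linear predictor) acts as the autoantibody risk score.
      df["abrs"] = cph.predict_log_partial_hazard(df)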
  Comorbid Anxiety and Depression and Their Impact on Cardiovascular Disease in Type 2 Diabetes: The Fremantle Diabetes Study Phase II

    PubMed

    Bruce, David G; Davis, Wendy A; Dragovic, Milan; Davis, Timothy M E; Starkstein, Sergio E

    2016-10-01

    The aims were to determine whether anxious depression, defined by latent class analysis (LCA), predicts cardiovascular outcomes in type 2 diabetes and to compare the predictive power of anxious depression with Diagnostic and Statistical Manual Versions IV and 5 (DSM-IV/5) categories of depression and generalized anxiety disorder (GAD). This was a prospective observational study of 1,337 participants with type 2 diabetes. Baseline assessment used the 9-item Patient Health Questionnaire and the GAD Scale, with LCA-defined groups having minor or major anxious depression based on anxiety and depression symptoms. Cox modeling was used to compare the independent impact of (1) LCA anxious depression, (2) DSM-IV/5 depression, and (3) GAD on incident cardiovascular events and deaths after 4 years. LCA minor and major anxious depression were present in 21.9 and 7.8% of participants, respectively, DSM-IV/5 minor and major depression in 6.2 and 6.1%, respectively, and GAD in 4.8%. There were 110 deaths, 31 cardiovascular deaths, and 199 participants had incident cardiovascular events. In adjusted models, minor anxious depression (hazard ratio (95% confidence intervals): 1.70 (1.15-2.50)) and major anxious depression (1.90 (1.11-3.25)) predicted incident cardiovascular events, and major anxious depression also predicted cardiovascular mortality (4.32 (1.35-13.86)). By comparison, incident cardiovascular events were predicted by DSM-IV/5 major depression (2.10 (1.22-3.62)) only, and cardiovascular mortality was predicted by both DSM-IV/5 major depression (3.56 (1.03-12.35)) and GAD (5.92 (1.84-19.08)). LCA-defined anxious depression is more common than DSM-IV/5 categories and is a strong predictor of cardiovascular outcomes in type 2 diabetes. These data suggest that this diagnostic scheme has predictive validity and clinical relevance. © 2016 Wiley Periodicals, Inc.

  Analyzing and Predicting User Participations in Online Health Communities: A Social Support Perspective

    PubMed; PubMed Central

    Wang, Xi; Zhao, Kang; Street, Nick

    2017-04-24

    Online health communities (OHCs) have become a major source of social support for people with health problems. Members of OHCs interact online with similar peers to seek, receive, and provide different types of social support, such as informational support, emotional support, and companionship. As active participations in an OHC are beneficial to both the OHC and its users, it is important to understand factors related to users' participations and predict user churn for user retention efforts. This study aimed to analyze OHC users' Web-based interactions, reveal which types of social support activities are related to users' participation, and predict whether and when a user will churn from the OHC. We collected a large-scale dataset from a popular OHC for cancer survivors. We used text mining techniques to decide what kinds of social support each post contained. We illustrated how we built text classifiers for 5 different social support categories: seeking informational support (SIS), providing informational support (PIS), seeking emotional support (SES), providing emotional support (PES), and companionship (COM). We conducted survival analysis to identify types of social support related to users' continued participation. Using supervised machine learning methods, we developed a predictive model for user churn. Users' behaviors to PIS, SES, and COM had hazard ratios significantly lower than 1 (0.948, 0.972, and 0.919, respectively) and were indicative of continued participations in the OHC. The churn prediction model based on social support activities offers accurate predictions on whether and when a user will leave the OHC. Detecting different types of social support activities via text mining contributes to better understanding and prediction of users' participations in an OHC. The outcome of this study can help the management and design of a sustainable OHC via more proactive and effective user retention strategies. ©Xi Wang, Kang Zhao, Nick Street. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 24.04.2017. PMID: 28438725
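    The pipeline described above, text classifiers for social support categories feeding a participation/churn model, can be sketched at toy scale as follows; the example posts, labels, feature layout, and use of scikit-learn are all illustrative assumptions rather than the authors' implementation.

      # Toy sketch: classify posts into a social-support category, then use per-user
      # activity counts as features for churn prediction (illustrative only).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # 1) Text classifier for one support category (e.g., providing emotional support).
      posts = ["stay strong, we are with you", "what dose did your doctor suggest",
               "sending you hugs today", "has anyone tried this scan type"]
      is_pes = [1, 0, 1, 0]                                # 1 = providing emotional support
      clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(posts, is_pes)

      # 2) Churn model: counts of each support activity per user over an observation window.
      user_features = [[5, 2, 0, 1, 3],   # SIS, PIS, SES, PES, COM counts
                       [0, 0, 1, 0, 0],
                       [2, 6, 1, 4, 5]]
      churned = [0, 1, 0]
      churn_model = LogisticRegression().fit(user_features, churned)
      print(churn_model.predict_proba(user_features)[:, 1])  # churn probabilities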
As active participations in an OHC are beneficial to both the OHC and its users, it is important to understand factors related to users’ participations and predict user churn for user retention efforts. Objective This study aimed to analyze OHC users’ Web-based interactions, reveal which types of social support activities are related to users’ participation, and predict whether and when a user will churn from the OHC. Methods We collected a large-scale dataset from a popular OHC for cancer survivors. We used text mining techniques to decide what kinds of social support each post contained. We illustrated how we built text classifiers for 5 different social support categories: seeking informational support (SIS), providing informational support (PIS), seeking emotional support (SES), providing emotional support (PES), and companionship (COM). We conducted survival analysis to identify types of social support related to users’ continued participation. Using supervised machine learning methods, we developed a predictive model for user churn. Results Users’ behaviors to PIS, SES, and COM had hazard ratios significantly lower than 1 (0.948, 0.972, and 0.919, respectively) and were indicative of continued participations in the OHC. The churn prediction model based on social support activities offers accurate predictions on whether and when a user will leave the OHC. Conclusions Detecting different types of social support activities via text mining contributes to better understanding and prediction of users’ participations in an OHC. The outcome of this study can help the management and design of a sustainable OHC via more proactive and effective user retention strategies. PMID:28438725</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/33676','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/33676"><span>Stage-structured matrix models for organisms with non-geometric development times</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Andrew Birt; Richard M. Feldman; David M. Cairns; Robert N. Coulson; Maria Tchakerian; Weimin Xi; James M. Guldin</p> <p>2009-01-01</p> <p>Matrix models have been used to model population growth of organisms for many decades. They are popular because of both their conceptual simplicity and their computational efficiency. For some types of organisms they are relatively accurate in predicting population growth; however, for others the matrix approach does not adequately model...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006WRR....42.3408K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006WRR....42.3408K"><span>Bayesian analysis of input uncertainty in hydrological modeling: 2. Application</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.</p> <p>2006-03-01</p> <p>The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) French Broad River and (2) Potomac basins. 
It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging highly dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.

Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

NASA Astrophysics Data System (ADS)

Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

2017-12-01

Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes. Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: Uncertainty in HETT is relatively small for early times and increases with transit times. Uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution. Introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT, however an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
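The variance-versus-bias distinction above can be illustrated with a toy Monte Carlo sketch: two simplified "model structures" are sampled against a known reference transit time, standing in for the virtual-reality comparison. This is not the HydroGeoSphere workflow; all numbers are invented.

```python
# Toy illustration: compare predictive variance and bias of two simplified
# model structures against a known "virtual reality" transit time.
import numpy as np

rng = np.random.default_rng(1)
true_transit_time = 36.0          # hours, synthetic reference value

def simulate(model_bias, spread, n=2000):
    """Monte Carlo predictions from one simplified model structure."""
    return true_transit_time + model_bias + rng.normal(0.0, spread, n)

cases = {
    "good structure, sparse data": simulate(model_bias=1.0, spread=8.0),
    "poor structure, more data":   simulate(model_bias=9.0, spread=3.0),
}
for name, preds in cases.items():
    bias = preds.mean() - true_transit_time
    print(f"{name}: variance={preds.var():.1f}, bias={bias:.1f}")
# More data can shrink the variance, but the bias of a poor structure remains.
```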
Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models

PubMed Central

2017-01-01

We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder-decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927
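A minimal encoder-decoder sketch in the spirit of this record, written in PyTorch: tokenized product SMILES go into a recurrent encoder, and a recurrent decoder is trained with teacher forcing to emit reactant tokens. Vocabulary size, hidden sizes, and the random token data are placeholders, not the paper's architecture or dataset.

```python
# Minimal GRU encoder-decoder sketch for a sequence-to-sequence mapping
# (product tokens -> reactant tokens). Sizes and data are toy placeholders.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 64, 32, 128   # toy vocabulary and layer sizes

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, product_tokens, reactant_inputs):
        _, h = self.encoder(self.embed(product_tokens))       # encode product
        dec_out, _ = self.decoder(self.embed(reactant_inputs), h)
        return self.out(dec_out)                               # per-step logits

model = Seq2Seq()
product = torch.randint(0, VOCAB, (8, 40))     # batch of tokenized products
reactant = torch.randint(0, VOCAB, (8, 45))    # tokenized reactant targets

logits = model(product, reactant[:, :-1])      # teacher forcing (shifted input)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB),
                             reactant[:, 1:].reshape(-1))
loss.backward()
```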
A dual-porosity reactive-transport model of off-axis hydrothermal systems

NASA Astrophysics Data System (ADS)

Farahat, N. X.; Abbot, D. S.; Archer, D. E.

2017-12-01

We built a dual-porosity reactive-transport 2D numerical model of off-axis pillow basalt alteration. An "outer chamber" full of porous glassy material supports significant seawater flushing, and an "inner chamber", which represents the more crystalline interior of a pillow, supports diffusive alteration. Hydrothermal fluids in the two chambers interact, and the two chambers are coupled to 2D flows. In a few million years of low-temperature alteration, the dual-porosity model predicts progressive stages of alteration that have been observed in drilled crust. A single-porosity model, with all else being equal, does not predict alteration stages as well. The dual-chamber model also does a better job than the single-chamber model at predicting the types of minerals expected in off-axis environments. We validate the model's ability to reproduce observations by configuring it to represent a thoroughly-studied transect of the Juan de Fuca Ridge eastern flank.

In silico platform for predicting and initiating β-turns in a protein at desired locations.

PubMed

Singh, Harinder; Singh, Sandeep; Raghava, Gajendra P S

2015-05-01

Numerous studies have been performed for analysis and prediction of β-turns in a protein. This study focuses on analyzing, predicting, and designing of β-turns to understand the preference of amino acids in β-turn formation. We analyzed around 20,000 PDB chains to understand the preference of residues or pair of residues at different positions in β-turns. Based on the results, a propensity-based method has been developed for predicting β-turns with an accuracy of 82%. We introduced a new approach entitled "Turn level prediction method," which predicts the complete β-turn rather than focusing on the residues in a β-turn. Finally, we developed BetaTPred3, a Random forest based method for predicting β-turns by utilizing various features of four residues present in β-turns. The BetaTPred3 achieved an accuracy of 79% with 0.51 MCC that is comparable or better than existing methods on BT426 dataset. Additionally, models were developed to predict β-turn types with better performance than other methods available in the literature. In order to improve the quality of prediction of turns, we developed prediction models on a large and latest dataset of 6376 nonredundant protein chains. Based on this study, a web server has been developed for prediction of β-turns and their types in proteins. This web server also predicts minimum number of mutations required to initiate or break a β-turn in a protein at specified location of a protein. © 2015 Wiley Periodicals, Inc.
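A sketch of the random-forest idea behind a BetaTPred3-style predictor: classify whether a 4-residue window forms a β-turn from per-residue features. The features here are plain one-hot encodings and the labels are random placeholders; the actual method uses richer sequence-derived features.

```python
# Sketch: random forest over 4-residue windows (turn vs non-turn).
# One-hot residue features and random labels are placeholders only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(2)

def encode(window):
    """One-hot encode a 4-residue window into an 80-element vector."""
    vec = np.zeros(4 * len(AA))
    for i, res in enumerate(window):
        vec[i * len(AA) + AA.index(res)] = 1.0
    return vec

windows = ["".join(rng.choice(list(AA), 4)) for _ in range(500)]
X = np.array([encode(w) for w in windows])
y = rng.integers(0, 2, len(windows))          # 1 = window forms a beta-turn

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random labels
```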
Sport type and interpersonal and intrapersonal predictors of body dissatisfaction in high school female sport participants.

PubMed

Karr, Trisha M; Davidson, Denise; Bryant, Fred B; Balague, Gloria; Bohnert, Amy M

2013-03-01

Through multiple group structural equation modeling analyses, path models were used to test the predictive effects of sport type and both interpersonal (i.e., mothers' body dissatisfaction, family dynamics) and intrapersonal factors (i.e., athletic self-efficacy, body mass index [BMI]) on high school female sport participants' (N=627) body dissatisfaction. Sport types were classified as esthetic/lean (i.e., gymnastics), non-esthetic/lean (i.e., cross-country), or non-esthetic/non-lean (i.e., softball). Most participants reported low body dissatisfaction, and body dissatisfaction did not differ across sport types. Nevertheless, mothers' body dissatisfaction was positively associated with daughters' body dissatisfaction for non-esthetic/lean and non-esthetic/non-lean sport participants, and high family cohesion was predictive of body dissatisfaction among non-esthetic/lean sport participants. Across sport types, higher BMI was associated with greater body dissatisfaction, whereas greater athletic self-efficacy was associated with lower body dissatisfaction. These findings highlight the complex relationship between interpersonal and intrapersonal factors and body dissatisfaction in adolescent female sport participants. Copyright © 2012. Published by Elsevier Ltd.

Wildfire ignition-distribution modelling: a comparative study in the Huron-Manistee National Forest, Michigan, USA

Treesearch

Avi Bar Massada; Alexandra D. Syphard; Susan I. Stewart; Volker C. Radeloff

2012-01-01

Wildfire ignition distribution models are powerful tools for predicting the probability of ignitions across broad areas, and identifying their drivers. Several approaches have been used for ignition-distribution modelling, yet the performance of different model types has not been compared. This is unfortunate, given that conceptually similar species-distribution models...

Stress enhanced calcium kinetics in a neuron.

PubMed

Kant, Aayush; Bhandakkar, Tanmay K; Medhekar, Nikhil V

2018-02-01

Accurate modeling of the mechanobiological response of a Traumatic Brain Injury is beneficial toward its effective clinical examination, treatment and prevention. Here, we present a stress history-dependent non-spatial kinetic model to predict the microscale phenomena of secondary insults due to accumulation of excess calcium ions (Ca2+) induced by the macroscale primary injuries. The model is able to capture the experimentally observed increase and subsequent partial recovery of intracellular Ca2+ concentration in response to various types of mechanical impulses. We further establish the accuracy of the model by comparing our predictions with key experimental observations.
Testing the Predictive Power of Coulomb Stress on Aftershock Sequences

NASA Astrophysics Data System (ADS)

Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.

2009-12-01

Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.

Enhanced propagation modeling of directional aviation noise: A hybrid parabolic equation-fast field program method

NASA Astrophysics Data System (ADS)

Rosenbaum, Joyce E.

2011-12-01

Commercial air traffic is anticipated to increase rapidly in the coming years. The impact of aviation noise on communities surrounding airports is, therefore, a growing concern. Accurate prediction of noise can help to mitigate the impact on communities and foster smoother integration of aerospace engineering advances. The problem of accurate sound level prediction requires careful inclusion of all mechanisms that affect propagation, in addition to correct source characterization. Terrain, ground type, meteorological effects, and source directivity can have a substantial influence on the noise level. Because they are difficult to model, these effects are often included only by rough approximation. This dissertation presents a model designed for sound propagation over uneven terrain, with mixed ground type and realistic meteorological conditions. The model is a hybrid of two numerical techniques: the parabolic equation (PE) and fast field program (FFP) methods, which allow for physics-based inclusion of propagation effects and ensure the low frequency content, a factor in community impact, is predicted accurately.
Extension of the hybrid model to a pseudo-three-dimensional representation allows it to produce aviation noise contour maps in the standard form. In order for the model to correctly characterize aviation noise sources, a method of representing arbitrary source directivity patterns was developed for the unique form of the parabolic equation starting field. With this advancement, the model can represent broadband, directional moving sound sources, traveling along user-specified paths. This work was prepared for possible use in the research version of the sound propagation module in the Federal Aviation Administration's new standard predictive tool.

Propagating Cell-Membrane Waves Driven by Curved Activators of Actin Polymerization

PubMed Central

Peleg, Barak; Disanza, Andrea; Scita, Giorgio; Gov, Nir

2011-01-01

Cells exhibit propagating membrane waves which involve the actin cytoskeleton. One type of such membranal waves are Circular Dorsal Ruffles (CDR) which are related to endocytosis and receptor internalization. Experimentally, CDRs have been associated with membrane bound activators of actin polymerization of concave shape. We present experimental evidence for the localization of convex membrane proteins in these structures, and their insensitivity to inhibition of myosin II contractility in immortalized mouse embryo fibroblasts cell cultures. These observations lead us to propose a theoretical model which explains the formation of these waves due to the interplay between complexes that contain activators of actin polymerization and membrane-bound curved proteins of both types of curvature (concave and convex). Our model predicts that the activity of both types of curved proteins is essential for sustaining propagating waves, which are abolished when one type of curved activator is removed. Within this model waves are initiated when the level of actin polymerization induced by the curved activators is higher than some threshold value, which allows the cell to control CDR formation. We demonstrate that the model can explain many features of CDRs, and give several testable predictions. This work demonstrates the importance of curved membrane proteins in organizing the actin cytoskeleton and cell shape. PMID:21533032

3D Finite Element Analysis of Particle-Reinforced Aluminum

NASA Technical Reports Server (NTRS)

Shen, H.; Lissenden, C. J.

2002-01-01

Deformation in particle-reinforced aluminum has been simulated using three distinct types of finite element model: a three-dimensional repeating unit cell, a three-dimensional multi-particle model, and two-dimensional multi-particle models. The repeating unit cell model represents a fictitious periodic cubic array of particles.
The 3D multi-particle (3D-MP) model represents randomly placed and oriented particles. The 2D generalized plane strain multi-particle models were obtained from planar sections through the 3D-MP model. These models were used to study the tensile macroscopic stress-strain response and the associated stress and strain distributions in an elastoplastic matrix. The results indicate that the 2D model having a particle area fraction equal to the particle representative volume fraction of the 3D models predicted the same macroscopic stress-strain response as the 3D models. However, there are fluctuations in the particle area fraction in a representative volume element. As expected, predictions from 2D models having different particle area fractions do not agree with predictions from 3D models. More importantly, it was found that the microscopic stress and strain distributions from the 2D models do not agree with those from the 3D-MP model. Specifically, the plastic strain distribution predicted by the 2D model is banded along lines inclined at 45 deg from the loading axis while the 3D model prediction is not. Additionally, the triaxial stress and maximum principal stress distributions predicted by 2D and 3D models do not agree. Thus, it appears necessary to use a multi-particle 3D model to accurately predict material responses that depend on local effects, such as strain-to-failure, fracture toughness, and fatigue life.

Characterization of tapered slot antenna feeds and feed arrays

NASA Technical Reports Server (NTRS)

Kim, Young-Sik; Yngvesson, K. Sigfrid

1990-01-01

A class of feed antennas and feed antenna arrays used in the focal plane of paraboloid reflectors and exhibiting higher than normal levels of cross-polarized radiation in the diagonal planes is addressed. A model which allows prediction of element gain and aperture efficiency of the feed/reflector system is presented. The predictions are in good agreement with experimental results. Tapered slot antenna (TSA) elements are used as an example of an element of this type. It is shown that TSA arrays used in multibeam systems with small beam spacings are competitive in terms of aperture efficiency with other, more standard types of arrays incorporating waveguide type elements.

Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

PubMed

Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A

2014-04-01

Wildlife data gathered by different monitoring techniques are often combined to estimate animal density.
However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
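A sketch of the kind of consistency check described here: a generalized linear model relating camera detections to the number of nearby telemetry relocations, fitted with statsmodels. The data are simulated and the variable names are illustrative only, not the study's dataset.

```python
# Sketch: does the count of telemetry relocations near a camera predict that
# camera's detection probability? Simulated data, illustrative column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400  # camera-week records
relocs = rng.poisson(3.0, n)                      # telemetry fixes within 500 m
p_detect = 1 / (1 + np.exp(-(-2.0 + 0.4 * relocs)))
df = pd.DataFrame({
    "detected": rng.binomial(1, p_detect),        # photo-capture that week
    "relocations_500m": relocs,
})

model = smf.glm("detected ~ relocations_500m", data=df,
                family=sm.families.Binomial()).fit()
print(model.summary().tables[1])   # a positive slope -> consistent data types
```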
rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, S; Sun, Z; Wang, S</p> <p>1996-11-01</p> <p>A prospective follow-up study of 539 advanced gastric carcinoma patients after resection was undertaken between 1 January 1980 and 31 December 1989, with a follow-up rate of 95.36%. A multivariate analysis of possible factors influencing survival of these patients was performed, and their predicting models of survival rates was established by Cox proportional hazard model. The results showed that the major significant prognostic factors influencing survival of these patients were rate and station of lymph node metastases, type of operation, hepatic metastases, size of tumor, age and location of tumor. The most important factor was the rate of lymph node metastases. According to their regression coefficients, the predicting value (PV) of each patient was calculated, then all patients were divided into five risk groups according to PV, their predicting models of survival rates after resection were established in groups. The goodness-fit of estimated predicting models of survival rates were checked by fitting curve and residual plot, and the estimated models tallied with the actual situation. The results suggest that the patients with advanced gastric cancer after resection without lymph node metastases and hepatic metastases had a better prognosis, and their survival probability may be predicted according to the predicting model of survival rates.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25542080','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25542080"><span>Estimating community health needs against a Triple Aim background: What can we learn from current predictive risk models?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Elissen, Arianne M J; Struijs, Jeroen N; Baan, Caroline A; Ruwaard, Dirk</p> <p>2015-05-01</p> <p>To support providers and commissioners in accurately assessing their local populations' health needs, this study produces an overview of Dutch predictive risk models for health care, focusing specifically on the type, combination and relevance of included determinants for achieving the Triple Aim (improved health, better care experience, and lower costs). We conducted a mixed-methods study combining document analyses, interviews and a Delphi study. Predictive risk models were identified based on a web search and expert input. Participating in the study were Dutch experts in predictive risk modelling (interviews; n=11) and experts in healthcare delivery, insurance and/or funding methodology (Delphi panel; n=15). Ten predictive risk models were analysed, comprising 17 unique determinants. Twelve were considered relevant by experts for estimating community health needs. Although some compositional similarities were identified between models, the combination and operationalisation of determinants varied considerably. Existing predictive risk models provide a good starting point, but optimally balancing resources and targeting interventions on the community level will likely require a more holistic approach to health needs assessment. 
Estimating community health needs against a Triple Aim background: What can we learn from current predictive risk models?

PubMed

Elissen, Arianne M J; Struijs, Jeroen N; Baan, Caroline A; Ruwaard, Dirk

2015-05-01

To support providers and commissioners in accurately assessing their local populations' health needs, this study produces an overview of Dutch predictive risk models for health care, focusing specifically on the type, combination and relevance of included determinants for achieving the Triple Aim (improved health, better care experience, and lower costs). We conducted a mixed-methods study combining document analyses, interviews and a Delphi study. Predictive risk models were identified based on a web search and expert input. Participating in the study were Dutch experts in predictive risk modelling (interviews; n=11) and experts in healthcare delivery, insurance and/or funding methodology (Delphi panel; n=15). Ten predictive risk models were analysed, comprising 17 unique determinants. Twelve were considered relevant by experts for estimating community health needs. Although some compositional similarities were identified between models, the combination and operationalisation of determinants varied considerably. Existing predictive risk models provide a good starting point, but optimally balancing resources and targeting interventions on the community level will likely require a more holistic approach to health needs assessment. Development of additional determinants, such as measures of people's lifestyle and social network, may require policies pushing the integration of routine data from different (healthcare) sources. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

Calibration and verification of a rainfall-runoff model and a runoff-quality model for several urban basins in the Denver metropolitan area, Colorado

USGS Publications Warehouse

Lindner-Lunsford, J. B.; Ellis, S.R.

1984-01-01

The U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model--Version II was calibrated and verified for five urban basins in the Denver metropolitan area. Land-use types in the basins were light commercial, multifamily housing, single-family housing, and a shopping center. The overall accuracy of model predictions of peak flows and runoff volumes was about 15 percent for storms with rainfall intensities of less than 1 inch per hour and runoff volume of greater than 0.01 inch. Predictions generally were unsatisfactory for storms having a rainfall intensity of more than 1 inch per hour, or runoff of 0.01 inch or less. The Distributed Routing Rainfall-Runoff Model-Quality, a multievent runoff-quality model developed by the U.S. Geological Survey, was calibrated and verified on four basins. The model was found to be most useful in the prediction of seasonal loads of constituents in the runoff resulting from rainfall. The model was not very accurate in the prediction of runoff loads of individual constituents. (USGS)
A Point System for Predicting 10-Year Risk of Developing Type 2 Diabetes Mellitus in Japanese Men: Aichi Workers' Cohort Study.

PubMed

Yatsuya, Hiroshi; Li, Yuanying; Hirakawa, Yoshihisa; Ota, Atsuhiko; Matsunaga, Masaaki; Haregot, Hilawe Esayas; Chiang, Chifa; Zhang, Yan; Tamakoshi, Koji; Toyoshima, Hideaki; Aoyama, Atsuko

2018-03-17

Relatively little evidence exists for type 2 diabetes mellitus (T2DM) prediction models from long-term follow-up studies in East Asians. This study aims to develop a point-based prediction model for 10-year risk of developing T2DM in middle-aged Japanese men. We followed 3,540 male participants of Aichi Workers' Cohort Study, who were aged 35-64 years and were free of diabetes in 2002, until March 31, 2015. Baseline age, body mass index (BMI), smoking status, alcohol consumption, regular exercise, medication for dyslipidemia, diabetes family history, and blood levels of triglycerides (TG), high density lipoprotein cholesterol (HDLC) and fasting blood glucose (FBG) were examined using Cox proportional hazard model. Variables significantly associated with T2DM in univariable models were simultaneously entered in a multivariable model for determination of the final model using backward variable selection. Performance of an existing T2DM model when applied to the current dataset was compared to that obtained in the present study's model. During the median follow-up of 12.2 years, 342 incident T2DM cases were documented. The prediction system using points assigned to age, BMI, smoking status, diabetes family history, and TG and FBG showed reasonable discrimination (c-index: 0.77) and goodness-of-fit (Hosmer-Lemeshow test, P = 0.22). The present model outperformed the previous one in the present subjects. The point system, once validated in the other populations, could be applied to middle-aged Japanese male workers to identify those at high risk of developing T2DM. In addition, further investigation is also required to examine whether the use of this system will reduce incidence.
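A point system of this kind is typically built by scaling the fitted regression coefficients by a reference coefficient and rounding to integers, so a risk score becomes a simple sum of points. The sketch below illustrates that conversion with made-up coefficients and categories; it does not reproduce the cohort's published point values.

```python
# Sketch: convert Cox log-hazard-ratio coefficients into integer risk points
# and score an example individual. All numbers are illustrative placeholders.
betas = {                           # hypothetical log hazard ratios
    "age 45-54":                0.45,
    "age 55-64":                0.90,
    "BMI >= 25":                0.60,
    "current smoker":           0.35,
    "family history of DM":     0.70,
    "high triglycerides":       0.30,
    "impaired fasting glucose": 1.40,
}
reference = 0.45                    # points are expressed in units of this beta

points = {category: round(beta / reference) for category, beta in betas.items()}
print(points)

# Example profile: 58 years old, BMI 27, family history, elevated FBG.
profile = ["age 55-64", "BMI >= 25", "family history of DM",
           "impaired fasting glucose"]
print("total points:", sum(points[p] for p in profile))
```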
Evaluation of anthropogenic influence in probabilistic forecasting of coastal change

NASA Astrophysics Data System (ADS)

Hapke, C. J.; Wilson, K.; Adams, P. N.

2014-12-01

Prediction of large scale coastal behavior is especially challenging in areas of pervasive human activity. Many coastal zones on the Gulf and Atlantic coasts are moderately to highly modified through the use of soft sediment and hard stabilization techniques. These practices have the potential to alter sediment transport and availability, as well as reshape the beach profile, ultimately transforming the natural evolution of the coastal system. We present the results of a series of probabilistic models, designed to predict the observed geomorphic response to high wave events at Fire Island, New York. The island comprises a variety of land use types, including inhabited communities with modified beaches, where beach nourishment and artificial dune construction (scraping) occur, unmodified zones, and protected national seashore. This variation in land use presents an opportunity for comparison of model accuracy across highly modified and rarely modified stretches of coastline. Eight models with basic and expanded structures were developed, resulting in sixteen models, informed with observational data from Fire Island. The basic model type does not include anthropogenic modification. The expanded model includes records of nourishment and scraping, designed to quantify the improved accuracy when anthropogenic activity is represented. Modification was included as frequency of occurrence divided by the time since the most recent event, to distinguish between recent and historic events. All but one model reported improved predictive accuracy from the basic to expanded form. The addition of nourishment and scraping parameters resulted in a maximum reduction in predictive error of 36%. The seven improved models reported an average 23% reduction in error. These results indicate that it is advantageous to incorporate the human forcing into a coastal hazards probability model framework.

Predicting streamflow regime metrics for ungauged streams in Colorado, Washington, and Oregon

NASA Astrophysics Data System (ADS)

Sanborn, Stephen C.; Bledsoe, Brian P.

2006-06-01

Streamflow prediction in ungauged basins provides essential information for water resources planning and management and ecohydrological studies yet remains a fundamental challenge to the hydrological sciences. A methodology is presented for stratifying streamflow regimes of gauged locations, classifying the regimes of ungauged streams, and developing models for predicting a suite of ecologically pertinent streamflow metrics for these streams. Eighty-four streamflow metrics characterizing various flow regime attributes were computed along with physical and climatic drainage basin characteristics for 150 streams with little or no streamflow modification in Colorado, Washington, and Oregon. The diverse hydroclimatology of the study area necessitates flow regime stratification and geographically independent clusters were identified and used to develop separate predictive models for each flow regime type. Multiple regression models for flow magnitude, timing, and rate of change metrics were quite accurate with many adjusted R2 values exceeding 0.80, while models describing streamflow variability did not perform as well. Separate stratification schemes for high, low, and average flows did not considerably improve models for metrics describing those particular aspects of the regime over a scheme based on the entire flow regime. Models for streams identified as 'snowmelt' type were improved if sites in Colorado and the Pacific Northwest were separated to better stratify the processes driving streamflow in these regions thus revealing limitations of geographically independent streamflow clusters. This study demonstrates that a broad suite of ecologically relevant streamflow characteristics can be accurately modeled across large heterogeneous regions using this framework. Applications of the resulting models include stratifying biomonitoring sites and quantifying linkages between specific aspects of flow regimes and aquatic community structure.
In particular, the results bode well for modeling ecological processes related to high-flow magnitude, timing, and rate of change such as the recruitment of fish and riparian vegetation across large regions.

Prediction of Soil Deformation in Tunnelling Using Artificial Neural Networks.

PubMed

Lai, Jinxing; Qiu, Junling; Feng, Zhihua; Chen, Jianxun; Fan, Haobo

2016-01-01

In the past few decades, as a new tool for analysis of the tough geotechnical problems, artificial neural networks (ANNs) have been successfully applied to address a number of engineering problems, including deformation due to tunnelling in various types of rock mass. Unlike the classical regression methods in which a certain form for the approximation function must be presumed, ANNs do not require the complex constitutive models. Additionally, it is traced that the ANN prediction system is one of the most effective ways to predict the rock mass deformation. Furthermore, it could be envisaged that ANNs would be more feasible for the dynamic prediction of displacements in tunnelling in the future, especially if ANN models are combined with other research methods. In this paper, we summarized the state-of-the-art and future research challenges of ANNs on the tunnel deformation prediction. And the application cases as well as the improvement of ANN models were also presented. The presented ANN models can serve as a benchmark for effective prediction of the tunnel deformation with characters of nonlinearity, high parallelism, fault tolerance, learning, and generalization capability.
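As a generic illustration of the ANN approach this record surveys, the sketch below trains a small multilayer perceptron to map a few ground and geometry parameters to a settlement value. The inputs, units, and synthetic response are placeholders and do not correspond to any of the reviewed case studies.

```python
# Sketch: small MLP surrogate for tunnelling-induced settlement.
# Inputs and the synthetic target are illustrative placeholders only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([
    rng.uniform(10, 40, n),    # tunnel depth (m)
    rng.uniform(5, 12, n),     # tunnel diameter (m)
    rng.uniform(5, 60, n),     # ground stiffness (MPa)
])
# Synthetic "observed" settlement (mm), roughly inverse to depth and stiffness.
y = 80 * X[:, 1] / (X[:, 0] * np.sqrt(X[:, 2])) + rng.normal(0, 0.5, n)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                 random_state=0))
ann.fit(X[:400], y[:400])
print("held-out R^2:", round(ann.score(X[400:], y[400:]), 3))
```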
Prediction of Soil Deformation in Tunnelling Using Artificial Neural Networks

PubMed Central

Lai, Jinxing

2016-01-01

In the past few decades, as a new tool for analysis of the tough geotechnical problems, artificial neural networks (ANNs) have been successfully applied to address a number of engineering problems, including deformation due to tunnelling in various types of rock mass. Unlike the classical regression methods in which a certain form for the approximation function must be presumed, ANNs do not require the complex constitutive models. Additionally, it is traced that the ANN prediction system is one of the most effective ways to predict the rock mass deformation. Furthermore, it could be envisaged that ANNs would be more feasible for the dynamic prediction of displacements in tunnelling in the future, especially if ANN models are combined with other research methods. In this paper, we summarized the state-of-the-art and future research challenges of ANNs on the tunnel deformation prediction. And the application cases as well as the improvement of ANN models were also presented. The presented ANN models can serve as a benchmark for effective prediction of the tunnel deformation with characters of nonlinearity, high parallelism, fault tolerance, learning, and generalization capability. PMID:26819587

QSAR Modeling of Rat Acute Toxicity by Oral Exposure

PubMed Central

Zhu, Hao; Martin, Todd M.; Ye, Lin; Sedykh, Alexander; Young, Douglas M.; Tropsha, Alexander

2009-01-01

Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. In this study, a comprehensive dataset of 7,385 compounds with their most conservative lethal dose (LD50) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire dataset was selected that included all 3,472 compounds used in the TOPKAT's training set. The remaining 3,913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R2 of linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R2 ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all 5 models. The consensus models afforded higher prediction accuracy for the external validation dataset with the higher coverage as compared to individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity. PMID:19845371

Quantitative structure-activity relationship modeling of rat acute toxicity by oral exposure.

PubMed

Zhu, Hao; Martin, Todd M; Ye, Lin; Sedykh, Alexander; Young, Douglas M; Tropsha, Alexander

2009-12-01

Few quantitative structure-activity relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity end points. In this study, a comprehensive data set of 7385 compounds with their most conservative lethal dose (LD50) values has been compiled.
A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire data set was selected that included all 3472 compounds used in TOPKAT's training set. The remaining 3913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R2 of linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R2 ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all five models. The consensus models afforded higher prediction accuracy for the external validation data set with the higher coverage as compared to individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity.
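The consensus step in these two records reduces to averaging the predictions of the individual models, typically after discarding compounds that fall outside each model's applicability domain. The sketch below shows that logic with random numbers and a crude distance-based domain check; it is not the authors' descriptor set or domain definition.

```python
# Sketch: consensus prediction = mean of several QSAR models' outputs,
# reported only for compounds inside a simple applicability domain. Toy data.
import numpy as np

rng = np.random.default_rng(6)
n_compounds, n_models = 10, 5
preds = rng.normal(2.5, 0.4, (n_models, n_compounds))  # per-model log(LD50)
dist_to_train = rng.uniform(0, 1, n_compounds)         # similarity distance
ad_threshold = 0.7                                      # applicability cutoff

consensus = preds.mean(axis=0)                          # average of 5 models
in_domain = dist_to_train <= ad_threshold
for i, (pred, ok) in enumerate(zip(consensus, in_domain)):
    status = "in domain" if ok else "outside AD, no prediction"
    print(f"compound {i}: consensus={pred:.2f} ({status})")
```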
The Neighborhood Context of Hate Crime: A Comparison of Violent and Property Offenses Using Rare Events Modeling.

PubMed

Benier, Kathryn

2017-08-01

Many studies into the antecedents of hate crime in the neighborhood combine offense categories, meaning that it is unclear whether or not there are distinct contextual factors associated with violent and property hate offenses. This study uses rare events modeling to examine the household and neighborhood factors associated with violent and property offenses. Using the Australian Community Capacity Study, the study focuses on the neighborhood characteristics influencing self-reported violent and property hate crime for 4,396 residents in Brisbane. Findings demonstrate important differences between the offense types. Violence is predicted by household renting and non-English language, whereas property offenses are predicted by household non-English language, neighborhood median income, and change in non-English-speaking residents. In both offense types, neighborhood place attachment acts as a protective factor. These findings highlight the theoretical implications of combining distinct hate crime types for methodological reasons.

Nowcasting recreational water quality

USGS Publications Warehouse

Boehm, Alexandria B.; Whitman, Richard L.; Nevers, Meredith; Hou, Deyi; Weisberg, Stephen B.

2007-01-01

Advances in molecular techniques may soon provide new opportunities to provide more timely information on whether recreational beaches are free from fecal contamination. However, an alternative approach is the use of predictive models. This chapter presents a summary of these developing efforts. First, we describe documented physical, chemical, and biological factors that have been demonstrated by researchers to affect bacterial concentrations at beaches and thus represent logical parameters for inclusion in a model. Then, we illustrate how various types of models can be applied to predict water quality at freshwater and marine beaches.

Predictive aging results in radiation environments

NASA Astrophysics Data System (ADS)

Gillen, Kenneth T.; Clough, Roger L.

1993-06-01

We have previously derived a time-temperature-dose rate superposition methodology, which, when applicable, can be used to predict polymer degradation versus dose rate, temperature and exposure time. This methodology results in predictive capabilities at the low dose rates and long time periods appropriate, for instance, to ambient nuclear power plant environments. The methodology was successfully applied to several polymeric cable materials and then verified for two of the materials by comparisons of the model predictions with 12 year, low-dose-rate aging data on these materials from a nuclear environment. In this paper, we provide a more detailed discussion of the methodology and apply it to data obtained on a number of additional nuclear power plant cable insulation (a hypalon, a silicone rubber and two ethylene-tetrafluoroethylenes) and jacket (a hypalon) materials. We then show that the predicted, low-dose-rate results for our materials are in excellent agreement with long-term (7-9 year) low-dose-rate results recently obtained for the same material types actually aged under nuclear power plant conditions. Based on a combination of the modelling and long-term results, we find indications of reasonably similar degradation responses among several different commercial formulations for each of the following "generic" materials: hypalon, ethylene-tetrafluoroethylene, silicone rubber and PVC.
If such "generic" behavior can be further substantiated through modelling and long-term results on additional formulations, predictions of cable life for other commercial materials of the same generic types would be greatly facilitated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29073279','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29073279"><span>Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Huang, Cai; Mezencev, Roman; McDonald, John F; Vannberg, Fredrik</p> <p>2017-01-01</p> <p>Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publically available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19640143','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19640143"><span>Thermal therapy in urologic systems: a comparison of arrhenius and thermal isoeffective dose models in predicting hyperthermic injury.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>He, Xiaoming; Bhowmick, Sankha; Bischof, John C</p> <p>2009-07-01</p> <p>The Arrhenius and thermal isoeffective dose (TID) models are the two most commonly used models for predicting hyperthermic injury. The TID model is essentially derived from the Arrhenius model, but due to a variety of assumptions and simplifications now leads to different predictions, particularly at temperatures higher than 50 degrees C. 
Thermal therapy in urologic systems: a comparison of Arrhenius and thermal isoeffective dose models in predicting hyperthermic injury.

PubMed

He, Xiaoming; Bhowmick, Sankha; Bischof, John C

2009-07-01

The Arrhenius and thermal isoeffective dose (TID) models are the two most commonly used models for predicting hyperthermic injury. The TID model is essentially derived from the Arrhenius model, but due to a variety of assumptions and simplifications now leads to different predictions, particularly at temperatures higher than 50 degrees C. In the present study, the two models are compared and their appropriateness tested for predicting hyperthermic injury in both the traditional hyperthermia (usually, 43-50 degrees C) and thermal surgery (or thermal therapy/thermal ablation, usually, >50 degrees C) regime. The kinetic parameters of thermal injury in both models were obtained from the literature (or literature data), tabulated, and analyzed for various prostate and kidney systems. It was found that the kinetic parameters vary widely, and were particularly dependent on the cell or tissue type, injury assay used, and the time when the injury assessment was performed. In order to compare the capability of the two models for thermal injury prediction, thermal thresholds for complete killing (i.e., 99% cell or tissue injury) were predicted using the models in two important urologic systems, viz., the benign prostatic hyperplasia tissue and the normal porcine kidney tissue. The predictions of the two models matched well at temperatures below 50 degrees C. At higher temperatures, however, the thermal thresholds predicted using the TID model with a constant R value of 0.5, the value commonly used in the traditional hyperthermia literature, are much lower than those predicted using the Arrhenius model. This suggests that traditional use of the TID model (i.e., R=0.5) is inappropriate for predicting hyperthermic injury in the thermal surgery regime (>50 degrees C). Finally, the time-temperature relationships for complete killing (i.e., 99% injury) were calculated and analyzed using the Arrhenius model for the various prostate and kidney systems.
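The two metrics compared in this record have simple closed forms for a constant-temperature exposure: Arrhenius damage Omega = A * t * exp(-Ea / (R_gas * T)) and thermal isoeffective dose CEM43 = t * R^(43 - T), with R conventionally 0.5 at or above 43 degrees C and 0.25 below. The sketch below evaluates both; the A and Ea values are illustrative examples, not the tissue-specific parameters tabulated in the paper.

```python
# Sketch: Arrhenius damage vs. thermal isoeffective dose (CEM43) for a
# constant-temperature exposure. A and Ea are example values only.
import numpy as np

A, EA, R_GAS = 2.0e98, 6.3e5, 8.314   # 1/s, J/mol, J/(mol K)

def arrhenius_omega(temp_c, minutes):
    """Arrhenius damage integral for a constant temperature hold."""
    t_seconds = minutes * 60.0
    return A * t_seconds * np.exp(-EA / (R_GAS * (temp_c + 273.15)))

def cem43(temp_c, minutes):
    """Cumulative equivalent minutes at 43 C (TID model)."""
    R = 0.5 if temp_c >= 43.0 else 0.25
    return minutes * R ** (43.0 - temp_c)

for temp in (45.0, 50.0, 55.0):
    minutes = 10.0
    print(f"{temp} C for {minutes:.0f} min: "
          f"Omega = {arrhenius_omega(temp, minutes):.3f}, "
          f"CEM43 = {cem43(temp, minutes):.0f} min")
# The two metrics diverge increasingly above 50 C, mirroring the comparison above.
```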
Orbital maneuvering engine feed system coupled stability investigation

    NASA Technical Reports Server (NTRS)

    Kahn, D. R.; Schuman, M. D.; Hunting, J. K.; Fertig, K. W.

    1975-01-01

    A digital computer model used to analyze and predict engine feed system coupled instabilities over a frequency range of 10 to 1000 Hz was developed and verified. The analytical approach to modeling the feed system hydrodynamics, combustion dynamics, chamber dynamics, and overall engineering model structure is described, and the governing equations in each of the technical areas are presented. This is followed by a description of the generalized computer model, including the formulation of the discrete subprograms and their integration into an overall engineering model structure. The operation and capabilities of the engineering model were verified by comparing the model's theoretical predictions with experimental data from an OMS-type engine with a known feed system/engine chugging history.

QSAR-Based Models for Designing Quinazoline/Imidazothiazoles/Pyrazolopyrimidines Based Inhibitors against Wild and Mutant EGFR

    PubMed Central

    Chauhan, Jagat Singh; Dhanda, Sandeep Kumar; Singla, Deepak; Agarwal, Subhash M.; Raghava, Gajendra P. S.

    2014-01-01

    Overexpression of EGFR is responsible for causing a number of cancers, including lung cancer, as it activates various downstream signaling pathways. Thus, it is important to control EGFR function in order to treat cancer patients. It is well established that inhibiting ATP binding within the EGFR kinase domain regulates its function. The existing quinazoline-derivative-based drugs used for treating lung cancer inhibit the wild-type EGFR. In this study, we have made a systematic attempt to develop QSAR models for designing quinazoline derivatives that could inhibit wild-type EGFR and imidazothiazole/pyrazolopyrimidine derivatives against mutant EGFR. Three types of prediction methods have been developed to design inhibitors against EGFR (wild type, mutant, and both). First, we developed models for predicting inhibitors against wild-type EGFR by training and testing on a dataset containing 128 quinazoline-based inhibitors. This dataset was divided into two subsets, wild_train and wild_valid, containing 103 and 25 inhibitors respectively. The models were trained and tested on the wild_train dataset, while performance was evaluated on the wild_valid (validation) dataset. We achieved a maximum correlation of 0.90 between predicted and experimentally determined inhibition (IC50) on the validation dataset. Second, we developed models for predicting inhibitors against mutant EGFR (L858R) on the mutant_train and mutant_valid datasets and achieved maximum correlations between 0.834 and 0.850 on these datasets. Finally, an integrated hybrid model was developed on a dataset containing both wild-type and mutant inhibitors, with maximum correlations between 0.761 and 0.850 on the different datasets. In order to promote open source drug discovery, we developed a webserver for designing inhibitors against wild-type and mutant EGFR, along with standalone (http://osddlinux.osdd.net/) and Galaxy (http://osddlinux.osdd.net:8001) versions of the software. We hope our webserver (http://crdd.osdd.net/oscadd/ntegfr/) will play a vital role in designing new anticancer drugs.

    PMID:24992720
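A QSAR regression of the kind described in the EGFR record can be prototyped quickly: fit a regressor on a molecular descriptor matrix and score it by the correlation between predicted and experimental activity. The sketch below uses synthetic descriptors, an RBF-kernel support-vector regressor, and a 25-compound hold-out split; these are illustrative assumptions, not the authors' published protocol.

```python
# Toy QSAR workflow: molecular descriptors -> predicted inhibition (pIC50).
# Synthetic data replace the quinazoline/imidazothiazole descriptor tables.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_inhibitors, n_descriptors = 128, 40            # assumed sizes for illustration
X = rng.normal(size=(n_inhibitors, n_descriptors))
pic50 = 6.0 + X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.3, size=n_inhibitors)

# Hold out 25 compounds as an external validation set, mirroring a train/valid split.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, pic50, test_size=25, random_state=1)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_train, y_train)

r, _ = pearsonr(y_valid, model.predict(X_valid))
print(f"Pearson r between predicted and experimental pIC50: {r:.2f}")
```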
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating

    PubMed Central

    Lee, Young-Joo; Cho, Soojin

    2016-01-01

    Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads, and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed.

    PMID:26950125
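Step (2) above, adjusting structural parameters until the FE model reproduces the identified modal properties, can be illustrated with a toy two-degree-of-freedom shear model. The "measured" frequencies, lumped masses, and stiffness parameterization in the sketch are invented for the example; a real bridge model would have many more parameters and would also match mode shapes.

```python
# Toy FE model updating: tune stiffness scale factors of a 2-DOF shear model
# so its natural frequencies match frequencies identified from ambient vibration.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

M = np.diag([2.0e4, 1.5e4])            # kg, assumed lumped masses

def natural_frequencies(theta):
    """Natural frequencies (Hz) of the model for stiffness scale factors theta."""
    k1, k2 = theta * np.array([4.0e7, 3.0e7])   # N/m, nominal stiffnesses scaled
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    eigvals = eigh(K, M, eigvals_only=True)      # generalized problem K v = w M v
    return np.sqrt(eigvals) / (2.0 * np.pi)

f_measured = np.array([5.1, 12.8])     # Hz, "identified" from SHM data (invented)

# Find the scale factors that minimise the frequency residuals.
result = least_squares(lambda th: natural_frequencies(th) - f_measured,
                       x0=np.ones(2), bounds=(0.1, 10.0))

print("updated stiffness scale factors:", np.round(result.x, 3))
print("updated frequencies (Hz):", np.round(natural_frequencies(result.x), 2))
```

The updated model would then feed a fatigue analysis in which loads and material parameters are sampled to obtain a life distribution rather than a single value.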
Transition Heat Transfer Modeling Based on the Characteristics of Turbulent Spots

    NASA Technical Reports Server (NTRS)

    Simon, Fred; Boyle, Robert

    1998-01-01

    While turbulence models are being developed that show promise for simulating the transition region on a turbine blade or vane, it is believed that the approach with the greatest potential for practical use is the use of models that incorporate the physics of the turbulent spots present in the transition region. This type of modeling results in a prediction of the transition region intermittency which, when incorporated in turbulence models, gives good to excellent predictions of the transition region heat transfer. Models are presented which show how turbulent spot characteristics and behavior can be employed to predict the effect of pressure gradient and Mach number on the transition region. The models predict the spot formation rate which is needed, in addition to the transition onset location, in the Narasimha concentrated breakdown intermittency equation. A simplified approach is taken to modeling turbulent spot growth and interaction in the transition region, which makes use of the turbulent spot variables governing transition length and spot generation rate. The models are expressed in terms of spot spreading angle, dimensionless spot velocity, dimensionless spot area, disturbance frequency, and Mach number. The models are used in conjunction with a computer code to predict the effects of pressure gradient and Mach number on the transition region, and the predictions are compared with VKI experimental turbine data.
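The Narasimha concentrated-breakdown intermittency referred to above is straightforward to evaluate once the onset location and spot production rate are known. The sketch below uses one common form of the distribution, gamma(x) = 1 - exp(-n*sigma*(x - x_t)^2 / U); all numerical values (velocity, onset location, spot production rate, and the typical flat-plate value of the spot propagation parameter) are chosen only for illustration and are not taken from the report.

```python
# Intermittency distribution from the concentrated-breakdown hypothesis:
# gamma(x) = 1 - exp(-n*sigma*(x - x_t)^2 / U) for x >= x_t, else 0.
import numpy as np

def intermittency(x, x_t, n_spot, sigma, U):
    """Fraction of time the flow is turbulent at streamwise location x."""
    x = np.asarray(x, dtype=float)
    gamma = 1.0 - np.exp(-n_spot * sigma * (x - x_t) ** 2 / U)
    return np.where(x < x_t, 0.0, gamma)

U = 100.0          # m/s, freestream velocity (illustrative)
x_t = 0.05         # m, transition onset location (illustrative)
sigma = 0.27       # spot propagation parameter, typical flat-plate value
n_spot = 2.0e5     # spots/(m s), spot production rate per unit span (illustrative)

x = np.linspace(0.0, 0.25, 6)
for xi, g in zip(x, intermittency(x, x_t, n_spot, sigma, U)):
    print(f"x = {xi:5.2f} m   gamma = {g:4.2f}")
```

In a transition heat-transfer calculation, gamma(x) would blend laminar and turbulent predictions of the local heat transfer coefficient.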
Hierarchical Bayesian Logistic Regression to forecast metabolic control in type 2 DM patients.

    PubMed

    Dagliati, Arianna; Malovini, Alberto; Decata, Pasquale; Cogni, Giulia; Teliti, Marsida; Sacchi, Lucia; Cerra, Carlo; Chiovato, Luca; Bellazzi, Riccardo

    2016-01-01

    In this work we present our efforts in building a model able to forecast changes in patients' clinical conditions when repeated measurements are available, a setting in which the available risk calculators are typically not applicable. We propose a Hierarchical Bayesian Logistic Regression model, which takes both individual and population variability into account when estimating the model parameters. The model is used to predict metabolic control and its variation in type 2 diabetes mellitus. In particular, we analyzed a population of more than 1000 Italian type 2 diabetic patients collected within the European project Mosaic. The results, evaluated in terms of the Matthews correlation coefficient, are significantly better than those obtained with a standard logistic regression model based on data pooling.
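A hierarchical (partial-pooling) Bayesian logistic regression of this general shape can be written compactly in a probabilistic programming framework. The sketch below assumes the PyMC v5 API and uses simulated patients, invented feature and grouping names, and a single binary outcome standing in for "poor metabolic control"; it illustrates the partial-pooling structure rather than the study's actual model.

```python
# Hierarchical Bayesian logistic regression sketch:
# per-centre coefficients partially pooled toward population-level priors.
# Data are simulated; requires PyMC >= 5 (pip install pymc).
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n_patients, n_features, n_centres = 400, 3, 5
X = rng.normal(size=(n_patients, n_features))         # e.g. HbA1c trend, BMI, age
centre = rng.integers(0, n_centres, size=n_patients)  # care centre of each patient
true_beta = rng.normal(size=n_features)
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta))))  # 1 = poor metabolic control

with pm.Model() as model:
    # Population-level (shared) priors.
    mu_beta = pm.Normal("mu_beta", 0.0, 1.0, shape=n_features)
    sigma_beta = pm.HalfNormal("sigma_beta", 1.0, shape=n_features)
    alpha = pm.Normal("alpha", 0.0, 1.0, shape=n_centres)

    # Centre-specific coefficients drawn from the population distribution.
    beta = pm.Normal("beta", mu=mu_beta, sigma=sigma_beta,
                     shape=(n_centres, n_features))

    logit_p = alpha[centre] + (beta[centre] * X).sum(axis=-1)
    pm.Bernoulli("y_obs", logit_p=logit_p, observed=y)

    idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=2)

# Posterior mean of the population-level coefficients.
print(idata.posterior["mu_beta"].mean(dim=("chain", "draw")).values)
```

The hierarchical priors let centres (or individual patients, if used as the grouping level) with few observations borrow strength from the whole population, which is the advantage over a pooled logistic regression.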