Sample records for model results predict

  1. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE), and the Weighted Average Method (WAM). These combination techniques were evaluated using results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill of the multi-model predictions. The study revealed that multi-model predictions obtained from uncalibrated single-model predictions are generally better than any individual member's predictions, even those of the best calibrated single model. Furthermore, more sophisticated combination techniques that incorporate bias-correction steps work better than simple multi-model averages or multi-model predictions without bias correction.
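
    The two simplest schemes above can be sketched in a few lines. This is an illustrative toy on synthetic data, not the DMIP setup: equal-weight averaging (SMA) versus a least-squares weighted average standing in for WAM.

```python
# Toy sketch (assumed, not the paper's code): Simple Multi-model Average
# (SMA) vs. a least-squares Weighted Average Method (WAM) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(10.0, 2.0, size=200)          # stand-in "observed" streamflow
# three biased, noisy single-model prediction series
models = np.stack([truth + rng.normal(b, 1.0, size=200)
                   for b in (-1.5, 0.5, 2.0)])

sma = models.mean(axis=0)                        # equal weights for every model

# WAM-style weights: minimize squared error of the linear combination
w, *_ = np.linalg.lstsq(models.T, truth, rcond=None)
wam = w @ models

rmse = lambda p: float(np.sqrt(np.mean((p - truth) ** 2)))
print(rmse(models[0]), rmse(sma), rmse(wam))
```

    Because the fitted weights can reproduce both the equal-weight average and any single model, the weighted combination can only match or beat them in training error; the paper's point is that this advantage also shows up against held-out observations.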

  2. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-05-01

    This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes developed at Purdue University. The overall predictive model consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Representative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlation with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  3. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

    This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes developed at Purdue University. The overall predictive model consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Representative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlation with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  4. The Role of Multimodel Combination in Improving Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Li, W.

    2008-12-01

    Model errors are an inevitable part of any prediction exercise. One approach currently gaining attention for reducing model errors is to optimally combine multiple models to develop improved predictions. The rationale behind this approach lies primarily in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions exhibit improved predictability. In this study, we present a new approach to combining multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated using two different objective functions, to develop multimodel streamflow predictions. The performance of the multimodel predictions is compared with individual model predictions using correlation, root mean square error, and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions multimodel combination results in improved predictions, we evaluate the proposed algorithm by testing it against streamflow generated from a known model (the 'abcd' model or the VIC model) with errors that are homoscedastic or heteroscedastic. Results from the study show that streamflow simulated from individual models performed better than multimodels under almost no model error. Under increased model error, the multimodel consistently performed better than the single-model prediction in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins in North Carolina as well as in two arid basins in Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach predicts the observed streamflow better than the single-model predictions.

  5. Application of General Regression Neural Network to the Prediction of LOD Change

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least squares model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of linear models is often not ideal. Thus, a non-linear neural network model, the general regression neural network (GRNN), is applied to the prediction of the LOD change, and the result is compared with the predicted results obtained with the BP (back propagation) neural network model and other models. The comparison shows that applying the GRNN to the prediction of the LOD change is highly effective and feasible.
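
    A GRNN has a simple closed form: it is equivalent to Nadaraya-Watson kernel regression with a Gaussian kernel, with a single smoothing parameter sigma. The sketch below (synthetic data, not the LOD series) shows the whole mechanism:

```python
# Minimal GRNN sketch (assumed illustration, not the paper's code):
# the prediction is a Gaussian-weighted average of the training targets.
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    # squared distance from every query point to every training point
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian pattern-layer weights
    return (w @ y_train) / w.sum(axis=1)        # normalized weighted average

x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x)                                   # synthetic stand-in for a smooth series
y_hat = grnn_predict(x, y, x, sigma=0.3)
print(float(np.max(np.abs(y_hat - y))))
```

    The only quantity to tune is sigma, which is why GRNNs are attractive for short, noisy geodetic series compared with BP networks that need full weight training.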

  6. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    PubMed

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have changed the trend toward new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies to predict different disease outcomes. However, existing predictive models still suffer from some limitations in predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model's performance, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impacts on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.

  7. Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination

    NASA Astrophysics Data System (ADS)

    Li, Weihua; Sankarasubramanian, A.

    2012-12-01

    Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is to combine multiple models to develop improved predictions. The rationale behind this approach lies primarily in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) to combining multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with an optimal model combination scheme (MM-O) by employing them to predict streamflow generated from a known hydrologic model (the "abcd" model or the VIC model) with heteroscedastic error variance, as well as from a hydrologic model whose structure differs from that of the candidate models. Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single-model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as extreme monthly flows. Comparison of the weights obtained for each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across all the models, whereas MM-O always assigns higher weights to the candidate model that performs best over the calibration period. Applying the multimodel algorithms to predict streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
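
    The core idea of weighting models contingent on the predictor state can be sketched as follows. This is a hedged toy (synthetic data, inverse-MSE weights per predictor bin), not the authors' MM-1 algorithm: each model gets a weight per state, computed from its skill in that state.

```python
# Toy sketch of state-contingent multimodel weighting (illustrative only):
# weights are derived separately within bins of a predictor variable.
import numpy as np

rng = np.random.default_rng(1)
predictor = rng.uniform(0, 1, 500)
truth = 5 * predictor + rng.normal(0, 0.1, 500)
# model A is skillful for low predictor values, model B for high ones
mod_a = truth + rng.normal(0, 0.2, 500) + 1.5 * (predictor > 0.5)
mod_b = truth + rng.normal(0, 0.2, 500) + 1.5 * (predictor <= 0.5)
models = np.stack([mod_a, mod_b])

bins = np.digitize(predictor, [0.5])            # two predictor "states"
combined = np.empty_like(truth)
for b in (0, 1):
    m = bins == b
    err = ((models[:, m] - truth[m]) ** 2).mean(axis=1)
    w = (1 / err) / (1 / err).sum()             # inverse-MSE weights per state
    combined[m] = w @ models[:, m]

rmse = lambda p: float(np.sqrt(np.mean((p - truth) ** 2)))
print(rmse(mod_a), rmse(mod_b), rmse(combined))
```

    When each candidate model is good in a different regime, a single global weight set must compromise, while state-contingent weights can exploit both regimes, which is the intuition behind MM-1 beating a static optimal combination.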

  8. Model-data assimilation of multiple phenological observations to constrain and predict leaf area index.

    PubMed

    Viskari, Toni; Hardiman, Brady; Desai, Ankur R; Dietze, Michael C

    2015-03-01

    Our limited ability to accurately simulate leaf phenology is a leading source of uncertainty in models of ecosystem carbon cycling. We evaluate whether continuously updating canopy state variables with observations is beneficial for predicting phenological events. We employed an ensemble adjustment Kalman filter (EAKF) to update predictions of leaf area index (LAI) and leaf extension using tower-based photosynthetically active radiation (PAR) and Moderate Resolution Imaging Spectroradiometer (MODIS) data for 2002-2005 at Willow Creek, Wisconsin, USA, a mature, even-aged, northern hardwood, deciduous forest. The Ecosystem Demography model version 2 (ED2) was used as the prediction model, forced by offline climate data. The EAKF successfully incorporated information from both the observations and the model predictions, weighted by their respective uncertainties. The resulting estimate reproduced the observed leaf phenological cycle in the spring and the fall better than a parametric model prediction. These results indicate that during spring the observations contribute most in determining the correct bud-burst date, after which the model performs well, but accurately modeling fall leaf senescence requires continuous model updating from observations. While the predicted net ecosystem exchange (NEE) of CO2 precedes tower observations and unassimilated model predictions in the spring, overall the prediction follows observed NEE better than the model alone. Our results show that state data assimilation successfully simulates the evolution of plant leaf phenology and improves model predictions of forest NEE.
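
    The uncertainty-weighted blending of model and observation can be shown with a toy scalar analogue of the ensemble adjustment update. This is an assumed illustration, not the ED2/EAKF implementation: the ensemble mean is shifted by a Kalman gain, and the members are contracted so the analysis spread matches the posterior variance.

```python
# Toy scalar ensemble-adjustment update (illustrative, not the paper's code):
# blend a model LAI ensemble with one observation, weighted by variances.
import numpy as np

rng = np.random.default_rng(2)
ensemble = rng.normal(3.0, 0.8, size=100)       # model-predicted LAI members
obs, obs_var = 4.0, 0.2 ** 2                    # observed LAI and its error variance

prior_mean = ensemble.mean()
prior_var = ensemble.var()

# Kalman gain: how much to trust the observation vs. the prior
gain = prior_var / (prior_var + obs_var)
posterior_mean = prior_mean + gain * (obs - prior_mean)

# adjustment-style deterministic update: shift the mean, shrink the spread
scale = np.sqrt(1 - gain)                       # gives analysis variance (1-gain)*prior_var
analysis = posterior_mean + scale * (ensemble - prior_mean)
print(prior_mean, posterior_mean, float(analysis.var()))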

  9. Ski jump takeoff performance predictions for a mixed-flow, remote-lift STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Birckelbaw, Lourdes G.

    1992-01-01

    A ski jump model was developed to predict ski jump takeoff performance for a short takeoff and vertical landing (STOVL) aircraft. The objective was to verify the model with results from a piloted simulation of a mixed flow, remote lift STOVL aircraft. The prediction model is discussed. The predicted results are compared with the piloted simulation results. The ski jump model can be utilized for basic research of other thrust vectoring STOVL aircraft performing a ski jump takeoff.

  10. Predicting Football Matches Results using Bayesian Networks for English Premier League (EPL)

    NASA Astrophysics Data System (ADS)

    Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab

    2017-08-01

    The issue of modeling association football prediction models has become increasingly popular in the last few years, and many different prediction approaches have been proposed with the aim of evaluating the attributes that lead a football team to lose, draw, or win a match. Three types of approaches have been considered for predicting football match results: statistical approaches, machine learning approaches, and Bayesian approaches. Lately, many studies on football prediction models have used Bayesian approaches. This paper proposes Bayesian Networks (BNs) to predict the results of football matches in terms of home win (H), away win (A), and draw (D). The English Premier League (EPL) for the three seasons 2010-2011, 2011-2012, and 2012-2013 was selected and reviewed. K-fold cross validation was used to test the accuracy of the prediction model. The required football data are sourced from a legitimate site at http://www.football-data.co.uk. The BNs achieved a predictive accuracy of 75.09% on average across the three seasons. It is hoped that the results can serve as a benchmark for future research in predicting football match results.
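
    The k-fold protocol used to score the classifier is worth making concrete. In this hedged sketch a trivial majority-class predictor stands in for the Bayesian network, on synthetic H/A/D labels; only the cross-validation machinery is the point:

```python
# K-fold cross-validation sketch (illustrative; a majority-class baseline
# stands in for the paper's Bayesian network classifier).
import numpy as np

def kfold_accuracy(labels, k=10, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))          # shuffle once, then split
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        # "training": pick the most common outcome (H, A, or D) in the train folds
        vals, counts = np.unique(labels[train], return_counts=True)
        majority = vals[np.argmax(counts)]
        scores.append(float(np.mean(labels[test] == majority)))
    return float(np.mean(scores))

rng = np.random.default_rng(3)
outcomes = rng.choice(list("HAD"), size=380, p=[0.46, 0.29, 0.25])  # one EPL season's worth
acc = kfold_accuracy(outcomes, k=10)
print(acc)
```

    A majority-class baseline like this is also the natural yardstick for the reported 75.09%: home wins alone account for roughly 46% of EPL matches, so any model has to clear that bar before its accuracy means much.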

  11. Quantitative thickness prediction of tectonically deformed coal using Extreme Learning Machine and Principal Component Analysis: a case study

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Li, Yan; Chen, Tongjun; Yan, Qiuyan; Ma, Li

    2017-04-01

    The thickness of tectonically deformed coal (TDC) is positively correlated with gas outbursts. In order to predict the TDC thickness of coal beds, we propose a new quantitative prediction method using an extreme learning machine (ELM) algorithm, a principal component analysis (PCA) algorithm, and seismic attributes. First, we build an ELM prediction model using the PCA attributes of a synthetic seismic section. The results suggest that the ELM model can produce a reliable and accurate prediction of the TDC thickness for synthetic data, with a sigmoid activation function and 20 hidden nodes preferred. Then, we analyze the applicability of the ELM model to thickness prediction of the TDC with real application data. Through cross validation of near-well traces, the results suggest that the ELM model can produce a reliable and accurate prediction of the TDC. After that, we use 250 near-well traces from 10 wells to build an ELM prediction model and use it to forecast the TDC thickness of the No. 15 coal in the study area, with the PCA attributes as the inputs. Comparing the predicted results, we note that the trained ELM model with two selected PCA attributes yields better prediction results than the other combinations of attributes. Finally, the trained ELM model with real seismic data has a different number of hidden nodes (10) than the trained ELM model with synthetic seismic data. In summary, it is feasible to use an ELM model to predict the TDC thickness using the calculated PCA attributes as inputs. However, the input attributes, the activation function, and the number of hidden nodes in the ELM model should be selected and tested carefully for each individual application.
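
    An ELM is compact enough to sketch whole: the hidden layer is random and never trained; only the output weights are solved, by least squares. The inputs and targets below are synthetic stand-ins for the paper's PCA attributes and thickness values:

```python
# Extreme learning machine sketch (illustrative data, not the field study):
# random sigmoid hidden layer + least-squares output weights.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(250, 2))           # e.g. two PCA attributes per trace
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]             # synthetic stand-in for TDC thickness

n_hidden = 20                                   # the paper prefers ~20 nodes on synthetics
W = rng.normal(size=(2, n_hidden))              # random input weights, never trained
b = rng.normal(size=n_hidden)                   # random biases
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # sigmoid hidden-layer outputs
beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # only the output layer is fitted

y_hat = H @ beta
rmse = float(np.sqrt(np.mean((y_hat - y) ** 2)))
print(rmse)
```

    Because training reduces to one linear solve, the sensitive choices are exactly the ones the abstract flags: the number of hidden nodes, the activation, and which attributes are fed in.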

  12. Predictive information processing in music cognition. A critical review.

    PubMed

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

    Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations at different levels of processing complexity and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    PubMed

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross validation. Prediction confidence was analyzed using the predictions from the cross validations. Informative chemical features for ERβ binding were identified by analyzing the frequency data of the chemical descriptors used in the models in the 5-fold cross validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross validations was 93.14%, with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrate that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model is expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
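
    The permutation check for chance correlation works by rescoring the model against shuffled labels. This hedged toy (a one-feature threshold classifier on synthetic binder labels, not Decision Forest on EADB) shows the logic:

```python
# Permutation-test sketch (illustrative only): compare the real score
# against the score distribution obtained with shuffled labels.
import numpy as np

rng = np.random.default_rng(5)
feature = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
binds = np.array([0] * 100 + [1] * 100)         # 0 = non-binder, 1 = binder (synthetic)

def accuracy(x, y):
    # trivial stand-in classifier: predict "binder" when the feature exceeds 1.0
    return float(np.mean((x > 1.0).astype(int) == y))

real = accuracy(feature, binds)
perm = [accuracy(feature, rng.permutation(binds)) for _ in range(1000)]
p_value = float(np.mean([s >= real for s in perm]))
print(real, p_value)
```

    If shuffled labels almost never reach the real score, the model's accuracy is unlikely to be a chance artifact, which is exactly what the 1000-permutation test in the abstract establishes.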

  14. Comparison of Prediction Model for Cardiovascular Autonomic Dysfunction Using Artificial Neural Network and Logistic Regression Analysis

    PubMed Central

    Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo

    2013-01-01

    Background This study aimed to develop artificial neural network (ANN) and multivariable logistic regression (LR) analyses for prediction modeling of cardiovascular autonomic (CA) dysfunction in the general population, and to compare the prediction models built with the two approaches. Methods and Materials We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in a validation set. The performances of these prediction models were then compared. Results Univariate analysis indicated that 14 risk factors showed statistically significant associations with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver-operating characteristic curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, and a noninferiority result was found (P<0.001). Similar results were found in comparisons of sensitivity, specificity, and predictive values between the LR and ANN prediction models. Conclusion Prediction models for CA dysfunction were developed using ANN and LR. ANN and LR are two effective tools for developing prediction models based on our dataset. PMID:23940593
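
    The comparison metric here, area under the ROC curve, has a simple rank interpretation: the probability that a random positive case gets a higher risk score than a random negative case. A hedged sketch on synthetic scores (not the study's patient data):

```python
# AUC via the rank (Mann-Whitney) identity (illustrative synthetic data).
import numpy as np

def auc(scores, labels):
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # count positive-vs-negative comparisons the positive case wins (ties = 0.5)
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

rng = np.random.default_rng(6)
labels = (rng.uniform(size=2092) < 0.3).astype(int)     # synthetic prevalence
scores = labels * 1.0 + rng.normal(0, 1.2, size=2092)   # imperfect risk score
print(auc(scores, labels))
```

    Values of 0.758 and 0.762, as reported for LR and ANN, therefore mean both models rank a random dysfunction case above a random healthy case about three times out of four, so the noninferiority finding is unsurprising.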

  15. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.Plain Language SummaryDecadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean memory due to its heat capacity holds big potential skill. In recent years, more precise initialization techniques of coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect. Applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble. 
Instead of evaluating one prediction, but the whole ensemble with its ensemble average, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure applying the average during the model run, called ensemble dispersion filter, results in more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4540130','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4540130"><span>Assessment of quantitative structure-activity relationship of toxicity prediction models for Korean chemical substance control legislation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai</p> <p>2015-01-01</p> <p>Objectives For successful adoption of legislation controlling registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. Particularly, methods used in predicting toxicities of chemical substances during acquisition of required data, ultimately become an economic method for future dealings with new substances. 
Although the need for such methods is gradually increasing, the-required information about reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured the 10 representative QSAR-based prediction models and its information that can make predictions about substances that are expected to be regulated. We used models that predict and confirm usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and relevant data, we prepared methods quantifying the scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared adequacies of the models using the Alternative non-testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared inclusion rates of substances expected to be regulated, to confirm the applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database. Based on this data, we aimed to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aimed to provide a foundation that could be used in actual assessments and regulations. 
PMID:26206368</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28912803','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28912803"><span>Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo</p> <p>2017-01-01</p> <p>Predicting the output power of photovoltaic system with nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. 
The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5585556','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5585556"><span>Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2017-01-01</p> <p>Predicting the output power of photovoltaic system with nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. 
The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23534394','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23534394"><span>Comparison of in silico models for prediction of mutagenicity.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bakhtyari, Nazanin G; Raitano, Giuseppa; Benfenati, Emilio; Martin, Todd; Young, Douglas</p> <p>2013-01-01</p> <p>Using a dataset with more than 6000 compounds, the performance of eight quantitative structure activity relationships (QSAR) models was evaluated: ACD/Tox Suite, Absorption, Distribution, Metabolism, Elimination, and Toxicity of chemical substances (ADMET) predictor, Derek, Toxicity Estimation Software Tool (T.E.S.T.), TOxicity Prediction by Komputer Assisted Technology (TOPKAT), Toxtree, CEASAR, and SARpy (SAR in python). In general, the results showed a high level of performance. To have a realistic estimate of the predictive ability, the results for chemicals inside and outside the training set for each model were considered. The effect of applicability domain tools (when available) on the prediction accuracy was also evaluated. The predictive tools included QSAR models, knowledge-based systems, and a combination of both methods. 
Models based on statistical QSAR methods gave better results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2786910','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2786910"><span>Long-term prediction of fish growth under varying ambient temperature using a multiscale dynamic model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2009-01-01</p> <p>Background Feed composition has a large impact on the growth of animals, particularly marine fish. We have developed a quantitative dynamic model that can predict the growth and body composition of marine fish for a given feed composition over a timespan of several months. The model takes into consideration the effects of environmental factors, particularly temperature, on growth, and it incorporates detailed kinetics describing the main metabolic processes (protein, lipid, and central metabolism) known to play major roles in growth and body composition. Results For validation, we compared our model's predictions with the results of several experimental studies. We showed that the model gives reliable predictions of growth, nutrient utilization (including amino acid retention), and body composition over a timespan of several months, longer than most of the previously developed predictive models. Conclusion We demonstrate that, despite the difficulties involved, multiscale models in biology can yield reasonable and useful results. The model predictions are reliable over several timescales and in the presence of strong temperature fluctuations, which are crucial factors for modeling marine organism growth. The model provides important improvements over existing models. 
PMID:19903354</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_1 --> <div id="page_2" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="21"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5792013','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5792013"><span>Vesicular stomatitis forecasting based on Google Trends</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lu, Yi; Zhou, GuangYa; Chen, Qin</p> <p>2018-01-01</p> <p>Background Vesicular stomatitis (VS) is an important viral disease of livestock. The main feature of VS is irregular blisters that occur on the lips, tongue, oral mucosa, hoof crown and nipple. 
Humans can also be infected with vesicular stomatitis and develop meningitis. This study analyses the 2014 American VS outbreaks in order to accurately predict vesicular stomatitis outbreak trends. Methods American VS outbreak data were collected from the OIE. The data for VS keywords were obtained by inputting 24 disease-related keywords into Google Trends. After calculating the Pearson and Spearman correlation coefficients, it was found that there was a relationship between outbreaks and keywords derived from Google Trends. Finally, the prediction model was constructed based on qualitative classification and quantitative regression. Results For the regression model, the Pearson correlation coefficients between the predicted outbreaks and actual outbreaks are 0.953 and 0.948, respectively. For the qualitative classification model, we constructed five classification predictive models and chose the best classification predictive model as the result. The results showed that the SN (sensitivity), SP (specificity) and ACC (prediction accuracy) values of the best classification predictive model are 78.52%, 72.5% and 77.14%, respectively. Conclusion This study applied Google search data to construct a qualitative classification model and a quantitative regression model. The results show that the method is effective and that these two models obtain more accurate forecasts. 
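The keyword-screening step above relies on Pearson and Spearman correlation; a minimal sketch with hypothetical weekly search volumes and case counts (ties are not handled in the rank step):

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation = Pearson on the ranks (no ties here)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

searches = [12, 30, 45, 80, 66]   # hypothetical weekly keyword volumes
outbreaks = [1, 2, 4, 7, 5]       # hypothetical weekly VS case counts
print(round(pearson(searches, outbreaks), 3))
print(round(spearman(searches, outbreaks), 3))
```

Keywords whose coefficients exceed a chosen threshold would then be retained as model inputs.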
PMID:29385198</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29297351','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29297351"><span>Probability-based collaborative filtering model for predicting gene-disease associations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan</p> <p>2017-12-28</p> <p>Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned models. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with the results of four state-of-the-art approaches. The results show that PCFM performs better than other advanced approaches. 
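The latent factorization idea underlying PCFM can be illustrated with a plain L2-regularized matrix factorization; the association matrix, rank, and hyperparameters below are hypothetical, and the heterogeneous regularization of models I and II is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary gene-disease association matrix (1 = known link).
R = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 1]], dtype=float)

n_genes, n_diseases, k = R.shape[0], R.shape[1], 2
G = 0.1 * rng.standard_normal((n_genes, k))      # gene latent factors
D = 0.1 * rng.standard_normal((n_diseases, k))   # disease latent factors
lam, lr = 0.01, 0.05                             # L2 penalty, step size

for _ in range(2000):
    E = R - G @ D.T                  # residual on all cells
    G += lr * (E @ D - lam * G)      # gradient steps with L2 penalty
    D += lr * (E.T @ G - lam * D)

scores = G @ D.T                     # predicted association strengths
print(np.round(scores, 2))
```

High scores in cells that were 0 in R are the candidate new gene-disease associations, which is how such a model suggests links with no known relationships.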
The PCFM model can be leveraged to predict disease genes, especially for new human genes or diseases with no known relationships.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26958207','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26958207"><span>Clinical Predictive Modeling Development and Deployment through FHIR Web Services.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng</p> <p>2015-01-01</p> <p>Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. 
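A toy version of such a scoring service's core logic might look as follows; the resource layout, feature codes, and model coefficients are hypothetical simplifications of real FHIR resources, and the web-service layer itself is omitted:

```python
import json
import math

# Hypothetical logistic model coefficients produced at development time.
MODEL = {"intercept": -4.0, "age": 0.04, "systolic_bp": 0.02}

def score_patient(fhir_bundle_json):
    """Score one patient from a minimal FHIR-like bundle.
    A real service would parse full FHIR Patient/Observation
    resources; here each entry is just a code/value pair."""
    bundle = json.loads(fhir_bundle_json)
    features = {e["code"]: e["value"] for e in bundle["entries"]}
    z = MODEL["intercept"] + sum(MODEL[k] * features.get(k, 0.0)
                                 for k in ("age", "systolic_bp"))
    risk = 1.0 / (1.0 + math.exp(-z))            # logistic score
    return {"resourceType": "RiskAssessment", "score": round(risk, 3)}

request = json.dumps({"entries": [{"code": "age", "value": 70},
                                  {"code": "systolic_bp", "value": 150}]})
print(score_patient(request))
```

The request/response shape mirrors the paper's pattern of receiving patient information and responding with a prediction score, without claiming to reproduce its actual API.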
We found the system to be reasonably fast with one second total response time per patient prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4765683','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4765683"><span>Clinical Predictive Modeling Development and Deployment through FHIR Web Services</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng</p> <p>2015-01-01</p> <p>Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction. 
PMID:26958207</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3662169','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3662169"><span>The novel application of artificial neural network on bioelectrical impedance analysis to assess the body composition in elderly</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Background This study aims to improve accuracy of Bioelectrical Impedance Analysis (BIA) prediction equations for estimating fat free mass (FFM) of the elderly by using non-linear Back Propagation Artificial Neural Network (BP-ANN) model and to compare the predictive accuracy with the linear regression model by using energy dual X-ray absorptiometry (DXA) as reference method. Methods A total of 88 Taiwanese elderly adults were recruited in this study as subjects. Linear regression equations and BP-ANN prediction equation were developed using impedances and other anthropometrics for predicting the reference FFM measured by DXA (FFMDXA) in 36 male and 26 female Taiwanese elderly adults. The FFM estimated by BIA prediction equations using traditional linear regression model (FFMLR) and BP-ANN model (FFMANN) were compared to the FFMDXA. The measuring results of an additional 26 elderly adults were used to validate than accuracy of the predictive models. Results The results showed the significant predictors were impedance, gender, age, height and weight in developed FFMLR linear model (LR) for predicting FFM (coefficient of determination, r2 = 0.940; standard error of estimate (SEE) = 2.729 kg; root mean square error (RMSE) = 2.571kg, P < 0.001). 
The above predictors were set as the variables of the input layer by using five neurons in the BP-ANN model (r2 = 0.987, with an SD of 1.192 kg and a lower RMSE of 1.183 kg), which gave greater accuracy for estimating FFM than the linear model. The results showed that better agreement existed between FFMANN and FFMDXA than between FFMLR and FFMDXA. Conclusion When comparing the performance of the developed prediction equations for estimating the reference FFMDXA, the linear model had a lower r2 and a larger SD in its predictions than the BP-ANN model, indicating that the ANN model is more suitable for estimating FFM. PMID:23388042</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27584594','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27584594"><span>Automatically updating predictive modeling workflows support decision-making in drug design.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O</p> <p>2016-09-01</p> <p>Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. 
Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1999WRR....35.1089F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1999WRR....35.1089F"><span>Prediction of relative and absolute permeabilities for gas and water from soil water retention curves using a pore-scale network model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fischer, Ulrich; Celia, Michael A.</p> <p>1999-04-01</p> <p>Functional relationships for unsaturated flow in soils, including those between capillary pressure, saturation, and relative permeabilities, are often described using analytical models based on the bundle-of-tubes concept. These models are often limited by, for example, inherent difficulties in prediction of absolute permeabilities, and in incorporation of a discontinuous nonwetting phase. To overcome these difficulties, an alternative approach may be formulated using pore-scale network models. In this approach, the pore space of the network model is adjusted to match retention data, and absolute and relative permeabilities are then calculated. A new approach that allows more general assignments of pore sizes within the network model provides for greater flexibility to match measured data. This additional flexibility is especially important for simultaneous modeling of main imbibition and drainage branches. 
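The analytical bundle-of-tubes family that such network models are compared against is typified by the van Genuchten-Mualem closure, which predicts water relative permeability directly from retention-curve parameters; the parameter values below are illustrative only:

```python
import math

def vg_relative_permeability(h, alpha=0.5, n=2.5):
    """Effective saturation and water relative permeability from a
    van Genuchten retention curve with the Mualem (bundle-of-tubes)
    closure. h is suction head (> 0); alpha and n are illustrative
    retention parameters."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * h) ** n) ** (-m)                 # effective saturation
    kr = math.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2
    return se, kr

for h in (0.5, 2.0, 10.0):
    se, kr = vg_relative_permeability(h)
    print(f"h={h:5.1f}  Se={se:.3f}  kr={kr:.4f}")
```

Both saturation and relative permeability fall monotonically with suction, which is the behavior the pore-network model reproduces while additionally predicting absolute permeability.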
Through comparisons between the network model results, analytical model results, and measured data for a variety of both undisturbed and repacked soils, the network model is seen to match capillary pressure-saturation data nearly as well as the analytical model, to predict water phase relative permeabilities equally well, and to predict gas phase relative permeabilities significantly better than the analytical model. The network model also provides very good estimates for intrinsic permeability and thus for absolute permeabilities. Both the network model and the analytical model lost accuracy in predicting relative water permeabilities for soils characterized by a van Genuchten exponent n≲3. Overall, the computational results indicate that reliable predictions of both relative and absolute permeabilities are obtained with the network model when the model matches the capillary pressure-saturation data well. The results also indicate that measured imbibition data are crucial to good predictions of the complete hysteresis loop.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AcASn..58...28C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AcASn..58...28C"><span>The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.</p> <p>2017-05-01</p> <p>The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is the key problem of precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. 
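First-order Takagi-Sugeno inference combines Gaussian rule firing strengths with linear consequents; a minimal sketch with two hypothetical rules over a single input:

```python
import math

# Two hypothetical T-S rules over one input x (e.g., a normalized epoch):
# rule i: IF x is Gaussian(c_i, s_i) THEN y_i = a_i * x + b_i
RULES = [
    {"c": 0.0, "s": 0.5, "a": 1.0, "b": 0.0},
    {"c": 1.0, "s": 0.5, "a": -0.5, "b": 2.0},
]

def ts_predict(x):
    """First-order Takagi-Sugeno inference: Gaussian membership
    degrees weight the linear rule consequents."""
    weights = [math.exp(-((x - r["c"]) / r["s"]) ** 2) for r in RULES]
    outputs = [r["a"] * x + r["b"] for r in RULES]
    return sum(w * o for w, o in zip(weights, outputs)) / sum(weights)

print(round(ts_predict(0.0), 3))   # dominated by rule 1
print(round(ts_predict(1.0), 3))   # dominated by rule 2
```

In the paper's setting, the rule parameters would be learned from the preprocessed SCB series rather than fixed by hand as they are here.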
Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by IGS (International Global Navigation Satellite System Service) to carry out short-term prediction experiments, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction, and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70189181','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70189181"><span>Evaluating model structure adequacy: The case of the Maggia Valley groundwater system, southern Switzerland</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Hill, Mary C.; Foglia, L.; Mehl, S. W.; Burlando, P.</p> <p>2013-01-01</p> <p>Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. 
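The AIC-family and BIC criteria compared in the study can be computed from a Gaussian-likelihood fit as follows; the RSS values and parameter counts are hypothetical, and KIC's sensitivity term is omitted:

```python
import math

def information_criteria(rss, n, k):
    """Gaussian-likelihood AIC, AICc, and BIC from a residual sum of
    squares, n observations, and k fitted parameters (additive
    constants dropped)."""
    aic = n * math.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = n * math.log(rss / n) + k * math.log(n)
    return aic, aicc, bic

# Hypothetical fits: a simpler and a more complex groundwater model.
for name, rss, k in [("simple", 12.0, 3), ("complex", 9.5, 8)]:
    aic, aicc, bic = information_criteria(rss, n=40, k=k)
    print(f"{name}: AIC={aic:.2f}  AICc={aicc:.2f}  BIC={bic:.2f}")
```

BIC's stronger penalty on k illustrates why the criteria can rank the same pair of models differently, which is one root of the inconsistencies the study reports.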
Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This disturbing result suggests reconsidering the utility of model selection criteria and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19790008550','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19790008550"><span>Initial Comparison of Single Cylinder Stirling Engine Computer Model Predictions with Test Results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Tew, R. C., Jr.; Thieme, L. G.; Miao, D.</p> <p>1979-01-01</p> <p>A Stirling engine digital computer model developed at NASA Lewis Research Center was configured to predict the performance of the GPU-3 single-cylinder rhombic drive engine. Revisions to the basic equations and assumptions are discussed. 
Model predictions are compared with the early results of the Lewis Research Center GPU-3 tests.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015IAUGA..2256137W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015IAUGA..2256137W"><span>Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Qijie</p> <p>2015-08-01</p> <p>The accuracy of medium- and long-term prediction of length of day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence. In particular, the LSAR model greatly improves the resolution of signals’ low-frequency components. Therefore, it can improve the efficiency of prediction. In this work, LSAR is used to forecast the LOD change. The LOD series from EOP 08 C04 provided by IERS is modeled by both the LSAR and AR models. The results of the two models are analyzed and compared. When the prediction length is between 10-30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum gain of around 19%. 
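The AR building block of LS+AR and LSAR can be sketched as a least-squares fit over lagged values; the 27.55-day term below is a synthetic stand-in for a tidal component of LOD, and the leap-step sampling itself is not implemented:

```python
import numpy as np

def fit_ar(series, order):
    """Least-squares AR(order) coefficients (the LS step of LS+AR)."""
    X = np.column_stack([series[i:len(series) - order + i]
                         for i in range(order)])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(series, order, steps):
    """Iterated multi-step prediction from the fitted AR model."""
    coef = fit_ar(series, order)
    hist = list(series)
    for _ in range(steps):
        hist.append(float(np.dot(hist[-order:], coef)))
    return hist[len(series):]

t = np.arange(300)
lod = 1.5 + 0.4 * np.sin(2 * np.pi * t / 27.55)   # synthetic tidal-like term
pred = forecast(lod, order=30, steps=30)
print(round(pred[0], 3), round(pred[-1], 3))
```

An LSAR variant would fit the same recursion over leap-step (subsampled) lags, trading temporal resolution for reduced edge effects.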
The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..346a2037S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..346a2037S"><span>Prediction of surface roughness in turning of Ti-6Al-4V using cutting parameters, forces and tool vibration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sahu, Neelesh Kumar; Andhare, Atul B.; Andhale, Sandip; Raju Abraham, Roja</p> <p>2018-04-01</p> <p>The present work deals with prediction of surface roughness using cutting parameters along with in-process measured cutting force and tool vibration (acceleration) during turning of Ti-6Al-4V with cubic boron nitride (CBN) inserts. A full factorial design is used for design of experiments using cutting speed, feed rate and depth of cut as design variables. A prediction model for surface roughness is developed using response surface methodology with cutting speed, feed rate, depth of cut, resultant cutting force and acceleration as control variables. Analysis of variance (ANOVA) is performed to find out significant terms in the model. Insignificant terms are removed after performing a statistical test using a backward elimination approach. The effect of each control variable on surface roughness is also studied. A prediction correlation coefficient (R2 pred) of 99.4% shows that the model explains the experimental results well and remains stable when factors are adjusted, added, or eliminated. The model is validated with five fresh experiments and measured force and acceleration values. The average absolute error between the RSM model and the experimentally measured surface roughness is found to be 10.2%. 
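The response-surface step reduces to ordinary least squares on a second-order design matrix; the turning data below are synthetic stand-ins and only a subset of second-order terms is included for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic turning data: cutting speed, feed, depth of cut (coded units).
n = 40
v, f, d = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
ra = (1.2 + 0.3 * v + 0.8 * f + 0.2 * d + 0.4 * f * f
      + 0.05 * rng.standard_normal(n))          # roughness with noise

# Second-order response-surface design matrix (subset of terms).
X = np.column_stack([np.ones(n), v, f, d, f * f, v * f])
beta, *_ = np.linalg.lstsq(X, ra, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((ra - pred) ** 2) / np.sum((ra - ra.mean()) ** 2)
print("coefficients:", np.round(beta, 2))
print("R2:", round(float(r2), 3))
```

Backward elimination, as in the paper, would then drop terms whose coefficients are statistically insignificant and refit.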
Additionally, an artificial neural network model is also developed for prediction of surface roughness. The prediction results of the modified regression model are compared with those of the ANN. Both the RSM model and the ANN (average absolute error 7.5%) predict roughness with more than 90% accuracy. From the results obtained it is found that including cutting force and vibration for prediction of surface roughness gives better prediction than considering only cutting parameters. Moreover, the ANN gives better predictions than the RSM model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AcASn..57...78W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AcASn..57...78W"><span>A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.</p> <p>2016-01-01</p> <p>In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposed a new SCB prediction model which can take physical characteristics of space-borne atomic clock, the cyclic variation, and random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic items to fit and extract the trend term and cyclic term of SCB; then based on the characteristics of fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain final SCB prediction values. 
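The first stage of such a model (a quadratic polynomial plus periodic terms, fit by least squares) can be sketched as below; the epochs, period, and noise level are hypothetical, and the ARIMA residual stage is omitted:

```python
import numpy as np

t = np.arange(0, 48, 0.25)                     # hours, 15-min epochs
period = 12.0                                  # hypothetical half-day term
# Synthetic clock bias: quadratic drift plus a periodic component (ns).
truth = 5.0 + 2.0 * t + 0.01 * t ** 2 + 0.8 * np.sin(2 * np.pi * t / period)
scb = truth + 0.05 * np.random.default_rng(3).standard_normal(t.size)

# Design matrix of the quadratic-polynomial-plus-periodic model.
A = np.column_stack([np.ones_like(t), t, t ** 2,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
x, *_ = np.linalg.lstsq(A, scb, rcond=None)

t_pred = np.arange(48, 54, 0.25)               # extrapolate six hours
A_pred = np.column_stack([np.ones_like(t_pred), t_pred, t_pred ** 2,
                          np.sin(2 * np.pi * t_pred / period),
                          np.cos(2 * np.pi * t_pred / period)])
pred = A_pred @ x
print(np.round(x, 3))
```

In the full method, the fitting residuals `scb - A @ x` would be modeled with ARIMA and that residual forecast added to `pred`.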
Finally, this paper uses precise SCB data from IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance compared with the quadratic polynomial model, grey model, and ARIMA model. In addition, the new method can also overcome the shortcomings of the ARIMA model in model identification and order determination.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014ACPD...1420997H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014ACPD...1420997H"><span>Long-term particulate matter modeling for health effects studies in California - Part 1: Model performance on temporal and spatial variations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hu, J.; Zhang, H.; Ying, Q.; Chen, S.-H.; Vandenberghe, F.; Kleeman, M. J.</p> <p>2014-08-01</p> <p>For the first time, a decadal (9 years from 2000 to 2008) air quality model simulation with 4 km horizontal resolution and daily time resolution has been conducted in California to provide air quality data for health effects studies. Model predictions are compared to measurements to evaluate the accuracy of the simulation with an emphasis on spatial and temporal variations that could be used in epidemiology studies. Better model performance is found at longer averaging times, suggesting that model results with averaging times ≥ 1 month should be the first to be considered in epidemiological studies. The UCD/CIT model predicts spatial and temporal variations in the concentrations of O3, PM2.5, EC, OC, nitrate, and ammonium that meet standard modeling performance criteria when compared to monthly-averaged measurements. Predicted sulfate concentrations do not meet target performance metrics due to missing sulfur sources in the emissions. 
Predicted seasonal and annual variations of PM2.5, EC, OC, nitrate, and ammonium have mean fractional biases that meet the model performance criteria in 95%, 100%, 71%, 73%, and 92% of the simulated months, respectively. The base dataset provides an improvement for predicted population exposure to PM concentrations in California compared to exposures estimated by central site monitors operated one day out of every 3 days at a few urban locations. Uncertainties in the model predictions arise from several issues. Incomplete understanding of secondary organic aerosol formation mechanisms leads to OC bias in the model results in summertime but does not affect OC predictions in winter when concentrations are typically highest. The CO and NO (species dominated by mobile emissions) results reveal temporal and spatial uncertainties associated with the mobile emissions generated by the EMFAC 2007 model. The WRF model tends to over-predict wind speed during stagnation events, leading to under-predictions of high PM concentrations, usually in winter months. The WRF model also generally under-predicts relative humidity, resulting in less particulate nitrate formation especially during winter months. These issues will be improved in future studies. 
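The mean fractional bias used in these performance criteria is a simple symmetric metric; a sketch with hypothetical monthly-mean PM2.5 values:

```python
def mean_fractional_bias(modeled, observed):
    """MFB = (2/n) * sum((M - O) / (M + O)); a standard particulate
    matter model performance metric (values near 0 indicate low bias)."""
    pairs = list(zip(modeled, observed))
    return 2.0 / len(pairs) * sum((m - o) / (m + o) for m, o in pairs)

# Hypothetical monthly-mean PM2.5 concentrations (ug/m3).
model_vals = [12.0, 18.5, 25.0, 9.0]
obs_vals = [10.0, 20.0, 24.0, 11.0]
print(round(mean_fractional_bias(model_vals, obs_vals), 4))
```

Because the denominator is the model-observation average, over- and under-predictions are bounded symmetrically (between -2 and +2), unlike a plain percentage bias.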
All model results included in the current manuscript can be downloaded free of charge at <a href="http://faculty.engineering.ucdavis.edu/kleeman/"target="_blank">http://faculty.engineering.ucdavis.edu/kleeman/</a>.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.sciencedirect.com/science/article/pii/S002216941500534X','USGSPUBS'); return false;" href="http://www.sciencedirect.com/science/article/pii/S002216941500534X"><span>Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Curtis, Gary P.; Lu, Dan; Ye, Ming</p> <p>2015-01-01</p> <p>While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. 
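The model-averaging step of approaches like MLBMA weights each alternative model by its information-criterion value; a sketch with hypothetical criterion values and predictions:

```python
import math

def bma_weights(ic_values):
    """Posterior model probabilities from information-criterion values
    (e.g., KIC in MLBMA): w_i proportional to exp(-0.5 * delta_i),
    where delta_i is the criterion difference from the best model."""
    best = min(ic_values)
    raw = [math.exp(-0.5 * (ic - best)) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical criterion values and U(VI) transport predictions
# for three alternative models.
ics = [100.0, 102.0, 110.0]
predictions = [4.2, 3.8, 5.1]
weights = bma_weights(ics)
averaged = sum(w * p for w, p in zip(weights, predictions))
print([round(w, 3) for w in weights], round(averaged, 3))
```

The exponential form means a model 10 criterion units behind the best contributes almost nothing, which is why retaining structurally distinct, competitive models matters for the averaged prediction.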
The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20000116643','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20000116643"><span>Innovative Liner Concepts: Experiments and Impedance Modeling of Liners Including the Effect of Bias Flow</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris</p> <p>2000-01-01</p> <p>The study of normal impedance of perforated plate acoustic liners including the effect of bias flow was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes with the inclusion of end corrections to handle finite length effects. These models assumed incompressible and compressible flows, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. 
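The core of the MLBMA prediction step described above — weighting alternative models by exp(-ΔIC/2) and averaging their predictions — can be sketched as follows. The information-criterion values, model means, and standard deviations are hypothetical, and the log score is evaluated for a Gaussian mixture (an assumption, since the abstract does not specify the predictive distribution):

```python
import math

def mlbma_weights(ic_values):
    """Posterior model probabilities from information-criterion values
    (e.g. KIC/BIC): w_k proportional to exp(-delta_IC_k / 2)."""
    ic_min = min(ic_values)
    raw = [math.exp(-(ic - ic_min) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

def bma_prediction(weights, means):
    """BMA point prediction: probability-weighted average of model means."""
    return sum(w * m for w, m in zip(weights, means))

def log_score(weights, means, sigmas, obs):
    """Negative log of the BMA Gaussian-mixture density at the observed
    value; lower is better."""
    dens = sum(
        w * math.exp(-0.5 * ((obs - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in zip(weights, means, sigmas))
    return -math.log(dens)

# Three hypothetical alternative models
ic = [100.0, 102.0, 110.0]
means = [4.8, 5.4, 7.0]
sigmas = [0.5, 0.6, 0.9]
w = mlbma_weights(ic)
print([round(x, 3) for x in w])
print(round(bma_prediction(w, means), 3))
```

An observation near the mixture's bulk yields a lower (better) log score than one far from all model predictions, which is the sense in which MLBMA "outperforms" a single model here.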
The predicted resistance results showed better agreement with experiments for the higher percent open area samples. The agreement also tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples. The predictions tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but agreement deteriorated as the percent open area was lowered (5%) and bias flow was increased. The incompressible model was then fitted to the experimental database. The fit was performed using an optimization routine that found the set of multiplicative coefficients for the non-dimensional groups that minimized the least squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than the terms associated with the bias flow. The fitted model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared to the unfitted model. 
The fitted model and the unfitted model performed equally well for the higher percent open area (10% and 15%).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3667849','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3667849"><span>Biological Networks for Predicting Chemical Hepatocarcinogenicity Using Gene Expression Data from Treated Mice and Relevance across Human and Rat Species</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Thomas, Reuben; Thomas, Russell S.; Auerbach, Scott S.; Portier, Christopher J.</p> <p>2013-01-01</p> <p>Background Several groups have employed genomic data from subchronic chemical toxicity studies in rodents (90 days) to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to belong to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. Objectives To develop a molecular pathway-based prediction model of long term hepatocarcinogenicity using 90-day gene expression data and to evaluate the performance of this model with respect to both intra-species, dose-dependent and cross-species predictions. Methods Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 were positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. 
Results Using 5-fold cross validation, the developed prediction model had reasonable predictive performance with the area under receiver-operator curve (AUC) equal to 0.66. The developed prediction model was then used to extrapolate the results to data associated with rat and human liver cancer. The extrapolated model worked well for both extrapolated species (AUC value of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Conclusions Pathway-based prediction models estimated from sub-chronic data hold promise for predicting long-term carcinogenicity and also for its ability to extrapolate results across multiple species. PMID:23737943</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23267564','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23267564"><span>Ternary isocratic mobile phase optimization utilizing resolution Design Space based on retention time and peak width modeling.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kawabe, Takefumi; Tomitsuka, Toshiaki; Kajiro, Toshi; Kishi, Naoyuki; Toyo'oka, Toshimasa</p> <p>2013-01-18</p> <p>An optimization procedure of ternary isocratic mobile phase composition in the HPLC method using a statistical prediction model and visualization technique is described. In this report, two prediction models were first evaluated to obtain reliable prediction results. The retention time prediction model was constructed by modification from past respectable knowledge of retention modeling against ternary solvent strength changes. An excellent correlation between observed and predicted retention time was given in various kinds of pharmaceutical compounds by the multiple regression modeling of solvent strength parameters. 
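The AUC values quoted for the hepatocarcinogenicity model are areas under the ROC curve, which can be computed directly from ranked classifier scores via the Mann-Whitney statistic. A self-contained sketch with made-up scores:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive outranks a randomly
    chosen negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores, carcinogenic (1) vs. non-carcinogenic (0)
scores = [0.9, 0.8, 0.35, 0.6, 0.3, 0.2]
labels = [1, 1, 1, 0, 0, 0]
print(auc(scores, labels))  # 8/9, since one positive is outranked once
```

An AUC of 0.5 is chance-level ranking; 1.0 is perfect separation, which puts the reported 0.66 (mice) and 0.91 (humans) in context.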
The peak-width-at-half-height prediction model employed polynomial fitting of the retention time, because a linear relationship between the peak width at half height and the retention time was not obtained even after accounting for the contribution of the extra-column effect based on a moment method. This model gave accurate predictions, with correlation coefficients between observed and predicted peak widths at half height mostly above 0.99. A procedure to visualize a resolution Design Space was then developed as the second challenge. An artificial neural network was used to link the ternary solvent strength parameters directly to the predicted resolution, determined from the accurate predictions of retention time and peak width at half height, and to visualize appropriate ternary mobile phase compositions as the region of resolution above 1.5 on a contour profile. Using mixtures of similar pharmaceutical compounds in case studies, we verified that the procedure can locate the optimal range of conditions. Observed chromatographic results under the optimal conditions largely matched the predictions, and the average difference between observed and predicted resolution was approximately 0.3, indicating that the proposed procedure achieves sufficient predictive accuracy. Consequently, the resolution Design Space based on these accurate predictions provides a way to search for the range of ternary solvent strengths that achieves an appropriate separation. Copyright © 2012 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26482809','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26482809"><span>Verifying the performance of artificial neural network and multiple linear regression in predicting the mean seasonal municipal solid waste generation rate: A case study of Fars province, Iran.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Azadi, Sama; Karimi-Jashni, Ayoub</p> <p>2016-02-01</p> <p>Predicting the mass of solid waste generation plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, Artificial Neural Network (ANN) and Multiple Linear Regression (MLR) was verified to predict mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures, MAE, MAPE, RMSE and R were used to evaluate the performance of these models. The MLR, as a conventional model, showed poor prediction performance. On the other hand, the results indicated that the ANN model, as a non-linear model, has a higher predictive accuracy when it comes to prediction of the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate. Copyright © 2015 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27368775','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27368775"><span>Prediction uncertainty and optimal experimental design for learning dynamical systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P</p> <p>2016-06-01</p> <p>Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. 
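The prediction-deviation idea above — searching for a pair of well-fitting models with maximally different predictions — can be illustrated with a toy one-parameter model and a grid search. The paper poses this as an optimization problem over model pairs; the model form (a line y = a·x), the tolerance, and the data here are all hypothetical:

```python
def sse(a, xs, ys):
    """Sum of squared errors of the one-parameter model y = a * x."""
    return sum((y - a * x) ** 2 for x, y in zip(xs, ys))

def prediction_deviation(xs, ys, x_new, tol=1.2, grid=None):
    """Largest gap between predictions at x_new over all parameter values
    whose fit error is within `tol` times the best achievable fit error."""
    grid = grid or [i / 100.0 for i in range(0, 301)]  # a in [0, 3]
    best = min(sse(a, xs, ys) for a in grid)
    good = [a for a in grid if sse(a, xs, ys) <= tol * best]
    preds = [a * x_new for a in good]
    return max(preds) - min(preds)

xs, ys = [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]
# Deviation grows with extrapolation distance: the data constrain
# predictions near the observations more tightly than far from them.
near = prediction_deviation(xs, ys, x_new=2.0)
far = prediction_deviation(xs, ys, x_new=10.0)
print(near, far)
```

Because every good-fit model here is a line through the origin, the deviation scales linearly with the prediction point — a small worked instance of the paper's claim that prediction deviation measures how strongly the data constrain a given prediction.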
These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_2 --> <div id="page_3" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="41"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810015023','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810015023"><span>Thermal Pollution Mathematical Model. Volume 4: Verification of Three-Dimensional Rigid-Lid Model at Lake Keowee. [environmental impact of thermal discharges from power plants]</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. 
K.</p> <p>1980-01-01</p> <p>The rigid lid model was developed to predict three-dimensional temperature and velocity distributions in lakes. This model was verified at various sites (Lake Belews, Biscayne Bay, etc.), and the verification at Lake Keowee was the last in this series of verification runs. The verification at Lake Keowee included the following: (1) selecting the domain of interest, grid systems, and comparing the preliminary results with archival data; (2) obtaining actual ground truth and infrared scanner data both for summer and winter; and (3) using the model to predict the measured data for the above periods and comparing the predicted results with the actual data. The model results compared well with measured data. Thus, the model can be used as an effective predictive tool for future sites.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70176649','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70176649"><span>Predicting thermally stressful events in rivers with a strategy to evaluate management alternatives</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Maloney, K.O.; Cole, J.C.; Schmid, M.</p> <p>2016-01-01</p> <p>Water temperature is an important factor in river ecology. Numerous models have been developed to predict river temperature. However, many were not designed to predict thermally stressful periods. Because such events are rare, traditionally applied analyses are inappropriate. Here, we developed two logistic regression models to predict thermally stressful events in the Delaware River at the US Geological Survey gage near Lordville, New York. One model predicted the probability of an event >20.0 °C, and a second predicted an event >22.2 °C. 
Both models were strong (independent test data sensitivity 0.94 and 1.00, specificity 0.96 and 0.96), predicting 63 of 67 events with the >20.0 °C model and all 15 events with the >22.2 °C model. Both showed negative relationships with released volume from the upstream Cannonsville Reservoir and positive relationships with the difference between air temperature and the previous day's water temperature at Lordville. We further predicted how increasing release volumes from Cannonsville Reservoir affected the probabilities of correctly predicted events. For the >20.0 °C model, an increase of 0.5 to a proportionally adjusted release (that accounts for other sources) resulted in 35.9% of events in the training data falling below cutoffs; increasing this adjustment by 1.0 resulted in 81.7% falling below cutoffs. For the >22.2 °C model, these adjustments resulted in 71.1% and 100.0% of events falling below cutoffs. Results from these analyses can help managers make informed decisions on alternative release scenarios.
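The two quantities underlying the abstract above — the logistic event probability and the sensitivity/specificity scores — can be sketched as follows. The coefficient signs follow the reported relationships (negative for release volume, positive for the air-water temperature difference), but the magnitudes and inputs are invented:

```python
import math

def logistic_prob(intercept, coefs, features):
    """P(event) from a fitted logistic regression: 1 / (1 + exp(-z))."""
    z = intercept + sum(c * f for c, f in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical coefficients: negative for release volume, positive for
# the air-water temperature difference
p = logistic_prob(-1.0, [-2.0, 0.8], [0.3, 4.0])  # (release, temp diff)
print(round(p, 3))
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 1, 0, 0, 0])
print(sens, spec)
```

Raising the release feature lowers the event probability, which is the mechanism the abstract exploits when evaluating alternative release scenarios.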
In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, K-means and subtractive clustering methods were employed to enhance the hyperparameters required in the RBF neural network model. The comparison of the predicted results of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19887169','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19887169"><span>Development of an accident duration prediction model on the Korean Freeway Systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chung, Younshik</p> <p>2010-01-01</p> <p>Since duration prediction is one of the most important steps in an accident management process, there have been several approaches developed for modeling accident duration. This paper presents a model for the purpose of accident duration prediction based on accurately recorded and large accident dataset from the Korean Freeway Systems. To develop the duration prediction model, this study utilizes the log-logistic accelerated failure time (AFT) metric model and a 2-year accident duration dataset from 2006 to 2007. Specifically, the 2006 dataset is utilized to develop the prediction model and then, the 2007 dataset was employed to test the temporal transferability of the 2006 model. 
Although the duration prediction model has limitations such as large prediction error due to the individual differences of the accident treatment teams in terms of clearing similar accidents, the results from the 2006 model yielded a reasonable prediction based on the mean absolute percentage error (MAPE) scale. Additionally, the results of the statistical test for temporal transferability indicated that the estimated parameters in the duration prediction model are stable over time. Thus, this temporal stability suggests that the model may have potential to be used as a basis for making rational diversion and dispatching decisions in the event of an accident. Ultimately, such information will beneficially help in mitigating traffic congestion due to accidents.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25054174','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25054174"><span>A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling</p> <p>2014-01-01</p> <p>Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. As for the approximate nonhomogeneous exponential data sequence often emerging in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward in order to promote the predictive performance. It achieves organic integration of the self-memory principle of dynamic system and grey NGM(1,1, k) model. 
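The NGM(1,1, k) self-memory model above builds on the conventional GM(1,1) grey model. A minimal GM(1,1) forecaster (the base model only, without the nonhomogeneous term or the self-memory coupling) can be written as:

```python
import math

def gm11_forecast(x0, steps=1):
    """Conventional GM(1,1) grey forecast, the base model that the
    NGM(1,1, k) self-memory variant extends. Suits short, nearly
    exponential series."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # 1-AGO series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    y = x0[1:]
    # Least squares for x0(k) = -a*z(k) + b (closed-form 2x2 normal equations)
    m = len(z)
    sz, szz = sum(z), sum(v * v for v in z)
    sy, szy = sum(y), sum(v * w for v, w in zip(z, y))
    a = (sz * sy - m * szy) / (m * szz - sz * sz)
    b = (szz * sy - sz * szy) / (m * szz - sz * sz)

    def x1_hat(k):  # cumulative series response, k = 0, 1, 2, ...
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# A nearly exponential toy series (hypothetical consumption figures)
series = [100.0, 110.0, 121.0, 133.1]
print(gm11_forecast(series, steps=2))
```

The self-memory coupling in the paper replaces this model's sensitivity to the first data point with a weighted use of multiple historical initial fields; the sketch shows only the grey-model backbone.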
The traditional grey model's sensitivity to the initial value can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used to demonstrate the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the coupling model's ability to take full advantage of the systematic multi-time historical data and to capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012NHESS..12.3799C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012NHESS..12.3799C"><span>Predicting typhoon-induced storm surge tide with a two-dimensional hydrodynamic model and artificial neural network model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, W.-B.; Liu, W.-C.; Hsu, M.-H.</p> <p>2012-12-01</p> <p>Precise prediction of storm surges during typhoon events is necessary for disaster prevention in coastal seas. This paper explores artificial neural network (ANN) models, including the back-propagation neural network (BPNN) and adaptive neuro-fuzzy inference system (ANFIS) algorithms, used to correct poor storm surge height predictions by a two-dimensional hydrodynamic model during typhoon events. The two-dimensional model has a fine horizontal resolution and considers the interaction between storm surges and astronomical tides, so it can be applied to describe the complicated physical properties of storm surges along the east coast of Taiwan. 
The model is driven by the tidal elevation at the open boundaries using a global ocean tidal model and is forced by the meteorological conditions using a cyclone model. The simulated results of the hydrodynamic model indicate that it fails to predict storm surge height during the model calibration and verification phases as typhoons approach the east coast of Taiwan. The BPNN model can reproduce the astronomical tide level but fails to modify the prediction of the storm surge tide level. The ANFIS model satisfactorily predicts both the astronomical tide level and the storm surge height during the training and verification phases and exhibits the lowest values of mean absolute error and root-mean-square error compared to the simulated results at the different stations using the hydrodynamic model and the BPNN model. Comparison results showed that the ANFIS technique could be successfully applied to predicting water levels along the east coast of Taiwan during typhoon events.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010073450','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010073450"><span>An Assessment of the Predictability of Northern Winter Seasonal Means with the NSIPP 1 AGCM</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Suarez, Max J. (Editor); Pegion, Philip J.; Schubert, Siegfried D.</p> <p>2000-01-01</p> <p>This atlas assesses the predictability of January-February-March (JFM) means using version 1 of the NASA Seasonal-to-Interannual Prediction Project Atmospheric General Circulation Model (the NSIPP 1 AGCM). The AGCM is part of the NSIPP coupled atmosphere-land-ocean model. For these results, the atmosphere was run uncoupled from the ocean, but coupled with an interactive land model. 
The results are based on 20 ensembles of nine JFM hindcasts for the period 1980-1999, with sea surface temperature (SST) and sea ice specified from observations. The model integrations were started from initial atmospheric conditions (taken from NCEP/NCAR reanalyses) centered on December 15. The analysis focuses on 200 mb height, precipitation, surface temperature, and sea-level pressure. The results address issues of both predictability and forecast skill. Various signal-to-noise measures are computed to demonstrate the potential for skillful prediction on seasonal time scales under the assumption of a perfect model and perfectly known oceanic boundary forcings. The results show that the model produces a realistic ENSO response in both the tropics and extratropics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4278705','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4278705"><span>Application of a Novel Grey Self-Memory Coupling Model to Forecast the Incidence Rates of Two Notifiable Diseases in China: Dysentery and Gonorrhea</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling</p> <p>2014-01-01</p> <p>Objective In this study, a novel grey self-memory coupling model was developed to forecast the incidence rates of two notifiable infectious diseases (dysentery and gonorrhea); the effectiveness and applicability of this model was assessed based on its ability to predict the epidemiological trend of infectious diseases in China. 
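The signal-to-noise measures used in such perfect-model predictability studies compare the variance of the ensemble means (the boundary-forced signal) to the average within-ensemble variance (internal atmospheric noise). A sketch with hypothetical hindcast anomalies:

```python
from statistics import mean, pvariance

def signal_to_noise(hindcasts):
    """Perfect-model signal-to-noise ratio: variance of the ensemble means
    (signal, forced by prescribed SST) over the mean within-ensemble
    variance (noise, internal variability)."""
    signal = pvariance([mean(members) for members in hindcasts])
    noise = mean(pvariance(members) for members in hindcasts)
    return signal / noise

# Hypothetical JFM-mean anomalies: 3 years x 4 ensemble members
hindcasts = [
    [1.0, 1.2, 0.9, 1.1],      # El Nino-like winter
    [-0.8, -1.1, -0.9, -1.0],  # La Nina-like winter
    [0.1, -0.2, 0.2, 0.0],     # near-neutral winter
]
snr = signal_to_noise(hindcasts)
print(round(snr, 2))
```

A ratio well above 1 means the SST-forced signal dominates internal noise, i.e., the quantity is potentially predictable under the perfect-model assumption.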
Methods The linear model, the conventional GM(1,1) model and the GM(1,1) model with self-memory principle (SMGM(1,1) model) were used to predict the incidence rates of the two notifiable infectious diseases based on statistical incidence data. Both simulation accuracy and prediction accuracy were assessed to compare the predictive performances of the three models. The best-fit model was applied to predict future incidence rates. Results Simulation results show that the SMGM(1,1) model can take full advantage of the systematic multi-time historical data and possesses superior predictive performance compared with the linear model and the conventional GM(1,1) model. By applying the novel SMGM(1,1) model, we obtained the possible incidence rates of the two representative notifiable infectious diseases in China. Conclusion The disadvantages of the conventional grey prediction model, such as sensitivity to initial value, can be overcome by the self-memory principle. The novel grey self-memory coupling model can predict the incidence rates of infectious diseases more accurately than the conventional model, and may provide useful references for making decisions involving infectious disease prevention and control. PMID:25546054</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006ClDy...26..285K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006ClDy...26..285K"><span>Examination of multi-model ensemble seasonal prediction methods using a simple climate system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kang, In-Sik; Yoo, Jin Ho</p> <p>2006-02-01</p> <p>A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. 
A set of long (240-year) historical hindcast predictions was performed with the various prediction models and used to examine issues of multi-model ensemble seasonal prediction, such as the best ways of blending multiple models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/29694','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/29694"><span>Crash prediction modeling for curved segments of rural two-lane two-way highways in Utah.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>2015-10-01</p> <p>This report contains the results of the development of crash prediction models for curved segments of rural two-lane two-way highways in the state of Utah. 
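The simple composite and the corrected composite compared in the multi-model ensemble study above can be sketched with a toy two-model example. Here the "statistical correction" is reduced to removing each model's training-period mean bias, a deliberate simplification of the method in the paper:

```python
from statistics import mean

def simple_composite(forecasts):
    """Equal-weight average of the individual model forecasts."""
    return [mean(col) for col in zip(*forecasts)]

def corrected_composite(forecasts, train_obs, train_fcsts):
    """Average of forecasts after removing each model's training-period
    mean bias (a minimal stand-in for the statistical correction step)."""
    biases = [mean(f) - mean(train_obs) for f in train_fcsts]
    adjusted = [[x - b for x in f] for f, b in zip(forecasts, biases)]
    return simple_composite(adjusted)

# Hypothetical two-model example: model B carries a +1.0 mean bias
train_obs = [0.0, 1.0, -1.0, 0.0]
train_fcsts = [[0.1, 0.9, -1.1, 0.1], [1.0, 2.0, 0.0, 1.0]]
new_fcsts = [[0.5, -0.5], [1.5, 0.5]]
print(simple_composite(new_fcsts))
print(corrected_composite(new_fcsts, train_obs, train_fcsts))
```

The simple composite inherits model B's bias, while the corrected composite removes it before averaging — the mechanism behind the corrected composite's better skill in the study; the superensemble would additionally fit regression weights, which is where its overfitting risk enters.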
The modeling effort included the calibration of the predictive model found in the Highway ...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS.971a2017F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS.971a2017F"><span>Predicting Jakarta composite index using hybrid of fuzzy time series and support vector regression models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin</p> <p>2018-03-01</p> <p>The paper discusses the prediction of the Jakarta Composite Index (JCI) on the Indonesia Stock Exchange. The study is based on 1286 days of JCI historical data used to predict the value of the JCI one day ahead. This paper proposes predictions in two stages: the first stage uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second stage uses Support Vector Regression (SVR) to predict the value of the JCI one day ahead, resulting in a hybrid FTS-SVR prediction model. The performance of this combined prediction model is compared with the performance of a single-stage prediction model using SVR only. 
Ten technical indicators are used as input for each model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFMNG22A..03J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFMNG22A..03J"><span>Numerical weather prediction model tuning via ensemble prediction system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.</p> <p>2011-12-01</p> <p>This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations.
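The draw-score-refit loop described in (i) and (ii) can be caricatured in a few lines. The sketch below is a hypothetical toy, not the EPPES algorithm itself: a single scalar parameter, a Gaussian proposal, and a made-up quadratic "verification error" standing in for the likelihood evaluation against observations.

```python
import math
import random

random.seed(0)

TRUE_PARAM = 2.5   # hypothetical "well-tuned" value, unknown to the scheme

def verification_error(param):
    # Stand-in for scoring one ensemble member against verifying observations.
    return (param - TRUE_PARAM) ** 2

def eppes_cycle(mean, var, n_members=50):
    """One EPPES-like cycle: (i) draw a parameter value per ensemble member
    from the proposal distribution, (ii) feed likelihood weights back to refit it."""
    draws = [random.gauss(mean, math.sqrt(var)) for _ in range(n_members)]
    weights = [math.exp(-verification_error(p)) for p in draws]
    wsum = sum(weights)
    new_mean = sum(w * p for w, p in zip(weights, draws)) / wsum
    new_var = sum(w * (p - new_mean) ** 2 for w, p in zip(weights, draws)) / wsum
    return new_mean, max(new_var, 1e-6)

mean, var = 0.0, 4.0   # deliberately poor initial proposal
for _ in range(20):
    mean, var = eppes_cycle(mean, var)
print(round(mean, 2))  # drifts toward the well-tuned value near 2.5
```

Because the feedback only reweights draws the ensemble would produce anyway, essentially no extra computation is needed, which mirrors the cost argument made in the abstract.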
In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results from a global top-end NWP model tuning exercise are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16922571','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16922571"><span>Microstructure and rheology of thermoreversible nanoparticle gels.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ramakrishnan, S; Zukoski, C F</p> <p>2006-08-29</p> <p>Naïve mode coupling theory is applied to particles interacting with short-range Yukawa attractions. Model results for the location of the gel line and the modulus of the resulting gels are reduced to algebraic equations capturing the effects of the range and strength of attraction. This model is then applied to thermoreversible gels composed of octadecyl silica particles suspended in decalin. The application of the model to the experimental system requires linking the experimental variable controlling strength of attraction, temperature, to the model strength of attraction. With this link, the model predicts temperature and volume fraction dependencies of gelation and modulus with five parameters: particle size, particle volume fraction, overlap volume of surface hairs, and theta temperature.
In comparing model predictions with experimental results, we first observe that in these thermal gels there is no evidence of clustering as has been reported in depletion gels. One consequence of this observation is that no additional adjustable parameters are required to make quantitative comparisons between experimental results and model predictions. Our results indicate that the naïve mode coupling approach taken here, in conjunction with a model linking temperature to strength of attraction, provides a robust approach for making quantitative predictions of gel mechanical properties. Extension of the model predictions to additional experimental systems requires linking the experimental variables to the Yukawa strength and range of attraction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018IJASS.tmp...17J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018IJASS.tmp...17J"><span>Comparative Study on the Prediction of Aerodynamic Characteristics of Aircraft with Turbulence Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jang, Yujin; Huh, Jinbum; Lee, Namhun; Lee, Seungsoo; Park, Youngmin</p> <p>2018-04-01</p> <p>The RANS equations are widely used to analyze complex flows over aircraft. The equations require a turbulence model for turbulent flow analyses. A suitable turbulence model must be selected for accurate prediction of aircraft aerodynamic characteristics. In this study, numerical analyses of three-dimensional aircraft are performed to compare the results of various turbulence models for the prediction of aircraft aerodynamic characteristics. A 3-D RANS solver, MSAPv, is used for the aerodynamic analysis.
The four turbulence models compared are the Spalart-Allmaras (SA) model, Coakley's q-ω model, Huang and Coakley's k-ɛ model, and Menter's k-ω SST model. Four aircraft are considered: the ARA-M100, the DLR-F6 wing-body, the DLR-F6 wing-body-nacelle-pylon from the second drag prediction workshop, and a high-wing aircraft with nacelles. The CFD results are compared with experimental data and other published computational results. The details of separation patterns, shock positions, and Cp distributions are discussed to characterize the turbulence models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24954809','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24954809"><span>Finite element based model predictive control for active vibration suppression of a one-link flexible manipulator.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dubay, Rickey; Hassan, Marwan; Li, Chunying; Charest, Meaghan</p> <p>2014-09-01</p> <p>This paper presents a unique approach for active vibration control of a one-link flexible manipulator. The method combines a finite element model of the manipulator and an advanced model predictive controller to suppress vibration at its tip. This hybrid methodology improves significantly over the standard application of a predictive controller for vibration control. The finite element model used in place of standard modelling in the control algorithm provides a more accurate prediction of dynamic behavior, resulting in enhanced control. Closed loop control experiments were performed using the flexible manipulator, instrumented with strain gauges and piezoelectric actuators.
In all instances, experimental and simulation results demonstrate that the finite element based predictive controller provides improved active vibration suppression in comparison with a standard predictive control strategy. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20681431','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20681431"><span>Evaluation of relative response factor methodology for demonstrating attainment of ozone in Houston, Texas.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vizuete, William; Biton, Leiran; Jeffries, Harvey E; Couzo, Evan</p> <p>2010-07-01</p> <p>In 2007, the U.S. Environmental Protection Agency (EPA) released guidance on demonstrating attainment of the federal ozone (O3) standard. This guidance recommended a change in the use of air quality model (AQM) predictions from an absolute to a relative sense: the ratio, rather than the absolute difference, of AQM O3 predictions from a historical year to an attainment year is used. This ratio of O3 concentrations, labeled the relative response factor (RRF), is multiplied by an average of observed concentrations at every monitor. This analysis investigated whether the methodology used to calculate RRFs severs the source-receptor relationship for a given monitor. Model predictions were generated with a regulatory AQM system used to support the 2004 Houston-Galveston-Brazoria State Implementation Plan. Following the procedures in the EPA guidance, an attainment demonstration was completed using regulatory AQM predictions and measurements from the Houston ground-monitoring network.
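The RRF arithmetic just described is simple enough to sketch. The daily peak concentrations and the observed design value below are made-up illustrative numbers; the 85 ppb screening threshold follows the guidance-era practice of using days on which the base-case model peaks at or above 85 ppb.

```python
# Hypothetical daily peak O3 (ppb) at one monitor from two model runs
base_model   = [92.0, 88.0, 79.0, 101.0, 85.0, 83.0]   # historical-year run
future_model = [84.0, 80.0, 75.0, 90.0, 78.0, 80.0]    # attainment-year run
observed_design_value = 95.0   # ppb, from the monitor's measurement record

THRESHOLD = 85.0               # days with base-case peaks >= 85 ppb are used

def relative_response_factor(base, future, threshold=THRESHOLD):
    """Ratio of mean future to mean base modeled peaks over threshold days."""
    pairs = [(b, f) for b, f in zip(base, future) if b >= threshold]
    return sum(f for _, f in pairs) / sum(b for b, _ in pairs)

rrf = relative_response_factor(base_model, future_model)
# The relative use of the model: scale the *observed* design value by the RRF.
future_design_value = rrf * observed_design_value
print(round(rrf, 3), round(future_design_value, 1))   # 0.907 86.2
```

The critique in this abstract is precisely that the modeled days entering this average need not correspond to air masses that actually influence the monitor.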
Results show that the model predictions used for the RRF calculation were often based on model conditions that were geographically remote from observations and counter to wind flow. Many of the monitors used the same model predictions for an RRF, even if the modeled O3 plume did not impact the monitor. The RRF methodology thus severed the true source-receptor relationship for a monitor. This analysis also showed that model performance could influence RRF values, and values at monitoring sites appear to be sensitive to model bias. Results indicate an inverse linear correlation of RRFs with model bias at each monitor (R2 = 0.47), resulting in a change in future O3 design values of up to 5 parts per billion (ppb). These results suggest that the application of the RRF methodology in Houston, TX, should be changed from using all model predictions above 85 ppb to a method that removes any predictions that are not relevant to the observed source-receptor relationship.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70034772','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70034772"><span>The effect of bathymetric filtering on nearshore process model results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.</p> <p>2009-01-01</p> <p>Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics.
Wave height predictions were most sensitive to the resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate-scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest-resolution simulation. The damage done by oversmoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24726969','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24726969"><span>Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh</p> <p>2014-07-01</p> <p>This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with different model structures of varying degrees of complexity.
The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performs two-stage Monte-Carlo simulations to obtain behavior parameter sets that ensure predictive accuracy, and then estimates the CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model; WWQM) were compared based on data collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure, because in this case the simpler representation of reality (the first-order K-C model) results in higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management. Copyright © 2014 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/8310163','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/8310163"><span>Comparing toxicologic and epidemiologic studies: methylene chloride--a case study.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Stayner, L T; Bailer, A J</p> <p>1993-12-01</p> <p>Exposure to methylene chloride induces lung and liver cancers in mice.
The mouse bioassay data have been used as the basis for several cancer risk assessments. The results from epidemiologic studies of workers exposed to methylene chloride have been mixed with respect to demonstrating an increased cancer risk. The results from a negative epidemiologic study of Kodak workers have been used by two groups of investigators to test the predictions from the EPA risk assessment models. These two groups used very different approaches to this problem, which resulted in opposite conclusions regarding the consistency between the animal model predictions and the Kodak study results. The results from the Kodak study are used to test the predictions from OSHA's multistage models of liver and lung cancer risk. Confidence intervals for the standardized mortality ratios (SMRs) from the Kodak study are compared with the predicted confidence intervals derived from OSHA's risk assessment models. Adjustments for the "healthy worker effect," differences in length of follow-up, and dosimetry between animals and humans were incorporated into these comparisons. 
Based on these comparisons, we conclude that the negative results from the Kodak study are not inconsistent with the predictions from OSHA's risk assessment model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1265806-maximum-likelihood-bayesian-model-averaging-its-predictive-analysis-groundwater-reactive-transport-models','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1265806-maximum-likelihood-bayesian-model-averaging-its-predictive-analysis-groundwater-reactive-transport-models"><span>Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Lu, Dan; Ye, Ming; Curtis, Gary P.</p> <p>2015-08-01</p> <p>While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption.
These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_3 --> <div id="page_4" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="61"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20070031867&hterms=Magnetic+Flux&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DMagnetic%2BFlux','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20070031867&hterms=Magnetic+Flux&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DMagnetic%2BFlux"><span>Using a Magnetic Flux Transport Model to Predict the Solar Cycle</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lyatskaya, S.; Hathaway, D.; Winebarger, A.</p> <p>2007-01-01</p> <p>We present the results of an investigation into the use of a magnetic flux transport model to predict the amplitude of future solar cycles. Recently Dikpati, de Toma, & Gilman (2006) showed how their dynamo model could be used to accurately predict the amplitudes of the last eight solar cycles and offered a prediction for the next solar cycle - a large amplitude cycle. Cameron & Schussler (2007) found that they could reproduce this predictive skill with a simple 1-dimensional surface flux transport model - provided they used the same parameters and data as Dikpati, de Toma, & Gilman. However, when they tried incorporating the data in what they argued was a more realistic manner, they found that the predictive skill dropped dramatically. We have written our own code for examining this problem and have incorporated updated and corrected data for the source terms - the emergence of magnetic flux in active regions.
We present both the model itself and our results from it - in particular our tests of its effectiveness at predicting solar cycles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ClDy..tmp...25A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ClDy..tmp...25A"><span>Assessment of prediction skill in equatorial Pacific Ocean in high resolution model of CFS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Arora, Anika; Rao, Suryachandra A.; Pillai, Prasanth; Dhakate, Ashish; Salunke, Kiran; Srivastava, Ankur</p> <p>2018-01-01</p> <p>The effect of increasing atmospheric resolution on the prediction skill of the El Niño southern oscillation phenomenon in the climate forecast system model is explored in this paper. Improvement in prediction skill for sea surface temperature (SST) and winds at all leads, compared to the low resolution model, is observed in the tropical Indo-Pacific basin. The high resolution model is able to capture extreme events reasonably well. As a result, the signal to noise ratio is improved in the high resolution model. However, the spring predictability barrier (SPB) for summer months in the Nino 3 and Nino 3.4 regions is stronger in the high resolution model, in spite of the improvement in overall prediction skill and dynamics everywhere else. The anomaly correlation coefficient of SST with observations in the Nino 3.4 region for boreal summer months, when predicted at lead times of 3-8 months in advance, decreased in the high resolution model compared to its lower resolution counterpart.
It is noted that the higher variance of winds predicted in the spring season over the central equatorial Pacific, compared to the observed variance of winds, results in a stronger than normal response in the subsurface ocean, and hence increases the SPB for boreal summer months in the high resolution model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1378012-case-study-combination-ndvi-forecasting-model-based-entropy-weight-method','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1378012-case-study-combination-ndvi-forecasting-model-based-entropy-weight-method"><span>A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Huang, Shengzhi; Ming, Bo; Huang, Qiang</p> <p></p> <p>It is of critical importance to accurately predict NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance.
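One common variant of the entropy weight method can be sketched as follows. The per-period skill scores and one-step forecasts below are hypothetical, and the exact normalization used in the cited study may differ; the sketch only illustrates the generic recipe of normalizing each model's score series, computing its entropy, and weighting by the complementary "divergence".

```python
import math

# Hypothetical per-period skill scores (higher is better) for three NDVI models
scores = {
    "MLR": [0.60, 0.58, 0.62, 0.55, 0.59],
    "ANN": [0.85, 0.90, 0.80, 0.70, 0.88],
    "SVM": [0.78, 0.75, 0.80, 0.79, 0.76],
}

def entropy_weights(score_table):
    """Entropy-weight coefficients: a model whose normalized score series has
    lower entropy (more information) receives a larger combination weight."""
    n_periods = len(next(iter(score_table.values())))
    k = 1.0 / math.log(n_periods)
    divergence = {}
    for model, s in score_table.items():
        total = sum(s)
        p = [v / total for v in s]                      # normalize to a distribution
        entropy = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergence[model] = 1.0 - entropy               # degree of divergence
    d_sum = sum(divergence.values())
    return {m: d / d_sum for m, d in divergence.items()}

weights = entropy_weights(scores)
forecasts = {"MLR": 0.52, "ANN": 0.57, "SVM": 0.55}     # hypothetical one-step forecasts
combined = sum(weights[m] * forecasts[m] for m in forecasts)
print({m: round(w, 3) for m, w in weights.items()}, round(combined, 3))
```

The combined forecast is a convex combination, so it always lies between the lowest and highest individual forecasts, which is one source of the stability the abstract reports.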
Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability weakens in the validation period; MLR performs poorly in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths while reducing the weaknesses of the individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5821340','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5821340"><span>A hybrid intelligent method for three-dimensional short-term prediction of dissolved oxygen content in aquaculture</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang</p> <p>2018-01-01</p> <p>A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reducing risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model.
In this modeling process, K-means and subtractive clustering methods were employed to determine the hyperparameters required in the RBF neural network model. The comparison of the predicted results of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies. PMID:29466394</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4090460','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4090460"><span>A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling</p> <p>2014-01-01</p> <p>Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in energy systems, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1, k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle.
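For readers unfamiliar with grey models, the classic GM(1,1) that NGM(1,1, k) extends can be fit in a few lines. The sketch below implements only the base GM(1,1) (not the NGM(1,1, k) self-memory coupling of the abstract), and the annual consumption series is hypothetical.

```python
import math

def gm11_fit(x0):
    """Least-squares fit of the classic GM(1,1) grey model.
    Model form: x0[k] = -a*z[k] + b over the AGO background series z."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # 1-AGO accumulation
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    y = x0[1:]
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = m * szz - sz * sz
    slope = (m * szy - sz * sy) / det                     # coefficient of z, equals -a
    a = -slope
    b = (sy - slope * sz) / m
    return a, b

def gm11_forecast(x0, a, b, k):
    """Forecast the k-th value (1-based); k > len(x0) extrapolates."""
    if k == 1:
        return x0[0]
    c = x0[0] - b / a
    return c * math.exp(-a * (k - 1)) - c * math.exp(-a * (k - 2))

series = [2.67, 3.13, 3.25, 3.36, 3.56]   # hypothetical annual consumption data
a, b = gm11_fit(series)
print(round(gm11_forecast(series, a, b, 6), 3))   # one-step-ahead extrapolation
```

The sensitivity to the initial value that the self-memory principle addresses is visible here: `x0[0]` enters the forecast formula directly through the constant `c`.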
In this study, the total energy, coal, and electricity consumption of China are used for demonstration of the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance stems from the coupling model's ability to take full advantage of the systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4086706','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4086706"><span>A genome-scale metabolic flux model of Escherichia coli K–12 derived from the EcoCyc database</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2014-01-01</p> <p>Background Constraint-based models of Escherichia coli metabolic flux have played a key role in computational studies of cellular metabolism at the genome scale. We sought to develop a next-generation constraint-based E. coli model that achieved improved phenotypic prediction accuracy while being frequently updated and easy to use. We also sought to compare model predictions with experimental data to highlight open questions in E. coli biology. Results We present EcoCyc–18.0–GEM, a genome-scale model of the E. coli K–12 MG1655 metabolic network. The model is automatically generated from the current state of EcoCyc using the MetaFlux software, enabling the release of multiple model updates per year.
EcoCyc–18.0–GEM encompasses 1445 genes, 2286 unique metabolic reactions, and 1453 unique metabolites. We demonstrate a three-part validation of the model that breaks new ground in breadth and accuracy: (i) Comparison of simulated growth in aerobic and anaerobic glucose culture with experimental results from chemostat culture and simulation results from the E. coli modeling literature. (ii) Essentiality prediction for the 1445 genes represented in the model, in which EcoCyc–18.0–GEM achieves an improved accuracy of 95.2% in predicting the growth phenotype of experimental gene knockouts. (iii) Nutrient utilization predictions under 431 different media conditions, for which the model achieves an overall accuracy of 80.7%. The model’s derivation from EcoCyc enables query and visualization via the EcoCyc website, facilitating model reuse and validation by inspection. We present an extensive investigation of disagreements between EcoCyc–18.0–GEM predictions and experimental data to highlight areas of interest to E. coli modelers and experimentalists, including 70 incorrect predictions of gene essentiality on glucose, 80 incorrect predictions of gene essentiality on glycerol, and 83 incorrect predictions of nutrient utilization. Conclusion Significant advantages can be derived from the combination of model organism databases and flux balance modeling represented by MetaFlux. Interpretation of the EcoCyc database as a flux balance model results in a highly accurate metabolic model and provides a rigorous consistency check for information stored in the database. 
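The essentiality screen described above can be caricatured in miniature. Real flux balance analysis solves a linear program over thousands of reactions subject to steady-state mass balance (Sv = 0) and capacity bounds; in the deliberately tiny, hypothetical unbranched pathway below, the optimum collapses to simple bottleneck arithmetic, which is enough to show why zeroing one reaction's capacity predicts a gene as essential.

```python
# Hypothetical linear pathway with per-reaction flux capacities (mmol/gDW/h)
capacity = {
    "glc_uptake": 10.0,
    "glycolysis": 8.5,
    "tca":        12.0,
    "biomass":    20.0,
}

def max_biomass_flux(bounds):
    # At steady state each internal metabolite's production equals its
    # consumption, so flux is equal along an unbranched chain and the
    # maximal biomass flux is the tightest capacity on the path.
    return min(bounds.values())

def predicted_essential(bounds, gene_to_reaction, gene):
    # A knockout zeroes its reaction's capacity; the gene is predicted
    # essential if achievable growth (biomass flux) drops to zero.
    knocked = dict(bounds)
    knocked[gene_to_reaction[gene]] = 0.0
    return max_biomass_flux(knocked) == 0.0

genes = {"pfkA": "glycolysis", "gltA": "tca"}   # hypothetical gene-reaction map
print(max_biomass_flux(capacity))                # 8.5
print(predicted_essential(capacity, genes, "pfkA"))  # True: no alternate route
```

The incorrect essentiality predictions the abstract catalogs typically arise when the real network contains alternate routes or regulatory effects that a sketch like this, and sometimes the full model, does not capture.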
PMID:24974895</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23624047','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23624047"><span>Effects of turbulence modelling on prediction of flow characteristics in a bench-scale anaerobic gas-lift digester.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Coughtrie, A R; Borman, D J; Sleigh, P A</p> <p>2013-06-01</p> <p>Flow in a gas-lift digester with a central draft-tube was investigated using computational fluid dynamics (CFD) and different turbulence closure models. The k-ω Shear-Stress-Transport (SST), Renormalization-Group (RNG) k-∊, Linear Reynolds-Stress-Model (RSM) and Transition-SST models were tested for a gas-lift loop reactor under Newtonian flow conditions validated against published experimental work. The results identify that flow predictions within the reactor (where flow is transitional) are particularly sensitive to the turbulence model implemented; the Transition-SST model was found to be the most robust for capturing mixing behaviour and predicting separation reliably. Therefore, Transition-SST is recommended over k-∊ models for use in comparable mixing problems. A comparison of results obtained using multiphase Euler-Lagrange and single-phase approaches is presented. The results support the validity of the single-phase modelling assumptions in obtaining reliable predictions of the reactor flow. Solver independence of results was verified by comparing two independent finite-volume solvers (Fluent-13.0sp2 and OpenFOAM-2.0.1). Copyright © 2013 Elsevier Ltd.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..1512511C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..1512511C"><span>Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea</p> <p>2013-04-01</p> <p>Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in delivering flood warnings. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour), so observational data are nearly useless for runoff prediction and weather predictions are required. Unfortunately, radar nowcasting methods cannot provide long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are often unreliable because of the fast motion and limited spatial extent of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecast (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model.
More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model's reliability for long-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement in the prediction accuracy of the WRF model over the Singapore urban area.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E%26ES..108c2034L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E%26ES..108c2034L"><span>Prediction of soft soil foundation settlement in Guangxi granite area based on fuzzy neural network model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Luo, Junhui; Wu, Chao; Liu, Xianlin; Mi, Decai; Zeng, Fuquan; Zeng, Yongjun</p> <p>2018-01-01</p> <p>At present, the prediction of soft foundation settlement mostly uses exponential-curve and hyperbola deferred-approximation methods, and the correlation between their results is poor. However, the application of neural networks in this area has some limitations, and none of the models used in existing cases adopted the TS fuzzy neural network, whose calculation combines the characteristics of fuzzy systems and neural networks so that the two are mutually compatible. At the same time, the developed and optimized calculation program is convenient for engineering designers.
Taking the prediction and analysis of the settlement of gully soft soil in the granite area of the Guangxi Guihe road as an example, the fuzzy neural network model is established and verified to explore its applicability. The TS fuzzy neural network is used to construct the prediction model of settlement and deformation, and the corresponding time response function is established to calculate and analyze the settlement of the soft foundation. The results show that the model's short-term settlement predictions are accurate and the final settlement prediction has certain engineering reference value.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005AtmEn..39.4025G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005AtmEn..39.4025G"><span>A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gokhale, Sharad; Khare, Mukesh</p> <p></p> <p>Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by extreme and by sustained average concentrations of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range.
Hybrid modelling is one of the techniques that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, Income Tax Office (ITO), in Delhi, where the traffic is heterogeneous in nature and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d = 0.91) in the 10-95 percentile range. A regulatory compliance analysis is also developed to estimate the probability of hourly CO concentrations exceeding the National Ambient Air Quality Standards (NAAQS) of India. The traffic consists of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014IJBm...58.1119D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014IJBm...58.1119D"><span>Challenges in predicting climate change impacts on pome fruit phenology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W.
R.</p> <p>2014-08-01</p> <p>Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed from the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate perturbed conditions, four simulations were created to represent a wider range of species physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period was common to most results. The relative dominance of the chill or growth component tended to predict whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and the exclusion of winter chill conditions in this method indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations in using this model for impact analyses remain. 
The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate-perturbed conditions, and that greater model development is needed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5087603','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5087603"><span>Improved Rubin-Bodner Model for the Prediction of Soft Tissue Deformations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhang, Guangming; Xia, James J.; Liebschner, Michael; Zhang, Xiaoyan; Kim, Daeseung; Zhou, Xiaobo</p> <p>2016-01-01</p> <p>In craniomaxillofacial (CMF) surgery, a reliable way of simulating the soft tissue deformation resulting from skeletal reconstruction is vitally important for preventing the risk of postoperative facial distortion. However, it is difficult to simulate the soft tissue behaviors affected by different types of CMF surgery. This study presents an integrated biomechanical and statistical learning model to improve the accuracy and reliability of predictions of soft facial tissue behavior. The Rubin-Bodner (RB) model is initially used to describe the biomechanical behavior of the soft facial tissue. Subsequently, a finite element model (FEM) computes the stress at each node of the soft facial tissue mesh resulting from bone displacement. Next, the Generalized Regression Neural Network (GRNN) method is implemented to obtain the relationship between the facial soft tissue deformation and the stress distribution corresponding to different CMF surgical types, and to improve the evaluation of the elastic parameters included in the RB model.
Therefore, the soft facial tissue deformation can be predicted by combining biomechanical properties with the statistical model. Leave-one-out cross-validation is used on eleven patients. As a result, the average prediction error of our model (0.7035 mm) is lower than those of other approaches. It also demonstrates that the more accurate the biomechanical information available to the model, the better the prediction performance it can achieve. PMID:27717593</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29103791','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29103791"><span>Reversal of Glaucoma Hemifield Test Results and Visual Field Features in Glaucoma.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Mengyu; Pasquale, Louis R; Shen, Lucy Q; Boland, Michael V; Wellik, Sarah R; De Moraes, Carlos Gustavo; Myers, Jonathan S; Wang, Hui; Baniasadi, Neda; Li, Dian; Silva, Rafaella Nascimento E; Bex, Peter J; Elze, Tobias</p> <p>2018-03-01</p> <p>To develop a visual field (VF) feature model to predict the reversal of glaucoma hemifield test (GHT) results to within normal limits (WNL) after 2 consecutive outside normal limits (ONL) results. Retrospective cohort study. Visual fields of 44 503 eyes from 26 130 participants. Eyes with 3 or more consecutive reliable VFs measured with the Humphrey Field Analyzer (Swedish interactive threshold algorithm standard 24-2) were included. Eyes with ONL GHT results for the 2 baseline VFs were selected. We extracted 3 categories of VF features from the baseline tests: (1) VF global indices (mean deviation [MD] and pattern standard deviation), (2) mismatch between baseline VFs, and (3) VF loss patterns (archetypes). Logistic regression was applied to predict the GHT results reversal.
Cross-validation was applied to evaluate the model on testing data by the area under the receiver operating characteristic curve (AUC). We ascertained clinical glaucoma status on a patient subset (n = 97) to determine the usefulness of our model. Predictive models for GHT results reversal using VF features. For the 16 604 eyes with 2 initial ONL results, the prevalence of a subsequent WNL result increased from 0.1% for MD < -12 dB to 13.8% for MD ≥ -3 dB. Compared with models with VF global indices, the AUC of predictive models increased from 0.669 (MD ≥ -3 dB) and 0.697 (-6 dB ≤ MD < -3 dB) to 0.770 and 0.820, respectively, by adding VF mismatch features and computationally derived VF archetypes (P < 0.001 for both). The GHT results reversal was associated with a large mismatch between baseline VFs. Moreover, the GHT results reversal was associated more with VF archetypes of nonglaucomatous loss, severe widespread loss, and lens rim artifacts. For a subset of 97 eyes, using our model to predict absence of glaucoma based on clinical evidence after 2 ONL results yielded significantly better prediction accuracy (87.7%; P < 0.001) than predicting GHT results reversal (68.8%) at a prescribed specificity of 67.7%. Using VF features may predict the GHT results reversal to WNL after 2 consecutive ONL results. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110003571','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110003571"><span>Data-Based Predictive Control with Multirate Prediction Step</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Barlow, Jonathan S.</p> <p>2010-01-01</p> <p>Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC).
MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that its computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of the prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently.
A third result is the ability to include more outputs in the feedback path than in the cost function.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5120822','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5120822"><span>A Global Model for Bankruptcy Prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Alaminos, David; del Castillo, Agustín; Fernández, Manuel Ángel</p> <p>2016-01-01</p> <p>The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy. 
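The bankruptcy study's core technique, logistic regression on financial indicators, can be sketched in a few lines. This is a minimal stand-in on synthetic data: the two "ratio" features, their coefficients, and the labels below are invented, and only the modeling method matches the abstract:

```python
# Minimal logistic-regression bankruptcy classifier trained by gradient descent
# on the log-loss. Features and labels are synthetic stand-ins for real
# financial ratios; only the technique mirrors the study.
import numpy as np

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 2))                      # e.g. liquidity and leverage ratios
true_w, true_b = np.array([1.5, -2.0]), 0.3      # invented "true" relationship
y = (X @ true_w + true_b + rng.normal(scale=0.5, size=n) > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # predicted bankruptcy probability
    w -= lr * X.T @ (p - y) / n                  # gradient of the log-loss w.r.t. w
    b -= lr * np.mean(p - y)                     # gradient w.r.t. the intercept

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

A regional versus global comparison like the study's would fit one such model per region and one on the pooled data, then compare held-out classification performance.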
PMID:27880810</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EGUGA..14.2740J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EGUGA..14.2740J"><span>NWP model forecast skill optimization via closure parameter variations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.</p> <p>2012-04-01</p> <p>We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems.
The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29284916','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29284916"><span>Comparison of Basic and Ensemble Data Mining Methods in Predicting 5-Year Survival of Colorectal Cancer Patients.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pourhoseingholi, Mohamad Amin; Kheirian, Sedigheh; Zali, Mohammad Reza</p> <p>2017-12-01</p> <p>Colorectal cancer (CRC) is one of the most common malignancies and causes of cancer mortality worldwide. Given the importance of predicting the survival of CRC patients and the growing use of data mining methods, this study aims to compare the performance of models for predicting 5-year survival of CRC patients using a variety of basic and ensemble data mining methods. The CRC dataset from the Shahid Beheshti University of Medical Sciences Research Center for Gastroenterology and Liver Diseases was used for prediction and a comparative study of the basic and ensemble data mining techniques. Feature selection methods were used to select predictor attributes for classification. The WEKA toolkit and MedCalc software were utilized for creating and comparing the models, respectively. The obtained results showed that the predictive performance of the developed models was altogether high (all greater than 90%).
Overall, the performance of ensemble models was higher than that of basic classifiers, and the best result was achieved by the ensemble voting model in terms of area under the ROC curve (AUC = 0.96). AUC comparison of the models showed that the ensemble voting method significantly outperformed all models except Random Forest (RF) and Bayesian Network (BN), considering their overlapping 95% confidence intervals. This result may indicate the high predictive power of these two methods, along with ensemble voting, for predicting 5-year survival of CRC patients.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS.976a2007S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS.976a2007S"><span>Data Prediction for Public Events in Professional Domains Based on Improved RNN-LSTM</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan</p> <p>2018-02-01</p> <p>Traditional data services for predicting emergency or non-periodic events usually cannot generate satisfying results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved LSTM (Long Short-term Memory) dynamic prediction and a priori information sequence generation model by combining RNN-LSTM with a priori information about public events. In prediction tasks, the model is capable of determining trends, and its accuracy has been validated. This model delivers better performance and prediction results than the previous one.
Using a priori information can increase prediction accuracy; LSTM can better adapt to changes in a time sequence; and LSTM can be widely applied to prediction tasks of the same type as well as other prediction tasks related to time sequences.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMSH34B..05W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMSH34B..05W"><span>Solar Atmosphere to Earth's Surface: Long Lead Time dB/dt Predictions with the Space Weather Modeling Framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Welling, D. T.; Manchester, W.; Savani, N.; Sokolov, I.; van der Holst, B.; Jin, M.; Toth, G.; Liemohn, M. W.; Gombosi, T. I.</p> <p>2017-12-01</p> <p>The future of space weather prediction depends on the community's ability to predict L1 values from observations of the solar atmosphere, which can yield hours of lead time. While both empirical and physics-based L1 forecast methods exist, it is not yet known whether this nascent capability can translate into skilled dB/dt forecasts at the Earth's surface. This paper shows results for the first forecast-quality, solar-atmosphere-to-Earth's-surface dB/dt predictions. Two methods are used to predict solar wind and IMF conditions at L1 for several real-world coronal mass ejection events. The first method is an empirical and observationally based system to estimate the plasma characteristics. The magnetic field predictions are based on the Bz4Cast system, which assumes that the CME has a cylindrical flux rope geometry locally around Earth's trajectory. The remaining plasma parameters of density, temperature and velocity are estimated from white-light coronagraphs via a variety of triangulation methods and forward-based modelling.
The second is a first-principles-based approach that combines the Eruptive Event Generator using Gibson-Low configuration (EEGGL) model with the Alfven Wave Solar Model (AWSoM). EEGGL specifies parameters for the Gibson-Low flux rope such that it erupts, driving a CME in the coronal model that reproduces coronagraph observations and propagates to 1AU. The resulting solar wind predictions are used to drive the operational Space Weather Modeling Framework (SWMF) for geospace. Following the configuration used by NOAA's Space Weather Prediction Center, this setup couples the BATS-R-US global magnetohydrodynamic model to the Rice Convection Model (RCM) ring current model and a height-integrated ionosphere electrodynamics model. The long lead time predictions of dB/dt are compared to model results that are driven by L1 solar wind observations. Both are compared to real-world observations from surface magnetometers at a variety of geomagnetic latitudes. Metrics are calculated to examine how the simulated solar wind drivers impact forecast skill. These results illustrate the current state of long-lead-time forecasting and the promise of this technology for operational use.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5792926','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5792926"><span>A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yang, Chuanlei; Wang, Yinyan; Wang, Hechun</p> <p>2018-01-01</p> <p>To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine.
However, due to the variable diffuser vane angle (DVA), predicting the performance of a VGC is more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model was proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build the polynomial relationship between the coefficients and the relative speed and the DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients while achieving acceptable fit accuracy. The prediction model was validated with sample data and, to demonstrate its superiority in compressor performance prediction, its results were compared with those of a look-up table and back-propagation neural networks (BPNNs). The validation and comparison results show that the prediction accuracy of the newly developed model is acceptable, and that under the same conditions this model is much more suitable than the look-up table and BPNN methods for VGC performance prediction. Moreover, the newly developed prediction model provides a novel and effective prediction solution for the VGC and can be used to improve the accuracy of thermodynamic models for turbocharged diesel engines in the future.
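The speed-line fitting step described above can be illustrated with a generic elliptical form. This sketch assumes synthetic map points and the simple form (m/a)^2 + (pr/b)^2 = 1; the paper's exact equation, exponents, and compressor data are not reproduced here:

```python
# Sketch of fitting one compressor speed line with an elliptical equation,
# (m/a)^2 + (pr/b)^2 = 1, via linear least squares on the squared coordinates.
# Sample points are synthetic; the semi-axes a_true, b_true are invented.
import numpy as np

a_true, b_true = 0.95, 3.2                       # hypothetical semi-axes
theta = np.linspace(0.15, 1.4, 25)
m = a_true * np.cos(theta)                       # corrected mass flow (arbitrary units)
pr = b_true * np.sin(theta)                      # pressure ratio along the speed line

# Rewrite as A*m^2 + B*pr^2 = 1 and solve for A, B; then a = 1/sqrt(A), b = 1/sqrt(B).
M = np.column_stack([m**2, pr**2])
(A, B), *_ = np.linalg.lstsq(M, np.ones_like(m), rcond=None)
a_fit, b_fit = 1 / np.sqrt(A), 1 / np.sqrt(B)
print(a_fit, b_fit)
```

In the study, the fitted coefficients of each speed line then become the targets of the PLS polynomial in relative speed and DVA, so the whole map can be regenerated from a few parameters.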
PMID:29410849</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_4 --> <div id="page_5" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="81"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29410849','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29410849"><span>A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Xu; Yang, Chuanlei; Wang, Yinyan; Wang, Hechun</p> <p>2018-01-01</p> <p>To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine.
However, due to the variable diffuser vane angle (DVA), predicting the performance of the VGC is more difficult than for a conventional compressor. In the present study, a prediction model comprising an elliptical equation and a partial least-squares (PLS) model was proposed to predict the performance of the VGC. The speed lines of the pressure-ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were fed into the PLS model to build a polynomial relationship between the coefficients, the relative speed, and the DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients while retaining acceptable fit accuracy. The prediction model was validated with sample data and, to demonstrate its superiority in compressor performance prediction, its results were compared with those of a look-up table and back-propagation neural networks (BPNNs). The validation and comparison results show that the prediction accuracy of the newly developed model is acceptable and that, under the same conditions, it is much better suited to VGC performance prediction than the look-up table and BPNN methods. 
Moreover, the newly developed prediction model provides a novel and effective prediction solution for the VGC and can be used to improve the accuracy of thermodynamic models of turbocharged diesel engines in the future.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E%26ES..108b2078Q','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E%26ES..108b2078Q"><span>Parameter prediction based on Improved Process neural network and ARMA error compensation in Evaporation Process</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Qian, Xiaoshan</p> <p>2018-01-01</p> <p>Traditional models of evaporation process parameters suffer from large prediction errors because of the continuity and cumulative characteristics of the process. To address this, an adaptive particle swarm process neural network forecasting method is proposed, coupled with an autoregressive moving average (ARMA) error-compensation procedure that corrects the neural network's predictions to improve accuracy. 
Validation against production data from an alumina plant evaporation process showed that, compared with the traditional model, the new model's prediction accuracy is greatly improved; the model can be used to predict the dynamic composition of sodium aluminate solution during evaporation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1410625-swmf-global-magnetosphere-simulations-january-geomagnetic-indices-cross-polar-cap-potential','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1410625-swmf-global-magnetosphere-simulations-january-geomagnetic-indices-cross-polar-cap-potential"><span>SWMF Global Magnetosphere Simulations of January 2005: Geomagnetic Indices and Cross-Polar Cap Potential</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Haiducek, John D.; Welling, Daniel T.; Ganushkina, Natalia Y.; ...</p> <p>2017-10-30</p> <p>We simulated the entire month of January 2005 using the Space Weather Modeling Framework (SWMF) with observed solar wind data as input. We conducted this simulation with and without an inner magnetosphere model, and tested two different grid resolutions. We evaluated the model's accuracy in predicting Kp, Sym-H, AL, and cross polar cap potential (CPCP). We find that the model does an excellent job of predicting the Sym-H index, with an RMSE of 17-18 nT. Kp is predicted well during storm-time conditions, but over-predicted during quiet times by a margin of 1 to 1.7 Kp units. AL is predicted reasonably well on average, with an RMSE of 230-270 nT. However, the model reaches the largest negative AL values significantly less often than the observations. The model tended to over-predict CPCP, with RMSE values on the order of 46-48 kV. We found the results to be insensitive to grid resolution, with the exception of the rate of occurrence for strongly negative AL values. 
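Skill scores like the RMSE values quoted in this record (e.g. 17-18 nT for Sym-H) reduce to a one-line computation; a generic sketch, with hypothetical sample values:

```python
import numpy as np

def rmse(predicted, observed):
    """Root-mean-square error between model output and observations."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Hypothetical Sym-H samples (nT), for illustration only.
model_symh = [-48.0, -35.0, -22.0, -15.0]
obs_symh = [-52.0, -31.0, -25.0, -10.0]
error_nt = rmse(model_symh, obs_symh)
```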
The use of the inner magnetosphere component, however, affected results significantly: all quantities except CPCP improved notably when the inner magnetosphere model was on.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1410625-swmf-global-magnetosphere-simulations-january-geomagnetic-indices-cross-polar-cap-potential','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1410625-swmf-global-magnetosphere-simulations-january-geomagnetic-indices-cross-polar-cap-potential"><span>SWMF Global Magnetosphere Simulations of January 2005: Geomagnetic Indices and Cross-Polar Cap Potential</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Haiducek, John D.; Welling, Daniel T.; Ganushkina, Natalia Y.</p> <p></p> <p>We simulated the entire month of January 2005 using the Space Weather Modeling Framework (SWMF) with observed solar wind data as input. We conducted this simulation with and without an inner magnetosphere model, and tested two different grid resolutions. We evaluated the model's accuracy in predicting Kp, Sym-H, AL, and cross polar cap potential (CPCP). We find that the model does an excellent job of predicting the Sym-H index, with an RMSE of 17-18 nT. Kp is predicted well during storm-time conditions, but over-predicted during quiet times by a margin of 1 to 1.7 Kp units. AL is predicted reasonably well on average, with an RMSE of 230-270 nT. However, the model reaches the largest negative AL values significantly less often than the observations. The model tended to over-predict CPCP, with RMSE values on the order of 46-48 kV. We found the results to be insensitive to grid resolution, with the exception of the rate of occurrence for strongly negative AL values. 
The use of the inner magnetosphere component, however, affected results significantly: all quantities except CPCP improved notably when the inner magnetosphere model was on.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19921452','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19921452"><span>The importance of molecular structures, endpoints' values, and predictivity parameters in QSAR research: QSAR analysis of a series of estrogen receptor binders.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Jiazhong; Gramatica, Paola</p> <p>2010-11-01</p> <p>Quantitative structure-activity relationship (QSAR) methodology aims to explore the relationship between molecular structures and experimental endpoints, producing a model for the prediction of new data; the predictive performance of the model must be checked by external validation. Clearly, the qualities of the chemical structure information and experimental endpoints, as well as the statistical parameters used to verify the external predictivity, have a strong influence on QSAR model reliability. Here, we emphasize the importance of these three aspects by analyzing our models on estrogen receptor binders (Endocrine Disruptor Knowledge Base (EDKB) database). Endocrine disrupting chemicals, which mimic or antagonize endogenous hormones such as estrogens, are a hot topic in environmental and toxicological sciences. QSAR shows great value in predicting estrogenic activity and exploring the interactions between the estrogen receptor and its ligands. We have verified our previously published model for additional external validation on new EDKB chemicals. 
Having found some errors in the 3D molecular conformations used, we redeveloped the model using the same data set with corrected structures, the same method (ordinary least-squares regression, OLS), and DRAGON descriptors. The new model, based on a somewhat different set of descriptors, is more predictive on external prediction sets. Three different formulas for calculating the correlation coefficient of the external prediction set (Q2 EXT) were compared, and the results indicated that the new proposal of Consonni et al. gave more reasonable results, consistent with the conclusions drawn from the regression line, the Williams plot, and root mean square error (RMSE) values. Finally, the importance of reliable endpoint values was highlighted by comparing the classification assignments of EDKB with those of another estrogen receptor binders database (METI): we found that 16.1% of the assignments of the common compounds were opposite (20 among 124 common compounds). To verify the real assignments of these inconsistent compounds, we predicted these samples, as a blind external set, with our regression models and compared the results with the two databases. The results indicated that most of the predictions were consistent with METI. Furthermore, we built a kNN classification model using the 104 consistent compounds to predict the inconsistent ones, and most of the predictions were also in agreement with the METI database.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19860062164&hterms=viscoelastic&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dviscoelastic','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19860062164&hterms=viscoelastic&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dviscoelastic"><span>Viscoelastic behavior and lifetime (durability) predictions. 
[for laminated fiber reinforced plastics]</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Brinson, R. F.</p> <p>1985-01-01</p> <p>A method for lifetime or durability predictions for laminated fiber reinforced plastics is given. The procedure is similar to, but not the same as, the well-known time-temperature-superposition principle for polymers; it is better described as an analytical adaptation of time-stress-superposition methods. The analytical constitutive modeling is based upon a nonlinear viscoelastic constitutive model developed by Schapery. Time-dependent failure models are discussed and related to the constitutive models. Finally, results of an incremental lamination analysis using the constitutive and failure models are compared to experimental results. Favorable agreement between theory and prediction is presented using data from creep tests of about two months' duration.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18051508','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18051508"><span>[Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun</p> <p>2007-09-01</p> <p>Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of different sample surface roughness on the mathematical model of NIR quantitative analysis of wood density was studied. 
The experiments showed that when the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise, the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29060494','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29060494"><span>Predictive local receptive fields based respiratory motion tracking for motion-adaptive radiotherapy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H</p> <p>2017-07-01</p> <p>Extracranial robotic radiotherapy employs external markers and a correlation model to trace the tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these methods, the pLRF-ELM performs prediction by modeling the higher-level features obtained by mapping the raw respiratory motion into the random feature space of the ELM, instead of modeling the raw respiratory motion directly. 
The developed method is evaluated using a dataset acquired from 31 patients, for two horizons in line with the latencies of treatment systems such as CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods. The results further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28339854','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28339854"><span>Prediction versus aetiology: common pitfalls and how to avoid them.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>van Diepen, Merel; Ramspek, Chava L; Jager, Kitty J; Zoccali, Carmine; Dekker, Friedo W</p> <p>2017-04-01</p> <p>Prediction research is a distinct field of epidemiologic research, which should be clearly separated from aetiological research. Both prediction and aetiology make use of multivariable modelling, but the underlying research aim and interpretation of results are very different. Aetiology aims at uncovering the causal effect of a specific risk factor on an outcome, adjusting for confounding factors that are selected based on pre-existing knowledge of causal relations. In contrast, prediction aims at accurately predicting the risk of an outcome using multiple predictors collectively, where the final prediction model is usually based on statistically significant, but not necessarily causal, associations in the data at hand. In both scientific and clinical practice, however, the two are often confused, resulting in poor-quality publications with limited interpretability and applicability. 
A major problem is the frequently encountered aetiological interpretation of prediction results, in which individual variables in a prediction model are attributed causal meaning. This article stresses the differences in use and interpretation of aetiological and prediction studies, and gives examples of common pitfalls. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27999611','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27999611"><span>Poisson Mixture Regression Models for Heart Disease Prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mufudza, Chipo; Erol, Hamza</p> <p>2016-01-01</p> <p>Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion (BIC) value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the componentwise rate of heart disease given the available clusters. 
It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5141555','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5141555"><span>Poisson Mixture Regression Models for Heart Disease Prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Erol, Hamza</p> <p>2016-01-01</p> <p>Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion (BIC) value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the componentwise rate of heart disease given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models. 
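The model-selection logic in this record — fit competing models and prefer the one with the lower Bayesian Information Criterion — can be illustrated with a stripped-down two-component Poisson mixture fitted by EM. This sketch deliberately omits the regression (covariate) part of the paper's models; the data and the EM details below are synthetic and illustrative only.

```python
import numpy as np
from math import lgamma

def fit_poisson_mixture(y, n_iter=300):
    """EM for a two-component Poisson mixture (no covariates).
    Returns (mixing weights, component rates, total log-likelihood)."""
    y = np.asarray(y, dtype=float)
    logfact = np.array([lgamma(v + 1.0) for v in y])[:, None]  # log(y!)
    lam = np.array([0.5 * y.mean() + 0.1, 1.5 * y.mean() + 0.1])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities from component log-likelihoods.
        logp = np.log(pi) + y[:, None] * np.log(lam) - lam - logfact
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights and Poisson rates.
        pi = resp.mean(axis=0)
        lam = (resp * y[:, None]).sum(axis=0) / resp.sum(axis=0)
    # Total log-likelihood via a numerically stable log-sum-exp.
    logp = np.log(pi) + y[:, None] * np.log(lam) - lam - logfact
    m = logp.max(axis=1, keepdims=True)
    loglik = float((m + np.log(np.exp(logp - m).sum(axis=1, keepdims=True))).sum())
    return pi, lam, loglik

def bic(loglik, n_params, n_obs):
    """Bayesian Information Criterion: smaller is better."""
    return -2.0 * loglik + n_params * np.log(n_obs)
```

On clearly bimodal count data, the two-component mixture attains a lower BIC than a single Poisson fit, mirroring the comparison made in the abstract.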
PMID:27999611</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.usgs.gov/sir/2008/5039/','USGSPUBS'); return false;" href="https://pubs.usgs.gov/sir/2008/5039/"><span>Modeling to Predict Escherichia coli at Presque Isle Beach 2, City of Erie, Erie County, Pennsylvania</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Zimmerman, Tammy M.</p> <p>2008-01-01</p> <p>The Lake Erie beaches in Pennsylvania are a valuable recreational resource for Erie County. Concentrations of Escherichia coli (E. coli) at monitored beaches in Presque Isle State Park in Erie, Pa., occasionally exceed the single-sample bathing-water standard of 235 colonies per 100 milliliters resulting in potentially unsafe swimming conditions and prompting beach managers to post public advisories or to close beaches to recreation. To supplement the current method for assessing recreational water quality (E. coli concentrations from the previous day), a predictive regression model for E. coli concentrations at Presque Isle Beach 2 was developed from data collected during the 2004 and 2005 recreational seasons. Model output included predicted E. coli concentrations and exceedance probabilities--the probability that E. coli concentrations would exceed the standard. For this study, E. coli concentrations and other water-quality and environmental data were collected during the 2006 recreational season at Presque Isle Beach 2. The data from 2006, an independent year, were used to test (validate) the 2004-2005 predictive regression model and compare the model performance to the current method. Using 2006 data, the 2004-2005 model yielded more correct responses and better predicted exceedances of the standard than the use of E. coli concentrations from the previous day. The differences were not pronounced, however, and more data are needed. 
For example, the model correctly predicted exceedances of the standard 11 percent of the time (1 out of 9 exceedances that occurred in 2006) whereas using the E. coli concentrations from the previous day did not result in any correctly predicted exceedances. After validation, new models were developed by adding the 2006 data to the 2004-2005 dataset and by analyzing the data in 2- and 3-year combinations. Results showed that excluding the 2004 data (using 2005 and 2006 data only) yielded the best model. Explanatory variables in the 2005-2006 model were log10 turbidity, bird count, and wave height. The 2005-2006 model correctly predicted when the standard would not be exceeded (specificity) with a response of 95.2 percent (178 out of 187 nonexceedances) and correctly predicted when the standard would be exceeded (sensitivity) with a response of 64.3 percent (9 out of 14 exceedances). In all cases, the results from predictive modeling produced higher percentages of correct predictions than using E. coli concentrations from the previous day. Additional data collected each year can be used to test and possibly improve the model. 
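The specificity and sensitivity percentages reported for the 2005-2006 beach model follow directly from the exceedance counts:

```python
def specificity(true_neg, false_pos):
    """Share of non-exceedances that were correctly predicted."""
    return true_neg / (true_neg + false_pos)

def sensitivity(true_pos, false_neg):
    """Share of exceedances that were correctly predicted."""
    return true_pos / (true_pos + false_neg)

# Counts from the record: 178 of 187 non-exceedances and
# 9 of 14 exceedances were predicted correctly.
spec = 100.0 * specificity(178, 187 - 178)   # about 95.2 percent
sens = 100.0 * sensitivity(9, 14 - 9)        # about 64.3 percent
```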
The results of this study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to close a beach or post an advisory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3790277','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3790277"><span>Predictive modeling of nanomaterial exposure effects in biological systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liu, Xiong; Tang, Kaizhi; Harper, Stacey; Harper, Bryan; Steevens, Jeffery A; Xu, Roger</p> <p>2013-01-01</p> <p>Background: Predictive modeling of the biological effects of nanomaterials is critical for industry and policymakers to assess the potential hazards resulting from the application of engineered nanomaterials. Methods: We generated an experimental dataset on the toxic effects experienced by embryonic zebrafish due to exposure to nanomaterials. Several nanomaterials were studied, such as metal nanoparticles, dendrimers, metal oxides, and polymeric materials. The embryonic zebrafish metric (EZ Metric) was used as a screening-level measurement representative of adverse effects. Using the dataset, we developed a data mining approach to model the toxic endpoints and the overall biological impact of nanomaterials. Data mining techniques, such as numerical prediction, can assist analysts in developing risk assessment models for nanomaterials. Results: We found several important attributes that contribute to the 24 hours post-fertilization (hpf) mortality, such as dosage concentration, shell composition, and surface charge. These findings concur with previous studies on nanomaterial toxicity using embryonic zebrafish. 
We conducted case studies on modeling the overall effect/impact of nanomaterials and the specific toxic endpoints such as mortality, delayed development, and morphological malformations. The results show that we can achieve high prediction accuracy for certain biological effects, such as 24 hpf mortality, 120 hpf mortality, and 120 hpf heart malformation. The results also show that the weighting scheme for individual biological effects has a significant influence on modeling the overall impact of nanomaterials. Sample prediction models can be found at http://neiminer.i-a-i.com/nei_models. Conclusion The EZ Metric-based data mining approach has been shown to have predictive power. The results provide valuable insights into the modeling and understanding of nanomaterial exposure effects. PMID:24098077</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26180127','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26180127"><span>Prediction of Drug-Drug Interactions with Crizotinib as the CYP3A Substrate Using a Physiologically Based Pharmacokinetic Model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yamazaki, Shinji; Johnson, Theodore R; Smith, Bill J</p> <p>2015-10-01</p> <p>An orally available multiple tyrosine kinase inhibitor, crizotinib (Xalkori), is a CYP3A substrate, moderate time-dependent inhibitor, and weak inducer. The main objectives of the present study were to: 1) develop and refine a physiologically based pharmacokinetic (PBPK) model of crizotinib on the basis of clinical single- and multiple-dose results, 2) verify the crizotinib PBPK model from crizotinib single-dose drug-drug interaction (DDI) results with multiple-dose coadministration of ketoconazole or rifampin, and 3) apply the crizotinib PBPK model to predict crizotinib multiple-dose DDI outcomes. 
We also focused on gaining insights into the underlying mechanisms mediating crizotinib DDIs using a dynamic PBPK model, the Simcyp population-based simulator. First, PBPK model-predicted crizotinib exposures adequately matched clinically observed results in the single- and multiple-dose studies. Second, the model-predicted crizotinib exposures sufficiently matched clinically observed results in the crizotinib single-dose DDI studies with ketoconazole or rifampin, yielding reasonably predicted fold-increases in crizotinib exposures. Finally, the predicted fold-increases in crizotinib exposures in the multiple-dose DDI studies were roughly comparable to those in the single-dose DDI studies, suggesting that the effects of crizotinib CYP3A time-dependent inhibition (net inhibition) on the multiple-dose DDI outcomes would be negligible. Therefore, crizotinib dose-adjustment in the multiple-dose DDI studies could be made on the basis of currently available single-dose results. Overall, we believe that the crizotinib PBPK model developed, refined, and verified in the present study would adequately predict crizotinib oral exposures in other clinical studies, such as DDIs with weak/moderate CYP3A inhibitors/inducers and drug-disease interactions in patients with hepatic or renal impairment. 
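The study above relies on a dynamic PBPK simulator; as a far simpler point of reference, the classic static equation for competitive inhibition shows how the predicted fold-increase in a victim drug's exposure depends on the fraction metabolized by the inhibited enzyme. All parameter values below are hypothetical illustrations, not crizotinib-specific estimates.

```python
def auc_fold_increase(fm_enzyme, inhibitor_conc, ki):
    """Static AUC ratio for a victim drug under competitive inhibition:
    AUCR = 1 / (fm / (1 + I/Ki) + (1 - fm))."""
    return 1.0 / (fm_enzyme / (1.0 + inhibitor_conc / ki) + (1.0 - fm_enzyme))

# Hypothetical victim drug, 80% cleared via CYP3A, inhibitor at 10x its Ki:
fold = auc_fold_increase(0.8, 10.0, 1.0)  # 11/3, i.e. about a 3.7-fold increase
```

The formula makes the qualitative point of the record explicit: the lower the CYP3A-mediated fraction, the smaller the possible fold-increase, regardless of inhibitor potency.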
Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28865348','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28865348"><span>Prediction of muscle activation for an eye movement with finite element modeling.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Karami, Abbas; Eghtesad, Mohammad; Haghpanah, Seyyed Arash</p> <p>2017-10-01</p> <p>In this paper, 3D finite element (FE) modeling is employed to predict extraocular muscle activation and investigate force coordination in various motions of the eye orbit. A continuum constitutive hyperelastic model is employed for material description in dynamic modeling of the extraocular muscles (EOMs). Two significant features of this model are accurate mass modeling with the FE method and actuation of the EOMs through a muscle activation parameter. To validate the eye model, a forward dynamics simulation of the eye motion is carried out by varying the muscle activation. Furthermore, to predict muscle activation in various eye motions, two different tracking-based inverse controllers are proposed. The performance of these two inverse controllers is investigated according to the resulting muscle force magnitude and muscle force coordination. The simulation results are compared with the available experimental data and well-known existing neurological laws. The comparison confirms both the validation and the prediction results. Copyright © 2017 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20070035071','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20070035071"><span>Prediction of Size Effects in Notched Laminates Using Continuum Damage Mechanics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Camanho, D. P.; Maimi, P.; Davila, C. G.</p> <p>2007-01-01</p> <p>This paper examines the use of a continuum damage model to predict strength and size effects in notched carbon-epoxy laminates. The effects of size and the development of a fracture process zone before final failure are identified in an experimental program. The continuum damage model is described and the resulting predictions of size effects are compared with alternative approaches: the point stress and the inherent flaw models, the Linear-Elastic Fracture Mechanics approach, and the strength of materials approach. The results indicate that the continuum damage model is the most accurate technique to predict size effects in composites. Furthermore, the continuum damage model does not require any calibration and it is applicable to general geometries and boundary conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..17.7220S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..17.7220S"><span>Confronting uncertainty in flood damage predictions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno</p> <p>2015-04-01</p> <p>Reliable flood damage models are a prerequisite for the practical usefulness of the model results. 
Traditional univariate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. Innovative multivariate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved, and thus for improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data from computer-aided telephone interviews compiled after the floods of 2002, 2005, and 2006 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models, and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial-transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias) and precision (mean absolute error), as well as reliability, represented by the proportion of observations that fall within the 5- to 95-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. 
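The reliability measure used in this record — the share of observations falling inside the 5- to 95-quantile predictive interval — can be sketched as follows, with synthetic predictive samples standing in for the output of the Bagging Decision Tree or Bayesian Network models.

```python
import numpy as np

def interval_coverage(pred_samples, observed, lo=0.05, hi=0.95):
    """Fraction of observations inside each case's [lo, hi] predictive band.
    `pred_samples` has shape (n_cases, n_draws)."""
    lower = np.quantile(pred_samples, lo, axis=1)
    upper = np.quantile(pred_samples, hi, axis=1)
    obs = np.asarray(observed, dtype=float)
    return float(np.mean((obs >= lower) & (obs <= upper)))

# Synthetic example: 200 buildings, 500 predictive draws of relative damage.
rng = np.random.default_rng(42)
samples = rng.normal(0.3, 0.1, size=(200, 500))
observed = rng.normal(0.3, 0.1, size=200)
coverage = interval_coverage(samples, observed)
```

For a well-calibrated model the coverage should sit near the nominal 90% of the chosen band, which is the comparison the study's reliability assessment makes.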
Probabilistic models provide quantitative information about prediction uncertainty which is crucial to assess the reliability of model predictions and improves the usefulness of model results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70031311','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70031311"><span>Predicting wetland plant community responses to proposed water-level-regulation plans for Lake Ontario: GIS-based modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Wilcox, D.A.; Xie, Y.</p> <p>2007-01-01</p> <p>Integrated, GIS-based, wetland predictive models were constructed to assist in predicting the responses of wetland plant communities to proposed new water-level regulation plans for Lake Ontario. The modeling exercise consisted of four major components: 1) building individual site wetland geometric models; 2) constructing generalized wetland geometric models representing specific types of wetlands (rectangle model for drowned river mouth wetlands, half ring model for open embayment wetlands, half ellipse model for protected embayment wetlands, and ellipse model for barrier beach wetlands); 3) assigning wetland plant profiles to the generalized wetland geometric models that identify associations between past flooding / dewatering events and the regulated water-level changes of a proposed water-level-regulation plan; and 4) predicting relevant proportions of wetland plant communities and the time durations during which they would be affected under proposed regulation plans. Based on this conceptual foundation, the predictive models were constructed using bathymetric and topographic wetland models and technical procedures operating on the platform of ArcGIS. 
An example of the model processes and outputs for the drowned river mouth wetland model using a test regulation plan illustrates the four components and, when compared against other test regulation plans, produced results that met ecological expectations. The model results were also compared to independent data collected by photointerpretation. Although data collections were not directly comparable, the predicted extent of meadow marsh in years in which photographs were taken was significantly correlated with the extent of mapped meadow marsh in all but barrier beach wetlands. The predictive model for wetland plant communities provided valuable input into International Joint Commission deliberations on new regulation plans and was also incorporated into faunal predictive models used for that purpose.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JGRA..123..993G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JGRA..123..993G"><span>A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha</p> <p>2018-01-01</p> <p>It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. 
We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19830010444','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19830010444"><span>Comparison of simulator fidelity model predictions with in-simulator evaluation data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.</p> <p>1983-01-01</p> <p>A full factorial in simulator experiment of a single axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed loop model of a real time digital simulation facility. 
The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_5 --> <div id="page_6" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div
class="row"> <div class="col-sm-12"> <ol class="result-class" start="101"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28448843','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28448843"><span>Prediction of PM2.5 along urban highway corridor under mixed traffic conditions using CALINE4 model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dhyani, Rajni; Sharma, Niraj; Maity, Animesh Kumar</p> <p>2017-08-01</p> <p>The present study deals with the spatial-temporal distribution of PM 2.5 along a highly trafficked national highway corridor (NH-2) in Delhi, India. Populations residing in areas near roads and highways with high vehicular activity are exposed to high levels of PM 2.5, resulting in various health issues. The spatial extent of PM 2.5 has been assessed with the help of the CALINE4 model. Various input parameters of the model were estimated and used to predict PM 2.5 concentration along the selected highway corridor. The results indicated that many factors affect the prediction of PM 2.5 concentration by the CALINE4 model; these factors are either not considered by the model or have little influence on its prediction capabilities. Therefore, in the present study the performance of the CALINE4 model was found to be unsatisfactory for predicting PM 2.5 concentration. Copyright © 2017 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20080006489','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20080006489"><span>Reynolds-Averaged Navier-Stokes Analysis of Zero Efflux Flow Control over a Hump Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rumsey, Christopher L.</p> <p>2006-01-01</p> <p>The unsteady flow over a hump model with zero efflux oscillatory flow control is modeled computationally using the unsteady Reynolds-averaged Navier-Stokes equations. Three different turbulence models produce similar results, and do a reasonably good job predicting the general character of the unsteady surface pressure coefficients during the forced cycle. However, the turbulent shear stresses are underpredicted in magnitude inside the separation bubble, and the computed results predict too large a (mean) separation bubble compared with experiment. These missed predictions are consistent with earlier steady-state results using no-flow-control and steady suction, from a 2004 CFD validation workshop for synthetic jets.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20060005591','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20060005591"><span>Reynolds-Averaged Navier-Stokes Analysis of Zero Efflux Flow Control Over a Hump Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rumsey, Christopher L.</p> <p>2006-01-01</p> <p>The unsteady flow over a hump model with zero efflux oscillatory flow control is modeled computationally using the unsteady Reynolds-averaged Navier-Stokes equations. 
Three different turbulence models produce similar results, and do a reasonably good job predicting the general character of the unsteady surface pressure coefficients during the forced cycle. However, the turbulent shear stresses are underpredicted in magnitude inside the separation bubble, and the computed results predict too large a (mean) separation bubble compared with experiment. These missed predictions are consistent with earlier steady-state results using no-flow-control and steady suction, from a 2004 CFD validation workshop for synthetic jets.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006AtmEn..40.3909M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006AtmEn..40.3909M"><span>Numerical study of single and two interacting turbulent plumes in atmospheric cross flow</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mokhtarzadeh-Dehghan, M. R.; König, C. S.; Robins, A. G.</p> <p></p> <p>The paper presents a numerical study of two interacting full-scale dry plumes issued into neutral boundary layer cross flow. The study simulates plumes from a mechanical draught cooling tower. The plumes are placed in tandem or side-by-side. Results are first presented for plumes with a density ratio of 0.74 and plume-to-crosswind speed ratio of 2.33, for which data from a small-scale wind tunnel experiment were available and were used to assess the accuracy of the numerical results. Further results are then presented for the more physically realistic density ratio of 0.95, maintaining the same speed ratio. The sensitivity of the results with respect to three turbulence models, namely the standard k-ε model, the RNG k-ε model and the Differential Flux Model (DFM), is presented. 
Comparisons are also made between the predicted rise height and the values obtained from existing integral models. The formation of two counter-rotating vortices is well predicted. The different turbulence models agree well on the predicted rise height, while the DFM predicts temperature profiles more accurately. However, discrepancies between the present results for the rise height of single and multiple plumes and the values obtained from known analytical relations are apparent, and possible reasons for these are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20080030797','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20080030797"><span>Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Cabell, Randolph H.</p> <p>2008-01-01</p> <p>Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. 
The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5731680','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5731680"><span>A backwards glance at words: Using reversed-interior masked primes to test models of visual word identification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lupker, Stephen J.</p> <p>2017-01-01</p> <p>The experiments reported here used “Reversed-Interior” (RI) primes (e.g., cetupmor-COMPUTER) in three different masked priming paradigms in order to test between different models of orthographic coding/visual word recognition. The results of Experiment 1, using a standard masked priming methodology, showed no evidence of priming from RI primes, in contrast to the predictions of the Bayesian Reader and LTRS models. By contrast, Experiment 2, using a sandwich priming methodology, showed significant priming from RI primes, in contrast to the predictions of open bigram models, which predict that there should be no orthographic similarity between these primes and their targets. Similar results were obtained in Experiment 3, using a masked prime same-different task. The results of all three experiments are most consistent with the predictions derived from simulations of the Spatial-coding model. 
PMID:29244824</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A21F2211K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A21F2211K"><span>Can decadal climate predictions be improved by ocean ensemble dispersion filtering?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.</p> <p>2017-12-01</p> <p>Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. The ocean memory due to its heat capacity holds large potential skill on the decadal scale. In recent years, more precise initialization techniques of coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: applying slightly perturbed predictions results in an ensemble, and using the whole ensemble or its ensemble average, instead of a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. 
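The ensemble dispersion filter can be illustrated with a toy scalar ensemble; the drift, noise levels, interval, and shrink factor below are illustrative assumptions, not values from the study.

```python
# Toy sketch of an ensemble dispersion filter: at fixed intervals each
# member's state is pulled toward the ensemble mean, damping dispersion
# while leaving the ensemble mean itself untouched.
import numpy as np

rng = np.random.default_rng(1)
n_members, n_steps, interval = 10, 120, 30  # e.g. monthly steps, "seasonal" filter
shrink = 0.5                                # fraction of the spread retained

x = rng.normal(0.0, 1.0, n_members)         # perturbed initial "ocean states"
y = x.copy()                                # filtered ensemble, same initial states
spread_unfiltered, spread_filtered = [], []

for t in range(1, n_steps + 1):
    noise = rng.normal(0.0, 0.1, n_members)
    x = x + noise                           # free-running ensemble
    y = y + noise                           # filtered ensemble, same forcing
    if t % interval == 0:
        y = y.mean() + shrink * (y - y.mean())   # dispersion filter step
    spread_unfiltered.append(float(x.std()))
    spread_filtered.append(float(y.std()))
```

The filter step is mean-preserving by construction, so only the spread of the filtered ensemble is reduced relative to the free-running one.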
Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. This study is part of MiKlip (fona-miklip.de), a major project on decadal climate prediction in Germany. We focus on the Max-Planck-Institute Earth System Model in its low-resolution version (MPI-ESM-LR) and MiKlip's basic initialization strategy, as in the decadal climate forecast published in 2017: http://www.fona-miklip.de/decadal-forecast-2017-2026/decadal-forecast-for-2017-2026/ More information about this study is available in JAMES, DOI: 10.1002/2016MS000787</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JPhCS.656a2054P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JPhCS.656a2054P"><span>Numerical Modelling and Prediction of Erosion Induced by Hydrodynamic Cavitation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Peters, A.; Lantermann, U.; el Moctar, O.</p> <p>2015-12-01</p> <p>The present work aims to predict cavitation erosion using a numerical flow solver together with a newly developed erosion model. The erosion model is based on the hypothesis that collapses of single cavitation bubbles near solid boundaries form high-velocity microjets, which cause sonic impacts with high pressure amplitudes damaging the surface. The erosion model uses information from a numerical Euler-Euler flow simulation to predict erosion-sensitive areas and assess the erosion aggressiveness of the flow. 
The obtained numerical results were compared to experimental results from tests of an axisymmetric nozzle.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23737943','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23737943"><span>Biological networks for predicting chemical hepatocarcinogenicity using gene expression data from treated mice and relevance across human and rat species.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Thomas, Reuben; Thomas, Russell S; Auerbach, Scott S; Portier, Christopher J</p> <p>2013-01-01</p> <p>Several groups have employed genomic data from subchronic chemical toxicity studies in rodents (90 days) to derive gene-centric predictors of chronic toxicity and carcinogenicity. Genes are annotated to belong to biological processes or molecular pathways that are mechanistically well understood and are described in public databases. To develop a molecular pathway-based prediction model of long term hepatocarcinogenicity using 90-day gene expression data and to evaluate the performance of this model with respect to both intra-species, dose-dependent and cross-species predictions. Genome-wide hepatic mRNA expression was retrospectively measured in B6C3F1 mice following subchronic exposure to twenty-six (26) chemicals (10 were positive, 2 equivocal and 14 negative for liver tumors) previously studied by the US National Toxicology Program. Using these data, a pathway-based predictor model for long-term liver cancer risk was derived using random forests. The prediction model was independently validated on test sets associated with liver cancer risk obtained from mice, rats and humans. Using 5-fold cross validation, the developed prediction model had reasonable predictive performance with the area under receiver-operator curve (AUC) equal to 0.66. 
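A pathway-based random-forest classifier with 5-fold cross-validated AUC, of the kind described here, can be sketched on synthetic "pathway response scores"; the sample sizes and effect sizes below are invented for illustration and do not reflect the study's 26 chemicals.

```python
# Minimal sketch of a pathway-based random-forest carcinogenicity
# classifier evaluated by 5-fold cross-validated AUC. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_chem, n_pathways = 200, 30          # illustrative sizes, not the study's
y = rng.integers(0, 2, n_chem)        # 1 = liver-tumor positive
X = rng.normal(0.0, 1.0, (n_chem, n_pathways))
X[:, :3] += y[:, None] * 1.5          # a few pathways carry the signal

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```

With real expression data, each column would be a summary score for one annotated pathway rather than a raw gene measurement.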
The developed prediction model was then used to extrapolate the results to data associated with rat and human liver cancer. The extrapolated model worked well for both species (AUC value of 0.74 for rats and 0.91 for humans). The prediction models implied a balanced interplay between all pathway responses leading to carcinogenicity predictions. Pathway-based prediction models estimated from sub-chronic data hold promise for predicting long-term carcinogenicity and for extrapolating results across multiple species.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/49854','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/49854"><span>A general method for assessing the effects of uncertainty in individual-tree volume model predictions on large-area volume estimates with a subtropical forest illustration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans</p> <p>2015-01-01</p> <p>Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. 
The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22256273','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22256273"><span>Mining data from CFD simulation for aneurysm and carotid bifurcation models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Miloš, Radović; Dejan, Petrović; Nenad, Filipović</p> <p>2011-01-01</p> <p>Arterial geometry variability is present both within and across individuals. To analyze the influence of geometric parameters, blood density, dynamic viscosity and blood velocity on the wall shear stress (WSS) distribution in the human carotid artery bifurcation and aneurysm, computer simulations were run to generate data pertaining to this phenomenon. In our work we evaluate two prediction models of these relationships: a neural network model and a k-nearest neighbor model. The results revealed that both models have high predictive ability for this task. The achieved results represent progress toward real-time assessment of stroke risk from a given patient's data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810005491','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810005491"><span>Review and developments of dissemination models for airborne carbon fibers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Elber, W.</p> <p>1980-01-01</p> <p>Dissemination prediction models were reviewed to determine their applicability to a risk assessment for airborne carbon fibers. 
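The two predictors evaluated in the CFD data-mining study above, a neural network and a k-nearest-neighbor model, can be sketched on synthetic wall-shear-stress data; the feature names, ranges, and the Poiseuille-like target below are illustrative assumptions, not the study's simulation outputs.

```python
# Sketch of the two regressors compared for WSS prediction, fitted to
# synthetic (geometry, viscosity, velocity) -> wall-shear-stress data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 600
radius = rng.uniform(2.0, 5.0, n)   # vessel radius, mm (illustrative)
visc = rng.uniform(3.0, 4.5, n)     # dynamic viscosity, mPa*s
vel = rng.uniform(0.2, 1.0, n)      # characteristic velocity, m/s
X = np.column_stack([radius, visc, vel])
# Poiseuille-like stand-in for WSS: tau ~ 4*mu*v/r, plus noise
y = 4.0 * visc * vel / radius + rng.normal(0.0, 0.05, n)

X_fit, X_val, y_fit, y_val = X[:450], X[450:], y[:450], y[450:]
knn = make_pipeline(StandardScaler(),
                    KNeighborsRegressor(n_neighbors=5)).fit(X_fit, y_fit)
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                                 random_state=0)).fit(X_fit, y_fit)
r2_knn = knn.score(X_val, y_val)    # coefficient of determination, held-out data
r2_mlp = mlp.score(X_val, y_val)
```

Standardizing the inputs matters for both models here: k-NN distances and neural-network training are otherwise dominated by the largest-scale feature.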
The review showed that the Gaussian prediction models using partial reflection at the ground agreed very closely with a more elaborate diffusion analysis developed for the study. For distances beyond 10,000 m the Gaussian models predicted a slower fall-off in exposure levels than the diffusion models. This resulting level of conservatism was preferred for the carbon fiber risk assessment. The results also showed that the perfect vertical-mixing models developed herein agreed very closely with the diffusion analysis for all except the most stable atmospheric conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E%26ES..121b2003L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E%26ES..121b2003L"><span>Prediction model of dissolved oxygen in ponds based on ELM neural network</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Xinfei; Ai, Jiaoyan; Lin, Chunhuan; Guan, Haibin</p> <p>2018-02-01</p> <p>Dissolved oxygen in ponds is affected by many factors, and its distribution is uneven. In order to reduce this imbalance more effectively, a dissolved oxygen prediction model based on the Extreme Learning Machine (ELM) algorithm is established, building on the method of improving the dissolved oxygen distribution by artificial push flow. Lake Jing at Guangxi University was selected as the experimental area. Using the model to predict the dissolved oxygen concentration for pumps of different voltages, the results show that the prediction accuracy of the ELM is higher than that of the BP algorithm, with a mean square error of 0.0394 and a correlation coefficient of 0.9823. The prediction results for the 24 V pump push flow show that the discrete prediction curve approximates the measured values well. 
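The core of an ELM regressor of the kind used in such studies is small enough to sketch directly: a random, untrained hidden layer followed by a least-squares fit of the output weights. The dissolved-oxygen data below are synthetic placeholders, not the Lake Jing measurements.

```python
# Minimal Extreme Learning Machine (ELM) regressor: random hidden
# weights are never trained; only the output layer is solved for.
import numpy as np

rng = np.random.default_rng(5)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(0, 1, (X.shape[1], n_hidden))  # random input weights
    b = rng.normal(0, 1, n_hidden)                # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in: DO concentration vs. pump voltage and distance to pump
n = 400
X = np.column_stack([rng.uniform(12, 24, n), rng.uniform(1, 50, n)])
y = 6 + 0.1 * X[:, 0] - 0.04 * X[:, 1] + rng.normal(0, 0.1, n)

Xn = (X - X.mean(axis=0)) / X.std(axis=0)         # normalise inputs
model = elm_fit(Xn[:300], y[:300])
mse = float(np.mean((elm_predict(model, Xn[300:]) - y[300:]) ** 2))
```

Because only `beta` is fitted, training reduces to one pseudo-inverse, which is the speed advantage usually claimed for ELMs over backpropagation (BP) networks.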
The model can provide a basis for decisions on artificially improving the dissolved oxygen distribution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AcASn..55..127C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AcASn..55..127C"><span>Evolution of the Radial Abundance Gradient and Cold Gas along the Milky Way Disk</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Q. S.; Chang, R. X.; Yin, J.</p> <p>2014-03-01</p> <p>We have constructed a phenomenological model of the chemical evolution of the Milky Way disk, treating the molecular and atomic gas separately. Using this model, we explore the radial profiles of oxygen abundance and the surface density of cold gas, and their time evolution. It is shown that the model predictions are very sensitive to the adopted infall time-scale. By comparing the model predictions with the observations, we find that the model adopting the star formation law based on H_2 can properly predict the observed radial distributions of cold gas and the oxygen abundance gradient along the disk. 
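The link between the cold-gas content and the abundance gradient can be illustrated with a simple closed-box model under instantaneous recycling, where metallicity depends only on the gas fraction, Z = p ln(1/mu). The yield and the gas-fraction profile below are assumed for illustration, not taken from the paper's model.

```python
# Closed-box, instantaneous-recycling sketch: a gas fraction (mu) that
# rises outward yields a negative radial abundance gradient.
import numpy as np

p = 0.01                                          # effective oxygen yield (assumed)
r = np.linspace(4, 16, 7)                         # galactocentric radius, kpc
mu = np.clip(0.1 + 0.05 * (r - 4), 0.05, 0.95)    # gas fraction rising outward
Z = p * np.log(1.0 / mu)                          # closed-box metallicity

# Linear fit of log abundance vs. radius gives the gradient in dex/kpc
gradient = np.polyfit(r, np.log10(Z), 1)[0]
```

The inner disk, being more gas-poor, is more chemically processed, which is the qualitative behavior the full infall model refines.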
We also compare the model results with the predictions of the model which adopts the instantaneous recycling approximation (IRA), and find that the IRA assumption has little influence on the model results, especially in the low-density gas region.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19990115944&hterms=kodak&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dkodak','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19990115944&hterms=kodak&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dkodak"><span>AXAF-1 High Resolution Assembly Image Model and Comparison with X-Ray Ground Test Image</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Zissa, David E.</p> <p>1999-01-01</p> <p>The x-ray ground test of the AXAF-I High Resolution Mirror Assembly was completed in 1997 at the X-ray Calibration Facility at Marshall Space Flight Center. Mirror surface measurements by HDOS, alignment results from Kodak, and predicted gravity distortion in the horizontal test configuration are being used to model the x-ray test image. The Marshall Space Flight Center (MSFC) image modeling serves as a cross check with Smithsonian Astrophysical observatory modeling. The MSFC image prediction software has evolved from the MSFC model of the x-ray test of the largest AXAF-I mirror pair in 1991. The MSFC image modeling software development is being assisted by the University of Alabama in Huntsville. The modeling process, modeling software, and image prediction will be discussed. 
The image prediction will be compared with the x-ray test results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16266739','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16266739"><span>Model simulation of meteorology and air quality during the summer PUMA intensive measurement campaign in the UK West Midlands conurbation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Baggott, Sarah; Cai, Xiaoming; McGregor, Glenn; Harrison, Roy M</p> <p>2006-05-01</p> <p>The Regional Atmospheric Modeling System (RAMS) and Urban Airshed Model (UAM IV) have been implemented for prediction of air pollutant concentrations within the West Midlands conurbation of the United Kingdom. The modelling results for wind speed, direction and temperature are in reasonable agreement with observations for two stations, one in a rural area and the other in an urban area. Predictions of surface temperature are generally good for both stations, but the results suggest that the quality of temperature prediction is sensitive to whether cloud cover is reproduced reliably by the model. Wind direction is captured very well by the model, while wind speed is generally overestimated. The air pollution climate of the UK West Midlands is very different to those for which the UAM model was primarily developed, and the methods used to overcome these limitations are described. The model shows a tendency towards under-prediction of primary pollutant (NOx and CO) concentrations, but with suitable attention to boundary conditions and vertical profiles gives fairly good predictions of ozone concentrations. Hourly updating of chemical concentration boundary conditions yields the best results, with input of vertical profiles desirable. 
The model seriously underpredicts NO2/NO ratios within the urban area and this appears to relate to inadequate production of peroxy radicals. Overall, the chemical reactivity predicted by the model appears to fall well below that occurring in the atmosphere.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28273856','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28273856"><span>Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian</p> <p>2017-03-04</p> <p>Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model performs better, not only in model fitting but also in forecasting. Furthermore, considering the stability and the modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency.
The number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before declining slowly until the epidemic finally dies out.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16504473','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16504473"><span>Computational simulations of vocal fold vibration: Bernoulli versus Navier-Stokes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Decker, Gifford Z; Thomson, Scott L</p> <p>2007-05-01</p> <p>The use of the mechanical energy (ME) equation for fluid flow, an extension of the Bernoulli equation, to predict the aerodynamic loading on a two-dimensional finite element vocal fold model is examined. Three steady, one-dimensional ME flow models, incorporating different methods of flow separation point prediction, were compared. For two models, determination of the flow separation point was based on fixed ratios of the glottal area at separation to the minimum glottal area; for the third model, the separation point determination was based on fluid mechanics boundary layer theory. Results of flow rate, separation point, and intraglottal pressure distribution were compared with those of an unsteady, two-dimensional, finite element Navier-Stokes model. Cases were considered with a rigid glottal profile as well as with a vibrating vocal fold. For small glottal widths, the three ME flow models yielded good predictions of flow rate and intraglottal pressure distribution, but poor predictions of separation location. For larger orifice widths, the ME models were poor predictors of flow rate and intraglottal pressure, but they satisfactorily predicted separation location.
For the vibrating vocal fold case, all models resulted in similar predictions of mean intraglottal pressure, maximum orifice area, and vibration frequency, but vastly different predictions of separation location and maximum flow rate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15348304','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15348304"><span>Replacing the nucleus pulposus of the intervertebral disk: prediction of suitable properties of a replacement material using finite element analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Meakin, J R</p> <p>2001-03-01</p> <p>An axisymmetric finite element model of a human lumbar disk was developed to investigate the properties required of an implant to replace the nucleus pulposus. In the intact disk, the nucleus was modeled as a fluid, and the annulus as an elastic solid. The Young's modulus of the annulus was determined empirically by matching model predictions to experimental results. The model was checked for sensitivity to the input parameter values and found to give reasonable behavior. The model predicted that removal of the nucleus would change the response of the annulus to compression. This prediction was consistent with experimental results, thus validating the model. Implants to fill the cavity produced by nucleus removal were modeled as elastic solids. The Poisson's ratio was fixed at 0.49, and the Young's modulus was varied from 0.5 to 100 MPa. Two sizes of implant were considered: full size (filling the cavity) and small size (smaller than the cavity). The model predicted that a full size implant would reverse the changes to annulus behavior, but a smaller implant would not. By comparing the stress distribution in the annulus, the ideal Young's modulus was predicted to be approximately 3 MPa. 
These predictions have implications for current nucleus implant designs. Copyright 2001 Kluwer Academic Publishers</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930019093','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930019093"><span>Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.</p> <p>1993-01-01</p> <p>Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved. Thus the reliability of predicting in-space durability of materials based on ground laboratory testing should be improved. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of an assumed mechanistic behavior of atomic oxygen interaction based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo model predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model.
LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_6 --> <div id="page_7" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="121"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2592718','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2592718"><span>Model-driven analysis of experimentally determined growth phenotypes for 465 yeast gene deletion mutants under 16 different conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Snitkin, Evan S; Dudley, Aimée M; Janse, Daniel M; Wong, Kaisheen; Church, George M; Segrè, Daniel</p> <p>2008-01-01</p>
<p>Background: Understanding the response of complex biochemical networks to genetic perturbations and environmental variability is a fundamental challenge in biology. Integration of high-throughput experimental assays and genome-scale computational methods is likely to produce insight otherwise unreachable, but specific examples of such integration have only begun to be explored. Results: In this study, we measured growth phenotypes of 465 Saccharomyces cerevisiae gene deletion mutants under 16 metabolically relevant conditions and integrated them with the corresponding flux balance model predictions. We first used discordance between experimental results and model predictions to guide a stage of experimental refinement, which resulted in a significant improvement in the quality of the experimental data. Next, we used discordance still present in the refined experimental data to assess the reliability of yeast metabolism models under different conditions. In addition to estimating predictive capacity based on growth phenotypes, we sought to explain these discordances by examining predicted flux distributions visualized through a new, freely available platform. This analysis led to insight into the glycerol utilization pathway and the potential effects of metabolic shortcuts on model results. Finally, we used model predictions and experimental data to discriminate between alternative raffinose catabolism routes. Conclusions: Our study demonstrates how a new level of integration between high-throughput measurements and flux balance model predictions can improve understanding of both experimental and computational results. The added value of a joint analysis is a more reliable platform for specific testing of biological hypotheses, such as the catabolic routes of different carbon sources.
PMID:18808699</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70159346','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70159346"><span>Evaluation of wave runup predictions from numerical and parametric models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.</p> <p>2014-01-01</p> <p>Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. 
An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27414041','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27414041"><span>Predicting tibiotalar and subtalar joint angles from skin-marker data with dual-fluoroscopy as a reference standard.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nichols, Jennifer A; Roach, Koren E; Fiorentino, Niccolo M; Anderson, Andrew E</p> <p>2016-09-01</p> <p>Evidence suggests that the tibiotalar and subtalar joints provide near six degree-of-freedom (DOF) motion. Yet, kinematic models frequently assume one DOF at each of these joints. In this study, we quantified the accuracy of kinematic models to predict joint angles at the tibiotalar and subtalar joints from skin-marker data. Models included 1 or 3 DOF at each joint. Ten asymptomatic subjects, screened for deformities, performed 1.0m/s treadmill walking and a balanced, single-leg heel-rise. Tibiotalar and subtalar joint angles calculated by inverse kinematics for the 1 and 3 DOF models were compared to those measured directly in vivo using dual-fluoroscopy. 
Results demonstrated that, for each activity, the average errors in tibiotalar joint angles predicted by the 1 DOF model were significantly smaller than those predicted by the 3 DOF model for inversion/eversion and internal/external rotation. In contrast, neither model consistently demonstrated smaller errors when predicting subtalar joint angles. Additionally, neither model could accurately predict discrete angles for the tibiotalar and subtalar joints on a per-subject basis. Differences between model predictions and dual-fluoroscopy measurements were highly variable across subjects, with joint angle errors in at least one rotation direction surpassing 10° for 9 out of 10 subjects. Our results suggest that both the 1 and 3 DOF models can predict trends in tibiotalar joint angles on a limited basis. However, as currently implemented, neither model can predict discrete tibiotalar or subtalar joint angles for individual subjects. Inclusion of subject-specific attributes may improve the accuracy of these models. Copyright © 2016 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19844800','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19844800"><span>Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Durdu, Omer Faruk</p> <p>2010-10-01</p> <p>In the present study, a seasonal and non-seasonal prediction of boron concentrations time series data for the period of 1996-2004 from Büyük Menderes river in western Turkey are addressed by means of linear stochastic models.
The methodology presented here is to develop adequate linear stochastic models known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) to predict boron content in the Büyük Menderes catchment. Initially, the Box-Whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of 3-year (2002-2004) observed data vs predicted data from the selected best models shows that the boron model from ARIMA modeling approaches could be used in a safe manner since the predicted values from these models preserve the basic statistics of observed data in terms of mean.
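The three Box-Jenkins steps in the abstract (identification via ACF/PACF and AIC, parameter estimation, diagnostic checking of residuals) can be illustrated with a minimal autoregressive sketch. This is a generic NumPy illustration on synthetic data, not code from the cited study; `fit_ar` and `forecast_ar` are hypothetical names introduced here:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p): x_t = c + phi_1*x_{t-1} + ... + phi_p*x_{t-p} + e_t."""
    x = np.asarray(x, dtype=float)
    # Lagged regressor matrix: column i holds the series shifted by lag i+1.
    lags = [x[p - i - 1 : len(x) - i - 1] for i in range(p)]
    X = np.column_stack([np.ones(len(x) - p)] + lags)
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / len(y)
    aic = len(y) * np.log(sigma2) + 2 * (p + 1)  # Gaussian AIC up to an additive constant
    return coef, aic

def forecast_ar(x, coef, steps):
    """Iterate the fitted AR recursion forward from the end of the series."""
    p = len(coef) - 1
    hist = list(np.asarray(x, dtype=float))
    out = []
    for _ in range(steps):
        nxt = coef[0] + sum(coef[i + 1] * hist[-i - 1] for i in range(p))
        hist.append(nxt)
        out.append(nxt)
    return np.array(out)

# Synthetic AR(1) series: x_t = 1.0 + 0.7*x_{t-1} + noise, long-run mean 1/(1-0.7) ~ 3.33.
rng = np.random.default_rng(0)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 1.0 + 0.7 * x[t - 1] + 0.05 * rng.standard_normal()
coef, aic = fit_ar(x, 1)
```

In practice, identification would compare AIC across candidate orders p, and diagnostic checking would test the residuals for independence, normality, and homoscedasticity, as the abstract describes.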
The ARIMA modeling approach is recommended for predicting boron concentration series of a river.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.A51P0308B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.A51P0308B"><span>Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.</p> <p>2015-12-01</p> <p>Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. 
These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23276196','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23276196"><span>Family-supportive work environments and psychological strain: a longitudinal test of two theories.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Odle-Dusseau, Heather N; Herleman, Hailey A; Britt, Thomas W; Moore, Dewayne D; Castro, Carl A; McGurk, Dennis</p> <p>2013-01-01</p> <p>Based on the Job Demands-Resources (JDR) model (E. Demerouti, A. B. Bakker, F. Nachreiner, & W. B. Schaufeli, 2001, The job demands-resources model of burnout. Journal of Applied Psychology, 86, 499-512) and Conservation of Resources (COR) theory (S. E. Hobfoll, 2002, Social and psychological resources and adaptation. Review of General Psychology, 6, 307-324), we tested three competing models that predict different directions of causation for relationships over time between family-supportive work environments (FSWE) and psychological strain, with two waves of data from a military sample. 
Results revealed support for both the JDR and COR theories, first in the static model where FSWE at Time 1 predicted psychological strain at Time 2 and when testing the opposite direction, where psychological strain at Time 1 predicted FSWE at Time 2. For change models, FSWE predicted changes in psychological strain across time, although the reverse causation model was not supported (psychological strain at Time 1 did not predict changes in FSWE). Also, changes in FSWE across time predicted psychological strain at Time 2, whereas changes in psychological strain did not predict FSWE at Time 2. Theoretically, these results are important for the work-family interface in that they demonstrate the application of a systems approach to studying work and family interactions, as support was obtained for both the JDR model with perceptions of FSWE predicting psychological strain (in both the static and change models), and for COR theory where psychological strain predicts FSWE across time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010IEITC..91.2142S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010IEITC..91.2142S"><span>Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sato, Naoyuki; Yamaguchi, Yoko</p> <p></p> <p>Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. 
These studies provide a novel approach to computational modeling in the human-machine interface.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JThSc..27...78Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JThSc..27...78Y"><span>Analysis of turbulence and surface growth models on the estimation of soot level in ethylene non-premixed flames</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yunardi, Y.; Munawar, Edi; Rinaldi, Wahyu; Razali, Asbar; Iskandar, Elwina; Fairweather, M.</p> <p>2018-02-01</p> <p>Soot prediction in a combustion system has become a subject of attention, as many factors influence its accuracy. An accurate temperature prediction will likely yield better soot predictions, since the inception, growth and destruction of soot are affected by temperature. This paper reports a study of the influence of turbulence closure and surface growth models on the prediction of soot levels in turbulent flames. The results demonstrate that a substantial distinction was observed between the temperature predictions derived using the k-ɛ and the Reynolds stress models for the two ethylene flames studied here. Amongst the four types of surface growth rate model investigated, the assumption that the soot surface growth rate is proportional to the particle number density but independent of the surface area of soot particles, f(A_s) = ρN_s, yields the closest agreement with the radial data. Without any adjustment to the constants in the surface growth term, the other approaches, in which the surface growth is directly proportional to the surface area or to the square root of the surface area, f(A_s) = A_s and f(A_s) = √A_s, result in an under-prediction of soot volume fraction.
These results suggest that predictions of soot volume fraction are sensitive to the modelling of surface growth.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21110893','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21110893"><span>A theoretical model of the application of RF energy to the airway wall and its experimental validation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jarrard, Jerry; Wizeman, Bill; Brown, Robert H; Mitzner, Wayne</p> <p>2010-11-27</p> <p>Bronchial thermoplasty is a novel technique designed to reduce an airway's ability to contract by reducing the amount of airway smooth muscle through controlled heating of the airway wall. This method has been examined in animal models and as a treatment for asthma in human subjects. At the present time, there has been little research published about how radiofrequency (RF) energy and heat is transferred to the airways of the lung during bronchial thermoplasty procedures. In this manuscript we describe a computational, theoretical model of the delivery of RF energy to the airway wall. An electro-thermal finite-element-analysis model was designed to simulate the delivery of temperature controlled RF energy to airway walls of the in vivo lung. The model includes predictions of heat generation due to RF joule heating and transfer of heat within an airway wall due to thermal conduction. To implement the model, we use known physical characteristics and dimensions of the airway and lung tissues. The model predictions were tested with measurements of temperature, impedance, energy, and power in an experimental canine model. 
Model predictions of electrode temperature, voltage, and current, along with tissue impedance and delivered energy, were compared to experimental measurements and were within ± 5% of experimental averages taken over 157 sample activations. The experimental results show remarkable agreement with the model predictions, and thus validate the use of this model to predict the heat generation and transfer within the airway wall following bronchial thermoplasty. The model also demonstrated the importance of evaporation as a loss term that affected both electrical measurements and heat distribution. The model predictions showed excellent agreement with the empirical results, and thus support using the model to develop the next generation of devices for bronchial thermoplasty. Our results suggest that comparing model results to RF generator electrical measurements may be a useful tool in the early evaluation of a model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SpWea..15..192W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SpWea..15..192W"><span>Exploring predictive performance: A reanalysis of the geospace model transition challenge</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Welling, D. T.; Anderson, B. J.; Crowley, G.; Pulkkinen, A. A.; Rastätter, L.</p> <p>2017-01-01</p> <p>The Pulkkinen et al. (2013) study evaluated the ability of five different geospace models to predict surface dB/dt as a function of upstream solar drivers. This was an important step in the assessment of research models for predicting and ultimately preventing the damaging effects of geomagnetically induced currents. Many questions remain concerning the capabilities of these models. This study presents a reanalysis of the Pulkkinen et al.
(2013) results in an attempt to better understand the models' performance. The range of validity of the models is determined by examining the conditions corresponding to the empirical input data. It is found that the empirical conductance models on which global magnetohydrodynamic models rely are frequently used outside the limits of their input data. The prediction error for the models is sorted as a function of solar driving and geomagnetic activity. It is found that all models show a bias toward underprediction, especially during active times. These results have implications for future research aimed at improving operational forecast models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JCoPh.309...52N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JCoPh.309...52N"><span>Gaussian functional regression for output prediction: Model assimilation and experimental design</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nguyen, N. C.; Peraire, J.</p> <p>2016-03-01</p> <p>In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. 
As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016ArtSa..51...19L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016ArtSa..51...19L"><span>Application of Grey Model GM(1, 1) to Ultra Short-Term Predictions of Universal Time</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lei, Yu; Guo, Min; Zhao, Danning; Cai, Hongbing; Hu, Dandan</p> <p>2016-03-01</p> <p>A mathematical model known as one-order one-variable grey differential equation model GM(1, 1) has been herein employed successfully for the ultra short-term (<10days) predictions of universal time (UT1-UTC). The results of predictions are analyzed and compared with those obtained by other methods. It is shown that the accuracy of the predictions is comparable with that obtained by other prediction methods. The proposed method is able to yield an exact prediction even though only a few observations are provided. Hence it is very valuable in the case of a small size dataset since traditional methods, e.g., least-squares (LS) extrapolation, require longer data span to make a good forecast. In addition, these results can be obtained without making any assumption about an original dataset, and thus is of high reliability. Another advantage is that the developed method is easy to use. 
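The GM(1,1) procedure is concrete enough to sketch directly: accumulate the series, fit the grey differential equation by least squares, and difference the fitted accumulation to obtain forecasts. The toy series below is illustrative, not the UT1-UTC observations used in the paper:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                     # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[:-1] + x1[1:])          # mean sequence of the AGO series
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # development coeff. and grey input
    def x1_hat(k):                          # fitted AGO series
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    n = len(x0)
    fit = x1_hat(np.arange(n + steps))
    return np.diff(fit)[n - 1:]             # inverse AGO gives the forecasts

# toy near-exponential series (hypothetical, not UT1-UTC data)
series = [2.0, 2.2, 2.42, 2.66, 2.93]
pred = gm11_forecast(series, steps=2)
```

Even with only five observations the fitted exponential tracks the series, which mirrors the abstract's point that GM(1,1) is usable on very short data spans where least-squares extrapolation would struggle.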
All these reveal a great potential of the GM(1, 1) model for UT1-UTC predictions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JInst..13P2022B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JInst..13P2022B"><span>Experimental evaluation of models for predicting Cherenkov light intensities from short-cooled nuclear fuel assemblies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Branger, E.; Grape, S.; Jansson, P.; Jacobsson Svärd, S.</p> <p>2018-02-01</p> <p>The Digital Cherenkov Viewing Device (DCVD) is a tool used by nuclear safeguards inspectors to verify irradiated nuclear fuel assemblies in wet storage based on the recording of Cherenkov light produced by the assemblies. One type of verification involves comparing the measured light intensity from an assembly with a predicted intensity, based on assembly declarations. Crucial for such analyses is the performance of the prediction model used, and recently new modelling methods have been introduced to allow for enhanced prediction capabilities by taking the irradiation history into account, and by including the cross-talk radiation from neighbouring assemblies in the predictions. In this work, the performance of three models for Cherenkov-light intensity prediction is evaluated by applying them to a set of short-cooled PWR 17x17 assemblies for which experimental DCVD measurements and operator-declared irradiation data were available; (1) a two-parameter model, based on total burnup and cooling time, previously used by the safeguards inspectors, (2) a newly introduced gamma-spectrum-based model, which incorporates cycle-wise burnup histories, and (3) the latter gamma-spectrum-based model extended to account for contributions from neighbouring assemblies.
The results show that the two gamma-spectrum-based models provide significantly higher precision for the measured inventory compared to the two-parameter model, lowering the standard deviation between relative measured and predicted intensities from 15.2 % to 8.1 % and 7.8 %, respectively. The results show some systematic differences between assemblies of different designs (produced by different manufacturers) in spite of their similar PWR 17x17 geometries, and possible ways are discussed to address such differences, which may allow for even higher prediction capabilities. Still, it is concluded that the gamma-spectrum-based models enable confident verification of the fuel assembly inventory at the currently used detection limit for partial defects, being a 30 % discrepancy between measured and predicted intensities, while some false detection occurs with the two-parameter model. The results also indicate that the gamma-spectrum-based prediction methods are accurate enough that the 30 % discrepancy limit could potentially be lowered.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25896647','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25896647"><span>Risk prediction for chronic kidney disease progression using heterogeneous electronic health record data and time series analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Perotte, Adler; Ranganath, Rajesh; Hirsch, Jamie S; Blei, David; Elhadad, Noémie</p> <p>2015-07-01</p> <p>As adoption of electronic health records continues to increase, there is an opportunity to incorporate clinical documentation as well as laboratory values and demographics into risk prediction modeling.
The authors develop a risk prediction model for chronic kidney disease (CKD) progression from stage III to stage IV that includes longitudinal data and features drawn from clinical documentation. The study cohort consisted of 2908 primary-care clinic patients who had at least three visits prior to January 1, 2013 and developed CKD stage III during their documented history. Development and validation cohorts were randomly selected from this cohort and the study datasets included longitudinal inpatient and outpatient data from these populations. Time series analysis (Kalman filter) and survival analysis (Cox proportional hazards) were combined to produce a range of risk models. These models were evaluated using concordance, a discriminatory statistic. A risk model incorporating longitudinal data on clinical documentation and laboratory test results (concordance 0.849) predicts progression from stage III CKD to stage IV CKD more accurately when compared to a similar model without laboratory test results (concordance 0.733, P < .001), a model that only considers the most recent laboratory test results (concordance 0.819, P < .031) and a model based on estimated glomerular filtration rate (concordance 0.779, P < .001). A risk prediction model that takes longitudinal laboratory test results and clinical documentation into consideration can predict CKD progression from stage III to stage IV more accurately than three models that do not take all of these variables into consideration. © The Author 2015.
Published by Oxford University Press on behalf of the American Medical Informatics Association.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70026167','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70026167"><span>Methods for using groundwater model predictions to guide hydrogeologic data collection, with application to the Death Valley regional groundwater flow system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.</p> <p>2003-01-01</p> <p>Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying model parameters that are most important to the predictions. Identification of these important parameters can help guide collection of field data about parameter values and associated flow system features and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty on individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for two paths evaluated the most important parameters include a subset of five or six of the 23 defined model parameters. 
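The prediction scaled sensitivity idea described in this abstract (rank each parameter by how much the prediction shifts per one standard deviation of that parameter's uncertainty) can be sketched numerically. The toy travel-time model, parameter estimates, and uncertainty values below are hypothetical, not the Death Valley regional model:

```python
import numpy as np

def prediction_scaled_sensitivity(predict, theta, sigma, eps=1e-6):
    """PSS_j ~ |dP/dtheta_j| * sigma_j: prediction change per 1-sigma parameter change."""
    theta = np.asarray(theta, dtype=float)
    base = predict(theta)
    pss = np.empty(len(theta))
    for j in range(len(theta)):
        t = theta.copy()
        t[j] += eps                         # forward-difference derivative
        pss[j] = abs((predict(t) - base) / eps) * sigma[j]
    return pss

# toy "advective travel time" as a function of three hydraulic conductivities
def travel_time(theta):
    k1, k2, k3 = theta
    return 10.0 / k1 + 5.0 / k2 + 0.1 / k3

theta = np.array([1.0, 2.0, 4.0])           # parameter estimates (hypothetical)
sigma = np.array([0.5, 0.1, 2.0])           # 1-sigma parameter uncertainties (hypothetical)
pss = prediction_scaled_sensitivity(travel_time, theta, sigma)
ranking = np.argsort(pss)[::-1]             # most prediction-relevant parameter first
```

Note how the third parameter, despite having the largest uncertainty, ranks last because the prediction is nearly insensitive to it; this is the kind of screening that identifies the "five or six of 23" parameters worth targeting with field data. (The VOII method additionally accounts for parameter correlation, which this per-parameter sketch ignores.)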
Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27438600','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27438600"><span>A Novel RSSI Prediction Using Imperialist Competition Algorithm (ICA), Radial Basis Function (RBF) and Firefly Algorithm (FFA) in Wireless Networks.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Goudarzi, Shidrokh; Haslina Hassan, Wan; Abdalla Hashim, Aisha-Hassan; Soleymani, Seyed Ahmad; Anisi, Mohammad Hossein; Zakaria, Omar M</p> <p>2016-01-01</p> <p>This study aims to design a vertical handover prediction method to minimize unnecessary handovers for a mobile node (MN) during the vertical handover process. This relies on a novel method for the prediction of a received signal strength indicator (RSSI) referred to as IRBF-FFA, which is designed by utilizing the imperialist competition algorithm (ICA) to train the radial basis function (RBF), and by hybridizing with the firefly algorithm (FFA) to predict the optimal solution. The prediction accuracy of the proposed IRBF-FFA model was validated by comparing it to support vector machines (SVMs) and multilayer perceptron (MLP) models. 
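A minimal radial-basis-function network sketch of the regression step: for brevity the output weights are fit by ordinary least squares, standing in for the ICA/FFA training used in the paper, and the RSSI-versus-distance data are simulated from a log-distance path-loss curve rather than measured:

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian RBF design matrix: one column per center
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2 * width ** 2))

# simulated RSSI vs. distance (log-distance path loss + noise; illustrative only)
rng = np.random.default_rng(0)
dist = np.linspace(1, 50, 80)
rssi = -40 - 20 * np.log10(dist) + rng.normal(0, 1.0, dist.size)

centers = np.linspace(1, 50, 10)            # fixed RBF centers
width = 6.0
Phi = rbf_design(dist, centers, width)
w, *_ = np.linalg.lstsq(Phi, rssi, rcond=None)   # LS stands in for ICA/FFA training

def predict_rssi(d):
    d = np.atleast_1d(np.asarray(d, dtype=float))
    return rbf_design(d, centers, width) @ w

rmse = np.sqrt(np.mean((predict_rssi(dist) - rssi) ** 2))
```

In the paper the hidden-layer parameters are tuned by the imperialist competition algorithm and the prediction refined with the firefly algorithm; the sketch above only shows the RBF structure those metaheuristics optimize.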
In order to assess the model's performance, we measured the coefficient of determination (R2), correlation coefficient (r), root mean square error (RMSE) and mean absolute percentage error (MAPE). The achieved results indicate that the IRBF-FFA model provides more precise predictions compared to different ANNs, namely, support vector machines (SVMs) and multilayer perceptron (MLP). The performance of the proposed model is analyzed through simulated and real-time RSSI measurements. The results also suggest that the IRBF-FFA model can be applied as an efficient technique for the accurate prediction of vertical handover.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS.995a2046N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS.995a2046N"><span>Optimizing Blasting’s Air Overpressure Prediction Model using Swarm Intelligence</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd</p> <p>2018-04-01</p> <p>Air overpressure (AOp) resulting from blasting can cause damage and nuisance to nearby civilians. Thus, it is important to be able to predict AOp accurately. In this study, 8 different Artificial Neural Network (ANN) were developed for the purpose of prediction of AOp. The ANN models were trained using different variants of Particle Swarm Optimization (PSO) algorithm. AOp predictions were also made using an empirical equation, as suggested by United States Bureau of Mines (USBM), to serve as a benchmark. In order to develop the models, 76 blasting operations in Hulu Langat were investigated. 
All the ANN models were found to outperform the USBM equation in three performance metrics; root mean square error (RMSE), mean absolute percentage error (MAPE) and coefficient of determination (R2). Using a performance ranking method, MSO-Rand-Mut was determined to be the best prediction model for AOp with a performance metric of RMSE=2.18, MAPE=1.73% and R2=0.97. The result shows that ANN models trained using PSO are capable of predicting AOp with great accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19718448','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19718448"><span>Microarray-based cancer prediction using soft computing approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Xiaosheng; Gotoh, Osamu</p> <p>2009-05-26</p> <p>One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or ten thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involved in single genes or gene pairs on the basis of soft computing approach and rough set theory. Accurate cancerous prediction is obtained when we apply the simple prediction models for four cancerous gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable for they are based on decision rules. 
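A gene-pair decision rule of the kind described (predict the class by comparing the expression of two genes) can be sketched with an exhaustive pair search. This top-scoring-pair style rule is a simplified stand-in for the paper's rough-set rule induction, and the expression matrix is toy data:

```python
import numpy as np

def best_pair_rule(X, y):
    """Find the gene pair (i, j) whose rule 'X[:, i] > X[:, j] => class 1'
    best separates the training labels."""
    n_genes = X.shape[1]
    best = (0, 1, -1.0)
    for i in range(n_genes):
        for j in range(n_genes):
            if i == j:
                continue
            acc = np.mean((X[:, i] > X[:, j]).astype(int) == y)
            if acc > best[2]:
                best = (i, j, acc)
    return best

# toy expression matrix: gene 0 exceeds gene 2 only in class-1 samples (illustrative)
X = np.array([[5.0, 1.0, 2.0],
              [6.0, 7.0, 3.0],
              [1.0, 1.5, 4.0],
              [0.5, 2.5, 3.0]])
y = np.array([1, 1, 0, 0])
i, j, acc = best_pair_rule(X, y)
```

The resulting classifier is a single human-readable rule ("call cancer when gene i is expressed above gene j"), which is exactly the interpretability property the abstract emphasizes.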
Our results demonstrate that very simple models may perform well on cancerous molecular prediction and important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPRS..123....1X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPRS..123....1X"><span>Incorporation of satellite remote sensing pan-sharpened imagery into digital soil prediction and mapping models to characterize soil property variability in small agricultural fields</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xu, Yiming; Smith, Scot E.; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P.</p> <p>2017-01-01</p> <p>Soil prediction models based on spectral indices from some multispectral images are too coarse to characterize spatial pattern of soil properties in small and heterogeneous agricultural lands. Image pan-sharpening has seldom been utilized in Digital Soil Mapping research before. This research aimed to analyze the effects of pan-sharpened (PAN) remote sensing spectral indices on soil prediction models in smallholder farm settings. This research fused the panchromatic band and multispectral (MS) bands of WorldView-2, GeoEye-1, and Landsat 8 images in a village in Southern India by Brovey, Gram-Schmidt and Intensity-Hue-Saturation methods. Random Forest was utilized to develop soil total nitrogen (TN) and soil exchangeable potassium (Kex) prediction models by incorporating multiple spectral indices from the PAN and MS images. Overall, our results showed that PAN remote sensing spectral indices have similar spectral characteristics with soil TN and Kex as MS remote sensing spectral indices. 
No single type of pan-sharpened spectral index consistently yielded the strongest prediction of soil TN and Kex. The incorporation of pan-sharpened remote sensing spectral data not only increased the spatial resolution of the soil prediction maps, but also enhanced the prediction accuracy of soil prediction models. Small farms with limited footprint, fragmented ownership and diverse crop cycles should benefit greatly from the pan-sharpened high spatial resolution imagery for soil property mapping. Our results show that multiple high and medium resolution images can be used to map soil properties suggesting the possibility of an improvement in the maps' update frequency. Additionally, the results should benefit the large agricultural community through the reduction of routine soil sampling cost and improved prediction accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9638E..12T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9638E..12T"><span>Prediction models for CO2 emission in Malaysia using best subsets regression and multi-linear regression</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tan, C. H.; Matjafri, M. Z.; Lim, H. S.</p> <p>2015-10-01</p> <p>This paper presents the prediction models which analyze and compute the CO2 emission in Malaysia. Each prediction model for CO2 emission is analyzed based on three main groups: transportation, electricity and heat production, and residential buildings and commercial and public services. The prediction models were generated using data obtained from World Bank Open Data. The best subset method is used to remove irrelevant predictors, followed by multi-linear regression to produce the prediction models.
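The best-subsets-then-regression workflow can be sketched as an exhaustive search over predictor subsets scored by R-squared, with the winning subset fit by ordinary least squares. The data below are simulated, not the World Bank series:

```python
import itertools
import numpy as np

def fit_r2(X, y):
    # ordinary least squares with intercept; return coefficient of determination
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())

def best_subset(X, y, k):
    """Exhaustively search all k-predictor subsets, keep the highest R^2."""
    best_cols, best_r2 = None, -np.inf
    for cols in itertools.combinations(range(X.shape[1]), k):
        r2 = fit_r2(X[:, cols], y)
        if r2 > best_r2:
            best_cols, best_r2 = cols, r2
    return best_cols, best_r2

# simulated data: the response depends on predictors 0 and 2; predictor 1 is irrelevant
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
y = 3.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0, 0.1, 60)
cols, r2 = best_subset(X, y, k=2)
```

The search correctly drops the irrelevant predictor, which is the role best-subsets plays before the multi-linear regression in the abstract's workflow. (Exhaustive search is only practical for a handful of candidate predictors, as here.)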
From the results, high R-square (prediction) value was obtained and this implies that the models are reliable to predict the CO2 emission by using specific data. In addition, the CO2 emissions from these three groups are forecasted using trend analysis plots for observation purpose.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_5");'>5</a></li> <li><a href="#" onclick='return showDiv("page_6");'>6</a></li> <li class="active"><span>7</span></li> <li><a href="#" onclick='return showDiv("page_8");'>8</a></li> <li><a href="#" onclick='return showDiv("page_9");'>9</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_7 --> <div id="page_8" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_6");'>6</a></li> <li><a href="#" onclick='return showDiv("page_7");'>7</a></li> <li class="active"><span>8</span></li> <li><a href="#" onclick='return showDiv("page_9");'>9</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="141"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100024261','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100024261"><span>Control of Systems With Slow Actuators Using Time Scale Separation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stepanyan, Vehram; Nguyen, Nhan</p> <p>2009-01-01</p> <p>This paper addresses the 
problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009ChPhC..33..633D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009ChPhC..33..633D"><span>NUCLEAR AND HEAVY ION PHYSICS: α-decay half-lives of superheavy nuclei and general predictions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dong, Jian-Min; Zhang, Hong-Fei; Wang, Yan-Zhao; Zuo, Wei; Su, Xin-Ning; Li, Jun-Qing</p> <p>2009-08-01</p> <p>The generalized liquid drop model (GLDM) and the cluster model have been employed to calculate the α-decay half-lives of superheavy nuclei (SHN) using the experimental α-decay Q values. The results of the cluster model are slightly poorer than those from the GLDM if experimental Q values are used. 
The prediction powers of these two models with theoretical Q values from Audi et al. (QAudi) and Muntian et al. (QM) have been tested to find that the cluster model with QAudi and QM could provide reliable results for Z > 112 but the GLDM with QAudi for Z <= 112. The half-lives of some still unknown nuclei are predicted by these two models and these results may be useful for future experimental assignment and identification.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19880025726&hterms=Bayes&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DBayes','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19880025726&hterms=Bayes&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DBayes"><span>The use of the logistic model in space motion sickness prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lin, Karl K.; Reschke, Millard F.</p> <p>1987-01-01</p> <p>The one-equation and the two-equation logistic models were used to predict subjects' susceptibility to motion sickness in KC-135 parabolic flights using data from other ground-based motion sickness tests. The results show that the logistic models correctly predicted substantially more cases (an average of 13 percent) in the data subset used for model building. 
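A one-equation logistic model of this kind can be sketched as follows; the ground-test scores and in-flight sickness outcomes are illustrative toy data (not the KC-135 dataset), and the fit uses plain gradient descent on the logistic loss:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Fit P(sick) = 1 / (1 + exp(-(b0 + b.x))) by gradient descent."""
    Xd = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(Xd.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xd @ w))
        w -= lr * Xd.T @ (p - y) / len(y)   # gradient of the negative log-likelihood
    return w

def predict_prob(w, X):
    Xd = np.column_stack([np.ones(len(X)), np.atleast_2d(X)])
    return 1.0 / (1.0 + np.exp(-Xd @ w))

# toy ground-based susceptibility score vs. binary sickness endpoint (illustrative)
score = np.array([0.1, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.1, 1.3])
sick = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])
w = fit_logistic(score[:, None], sick)
acc = np.mean((predict_prob(w, score[:, None]) > 0.5) == sick)
```

Thresholding the fitted probability at 0.5 yields the kind of binary endpoint prediction whose percent-correct rates the study compares against the Bayes linear discriminant.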
Overall, the logistic models ranged from 53 to 65 percent predictions of the three endpoint parameters, whereas the Bayes linear discriminant procedure ranged from 48 to 65 percent correct for the cross validation sample.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016rpcg.book..139E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016rpcg.book..139E"><span>Can Mathematical Models Predict the Outcomes of Prostate Cancer Patients Undergoing Intermittent Androgen Deprivation Therapy?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Everett, R. A.; Packer, A. M.; Kuang, Y.</p> <p></p> <p>Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. 
Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014BpRL....9..173E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014BpRL....9..173E"><span>Can Mathematical Models Predict the Outcomes of Prostate Cancer Patients Undergoing Intermittent Androgen Deprivation Therapy?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Everett, R. A.; Packer, A. M.; Kuang, Y.</p> <p>2014-04-01</p> <p>Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. 
Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20040082149','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20040082149"><span>Radicals and Reservoirs in the GMI Chemistry and Transport Model: Comparison to Measurements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Douglas, Anne R.; Stolarski, Richard S.; Strahan, Susan E.; Connell, Peter S.</p> <p>2004-01-01</p> <p>The most important use of atmospheric chemistry and transport models is to predict the future composition of the atmosphere. The amounts of gases like chlorofluorcarbons, methyl bromide, nitrous oxide and methane are changing and the stratospheric ozone layer will change because these gases are changing. Methyl bromide, nitrous oxide and methane all have natural sources, and also change because of human activity. Chlorofluorcarbons are man-made gases; these are known to decrease stratospheric ozone and future production is banned. They are long-lived gases, and many decades will pass before they are insignificant in the atmosphere. The models are used to predict changes in ozone and other gases; this is a straightforward application. 
The models must also be tested using observations of the present-day atmosphere. This is a challenging task, because the model contains more than 50 species and more than 150 chemical reactions. Data from satellites, ground stations, aircraft and balloons are used to evaluate the model. Different models that are used in international assessments produce different results; in the most recent assessment some predict that ozone will return to 1980 levels by 2025 and others predict that this will not happen until 2050. Since all the parts of the models are conceptually the same, there must be differences in implementation that produce these differences. This work takes a single model, two different sets of winds and temperatures, and repeats the same prediction for the future. Here we compare the results for these two simulations with many observations. The purpose is to identify differences in the model results for the present atmosphere that will lead to different predictions. This sort of controlled comparison will reduce uncertainty in the predictions for stratospheric ozone.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JGRB..117.2307R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012JGRB..117.2307R"><span>Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models: 2.
Laboratory earthquakes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.</p> <p>2012-02-01</p> <p>The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term-sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. 
There is experimental and field evidence that these assumptions are not valid for all earthquakes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28691113','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28691113"><span>Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao</p> <p>2017-06-30</p> <p>Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. 
The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5494643','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5494643"><span>Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2017-01-01</p> <p>Numerous chemical data sets have become available for quantitative structure–activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. 
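The error-simulation design these studies describe — duplicate a modeling set, randomize the activities of a fraction of compounds, refit, and compare performance — can be sketched in a toy setting. The one-feature threshold "model" and data below are invented stand-ins for a real QSAR workflow, and a simple refit-and-score replaces the fivefold cross-validation for brevity:

```python
import random

rng = random.Random(1)

x = list(range(50))
y = [1 if xi >= 25 else 0 for xi in x]  # "true" activities (toy data)

def fit_threshold(xs, ys):
    # Midpoint between class means: a minimal stand-in for model training.
    m0 = sum(xi for xi, yi in zip(xs, ys) if yi == 0) / ys.count(0)
    m1 = sum(xi for xi, yi in zip(xs, ys) if yi == 1) / ys.count(1)
    return (m0 + m1) / 2

def accuracy(xs, ys):
    t = fit_threshold(xs, ys)
    preds = [1 if xi > t else 0 for xi in xs]
    return sum(p == yi for p, yi in zip(preds, ys)) / len(ys)

acc_clean = accuracy(x, y)

# Simulate 20% experimental errors by flipping the labels of 10 compounds.
y_noisy = y[:]
for i in rng.sample(range(50), 10):
    y_noisy[i] = 1 - y_noisy[i]
acc_noisy = accuracy(x, y_noisy)

print(acc_clean, acc_noisy)  # performance degrades as the error ratio grows
```

The same pattern scales to the papers' setup: sweep the flip ratio, rebuild the model at each ratio, and track how the evaluation metric deteriorates.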
Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting. PMID:28691113</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26091769','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26091769"><span>Support vector machine in crash prediction at the level of traffic analysis zones: Assessing the spatial proximity effects.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dong, Ni; Huang, Helai; Zheng, Liang</p> <p>2015-09-01</p> <p>In zone-level crash prediction, accounting for spatial dependence has become an extensively studied topic. This study proposes Support Vector Machine (SVM) model to address complex, large and multi-dimensional spatial data in crash prediction. 
Correlation-based Feature Selector (CFS) was applied to evaluate candidate factors possibly related to zonal crash frequency in handling high-dimension spatial data. To demonstrate the proposed approaches and to compare them with the Bayesian spatial model with conditional autoregressive prior (i.e., CAR), a dataset from Hillsborough County, Florida was employed. The results showed that SVM models accounting for spatial proximity outperform the non-spatial model in terms of model fitting and predictive performance, which indicates the reasonableness of considering cross-zonal spatial correlations. The best predictive capability is associated with the model that considers centroid-distance proximity, uses the RBF kernel, and sets 10% of the whole dataset aside as testing data, which further exhibits SVM models' capacity for addressing comparatively complex spatial data in regional crash prediction modeling. Moreover, SVM models exhibit better goodness-of-fit than CAR models when utilizing the whole dataset as the samples. A sensitivity analysis of the centroid-distance-based spatial SVM models was conducted to capture the impacts of explanatory variables on the mean predicted probabilities for crash occurrence. The results conform to the coefficient estimates of the CAR models, which supports the employment of the SVM model as an alternative in regional safety modeling. Copyright © 2015 Elsevier Ltd.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70019218','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70019218"><span>Testing prediction methods: Earthquake clustering versus the Poisson model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Michael, A.J.</p> <p>1997-01-01</p> <p>Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. 
Published in 1997 by the American Geophysical Union.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29096115','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29096115"><span>Model-based predictions for dopamine.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Langdon, Angela J; Sharpe, Melissa J; Schoenbaum, Geoffrey; Niv, Yael</p> <p>2018-04-01</p> <p>Phasic dopamine responses are thought to encode a prediction-error signal consistent with model-free reinforcement learning theories. However, a number of recent findings highlight the influence of model-based computations on dopamine responses, and suggest that dopamine prediction errors reflect more dimensions of an expected outcome than scalar reward value. Here, we review a selection of these recent results and discuss the implications and complications of model-based predictions for computational theories of dopamine and learning. Copyright © 2017. Published by Elsevier Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27932671','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27932671"><span>Coupling of EIT with computational lung modeling for predicting patient-specific ventilatory responses.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Roth, Christian J; Becher, Tobias; Frerichs, Inéz; Weiler, Norbert; Wall, Wolfgang A</p> <p>2017-04-01</p> <p>Providing optimal personalized mechanical ventilation for patients with acute or chronic respiratory failure is still a challenge within a clinical setting for each case anew. 
In this article, we integrate electrical impedance tomography (EIT) monitoring into a powerful patient-specific computational lung model to create an approach for personalizing protective ventilatory treatment. The underlying computational lung model is based on a single computed tomography scan and is able to predict global airflow quantities, as well as local tissue aeration and strains for any ventilation maneuver. For validation, a novel "virtual EIT" module is added to our computational lung model, allowing us to simulate EIT images based on the patient's thorax geometry and the results of our numerically predicted tissue aeration. Clinically measured EIT images are not used to calibrate the computational model. Thus, they provide an independent method to validate the computational predictions at high temporal resolution. The performance of this coupling approach has been tested in an example patient with acute respiratory distress syndrome. The method shows good agreement between computationally predicted and clinically measured airflow data and EIT images. These results imply that the proposed framework can be used for numerical prediction of patient-specific responses to certain therapeutic measures before applying them to an actual patient. In the long run, definition of patient-specific optimal ventilation protocols might be assisted by computational modeling. NEW & NOTEWORTHY In this work, we present a patient-specific computational lung model that is able to predict global and local ventilatory quantities for a given patient and any selected ventilation protocol. For the first time, such a predictive lung model is equipped with a virtual electrical impedance tomography module, allowing real-time validation of the computed results against the patient measurements.
First promising results obtained in an acute respiratory distress syndrome patient show the potential of this approach for personalized computationally guided optimization of mechanical ventilation in the future. Copyright © 2017 the American Physiological Society.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10453E..1XH','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10453E..1XH"><span>A model for prediction of color change after tooth bleaching based on CIELAB color space</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Herrera, Luis J.; Santana, Janiley; Yebra, Ana; Rivas, María José; Pulgar, Rosa; Pérez, María M.</p> <p>2017-08-01</p> <p>An experimental study aiming to develop a model based on CIELAB color space for prediction of color change after a tooth bleaching procedure is presented. Multivariate linear regression models were obtained to predict the L*, a*, b* and W* post-bleaching values using the pre-bleaching L*, a* and b* values. Moreover, univariate linear regression models were obtained to predict the variation in chroma (C*), hue angle (h°) and W*. The results demonstrated that it is possible to estimate color change when using a carbamide peroxide tooth-bleaching system.
The models obtained can be applied in the clinic to predict color change after bleaching.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3658967','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3658967"><span>Prediction models for clustered data: comparison of a random intercept and standard regression model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest is in predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated.
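Discrimination in studies like this is typically summarized by the concordance (c-) index, which for binary outcomes is equivalent to the ROC AUC. A minimal pure-Python implementation (not code from the paper) on illustrative numbers:

```python
def c_index(risks, outcomes):
    """Probability that a randomly chosen patient with the event received a
    higher predicted risk than a randomly chosen patient without it
    (ties count as 0.5)."""
    pairs = concordant = 0.0
    for ri, yi in zip(risks, outcomes):
        for rj, yj in zip(risks, outcomes):
            if yi == 1 and yj == 0:      # one event patient, one non-event
                pairs += 1
                if ri > rj:
                    concordant += 1
                elif ri == rj:
                    concordant += 0.5
    return concordant / pairs

# Four patients with predicted risks and observed outcomes (made-up values):
print(c_index([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # -> 0.75
```

A c-index of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is the scale on which the 0.66 vs. 0.69 comparison above should be read.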
Results The model developed with random effect analysis showed better discrimination than the standard approach if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the model developed with standard regression; calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27260782','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27260782"><span>Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L</p> <p>2016-08-01</p> <p>Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter, since it must accommodate the pharmacokinetics of medications or the response time of medical services.
This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. Copyright © 2016 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5299217','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5299217"><span>Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yu, Ying; Wang, Yirui; Tang, Zheng</p> <p>2017-01-01</p> <p>With the impact of global internationalization, the tourism economy has also developed rapidly. Growing interest in more advanced forecasting methods motivates innovation in this area.
In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform the tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend and then train the residual data by the dendritic neural network model and make a short-term prediction. As the results in this paper show, the SA-D model achieves considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also apply it to the data that other authors used with other models and compare the results. This comparison also showed that the SA-D model achieves good predictive performance in terms of normalized mean square error, absolute percentage error, and correlation coefficient. PMID:28246527</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28246527','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28246527"><span>Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng</p> <p>2017-01-01</p> <p>With the impact of global internationalization, the tourism economy has also developed rapidly. Growing interest in more advanced forecasting methods motivates innovation in this area. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform the tourism demand forecasting.
First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend and then train the residual data by the dendritic neural network model and make a short-term prediction. As the results in this paper show, the SA-D model achieves considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also apply it to the data that other authors used with other models and compare the results. This comparison also showed that the SA-D model achieves good predictive performance in terms of normalized mean square error, absolute percentage error, and correlation coefficient.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26140748','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26140748"><span>Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza</p> <p>2015-09-15</p> <p>The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken under consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98.
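The decompose-then-model-the-residual pattern behind hybrids such as the SA-D model above can be illustrated with a toy series. Here an ordinary least-squares line stands in for SARIMA and a seasonal-naive rule stands in for the dendritic network; both substitutions (and the series itself) are simplifications for illustration, not the authors' methods:

```python
# Toy series: linear trend plus an exactly periodic residual.
season = [3.0, -1.0, -2.0, 0.0]
n = 80
y = [2.0 * t + season[t % 4] for t in range(n)]

# Step 1: fit a linear trend by ordinary least squares (stand-in for SARIMA).
tbar = (n - 1) / 2
ybar = sum(y) / n
slope = sum((t - tbar) * (yt - ybar) for t, yt in enumerate(y)) / \
        sum((t - tbar) ** 2 for t in range(n))
intercept = ybar - slope * tbar
trend = [intercept + slope * t for t in range(n)]

# Step 2: forecast the detrended residual with a seasonal-naive rule
# (stand-in for the neural-network stage).
resid = [yt - tr for yt, tr in zip(y, trend)]
mae_trend = sum(abs(y[t] - trend[t]) for t in range(4, n)) / (n - 4)
mae_hybrid = sum(abs(y[t] - (trend[t] + resid[t - 4]))
                 for t in range(4, n)) / (n - 4)
print(mae_trend, mae_hybrid)  # the hybrid's error is much smaller
```

The trend-only forecast leaves the whole seasonal residual as error, while the hybrid captures it in the second stage; the same division of labor is what the SARIMA + dendritic-network pairing exploits.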
Overall, the results show the ability of the models to monitor ocean parameters under conditions of missing data, or when regular measurement and monitoring are impossible. Copyright © 2015 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyE...98...45X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyE...98...45X"><span>Elastic properties of graphene: A pseudo-beam model with modified internal bending moment and its application</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xia, Z. M.; Wang, C. G.; Tan, H. F.</p> <p>2018-04-01</p> <p>A pseudo-beam model with modified internal bending moment is presented to predict elastic properties of graphene, including the Young's modulus and Poisson's ratio. In order to overcome a drawback in existing molecular structural mechanics models, which only account for pure bending (constant bending moment), the presented model accounts for linear bending moments deduced from the balance equations. Based on this pseudo-beam model, an analytical prediction of the Young's modulus and Poisson's ratio of graphene is derived from the strain-energy equations by using Castigliano's second theorem. Then, the elastic properties of graphene are calculated and compared with results available in the literature, which verifies the feasibility of the pseudo-beam model. Finally, the pseudo-beam model is utilized to study the twisting wrinkling characteristics of annular graphene. Due to modifications of the internal bending moment, the wrinkling behaviors of the graphene sheet are predicted accurately.
The obtained results show that the pseudo-beam model has a good ability to predict the elastic properties of graphene accurately, especially the out-of-plane deformation behavior.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_8 --> <div id="page_9" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="161"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20924933','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20924933"><span>A simplified building airflow model for agent concentration prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jacques, David R; Smith, David A</p> <p>2010-11-01</p> <p>A simplified building airflow model is presented that can be used to predict the spread of a contaminant agent
from a chemical or biological attack. If the dominant means of agent transport throughout the building is an air-handling system operating at steady-state, a linear time-invariant (LTI) model can be constructed to predict the concentration in any room of the building as a result of either an internal or external release. While the model does not capture weather-driven and other temperature-driven effects, it is suitable for concentration predictions under average daily conditions. The model is easily constructed using information that should be accessible to a building manager, supplemented with assumptions based on building codes and standard air-handling system design practices. The results of the model are compared with a popular multi-zone model for a simple building and are demonstrated for building examples containing one or more air-handling systems. The model can be used for rapid concentration prediction to support low-cost placement strategies for chemical and biological detection sensors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27838913','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27838913"><span>Validation of the Economic and Health Outcomes Model of Type 2 Diabetes Mellitus (ECHO-T2DM).</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Willis, Michael; Johansen, Pierre; Nilsson, Andreas; Asseburg, Christian</p> <p>2017-03-01</p> <p>The Economic and Health Outcomes Model of Type 2 Diabetes Mellitus (ECHO-T2DM) was developed to address study questions pertaining to the cost-effectiveness of treatment alternatives in the care of patients with type 2 diabetes mellitus (T2DM). Naturally, the usefulness of a model is determined by the accuracy of its predictions. 
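The steady-state air-handling assumption in the building airflow abstract above is what makes the model linear time-invariant: each zone's concentration evolves as a fixed linear combination of zone concentrations. A minimal sketch of such a zonal mass balance (the volumes, flow rate, and release below are invented numbers, not values from the paper):

```python
# Two well-mixed zones coupled by a constant recirculating airflow.
# x[i] is the agent concentration in zone i; q is the exchange flow.
V = [100.0, 150.0]   # zone volumes, m^3 (illustrative)
q = 20.0             # inter-zone volumetric flow, m^3/min (illustrative)
dt = 0.01            # integration time step, min

x = [1.0, 0.0]       # unit-concentration release in zone 0
for _ in range(100_000):
    # Linear time-invariant update from the zonal mass balance:
    # V_i * dc_i/dt = q * (c_other - c_i)
    dx0 = q * (x[1] - x[0]) / V[0]
    dx1 = q * (x[0] - x[1]) / V[1]
    x = [x[0] + dt * dx0, x[1] + dt * dx1]

mass = V[0] * x[0] + V[1] * x[1]
print(x, mass)  # concentrations equalize at 0.4; total mass stays 100.0
```

With more zones this becomes a matrix update x' = A x + b u, which is what makes rapid what-if runs for sensor-placement studies cheap.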
A previous version of ECHO-T2DM was validated against actual trial outcomes and the model predictions were generally accurate. However, there have been recent upgrades to the model, which modify model predictions and necessitate an update of the validation exercises. The objectives of this study were to extend the methods available for evaluating model validity, to conduct a formal model validation of ECHO-T2DM (version 2.3.0) in accordance with the principles espoused by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making (SMDM), and secondarily to evaluate the relative accuracy of four sets of macrovascular risk equations included in ECHO-T2DM. We followed the ISPOR/SMDM guidelines on model validation, evaluating face validity, verification, cross-validation, and external validation. Model verification involved 297 'stress tests', in which specific model inputs were modified systematically to ascertain correct model implementation. Cross-validation consisted of a comparison between ECHO-T2DM predictions and those of the seminal National Institutes of Health model. In external validation, study characteristics were entered into ECHO-T2DM to replicate the clinical results of 12 studies (including 17 patient populations), and model predictions were compared to observed values using established statistical techniques as well as measures of average prediction error, separately for the four sets of macrovascular risk equations supported in ECHO-T2DM. Sub-group analyses were conducted for dependent vs. independent outcomes and for microvascular vs. macrovascular vs. mortality endpoints. All stress tests were passed. ECHO-T2DM replicated the National Institutes of Health cost-effectiveness application with numerically similar results. In external validation of ECHO-T2DM, model predictions agreed well with observed clinical outcomes. 
For all sets of macrovascular risk equations, the results were close to the intercept and slope coefficients corresponding to a perfect match, resulting in high R² and failure to reject concordance using an F test. The results were similar for sub-groups of dependent and independent validation, with some degree of under-prediction of macrovascular events. ECHO-T2DM continues to match health outcomes in clinical trials in T2DM, with prediction accuracy similar to other leading models of T2DM.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19880016537','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19880016537"><span>Entrance and exit region friction factor models for annular seal analysis. Ph.D. Thesis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Elrod, David Alan</p> <p>1988-01-01</p> <p>The Mach number definition and boundary conditions in Nelson's nominally-centered, annular gas seal analysis are revised. A method is described for determining the wall shear stress characteristics of an annular gas seal experimentally. Two friction factor models are developed for annular seal analysis; one model is based on flat-plate flow theory; the other uses empirical entrance and exit region friction factors. The friction factor predictions of the models are compared to experimental results. Each friction model is used in an annular gas seal analysis. The seal characteristics predicted by the two seal analyses are compared to experimental results and to the predictions of Nelson's analysis. The comparisons are for smooth-rotor seals with smooth and honeycomb stators. The comparisons show that the analysis which uses empirical entrance and exit region shear stress models predicts the static and stability characteristics of annular gas seals better than the other analyses.
The analyses predict direct stiffness poorly.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19537550','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19537550"><span>Comparing niche- and process-based models to reduce prediction uncertainty in species range shifts under climate change.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Morin, Xavier; Thuiller, Wilfried</p> <p>2009-05-01</p> <p>Obtaining reliable predictions of species range shifts under climate change is a crucial challenge for ecologists and stakeholders. At the continental scale, niche-based models have been widely used in the last 10 years to predict the potential impacts of climate change on species distributions all over the world, although these models do not include any mechanistic relationships. In contrast, species-specific, process-based predictions remain scarce at the continental scale. This is regrettable because to secure relevant and accurate predictions it is always desirable to compare predictions derived from different kinds of models applied independently to the same set of species and using the same raw data. Here we compare predictions of range shifts under climate change scenarios for 2100 derived from niche-based models with those of a process-based model for 15 North American boreal and temperate tree species. A general pattern emerged from our comparisons: niche-based models tend to predict a stronger level of extinction and a greater proportion of colonization than the process-based model. This result likely arises because niche-based models do not take phenotypic plasticity and local adaptation into account. Nevertheless, as the two kinds of models rely on different assumptions, their complementarity is revealed by common findings. 
Both modeling approaches highlight a major potential limitation on species tracking their climatic niche because of migration constraints and identify similar zones where species extirpation is likely. Such convergent predictions from models built on very different principles provide a useful way to offset uncertainties at the continental scale. This study shows that the use in concert of both approaches with their own caveats and advantages is crucial to obtain more robust results and that comparisons among models are needed in the near future to gain accuracy regarding predictions of range shifts under climate change.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26084541','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26084541"><span>Fuzzy association rule mining and classification for the prediction of malaria in South Korea.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Buczak, Anna L; Baugher, Benjamin; Guven, Erhan; Ramac-Thomas, Liane C; Elbert, Yevgeniy; Babin, Steven M; Lewis, Sheri H</p> <p>2015-06-18</p> <p>Malaria is the world's most prevalent vector-borne disease. Accurate prediction of malaria outbreaks may lead to public health interventions that mitigate disease morbidity and mortality. We describe an application of a method for creating prediction models utilizing Fuzzy Association Rule Mining to extract relationships between epidemiological, meteorological, climatic, and socio-economic data from Korea. These relationships are in the form of rules, from which the best set of rules is automatically chosen and forms a classifier. Two classifiers have been built and their results fused to become a malaria prediction model. 
Future malaria cases are predicted as Low, Medium or High, where these classes are defined as a total of 0-2, 3-16, and 17 or more cases, respectively, for a region in South Korea during a two-week period. Based on user recommendations, HIGH is considered an outbreak. Model accuracy is described by Positive Predictive Value (PPV), Sensitivity, and F-score for each class, computed on test data not previously used to develop the model. For predictions made 7-8 weeks in advance, model PPV and Sensitivity are 0.842 and 0.681, respectively, for the HIGH class. The F0.5 and F3 scores (which combine PPV and Sensitivity) are 0.804 and 0.694, respectively, for the HIGH class. The overall FARM results (as measured by F-scores) are significantly better than those obtained by Decision Tree, Random Forest, Support Vector Machine, and Holt-Winters methods for the HIGH class. For the Medium class, Random Forest and FARM obtain comparable results, with FARM being better at F0.5, and Random Forest obtaining a higher F3. A previously described method for creating disease prediction models has been modified and extended to build models for predicting malaria. In addition, some new input variables were used, including indicators of intervention measures. The South Korea malaria prediction models predict Low, Medium or High cases 7-8 weeks in the future.
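The F0.5 and F3 values quoted above are instances of the general F-beta score, which weights PPV (precision) against sensitivity (recall); a quick check of the reported HIGH-class numbers:

```python
def f_beta(ppv: float, sensitivity: float, beta: float) -> float:
    """General F-beta score: beta < 1 favours PPV, beta > 1 favours sensitivity."""
    b2 = beta * beta
    return (1 + b2) * ppv * sensitivity / (b2 * ppv + sensitivity)

# Reported HIGH-class PPV and sensitivity for predictions 7-8 weeks in advance
ppv, sens = 0.842, 0.681
print(round(f_beta(ppv, sens, 0.5), 3))  # → 0.804, the reported F0.5
print(round(f_beta(ppv, sens, 3.0), 3))  # → 0.694, the reported F3
```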
This paper demonstrates that our data driven approach can be used for the prediction of different diseases.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27511005','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27511005"><span>Silent Expectations: Dynamic Causal Modeling of Cortical Prediction and Attention to Sounds That Weren't.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Shtyrov, Yury; Bekinschtein, Tristan A; Henson, Richard</p> <p>2016-08-10</p> <p>There is increasing evidence that human perception is realized by a hierarchy of neural processes in which predictions sent backward from higher levels result in prediction errors that are fed forward from lower levels, to update the current model of the environment. Moreover, the precision of prediction errors is thought to be modulated by attention. Much of this evidence comes from paradigms in which a stimulus differs from that predicted by the recent history of other stimuli (generating a so-called "mismatch response"). There is less evidence from situations where a prediction is not fulfilled by any sensory input (an "omission" response). This situation arguably provides a more direct measure of "top-down" predictions in the absence of confounding "bottom-up" input. We applied Dynamic Causal Modeling of evoked electromagnetic responses recorded by EEG and MEG to an auditory paradigm in which we factorially crossed the presence versus absence of "bottom-up" stimuli with the presence versus absence of "top-down" attention. Model comparison revealed that both mismatch and omission responses were mediated by increased forward and backward connections, differing primarily in the driving input. 
In both responses, modeling results suggested that the presence of attention selectively modulated backward "prediction" connections. Our results provide new model-driven evidence of the pure top-down prediction signal posited in theories of hierarchical perception, and highlight the role of attentional precision in strengthening this prediction. Human auditory perception is thought to be realized by a network of neurons that maintain a model of and predict future stimuli. Much of the evidence for this comes from experiments where a stimulus unexpectedly differs from previous ones, which generates a well-known "mismatch response." But what happens when a stimulus is unexpectedly omitted altogether? By measuring the brain's electromagnetic activity, we show that it also generates an "omission response" that is contingent on the presence of attention. We model these responses computationally, revealing that mismatch and omission responses only differ in the location of inputs into the same underlying neuronal network. In both cases, we show that attention selectively strengthens the brain's prediction of the future. Copyright © 2016 Chennu et al.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29937531','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29937531"><span>Four Major South Korea's Rivers Using Deep Learning Models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Sangmok; Lee, Donghyun</p> <p>2018-06-24</p> <p>Harmful algal blooms are an annual phenomenon that cause environmental damage, economic losses, and disease outbreaks. A fundamental solution to this problem is still lacking, thus, the best option for counteracting the effects of algal blooms is to improve advance warnings (predictions). 
However, existing physical prediction models have difficulty setting a clear coefficient indicating the relationship between each factor when predicting algal blooms, and many variable data sources are required for the analysis. These limitations are accompanied by high time and economic costs. Meanwhile, artificial intelligence and deep learning methods have become increasingly common in scientific research; attempts to apply the long short-term memory (LSTM) model to environmental research problems are increasing because the LSTM model exhibits good performance for time-series data prediction. However, few studies have applied deep learning models or LSTM to algal bloom prediction, especially in South Korea, where algal blooms occur annually. Therefore, we employed the LSTM model for algal bloom prediction in four major rivers of South Korea. We conducted short-term (one week) predictions by employing regression analysis and deep learning techniques on a newly constructed water quality and quantity dataset drawn from 16 dammed pools on the rivers. Three deep learning models (multilayer perceptron, MLP; recurrent neural network, RNN; and long short-term memory, LSTM) were used to predict chlorophyll-a, a recognized proxy for algal activity. The results were compared to those from OLS (ordinary least squares) regression analysis and actual data based on the root mean square error (RMSE). The LSTM model showed the highest prediction rate for harmful algal blooms, and all deep learning models outperformed the OLS regression analysis.
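The model ranking above rests on RMSE against the observed series; a minimal numpy sketch, with hypothetical weekly chlorophyll-a values standing in for the real dataset:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, used to rank the competing models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical weekly chlorophyll-a observations and two model predictions
obs = [12.0, 15.5, 30.2, 22.1, 18.4]
lstm = [11.5, 16.0, 28.9, 23.0, 17.8]
ols = [14.2, 13.1, 24.5, 25.9, 21.0]
print(rmse(obs, lstm) < rmse(obs, ols))  # → True: the lower-RMSE model wins
```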
Our results reveal the potential for predicting algal blooms using LSTM and deep learning.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/403630-predicting-overload-affected-fatigue-crack-growth-steels','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/403630-predicting-overload-affected-fatigue-crack-growth-steels"><span>Predicting overload-affected fatigue crack growth in steels</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Skorupa, M.; Skorupa, A.; Ladecki, B.</p> <p>1996-12-01</p> <p>The ability of semi-empirical crack closure models to predict the effect of overloads on fatigue crack growth in low-alloy steels has been investigated. For this purpose, the CORPUS model developed for aircraft metals and spectra has been checked first through comparisons between the simulated and observed results for a low-alloy steel. The CORPUS predictions of crack growth under several types of simple load histories containing overloads appeared generally unconservative, which prompted the authors to formulate a new model, more suitable for steels. With the latter approach, the assumed evolution of the crack opening stress during the delayed retardation stage has been based on experimental results reported for various steels. For all the load sequences considered, the predictions from the proposed model appeared to be by far more accurate than those from CORPUS. Based on the analysis results, the capability of semi-empirical prediction concepts to cover experimentally observed trends that have been reported for sequences with overloads is discussed.
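The crack-closure idea behind such models can be sketched with an effective stress-intensity growth law: an overload raises the crack-opening level K_op, which shrinks the effective range and retards growth. The constants and values below are hypothetical, not the CORPUS or proposed model's calibration:

```python
# Crack-closure-style growth law: da/dN = C * (dK_eff)^m, dK_eff = K_max - K_op.
# C and m are hypothetical Paris-type constants for a low-alloy steel.
C, m = 1.0e-11, 3.0

def growth_rate(k_max: float, k_op: float) -> float:
    dk_eff = max(k_max - k_op, 0.0)  # no growth while the crack stays closed
    return C * dk_eff ** m

# An overload raises K_op, shrinking dK_eff and retarding growth until the
# crack grows through the overload-affected zone.
baseline = growth_rate(k_max=30.0, k_op=10.0)  # dK_eff = 20
retarded = growth_rate(k_max=30.0, k_op=18.0)  # higher K_op after the overload
print(retarded < baseline)  # → True: delayed retardation in this sketch
```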
Finally, possibilities of improving the model performance are considered.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016APS..MARE35010F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016APS..MARE35010F"><span>Predicting community composition from pairwise interactions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Friedman, Jonathan; Higgins, Logan; Gore, Jeff</p> <p></p> <p>The ability to predict the structure of complex, multispecies communities is crucial for understanding the impact of species extinction and invasion on natural communities, as well as for engineering novel, synthetic communities. Communities are often modeled using phenomenological models, such as the classical generalized Lotka-Volterra (gLV) model. While a lot of our intuition comes from such models, their predictive power has rarely been tested experimentally. To directly assess the predictive power of this approach, we constructed synthetic communities comprised of up to 8 soil bacteria. We measured the outcome of competition between all species pairs, and used these measurements to predict the composition of communities composed of more than 2 species. The pairwise competitions resulted in a diverse set of outcomes, including coexistence, exclusion, and bistability, and displayed evidence for both interference and facilitation. Most pair outcomes could be captured by the gLV framework, and the composition of multispecies communities could be predicted for communities composed solely of such pairs.
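The gLV prediction step amounts to integrating the community dynamics forward from pairwise-fitted parameters and reading off which species persist. A toy 3-species sketch with hypothetical growth rates and interaction coefficients (not the paper's fitted values):

```python
import numpy as np

# Generalized Lotka-Volterra (gLV): dx_i/dt = x_i * (r_i + sum_j A_ij * x_j)
# r and A below are hypothetical competitive parameters for 3 species.
r = np.array([1.0, 0.8, 1.2])
A = np.array([[-1.0, -0.5, -0.3],
              [-0.4, -1.0, -0.6],
              [-0.2, -0.7, -1.0]])

def glv_step(x, dt=0.01):
    """One forward-Euler step of the gLV dynamics."""
    return x + dt * x * (r + A @ x)

x = np.array([0.1, 0.1, 0.1])
for _ in range(20000):  # integrate to t = 200, near equilibrium
    x = glv_step(x)
# The equilibrium composition is the prediction: here species 2 is competitively
# excluded (abundance ~0) while species 1 and 3 coexist.
```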
Our results demonstrate the predictive ability and utility of simple phenomenology, which enables accurate predictions in the absence of mechanistic details.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017E%26ES..100a2146L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017E%26ES..100a2146L"><span>Passenger Flow Forecasting Research for Airport Terminal Based on SARIMA Time Series Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Ziyu; Bi, Jun; Li, Zhiyin</p> <p>2017-12-01</p> <p>Based on the data of practical operating of Kunming Changshui International Airport during2016, this paper proposes Seasonal Autoregressive Integrated Moving Average (SARIMA) model to predict the passenger flow. This article not only considers the non-stationary and autocorrelation of the sequence, but also considers the daily periodicity of the sequence. The prediction results can accurately describe the change trend of airport passenger flow and provide scientific decision support for the optimal allocation of airport resources and optimization of departure process. The result shows that this model is applicable to the short-term prediction of airport terminal departure passenger traffic and the average error ranges from 1% to 3%. 
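The seasonal part of SARIMA handles the daily periodicity noted above through lag-24 differencing of hourly data; a toy numpy illustration with hypothetical passenger counts (not the Changshui data):

```python
import numpy as np

# Hypothetical hourly passenger counts with a daily (period-24) cycle plus trend
rng = np.random.default_rng(0)
t = np.arange(24 * 14)  # two weeks of hourly observations
series = 500 + 0.5 * t + 200 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 10, t.size)

# Seasonal differencing (lag 24), the seasonal "I" in SARIMA, strips the daily
# cycle; ordinary first differencing then removes the remaining linear trend.
seasonal_diff = series[24:] - series[:-24]
stationary = np.diff(seasonal_diff)

print(np.std(stationary) < np.std(series))  # → True: far less variation remains
```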
The difference between the predicted and the true values of passenger traffic flow is quite small, which indicates that the model has fairly good passenger traffic flow prediction ability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20150006028','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20150006028"><span>Assessment of Geometry and In-Flow Effects on Contra-Rotating Open Rotor Broadband Noise Predictions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Zawodny, Nikolas S.; Nark, Douglas M.; Boyd, D. Douglas, Jr.</p> <p>2015-01-01</p> <p>Application of previously formulated semi-analytical models for the prediction of broadband noise due to turbulent rotor wake interactions and rotor blade trailing edges is performed on the historical baseline F31/A31 contra-rotating open rotor configuration. Simplified two-dimensional blade element analysis is performed on cambered NACA 4-digit airfoil profiles, which are meant to serve as substitutes for the actual rotor blade sectional geometries. Rotor in-flow effects such as induced axial and tangential velocities are incorporated into the noise prediction models based on supporting computational fluid dynamics (CFD) results and simplified in-flow velocity models. Emphasis is placed on the development of simplified rotor in-flow models for the purpose of performing accurate noise predictions independent of CFD information. 
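The NACA 4-digit substitute sections mentioned above have closed-form camber and thickness distributions; a sketch for a cambered section such as NACA 4412 (4% camber at 40% chord, 12% thickness), with x the chordwise fraction:

```python
import numpy as np

def naca4_camber(x, m=0.04, p=0.4):
    """Mean camber line of a NACA 4-digit airfoil (max camber m at chord station p)."""
    x = np.asarray(x, float)
    fore = m / p**2 * (2 * p * x - x**2)
    aft = m / (1 - p)**2 * ((1 - 2 * p) + 2 * p * x - x**2)
    return np.where(x < p, fore, aft)

def naca4_thickness(x, t=0.12):
    """Half-thickness distribution for maximum thickness ratio t."""
    x = np.asarray(x, float)
    return 5 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                    + 0.2843 * x**3 - 0.1015 * x**4)

x = np.linspace(0, 1, 101)
yc = naca4_camber(x)
# Maximum camber of 4% chord occurs at the 40% chord station for a 44xx section
```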
The broadband predictions are found to compare favorably with experimental acoustic results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930001568','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930001568"><span>Improve SSME power balance model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Karr, Gerald R.</p> <p>1992-01-01</p> <p>Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. 
The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..18.7982S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..18.7982S"><span>Opportunities of probabilistic flood loss models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno</p> <p>2016-04-01</p> <p>Oftentimes, traditional uni-variate damage models, such as depth-damage curves, fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of the model results. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage-damage functions. For model evaluation we use empirical damage data from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample by catchment, which allows us to study model performance in a spatial transfer context.
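Typical point and interval metrics for such a split sample evaluation (mean bias, mean absolute error, and the hit rate of a 90% predictive interval) can be sketched with numpy; the relative damage values and posterior samples below are hypothetical:

```python
import numpy as np

def evaluation_metrics(obs, pred_samples):
    """Mean bias (MBE), mean absolute error (MAE), and hit rate of the
    5%-95% predictive interval for probabilistic damage predictions."""
    obs = np.asarray(obs, float)
    point = pred_samples.mean(axis=1)  # point prediction per building
    lo, hi = np.quantile(pred_samples, [0.05, 0.95], axis=1)
    mbe = float(np.mean(point - obs))
    mae = float(np.mean(np.abs(point - obs)))
    hit_rate = float(np.mean((obs >= lo) & (obs <= hi)))
    return mbe, mae, hit_rate

# Hypothetical relative damage (0-1) for 5 buildings, 1000 predictive samples each
rng = np.random.default_rng(1)
obs = np.array([0.10, 0.25, 0.40, 0.05, 0.60])
samples = np.clip(rng.normal(obs[:, None], 0.08, size=(5, 1000)), 0, 1)
mbe, mae, hr = evaluation_metrics(obs, samples)
```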
Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias) and precision (mean absolute error), as well as sharpness and reliability of the predictions, the latter represented by the proportion of observations that fall within the 5%-95% predictive interval. The comparison of the uni-variable stage-damage function and the multi-variable model approach emphasises the importance of quantifying predictive uncertainty. With each explanatory variable, the multi-variable model reveals an additional source of uncertainty. However, the predictive performance in terms of bias (MBE), precision (MAE) and reliability (HR) is clearly improved in comparison to the uni-variable stage-damage function. Overall, probabilistic models provide quantitative information about prediction uncertainty, which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015ACP....15.3445H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015ACP....15.3445H"><span>Long-term particulate matter modeling for health effect studies in California - Part 1: Model performance on temporal and spatial variations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hu, J.; Zhang, H.; Ying, Q.; Chen, S.-H.; Vandenberghe, F.; Kleeman, M. J.</p> <p>2015-03-01</p> <p>For the first time, a ~ decadal (9 years from 2000 to 2008) air quality model simulation with 4 km horizontal resolution over populated regions and daily time resolution has been conducted for California to provide air quality data for health effect studies. Model predictions are compared to measurements to evaluate the accuracy of the simulation with an emphasis on spatial and temporal variations that could be used in epidemiology studies. 
Better model performance is found at longer averaging times, suggesting that model results with averaging times ≥ 1 month should be the first to be considered in epidemiological studies. The UCD/CIT model predicts spatial and temporal variations in the concentrations of O3, PM2.5, elemental carbon (EC), organic carbon (OC), nitrate, and ammonium that meet standard modeling performance criteria when compared to monthly-averaged measurements. Predicted sulfate concentrations do not meet target performance metrics due to missing sulfur sources in the emissions. Predicted seasonal and annual variations of PM2.5, EC, OC, nitrate, and ammonium have mean fractional biases that meet the model performance criteria in 95, 100, 71, 73, and 92% of the simulated months, respectively. The base data set provides an improvement for predicted population exposure to PM concentrations in California compared to exposures estimated by central site monitors operated 1 day out of every 3 days at a few urban locations. Uncertainties in the model predictions arise from several issues. Incomplete understanding of secondary organic aerosol formation mechanisms leads to OC bias in the model results in summertime but does not affect OC predictions in winter when concentrations are typically highest. The CO and NO (species dominated by mobile emissions) results reveal temporal and spatial uncertainties associated with the mobile emissions generated by the EMFAC 2007 model. The WRF model tends to overpredict wind speed during stagnation events, leading to underpredictions of high PM concentrations, usually in winter months. The WRF model also generally underpredicts relative humidity, resulting in less particulate nitrate formation, especially during winter months. These limitations must be recognized when using data in health studies. 
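The mean fractional bias used as the performance measure above has a simple closed form; a sketch with hypothetical monthly-averaged PM2.5 values (not the study's data):

```python
import numpy as np

def mean_fractional_bias(model, obs):
    """Mean fractional bias (MFB), a standard air-quality performance metric;
    values within roughly +/-60% are a common PM performance criterion."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(2.0 * np.mean((model - obs) / (model + obs)))

# Hypothetical monthly-averaged PM2.5 (ug/m3): model predictions vs. monitors
pred = [12.0, 18.5, 25.0, 9.5]
obs = [14.0, 17.0, 28.0, 10.0]
mfb = mean_fractional_bias(pred, obs)  # negative values indicate underprediction
```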
All model results included in the current manuscript can be downloaded free of charge at <a href="http://faculty.engineering.ucdavis.edu/kleeman/" target="_blank">http://faculty.engineering.ucdavis.edu/kleeman/</a>.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10168E..3PA','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10168E..3PA"><span>Prediction-error variance in Bayesian model updating: a comparative study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Asadollahi, Parisa; Li, Jian; Huang, Yong</p> <p>2017-04-01</p> <p>In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for robust updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for handling the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example.
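The third strategy, treating the prediction error variance itself as an uncertain parameter, can be illustrated with a toy random-walk Metropolis sampler (a simpler stand-in for the TMCMC algorithm used in the study); the data, model, and priors below are all hypothetical:

```python
import numpy as np

# Toy Bayesian updating: jointly infer a stiffness-like parameter k and the
# prediction-error variance s2 from noisy "measurements" (hypothetical values).
rng = np.random.default_rng(0)
y = rng.normal(2.0, 0.3, size=10)  # measured responses; true k = 2.0

def log_post(k, log_s2):
    """Gaussian likelihood with unknown variance plus weak Gaussian priors."""
    s2 = np.exp(log_s2)
    return (-0.5 * np.sum((y - k) ** 2) / s2
            - 0.5 * len(y) * np.log(s2)
            - 0.5 * (k / 10) ** 2
            - 0.5 * (log_s2 / 10) ** 2)

# Random-walk Metropolis over (k, log s2)
state = np.array([0.0, 0.0])
lp = log_post(*state)
chain = []
for _ in range(20000):
    prop = state + rng.normal(0.0, 0.1, size=2)
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        state, lp = prop, lp_prop
    chain.append(state.copy())
k_samples = np.array(chain)[5000:, 0]  # discard burn-in
# The posterior mean of k should sit near the sample mean of the measurements.
```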
Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
We extended a previously established, mechanically coupled, reaction-diffusion model for predicting tumor response initialized with patient-specific diffusion weighted MRI (DW-MRI) data by including the effects of chemotherapy drug delivery, which is estimated using dynamic contrast-enhanced (DCE-) MRI data. The extended, drug incorporated, model is initialized using patient-specific DW-MRI and DCE-MRI data. Data sets from five breast cancer patients were used—obtained before, after one cycle, and at mid-point of neoadjuvant chemotherapy. The DCE-MRI data was used to estimate spatiotemporal variations in tumor perfusion with the extended Kety–Tofts model. The physiological parameters derived from DCE-MRI were used to model changes in delivery of therapy drugs within the tumor for incorporation in the extended model. We simulated the original model and the extended model in both 2D and 3D and compare the results for this five-patient cohort. Preliminary results show reductions in the error of model predicted tumor cellularity and size compared to the experimentally-measured results for the third MRI scan when therapy was incorporated. Comparing the two models for agreement between the predicted total cellularity and the calculated total cellularity (from the DW-MRI data) reveals an increased concordance correlation coefficient from 0.81 to 0.98 for the 2D analysis and 0.85 to 0.99 for the 3D analysis (p  <  0.01 for each) when the extended model was used in place of the original model. 
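The concordance correlation coefficient used above to compare predicted and measured total cellularity is Lin's CCC, which penalizes both scatter and systematic offset; a sketch with hypothetical cellularity values (not the patient data):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement sets."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()  # population (biased) variances
    cov = ((x - mx) * (y - my)).mean()
    return float(2 * cov / (vx + vy + (mx - my) ** 2))

# Hypothetical model-predicted vs. DW-MRI-derived total cellularity (arb. units)
pred = np.array([1.9, 2.1, 3.0, 4.2, 5.1])
meas = np.array([2.0, 2.2, 2.9, 4.0, 5.0])
ccc = concordance_ccc(pred, meas)  # values near 1 indicate strong agreement
```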
This study demonstrates the plausibility of using DCE-MRI data as a means to estimate drug delivery on a patient-specific basis in predictive models and represents a step toward the goal of achieving individualized prediction of tumor response to therapy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5555569','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5555569"><span>Short-term prediction of solar energy in Saudi Arabia using automated-design fuzzy logic systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2017-01-01</p> <p>Solar energy is considered one of the main sources for renewable energy in the near future. However, solar energy and other renewable energy sources have a drawback related to the difficulty in predicting their availability in the near future. This problem affects optimal exploitation of solar energy, especially in connection with other resources. Therefore, reliable solar energy prediction models are essential to solar energy management and economics. This paper presents work aimed at designing reliable models to predict the global horizontal irradiance (GHI) for the next day in 8 stations in Saudi Arabia. The designed models are based on computational intelligence methods of automated-design fuzzy logic systems. The fuzzy logic systems are designed and optimized with two models using fuzzy c-means clustering (FCM) and simulated annealing (SA) algorithms. The first model uses FCM based on the subtractive clustering algorithm to automatically design the predictor fuzzy rules from data. The second model uses FCM followed by a simulated annealing algorithm to enhance the prediction accuracy of the fuzzy logic system.
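The fuzzy c-means step alternates soft membership updates with membership-weighted center updates; a minimal 1-D sketch with fuzzifier m = 2 on hypothetical irradiance-like features (deliberately crude initialization, not the paper's pipeline):

```python
import numpy as np

# Minimal fuzzy c-means (FCM) sketch; data and initial centers are illustrative.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(200, 10, 50), rng.normal(800, 10, 50)])[:, None]

m = 2.0                                    # fuzzifier
centers = np.array([[0.0], [1000.0]])      # deliberately crude initial guesses
for _ in range(100):
    d = np.abs(data - centers.T) + 1e-12   # (n, c) point-to-center distances
    # Soft membership u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    # Centers move to membership-weighted means of the data
    centers = ((u ** m).T @ data) / np.sum(u ** m, axis=0)[:, None]
# The centers converge near the two modes of the data (about 200 and 800)
```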
The objective of the predictor is to accurately predict next-day global horizontal irradiance (GHI) using previous-day meteorological and solar radiation observations. The proposed models use observations of 10 variables of measured meteorological and solar radiation data to build the model. The experimentation and prediction results are detailed; the root mean square error of the prediction was approximately 88% for the second model tuned by simulated annealing, compared to 79.75% accuracy using the first model. These results demonstrate good modeling accuracy for the second model, even though the training and testing of the proposed models were carried out using spatially and temporally independent data. PMID:28806754</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28806754','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28806754"><span>Short-term prediction of solar energy in Saudi Arabia using automated-design fuzzy logic systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Almaraashi, Majid</p> <p>2017-01-01</p> <p>Solar energy is considered one of the main sources of renewable energy in the near future. However, solar energy and other renewable energy sources share a drawback: the difficulty of predicting their availability in the near future. This problem affects optimal exploitation of solar energy, especially in connection with other resources. Therefore, reliable solar energy prediction models are essential to solar energy management and economics. This paper presents work aimed at designing reliable models to predict the global horizontal irradiance (GHI) for the next day at 8 stations in Saudi Arabia. The designed models are based on computational intelligence methods of automated-design fuzzy logic systems.
The fuzzy logic systems are designed and optimized with two models using the fuzzy c-means clustering (FCM) and simulated annealing (SA) algorithms. The first model uses FCM based on the subtractive clustering algorithm to automatically design the predictor fuzzy rules from data. The second model uses FCM followed by a simulated annealing algorithm to enhance the prediction accuracy of the fuzzy logic system. The objective of the predictor is to accurately predict next-day global horizontal irradiance (GHI) using previous-day meteorological and solar radiation observations. The proposed models use observations of 10 variables of measured meteorological and solar radiation data to build the model. The experimentation and prediction results are detailed; the root mean square error of the prediction was approximately 88% for the second model tuned by simulated annealing, compared to 79.75% accuracy using the first model. These results demonstrate good modeling accuracy for the second model, even though the training and testing of the proposed models were carried out using spatially and temporally independent data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21553177','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21553177"><span>The prediction of intelligence in preschool children using alternative models to regression.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Finch, W Holmes; Chang, Mei; Davis, Andrew S; Holden, Jocelyn E; Rothlisberg, Barbara A; McIntosh, David E</p> <p>2011-12-01</p> <p>Statistical prediction of an outcome variable using multiple independent variables is a common practice in the social and behavioral sciences.
For example, neuropsychologists are sometimes called upon to provide predictions of preinjury cognitive functioning for individuals who have suffered a traumatic brain injury. Typically, these predictions are made using standard multiple linear regression models with several demographic variables (e.g., gender, ethnicity, education level) as predictors. Prior research has shown conflicting evidence regarding the ability of such models to provide accurate predictions of outcome variables such as full-scale intelligence (FSIQ) test scores. The present study had two goals: (1) to demonstrate the utility of a set of alternative prediction methods that have been applied extensively in the natural sciences and business but have not been frequently explored in the social sciences and (2) to develop models that can be used to predict premorbid cognitive functioning in preschool children. Predictions of Stanford-Binet 5 FSIQ scores for preschool-aged children are used to compare the performance of a multiple regression model with several of these alternative methods. Results demonstrate that classification and regression trees provided more accurate predictions of FSIQ scores than did the more traditional regression approach.
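As a toy illustration of why a regression tree can outperform linear regression when the predictor-outcome relationship is threshold-like, the sketch below fits a one-split tree (a stump) and an ordinary least-squares line to step-shaped data; all data and names here are illustrative, not the study's:

```python
# Compare a one-split regression tree (stump) with ordinary least squares on
# step-shaped data, where the tree's piecewise-constant fit has lower error.
def fit_stump(xs, ys):
    # choose the split minimizing total within-half squared error
    best = None
    for s in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < s]
        right = [y for x, y in zip(xs, ys) if x >= s]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, s, ml, mr)
    return best  # (sse, split, left_mean, right_mean)

def fit_line(xs, ys):
    # closed-form simple linear regression y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return sse, a, b
```

On step-shaped data the stump attains zero squared error while the straight line cannot, mirroring the kind of nonlinearity that favors classification and regression trees.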
Implications of these results are discussed.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_9 --> <div id="page_10" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="181"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MMTA...47.3533T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MMTA...47.3533T"><span>Process Optimization of Dual-Laser Beam Welding of Advanced Al-Li Alloys Through Hot Cracking Susceptibility Modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tian, Yingtao; Robson, Joseph D.; Riekehr, Stefan; Kashaev, Nikolai; Wang, Li; Lowe, Tristan; Karanika, Alexandra</p> <p>2016-07-01</p> <p>Laser welding of advanced Al-Li alloys has been developed to meet the increasing
demand for light-weight and high-strength aerospace structures. However, welding of high-strength Al-Li alloys can be problematic due to the tendency for hot cracking. Finding suitable welding parameters and filler material for this combination currently requires extensive and costly trial-and-error experimentation. The present work describes a novel coupled model to predict hot crack susceptibility (HCS) in Al-Li welds. Such a model can be used to shortcut the weld development process. The coupled model combines finite element process simulation with a two-level HCS model. The finite element process model predicts thermal field data for the subsequent HCS prediction. The model can be used to predict the influences of filler wire composition and welding parameters on HCS. The modeling results have been validated by comparing predictions with results from fully instrumented laser welds performed under a range of process parameters and analyzed using high-resolution X-ray tomography to identify weld defects. It is shown that the model is capable of accurately predicting the thermal field around the weld and the trend of HCS as a function of process parameters.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28424293','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28424293"><span>Prediction suppression and surprise enhancement in monkey inferotemporal cortex.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ramachandran, Suchitra; Meyer, Travis; Olson, Carl R</p> <p>2017-07-01</p> <p>Exposing monkeys, over the course of days and weeks, to pairs of images presented in fixed sequence, so that each leading image becomes a predictor for the corresponding trailing image, affects neuronal visual responsiveness in area TE.
At the end of the training period, neurons respond relatively weakly to a trailing image when it appears in a trained sequence and, thus, confirms prediction, whereas they respond relatively strongly to the same image when it appears in an untrained sequence and, thus, violates prediction. This effect could arise from prediction suppression (reduced firing in response to the occurrence of a probable event) or surprise enhancement (elevated firing in response to the omission of a probable event). To identify its cause, we compared firing under the prediction-confirming and prediction-violating conditions to firing under a prediction-neutral condition. The results provide strong evidence for prediction suppression and limited evidence for surprise enhancement. NEW & NOTEWORTHY In predictive coding models of the visual system, neurons carry signed prediction error signals. We show here that monkey inferotemporal neurons exhibit prediction-modulated firing, as posited by these models, but that the signal is unsigned. The response to a prediction-confirming image is suppressed, and the response to a prediction-violating image may be enhanced. These results are better explained by a model in which the visual system emphasizes unpredicted events than by a predictive coding model. 
Copyright © 2017 the American Physiological Society.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21233046','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21233046"><span>Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ding, Jinliang; Chai, Tianyou; Wang, Hong</p> <p>2011-03-01</p> <p>This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized.
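The minimum-entropy criterion above can be illustrated with a simple histogram estimate of the modeling-error entropy; a toy sketch (the paper's estimator is more elaborate), in which a candidate parameter whose errors concentrate near zero scores lower than one with scattered errors:

```python
import math

# Histogram-based Shannon entropy of a sample of modeling errors.
# Sharply peaked error distributions give low entropy, so minimizing this
# over candidate parameters favors concentrated (near-zero) error PDFs.
def error_entropy(errors, bins=10, lo=-1.0, hi=1.0):
    width = (hi - lo) / bins
    counts = [0] * bins
    for e in errors:
        i = min(bins - 1, max(0, int((e - lo) / width)))  # clamp to range
        counts[i] += 1
    n = len(errors)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)
```

A parameter search would evaluate `error_entropy` on each candidate's residuals and keep the minimizer.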
The experimental results using the real plant data and the comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28588533','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28588533"><span>A Probabilistic Model of Meter Perception: Simulating Enculturation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>van der Weij, Bastiaan; Pearce, Marcus T; Honing, Henkjan</p> <p>2017-01-01</p> <p>Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. 
From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29243988','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29243988"><span>Prostate Cancer Probability Prediction By Machine Learning Technique.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena</p> <p>2017-11-26</p> <p>The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. To improve the survival probability of prostate cancer patients, it is essential to build suitable prediction models of prostate cancer. A relevant prediction of prostate cancer makes it easier to devise a suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models. Therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed.
It was concluded that machine learning techniques could be used for relevant prediction of prostate cancer.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ThApC.tmp..152W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ThApC.tmp..152W"><span>Evaluation of global climate model on performances of precipitation simulation and prediction in the Huaihe River basin</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wu, Yenan; Zhong, Ping-an; Xu, Bin; Zhu, Feilin; Fu, Jisi</p> <p>2017-06-01</p> <p>Using climate models with high performance to predict future climate changes can increase the reliability of results. In this paper, six global climate models selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) under the Representative Concentration Pathway (RCP) 4.5 scenario were compared to measured data during the baseline period (1960-2000) to evaluate their precipitation simulation performance. Since the results of single climate models are often biased and highly uncertain, we examine the back propagation (BP) neural network and the arithmetic mean method for assembling the precipitation of multiple models. The delta method was used to calibrate the results of single models and of the multimodel ensemble assembled by the arithmetic mean method (MME-AM) during the validation period (2001-2010) and the predicting period (2011-2100). We then use the single models and multimodel ensembles to predict the future precipitation process and spatial distribution. The results show that the BNU-ESM model has the highest simulation skill among all the single models. The multimodel ensemble assembled by the BP neural network (MME-BP) has good simulation performance for the annual average precipitation process, and the deterministic coefficient during the validation period is 0.814.
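The arithmetic-mean ensemble (MME-AM) and the deterministic coefficient (a Nash-Sutcliffe-style skill score) quoted above are straightforward to compute; a minimal sketch with illustrative data:

```python
# Arithmetic-mean multimodel ensemble (MME-AM) plus the deterministic
# coefficient DC = 1 - SSE/SST used to score a simulated vs. observed series.
def ensemble_mean(model_series):
    # model_series: list of equal-length series, one per climate model
    return [sum(vals) / len(vals) for vals in zip(*model_series)]

def deterministic_coefficient(sim, obs):
    mean_obs = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - sse / sst
```

A DC of 1 indicates a perfect match, 0 means the simulation is no better than the observed mean, and negative values mean it is worse.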
The simulation capability for the spatial distribution of precipitation ranks as: calibrated MME-AM > MME-BP > calibrated BNU-ESM. The future precipitation predicted by all models tends to increase as the time period increases. The seasons, ordered by average increase amplitude, are: winter > spring > summer > autumn. These findings can provide useful information for decision makers to make climate-related disaster mitigation plans.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29238832','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29238832"><span>Dynamic Simulation of Human Gait Model With Predictive Capability.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sun, Jinming; Wu, Shaoli; Voglewede, Philip A</p> <p>2018-03-01</p> <p>In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively the classical feedback control that acts on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compares the predicted output to the reference, and optimizes the control input so that the predicted error is minimal.
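The receding-horizon idea behind MPC can be sketched for a toy scalar plant: predict with an internal model over a short horizon, score candidate inputs against the reference, apply the best input, and re-plan at the next step. This is a deliberately simplified illustration, not the paper's nine-DOF gait model:

```python
# Toy model predictive control for a scalar plant x' = a*x + b*u: at each step,
# search candidate inputs, pick the one minimizing predicted tracking error
# over a short horizon, apply it, and re-plan (receding horizon).
def mpc_step(x, ref, a=0.9, b=0.5, horizon=3, candidates=None):
    if candidates is None:
        candidates = [i / 10 for i in range(-20, 21)]  # u in [-2.0, 2.0]
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        xp, cost = x, 0.0
        for _ in range(horizon):
            xp = a * xp + b * u        # internal-model prediction
            cost += (xp - ref) ** 2    # predicted tracking error
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

def simulate(x0, ref, steps=30):
    x = x0
    for _ in range(steps):
        x = 0.9 * x + 0.5 * mpc_step(x, ref)  # plant evolves under chosen input
    return x
```

The grid search over candidate inputs stands in for the optimizer; a practical MPC would instead solve a constrained quadratic program at each step.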
To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MMTB...47.2302Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MMTB...47.2302Y"><span>Critical Evaluation of Prediction Models for Phosphorus Partition between CaO-based Slags and Iron-based Melts during Dephosphorization Processes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Xue-Min; Li, Jin-Yan; Chai, Guo-Ming; Duan, Dong-Ping; Zhang, Jian</p> <p>2016-08-01</p> <p>According to the experimental results of hot metal dephosphorization by CaO-based slags at a commercial-scale hot metal pretreatment station, 16 models of the equilibrium quotient k_P or phosphorus partition L_P between CaO-based slags and iron-based melts collected from the literature have been evaluated. The collected 16 models for predicting the equilibrium quotient k_P can be transferred to predict the phosphorus partition L_P. The results predicted by the collected 16 models cannot themselves serve as evaluation criteria because of the various forms or definitions of k_P and L_P. Thus, the measured phosphorus content [pct P] in the hot metal bath at the end point of the dephosphorization pretreatment process is used as the fixed criterion for evaluating the collected 16 models.
The collected 16 models can be described as linear functions of the form y = c0 + c1 x, in which the independent variable x represents the chemical composition of the slags, the intercept c0 (including the constant term) captures the temperature effect and other unmentioned or acquiescent thermodynamic factors, and the slope c1 is regressed from the experimental results of k_P or L_P. Thus, a general approach to developing a thermodynamic model for predicting the equilibrium quotient k_P, the phosphorus partition L_P, or [pct P] in iron-based melts during the dephosphorization process is proposed by revising the constant term in the intercept c0 for the summarized 15 models, Suito's model (M3) excepted. The models among the collected 16 with the best revising flexibility have been selected and recommended. Compared with the predictions of the revised 15 models and Suito's model (M3), the developed IMCT-L_P model coupled with the dephosphorization mechanism proposed by the present authors can accurately predict the phosphorus partition L_P, with the lowest mean deviation δ_{L_P} of log L_P of 2.33, as well as [pct P] in the hot metal bath, with the smallest mean deviation δ_{[pct P]} of [pct P] of 12.31.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23319711','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23319711"><span>Accuracy of the actuator disc-RANS approach for predicting the performance and wake of tidal turbines.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Batten, W M J; Harrison, M E; Bahaj, A S</p> <p>2013-02-28</p> <p>The actuator disc-RANS model has been widely used in wind and tidal energy to predict the wake of a horizontal axis turbine.
The model is appropriate where large-scale effects of the turbine on a flow are of interest, for example, when considering environmental impacts, or arrays of devices. The accuracy of the model for modelling the wake of tidal stream turbines has not been demonstrated, and flow predictions presented in the literature for similar modelled scenarios vary significantly. This paper compares the results of the actuator disc-RANS model, where the turbine forces have been derived using a blade-element approach, to experimental data measured in the wake of a scaled turbine. It also compares the results with those of a simpler uniform actuator disc model. The comparisons show that the model is accurate and can predict up to 94 per cent of the variation in the experimental velocity data measured on the centreline of the wake, therefore demonstrating that the actuator disc-RANS model is an accurate approach for modelling a turbine wake, and a conservative approach to predict performance and loads. It can therefore be applied to similar scenarios with confidence.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..280a2041S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..280a2041S"><span>Development of Multi-Layered Floating Floor for Cabin Noise Reduction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Song, Jee-Hun; Hong, Suk-Yoon; Kwon, Hyun-Wung</p> <p>2017-12-01</p> <p>Recently, regulations pertaining to the noise and vibration environment of ship cabins have been strengthened. In this paper, a numerical model is developed for multi-layered floating floor to predict the structure-borne noise in ship cabins. The theoretical model consists of multi-panel structures lined with high-density mineral wool. 
The predicted results for structure-borne noise when the multi-layered floating floor is used are compared to measurements made on a mock-up. The comparison of the predicted and experimental results shows that the developed model could be an effective tool for predicting structure-borne noise in ship cabins.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5359075','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5359075"><span>SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.</p> <p>2017-01-01</p> <p>Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recordings of brain activity. Even though it is widely accepted that statistical characteristics of the iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to the subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy.
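The headline metrics for such an alarm-based predictor, sensitivity and false positives per day, reduce to simple ratios over the monitoring record; a minimal sketch with illustrative counts:

```python
# Sensitivity and false-positive rate for an alarm-based seizure predictor:
# sensitivity = correctly predicted seizures / total seizures;
# false-positive rate = false alarms per day of monitoring.
def prediction_metrics(n_seizures, n_predicted, n_false_alarms, days_monitored):
    sensitivity = n_predicted / n_seizures
    fpr_per_day = n_false_alarms / days_monitored
    return sensitivity, fpr_per_day
```

For clinical use the trade-off between these two numbers matters more than either alone, since raising alarm sensitivity typically raises the false-alarm burden as well.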
The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest that good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. It also utilizes three different time scales during the training and testing stages. PMID:27362758</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1914782H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1914782H"><span>Performance of two predictive uncertainty estimation approaches for conceptual Rainfall-Runoff Model: Bayesian Joint Inference and Hydrologic Uncertainty Post-processing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix</p> <p>2017-04-01</p> <p>It is particularly important to emphasize the role of uncertainty when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is Bayesian Joint Inference of the hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX).
These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, in general, both approaches provide similar predictive performance. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins). This is because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results in the case of the Model Conditional Processor are mainly influenced by the selected model of tails or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies which could be useful to achieve less biased hydrological parameter estimation. In this approach, the predictive distribution is first obtained through the Model Conditional Processor. Second, this predictive distribution is used to derive the corresponding additive error model, which is employed for hydrological parameter estimation with the Bayesian Joint Inference methodology.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/15082','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/15082"><span>Predicting diameters inside bark for 10 important hardwood species</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Donald E. Hilt; Everette D. Rast; Herman J.
Bailey</p> <p>1983-01-01</p> <p>General models for predicting DIB/DOB ratios up the stem, applicable over wide geographic areas, have been developed for 10 important hardwood species. Results indicate that the ratios either decrease or remain constant up the stem. Methods for adjusting the general models to local conditions are presented. The prediction models can be used in conjunction with optical...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23861828','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23861828"><span>Thematic and spatial resolutions affect model-based predictions of tree species distribution.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei</p> <p>2013-01-01</p> <p>Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction, in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different thematic (different numbers of land types) and spatial resolution combinations, and then statistically examined the differences in species abundance among these scenarios.
Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3701650','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3701650"><span>Thematic and Spatial Resolutions Affect Model-Based Predictions of Tree Species Distribution</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liang, Yu; He, Hong S.; Fraser, Jacob S.; Wu, ZhiWei</p> <p>2013-01-01</p> <p>Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction in predictions of tree species distribution (quantified by species abundance). 
We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different thematic (different numbers of land types) and spatial resolutions combinations, and then statistically examined the differences of species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution. 
PMID:23861828</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5966666','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5966666"><span>Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhang, Hongpo</p> <p>2018-01-01</p> <p>Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks have high variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined; this ensures prediction accuracy while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively. 
PMID:29854369</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4511118','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4511118"><span>Neuropsychological tests for predicting cognitive decline in older adults</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Baerresen, Kimberly M; Miller, Karen J; Hanson, Eric R; Miller, Justin S; Dye, Richelin V; Hartman, Richard E; Vermeersch, David; Small, Gary W</p> <p>2015-01-01</p> <p>Summary Aim To determine neuropsychological tests likely to predict cognitive decline. Methods A sample of nonconverters (n = 106) was compared with those who declined in cognitive status (n = 24). Significant univariate logistic regression prediction models were used to create multivariate logistic regression models to predict decline based on initial neuropsychological testing. Results Rey–Osterrieth Complex Figure Test (RCFT) Retention predicted conversion to mild cognitive impairment (MCI) while baseline Buschke Delay predicted conversion to Alzheimer’s disease (AD). Due to group sample size differences, additional analyses were conducted using a subsample of demographically matched nonconverters. Analyses indicated RCFT Retention predicted conversion to MCI and AD, and Buschke Delay predicted conversion to AD. Conclusion Results suggest RCFT Retention and Buschke Delay may be useful in predicting cognitive decline. 
PMID:26107318</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008JCAMD..22..367G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008JCAMD..22..367G"><span>Utilizing high throughput screening data for predictive toxicology models: protocols and application to MLSCN assays</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guha, Rajarshi; Schürer, Stephan C.</p> <p>2008-06-01</p> <p>Computational toxicology is emerging as an encouraging alternative to experimental testing. The Molecular Libraries Screening Center Network (MLSCN) as part of the NIH Molecular Libraries Roadmap has recently started generating large and diverse screening datasets, which are publicly available in PubChem. In this report, we investigate various aspects of developing computational models to predict cell toxicity based on cell proliferation screening data generated in the MLSCN. By capturing feature-based information in those datasets, such predictive models would be useful in evaluating cell-based screening results in general (for example from reporter assays) and could be used as an aid to identify and eliminate potentially undesired compounds. Specifically we present the results of random forest ensemble models developed using different cell proliferation datasets and highlight protocols to take into account their extremely imbalanced nature. Depending on the nature of the datasets and the descriptors employed we were able to achieve percentage correct classification rates between 70% and 85% on the prediction set, though the accuracy rate dropped significantly when the models were applied to in vivo data. 
In this context we also compare the MLSCN cell proliferation results with animal acute toxicity data to investigate to what extent animal toxicity can be correlated and potentially predicted by proliferation results. Finally, we present a visualization technique that allows one to compare a new dataset to the training set of the models to decide whether the new dataset may be reliably predicted.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.treesearch.fs.fed.us/pubs/15661','USGSPUBS'); return false;" href="http://www.treesearch.fs.fed.us/pubs/15661"><span>Validation of behave fire behavior predictions in oak savannas</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Grabner, Keith W.; Dwyer, John; Cutter, Bruce E.</p> <p>1997-01-01</p> <p>Prescribed fire is a valuable tool in the restoration and management of oak savannas. BEHAVE, a fire behavior prediction system developed by the United States Forest Service, can be a useful tool when managing oak savannas with prescribed fire. BEHAVE predictions of fire rate-of-spread and flame length were validated using four standardized fuel models: Fuel Model 1 (short grass), Fuel Model 2 (timber and grass), Fuel Model 3 (tall grass), and Fuel Model 9 (hardwood litter). Also, a customized oak savanna fuel model (COSFM) was created and validated. Results indicate that standardized fuel model 2 and the COSFM reliably estimate mean rate-of-spread (MROS). The COSFM did not appreciably reduce MROS variation when compared to fuel model 2. Fuel models 1, 3, and 9 did not reliably predict MROS. Neither the standardized fuel models nor the COSFM adequately predicted flame lengths. 
We concluded that standardized fuel model 2 should be used with BEHAVE when predicting fire rates-of-spread in established oak savannas.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24057664','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24057664"><span>Using a neural network approach and time series data from an international monitoring station in the Yellow Sea for modeling marine ecosystems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Yingying; Wang, Juncheng; Vorontsov, A M; Hou, Guangli; Nikanorova, M N; Wang, Hongliang</p> <p>2014-01-01</p> <p>The international marine ecological safety monitoring demonstration station in the Yellow Sea was developed as a collaborative project between China and Russia. It is a nonprofit technical workstation designed as a facility for marine scientific research for public welfare. By undertaking long-term monitoring of the marine environment and automatic data collection, this station will provide valuable information for marine ecological protection and disaster prevention and reduction. The results of some initial research by scientists at the research station into predictive modeling of marine ecological environments and early warning are described in this paper. Marine ecological processes are influenced by many factors including hydrological and meteorological conditions, biological factors, and human activities. Consequently, it is very difficult to incorporate all these influences and their interactions in a deterministic or analytical model. A prediction model integrating a time series prediction approach with neural network nonlinear modeling is proposed for marine ecological parameters. 
The model explores the natural fluctuations in marine ecological parameters by learning from the latest observed data automatically, and then predicting future values of the parameter. The model is updated in a "rolling" fashion with new observed data from the monitoring station. Prediction experiments results showed that the neural network prediction model based on time series data is effective for marine ecological prediction and can be used for the development of early warning systems.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_10 --> <div id="page_11" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="201"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5885405','PMC'); return false;" 
href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5885405"><span>A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2018-01-01</p> <p>Financial distress prediction is an important and challenging research topic in the financial field. Many methods have been proposed for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods outperform traditional statistical ones. Financial statements are issued quarterly; hence, corporate financial crisis data are seasonal time series, and the attribute data affecting the financial distress of companies are nonlinear, nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages: (i) unlike previous models, it incorporates the concept of time series; (ii) its integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) it can generate rules and mathematical formulas of financial distress that provide references for investors and decision makers. The results show that the proposed method outperforms the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. 
PMID:29765399</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29765399','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29765399"><span>A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He</p> <p>2018-01-01</p> <p>Financial distress prediction is an important and challenging research topic in the financial field. Many methods have been proposed for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods outperform traditional statistical ones. Financial statements are issued quarterly; hence, corporate financial crisis data are seasonal time series, and the attribute data affecting the financial distress of companies are nonlinear, nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages: (i) unlike previous models, it incorporates the concept of time series; (ii) its integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) it can generate rules and mathematical formulas of financial distress that provide references for investors and decision makers. 
The result shows that the proposed method is better than the listing classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5526681','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5526681"><span>Using connectome-based predictive modeling to predict individual behavior from brain connectivity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd</p> <p>2017-01-01</p> <p>Neuroimaging is a fast developing research area where anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). 
The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently to or better than most existing approaches in brain-behavior prediction. Moreover, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find the protocol easy to implement. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1836b0069G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1836b0069G"><span>A hybrid predictive model for acoustic noise in urban areas based on time series analysis and artificial neural network</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine</p> <p>2017-06-01</p> <p>The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are largely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows along with the expansion of residential, industrial, and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. 
The model is based on mixing two different approaches: Time Series Analysis (TSA) and an Artificial Neural Network (ANN). The TSA model evaluates trend and seasonality in the data, while the ANN model relies on the capacity of the network to "learn" the behavior of the data. The mixed approach consists of estimating noise levels by means of TSA and, once the differences (residuals) between the TSA estimates and the observed data have been calculated, training an ANN on those residuals. The hybrid model's performance varies significantly with the number of steps forward in the prediction: the best results are achieved when predicting one step ahead. A 7-day prediction can also be performed with a slightly greater error, offering a longer prediction horizon than the single-day-ahead model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5369098','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5369098"><span>Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian</p> <p>2017-01-01</p> <p>Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. 
In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)4 model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model performs better, not only in model fitting but also in forecasting. Furthermore, considering long-run stability and modeling precision, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of capturing the future tendency: the number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before declining slowly until the epidemic finally dies out. 
PMID:28273856</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3154188','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3154188"><span>Development and Validation of a Risk Model for Prediction of Hazardous Alcohol Consumption in General Practice Attendees: The PredictAL Study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I.; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin</p> <p>2011-01-01</p> <p>Background Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. Results 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. 
The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). Conclusions The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse. PMID:21853028</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009LNCS.5866..646B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009LNCS.5866..646B"><span>Aggregation Trade Offs in Family Based Recommendations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Berkovsky, Shlomo; Freyne, Jill; Coombe, Mac</p> <p></p> <p>Personalized information access tools are frequently based on collaborative filtering recommendation algorithms. Collaborative filtering recommender systems typically suffer from a data sparsity problem, where systems do not have sufficient user data to generate accurate and reliable predictions. Prior research suggested using group-based user data in the collaborative filtering recommendation process to generate group-based predictions and partially resolve the sparsity problem. Although group recommendations are less accurate than personalized recommendations, they are more accurate than general non-personalized recommendations, which are the natural fallback when personalized recommendations cannot be generated. 
In this work we present initial results of a study that exploits the browsing logs of real families of users gathered in an eHealth portal. The browsing logs allowed us to experimentally compare the accuracy of two group-based recommendation strategies: aggregated group models and aggregated predictions. Our results showed that aggregating individual models into group models resulted in more accurate predictions than aggregating individual predictions into group predictions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1113599-injection-molded-long-fiber-thermoplastic-composites-from-process-modeling-prediction-mechanical-properties','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1113599-injection-molded-long-fiber-thermoplastic-composites-from-process-modeling-prediction-mechanical-properties"><span>Injection-Molded Long-Fiber Thermoplastic Composites: From Process Modeling to Prediction of Mechanical Properties</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Nguyen, Ba Nghiep; Kunc, Vlastimil; Jin, Xiaoshi</p> <p>2013-12-18</p> <p>This article illustrates the predictive capabilities for long-fiber thermoplastic (LFT) composites that first simulate the injection molding of LFT structures by Autodesk® Simulation Moldflow® Insight (ASMI) to accurately predict fiber orientation and length distributions in these structures. After validating fiber orientation and length predictions against the experimental data, the predicted results are used by ASMI to compute distributions of elastic properties in the molded structures. 
In addition, local stress-strain responses and damage accumulation under tensile loading are predicted by an elastic-plastic damage model of EMTA-NLA, a nonlinear analysis tool implemented in ABAQUS® via user-subroutines using an incremental Eshelby-Mori-Tanaka approach. Predicted stress-strain responses up to failure and damage accumulations are compared to the experimental results to validate the model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AGUFM.C33F..03B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AGUFM.C33F..03B"><span>Seasonal predictability of Arctic Sea Ice: assessing its limits and potential in a GCM and implications for observations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Blanchard-Wrigglesworth, E.</p> <p>2012-12-01</p> <p>Arctic sea ice has exhibited a dramatic decrease both in area and thickness over the recent decades, particularly during the summer months. This decrease has led to growing interest in the potential predictability of summer sea ice, spurred in part by the socioeconomic implications. Here we present results of several parallel experiments designed to assess and understand the limits and potential for seasonal predictability of Arctic sea ice, with an emphasis on the summer minimum. Building on our experience from the SEARCH Outlook, we present results of a coupled general circulation model (GCM) hindcast simulation of Arctic summer sea ice variability for the satellite period (1979-present). These are initialized with spring sea ice volume anomalies obtained from a modelling and assimilation system, considered to be a close representation of reality. 
We show that there is significant predictability, yet the stochastic forcing imparted mainly by the atmosphere can lead to large errors in the hindcast. The model, however, can simulate anomalous runs that lie beyond a Gaussian distribution. Additionally, we investigate the regional characteristics of predictability and its links to sea ice dynamics and the spatio-temporal behavior of sea ice anomalies. We show a distinct difference between models. Unfortunately, observational data of thickness are not yet detailed enough to assess the models. Our results indicate the potential for detailed ice thickness observations in improving regional predictability. Finally, we discuss the importance of experiment design in predictability experiments, and show that predictions made with models that have a large mean state bias in sea ice require a careful initialization in order to fully capture all initial value predictability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhDT.......145L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhDT.......145L"><span>Development and Validation of a New Air Carrier Block Time Prediction Model and Methodology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Litvay, Robyn Olson</p> <p></p> <p>Commercial airline operations rely on predicted block times as the foundation for critical, successive decisions that include fuel purchasing, crew scheduling, and airport facility usage planning. Small inaccuracies in the predicted block times have the potential to result in huge financial losses, and, with profit margins for airline operations currently almost nonexistent, potentially negate any possible profit. 
Although optimization techniques have resulted in many models targeting airline operations, the challenge of accurately predicting and quantifying variables months in advance remains elusive. The objective of this work is the development of an airline block time prediction model and methodology that is practical, easily implemented, and easily updated. Actual U.S. domestic flight data from a major airline was used to develop a model that predicts airline block times with increased accuracy and smaller variance between actual and predicted times. This reduction in variance represents tens of millions of dollars (U.S.) per year in operational cost savings for an individual airline. A new methodology for block time prediction is constructed using a regression model as the base, as it has both deterministic and probabilistic components, and historic block time distributions. The estimation of the block times for commercial domestic airline operations requires a probabilistic, general model that can be easily customized for a specific airline’s network. As individual block times vary by season, by day, and by time of day, the challenge is to make general, long-term estimations representing the average, actual block times while minimizing the variation. Predictions of block times for the third quarter months of July and August of 2011 were calculated using this new model. The resulting actual block times were obtained from the Research and Innovative Technology Administration, Bureau of Transportation Statistics (Airline On-time Performance Data, 2008-2011) for comparison and analysis. 
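The regression-plus-historic-distribution idea described above can be sketched in a few lines. Everything below (the departure-hour feature, the 70th-percentile buffer, the synthetic block times) is illustrative and not taken from the dissertation:

```python
import numpy as np

# Illustrative sketch only: a deterministic regression component plus a
# probabilistic buffer taken from the historical block-time distribution.
rng = np.random.default_rng(0)
history = 120 + 10 * rng.standard_normal(500)   # synthetic block times (minutes)
hour = rng.integers(6, 22, size=500)            # toy feature: departure hour

# Deterministic component: least-squares fit of block time on departure hour.
A = np.column_stack([np.ones(hour.size), hour])
coef, *_ = np.linalg.lstsq(A, history, rcond=None)

def predict_block_time(dep_hour, buffer_pct=70):
    """Regression estimate plus a percentile-based buffer from the history."""
    base = coef[0] + coef[1] * dep_hour
    buffer = np.percentile(history, buffer_pct) - np.median(history)
    return base + buffer

print(round(predict_block_time(9), 1))
```

The buffer percentile plays the role of the probabilistic component: raising it trades fuel and crew-cost padding against the risk of under-scheduling a flight.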
Future block times are shown to be predicted with greater accuracy, without exception and network-wide, for a major, U.S., domestic airline.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19880005485','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19880005485"><span>Prediction of physical workload in reduced gravity environments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Goldberg, Joseph H.</p> <p>1987-01-01</p> <p>The background, development, and application of a methodology to predict human energy expenditure and physical workload in low gravity environments, such as a Lunar or Martian base, is described. Based on a validated model to predict energy expenditures in Earth-based industrial jobs, the model relies on an elemental analysis of the proposed job. Because the job itself need not physically exist, many alternative job designs may be compared in their physical workload. The feasibility of using the model for prediction of low gravity work was evaluated by lowering body and load weights, while maintaining basal energy expenditure. Comparison of model results was made both with simulated low gravity energy expenditure studies and with reported Apollo 14 Lunar EVA expenditure. Prediction accuracy was very good for walking and for cart pulling on slopes less than 15 deg, but the model underpredicted the most difficult work conditions. This model was applied to example core sampling and facility construction jobs, as presently conceptualized for a Lunar or Martian base. Resultant energy expenditures and suggested work-rest cycles were well within the range of moderate work difficulty. 
Future model development requirements were also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AtmEn.134..168N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AtmEn.134..168N"><span>A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu</p> <p>2016-06-01</p> <p>To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. 
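The decomposition-and-ensemble principle behind the model above can be illustrated with a much simpler stand-in: a moving-average split instead of CEEMD, and naive persistence instead of GWO-tuned SVR. All names and data here are illustrative:

```python
import numpy as np

def moving_average(x, w=5):
    """Crude trend extractor standing in for the CEEMD decomposition."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def persistence_forecast(component):
    """Naive per-component predictor standing in for GWO-optimized SVR."""
    return component[-1]

rng = np.random.default_rng(1)
t = np.arange(200)
series = 50 + 0.1 * t + rng.normal(0, 2, 200)   # synthetic concentration series

trend = moving_average(series)
residual = series - trend                       # decomposition: series = trend + residual
forecast = persistence_forecast(trend) + persistence_forecast(residual)

# Because the decomposition is additive, recombining the per-component
# persistence forecasts reproduces the last observation (to rounding).
print(abs(forecast - series[-1]) < 1e-9)
```

The point of the real pipeline is that each simpler component is easier to predict than the raw series; the additive recombination shown here is what makes the per-component forecasts composable.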
The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models, with higher prediction accuracy and better hit rates of directional prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29587758','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29587758"><span>Prediction equations of forced oscillation technique: the insidious role of collinearity.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Narchi, Hassib; AlBlooshi, Afaf</p> <p>2018-03-27</p> <p>Many studies have reported reference data for forced oscillation technique (FOT) in healthy children. The prediction equations of FOT parameters were derived from a multivariable regression model examining the effect of age, gender, weight and height on each parameter. As many of these variables are likely to be correlated, collinearity might have affected the accuracy of the model, potentially resulting in misleading, erroneous or difficult-to-interpret conclusions. The aim of this work was to review all FOT publications in children since 2005 and analyze whether collinearity was considered in the construction of the published prediction equations, to compare these prediction equations with our own study, and to analyse, in our study, how collinearity between the explanatory variables might affect the predicted equations if it was not considered in the model. The results showed that none of the ten reviewed studies had stated whether collinearity was checked for. Half of the reports had also included in their equations variables which are physiologically correlated, such as age, weight and height. The predicted resistance varied by up to 28% amongst these studies. 
In our study, multicollinearity was identified between the explanatory variables initially considered for the regression model (age, weight and height). Ignoring it would have resulted in inaccuracies in the coefficients of the equation, their signs (positive or negative), their 95% confidence intervals, their significance level and the model goodness of fit. In conclusion, with inaccurately constructed and improperly reported models, understanding the results and reproducing the models for future research might be compromised.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3735390','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3735390"><span>Artificial neural network models for prediction of cardiovascular autonomic dysfunction in general Chinese population</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Background The present study aimed to develop an artificial neural network (ANN) based prediction model for cardiovascular autonomic (CA) dysfunction in the general population. Methods We analyzed a previous dataset based on a population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN analysis. Performances of these prediction models were evaluated in the validation set. Results Univariate analysis indicated that 14 risk factors showed statistically significant association with CA dysfunction (P < 0.05). The mean area under the receiver-operating curve was 0.762 (95% CI 0.732–0.793) for the prediction model developed using ANN analysis. 
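An area under the receiver-operating curve such as the 0.762 reported above is, numerically, the Mann-Whitney rank statistic: the fraction of positive/negative pairs the model ranks correctly. A minimal sketch with made-up labels and scores (no handling of tied scores):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-based (Mann-Whitney) formula; assumes no tied scores."""
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = [0, 0, 1, 0, 1, 1]                      # 1 = has the condition (synthetic)
scores = [0.10, 0.40, 0.35, 0.80, 0.65, 0.90]    # model outputs (synthetic)
print(roc_auc(labels, scores))                   # 6 of the 9 pos/neg pairs ranked correctly -> 2/3
```

With real data one would also report a confidence interval, as the study above does, typically via bootstrap resampling of the validation set.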
The mean sensitivity, specificity, and positive and negative predictive values of the prediction model were 0.751, 0.665, 0.330 and 0.924, respectively. All HL statistics were less than 15.0. Conclusion ANN is an effective tool for developing prediction models with high value for predicting CA dysfunction among the general population. PMID:23902963</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JChPh.147g4503Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JChPh.147g4503Z"><span>Temperature dependence of the hydrated electron's excited-state relaxation. I. Simulation predictions of resonance Raman and pump-probe transient absorption spectra of cavity and non-cavity models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zho, Chen-Chen; Farr, Erik P.; Glover, William J.; Schwartz, Benjamin J.</p> <p>2017-08-01</p> <p>We use one-electron non-adiabatic mixed quantum/classical simulations to explore the temperature dependence of both the ground-state structure and the excited-state relaxation dynamics of the hydrated electron. We compare the results for both the traditional cavity picture and a more recent non-cavity model of the hydrated electron and make definite predictions for distinguishing between the different possible structural models in future experiments. We find that the traditional cavity model shows no temperature-dependent change in structure at constant density, leading to a predicted resonance Raman spectrum that is essentially temperature-independent. In contrast, the non-cavity model predicts a blue-shift in the hydrated electron's resonance Raman O-H stretch with increasing temperature. 
The lack of a temperature-dependent ground-state structural change of the cavity model also leads to a prediction of little change with temperature of both the excited-state lifetime and hot ground-state cooling time of the hydrated electron following photoexcitation. This is in sharp contrast to the predictions of the non-cavity model, where both the excited-state lifetime and hot ground-state cooling time are expected to decrease significantly with increasing temperature. These simulation-based predictions should be directly testable by the results of future time-resolved photoelectron spectroscopy experiments. Finally, the temperature-dependent differences in predicted excited-state lifetime and hot ground-state cooling time of the two models also lead to different predicted pump-probe transient absorption spectroscopy of the hydrated electron as a function of temperature. We perform such experiments and describe them in Paper II [E. P. Farr et al., J. Chem. Phys. 147, 074504 (2017)], and find changes in the excited-state lifetime and hot ground-state cooling time with temperature that match well with the predictions of the non-cavity model. In particular, the experiments reveal stimulated emission from the excited state with an amplitude and lifetime that decreases with increasing temperature, a result in contrast to the lack of stimulated emission predicted by the cavity model but in good agreement with the non-cavity model. 
Overall, until ab initio calculations describing the non-adiabatic excited-state dynamics of an excess electron with hundreds of water molecules at a variety of temperatures become computationally feasible, the simulations presented here provide a definitive route for connecting the predictions of cavity and non-cavity models of the hydrated electron with future experiments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920020454','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920020454"><span>Validation of finite element and boundary element methods for predicting structural vibration and radiated noise</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Seybert, A. F.; Wu, X. F.; Oswald, Fred B.</p> <p>1992-01-01</p> <p>Analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM modeling using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. 
The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies due to limitations in the FEM model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960009062','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960009062"><span>Verification of a three-dimensional resin transfer molding process simulation model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Fingerson, John C.; Loos, Alfred C.; Dexter, H. Benson</p> <p>1995-01-01</p> <p>Experimental evidence was obtained to complete the verification of the parameters needed for input to a three-dimensional finite element model simulating the resin flow and cure through an orthotropic fabric preform. The material characterizations completed include resin kinetics and viscosity models, as well as preform permeability and compaction models. The steady-state and advancing front permeability measurement methods are compared. The results indicate that both methods yield similar permeabilities for a plain weave, bi-axial fiberglass fabric. Also, a method to determine principal directions and permeabilities is discussed and results are shown for a multi-axial warp knit preform. The flow of resin through a blade-stiffened preform was modeled and experiments were completed to verify the results. The predicted inlet pressure was approximately 65% of the measured value. A parametric study was performed to explain differences in measured and predicted flow front advancement and inlet pressures. Furthermore, PR-500 epoxy resin/IM7 8HS carbon fabric flat panels were fabricated by the Resin Transfer Molding process. Tests were completed utilizing both perimeter injection and center-port injection as resin inlet boundary conditions. 
The mold was instrumented with FDEMS sensors, pressure transducers, and thermocouples to monitor the process conditions. Results include a comparison of predicted and measured inlet pressures and flow front position. For the perimeter injection case, the measured inlet pressure and flow front results compared well to the predicted results. The results of the center-port injection case showed that the predicted inlet pressure was approximately 50% of the measured inlet pressure. Also, measured flow front position data did not agree well with the predicted results. Possible reasons for error include fiber deformation at the resin inlet and a lag in FDEMS sensor wet-out due to low mold pressures.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009AGUFM.S22C..08W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009AGUFM.S22C..08W"><span>Testing the Predictive Power of Coulomb Stress on Aftershock Sequences</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.</p> <p>2009-12-01</p> <p>Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. 
We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4483750','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4483750"><span>Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liu, Dong-jun; Li, Li</p> <p>2015-01-01</p> <p>For the issue of haze-fog, PM2.5 is the main influence factor of haze-fog pollution in China. 
The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. Autoregressive Integrated Moving Average Model (ARIMA), Artificial Neural Networks (ANNs) model and Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of three methods based on the weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted based on the comprehensive forecasting model. The results were compared with those of three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviation of each single prediction method, and had better applicability. It provides a new prediction method for the air quality forecasting field. PMID:26110332</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011MMTB...42..783K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011MMTB...42..783K"><span>Grain Floatation During Equiaxed Solidification of an Al-Cu Alloy in a Side-Cooled Cavity: Part II—Numerical Studies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kumar, Arvind; Walker, Mike J.; Sundarraj, Suresh; Dutta, Pradip</p> <p>2011-08-01</p> <p>In this article, a single-phase, one-domain macroscopic model is developed for studying binary alloy solidification with moving equiaxed solid phase, along with the associated transport phenomena. 
In this model, issues such as thermosolutal convection, motion of solid phase relative to liquid and viscosity variations of the solid-liquid mixture with solid fraction in the mobile zone are taken into account. Using the model, the associated transport phenomena during solidification of Al-Cu alloys in a rectangular cavity are predicted. The results for temperature variation, segregation patterns, and eutectic fraction distribution are compared with data from in-house experiments. The model predictions compare well with the experimental results. To highlight the influence of solid phase movement on convection and final macrosegregation, the results of the current model are also compared with those obtained from the conventional solidification model with stationary solid phase. By including the independent movement of the solid phase into the fluid transport model, better predictions of macrosegregation, microstructure, and even shrinkage locations were obtained. Mechanical property prediction models based on microstructure will benefit from the improved accuracy of this model.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_11 --> <div id="page_12" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="221"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AcSpA.167...12S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AcSpA.167...12S"><span>Enhancing prediction power of chemometric models through manipulation of the fed spectrophotometric data: A comparative study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Saad, Ahmed S.; Hamdy, Abdallah M.; Salama, Fathy M.; Abdelkawy, Mohamed</p> <p>2016-10-01</p> <p>The effect of data manipulation in the preprocessing step preceding construction of chemometric models was assessed. The same set of UV spectral data was used for construction of PLS and PCR models directly and after mathematical manipulation as per the well-known first and second derivatives of the absorption spectra, ratio spectra and first and second derivatives of the ratio spectra spectrophotometric methods, while the optimal working wavelength ranges were carefully selected for each model and the models were constructed. Unexpectedly, the number of latent variables used for models' construction varied among the different methods. The prediction power of the different models was compared using a validation set of 8 mixtures prepared as per the multilevel multifactor design and results were statistically compared using a two-way ANOVA test. Root mean square error of prediction (RMSEP) was used for further comparison of the predictability among different constructed models. 
Although no significant difference was found between results obtained using the Partial Least Squares (PLS) and Principal Component Regression (PCR) models, discrepancies among results were attributed to the variation in the discrimination power of the adopted spectrophotometric methods on the spectral data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26110332','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26110332"><span>Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Dong-jun; Li, Li</p> <p>2015-06-23</p> <p>For the issue of haze-fog, PM2.5 is the main influence factor of haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. Autoregressive Integrated Moving Average Model (ARIMA), Artificial Neural Networks (ANNs) model and Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of three methods based on the weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted based on the comprehensive forecasting model. The results were compared with those of three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviation of each single prediction method, and had better applicability. 
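The entropy-weighting combination step can be sketched as follows. The error and forecast numbers are hypothetical, and the normalization is a common textbook form of the entropy weight method; the paper's exact formulation may differ:

```python
import numpy as np

def entropy_weights(abs_errors):
    """Weights from the entropy of each model's normalized error sequence.

    abs_errors: (n_observations, n_models) matrix of absolute forecast errors.
    """
    p = abs_errors / abs_errors.sum(axis=0)        # column-wise normalization
    k = 1.0 / np.log(abs_errors.shape[0])
    entropy = -k * (p * np.log(p)).sum(axis=0)     # one entropy value per model
    divergence = 1.0 - entropy                     # "information" per model
    return divergence / divergence.sum()

# Hold-out absolute errors of three models (say ARIMA, ANN, ESM -- invented).
errors = np.array([[1.0, 2.0, 1.5],
                   [1.1, 0.1, 1.6],
                   [0.9, 3.0, 1.4]])
weights = entropy_weights(errors)
forecasts = np.array([42.0, 48.0, 45.0])           # next-step forecasts per model
combined = float(weights @ forecasts)
print(weights.sum(), combined)
```

The weights are non-negative and sum to one, so the combined forecast always lies within the range spanned by the individual model forecasts.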
It provides a new prediction method for the air quality forecasting field.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2246184','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2246184"><span>Mammographic density, breast cancer risk and risk prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Vachon, Celine M; van Gils, Carla H; Sellers, Thomas A; Ghosh, Karthik; Pruthi, Sandhya; Brandt, Kathleen R; Pankratz, V Shane</p> <p>2007-01-01</p> <p>In this review, we examine the evidence for mammographic density as an independent risk factor for breast cancer, describe the risk prediction models that have incorporated density, and discuss the current and future implications of using mammographic density in clinical practice. Mammographic density is a consistent and strong risk factor for breast cancer in several populations and across age at mammogram. Recently, this risk factor has been added to existing breast cancer risk prediction models, increasing the discriminatory accuracy with its inclusion, albeit slightly. With validation, these models may replace the existing Gail model for clinical risk assessment. However, absolute risk estimates resulting from these improved models are still limited in their ability to characterize an individual's probability of developing cancer. Promising new measures of mammographic density, including volumetric density, which can be standardized using full-field digital mammography, will likely result in a stronger risk factor and improve accuracy of risk prediction models. 
PMID:18190724</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5795401','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5795401"><span>Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Li, Xiaoqing; Wang, Yu</p> <p>2018-01-01</p> <p>Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. 
Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for early warning in bridge health monitoring systems based on sensor data using sensing technology. PMID:29351254</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29351254','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29351254"><span>Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu</p> <p>2018-01-19</p> <p>Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. 
To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. 
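The denoising stage of such a pipeline can be illustrated with a minimal scalar Kalman filter; the noise variances below are hypothetical placeholders, not values from the study:

```python
def kalman_denoise(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter: smooth a noisy 1-D deformation series.

    q: assumed process-noise variance, r: assumed measurement-noise
    variance (both hypothetical illustration values).
    """
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    smoothed = []
    for z in measurements:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update estimate toward the measurement
        p *= (1.0 - k)            # shrink estimate variance
        smoothed.append(x)
    return smoothed
```

Each output value is a variance-weighted compromise between the running estimate and the newest measurement, which is what suppresses high-frequency noise before an ARIMA stage is fitted.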
This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing technology.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21801435','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21801435"><span>Hill-Climbing search and diversification within an evolutionary approach to protein structure prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chira, Camelia; Horvath, Dragos; Dumitrescu, D</p> <p>2011-07-30</p> <p>Proteins are complex structures made of amino acids having a fundamental role in the correct functioning of living cells. The structure of a protein is the result of the protein folding process. However, the general principles that govern the folding of natural proteins into a native structure are unknown. The problem of predicting a protein structure with minimum-energy starting from the unfolded amino acid sequence is a highly complex and important task in molecular and computational biology. Protein structure prediction has important applications in fields such as drug design and disease prediction. The protein structure prediction problem is NP-hard even in simplified lattice protein models. An evolutionary model based on hill-climbing genetic operators is proposed for protein structure prediction in the hydrophobic-polar (HP) model. Problem-specific search operators are implemented and applied using a steepest-ascent hill-climbing approach. Furthermore, the proposed model enforces an explicit diversification stage during the evolution in order to avoid local optima. 
The main features of the resulting evolutionary algorithm - hill-climbing mechanism and diversification strategy - are evaluated in a set of numerical experiments for the protein structure prediction problem to assess their impact on the efficiency of the search process. Furthermore, the emerging consolidated model is compared to relevant algorithms from the literature for a set of difficult two-dimensional instances from lattice protein models. The results obtained by the proposed algorithm are promising and competitive with those of related methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20160013905','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20160013905"><span>Rolling Bearing Life Prediction, Theory, and Application</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Zaretsky, Erwin V.</p> <p>2016-01-01</p> <p>A tutorial is presented outlining the evolution, theory, and application of rolling-element bearing life prediction from that of A. Palmgren, 1924; W. Weibull, 1939; G. Lundberg and A. Palmgren, 1947 and 1952; E. Ioannides and T. Harris, 1985; and E. Zaretsky, 1987. Comparisons are made between these life models. The Ioannides-Harris model without a fatigue limit is identical to the Lundberg-Palmgren model. The Weibull model is similar to that of Zaretsky if the exponents are chosen to be identical. Both the load-life and Hertz stress-life relations of Weibull, Lundberg and Palmgren, and Ioannides and Harris reflect a strong dependence on the Weibull slope. The Zaretsky model decouples the dependence of the critical shear stress-life relation from the Weibull slope. This results in a nominal variation of the Hertz stress-life exponent. 
For 9th- and 8th-power Hertz stress-life exponents for ball and roller bearings, respectively, the Lundberg-Palmgren model best predicts life. However, for 12th- and 10th-power relations reflected by modern bearing steels, the Zaretsky model based on the Weibull equation is superior. Under the range of stresses examined, the use of a fatigue limit would suggest that (for most operating conditions under which a rolling-element bearing will operate) the bearing will not fail from classical rolling-element fatigue. Realistically, this is not the case. The use of a fatigue limit will significantly overpredict life over a range of normal operating Hertz stresses. (The use of ISO 281:2007 with a fatigue limit in these calculations would result in a bearing life approaching infinity.) Since the predicted lives of rolling-element bearings are high, the problem can become one of undersizing a bearing for a particular application. Rules had been developed to distinguish and compare predicted lives with those actually obtained. Based upon field and test results of 51 ball and roller bearing sets, 98 percent of these bearing sets had acceptable life results using the Lundberg-Palmgren equations with life adjustment factors to predict bearing life. That is, they had lives equal to or greater than that predicted. The Lundberg-Palmgren model was used to predict the life of a commercial turboprop gearbox. The life prediction was compared with the field lives of 64 gearboxes. From these results, the roller bearing lives exhibited a load-life exponent of 5.2, which correlated with the Zaretsky model. 
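The load-life relation underlying these comparisons, L10 = (C/P)^p, is simple to state in code; the dynamic capacity and applied load below are hypothetical illustration values, not data from the tutorial:

```python
def l10_life(c_dyn, load, p):
    """Bearing load-life relation L10 = (C/P)**p, in millions of revolutions."""
    return (c_dyn / load) ** p

# Hypothetical roller bearing: dynamic capacity 100 kN, applied load 20 kN
life_standard = l10_life(100.0, 20.0, 10.0 / 3.0)  # ANSI/ABMA & ISO exponent
life_modern = l10_life(100.0, 20.0, 4.0)           # higher exponent for modern steels
```

Because the exponent sits in the power, a modest change (10/3 versus 4) roughly triples the predicted life at this load ratio, which is why the choice of load-life exponent dominates these comparisons.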
The use of the ANSI/ABMA and ISO standards load-life exponent of 10/3 to predict roller bearing life is not reflective of modern roller bearings and will underpredict bearing lives.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=interest+AND+compound&pg=6&id=EJ732314','ERIC'); return false;" href="https://eric.ed.gov/?q=interest+AND+compound&pg=6&id=EJ732314"><span>Students' Understanding of the Descriptive and Predictive Nature of Teaching Models in Organic Chemistry</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Treagust, David F.; Chittleborough, Gail D.; Mamiala, Thapelo L.</p> <p>2004-01-01</p> <p>The purpose of the study was to investigate secondary students' understanding of the descriptive and predictive nature of teaching models used in representing compounds in introductory organic chemistry. Of interest were the relationships between teaching models, scientific models, and students' mental models and expressed models. The results from…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1955d0090H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1955d0090H"><span>Research on light rail electric load forecasting based on ARMA model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huang, Yifan</p> <p>2018-04-01</p> <p>The article compares a variety of time series models and combines the characteristics of power load forecasting. Then, a light rail load forecasting model based on the ARMA model is established. Based on this model, a light rail system is forecasted. 
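As a sketch of the kind of time-series forecasting an ARMA model performs, here is a toy AR(1) model fitted by least squares; the series below is synthetic, not light rail data:

```python
def ar1_forecast(series, steps=3):
    """Fit x[t+1] = c + phi * x[t] by least squares, then forecast ahead."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
          sum((a - mx) ** 2 for a in xs)
    c = my - phi * mx
    forecasts, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last     # iterate the fitted recurrence
        forecasts.append(last)
    return forecasts
```

A full ARMA(p, q) model adds higher-order lags and a moving-average term over past errors, but the fit-then-iterate structure is the same.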
The results show that the model's prediction accuracy is high.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20510597','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20510597"><span>Waste tyre pyrolysis: modelling of a moving bed reactor.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Aylón, E; Fernández-Colino, A; Murillo, R; Grasa, G; Navarro, M V; García, T; Mastral, A M</p> <p>2010-12-01</p> <p>This paper describes the development of a new model for waste tyre pyrolysis in a moving bed reactor. This model comprises three different sub-models: a kinetic sub-model that predicts solid conversion in terms of reaction time and temperature, a heat transfer sub-model that calculates the temperature profile inside the particle and the energy flux from the surroundings to the tyre particles and, finally, a hydrodynamic model that predicts the solid flow pattern inside the reactor. These three sub-models have been integrated in order to develop a comprehensive reactor model. Experimental results were obtained in a continuous moving bed reactor and used to validate model predictions, with good agreement achieved between the experimental and simulated results. In addition, a parametric study of the model was carried out, which showed that tyre particle heating is clearly faster than average particle residence time inside the reactor. Therefore, this fast particle heating together with fast reaction kinetics enables total solid conversion to be achieved in this system in accordance with the predictive model. Copyright © 2010 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMSH21A2637H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMSH21A2637H"><span>One-month validation of the Space Weather Modeling Framework geospace model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Haiducek, J. D.; Welling, D. T.; Ganushkina, N. Y.; Morley, S.; Ozturk, D. S.</p> <p>2017-12-01</p> <p>The Space Weather Modeling Framework (SWMF) geospace model consists of a magnetohydrodynamic (MHD) simulation coupled to an inner magnetosphere model and an ionosphere model. This provides a predictive capability for magnetospheric dynamics, including ground-based and space-based magnetic fields, geomagnetic indices, currents and densities throughout the magnetosphere, cross-polar cap potential, and magnetopause and bow shock locations. The only inputs are solar wind parameters and F10.7 radio flux. We have conducted a rigorous validation effort consisting of a continuous simulation covering the month of January 2005 using three different model configurations. This provides a relatively large dataset for assessment of the model's predictive capabilities. We find that the model does an excellent job of predicting the Sym-H index, and performs well at predicting Kp and CPCP during active times. Dayside magnetopause and bow shock positions are also well predicted. The model tends to over-predict Kp and CPCP during quiet times and under-predicts the magnitude of AL during disturbances. The model under-predicts the magnitude of night-side geosynchronous Bz, and over-predicts the radial distance to the flank magnetopause and bow shock. This suggests that the model over-predicts stretching of the magnetotail and the overall size of the magnetotail. 
With the exception of the AL index and the nightside geosynchronous magnetic field, we find the results to be insensitive to grid resolution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..231a2143Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..231a2143Z"><span>Simulation analysis of adaptive cruise prediction control</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Li; Cui, Sheng Min</p> <p>2017-09-01</p> <p>Predictive control is suitable for multi-variable and multi-constraint system control. In order to discuss the effect of predictive control on the vehicle longitudinal motion, this paper establishes the expected spacing model by combining variable pitch spacing and the safety distance strategy. The model predictive control theory and the optimization method based on quadratic programming are designed to obtain and track the best expected acceleration trajectory quickly. Simulation models are established including predictive and adaptive fuzzy control. Simulation results show that predictive control can realize the basic function of the system while ensuring safety. 
Applying the predictive and adaptive fuzzy algorithms under cruise conditions indicates that predictive control performs better.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26539483','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26539483"><span>Comparison of Primary Models to Predict Microbial Growth by the Plate Count and Absorbance Methods.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pla, María-Leonor; Oltra, Sandra; Esteban, María-Dolores; Andreu, Santiago; Palop, Alfredo</p> <p>2015-01-01</p> <p>The selection of a primary model to describe microbial growth in predictive food microbiology often appears to be subjective. The objective of this research was to check the performance of different mathematical models in predicting growth parameters, both by absorbance and plate count methods. For this purpose, growth curves of three different microorganisms (Bacillus cereus, Listeria monocytogenes, and Escherichia coli) grown under the same conditions, but with different initial concentrations each, were analysed. When measuring the microbial growth of each microorganism by optical density, almost all models provided quite high goodness of fit (r(2) > 0.93) for all growth curves. The growth rate remained approximately constant for all growth curves of each microorganism, when considering one growth model, but differences were found among models. The three-phase linear model provided the lowest variation in growth rate values for all three microorganisms. The Baranyi model gave a marginally higher variation, despite a much better overall fit. When measuring the microbial growth by plate count, similar results were obtained. 
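The three-phase linear primary model compared above has a particularly compact form; a minimal sketch, with hypothetical parameter values in the usage comments:

```python
def three_phase_linear(t, log_n0, mu_max, lag, log_nmax):
    """Three-phase linear primary growth model: log10 cell count versus time.

    Flat at log_n0 during the lag phase, linear growth at rate mu_max
    afterwards, then flat again at the stationary-phase ceiling log_nmax.
    """
    if t <= lag:
        return log_n0
    return min(log_n0 + mu_max * (t - lag), log_nmax)

# Hypothetical culture: inoculum 10^3 CFU/mL, mu_max = 0.5 log10/h,
# 2 h lag, stationary phase at 10^9 CFU/mL
density_at_4h = three_phase_linear(4.0, 3.0, 0.5, 2.0, 9.0)
```

The model's appeal, reflected in the low growth-rate variation reported above, is that the exponential phase is a single straight line whose slope is read off directly as mu_max.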
These results provide insight into predictive microbiology and will help food microbiologists and researchers to choose the proper primary growth predictive model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24648393','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24648393"><span>Validating spatiotemporal predictions of an important pest of small grains.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Merrill, Scott C; Holtzer, Thomas O; Peairs, Frank B; Lester, Philip J</p> <p>2015-01-01</p> <p>Arthropod pests are typically managed using tactics applied uniformly to the whole field. Precision pest management applies tactics under the assumption that within-field pest pressure differences exist. This approach allows for more precise and judicious use of scouting resources and management tactics. For example, a portion of a field delineated as attractive to pests may be selected to receive extra monitoring attention. Likely because of the high variability in pest dynamics, little attention has been given to developing precision pest prediction models. Here, multimodel synthesis was used to develop a spatiotemporal model predicting the density of a key pest of wheat, the Russian wheat aphid, Diuraphis noxia (Kurdjumov). Spatially implicit and spatially explicit models were synthesized to generate spatiotemporal pest pressure predictions. Cross-validation and field validation were used to confirm model efficacy. A strong within-field signal depicting aphid density was confirmed with low prediction errors. Results show that the within-field model predictions will provide higher-quality information than would be provided by traditional field scouting. 
With improvements to the broad-scale model component, the model synthesis approach and resulting tool could improve pest management strategy and provide a template for the development of spatially explicit pest pressure models. © 2014 Society of Chemical Industry.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.5935L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.5935L"><span>Improvement of PM concentration predictability using WRF-CMAQ-DLM coupled system and its applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, Soon Hwan; Kim, Ji Sun; Lee, Kang Yeol; Shon, Keon Tae</p> <p>2017-04-01</p> <p>Air quality due to increasing Particulate Matter (PM) in Korea is getting worse. At present, the PM forecast is announced based on the PM concentration predicted from the air quality prediction numerical model. However, forecast accuracy is not as high as expected due to various uncertainties in PM physical and chemical characteristics. The purpose of this study was to develop a numerical-statistical ensemble model to improve the accuracy of prediction of PM10 concentration. Numerical models used in this study are the three-dimensional atmospheric model Weather Research and Forecasting (WRF) and the community multiscale air quality model (CMAQ). The target areas for the PM forecast are Seoul, Busan, Daegu, and Daejeon metropolitan areas in Korea. The data used in the model development are PM concentration and CMAQ predictions and the data period is 3 months (March 1 - May 31, 2014). The dynamic-statistical technique for reducing the systematic error of the CMAQ predictions was applied to the dynamic linear model (DLM) based on the Bayesian Kalman filter technique. 
Applying the metrics generated from the dynamic linear model to the forecasting of PM concentrations improved accuracy. The improvement was especially marked at high PM concentrations, where the damage is relatively large.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25851283','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25851283"><span>A prediction model for colon cancer surveillance data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Good, Norm M; Suresh, Krithika; Young, Graeme P; Lockett, Trevor J; Macrae, Finlay A; Taylor, Jeremy M G</p> <p>2015-08-15</p> <p>Dynamic prediction models make use of patient-specific longitudinal data to update individualized survival probability predictions based on current and past information. Colonoscopy (COL) and fecal occult blood test (FOBT) results were collected from two Australian surveillance studies on individuals characterized as high-risk based on a personal or family history of colorectal cancer. Motivated by a Poisson process, this paper proposes a generalized nonlinear model with a complementary log-log link as a dynamic prediction tool that produces individualized probabilities for the risk of developing advanced adenoma or colorectal cancer (AAC). This model allows predicted risk to depend on a patient's baseline characteristics and time-dependent covariates. Information on the dates and results of COLs and FOBTs were incorporated using time-dependent covariates that contributed to patient risk of AAC for a specified period following the test result. These covariates serve to update a person's risk as additional COL and FOBT test information becomes available. Model selection was conducted systematically through comparison of the Akaike information criterion. 
Goodness-of-fit was assessed with the use of calibration plots to compare the predicted probability of event occurrence with the proportion of events observed. Abnormal COL results were found to significantly increase risk of AAC for 1 year following the test. Positive FOBTs were found to significantly increase the risk of AAC for 3 months following the result. The covariates that incorporated the updated test results were of greater significance and had a larger effect on risk than the baseline variables. Copyright © 2015 John Wiley & Sons, Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26405950','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26405950"><span>Prediction model of sinoatrial node field potential using high order partial least squares.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Feng, Yu; Cao, Hui; Zhang, Yanbin</p> <p>2015-01-01</p> <p>High order partial least squares (HOPLS) is a novel data processing method. It is highly suitable for building prediction model which has tensor input and output. The objective of this study is to build a prediction model of the relationship between sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. 
The results showed that, when predicting two-dimensional variables, HOPLS had the same predictive ability as, and a lower dispersion degree than, partial least squares (PLS).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25361075','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25361075"><span>Experimental and computational prediction of glass transition temperature of drugs.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S</p> <p>2014-12-22</p> <p>Glass transition temperature (Tg) is an important inherent property of an amorphous solid material which is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse druglike compounds. Further, in silico models for prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least-squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein, the support vector regression gave the best result with RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. 
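That closing point can be sketched directly as an ordinary least-squares line through (Tm, Tg) pairs; the data below are invented to follow the rough Tg ≈ 0.7·Tm trend and are not values from the study:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical melting / glass-transition temperature pairs, in kelvin
tm = [400.0, 420.0, 450.0, 480.0, 510.0]
tg = [281.0, 295.0, 316.0, 337.0, 358.0]
slope, intercept = fit_line(tm, tg)
tg_pred = slope * 460.0 + intercept  # predicted Tg for a compound with Tm = 460 K
```

Once the slope and intercept are fitted on measured pairs, predicting Tg for a new compound requires only its melting temperature.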
However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, even before compound synthesis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20445993','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20445993"><span>Artificial neural network modelling of a large-scale wastewater treatment plant operation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Güçlü, Dünyamin; Dursun, Sükrü</p> <p>2010-11-01</p> <p>Artificial Neural Networks (ANNs), an artificial intelligence method, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of ANN models was determined through several steps of training and testing of the models. ANN models yielded satisfactory predictions. Results of the root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed model could be efficiently used. 
The results overall also confirm that the ANN modelling approach has great implementation potential for simulation, precise performance prediction and process control of wastewater treatment plants.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19740010566','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19740010566"><span>Correlation of full-scale drag predictions with flight measurements on the C-141A aircraft. Phase 2: Wind tunnel test, analysis, and prediction techniques. Volume 1: Drag predictions, wind tunnel data analysis and correlation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.</p> <p>1974-01-01</p> <p>The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data, and flight test results. An analysis of wind tunnel tests on a 0.0275 scale model at Reynolds numbers up to 3.05 x 10^6 based on mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat plate skin friction and component shape factors. 
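A flat-plate skin-friction estimate of the kind used in such extrapolations can be sketched with Schlichting's empirical turbulent correlation; the Reynolds number below is illustrative, not a value taken from the report:

```python
import math

def cf_turbulent_flat_plate(reynolds):
    """Schlichting's empirical turbulent flat-plate skin-friction correlation."""
    return 0.455 / (math.log10(reynolds) ** 2.58)

# Illustrative chord-based Reynolds number of 3.05 million
cf = cf_turbulent_flat_plate(3.05e6)
```

Component profile drag is then typically built up by multiplying such a friction coefficient by a shape factor and the component's wetted area, which is the essence of the flat-plate extrapolation method described above.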
An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17981391','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17981391"><span>Flash-point prediction for binary partially miscible mixtures of flammable solvents.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng</p> <p>2008-05-30</p> <p>Flash point is the most important variable used to characterize 
fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPS...374..121L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPS...374..121L"><span>A variable capacitance based modeling and power capability predicting method for ultracapacitor</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang</p> <p>2018-01-01</p> <p>Methods of accurate modeling and power capability predicting for ultracapacitors are of great significance in management and application of lithium-ion battery/ultracapacitor hybrid energy storage system. 
To overcome the simulation error coming from the constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results under different temperatures, and the effectiveness of the designed observer is demonstrated under various test conditions. Additionally, the power capability prediction results of different time scales and temperatures are compared, to study their effects on the ultracapacitor's power capability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19870059208&hterms=ozone+layer&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dozone%2Blayer','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19870059208&hterms=ozone+layer&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dozone%2Blayer"><span>The use of atmospheric measurements to constrain model predictions of ozone change from chlorine perturbations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Douglass, Anne R.; Stolarski, Richard S.</p> <p>1987-01-01</p> <p>Atmospheric photochemistry models have been used to predict the sensitivity of the ozone layer to various perturbations. These same models also predict concentrations of chemical species in the present day atmosphere which can be compared to observations. 
Model results for both present day values and sensitivity to perturbation depend upon input data for reaction rates, photodissociation rates, and boundary conditions. A method of combining the results of a Monte Carlo uncertainty analysis with the existing set of present atmospheric species measurements is developed. The method is used to examine the range of values for the sensitivity of ozone to chlorine perturbations that is possible within the currently accepted ranges for input data. It is found that model runs which predict ozone column losses much greater than 10 percent as a result of present fluorocarbon fluxes produce concentrations and column amounts in the present atmosphere which are inconsistent with the measurements for ClO, HCl, NO, NO2, and HNO3.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4954717','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4954717"><span>A Novel RSSI Prediction Using Imperialist Competition Algorithm (ICA), Radial Basis Function (RBF) and Firefly Algorithm (FFA) in Wireless Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Goudarzi, Shidrokh; Haslina Hassan, Wan; Abdalla Hashim, Aisha-Hassan; Soleymani, Seyed Ahmad; Anisi, Mohammad Hossein; Zakaria, Omar M.</p> <p>2016-01-01</p> <p>This study aims to design a vertical handover prediction method to minimize unnecessary handovers for a mobile node (MN) during the vertical handover process. 
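The constraint idea above, keeping only the Monte Carlo runs whose simulated present-day concentrations fall within the measured ranges and then examining the ozone sensitivities of the surviving runs, can be illustrated with a toy surrogate. The surrogate model and all numbers are hypothetical stand-ins for the photochemistry model, not values from the study.

```python
import random

# Toy version of constraining Monte Carlo model runs with observations:
# sample an uncertain input, run a simple surrogate "model", and keep only
# runs whose simulated present-day tracer lies inside the measured range.

random.seed(0)

def surrogate_model(rate_constant):
    tracer = 1.0 / rate_constant          # simulated present-day observable
    ozone_loss = 5.0 * rate_constant      # predicted sensitivity, in percent
    return tracer, ozone_loss

observed_range = (0.8, 1.6)               # measured tracer bounds (hypothetical)

accepted = []
for _ in range(10000):
    k = random.uniform(0.4, 2.0)          # prior uncertainty range for k
    tracer, loss = surrogate_model(k)
    if observed_range[0] <= tracer <= observed_range[1]:
        accepted.append(loss)

# The constrained ensemble spans a narrower range of predicted ozone loss
# than the unconstrained prior (k in [0.4, 2.0] implies losses of 2 to 10 %).
print(min(accepted), max(accepted))
```

The surviving runs span only the losses consistent with the observation window, mirroring how measured ClO, HCl, NO, NO2 and HNO3 abundances rule out the largest predicted ozone-column losses.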
This relies on a novel method for the prediction of a received signal strength indicator (RSSI) referred to as IRBF-FFA, which is designed by utilizing the imperialist competition algorithm (ICA) to train the radial basis function (RBF) network, and by hybridizing with the firefly algorithm (FFA) to predict the optimal solution. The prediction accuracy of the proposed IRBF-FFA model was validated by comparing it to support vector machine (SVM) and multilayer perceptron (MLP) models. In order to assess the model’s performance, we measured the coefficient of determination (R2), correlation coefficient (r), root mean square error (RMSE) and mean absolute percentage error (MAPE). The achieved results indicate that the IRBF-FFA model provides more precise predictions than the SVM and MLP models. The performance of the proposed model is analyzed through simulated and real-time RSSI measurements. The results also suggest that the IRBF-FFA model can be applied as an efficient technique for the accurate prediction of vertical handover. PMID:27438600</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18082757','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18082757"><span>Prediction of the partitioning behaviour of proteins in aqueous two-phase systems using only their amino acid composition.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Salgado, J Cristian; Andrews, Barbara A; Ortuzar, Maria Fernanda; Asenjo, Juan A</p> <p>2008-01-18</p> <p>The prediction of the partition behaviour of proteins in aqueous two-phase systems (ATPS) using mathematical models based on their amino acid composition was investigated. The predictive models are based on the average surface hydrophobicity (ASH).
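The validation metrics used in the handover study above have standard definitions, written out here explicitly; the RSSI values are hypothetical placeholders, not data from the study.

```python
import math

# Standard regression-validation metrics: R^2, RMSE, and MAPE.

def r_squared(actual, predicted):
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    return 100.0 / len(actual) * sum(abs((a - p) / a) for a, p in zip(actual, predicted))

# Hypothetical RSSI values in dBm:
actual    = [-60.0, -65.0, -70.0, -75.0, -80.0]
predicted = [-61.0, -64.0, -71.0, -74.0, -81.0]
print(r_squared(actual, predicted), rmse(actual, predicted), mape(actual, predicted))
```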
The ASH was estimated by means of models that use the three-dimensional structure of proteins and by models that use only the amino acid composition of proteins. These models were evaluated for a set of 11 proteins with known experimental partition coefficients in four two-phase systems: polyethylene glycol (PEG) 4000/phosphate, PEG/sulfate, PEG/citrate and PEG/dextran, considering three levels of NaCl concentration (0.0% w/w, 0.6% w/w and 8.8% w/w). The results indicate that such prediction is feasible, although the quality of the prediction depends strongly on the ATPS and its operational conditions, such as the NaCl concentration. The ATPS 0 model, which uses the three-dimensional structure, obtains results similar to those given by previous models based on variables measured in the laboratory. In addition, it maintains the main characteristics of the hydrophobic resolution and intrinsic hydrophobicity reported before. Three mathematical models, ATPS I-III, based only on the amino acid composition were evaluated. The best results were obtained by the ATPS I model, which assumes that all of the amino acids are completely exposed. The performance of the ATPS I model follows the behaviour reported previously, i.e., its correlation coefficients improve as the NaCl concentration in the system increases and, therefore, the effect of protein hydrophobicity prevails over other effects such as charge or size. Its best predictive performance was obtained for the PEG/dextran system at high NaCl concentration: for that system, the predictive capacity increased by at least 54.4% with respect to the models that use the three-dimensional structure of the protein, and the ATPS I model exhibits high correlation coefficients, higher than 0.88 on average. The ATPS I model exhibited correlation coefficients higher than 0.67 for the rest of the ATPS at high NaCl concentration.
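The ATPS I assumption, that every residue is fully exposed, reduces the ASH to a composition-weighted mean of per-residue hydrophobicities, which can be sketched directly. The hydrophobicity scale below is a generic Kyte-Doolittle-style scale and the composition is a toy example; neither is the parameterization fitted in the study.

```python
# "All residues exposed" average surface hydrophobicity from amino acid
# composition alone (no 3D structure needed). Scale values are illustrative.

HYDROPHOBICITY = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def average_surface_hydrophobicity(composition):
    """composition: residue -> count, e.g. obtained from the sequence."""
    total = sum(composition.values())
    return sum(HYDROPHOBICITY[aa] * n for aa, n in composition.items()) / total

# Toy composition:
comp = {'A': 10, 'L': 5, 'K': 8, 'D': 7}
print(round(average_surface_hydrophobicity(comp), 3))
```

A correlation of this single descriptor against measured partition coefficients, per ATPS and salt level, is then the regression step the abstract describes.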
Finally, we tested our best model, the ATPS I model, on the prediction of the partition coefficient of the protein invertase. We found that the predictive capacities of the ATPS I model are better in PEG/dextran systems, where the relative error of the prediction with respect to the experimental value is 15.6%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70190323','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70190323"><span>The development of a probabilistic approach to forecast coastal change</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Lentz, Erika E.; Hapke, Cheryl J.; Rosati, Julie D.; Wang, Ping; Roberts, Tiffany M.</p> <p>2011-01-01</p> <p>This study demonstrates the applicability of a Bayesian probabilistic model as an effective tool in predicting post-storm beach changes along sandy coastlines. Volume change and net shoreline movement are modeled for two study sites at Fire Island, New York in response to two extratropical storms in 2007 and 2009. Both study areas include modified areas adjacent to unmodified areas in morphologically different segments of coast. Predicted outcomes are evaluated against observed changes to test model accuracy and uncertainty along 163 cross-shore transects. Results show strong agreement in the cross validation of predictions vs. observations, with 70-82% accuracies reported. Although no consistent spatial pattern in inaccurate predictions could be determined, the highest prediction uncertainties appeared in locations that had been recently replenished. 
Further testing and model refinement are needed; however, these initial results show that Bayesian networks have the potential to serve as important decision-support tools in forecasting coastal change.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22097832','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22097832"><span>[Determination of acidity and vitamin C in apples using portable NIR analyzer].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Fan; Li, Ya-Ting; Gu, Xuan; Ma, Jiang; Fan, Xing; Wang, Xiao-Xuan; Zhang, Zhuo-Yong</p> <p>2011-09-01</p> <p>Near infrared (NIR) spectroscopy technology based on a portable NIR analyzer, combined with kernel Isomap algorithm and generalized regression neural network (GRNN) has been applied to establishing quantitative models for prediction of acidity and vitamin C in six kinds of apple samples. The obtained results demonstrated that the fitting and the predictive accuracy of the models with kernel Isomap algorithm were satisfactory. The correlation between actual and predicted values of calibration samples (R(c)) obtained by the acidity model was 0.9994, and for prediction samples (R(p)) was 0.9799. The root mean square error of prediction set (RMSEP) was 0.0558. For the vitamin C model, R(c) was 0.9891, R(p) was 0.9272, and RMSEP was 4.0431.
Results proved that the portable NIR analyzer can be a feasible tool for the determination of acidity and vitamin C in apples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19900027595&hterms=Classical+Perspectives&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3DClassical%2BPerspectives','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19900027595&hterms=Classical+Perspectives&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3DClassical%2BPerspectives"><span>Tropical forecasting - Predictability perspective</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shukla, J.</p> <p>1989-01-01</p> <p>Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. 
Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5827348','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5827348"><span>Predicting plant biomass accumulation from image-derived parameters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Chen, Dijun; Shi, Rongli; Pape, Jean-Michel; Neumann, Kerstin; Graner, Andreas; Chen, Ming; Klukas, Christian</p> <p>2018-01-01</p> <p>Abstract Background Image-based high-throughput phenotyping technologies have been rapidly developed in plant science recently, and they provide a great potential to gain more valuable information than traditionally destructive methods. Predicting plant biomass is regarded as a key purpose for plant breeders and ecologists. However, it is a great challenge to find a predictive biomass model across experiments. Results In the present study, we constructed 4 predictive models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology has been applied to 3 consecutive barley (Hordeum vulgare) experiments with control and stress treatments. The results proved that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy based on this model will contribute to relieving the phenotyping bottleneck in biomass measurement in breeding applications. The prediction performance is still relatively high across experiments under similar conditions. 
The relative contribution of individual features to biomass prediction was further quantified, revealing new insights into the phenotypic determinants of plant biomass. Furthermore, these methods could also be used to determine the most important image-based features related to plant biomass accumulation, which would be promising candidates for subsequent genetic mapping to uncover the genetic basis of biomass. Conclusions We have developed quantitative models to accurately predict plant biomass accumulation from image data. We anticipate that the analysis results will be useful to advance our understanding of the phenotypic determinants of plant biomass, and the statistical methods can be broadly used for other plant species. PMID:29346559</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29462177','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29462177"><span>Improving stability of prediction models based on correlated omics data by using network approaches.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tissier, Renaud; Houwing-Duistermaat, Jeanine; Rodríguez-Girondo, Mar</p> <p>2018-01-01</p> <p>Building prediction models based on complex omics datasets such as transcriptomics, proteomics and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability.
Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results, we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems: prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM), and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/5294107-ontogenetic-loss-phenotypic-plasticity-age-metamorphosis-tadpoles','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/5294107-ontogenetic-loss-phenotypic-plasticity-age-metamorphosis-tadpoles"><span>Ontogenetic loss of phenotypic plasticity of age at metamorphosis in tadpoles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Hensley, F.R.</p> <p>1993-12-01</p> <p>Amphibian larvae exhibit phenotypic plasticity in size at metamorphosis and duration of the larval period.
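Steps (1) and (2) of the three-step omics strategy above can be illustrated on toy data. Here modules are derived as connected components of a thresholded correlation network, a deliberately simple stand-in for the weighted-correlation-network and hierarchical-clustering machinery the study actually uses.

```python
import numpy as np

# Toy sketch: build a correlation network over features, then derive modules
# as connected components above a correlation threshold.

rng = np.random.default_rng(1)
n = 100
base1, base2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([                 # 4 features in 2 correlated blocks
    base1 + 0.1 * rng.normal(size=n),
    base1 + 0.1 * rng.normal(size=n),
    base2 + 0.1 * rng.normal(size=n),
    base2 + 0.1 * rng.normal(size=n),
])

corr = np.corrcoef(X, rowvar=False)
adjacency = np.abs(corr) > 0.7        # edges of the feature network

def modules(adj):
    """Connected components of the feature network (depth-first search)."""
    seen, comps = set(), []
    for start in range(adj.shape[0]):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack.extend(j for j in range(adj.shape[1]) if adj[i, j] and j not in comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

print(modules(adjacency))
```

Step (3) would then feed these module labels into a group-penalized regression; with the seeded data above the two planted blocks are recovered as two modules.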
I used Pseudacris crucifer tadpoles to test two models for predicting tadpole age and size at metamorphosis under changing environmental conditions. The Wilbur-Collins model states that metamorphosis is initiated as a function of a tadpole's size and relative growth rate, and predicts that changes in growth rate throughout the larval period affect age and size at metamorphosis. An alternative model, the fixed-rate model, states that age at metamorphosis is fixed early in larval life, and that subsequent changes in growth rate have no effect on the length of the larval period. My results confirm that food supplies affect both age and size at metamorphosis, but developmental rates became fixed at approximately Gosner (1960) stages 35-37. Neither model completely predicted these results. I suggest that the generally accepted Wilbur-Collins model is improved by incorporating a point of fixed developmental timing. Growth trajectories predicted from this modified model fit the results of this study better than trajectories based on either of the original models. The results of this study suggest a constraint that limits the simultaneous optimization of age and size at metamorphosis.
32 refs., 5 figs., 1 tab.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1344662-hierarchical-multi-scale-approach-validation-uncertainty-quantification-hyper-spectral-image-modeling','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1344662-hierarchical-multi-scale-approach-validation-uncertainty-quantification-hyper-spectral-image-modeling"><span>Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.</p> <p></p> <p>Validating predictive models and quantifying uncertainties inherent in the modeling process are critical components of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance the detection reliability of the hyper-spectral imaging technique and the confidence of users and stakeholders in model utilization and model outputs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27981450','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27981450"><span>Drug Distribution. Part 1.
Models to Predict Membrane Partitioning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nagar, Swati; Korzekwa, Ken</p> <p>2017-03-01</p> <p>Tissue partitioning is an important component of drug distribution and half-life. Protein binding and lipid partitioning together determine drug distribution. Two structure-based models to predict partitioning into microsomal membranes are presented. An orientation-based model was developed using a membrane template and atom-based relative free energy functions to select drug conformations and orientations for neutral and basic drugs. The resulting model predicts the correct membrane positions for nine compounds tested, and predicts the membrane partitioning for n = 67 drugs with an average fold-error of 2.4. Next, a more facile descriptor-based model was developed for acids, neutrals and bases. This model considers the partitioning of neutral and ionized species at equilibrium, and can predict membrane partitioning with an average fold-error of 2.0 (n = 92 drugs). 
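The fold-error summary quoted above is conventionally computed as the geometric mean of each compound's fold deviation (predicted/observed or its reciprocal, whichever is at least 1). The sketch below follows that common convention, which the paper may refine; the partition coefficients are hypothetical, not the study's data.

```python
import math

# Average fold-error as the geometric mean of per-compound fold deviations.

def average_fold_error(observed, predicted):
    logs = [abs(math.log10(p / o)) for o, p in zip(observed, predicted)]
    return 10 ** (sum(logs) / len(logs))

# Hypothetical membrane partition coefficients:
observed  = [120.0, 35.0, 800.0, 5.0]
predicted = [240.0, 30.0, 400.0, 9.0]
print(round(average_fold_error(observed, predicted), 2))
```

Working in log space keeps 2-fold over-prediction and 2-fold under-prediction symmetric, which is why fold-error rather than absolute error is the usual yardstick for partition coefficients.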
Together these models suggest that drug orientation is important for membrane partitioning and that membrane partitioning can be well predicted from physicochemical properties.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008AIPC.1060..174J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008AIPC.1060..174J"><span>Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jung, Insung; Koo, Lockjo; Wang, Gi-Nam</p> <p>2008-11-01</p> <p>The objective of this paper was to design a human bio-signal prediction system that decreases prediction error using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction. However, such models still leave a residual error between the real value and the predicted result. We therefore designed a two-state neural network model that compensates for this residual error, which could be applied to the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. We determined that most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model.
In particular, predictions for small-sample time series were more accurate than those of the standard MLP model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20160000368&hterms=analysis+climatic&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Danalysis%2Bclimatic','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20160000368&hterms=analysis+climatic&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Danalysis%2Bclimatic"><span>Uncertainties in Predicting Rice Yield by Current Crop Models Under a Wide Range of Climatic Conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Adam, Myriam; Bregaglio, Simone; Buis, Samuel; Confalonieri, Roberto; Fumoto, Tamon</p> <p>2014-01-01</p> <p>Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified.
We evaluated 13 rice models against multi-year experimental yield data at four sites with diverse climatic conditions in Asia, and examined whether different modeling approaches to major physiological processes contribute to the uncertainties in predicting field-measured yields and in the sensitivity to changes in temperature and CO2 concentration [CO2]. We also examined whether the use of an ensemble of crop models can reduce these uncertainties. Individual models did not consistently reproduce both experimental and regional yields well, and uncertainty was larger at the warmest and coolest sites. The variation in yield projections was larger among crop models than the variation resulting from 16 global climate model-based scenarios. However, the mean of the predictions of all crop models reproduced the experimental data, with an uncertainty of less than 10 percent of measured yields. Using an ensemble of eight models calibrated only for phenology, or five models calibrated in detail, resulted in an uncertainty equivalent to that of the measured yield in well-controlled agronomic field experiments.
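The finding that the ensemble mean tracks observations better than many individual models is a generic property of averaging models whose errors partly cancel; the toy numbers below are synthetic, not rice-model or DMIP outputs.

```python
import random

# Toy ensemble illustration: individual "crop models" with independent
# biases scatter around the true yield, while their mean deviates less
# than the worst individual model (the mean lies within the prediction range).

random.seed(42)
true_yield = 8.0                     # t/ha, hypothetical
n_models = 13

model_predictions = [true_yield + random.gauss(0.0, 1.0) for _ in range(n_models)]
individual_errors = [abs(p - true_yield) for p in model_predictions]
ensemble_error = abs(sum(model_predictions) / n_models - true_yield)

print(max(individual_errors), ensemble_error)
```

Since the ensemble mean always lies between the smallest and largest prediction, its error can never exceed the worst individual error, and when biases have mixed signs it is typically far smaller.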
Sensitivity analysis indicates the necessity to improve the accuracy in predicting both biomass and harvest index in response to increasing [CO2] and temperature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26009266','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26009266"><span>Assessment of driver stopping prediction models before and after the onset of yellow using two driving simulator datasets.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ghanipoor Machiani, Sahar; Abbas, Montasir</p> <p>2016-11-01</p> <p>Accurate modeling of driver decisions in dilemma zones (DZ), where drivers are not sure whether to stop or go at the onset of yellow, can be used to increase safety at signalized intersections. This study utilized data obtained from two different driving simulator studies (VT-SCORES and NADS datasets) to investigate the possibility of developing accurate driver-decision prediction/classification models in DZ. Canonical discriminant analysis was used to construct the prediction models, and two timeframes were considered. The first timeframe used data collected during green immediately before the onset of yellow, and the second timeframe used data collected during the first three seconds after the onset of yellow. Signal protection algorithms could use the results of the prediction model during the first timeframe to decide the best time for ending the green signal, and could use the results of the prediction model during the first three seconds of yellow to extend the clearance interval. It was found that the discriminant model using data collected during the first three seconds of yellow was the most accurate, at 99% accuracy. 
It was also found that data collection should focus on variables that are related to speed, acceleration, time, and distance to intersection, as opposed to secondary variables, such as pavement conditions, since secondary variables did not significantly change the accuracy of the prediction models. The results reveal a promising possibility for incorporating the developed models in traffic-signal controllers to improve DZ-protection strategies. Copyright © 2015 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10696E..2CS','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10696E..2CS"><span>Researches of fruit quality prediction model based on near infrared spectrum</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, Yulin; Li, Lian</p> <p>2018-04-01</p> <p>With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits, therefore the measurement of fruit internal quality is increasingly imperative. In general, nondestructive soluble solid content (SSC) and total acid content (TAC) analysis of fruits is vital and effective for quality measurement in global fresh produce markets, so in this paper, we aim at establishing a novel fruit internal quality prediction model based on SSC and TAC for Near Infrared Spectrum. 
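The canonical discriminant analysis used in the dilemma-zone study above can be sketched, for the two-class stop/go case, as a Fisher linear discriminant on speed and distance-to-intersection features. All distributions and numbers below are synthetic assumptions, not the simulator datasets.

```python
import numpy as np

# Two-class linear discriminant: classify stop vs. go from
# (speed, distance-to-intersection) at yellow onset. Synthetic data.

rng = np.random.default_rng(7)
n = 200
# Hypothetical class means: "go" drivers are faster and closer. Units (m/s, m).
go   = rng.normal(loc=[15.0, 30.0], scale=[2.0, 8.0], size=(n, 2))
stop = rng.normal(loc=[11.0, 70.0], scale=[2.0, 8.0], size=(n, 2))

X = np.vstack([go, stop])
y = np.array([1] * n + [0] * n)          # 1 = go, 0 = stop

# Fisher discriminant direction: w = Sw^{-1} (mu1 - mu0)
mu1, mu0 = go.mean(axis=0), stop.mean(axis=0)
Sw = np.cov(go, rowvar=False) + np.cov(stop, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu1 + mu0) / 2.0        # midpoint between projected means

predictions = (X @ w > threshold).astype(int)
accuracy = (predictions == y).mean()
print(accuracy)
```

With well-separated classes the linear rule classifies nearly all drivers correctly, which mirrors why a discriminant on speed and distance variables performed so well in the study.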
First, fruit quality prediction models based on the PCA + BP neural network, PCA + GRNN network, PCA + BP AdaBoost strong classifier, PCA + ELM and PCA + LS_SVM classifier are designed and implemented. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Third, we obtain the optimal models by comparing the 15 kinds of prediction models on the basis of a multi-classifier competition mechanism; specifically, non-parametric estimation is introduced to measure the effectiveness of each model, with the reliability and variance of the non-parametric estimate used to evaluate the prediction results and the estimated value and confidence interval serving as a reference. The experimental results demonstrate that this approach can achieve an optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the non-parametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27794285','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27794285"><span>Assessment of traffic noise levels in urban areas using different soft computing techniques.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D</p> <p>2016-10-01</p> <p>Available traffic noise prediction models are usually based on regression analysis of experimental data, and this paper presents the application of soft computing techniques in traffic noise
prediction. Two mathematical models are proposed and their predictions are compared to data collected by traffic noise monitoring in urban areas, as well as to predictions of commonly used traffic noise models. The results show that the application of evolutionary algorithms and neural networks may improve both the development process and the accuracy of traffic noise prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960023330','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960023330"><span>Prediction of Airfoil Characteristics With Higher Order Turbulence Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Gatski, Thomas B.</p> <p>1996-01-01</p> <p>This study focuses on the prediction of airfoil characteristics, including lift and drag over a range of Reynolds numbers. Two different turbulence models, which represent two different types of models, are tested. The first is a standard isotropic eddy-viscosity two-equation model, and the second is an explicit algebraic stress model (EASM). The turbulent flow field over a general-aviation airfoil (GA(W)-2) at three Reynolds numbers is studied. At each Reynolds number, predicted lift and drag values at different angles of attack are compared with experimental results, and predicted variations of stall locations with Reynolds number are compared with experimental data. Finally, the size of the separation zone predicted by each model is analyzed, and correlated with the behavior of the lift coefficient near stall. In summary, the EASM model is able to predict the lift and drag coefficients over a wider range of angles of attack than the two-equation model for the three Reynolds numbers studied.
However, both models are unable to predict the correct lift and drag behavior near the stall angle, and for the lowest Reynolds number case, the two-equation model did not predict separation on the airfoil near stall.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013CG.....51..350P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013CG.....51..350P"><span>A comparative study on the predictive ability of the decision tree, support vector machine and neuro-fuzzy models in landslide susceptibility mapping using GIS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pradhan, Biswajeet</p> <p>2013-02-01</p> <p>The purpose of the present study is to compare the prediction performances of three different approaches, namely decision tree (DT), support vector machine (SVM) and adaptive neuro-fuzzy inference system (ANFIS), for landslide susceptibility mapping in the Penang Hill area, Malaysia. The necessary input parameters for the landslide susceptibility assessments were obtained from various sources. At first, landslide locations were identified from aerial photographs and field surveys, and a total of 113 landslide locations were mapped. The study area contains 340,608 pixels, of which 8,403 pixels contain landslides. The landslide inventory was randomly partitioned into two subsets: (1) part 1, containing 50% of the data (4,000 landslide grid cells), was used to train the models; (2) part 2, the remaining 50% (4,000 landslide grid cells), was used to validate the three models and confirm their accuracy. The digitally processed images of the input parameters were combined in GIS. Finally, landslide susceptibility maps were produced, and the performances were assessed and discussed.
A total of fifteen landslide susceptibility maps were produced using the DT, SVM and ANFIS based models, and the resultant maps were validated using the landslide locations. The prediction performance of these maps was checked with receiver operating characteristic (ROC) analysis, using both success rate and prediction rate curves. The validation results showed that the area under the ROC curve for the fifteen models produced using DT, SVM and ANFIS varied from 0.8204 to 0.9421 for the success rate curves and from 0.7580 to 0.8307 for the prediction rate curves. Moreover, the prediction rate curves revealed that model 5 of DT has slightly higher prediction performance (83.07), whereas the success rate curves showed that model 5 of ANFIS has the best prediction capability (94.21) among all models. The results of this study showed that landslide susceptibility mapping in the Penang Hill area using the three approaches (i.e., DT, SVM and ANFIS) is viable. As far as the performance of the models is concerned, the results appeared to be quite satisfactory, i.e., the zones determined on the map are zones of relative susceptibility.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_13 --> <div id="page_14" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="261"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1165163','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1165163"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Branstator, Grant</p> <p></p> <p>The overall aim of our project was to quantify and characterize predictability of the climate as it pertains to decadal time scale predictions. By predictability we mean the degree to which a climate forecast can be distinguished from the climate that exists at initial forecast time, taking into consideration the growth of uncertainty that occurs as a result of the climate system being chaotic. In our project we were especially interested in predictability that arises from initializing forecasts from some specific state, though we also contrast this predictability with predictability arising from forecasting the reaction of the system to external forcing (for example, changes in greenhouse gas concentration). Also, we put special emphasis on the predictability of prominent intrinsic patterns of the system because they often dominate system behavior. Highlights from this work include: • Development of novel methods for estimating the predictability of climate forecast models. • Quantification of the initial value predictability limits of ocean heat content and the overturning circulation in the Atlantic as they are represented in various state-of-the-art climate models.
These limits varied substantially from model to model but on average were about a decade, with North Atlantic heat content tending to be more predictable than North Pacific heat content. • Comparison of predictability resulting from knowledge of the current state of the climate system with predictability resulting from estimates of how the climate system will react to changes in greenhouse gas concentrations. It turned out that knowledge of the initial state produces a larger impact on forecasts for the first 5 to 10 years of projections. • Estimation of the predictability of dominant patterns of ocean variability including well-known patterns of variability in the North Pacific and North Atlantic. For the most part these patterns were predictable for 5 to 10 years. • Determination of especially predictable patterns in the North Atlantic. The most predictable of these retain predictability substantially longer than generic patterns, with some being predictable for two decades.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110010236','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110010236"><span>Investigating Some Technical Issues on Cohesive Zone Modeling of Fracture</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wang, John T.</p> <p>2011-01-01</p> <p>This study investigates some technical issues related to the use of cohesive zone models (CZMs) in modeling fracture processes.
These issues include: why cohesive laws of different shapes can produce similar fracture predictions; under what conditions CZM predictions have a high degree of agreement with linear elastic fracture mechanics (LEFM) analysis results; when the shape of cohesive laws becomes important in the fracture predictions; and why the opening profile along the cohesive zone length needs to be accurately predicted. Two cohesive models were used in this study to address these technical issues. They are the linear softening cohesive model and the Dugdale perfectly plastic cohesive model. Each cohesive model comprises five cohesive laws with different maximum tractions. All cohesive laws have the same cohesive work rate (CWR), which is defined by the area under the traction-separation curve. The effects of the maximum traction on the cohesive zone length and the critical remote applied stress are investigated for both models. For a CZM to predict a fracture load similar to that obtained by an LEFM analysis, the cohesive zone length needs to be much smaller than the crack length, which reflects the small-scale yielding condition required for an LEFM analysis to be valid. For large-scale cohesive zone cases, the predicted critical remote applied stresses depend on the shape of the cohesive models used and can significantly deviate from LEFM results.
Furthermore, this study also reveals the importance of accurately predicting the cohesive zone profile in determining the critical remote applied load.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3965570','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3965570"><span>Uniting Cheminformatics and Chemical Theory To Predict the Intrinsic Aqueous Solubility of Crystalline Druglike Molecules</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2014-01-01</p> <p>We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. 
These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22356882','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22356882"><span>Predicting students' physical activity and health-related well-being: a prospective cross-domain investigation of motivation across school physical education and exercise settings.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Standage, Martyn; Gillison, Fiona B; Ntoumanis, Nikos; Treasure, Darren C</p> <p>2012-02-01</p> <p>A three-wave prospective design was used to assess a model of motivation guided by self-determination theory (Ryan & Deci, 2008) spanning the contexts of school physical education (PE) and exercise. The outcome variables examined were health-related quality of life (HRQoL), physical self-concept (PSC), and 4 days of objectively assessed estimates of activity. Secondary school students (n = 494) completed questionnaires at three separate time points and were familiarized with how to use a sealed pedometer. Results of structural equation modeling supported a model in which perceptions of autonomy support from a PE teacher positively predicted PE-related need satisfaction (autonomy, competence, and relatedness). Competence predicted PSC, whereas relatedness predicted HRQoL. Autonomy and competence positively predicted autonomous motivation toward PE, which in turn positively predicted autonomous motivation toward exercise.
Autonomous motivation toward exercise positively predicted step count, HRQoL, and PSC. Results of multisample structural equation modeling supported gender invariance. Suggestions for future work are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/19955','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/19955"><span>A model for predicting air quality along highways.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>1973-01-01</p> <p>The subject of this report is an air quality prediction model for highways, AIRPOL Version 2, July 1973. AIRPOL has been developed by modifying the basic Gaussian approach to gaseous dispersion. The resultant model is smooth and continuous throughout...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70029717','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70029717"><span>Latin hypercube approach to estimate uncertainty in ground water vulnerability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.</p> <p>2007-01-01</p> <p>A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. 
Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E%26ES..121e2001W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E%26ES..121e2001W"><span>Artificial neural network analysis based on genetic algorithm to predict the performance characteristics of a cross flow cooling tower</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wu, Jiasheng; Cao, Lin; Zhang, Guoqiang</p> <p>2018-02-01</p> <p>Cooling towers have been widely used as cooling equipment in air conditioning, and they would have broad application prospects if they could be used in reverse as a heat source under heat pump heating conditions.
In view of the complex non-linear relationships among the parameters in the heat and mass transfer process inside the tower, a BP neural network model optimized by a genetic algorithm (GABP neural network model) is established in this paper for the reverse use of a cross flow cooling tower. The model adopts a structure of 6 inputs, 13 hidden nodes and 8 outputs. With this model, eight main performance parameters were predicted, including the outlet air dry bulb temperature, wet bulb temperature, water temperature, heat, sensible heat ratio, heat absorbing efficiency and Lewis number. Furthermore, the established network model is used to predict the water temperature and heat absorption of the tower at different inlet temperatures. The mean relative errors (MRE) between the BP predicted values and experimental values are 4.47%, 3.63%, 2.38%, 3.71%, 6.35%, 3.14%, 13.95% and 6.80%, respectively; the mean relative errors between the GABP predicted values and experimental values are 2.66%, 3.04%, 2.27%, 3.02%, 6.89%, 3.17%, 11.50% and 6.57%, respectively. The results show that the prediction results of the GABP network model are better than those of the BP network model, and the simulation results are basically consistent with the actual situation. The GABP network model can well predict the heat and mass transfer performance of the cross flow cooling tower.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AcASn..56..483L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AcASn..56..483L"><span>Medium- and Long-term Prediction of LOD Change with the Leap-step Autoregressive Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Q. B.; Wang, Q. J.; Lei, M.
F.</p> <p>2015-09-01</p> <p>It is known that the accuracies of medium- and long-term prediction of changes of length of day (LOD) based on the combined least-squares and autoregressive (LS+AR) model decrease gradually. The leap-step autoregressive (LSAR) model is more accurate and stable in medium- and long-term prediction; therefore it is used to forecast the LOD changes in this work. The LOD series from EOP 08 C04, provided by the IERS (International Earth Rotation and Reference Systems Service), is then used to compare the effectiveness of the LSAR and traditional AR methods. The predicted series resulting from the two models show that the prediction accuracy of the LSAR model is better than that of the AR model in medium- and long-term prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3053181','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3053181"><span>Improving CSF biomarker accuracy in predicting prevalent and incident Alzheimer disease</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Fagan, A.M.; Williams, M.M.; Ghoshal, N.; Aeschleman, M.; Grant, E.A.; Marcus, D.S.; Mintun, M.A.; Holtzman, D.M.; Morris, J.C.</p> <p>2011-01-01</p> <p>Objective: To investigate factors, including cognitive and brain reserve, which may independently predict prevalent and incident dementia of the Alzheimer type (DAT) and to determine whether inclusion of identified factors increases the predictive accuracy of the CSF biomarkers Aβ42, tau, ptau181, tau/Aβ42, and ptau181/Aβ42. Methods: Logistic regression identified variables that predicted prevalent DAT when considered together with each CSF biomarker in a cross-sectional sample of 201 participants with normal cognition and 46 with DAT.
The area under the receiver operating characteristic curve (AUC) from the resulting model was compared with the AUC generated using the biomarker alone. In a second sample with normal cognition at baseline and longitudinal data available (n = 213), Cox proportional hazards models identified variables that predicted incident DAT together with each biomarker, and the resulting model's concordance probability estimate (CPE) was compared with the CPE generated using the biomarker alone. Results: APOE genotype including an ε4 allele, male gender, and smaller normalized whole brain volumes (nWBV) were cross-sectionally associated with DAT when considered together with every biomarker. In the longitudinal sample (mean follow-up = 3.2 years), 14 participants (6.6%) developed DAT. Older age predicted a faster time to DAT in every model, and greater education predicted a slower time in 4 of 5 models. Inclusion of ancillary variables resulted in better cross-sectional prediction of DAT for all biomarkers (p < 0.0021), and better longitudinal prediction for 4 of 5 biomarkers (p < 0.0022). Conclusions: The predictive accuracy of CSF biomarkers is improved by including age, education, and nWBV in analyses.
PMID:21228296</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23698560','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23698560"><span>Animal models for microbicide safety and efficacy testing.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Veazey, Ronald S</p> <p>2013-07-01</p> <p>Early studies have cast doubt on the utility of animal models for predicting success or failure of HIV-prevention strategies, but results of multiple human phase 3 microbicide trials, and interrogations into the discrepancies between human and animal model trials, indicate that animal models were, and are, predictive of safety and efficacy of microbicide candidates. Recent studies have shown that topically applied vaginal gels, and oral prophylaxis using single or combination antiretrovirals are indeed effective in preventing sexual HIV transmission in humans, and all of these successes were predicted in animal models. Further, prior discrepancies between animal and human results are finally being deciphered as inadequacies in study design in the model, or quite often, noncompliance in human trials, the latter being increasingly recognized as a major problem in human microbicide trials. Successful microbicide studies in humans have validated results in animal models, and several ongoing studies are further investigating questions of tissue distribution, duration of efficacy, and continued safety with repeated application of these, and other promising microbicide candidates in both murine and nonhuman primate models. 
Now that we finally have positive correlations with prevention strategies and protection from HIV transmission, we can retrospectively validate animal models for their ability to predict these results, and more importantly, prospectively use these models to select and advance even safer, more effective, and importantly, more durable microbicide candidates into human trials.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27865487','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27865487"><span>Prediction and validation of residual feed intake and dry matter intake in Danish lactating dairy cows using mid-infrared spectroscopy of milk.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shetty, N; Løvendahl, P; Lund, M S; Buitenhuis, A J</p> <p>2017-01-01</p> <p>The present study explored the effectiveness of Fourier transform mid-infrared (FT-IR) spectral profiles as a predictor for dry matter intake (DMI) and residual feed intake (RFI). The partial least squares regression method was used to develop the prediction models. The models were validated using different external test sets, one randomly leaving out 20% of the records (validation A), the second randomly leaving out 20% of cows (validation B), and a third (for DMI prediction models) randomly leaving out one cow (validation C). The data included 1,044 records from 140 cows; 97 were Danish Holstein and 43 Danish Jersey. Results showed better accuracies for validation A compared with other validation methods. Milk yield (MY) contributed largely to DMI prediction; MY explained 59% of the variation and the validated model error root mean square error of prediction (RMSEP) was 2.24kg. 
The model was improved by adding live weight (LW) as an additional predictor trait, where the accuracy R2 increased from 0.59 to 0.72 and the error RMSEP decreased from 2.24 to 1.83 kg. When only the milk FT-IR spectral profile was used in DMI prediction, a lower prediction ability was obtained, with R2 = 0.30 and RMSEP = 2.91 kg. However, once the spectral information was added, along with MY and LW as predictors, model accuracy improved: R2 increased to 0.81 and RMSEP decreased to 1.49 kg. Prediction accuracies of RFI changed throughout lactation. The RFI prediction model for the early-lactation stage was better compared with across lactation or mid- and late-lactation stages, with R2 = 0.46 and RMSEP = 1.70. The most important spectral wavenumbers that contributed to DMI and RFI prediction models included fat, protein, and lactose peaks. Comparable prediction results were obtained when using infrared-predicted fat, protein, and lactose instead of full spectra, indicating that FT-IR spectral data do not add significant new information to improve DMI and RFI prediction models. Therefore, in practice, if full FT-IR spectral data are not stored, it is possible to achieve similar DMI or RFI prediction results based on standard milk control data. For DMI, the milk fat region was responsible for the major variation in milk spectra; for RFI, the major variation in milk spectra was within the milk protein region. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100036216','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100036216"><span>The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dahl, Milo D.; Khavaran, Abbas</p> <p>2010-01-01</p> <p>Engineering applications for aircraft noise prediction contain models for physical phenomenon that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. 
In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1168639-covariant-spectator-theory-np-scattering-deuteron-quadrupole-moment','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1168639-covariant-spectator-theory-np-scattering-deuteron-quadrupole-moment"><span>Covariant spectator theory of np scattering: Deuteron quadrupole moment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Gross, Franz</p> <p>2015-01-26</p> <p>The deuteron quadrupole moment is calculated using two CST model wave functions obtained from the 2007 high precision fits to np scattering data. Included in the calculation are a new class of isoscalar np interaction currents automatically generated by the nuclear force model used in these fits. The prediction for model WJC-1, with larger relativistic P-state components, is 2.5% smaller than the experimental result, in common with the inability of models prior to 2014 to predict this important quantity.
However, model WJC-2, with very small P-state components, gives agreement to better than 1%, similar to the results obtained recently from χEFT predictions to order N3LO.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16713240','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16713240"><span>Gray correlation analysis and prediction models of living refuse generation in Shanghai city.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Gousheng; Yu, Jianguo</p> <p>2007-01-01</p> <p>A better understanding of the factors that affect the generation of municipal living refuse (MLF) and the accurate prediction of its generation are crucial for municipal planning projects and city management. Up to now, most of the design efforts have been based on a rough prediction of MLF without any actual support. In this paper, based on published data of socioeconomic variables and MLF generation from 1990 to 2003 in the city of Shanghai, the main factors that affect MLF generation have been quantitatively studied using the gray correlation coefficient method. Several gray models, such as GM(1,1), GIM(1), GPPM(1) and GLPM(1), have been studied, and the predicted results are verified with a subsequent residual test. Results show that, among the seven selected factors, consumption of gas, water and electricity are the three largest factors affecting MLF generation, and GLPM(1) is the optimal model for predicting MLF generation. Through this model, the predicted MLF generation in 2010 in Shanghai will be 7.65 million tons.
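As background, the GM(1,1) model named in the abstract above follows a standard construction: accumulate the series, fit the whitening equation by least squares, then difference the fitted time response. A minimal sketch is shown below; the refuse series used is made up for illustration, not the Shanghai data.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit a GM(1,1) gray model to the series x0 and forecast `horizon` steps ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # 1-AGO: accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background (mean) sequence of x1
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # develop coefficient a, gray input b
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.diff(x1_hat, prepend=0.0)              # inverse AGO restores the original scale
    return x0_hat[len(x0):]

# Made-up refuse series (million tons/year), growing roughly 5% per year
series = [5.0, 5.25, 5.51, 5.79, 6.08, 6.38]
print(gm11_forecast(series, 2))  # two-step-ahead forecast
```

Because the time response is a single exponential, GM(1,1) suits short, smoothly growing series like annual refuse totals; the residual test mentioned in the abstract then checks the fitted values against the observations.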
The methods and results developed in this paper can provide valuable information for MLF management and related municipal planning projects.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29453689','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29453689"><span>Sustained sensorimotor control as intermittent decisions about prediction errors: computational framework and application to ground vehicle steering.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Markkula, Gustav; Boer, Erwin; Romano, Richard; Merat, Natasha</p> <p>2018-06-01</p> <p>A conceptual and computational framework is proposed for modelling of human sensorimotor control and is exemplified for the sensorimotor task of steering a car. The framework emphasises control intermittency and extends on existing models by suggesting that the nervous system implements intermittent control using a combination of (1) motor primitives, (2) prediction of sensory outcomes of motor actions, and (3) evidence accumulation of prediction errors. It is shown that approximate but useful sensory predictions in the intermittent control context can be constructed without detailed forward models, as a superposition of simple prediction primitives, resembling neurobiologically observed corollary discharges. The proposed mathematical framework allows straightforward extension to intermittent behaviour from existing one-dimensional continuous models in the linear control and ecological psychology traditions. 
Empirical data from a driving simulator are used in model-fitting analyses to test some of the framework's main theoretical predictions: it is shown that human steering control, in routine lane-keeping and in a demanding near-limit task, is better described as a sequence of discrete stepwise control adjustments than as continuous control. Results on the possible roles of sensory prediction in control adjustment amplitudes, and of evidence accumulation mechanisms in control onset timing, show trends that match the theoretical predictions; these warrant further investigation. The results for the accumulation-based model align with other recent literature, in a possibly converging case against the type of threshold mechanisms that are often assumed in existing models of intermittent control.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1834d0033W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1834d0033W"><span>Predictive modeling of surimi cake shelf life at different storage temperatures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Yatong; Hou, Yanhua; Wang, Quanfu; Cui, Bingqing; Zhang, Xiangyu; Li, Xuepeng; Li, Yujin; Liu, Yuanping</p> <p>2017-04-01</p> <p>An Arrhenius model for shelf-life prediction based on the TBARS index was established in this study. The results showed that AV, POV, COV and TBARS changed significantly as temperature increased, and the reaction rate constant k was obtained from a first-order reaction kinetics model. A secondary model was then fitted based on the Arrhenius equation, and TBARS gave the best fitting accuracy in both the primary and secondary models (R²≥0.95). 
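The two-stage fit described above (a first-order kinetic fit per storage temperature, then an Arrhenius fit of the rate constants) can be sketched as follows. The kinetic form and all numbers in the usage are illustrative stand-ins, not the paper's surimi data:

```python
import numpy as np

def first_order_k(times, values):
    """Estimate a first-order rate constant from ln-linearised data,
    assuming C(t) = C0 * exp(k t)."""
    slope, _intercept = np.polyfit(times, np.log(values), 1)
    return slope

def arrhenius_fit(temps_K, ks):
    """Fit ln k = ln A - Ea/(R T); returns (Ea in J/mol, pre-factor A)."""
    R = 8.314
    slope, intercept = np.polyfit(1.0 / np.asarray(temps_K), np.log(ks), 1)
    return -slope * R, np.exp(intercept)

def shelf_life(temp_K, Ea, A, c0, c_limit):
    """Time for the index to grow from c0 to c_limit at temp_K."""
    R = 8.314
    k = A * np.exp(-Ea / (R * temp_K))
    return np.log(c_limit / c0) / k
```

With rate constants estimated at each storage temperature, `arrhenius_fit` recovers the activation energy, and `shelf_life` inverts the first-order law at the temperature of interest.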
The verification test indicated that the relative error between the predicted and actual shelf life was within ±10%, suggesting the model can predict the shelf life of surimi cake.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28757221','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28757221"><span>An empirical model for prediction of household solid waste generation rate - A case study of Dhanbad, India.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kumar, Atul; Samadder, S R</p> <p>2017-10-01</p> <p>Accurate prediction of the quantity of household solid waste generation is essential for effective management of municipal solid waste (MSW). In actual practice, modelling methods are often found useful for precise prediction of MSW generation rates. In this study, two models have been proposed that establish the relationships between the household solid waste generation rate and socioeconomic parameters such as household size, total family income, education, occupation and fuel used in the kitchen. The multiple linear regression technique was applied to develop the two models, one for the prediction of the biodegradable MSW generation rate and the other for the non-biodegradable MSW generation rate, for individual households in the city of Dhanbad, India. The results of the two models showed that the coefficients of determination (R²) were 0.782 for the biodegradable waste generation rate and 0.676 for the non-biodegradable waste generation rate using the selected independent variables. The accuracy tests of the developed models showed convincing results, as the predicted values were very close to the observed values. 
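The multiple linear regression underlying both models reduces to ordinary least squares plus the coefficient of determination; a self-contained sketch on synthetic data (not the Dhanbad survey data):

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, R^2).
    beta[0] is the intercept, beta[1:] the slopes for the columns of X."""
    Xd = np.column_stack([np.ones(len(X)), X])   # design matrix with intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    y_hat = Xd @ beta
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return beta, 1.0 - ss_res / ss_tot
```

On noiseless synthetic data the fit recovers the generating coefficients and R² approaches 1; on real survey data the reported R² values (0.782 and 0.676) indicate how much variance the socioeconomic predictors explain.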
Validation of the developed models with a new set of data indicated a good fit for actual prediction purposes, with predicted R² values of 0.76 and 0.64 for the biodegradable and non-biodegradable MSW generation rates, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhDT.......152K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhDT.......152K"><span>Predicting the Overall Spatial Quality of Automotive Audio Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Koya, Daisuke</p> <p></p> <p>The spatial quality of automotive audio systems is often compromised due to their non-ideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but in less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. 
Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, metrics proposed in the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that are interaural cross-correlation (IACC) based, that relate to localisation of the frontal audio scene, and that account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R² = 0.85 and root-mean-square error (RMSE) = 11.03%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24855881','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24855881"><span>Predictive models of alcohol use based on attitudes and individual values.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>García del Castillo Rodríguez, José A; López-Sánchez, Carmen; Quiles Soler, M Carmen; García del Castillo-López, Alvaro; Gázquez Pertusa, Mónica; Marzo Campos, Juan Carlos; Inglés, Candido J</p> <p>2013-01-01</p> <p>Two predictive models are developed in this article: the first is designed to predict people's attitudes to alcoholic drinks, while the second sets out to predict the use of alcohol in relation to selected individual values. University students (N = 1,500) were recruited through stratified sampling based on sex and academic discipline. 
The questionnaire collected information on participants' alcohol use, attitudes and personal values. The results show that the attitudes model correctly classifies 76.3% of cases. Likewise, the model for level of alcohol use correctly classifies 82% of cases. According to our results, we can conclude that a series of individual values influence drinking and attitudes to alcohol use, which provides us with a potentially powerful instrument for developing preventive intervention programs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70042544','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70042544"><span>Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki</p> <p>2012-01-01</p> <p>The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or a fixed recurrence interval. 
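The competing models can be compared on a toy catalogue in a walk-forward fashion: a fixed-recurrence model predicts the running mean inter-event time, while a time-predictable model predicts the time needed to reload the latest event's slip at an estimated loading rate. A hypothetical sketch on a synthetic catalogue (illustrative only, not the California/Taiwan/Japan data):

```python
import numpy as np

def compare_recurrence_models(intervals, slips):
    """Walk-forward comparison on a repeating-earthquake catalogue (toy).

    intervals[i] is the time from event i to event i+1; slips[i] is the slip
    of event i. The fixed-recurrence model predicts the running mean interval;
    the time-predictable model predicts the time to reload slips[i] at the
    loading rate estimated from the history. Returns the two RMSEs.
    """
    intervals = np.asarray(intervals, float)
    slips = np.asarray(slips, float)
    errs_fixed, errs_tp = [], []
    for i in range(2, len(intervals)):
        pred_fixed = intervals[:i].mean()
        rate = slips[:i].sum() / intervals[:i].sum()   # slip per unit time
        pred_tp = slips[i] / rate                      # time-predictable forecast
        errs_fixed.append(intervals[i] - pred_fixed)
        errs_tp.append(intervals[i] - pred_tp)
    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return rmse(errs_fixed), rmse(errs_tp)
```

On a catalogue whose intervals are nearly constant while slips scatter, the fixed-recurrence RMSE is much smaller, which is the qualitative pattern the paper reports.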
This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29422455','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29422455"><span>Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi</p> <p>2018-03-13</p> <p>Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoid content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining clustering concepts with partial least squares (PLS) methods to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, and k-means and Kohonen self-organizing map clustering algorithms were applied to group the full spectra into several clusters, with a sub-PLS regression model developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection partial least squares (VIP-PLS), selectivity ratio partial least squares (SR-PLS), and interval partial least squares (iPLS) models and a full-spectrum PLS model were investigated and the results were compared. 
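All of the compared methods above are built on PLS regression; the common core can be shown with a one-component PLS1 fit in the NIPALS style (the clustering wrapper of CL-PLS is omitted, and the spectra in the usage are synthetic):

```python
import numpy as np

def pls1_fit(X, y):
    """One-component PLS1 (NIPALS style); returns a prediction function.

    X: (n, p) spectra, y: (n,) response; both are centred internally.
    """
    xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - xm, y - ym
    w = Xc.T @ yc
    w /= np.linalg.norm(w)          # weight vector: direction of max covariance
    t = Xc @ w                      # scores of the samples on that direction
    b = (yc @ t) / (t @ t)          # regress the response on the scores
    return lambda Xnew: ym + b * ((Xnew - xm) @ w)
```

Interval-selection schemes such as iPLS or CL-PLS then fit models like this one on subsets of the wavelength axis and keep the subsets that predict best.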
The results showed that CL-PLS gave the best performance for flavonoid prediction using synchronous fluorescence spectra.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27097603','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27097603"><span>A new seasonal-deciduous spring phenology submodel in the Community Land Model 4.5: impacts on carbon and water cycling under future climate scenarios.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Min; Melaas, Eli K; Gray, Josh M; Friedl, Mark A; Richardson, Andrew D</p> <p>2016-11-01</p> <p>A spring phenology model that combines photoperiod with accumulated heating and chilling to predict spring leaf-out dates is optimized using PhenoCam observations and coupled into the Community Land Model (CLM) 4.5. In a head-to-head comparison (using satellite data from 2003 to 2013 for validation) for model grid cells over the Northern Hemisphere deciduous broadleaf forests (5.5 million km²), we found that the revised model substantially outperformed the standard CLM seasonal-deciduous spring phenology submodel at both coarse (0.9 × 1.25°) and fine (1 km) scales. The revised model also does a better job of representing recent (decadal) phenological trends observed globally by MODIS, as well as long-term trends (1950-2014) in the PEP725 European phenology dataset. Moreover, forward model runs suggested a stronger advancement (up to 11 days) of spring leaf-out by the end of the 21st century for the revised model. Trends toward earlier advancement are predicted for deciduous forests across the whole Northern Hemisphere boreal and temperate deciduous forest region for the revised model, whereas the standard model predicts earlier leaf-out in colder regions, but later leaf-out in warmer regions, and no trend globally. 
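The forcing-plus-chilling logic described above can be caricatured in a few lines: forcing accumulates as growing degree-days, while accumulated chilling lowers the critical forcing sum required for leaf-out. The functional form and every parameter below are illustrative stand-ins, not the optimized CLM submodel (which also uses photoperiod):

```python
import math

def predict_leafout(daily_tmean, t_base=5.0, f_crit=200.0,
                    chill_base=5.0, f_min=100.0, decay=0.02):
    """Toy spring phenology model: returns the first day on which accumulated
    forcing (degree-days above t_base) exceeds a critical sum that decays
    exponentially with the number of accumulated chilling days."""
    forcing, chill_days = 0.0, 0
    for day, t in enumerate(daily_tmean, start=1):
        if t < chill_base:
            chill_days += 1                      # chilling accumulation
        elif t > t_base:
            forcing += t - t_base                # growing degree-days
        crit = f_min + f_crit * math.exp(-decay * chill_days)
        if forcing >= crit:
            return day
    return None                                  # no leaf-out in the window
```

Warming the spring portion of the input series advances the predicted leaf-out day, which is the behaviour the forward runs in the abstract exploit.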
The earlier spring leaf-out predicted by the revised model resulted in enhanced gross primary production (up to 0.6 Pg C yr⁻¹) and evapotranspiration (up to 24 mm yr⁻¹) when results were integrated across the study region. These results suggest that the standard seasonal-deciduous submodel in CLM should be reconsidered; otherwise, substantial errors in predictions of key land-atmosphere interactions and feedbacks may result. © 2016 John Wiley & Sons Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70034454','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70034454"><span>Nearshore Tsunami Inundation Model Validation: Toward Sediment Transport Applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Apotsos, Alex; Buckley, Mark; Gelfenbaum, Guy; Jaffe, Bruce; Vatvani, Deepak</p> <p>2011-01-01</p> <p>Model predictions from a numerical model, Delft3D, based on the nonlinear shallow water equations are compared with analytical results and laboratory observations from seven tsunami-like benchmark experiments, and with field observations from the 26 December 2004 Indian Ocean tsunami. The model accurately predicts the magnitude and timing of the measured water levels and flow velocities, as well as the magnitude of the maximum inundation distance and run-up, for both breaking and non-breaking waves. The shock-capturing numerical scheme employed describes well the total decrease in wave height due to breaking, but does not reproduce the observed shoaling near the break point. The maximum water levels observed onshore near Kuala Meurisi, Sumatra, following the 26 December 2004 tsunami are well predicted given the uncertainty in the model setup. 
The good agreement between the model predictions and the analytical results and observations demonstrates that the numerical solution and the wetting and drying methods employed are appropriate for modeling tsunami inundation for breaking and non-breaking long waves. Extension of the model to include sediment transport may be appropriate for long, non-breaking tsunami waves. Using available sediment transport formulations, the sediment deposit thickness at Kuala Meurisi is predicted generally within a factor of 2.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25826945','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25826945"><span>[Spatial distribution prediction of surface soil Pb in a battery contaminated site].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Geng; Niu, Jun-Jie; Zhang, Chao; Zhao, Xin; Guo, Guan-Lin</p> <p>2014-12-01</p> <p>In order to enhance the reliability of risk estimation and improve the accuracy of pollution scope determination at a battery-contaminated site whose characteristic soil pollutant is Pb, four spatial interpolation models, namely the Combination Prediction Model (OK(LG) + TIN), the kriging model (OK(BC)), the Inverse Distance Weighting (IDW) model, and the Spline model, were employed to compare their effects on the spatial distribution and pollution assessment of soil Pb. The results showed that Pb concentrations varied significantly and the data were severely skewed, with a higher coefficient of variation in local regions of the site. OK(LG) + TIN was found to be more accurate than the other three models in predicting the actual pollution situation of the contaminated site. The prediction accuracy of the other models was lower, owing to differences in their underlying principles and in the characteristics of the data. 
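Of the four interpolators compared, IDW is the simplest to state: each query point takes a weighted average of the sampled values, with weights decaying as an inverse power of distance. A minimal sketch (coordinates and values are illustrative, not the site's survey data):

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each query point is a weighted mean of the
    known values, with weights 1/d**power. A query point that coincides with
    a sample returns that sample's value exactly."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                     # exact hit on a sample point
            out[i] = values[d.argmin()]
            continue
        w = 1.0 / d ** power
        out[i] = np.sum(w * values) / np.sum(w)
    return out
```

Because the weights are positive and sum to one, IDW estimates always stay within the range of the sampled values, which is one reason it smooths over local hot spots in a way the abstract criticises.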
The interpolation results of OK(BC), IDW and Spline could not reflect the detailed characteristics of seriously contaminated areas, and were not suitable for mapping and spatial distribution prediction of soil Pb at this site. This study provides useful references for defining the remediation boundary and making remediation decisions at contaminated sites.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28279452','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28279452"><span>Predicting lymphatic filariasis transmission and elimination dynamics using a multi-model ensemble framework.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Smith, Morgan E; Singh, Brajendra K; Irvine, Michael A; Stolk, Wilma A; Subramanian, Swaminathan; Hollingsworth, T Déirdre; Michael, Edwin</p> <p>2017-03-01</p> <p>Mathematical models of parasite transmission provide powerful tools for assessing the impacts of interventions. Owing to complexity and uncertainty, no single model may capture all features of transmission and elimination dynamics. Multi-model ensemble modelling offers a framework to help overcome the biases of single models. We report on the development of a first multi-model ensemble of three lymphatic filariasis (LF) models (EPIFIL, LYMFASIM, and TRANSFIL), and evaluate its predictive performance in comparison with that of the constituents using calibration and validation data from three case study sites, one each from the three major LF endemic regions: Africa, Southeast Asia and Papua New Guinea (PNG). We assessed the performance of the respective models for predicting the outcomes of annual MDA strategies for various baseline scenarios thought to exemplify the current endemic conditions in the three regions. 
The results show that the constructed multi-model ensemble outperformed the single models when evaluated across all sites. Single models that best fitted the calibration data tended to do less well in simulating the out-of-sample, or validation, intervention data. Scenario modelling results demonstrate that the multi-model ensemble is able to compensate for variance between single models in order to produce more plausible predictions of intervention impacts. Our results highlight the value of an ensemble approach to modelling parasite control dynamics. However, its optimal use will require further methodological improvements as well as consideration of the organizational mechanisms required to ensure that modelling results and data are shared effectively between all stakeholders. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EGUGA..14.6900L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EGUGA..14.6900L"><span>Understanding seasonal variability of uncertainty in hydrological prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, M.; Wang, Q. J.</p> <p>2012-04-01</p> <p>Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with error models in a Bayesian joint probability framework to investigate the seasonal dependency of the prediction error structure. A seasonal invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. 
In contrast, a seasonal variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonal variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. Probability integral transform histograms and other diagnostic graphs show that the hierarchical error model achieves better reliability than the seasonal invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonal variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, except for the error bias. The seasonal variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. 
The model's flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1960o0013S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1960o0013S"><span>A comparative study on the forming limit diagram prediction between Marciniak-Kuczynski model and modified maximum force criterion by using the evolving non-associated Hill48 plasticity model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian</p> <p>2018-05-01</p> <p>Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In a previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. 
By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy, in terms of the flow rule and its evolving features, on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19720044605&hterms=Aorta&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DAorta','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19720044605&hterms=Aorta&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DAorta"><span>A model for predicting aortic dynamic response to -G sub z impact acceleration.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Advani, S. H.; Tarnay, T. J.; Byars, E. F.; Love, J. S.</p> <p>1972-01-01</p> <p>A steady state dynamic response model for the radial motion of the aorta is developed from in vivo pressure-displacement and nerve stimulation experiments on canines. The model, represented by a modified Van der Pol wave motion oscillator, closely predicts steady state and perturbed response results. The applicability of the steady state canine aortic model to tailward acting impact forces is studied by means of the perturbed phase plane of the oscillator. 
The backflow through the aortic arch resulting from a specified acceleration-time profile is computed and an analysis for predicting the forced-motion aortic response is presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5462717','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5462717"><span>Predictive risk models for proximal aortic surgery</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Díaz, Rocío; Pascual, Isaac; Álvarez, Rubén; Alperi, Alberto; Rozado, Jose; Morales, Carlos; Silva, Jacobo; Morís, César</p> <p>2017-01-01</p> <p>Predictive risk models help improve decision making, the information given to our patients, and quality control when comparing results between surgeons and between institutions. The use of these models promotes competitiveness and leads to increasingly better results. All these virtues are of utmost importance when the surgical operation entails high risk. Although proximal aortic surgery is less frequent than other cardiac surgery operations, this procedure itself is more challenging and technically demanding than other common cardiac surgery techniques. The aim of this study is to review the current status of predictive risk models for patients who undergo proximal aortic surgery, that is, aortic root replacement, supracoronary ascending aortic replacement or aortic arch surgery. 
PMID:28616348</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005IJCli..25.1881C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005IJCli..25.1881C"><span>Weather and seasonal climate prediction for South America using a multi-model superensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chaves, Rosane R.; Ross, Robert S.; Krishnamurti, T. N.</p> <p>2005-11-01</p> <p>This work examines the feasibility of weather and seasonal climate predictions for South America using the multi-model synthetic superensemble approach for climate, and the multi-model conventional superensemble approach for numerical weather prediction, both developed at Florida State University (FSU). The effect of the number of models used in the synthetic superensemble on seasonal climate forecasts is investigated. It is shown that the synthetic superensemble approach for climate and the conventional superensemble approach for numerical weather prediction can reduce errors over South America. For climate prediction, a suite of 13 models is used. The forecast lead-time is 1 month for the climate forecasts, which consist of precipitation and surface temperature forecasts. The multi-model ensemble is comprised of four versions of the FSU-Coupled Ocean-Atmosphere Model, seven models from the Development of a European Multi-model Ensemble System for Seasonal to Interannual Prediction (DEMETER), a version of the Community Climate Model (CCM3), and a version of the predictive Ocean Atmosphere Model for Australia (POAMA). 
The results show that conditions over South America are appropriately simulated by the Florida State University Synthetic Superensemble (FSUSSE) in comparison to observations and that the skill of this approach increases with the use of additional models in the ensemble. When compared to observations, the forecasts are generally better than those from both a single climate model and the multi-model ensemble mean, for the variables tested in this study. For numerical weather prediction, the conventional Florida State University Superensemble (FSUSE) is used to predict the mass and motion fields over South America. Predictions of mean sea level pressure, 500 hPa geopotential height, and 850 hPa wind are made with a multi-model superensemble comprised of six global models for the period January, February, and December of 2000. The six global models are from the following forecast centers: FSU, Bureau of Meteorology Research Center (BMRC), Japan Meteorological Agency (JMA), National Centers for Environmental Prediction (NCEP), Naval Research Laboratory (NRL), and Recherche en Prevision Numerique (RPN). Predictions of precipitation are made for the period January, February, and December of 2001 with a multi-analysis-multi-model superensemble where, in addition to the six forecast models just mentioned, five additional versions of the FSU model are used in the ensemble, each with a different initialization (analysis) based on different physical initialization procedures. On the basis of observations, the results show that the FSUSE provides the best forecasts of the mass and motion field variables to forecast day 5, when compared to both the models comprising the ensemble and the multi-model ensemble mean during the wet season of December-February over South America. Individual case studies show that the FSUSE provides excellent predictions of rainfall for particular synoptic events to forecast day 3. 
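The superensemble at the core of both FSUSE and FSUSSE is, in essence, a regression of observed anomalies onto model-forecast anomalies over a training period, with the fitted weights applied to new forecasts. A compact sketch with synthetic forecasts (the real scheme is trained per grid point and per variable; all data below are made up):

```python
import numpy as np

def superensemble(train_forecasts, train_obs, test_forecasts):
    """Multi-model superensemble in the spirit of the FSU approach.

    Model anomalies (relative to their training-period means) are regressed
    against observed anomalies, and the fitted weights are applied to new
    forecasts. Forecast arrays have shape (n_time, n_models).
    """
    F = np.asarray(train_forecasts, float)
    o = np.asarray(train_obs, float)
    f_mean, o_mean = F.mean(axis=0), o.mean()
    A = F - f_mean                           # model anomalies
    w, *_ = np.linalg.lstsq(A, o - o_mean, rcond=None)
    return o_mean + (np.asarray(test_forecasts, float) - f_mean) @ w
```

Because the regression is on anomalies, a systematically biased but skilful model is automatically bias-corrected, while an uninformative model receives a near-zero weight; this is what lets the superensemble beat the plain ensemble mean.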
Copyright</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28497044','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28497044"><span>Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xianfang, Wang; Junmei, Wang; Xiaolei, Wang; Yue, Zhang</p> <p>2017-01-01</p> <p>The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To address the information redundancy that arises with current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure each feature's significance for the result, and attributes with small F values are filtered out in a rough selection step. Second, the degree of redundancy is calculated with the Pearson correlation coefficient, and a threshold is set to filter out weakly independent attributes in a refinement step. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show that the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. 
The prediction model can be accessed free of charge at our web server.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5401747','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5401747"><span>Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Xiaolei, Wang</p> <p>2017-01-01</p> <p>The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To address the information redundancy that arises with current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure each feature's significance for the result, and attributes with small F values are filtered out in a rough selection step. Second, the degree of redundancy is calculated with the Pearson correlation coefficient, and a threshold is set to filter out weakly independent attributes in a refinement step. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show that the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model can be accessed free of charge at our web server. 
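The two-stage AVC filtering described above (rough selection by ANOVA F value, then refinement by Pearson-correlation redundancy) can be sketched as follows. The synthetic data, thresholds, and the plain-NumPy F statistic are illustrative assumptions; in the paper the surviving features would then be fed to an SVM classifier.

```python
# Hypothetical sketch of the AVC filtering steps on synthetic data:
# (1) rank features by a one-way ANOVA F value against the class label,
# (2) drop features highly Pearson-correlated with an already-kept one.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)
X[:, 0] += y                                           # informative feature
X[:, 1] = X[:, 0] + rng.normal(scale=0.05, size=200)   # redundant near-copy

def f_value(x, y):
    """One-way ANOVA F statistic for a single feature over class labels."""
    groups = [x[y == c] for c in np.unique(y)]
    grand = x.mean()
    between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
    within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(x) - len(groups))
    return between / within

f_vals = np.array([f_value(X[:, j], y) for j in range(X.shape[1])])
keep = [j for j in np.argsort(f_vals)[::-1] if f_vals[j] > 1.0]  # rough selection

# Refinement: walk features from highest to lowest F, dropping any feature
# whose |Pearson r| with an already-selected feature exceeds the threshold.
selected = []
for j in keep:
    if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < 0.9 for k in selected):
        selected.append(j)
```

With this setup only one of the two correlated columns survives, which is the point of the redundancy filter: the SVM then sees a smaller, less collinear input set.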
PMID:28497044</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4128719','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4128719"><span>A Four-Stage Hybrid Model for Hydrological Time Series Forecasting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Di, Chongli; Yang, Xiaohua; Wang, Xiaochao</p> <p>2014-01-01</p> <p>Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. 
The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25111782','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25111782"><span>A four-stage hybrid model for hydrological time series forecasting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Di, Chongli; Yang, Xiaohua; Wang, Xiaochao</p> <p>2014-01-01</p> <p>Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. 
Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1889b0013H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1889b0013H"><span>Moist air state above counterflow wet-cooling tower fill based on Merkel, generalised Merkel and Klimanek & Białecky models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hyhlík, Tomáš</p> <p>2017-09-01</p> <p>The article deals with an evaluation of moist air state above counterflow wet-cooling tower fill. The results based on Klimanek & Białecky model are compared with results of Merkel model and generalised Merkel model. 
Based on the numerical simulation it is shown that temperature is predicted correctly by the generalised Merkel model in the case of saturated or super-saturated air above the fill, but the temperature is underpredicted in the case of unsaturated moist air above the fill. The classical Merkel model always underpredicts the temperature above the fill. The density of moist air above the fill, which is calculated using the generalised Merkel model, is strongly overpredicted in the case of unsaturated moist air above the fill.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26708483','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26708483"><span>Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia</p> <p>2016-02-01</p> <p>To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental data points was used, obtained using a laboratory-scale fixed-bed reaction system. The results demonstrated that the intelligent modeling approach is highly convenient and effective for the prediction of the biochar yield. In particular, the novel LS-SVM model has a more satisfactory prediction performance and better robustness than the traditional ANN model. 
The introduction and application of the LS-SVM modeling method provide a successful example and a good reference for modeling the cattle manure pyrolysis process, as well as other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EaSci..30..269W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EaSci..30..269W"><span>Real-time 3-D space numerical shake prediction for earthquake early warning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang</p> <p>2017-12-01</p> <p>In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from the inaccurate estimation of source parameters. For computational efficiency, the wave is assumed to propagate on the 2-D surface of the earth in these methods. In fact, since the seismic wave propagates in the 3-D sphere of the earth, the 2-D space modeling of wave direction results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates the wave propagation in 3-D space using radiative transfer theory, and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. 
The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model estimates real-time ground motion precisely and alleviates the overprediction seen with the 2-D model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/6291','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/6291"><span>Highway noise measurements for verification of prediction models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>1978-01-01</p> <p>Accurate prediction of highway noise has been a major problem for state highway departments. Many noise models have been proposed to alleviate this problem. Results contained in this report will be used to analyze some of these models, and to determi...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015IJAEO..38...78S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015IJAEO..38...78S"><span>An analysis of cropland mask choice and ancillary data for annual corn yield forecasting using MODIS data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shao, Yang; Campbell, James B.; Taff, Gregory N.; Zheng, Baojuan</p> <p>2015-06-01</p> <p>The Midwestern United States is one of the world's most important corn-producing regions. Monitoring and forecasting of corn yields in this intensive agricultural region are important activities to support food security, commodity markets, bioenergy industries, and formation of national policies. 
This study aims to develop forecasting models that have the capability to provide mid-season prediction of county-level corn yields for the entire Midwestern United States. We used multi-temporal MODIS NDVI (normalized difference vegetation index) 16-day composite data as the primary input, with digital elevation model (DEM) and parameter-elevation relationships on independent slopes model (PRISM) climate data as additional inputs. The DEM and PRISM data, along with three types of cropland masks, were tested and compared to evaluate their impacts on model predictive accuracy. Our results suggested that the use of general cropland masks (e.g., summer crop or cultivated crops) generated similar results compared with use of an annual corn-specific mask. Leave-one-year-out cross-validation resulted in an average R2 of 0.75 and an RMSE of 1.10 t/ha. Using a DEM as an additional model input slightly improved performance, while inclusion of PRISM climate data appeared not to be important for our regional corn-yield model. Furthermore, our model has potential for real-time/early prediction. Our corn yield estimates are available as early as late July, which is an improvement upon previous corn-yield prediction models. In addition to annual corn yield forecasting, we examined model uncertainties through spatial and temporal analysis of the model's predictive error distribution. 
The magnitude of predictive error (by county) appears to be associated with the spatial patterns of corn fields in the study area.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19840035358&hterms=time+series+modeling&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dtime%2Bseries%2Bmodeling','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19840035358&hterms=time+series+modeling&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dtime%2Bseries%2Bmodeling"><span>Modeling to predict pilot performance during CDTI-based in-trail following experiments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sorensen, J. A.; Goka, T.</p> <p>1984-01-01</p> <p>A mathematical model was developed of the flight system with the pilot using a cockpit display of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. Both in-trail and vertical dynamics were included. The nominal spacing was based on one of three criteria (Constant Time Predictor; Constant Time Delay; or Acceleration Cue). This model was used to simulate digitally the dynamics of a string of multiple following aircraft, including response to initial position errors. The simulation was used to predict the outcome of a series of in-trail following experiments, including pilot performance in maintaining correct longitudinal spacing and vertical position. The experiments were run in the NASA Ames Research Center multi-cab cockpit simulator facility. The experimental results were then used to evaluate the model and its prediction accuracy. Model parameters were adjusted, so that modeled performance matched experimental results. 
Lessons learned in this modeling and prediction study are summarized.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2243937','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2243937"><span>Predicting ICU mortality: a comparison of stationary and nonstationary temporal models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kayaalp, M.; Cooper, G. 
F.; Clermont, G.</p> <p>2000-01-01</p> <p>OBJECTIVE: This study evaluates the effectiveness of the stationarity assumption in predicting the mortality of intensive care unit (ICU) patients at ICU discharge. DESIGN: This is a comparative study. A stationary temporal Bayesian network learned from data was compared to a set of 33 nonstationary temporal Bayesian networks learned from data. A process observed as a sequence of events is stationary if its stochastic properties stay the same when the sequence is shifted in a positive or negative direction by a constant time parameter. The temporal Bayesian networks forecast mortalities of patients, where each patient has one record per day. The predictive performance of the stationary model is compared with the nonstationary models using the area under the receiver operating characteristic (ROC) curves. RESULTS: The stationary model usually performed best. However, one nonstationary model using large data sets performed significantly better than the stationary model. CONCLUSION: Results suggest that using a combination of stationary and nonstationary models may predict better than using either alone. PMID:11079917</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/9387295','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/9387295"><span>[Study on factors influencing survival in patients with advanced gastric carcinoma after resection by Cox's proportional hazard model].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, S; Sun, Z; Wang, S</p> <p>1996-11-01</p> <p>A prospective follow-up study of 539 advanced gastric carcinoma patients after resection was undertaken between 1 January 1980 and 31 December 1989, with a follow-up rate of 95.36%. 
A multivariate analysis of possible factors influencing survival of these patients was performed, and predictive models of their survival rates were established with the Cox proportional hazards model. The results showed that the major significant prognostic factors influencing survival of these patients were rate and station of lymph node metastases, type of operation, hepatic metastases, size of tumor, age and location of tumor. The most important factor was the rate of lymph node metastases. According to their regression coefficients, the predicting value (PV) of each patient was calculated; all patients were then divided into five risk groups according to PV, and predictive models of survival rates after resection were established for each group. The goodness-of-fit of the estimated survival-rate models was checked with fitting curves and residual plots, and the estimated models tallied with the actual situation. The results suggest that the patients with advanced gastric cancer after resection without lymph node metastases and hepatic metastases had a better prognosis, and their survival probability may be predicted according to the predictive model of survival rates.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28628322','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28628322"><span>Applying Mondrian Cross-Conformal Prediction To Estimate Prediction Confidence on Large Imbalanced Bioactivity Data Sets.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sun, Jiangming; Carlsson, Lars; Ahlberg, Ernst; Norinder, Ulf; Engkvist, Ola; Chen, Hongming</p> <p>2017-07-24</p> <p>Conformal prediction has been proposed as a more rigorous way to define prediction confidence compared to other applicability domain concepts that have earlier been used for QSAR modeling. 
One main advantage of such a method is that it provides a prediction region potentially with multiple predicted labels, which contrasts with the single-valued (regression) or single-label (classification) output predictions of standard QSAR modeling algorithms. Standard conformal prediction might not be suitable for imbalanced data sets. Therefore, Mondrian cross-conformal prediction (MCCP), which combines the Mondrian inductive conformal prediction with cross-fold calibration sets, has been introduced. In this study, the MCCP method was applied to 18 publicly available data sets with imbalance levels varying from 1:10 to 1:1000 (ratio of active/inactive compounds). Our results show that MCCP in general performed well on bioactivity data sets with various imbalance levels. More importantly, the method not only provides confidence of prediction and prediction regions compared to standard machine learning methods but also produces valid predictions for the minority class. In addition, a compound-similarity-based nonconformity measure was investigated. Our results demonstrate that although it gives valid predictions, its efficiency is much worse than that of model-dependent metrics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPS...364..316D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPS...364..316D"><span>Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei</p> <p>2017-10-01</p> <p>To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. 
To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. First, a simplified linearized equivalent circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and SOC at every prediction point. In addition, a discrete wavelet transform technique is employed to capture the statistical information of the past dynamics of the input currents, which is used to predict the future battery currents. Finally, the RDT can be predicted based on the battery model, the SOC estimation results and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation, and the predicted RDT helps address range anxiety issues.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5621749','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5621749"><span>A New Hybrid Spatio-Temporal Model For Estimating Daily Multi-Year PM2.5 Concentrations Across Northeastern USA Using High Resolution Aerosol Optical Depth Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kloog, Itai; Chudnovsky, Alexandra A.; Just, Allan C.; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A.; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel</p> <p>2017-01-01</p> <p>Background: The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially 
over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. Methods: We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus the currently available 10 km) resolution AOD data. We developed and cross-validated models to predict daily PM2.5 at a 1×1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003–2011, allowing us to better differentiate daily and long-term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1×1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then used generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Results: Our model performance was excellent (mean out-of-sample R2=0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R2=0.87, R2=0.87). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99). Conclusion: Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region. 
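The core calibration idea in the abstract above, regressing PM2.5 on AOD with a day-specific intercept, can be illustrated with a deliberately simplified sketch. The synthetic data and the within-day centering below are assumptions that stand in for the paper's full mixed-model machinery (random intercepts and slopes); centering removes each day's level shift, leaving the shared AOD slope.

```python
# Much-simplified numpy sketch of day-specific-intercept calibration
# (synthetic data; a fixed-effects approximation of the mixed model).
import numpy as np

rng = np.random.default_rng(2)
n_days, n_sites = 30, 40
day_effect = rng.normal(scale=5.0, size=n_days)        # day-to-day level shifts
aod = rng.gamma(2.0, 0.2, size=(n_days, n_sites))      # synthetic AOD field
pm25 = (10.0 + day_effect[:, None] + 25.0 * aod
        + rng.normal(scale=1.0, size=(n_days, n_sites)))

# Within-day centering removes each day's intercept; the pooled regression
# of the centered values then recovers the shared AOD-to-PM2.5 slope.
aod_c = aod - aod.mean(axis=1, keepdims=True)
pm_c = pm25 - pm25.mean(axis=1, keepdims=True)
slope = (aod_c * pm_c).sum() / (aod_c ** 2).sum()      # close to the true 25.0
```

A full treatment would also let the AOD slope vary by day (random slopes) and fill AOD gaps with spatial smoothing, as the abstract describes.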
PMID:28966552</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014GeoRL..41.1035T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014GeoRL..41.1035T"><span>Seasonal to interannual Arctic sea ice predictability in current global climate models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tietsche, S.; Day, J. J.; Guemas, V.; Hurlin, W. J.; Keeley, S. P. E.; Matei, D.; Msadek, R.; Collins, M.; Hawkins, E.</p> <p>2014-02-01</p> <p>We establish the first intermodel comparison of seasonal to interannual predictability of present-day Arctic climate by performing coordinated sets of idealized ensemble predictions with four state-of-the-art global climate models. For Arctic sea ice extent and volume, there is potential predictive skill for lead times of up to 3 years, and potential prediction errors have similar growth rates and magnitudes across the models. Spatial patterns of potential prediction errors differ substantially between the models, but some features are robust. Sea ice concentration errors are largest in the marginal ice zone, and in winter they are almost zero away from the ice edge. Sea ice thickness errors are amplified along the coasts of the Arctic Ocean, an effect that is dominated by sea ice advection. 
These results give an upper bound on the ability of current global climate models to predict important aspects of Arctic climate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..16.3035S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..16.3035S"><span>Coupling of Bayesian Networks with GIS for wildfire risk assessment on natural and agricultural areas of the Mediterranean</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Scherb, Anke; Papakosta, Panagiota; Straub, Daniel</p> <p>2014-05-01</p> <p>Wildfires cause severe damages to ecosystems, socio-economic assets, and human lives in the Mediterranean. To facilitate coping with wildfire risks, an understanding of the factors influencing wildfire occurrence and behavior (e.g. human activity, weather conditions, topography, fuel loads) and their interaction is of importance, as is the implementation of this knowledge in improved wildfire hazard and risk prediction systems. In this project, a probabilistic wildfire risk prediction model is developed, with integrated fire occurrence and fire propagation probability and potential impact prediction on natural and cultivated areas. Bayesian Networks (BNs) are used to facilitate the probabilistic modeling. The final BN model is a spatial-temporal prediction system at the meso scale (1 km2 spatial and 1 day temporal resolution). The modeled consequences account for potential restoration costs and production losses referred to forests, agriculture, and (semi-) natural areas. BNs and a geographic information system (GIS) are coupled within this project to support a semi-automated BN model parameter learning and the spatial-temporal risk prediction. The coupling also enables the visualization of prediction results by means of daily maps. 
The BN parameters are learnt for Cyprus with data from 2006-2009. Data from 2010 is used as the validation data set. A special focus is put on the performance evaluation of the BN for fire occurrence, which is modelled as a binary classifier and can thus be validated by means of Receiver Operating Characteristic (ROC) curves. The best final models achieved validation AUC values above 70%, which indicates potential for reliable prediction performance via BN. Maps of selected days in 2010 are shown to illustrate final prediction results. The resulting system can be easily expanded to predict additional expected damages at the meso scale (e.g. building and infrastructure damages). The system can support planning of preventive measures (e.g. state resource allocation for wildfire prevention and preparedness) and assist recuperation plans for damaged areas.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MSSP...82..356M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MSSP...82..356M"><span>Analyses of the most influential factors for vibration monitoring of planetary power transmissions in pellet mills by adaptive neuro-fuzzy technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Milovančević, Miloš; Nikolić, Vlastimir; Anđelković, Boban</p> <p>2017-01-01</p> <p>Vibration-based structural health monitoring is widely recognized as an attractive strategy for early damage detection in civil structures. Vibration monitoring and prediction are important for any system, since they can avert many unpredictable system behaviors. Properly managed vibration monitoring helps ensure economical and safe operation. Potential for further improvement of vibration monitoring lies in improving current control strategies. 
One of the options is the introduction of model predictive control. Multistep-ahead predictive models of vibration are a starting point for creating a successful model predictive strategy. For the purpose of this article, predictive models are created for vibration monitoring of planetary power transmissions in pellet mills. The models were developed using a novel method based on ANFIS (adaptive neuro-fuzzy inference system). The aim of this study is to investigate the potential of ANFIS for selecting the most relevant variables for predictive models of vibration monitoring of pellet-mill power transmission. The vibration data are collected by PIC (Programmable Interface Controller) microcontrollers. The goal of the predictive vibration monitoring of planetary power transmissions in pellet mills is to indicate deterioration in the vibration of the power transmissions before the actual failure occurs. The ANFIS process for variable selection was implemented in order to detect the predominant variables affecting the prediction of vibration monitoring. It was also used to select the minimal input subset of variables from the initial set of input variables - current and lagged variables (up to 11 steps) of vibration. The obtained results could be used to simplify predictive methods so as to avoid multiple input variables. Models with fewer inputs were preferred because they are less prone to overfitting between training and testing data. 
While the obtained results are promising, further work is required in order to obtain results that can be directly applied in practice.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19700825','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19700825"><span>Predictive optimal control of sewer networks using CORAL tool: application to Riera Blanca catchment in Barcelona.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Puig, V; Cembrano, G; Romera, J; Quevedo, J; Aznar, B; Ramón, G; Cabot, J</p> <p>2009-01-01</p> <p>This paper deals with the global control of the Riera Blanca catchment in the Barcelona sewer network using a predictive optimal control approach. This catchment has been modelled using a conceptual modelling approach based on decomposing the catchment into subcatchments and representing them as virtual tanks. This conceptual modelling approach allows real-time model calibration and control of the sewer network. The global control problem of the Riera Blanca catchment is solved using an optimal/predictive control algorithm. To implement the predictive optimal control of the Riera Blanca catchment, a software tool named CORAL is used. The on-line control is simulated by interfacing CORAL with a high-fidelity simulator of sewer networks (MOUSE). CORAL interchanges readings from the limnimeters and gate commands with MOUSE as if it were connected to the real SCADA system. Finally, the global control results obtained using the predictive optimal control are presented and compared against the results obtained using the current local control system. 
The results obtained using the global control are very satisfactory compared to those obtained using the local control.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100040418','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100040418"><span>Modeling Progressive Damage Using Local Displacement Discontinuities Within the FEAMAC Multiscale Modeling Framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ranatunga, Vipul; Bednarcyk, Brett A.; Arnold, Steven M.</p> <p>2010-01-01</p> <p>A method for performing progressive damage modeling in composite materials and structures based on continuum level interfacial displacement discontinuities is presented. The proposed method enables the exponential evolution of the interfacial compliance, resulting in unloading of the tractions at the interface after delamination or failure occurs. In this paper, the proposed continuum displacement discontinuity model has been used to simulate failure within both isotropic and orthotropic materials efficiently and to explore the possibility of predicting the crack path, therein. Simulation results obtained from Mode-I and Mode-II fracture compare the proposed approach with the cohesive element approach and Virtual Crack Closure Techniques (VCCT) available within the ABAQUS (ABAQUS, Inc.) finite element software. 
Furthermore, an eccentrically loaded 3-point bend test has been simulated with the displacement discontinuity model, and the resulting crack path prediction has been compared with a prediction based on the extended finite element model (XFEM) approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16938969','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16938969"><span>Experimental evaluation of radiosity for room sound-field prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hodgson, Murray; Nosal, Eva-Marie</p> <p>2006-08-01</p> <p>An acoustical radiosity model was evaluated for how it performs in predicting real room sound fields. This was done by comparing radiosity predictions with experimental results for three existing rooms--a squash court, a classroom, and an office. Radiosity predictions were also compared with those by ray tracing--a "reference" prediction model--for both specular and diffuse surface reflection. Comparisons were made for detailed and discretized echograms, sound-decay curves, sound-propagation curves, and the variations with frequency of four room-acoustical parameters--EDT, RT, D50, and C80. In general, radiosity and diffuse ray tracing gave very similar predictions. Predictions by specular ray tracing were often very different. Radiosity agreed well with experiment in some cases, less well in others. Definitive conclusions regarding the accuracy with which the rooms were modeled, or the accuracy of the radiosity approach, were difficult to draw. The results suggest that radiosity predicts room sound fields with some accuracy, at least as well as diffuse ray tracing and, in general, better than specular ray tracing. The predictions of detailed echograms are less accurate, those of derived room-acoustical parameters more accurate. 
The results underline the need to develop experimental methods for accurately characterizing the absorptive and reflective characteristics of room surfaces, possibly including phase.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29455350','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29455350"><span>Analysing the accuracy of machine learning techniques to develop an integrated influent time series model: case study of a sewage treatment plant, Malaysia.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed</p> <p>2018-04-01</p> <p>The function of a sewage treatment plant is to treat the sewage to acceptable standards before being discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R²) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. 
The RE results also show that the SVM model produced errors above 10% or below -10% more frequently than the NAR model did. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow, while NAR fit well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70036702','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70036702"><span>Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Nevers, Meredith B.; Whitman, Richard L.</p> <p>2011-01-01</p> <p>Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. 
We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R²) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..250a2011W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..250a2011W"><span>Electric Power Engineering Cost Predicting Model Based on the PCA-GA-BP</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wen, Lei; Yu, Jiake; Zhao, Xin</p> <p>2017-10-01</p> <p>In this paper, a hybrid prediction algorithm, the PCA-GA-BP model, is proposed. The PCA algorithm is applied to reduce the correlation between indicators of the original data and to decrease the difficulty of the BP neural network in high-dimensional calculation. The BP neural network is established to estimate the cost of power transmission projects. 
The results show that the PCA-GA-BP algorithm can improve the prediction of electric power engineering cost.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.ars.usda.gov/research/publications/publication/?seqNo115=322719','TEKTRAN'); return false;" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=322719"><span>Demonstration of the Water Erosion Prediction Project (WEPP) internet interface and services</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ars.usda.gov/research/publications/find-a-publication/">USDA-ARS's Scientific Manuscript database</a></p> <p></p> <p></p> <p>The Water Erosion Prediction Project (WEPP) model is a process-based FORTRAN computer simulation program for prediction of runoff and soil erosion by water at hillslope profile, field, and small watershed scales. To effectively run the WEPP model and interpret results, additional software has been de...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=applications+AND+artificial+AND+intelligence&pg=5&id=EJ647391','ERIC'); return false;" href="https://eric.ed.gov/?q=applications+AND+artificial+AND+intelligence&pg=5&id=EJ647391"><span>Artificial Neural Networks: A New Approach to Predicting Application Behavior.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Gonzalez, Julie M. Byers; DesJardins, Stephen L.</p> <p>2002-01-01</p> <p>Applied the technique of artificial neural networks to predict which students were likely to apply to one research university. Compared the results to the traditional analysis tool, logistic regression modeling. Found that the addition of artificial intelligence models was a useful new tool for predicting student application behavior. 
(EV)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=46588&Lab=NHEERL&keyword=electromagnetic&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50','EPA-EIMS'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=46588&Lab=NHEERL&keyword=electromagnetic&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50"><span>REVIEW OF NUMERICAL MODELS FOR PREDICTING THE ENERGY DEPOSITION AND RESULTANT THERMAL RESPONSE OF HUMANS EXPOSED TO ELECTROMAGNETIC FIELDS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://oaspub.epa.gov/eims/query.page">EPA Science Inventory</a></p> <p></p> <p></p> <p>For humans exposed to electromagnetic (EM) radiation, the resulting thermophysiologic response is not well understood. 
Because it is unlikely that this information will be determined from quantitative experimentation, it is necessary to develop theoretical models which predict th...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29241658','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29241658"><span>Spatiotemporal Bayesian networks for malaria prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Haddawy, Peter; Hasan, A H M Imrul; Kasantikul, Rangwan; Lawpoolsri, Saranath; Sa-Angchai, Patiwat; Kaewkungwal, Jaranit; Singhasivanon, Pratap</p> <p>2018-01-01</p> <p>Targeted intervention and resource allocation are essential for effective malaria control, particularly in remote areas, with predictive models providing important information for decision making. While a diversity of modeling techniques has been used to create predictive models of malaria, no work has made use of Bayesian networks. Bayes nets are attractive due to their ability to represent uncertainty, model time-lagged and nonlinear relations, and provide explanations. This paper explores the use of Bayesian networks to model malaria, demonstrating the approach by creating village-level models with weekly temporal resolution for Tha Song Yang district in northern Thailand. The networks are learned using data on cases and environmental covariates. Three types of networks are explored: networks for numeric prediction, networks for outbreak prediction, and networks that incorporate spatial autocorrelation. Evaluation of the numeric prediction network shows that the Bayes net has prediction accuracy in terms of mean absolute error of about 1.4 cases for 1 week prediction and 1.7 cases for 6 week prediction. The network for outbreak prediction has an ROC AUC above 0.9 for all prediction horizons. 
Comparison of prediction accuracy of both Bayes nets against several traditional modeling approaches shows the Bayes nets to outperform the other models for longer time horizon prediction of high incidence transmission. To model spread of malaria over space, we elaborate the models with links between the village networks. This results in some very large models which would be far too laborious to build by hand. So we represent the models as collections of probability logic rules and automatically generate the networks. Evaluation of the models shows that the autocorrelation links significantly improve prediction accuracy for some villages in regions of high incidence. We conclude that spatiotemporal Bayesian networks are a highly promising modeling alternative for prediction of malaria and other vector-borne diseases. Copyright © 2017 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20060051861','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20060051861"><span>Classification and Prediction of RF Coupling inside A-320 and A-319 Airplanes using Feed Forward Neural Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jafri, Madiha; Ely, Jay; Vahala, Linda</p> <p>2006-01-01</p> <p>Neural Network Modeling is introduced in this paper to classify and predict Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. 
Modeled results are compared with measured data, and a plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JPhCS.734c2112L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JPhCS.734c2112L"><span>Cold formability prediction by the modified maximum force criterion with a non-associated Hill48 model accounting for anisotropic hardening</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lian, J.; Ahn, D. C.; Chae, D. C.; Münstermann, S.; Bleck, W.</p> <p>2016-08-01</p> <p>Experimental and numerical investigations on the characterisation and prediction of cold formability of a ferritic steel sheet are performed in this study. Tensile tests and Nakajima tests were performed for the plasticity characterisation and the forming limit diagram determination. In the numerical prediction, the modified maximum force criterion is selected as the localisation criterion. For the plasticity model, a non-associated formulation of the Hill48 model is employed. With the non-associated flow rule, the model achieves a predictive capability for stress and r-value directionality similar to that of the advanced non-quadratic associated models. 
To accurately characterise the anisotropy evolution during hardening, the anisotropic hardening is also calibrated and implemented into the model for the prediction of the formability.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70041944','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70041944"><span>Comparison of five modelling techniques to predict the spatial distribution and abundance of seabirds</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; 
Louzao, Maite</p> <p>2012-01-01</p> <p>Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest and spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of five modelling techniques provided a reliable prediction of spatial patterns. 
We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and use these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3679490','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3679490"><span>Developing symptom-based predictive models of endometriosis as a clinical screening tool: results from a multicenter study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Nnoaham, Kelechi E.; Hummelshoj, Lone; Kennedy, Stephen H.; Jenkinson, Crispin; Zondervan, Krina T.</p> <p>2012-01-01</p> <p>Objective To generate and validate symptom-based models to predict endometriosis among symptomatic women prior to undergoing their first laparoscopy. Design Prospective, observational, two-phase study, in which women completed a 25-item questionnaire prior to surgery. Setting Nineteen hospitals in 13 countries. Patient(s) Symptomatic women (n = 1,396) scheduled for laparoscopy without a previous surgical diagnosis of endometriosis. Intervention(s) None. Main Outcome Measure(s) Sensitivity and specificity of endometriosis diagnosis predicted by symptoms and patient characteristics from optimal models developed using multiple logistic regression analyses in one data set (phase I), and independently validated in a second data set (phase II) by receiver operating characteristic (ROC) curve analysis. Result(s) Three hundred sixty (46.7%) women in phase I and 364 (58.2%) in phase II were diagnosed with endometriosis at laparoscopy. 
Menstrual dyschezia (pain on opening bowels) and a history of benign ovarian cysts most strongly predicted both any and stage III and IV endometriosis in both phases. Prediction of any-stage endometriosis, although improved by ultrasound scan evidence of cyst/nodules, was relatively poor (area under the curve [AUC] = 68.3). Stage III and IV disease was predicted with good accuracy (AUC = 84.9, sensitivity of 82.3% and specificity 75.8% at an optimal cut-off of 0.24). Conclusion(s) Our symptom-based models predict any-stage endometriosis relatively poorly and stage III and IV disease with good accuracy. Predictive tools based on such models could help to prioritize women for surgical investigation in clinical practice and thus contribute to reducing time to diagnosis. We invite other researchers to validate the key models in additional populations. PMID:22657249</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18404093','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18404093"><span>How does a three-dimensional continuum muscle model affect the kinematics and muscle strains of a finite element neck model compared to a discrete muscle model in rear-end, frontal, and lateral impacts.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hedenstierna, Sofia; Halldin, Peter</p> <p>2008-04-15</p> <p>A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring element musculature and are limited to discrete geometries and nodal output results. 
A solid-element muscle model was thought to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics were compared with the results from a discrete muscle model as well as volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than that of the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19720008276','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19720008276"><span>Crime prediction modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p></p> <p>1971-01-01</p> <p>A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. 
The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) is compared, and the quality of the resultant predictions is assessed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3810665','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3810665"><span>On the Predictability of Future Impact in Science</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Penner, Orion; Pan, Raj K.; Petersen, Alexander M.; Kaski, Kimmo; Fortunato, Santo</p> <p>2013-01-01</p> <p>Correctly assessing a scientist's past research impact and potential for future impact is key in recruitment decisions and other evaluation processes. While a candidate's future impact is the main concern for these decisions, most measures only quantify the impact of previous work. Recently, it has been argued that linear regression models are capable of predicting a scientist's future impact. By applying that future impact model to 762 careers drawn from three disciplines: physics, biology, and mathematics, we identify a number of subtle, but critical, flaws in current models. Specifically, cumulative non-decreasing measures like the h-index contain intrinsic autocorrelation, resulting in significant overestimation of their “predictive power”. Moreover, the predictive power of these models depends heavily upon scientists' career age, producing the least accurate estimates for young researchers.
Our results cast doubt on the suitability of such models and indicate that further investigation is required before they can be used in recruiting decisions. PMID:24165898</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA551699','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA551699"><span>AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 4</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2011-01-01</p> <p>Computational and Mathematical Engineering, Stanford University esgs@stanford.edu (650) 723-3764 Molecular Dynamics Models of Antimicrobial ...simulations using low-fidelity Reynolds-averaged models illustrate the limited predictive capabilities of these schemes. The predictions for scalar and...driving force. The AHPCRC group has used their models to predict nonuniform concentration profiles across small channels as a result of variations</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940012383','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940012383"><span>Textile composite processing science</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Loos, Alfred C.; Hammond, Vincent H.; Kranbuehl, David E.; Hasko, Gregory H.</p> <p>1993-01-01</p> <p>A multi-dimensional model of the Resin Transfer Molding (RTM) process was developed for the prediction of the infiltration behavior of a resin into an anisotropic fiber preform. Frequency-dependent electromagnetic sensing (FDEMS) was developed for in-situ monitoring of the RTM process. Flow visualization and mold filling experiments were conducted to verify sensor measurements and model predictions.
Test results indicated good agreement between model predictions, sensor readings, and experimental data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/841522','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/841522"><span>Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Somerville, R.C.J.; Iacobellis, S.F.</p> <p>2005-03-18</p> <p>Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models.
At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes.
We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19071605','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19071605"><span>A consensus least squares support vector regression (LS-SVR) for analysis of near-infrared spectra of plant samples.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Yankun; Shao, Xueguang; Cai, Wensheng</p> <p>2007-04-15</p> <p>Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on this principle, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; then the consensus LS-SVR technique was used to build the calibration model. With optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples.
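The consensus step described in this abstract, averaging the outputs of several independently trained models, can be sketched in a few lines of plain Python (a hypothetical illustration; the paper's members are LS-SVR sub-models, represented here by arbitrary callables):

```python
# Consensus prediction: average the outputs of several member models.
# Illustrative sketch only; each "model" is any callable, standing in
# for the LS-SVR sub-models used in the paper.

def consensus_predict(models, x):
    """Return the mean prediction of all member models at input x."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Three toy calibration models that disagree slightly:
members = [
    lambda x: 2.0 * x + 0.1,
    lambda x: 1.9 * x + 0.3,
    lambda x: 2.1 * x - 0.1,
]

print(consensus_predict(members, 5.0))  # close to 10.1
```

Averaging damps the idiosyncratic error of any single member, which is the stability argument the abstract makes.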
The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27664505','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27664505"><span>Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris</p> <p>2016-09-01</p> <p>Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, high dimensionality, sparsity, and class imbalance of electronic health data and the complexity of risk quantification challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population, by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the international classification of diseases 9th-revision clinical modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from California, State Inpatient Databases, Healthcare Cost and Utilization Project between 2009 and 2011.
We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data-driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression, resulting in models that are easier to interpret through fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the Tree-Lasso model was as competitive in terms of accuracy (measured by area under the receiver operating characteristic curve, AUC) as the traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided more interpretable models in terms of high-level diagnoses. Additionally, interpretations of the models are in accordance with existing medical understanding of pediatric readmission. The best-performing models have similar performance, reaching AUC values of 0.783 and 0.779 for traditional Lasso and Tree-Lasso, respectively. However, the information loss of the Lasso models is 0.35 bits higher than that of the Tree-Lasso model. We propose a method for building predictive models applicable for the detection of readmission risk based on electronic health records. Integration of domain knowledge (in the form of the ICD-9-CM taxonomy) and a data-driven, sparse predictive algorithm (Tree-Lasso logistic regression) resulted in increased interpretability of the resulting model. The models are interpreted for the readmission prediction problem in the general pediatric population in California, as well as several important subpopulations, and the interpretations of the models comply with existing medical understanding of pediatric readmission. Finally, a quantitative assessment of the interpretability of the models is given that goes beyond simple counts of selected low-level features. Copyright © 2016 Elsevier B.V.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013ChJME..26.1091C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013ChJME..26.1091C"><span>Modeling and evaluating of surface roughness prediction in micro-grinding on soda-lime glass considering tool characterization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cheng, Jun; Gong, Yadong; Wang, Jinsheng</p> <p>2013-11-01</p> <p>Current research on micro-grinding mainly focuses on the optimal processing technology for different materials, yet the material removal mechanism in micro-grinding is the basis for achieving a high-quality machined surface. Therefore, a novel method for predicting surface roughness in micro-grinding of hard brittle materials that considers the grain protrusion topography of the micro-grinding tool is proposed in this paper. The differences in material removal mechanism between conventional grinding and micro-grinding are analyzed. Topography characterization is performed on micro-grinding tools fabricated by electroplating. Models of grain density generation and grain interval are built, and a new model for predicting micro-grinding surface roughness is developed. To verify the precision and applicability of the proposed surface roughness prediction model, an orthogonal micro-grinding experiment on soda-lime glass is designed and conducted. A series of micro-machined surfaces of the brittle material with roughness from 78 nm to 0.98 μm is achieved. The experimental roughness results and the predicted roughness data agree closely, and the component variable describing size effects in the prediction model is calculated to be 1.5×10^7 by an inverse method based on the experimental results.
The proposed model builds a distribution set to account for grain densities at different protrusion heights. Finally, the micro-grinding tools used in the experiment are characterized based on this distribution set. There is close agreement between the surface roughness predicted by the proposed model and the experimental measurements, demonstrating the effectiveness of the model. This paper thus proposes a novel method for predicting surface roughness in micro-grinding of hard brittle materials that considers the grain protrusion topography of the micro-grinding tool, providing a theoretical and experimental reference for the material removal mechanism in micro-grinding of soda-lime glass.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29509074','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29509074"><span>Mathematical modeling of ethanol production in solid-state fermentation based on solid medium's dry weight variation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad</p> <p>2018-04-21</p> <p>In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) has been done based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model had to be modified to predict the production of a volatile compound such as ethanol. Experimental results for bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used for modeling the growth of the microorganism.
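The logistic kinetic model named at the end of this abstract can be sketched with a forward-Euler integration (the growth-rate and capacity values below are illustrative placeholders, not the fitted values from the paper):

```python
# Logistic growth kinetics, dX/dt = mu_max * X * (1 - X / x_max),
# integrated with a simple forward-Euler step. All parameter values
# are hypothetical placeholders.

def logistic_growth(x0, mu_max, x_max, t_end, dt=0.01):
    """Integrate the logistic ODE and return the biomass X at t_end."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * mu_max * x * (1.0 - x / x_max)
        t += dt
    return x

# Biomass approaches the carrying capacity x_max for large t:
print(logistic_growth(x0=0.1, mu_max=0.5, x_max=10.0, t_end=50.0))
```

Early in the run the curve is close to the exponential model; the two diverge only as the biomass nears the carrying capacity, which mirrors the comparison made in the abstract.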
In both cases, the model predictions matched the experimental results well during the exponential growth phase, indicating that the solid-medium dry-weight-variation method is well suited to modeling volatile product formation in solid-state fermentation. In addition, the logistic model yielded better predictions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22144385','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22144385"><span>Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Medvigy, David; Moorcroft, Paul R</p> <p>2012-01-19</p> <p>Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W).
The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008PhyA..387.3594C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008PhyA..387.3594C"><span>Prediction of stock markets by the evolutionary mix-game model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping</p> <p>2008-06-01</p> <p>This paper presents efforts to use the evolutionary mix-game model, a modified form of the agent-based mix-game model, to predict financial time series. We apply three methods to improve the original mix-game model by giving agents the ability to evolve their strategies, and then apply the resulting evolutionary mix-game model to forecast the Shanghai Stock Exchange Composite Index.
The results show that these modifications can improve the accuracy of prediction greatly when proper parameters are chosen.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JAP...123q4906L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JAP...123q4906L"><span>Fourier and non-Fourier bio-heat transfer models to predict ex vivo temperature response to focused ultrasound heating</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Chenghai; Miao, Jiaming; Yang, Kexin; Guo, Xiasheng; Tu, Juan; Huang, Pintong; Zhang, Dong</p> <p>2018-05-01</p> <p>Although predicting temperature variation is important for designing treatment plans for thermal therapies, research in this area is yet to investigate the applicability of prevalent thermal conduction models, such as the Pennes equation, the thermal wave model of bio-heat transfer, and the dual phase lag (DPL) model. To address this shortcoming, we heated a tissue phantom and ex vivo bovine liver tissues with focused ultrasound (FU), measured the temperature response, and compared the results with those predicted by these models. The findings show that, for a homogeneous-tissue phantom, the initial temperature increase is accurately predicted by the Pennes equation at the onset of FU irradiation, although the prediction deviates from the measured temperature with increasing FU irradiation time. For heterogeneous liver tissues, the predicted response is closer to the measured temperature for the non-Fourier models, especially the DPL model. Furthermore, the DPL model accurately predicts the temperature response in biological tissues because it increases the phase lag, which characterizes microstructural thermal interactions. 
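The Pennes equation discussed above can be illustrated with a minimal 1-D explicit finite-difference sketch (all tissue and source parameters below are hypothetical placeholders, not the phantom or liver properties measured in the paper):

```python
# 1-D explicit finite-difference sketch of the Pennes bio-heat equation:
#   rho_c * dT/dt = k * d2T/dx2 + perf * (T_a - T) + Q(x)
# where rho_c is volumetric heat capacity, k is thermal conductivity,
# perf is the blood-perfusion heat-exchange coefficient, T_a is arterial
# temperature, and Q is the focused-ultrasound heat source. All values
# are illustrative only.

def pennes_step(T, dt, dx, k, rho_c, perf, T_a, Q):
    """Advance the temperature profile T by one explicit Euler step."""
    T_new = T[:]
    for i in range(1, len(T) - 1):
        lap = (T[i - 1] - 2.0 * T[i] + T[i + 1]) / dx**2
        T_new[i] = T[i] + dt / rho_c * (k * lap + perf * (T_a - T[i]) + Q[i])
    return T_new  # boundary nodes held fixed (Dirichlet)

n = 21
T = [37.0] * n           # uniform body temperature, deg C
Q = [0.0] * n
Q[n // 2] = 5.0e6        # W/m^3, hypothetical focal heat deposition
for _ in range(200):     # 2 s of heating with dt = 0.01 s
    T = pennes_step(T, dt=0.01, dx=1e-3, k=0.5, rho_c=3.6e6,
                    perf=2000.0, T_a=37.0, Q=Q)

print(max(T))            # centre temperature rises above 37 deg C
```

The explicit step is stable here because dt*k/(rho_c*dx^2) is far below the 0.5 limit; the non-Fourier models in the abstract add relaxation (phase-lag) terms on top of this baseline.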
These findings should help to establish more precise clinical treatment plans for thermal therapies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ThApC.tmp..153L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ThApC.tmp..153L"><span>A data-driven SVR model for long-term runoff prediction and uncertainty analysis based on the Bayesian framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liang, Zhongmin; Li, Yujie; Hu, Yiming; Li, Binquan; Wang, Jun</p> <p>2017-06-01</p> <p>Accurate and reliable long-term forecasting plays an important role in water resources management and utilization. In this paper, a hybrid model called SVR-HUP is presented to predict long-term runoff and quantify the prediction uncertainty. The model is built in three steps. First, appropriate predictors are selected according to the correlations between meteorological factors and runoff. Second, a support vector regression (SVR) model is structured and optimized based on the LibSVM toolbox and a genetic algorithm. Finally, using forecasted and observed runoff, a hydrologic uncertainty processor (HUP) based on a Bayesian framework is used to estimate the posterior probability distribution of the simulated values, and the associated prediction uncertainty is quantitatively analyzed. Six precision evaluation indexes, including the correlation coefficient (CC), relative root mean square error (RRMSE), relative error (RE), mean absolute percentage error (MAPE), Nash-Sutcliffe efficiency (NSE), and qualification rate (QR), are used to measure prediction accuracy. As a case study, the proposed approach is applied in the Han River basin, South Central China. Three types of SVR models are established to forecast the monthly, flood-season, and annual runoff volumes.
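Two of the precision indexes listed in this abstract, NSE and MAPE, are easy to state concretely; here is a plain-Python sketch with toy data (not the Han River series):

```python
# Nash-Sutcliffe efficiency (NSE) and mean absolute percentage error
# (MAPE), two of the six indexes named in the abstract. Toy data only.

def nse(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1 is perfect."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def mape(obs, sim):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(obs) * sum(abs((o - s) / o) for o, s in zip(obs, sim))

obs = [120.0, 95.0, 150.0, 80.0]   # observed runoff, arbitrary units
sim = [110.0, 100.0, 140.0, 85.0]  # simulated runoff
print(round(nse(obs, sim), 3), round(mape(obs, sim), 2))
```

NSE compares the model against the mean of the observations as a baseline (values at or below 0 mean the model is no better than the mean), while MAPE reports relative error directly in percent.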
The results indicate that SVR yields satisfactory accuracy and reliability at all three scales. In addition, the results suggest that the HUP can not only quantify the uncertainty of prediction based on a confidence interval but also provide a more accurate single-value prediction than the initial SVR forecasting result. Thus, the SVR-HUP model provides an alternative method for long-term runoff forecasting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H21D1482L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H21D1482L"><span>Enhancing Flood Prediction Reliability Using Bayesian Model Averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Z.; Merwade, V.</p> <p>2017-12-01</p> <p>Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin.
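The core BMA weighting can be sketched in a few lines: each ensemble member receives a weight proportional to its Gaussian likelihood against the training observations, and the BMA prediction is the weighted mean. This is a simplified illustration with a fixed error variance and no EM fitting, not the LISFLOOD-FP workflow itself; all numbers are hypothetical.

```python
import math

# Simplified Bayesian model averaging: weight each member by its
# Gaussian likelihood on training data, then combine predictions
# as a weighted mean.

def bma_weights(obs, member_preds, sigma=1.0):
    """Normalized weights from per-member Gaussian log-likelihoods."""
    logliks = [sum(-0.5 * ((o - p) / sigma) ** 2 for o, p in zip(obs, preds))
               for preds in member_preds]
    m = max(logliks)                      # stabilise the exponentials
    raw = [math.exp(ll - m) for ll in logliks]
    total = sum(raw)
    return [r / total for r in raw]

def bma_predict(weights, member_preds_new):
    """Weighted-mean deterministic prediction at a new time step."""
    return sum(w * p for w, p in zip(weights, member_preds_new))

obs = [2.0, 3.0, 4.0]                    # observed stages, training event
members_train = [[2.1, 3.0, 3.9],        # member close to observations
                 [2.5, 3.6, 4.8]]        # biased member
w = bma_weights(obs, members_train)
print(w, bma_predict(w, [5.0, 6.0]))     # better member gets more weight
```

Because the weights are data-driven rather than equal, a member that tracked the training event poorly contributes less, which is the mechanism behind the equal-weights comparison in the abstract.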
Station-based BMA (BMA_S) water stage prediction performs better than global-based BMA (BMA_G) prediction, which in turn is superior to the ensemble-mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.H31B1354A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.H31B1354A"><span>Probabilistic flood inundation prediction within a coupled hydrodynamic, distributed hydrologic modeling framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Adams, T. E.</p> <p>2016-12-01</p> <p>Accurate and timely predictions of the lateral extent of floodwaters and of water depth in floodplain areas are critically important worldwide. This paper demonstrates the coupling of hydrologic ensembles, derived from numerical weather prediction (NWP) model forcings, as input to a fully distributed hydrologic model. The resulting ensemble output from the distributed hydrologic model is used as upstream flow boundaries and lateral inflows to a 1-D hydrodynamic model. An example is presented for the Potomac River in the vicinity of Washington, DC (USA).
The approach taken falls within the broader goals of the Hydrologic Ensemble Prediction EXperiment (HEPEX).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/111417','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/111417"><span>New model for burnout prediction in channels of various cross-section</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Bobkov, V.P.; Kozina, N.V.; Vinogrado, V.N.</p> <p>1995-09-01</p> <p>The model developed to predict the critical heat flux (CHF) in channels of various cross-section is presented together with the results of data analysis. The model is a realization of a relative method for describing CHF, based on data for round tubes and a system of correction factors. The results of the data description presented here cover rectangular and triangular channels, annuli and rod bundles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20365976','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20365976"><span>Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Giedt, Joel; Thomas, Anthony W; Young, Ross D</p> <p>2009-11-13</p> <p>Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model.
The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li class="active"><span>17</span></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFM.H34D..05H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFM.H34D..05H"><span>Use of Linear Prediction Uncertainty Analysis to Guide Conditioning of Models Simulating Surface-Water/Groundwater Interactions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hughes, J. 
D.; White, J.; Doherty, J.</p> <p>2011-12-01</p> <p>Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. 
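The first-order (linearised) uncertainty propagation at the heart of this kind of analysis can be sketched as follows, assuming independent parameters and purely illustrative numbers (this is not the Biscayne aquifer model):

```python
# Linear predictive uncertainty: with sensitivities s_i = d(prediction)/
# d(parameter_i) and parameter variances v_i, the predictive variance is
# sum(s_i^2 * v_i) when parameters are independent. Conditioning on data
# that informs a parameter shrinks its variance, and hence the prediction's.

def predictive_variance(sens, param_var):
    """First-order variance propagation for independent parameters."""
    return sum(s * s * v for s, v in zip(sens, param_var))

sens = [0.8, 0.1, 1.5]        # prediction sensitivities (illustrative)
prior_var = [1.0, 1.0, 1.0]   # prior parameter variances
post_var = [0.2, 1.0, 0.9]    # after conditioning: parameter 1 well informed

prior = predictive_variance(sens, prior_var)
post = predictive_variance(sens, post_var)
print(prior, post)            # conditioning reduces predictive variance
```

Ranking parameters by their contribution s_i^2 * v_i indicates which observations would reduce predictive uncertainty most, which is the kind of data prioritization the abstract describes.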
Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=80965&keyword=kernel&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50','EPA-EIMS'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=80965&keyword=kernel&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50"><span>MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://oaspub.epa.gov/eims/query.page">EPA Science Inventory</a></p> <p></p> <p></p> <p>A predictive screening model was developed for fate and transport <br>of viruses in the unsaturated zone. A database of input parameters <br>allowed Monte Carlo analysis with the model. 
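The solution-space/null-space split described in the linear uncertainty analysis abstract above is, in analyses of this kind, typically obtained from a truncated SVD of a sensitivity (Jacobian) matrix. A minimal NumPy sketch, with an invented 2×3 toy Jacobian (not the study's model):

```python
import numpy as np

def solution_null_split(J, tol=1e-8):
    """Orthonormal bases for the solution and null spaces of a Jacobian J."""
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > tol * s.max()))
    V = Vt.T
    return V[:, :rank], V[:, rank:]   # data-constrained vs. unconstrained directions

# Toy Jacobian: 2 observations, 3 parameters, so at least one parameter
# combination lies in the null space and is not constrained by the data.
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
V_sol, V_null = solution_null_split(J)
print(V_sol.shape[1], V_null.shape[1])  # → 2 1
```

Parameter combinations in the null space are exactly those whose estimates must come from the prior rather than the data, which is why a dense parameterization helps resolve the boundary between the two subspaces.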
The resulting kernel <br>densities of predicted attenuation during percolation indicated very ...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=234790&keyword=biofilm&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50','EPA-EIMS'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=234790&keyword=biofilm&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50"><span>An in-premise model for Legionella exposure during showering events</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://oaspub.epa.gov/eims/query.page">EPA Science Inventory</a></p> <p></p> <p></p> <p>An exposure model was constructed to predict the critical Legionella densities in an engineered water system that might result in infection from inhalation of aerosols containing the pathogen while showering. 
The model predicted the Legionella densities in the shower air, water ...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016IJT....37..122S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016IJT....37..122S"><span>Development of a Skin Burn Predictive Model adapted to Laser Irradiation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sonneck-Museux, N.; Scheer, E.; Perez, L.; Agay, D.; Autrique, L.</p> <p>2016-12-01</p> <p>Laser technology is increasingly used, and it is crucial for both safety and medical reasons that the impact of laser irradiation on human skin can be accurately predicted. This study focuses on laser-skin interactions and the resulting lesions (burns). A mathematical model dedicated to heat transfer in skin exposed to infrared laser radiation has been developed. The model is validated by studying heat transfer in human skin while simultaneously performing experiments on an animal model (pig). For all experimental tests, the pig's skin surface temperature is recorded. Three laser wavelengths have been tested: 808 nm, 1940 nm and 10 600 nm. The first is a diode laser producing radiation absorbed deep within the skin. The second wavelength has a more superficial effect. At the third wavelength, skin behaves as an opaque material. The validity of the developed models is verified by comparison with experimental results (in vivo tests) and with the results of previous studies reported in the literature. The comparison shows that the models accurately predict the burn degree caused by laser radiation over a wide range of conditions. The results show that the key parameter for burn prediction is the extinction coefficient.
For the 1940 nm wavelength especially, significant differences between modeling results and the literature have been observed, mainly due to this coefficient's value. This new model can be used as a predictive tool to estimate the injury induced by several types (power-duration combinations) of laser exposure on the arm, the face, and the palm of the hand.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16198995','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16198995"><span>Predicting dire outcomes of patients with community acquired pneumonia.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cooper, Gregory F; Abraham, Vijoy; Aliferis, Constantin F; Aronis, John M; Buchanan, Bruce G; Caruana, Richard; Fine, Michael J; Janosky, Janine E; Livingston, Gary; Mitchell, Tom; Monti, Stefano; Spirtes, Peter</p> <p>2005-10-01</p> <p>Community-acquired pneumonia (CAP) is an important clinical condition with regard to patient mortality, patient morbidity, and healthcare resource utilization. The assessment of the likely clinical course of a CAP patient can significantly influence decision making about whether to treat the patient as an inpatient or as an outpatient. That decision can in turn influence resource utilization, as well as patient well-being. Predicting dire outcomes, such as mortality or severe clinical complications, is a particularly important component in assessing the clinical course of patients. We used a training set of 1601 CAP patient cases to construct 11 statistical and machine-learning models that predict dire outcomes. We evaluated the resulting models on 686 additional CAP-patient cases. The primary goal was not to compare these learning algorithms as a study end point; rather, it was to develop the best model possible to predict dire outcomes.
A special version of an artificial neural network (NN) model predicted dire outcomes the best. Using the 686 test cases, we estimated the expected healthcare quality and cost impact of applying the NN model in practice. The particular, quantitative results of this analysis are based on a number of assumptions that we make explicit; they will require further study and validation. Nonetheless, the general implication of the analysis seems robust, namely, that even small improvements in predictive performance for prevalent and costly diseases, such as CAP, are likely to result in significant improvements in the quality and efficiency of healthcare delivery. Therefore, seeking models with the highest possible level of predictive performance is important. Consequently, seeking ever better machine-learning and statistical modeling methods is of great practical significance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28236678','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28236678"><span>A perturbative approach for enhancing the performance of time series forecasting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C</p> <p>2017-04-01</p> <p>This paper proposes a method to perform time series prediction based on perturbation theory. The approach is based on continuously adjusting an initial forecasting model to asymptotically approximate a desired time series model. First, a predictive model generates an initial forecasting for a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecasting. 
If that residual series is not white noise, it can be used to improve the accuracy of the initial model, and a new predictive model is fitted to the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing the outputs of all trained predictive models in a perturbative sense. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods and with ten results reported in the literature. Results show that the proposed method not only significantly improves the performance of the initial model but also outperforms previously published results. Copyright © 2017 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFMOS22A..05P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFMOS22A..05P"><span>Review of Nearshore Morphologic Prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Plant, N. G.; Dalyander, S.; Long, J.</p> <p>2014-12-01</p> <p>The evolution of the world's erodible coastlines will determine the balance between the benefits and costs associated with human and ecological utilization of shores, beaches, dunes, barrier islands, wetlands, and estuaries. We would therefore like to predict coastal evolution to guide management and planning of human and ecological responses to coastal changes.
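The iterative residual-fitting scheme described in the perturbative-forecasting abstract above can be sketched as follows; the AR(1) base model and the lag-1-autocorrelation white-noise test are illustrative stand-ins, not the paper's choices:

```python
def ar1_fit(series):
    """Least-squares AR(1) fit; returns in-sample one-step predictions."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    var = sum((xi - mx) ** 2 for xi in x) or 1.0
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / var
    b = my - a * mx
    return [float(series[0])] + [a * xi + b for xi in x]

def lag1_autocorr(series):
    n = len(series)
    m = sum(series) / n
    var = sum((s - m) ** 2 for s in series)
    if var == 0:
        return 0.0
    return sum((series[t] - m) * (series[t - 1] - m) for t in range(1, n)) / var

def perturbative_forecast(series, max_models=5, wn_tol=0.1):
    """Sum the outputs of models fitted to successive residual series."""
    preds = [0.0] * len(series)
    residual = [float(s) for s in series]
    for _ in range(max_models):
        p = ar1_fit(residual)                                  # fit current residual
        preds = [pt + pi for pt, pi in zip(preds, p)]          # accumulate outputs
        residual = [rt - pi for rt, pi in zip(residual, p)]    # next residual series
        if abs(lag1_autocorr(residual)) < wn_tol:              # ~ white noise: stop
            break
    return preds

print(perturbative_forecast([1, 2, 3, 4, 5, 6]))  # → [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

A linear ramp is captured exactly by the first AR(1) model, so the residual becomes zero and the loop terminates after one pass; noisier series would accumulate several correction models.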
After decades of research investment in data collection, theoretical and statistical analysis, and model development, we have a number of empirical, statistical, and deterministic models that can predict the evolution of the shoreline, beaches, dunes, and wetlands over time scales of hours to decades, and even predict the evolution of geologic strata over the course of millennia. Comparisons of predictions to data have demonstrated that these models can have meaningful predictive skill. But these comparisons also highlight the deficiencies in fundamental understanding, formulations, or data that are responsible for prediction errors and uncertainty. Here, we review a subset of predictive models of the nearshore to illustrate tradeoffs in complexity, predictive skill, and sensitivity to input data and parameterization errors. We identify where future improvement in prediction skill will result from improved theoretical understanding, data collection, and model-data assimilation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920016104','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920016104"><span>A comparison between theoretical prediction and experimental measurement of the dynamic behavior of spur gears</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rebbechi, Brian; Forrester, B. David; Oswald, Fred B.; Townsend, Dennis P.</p> <p>1992-01-01</p> <p>A comparison was made between computer model predictions of gear dynamics behavior and experimental results. The experimental data were derived from the NASA gear noise rig, which was used to record dynamic tooth loads and vibration. The experimental results were compared with predictions from the DSTO Aeronautical Research Laboratory's gear dynamics code for a matrix of 28 load-speed points.
At high torque the peak dynamic load predictions agree with the experimental results with an average error of 5 percent in the speed range 800 to 6000 rpm. Tooth separation (or bounce), which was observed in the experimental data for light torque, high speed conditions, was simulated by the computer model. The model was also successful in simulating the degree of load sharing between gear teeth in the multiple tooth contact region.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.892a2008K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.892a2008K"><span>Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik</p> <p>2017-09-01</p> <p>This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. 
The results show that the proposed predictive model realistically predicts machine failure based on association rules.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27734318','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27734318"><span>Deep learning architecture for air quality predictions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Xiang; Peng, Ling; Hu, Yuan; Shao, Jing; Chi, Tianhe</p> <p>2016-11-01</p> <p>With the rapid development of urbanization and industrialization, many developing countries are suffering from heavy air pollution. Governments and citizens have expressed increasing concern regarding air pollution because it affects human health and sustainable development worldwide. Current air quality prediction methods mainly use shallow models; however, these methods produce unsatisfactory results, which inspired us to investigate methods of predicting air quality based on deep architecture models. In this paper, a novel spatiotemporal deep learning (STDL)-based air quality prediction method that inherently considers spatial and temporal correlations is proposed. A stacked autoencoder (SAE) model is used to extract inherent air quality features, and it is trained in a greedy layer-wise manner. Compared with traditional time series prediction models, our model can predict the air quality of all stations simultaneously and shows the temporal stability in all seasons. 
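Two of the three steps named in the machine-failure abstract above (binarized 0/1 items, then Apriori-style support/confidence rule creation as IF-THEN structures) can be sketched with the standard library; the failure-log records and thresholds are invented for illustration:

```python
from itertools import combinations

# Step 1 (binarization): toy records with item values already translated to 0/1.
records = [
    {"overheat": 1, "vibration": 1, "failure": 1},
    {"overheat": 1, "vibration": 0, "failure": 1},
    {"overheat": 0, "vibration": 1, "failure": 0},
    {"overheat": 1, "vibration": 1, "failure": 1},
]

def support(itemset):
    """Fraction of records in which every item of the itemset is 1."""
    return sum(all(r[i] for i in itemset) for r in records) / len(records)

def rules(target="failure", min_support=0.5, min_confidence=0.8):
    """Step 2 (rule creation): keep itemsets passing support and confidence."""
    causes = [k for k in records[0] if k != target]
    found = []
    for n in (1, 2):
        for lhs in combinations(causes, n):
            sup_lhs = support(lhs)
            sup = support(lhs + (target,))
            if sup >= min_support and sup_lhs and sup / sup_lhs >= min_confidence:
                found.append(f"IF {' AND '.join(lhs)} THEN {target}")
    return found

print(rules())  # → ['IF overheat THEN failure', 'IF overheat AND vibration THEN failure']
```

The Apriori algorithm proper also prunes supersets of infrequent itemsets before counting; with only two candidate causes the exhaustive scan above is equivalent.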
Moreover, a comparison with the spatiotemporal artificial neural network (STANN), autoregressive moving average (ARMA), and support vector regression (SVR) models demonstrates that the proposed method achieves superior performance in air quality prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002STIN...0307848S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002STIN...0307848S"><span>Wake Vortex Prediction Models for Decay and Transport Within Stratified Environments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Switzer, George F.; Proctor, Fred H.</p> <p>2002-01-01</p> <p>This paper proposes two simple models to predict vortex transport and decay. The models are determined empirically from results of three-dimensional large eddy simulations, and are applicable to wake vortices out of ground effect and not subjected to environmental winds. The large eddy simulations assume a range of ambient turbulence and stratification levels. The models and the results from the large eddy simulations support the hypothesis that the decay of the vortex hazard is decoupled from its change in descent rate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20000115618','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20000115618"><span>Evaluation of a Computational Model of Situational Awareness</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Burdick, Mark D.; Shively, R.
Jay; Rutkewski, Michael (Technical Monitor)</p> <p>2000-01-01</p> <p>Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JHyd..549..461R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JHyd..549..461R"><span>Wavelet-linear genetic programming: A new approach for modeling monthly streamflow</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ravansalar, Masoud; Rajaee, Taher; Kisi, Ozgur</p> <p>2017-06-01</p> <p>Streamflow is an important and effective factor in stream ecosystems, and its accurate prediction is an essential issue in water resources and environmental engineering systems. A hybrid wavelet-linear genetic programming (WLGP) model, which combines a discrete wavelet transform (DWT) with linear genetic programming (LGP), was used in this study to predict monthly streamflow (Q) at two gauging stations, Pataveh and Shahmokhtar, on the Beshar River near Yasuj, Iran. In the proposed WLGP model, the wavelet analysis was linked to the LGP model, where the original streamflow time series were decomposed into sub-time series comprising wavelet coefficients. The results were compared with single LGP, artificial neural network (ANN), hybrid wavelet-ANN (WANN) and multiple linear regression (MLR) models.
The comparisons were made using commonly utilized statistical measures. The Nash coefficients (E) were found to be 0.877 and 0.817 for the WLGP model at the Pataveh and Shahmokhtar stations, respectively. The comparison of the results showed that the WLGP model could significantly increase streamflow prediction accuracy at both stations. Since the results demonstrate a closer approximation of the peak streamflow values by the WLGP model, it could be utilized for one-month-ahead streamflow prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016ECSS..181..294D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016ECSS..181..294D"><span>Using modelling to predict impacts of sea level rise and increased turbidity on seagrass distributions in estuarine embayments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Davis, Tom R.; Harasti, David; Smith, Stephen D. A.; Kelaher, Brendan P.</p> <p>2016-11-01</p> <p>Climate change induced sea level rise will affect shallow estuarine habitats, which are already under threat from multiple anthropogenic stressors. Here, we present the results of modelling to predict potential impacts of climate change associated processes on seagrass distributions. We use a novel application of relative environmental suitability (RES) modelling to examine relationships between variables of physiological importance to seagrasses (light availability, wave exposure, and current flow) and seagrass distributions within 5 estuarine embayments. Models were constructed separately for Posidonia australis and Zostera muelleri subsp. capricorni using seagrass data from Port Stephens estuary, New South Wales, Australia.
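The decomposition step in the WLGP abstract above can be illustrated with a one-level Haar DWT; this sketch uses plain NumPy rather than the paper's wavelet family and LGP predictor, and only verifies that the sub-series reconstruct the signal:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: split a series into approximation and detail parts."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-frequency) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-frequency) coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse transform: recombine sub-series into the original signal."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# In a hybrid model each sub-series would be forecast separately and the
# partial predictions recombined; here we only check the round trip.
q = np.array([4.0, 2.0, 6.0, 8.0, 3.0, 1.0])   # toy monthly streamflow values
a, d = haar_dwt(q)
assert np.allclose(haar_idwt(a, d), q)          # perfect reconstruction
```

Modelling the smoother approximation series separately from the noisy detail series is what gives wavelet-hybrid models their edge on peaky hydrological signals.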
Subsequent testing of models used independent datasets from four other estuarine embayments (Wallis Lake, Lake Illawarra, Merimbula Lake, and Pambula Lake) distributed along 570 km of the east Australian coast. Relative environmental suitability models provided adequate predictions for seagrass distributions within Port Stephens and the other estuarine embayments, indicating that they may have broad regional application. Under the predictions of RES models, both sea level rise and increased turbidity are predicted to cause substantial seagrass losses in deeper estuarine areas, resulting in a net shoreward movement of seagrass beds. Seagrass species distribution models developed in this study provide a valuable tool to predict future shifts in estuarine seagrass distributions, allowing identification of areas for protection, monitoring and rehabilitation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28268823','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28268823"><span>Prediction using patient comparison vs. modeling: a case study for mortality prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter</p> <p>2016-08-01</p> <p>Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. 
In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4692524','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4692524"><span>Neural Network Prediction of ICU Length of Stay Following Cardiac Surgery Based on Pre-Incision Variables</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pothula, Venu M.; Yuan, Stanley C.; Maerz, David A.; Montes, Lucresia; Oleszkiewicz, Stephen M.; Yusupov, Albert; Perline, Richard</p> <p>2015-01-01</p> <p>Background Advanced predictive analytical techniques are being increasingly applied to clinical risk assessment. This study compared a neural network model to several other models in predicting the length of stay (LOS) in the cardiac surgical intensive care unit (ICU) based on pre-incision patient characteristics. Methods Thirty six variables collected from 185 cardiac surgical patients were analyzed for contribution to ICU LOS. 
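The patient-similarity approach compared above can be sketched as a nearest-neighbour outcome lookup; the features, outcomes, and distance metric below are toy inventions, not MIMIC-II data:

```python
import math

# Toy cohort: ([feature vector], outcome), where outcome 1 = mortality.
patients = [([60, 1.2], 0), ([80, 3.5], 1), ([55, 0.9], 0), ([75, 3.0], 1)]

def similarity_predict(x, k=3):
    """Predict the outcome of x as the mean outcome of its k nearest patients."""
    nearest = sorted(patients, key=lambda p: math.dist(x, p[0]))[:k]
    return sum(outcome for _, outcome in nearest) / k

# A patient resembling the high-risk cases receives a high predicted risk.
print(round(similarity_predict([78, 3.2], k=3), 2))  # → 0.67
```

The sort over the whole cohort for every query is also why this approach scales poorly compared with a fitted model, echoing the finding reported above.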
The Automatic Linear Modeling (ALM) module of IBM-SPSS software identified 8 factors with statistically significant associations with ICU LOS; these factors were also analyzed with the Artificial Neural Network (ANN) module of the same software. The weighted contributions of each factor (“trained” data) were then applied to data for a “new” patient to predict ICU LOS for that individual. Results Factors identified in the ALM model were: use of an intra-aortic balloon pump; O2 delivery index; age; use of positive cardiac inotropic agents; hematocrit; serum creatinine ≥ 1.3 mg/deciliter; gender; arterial pCO2. The r2 value for ALM prediction of ICU LOS in the initial (training) model was 0.356, p <0.0001. Cross validation in prediction of a “new” patient yielded r2 = 0.200, p <0.0001. The same 8 factors analyzed with ANN yielded a training prediction r2 of 0.535 (p <0.0001) and a cross validation prediction r2 of 0.410, p <0.0001. Two additional predictive algorithms were studied, but they had lower prediction accuracies. Our validated neural network model identified the upper quartile of ICU LOS with an odds ratio of 9.8(p <0.0001). Conclusions ANN demonstrated a 2-fold greater accuracy than ALM in prediction of observed ICU LOS. This greater accuracy would be presumed to result from the capacity of ANN to capture nonlinear effects and higher order interactions. Predictive modeling may be of value in early anticipation of risks of post-operative morbidity and utilization of ICU facilities. 
PMID:26710254</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017E%26ES...69a2089H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017E%26ES...69a2089H"><span>A method for grounding grid corrosion rate prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Han, Juan; Du, Jingyi</p> <p>2017-06-01</p> <p>Grounding grid corrosion rate prediction involves many factors and considerable complexity, with uncertainty in the data acquisition process. We therefore propose a grounding grid corrosion rate prediction model that combines the extended analytic hierarchy process (EAHP) with the fuzzy nearness degree. EAHP is used to establish the judgment matrix and calculate the weight of each corrosion factor of the grounding grid; samples with different classification properties contribute differently to the corrosion rate, and the nearness principle is combined with these weights to predict the corrosion rate. The application results show that the model captures data variation well, improving its validity and achieving higher prediction precision.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900004123','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900004123"><span>Atmospheric drag model calibrations for spacecraft lifetime prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Binebrink, A. L.; Radomski, M. S.; Samii, M. V.</p> <p>1989-01-01</p> <p>Although solar activity prediction uncertainty normally dominates the decay prediction error budget for near-Earth spacecraft, the effect of drag force modeling errors for given levels of solar activity needs to be considered.
Two atmospheric density models, the modified Harris-Priester model and the Jacchia-Roberts model, were analyzed for their ability to reproduce the decay histories of the Solar Mesosphere Explorer (SME) and Solar Maximum Mission (SMM) spacecraft in the 490- to 540-kilometer altitude range. Historical solar activity data were used as input to the density computations. For each spacecraft and atmospheric model, a drag scaling adjustment factor was determined for a high-solar-activity year, such that the observed annual decay in the mean semimajor axis was reproduced by an averaged variation-of-parameters (VOP) orbit propagation. The SME (SMM) calibration was performed using calendar year 1983 (1982). The resulting calibration factors differ by 20 to 40 percent from the predictions of the prelaunch ballistic coefficients. The orbit propagations for each spacecraft were extended to the middle of 1988 using the calibrated drag models. For the Jacchia-Roberts density model, the observed decay in the mean semimajor axis of SME (SMM) over the 4.5-year (5.5-year) predictive period was reproduced to within 1.5 (4.4) percent. The corresponding figure for the Harris-Priester model was 8.6 (20.6) percent. Detailed results and conclusions regarding the importance of accurate drag force modeling for lifetime predictions are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015IJSyS..46.2072L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015IJSyS..46.2072L"><span>The assisted prediction modelling frame with hybridisation and ensemble for business risk forecasting and an implementation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie</p> <p>2015-08-01</p> <p>The business failure of numerous companies results in financial crises.
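The calibration step described in the drag-model abstract above amounts to solving for a scaling factor that reproduces an observed annual decay. A minimal sketch, with a toy linear decay function standing in for the averaged VOP orbit propagation and all numbers invented:

```python
def annual_decay_km(scale):
    """Toy stand-in for propagating an orbit with a scaled drag model."""
    return 2.4 * scale   # modelled semimajor-axis decay, km/year

def calibrate(observed_km, lo=0.1, hi=5.0, tol=1e-9):
    """Bisect on the drag scaling factor until modelled decay matches observation."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if annual_decay_km(mid) < observed_km:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

factor = calibrate(observed_km=3.1)
print(round(factor, 4))  # → 1.2917
```

Bisection works here because modelled decay increases monotonically with the drag scaling; a real calibration would rerun the propagation inside the loop.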
The high social costs associated with such crises have driven the search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced, namely, a pure support vector machine; a hybrid support vector machine incorporating principal components analysis; a support vector machine ensemble based on random sampling and group decision; and an ensemble of hybrid support vector machines that uses group decision to integrate various hybrid support vector machines built on variables produced from principal components analysis and samples drawn by random sampling.
The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines outperformed the pure support vector machine and the support vector machine ensemble.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CompM..61..237H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CompM..61..237H"><span>Uncertainty aggregation and reduction in structure-material performance prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hu, Zhen; Mahadevan, Sankaran; Ao, Dan</p> <p>2018-02-01</p> <p>An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially.
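The segment-wise validate-then-update idea can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a conjugate normal-normal update on a scalar model bias with known observation variance, a mean-absolute-error validity check per segment, and entirely hypothetical data and tolerances.

```python
def normal_update(prior_mu, prior_var, obs, obs_var):
    """Conjugate normal-normal Bayesian update with known observation variance."""
    for y in obs:
        post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
        prior_mu = post_var * (prior_mu / prior_var + y / obs_var)
        prior_var = post_var
    return prior_mu, prior_var

def segmented_update(segments, model_pred, tol, prior=(0.0, 1.0), obs_var=0.04):
    """Walk observation segments in order; use a segment for Bayesian
    updating only if the model's prediction there passes a validity check."""
    mu, var = prior
    for seg in segments:
        residuals = [y - model_pred(x) for x, y in seg]
        mae = sum(abs(r) for r in residuals) / len(residuals)
        if mae <= tol:  # model deemed reliable on this segment
            mu, var = normal_update(mu, var, residuals, obs_var)
        # unreliable segments are skipped rather than biasing the posterior
    return mu, var

model = lambda x: 2.0 * x                   # hypothetical computational model
good = [(1.0, 2.1), (2.0, 4.1)]             # model accurate here (+0.1 bias)
bad = [(10.0, 35.0), (11.0, 38.0)]          # model badly wrong here
mu, var = segmented_update([good, bad], model, tol=0.5)
print(round(mu, 3), var < 1.0)
```

Only the first segment passes the check, so the posterior bias estimate is driven by the reliable data and its variance shrinks below the prior's.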
Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26468133','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26468133"><span>Toward Big Data Analytics: Review of Predictive Models in Management of Diabetes and Its Complications.</span></a></p> <p><a target="_blank"
rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cichosz, Simon Lebech; Johansen, Mette Dencker; Hejlesen, Ole</p> <p>2015-10-14</p> <p>Diabetes is one of the top priorities in medical science and health care management, and an abundance of data and information is available on these patients. Whether data stem from statistical models or complex pattern recognition models, they may be fused into predictive models that combine patient information and prognostic outcome results. Such knowledge could be used in clinical decision support, disease surveillance, and public health management to improve patient care. Our aim was to review the literature and give an introduction to predictive models in screening for and the management of prevalent short- and long-term complications in diabetes. Predictive models have been developed for management of diabetes and its complications, and the number of publications on such models has been growing over the past decade. Often multiple logistic or a similar linear regression is used for prediction model development, possibly owing to its transparent functionality. Ultimately, for prediction models to prove useful, they must demonstrate impact, namely, their use must generate better patient outcomes. Although extensive effort has been put in to building these predictive models, there is a remarkable scarcity of impact studies. 
© 2015 Diabetes Technology Society.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26526490','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26526490"><span>Modelling postharvest quality of blueberry affected by biological variability using image and spectral data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hu, Meng-Han; Dong, Qing-Li; Liu, Bao-Lin</p> <p>2016-08-01</p> <p>Hyperspectral reflectance and transmittance sensing as well as near-infrared (NIR) spectroscopy were investigated as non-destructive tools for estimating blueberry firmness, elastic modulus and soluble solid content (SSC). Least-squares support vector machine models were established from these three spectra based on samples from three cultivars, viz. Bluecrop, Duke and M2, and two harvest years, viz. 2014 and 2015, for predicting blueberry postharvest quality. One-cultivar reflectance models (establishing model using one cultivar) yielded better results than the corresponding transmittance and NIR models for predicting blueberry firmness with few cultivar effects. Two-cultivar NIR models (establishing model using two cultivars) proved to be suitable for estimating blueberry SSC with correlations over 0.83. Rp (RMSEp) values of the three-cultivar reflectance models (establishing model using 75% of three cultivars) were 0.73 (0.094) and 0.73 (0.186), respectively, for predicting blueberry firmness and elastic modulus. For SSC prediction, the three-cultivar NIR model was found to achieve an Rp (RMSEp) value of 0.85 (0.090). Adding Bluecrop samples harvested in 2014 could enhance the three-cultivar model robustness for firmness and elastic modulus.
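The Rp (RMSEp) figures quoted here are a Pearson correlation and a root-mean-square error between measured and predicted values on the prediction set; a minimal sketch of how they are computed, with hypothetical numbers rather than the paper's data:

```python
import math

def rp_rmsep(measured, predicted):
    """Pearson correlation (Rp) and root-mean-square error (RMSEp)
    between measured and predicted quality attributes."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(predicted) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, predicted))
    sx = math.sqrt(sum((x - mx) ** 2 for x in measured))
    sy = math.sqrt(sum((y - my) ** 2 for y in predicted))
    rmse = math.sqrt(sum((x - y) ** 2 for x, y in zip(measured, predicted)) / n)
    return cov / (sx * sy), rmse

firmness = [1.2, 1.5, 1.1, 1.8, 1.4]  # hypothetical measured values
pred = [1.3, 1.4, 1.2, 1.7, 1.5]      # hypothetical model output
rp, rmsep = rp_rmsep(firmness, pred)
print(round(rp, 2), round(rmsep, 3))  # prints: 0.95 0.1
```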
The above results indicated the potential for using spatial and spectral techniques to develop robust models for predicting blueberry postharvest quality in the presence of biological variability. © 2015 Society of Chemical Industry.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018RvGeo..56..108H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018RvGeo..56..108H"><span>Seasonal Drought Prediction: Advances, Challenges, and Future Prospects</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hao, Zengchao; Singh, Vijay P.; Xia, Youlong</p> <p>2018-03-01</p> <p>Drought prediction is of critical importance to early warning for drought management. This review provides a synthesis of drought prediction based on statistical, dynamical, and hybrid methods. Statistical drought prediction is achieved by modeling the relationship between drought indices of interest and a suite of potential predictors, including large-scale climate indices, local climate variables, and land initial conditions. Dynamical meteorological drought prediction relies on seasonal climate forecast from general circulation models (GCMs), which can be employed to drive hydrological models for agricultural and hydrological drought prediction with the predictability determined by both climate forcings and initial conditions. Challenges still exist in drought prediction at long lead time and under a changing environment resulting from natural and anthropogenic factors.
Future research prospects to improve drought prediction include, but are not limited to, high-quality data assimilation, improved model development with key processes related to drought occurrence, optimal ensemble forecast to select or weight ensembles, and hybrid drought prediction to merge statistical and dynamical forecasts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA603409','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA603409"><span>Development of Virtual Blade Model for Modelling Helicopter Rotor Downwash in OpenFOAM</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2013-12-01</p> <p>UNCLASSIFIED Development of Virtual Blade Model for Modelling Helicopter Rotor Downwash in OpenFOAM Stefano Wahono Aerospace...Georgia Institute of Technology. The OpenFOAM predicted result was also shown to compare favourably with ANSYS Fluent predictions. RELEASE...UNCLASSIFIED Development of Virtual Blade Model for Modelling Helicopter Rotor Downwash in OpenFOAM Executive Summary The Infrared</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26406085','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26406085"><span>Mental workload prediction based on attentional resource allocation and information processing.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin</p> <p>2015-01-01</p> <p>Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. 
In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15844447','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15844447"><span>External validation of EPIWIN biodegradation models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M</p> <p>2005-01-01</p> <p>The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation and an expert survey model for primary and ultimate biodegradation estimation. Experimental biodegradation data for 110 newly notified substances were compared with the estimations of the different models. The models were applied separately and in combinations to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy in contrast to ready biodegradability. 
Given the high environmental concern over persistent chemicals, and given that not-readily biodegradable chemicals greatly outnumber readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for identifying not-readily biodegradable substances. However, the highest score for overall predictivity with lowest percentage false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5331917','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5331917"><span>Epigenome-wide cross-tissue predictive modeling and comparison of cord blood and placental methylation in a birth cohort</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>De Carli, Margherita M; Baccarelli, Andrea A; Trevisi, Letizia; Pantic, Ivan; Brennan, Kasey JM; Hacker, Michele R; Loudon, Holly; Brunst, Kelly J; Wright, Robert O; Wright, Rosalind J; Just, Allan C</p> <p>2017-01-01</p> <p>Aim: We compared predictive modeling approaches to estimate placental methylation using cord blood methylation. Materials & methods: We performed locus-specific methylation prediction using both linear regression and support vector machine models with 174 matched pairs of 450k arrays. Results: At most CpG sites, both approaches gave poor predictions in spite of a misleading improvement in array-wide correlation. CpG islands and gene promoters, but not enhancers, were the genomic contexts where the correlation between measured and predicted placental methylation levels achieved higher values.
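Cross-tissue prediction quality of this sort is typically screened site by site. A minimal sketch of such per-site screening, assuming a simple linear fit of placental on cord-blood methylation and an R-squared cutoff; the site names and beta values are invented for illustration:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    syy = sum((b - my) ** 2 for b in y)
    return 0.0 if sxx == 0 or syy == 0 else (sxy * sxy) / (sxx * syy)

def reliable_sites(cord, placenta, threshold=0.75):
    """Keep only CpG sites whose cord-blood methylation predicts
    placental methylation with R^2 >= threshold."""
    return [site for site in cord
            if r_squared(cord[site], placenta[site]) >= threshold]

# hypothetical beta values for two CpG sites across five subjects
cord = {"cg_a": [0.1, 0.2, 0.3, 0.4, 0.5],
        "cg_b": [0.5, 0.1, 0.4, 0.2, 0.3]}
placenta = {"cg_a": [0.15, 0.26, 0.34, 0.47, 0.55],
            "cg_b": [0.3, 0.3, 0.2, 0.4, 0.1]}
print(reliable_sites(cord, placenta))  # only the near-linear site survives
```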
We provide a list of 714 sites where both models achieved an R2 ≥0.75. Conclusion: The present study indicates the need for caution in interpreting cross-tissue predictions. Few methylation sites can be predicted between cord blood and placenta. PMID:28234020</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20050231781&hterms=account+information&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Daccount%2Binformation','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20050231781&hterms=account+information&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Daccount%2Binformation"><span>An information maximization model of eye movements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra</p> <p>2005-01-01</p> <p>We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. 
Our results suggest that information maximization is a useful principle for programming eye movements.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3693991','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3693991"><span>Using Pareto points for model identification in predictive toxicology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. 
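The Pareto-optimality test behind such automated model identification can be sketched directly. This sketch assumes two criteria to be minimized, prediction error and distance of the query compound from a model's training domain, which are hypothetical stand-ins for whatever criteria a real model collection records:

```python
def pareto_optimal(models):
    """Return the models not dominated on (error, domain_distance):
    model A dominates B if A is no worse on both criteria and strictly
    better on at least one (both criteria are minimized)."""
    def dominates(a, b):
        return (a[1] <= b[1] and a[2] <= b[2]
                and (a[1] < b[1] or a[2] < b[2]))
    return [m for m in models
            if not any(dominates(o, m) for o in models if o is not m)]

# (name, prediction error, distance of query compound from training domain)
models = [("qsar_1", 0.20, 0.9),
          ("qsar_2", 0.35, 0.2),
          ("qsar_3", 0.40, 0.8)]  # dominated by qsar_2 on both criteria
print([m[0] for m in pareto_optimal(models)])
```

The surviving Pareto set contains the models among which the final choice (e.g. lowest error, or closest domain) can then be made.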
PMID:23517649</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JPS...215..135B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012JPS...215..135B"><span>A mathematical model for predicting the life of polymer electrolyte fuel cell membranes subjected to hydration cycling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Burlatsky, S. F.; Gummalla, M.; O'Neill, J.; Atrazhev, V. V.; Varyukhin, A. N.; Dmitriev, D. V.; Erikhman, N. S.</p> <p>2012-10-01</p> <p>Under typical Polymer Electrolyte Membrane Fuel Cell (PEMFC) operating conditions, part of the membrane electrode assembly is subjected to humidity cycling due to variation of inlet gas RH and/or flow rate. Cyclic membrane hydration/dehydration would cause cyclic swelling/shrinking of the unconstrained membrane. In a constrained membrane, it causes cyclic stress resulting in mechanical failure in the area adjacent to the gas inlet. A mathematical modeling framework for prediction of the lifetime of a PEMFC membrane subjected to hydration cycling is developed in this paper. The model predicts membrane lifetime as a function of RH cycling amplitude and membrane mechanical properties. The modeling framework consists of three model components: a fuel cell RH distribution model, a hydration/dehydration induced stress model that predicts stress distribution in the membrane, and a damage accrual model that predicts membrane lifetime. Short descriptions of the model components along with the overall framework are presented in the paper.
The model was used for lifetime prediction of a GORE-SELECT membrane.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/6189683','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/6189683"><span>Predictive aging results for cable materials in nuclear power plants</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Gillen, K.T.; Clough, R.L.</p> <p>1990-11-01</p> <p>In this report, we provide a detailed discussion of the methodology of predicting cable degradation versus dose rate, temperature, and exposure time and its application to data obtained on a number of additional nuclear power plant cable insulation (a hypalon, a silicone rubber and two ethylenetetrafluoroethylenes) and jacket (a hypalon) materials. We then show that the predicted, low-dose-rate results for our materials are in excellent agreement with long-term (7 to 9 years), low-dose-rate results recently obtained for the same material types actually aged under nuclear power plant conditions. Based on a combination of the modelling and long-term results, we find indications of reasonably similar degradation responses among several different commercial formulations for each of the following "generic" materials: hypalon, ethylenetetrafluoroethylene, silicone rubber and PVC. If such "generic" behavior can be further substantiated through modelling and long-term results on additional formulations, predictions of cable life for other commercial materials of the same generic types would be greatly facilitated. Finally, to aid utilities in their cable life extension decisions, we utilize our modelling results to generate lifetime prediction curves for the materials modelled to date.
These curves plot expected material lifetime versus dose rate and temperature down to the levels of interest to nuclear power plant aging. 18 refs., 30 figs., 3 tabs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19760005356','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19760005356"><span>Verification by Remote Sensing of an Oil Slick Movement Prediction Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Klemas, V. (Principal Investigator); Davis, G.; Wang, H.</p> <p>1975-01-01</p> <p>The author has identified the following significant results. LANDSAT, aircraft, ships, and air-dropped current drogues were deployed to determine current circulation and to track oil slick movement on four different dates in Delaware Bay. Results were used to verify a predictive model for oil slick movement and dispersion. The model predicts the behavior of oil slicks given their size, location, tidal stage (current), weather (wind), and nature of crude.
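A common rule of thumb in slick-drift modelling, not taken from this report, superposes the tidal current with a few percent of the wind velocity. A minimal advection sketch under that assumption, with a made-up uniform current field and hypothetical parameter values:

```python
def slick_position(start, current, wind, hours, wind_factor=0.03, steps=100):
    """Advect a slick centroid by tidal current plus a wind-drift term.
    The wind_factor * wind contribution is a common rule of thumb, not the
    verified model's formulation. Velocities in km/h, position in km."""
    x, y = start
    dt = hours / steps
    for _ in range(steps):
        ux, uy = current(x, y)              # local tidal current
        x += (ux + wind_factor * wind[0]) * dt
        y += (uy + wind_factor * wind[1]) * dt
    return x, y

uniform_ebb = lambda x, y: (0.5, -0.2)      # hypothetical uniform current field
pos = slick_position((0.0, 0.0), uniform_ebb, wind=(10.0, 0.0), hours=6.0)
print(round(pos[0], 2), round(pos[1], 2))   # prints: 4.8 -1.2
```

A real application would replace `uniform_ebb` with a tidal-stage-dependent current field and add a spreading/dispersion term.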
Both LANDSAT satellites provided valuable data on gross circulation patterns and convergent coastal fronts, which, by capturing oil slicks, significantly influence their movement and dispersion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28196722','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28196722"><span>Comparison of models for predicting the changes in phytoplankton community composition in the receiving water system of an inter-basin water transfer project.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zeng, Qinghui; Liu, Yi; Zhao, Hongtao; Sun, Mingdong; Li, Xuyong</p> <p>2017-04-01</p> <p>Inter-basin water transfer projects might cause complex hydro-chemical and biological variation in the receiving aquatic ecosystems. Whether machine learning models can be used to predict changes in phytoplankton community composition caused by water transfer projects has rarely been studied. In the present study, we used machine learning models to predict the total algal cell densities and changes in phytoplankton community composition in Miyun reservoir caused by the middle route of the South-to-North Water Transfer Project (SNWTP). The model performances of four machine learning models, including regression trees (RT), random forest (RF), support vector machine (SVM), and artificial neural network (ANN), were evaluated and the best model was selected for further prediction. The results showed that the predictive accuracies (Pearson's correlation coefficient) of the models were RF (0.974), ANN (0.951), SVM (0.860), and RT (0.817) in the training step and RF (0.806), ANN (0.734), SVM (0.730), and RT (0.692) in the testing step. Therefore, the RF model was the best method for estimating total algal cell densities.
Furthermore, the prediction accuracies of the RF model for dominant phytoplankton phyla (Cyanophyta, Chlorophyta, and Bacillariophyta) in Miyun reservoir ranged from 0.824 to 0.869 in the testing step. With water transfer, the predicted changes in the proportions of the different phytoplankton phyla ranged from -8.88% to 9.93%, and the predicted dominant phyla with water transfer in each season remained unchanged compared to the phytoplankton succession without water transfer. The results of the present study provide a useful tool for predicting the changes in phytoplankton community caused by water transfer. The method is transferable to other locations via establishment of models with data relevant to a particular area. Our findings help in better understanding the possible changes in aquatic ecosystems influenced by inter-basin water transfer. Copyright © 2017 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22189378','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22189378"><span>A QSPR model for prediction of diffusion coefficient of non-electrolyte organic compounds in air at ambient condition.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi</p> <p>2012-03-01</p> <p>Evaluation of diffusion coefficients of pure compounds in air is of great interest for many diverse industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15K and atmospheric pressure. Four thousand five hundred and seventy-nine organic compounds from a broad spectrum of chemical families have been investigated to propose a comprehensive and predictive model.
The final model is derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results quantified by the following statistics: Squared Correlation Coefficient=0.9723, Standard Deviation Error=0.003 and Average Absolute Relative Deviation=0.3% for the predicted properties from existing experimental values. Copyright © 2011 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29285342','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29285342"><span>A study on the predictability of acute lymphoblastic leukaemia response to treatment using a hybrid oncosimulator.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ouzounoglou, Eleftherios; Kolokotroni, Eleni; Stanulla, Martin; Stamatakos, Georgios S</p> <p>2018-02-06</p> <p>Efficient use of Virtual Physiological Human (VPH)-type models for personalized treatment response prediction purposes requires a precise model parameterization. In the case where the available personalized data are not sufficient to fully determine the parameter values, an appropriate prediction task may be followed. In this study, a hybrid combination of computational optimization and machine learning methods with an already developed mechanistic model, called the acute lymphoblastic leukaemia (ALL) Oncosimulator, which simulates ALL progression and treatment response, is presented. These methods are used in order for the parameters of the model to be estimated for retrospective cases and to be predicted for prospective ones. The parameter value prediction is based on a regression model trained on retrospective cases. The proposed Hybrid ALL Oncosimulator system has been evaluated when predicting the pre-phase treatment outcome in ALL.
This has been correctly achieved for a significant percentage of patient cases tested (approx. 70% of patients). Moreover, the system is capable of denying the classification of cases for which the results are not trustworthy enough. In that case, potentially misleading predictions for a number of patients are avoided, while the classification accuracy for the remaining patient cases further increases. The results obtained are particularly encouraging regarding the soundness of the proposed methodologies and their relevance to the process of achieving clinical applicability of the proposed Hybrid ALL Oncosimulator system and VPH models in general.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20070022496','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20070022496"><span>Twist Model Development and Results from the Active Aeroelastic Wing F/A-18 Aircraft</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lizotte, Andrew M.; Allen, Michael J.</p> <p>2007-01-01</p> <p>Understanding the wing twist of the active aeroelastic wing (AAW) F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. 
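Twist prediction by linear regression of this kind reduces to solving the least-squares normal equations. A self-contained sketch, with hypothetical predictors (aileron deflection and dynamic pressure, standing in for the surface positions and state variables the report actually uses) and synthetic rather than flight data:

```python
def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_twist(rows, twist):
    """Least-squares fit of twist ~ intercept + aileron + dynamic pressure
    via the normal equations (X^T X) beta = X^T y."""
    X = [[1.0] + r for r in rows]
    k = len(X[0])
    XtX = [[sum(x[i] * x[j] for x in X) for j in range(k)] for i in range(k)]
    Xty = [sum(x[i] * t for x, t in zip(X, twist)) for i in range(k)]
    return solve(XtX, Xty)

# synthetic flight data generated from twist = 0.5 + 0.2*aileron + 0.01*qbar
rows = [[2.0, 100.0], [4.0, 150.0], [6.0, 120.0], [1.0, 200.0], [3.0, 180.0]]
twist = [0.5 + 0.2 * a + 0.01 * q for a, q in rows]
coef = fit_twist(rows, twist)
print([round(c, 3) for c in coef])  # recovers [0.5, 0.2, 0.01]
```

The dynamic-pressure-based correction mentioned in the abstract amounts to adding or transforming predictor columns before refitting.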
The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption. This technique produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090011185','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090011185"><span>Symbolic Processing Combined with Model-Based Reasoning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>James, Mark</p> <p>2009-01-01</p> <p>A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. 
The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050111580','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050111580"><span>Twist Model Development and Results From the Active Aeroelastic Wing F/A-18 Aircraft</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lizotte, Andrew; Allen, Michael J.</p> <p>2005-01-01</p> <p>Understanding the wing twist of the active aeroelastic wing F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. 
Error in the original model was reduced in some cases by using a dynamic pressure-based assumption and by using neural networks. These techniques produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090010211','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090010211"><span>Elongated Tetrakaidecahedron Micromechanics Model for Space Shuttle External Tank Foams</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sullivan, Roy M.; Ghosn, Louis J.; Lerch, Bradley A.; Baker, Eric H.</p> <p>2009-01-01</p> <p>The results of microstructural characterization studies and physical and mechanical testing of BX-265 and NCFI24-124 foams are reported. A micromechanics model developed previously by the authors is reviewed, and the resulting equations for the elastic constants, the relative density, and the strength of the foam in the principal material directions are presented. The micromechanics model is also used to derive equations to predict the effect of vacuum on the tensile strength and the strains induced by exposure to vacuum. Using a combination of microstructural dimensions and physical and mechanical measurements as input, the equations for the elastic constants and the relative density are applied and the remaining microstructural dimensions are predicted. The predicted microstructural dimensions are in close agreement with the average measured values for both BX-265 and NCFI24-124. With the microstructural dimensions, the model predicts the ratio of the strengths in the principal material directions for both foams. 
The model is also used to predict the Poisson's ratios, the vacuum-induced strains, and the effect of vacuum on the tensile strengths. However, the comparison of these predicted values with the measured values is not as favorable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28624401','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28624401"><span>The External Validity of Prediction Models for the Diagnosis of Obstructive Coronary Artery Disease in Patients With Stable Chest Pain: Insights From the PROMISE Trial.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Genders, Tessa S S; Coles, Adrian; Hoffmann, Udo; Patel, Manesh R; Mark, Daniel B; Lee, Kerry L; Steyerberg, Ewout W; Hunink, M G Myriam; Douglas, Pamela S</p> <p>2018-03-01</p> <p>This study sought to externally validate prediction models for the presence of obstructive coronary artery disease (CAD). A better assessment of the probability of CAD may improve the identification of patients who benefit from noninvasive testing. Stable chest pain patients from the PROMISE (Prospective Multicenter Imaging Study for Evaluation of Chest Pain) trial with computed tomography angiography (CTA) or invasive coronary angiography (ICA) were included. The authors assumed that patients with CTA showing 0% stenosis and a coronary artery calcium (CAC) score of 0 were free of obstructive CAD (≥50% stenosis) on ICA, and they multiply imputed missing ICA results based on clinical variables and CTA results. Predicted CAD probabilities were calculated using published coefficients for 3 models: basic model (age, sex, chest pain type), clinical model (basic model + diabetes, hypertension, dyslipidemia, and smoking), and clinical + CAC score model.
The authors assessed discrimination and calibration, and compared published effects with observed predictor effects. In 3,468 patients (1,805 women; mean 60 years of age; 779 [23%] with obstructive CAD on CTA), the models demonstrated moderate to good discrimination, with C-statistics of 0.69 (95% confidence interval [CI]: 0.67 to 0.72), 0.72 (95% CI: 0.69 to 0.74), and 0.86 (95% CI: 0.85 to 0.88) for the basic, clinical, and clinical + CAC score models, respectively. Calibration was satisfactory, although typical chest pain and diabetes were less predictive and CAC score was more predictive than was suggested by the models. Among the 31% of patients for whom the clinical model predicted a low (≤10%) probability of CAD, the actual prevalence was 7%; among the 48% for whom the clinical + CAC score model predicted a low probability, the observed prevalence was 2%. In 2 sensitivity analyses excluding imputed data, similar results were obtained using CTA as the outcome, whereas in those who underwent ICA the models significantly underestimated CAD probability. Existing clinical prediction models can identify patients with a low probability of obstructive CAD. Obstructive CAD on ICA was imputed for 61% of patients; hence, further validation is necessary. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc.
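Models like the three validated above are ordinary logistic regressions, so scoring a patient from published coefficients is a one-line computation. This sketch uses made-up illustrative coefficients (not the published PROMISE-validated values) for a basic-style model:

```python
import math

# Hypothetical coefficients for a basic-style model (age, sex, chest-pain
# type); illustrative stand-ins only, not the published values.
COEF = {"intercept": -7.5, "age": 0.06, "male": 1.2, "typical_pain": 1.9}

def predicted_cad_probability(age, male, typical_pain):
    """Logistic model: p = 1 / (1 + exp(-(b0 + b.x)))."""
    z = (COEF["intercept"] + COEF["age"] * age
         + COEF["male"] * male + COEF["typical_pain"] * typical_pain)
    return 1.0 / (1.0 + math.exp(-z))

# External-validation style check: find the low-probability stratum
# (p <= 0.10), as the study does before comparing with observed prevalence.
patients = [(55, 0, 0), (48, 0, 0), (62, 1, 1), (70, 1, 1), (40, 0, 0)]
low_risk = [p for p in patients if predicted_cad_probability(*p) <= 0.10]
```

Calibration is then the comparison of the mean predicted probability in such a stratum against the observed disease prevalence there.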
All rights reserved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29334887','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29334887"><span>Bi-objective integer programming for RNA secondary structure prediction with pseudoknots.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Legendre, Audrey; Angel, Eric; Tahi, Fariza</p> <p>2018-01-15</p> <p>RNA structure prediction is an important field in bioinformatics, and numerous methods and tools have been proposed. Pseudoknots are specific motifs of RNA secondary structures that are difficult to predict.
Almost all existing methods are based on a single model and return one solution, often missing the real structure. An alternative approach would be to combine different models and return a (small) set of solutions, maximizing its quality and diversity in order to increase the probability that it contains the real structure. We propose here an original method for predicting RNA secondary structures with pseudoknots, based on integer programming. We developed a generic bi-objective integer programming algorithm that returns optimal and sub-optimal solutions while simultaneously optimizing two models. This algorithm was then applied to the combination of two known models of RNA secondary structure prediction, namely MEA and MFE. The resulting tool, called BiokoP, is compared with the other methods in the literature. The results show that the best solution (the structure with the highest F1-score) is, in most cases, given by BiokoP. Moreover, the results of BiokoP are homogeneous, regardless of the pseudoknot type or the presence or absence of pseudoknots. Indeed, the F1-scores are always higher than 70% for any number of solutions returned. The results obtained by BiokoP show that combining the MEA and MFE models, as well as returning several optimal and several sub-optimal solutions, improves the prediction of secondary structures. One perspective of our work is to combine better mono-criterion models, in particular to combine a model based on the comparative approach with the MEA and MFE models. This will require developing a new multi-objective algorithm to combine more than two models.
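The idea of returning a set of solutions that trade off two scoring models can be illustrated without the integer-programming machinery. This toy sketch (hypothetical candidate structures and scores, not BiokoP's actual formulation) simply keeps the Pareto-nondominated candidates under two objectives to be maximized:

```python
# Toy bi-objective selection: each candidate structure has two scores
# (say, an MEA-like score and an MFE-like score, both to maximize).
# Candidates and scores are hypothetical.
candidates = {
    "s1": (0.90, 0.40),
    "s2": (0.70, 0.80),
    "s3": (0.60, 0.60),   # dominated by s2 in both objectives
    "s4": (0.95, 0.10),
}

def dominates(a, b):
    """a dominates b if a is >= in both objectives and differs from b."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def pareto_front(cands):
    """Names of non-dominated candidates: the set of optimal trade-offs."""
    return sorted(
        name for name, score in cands.items()
        if not any(dominates(other, score) for other in cands.values())
    )
```

The bi-objective algorithm in the paper additionally enumerates sub-optimal layers behind this front, which is what lets the tool return several near-optimal structures.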
BiokoP is available on the EvryRNA platform: https://EvryRNA.ibisc.univ-evry.fr .</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27195983','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27195983"><span>Improved Predictions of the Geographic Distribution of Invasive Plants Using Climatic Niche Models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ramírez-Albores, Jorge E; Bustamante, Ramiro O; Badano, Ernesto I</p> <p>2016-01-01</p> <p>Climatic niche models for invasive plants are usually constructed with occurrence records taken from literature and collections. Because these data neither discriminate among life-cycle stages of plants (adult or juvenile) nor the origin of individuals (naturally established or man-planted), the resulting models may mispredict the distribution ranges of these species. We propose that more accurate predictions could be obtained by modelling climatic niches with data of naturally established individuals, particularly with occurrence records of juvenile plants because this would restrict the predictions of models to those sites where climatic conditions allow the recruitment of the species. To test this proposal, we focused on the Peruvian peppertree (Schinus molle), a South American species that has largely invaded Mexico. Three climatic niche models were constructed for this species using a high-resolution dataset gathered in the field. The first model included all occurrence records, irrespective of the life-cycle stage or origin of peppertrees (generalized niche model). The second model only included occurrence records of naturally established mature individuals (adult niche model), while the third model was constructed with occurrence records of naturally established juvenile plants (regeneration niche model).
When models were compared, the generalized climatic niche model predicted the presence of peppertrees in sites located farther beyond the climatic thresholds that naturally established individuals can tolerate, suggesting that human activities influence the distribution of this invasive species. The adult and regeneration climatic niche models concurred in their predictions about the distribution of peppertrees, suggesting that naturally established adult trees only occur in sites where climatic conditions allow the recruitment of juvenile stages. These results support the proposal that climatic niches of invasive plants should be modelled with data of naturally established individuals because this improves the accuracy of predictions about their distribution ranges.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4873032','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4873032"><span>Improved Predictions of the Geographic Distribution of Invasive Plants Using Climatic Niche Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ramírez-Albores, Jorge E.; Bustamante, Ramiro O.</p> <p>2016-01-01</p> <p>Climatic niche models for invasive plants are usually constructed with occurrence records taken from literature and collections. Because these data neither discriminate among life-cycle stages of plants (adult or juvenile) nor the origin of individuals (naturally established or man-planted), the resulting models may mispredict the distribution ranges of these species. 
We propose that more accurate predictions could be obtained by modelling climatic niches with data of naturally established individuals, particularly with occurrence records of juvenile plants because this would restrict the predictions of models to those sites where climatic conditions allow the recruitment of the species. To test this proposal, we focused on the Peruvian peppertree (Schinus molle), a South American species that has largely invaded Mexico. Three climatic niche models were constructed for this species using a high-resolution dataset gathered in the field. The first model included all occurrence records, irrespective of the life-cycle stage or origin of peppertrees (generalized niche model). The second model only included occurrence records of naturally established mature individuals (adult niche model), while the third model was constructed with occurrence records of naturally established juvenile plants (regeneration niche model). When models were compared, the generalized climatic niche model predicted the presence of peppertrees in sites located farther beyond the climatic thresholds that naturally established individuals can tolerate, suggesting that human activities influence the distribution of this invasive species. The adult and regeneration climatic niche models concurred in their predictions about the distribution of peppertrees, suggesting that naturally established adult trees only occur in sites where climatic conditions allow the recruitment of juvenile stages. These results support the proposal that climatic niches of invasive plants should be modelled with data of naturally established individuals because this improves the accuracy of predictions about their distribution ranges.
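The core of the climatic-envelope argument above can be sketched simply: an envelope fitted only to naturally established (regeneration) records is narrower than one fitted to all records, so climatically marginal planted sites drop out of the predicted range. All climate values below are hypothetical:

```python
# Occurrence records as (mean annual temperature C, annual rainfall mm).
# "all" includes man-planted trees at climatically marginal sites;
# "regeneration" holds only naturally established juveniles.
# Hypothetical data for illustration.
records_all = [(14, 350), (16, 420), (18, 500), (22, 610), (25, 300)]
records_regeneration = [(16, 420), (18, 500), (22, 610)]

def envelope(records):
    """Rectangular climatic envelope: min/max of each climate variable."""
    temps = [t for t, _ in records]
    rains = [r for _, r in records]
    return (min(temps), max(temps)), (min(rains), max(rains))

def suitable(site, records):
    """Is the site inside the envelope fitted to the given records?"""
    (tmin, tmax), (rmin, rmax) = envelope(records)
    t, r = site
    return tmin <= t <= tmax and rmin <= r <= rmax

# A site where a planted adult could survive but recruitment fails:
site = (25, 320)
```

Real niche models (e.g. MaxEnt-style) are far richer than a rectangular envelope, but the narrowing effect of restricting the training records is the same.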
PMID:27195983</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24728753','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24728753"><span>Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir</p> <p>2014-01-01</p> <p>Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, East of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. A multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all prediction models based on fuzzy approaches were within the acceptable level (lower than one dB). However, the neuro-fuzzy model (RMSE=0.53 dB and R2=0.88) could slightly improve the accuracy of noise prediction compared with the generated fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than did the regression technique.
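Model comparisons like the one above rest on RMSE and R². A short self-contained sketch of both metrics follows, with synthetic sound-level values (not the study's data) standing in for two competing models:

```python
import math

def rmse(observed, predicted):
    """Root-mean-square error in the same units as the data (dB here)."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Synthetic sound-level predictions (dB) from two hypothetical models.
measured = [88.0, 90.5, 85.2, 92.1, 89.4]
neuro_fuzzy = [88.4, 90.1, 85.6, 91.7, 89.9]
regression = [89.5, 89.0, 86.8, 90.4, 88.0]
```

On these numbers the neuro-fuzzy stand-in beats the regression stand-in on both metrics, mirroring the ranking reported in the abstract.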
The developed fuzzy models are useful prediction tools that give professionals the opportunity to make an optimal decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1249527','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1249527"><span>The State of the Art of Neutrino Cross Section Measurements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Harris, Deborah A.</p> <p>2015-06-08</p> <p>The study of neutrino interactions has recently experienced a renaissance, motivated by the fact that neutrino oscillation experiments depend critically on accurate models of neutrino interactions. These models have to predict not only the signal and background populations that oscillation experiments see at near and far detectors, but they must also predict how the energy a neutrino deposits in a nucleus is transferred to the particles that leave the nucleus after the interaction. Over the past year there have been a number of new results on many different neutrino (and antineutrino) interaction channels using several different target nuclei. These results are often not in agreement with predictions extrapolated from charged lepton scattering measurements, or even with predictions anchored to neutrino measurements on deuterium. These new measurements are starting to give the community the handles needed to improve the theoretical description of neutrino interactions, which will ultimately pave the way for precision oscillation measurements.
This report briefly summarizes recent results and points out where those results differ from the predictions based on current models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25543541','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25543541"><span>Predicting gaseous emissions from small-scale combustion of agricultural biomass fuels.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fournel, S; Marcos, B; Godbout, S; Heitz, M</p> <p>2015-03-01</p> <p>A prediction model of gaseous emissions (CO, CO2, NOx, SO2 and HCl) from small-scale combustion of agricultural biomass fuels was developed in order to rapidly assess their potential to be burned in accordance with current environmental threshold values. The model was established based on calculation of thermodynamic equilibrium of reactive multicomponent systems using Gibbs free energy minimization. Since this method has been widely used to estimate the composition of the syngas from wood gasification, the model was first validated by comparing its prediction results with those of similar models from the literature. The model was then used to evaluate the main gas emissions from the combustion of four dedicated energy crops (short-rotation willow, reed canary grass, switchgrass and miscanthus) previously burned in a 29-kW boiler. The prediction values revealed good agreement with the experimental results. The model was particularly effective in estimating the influence of harvest season on SO2 emissions. Copyright © 2014 Elsevier Ltd.
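The study's model minimizes Gibbs free energy over a multicomponent system; as a much cruder first check, complete-combustion stoichiometry already bounds the CO2, SO2, and HCl yields from a fuel's ultimate analysis. The sketch below is that simplification only (not the equilibrium model), and the fuel composition is a hypothetical switchgrass-like example, not data from the paper:

```python
# Simplified stoichiometric estimate (not the Gibbs-minimization model):
# assume complete combustion, so all fuel sulfur leaves as SO2, all
# chlorine as HCl, and all carbon as CO2. Atomic masses in g/mol.
M = {"S": 32.06, "O": 16.00, "H": 1.008, "Cl": 35.45, "C": 12.011}

def emissions_per_kg_fuel(mass_frac_s, mass_frac_cl, mass_frac_c):
    """Return grams of SO2, HCl and CO2 produced per kg of dry fuel."""
    g_so2 = 1000 * mass_frac_s * (M["S"] + 2 * M["O"]) / M["S"]
    g_hcl = 1000 * mass_frac_cl * (M["H"] + M["Cl"]) / M["Cl"]
    g_co2 = 1000 * mass_frac_c * (M["C"] + 2 * M["O"]) / M["C"]
    return g_so2, g_hcl, g_co2

# Hypothetical switchgrass-like composition: 0.1% S, 0.05% Cl, 47% C.
so2, hcl, co2 = emissions_per_kg_fuel(0.001, 0.0005, 0.47)
```

The equilibrium calculation matters precisely where this shortcut fails, e.g. partitioning between CO and CO2 or retention of S and Cl in the ash.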
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009JSMME...3.1202O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009JSMME...3.1202O"><span>Numerical Simulation for Predicting Fatigue Damage Progress in Notched CFRP Laminates by Using Cohesive Elements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Okabe, Tomonaga; Yashiro, Shigeki</p> <p></p> <p>This study proposes the cohesive zone model (CZM) for predicting fatigue damage growth in notched carbon-fiber-reinforced composite plastic (CFRP) cross-ply laminates. In this model, damage growth in the fracture process of cohesive elements due to cyclic loading is represented by the conventional damage mechanics model. We preliminarily investigated whether this model can appropriately express fatigue damage growth for a circular crack embedded in isotropic solid material. This investigation demonstrated that this model could reproduce the results with the well-established fracture mechanics model plus the Paris' law by tuning adjustable parameters. We then numerically investigated the damage process in notched CFRP cross-ply laminates under tensile cyclic loading and compared the predicted damage patterns with those in experiments reported by Spearing et al. (Compos. Sci. Technol. 1992). 
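The cohesive-zone tuning described above targets the reference answer given by fracture mechanics plus Paris' law. A minimal sketch of that reference calculation for a center crack in a wide plate, with hypothetical material constants, is:

```python
import math

# Paris'-law fatigue crack growth, da/dN = C * (dK)^m, integrated with a
# simple forward sum over crack increments. Constants are hypothetical.
C = 1.0e-11           # m/cycle per (MPa*sqrt(m))^m
m = 3.0
delta_sigma = 100.0   # applied stress range, MPa

def delta_k(a):
    """Stress-intensity range for a center crack in a wide plate."""
    return delta_sigma * math.sqrt(math.pi * a)

def cycles_to_grow(a0, a_final, da=1.0e-5):
    """Cycles for the half-crack length to grow from a0 to a_final (m)."""
    n, a = 0.0, a0
    while a < a_final:
        n += da / (C * delta_k(a) ** m)   # dN = da / (C * dK^m)
        a += da
    return n

n_cycles = cycles_to_grow(0.001, 0.01)
```

A cohesive damage-evolution law with adjustable parameters can then be calibrated so its predicted growth life matches this kind of curve, which is the comparison the abstract describes.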
The predicted damage patterns agreed with the experimental results, which exhibited the extension of multiple types of damage (e.g., splits, transverse cracks and delaminations) near the notches.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010032403','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010032403"><span>Evaluation of CFD Turbulent Heating Prediction Techniques and Comparison With Hypersonic Experimental Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dilley, Arthur D.; McClinton, Charles R. (Technical Monitor)</p> <p>2001-01-01</p> <p>Results from a study to assess the accuracy of turbulent heating and skin friction prediction techniques for hypersonic applications are presented. The study uses the original and a modified Baldwin-Lomax turbulence model with a space marching code. Grid converged turbulent predictions using the wall damping formulation (original model) and local damping formulation (modified model) are compared with experimental data for several flat plates. The wall damping and local damping results are similar for hot wall conditions, but differ significantly for cold walls, i.e., Tw/Tt < 0.3, with the wall damping heating and skin friction 10-30% above the local damping results. Furthermore, the local damping predictions have reasonable or good agreement with the experimental heating data for all cases. The impact of the two formulations on the van Driest damping function and the turbulent eddy viscosity distribution for a cold wall case indicates the importance of including temperature gradient effects. Grid requirements for accurate turbulent heating predictions are also studied.
These results indicate that a cell Reynolds number of 1 is required for grid converged heating predictions, but coarser grids with a y+ less than 2 are adequate for design of hypersonic vehicles. Based on the results of this study, it is recommended that the local damping formulation be used with the Baldwin-Lomax and Cebeci-Smith turbulence models in design and analysis of Hyper-X and future hypersonic vehicles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3118138','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3118138"><span>Expanding a dynamic flux balance model of yeast fermentation to genome-scale</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2011-01-01</p> <p>Background Yeast is considered to be a workhorse of the biotechnology industry for the production of many value-added chemicals, alcoholic beverages and biofuels. Optimization of the fermentation is a challenging task that greatly benefits from dynamic models able to accurately describe and predict the fermentation profile and resulting products under different genetic and environmental conditions. In this article, we developed and validated a genome-scale dynamic flux balance model, using experimentally determined kinetic constraints. Results Appropriate equations for maintenance, biomass composition, anaerobic metabolism and nutrient uptake are key to improve model performance, especially for predicting glycerol and ethanol synthesis. Prediction profiles of synthesis and consumption of the main metabolites involved in alcoholic fermentation closely agreed with experimental data obtained from numerous lab and industrial fermentations under different environmental conditions.
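The "dynamic" part of a dynamic flux balance model is plain time stepping of uptake, growth, and product formation. This drastically simplified batch sketch replaces the genome-scale FBA solve with a Monod uptake law; all parameters and yields are hypothetical:

```python
# Highly simplified batch-fermentation time stepping (Monod kinetics),
# illustrating the time-marching idea behind dynamic flux balance
# analysis; the real model solves a genome-scale FBA at each step.
# All parameters are hypothetical.
mu_max = 0.4   # 1/h, maximum specific growth rate
k_s = 0.5      # g/L, Monod half-saturation constant
y_xs = 0.1     # biomass yield, g biomass per g glucose
y_ps = 0.45    # ethanol yield, g ethanol per g glucose

def simulate_batch(x0=0.1, s0=100.0, dt=0.01, t_end=48.0):
    """Integrate biomass X, glucose S, ethanol P with forward Euler."""
    x, s, p = x0, s0, 0.0
    for _ in range(int(t_end / dt)):
        mu = mu_max * s / (k_s + s)   # Monod growth law
        dx = mu * x                   # biomass growth rate
        ds = -dx / y_xs               # glucose consumed per biomass made
        dp = -ds * y_ps               # ethanol formed from consumed glucose
        x, s, p = x + dx * dt, max(s + ds * dt, 0.0), p + dp * dt
        if s == 0.0:
            break
    return x, s, p
```

Here glucose is exhausted within the simulated window, and the final ethanol titer is set by the assumed yield, roughly 45 g/L from 100 g/L glucose.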
Finally, fermentation simulations of genetically engineered yeasts closely reproduced previously reported experimental results regarding final concentrations of the main fermentation products such as ethanol and glycerol. Conclusion A useful tool to describe, understand and predict metabolite production in batch yeast cultures was developed. The resulting model, if used wisely, could help to search for new metabolic engineering strategies to manage ethanol content in batch fermentations. PMID:21595919</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5217122','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5217122"><span>Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo</p> <p>2016-01-01</p> <p>The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. 
In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27793970','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27793970"><span>Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo</p> <p>2017-01-05</p> <p>The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains.
Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u.
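The covariance structure of the first model above is the Kronecker product of an environment genetic-correlation matrix with a genomic kernel. A small numpy sketch with synthetic markers and a hypothetical environment matrix (GBLUP-style linear kernel) shows the construction:

```python
import numpy as np

rng = np.random.default_rng(1)

n_lines, n_markers, n_envs = 5, 50, 3

# Synthetic 0/1/2 genotype matrix for the lines, column-centered.
markers = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
markers -= markers.mean(axis=0)

# GBLUP-style linear genomic kernel G = M M' / n_markers.
G = markers @ markers.T / n_markers

# Hypothetical genetic-correlation matrix between the three environments.
E = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Covariance of the stacked genetic effects u across (environment x line).
K = np.kron(E, G)
```

The Gaussian-kernel (GK) variant replaces G with a radial-basis kernel of marker distances; the Kronecker structure is unchanged.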
Copyright © 2017 Cuevas et al.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26523981','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26523981"><span>CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T</p> <p>2016-02-01</p> <p>The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. 
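The patient-based QC scheme above can be sketched as a one-sided CUSUM over standardized prediction errors. This simplification replaces the study's multiple-regression-plus-logistic score with a plain z-score, and the data are synthetic:

```python
# Simplified patient-based QC: predict an analyte from the rest of the
# panel (prediction assumed given here), then track a one-sided CUSUM of
# standardized prediction errors. The published method scores errors with
# a logistic-regression model; this sketch uses a plain z-score instead.
def cusum_alarm(measured, predicted, sigma, k=0.5, h=5.0):
    """Return the 1-based index of the first CUSUM alarm, or None."""
    s = 0.0
    for i, (m, p) in enumerate(zip(measured, predicted), start=1):
        z = (m - p) / sigma
        s = max(0.0, s + z - k)   # one-sided upper CUSUM with slack k
        if s > h:
            return i
    return None

# In-control results hover around the prediction; then a +2-sigma shift
# (e.g. a calibration error) begins at sample 21. Synthetic values.
predicted = [100.0] * 40
measured = [100.0 + e for e in ([0.5, -0.5] * 10 + [2.0] * 20)]
alarm_at = cusum_alarm(measured, predicted, sigma=1.0)
```

With these settings the chart ignores the in-control noise and flags the shift a few samples after it starts, which is the short average-run-length behavior the abstract reports.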
A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28705497','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28705497"><span>Quicksilver: Fast predictive image registration - A deep learning approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc</p> <p>2017-09-01</p> <p>This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registers image pairs by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled at test time to estimate uncertainty in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. 
These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=factors+AND+making+AND+investment+AND+decision&pg=3&id=EJ733204','ERIC'); return false;" href="https://eric.ed.gov/?q=factors+AND+making+AND+investment+AND+decision&pg=3&id=EJ733204"><span>An Investment Model Analysis of Relationship Stability among Women Court-Mandated to Violence Interventions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Rhatigan, Deborah L.; Moore, Todd M.; Stuart, Gregory L.</p> <p>2005-01-01</p> <p>This investigation examined relationship stability among 60 women court-mandated to violence interventions by applying a general model (i.e., Rusbult's 1980 Investment Model) to predict intentions to leave current relationships. 
As in past research, results showed that Investment Model predictions were supported such that court-mandated women who…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=R+AND+selection+AND+traits&pg=3&id=ED189165','ERIC'); return false;" href="https://eric.ed.gov/?q=R+AND+selection+AND+traits&pg=3&id=ED189165"><span>Some Empirical Evidence for Latent Trait Model Selection.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Hutten, Leah R.</p> <p></p> <p>The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JAdD....850003S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JAdD....850003S"><span>Prediction of breakdown strength of cellulosic insulating materials using artificial neural networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Singh, Sakshi; Mohsin, M. M.; Masood, Aejaz</p> <p></p> <p>In this research work, a few sets of experiments have been performed in a high-voltage laboratory on various cellulosic insulating materials like diamond-dotted paper, paper phenolic sheets, cotton phenolic sheets, leatheroid, and presspaper, to measure different electrical parameters like breakdown strength, relative permittivity, loss tangent, etc. 
Considering the dependency of breakdown strength on other physical parameters, different Artificial Neural Network (ANN) models are proposed for the prediction of breakdown strength. The ANN model results are compared with those obtained experimentally and also with the values already predicted from an empirical relation suggested by Swanson and Dall. The reported results indicate that the breakdown strength predicted from the ANN model is in good agreement with the experimental values.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29879470','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29879470"><span>Benchmarking Deep Learning Models on Large Healthcare Datasets.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan</p> <p>2018-06-04</p> <p>Deep learning models (aka Deep Neural Networks) have revolutionized many fields including computer vision, natural language processing, and speech recognition, and are increasingly used in clinical healthcare applications. However, few works have benchmarked the performance of deep learning models with respect to state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present the benchmarking results for several clinical prediction tasks such as mortality prediction, length of stay prediction, and ICD-9 code group prediction using Deep Learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. 
We used the publicly available Medical Information Mart for Intensive Care III (MIMIC-III, v1.4) dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data is used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23579833','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23579833"><span>Soil moisture dynamics modeling considering multi-layer root zone.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kumar, R; Shankar, V; Jat, M K</p> <p>2013-01-01</p> <p>Moisture uptake by plants from the soil is a key process for plant growth and for the movement of water in the soil-plant system. A non-linear root water uptake (RWU) model was developed for a multi-layer crop root zone. The model comprised two parts: (1) model formulation and (2) moisture flow prediction. The developed model was tested for its efficiency in predicting moisture depletion in a non-uniform root zone. A field experiment on wheat (Triticum aestivum) was conducted in the sub-temperate sub-humid agro-climate of Solan, Himachal Pradesh, India. Model-predicted soil moisture parameters, i.e., moisture status at various depths, moisture depletion and soil moisture profile in the root zone, are in good agreement with experimental results. The results of simulation emphasize the utility of the RWU model across different agro-climatic regions. 
The model can be used for sound irrigation management especially in water-scarce humid, temperate, arid and semi-arid regions and can also be integrated with a water transport equation to predict the solute uptake by plant biomass.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29696896','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29696896"><span>[Evaluating the performance of species distribution models Biomod2 and MaxEnt using the giant panda distribution data].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Luo, Mei; Wang, Hao; Lyu, Zhi</p> <p>2017-12-01</p> <p>Species distribution models (SDMs) are widely used by researchers and conservationists. Predictions from different models vary significantly, which makes model selection difficult for users. In this study, we evaluated the performance of two commonly used SDMs, Biomod2 and Maximum Entropy (MaxEnt), with real presence/absence data of the giant panda, and used three indicators, i.e., area under the ROC curve (AUC), true skill statistic (TSS), and Cohen's Kappa, to evaluate the accuracy of the two model predictions. The results showed that both models could produce accurate predictions with adequate occurrence inputs and simulation repeats. Compared to MaxEnt, Biomod2 made more accurate predictions, especially when occurrence inputs were few. However, Biomod2 was more difficult to apply, required longer running times, and had less data processing capability. To choose the right model, users should consider the error requirements of their objectives. 
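Two of the accuracy indicators mentioned above, the true skill statistic (TSS) and Cohen's Kappa, can be computed from a presence/absence confusion matrix as sketched below; the counts are made up for illustration.

```python
def tss_kappa(tp, fp, fn, tn):
    """True skill statistic and Cohen's kappa from a 2x2 confusion matrix
    (tp/fp/fn/tn = true/false positives and negatives)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)            # sensitivity (true positive rate)
    spec = tn / (tn + fp)            # specificity (true negative rate)
    tss = sens + spec - 1.0          # TSS = sensitivity + specificity - 1
    po = (tp + tn) / n               # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return tss, kappa

# hypothetical evaluation of one model on 100 presence/absence records
tss, kappa = tss_kappa(tp=40, fp=10, fn=10, tn=40)
print(round(tss, 2), round(kappa, 2))  # 0.6 0.6
```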
MaxEnt should be considered when the error requirement is explicit and both models can meet it; otherwise, we recommend using Biomod2.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23781409','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23781409"><span>Prediction of morbidity and mortality in patients with type 2 diabetes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wells, Brian J; Roth, Rachel; Nowacki, Amy S; Arrigain, Susana; Yu, Changhong; Rosenkrans, Wayne A; Kattan, Michael W</p> <p>2013-01-01</p> <p>Introduction. The objective of this study was to create a tool that accurately predicts the risk of morbidity and mortality in patients with type 2 diabetes according to an oral hypoglycemic agent. Materials and Methods. The model was based on a cohort of 33,067 patients with type 2 diabetes who were prescribed a single oral hypoglycemic agent at the Cleveland Clinic between 1998 and 2006. Competing risk regression models were created for coronary heart disease (CHD), heart failure, and stroke, while a Cox regression model was created for mortality. Propensity scores were used to account for possible treatment bias. A prediction tool was created and internally validated using tenfold cross-validation. The results were compared to a Framingham model and a model based on the United Kingdom Prospective Diabetes Study (UKPDS) for CHD and stroke, respectively. Results and Discussion. Median follow-up for the mortality outcome was 769 days. The numbers of patients experiencing events were as follows: CHD (3062), heart failure (1408), stroke (1451), and mortality (3661). 
The prediction tools demonstrated the following concordance indices (c-statistics) for the specific outcomes: CHD (0.730), heart failure (0.753), stroke (0.688), and mortality (0.719). The prediction tool was superior to the Framingham model at predicting CHD and was at least as accurate as the UKPDS model at predicting stroke. Conclusions. We created an accurate tool for predicting the risk of stroke, coronary heart disease, heart failure, and death in patients with type 2 diabetes. The calculator is available online at http://rcalc.ccf.org under the heading "Type 2 Diabetes" and entitled, "Predicting 5-Year Morbidity and Mortality." This may be a valuable tool to aid the clinician's choice of an oral hypoglycemic, to better inform patients, and to motivate dialogue between physician and patient.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3600041','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3600041"><span>Incorporating geographical factors with artificial neural networks to predict reference values of erythrocyte sedimentation rate</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Background The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Methods and findings Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Conclusions Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships. 
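As an illustration of this modeling approach, the sketch below trains a small feed-forward network on synthetic, standardized "geographical factor" inputs; the data, architecture, and hyperparameters are invented for demonstration and are not those of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical standardized geographical factors:
# altitude, sunshine hours, relative humidity, temperature, precipitation
X = rng.normal(size=(200, 5))
# synthetic stand-in for a standardized reference ESR value
y = 0.4 * X[:, 0] - 0.3 * X[:, 2] + 0.2 * np.tanh(X[:, 3]) + 0.05 * rng.normal(size=200)

# one-hidden-layer feed-forward network trained by batch gradient descent
W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05
losses = []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)               # hidden layer activations
    pred = (H @ W2 + b2).ravel()           # linear output for regression
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # backpropagation of the mean-squared-error gradient
    dpred = 2 * err[:, None] / len(y)
    dW2 = H.T @ dpred; db2 = dpred.sum(axis=0)
    dH = (dpred @ W2.T) * (1 - H ** 2)     # tanh derivative
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(losses[0] > losses[-1])  # True: training reduced the mean squared error
```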
PMID:23497145</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvF...3e3701D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvF...3e3701D"><span>Predictive model for convective flows induced by surface reactivity contrast</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Davidson, Scott M.; Lammertink, Rob G. H.; Mani, Ali</p> <p>2018-05-01</p> <p>Concentration gradients in a fluid adjacent to a reactive surface due to contrast in surface reactivity generate convective flows. These flows result from contributions by electro- and diffusio-osmotic phenomena. In this study, we have analyzed reactive patterns that release and consume protons, analogous to bimetallic catalytic conversion of peroxide. Similar systems have typically been studied using either scaling analysis to predict trends or costly numerical simulation. Here, we present a simple analytical model, bridging the gap in quantitative understanding between scaling relations and simulations, to predict the induced potentials and consequent velocities in such systems without the use of any fitting parameters. Our model is tested against direct numerical solutions to the coupled Poisson, Nernst-Planck, and Stokes equations. Predicted slip velocities from the model and simulations agree to within a factor of ≈2 over a multiple order-of-magnitude change in the input parameters. 
Our analysis can be used to predict enhancement of mass transport and the resulting impact on overall catalytic conversion, and is also applicable to predicting the speed of catalytic nanomotors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23497145','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23497145"><span>Incorporating geographical factors with artificial neural networks to predict reference values of erythrocyte sedimentation rate.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Qingsheng; Mwenda, Kevin M; Ge, Miao</p> <p>2013-03-12</p> <p>The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Reference ESR values can be established with geographical factors by using artificial intelligence techniques. 
ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5593171','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5593171"><span>New Paradigm for Translational Modeling to Predict Long‐term Tuberculosis Treatment Response</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bartelink, IH; Zhang, N; Keizer, RJ; Strydom, N; Converse, PJ; Dooley, KE; Nuermberger, EL</p> <p>2017-01-01</p> <p>Abstract Disappointing results of recent tuberculosis chemotherapy trials suggest that knowledge gained from preclinical investigations was not utilized to maximal effect. A mouse‐to‐human translational pharmacokinetics (PKs) – pharmacodynamics (PDs) model built on a rich mouse database may improve clinical trial outcome predictions. The model included Mycobacterium tuberculosis growth function in mice, adaptive immune response effect on bacterial growth, relationships among moxifloxacin, rifapentine, and rifampin concentrations accelerating bacterial death, clinical PK data, species‐specific protein binding, drug‐drug interactions, and patient‐specific pathology. Simulations of recent trials testing 4‐month regimens predicted 65% (95% confidence interval [CI], 55–74) relapse‐free patients vs. 80% observed in the REMox‐TB trial, and 79% (95% CI, 72–87) vs. 82% observed in the Rifaquin trial. Simulation of 6‐month regimens predicted 97% (95% CI, 93–99) vs. 92% and 95% observed in 2RHZE/4RH control arms, and 100% predicted and observed in the 35 mg/kg rifampin arm of PanACEA MAMS. 
These results suggest that the model can inform regimen optimization and predict outcomes of ongoing trials. PMID:28561946</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4534590','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4534590"><span>Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian</p> <p>2015-01-01</p> <p>Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the case of roads with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and a Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with real-world data from a road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all five models proposed in the study in terms of prediction accuracy on roads with multiple bus routes. 
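The dynamic adjustment step can be illustrated with a scalar Kalman filter that blends a baseline (e.g. SVM-predicted) travel time with the latest observed travel time; the variances and travel times below are invented for illustration, not taken from the study.

```python
def kalman_adjust(baseline, observed, q=4.0, r=9.0):
    """Scalar Kalman filter: for each step, take the baseline-predicted travel
    time as the prior (with process variance q) and correct it with the latest
    observed travel time (measurement variance r)."""
    p = 1.0                                 # initial state variance
    estimates = []
    for prior, z in zip(baseline, observed):
        x_pred, p_pred = prior, p + q       # predict: start from the baseline
        k = p_pred / (p_pred + r)           # Kalman gain
        x = x_pred + k * (z - x_pred)       # update with the observation
        p = (1 - k) * p_pred
        estimates.append(x)
    return estimates

baseline = [300.0, 310.0, 305.0]            # hypothetical baseline times (seconds)
observed = [320.0, 330.0, 300.0]            # latest observed travel times
est = kalman_adjust(baseline, observed)
print([round(e, 1) for e in est])           # each estimate lies between prior and observation
```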
PMID:26294903</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26294903','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26294903"><span>Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian</p> <p>2015-01-01</p> <p>Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the case of roads with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and a Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with real-world data from a road with multiple bus routes in Shenzhen, China. 
The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all five models proposed in the study in terms of prediction accuracy on roads with multiple bus routes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E%26ES..108c2059Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E%26ES..108c2059Y"><span>A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan</p> <p>2018-01-01</p> <p>Breakdown occurrence on expressways is considered to be related to various factors. Therefore, to investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters observed before breakdowns are extracted, including dynamic environment conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdowns. Results indicate that the models using data from 5-10 min prior to breakdown give the best predictions, with prediction accuracies higher than 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model parameter input, the random forests (RF) model is adopted to identify the key variables. 
With the 7 selected parameters, the refined BN model can predict breakdown with adequate accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11728961','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11728961"><span>Artificial intelligence: a new approach for prescription and monitoring of hemodialysis therapy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Akl, A I; Sobh, M A; Enab, Y M; Tattersall, J</p> <p>2001-12-01</p> <p>The effect of dialysis on patients is conventionally predicted using a formal mathematical model. This approach requires many assumptions about the processes involved, and validation of these may be difficult. The validity of dialysis urea modeling using a formal mathematical model has been challenged. Artificial intelligence using neural networks (NNs) has been used to solve complex problems without needing a mathematical model or an understanding of the mechanisms involved. In this study, we applied an NN model to study and predict concentrations of urea during a hemodialysis session. We measured blood concentrations of urea, patient weight, and total urea removal by direct dialysate quantification (DDQ) at 30-minute intervals during the session (in 15 chronic hemodialysis patients). The NN model was trained to recognize the evolution of measured urea concentrations and was subsequently able to predict hemodialysis session time needed to reach a target solute removal index (SRI) in patients not previously studied by the NN model (in another 15 chronic hemodialysis patients). Comparing results of the NN model with the DDQ model, the prediction error was 10.9%, with no significant difference between predicted total urea nitrogen (UN) removal and measured UN removal by DDQ. 
NN model predictions of time showed no significant difference from the actual intervals needed to reach the same SRI level under the same patient conditions, except for the prediction of SRI at the first 30-minute interval, which showed a significant difference (P = 0.001). This indicates the sensitivity of the NN model to what is called patient clearance time; the prediction error was 8.3%. From our results, we conclude that artificial intelligence applications in urea kinetics can give an idea of intradialysis profiling according to individual clinical needs. In theory, this approach can be extended easily to other solutes, making the NN model a step toward artificially intelligent dialysis control.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29722842','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29722842"><span>A dynamic model for predicting growth in zinc-deficient stunted infants given supplemental zinc.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wastney, Meryl E; McDonald, Christine M; King, Janet C</p> <p>2018-05-01</p> <p>Zinc deficiency limits infant growth and increases susceptibility to infections, which further compromises growth. Zinc supplementation improves the growth of zinc-deficient stunted infants, but the amount, frequency, and duration of zinc supplementation required to restore growth in an individual child is unknown. A dynamic model of zinc metabolism that predicts changes in weight and length of zinc-deficient, stunted infants with dietary zinc would be useful to define effective zinc supplementation regimens. The aims of this study were to develop a dynamic model for zinc metabolism in stunted, zinc-deficient infants and to use that model to predict the growth response when those infants are given zinc supplements. 
A model of zinc metabolism was developed using data on zinc kinetics, tissue zinc, and growth requirements for healthy 9-mo-old infants. The kinetic model was converted to a dynamic model by replacing the rate constants for zinc absorption and excretion with functions for these processes that change with zinc intake. Predictions of the dynamic model, parameterized for zinc-deficient, stunted infants, were compared with the results of 5 published zinc intervention trials. The model was then used to predict the results for zinc supplementation regimes that varied in the amount, frequency, and duration of zinc dosing. Model predictions agreed with published changes in plasma zinc after zinc supplementation. Predictions of weight and length agreed with 2 studies, but overpredicted values from a third study in which other nutrient deficiencies may have been growth limiting; the model predicted that zinc absorption was impaired in that study. The model suggests that frequent, smaller doses (5-10 mg Zn/d) are more effective for increasing growth in stunted, zinc-deficient 9-mo-old infants than are larger, less-frequent doses. 
The dose amount affects the duration of dosing necessary to restore and maintain plasma zinc concentration and growth.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4326985','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4326985"><span>Prediction of p38 map kinase inhibitory activity of 3, 4-dihydropyrido [3, 2-d] pyrimidone derivatives using an expert system based on principal component analysis and least square support vector machine</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Shahlaei, M.; Saghaie, L.</p> <p>2014-01-01</p> <p>A quantitative structure–activity relationship (QSAR) study is suggested for the prediction of biological activity (pIC50) of 3, 4-dihydropyrido [3,2-d] pyrimidone derivatives as p38 inhibitors. Modeling of the biological activities of compounds of interest as a function of molecular structures was established by means of principal component analysis (PCA) and least square support vector machine (LS-SVM) methods. The results showed that the pIC50 values calculated by LS-SVM are in good agreement with the experimental data, and the performance of the LS-SVM regression model is superior to the PCA-based model. The developed LS-SVM model was applied for the prediction of the biological activities of pyrimidone derivatives, which were not included in the modeling procedure. The resulting model showed high predictive ability, with a root mean square error of prediction of 0.460 for LS-SVM. The study provided a novel and effective approach for predicting biological activities of 3, 4-dihydropyrido [3,2-d] pyrimidone derivatives as p38 inhibitors and disclosed that LS-SVM can be used as a powerful chemometrics tool for QSAR studies. 
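As background, LS-SVM replaces the quadratic program of a standard SVM with a single linear system in the dual variables. A minimal RBF-kernel regression sketch is below; the toy data and hyperparameters are illustrative assumptions, not the descriptors or settings of this QSAR study.

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """LS-SVM regression: solve [0, 1^T; 1, K + I/gamma][b; alpha] = [0; y]."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma           # ridge term from regularization
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return b, alpha, X, sigma

def lssvm_predict(model, Xnew):
    b, alpha, Xtr, sigma = model
    d2 = ((Xnew[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

# toy 1-D regression problem standing in for descriptor -> pIC50 modeling
X = np.linspace(-2, 2, 30).reshape(-1, 1)
y = np.sin(2 * X).ravel()
model = lssvm_fit(X, y)
rmse = float(np.sqrt(np.mean((lssvm_predict(model, X) - y) ** 2)))
print(rmse < 0.2)  # small in-sample error on the toy data
```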
PMID:26339262</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E%26ES..113a2089Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E%26ES..113a2089Y"><span>Application of Fracture Distribution Prediction Model in Xihu Depression of East China Sea</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yan, Weifeng; Duan, Feifei; Zhang, Le; Li, Ming</p> <p>2018-02-01</p> <p>Each type of logging data responds differently to changes in formation characteristics, and fractures introduce outliers into these responses. For this reason, fracture development in a formation can be characterized by fine analysis of logging curves. Conventional well logs such as resistivity, sonic transit time, density, neutron porosity and gamma ray are relatively sensitive to formation fractures. The traditional fracture prediction model, which computes a comprehensive fracture index as a simple weighted average of different logging data, is susceptible to subjective factors and prone to large deviations, so a statistical method is introduced instead. Combining the responses of conventional logging data to fracture development, a prediction model based on membership functions is established; in essence, it analyses logging data with fuzzy mathematics theory. Fracture predictions from the two models for a well in the NX block of the Xihu depression are compared against imaging logging, which shows that the membership-function model is more accurate than the traditional model. Furthermore, its predictions are highly consistent with the imaging logs and better reflect the development of fractures.
It can provide a reference for engineering practice.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017E%26ES...63a2005M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017E%26ES...63a2005M"><span>Ultra-Short-Term Wind Power Prediction Using a Hybrid Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mohammed, E.; Wang, S.; Yu, J.</p> <p>2017-05-01</p> <p>This paper aims to develop and apply a hybrid model of two data analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking Northeast China electricity demand as an example. The data were obtained from historical wind power records for an offshore region and for a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted values. The hybrid model combines the persistence method, MLR and LS, and supports two prediction types: multi-point prediction and single-point prediction. WPP is tested against models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN). Comparing the results of these models confirms the validity of the proposed hybrid model in terms of error and correlation coefficient and shows that it works effectively.
Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29472694','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29472694"><span>Improving accuracies of genomic predictions for drought tolerance in maize by joint modeling of additive and dominance effects in multi-environment trials.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dias, Kaio Olímpio Das Graças; Gezan, Salvador Alejandro; Guimarães, Claudia Teixeira; Nazarian, Alireza; da Costa E Silva, Luciano; Parentoni, Sidney Netto; de Oliveira Guimarães, Paulo Evaristo; de Oliveira Anoni, Carina; Pádua, José Maria Villela; de Oliveira Pinto, Marcos; Noda, Roberto Willians; Ribeiro, Carlos Alexandre Gomes; de Magalhães, Jurandir Vieira; Garcia, Antonio Augusto Franco; de Souza, João Cândido; Guimarães, Lauro José Moreira; Pastina, Maria Marta</p> <p>2018-07-01</p> <p>Breeding for drought tolerance is a challenging task that requires costly, extensive, and precise phenotyping. Genomic selection (GS) can be used to maximize selection efficiency and the genetic gains in maize (Zea mays L.) breeding programs for drought tolerance. Here, we evaluated the accuracy of GS using additive (A) and additive + dominance (AD) models to predict the performance of untested maize single-cross hybrids for drought tolerance in multi-environment trials. Phenotypic data of five drought tolerance traits were measured in 308 hybrids across eight trials under water-stressed (WS) and well-watered (WW) conditions over two years and two locations in Brazil.
Hybrids' genotypes were inferred based on their parents' genotypes (inbred lines) using single-nucleotide polymorphism markers obtained via genotyping-by-sequencing. GS analyses were performed using genomic best linear unbiased prediction by fitting a factor analytic (FA) multiplicative mixed model. Two cross-validation (CV) schemes were tested: CV1 and CV2. The FA framework allowed for investigating the stability of additive and dominance effects across environments, as well as the additive-by-environment and the dominance-by-environment interactions, with interesting applications for parental and hybrid selection. Results showed differences in the predictive accuracy between A and AD models, using both CV1 and CV2, for the five traits in both water conditions. For grain yield (GY) under WS and using CV1, the AD model doubled the predictive accuracy in comparison to the A model. Through CV2, GS models benefit from borrowing information of correlated trials, resulting in an increase of 40% and 9% in the predictive accuracy of GY under WS for A and AD models, respectively. 
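Genomic best linear unbiased prediction of the kind fitted above rests on a marker-derived relationship matrix. A minimal sketch, assuming a VanRaden-style G matrix and a simple heritability-based variance ratio; the factor analytic multi-environment machinery and dominance terms of the study are omitted:

```python
import numpy as np

def vanraden_G(M):
    """VanRaden genomic relationship matrix from an (n x m) 0/1/2 marker matrix."""
    p = M.mean(axis=0) / 2.0                    # estimated allele frequencies
    W = M - 2.0 * p                             # center genotypes by 2p
    return W @ W.T / (2.0 * np.sum(p * (1.0 - p)))

def gblup(y, G, h2=0.5):
    """BLUP of genomic values: u_hat = G (G + lambda*I)^-1 (y - mean(y)),
    with lambda = sigma_e^2 / sigma_u^2 = (1 - h2) / h2."""
    lam = (1.0 - h2) / h2
    return G @ np.linalg.solve(G + lam * np.eye(len(y)), y - y.mean())
```

Because G is positive semidefinite, the predicted genomic values are a shrunken projection of the centered phenotypes; cross-validation schemes like the CV1/CV2 mentioned above then mask subsets of `y` and correlate `u_hat` with the held-out records.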
These results highlight the importance of multi-environment trial analyses using GS models that incorporate additive and dominance effects for genomic predictions of GY under drought in maize single-cross hybrids.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016Geomo.262....8S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016Geomo.262....8S"><span>Exploring discrepancies between quantitative validation results and the geomorphic plausibility of statistical landslide susceptibility maps</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas</p> <p>2016-06-01</p> <p>Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps while a particular emphasis was placed on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km2) which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. 
The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships, indicating that the predictive performance of a model might be misleading in cases where a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with a high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories.
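The AUROC used for the quantitative validation above can be computed directly from the rank-sum (Mann-Whitney) identity. A minimal sketch, assuming tie-free susceptibility scores and binary landslide labels; spatial partitioning of the folds is not shown:

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the Mann-Whitney U statistic (assumes no tied scores)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    ranks = scores.argsort().argsort() + 1      # 1-based ranks of the scores
    n_pos = int((y_true == 1).sum())
    n_neg = int((y_true == 0).sum())
    rank_sum_pos = ranks[y_true == 1].sum()
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

The study's point that holdout AUROC can look high for implausible maps is unaffected by how the statistic is computed; it is the spatial arrangement of the folds, not the formula, that exposes the bias.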
A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25546054','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25546054"><span>Application of a novel grey self-memory coupling model to forecast the incidence rates of two notifiable diseases in China: dysentery and gonorrhea.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling</p> <p>2014-01-01</p> <p>In this study, a novel grey self-memory coupling model was developed to forecast the incidence rates of two notifiable infectious diseases (dysentery and gonorrhea); the effectiveness and applicability of this model were assessed based on its ability to predict the epidemiological trend of infectious diseases in China. The linear model, the conventional GM(1,1) model and the GM(1,1) model with the self-memory principle (SMGM(1,1) model) were used to predict the incidence rates of the two notifiable infectious diseases based on statistical incidence data. Both simulation accuracy and prediction accuracy were assessed to compare the predictive performances of the three models. The best-fit model was applied to predict future incidence rates. Simulation results show that the SMGM(1,1) model can take full advantage of the systematic multi-time historical data and possesses superior predictive performance compared with the linear model and the conventional GM(1,1) model. By applying the novel SMGM(1,1) model, we obtained the possible incidence rates of the two representative notifiable infectious diseases in China.
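The conventional GM(1,1) model that serves as the baseline here follows a fixed recipe: accumulate the series, fit the development coefficient by least squares against the background values, forecast the accumulated series, then difference back. A minimal sketch of that baseline only; the self-memory coupling itself is not reproduced, and the series is assumed non-constant and roughly exponential so the fitted coefficient is nonzero:

```python
import numpy as np

def gm11_fit_predict(x0, horizon):
    """Conventional GM(1,1): AGO -> least-squares fit of (a, b) -> forecast -> IAGO."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                # background (mean-generated) values
    B = np.column_stack([-z, np.ones(len(z))])  # x0(k) = -a*z(k) + b
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # whitened-equation solution
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))  # inverse AGO
    return x0_hat[n:]                           # forecasts beyond the sample
```

The sensitivity to the initial value `x0[0]`, which the abstract notes the self-memory principle is meant to overcome, is visible in the forecast formula: the whole exponential trajectory is pinned to the first observation.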
The disadvantages of the conventional grey prediction model, such as sensitivity to initial value, can be overcome by the self-memory principle. The novel grey self-memory coupling model can predict the incidence rates of infectious diseases more accurately than the conventional model, and may provide useful references for making decisions involving infectious disease prevention and control.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900004072','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900004072"><span>Thermal barrier coating life prediction model development, phase 1</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Demasi, Jeanine T.; Ortiz, Milton</p> <p>1989-01-01</p> <p>The objective of this program was to establish a methodology to predict thermal barrier coating (TBC) life on gas turbine engine components. The approach involved experimental life measurement coupled with analytical modeling of relevant degradation modes. Evaluation of experimental and flight service components indicate the predominant failure mode to be thermomechanical spallation of the ceramic coating layer resulting from propagation of a dominant near interface crack. Examination of fractionally exposed specimens indicated that dominant crack formation results from progressive structural damage in the form of subcritical microcrack link-up. Tests conducted to isolate important life drivers have shown MCrAlY oxidation to significantly affect the rate of damage accumulation. Mechanical property testing has shown the plasma deposited ceramic to exhibit a non-linear stress-strain response, creep and fatigue. The fatigue based life prediction model developed accounts for the unusual ceramic behavior and also incorporates an experimentally determined oxide rate model. 
The model predicts the growth of this oxide scale to influence the intensity of the mechanical driving force, resulting from cyclic strains and stresses caused by thermally induced and externally imposed mechanical loads.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MApFl...6b5006S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MApFl...6b5006S"><span>Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi</p> <p>2018-04-01</p> <p>Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoid content in green tea rapidly and nondestructively. This paper presented a new and efficient spectral interval selection method called clustering based partial least square (CL-PLS), which selected informative wavelengths by combining the clustering concept with partial least square (PLS) methods to improve models’ performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, and k-means and Kohonen self-organizing map clustering algorithms were carried out to cluster the full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the effect on the prediction performance of the PLS models.
In addition, variable influence on projection partial least square (VIP-PLS), selectivity ratio partial least square (SR-PLS), and interval partial least square (iPLS) models and the full-spectrum PLS model were investigated and the results were compared. The results showed that CL-PLS presented the best result for flavonoid prediction using synchronous fluorescence spectra.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17099194','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17099194"><span>Decision curve analysis: a novel method for evaluating prediction models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vickers, Andrew J; Elkin, Elena B</p> <p>2006-01-01</p> <p>Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve."
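The net-benefit calculation behind the decision curve weighs true positives against false positives at each threshold probability p_t via the odds p_t/(1 − p_t). A minimal sketch of that calculation (the standard published formulation, not code from the paper):

```python
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    """Net benefit of a prediction model at each threshold probability:
    NB(pt) = TP/N - (FP/N) * pt / (1 - pt)."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob, dtype=float)
    n = len(y_true)
    nb = []
    for pt in thresholds:
        treat = y_prob >= pt                    # patients the model sends to treatment
        tp = np.sum(treat & (y_true == 1))
        fp = np.sum(treat & (y_true == 0))
        nb.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(nb)
```

Sweeping `thresholds` over the clinically plausible range and plotting the result against the "treat all" and "treat none" policies reproduces the decision curve described above.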
The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5473775','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5473775"><span>Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose</p> <p>2017-01-01</p> <p>Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear Gaussian kernel (GK).
The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied.
Copyright © 2017 Bandeira e Sousa et al.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26377675','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26377675"><span>Incorporating Psychological Predictors of Treatment Response into Health Economic Simulation Models: A Case Study in Type 1 Diabetes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan</p> <p>2015-10-01</p> <p>
Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies.
© The Author(s) 2015.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24108707','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24108707"><span>Predicting free-living energy expenditure using a miniaturized ear-worn sensor: an evaluation against doubly labeled water.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bouarfa, Loubna; Atallah, Louis; Kwasnicki, Richard Mark; Pettitt, Claire; Frost, Gary; Yang, Guang-Zhong</p> <p>2014-02-01</p> <p>Accurate estimation of daily total energy expenditure (EE) is a prerequisite for assisted weight management and assessing certain health conditions. The use of wearable sensors for predicting free-living EE is challenged by consistent sensor placement, user compliance, and the estimation methods used. This paper examines whether a single ear-worn accelerometer can be used for EE estimation under free-living conditions. An EE prediction model was first derived and validated in a controlled setting using healthy subjects performing different physical activities. Ten different activities were assessed, showing a tenfold cross-validation error of 0.24. Furthermore, the EE prediction model shows a mean absolute deviation (MAD) below 1.2 metabolic equivalents of task. The same model was applied to a free-living setting with a different population for further validation. The results were compared against those derived from doubly labeled water. In free-living settings, the predicted daily EE has a correlation of 0.74 (p = 0.008) and a MAD of 272 kcal/day.
These results demonstrate that laboratory-derived prediction models can be used to predict EE under free-living conditions [corrected].</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18189000','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18189000"><span>Risk prediction and aversion by anterior cingulate cortex.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brown, Joshua W; Braver, Todd S</p> <p>2007-12-01</p> <p>The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals.
Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014E%26ES...22b2006K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014E%26ES...22b2006K"><span>Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ko, P.; Kurosawa, S.</p> <p>2014-03-01</p> <p>The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed.
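Bubble dynamics of the kind modelled above are governed by the Rayleigh-Plesset equation. A minimal sketch integrating the classic (unmodified) form with explicit Euler; the water-like property values, time step, and initial radius are illustrative assumptions, not parameters from the paper:

```python
# Classic Rayleigh-Plesset equation for a spherical bubble of radius R(t):
#   R*R'' + (3/2)*R'^2 = (p_v - p_inf - 2*sigma/R - 4*mu*R'/R) / rho

def rayleigh_plesset(R0, p_v, p_inf, t_end, dt=1e-8,
                     rho=1000.0, sigma=0.0728, mu=1.0e-3):
    """Integrate bubble radius with explicit Euler; returns final radius [m]."""
    R, dR = R0, 0.0
    for _ in range(int(round(t_end / dt))):
        ddR = ((p_v - p_inf - 2.0 * sigma / R - 4.0 * mu * dR / R) / rho
               - 1.5 * dR ** 2) / R
        dR += ddR * dt
        R += dR * dt
        if R <= 0.0:                 # collapse reached; stop before divide-by-zero
            break
    return R
```

With ambient pressure far above the vapor pressure, the bubble contracts, which is the collapse regime associated with cavitation damage on the runner blade surface.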
The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70026317','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70026317"><span>A method for evaluating the importance of system state observations to model predictions, with application to the Death Valley regional groundwater flow system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.; O'Brien, Grady M.</p> <p>2004-01-01</p> <p>We develop a new observation‐prediction (OPR) statistic for evaluating the importance of system state observations to model predictions. The OPR statistic measures the change in prediction uncertainty produced when an observation is added to or removed from an existing monitoring network, and it can be used to guide refinement and enhancement of the network. Prediction uncertainty is approximated using a first‐order second‐moment method. We apply the OPR statistic to a model of the Death Valley regional groundwater flow system (DVRFS) to evaluate the importance of existing and potential hydraulic head observations to predicted advective transport paths in the saturated zone underlying Yucca Mountain and underground testing areas on the Nevada Test Site. Important existing observations tend to be far from the predicted paths, and many unimportant observations are in areas of high observation density. These results can be used to select locations at which increased observation accuracy would be beneficial and locations that could be removed from the network. Important potential observations are mostly in areas of high hydraulic gradient far from the paths. 
Results for both existing and potential observations are related to the flow system dynamics and coarse parameter zonation in the DVRFS model. If system properties in different locations are as similar as the zonation assumes, then the OPR results illustrate a data collection opportunity whereby observations in distant, high‐gradient areas can provide information about properties in flatter‐gradient areas near the paths. If this similarity is suspect, then the analysis produces a different type of data collection opportunity involving testing of model assumptions critical to the OPR results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/204675','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/204675"><span>Development of Aspen: A microanalytic simulation model of the US economy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Pryor, R.J.; Basu, N.; Quint, T.</p> <p>1996-02-01</p> <p>This report describes the development of an agent-based microanalytic simulation model of the US economy. The microsimulation model capitalizes on recent technological advances in evolutionary learning and parallel computing. Results are reported for a test problem that was run using the model. The test results demonstrate the model's ability to predict business-like cycles in an economy where prices and inventories are allowed to vary. Since most economic forecasting models have difficulty predicting any kind of cyclic behavior, 
these results show the potential of microanalytic simulation models to improve economic policy analysis and to provide new insights into underlying economic principles. Work already has begun on a more detailed model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014HESS...18.3481D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014HESS...18.3481D"><span>Validating a spatially distributed hydrological model with soil morphology data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Doppler, T.; Honti, M.; Zihlmann, U.; Weisskopf, P.; Stamm, C.</p> <p>2014-09-01</p> <p>Spatially distributed models are popular tools in hydrology claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation is often carried out only on discharge time series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, can usually not be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for inputs of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km2 catchment in Switzerland with moderate topography and artificial drainage. We translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. 
We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to observed groundwater levels and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the groundwater level predictions were not accurate enough to be used for the prediction of saturated areas. Groundwater level dynamics were not adequately reproduced and the predicted spatial saturation patterns did not correspond to those estimated from the soil map. Our results indicate that an accurate prediction of the groundwater level dynamics of the shallow groundwater in our catchment that is subject to artificial drainage would require a model that better represents processes at the boundary between the unsaturated and the saturated zone. However, data needed for such a more detailed model are not generally available. This severely hampers the practical use of such models despite their usefulness for scientific purposes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SpWea..15..441S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SpWea..15..441S"><span>Predicting the magnetic vectors within coronal mass ejections arriving at Earth: 2. Geomagnetic response</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Savani, N. P.; Vourlidas, A.; Richardson, I. G.; Szabo, A.; Thompson, B. J.; Pulkkinen, A.; Mays, M. L.; Nieves-Chinchilla, T.; Bothmer, V.</p> <p>2017-02-01</p> <p>This is a companion to Savani et al. 
(2015) that discussed how a first-order prediction of the internal magnetic field of a coronal mass ejection (CME) may be made from observations of its initial state at the Sun for space weather forecasting purposes (Bothmer-Schwenn scheme (BSS) model). For eight CME events, we investigate how uncertainties in their predicted magnetic structure influence predictions of the geomagnetic activity. We use an empirical relationship between the solar wind plasma drivers and the Kp index, together with the inferred magnetic vectors, to make a prediction of the time variation of Kp (Kp(BSS)). We find a 2σ uncertainty range on the magnetic field magnitude (|B|) provides a practical and convenient solution for predicting the uncertainty in geomagnetic storm strength. We also find the estimated CME velocity is a major source of error in the predicted maximum Kp. The time variation of Kp(BSS) is important for predicting periods of enhanced and maximum geomagnetic activity, driven by southerly directed magnetic fields, and periods of lower activity driven by northerly directed magnetic field. We compare the skill score of our model to a number of other forecasting models, including the NOAA/Space Weather Prediction Center (SWPC) and Community Coordinated Modeling Center (CCMC)/SWRC estimates. The BSS model was the most unbiased prediction model, while the other models predominantly tended to significantly overforecast. The true skill score of the BSS prediction model (TSS = 0.43 ± 0.06) exceeds the results of two baseline models and the NOAA/SWPC forecast. 
The BSS model prediction performed on par with CCMC/SWRC predictions while demonstrating a lower uncertainty.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19820025716','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19820025716"><span>A two-component rain model for the prediction of attenuation and diversity improvement</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Crane, R. K.</p> <p>1982-01-01</p> <p>A new model was developed to predict attenuation statistics for a single Earth-satellite or terrestrial propagation path. The model was extended to provide predictions of the joint occurrences of specified or higher attenuation values on two closely spaced Earth-satellite paths. The joint statistics provide the information required to obtain diversity gain or diversity advantage estimates. The new model is meteorologically based. It was tested against available Earth-satellite beacon observations and terrestrial path measurements. The model employs the rain climate region descriptions of the Global rain model. The rms deviation between the predicted and observed attenuation values for the terrestrial path data was 35 percent, a result consistent with the expectations of the Global model when the rain rate distribution for the path is not used in the calculation. 
Within the United States, the rms deviation between measurement and prediction was 36 percent, but worldwide it was 79 percent.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1945b0019R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1945b0019R"><span>Prediction of composite fatigue life under variable amplitude loading using artificial neural network trained by genetic algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rohman, Muhamad Nur; Hidayat, Mas Irfan P.; Purniawan, Agung</p> <p>2018-04-01</p> <p>Neural networks (NN) have been widely used in fatigue life prediction. For fatigue life prediction of polymeric-base composites, an NN model must be developed that copes with limited fatigue data and can predict fatigue life under varying stress amplitudes at different stress ratios. In the present paper, a Multilayer-Perceptron (MLP) neural network model is developed, and a Genetic Algorithm is employed to optimize the respective weights of the NN for prediction of polymeric-base composite materials under variable amplitude loading. Simulation results for two different composite systems, E-glass fabrics/epoxy (layup [(±45)/(0)2]S) and E-glass/polyester (layup [90/0/±45/0]S), show that an NN model trained with fatigue data from only two stress ratios, representing limited fatigue data, can be used to predict another four and seven stress ratios, respectively, with high accuracy of fatigue life prediction. The accuracy of the NN predictions was quantified by the small value of the mean square error (MSE). When using 33% of the total fatigue data for training, the NN model was able to produce high accuracy for all stress ratios. 
When using less fatigue data during training (22% of the total fatigue data), the NN model was still able to produce a high coefficient of determination between the predicted results and those obtained by experiment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1405301-investigation-temporal-evolution-grain-refinement-copper-under-high-strain-rate-loading-via-situ-synchrotron-measurement-predictive-modeling','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1405301-investigation-temporal-evolution-grain-refinement-copper-under-high-strain-rate-loading-via-situ-synchrotron-measurement-predictive-modeling"><span>Investigation on temporal evolution of the grain refinement in copper under high strain rate loading via in-situ synchrotron measurement and predictive modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao</p> <p></p> <p>Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s<sup>-1</sup>. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. 
As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19790047355&hterms=gpu&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dgpu','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19790047355&hterms=gpu&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dgpu"><span>Initial comparison of single cylinder Stirling engine computer model predictions with test results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Tew, R. C., Jr.; Thieme, L. G.; Miao, D.</p> <p>1979-01-01</p> <p>A NASA developed digital computer code for a Stirling engine, modelling the performance of a single cylinder rhombic drive ground performance unit (GPU), is presented and its predictions are compared to test results. The GPU engine incorporates eight regenerator/cooler units and the engine working space is modelled by thirteen control volumes. The model calculates indicated power and efficiency for a given engine speed, mean pressure, heater and expansion space metal temperatures and cooler water inlet temperature and flow rate. Comparison of predicted and observed powers implies that the reference pressure drop calculations underestimate actual pressure drop, possibly due to oil contamination in the regenerator/cooler units, methane contamination in the working gas or the underestimation of mechanical loss. 
For a working gas of hydrogen, the predicted values of brake power are from 0 to 6% higher than experimental values, and brake efficiency is 6 to 16% higher, while for helium the predicted brake power and efficiency are 2 to 15% higher than the experimental values.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19990007765','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19990007765"><span>Prediction Accuracy of Error Rates for MPTB Space Experiment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.</p> <p>1998-01-01</p> <p>This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. 
These results should help identify sources of uncertainty in predictions of SEU rates in space.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23705023','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23705023"><span>Leuconostoc mesenteroides growth in food products: prediction and sensitivity analysis by adaptive-network-based fuzzy inference systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien</p> <p>2013-01-01</p> <p>An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R²). Graphical plots were also used for model comparison. The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. 
The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive microbiology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28787927','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28787927"><span>The Bi-Directional Prediction of Carbon Fiber Production Using a Combination of Improved Particle Swarm Optimization and Support Vector Machine.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xiao, Chuncai; Hao, Kuangrong; Ding, Yongsheng</p> <p>2014-12-30</p> <p>This paper creates a bi-directional prediction model to predict the performance of carbon fiber and the productive parameters based on a support vector machine (SVM) and improved particle swarm optimization (IPSO) algorithm (SVM-IPSO). In the SVM, it is crucial to select the parameters that have an important impact on the performance of prediction. The IPSO is proposed to optimize them, and then the SVM-IPSO model is applied to the bi-directional prediction of carbon fiber production. The predictive accuracy of SVM is mainly dependent on its parameters, and IPSO is thus exploited to seek the optimal parameters for SVM in order to improve its prediction capability. Inspired by a cell communication mechanism, we propose IPSO by incorporating information of the global best solution into the search strategy to improve exploitation, and we employ IPSO to establish the bi-directional prediction model: in the direction of the forward prediction, we consider productive parameters as input and property indexes as output; in the direction of the backward prediction, we consider property indexes as input and productive parameters as output, and in this case, the model becomes a scheme design for novel style carbon fibers. 
The results from a set of experimental data show that the proposed model can outperform the radial basis function neural network (RNN), the basic particle swarm optimization (PSO) method and the hybrid approach of genetic algorithm and improved particle swarm optimization (GA-IPSO) method in most of the experiments. Overall, the simulation results demonstrate the effectiveness and advantages of the SVM-IPSO model in dealing with the problem of forecasting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25965196','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25965196"><span>Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam</p> <p>2015-06-22</p> <p>A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods for the segments of the target sequence with high similarity to those having known structures with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. 
To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence or an overall good template can be found for the entire sequence but the prediction quality is remarkably weaker in putative domain-linker regions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19920018038&hterms=oxidation+protective+coatings&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Doxidation%2Bprotective%2Bcoatings','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19920018038&hterms=oxidation+protective+coatings&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Doxidation%2Bprotective%2Bcoatings"><span>Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Banks, Bruce A.; Degroh, Kim K.; Sechkar, Edward A.</p> <p>1992-01-01</p> <p>Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) will assist in understanding the mechanisms involved, and will lead to improved reliability in predicting in-space durability of materials based on ground laboratory testing. 
A computational simulation of atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen and results of both ground laboratory and LDEF data, a predictive Monte Carlo model was developed which simulates the oxidation processes that occur on polymers with applied protective coatings that have defects. The use of high atomic oxygen fluence-directed ram LDEF results has enabled mechanistic implications to be made by adjusting Monte Carlo modeling assumptions to match observed results based on scanning electron microscopy. Modeling assumptions, implications, and predictions are presented, along with comparison of observed ground laboratory and LDEF results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1960o0008L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1960o0008L"><span>Forming limit prediction by an evolving non-quadratic yield criterion considering the anisotropic hardening and r-value evolution</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lian, Junhe; Shen, Fuhui; Liu, Wenqi; Münstermann, Sebastian</p> <p>2018-05-01</p> <p>The constitutive model development has been driven to a very accurate and fine-resolution description of the material behaviour responding to various environmental variable changes. The evolving features of the anisotropic behaviour during deformation, therefore, has drawn particular attention due to its possible impacts on the sheet metal forming industry. An evolving non-associated Hill48 (enHill48) model was recently proposed and applied to the forming limit prediction by coupling with the modified maximum force criterion. 
On the one hand, the study showed the importance of including the anisotropic evolution for accurate forming limit prediction. On the other hand, it also illustrated that the enHill48 model introduced an instability region that suddenly decreases the formability. Therefore, in this study, an alternative model that is based on the associated flow rule and provides similar anisotropic predictive capability is extended to capture the evolving effects and further applied to the forming limit prediction. The final results are compared with experimental data as well as the results of the enHill48 model.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li class="active"><span>22</span></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li class="active"><span>23</span></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20080022350','NASA-TRS'); return false;"
href="http://hdl.handle.net/2060/20080022350"><span>A Computational Fluid Dynamics Study of Transitional Flows in Low-Pressure Turbines under a Wide Range of Operating Conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Suzen, Y. B.; Huang, P. G.; Ashpis, D. E.; Volino, R. J.; Corke, T. C.; Thomas, F. O.; Huang, J.; Lake, J. P.; King, P. I.</p> <p>2007-01-01</p> <p>A transport equation for the intermittency factor is employed to predict the transitional flows in low-pressure turbines. The intermittent behavior of the transitional flows is taken into account and incorporated into computations by modifying the eddy viscosity, mu(sub p) with the intermittency factor, gamma. Turbulent quantities are predicted using Menter's two-equation turbulence model (SST). The intermittency factor is obtained from a transport equation model which can produce both the experimentally observed streamwise variation of intermittency and a realistic profile in the cross stream direction. The model had been previously validated against low-pressure turbine experiments with success. In this paper, the model is applied to predictions of three sets of recent low-pressure turbine experiments on the Pack B blade to further validate its predicting capabilities under various flow conditions. Comparisons of computational results with experimental data are provided. Overall, good agreement between the experimental data and computational results is obtained. 
The new model has been shown to have the capability of accurately predicting transitional flows under a wide range of low-pressure turbine conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26126418','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26126418"><span>The extension of total gain (TG) statistic in survival models: properties and applications.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Choodari-Oskooei, Babak; Royston, Patrick; Parmar, Mahesh K B</p> <p>2015-07-01</p> <p>The results of multivariable regression models are usually summarized in the form of parameter estimates for the covariates, goodness-of-fit statistics, and the relevant p-values. These statistics do not inform us about whether covariate information will lead to any substantial improvement in prediction. Predictive ability measures can be used for this purpose since they provide important information about the practical significance of prognostic factors. R (2)-type indices are the most familiar forms of such measures in survival models, but they all have limitations and none is widely used. In this paper, we extend the total gain (TG) measure, proposed for a logistic regression model, to survival models and explore its properties using simulations and real data. TG is based on the binary regression quantile plot, otherwise known as the predictiveness curve. Standardised TG ranges from 0 (no explanatory power) to 1 ('perfect' explanatory power). The results of our simulations show that unlike many of the other R (2)-type predictive ability measures, TG is independent of random censoring. It increases as the effect of a covariate increases and can be applied to different types of survival models, including models with time-dependent covariate effects. 
We also apply TG to quantify the predictive ability of multivariable prognostic models developed in several disease areas. Overall, TG performs well in our simulation studies and can be recommended as a measure to quantify the predictive ability in survival models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ChJME..30..746G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ChJME..30..746G"><span>Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo</p> <p>2017-05-01</p> <p>Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two turntable five-axis machine tool are researched. Measurement experiment of heat sources and thermal errors are carried out, and GRA(grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and ABC(artificial bee colony) algorithm is introduced to train the link weights of ANN, a new ABC-NN(Artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of ABC-NN model, an experiment system is developed, the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. 
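The simplest baseline in the comparison above, least squares regression (LSR), fits thermal error as a linear function of selected temperature variables. A minimal sketch with synthetic data (the sensor count, coefficients, and noise level are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Synthetic example: predict spindle thermal error (um) from two
# temperature sensor readings (e.g. variables selected by grey relational analysis).
rng = np.random.default_rng(0)
T = rng.uniform(20.0, 45.0, size=(50, 2))               # temperatures (deg C)
true_coef = np.array([1.2, 0.8])                        # assumed sensitivities (um / deg C)
error = T @ true_coef - 30.0 + rng.normal(0, 0.5, 50)   # "measured" thermal error

# LSR: fit error = a0 + a1*T1 + a2*T2 by ordinary least squares.
X = np.column_stack([np.ones(len(T)), T])
coef, *_ = np.linalg.lstsq(X, error, rcond=None)
predicted = X @ coef
residual = error - predicted
print(coef)                          # recovered intercept and sensitivities
print(np.max(np.abs(residual)))     # residuals stay small for a linear process
```

The ANN and ABC-NN models in the paper replace this linear map with a trained network; the fitting-and-residual-checking workflow is the same.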
Experiment results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, and the residual error is smaller than 3 μm; the new modeling method is therefore feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19820016676','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19820016676"><span>Regression model estimation of early season crop proportions: North Dakota, some preliminary results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lin, K. K. (Principal Investigator)</p> <p>1982-01-01</p> <p>To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both county and crop reporting district levels. In terms of planted acreage, the model underpredicted the 1980 published planted acreage by 9.1 percent at the county level.
It predicted almost exactly the 1980 published data on planted acreage at the crop reporting district level and overpredicted the planted acreage by just 0.92 percent.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20426121','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20426121"><span>Predictive simulation of bidirectional Glenn shunt using a hybrid blood vessel model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Hao; Leow, Wee Kheng; Chiu, Ing-Sh</p> <p>2009-01-01</p> <p>This paper proposes a method for performing predictive simulation of cardiac surgery. It applies a hybrid approach to model the deformation of blood vessels. The hybrid blood vessel model consists of a reference Cosserat rod and a surface mesh. The reference Cosserat rod models the blood vessel's global bending, stretching, twisting and shearing in a physically correct manner, and the surface mesh models the surface details of the blood vessel. In this way, the deformation of blood vessels can be computed efficiently and accurately. Our predictive simulation system can produce complex surgical results given a small amount of user inputs. It allows the surgeon to easily explore various surgical options and evaluate them. 
Tests of the system using the bidirectional Glenn shunt (BDG) as an application example show that the results produced by the system are similar to real surgical results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28364677','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28364677"><span>Forecasting stochastic neural network based on financial empirical mode decomposition.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Jie; Wang, Jun</p> <p>2017-06-01</p> <p>In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to account for the weight of the occurrence time of the historical data. Linear regression confirms the predictive ability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing the predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results of real stock index series, and the empirical results show that the proposed model indeed displays a good performance in forecasting stock market fluctuations. Copyright © 2017 Elsevier Ltd.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10641E..0JW','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10641E..0JW"><span>Intelligent path loss prediction engine design using machine learning in the urban outdoor environment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Ruichen; Lu, Jingyang; Xu, Yiran; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik</p> <p>2018-05-01</p> <p>Due to the progressive expansion of public mobile networks and the dramatic growth of the number of wireless users in recent years, researchers are motivated to study radio propagation in urban environments and develop reliable and fast path loss prediction models. Over the last decades, different types of propagation models have been developed for urban-scenario path loss predictions, such as the Hata model and the COST 231 model. In this paper, the path loss prediction model is thoroughly investigated using machine learning approaches. Different non-linear feature selection methods are deployed and investigated to reduce the computational complexity. The simulation results are provided to demonstrate the validity of the machine learning based path loss prediction engine, which can correctly determine the signal propagation in a wireless urban setting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AcASn..56...53L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AcASn..56...53L"><span>The Prediction of Length-of-day Variations Based on Gaussian Processes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lei, Y.; Zhao, D. N.; Gao, Y.
P.; Cai, H. B.</p> <p>2015-01-01</p> <p>Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional strategies for the prediction of the LOD variations such as the least squares extrapolation model, the time-series analysis model, and so on, have not met the requirements for real-time and high-precision applications. In this paper, a new machine learning algorithm --- the Gaussian process (GP) model is employed to forecast the LOD variations. Its prediction precisions are analyzed and compared with those of the back propagation neural networks (BPNN), general regression neural networks (GRNN) models, and the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that the application of the GP model to the prediction of the LOD variations is efficient and feasible.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19870019122','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19870019122"><span>Analysis and test evaluation of the dynamic response and stability of three advanced turboprop models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bansal, P. N.; Arseneaux, P. J.; Smith, A. F.; Turnberg, J. E.; Brooks, B. M.</p> <p>1985-01-01</p> <p>Results of dynamic response and stability wind tunnel tests of three 62.2 cm (24.5 in) diameter models of the Prop-Fan, advanced turboprop, are presented. Measurements of dynamic response were made with the rotors mounted on an isolated nacelle, with varying tilt for nonuniform inflow. One model was also tested using a semi-span wing and fuselage configuration for response to realistic aircraft inflow. Stability tests were performed using tunnel turbulence or a nitrogen jet for excitation. 
Measurements are compared with predictions made using beam analysis methods for the model with straight blades, and finite element analysis methods for the models with swept blades. Correlations between measured and predicted rotating blade natural frequencies for all the models are very good. The 1P dynamic response of the straight blade model is reasonably well predicted. The 1P response of the swept blades is underpredicted, and the wing-induced response of the straight blade is overpredicted. Two models did not flutter, as predicted. One swept blade model encountered an instability at a higher RPM than predicted, showing the predictions to be conservative.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFMNH21B1516S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFMNH21B1516S"><span>Dam-Break Flooding and Structural Damage in a Residential Neighborhood: Performance of a coupled hydrodynamic-damage model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sanders, B. F.; Gallegos, H. A.; Schubert, J. E.</p> <p>2011-12-01</p> <p>The Baldwin Hills dam-break flood and the associated structural damage are investigated in this study. The flood caused high-velocity flows exceeding 5 m/s which destroyed 41 wood-framed residential structures, 16 of which were completely washed out. Damage is predicted by coupling a calibrated hydrodynamic flood model based on the shallow-water equations to structural damage models. The hydrodynamic and damage models are two-way coupled so that building failure is predicted upon exceedance of a hydraulic intensity parameter, which in turn triggers a localized reduction in flow resistance that affects flood intensity predictions.
Several established damage models and damage correlations reported in the literature are tested to evaluate the predictive skill for two damage states defined by destruction (Level 2) and washout (Level 3). Results show that high-velocity structural damage can be predicted with a remarkable level of skill using established damage models, but only with two-way coupling of the hydrodynamic and damage models. In contrast, when structural failure predictions have no influence on flow predictions, there is a significant reduction in predictive skill. Force-based damage models compare well with a subset of the damage models which were devised for similar types of structures. Implications for emergency planning and preparedness as well as monetary damage estimation are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20180002115','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20180002115"><span>Comparison of Experimental Surface and Flow Field Measurements to Computational Results of the Juncture Flow Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Roozeboom, Nettie H.; Lee, Henry C.; Simurda, Laura J.; Zilliac, Gregory G.; Pulliam, Thomas H.</p> <p>2016-01-01</p> <p>Wing-body juncture flow fields on commercial aircraft configurations are challenging to compute accurately. The NASA Advanced Air Vehicle Program's juncture flow committee is designing an experiment to provide data to improve Computational Fluid Dynamics (CFD) modeling in the juncture flow region. Preliminary design of the model was done using CFD, yet CFD tends to over-predict the separation in the juncture flow region. Risk reduction wind tunnel tests were requisitioned by the committee to obtain a better understanding of the flow characteristics of the designed models. 
NASA Ames Research Center's Fluid Mechanics Lab performed one of the risk reduction tests. The results of one case, accompanied by CFD simulations, are presented in this paper. Experimental results suggest the wall-mounted wind tunnel model produces a thicker boundary layer on the fuselage than the CFD predictions, resulting in a larger wing horseshoe vortex suppressing the side-of-body separation in the juncture flow region. Compared to experimental results, CFD predicts that a thinner boundary layer on the fuselage generates a weaker wing horseshoe vortex, resulting in a larger side-of-body separation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009SPIE.7343E..1GJ','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009SPIE.7343E..1GJ"><span>Predictive data modeling of human type II diabetes related statistics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.</p> <p>2009-04-01</p> <p>During the course of routine Type II treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics: namely weight, blood pressure, and blood sugar, to determine if the covariance among the observed variables could yield a descriptive equation based model, or better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly eliminate the number of finger stickings required to monitor blood sugar levels.
The personal history and analysis with resulting models are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3587455','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3587455"><span>One way coupling of CMAQ and a road source dispersion model for fine scale air pollution predictions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Beevers, Sean D.; Kitwiroon, Nutthida; Williams, Martin L.; Carslaw, David C.</p> <p>2012-01-01</p> <p>In this paper we have coupled the CMAQ and ADMS air quality models to predict hourly concentrations of NOX, NO2 and O3 for London at a spatial scale of 20 m × 20 m. Model evaluation has demonstrated reasonable agreement with measurements from 80 monitoring sites in London. For NO2 the model evaluation statistics gave 73% of the hourly concentrations within a factor of two of observations, a mean bias of −4.7 ppb and normalised mean bias of −0.17, a RMSE value of 17.7 and an r value of 0.58. The equivalent results for O3 were 61% (FAC2), 2.8 ppb (MB), 0.15 (NMB), 12.1 (RMSE) and 0.64 (r). Analysis of the errors in the model predictions by hour of the week showed the need for improvements in predicting the magnitude of road transport related NOX emissions as well as the hourly emissions scaling in the model. These findings are consistent with recent evidence of UK road transport NOX emissions, reported elsewhere. The predictions of wind speed using the WRF model also influenced the model results and contributed to the daytime over prediction of NOX concentrations at the central London background site at Kensington and Chelsea. 
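The evaluation statistics quoted above (FAC2, MB, NMB, RMSE, and r) follow standard definitions for paired model-observation series; a sketch of how they can be computed (the sample values are illustrative, not the London data):

```python
import numpy as np

def evaluate(model, obs):
    """Standard air-quality evaluation statistics for paired hourly series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    ratio = model / obs
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))   # fraction within a factor of two
    mb = np.mean(model - obs)                          # mean bias
    nmb = np.sum(model - obs) / np.sum(obs)            # normalised mean bias
    rmse = np.sqrt(np.mean((model - obs) ** 2))        # root mean square error
    r = np.corrcoef(model, obs)[0, 1]                  # Pearson correlation
    return {"FAC2": fac2, "MB": mb, "NMB": nmb, "RMSE": rmse, "r": r}

# Illustrative paired concentrations (e.g. ppb), not the paper's data.
stats = evaluate([10, 20, 30, 45], [12, 18, 33, 40])
print(stats)
```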
An investigation of the use of a simple NO–NO2–O3 chemistry scheme showed good performance close to road sources, and this is also consistent with previous studies. The coupling of the two models raises an issue of emissions double counting. Here, we have put forward a pragmatic solution to this problem with the result that a median double counting error of 0.42% exists across 39 roadside sites in London. Finally, whilst the model can be improved, the current results show promise and demonstrate that the use of a combination of regional scale and local scale models can provide a practical modelling tool for policy development at intergovernmental, national and local authority level, as well as for use in epidemiological studies. PMID:23471172</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H21F1543F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H21F1543F"><span>Statistical Approaches for Spatiotemporal Prediction of Low Flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fangmann, A.; Haberlandt, U.</p> <p>2017-12-01</p> <p>An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion into model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables, averaged over the streamflow gauges' catchment areas.
Four different modeling approaches are analyzed. The basis for all of them is a set of multiple linear regression models that estimate low flows as a function of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three is subject to a spatiotemporal prediction of an index value, method four to estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows a high performance in the near-future period, but since it relies on a stationary distribution, its application for prediction of far-future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments, resulting in unrealistic future low flow values.
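The regression basis shared by the four approaches can be sketched as an ordinary least-squares fit of a low flow index on a meteorological drought index and a catchment descriptor. All variable names, coefficients, and noise levels below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
spi = rng.normal(0.0, 1.0, n)         # hypothetical meteorological drought index
area = rng.uniform(50.0, 500.0, n)    # hypothetical catchment descriptor (km^2)
# Synthetic annual low flow index with assumed linear effects plus noise.
low_flow = 0.5 + 0.3 * spi + 0.002 * area + rng.normal(0, 0.05, n)

# Multiple linear regression: low flow ~ drought index + catchment descriptor.
X = np.column_stack([np.ones(n), spi, area])
beta, *_ = np.linalg.lstsq(X, low_flow, rcond=None)
print(beta)  # recovers the intercept and both effects
```

The panel-data variant (method two) fits one such model jointly across stations; the station-by-station variant (method one) fits it per gauge and regionalizes the coefficients afterwards.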
All in all, the results promote the inclusion of simple statistical methods in climate change impact assessment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70026954','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70026954"><span>Importance of the habitat choice behavior assumed when modeling the effects of food and temperature on fish populations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Wildhaber, Mark L.; Lamberson, Peter J.</p> <p>2004-01-01</p> <p>Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates.
Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JASTP.171..137B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JASTP.171..137B"><span>Ionosonde-based indices for improved representation of solar cycle variation in the International Reference Ionosphere model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Brown, Steven; Bilitza, Dieter; Yiǧit, Erdal</p> <p>2018-06-01</p> <p>A new monthly ionospheric index, IGNS, is presented to improve the representation of the solar cycle variation of the ionospheric F2 peak plasma frequency, foF2. IGNS is calculated using a methodology similar to the construction of the "global effective sunspot number", IG, given by Liu et al. (1983) but selects ionosonde observations based on hemispheres. We incorporated the updated index into the International Reference Ionosphere (IRI) model and compared the foF2 model predictions with global ionospheric observations. We also investigated the influence of the underlying foF2 model on the IG index. IRI has two options for foF2 specification, the CCIR-66 and URSI-88 foF2 models. For the first time, we have calculated IG using URSI-88 and assessed the impact on model predictions. Through a retrospective model-data comparison, results show that the inclusion of the new monthly IGNS index in place of the current 12-month smoothed IG index reduces the foF2 model prediction errors by nearly a factor of two. These results apply to both daytime and nighttime predictions.
This is due to an overall improved prediction of foF2 seasonal and solar cycle variations in the different hemispheres.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26772608','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26772608"><span>A calibration hierarchy for risk models was defined: from utopia to empirical data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W</p> <p>2016-06-01</p> <p>Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects should be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, since it stimulates the development of overly complex models.
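The "moderate calibration" level defined above (an observed event rate of R% among patients with a predicted risk of R%) can be checked empirically by binning predictions. A sketch on simulated data that is perfectly calibrated by construction:

```python
import numpy as np

rng = np.random.default_rng(2)
risk = rng.uniform(0.05, 0.95, 10_000)    # predicted risks
events = rng.random(10_000) < risk        # outcomes drawn at exactly those risks

# Moderate calibration: within each risk bin, the observed event rate
# should match the mean predicted risk.
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(risk, bins) - 1
for b in range(10):
    mask = idx == b
    if mask.any():
        print(b, round(risk[mask].mean(), 3), round(events[mask].mean(), 3))
```

On real validation data the two columns would diverge wherever the model is miscalibrated; smooth (e.g. loess-based) calibration curves are the continuous analogue of this binned check.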
Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29140266','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29140266"><span>Forecasting the Water Demand in Chongqing, China Using a Grey Prediction Model and Recommendations for the Sustainable Development of Urban Water Consumption.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wu, Hua'an; Zeng, Bo; Zhou, Meng</p> <p>2017-11-15</p> <p>High accuracy in water demand predictions is an important basis for the rational allocation of city water resources and forms the basis for sustainable urban development. The shortage of water resources in Chongqing, the youngest central municipality in Southwest China, has significantly increased with the population growth and rapid economic development. In this paper, a new grey water-forecasting model (GWFM) was built based on the data characteristics of water consumption. The parameter estimation and error checking methods of the GWFM model were investigated. Then, the GWFM model was employed to simulate the water demands of Chongqing from 2009 to 2015 and forecast it for 2016. The simulation and prediction errors of the GWFM model were checked, and the results show that the GWFM model exhibits better simulation and prediction precision than those of the classical Grey Model with one variable and single order equation (GM(1,1) for short) and the frequently-used Discrete Grey Model with one variable and single order equation (DGM(1,1) for short).
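For readers unfamiliar with the grey models being compared, a minimal GM(1,1) sketch (a generic textbook version, not the paper's GWFM): the series is accumulated, a first-order grey differential equation is fitted by least squares, and forecasts are recovered by differencing the time-response function.

```python
import math

def gm11_forecast(x0, steps=1):
    """Minimal GM(1,1) sketch: fit on the series x0 and forecast `steps` ahead.
    Requires a non-constant series (development coefficient a must be nonzero)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # accumulated series
    z1 = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]   # background values
    # Least squares for x0[k] = -a*z1[k] + b via 2x2 normal equations.
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    m = n - 1
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det   # development coefficient
    b = (szz * sy - sz * szy) / det  # grey input
    # Time-response function of the accumulated series, then difference back.
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]
```

On a roughly exponential demand series, the one-step forecast tracks the growth trend; this is the behaviour that GWFM and DGM(1,1) refine.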
Finally, the water demand in Chongqing from 2017 to 2022 was forecasted, and some corresponding control measures and recommendations were provided based on the prediction results to ensure a viable water supply and promote the sustainable development of the Chongqing economy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3077926','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3077926"><span>Model-based influences on humans’ choices and striatal prediction errors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.</p> <p>2011-01-01</p> <p>Summary The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. 
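The model-free reward prediction error discussed here is the classic temporal-difference delta; a minimal single-state sketch (illustrative only, not the authors' multistep task model):

```python
def td_prediction_errors(rewards, alpha=0.1):
    """Model-free TD learning on one recurring state: the prediction error
    delta = r - V is the quantity striatal BOLD is reported to track, and
    the value estimate V is nudged toward each reward by learning rate alpha."""
    v, deltas = 0.0, []
    for r in rewards:
        delta = r - v            # reward prediction error
        deltas.append(delta)
        v += alpha * delta       # value update
    return v, deltas
```

Under the hybrid account supported by these results, the striatal signal would instead reflect a weighted mixture of this model-free delta and a model-based prediction error, with the weight matching the one that best explains choices.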
PMID:21435563</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=data+AND+mining&pg=3&id=EJ1033677','ERIC'); return false;" href="https://eric.ed.gov/?q=data+AND+mining&pg=3&id=EJ1033677"><span>A Demonstration of Regression False Positive Selection in Data Mining</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Pinder, Jonathan P.</p> <p>2014-01-01</p> <p>Business analytics courses, such as marketing research, data mining, forecasting, and advanced financial modeling, have substantial predictive modeling components. The predictive modeling in these courses requires students to estimate and test many linear regressions. As a result, false positive variable selection ("type I errors") is…</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20677430','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20677430"><span>[Application of predictive model to estimate concentrations of chemical substances in the work environment].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kupczewska-Dobecka, Małgorzata; Czerczak, Sławomir; Jakubowski, Marek; Maciaszek, Piotr; Janasik, Beata</p> <p>2010-01-01</p> <p>Based on the Estimation and Assessment of Substance Exposure (EASE) predictive model implemented in the European Union System for the Evaluation of Substances (EUSES 2.1.), the exposure to three chosen organic solvents: toluene, ethyl acetate and acetone was estimated and compared with the results of measurements in workplaces. Prior to validation, the EASE model was pretested using three exposure scenarios. The scenarios differed in the decision tree of pattern of use. Five substances were chosen for the test: 1,4-dioxane, tert-methyl-butyl ether, diethylamine, 1,1,1-trichloroethane and bisphenol A. After testing the EASE model, the next step was validation by estimating the exposure level and comparing it with the results of measurements in the workplace. We used the results of measurements of toluene, ethyl acetate and acetone concentrations in the work environment of a paint and lacquer factory, a shoe factory and a refinery. Three types of exposure scenarios, adaptable to the description of working conditions, were chosen to estimate inhalation exposure.
Comparison of calculated exposure to toluene, ethyl acetate and acetone with measurements in workplaces showed that model predictions are comparable with the measurement results. Only in low concentration ranges were the measured concentrations higher than those predicted. EASE is a clear, consistent system, which can be successfully used as an additional component of inhalation exposure estimation. If measurement data are available, they should be preferred to values estimated from models. In addition to inhalation exposure estimation, the EASE model makes it possible not only to assess exposure-related risk but also to predict workers' dermal exposure.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.8205E..0TZ','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.8205E..0TZ"><span>Prediction of wastewater treatment plants performance based on artificial fish school neural network</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Ruicheng; Li, Chong</p> <p>2011-10-01</p> <p>A reliable model of a wastewater treatment plant is essential as a tool for predicting its performance and as a basis for controlling the operation of the process. It would minimize operation costs and help assess the stability of the environmental balance. Given the multi-variable, uncertain, non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operation data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The results of model calculations show that the predicted values closely match the measured values, that the model simulates and predicts well, and that it can be used to optimize the operation status.
The established prediction model provides a simple and practical approach to the operation and management of wastewater treatment plants, and has good research and practical engineering value.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5912497','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5912497"><span>Advanced Daily Prediction Model for National Suicide Numbers with Social Media Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lee, Kyung Sang; Lee, Hyewon; Myung, Woojae; Song, Gil-Young; Lee, Kihwang; Kim, Ho; Carroll, Bernard J.; Kim, Doh Kwan</p> <p>2018-01-01</p> <p>Objective Suicide is a significant public health concern worldwide. Social media data have a potential role in identifying high suicide risk individuals and also in predicting suicide rate at the population level. In this study, we report an advanced daily suicide prediction model using social media data combined with economic/meteorological variables along with observed suicide data lagged by 1 week. Methods The social media data were drawn from weblog posts. We examined a total of 10,035 social media keywords for suicide prediction. We made predictions of national suicide numbers 7 days in advance daily for 2 years, based on a daily moving 5-year prediction modeling period. Results Our model predicted the likely range of daily national suicide numbers with 82.9% accuracy. Among the social media variables, words denoting economic issues and mood status showed high predictive strength.
Observed number of suicides one week previously, recent celebrity suicide, and day of week followed by stock index, consumer price index, and sunlight duration 7 days before the target date were notable predictors along with the social media variables. Conclusion These results strengthen the case for social media data to supplement classical social/economic/climatic data in forecasting national suicide events. PMID:29614852</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70043710','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70043710"><span>Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Francy, Donna S.; Stelzer, Erin A.; Duris, Joseph W.; Brady, Amie M.G.; Harrison, John H.; Johnson, Heather E.; Ware, Michael W.</p> <p>2013-01-01</p> <p>Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. 
At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23291550','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23291550"><span>Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Francy, Donna S; Stelzer, Erin A; Duris, Joseph W; Brady, Amie M G; Harrison, John H; Johnson, Heather E; Ware, Michael W</p> <p>2013-03-01</p> <p>Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. 
coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. 
Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4214046','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4214046"><span>Proactive Supply Chain Performance Management with Predictive Analytics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Stefanovic, Nenad</p> <p>2014-01-01</p> <p>Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different level of details. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. 
The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to future business environment. PMID:25386605</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25386605','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25386605"><span>Proactive supply chain performance management with predictive analytics.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Stefanovic, Nenad</p> <p>2014-01-01</p> <p>Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different level of details. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. 
The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1379662-evaluation-predictive-capability-coupled-thermo-hydro-mechanical-models-heated-bentonite-clay-system-he-mont-terri-rock-laboratory','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1379662-evaluation-predictive-capability-coupled-thermo-hydro-mechanical-models-heated-bentonite-clay-system-he-mont-terri-rock-laboratory"><span>Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Garitte, B.; Shao, H.; Wang, X. R.; ...</p> <p>2017-01-09</p> <p>Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating an in-situ heater experiment HE-E in the Opalinus Clay in the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment.
A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, derived from the modelling of another in-situ experiment (HE-D), and from the modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1379662','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1379662"><span>Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Garitte, B.; Shao, H.; Wang, X. R.</p> <p></p> <p>Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating an in-situ heater experiment HE-E in the Opalinus Clay in the Mont Terri Rock Laboratory in Switzerland.
The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, derived from the modelling of another in-situ experiment (HE-D), and from the modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3906747','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3906747"><span>Evaluation of multivariate linear regression and artificial neural networks in prediction of water quality parameters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2014-01-01</p> <p>This paper examined the efficiency of multivariate linear regression (MLR) and artificial neural network (ANN) models in prediction of two major water quality parameters in a wastewater treatment plant. Biochemical oxygen demand (BOD) and chemical oxygen demand (COD) as well as indirect indicators of organic matters are representative parameters for sewer water quality. Performance of the ANN models was evaluated using coefficient of correlation (r), root mean square error (RMSE) and bias values.
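The three performance measures named here are straightforward to compute; a sketch (generic formulas, not the paper's code) for paired observed/predicted series:

```python
import math

def fit_metrics(obs, pred):
    """Coefficient of correlation (r), RMSE, and bias between observed
    and predicted values; bias is the mean error (predicted minus observed)."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    r = cov / (so * sp)                                       # Pearson correlation
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    bias = mp - mo
    return r, rmse, bias
```

Note that r alone rewards any linear relationship, so a systematically offset model can still score r near 1; that is why RMSE and bias are reported alongside it.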
The values of BOD and COD computed by the ANN and regression models were in close agreement with their respective measured values. Results showed that the ANN model performed better than the MLR model. Comparative indices of the optimized ANN with inputs of temperature (T), pH, total suspended solids (TSS) and total solids (TS) were RMSE = 25.1 mg/L, r = 0.83 for prediction of BOD and RMSE = 49.4 mg/L, r = 0.81 for prediction of COD. It was found that the ANN model could be employed successfully in estimating the BOD and COD in the inlet of wastewater biochemical treatment plants. Moreover, sensitivity analysis showed that the pH parameter has a greater effect on the prediction of BOD and COD than the other parameters. Also, both models predicted BOD better than COD. PMID:24456676</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23414436','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23414436"><span>Prediction models for clustered data: comparison of a random intercept and standard regression model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne</p> <p>2013-02-15</p> <p>When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different.
Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. 
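The c-index used here to measure discrimination is the fraction of event/non-event pairs that the model ranks correctly; a minimal sketch (the standard definition, not the authors' code):

```python
def c_index(y, risk):
    """Concordance (c) index: probability that a randomly chosen event case
    receives a higher predicted risk than a randomly chosen non-event case.
    Ties in predicted risk count as 0.5."""
    events = [r for yi, r in zip(y, risk) if yi == 1]
    nonevents = [r for yi, r in zip(y, risk) if yi == 0]
    pairs = conc = 0.0
    for re in events:
        for rn in nonevents:
            pairs += 1
            if re > rn:
                conc += 1
            elif re == rn:
                conc += 0.5
    return conc / pairs
```

A value of 0.5 corresponds to random ranking; the 0.69 versus 0.66 contrast reported above reflects the extra ranking information carried by the cluster (anesthesiologist) effects.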
The prediction model with random intercept had good calibration within clusters.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27313605','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27313605"><span>A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren</p> <p>2016-01-01</p> <p>Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, popularity of article titles, and the prediction result of a time series, Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing with conventional sales prediction techniques. 
The experimental result shows that our proposed forecasting method outperforms conventional techniques which do not consider the popularity of title words.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4893432','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4893432"><span>A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren</p> <p>2016-01-01</p> <p>Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, popularity of article titles, and the prediction result of a time series, Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing with conventional sales prediction techniques. 
The experimental results show that our proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words. PMID:27313605</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25079842','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25079842"><span>Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Madarang, Krish J; Kang, Joo-Hyon</p> <p>2014-06-01</p> <p>Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, whether ADD is an important variable for predicting stormwater discharge characteristics has remained controversial across studies. In this study, we examined the accuracy of general linear regression models in predicting discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was then used to simulate 55 storm events, and the results for total suspended solids (TSS) discharge loads and event mean concentrations (EMC) were extracted. From these data, linear regression models were developed. 
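The regression step just described can be sketched as ordinary least squares on synthetic storm events; the variable names and coefficients below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic storm events (hypothetical values; the study used 55 SWMM-simulated storms).
n = 55
add = rng.uniform(1, 30, n)            # antecedent dry days (ADD)
rain = rng.uniform(5, 80, n)           # event rainfall depth, mm
# Assume the load grows with rainfall and, weakly, with ADD.
tss_load = 2.0 * rain + 0.5 * add + rng.normal(0, 5, n)

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones(n), rain, add])
beta, *_ = np.linalg.lstsq(X, tss_load, rcond=None)
pred = X @ beta

# Coefficient of determination, as reported for the study's regressions.
ss_res = np.sum((tss_load - pred) ** 2)
ss_tot = np.sum((tss_load - tss_load.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(beta, r2)
```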
R² and p-values of the ADD regressions for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29322923','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29322923"><span>A polynomial based model for cell fate prediction in human diseases.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ma, Lichun; Zheng, Jie</p> <p>2017-12-21</p> <p>Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial-based model to predict cell fate. The model was derived from a Taylor series expansion. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation-based and apoptosis-pathway-based. Polynomials of different degrees were then used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. 
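The degree-selection procedure above (polynomial fits evaluated by 10-fold cross-validation) can be sketched as follows on synthetic one-feature data; the data, noise level, and candidate degrees are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic single-feature "gene expression" with a linear fate signal.
n = 100
x = rng.uniform(-1, 1, n)
y = 1.5 * x + rng.normal(0, 0.3, n)

def cv_mse(degree, folds=10):
    """10-fold cross-validated MSE of a polynomial fit of the given degree."""
    idx = np.arange(n)
    errs = []
    for k in range(folds):
        test = idx[k::folds]                      # every folds-th point held out
        train = np.setdiff1d(idx, test)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[test]) - y[test]) ** 2))
    return float(np.mean(errs))

# Compare a linear model against higher-degree polynomials.
scores = {d: cv_mse(d) for d in (1, 3, 9)}
print(scores)
```

With a linear ground truth, the degree-1 fit matches the noise floor while higher degrees add variance without improving accuracy, which parallels the study's preference for the linear polynomial.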
Results show that, for both gene selection methods, the prediction accuracies of polynomials of different degrees differ little. Interestingly, the linear (degree-1) polynomial is more stable than the others. Comparing the linear polynomials based on the two gene selection methods shows that although the accuracy of the linear polynomial built on correlation analysis is slightly higher (86.62%), the one built on apoptosis-pathway genes is much more stable. Considering both prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cell types, which may be important for basic research as well as clinical studies of cell development-related diseases.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27439379','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27439379"><span>A model describing diffusion in prostate cancer.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gilani, Nima; Malcolm, Paul; Johnson, Glyn</p> <p>2017-07-01</p> <p>Quantitative diffusion MRI has frequently been studied as a means of grading prostate cancer. Interpretation of results is complicated by the nature of prostate tissue, which consists of four distinct compartments: vascular, ductal lumen, epithelium, and stroma. Current diffusion measurements are an ill-defined weighted average of these compartments. In this study, prostate diffusion is analyzed in terms of a model that takes explicit account of tissue compartmentalization, exchange effects, and the non-Gaussian behavior of tissue diffusion. 
The model assumes that exchange between the cellular (i.e., stromal plus epithelial) and the vascular and ductal compartments is slow. Ductal and cellular diffusion characteristics are estimated by Monte Carlo simulation and a two-compartment exchange model, respectively. Vascular pseudodiffusion is represented by an additional signal at b = 0. Most model parameters are obtained either from published data or by comparing model predictions with the published results from 41 studies. Model prediction error is estimated using 10-fold cross-validation. Agreement between model predictions and published results is good. The model satisfactorily explains the variability of ADC estimates found in the literature. A reliable model that predicts the diffusion behavior of benign and cancerous prostate tissue of different Gleason scores has been developed. Magn Reson Med 78:316-326, 2017. © 2016 International Society for Magnetic Resonance in Medicine.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20040172815&hterms=discrimination&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Ddiscrimination','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20040172815&hterms=discrimination&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Ddiscrimination"><span>Object detection in natural backgrounds predicted by discrimination performance and models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.</p> <p>1997-01-01</p> <p>Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. 
We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d's with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17306751','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17306751"><span>Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong</p> <p>2007-09-01</p> <p>Statistical models have frequently been used in highway safety studies. 
They can be utilized for various purposes, including establishing relationships between variables, screening covariates, and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity of estimating this kind of model, as well as the problem of "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than that of the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than that of the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. 
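As a rough illustration of the count-regression baseline in this comparison, the sketch below fits a Poisson regression by iteratively reweighted least squares (IRLS) on synthetic crash counts; the NB model used in the study adds an overdispersion parameter, which is omitted here for brevity, and all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic road segments: crash counts depend log-linearly on traffic volume.
n = 200
log_aadt = rng.uniform(6, 10, n)             # log annual average daily traffic
mu_true = np.exp(-6.0 + 0.8 * log_aadt)      # expected crashes per segment
crashes = rng.poisson(mu_true)

# Poisson regression via IRLS (a stand-in for NB, which adds overdispersion).
X = np.column_stack([np.ones(n), log_aadt])
beta = np.zeros(2)
for _ in range(25):
    eta = np.clip(X @ beta, -20, 20)          # clip to avoid overflow in exp
    mu = np.exp(eta)
    z = eta + (crashes - mu) / mu             # working response
    WX = X * mu[:, None]                      # working weights W = mu
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)
print(beta)
```

The recovered coefficients approximate the generating values (-6.0, 0.8), which is the sanity check one would run before comparing such a baseline against neural network models.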
The results also show that BNNs could be used for other useful analyses in highway safety, including developing accident modification factors and improving prediction capabilities for evaluating different highway design alternatives.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4844798','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4844798"><span>A Physiologically Based Pharmacokinetic Model to Predict the Pharmacokinetics of Highly Protein-Bound Drugs and Impact of Errors in Plasma Protein Binding</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ye, Min; Nagar, Swati; Korzekwa, Ken</p> <p>2015-01-01</p> <p>Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding, and blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate model accuracy. 
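The well-stirred liver model named above has a standard closed form, sketched below with illustrative (not study-specific) parameter values; the second computation mirrors the abstract's point that errors in the unbound fraction propagate almost proportionally into clearance for low-extraction drugs.

```python
# Well-stirred liver model: hepatic clearance from blood flow, unbound fraction,
# and intrinsic clearance. Parameter values below are illustrative only.
def well_stirred_clh(q_h, fu_p, cl_int):
    """CLh = Qh * fu * CLint / (Qh + fu * CLint)."""
    return q_h * fu_p * cl_int / (q_h + fu_p * cl_int)

q_h = 90.0        # hepatic blood flow, L/h
cl_int = 500.0    # intrinsic clearance, L/h
fu_p = 0.02       # fraction unbound for a highly bound drug

clh = well_stirred_clh(q_h, fu_p, cl_int)

# Error propagation: for a low-extraction drug (fu*CLint << Qh),
# a 2-fold error in fu gives close to a 2-fold error in CLh.
clh_2x = well_stirred_clh(q_h, 2 * fu_p, cl_int)
print(clh, clh_2x / clh)
```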
Across the 22 drugs, predictions fell within 2-fold of observed values for terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0–t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%), and steady-state volume of distribution (Vss, 90.9%). The impact of errors in the fraction unbound in plasma (fup) on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. PMID:26531057</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5147863','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5147863"><span>Towards Assessing the Human Trajectory Planning Horizon</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Nitsch, Verena; Meinzer, Dominik; Wollherr, Dirk</p> <p>2016-01-01</p> <p>Mobile robots are envisioned to cooperate closely with humans and to integrate seamlessly into a shared environment. For locomotion, these environments resemble traversable areas that are shared between multiple agents like humans and robots. The seamless integration of mobile robots into these environments requires accurate predictions of human locomotion. This work considers optimal control and model predictive control approaches for accurate trajectory prediction and proposes to integrate aspects of human behavior to improve their performance. 
Recently developed models are not able to accurately reproduce trajectories that result from sudden avoidance maneuvers. Particularly, the human locomotion behavior when handling disturbances from other agents poses a problem. The goal of this work is to investigate whether humans alter their trajectory planning horizon in order to resolve abruptly emerging collision situations. By modeling humans as model predictive controllers, the influence of the planning horizon is investigated in simulations. Based on these results, an experiment is designed to identify whether humans initiate a change in their locomotion planning behavior while moving in a complex environment. The results support the hypothesis that humans employ a shorter planning horizon to avoid collisions that are triggered by unexpected disturbances. Observations presented in this work are expected to further improve the generalizability and accuracy of prediction methods based on dynamic models. PMID:27936015</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22967536','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22967536"><span>Multiple testing of food contact materials: a predictive algorithm for assessing the global migration from silicone moulds.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Elskens, Marc; Vloeberghs, Daniel; Van Elsen, Liesbeth; Baeyens, Willy; Goeyens, Leo</p> <p>2012-09-15</p> <p>For reasons of food safety, packaging and food contact materials must be submitted to migration tests. Testing of silicone moulds is often very laborious, since three replicate tests are required to decide about their compliance. This paper presents a general modelling framework to predict the sample's compliance or non-compliance using results of the first two migration tests. It compares the outcomes of models with multiple continuous predictors with a class of models involving latent and dummy variables. The model's prediction ability was tested using cross- and external validation, i.e., model revalidation each time a new measurement set became available. At the overall migration limit of 10 mg dm⁻², the relative uncertainty on a prediction was estimated to be ~10%. Taking the default values for α and β equal to 0.05, the maximum value that can be predicted for sample compliance was therefore 7 mg dm⁻². Beyond this limit the risk of false compliant results increases significantly, and a third migration test should be performed. The result of this latter test defines the sample's compliance or non-compliance. 
Propositions for compliance control inspired by the current dioxin control strategy are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JThSc..20..548Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JThSc..20..548Y"><span>Investigation of detailed kinetic scheme performance on modelling of turbulent non-premixed sooting flames</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yunardi, Y.; Darmadi, D.; Hisbullah, H.; Fairweather, M.</p> <p>2011-12-01</p> <p>This paper presents the results of an application of a first-order conditional moment closure (CMC) approach coupled with a semi-empirical soot model to investigate the effect of various detailed combustion chemistry schemes on soot formation and destruction in turbulent non-premixed flames. A two-equation soot model representing soot particle nucleation, growth, coagulation, and oxidation was incorporated into the CMC model. The turbulent flow-field of both flames is described using the Favre-averaged fluid-flow equations, applying a standard k-ɛ turbulence model. Five reaction kinetic mechanisms (ABF, Miller-Bowman, GRI-Mech 3.0, Warnatz, and Qin), each comprising 50-100 species and 200-1000 elementary reactions, were employed to study the effect of combustion chemistry schemes on soot predictions. The results showed that the kinetic schemes studied all yield similar accuracy in temperature prediction when compared with experimental data. With respect to soot prediction, kinetic schemes containing benzene elementary reactions tend to predict soot concentrations better than those containing no benzene reactions. 
Among the five kinetic mechanisms studied, the Qin scheme turned out to yield the best predictions of both flame temperature and soot levels.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27936015','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27936015"><span>Towards Assessing the Human Trajectory Planning Horizon.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Carton, Daniel; Nitsch, Verena; Meinzer, Dominik; Wollherr, Dirk</p> <p>2016-01-01</p> <p>Mobile robots are envisioned to cooperate closely with humans and to integrate seamlessly into a shared environment. For locomotion, these environments resemble traversable areas that are shared between multiple agents like humans and robots. The seamless integration of mobile robots into these environments requires accurate predictions of human locomotion. This work considers optimal control and model predictive control approaches for accurate trajectory prediction and proposes to integrate aspects of human behavior to improve their performance. Recently developed models are not able to accurately reproduce trajectories that result from sudden avoidance maneuvers. Particularly, the human locomotion behavior when handling disturbances from other agents poses a problem. The goal of this work is to investigate whether humans alter their trajectory planning horizon in order to resolve abruptly emerging collision situations. By modeling humans as model predictive controllers, the influence of the planning horizon is investigated in simulations. Based on these results, an experiment is designed to identify whether humans initiate a change in their locomotion planning behavior while moving in a complex environment. 
The results support the hypothesis that humans employ a shorter planning horizon to avoid collisions that are triggered by unexpected disturbances. Observations presented in this work are expected to further improve the generalizability and accuracy of prediction methods based on dynamic models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27642585','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27642585"><span>Predictive Modeling of Estrogen Receptor Binding Agents Using Advanced Cheminformatics Tools and Massive Public Data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ribay, Kathryn; Kim, Marlene T; Wang, Wenyi; Pinolini, Daniel; Zhu, Hao</p> <p>2016-03-01</p> <p>Estrogen receptors (ERα) are a critical target for drug design as well as a potential source of toxicity when activated unintentionally. Thus, evaluating potential ERα binding agents is critical in both drug discovery and chemical toxicity areas. Computational tools, e.g., Quantitative Structure-Activity Relationship (QSAR) models, can predict potential ERα binding agents before chemical synthesis. The purpose of this project was to develop enhanced predictive models of ERα binding agents by utilizing advanced cheminformatics tools that can integrate publicly available bioassay data. The initial ERα binding agent data set, consisting of 446 binders and 8307 non-binders, was obtained from the Tox21 Challenge project organized by the NIH Chemical Genomics Center (NCGC). After removing duplicates and inorganic compounds, this data set was used to create a training set (259 binders and 259 non-binders). This training set was used to develop QSAR models using chemical descriptors. 
The resulting models were then used to predict the binding activity of 264 external compounds, which became available to us after the models were developed. The cross-validation results on the training set [Correct Classification Rate (CCR) = 0.72] were much higher than the external predictivity on the unknown compounds (CCR = 0.59). To improve the conventional QSAR models, all compounds in the training set were used to search PubChem and generate a profile of their biological responses across thousands of bioassays. The most important bioassays were prioritized to generate a similarity index that was used to calculate the biosimilarity score between each pair of compounds. The nearest neighbors for each compound within the set were then identified, and its ERα binding potential was predicted from its nearest neighbors in the training set. The hybrid model performance (CCR = 0.94 for cross validation; CCR = 0.68 for external prediction) showed significant improvement over the original QSAR models, particularly for the activity cliffs that induce prediction errors. The results of this study indicate that the response profile of chemicals from public data provides useful information for modeling and evaluation purposes. 
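The biosimilarity nearest-neighbour step can be sketched as follows; the bioassay profiles, the similarity measure (Jaccard), and the k value are illustrative assumptions, not the study's exact choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical bioassay response profiles: rows = compounds, cols = assays
# (1 = active, 0 = inactive). Binders share a denser activity fingerprint.
n_assay = 40
binders = (rng.uniform(size=(20, n_assay)) < 0.7).astype(float)
nonbinders = (rng.uniform(size=(20, n_assay)) < 0.2).astype(float)
X = np.vstack([binders, nonbinders])
y = np.array([1] * 20 + [0] * 20)

def jaccard(a, b):
    """Similarity between two binary response profiles."""
    inter = np.sum((a == 1) & (b == 1))
    union = np.sum((a == 1) | (b == 1))
    return inter / union if union else 0.0

def knn_predict(profile, k=3):
    """Label a compound by its k most biosimilar training neighbours."""
    sims = np.array([jaccard(profile, row) for row in X])
    top = np.argsort(sims)[-k:]
    return int(round(y[top].mean()))

query = (rng.uniform(size=n_assay) < 0.7).astype(float)   # binder-like profile
pred = knn_predict(query)
print(pred)
```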
Public big-data resources should be considered alongside chemical structure information when evaluating new compounds, such as unknown ERα binding agents.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.8602L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.8602L"><span>Benchmarking hydrological model predictive capability for UK River flows and flood peaks.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten</p> <p>2017-04-01</p> <p>Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models and run at a daily timestep, but differ in their structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over a 20-year period (1988-2008) within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure, and parameter set using standard performance metrics. These were calculated both for the whole time series and to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time-series and additionally the annual maximum prediction bounds for each catchment. 
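The GLUE-style bound construction described above can be sketched as taking ensemble percentiles at each time step; the synthetic ensemble below is purely illustrative, standing in for the behavioural parameter sets retained by GLUE.

```python
import numpy as np

rng = np.random.default_rng(5)

# GLUE-style prediction bounds: run an ensemble of behavioural parameter sets,
# then take the 5th/95th percentiles of simulated flow at each time step.
t = np.arange(100)
obs = 10 + 5 * np.sin(t / 10)                  # synthetic "observed" flow

# Hypothetical ensemble: each member perturbs a gain and an offset.
members = []
for _ in range(200):
    gain = rng.normal(1.0, 0.1)
    offset = rng.normal(0.0, 1.0)
    members.append(gain * obs + offset)
ens = np.array(members)

lower = np.percentile(ens, 5, axis=0)
upper = np.percentile(ens, 95, axis=0)

# Reliability check: fraction of observations inside the 5-95% bounds.
coverage = np.mean((obs >= lower) & (obs <= upper))
print(coverage)
```

Comparing observed discharge against such percentile bounds is the "successfully bound the observed discharge" test the abstract refers to.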
The results show that model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models systematically fail to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and explore which structural components or parameterisations enable certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by examining the ability of the models to produce discharge prediction bounds that successfully bracket the observed discharge. These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches to better represent different catchment and climate typologies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010APS..OSF.P1004S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010APS..OSF.P1004S"><span>Some Unsolved Problems, Questions, and Applications of the Brightsen Nucleon Cluster Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Smarandache, Florentin</p> <p>2010-10-01</p> <p>The Brightsen Model stands in opposition to the Standard Model; it was built on John Wheeler's Resonating Group Structure Model and on Linus Pauling's Close-Packed Spheron Model. 
Among the Brightsen Model's predictions and applications: it derives the average number of prompt neutrons per fission event; it provides a theoretical way to understand low-temperature/low-energy reactions and to approach artificially induced fission; it predicts that forces within nucleon clusters are stronger than forces between such clusters within isotopes; and it predicts unmatter entities inside nuclei that result from a stable and neutral union of matter and antimatter. But these predictions remain to be tested in the future at the new CERN laboratory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015MS%26E...84a2015D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015MS%26E...84a2015D"><span>Prediction of as-cast grain size of inoculated aluminum alloys melt solidified under non-isothermal conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Du, Qiang; Li, Yanjun</p> <p>2015-06-01</p> <p>In this paper, a multi-scale model is proposed to predict the as-cast grain size of inoculated aluminum alloy melts solidified under non-isothermal conditions, i.e., in the presence of a temperature gradient. Given the melt composition, inoculation, and heat-extraction boundary conditions, the model predicts the maximum nucleation undercooling, cooling curve, primary-phase solidification path, and final as-cast grain size of binary alloys. The proposed model has been applied to two Al-Mg alloys, and comparisons with laboratory and industrial solidification experiments have been carried out. 
The preliminary conclusion is that the proposed model is a promising microscopic model for use within a multi-scale casting simulation framework.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1811262H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1811262H"><span>A hybrid deep neural network and physically based distributed model for river stage prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hitokoto, Masayuki; Sakuraba, Masaaki</p> <p>2016-04-01</p> <p>We developed a real-time river stage prediction model using a hybrid of a deep neural network and a physically based distributed model. A 4-layer feed-forward artificial neural network (ANN) was used as the basic model and trained with deep learning techniques: network weights were optimized by stochastic gradient descent based on back-propagation, with a denoising autoencoder used for pre-training. The inputs of the ANN model are the hourly change of water level and hourly rainfall; the output is the water level at a downstream station. In general, desirable ANN inputs correlate strongly with the output. In conceptual hydrological models such as the tank model and the storage-function model, river discharge is governed by catchment storage. Therefore the change in catchment storage, i.e., rainfall minus downstream discharge, is a potent candidate input for the ANN model instead of rainfall alone. From this point of view, the hybrid deep neural network and physically based distributed model was developed. 
The prediction procedure of the hybrid model is as follows: first, the downstream discharge is calculated by the distributed model; next, the hourly change of catchment storage is estimated from rainfall and the calculated discharge and used as input to the ANN model; finally, the ANN model is evaluated. In the training phase, the hourly change of catchment storage can be calculated from the observed rainfall and discharge data. The developed model was applied to one catchment of the OOYODO River, one of the first-grade rivers in Japan. The modeled catchment covers 695 square km. For the training data, 5 water-level gauging stations and 14 rain-gauge stations in the catchment were used. The 24 largest flood events during the period 2005-2014 were selected for training. Predictions were made up to 6 hours ahead, with a separate model developed for each prediction lead time. To set proper learning parameters and the network architecture of the ANN model, a sensitivity analysis was carried out using a case study approach. The predictions were evaluated on the 4 largest flood events using leave-one-out cross-validation. The prediction results of the basic 4-layer ANN were better than those of a conventional 3-layer ANN model. However, it did not reproduce the biggest flood event well, presumably because of the lack of sufficiently high water-level flood events in the training data. 
The hybrid model outperformed both the basic ANN model and the distributed model, and in particular improved on the basic ANN model for the biggest flood event.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28989022','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28989022"><span>How Sensitive Are Transdermal Transport Predictions by Microscopic Stratum Corneum Models to Geometric and Transport Parameter Input?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wen, Jessica; Koo, Soh Myoung; Lape, Nancy</p> <p>2018-02-01</p> <p>While predictive models of transdermal transport have the potential to reduce human and animal testing, microscopic stratum corneum (SC) model output is highly dependent on idealized SC geometry, transport pathway (transcellular vs. intercellular), and penetrant transport parameters (e.g., compound diffusivity in lipids). Most microscopic models are limited to a simple rectangular brick-and-mortar SC geometry and do not account for variability across delivery sites, hydration levels, and populations. In addition, these models rely on transport parameters obtained from pure theory, parameter fitting to match in vivo experiments, and time-intensive diffusion experiments for each compound. In this work, we develop a microscopic finite element model that allows us to probe model sensitivity to variations in geometry, transport pathway, and hydration level. Given the dearth of experimentally-validated transport data and the wide range in theoretically-predicted transport parameters, we examine the model's response to a variety of transport parameters reported in the literature. 
Results show that model predictions are strongly dependent on all aforementioned variations, resulting in order-of-magnitude differences in lag times and permeabilities for distinct structure, hydration, and parameter combinations. This work demonstrates that universally predictive models cannot fully succeed without employing experimentally verified transport parameters and individualized SC structures. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11080563','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11080563"><span>The Mt. Hood challenge: cross-testing two diabetes simulation models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brown, J B; Palmer, A J; Bisgaard, P; Chan, W; Pedula, K; Russell, A</p> <p>2000-11-01</p> <p>Starting from identical patients with type 2 diabetes, we compared the 20-year predictions of two computer simulation models, a 1998 version of the IMIB model and version 2.17 of the Global Diabetes Model (GDM). Primary measures of outcome were 20-year cumulative rates of: survival, first (incident) acute myocardial infarction (AMI), first stroke, proliferative diabetic retinopathy (PDR), macro-albuminuria (gross proteinuria, or GPR), and amputation. Standardized test patients were newly diagnosed males aged 45 or 75, with high and low levels of glycated hemoglobin (HbA(1c)), systolic blood pressure (SBP), and serum lipids. Both models generated realistic results and appropriate responses to changes in risk factors. Compared with the GDM, the IMIB model predicted much higher rates of mortality and AMI, and fewer strokes. These differences can be explained by differences in model architecture (Markov vs. 
microsimulation), different evidence bases for cardiovascular prediction (Framingham Heart Study cohort vs. Kaiser Permanente patients), and isolated versus interdependent prediction of cardiovascular events. Compared with IMIB, GDM predicted much higher lifetime costs, because of lower mortality and the use of a different costing method. It is feasible to cross-validate and explicate dissimilar diabetes simulation models using standardized patients. The wide differences in the model results that we observed demonstrate the need for cross-validation. We propose to hold a second 'Mt Hood Challenge' in 2001 and invite all diabetes modelers to attend.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24808276','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24808276"><span>Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Miranian, A; Abdollahzade, M</p> <p>2013-02-01</p> <p>Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. 
In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for the modeling and prediction of complex nonlinear time series. The proposed approach is applied to the modeling and prediction of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and older studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013NatSR...3E1645K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013NatSR...3E1645K"><span>The predictability of consumer visitation patterns</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban</p> <p>2013-04-01</p> <p>We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. 
These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3629735','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3629735"><span>The predictability of consumer visitation patterns</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban</p> <p>2013-01-01</p> <p>We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. 
While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population. PMID:23598917</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26123978','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26123978"><span>Regression modeling and prediction of road sweeping brush load characteristics from finite element analysis and experimental results.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Chong; Sun, Qun; Wahab, Magd Abdel; Zhang, Xingyu; Xu, Limin</p> <p>2015-09-01</p> <p>Rotary cup brushes mounted on each side of a road sweeper undertake heavy debris removal tasks, but their characteristics have not been well understood until recently. A Finite Element (FE) model that can analyze brush deformation and predict brush characteristics has been developed to investigate sweeping efficiency and to assist controller design. However, the FE model requires a large amount of CPU time to simulate each brush design and operating scenario, which may limit its application in a real-time system. This study develops a mathematical regression model to summarize the FE-modeled results. The complex brush load characteristic curves were statistically analyzed to quantify the effects of cross-section, length, mounting angle, displacement, rotational speed, etc. The data were then fitted by a multiple-variable regression model using the maximum likelihood method. The fitted results showed good agreement with the FE analysis results and experimental results, suggesting that the mathematical regression model may be used directly in a real-time system to predict the characteristics of different brushes under varying operating conditions. 
The methodology may also be used in the design and optimization of rotary brush tools. Copyright © 2015 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29701973','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29701973"><span>Conformal Regression for Quantitative Structure-Activity Relationship Modeling-Quantifying Prediction Uncertainty.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Svensson, Fredrik; Aniceto, Natalia; Norinder, Ulf; Cortes-Ciriano, Isidro; Spjuth, Ola; Carlsson, Lars; Bender, Andreas</p> <p>2018-05-29</p> <p>Making predictions with an associated confidence is highly desirable as it facilitates decision making and resource prioritization. Conformal regression is a machine learning framework that allows the user to define the required confidence and delivers predictions that are guaranteed to be correct to the selected extent. In this study, we apply conformal regression to model molecular properties and bioactivity values and investigate different ways to scale the resultant prediction intervals to create as efficient (i.e., narrow) regressors as possible. Different algorithms to estimate the prediction uncertainty were used to normalize the prediction ranges, and the different approaches were evaluated on 29 publicly available data sets. Our results show that the most efficient conformal regressors are obtained when using the natural exponential of the ensemble standard deviation from the underlying random forest to scale the prediction intervals, but other approaches were almost as efficient. This approach afforded an average prediction range of 1.65 pIC50 units at the 80% confidence level when applied to bioactivity modeling. 
The choice of nonconformity function has a pronounced impact on the average prediction range, with a difference of close to one log unit in bioactivity between the tightest and widest prediction ranges. Overall, conformal regression is a robust approach to generate bioactivity predictions with associated confidence.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110014627','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110014627"><span>CFD Modeling of Launch Vehicle Aerodynamic Heating</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Tashakkor, Scott B.; Canabal, Francisco; Mishtawy, Jason E.</p> <p>2011-01-01</p> <p>The Loci-CHEM 3.2 Computational Fluid Dynamics (CFD) code is being used to predict Ares-I launch vehicle aerodynamic heating. CFD has been used to predict both ascent and stage reentry environments and has been validated against wind tunnel tests and the Ares I-X developmental flight test. Most of the CFD predictions agreed with measurements. In regions where mismatches occurred, the CFD predictions tended to be higher than measured data. These higher predictions usually occurred in complex regions, where the CFD models (mainly turbulence) contain less accurate approximations. In some instances, the errors causing the over-predictions would affect locations downstream even though the physics were still being modeled properly by CHEM. This is easily seen when comparing to the 103-AH data. In the areas where predictions were low, higher grid resolution often brought the results closer to the data. Other disagreements are attributed to Ares I-X hardware not being present in the grid, as a result of computational resource limitations. 
The satisfactory predictions from CHEM provide confidence that future designs and predictions from the CFD code will provide an accurate approximation of the correct values for use in design and other applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DFD.E6002M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DFD.E6002M"><span>A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Moore, Chandler; Akiki, Georges; Balachandar, S.</p> <p>2017-11-01</p> <p>This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid-mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using further DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 
DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000PhDT........95F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000PhDT........95F"><span>A digital spatial predictive model of land-use change using economic and environmental inputs and a statistical tree classification approach: Thailand, 1970s--1990s</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Felkner, John Sames</p> <p></p> <p>The scale and extent of global land use change is massive, and has potentially powerful effects on the global climate and global atmospheric composition (Turner & Meyer, 1994). Because of this tremendous change and impact, there is an urgent need for quantitative, empirical models of land use change, especially predictive models with an ability to capture the trajectories of change (Agarwal, Green, Grove, Evans, & Schweik, 2000; Lambin et al., 1999). For this research, a spatial statistical predictive model of land use change was created and run in two provinces of Thailand. The model utilized an extensive spatial database and used a classification tree approach for explanatory model creation and future land use prediction (Breiman, Friedman, Olshen, & Stone, 1984). Eight input variables were used, and the trees were run on a dependent variable of land use change measured from 1979 to 1989 using classified satellite imagery. The derived tree models were used to create probability-of-change surfaces, and these were then used to create predicted land cover maps for 1999. These predicted 1999 maps were compared with actual 1999 land cover derived from 1999 Landsat 7 imagery. 
The primary research hypothesis was that an explanatory model using both economic and environmental input variables would better predict future land use change than would either a model using only economic variables or a model using only environmental variables. Thus, the eight input variables included four economic and four environmental variables. The results indicated a very slight superiority of the full models in predicting future agricultural change and future deforestation, but a slight superiority of the economic models in predicting future built change. However, the margins of superiority were too small to be statistically significant. The resulting tree structures were nevertheless used to derive a series of principles or "rules" governing land use change in both provinces. The model was able to predict future land use, given a series of assumptions, with 90 percent overall accuracy. The model can be used in other developing or developed country locations for future land use prediction, determination of future threatened areas, or to derive "rules" or principles driving land use change.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5336475','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5336475"><span>Predicting solvation free energies and thermodynamics in polar solvents and mixtures using a solvation-layer interface condition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Goossens, Spencer; Mehdizadeh Rahimi, Ali</p> <p>2017-01-01</p> <p>We demonstrate that with two small modifications, the popular dielectric continuum model is capable of predicting, with high accuracy, ion solvation thermodynamics (Gibbs free energies, entropies, and heat capacities) in numerous polar solvents. 
We are also able to predict ion solvation free energies in water–co-solvent mixtures over available concentration series. The first modification to the classical dielectric Poisson model is a perturbation of the macroscopic dielectric-flux interface condition at the solute–solvent interface: we add a nonlinear function of the local electric field, giving what we have called a solvation-layer interface condition (SLIC). The second modification is including the microscopic interface potential (static potential) in our model. We show that the resulting model exhibits high accuracy without the need for fitting solute atom radii in a state-dependent fashion. Compared to experimental results in nine water–co-solvent mixtures, SLIC predicts transfer free energies to within 2.5 kJ/mol. The co-solvents include both protic and aprotic species, as well as biologically relevant denaturants such as urea and dimethylformamide. Furthermore, our results indicate that the interface potential is essential to reproduce entropies and heat capacities. 
These and previous tests of the SLIC model indicate that it is a promising dielectric continuum model for accurate predictions in a wide range of conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JChPh.146i4103M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JChPh.146i4103M"><span>Predicting solvation free energies and thermodynamics in polar solvents and mixtures using a solvation-layer interface condition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Knepley, Matthew; Bardhan, Jaydeep P.</p> <p>2017-03-01</p> <p>We demonstrate that with two small modifications, the popular dielectric continuum model is capable of predicting, with high accuracy, ion solvation thermodynamics (Gibbs free energies, entropies, and heat capacities) in numerous polar solvents. We are also able to predict ion solvation free energies in water-co-solvent mixtures over available concentration series. The first modification to the classical dielectric Poisson model is a perturbation of the macroscopic dielectric-flux interface condition at the solute-solvent interface: we add a nonlinear function of the local electric field, giving what we have called a solvation-layer interface condition (SLIC). The second modification is including the microscopic interface potential (static potential) in our model. We show that the resulting model exhibits high accuracy without the need for fitting solute atom radii in a state-dependent fashion. Compared to experimental results in nine water-co-solvent mixtures, SLIC predicts transfer free energies to within 2.5 kJ/mol. The co-solvents include both protic and aprotic species, as well as biologically relevant denaturants such as urea and dimethylformamide. 
Furthermore, our results indicate that the interface potential is essential to reproduce entropies and heat capacities. These and previous tests of the SLIC model indicate that it is a promising dielectric continuum model for accurate predictions in a wide range of conditions.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>