A Quantitative Model of Expert Transcription Typing
1993-03-08
side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled ... phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span, in fact, it is even ... typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, for which a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be able to represent the population characteristics and must produce an acceptable error at a certain confidence level. These principles do not yet seem to be well understood and applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method for ensuring model quality. The research results are as follows. Statistics provides a method for calculating the span of a predicted value at a certain confidence level for linear regression, called the Confidence Interval of the Predicted Value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that sample composition can significantly change the model. Hence, a good R2 value does not always mean good model quality. These findings lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good Confidence Interval of the Predicted Value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
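The pairing of R2 with a confidence interval of the predicted value can be illustrated with a short sketch (not taken from the paper) using statsmodels on made-up zonal data; variable names and values are purely illustrative.

```python
# Sketch: R^2 plus confidence/prediction interval of a predicted value for a
# linear trip production model. Data and variable names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
zones = pd.DataFrame({
    "households": rng.integers(200, 2000, size=30),
    "employment": rng.integers(100, 1500, size=30),
})
zones["trips"] = 1.8 * zones["households"] + 0.9 * zones["employment"] \
                 + rng.normal(0, 300, size=30)

X = sm.add_constant(zones[["households", "employment"]])
model = sm.OLS(zones["trips"], X).fit()
print("R^2:", round(model.rsquared, 3))

# Confidence interval of the predicted value for a new zone
new_zone = pd.DataFrame({"const": [1.0], "households": [1000], "employment": [600]})
pred = model.get_prediction(new_zone)
print(pred.summary_frame(alpha=0.05))   # mean_ci_* and obs_ci_* columns
```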
NASA Technical Reports Server (NTRS)
Dhanasekharan, M.; Huang, H.; Kokini, J. L.; Janes, H. W. (Principal Investigator)
1999-01-01
The measured rheological behavior of hard wheat flour dough was predicted using three nonlinear differential viscoelastic models. The Phan-Thien Tanner model gave a good prediction of the zero-shear viscosity, but overpredicted the shear viscosity at higher shear rates as well as the transient and extensional properties. The Giesekus-Leonov model gave predictions similar to the Phan-Thien Tanner model, but its extensional viscosity prediction showed extension thickening. Using high values of the mobility factor, extension-thinning behavior was observed, but the predictions were still not satisfactory. The White-Metzner model gave good predictions of the steady shear viscosity and the first normal stress coefficient, but it was unable to predict the uniaxial extensional viscosity because it exhibited asymptotic behavior at the tested extensional rates. It also predicted the transient shear properties with moderate accuracy in the early transient phase, and very well at longer times, compared to the Phan-Thien Tanner and Giesekus-Leonov models. None of the models predicted all observed data consistently well. Overall, the White-Metzner model appeared to make the best predictions of all the observed data.
Predicting U.S. food demand in the 20th century: a new look at system dynamics
NASA Astrophysics Data System (ADS)
Moorthy, Mukund; Cellier, Francois E.; LaFrance, Jeffrey T.
1998-08-01
The paper describes a new methodology for predicting the behavior of macroeconomic variables. The approach is based on System Dynamics and Fuzzy Inductive Reasoning. A four-layer pseudo-hierarchical model is proposed. The bottom layer makes predictions about population dynamics, age distributions among the populace, as well as demographics. The second layer makes predictions about the general state of the economy, including such variables as inflation and unemployment. The third layer makes predictions about the demand for certain goods or services, such as milk products, used cars, mobile telephones, or internet services. The fourth and top layer makes predictions about the supply of such goods and services, in terms of their prices. Each layer can be influenced by control variables whose values are only determined at higher layers. In this sense, the model is not strictly hierarchical. For example, the demand for goods at level three depends on the prices of these goods, which are only determined at level four. Yet the prices are themselves influenced by the expected demand. The methodology is exemplified by means of a macroeconomic model that makes predictions about US food demand during the 20th century.
Burns, Ryan D; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Shultz, Barry B; Saint-Maurice, Pedro F; Welk, Gregory J; Mahar, Matthew T
2016-01-01
A popular algorithm to predict VO2Peak from the one-mile run/walk test (1MRW) includes body mass index (BMI), which manifests practical issues in school settings. The purpose of this study was to develop an aerobic capacity model from 1MRW in adolescents independent of BMI. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years. The 1MRW was administered on an outside track and a laboratory VO2Peak test was conducted using a maximal treadmill protocol. Multiple linear regression was employed to develop the prediction model. Results yielded the following algorithm: VO2Peak = 7.34 × (1MRW speed in m·s⁻¹) + 0.23 × (age × sex) + 17.75. The New Model displayed a multiple correlation and prediction error of R = 0.81, standard error of the estimate = 4.78 ml·kg⁻¹·min⁻¹, with measured VO2Peak and good criterion-referenced (CR) agreement into FITNESSGRAM's Healthy Fitness Zone (Kappa = 0.62; percentage agreement = 84.4%; Φ = 0.62). The New Model was validated using k-fold cross-validation and showed homoscedastic residuals across the range of predicted scores. The omission of BMI did not compromise accuracy of the model. In conclusion, the New Model displayed good predictive accuracy and good CR agreement with measured VO2Peak in adolescents aged 13-16 years.
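The published algorithm lends itself to a one-line implementation; the sketch below applies it to a hypothetical adolescent. The sex coding is assumed here to follow the common 0 = female, 1 = male convention, which the abstract does not specify.

```python
def predict_vo2peak(speed_ms: float, age: int, sex: int) -> float:
    """New Model from the abstract: VO2Peak (ml/kg/min) from 1MRW speed.

    speed_ms: average one-mile run/walk speed in m/s
    age:      years (13-16 in the study)
    sex:      coding assumed as 0 = female, 1 = male (not stated in the abstract)
    """
    return 7.34 * speed_ms + 0.23 * (age * sex) + 17.75

# Hypothetical 15-year-old male completing the mile in 9 min 30 s (1609.34 m)
speed = 1609.34 / (9 * 60 + 30)
print(round(predict_vo2peak(speed, age=15, sex=1), 1), "ml/kg/min")
```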
NASA Astrophysics Data System (ADS)
Souza, Paul M.; Beladi, Hossein; Singh, Rajkumar P.; Hodgson, Peter D.; Rolfe, Bernard
2018-05-01
This paper developed high-temperature deformation constitutive models for a Ti6Al4V alloy using an empirical Arrhenius-type equation and an enhanced version of the authors' physical-based EM + Avrami equations. The initial microstructure was a partially equiaxed α + β grain structure. A wide range of experimental data was obtained from hot compression of the Ti6Al4V alloy at deformation temperatures ranging from 720 to 970 °C and strain rates varying from 0.01 to 10 s⁻¹. The friction- and adiabatic-corrected flow curves were used to identify the parameter values of the constitutive models. Both models provided good overall accuracy of the flow stress. The generalized modified Arrhenius model was better at predicting the flow stress at lower strain rates, but it was inaccurate in predicting the peak strain. In contrast, the enhanced physical-based EM + Avrami model showed very good accuracy at intermediate and high strain rates and was also better at predicting the peak strain. Blind sample tests revealed that the EM + Avrami model maintained good predictions on new (unseen) data. Thus, the enhanced EM + Avrami model may be preferred over the Arrhenius model for predicting the flow behavior of Ti6Al4V alloy during industrial forging, when the initial microstructure is partially equiaxed.
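The paper's Arrhenius-type model is not reproduced in the abstract; as background, the sketch below shows the standard hyperbolic-sine Arrhenius form commonly used for hot-deformation flow stress, with entirely hypothetical parameter values, to illustrate how such a constitutive model maps temperature and strain rate to stress.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_flow_stress(strain_rate, T_kelvin, A, alpha, n, Q):
    """Standard hyperbolic-sine Arrhenius model:
    Z = strain_rate * exp(Q / (R*T)) = A * (sinh(alpha*sigma))**n,
    solved for sigma (in MPa when alpha is in 1/MPa).
    All parameter values used below are illustrative, not the paper's.
    """
    Z = strain_rate * np.exp(Q / (R * T_kelvin))
    return (1.0 / alpha) * np.arcsinh((Z / A) ** (1.0 / n))

# Hypothetical parameter set for demonstration only
sigma = arrhenius_flow_stress(strain_rate=1.0, T_kelvin=1123.0,
                              A=1.0e18, alpha=0.01, n=4.0, Q=5.0e5)
print(f"predicted flow stress ~ {sigma:.0f} MPa (illustrative parameters)")
```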
Predictors of treatment failure in young patients undergoing in vitro fertilization.
Jacobs, Marni B; Klonoff-Cohen, Hillary; Agarwal, Sanjay; Kritz-Silverstein, Donna; Lindsay, Suzanne; Garzo, V Gabriel
2016-08-01
The purpose of the study was to evaluate whether routinely collected clinical factors can predict in vitro fertilization (IVF) failure among young, "good prognosis" patients predominantly with secondary infertility who are less than 35 years of age. Using de-identified clinic records, 414 women <35 years undergoing their first autologous IVF cycle were identified. Logistic regression was used to identify patient-driven clinical factors routinely collected during fertility treatment that could be used to model predicted probability of cycle failure. One hundred ninety-seven patients with both primary and secondary infertility had a failed IVF cycle, and 217 with secondary infertility had a successful live birth. None of the women with primary infertility had a successful live birth. The significant predictors for IVF cycle failure among young patients were fewer previous live births, history of biochemical pregnancies or spontaneous abortions, lower baseline antral follicle count, higher total gonadotropin dose, unknown infertility diagnosis, and lack of at least one fair to good quality embryo. The full model showed good predictive value (c = 0.885) for estimating risk of cycle failure; at ≥80 % predicted probability of failure, sensitivity = 55.4 %, specificity = 97.5 %, positive predictive value = 95.4 %, and negative predictive value = 69.8 %. If this predictive model is validated in future studies, it could be beneficial for predicting IVF failure in good prognosis women under the age of 35 years.
Evaluation of 3D-Jury on CASP7 models.
Kaján, László; Rychlewski, Leszek
2007-08-21
3D-Jury, the structure prediction consensus method publicly available in the Meta Server http://meta.bioinfo.pl/, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. The performance of 3D-Jury was analysed in three respects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, which is especially important for models that were not assigned one by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature http://meta.bioinfo.pl/compare_your_model_example.pl available in the Meta Server.
NASA Astrophysics Data System (ADS)
Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya
2013-03-01
This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters, namely ends per inch, picks per inch, warp count and weft count, have been used as inputs for artificial neural network (ANN) and regression models. Of the four regression models tried, the interaction model showed very good prediction performance with a mean absolute error of only 2.017%. However, the ANN models demonstrated superiority over the regression models in terms of both correlation coefficient and mean absolute error. The ANN model with 10 nodes in the single hidden layer showed very good correlation coefficients of 0.982 and 0.929 and mean absolute errors of only 0.923% and 2.043% for the training and testing data, respectively.
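A minimal scikit-learn sketch of the kind of single-hidden-layer ANN described (10 hidden nodes, four constructional inputs); the data, scaling choices, and hyperparameters below are illustrative rather than those of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Illustrative inputs: ends/inch, picks/inch, warp count, weft count
X = rng.uniform([40, 30, 20, 20], [80, 70, 60, 60], size=(120, 4))
# Synthetic "air permeability" target for demonstration only
y = 500 - 3.0 * X[:, 0] - 2.5 * X[:, 1] + 1.2 * X[:, 2] + 1.0 * X[:, 3] \
    + rng.normal(0, 5, size=120)

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
ann.fit(X[:100], y[:100])
pred = ann.predict(X[100:])
mae_pct = np.mean(np.abs((pred - y[100:]) / y[100:])) * 100
print(f"mean absolute error on held-out data: {mae_pct:.2f}%")
```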
Prediction models for clustered data: comparison of a random intercept and standard regression model
Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne
2013-02-15
When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest in predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates differ. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the model developed with the standard approach, whereas calibration measures accounting for the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
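A minimal simulation sketch of the phenomenon reported above: clustered binary outcomes are generated with cluster-specific intercepts, and a standard logistic model is compared with one that uses the cluster effects for prediction (approximated here by cluster indicator variables rather than a true random-intercept fit). ICC, sample sizes, and effect sizes are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_clusters, n_per = 19, 90
sigma_u = 0.9                      # between-cluster SD (controls the ICC)
u = rng.normal(0, sigma_u, n_clusters)

cluster = np.repeat(np.arange(n_clusters), n_per)
x = rng.normal(size=cluster.size)  # one patient-level predictor
lin = -0.5 + 1.0 * x + u[cluster]
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X_std = x.reshape(-1, 1)                                   # standard model
X_clu = pd.get_dummies(pd.Series(cluster), prefix="c")     # cluster indicators
X_clu["x"] = x                                             # plus the predictor

std = LogisticRegression(max_iter=1000).fit(X_std, y)
clu = LogisticRegression(max_iter=1000).fit(X_clu, y)

print("c-index, standard model:        ",
      round(roc_auc_score(y, std.predict_proba(X_std)[:, 1]), 3))
print("c-index, using cluster effects: ",
      round(roc_auc_score(y, clu.predict_proba(X_clu)[:, 1]), 3))
```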
ERIC Educational Resources Information Center
Parry, Malcolm
1998-01-01
Explains a novel way of approaching centripetal force: theory is used to predict an orbital period at which a toy train will topple from a circular track. The demonstration has elements of prediction (a criterion for a good model) and suspense (a criterion for a good demonstration). The demonstration proved useful in undergraduate physics and…
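The abstract does not reproduce the derivation; one plausible back-of-envelope version of the prediction, under the usual rigid-body, no-slip assumptions, with track radius r, track gauge d, and centre-of-mass height h (all introduced here, not taken from the source), is:

```latex
% Toppling occurs when the moment of the centripetal demand about the outer
% rail exceeds the restoring moment of gravity:
\frac{m v^{2}}{r}\, h \;>\; m g \,\frac{d}{2}
\quad\Longrightarrow\quad
v_{\mathrm{crit}} = \sqrt{\frac{g\, r\, d}{2 h}},
\qquad
T_{\mathrm{crit}} = \frac{2\pi r}{v_{\mathrm{crit}}}
 = 2\pi \sqrt{\frac{2\, r\, h}{g\, d}},
% so the train is predicted to topple once its orbital period drops below T_crit.
```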
NASA Technical Reports Server (NTRS)
Ashrafi, S.
1991-01-01
K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination is explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The Chaotic prediction model (SA) uses the recently developed techniques of nonlinear dynamics to predict solar activity. It can be used to predict activity only up to the horizon. In theory, the chaotic prediction should be several orders of magnitude better than statistical predictions up to that horizon; beyond the horizon, chaotic predictions would theoretically be just as good as statistical predictions. Therefore, chaos theory puts a fundamental limit on predictability.
Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D
2016-01-01
Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
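A sketch of how the reported probability of "good" performance could be computed once a multivariate random-effects meta-analysis has been fitted: draw the performance of a new population from the estimated bivariate normal predictive distribution and count draws meeting the criteria (C statistic ≥ 0.7 and calibration slope in [0.9, 1.1]). The pooled means and covariance below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical outputs of a bivariate random-effects meta-analysis
# (the C statistic is treated on its natural scale here for simplicity;
# in practice the logit scale is more common).
mu = np.array([0.72, 1.00])          # pooled mean [C statistic, calibration slope]
tau = np.array([0.04, 0.15])         # between-population SDs
rho = 0.3                            # estimated between-statistic correlation
cov = np.array([[tau[0] ** 2,           rho * tau[0] * tau[1]],
                [rho * tau[0] * tau[1], tau[1] ** 2]])

draws = rng.multivariate_normal(mu, cov, size=100_000)
good = (draws[:, 0] >= 0.7) & (draws[:, 1] >= 0.9) & (draws[:, 1] <= 1.1)
print("P(good performance in a new population) ~", round(good.mean(), 2))
```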
Predictive Caching Using the TDAG Algorithm
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We describe how the TDAG algorithm for learning to predict symbol sequences can be used to design a predictive cache store. A model of a two-level mass storage system is developed and used to calculate the performance of the cache under various conditions. Experimental simulations provide good confirmation of the model.
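The TDAG algorithm itself is not reproduced here; the sketch below uses a much simpler first-order Markov predictor as a stand-in to illustrate the general idea of prefetching the block most likely to be requested next.

```python
from collections import defaultdict, Counter

class PredictivePrefetcher:
    """Toy stand-in for a sequence-learning prefetcher (not TDAG itself):
    learns first-order transition counts between block IDs and prefetches
    the most frequent successor of the block just accessed."""

    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def access(self, block_id):
        if self.prev is not None:
            self.transitions[self.prev][block_id] += 1
        self.prev = block_id

    def prefetch_candidate(self):
        successors = self.transitions.get(self.prev)
        if not successors:
            return None
        return successors.most_common(1)[0][0]

trace = [1, 2, 3, 1, 2, 3, 1, 2, 4, 1, 2, 3]
cache = PredictivePrefetcher()
hits = 0
for blk in trace:
    if cache.prefetch_candidate() == blk:
        hits += 1            # the prefetched block was the one actually requested
    cache.access(blk)
print(f"prefetch hits: {hits}/{len(trace)}")
```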
The Generation, Radiation and Prediction of Supersonic Jet Noise. Volume 1
1978-10-01
standard, Gaussian correlation function model can yield a good noise spectrum prediction (at 90°), but the corresponding axial source distributions do not ... forms for the turbulence cross-correlation function. Good agreement was obtained between measured and calculated far-field noise spectra. However, the ... complementary error function profile (3.63) was found to provide a good fit to the axial velocity distribution for a wide range of Mach numbers in the initial
ERIC Educational Resources Information Center
Fox, William
2012-01-01
The purpose of our modeling effort is to predict future outcomes. We assume the data collected are both accurate and relatively precise. For our oscillating data, we examined several mathematical modeling forms for predictions. We also examined both ignoring the oscillations as an important feature and including the oscillations as an important…
Kim, Hwi Young; Lee, Dong Hyeon; Lee, Jeong-Hoon; Cho, Young Youn; Cho, Eun Ju; Yu, Su Jong; Kim, Yoon Jun; Yoon, Jung-Hwan
2018-03-20
Prediction of the outcome of sorafenib therapy using biomarkers is an unmet clinical need in patients with advanced hepatocellular carcinoma (HCC). The aim was to develop and validate a biomarker-based model for predicting sorafenib response and overall survival (OS). This prospective cohort study included 124 consecutive HCC patients (44 with disease control, 80 with progression) with Child-Pugh class A liver function, who received sorafenib. Potential serum biomarkers (namely, hepatocyte growth factor [HGF], fibroblast growth factor [FGF], vascular endothelial growth factor receptor-1, CD117, and angiopoietin-2) were tested. After identifying independent predictors of tumor response, a risk scoring system for predicting OS was developed and 3-fold internal validation was conducted. A risk scoring system was developed with six covariates: etiology, platelet count, Barcelona Clinic Liver Cancer stage, protein induced by vitamin K absence-II, HGF, and FGF. When patients were stratified into low-risk (score ≤ 5), intermediate-risk (score 6), and high-risk (score ≥ 7) groups, the model provided good discriminant functions on tumor response (concordance [c]-index, 0.884) and 12-month survival (area under the curve [AUC], 0.825). The median OS was 19.0, 11.2, and 6.1 months in the low-, intermediate-, and high-risk group, respectively (P < 0.001). In internal validation, the model maintained good discriminant functions on tumor response (c-index, 0.825) and 12-month survival (AUC, 0.803), and good calibration functions (all P > 0.05 between expected and observed values). This new model including serum FGF and HGF showed good performance in predicting the response to sorafenib and survival in patients with advanced HCC.
Calibration power of the Braden scale in predicting pressure ulcer development.
Chen, Hong-Lin; Cao, Ying-Juan; Wang, Jing; Huai, Bao-Sha
2016-11-02
Calibration is the degree of correspondence between the estimated probability produced by a model and the actual observed probability. The aim of this study was to investigate the calibration power of the Braden scale in predicting pressure ulcer (PU) development. A retrospective analysis was performed among consecutive patients in 2013. The patients were separated into a training group and a validation group. The predicted incidence was calculated using a logistic regression model in the training group, and the Hosmer-Lemeshow test was used to assess the goodness of fit. In the validation cohort, the observed and the predicted incidence were compared by the Chi-square (χ2) goodness of fit test to assess calibration power. We included 2585 patients in the study, of whom 78 (3.0%) developed a PU. Differences in patient characteristics between the training and validation groups were non-significant (p>0.05). In the training group, the logistic regression model for predicting pressure ulcers was Logit(P) = −0.433 × (Braden score) + 2.616. The Hosmer-Lemeshow test indicated a lack of fit (χ2 = 13.472; p = 0.019). In the validation group, the predicted pressure ulcer incidence also did not fit well with the observed incidence (χ2 = 42.154, p = 0.000 by Braden scores; and χ2 = 17.223, p = 0.001 by Braden scale risk classification). The Braden scale has low calibration power in predicting PU formation.
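Using the training-group equation reported above, the predicted PU probability for a given Braden score can be computed directly; the sketch below does this and includes a small Hosmer-Lemeshow-style statistic for checking calibration against observed outcomes (decile grouping is the usual convention, assumed here).

```python
import numpy as np

def predicted_pu_probability(braden_score):
    """Training-group model from the abstract: Logit(P) = -0.433*Braden + 2.616."""
    logit = -0.433 * np.asarray(braden_score, dtype=float) + 2.616
    return 1.0 / (1.0 + np.exp(-logit))

print(round(float(predicted_pu_probability(12)), 3))   # higher-risk score
print(round(float(predicted_pu_probability(20)), 3))   # lower-risk score

def hosmer_lemeshow(y_obs, p_pred, groups=10):
    """Chi-square comparing observed and expected events across risk groups
    (degrees of freedom are conventionally groups - 2)."""
    order = np.argsort(p_pred)
    y_obs, p_pred = np.asarray(y_obs)[order], np.asarray(p_pred)[order]
    chi2 = 0.0
    for idx in np.array_split(np.arange(len(y_obs)), groups):
        obs, exp, n = y_obs[idx].sum(), p_pred[idx].sum(), len(idx)
        exp = min(max(exp, 1e-9), n - 1e-9)       # guard against division by zero
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return chi2
```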
Analysis of two-equation turbulence models for recirculating flows
NASA Technical Reports Server (NTRS)
Thangam, S.
1991-01-01
The two-equation kappa-epsilon model is used to analyze turbulent separated flow past a backward-facing step. It is shown that if the model constants are modified to be consistent with the accepted energy decay rate for isotropic turbulence, the dominant features of the flow field, namely the size of the separation bubble and the streamwise component of the mean velocity, can be accurately predicted. In addition, except in the vicinity of the step, very good predictions for the turbulent shear stress, the wall pressure, and the wall shear stress are obtained. The model is also shown to provide good predictions for the turbulence intensity in the region downstream of the reattachment point. Estimated long-time growth rates for the turbulent kinetic energy and dissipation rate of homogeneous shear flow are utilized to develop an optimal set of constants for the two-equation kappa-epsilon model. The physical implications of the model performance are also discussed.
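The abstract does not give the modification explicitly; the usual link between the model constant C_eps2 and the power-law decay of isotropic turbulence (stated here as standard background, not as the paper's exact derivation) is:

```latex
% Decaying isotropic turbulence in the standard k-epsilon model obeys
%   dk/dt = -\varepsilon, \qquad d\varepsilon/dt = -C_{\varepsilon 2}\,\varepsilon^{2}/k,
% which admits a power-law solution k \propto t^{-n} with
n = \frac{1}{C_{\varepsilon 2} - 1}
\quad\Longleftrightarrow\quad
C_{\varepsilon 2} = 1 + \frac{1}{n},
% so matching the experimentally accepted decay exponent n \approx 1.2\text{--}1.3
% suggests C_{\varepsilon 2} \approx 1.77\text{--}1.83 rather than the standard 1.92.
```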
Rasulev, Bakhtiyor; Kusić, Hrvoje; Leszczynska, Danuta; Leszczynski, Jerzy; Koprivanac, Natalija
2010-05-01
The goal of the study was to predict in vivo toxicity caused by aromatic compounds built on a single benzene ring, with or without different substituent groups such as hydroxyl-, nitro-, amino-, methyl-, methoxy-, etc., by using QSAR/QSPR tools. A genetic algorithm and multiple regression analysis were applied to select the descriptors and to generate the correlation models. The most predictive model was the 3-variable model, which also has a good ratio of the number of descriptors to predictive ability, helping to avoid overfitting. The main contributions to the toxicity were shown to come from the polarizability-weighted MATS2p descriptor and the C-026 (count of certain groups) descriptor. The GA-MLRA approach gave good results in this study, allowing a simple, interpretable and transparent model to be built that can be used in future studies for predicting the toxicity of organic compounds to mammals.
Textile composite processing science
NASA Technical Reports Server (NTRS)
Loos, Alfred C.; Hammond, Vincent H.; Kranbuehl, David E.; Hasko, Gregory H.
1993-01-01
A multi-dimensional model of the Resin Transfer Molding (RTM) process was developed for the prediction of the infiltration behavior of a resin into an anisotropic fiber preform. Frequency dependent electromagnetic sensing (FDEMS) was developed for in-situ monitoring of the RTM process. Flow visualization and mold filling experiments were conducted to verify sensor measurements and model predictions. Test results indicated good agreement between model predictions, sensor readings, and experimental data.
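The multi-dimensional RTM model itself is not given in the abstract; as background, a one-dimensional Darcy-law sketch of resin infiltration under constant injection pressure (all property values hypothetical) shows the kind of flow-front prediction such models make.

```python
import numpy as np

def fill_front_position(t, K, dP, mu, phi):
    """1-D resin flow front x_f(t) for constant-pressure injection,
    from Darcy's law: x_f = sqrt(2*K*dP*t / (mu*phi)).
    K: preform permeability [m^2], dP: injection pressure drop [Pa],
    mu: resin viscosity [Pa.s], phi: preform porosity [-]."""
    return np.sqrt(2.0 * K * dP * t / (mu * phi))

# Hypothetical values for demonstration only
t = np.linspace(0, 600, 7)                       # seconds
x = fill_front_position(t, K=1e-10, dP=2e5, mu=0.3, phi=0.5)
for ti, xi in zip(t, x):
    print(f"t = {ti:5.0f} s  ->  flow front at {xi*100:6.2f} cm")
```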
A Method to Test Model Calibration Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ron; Polly, Ben; Neymark, Joel
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
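A compact, toy sketch of the surrogate-data idea described above, assuming a deliberately simplified 'building model': a truth model generates synthetic monthly bills and true retrofit savings, a least-squares calibration tunes the audit model to the bills, and the three figures of merit are reported. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
hdd = np.array([600, 520, 400, 250, 120, 40, 20, 30, 110, 300, 480, 590])  # monthly HDD

def monthly_energy(ua, base, hdd):
    """Toy audit/truth model: heating = UA*HDD, plus a constant baseload."""
    return ua * hdd + base

true_ua, true_base = 2.0, 400.0
bills = monthly_energy(true_ua, true_base, hdd) + rng.normal(0, 30, hdd.size)

# "Calibration technique" under test: least-squares fit of the two inputs
A = np.column_stack([hdd, np.ones_like(hdd)])
cal_ua, cal_base = np.linalg.lstsq(A, bills, rcond=None)[0]

# Figure of merit 1: post-retrofit savings (retrofit = 30% lower UA)
true_savings = monthly_energy(true_ua, true_base, hdd).sum() \
             - monthly_energy(0.7 * true_ua, true_base, hdd).sum()
pred_savings = monthly_energy(cal_ua, cal_base, hdd).sum() \
             - monthly_energy(0.7 * cal_ua, cal_base, hdd).sum()
print("savings error  :", round(100 * (pred_savings - true_savings) / true_savings, 1), "%")

# Figure of merit 2: closure on the "true" input parameter values
print("UA error       :", round(100 * (cal_ua - true_ua) / true_ua, 1), "%")

# Figure of merit 3: goodness of fit to the utility bills (CV-RMSE)
resid = bills - monthly_energy(cal_ua, cal_base, hdd)
print("CV-RMSE of fit :", round(100 * np.sqrt(np.mean(resid**2)) / bills.mean(), 1), "%")
```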
Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold
2015-03-01
A multi-gene genetic programming technique is proposed as a new method to predict syngas yield and the lower heating value for municipal solid waste gasification in a fluidized bed gasifier. The study shows that the predicted outputs of the municipal solid waste gasification process are in good agreement with the experimental dataset and also generalise well to validation (untrained) data. Published experimental datasets are used for model training and validation purposes. The results show the effectiveness of the genetic programming technique for solving complex nonlinear regression problems. The multi-gene genetic programming model is also compared with a single-gene genetic programming model to show the relative merits and demerits of the technique. This study demonstrates that the genetic programming based data-driven modelling strategy can be a good candidate for developing models for other types of fuels as well. Copyright © 2014 Elsevier Ltd. All rights reserved.
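A sketch of genetic-programming regression on synthetic data using the gplearn library, which implements single-tree symbolic regression (closer to the single-gene baseline mentioned than to multi-gene GP); the data and hyperparameters are illustrative.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
# Illustrative gasification inputs: temperature, equivalence ratio, moisture
X = rng.uniform([700, 0.2, 5], [900, 0.4, 25], size=(200, 3))
# Synthetic "lower heating value" target for demonstration only
y = 0.004 * X[:, 0] - 6.0 * X[:, 1] - 0.05 * X[:, 2] + 6.0 + rng.normal(0, 0.1, 200)

gp = SymbolicRegressor(population_size=1000, generations=10,
                       function_set=('add', 'sub', 'mul', 'div'),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X[:150], y[:150])
print("evolved expression:", gp._program)
print("R^2 on held-out data:", round(gp.score(X[150:], y[150:]), 3))
```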
ERIC Educational Resources Information Center
Brenner, Bradley R.; Lyons, Heather Z.; Fassinger, Ruth E.
2010-01-01
An initial test and validation of a model predicting perceived organizational citizenship behaviors (OCBs) of lesbian and gay employees were conducted using structural equation modeling. The proposed structural model demonstrated acceptable goodness of fit and structural invariance across 2 samples (ns = 311 and 295), which suggested that…
Journal Article: Infant Exposure to Dioxin-Like Compounds in Breast Milk
A simple, one-compartment, first-order pharmacokinetic model is used to predict the infant body burden of dioxin-like compounds that results from breast-feeding. Validation testing of the model showed a good match between predictions and measurements of dioxin toxic equivalents ...
Far-ultraviolet spectra and flux distributions of some Orion stars
NASA Technical Reports Server (NTRS)
Carruthers, G. R.; Heckathorn, H. M.; Opal, C. B.
1981-01-01
Far-ultraviolet (950-1800 A) spectra with about 2 A resolution were obtained of a number of stars in Orion during a sounding-rocket flight 1975 December 6. These spectra have been reduced to absolute flux distributions with the aid of preflight calibrations. The derived fluxes are in good agreement with model-atmosphere predictions and previous observations down to about 1200 A. In the 1200-1080 A range, the present results are in good agreement with model predictions but fall above the rocket measurements of Brune, Mount and Feldman. Below 1080 A, our measurements fall below the model predictions, reaching a deviation of a factor of 2 near 1010 A and a factor of 4 near 950 A. The present results are compared with those of Brune et al. via Copernicus U2 observations in this spectral range, and possible sources of discrepancies between the various observations and model-atmosphere predictions are discussed. Other aspects of the spectra, particularly with regard to spectral classification, are briefly discussed.
Chen, Shangying; Zhang, Peng; Liu, Xin; Qin, Chu; Tao, Lin; Zhang, Cheng; Yang, Sheng Yong; Chen, Yu Zong; Chui, Wai Keung
2016-06-01
The overall efficacy and safety profile of a new drug is partially evaluated by the therapeutic index in clinical studies and by the protective index (PI) in preclinical studies. In-silico predictive methods may facilitate the assessment of these indicators. Although QSAR and QSTR models can be used for predicting PI, their predictive capability has not been evaluated. To test this capability, we developed QSAR and QSTR models for predicting the activity and toxicity of anticonvulsants at accuracy levels above the literature-reported threshold (LT) of good QSAR models, as tested by both internal 5-fold cross-validation and external validation. These models showed significantly compromised PI predictive capability due to the cumulative errors of the QSAR and QSTR models. Therefore, in this investigation a new quantitative structure-index relationship (QSIR) model was devised, and it showed improved PI predictive capability that exceeded the LT of good QSAR models. The QSAR, QSTR and QSIR models were developed using the support vector regression (SVR) method with parameters optimized by a greedy search. The molecular descriptors relevant to the prediction of anticonvulsant activities, toxicities and PIs were analyzed by a recursive feature elimination method. The selected molecular descriptors are primarily associated with drug-like, pharmacological and toxicological features and with those used in the published anticonvulsant QSAR and QSTR models. This study suggests that QSIR is useful for estimating the therapeutic index of drug candidates. Copyright © 2016. Published by Elsevier Inc.
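A schematic sketch of the comparison described: separate SVR models for activity (QSAR) and toxicity (QSTR) whose outputs are combined into a PI estimate, versus a single SVR trained directly on the index (QSIR). The descriptors and endpoint values are synthetic; the paper's descriptors, greedy parameter search, and feature elimination are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 8))                                        # synthetic descriptors
log_ed50 = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 300)         # "activity" endpoint
log_td50 = 0.8 * X[:, 2] + 0.4 * X[:, 3] + rng.normal(0, 0.3, 300)   # "toxicity" endpoint
log_pi = log_td50 - log_ed50                                         # PI = TD50/ED50 (log scale)

tr, te = slice(0, 240), slice(240, 300)
def fit_svr(y):
    return make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X[tr], y[tr])

qsar, qstr, qsir = fit_svr(log_ed50), fit_svr(log_td50), fit_svr(log_pi)

# PI predicted indirectly (QSAR + QSTR) versus directly (QSIR)
pi_indirect = qstr.predict(X[te]) - qsar.predict(X[te])
pi_direct = qsir.predict(X[te])
print("indirect (QSAR+QSTR) R^2:", round(r2_score(log_pi[te], pi_indirect), 3))
print("direct   (QSIR)      R^2:", round(r2_score(log_pi[te], pi_direct), 3))
```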
NASA Astrophysics Data System (ADS)
Fischer, Ulrich; Celia, Michael A.
1999-04-01
Functional relationships for unsaturated flow in soils, including those between capillary pressure, saturation, and relative permeabilities, are often described using analytical models based on the bundle-of-tubes concept. These models are often limited by, for example, inherent difficulties in prediction of absolute permeabilities, and in incorporation of a discontinuous nonwetting phase. To overcome these difficulties, an alternative approach may be formulated using pore-scale network models. In this approach, the pore space of the network model is adjusted to match retention data, and absolute and relative permeabilities are then calculated. A new approach that allows more general assignments of pore sizes within the network model provides for greater flexibility to match measured data. This additional flexibility is especially important for simultaneous modeling of main imbibition and drainage branches. Through comparisons between the network model results, analytical model results, and measured data for a variety of both undisturbed and repacked soils, the network model is seen to match capillary pressure-saturation data nearly as well as the analytical model, to predict water phase relative permeabilities equally well, and to predict gas phase relative permeabilities significantly better than the analytical model. The network model also provides very good estimates for intrinsic permeability and thus for absolute permeabilities. Both the network model and the analytical model lost accuracy in predicting relative water permeabilities for soils characterized by a van Genuchten exponent n≲3. Overall, the computational results indicate that reliable predictions of both relative and absolute permeabilities are obtained with the network model when the model matches the capillary pressure-saturation data well. The results also indicate that measured imbibition data are crucial to good predictions of the complete hysteresis loop.
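For reference, the analytical bundle-of-tubes-type model that the network model is compared against is typically written in the van Genuchten-Mualem form (standard equations shown here; the paper's exact parameterisation may differ):

```latex
S_e(h) = \bigl[\,1 + (\alpha h)^{n}\,\bigr]^{-m}, \qquad m = 1 - \tfrac{1}{n},
\qquad
k_{rw}(S_e) = S_e^{1/2}\Bigl[\,1 - \bigl(1 - S_e^{1/m}\bigr)^{m}\Bigr]^{2},
\qquad
k_{rg}(S_e) = (1 - S_e)^{1/2}\bigl(1 - S_e^{1/m}\bigr)^{2m},
% where S_e is the effective water saturation and h the capillary pressure head.
```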
NASA Astrophysics Data System (ADS)
Jones, G. T.; Jones, R. W. L.; Kennedy, B. W.; O'Neale, S. W.; Klein, H.; Morrison, D. R. O.; Schmid, P.; Wachsmuth, H.; Miller, D. B.; Mobayyen, M. M.; Wainstein, S.; Aderholz, M.; Hoffmann, E.; Katz, U. F.; Kern, J.; Schmitz, N.; Wittek, W.; Allport, P.; Myatt, G.; Radojicic, D.; Bullock, F. W.; Burke, S.
1987-03-01
Data obtained with the bubble chamber BEBC at CERN are used for the first significant test of Adler's prediction for the neutrino and antineutrino-proton scattering cross sections at vanishing four-momentum transfer squared Q². An Extended Vector Meson Dominance Model (EVDM) is applied to extrapolate Adler's prediction to experimentally accessible values of Q². The data show good agreement with Adler's prediction for Q²→0, thus confirming the PCAC hypothesis in the kinematical region of high leptonic energy transfer ν>2 GeV. The good agreement of the data with the theoretical predictions at higher Q², where the EVDM terms are dominant, also supports this model. However, an EVDM calculation without PCAC is clearly ruled out by the data.
Enhanced Fan Noise Modeling for Turbofan Engines
NASA Technical Reports Server (NTRS)
Krejsa, Eugene A.; Stone, James R.
2014-01-01
This report describes work by consultants to Diversitech Inc. for the NASA Glenn Research Center (GRC) to revise the fan noise prediction procedure based on fan noise data obtained in the 9- by 15 Foot Low-Speed Wind Tunnel at GRC. The purpose of this task is to begin development of an enhanced, analytical, more physics-based, fan noise prediction method applicable to commercial turbofan propulsion systems. The method is to be suitable for programming into a computational model for eventual incorporation into NASA's current aircraft system noise prediction computer codes. The scope of this task is in alignment with the mission of the Propulsion 21 research effort conducted by the coalition of NASA, state government, industry, and academia to develop aeropropulsion technologies. A model for fan noise prediction was developed based on measured noise levels for the R4 rotor with several outlet guide vane variations and three fan exhaust areas. The model predicts the complete fan noise spectrum, including broadband noise, tones, and for supersonic tip speeds, combination tones. Both spectra and directivity are predicted. Good agreement with data was achieved for all fan geometries. Comparisons with data from a second fan, the ADP fan, also showed good agreement.
van der Fels-Klerx, H J; Booij, C J H
2010-06-01
This article provides an overview of available systems for management of Fusarium mycotoxins in the cereal grain supply chain, with an emphasis on the use of predictive mathematical modeling. From the state of the art, it proposes future developments in modeling and management and their challenges. Mycotoxin contamination in cereal grain-based feed and food products is currently managed and controlled by good agricultural practices, good manufacturing practices, hazard analysis critical control points, and by checking and more recently by notification systems and predictive mathematical models. Most of the predictive models for Fusarium mycotoxins in cereal grains focus on deoxynivalenol in wheat and aim to help growers make decisions about the application of fungicides during cultivation. Future developments in managing Fusarium mycotoxins should include the linkage between predictive mathematical models and geographical information systems, resulting into region-specific predictions for mycotoxin occurrence. The envisioned geographically oriented decision support system may incorporate various underlying models for specific users' demands and regions and various related databases to feed the particular models with (geographically oriented) input data. Depending on the user requirements, the system selects the best fitting model and available input information. Future research areas include organizing data management in the cereal grain supply chain, developing predictive models for other stakeholders (taking into account the period up to harvest), other Fusarium mycotoxins, and cereal grain types, and understanding the underlying effects of the regional component in the models.
Baggott, Sarah; Cai, Xiaoming; McGregor, Glenn; Harrison, Roy M
2006-05-01
The Regional Atmospheric Modeling System (RAMS) and Urban Airshed Model (UAM IV) have been implemented for prediction of air pollutant concentrations within the West Midlands conurbation of the United Kingdom. The modelling results for wind speed, direction and temperature are in reasonable agreement with observations for two stations, one in a rural area and the other in an urban area. Predictions of surface temperature are generally good for both stations, but the results suggest that the quality of temperature prediction is sensitive to whether cloud cover is reproduced reliably by the model. Wind direction is captured very well by the model, while wind speed is generally overestimated. The air pollution climate of the UK West Midlands is very different to those for which the UAM model was primarily developed, and the methods used to overcome these limitations are described. The model shows a tendency towards under-prediction of primary pollutant (NOx and CO) concentrations, but with suitable attention to boundary conditions and vertical profiles gives fairly good predictions of ozone concentrations. Hourly updating of chemical concentration boundary conditions yields the best results, with input of vertical profiles desirable. The model seriously underpredicts NO2/NO ratios within the urban area and this appears to relate to inadequate production of peroxy radicals. Overall, the chemical reactivity predicted by the model appears to fall well below that occurring in the atmosphere.
Predicting geogenic arsenic contamination in shallow groundwater of south Louisiana, United States.
Yang, Ningfang; Winkel, Lenny H E; Johannesson, Karen H
2014-05-20
Groundwater contaminated with arsenic (As) threatens the health of more than 140 million people worldwide. Previous studies indicate that geology and sedimentary depositional environments are important factors controlling groundwater As contamination. The Mississippi River delta has broadly similar geology and sedimentary depositional environments to the large deltas in South and Southeast Asia, which are severely affected by geogenic As contamination and therefore may also be vulnerable to groundwater As contamination. In this study, logistic regression is used to develop a probability model based on surface hydrology, soil properties, geology, and sedimentary depositional environments. The model is calibrated using 3286 aggregated and binary-coded groundwater As concentration measurements from Bangladesh and verified using 78 As measurements from south Louisiana. The model's predictions are in good agreement with the known spatial distribution of groundwater As contamination of Bangladesh, and the predictions also indicate high risk of As contamination in shallow groundwater from Holocene sediments of south Louisiana. Furthermore, the model correctly predicted 79% of the existing shallow groundwater As measurements in the study region, indicating good performance of the model in predicting groundwater As contamination in shallow aquifers of south Louisiana.
Measurement and simulation of deformation and stresses in steel casting
NASA Astrophysics Data System (ADS)
Galles, D.; Monroe, C. A.; Beckermann, C.
2012-07-01
Experiments are conducted to measure displacements and forces during casting of a steel bar in a sand mold. In some experiments the bar is allowed to contract freely, while in others the bar is manually strained using embedded rods connected to a frame. Solidification and cooling of the experimental castings are simulated using a commercial code, and good agreement between measured and predicted temperatures is obtained. The deformations and stresses in the experiments are simulated using an elasto-viscoplastic finite-element model. The high temperature mechanical properties are estimated from data available in the literature. The mush is modeled using porous metal plasticity theory, where the coherency and coalescence solid fraction are taken into account. Good agreement is obtained between measured and predicted displacements and forces. The results shed considerable light on the modeling of stresses in steel casting and help in developing more accurate models for predicting hot tears and casting distortions.
Survival Regression Modeling Strategies in CVD Prediction.
Barkhordari, Mahnaz; Padyab, Mojgan; Sardarinia, Mahsa; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza
2016-04-01
A fundamental part of prevention is prediction. Potential predictors are the sine qua non of prediction models. However, whether incorporating novel predictors into prediction models can be directly translated into added predictive value remains an area of dispute. The difference between the predictive power of a model with (enhanced model) and without (baseline model) a certain predictor is generally regarded as an indicator of the predictive value added by that predictor. Indices such as discrimination and calibration have long been used in this regard. Recently, the use of added predictive value has been suggested when comparing the predictive performance of models with and without novel biomarkers. User-friendly statistical software capable of implementing novel statistical procedures is conspicuously lacking. This shortcoming has restricted implementation of such novel model assessment methods. We aimed to construct Stata commands to help researchers obtain the aforementioned statistical indices. We have written Stata commands that are intended to help researchers obtain the following: 1, the Nam-D'Agostino χ2 goodness of fit test; 2, cut point-free and cut point-based net reclassification improvement index (NRI), relative and absolute integrated discrimination improvement index (IDI), and survival-based regression analyses. We applied the commands to real data on women participating in the Tehran lipid and glucose study (TLGS) to examine whether information on a family history of premature cardiovascular disease (CVD), waist circumference, and fasting plasma glucose can improve the predictive performance of Framingham's general CVD risk algorithm. The command is adpredsurv for survival models. Herein we have described the Stata package "adpredsurv" for calculation of the Nam-D'Agostino χ2 goodness of fit test as well as cut point-free and cut point-based NRI, relative and absolute IDI, and survival-based regression analyses. We hope this work encourages the use of novel methods in examining the predictive capacity of the emerging plethora of novel biomarkers.
Jhin, Changho; Hwang, Keum Taek
2014-01-01
Radical scavenging activity of anthocyanins is well known, but only a few studies have approached it with quantum chemical methods. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-based QSAR models were developed using quantum chemical descriptors of anthocyanins calculated by the semi-empirical PM6 and PM7 methods. Electron affinity (A) and electronegativity (χ) of the flavylium cation, and ionization potential (I) of the quinoidal base, were significantly correlated with the radical scavenging activities of anthocyanins. These descriptors were used as independent variables for the QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized over 100 learning epochs. The constructed models using descriptors calculated by both PM6 and PM7 had good prediction efficiency, with Q-square of 0.82 and 0.86, respectively. PMID:25153627
A Soil Temperature Model for Closed Canopied Forest Stands
James M. Vose; Wayne T. Swank
1991-01-01
A microcomputer-based soil temperature model was developed to predict temperature at the litter-soil interface and soil temperatures at three depths (0.10 m, 0.20 m, and 1.25 m) under closed forest canopies. Comparisons of predicted and measured soil temperatures indicated good model performance under most conditions. When generalized parameters describing soil...
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-01-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed to recognize specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and, consequently, tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671
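A minimal sketch of the kind of consensus scoring involved: individual quality assessment scores for a pool of candidate models are standardised and averaged before ranking. MULTICOM's actual weighting, clustering, and 14 component methods are not reproduced here; the data are synthetic.

```python
import numpy as np

def consensus_rank(scores):
    """scores: array of shape (n_models, n_qa_methods), higher = better.
    Z-score each QA method across the model pool, average, and rank."""
    z = (scores - scores.mean(axis=0)) / (scores.std(axis=0) + 1e-12)
    combined = z.mean(axis=1)
    return np.argsort(-combined), combined     # best model first

rng = np.random.default_rng(11)
qa_scores = rng.uniform(0.3, 0.9, size=(20, 14))   # 20 candidate models, 14 QA methods
order, combined = consensus_rank(qa_scores)
print("top 3 models by consensus score:", order[:3])
```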
SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal
Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.
2017-01-01
Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758
An Electrophysiological Index of Perceptual Goodness
Makin, Alexis D.J.; Wright, Damien; Rampone, Giulia; Palumbo, Letizia; Guest, Martin; Sheehan, Rhiannon; Cleaver, Helen; Bertamini, Marco
2016-01-01
A traditional line of work starting with the Gestalt school has shown that patterns vary in strength and salience; a difference in “Perceptual goodness.” The Holographic weight of evidence model quantifies the goodness of visual regularities. The key formula states that W = E/N, where E is the number of holographic identities in a pattern and N is the number of elements. We tested whether W predicts the amplitude of the neural response to regularity in an extrastriate symmetry-sensitive network. We recorded an Event Related Potential (ERP) generated by symmetry called the Sustained Posterior Negativity (SPN). First, we reanalyzed the published work and found that W explained most of the variance in SPN amplitude. Then, in four new studies, we confirmed specific predictions of the holographic model regarding 1) the differential effects of numerosity on reflection and repetition, 2) the similarity between reflection and Glass patterns, 3) multiple symmetries, and 4) symmetry and anti-symmetry. In all cases, the holographic approach predicted SPN amplitude remarkably well, particularly in an early window around 300–400 ms post stimulus onset. Although the holographic model was not conceived as a model of neural processing, it captures many details of the brain response to symmetry. PMID:27702812
Astray, G; Soto, B; Lopez, D; Iglesias, M A; Mejuto, J C
2016-01-01
Transit data analysis and artificial neural networks (ANNs) have proven to be useful tools for characterizing and modelling non-linear hydrological processes. In this paper, these methods have been used to characterize and to predict the discharge of the Lor River (North Western Spain) 1, 2 and 3 days ahead. Transit data analyses show a coefficient of correlation of 0.53 for a lag between precipitation and discharge of 1 day. On the other hand, temperature and discharge have a negative coefficient of correlation (-0.43) for a delay of 19 days. The ANNs developed provide good results for the validation period, with R(2) between 0.92 and 0.80. Furthermore, these prediction models have been tested with discharge data from a period 16 years later. Results from this testing period also show a good correlation, with R(2) between 0.91 and 0.64. Overall, the results indicate that ANNs are a good tool to predict river discharge with a small number of input variables.
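A minimal sketch of such an ANN for 1-day-ahead discharge prediction is given below; the choice of inputs (same-day precipitation and discharge) and the network size are illustrative assumptions rather than the configuration reported in the study.

# Sketch of a small feed-forward ANN for 1-day-ahead discharge prediction; the
# inputs, lag, and hidden-layer size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def make_lagged_features(precip, discharge, lag=1):
    """Predict discharge at day t+lag from today's precipitation and discharge."""
    X = np.column_stack([precip[:-lag], discharge[:-lag]])
    y = discharge[lag:]
    return X, y

def fit_discharge_model(precip, discharge, lag=1):
    X, y = make_lagged_features(np.asarray(precip, float), np.asarray(discharge, float), lag)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                       random_state=0))
    return model.fit(X, y)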
Girardat-Rotar, Laura; Braun, Julia; Puhan, Milo A; Abraham, Alison G; Serra, Andreas L
2017-07-17
Prediction models in autosomal dominant polycystic kidney disease (ADPKD) are useful in clinical settings to identify patients with a greater risk of rapid disease progression, in whom a treatment may have more benefits than harms. Mayo Clinic investigators developed a risk prediction tool for ADPKD patients using a single kidney volume value. Our aim was to perform an independent geographical and temporal external validation as well as to evaluate the potential for improving the predictive performance by including additional information on total kidney volume. We used data from the on-going Swiss ADPKD study from 2006 to 2016. The main analysis included a sample of 214 patients with typical ADPKD (Class 1). We evaluated the calibration and discrimination of the Mayo Clinic model in our external sample and assessed whether predictive performance could be improved through the addition of subsequent kidney volume measurements beyond the baseline assessment. The calibration of both versions of the Mayo Clinic prediction model, using continuous height-adjusted total kidney volume (HtTKV) and using risk subclasses, was good, with R2 of 78% and 70%, respectively. Accuracy was also good, with 91.5% and 88.7% of predicted values within 30% of the observed values, respectively. Additional information on kidney volume did not substantially improve the model performance. The Mayo Clinic prediction models are generalizable to other clinical settings and provide an accurate tool based on available predictors to identify patients at high risk for rapid disease progression.
Mark A. Rumble; Lakhdar Benkobi; R. Scott Gamo
2007-01-01
We tested predictions of the spatially explicit ArcHSI habitat model for elk. The distribution of elk relative to proximity of forage and cover differed from that predicted. Elk used areas near primary roads similar to that predicted by the model, but elk were farther from secondary roads. Elk used areas categorized as good (> 0.7), fair (> 0.42 to 0.7), and poor...
A New Approach to Predict the Fish Fillet Shelf-Life in Presence of Natural Preservative Agents.
Giuffrida, Alessandro; Giarratana, Filippo; Valenti, Davide; Muscolino, Daniele; Parisi, Roberta; Parco, Alessio; Marotta, Stefania; Ziino, Graziella; Panebianco, Antonio
2017-04-13
Three data sets concerning the behaviour of the spoilage flora of fillets treated with natural preservative substances (NPS) were used to construct a new kind of mathematical predictive model. Unlike other models, this one allows the antibacterial effect of the NPS to be expressed separately from the prediction of the growth rate. This approach, based on the introduction of a parameter into the predictive primary model, produced a good fit to the observed data and allowed the increase in the shelf-life of the fillets to be characterized quantitatively.
Self-esteem recognition based on gait pattern using Kinect.
Sun, Bingli; Zhang, Zhan; Liu, Xingyun; Hu, Bin; Zhu, Tingshao
2017-10-01
Self-esteem is an important aspect of an individual's mental health. When subjects are not able to complete a self-report questionnaire, behavioral assessment can be a good supplement. In this paper, we propose to use gait data collected by Kinect as an indicator to recognize self-esteem. 178 graduate students without disabilities participated in our study. First, all participants completed the 10-item Rosenberg Self-Esteem Scale (RSS) to obtain a self-esteem score. After completing the RSS, each participant walked naturally for two minutes on a rectangular red carpet, and the gait data were recorded using the Kinect sensor. After data preprocessing, we extracted behavioral features and used them to train predictive models by machine learning. Based on these features, we built predictive models to recognize self-esteem. For self-esteem prediction, the best correlation coefficient between the predicted score and the self-report score is 0.45 (p<0.001). When the participants are divided by gender, the correlation coefficient is 0.43 (p<0.001) for males and 0.59 (p<0.001) for females. Using gait data captured by the Kinect sensor, we find that gait patterns can be used to recognize self-esteem with fairly good criterion validity. The gait-based predictive model can therefore serve as a good supplementary method for measuring self-esteem. Copyright © 2017 Elsevier B.V. All rights reserved.
Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph
2001-01-01
Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
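A hedged sketch of first-level customization in this spirit is given below: a logistic regression re-maps the logit of the original APACHE-predicted risk onto locally observed hospital mortality. The single-covariate form and the function names are assumptions for illustration, not the authors' implementation.

# Sketch of first-level customization: re-fit a logistic regression on the logit
# of the original predicted risk in the local development sample. The
# single-covariate form is an assumption about "first-level" customization.
import numpy as np
from sklearn.linear_model import LogisticRegression

def customize(predicted_risk, observed_death):
    """predicted_risk: original APACHE probabilities in (0, 1); observed_death: 0/1 outcomes."""
    logit = np.log(predicted_risk / (1.0 - predicted_risk)).reshape(-1, 1)
    model = LogisticRegression()
    model.fit(logit, observed_death)
    return model  # model.predict_proba(new_logit)[:, 1] gives customized risks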
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot mimic properly the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better in predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
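For readers unfamiliar with the Feddes approach referenced above, the sketch below implements a generic piecewise-linear Feddes-type stress-reduction function alpha(h); the threshold pressure heads are illustrative values, not those used in the benchmark.

# Generic piecewise-linear Feddes-type reduction function alpha(h); threshold
# pressure heads h1..h4 below are illustrative, not the study's values.
import numpy as np

def feddes_alpha(h, h1=-10.0, h2=-25.0, h3=-400.0, h4=-8000.0):
    """Water-stress reduction factor for pressure head h (cm, negative = suction)."""
    h = np.asarray(h, dtype=float)
    alpha = np.zeros_like(h)
    wet = (h <= h1) & (h > h2)   # reduced uptake near saturation
    opt = (h <= h2) & (h >= h3)  # optimal range
    dry = (h < h3) & (h > h4)    # linear decline toward wilting
    alpha[wet] = (h1 - h[wet]) / (h1 - h2)
    alpha[opt] = 1.0
    alpha[dry] = (h[dry] - h4) / (h3 - h4)
    return alpha                 # zero outside (h4, h1)

# Actual uptake is then S(z) = feddes_alpha(h(z)) * S_max(z) over the root zone.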
Tan, Ting; Chen, Lizhang; Liu, Fuqiang
2014-11-01
To establish a multiple seasonal autoregressive integrated moving average (ARIMA) model for the hand-foot-mouth disease incidence in Changsha, and to explore the feasibility of the multiple seasonal ARIMA model for predicting the hand-foot-mouth disease incidence. EVIEWS 6.0 was used to establish the multiple seasonal ARIMA model from the hand-foot-mouth disease incidence from May 2008 to August 2013 in Changsha; the incidence data from September 2013 to February 2014 served as the test samples for the model, and the errors between the forecasted incidence and the observed values were compared. Finally, the incidence of hand-foot-mouth disease from March 2014 to August 2014 was predicted by the model. After the data sequence was made stationary and model identification and diagnostic checking were performed, the multiple seasonal ARIMA (1, 0, 1)×(0, 1, 1)12 model was established. The R2 value of the model fit was 0.81, the root mean square prediction error was 8.29, and the mean absolute error was 5.83. The multiple seasonal ARIMA model is a good prediction model with a good fit, and it can provide a reference for prevention and control work on hand-foot-mouth disease.
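As an illustrative implementation (not the authors' EVIEWS workflow), the reported seasonal order can be fitted with statsmodels in Python; the series name and forecast horizon below are placeholders.

# Fit the reported multiplicative seasonal ARIMA (1,0,1)x(0,1,1)_12 to a monthly
# incidence series; series and horizon are placeholders.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_and_forecast(monthly_incidence: pd.Series, horizon=6):
    model = SARIMAX(monthly_incidence,
                    order=(1, 0, 1),
                    seasonal_order=(0, 1, 1, 12),
                    enforce_stationarity=False)
    result = model.fit(disp=False)
    return result.forecast(steps=horizon)  # e.g. the next 6 months of incidence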
Monte Carlo Simulation of Plumes Spectral Emission
2005-06-07
A comparison with ERIM experimental data for hot-cell radiance has been performed. It has been shown that the NASA standard infrared optical model [3] provides good... The influence of different optical models on the predicted hot-cell radiance for the ERIM experimental conditions has been studied... [figure caption: prediction (solid line) of the hot-cell radiance; NASA Standard Infrared Radiation model; averaged rotational line structure (JLBL=0); spectral...]
Modeling of near wall turbulence and modeling of bypass transition
NASA Technical Reports Server (NTRS)
Yang, Z.
1992-01-01
The objectives for this project are as follows: (1) Modeling of near-wall turbulence: We aim to develop a second-order closure for near-wall turbulence. As a first step, we try to develop a k-epsilon model for near-wall turbulence. We require the resulting model to handle both near-wall turbulence and turbulent flows away from the wall, to be computationally robust, and to be applicable to complex flow situations (for example, flows with separation). (2) Modeling of bypass transition: We aim to develop a bypass transition model which contains the effect of intermittency, so that the model can be used for both transitional boundary layers and turbulent boundary layers. We require the resulting model to give a good prediction of momentum and heat transfer within the transitional boundary layer and a good prediction of the effect of freestream turbulence on transitional boundary layers.
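For reference, the standard high-Reynolds-number k-epsilon closure that such near-wall models extend can be written in its textbook form as follows (this is the baseline model, not the near-wall closure developed in the project):

\[
\nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
\frac{Dk}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon, \qquad
\frac{D\varepsilon}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{\varepsilon 1}\frac{\varepsilon}{k}P_k - C_{\varepsilon 2}\frac{\varepsilon^2}{k},
\]

with the usual constants \(C_\mu = 0.09\), \(C_{\varepsilon 1} = 1.44\), \(C_{\varepsilon 2} = 1.92\), \(\sigma_k = 1.0\), \(\sigma_\varepsilon = 1.3\); near-wall formulations add damping functions or extra terms so that the correct limiting behaviour of k and epsilon is recovered as the wall is approached.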
NASA Astrophysics Data System (ADS)
Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda
2018-05-01
This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive framework consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used for predicting the mechanical properties of additively manufactured parts built by directed energy deposition with blown powder as well as by other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Illustrative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.
Chen, Jin-hong; Wu, Hai-yun; He, Kun-lun; He, Yao; Qin, Yin-he
2010-10-01
To establish and verify a prediction model for ischemic cardiovascular disease (ICVD) among the elderly population covered by the current health care programs. Statistical analysis was carried out in May 2003 on data from physical examinations, hospitalizations in past years, questionnaires, and telephone interviews. The data came from a hospital implementing a health care program. The baseline population was randomly split in a 4:1 proportion into a modeling group and a verification group. Baseline data from the verification group were entered into the regression model built on the modeling group to generate predicted values. Discriminative ability was assessed with the area under the ROC curve, and predictive accuracy was verified by comparing the predicted and actual incidence rates in each decile group using the Hosmer-Lemeshow test. Predictive accuracy at the population level was verified by comparing the predicted 6-year incidence rates of ICVD with the actual 6-year cumulative incidence rates of ICVD and calculating the error rate. The sample included 2271 males over the age of 65, with 1817 in the modeling population and 454 in the verification population. The sample was stratified into two layers, an advanced-age group (75 years or older) and an elderly group (younger than 75 years), to establish a stratified Cox proportional hazards regression model. The statistical analysis showed that the risk factors in the elderly group were age, systolic blood pressure, serum creatinine level, and fasting blood glucose level, while the protective factor was high-density lipoprotein; in the advanced-age group, the risk factors were body mass index, systolic blood pressure, serum total cholesterol level, serum creatinine level, and fasting blood glucose level, while the protective factor was HDL-C. The area under the ROC curve (AUC) and its 95% CI were 0.723 and 0.687-0.759, respectively, indicating good discriminating power. Individual predicted and actual cumulative ICVD incidences were compared using the Hosmer-Lemeshow test (χ(2) = 1.43, P = 0.786), showing that the predictive accuracy was good. A stratified Cox proportional hazards regression model was thus used to establish a prediction model for the aged male population under a certain health care program. The prediction factors common to the two age groups were systolic blood pressure, serum creatinine level, fasting blood glucose level, and HDL-C. The area under the ROC curve in the verification group was 0.723, showing that the discriminative ability was good, and the predictive ability at both the individual and group levels was satisfactory. It is feasible to use the Cox proportional hazards regression model for prediction in such populations.
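A hedged sketch of an age-stratified Cox fit of this kind, using the lifelines package, is shown below; the column names are placeholders standing in for the study's predictors (systolic blood pressure, serum creatinine, fasting glucose, HDL-C, and so on), not the original variables.

# Age-stratified Cox proportional hazards fit with lifelines; column names are
# placeholders, not the study's data.
import pandas as pd
from lifelines import CoxPHFitter

def fit_stratified_cox(df: pd.DataFrame):
    """df columns: time (years of follow-up), event (1 = ICVD), age_group
    (elderly / advanced), plus the candidate risk-factor columns."""
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event", strata=["age_group"])
    return cph  # cph.print_summary() lists hazard ratios per predictor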
Model predictions of latitude-dependent ozone depletion due to aerospace vehicle operations
NASA Technical Reports Server (NTRS)
Borucki, W. J.; Whitten, R. C.; Watson, V. R.; Riegel, C. A.; Maples, A. L.; Capone, L. A.
1976-01-01
Results are presented from a two-dimensional model of the stratosphere that simulates the seasonal movement of ozone by both wind and eddy transport, and contains all the chemistry known to be important. The calculated reductions in ozone due to NO2 injection from a fleet of supersonic transports are compared with the zonally averaged results of a three-dimensional model for a similar episode of injection. The agreement is good in the northern hemisphere, but is not as good in the southern hemisphere. Both sets of calculations show a strong corridor effect in that the predicted ozone depletions are largest to the north of the flight corridor for aircraft operating in the northern hemisphere.
The importance of understanding: Model space moderates goal specificity effects.
Kistner, Saskia; Burns, Bruce D; Vollmeyer, Regina; Kortenkamp, Ulrich
2016-01-01
The three-space theory of problem solving predicts that the quality of a learner's model and the goal specificity of a task interact on knowledge acquisition. In Experiment 1 participants used a computer simulation of a lever system to learn about torques. They either had to test hypotheses (nonspecific goal), or to produce given values for variables (specific goal). In the good- but not in the poor-model condition they saw torque depicted as an area. Results revealed the predicted interaction. A nonspecific goal only resulted in better learning when a good model of torques was provided. In Experiment 2 participants learned to manipulate the inputs of a system to control its outputs. A nonspecific goal to explore the system helped performance when compared to a specific goal to reach certain values when participants were given a good model, but not when given a poor model that suggested the wrong hypothesis space. Our findings support the three-space theory. They emphasize the importance of understanding for problem solving and stress the need to study underlying processes.
Comparing spatial regression to random forests for large ...
Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and poor)...
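A toy comparison in the spirit of the study is sketched below: Box-Cox-transformed covariates feed an ordinary linear model, whose cross-validated accuracy is compared with a random forest. The reduced-rank spatial component of the authors' regression is not reproduced, so this is only a rough stand-in.

# Compare a Box-Cox + linear model against a random forest by cross-validation.
# The spatial (reduced-rank) part of the authors' method is NOT reproduced here.
import numpy as np
from scipy.stats import boxcox
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def boxcox_matrix(X):
    """Column-wise optimal Box-Cox transform (requires positive values)."""
    return np.column_stack([boxcox(X[:, j] + 1e-6)[0] for j in range(X.shape[1])])

def compare(X, y, cv=5):
    lin = cross_val_score(LinearRegression(), boxcox_matrix(X), y, cv=cv, scoring="r2")
    rf = cross_val_score(RandomForestRegressor(n_estimators=500, random_state=0),
                         X, y, cv=cv, scoring="r2")
    return lin.mean(), rf.mean()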
New developments in isotropic turbulent models for FENE-P fluids
NASA Astrophysics Data System (ADS)
Resende, P. R.; Cavadas, A. S.
2018-04-01
The evolution of viscoelastic turbulence models in recent years has been significant due to advances in direct numerical simulation (DNS), which allowed the evolution of viscoelastic effects to be captured in detail and viscoelastic closures to be developed. New viscoelastic closures are proposed for viscoelastic fluids described by the finitely extensible nonlinear elastic-Peterlin constitutive model. One of the viscoelastic closures, developed in the context of isotropic turbulence models, consists of a modification of the turbulent viscosity to include an elastic effect, capable of predicting, with good accuracy, the behaviour for different drag reductions. Another viscoelastic closure, essential to predict drag reduction, relates the viscoelastic term involving velocity and conformation tensor fluctuations. The DNS data show the high impact of this term on correctly predicting drag reduction, and for this reason a simpler closure capable of predicting the viscoelastic behaviour with good performance is proposed. In addition, a new relation is developed to predict the drag reduction, a quantity based on the trace of the conformation tensor at the wall, eliminating the need for the usual Weissenberg and Reynolds number parameters, which depend on the friction velocity. This allows future developments for complex geometries.
ERIC Educational Resources Information Center
Baldwin, Scott A.; Berkeljon, Arjan; Atkins, David C.; Olsen, Joseph A.; Nielsen, Stevan L.
2009-01-01
Most research on the dose-effect model of change has combined data across patients who vary in their total dose of treatment and has implicitly assumed that the rate of change during therapy is constant across doses. In contrast, the good-enough level model predicts that rate of change will be related to total dose of therapy. In this study, the…
Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.
Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam
2015-06-22
A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods for the segments of the target sequence with high similarity to those having known structures with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence or an overall good template can be found for the entire sequence but the prediction quality is remarkably weaker in putative domain-linker regions.
Performance of PRISM III and PELOD-2 scores in a pediatric intensive care unit.
Gonçalves, Jean-Pierre; Severo, Milton; Rocha, Carla; Jardim, Joana; Mota, Teresa; Ribeiro, Augusto
2015-10-01
The study aims were to compare two models (The Pediatric Risk of Mortality III (PRISM III) and Pediatric Logistic Organ Dysfunction (PELOD-2)) for prediction of mortality in a pediatric intensive care unit (PICU) and recalibrate PELOD-2 in a Portuguese population. To achieve the previous goal, a prospective cohort study to evaluate score performance (standardized mortality ratio, discrimination, and calibration) for both models was performed. A total of 556 patients consecutively admitted to our PICU between January 2011 and December 2012 were included in the analysis. The median age was 65 months, with an interquartile range of 1 month to 17 years. The male-to-female ratio was 1.5. The median length of PICU stay was 3 days. The overall predicted number of deaths using PRISM III score was 30.8 patients whereas that by PELOD-2 was 22.1 patients. The observed mortality was 29 patients. The area under the receiver operating characteristics curve for the two models was 0.92 and 0.94, respectively. The Hosmer and Lemeshow goodness-of-fit test showed a good calibration only for PRISM III (PRISM III: χ (2) = 3.820, p = 0.282; PELOD-2: χ (2) = 9.576, p = 0.022). Both scores had good discrimination. PELOD-2 needs recalibration to be a better reliable prediction tool. • PRISM III (Pediatric Risk of Mortality III) and PELOD (Pediatric Logistic Organ Dysfunction) scores are frequently used to assess the performance of intensive care units and also for mortality prediction in the pediatric population. • Pediatric Logistic Organ Dysfunction 2 is the newer version of PELOD and has recently been validated with good discrimination and calibration. What is New: • In our population, both scores had good discrimination. • PELOD-2 needs recalibration to be a better reliable prediction tool.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2016-09-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and, consequently, tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM), based on the massive integration of 14 diverse complementary quality assessment methods, which was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering the global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Kong, Lingxin; Yang, Bin; Xu, Baoqiang; Li, Yifu
2014-09-01
Based on the molecular interaction volume model (MIVM), the activities of the components of Sn-Sb, Sb-Bi, Sn-Zn, Sn-Cu, and Sn-Ag alloys were predicted. The predicted values are in good agreement with the experimental data, which indicates that the MIVM offers good stability and reliability owing to its sound physical basis. A significant advantage of the MIVM lies in its ability to predict the thermodynamic properties of liquid alloys using only two parameters. The phase equilibria of Sn-Sb and Sn-Bi alloys were calculated based on the properties of the pure components and the activity coefficients, which indicates that Sn-Sb and Sn-Bi alloys can be separated thoroughly by vacuum distillation. This study extends previous investigations and provides an effective and convenient model on which to base refining simulations for Sn-based alloys.
Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli
Necciari, Thibaud; Laback, Bernhard; Savel, Sophie; Ystad, Sølvi; Balazs, Peter; Meunier, Sabine; Kronland-Martinet, Richard
2016-01-01
Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. Such effects were avoided in the experiment of this study by using maximally-compact stimuli. PMID:27875575
ERIC Educational Resources Information Center
Carlo, Gustavo; McGinley, Meredith; Davis, Alexandra; Streit, Cara
2012-01-01
The article provides a brief review of theory and research on the roles of guilt, shame, and sympathy in predicting moral behaviors. Two models are presented and contrasted. The guilt-based model proposes that guilt and shame jointly predict prosocial and aggressive behaviors. In contrast, the sympathy-based model suggests that perspective taking…
Height-Diameter Equations for 12 Upland Species in the Missouri Ozark Highlands
J.R. Lootens; David R. Larsen; Stephen R. Shifley
2007-01-01
We calibrated a model predicting total tree height as a function of tree diameter for nine tree species common to the Missouri Ozarks. Model coefficients were derived from nearly 10,000 observed trees. The calibrated model did a good job of predicting the mean height-diameter trend for each species (pseudo-R2 values ranged from 0.56 to 0.88), but...
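As a hedged illustration of this kind of calibration, the sketch below fits a common nonlinear height-diameter form (Chapman-Richards) to one species; this is a generic curve for illustration, not necessarily the equation used in the study.

# Fit a generic Chapman-Richards height-diameter curve with scipy; the starting
# values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(dbh, a, b, c):
    """Total height (m) from diameter at breast height (cm); 1.37 m = breast height."""
    return 1.37 + a * (1.0 - np.exp(-b * dbh)) ** c

def fit_height_diameter(dbh, height):
    params, _ = curve_fit(chapman_richards, dbh, height, p0=(25.0, 0.05, 1.0),
                          maxfev=10000)
    return params  # (a, b, c) for one species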
NASA Technical Reports Server (NTRS)
Smith, Arthur F.
1985-01-01
Results of static stability wind tunnel tests of three 62.2 cm (24.5 in) diameter models of the Prop-Fan are presented. Measurements of blade stresses were made with the Prop-Fans mounted on an isolated nacelle in an open 5.5 m (18 ft) wind tunnel test section with no tunnel flow. The tests were conducted in the United Technology Research Center Large Subsonic Wind Tunnel. Stall flutter was determined by regions of high stress, which were compared with predictions of boundaries of zero total viscous damping. The structural analysis used beam methods for the model with straight blades and finite element methods for the models with swept blades. Increasing blade sweep tends to suppress stall flutter. Comparisons with similar test data acquired at NASA/Lewis are good. Correlations between measured and predicted critical speeds for all the models are good. The trend of increased stability with increased blade sweep is well predicted. Calculated flutter boundaries generally coincide with tested boundaries. Stall flutter is predicted to occur in the third (torsion) mode. The straight blade test shows third mode response, while the swept blades respond in other modes.
Hou, Tingjun; Xu, Xiaojie
2002-12-01
In this study, the relationships between the brain-blood concentration ratio of 96 structurally diverse compounds and a large number of structurally derived descriptors were investigated. The linear models were based on molecular descriptors that can be calculated for any compound simply from a knowledge of its molecular structure. The linear correlation coefficients of the models were optimized by genetic algorithms (GAs), and the descriptors used in the linear models were automatically selected from 27 structurally derived descriptors. The GA optimizations resulted in a group of linear models with three or four molecular descriptors and good statistical significance. The change in descriptor use as the evolution proceeds demonstrates that the octanol/water partition coefficient and the partial negative solvent-accessible surface area multiplied by the negative charge are crucial to blood-brain barrier permeability. Moreover, we found that predictions using multiple QSPR models from the GA optimization gave quite good results in spite of the diversity of structures, better than the predictions using the best single model. The predictions for the two external sets with 37 diverse compounds using multiple QSPR models indicate that the best linear models with four descriptors are sufficiently effective for predictive use. Considering the ease of computation of the descriptors, the linear models may be used as general utilities to screen the blood-brain barrier partitioning of drugs in a high-throughput fashion.
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
Performance Prediction of Constrained Waveform Design for Adaptive Radar
2016-11-01
Kullback-Leibler divergence. χ2 goodness-of-fit test: we compute the estimated CDF for both models with 10000 MC trials. For Model 1 we observed a p-value of ... was clearly similar in its physical attributes, but the measures used (Kullback-Leibler divergence, chi-square test, and the trace of the covariance) showed ... models' goodness-of-fit we look at three measures: (1) χ2 test, (2) trace of the inverse
Yajima, Airi; Uesawa, Yoshihiro; Ogawa, Chiaki; Yatabe, Megumi; Kondo, Naoki; Saito, Shinichiro; Suzuki, Yoshihiko; Atsuda, Kouichiro; Kagaya, Hajime
2015-05-01
There exist various useful predictive models, such as the Cockcroft-Gault model, for estimating creatinine clearance (CLcr). However, the prediction of renal function is difficult in patients with cancer treated with cisplatin. Therefore, we attempted to construct a new model for predicting CLcr in such patients. Japanese patients with head and neck cancer who had received cisplatin-based chemotherapy were used as subjects. A multiple regression equation was constructed as a model for predicting CLcr values based on background and laboratory data. A model for predicting CLcr, which included body surface area, serum creatinine and albumin, was constructed. The model exhibited good performance prior to cisplatin therapy. In addition, it performed better than previously reported models after cisplatin therapy. The predictive model constructed in the present study displayed excellent potential and was useful for estimating the renal function of patients treated with cisplatin therapy. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
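For reference, the Cockcroft-Gault estimate used as the conventional comparator can be written as below; the study's new multiple-regression model (body surface area, serum creatinine, albumin) is not reproduced because its coefficients are not given in the abstract.

# Classical Cockcroft-Gault creatinine clearance estimate (mL/min).
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    clcr = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return clcr * 0.85 if female else clcr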
M3Ag17(SPh)12 Nanoparticles and Their Structure Prediction.
Wickramasinghe, Sameera; Atnagulov, Aydar; Conn, Brian E; Yoon, Bokwon; Barnett, Robert N; Griffith, Wendell P; Landman, Uzi; Bigioni, Terry P
2015-09-16
Although silver nanoparticles are of great fundamental and practical interest, only one structure has been determined thus far: M4Ag44(SPh)30, where M is a monocation, and SPh is an aromatic thiolate ligand. This is in part due to the fact that no other molecular silver nanoparticles have been synthesized with aromatic thiolate ligands. Here we report the synthesis of M3Ag17(4-tert-butylbenzene-thiol)12, which has good stability and an unusual optical spectrum. We also present a rational strategy for predicting the structure of this molecule. First-principles calculations support the structural model, predict a HOMO-LUMO energy gap of 1.77 eV, and predict a new "monomer mount" capping motif, Ag(SR)3, for Ag nanoparticles. The calculated optical absorption spectrum is in good correspondence with the measured spectrum. Heteroatom substitution was also used as a structural probe. First-principles calculations based on the structural model predicted a strong preference for a single Au atom substitution in agreement with experiment.
Cheng, Jieyao; Hou, Jinlin; Ding, Huiguo; Chen, Guofeng; Xie, Qing; Wang, Yuming; Zeng, Minde; Ou, Xiaojuan; Ma, Hong; Jia, Jidong
2015-01-01
Background and Aims Noninvasive models have been developed for fibrosis assessment in patients with chronic hepatitis B. However, the sensitivity, specificity and diagnostic accuracy of these methods in evaluating liver fibrosis have not been validated and compared in the same group of patients. The aim of this study was to verify the diagnostic performance and reproducibility of ten reported noninvasive models in a large cohort of Asian CHB patients. Methods The diagnostic performance of ten noninvasive models (HALF index, FibroScan, S index, Zeng model, Youyi model, Hui model, APAG, APRI, FIB-4 and FibroTest) was assessed against liver histology by ROC curve analysis in CHB patients. The reproducibility of the ten models was evaluated by recalculating the diagnostic values at the cut-off values defined by the original studies. Results Six models (HALF index, FibroScan, Zeng model, Youyi model, S index and FibroTest) had AUROCs higher than 0.70 in predicting any fibrosis stage, and two of them had the best diagnostic performance, with AUROCs for predicting F≥2, F≥3 and F4 of 0.83, 0.89 and 0.89 for the HALF index and 0.82, 0.87 and 0.87 for FibroScan, respectively. Four models (HALF index, FibroScan, Zeng model and Youyi model) showed good diagnostic values at the given cut-offs. Conclusions The HALF index, FibroScan, Zeng model, Youyi model, S index and FibroTest show good diagnostic performance, and all of them, except the S index and FibroTest, have good reproducibility for evaluating liver fibrosis in CHB patients. Registration Number ChiCTR-DCS-07000039. PMID:26709706
Capillary Rise: Validity of the Dynamic Contact Angle Models.
Wu, Pingkeng; Nikolov, Alex D; Wasan, Darsh T
2017-08-15
The classical Lucas-Washburn-Rideal (LWR) equation, using the equilibrium contact angle, predicts a faster capillary rise process than is observed experimentally in many cases. The major contributor to this faster prediction is believed to be the velocity-dependent dynamic contact angle. In this work, we investigated dynamic contact angle models for their ability to correct for the dynamic contact angle effect in the capillary rise process. We conducted capillary rise experiments with various wetting liquids in borosilicate glass capillaries and compared the model predictions with our experimental data. The results show that the LWR equations modified by the molecular kinetic theory and the hydrodynamic model provide good predictions of the capillary rise of all the test liquids with fitting parameters, while the one modified by Joos' empirical equation works for specific liquids, such as silicone oils. The LWR equation modified by the molecular self-layering model predicts well the capillary rise of carbon tetrachloride, octamethylcyclotetrasiloxane, and n-alkanes using the molecular diameter or measured solvation force data. The LWR equation modified by the molecular self-layering model also gives good predictions of the capillary rise of silicone oils covering a wide range of bulk viscosities with the same key parameter W(0), which results from the molecular self-layering. The advantage of the molecular self-layering model over the other models reveals the importance of the layered, molecularly thin wetting film ahead of the main meniscus in the energy dissipation associated with the dynamic contact angle. The analysis of the capillary rise of silicone oils with a wide range of bulk viscosities provides new insights into the capillary dynamics of polymer melts.
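For orientation, a commonly quoted form of the LWR balance (capillary driving pressure against viscous and gravitational resistance, with inertia neglected; notation assumed here) is

\[
\frac{dh}{dt} = \frac{r^2}{8\mu h}\left(\frac{2\gamma\cos\theta}{r} - \rho g h\right),
\]

where h is the rise height, r the capillary radius, μ the viscosity, γ the surface tension and θ the contact angle; the dynamic contact angle models discussed above effectively replace the equilibrium θ with a velocity-dependent θ_d(dh/dt).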
Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.
Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick
2013-01-01
Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems for potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm system based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. © 2012 Diabetes Technology Society.
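A simplified sketch of windowed autoregressive PLS for 30-minute-ahead glucose prediction is given below; the model is re-fit on a moving window at each step rather than updated recursively, so it approximates, but is not, the RARPLS algorithm, and the lag/horizon settings are assumptions.

# Moving-window autoregressive PLS for k-step-ahead glucose prediction; re-fitting
# on each window stands in for (but is not) the recursive RARPLS update.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def lagged_matrix(cgm, n_lags=6, horizon=6):
    """Rows of the last n_lags CGM samples vs. the value `horizon` samples later
    (6 x 5-min samples = 30 min, assuming 5-min sampling)."""
    idx = range(n_lags, len(cgm) - horizon + 1)
    X = np.array([cgm[i - n_lags:i] for i in idx])
    y = np.array([cgm[i - 1 + horizon] for i in idx])
    return X, y

def predict_ahead(cgm_window, n_components=2, n_lags=6, horizon=6):
    cgm = np.asarray(cgm_window, dtype=float)
    X, y = lagged_matrix(cgm, n_lags, horizon)
    pls = PLSRegression(n_components=n_components).fit(X, y)
    latest = cgm[-n_lags:].reshape(1, -1)
    return float(pls.predict(latest).ravel()[0])  # predicted glucose 30 min ahead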
Alcohol-related predictors of adolescent driving: gender differences in crashes and offenses.
Shope, J T; Waller, P F; Lang, S W
1996-11-01
Demographic and alcohol-related data collected from eighth-grade students (age 13 years) were used in logistic regression to predict subsequent first-year driving crashes and offenses (age 17 years). For young men's crashes and offenses, good-fitting models used living situation (both parents or not), parents' attitude about teen drinking (negative or neutral), and the interaction term. Young men who lived with both parents and reported negative parental attitudes regarding teen drinking were less likely to have crashes and offenses. For young women's crashes, a good-fitting model included friends' involvement with alcohol. Young women who reported that their friends were not involved with alcohol were least likely to have crashes. No model predicting young women's offenses emerged.
Predicting Salt Permeability Coefficients in Highly Swollen, Highly Charged Ion Exchange Membranes.
Kamcev, Jovan; Paul, Donald R; Manning, Gerald S; Freeman, Benny D
2017-02-01
This study presents a framework for predicting salt permeability coefficients in ion exchange membranes in contact with an aqueous salt solution. The model, based on the solution-diffusion mechanism, was tested using experimental salt permeability data for a series of commercial ion exchange membranes. Equilibrium salt partition coefficients were calculated using a thermodynamic framework (i.e., Donnan theory), incorporating Manning's counterion condensation theory to calculate ion activity coefficients in the membrane phase and the Pitzer model to calculate ion activity coefficients in the solution phase. The model predicted NaCl partition coefficients in a cation exchange membrane and two anion exchange membranes, as well as MgCl 2 partition coefficients in a cation exchange membrane, remarkably well at higher external salt concentrations (>0.1 M) and reasonably well at lower external salt concentrations (<0.1 M) with no adjustable parameters. Membrane ion diffusion coefficients were calculated using a combination of the Mackie and Meares model, which assumes ion diffusion in water-swollen polymers is affected by a tortuosity factor, and a model developed by Manning to account for electrostatic effects. Agreement between experimental and predicted salt diffusion coefficients was good with no adjustable parameters. Calculated salt partition and diffusion coefficients were combined within the framework of the solution-diffusion model to predict salt permeability coefficients. Agreement between model and experimental data was remarkably good. Additionally, a simplified version of the model was used to elucidate connections between membrane structure (e.g., fixed charge group concentration) and salt transport properties.
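In compact, simplified form (notation assumed here), the framework described above combines a solution-diffusion expression for permeability with the Mackie-Meares tortuosity correction for diffusion:

\[
P_s = K_s\, D_s^{m}, \qquad \frac{D_s^{m}}{D_s^{w}} \approx \left(\frac{\phi_w}{2-\phi_w}\right)^{2},
\]

where P_s is the salt permeability, K_s the equilibrium partition coefficient obtained from Donnan/Manning theory, D_s^m and D_s^w the salt diffusion coefficients in the membrane and in free solution, and φ_w the membrane water volume fraction; Manning's electrostatic correction further modifies the membrane diffusivity.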
NASA Astrophysics Data System (ADS)
Wang, Zhao-Qiang; Hu, Chang-Hua; Si, Xiao-Sheng; Zio, Enrico
2018-02-01
Current degradation modeling and remaining useful life prediction studies share a common assumption that degrading systems are either not maintained or are maintained perfectly (i.e., restored to an as-good-as-new state). This paper concerns the issues of how to model the degradation process and predict the remaining useful life of degrading systems subjected to imperfect maintenance activities, which can restore the health condition of a degrading system to any degradation level between as-good-as-new and as-bad-as-old. Toward this end, a nonlinear model driven by a Wiener process is first proposed to characterize the degradation trajectory of a degrading system subjected to imperfect maintenance, where negative jumps are incorporated to quantify the influence of imperfect maintenance activities on the system's degradation. Then, the probability density function of the remaining useful life is derived analytically by a space-scale transformation, i.e., by transforming the constructed degradation model with negative jumps crossing a constant threshold level into a Wiener process model crossing a random threshold level. To implement the proposed method, the unknown parameters in the degradation model are estimated by the maximum likelihood estimation method. Finally, the proposed degradation modeling and remaining useful life prediction method is applied to a practical case of draught fans, a type of mechanical system used in steel mills. The results reveal that, for a degrading system subjected to imperfect maintenance, the proposed method obtains more accurate remaining useful life predictions than the benchmark model from the literature.
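The sketch below is a minimal Monte Carlo illustration of the modeling idea: a Wiener degradation path with negative jumps at imperfect-maintenance times and an empirical remaining-useful-life estimate from first passage of a failure threshold. All numerical values are illustrative assumptions; the paper derives the remaining-useful-life distribution analytically rather than by simulation.

# Monte Carlo illustration of a Wiener degradation path with negative jumps at
# maintenance times; all parameter values are illustrative assumptions.
import numpy as np

def simulate_rul(x0=2.0, drift=0.05, sigma=0.1, threshold=10.0,
                 maint_times=(50, 100), jump=1.5, dt=1.0, n_paths=5000,
                 t_max=1000.0, rng=np.random.default_rng(0)):
    """Return the mean first-passage time (an RUL estimate) over Monte Carlo paths."""
    hitting_times = []
    for _ in range(n_paths):
        x, t = x0, 0.0
        while t < t_max:
            t += dt
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            if int(t) in maint_times:   # imperfect maintenance: partial reset
                x = max(x - jump, 0.0)
            if x >= threshold:          # failure threshold crossed
                hitting_times.append(t)
                break
    return float(np.mean(hitting_times))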
NASA Astrophysics Data System (ADS)
Duc-Toan, Nguyen; Tien-Long, Banh; Young-Suk, Kim; Dong-Won, Jung
2011-08-01
In this study, a modified Johnson-Cook (J-C) model and a new method for determining the J-C material parameters are proposed to predict the stress-strain curves of tensile tests at elevated temperatures more accurately. A MATLAB tool is used to determine material parameters by fitting a curve following Ludwick's hardening law at various elevated temperatures. These hardening-law parameters are then used to determine the modified J-C model material parameters. The modified J-C model gives better predictions than the conventional one. As a first verification, an FEM tensile test simulation based on the isotropic hardening model for boron sheet steel at elevated temperatures was carried out via a user-material subroutine, using an explicit finite element code, and compared with the measurements. The temperature decrease of all elements due to the air cooling process was then calculated with the modified J-C model and coded in a VUMAT subroutine for the tensile test simulation of the cooling process. The modified J-C model showed good agreement between the simulation results and the corresponding experiments. The second investigation addressed V-bending spring-back prediction of magnesium alloy sheets at elevated temperatures. Here, the combination of the proposed J-C model with a modified hardening law accounting for the unusual plastic behaviour of magnesium alloy sheet was adopted for FEM simulation of V-bending spring-back prediction and showed good agreement with the corresponding experiments.
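For reference, the conventional Johnson-Cook flow-stress form that the study modifies can be evaluated as below; the specific modification and the fitted parameters are not given in the abstract, so the parameter values here are hypothetical placeholders.

```python
import numpy as np

def johnson_cook(strain, strain_rate, T, A, B, n, C, m,
                 ref_rate=1.0, T_room=293.0, T_melt=1800.0):
    """Conventional Johnson-Cook flow stress:
    sigma = (A + B*eps^n) * (1 + C*ln(rate/ref_rate)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room)/(T_melt - T_room)."""
    t_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return (A + B * strain**n) * (1.0 + C * np.log(strain_rate / ref_rate)) \
           * (1.0 - t_star**m)

# Hypothetical boron-steel-like parameters, only for illustration.
eps = np.linspace(0.002, 0.2, 5)
print(johnson_cook(eps, strain_rate=0.01, T=773.0,
                   A=350.0, B=600.0, n=0.35, C=0.02, m=1.0))
```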
Developing a Suitable Model for Water Uptake for Biodegradable Polymers Using Small Training Sets.
Valenzuela, Loreto M; Knight, Doyle D; Kohn, Joachim
2016-01-01
Prediction of the dynamic properties of water uptake across polymer libraries can accelerate polymer selection for a specific application. We first built semiempirical models using Artificial Neural Networks and all water uptake data as individual inputs. These models give very good correlations (R2 > 0.78 for the test set) but very low accuracy on cross-validation sets (less than 19% of experimental points within experimental error). Instead, using consolidated parameters like equilibrium water uptake, a good model is obtained (R2 = 0.78 for the test set), with accurate predictions for 50% of tested polymers. The semiempirical model was applied to the 56-polymer library of L-tyrosine-derived polyarylates, identifying groups of polymers that are likely to satisfy design criteria for water uptake. This research demonstrates that a surrogate modeling effort can reduce the number of polymers that must be synthesized and characterized to identify an appropriate polymer that meets certain performance criteria.
Quantifying Confidence in Model Predictions for Hypersonic Aircraft Structures
2015-03-01
of isolating calibrations of models in the network, segmented and simultaneous calibration are compared using the Kullback-Leibler ...value of θ. While not all test statistics are as simple as measuring goodness or badness of fit, their directional interpretations tend to remain...data quite well, qualitatively. Quantitative goodness-of-fit tests are problematic because they assume a true empirical CDF is being tested or
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grach, I.L.; Kalashnikova, Y.S.; Narodetskii, I.M.
We use the constituent-quark bag model for describing s-wave πN amplitudes at low energies. The resulting parameters of the πN potentials are in good agreement with the theoretical predictions of the MIT bag model.
Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia
2016-02-01
To predict the biochar yield from cattle manure pyrolysis conveniently, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set with 33 experimental data points was used, which was obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is convenient and effective for the prediction of the biochar yield. In particular, the novel LS-SVM model has more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling the cattle manure pyrolysis process and other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
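A compact sketch of LS-SVM regression, assuming an RBF kernel and solving the standard dual linear system; the pyrolysis predictors and yields below are synthetic stand-ins for the 33-point data set, and the hyperparameter values are arbitrary.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma_k):
    d2 = ((X1[:, None, :] - X2[None, :, :])**2).sum(-1)
    return np.exp(-gamma_k * d2)

def lssvm_fit(X, y, gamma_reg=10.0, gamma_k=0.5):
    """Least-squares SVM regression: solve the dual linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, gamma_k)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma_reg
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]            # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma_k=0.5):
    return rbf_kernel(X_new, X_train, gamma_k) @ alpha + b

# Hypothetical stand-in for the 33-point pyrolysis data set; predictors could
# be, e.g., normalized temperature, heating rate, and residence time.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(33, 3))
y = 35 - 20 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 33)   # biochar yield (%)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, b, alpha, X[:3]))
```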
A Model For Rapid Estimation of Economic Loss
NASA Astrophysics Data System (ADS)
Holliday, J. R.; Rundle, J. B.
2012-12-01
One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
Lambert, Emily; Pierce, Graham J; Hall, Karen; Brereton, Tom; Dunn, Timothy E; Wall, Dave; Jepson, Paul D; Deaville, Rob; MacLeod, Colin D
2014-06-01
There is increasing evidence that the distributions of a large number of species are shifting with global climate change as they track changing surface temperatures that define their thermal niche. Modelling efforts to predict species distributions under future climates have increased with concern about the overall impact of these distribution shifts on species ecology, and especially where barriers to dispersal exist. Here we apply a bio-climatic envelope modelling technique to investigate the impacts of climate change on the geographic range of ten cetacean species in the eastern North Atlantic and to assess how such modelling can be used to inform conservation and management. The modelling process integrates elements of a species' habitat and thermal niche, and employs "hindcasting" of historical distribution changes in order to verify the accuracy of the modelled relationship between temperature and species range. If this ability is not verified, there is a risk that inappropriate or inaccurate models will be used to make future predictions of species distributions. Of the ten species investigated, we found that while the models for nine could successfully explain current spatial distribution, only four had a good ability to predict distribution changes over time in response to changes in water temperature. Applied to future climate scenarios, the four species-specific models with good predictive abilities indicated range expansion in one species and range contraction in three others, including the potential loss of up to 80% of suitable white-beaked dolphin habitat. Model predictions allow identification of affected areas and the likely time-scales over which impacts will occur. Thus, this work provides important information on both our ability to predict how individual species will respond to future climate change and the applicability of predictive distribution models as a tool to help construct viable conservation and management strategies. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Bradshaw, Tyler; Fu, Rau; Bowen, Stephen; Zhu, Jun; Forrest, Lisa; Jeraj, Robert
2015-07-01
Dose painting relies on the ability of functional imaging to identify resistant tumor subvolumes to be targeted for additional boosting. This work assessed the ability of FDG, FLT, and Cu-ATSM PET imaging to predict the locations of residual FDG PET in canine tumors following radiotherapy. Nineteen canines with spontaneous sinonasal tumors underwent PET/CT imaging with radiotracers FDG, FLT, and Cu-ATSM prior to hypofractionated radiotherapy. Therapy consisted of 10 fractions of 4.2 Gy to the sinonasal cavity with or without an integrated boost of 0.8 Gy to the GTV. Patients had an additional FLT PET/CT scan after fraction 2, a Cu-ATSM PET/CT scan after fraction 3, and follow-up FDG PET/CT scans after radiotherapy. Following image registration, simple and multiple linear and logistic voxel regressions were performed to assess how well pre- and mid-treatment PET imaging predicted post-treatment FDG uptake. R2 and pseudo R2 were used to assess the goodness of fits. For simple linear regression models, regression coefficients for all pre- and mid-treatment PET images were significantly positive across the population (P < 0.05). However, there was large variability among patients in goodness of fits: R2 ranged from 0.00 to 0.85, with a median of 0.12. Results for logistic regression models were similar. Multiple linear regression models resulted in better fits (median R2 = 0.31), but there was still large variability between patients in R2. The R2 from regression models for different predictor variables were highly correlated across patients (R ≈ 0.8), indicating tumors that were poorly predicted with one tracer were also poorly predicted by other tracers. In conclusion, the high inter-patient variability in goodness of fits indicates that PET was able to predict locations of residual tumor in some patients, but not others. This suggests not all patients would be good candidates for dose painting based on a single biological target.
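The voxel-level analysis described above can be sketched as a per-patient regression of post-treatment uptake on pre-treatment uptake; the arrays below are synthetic stand-ins for registered voxel values, illustrating one well-predicted and one poorly predicted tumor.

```python
import numpy as np

def voxel_r2(pre, post):
    """Simple linear regression of post-treatment uptake on pre-treatment
    uptake across a patient's tumor voxels; returns slope and R^2."""
    slope, intercept = np.polyfit(pre, post, 1)
    pred = slope * pre + intercept
    ss_res = np.sum((post - pred)**2)
    ss_tot = np.sum((post - post.mean())**2)
    return slope, 1.0 - ss_res / ss_tot

# Two hypothetical patients: one whose pre-treatment uptake is predictive,
# one where post-treatment uptake is essentially unrelated.
rng = np.random.default_rng(2)
pre = rng.gamma(2.0, 1.5, size=4000)                  # e.g. FLT SUV per voxel
post_good = 0.4 * pre + rng.normal(0, 0.3, pre.size)
post_poor = rng.gamma(1.5, 1.0, size=pre.size)
for label, post in [("well predicted", post_good),
                    ("poorly predicted", post_poor)]:
    slope, r2 = voxel_r2(pre, post)
    print(f"{label}: slope = {slope:.2f}, R^2 = {r2:.2f}")
```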
Nnoaham, Kelechi E.; Hummelshoj, Lone; Kennedy, Stephen H.; Jenkinson, Crispin; Zondervan, Krina T.
2012-01-01
Objective To generate and validate symptom-based models to predict endometriosis among symptomatic women prior to undergoing their first laparoscopy. Design Prospective, observational, two-phase study, in which women completed a 25-item questionnaire prior to surgery. Setting Nineteen hospitals in 13 countries. Patient(s) Symptomatic women (n = 1,396) scheduled for laparoscopy without a previous surgical diagnosis of endometriosis. Intervention(s) None. Main Outcome Measure(s) Sensitivity and specificity of endometriosis diagnosis predicted by symptoms and patient characteristics from optimal models developed using multiple logistic regression analyses in one data set (phase I), and independently validated in a second data set (phase II) by receiver operating characteristic (ROC) curve analysis. Result(s) Three hundred sixty (46.7%) women in phase I and 364 (58.2%) in phase II were diagnosed with endometriosis at laparoscopy. Menstrual dyschezia (pain on opening bowels) and a history of benign ovarian cysts most strongly predicted both any and stage III and IV endometriosis in both phases. Prediction of any-stage endometriosis, although improved by ultrasound scan evidence of cyst/nodules, was relatively poor (area under the curve [AUC] = 68.3). Stage III and IV disease was predicted with good accuracy (AUC = 84.9, sensitivity of 82.3% and specificity 75.8% at an optimal cut-off of 0.24). Conclusion(s) Our symptom-based models predict any-stage endometriosis relatively poorly and stage III and IV disease with good accuracy. Predictive tools based on such models could help to prioritize women for surgical investigation in clinical practice and thus contribute to reducing time to diagnosis. We invite other researchers to validate the key models in additional populations. PMID:22657249
NASA Technical Reports Server (NTRS)
Aboudi, Jacob
1998-01-01
The micromechanical generalized method of cells model is employed for the prediction of the effective elastic, piezoelectric, dielectric, pyroelectric and thermal-expansion constants of multiphase composites with embedded piezoelectric materials. The predicted effective constants are compared with other micromechanical methods available in the literature and good agreements are obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.
The additivity model assumes that field-scale reaction properties in a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from the field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
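The linear-additivity bookkeeping can be illustrated with a short sketch: fraction-scale properties are combined by mass-fraction weighting to obtain composite (field-scale) estimates. The grain-size fractions, site concentrations, and rate constants below are hypothetical, and, as the study notes, the rate constants themselves were found not to be directly additive.

```python
import numpy as np

# Mass fractions, lab-estimated reactive site concentrations, and desorption
# rate constants for five hypothetical grain-size fractions (finest to the
# 2-8 mm gravel fraction); all values are illustrative only.
mass_frac = np.array([0.30, 0.25, 0.20, 0.15, 0.10])    # sums to 1
site_conc = np.array([12.0, 8.0, 3.0, 2.0, 1.0])        # umol sites / g
rate_k    = np.array([0.050, 0.030, 0.012, 0.008, 0.004])  # 1/h

# Additivity model: composite reactive properties are the mass-fraction-
# weighted sums of the fraction-scale properties; the effective rate constant
# here is weighted by each fraction's contribution to the total site pool.
composite_sites = np.sum(mass_frac * site_conc)
composite_rate  = np.sum(mass_frac * site_conc * rate_k) / composite_sites
print(f"composite site conc. = {composite_sites:.2f} umol/g, "
      f"site-weighted rate constant = {composite_rate:.4f} 1/h")
```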
Anthropometric predictors of body fat in a large population of 9-year-old school-aged children.
Almeida, Sílvia M; Furtado, José M; Mascarenhas, Paulo; Ferraz, Maria E; Silva, Luís R; Ferreira, José C; Monteiro, Mariana; Vilanova, Manuel; Ferraz, Fernando P
2016-09-01
To develop and cross-validate predictive models for percentage body fat (%BF) from anthropometric measurements [including BMI z-score (zBMI) and calf circumference (CC)] excluding skinfold thickness. A descriptive study was carried out in 3,084 pre-pubertal children. Regression models and a neural network were developed with %BF measured by Bioelectrical Impedance Analysis (BIA) as the dependent variable and age, sex and anthropometric measurements as independent predictors. All %BF grade predictive models presented good global accuracy (≥91.3%) for obesity discrimination. Both the overfat/obese and obese prediction models presented good sensitivity (78.6% and 71.0%, respectively), specificity (98.0% and 99.2%) and reliability for positive or negative test results (≥82% and ≥96%). For boys, the order of parameters, by relative weight in the predictive model, was zBMI, height, waist-circumference-to-height-ratio (WHtR) squared variable (_Q), age, weight, CC_Q and hip circumference (HC)_Q (adjusted r2 = 0.847 and RMSE = 2.852); for girls it was zBMI, WHtR_Q, height, age, HC_Q and CC_Q (adjusted r2 = 0.872 and RMSE = 2.171). %BF can be graded and predicted with relative accuracy from anthropometric measurements excluding skinfold thickness. Fitness and cross-validation results showed that our multivariable regression model performed better in this population than did some previously published models.
Posterior Predictive Bayesian Phylogenetic Model Selection
Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-01-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
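The CPO/LPML calculation described above reduces, for posterior samples, to a harmonic-mean identity that can be computed stably as follows; the per-site log-likelihood arrays below are synthetic placeholders for two hypothetical competing models.

```python
import numpy as np

def cpo_lpml(loglik):
    """loglik: (n_draws, n_sites) array of per-site log-likelihoods evaluated
    at posterior draws. CPO_i is the harmonic mean of the site likelihoods
    over draws; LPML is the sum of log CPO_i."""
    # log CPO_i = log(S) - logsumexp(-loglik_i), computed stably.
    S = loglik.shape[0]
    neg = -loglik
    m = neg.max(axis=0)
    log_cpo = np.log(S) - (m + np.log(np.exp(neg - m).sum(axis=0)))
    return log_cpo, log_cpo.sum()

# Hypothetical per-site log-likelihoods from two competing models.
rng = np.random.default_rng(3)
ll_model_a = rng.normal(-2.0, 0.3, size=(1000, 50))
ll_model_b = rng.normal(-2.2, 0.3, size=(1000, 50))
for name, ll in [("model A", ll_model_a), ("model B", ll_model_b)]:
    _, lpml = cpo_lpml(ll)
    print(f"{name}: LPML = {lpml:.1f}")   # higher LPML = better predictive fit
```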
Understanding heat and fluid flow in linear GTA welds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zacharia, T.; David, S.A.; Vitek, J.M.
1992-01-01
A transient heat flow and fluid flow model was used to predict the development of gas tungsten arc (GTA) weld pools in 1.5 mm thick AISI 304 SS. The welding parameters were chosen so as to correspond to an earlier experimental study which produced high-resolution surface temperature maps. The motivation of the present study was to verify the predictive capability of the computational model. Comparison of the numerical predictions and experimental observations indicates good agreement.
Using Socioeconomic Data to Calibrate Loss Estimates
NASA Astrophysics Data System (ADS)
Holliday, J. R.; Rundle, J. B.
2013-12-01
One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
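A rough sketch of this kind of approach, under the assumption that candidate polynomial terms are orthogonalized (here by QR decomposition) and then selected greedily with a predicted-squared-error style penalty; this is not the authors' exact algorithm, and the input variables and response below are synthetic.

```python
import numpy as np
from itertools import combinations_with_replacement

def candidate_terms(X, max_order=2):
    """Build multivariate polynomial candidate regressors up to max_order."""
    n, k = X.shape
    cols, names = [np.ones(n)], ["1"]
    for order in range(1, max_order + 1):
        for idx in combinations_with_replacement(range(k), order):
            cols.append(np.prod(X[:, list(idx)], axis=1))
            names.append("*".join(f"x{i}" for i in idx))
    return np.column_stack(cols), names

def select_terms(X, y, max_order=2):
    """Greedy forward selection on orthogonalized regressors, scored by a
    simple PSE-style metric: MSE + penalty proportional to model size."""
    A, names = candidate_terms(X, max_order)
    Q, _ = np.linalg.qr(A)                       # mutually orthogonal regressors
    chosen, best = [], np.inf
    remaining = list(range(Q.shape[1]))
    sigma2 = np.var(y)                           # crude noise-variance bound
    while remaining:
        scores = []
        for j in remaining:
            cols = chosen + [j]
            coef, *_ = np.linalg.lstsq(Q[:, cols], y, rcond=None)
            mse = np.mean((y - Q[:, cols] @ coef)**2)
            scores.append(mse + 2 * sigma2 * len(cols) / len(y))   # PSE-like
        k = int(np.argmin(scores))
        if scores[k] >= best:
            break
        best, j = scores[k], remaining.pop(k)
        chosen.append(j)
    return [names[j] for j in chosen], best

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(120, 3))            # three synthetic factors
y = 0.1 + 2.0*X[:, 0] + 0.5*X[:, 0]*X[:, 1] + rng.normal(0, 0.05, 120)
print(select_terms(X, y))
```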
ERIC Educational Resources Information Center
Chiang, Yu-Tzu; Yeh, Yu-Chen; Lin, Sunny S. J.; Hwang, Fang-Ming
2011-01-01
This study examined structure and predictive utility of the 2 x 2 achievement goal model among Taiwan pre-university school students (ages 10 to 16) who learned Chinese language arts. The confirmatory factor analyses of Achievement Goal Questionnaire-Chinese version provided good fitting between the factorial and dimensional structures with the…
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
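A minimal sketch of a posterior predictive p-value on a toy Hubble-expansion-like data set, assuming a chi-square discrepancy statistic; the posterior draws of H0 are a synthetic stand-in rather than the output of an actual sampler.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy setup: velocities v = H0 * d + noise, with a stand-in posterior for H0.
d = rng.uniform(5, 50, size=40)                      # distances (Mpc), hypothetical
true_H0, sigma_v = 70.0, 200.0
v_obs = true_H0 * d + rng.normal(0, sigma_v, d.size)
H0_draws = rng.normal(70.0, 1.5, size=2000)          # stand-in posterior sample

def chi2(v, H0):
    return np.sum((v - H0 * d)**2) / sigma_v**2      # discrepancy statistic

# Posterior predictive p-value: for each posterior draw, simulate replicate
# data and compare the replicate's discrepancy with that of the observed data.
exceed = 0
for H0 in H0_draws:
    v_rep = H0 * d + rng.normal(0, sigma_v, d.size)
    exceed += chi2(v_rep, H0) >= chi2(v_obs, H0)
print(f"posterior predictive p-value ~ {exceed / H0_draws.size:.2f}")
```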
Knowledge and implicature: modeling language understanding as social cognition.
Goodman, Noah D; Stuhlmüller, Andreas
2013-01-01
Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the rational speech-act theory: Listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to "invert" this model of the speaker. We apply this framework to model scalar implicature ("some" implies "not all," and "N" implies "not more than N"). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments and find good fit between model predictions and human judgments. Copyright © 2013 Cognitive Science Society, Inc.
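The rational speech-act recursion for scalar implicature can be written in a few lines: a literal listener conditions on the lexicon, a speaker soft-maximizes informativeness, and a pragmatic listener inverts the speaker. The state space, uniform prior, and rationality parameter below are illustrative assumptions rather than the paper's fitted values.

```python
import numpy as np

states = [0, 1, 2, 3]                       # e.g. number of objects with property
utterances = ["none", "some", "all"]
# Literal truth conditions: rows = utterances, cols = states.
lexicon = np.array([[1, 0, 0, 0],           # "none"
                    [0, 1, 1, 1],           # "some" (literally true of "all" too)
                    [0, 0, 0, 1]], float)   # "all"
prior = np.ones(len(states)) / len(states)
alpha = 4.0                                  # speaker rationality (assumed value)

L0 = lexicon * prior                         # literal listener P(state | utterance)
L0 /= L0.sum(axis=1, keepdims=True)
S1 = np.exp(alpha * np.log(L0 + 1e-12)).T    # speaker P(utterance | state)
S1 /= S1.sum(axis=1, keepdims=True)
L1 = S1.T * prior                            # pragmatic listener P(state | utterance)
L1 /= L1.sum(axis=1, keepdims=True)

print("P(state | 'some'), literal:  ", np.round(L0[1], 2))
print("P(state | 'some'), pragmatic:", np.round(L1[1], 2))
# The pragmatic listener shifts mass away from the "all" state: the implicature.
```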
NASA Astrophysics Data System (ADS)
Ajaz, M.; Ullah, S.; Ali, Y.; Younis, H.
2018-02-01
In this research paper, comprehensive results on the double differential yield of π± and K± mesons, protons and antiprotons as a function of laboratory momentum are reported. These hadrons are produced in proton-carbon interactions at 60 GeV/c. The EPOS 1.99, EPOS-LHC and QGSJETII-04 models are used to perform simulations. A comparison of the predictions of these models shows that the QGSJETII-04 model predicts higher yields of all the hadrons in most of the cases at the peak of the distribution. In this interval, EPOS 1.99 and EPOS-LHC produce similar results. In most of the cases at higher momentum of the hadrons, all three models are in good agreement. For protons, all models are in good agreement. EPOS-LHC gives a higher yield of antiprotons at high momentum values compared to the other two models. EPOS-LHC gives a higher prediction at the peak value for π+ mesons and protons at higher polar angle intervals of 100 < 𝜃 < 420 and 100 < 𝜃 < 360, respectively, and EPOS 1.99 gives a higher prediction at the peak value for π- mesons for 140 < 𝜃 < 420. The model predictions, except for antiprotons, are compared with the data obtained by the NA61/SHINE experiment in 31 GeV/c proton-carbon collisions, which clearly shows that the behavior of the distributions in the models is similar to that of the data, but the yield in the data is lower because of the lower beam energy.
A comprehensive mechanistic model for upward two-phase flow in wellbores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sylvester, N.D.; Sarica, C.; Shoham, O.
1994-05-01
A comprehensive model is formulated to predict the flow behavior for upward two-phase flow. This model is composed of a model for flow-pattern prediction and a set of independent mechanistic models for predicting such flow characteristics as holdup and pressure drop in bubble, slug, and annular flow. The comprehensive model is evaluated by using a well data bank made up of 1,712 well cases covering a wide variety of field data. Model performance is also compared with six commonly used empirical correlations and the Hasan-Kabir mechanistic model. Overall model performance is in good agreement with the data. In comparison with other methods, the comprehensive model performed the best.
Eslami, Mohammad H; Rybin, Denis V; Doros, Gheorghe; Siracuse, Jeffrey J; Farber, Alik
2018-01-01
The purpose of this study is to externally validate a recently reported Vascular Study Group of New England (VSGNE) risk predictive model of postoperative mortality after elective abdominal aortic aneurysm (AAA) repair and to compare its predictive ability across different patient risk categories and against the established risk predictive models using the Vascular Quality Initiative (VQI) AAA sample. The VQI AAA database (2010-2015) was queried for patients who underwent elective AAA repair. The VSGNE cases were excluded from the VQI sample. The external validation of a recently published VSGNE AAA risk predictive model, which includes only preoperative variables (age, gender, history of coronary artery disease, chronic obstructive pulmonary disease, cerebrovascular disease, creatinine levels, and aneurysm size) and planned type of repair, was performed using the VQI elective AAA repair sample. The predictive value of the model was assessed via the C-statistic. The Hosmer-Lemeshow method was used to assess calibration and goodness of fit. This model was then compared with the Medicare model, the Vascular Governance Northwest model, and the Glasgow Aneurysm Score for predicting mortality in the VQI sample. The Vuong test was performed to compare the model fit between the models. Model discrimination was assessed in different risk group VQI quintiles. Data from 4431 cases from the VSGNE sample with an overall mortality rate of 1.4% were used to develop the model. The internally validated VSGNE model showed a very high discriminating ability in predicting mortality (C = 0.822) and good model fit (Hosmer-Lemeshow P = .309) among the VSGNE elective AAA repair sample. External validation on 16,989 VQI cases with an overall 0.9% mortality rate showed very robust ability to predict mortality (C = 0.802). Vuong tests yielded a significant fit difference favoring the VSGNE over the Medicare model (C = 0.780), the Vascular Governance Northwest model (0.774), and the Glasgow Aneurysm Score (0.639). Across the 5 risk quintiles, the VSGNE model predicted observed mortality with great accuracy. This simple VSGNE AAA risk predictive model showed very high discriminative ability in predicting mortality after elective AAA repair among a large external independent sample of AAA cases performed by a diverse array of physicians nationwide. The risk score based on this simple VSGNE model can reliably stratify patients according to their risk of mortality after elective AAA repair better than other established models. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbante, Paolo; Frezzotti, Aldo; Gibelli, Livio
The unsteady evaporation of a thin planar liquid film is studied by molecular dynamics simulations of a Lennard-Jones fluid. The obtained results are compared with the predictions of a diffuse interface model in which capillary Korteweg contributions are added to the hydrodynamic equations, in order to obtain a unified description of the liquid bulk, liquid-vapor interface and vapor region. Particular care has been taken in constructing a diffuse interface model matching the thermodynamic and transport properties of the Lennard-Jones fluid. The comparison of diffuse interface model and molecular dynamics results shows that, although good agreement is obtained in equilibrium conditions, remarkable deviations of diffuse interface model predictions from the reference molecular dynamics results are observed in the simulation of liquid film evaporation. It is also observed that the molecular dynamics results are in good agreement with preliminary results obtained from a composite model which describes the liquid film by a standard hydrodynamic model and the vapor by the Boltzmann equation. The two mathematical models are connected by kinetic boundary conditions assuming a unit evaporation coefficient.
Pearson, Matthew R.; Kite, Benjamin A.; Henson, James M.
2016-01-01
In the present study, we examined whether use of protective behavioral strategies mediated the relationship between self-control constructs and alcohol-related outcomes. According to the two-mode model of self-control, good self-control (planfulness; measured with Future Time Perspective, Problem Solving, and Self-Reinforcement) and poor regulation (impulsivity; measured with Present Time Perspective, Poor Delay of Gratification, Distractibility) are theorized to be relatively independent constructs rather than opposite ends of a single continuum. The analytic sample consisted of 278 college student drinkers (68% women) who responded to a battery of surveys at a single time point. Using a structural equation model based on the two-mode model of self-control, we found that good self-control predicted increased use of three types of protective behavioral strategies (Manner of Drinking, Limiting/Stopping Drinking, and Serious Harm Reduction). Poor regulation was unrelated to use of protective behavioral strategies, but had direct effects on alcohol use and alcohol problems. Further, protective behavioral strategies mediated the relationship between good self-control and alcohol use. The clinical implications of these findings are discussed. PMID:22663345
Prediction of wastewater treatment plants performance based on artificial fish school neural network
NASA Astrophysics Data System (ADS)
Zhang, Ruicheng; Li, Chong
2011-10-01
A reliable model for a wastewater treatment plant is essential in providing a tool for predicting its performance and forming a basis for controlling the operation of the process. This would minimize operation costs and help assess the stability of the environmental balance. Given the multivariable, uncertain, and nonlinear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operation data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The calculation results show that the predicted values match the measured values well, so the model is effective for simulation and prediction and can be used to optimize the operating status. The established prediction model provides a simple and practical way to support operation and management in wastewater treatment plants, and has good research and practical engineering value.
Computation of turbulent rotating channel flow with an algebraic Reynolds stress model
NASA Technical Reports Server (NTRS)
Warfield, M. J.; Lakshminarayana, B.
1986-01-01
An Algebraic Reynolds Stress Model has been implemented to modify the Kolmogorov-Prandtl eddy viscosity relation to produce an anisotropic turbulence model. The eddy viscosity relation becomes a function of the local turbulent production to dissipation ratio and local turbulence/rotation parameters. The model is used to predict fully-developed rotating channel flow over a diverse range of rotation numbers. In addition, predictions are obtained for a developing channel flow with high rotation. The predictions are compared with the experimental data available. Good predictions are achieved for mean velocity and wall shear stress over most of the rotation speeds tested. There is some prediction breakdown at high rotation (rotation number greater than .10) where the effects of the rotation on turbulence become quite complex. At high rotation and low Reynolds number, the laminarization on the trailing side represents a complex effect of rotation which is difficult to predict with the described models.
Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G
2013-12-01
Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with a residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosis, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for the MLIN, MLOG and MPOI models, respectively. The model fit quality, verified by the deviance information criterion (DIC) and residual mean square, indicated the superior fit of the MPOI model. The predictive ability of the models was compared by a validation test in an independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between the MLOG and MPOI models. The Poisson model can be used for the selection of tick-resistant animals. © 2013 Blackwell Verlag GmbH.
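As a simple illustration of the modeling choice at stake (Poisson count model versus log-transformed linear model), the sketch below fits both to simulated tick counts and compares their predictive correlation in a held-out sample; the covariates, effect sizes, and data are entirely synthetic, and the real analysis additionally includes random additive genetic effects.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 600
age = rng.uniform(300, 600, n)                        # age at recording (days)
het = rng.uniform(0, 1, n)                            # individual heterozygosity
mu = np.exp(1.5 - 0.4 * het + 0.002 * (age - 450))
ticks = rng.poisson(mu)                               # simulated tick counts

X = sm.add_constant(np.column_stack([age, het]))
train, test = slice(0, 400), slice(400, None)         # independent validation sample

m_pois = sm.GLM(ticks[train], X[train], family=sm.families.Poisson()).fit()
m_log = sm.OLS(np.log(ticks[train] + 1.0), X[train]).fit()

pred_pois = m_pois.predict(X[test])                   # predicted counts
pred_log = np.exp(m_log.predict(X[test])) - 1.0       # back-transformed counts
for name, pred in [("Poisson GLM", pred_pois), ("log-linear model", pred_log)]:
    r = np.corrcoef(pred, ticks[test])[0, 1]
    print(f"{name}: corr(observed, predicted) = {r:.3f}")
```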
Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An
2010-01-01
To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to criteria of residual un-correlation and conclusion, and the model goodness-of-fit was determined through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008 and to evaluate the validity of the model by comparing the difference between the predicted incidence rate and the actual one. The incidence rate of dysentery in 2010 was predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA (1, 1, 1) (0, 1, 2)(12) had a good fit to the incidence rate, with the autoregressive coefficient (AR1 = 0.443) during the past time series, the moving average coefficient (MA1 = 0.806) and the seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all being statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131 respectively and the prediction error was white noise. The model equation was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^(2x12))mu_t. The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, would be 9.390 per 100 thousand. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a prediction model of high precision for short-term forecasting.
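A sketch of fitting the same ARIMA(1,1,1)(0,1,2)12 structure with statsmodels' SARIMAX and forecasting ahead; the monthly series generated below is a synthetic placeholder for the actual surveillance data (January 1990 to June 2009).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly incidence rates per 100,000 with a seasonal cycle and a
# slow decline; replace with the real Jan 1990 - Jun 2009 series (234 months).
idx = pd.date_range("1990-01", periods=234, freq="MS")
rng = np.random.default_rng(7)
y = pd.Series(20 + 5 * np.sin(2 * np.pi * idx.month / 12)
              - 0.05 * np.arange(234) + rng.normal(0, 1.5, 234), index=idx)

# ARIMA(1,1,1)(0,1,2)_12, matching the structure reported in the abstract.
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(0, 1, 2, 12))
fit = model.fit(disp=False)
print(fit.summary().tables[1])                               # AR, MA, SMA coefficients
print(fit.get_forecast(steps=18).predicted_mean.tail(12))    # forecast for 2010
```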
Krajcsi, Attila; Lengyel, Gábor; Kojouharova, Petia
2018-01-01
Highlights: We test whether symbolic number comparison is handled by an analog noisy system. The analog system model has systematic biases in describing symbolic number comparison. This suggests that symbolic and non-symbolic numbers are processed by different systems. Dominant numerical cognition models suppose that both symbolic and non-symbolic numbers are processed by the Analog Number System (ANS) working according to Weber's law. It was proposed that in a number comparison task the numerical distance and size effects reflect a ratio-based performance, which is the sign of ANS activation. However, an increasing number of findings and alternative models propose that symbolic and non-symbolic numbers might be processed by different representations. Importantly, alternative explanations may offer similar predictions to the ANS prediction; therefore, former evidence usually utilizing only the goodness of fit of the ANS prediction is not sufficient to support the ANS account. To test the ANS model more rigorously, a more extensive test is offered here. Several properties of the ANS predictions for the error rates, reaction times, and diffusion model drift rates were systematically analyzed in both non-symbolic dot comparison and symbolic Indo-Arabic comparison tasks. It was consistently found that while the ANS model's prediction is relatively good for the non-symbolic dot comparison, its prediction is poorer and systematically biased for the symbolic Indo-Arabic comparison. We conclude that only non-symbolic comparison is supported by the ANS, and symbolic number comparisons are processed by another representation. PMID:29491845
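One common formalization of the ratio-based ANS prediction mentioned above assumes Gaussian magnitude representations on a linear scale with a single Weber fraction, giving a closed-form error rate; the Weber fraction used here is an assumed value, not one fitted in the study.

```python
import numpy as np
from scipy.special import erfc

def ans_error_rate(n1, n2, w):
    """Predicted error rate when two magnitudes are compared by a noisy analog
    system with Weber fraction w (Gaussian representations, linear scale)."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    return 0.5 * erfc(np.abs(n1 - n2) / (np.sqrt(2.0) * w * np.hypot(n1, n2)))

# Same ratio, different sizes: the ANS predicts identical performance, which
# is the kind of ratio signature tested against symbolic comparison data.
pairs = [(2, 3), (4, 6), (8, 12), (8, 9)]
for a, b in pairs:
    print(f"{a} vs {b}: predicted error rate = {ans_error_rate(a, b, w=0.15):.3f}")
```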
Predictive model for survival in patients with gastric cancer.
Goshayeshi, Ladan; Hoseini, Benyamin; Yousefli, Zahra; Khooie, Alireza; Etminani, Kobra; Esmaeilzadeh, Abbas; Golabpour, Amin
2017-12-01
Gastric cancer is one of the most prevalent cancers in the world. Characterized by poor prognosis, it is a frequent cancer in Iran. The aim of the study was to design a predictive model of survival time for patients suffering from gastric cancer. This was a historical cohort conducted between 2011 and 2016. The study population comprised 277 patients suffering from gastric cancer. Data were gathered from the Iranian Cancer Registry and the laboratory of Emam Reza Hospital in Mashhad, Iran. Patients or their relatives underwent interviews where needed. Missing values were imputed by data mining techniques. Fifteen factors were analyzed. Survival was addressed as the dependent variable. Then, the predictive model was designed by combining a genetic algorithm and logistic regression. Matlab 2014 software was used to combine them. Of the 277 patients, survival was available for only 80 patients, whose data were used for designing the predictive model. The mean ± SD of missing values for each patient was 4.43 ± 0.41. The combined predictive model achieved 72.57% accuracy. Sex, birth year, age at diagnosis, age at diagnosis of patients' family members, family history of gastric cancer, and family history of other gastrointestinal cancers were the six parameters associated with patient survival. The study revealed that imputing missing values by data mining techniques gives good accuracy, and it identified six parameters, extracted by the genetic algorithm, that affect the survival of patients with gastric cancer. Our combined predictive model, with good accuracy, is appropriate for forecasting the survival of patients suffering from gastric cancer. We therefore suggest that policy makers and specialists apply it for the prediction of patients' survival.
Austin, P C; Shah, B R; Newman, A; Anderson, G M
2012-09-01
There are limited validated methods to ascertain comorbidities for risk adjustment in ambulatory populations of patients with diabetes using administrative health-care databases. The objective was to examine the ability of the Johns Hopkins' Aggregated Diagnosis Groups to predict mortality in population-based ambulatory samples of both incident and prevalent subjects with diabetes. Retrospective cohorts constructed using population-based administrative data. The incident cohort consisted of all 346,297 subjects diagnosed with diabetes between 1 April 2004 and 31 March 2008. The prevalent cohort consisted of all 879,849 subjects with pre-existing diabetes on 1 January, 2007. The outcome was death within 1 year of the subject's index date. A logistic regression model consisting of age, sex and indicator variables for 22 of the 32 Johns Hopkins' Aggregated Diagnosis Group categories had excellent discrimination for predicting mortality in incident diabetes patients: the c-statistic was 0.87 in an independent validation sample. A similar model had excellent discrimination for predicting mortality in prevalent diabetes patients: the c-statistic was 0.84 in an independent validation sample. Both models demonstrated very good calibration, denoting good agreement between observed and predicted mortality across the range of predicted mortality in which the large majority of subjects lay. For comparative purposes, regression models incorporating the Charlson comorbidity index, age and sex, age and sex, and age alone had poorer discrimination than the model that incorporated the Johns Hopkins' Aggregated Diagnosis Groups. Logistical regression models using age, sex and the John Hopkins' Aggregated Diagnosis Groups were able to accurately predict 1-year mortality in population-based samples of patients with diabetes. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
NASA Astrophysics Data System (ADS)
Santosa, H.; Hobara, Y.
2017-01-01
The electric field amplitude of the very low frequency (VLF) transmitter in Hawaii (NPM) has been continuously recorded at Chofu (CHF), Tokyo, Japan. The VLF amplitude variability indicates lower ionospheric perturbation in the D region (60-90 km altitude range) around the NPM-CHF propagation path. We carried out the prediction of the daily nighttime mean VLF amplitude by using a Nonlinear Autoregressive with Exogenous Input Neural Network (NARX NN). The NARX NN model, which was built based on daily input variables of various physical parameters such as stratospheric temperature, total column ozone, cosmic rays, and the Dst and Kp indices, possesses good accuracy during the model building. The fitted model was constructed within the training period from 1 January 2011 to 4 February 2013 by using three algorithms, namely, Bayesian Neural Network (BRANN), Levenberg-Marquardt Neural Network (LMANN), and Scaled Conjugate Gradient (SCG). The LMANN has the largest Pearson correlation coefficient (r) of 0.94 and the smallest root-mean-square error (RMSE) of 1.19 dB. The models constructed using LMANN were applied to predict the VLF amplitude from 5 February 2013 to 31 December 2013. As a result, the one step (1 day) ahead predicted nighttime VLF amplitude has an r of 0.93 and an RMSE of 2.25 dB. We conclude that the model built according to the proposed methodology provides good predictions of the electric field amplitude of VLF waves for the NPM-CHF (midlatitude) propagation path.
NASA Technical Reports Server (NTRS)
Yang, R. J.; Weinberg, B. C.; Shamroth, S. J.; Mcdonald, H.
1985-01-01
The application of the time-dependent ensemble-averaged Navier-Stokes equations to transonic turbine cascade flow fields was examined. In particular, efforts focused on an assessment of the procedure in conjunction with a suitable turbulence model to calculate steady turbine flow fields using an O-type coordinate system. Three cascade configurations were considered. Comparisons were made between the predicted and measured surface pressures and heat transfer distributions wherever available. In general, the pressure predictions were in good agreement with the data. Heat transfer calculations also showed good agreement when an empirical transition model was used. However, further work in the development of laminar-turbulent transitional models is indicated. The calculations showed most of the known features associated with turbine cascade flow fields. These results indicate the ability of the Navier-Stokes analysis to predict, in reasonable amounts of computation time, the surface pressure distribution, heat transfer rates, and viscous flow development for turbine cascades operating at realistic conditions.
NASA Technical Reports Server (NTRS)
El-Kaddah, N.; Szekely, J.
1982-01-01
A mathematical representation for the electromagnetic force field and the fluid flow field in a coreless induction furnace is presented. The fluid flow field was represented by writing the axisymmetric turbulent Navier-Stokes equation, containing the electromagnetic body force term. The electromagnetic body force field was calculated by using a technique of mutual inductances. The kappa-epsilon model was employed for evaluating the turbulent viscosity and the resultant differential equations were solved numerically. Theoretically predicted velocity fields are in reasonably good agreement with the experimental measurements reported by Hunt and Moore; furthermore, the agreement regarding the turbulent intensities is essentially quantitative. These results indicate that the kappa-epsilon model provides a good engineering representation of the turbulent recirculating flows occurring in induction furnaces. At this stage it is not clear whether the discrepancies between measurements and predictions, which were not very great in any case, are attributable either to the model or to the measurement techniques employed.
Park, Gwansik; Kim, Taewung; Panzer, Matthew B; Crandall, Jeff R
2016-08-01
In previous shoulder impact studies, the 50th-percentile male GHBMC human body finite-element model was shown to have good biofidelity regarding impact force, but under-predicted shoulder deflection by 80% compared to those observed in the experiment. The goal of this study was to validate the response of the GHBMC M50 model by focusing on three-dimensional shoulder kinematics under a whole-body lateral impact condition. Five modifications, focused on material properties and modeling techniques, were introduced into the model and a supplementary sensitivity analysis was done to determine the influence of each modification to the biomechanical response of the body. The modified model predicted substantially improved shoulder response and peak shoulder deflection within 10% of the observed experimental data, and showed good correlation in the scapula kinematics on sagittal and transverse planes. The improvement in the biofidelity of the shoulder region was mainly due to the modifications of material properties of muscle, the acromioclavicular joint, and the attachment region between the pectoralis major and ribs. Predictions of rib fracture and chest deflection were also improved because of these modifications.
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-04-04
Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method-Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob.
Suarthana, Eva; Vergouwe, Yvonne; Moons, Karel G; de Monchy, Jan; Grobbee, Diederick; Heederik, Dick; Meijer, Evert
2010-09-01
To develop and validate a prediction model to detect sensitization to wheat allergens in bakery workers. The prediction model was developed in 867 Dutch bakery workers (development set, prevalence of sensitization 13%) and included questionnaire items (candidate predictors). First, principal component analysis was used to reduce the number of candidate predictors. Then, multivariable logistic regression analysis was used to develop the model. Internal validation and the extent of optimism were assessed with bootstrapping. External validation was studied in 390 independent Dutch bakery workers (validation set, prevalence of sensitization 20%). The prediction model contained the predictors nasoconjunctival symptoms, asthma symptoms, shortness of breath and wheeze, work-related upper and lower respiratory symptoms, and traditional bakery. The model showed good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.76 (and 0.75 after internal validation). Application of the model in the validation set gave reasonable discrimination (ROC area = 0.69) and good calibration after a small adjustment of the model intercept. A simple model with questionnaire items only can be used to stratify bakers according to their risk of sensitization to wheat allergens. Its use may increase the cost-effectiveness of (subsequent) medical surveillance.
Prediction uncertainty and optimal experimental design for learning dynamical systems.
Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P
2016-06-01
Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
Seasonal prediction of winter haze days in the north central North China Plain
NASA Astrophysics Data System (ADS)
Yin, Zhicong; Wang, Huijun
2016-11-01
Recently, the winter (December-February) haze pollution over the north central North China Plain (NCP) has become severe. By treating the year-to-year increment as the predictand, two new statistical schemes were established using multiple linear regression (MLR) and a generalized additive model (GAM). By analyzing the associated increment of atmospheric circulation, seven leading predictors were selected to predict the upcoming winter haze days over the NCP (WHDNCP). After cross validation, the root mean square error and explained variance of the MLR (GAM) prediction model were 3.39 (3.38) and 53% (54%), respectively. For the final predicted WHDNCP, both models could successfully capture the interannual and interdecadal trends and the extremes. Independent prediction tests for 2014 and 2015 also confirmed the good predictive skill of the new schemes. The prediction bias of the MLR (GAM) model in 2014 and 2015 was 0.09 (-0.07) and -3.33 (-1.01), respectively. Compared to the MLR model, the GAM model had a higher predictive skill in reproducing the rapid and continuous increase of WHDNCP after 2010.
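The year-to-year-increment idea can be sketched as follows: fit the regression on differenced predictand and predictors, then add the predicted increment back to the previous year's observation. The predictor series and haze-day counts below are synthetic, and ordinary linear regression stands in for both the MLR and GAM schemes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
years = np.arange(1980, 2014)
n = len(years)
preds = rng.normal(0, 1, size=(n, 7))                    # 7 circulation predictors
whd = 15 + np.cumsum(0.5 * preds[:, 0] + rng.normal(0, 1, n))   # winter haze days

# Year-to-year increment approach: model d(WHD) from d(predictors), then add
# the predicted increment back to last year's observation.
d_whd = np.diff(whd)
d_pred = np.diff(preds, axis=0)

fitted, observed = [], []
for i in range(5, len(d_whd)):                           # leave-one-out style test
    keep = np.arange(len(d_whd)) != i
    mdl = LinearRegression().fit(d_pred[keep], d_whd[keep])
    fitted.append(whd[i] + mdl.predict(d_pred[[i]])[0])  # WHD(t) = WHD(t-1) + dWHD
    observed.append(whd[i + 1])

fitted, observed = np.array(fitted), np.array(observed)
rmse = np.sqrt(np.mean((fitted - observed)**2))
print(f"cross-validated RMSE = {rmse:.2f} days, "
      f"r = {np.corrcoef(fitted, observed)[0, 1]:.2f}")
```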
PREDICTING CLIMATE-INDUCED RANGE SHIFTS FOR MAMMALS: HOW GOOD ARE THE MODELS?
In order to manage wildlife and conserve biodiversity, it is critical that we understand the potential impacts of climate change on species distributions. Several different approaches to predicting climate-induced geographic range shifts have been proposed to address this proble...
A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks
NASA Astrophysics Data System (ADS)
Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan
2018-01-01
Breakdown occurrence on expressways is considered to be related to various factors. To investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on the breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters observed before breakdowns are extracted, including dynamic environment conditions aggregated at 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdown. Results indicate that the models using data from 5-10 min prior to breakdown perform best, with prediction accuracies higher than 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model's parameter inputs, a random forests (RF) model is adopted to identify the key variables. Modeling with the selected 7 parameters, the refined BN model can predict breakdown with adequate accuracy.
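The variable-screening step can be illustrated with a random-forest importance ranking, as sketched below on synthetic data; the 23 features and the choice of seven are placeholders mirroring the setup described above, not the paper's actual variables.

```python
# Rank candidate breakdown precursors by random-forest feature importance (toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 23))                       # 23 traffic/geometry features (placeholders)
# synthetic label: breakdown driven by a handful of the features plus noise
logit = 1.2 * X[:, 0] - 0.9 * X[:, 3] + 0.7 * X[:, 7] + rng.normal(scale=0.5, size=2000)
y = (logit > 0.5).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top7 = np.argsort(rf.feature_importances_)[::-1][:7]
print("indices of the 7 most informative features:", top7)
```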
NASA Astrophysics Data System (ADS)
Hustim, M.; Arifin, Z.; Aly, S. H.; Ramli, M. I.; Zakaria, R.; Liputo, A.
2018-04-01
This research aimed to predict the noise produced by traffic in the road network of Makassar City using the ASJ-RTN Model 2008, taking horn sounds into account. Observations were taken at 37 roadside survey points. The observations were conducted at 06.00-18.00 and 06.00-21.00; the research objects were motorcycles (MC), light vehicles (LV) and heavy vehicles (HV). The observed data were traffic volume, vehicle speed, number of horn sounds and traffic noise, measured using a Tenmars TM-103 sound level meter. The research result indicates that the noise prediction model accounting for horn sounds produces an average noise level of 78.5 dB, with a Pearson's correlation of 0.95 and an RMSE of 0.87. Therefore, the ASJ-RTN Model 2008 prediction model with horn sounds included can be said to be sufficiently good for predicting noise levels.
Faulhammer, E; Llusa, M; Wahl, P R; Paudel, A; Lawrence, S; Biserni, S; Calzolari, V; Khinast, J G
2016-01-01
The objectives of this study were to develop a predictive statistical model for low-fill-weight capsule filling of inhalation products with dosator nozzles via the quality by design (QbD) approach and, based on that, to create refined models that include quadratic terms for significant parameters. Various controllable process parameters and uncontrolled material attributes of 12 powders were initially screened using a linear model with partial least squares (PLS) regression to determine their effect on the critical quality attributes (CQA; fill weight and weight variability). After identifying critical material attributes (CMAs) and critical process parameters (CPPs) that influenced the CQA, model refinement was performed to study whether interactions or quadratic terms influence the model. Based on the assessment of the effects of the CPPs and CMAs on fill weight and weight variability for low-fill-weight inhalation products, we developed an excellent linear predictive model for fill weight (R2 = 0.96, Q2 = 0.96 for powders with good flow properties and R2 = 0.94, Q2 = 0.93 for cohesive powders) and a model that provides a good approximation of the fill weight variability for each powder group. We validated the model, established a design space for the performance of different types of inhalation grade lactose on low-fill-weight capsule filling and successfully used the CMAs and CPPs to predict fill weight of powders that were not included in the development set.
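The screening step can be sketched as a PLS regression reporting a fit R2 and a cross-validated Q2, as below; the numbers of variables, samples, and latent components are invented for illustration and do not come from the study.

```python
# PLS regression of fill weight on process/material variables with R2 and cross-validated Q2.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 10))                          # CPPs and CMAs (placeholders)
fill_weight = 5 + (X @ rng.normal(size=10)) * 0.5 + rng.normal(scale=0.3, size=60)

pls = PLSRegression(n_components=3)
pls.fit(X, fill_weight)
r2 = r2_score(fill_weight, pls.predict(X).ravel())
q2 = r2_score(fill_weight, cross_val_predict(pls, X, fill_weight, cv=7).ravel())
print(f"R2 = {r2:.2f}, Q2 = {q2:.2f}")
```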
Pérez-Garín, Daniel; Molero, Fernando; Bos, Arjan E R
2017-04-01
The goal of this study is to test a model in which personal discrimination predicts internalized stigma, while group discrimination predicts a greater willingness to engage in collective action. Internalized stigma and collective action, in turn, are associated with positive and negative affect. A cross-sectional study with 213 people with mental illness was conducted. The model was tested using path analysis. Although the data supported the model, its fit was not sufficiently good. A respecified model, in which a direct path from collective action to internalized stigma was added, showed a good fit. Personal and group discrimination appear to impact subjective well-being through two different paths: the internalization of stigma and collective action intentions, respectively. These two paths, however, are not completely independent, as collective action predicts a lower internalization of stigma. Thus, collective action appears to be an important tool to reduce internalized stigma and improve subjective well-being. Future interventions to reduce the impact of stigma should therefore target the internalization of stigma and promote collective action.
Implementation of algebraic stress models in a general 3-D Navier-Stokes method (PAB3D)
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.
1995-01-01
A three-dimensional multiblock Navier-Stokes code, PAB3D, which was developed for propulsion integration and general aerodynamic analysis, has been used extensively by NASA Langley and other organizations to perform both internal (exhaust) and external flow analysis of complex aircraft configurations. This code was designed to solve the simplified Reynolds Averaged Navier-Stokes equations. A two-equation k-epsilon turbulence model has been used with considerable success, especially for attached flows. Accurately predicting transonic shock wave location and pressure recovery in separated flow regions has been more difficult. Two algebraic Reynolds stress models (ASM) have recently been implemented in the code that greatly improved the code's ability to predict these difficult flow conditions. Good agreement with Direct Numerical Simulation (DNS) for a subsonic flat plate was achieved with the ASMs developed by Shih, Zhu, and Lumley and by Gatski and Speziale. Good predictions were also achieved at subsonic and transonic Mach numbers for shock location and trailing edge boattail pressure recovery on a single-engine afterbody/nozzle model.
On the use and the performance of software reliability growth models
NASA Technical Reports Server (NTRS)
Keiller, Peter A.; Miller, Douglas R.
1991-01-01
We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage by using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the relative error of the predicted number of failures over future finite time intervals with respect to the number of failures eventually observed during those intervals. Six of the former models and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
Modelling the distribution of chickens, ducks, and geese in China
Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius
2011-01-01
Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for 1/4 of the sample data which were not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives.
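The hold-out check described above (RMSE and correlation on the quarter of data withheld from training) is simple to compute; a generic sketch with illustrative numbers follows.

```python
# Hold-out validation: RMSE and Pearson correlation between observed and predicted densities.
import numpy as np

def validate(observed, predicted):
    """Return (RMSE, Pearson correlation) for held-out census units."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    corr = np.corrcoef(observed, predicted)[0, 1]
    return rmse, corr

# illustrative values only
obs = np.array([120.0, 340.0, 80.0, 560.0, 210.0])
pred = np.array([150.0, 300.0, 95.0, 500.0, 230.0])
print(validate(obs, pred))
```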
Reynolds-Averaged Navier-Stokes Analysis of Zero Efflux Flow Control over a Hump Model
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2006-01-01
The unsteady flow over a hump model with zero efflux oscillatory flow control is modeled computationally using the unsteady Reynolds-averaged Navier-Stokes equations. Three different turbulence models produce similar results, and do a reasonably good job predicting the general character of the unsteady surface pressure coefficients during the forced cycle. However, the turbulent shear stresses are underpredicted in magnitude inside the separation bubble, and the computed results predict too large a (mean) separation bubble compared with experiment. These missed predictions are consistent with earlier steady-state results using no-flow-control and steady suction, from a 2004 CFD validation workshop for synthetic jets.
Mid- and long-term runoff predictions by an improved phase-space reconstruction model.
Hong, Mei; Wang, Dong; Wang, Yuankun; Zeng, Xiankui; Ge, Shanshan; Yan, Hengqian; Singh, Vijay P
2016-07-01
In recent years, the phase-space reconstruction method has often been used for mid- and long-term runoff predictions. However, the traditional phase-space reconstruction method still needs to be improved. Using a genetic algorithm to improve the phase-space reconstruction method, a new nonlinear model of monthly runoff is constructed. The new model does not rely heavily on embedding dimensions. Recognizing that the rainfall-runoff process is complex and affected by a number of factors, more variables (e.g. temperature and rainfall) are incorporated in the model. In order to detect the possible presence of chaos in the runoff dynamics, the chaotic characteristics of the model are also analyzed, which shows that the model can represent the nonlinear and chaotic characteristics of the runoff. The model is tested for its forecasting performance in four types of experiments using data from six hydrological stations on the Yellow River and the Yangtze River. Results show that the medium- and long-term runoff is satisfactorily forecasted at the hydrological stations. Not only is the forecasting trend accurate, but the mean absolute percentage error is also no more than 15%. Moreover, the forecast results for wet years and dry years are both good, which means that the improved model can, to some extent, overcome the traditional ''wet years and dry years predictability barrier.'' The model forecasts for different regions are all good, showing the universality of the approach. Compared with selected conceptual and empirical methods, the model exhibits greater reliability and stability in long-term runoff prediction. Our study provides new insight for research on the association between monthly runoff and other hydrological factors, and also provides a new method for the prediction of monthly runoff. Copyright © 2015 Elsevier Inc. All rights reserved.
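The phase-space step can be illustrated with a plain time-delay embedding and a nearest-neighbour local predictor, as sketched below; the embedding dimension, delay, and test series are arbitrary assumptions rather than the paper's settings.

```python
# Time-delay embedding and nearest-neighbour local prediction (one-step forecast).
import numpy as np

def embed(x, dim, tau):
    """Time-delay embedding of a scalar series into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_predict(x, dim=3, tau=1, k=5):
    """Predict the next value from the k nearest neighbours of the last state."""
    vectors = embed(x, dim, tau)
    last = vectors[-1]
    candidates = vectors[:-1]            # neighbours must have a known successor
    dists = np.linalg.norm(candidates - last, axis=1)
    idx = np.argsort(dists)[:k]
    successors = x[idx + (dim - 1) * tau + 1]
    return successors.mean()

x = np.sin(0.3 * np.arange(400)) + 0.05 * np.random.default_rng(4).normal(size=400)
print("one-step forecast:", local_predict(x))
```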
CONFOLD2: improved contact-driven ab initio protein structure modeling.
Adhikari, Badri; Cheng, Jianlin
2018-01-25
Contact-guided protein structure prediction methods are becoming more and more successful because of the latest advances in residue-residue contact prediction. To support contact-driven structure prediction, effective tools that can quickly build tertiary structural models of good quality from predicted contacts need to be developed. We develop an improved contact-driven protein modelling method, CONFOLD2, and study how it may be effectively used for ab initio protein structure prediction with predicted contacts as input. It builds models using various subsets of input contacts to explore the fold space under the guidance of a soft square energy function, and then clusters the models to obtain the top five models. CONFOLD2 obtains an average reconstruction accuracy of 0.57 TM-score for the 150 proteins in the PSICOV contact prediction dataset. When benchmarked on the CASP11 contacts predicted using CONSIP2 and the CASP12 contacts predicted using Raptor-X, CONFOLD2 achieves a mean TM-score of 0.41 on both datasets. CONFOLD2 allows the top five structural models for a protein sequence to be generated quickly when its secondary structure and contact predictions are at hand. The source code of CONFOLD2 is publicly available at https://github.com/multicom-toolbox/CONFOLD2/ .
Yan, Zhao-Da; Zhou, Chong-Guang; Su, Shi-Chuan; Liu, Zhen-Tao; Wang, Xi-Zhen
2003-01-01
In order to predict and improve the performance of a natural gas/diesel dual fuel engine (DFE), a combustion rate model based on a forward neural network was built to study the combustion process of the DFE. The effect of the operating parameters on combustion rate was also studied by means of this model. The study showed that the predicted results were in good agreement with the experimental data. It was proved that the developed combustion rate model could be used to successfully predict and optimize the combustion process of the dual fuel engine.
Sweet, Shane N.; Fortier, Michelle S.; Strachan, Shaelyn M.; Blanchard, Chris M.; Boulay, Pierre
2014-01-01
Self-determination theory and self-efficacy theory are prominent theories in the physical activity literature, and studies have begun integrating their concepts. Sweet, Fortier, Strachan and Blanchard (2012) have integrated these two theories in a cross-sectional study. Therefore, this study sought to test a longitudinal integrated model to predict physical activity at the end of a 4-month cardiac rehabilitation program based on theory, research and Sweet et al.’s cross-sectional model. Participants from two cardiac rehabilitation programs (N=109) answered validated self-report questionnaires at baseline, two and four months. Data were analyzed using Amos to assess the path analysis and model fit. Prior to integration, perceived competence and self-efficacy were combined, and labeled as confidence. After controlling for 2-month physical activity and cardiac rehabilitation site, no motivational variables significantly predicted residual change in 4-month physical activity. Although confidence at two months did not predict residual change in 4-month physical activity, it had a strong positive relationship with 2-month physical activity (β=0.30, P<0.001). The overall model retained good fit indices. In conclusion, results diverged from theoretical predictions of physical activity, but self-determination and self-efficacy theory were still partially supported. Because the model had good fit, this study demonstrated that theoretical integration is feasible. PMID:26973926
Predicting DPP-IV inhibitors with machine learning approaches
NASA Astrophysics Data System (ADS)
Cai, Jie; Li, Chanjuan; Liu, Zhihong; Du, Jiewen; Ye, Jiming; Gu, Qiong; Xu, Jun
2017-04-01
Dipeptidyl peptidase IV (DPP-IV) is a promising Type 2 diabetes mellitus (T2DM) drug target. DPP-IV inhibitors prolong the action of glucagon-like peptide-1 (GLP-1) and gastric inhibitory peptide (GIP), and improve glucose homeostasis without weight gain, edema, and hypoglycemia. However, the marketed DPP-IV inhibitors have adverse effects such as nasopharyngitis, headache, nausea, hypersensitivity, skin reactions and pancreatitis. Therefore, novel DPP-IV inhibitors with minimal adverse effects are still needed. The scaffolds of existing DPP-IV inhibitors are structurally diversified. This makes it difficult to build virtual screening models based upon the known DPP-IV inhibitor libraries using conventional QSAR approaches. In this paper, we report a new strategy to predict DPP-IV inhibitors with machine learning approaches involving naïve Bayesian (NB) and recursive partitioning (RP) methods. We built 247 machine learning models based on 1307 known DPP-IV inhibitors with optimized molecular properties and topological fingerprints as descriptors. The overall predictive accuracies of the optimized models were greater than 80%. An external test set, composed of 65 recently reported compounds, was employed to validate the optimized models. The results demonstrated that both NB and RP models have good predictive ability based on different combinations of descriptors. Twenty "good" and twenty "bad" structural fragments for DPP-IV inhibitors can also be derived from these models to inspire new DPP-IV inhibitor scaffold design.
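A bare-bones version of the naïve Bayesian setup is sketched below with Bernoulli naïve Bayes on binary fingerprint vectors; the fingerprints and activity labels are random placeholders (in practice they would be computed from the inhibitor structures), so the printed accuracy only demonstrates the workflow.

```python
# Naive Bayes classification on binary molecular fingerprints (placeholder data).
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
n_bits = 1024
fps = rng.integers(0, 2, size=(1307, n_bits))          # placeholder binary fingerprints
labels = rng.integers(0, 2, size=1307)                 # 1 = active, 0 = inactive (placeholders)

X_train, X_test, y_train, y_test = train_test_split(
    fps, labels, test_size=0.2, random_state=0, stratify=labels)
nb = BernoulliNB().fit(X_train, y_train)
print("hold-out accuracy:", round(accuracy_score(y_test, nb.predict(X_test)), 2))
```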
Wave models for turbulent free shear flows
NASA Technical Reports Server (NTRS)
Liou, W. W.; Morris, P. J.
1991-01-01
New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time dependent motion of the large scale structure of the mixing region are made. The predictions show good agreement with experimental observations.
Most predictions of the effect of climate change on species’ ranges are based on correlations between climate and current species’ distributions. These so-called envelope models may be a good first approximation, but we need demographically mechanistic models to incorporate the ...
NASA Astrophysics Data System (ADS)
Zhao, Siqi; Zhang, Guanglong; Xia, Shuwei; Yu, Liangmin
2018-06-01
As a group of diversified frameworks, quinazoline derivatives display a broad range of biological functions, especially as anticancer agents. To investigate the quantitative structure-activity relationship, 3D-QSAR models were generated with 24 quinazoline scaffold molecules. The experimental and predicted pIC50 values for both training and test set compounds showed good correlation, which proved the robustness and reliability of the generated QSAR models. The most effective CoMFA and CoMSIA models were obtained with non-cross-validated correlation coefficients r2 of 1.00 (both) and leave-one-out coefficients q2 of 0.61 and 0.59, respectively. The predictive abilities of CoMFA and CoMSIA were quite good, with predictive correlation coefficients (r2pred) of 0.97 and 0.91. In addition, the statistical results of CoMFA and CoMSIA were used to design new quinazoline molecules.
Indications of M-Dwarf Deficits in the Halo and Thick Disk of the Galaxy
NASA Technical Reports Server (NTRS)
Konishi, Mihoko; Shibai, Hiroshi; Sumi, Takahiro; Fukagawa, Misato; Matsuo, Taro; Samland, Matthias S.; Yamamoto, Kodai; Sudo, Jun; Itoh, Yoichi; Arimoto, Nobuo;
2014-01-01
We compared the number of faint stars detected in deep survey fields with the current stellar distribution model of the Galaxy and found that the detected number in the H band is significantly smaller than the predicted number. This indicates that M-dwarfs, the major component, are fewer in the halo and the thick disk. We used archived data of several surveys in both the north and south field of GOODS (Great Observatories Origins Deep Survey), MODS in GOODS-N, and ERS and CANDELS in GOODS-S. The number density of M-dwarfs in the halo has to be 20 +/- 13% relative to that in the solar vicinity, in order for the detected number of stars fainter than 20.5 mag in the H band to match with the predicted value from the model. In the thick disk, the number density of M-dwarfs must be reduced (52 +/- 13%) or the scale height must be decreased (approximately 600 pc). Alternatively, overall fractions of the halo and thick disks can be significantly reduced to achieve the same effect, because our sample mainly consists of faint M-dwarfs. Our results imply that the M-dwarf population in regions distant from the Galactic plane is significantly smaller than previously thought. We then discussed the implications this has on the suitability of the model predictions for the prediction of non-companion faint stars in direct imaging extrasolar planet surveys by using the best-fit number densities.
Incorporating uncertainty in predictive species distribution modelling.
Beale, Colin M; Lennon, Jack J
2012-01-19
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
NASA Technical Reports Server (NTRS)
Seybert, A. F.; Wu, X. F.; Oswald, Fred B.
1992-01-01
Analytical and experimental validation of methods to predict structural vibration and radiated noise is presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM modeling using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies that is due to limitations in the FEM model.
Hidden markov model for the prediction of transmembrane proteins using MATLAB.
Chaturvedi, Navaneet; Shanker, Sudhanshu; Singh, Vinay Kumar; Sinha, Dhiraj; Pandey, Paras Nath
2011-01-01
Since membrane proteins play a key role in drug targeting, transmembrane protein prediction is an active and challenging area of biological sciences. Location-based prediction of transmembrane proteins is significant for functional annotation of protein sequences. Hidden Markov model based methods have been widely applied for transmembrane topology prediction. Here we present a revised and more comprehensible model than an existing one for transmembrane protein prediction. MATLAB scripts were written and compiled for parameter estimation of the model, and the model was applied to amino acid sequences to identify transmembrane segments and their adjacent locations. The estimated model of transmembrane topology was based on the TMHMM model architecture. Only 7 super states are defined in the given dataset, which were converted to 96 states on the basis of their length in the sequence. The prediction accuracy of the model was observed to be about 74%, which is good enough in the area of transmembrane topology prediction. We therefore conclude that the hidden Markov model plays a crucial role in transmembrane helix prediction on the MATLAB platform and could also be useful for drug discovery strategies. The database is available for free at bioinfonavneet@gmail.com, vinaysingh@bhu.ac.in.
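To make the decoding step concrete, the sketch below implements a generic Viterbi decoder over a toy three-state topology (inside/membrane/outside) and a two-letter hydrophobicity alphabet. The states, probabilities, and alphabet are invented for illustration and are far simpler than the TMHMM-style architecture estimated in the paper.

```python
# Minimal Viterbi decoding for a toy transmembrane-topology HMM.
import numpy as np

states = ["inside", "membrane", "outside"]
log_start = np.log([0.5, 0.1, 0.4])
log_trans = np.log([[0.89, 0.10, 0.01],
                    [0.05, 0.90, 0.05],
                    [0.01, 0.10, 0.89]])
# emissions over a toy 2-letter alphabet: H = hydrophobic, P = polar
log_emit = np.log([[0.3, 0.7],     # inside
                   [0.8, 0.2],     # membrane
                   [0.3, 0.7]])    # outside
alphabet = {"H": 0, "P": 1}

def viterbi(seq):
    obs = [alphabet[c] for c in seq]
    n, m = len(obs), len(states)
    score = np.full((n, m), -np.inf)
    back = np.zeros((n, m), dtype=int)
    score[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, n):
        for j in range(m):
            prev = score[t - 1] + log_trans[:, j]   # best path into state j at time t
            back[t, j] = np.argmax(prev)
            score[t, j] = prev[back[t, j]] + log_emit[j, obs[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(n - 1, 0, -1):                   # backtrack through the pointers
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

print(viterbi("PPPHHHHHHHHPPP"))
```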
Inter-comparison of time series models of lake levels predicted by several modeling strategies
NASA Astrophysics Data System (ADS)
Khatibi, R.; Ghorbani, M. A.; Naghipour, L.; Jothiprakash, V.; Fathima, T. A.; Fazelifard, M. H.
2014-04-01
Five modeling strategies are employed to analyze water level time series of six lakes with different physical characteristics such as shape, size, altitude and range of variations. The models comprise chaos theory, Auto-Regressive Integrated Moving Average (ARIMA), treated for seasonality and hence SARIMA, Artificial Neural Networks (ANN), Gene Expression Programming (GEP) and Multiple Linear Regression (MLR). Each is formulated on a different premise with different underlying assumptions. Chaos theory is elaborated in greater detail, as it is customary to identify the existence of chaotic signals by a number of techniques (e.g. average mutual information and false nearest neighbors), and future values are predicted using the Nonlinear Local Prediction (NLP) technique. This paper takes a critical view of past inter-comparison studies that seek a superior model, and reports that (i) the performances of all five modeling strategies vary from good to poor, hampering the recommendation of a clear-cut predictive model; (ii) the performances on the datasets of two cases are consistently better with all five modeling strategies; (iii) in the other cases, their performances are poor but the results can still be fit-for-purpose; (iv) the simultaneous good performances of NLP and SARIMA pull their underlying assumptions to different ends, which cannot be reconciled. A number of arguments are presented, including the culture of pluralism, according to which the various modeling strategies facilitate an insight into the data from different vantage points.
Research on cross - Project software defect prediction based on transfer learning
NASA Astrophysics Data System (ADS)
Chen, Ya; Ding, Xiaoming
2018-04-01
To address two challenges in cross-project software defect prediction, namely the distribution differences between the source project and target project datasets and the class imbalance in the data, we propose a cross-project software defect prediction method based on transfer learning, named NTrA. First, the class imbalance in the source project data is addressed with the Augmented Neighborhood Cleaning Algorithm. Second, the data gravity method is used to assign different weights on the basis of the attribute similarity of the source project and target project data. Finally, a defect prediction model is constructed using the Trad boost algorithm. Experiments were conducted using data from NASA and SOFTLAB, respectively, taken from the published PROMISE dataset. The results show that the method achieved good values of recall and F-measure, and achieved good prediction results.
NASA Astrophysics Data System (ADS)
Panoiu, M.; Panoiu, C.; Lihaciu, I. L.
2018-01-01
This research presents an adaptive neuro-fuzzy system used to predict the distance between the pantograph and the contact line of electric locomotives used in railway transportation. In railway transportation, any incident that occurs in the electrical system can have major negative effects: traffic interruptions and equipment damage. Therefore, predicting such situations as accurately as possible is very useful. The paper analyses the possibility of modelling and predicting the variation of the distance between the pantograph and the contact line using intelligent techniques.
Sfakiotakis, Stelios; Vamvuka, Despina
2015-12-01
The pyrolysis of six waste biomass samples was studied and the fuels were kinetically evaluated. A modified independent parallel reactions scheme (IPR) and a distributed activation energy model (DAEM) were developed, and their validity was assessed and compared by checking their accuracy in fitting the experimental results, as well as their prediction capability under different experimental conditions. The pyrolysis experiments were carried out in a thermogravimetric analyzer, and a fitting procedure based on least squares minimization was performed simultaneously at different experimental conditions. A modification of the IPR model, considering dependence of the pre-exponential factor on heating rate, was proved to give better fit results for the same number of tuned kinetic parameters, compared to the known IPR model, and very good prediction results for stepwise experiments. The fit of the calculated data to the experimental ones using the developed DAEM model was also proved to be very good. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sulea, Traian; Hogues, Hervé; Purisima, Enrico O.
2012-05-01
We carried out a prospective evaluation of the utility of the SIE (solvation interaction energy) scoring function for virtual screening and binding affinity prediction. Since experimental structures of the complexes were not provided, this was an exercise in virtual docking as well. We used our exhaustive docking program, Wilma, to provide high-quality poses that were rescored using SIE to provide binding affinity predictions. We also tested the combination of SIE with our latest solvation model, first shell of hydration (FiSH), which captures some of the discrete properties of water within a continuum model. We achieved good enrichment in virtual screening of fragments against trypsin, with an area under the curve of about 0.7 for the receiver operating characteristic curve. Moreover, the early enrichment performance was quite good, with 50% of true actives recovered at a 15% false positive rate in a prospective calculation and at a 3% false positive rate in a retrospective application of SIE with FiSH. Binding affinity predictions for both trypsin and host-guest complexes were generally within 2 kcal/mol of the experimental values. However, the rank ordering of affinities differing by 2 kcal/mol or less was not well predicted. On the other hand, it was encouraging that the incorporation of a more sophisticated solvation model into SIE resulted in better discrimination of true binders from non-binders. This suggests that the inclusion of proper physics in our models is a fruitful strategy for improving the reliability of our binding affinity predictions.
A novel method for structure-based prediction of ion channel conductance properties.
Smart, O S; Breed, J; Smith, G R; Sansom, M S
1997-01-01
A rapid and easy-to-use method of predicting the conductance of an ion channel from its three-dimensional structure is presented. The method combines the pore dimensions of the channel as measured in the HOLE program with an Ohmic model of conductance. An empirically based correction factor is then applied. The method yielded good results for six experimental channel structures (none of which were included in the training set), with predictions accurate to within an average factor of 1.62 of the true values. The predictive r2 was equal to 0.90, which is indicative of a good predictive ability. The procedure is used to validate model structures of alamethicin and phospholamban. Two genuine predictions for the conductance of channels with known structure but without reported conductances are given. A modification of the procedure that calculates the expected results for the effect of the addition of nonelectrolyte polymers on conductance is set out. Results for a cholera toxin B-subunit crystal structure agree well with the measured values. The difficulty in interpreting such studies is discussed, with the conclusion that measurements on channels of known structure are required. PMID:9138559
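The Ohmic part of such an estimate can be sketched as a series sum of thin-slab resistances along the pore axis, followed by an empirical correction. The pore profile, bulk conductivity, and correction value below are illustrative assumptions, not values from the paper.

```python
# Ohmic pore-conductance estimate from a pore radius profile r(z).
import numpy as np

def ohmic_conductance(z, r, kappa):
    """z, r in metres, kappa (bulk conductivity) in S/m; returns conductance in S."""
    dz = np.diff(z)
    r_mid = 0.5 * (r[:-1] + r[1:])
    # resistance of each slab: dz / (kappa * pi * r^2), summed in series
    resistance = np.sum(dz / (kappa * np.pi * r_mid ** 2))
    return 1.0 / resistance

z = np.linspace(0, 3e-9, 61)                        # 3 nm long pore
r = 0.3e-9 + 0.2e-9 * np.sin(np.pi * z / 3e-9)      # radius varying between 0.3 and 0.5 nm
g_ohmic = ohmic_conductance(z, r, kappa=1.5)        # ~1.5 S/m as a rough bulk conductivity
correction = 1 / 5.0                                 # placeholder empirical factor
print("predicted conductance (pS):", round(g_ohmic * correction * 1e12, 1))
```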
The application of improved neural network in hydrocarbon reservoir prediction
NASA Astrophysics Data System (ADS)
Peng, Xiaobo
2013-03-01
This paper uses BP neural network techniques to make hydrocarbon reservoir prediction easier and faster for oil wells in the Tarim Basin. A grey-cascade neural network model is proposed that has faster convergence and a lower error rate. The new method overcomes the shortcomings of the traditional BP neural network, namely slow convergence and a tendency to become trapped in local minima. This study used 220 sets of measured logging data as training samples. By changing the number of neurons and the types of transfer function in the hidden layers, the best prediction model is identified. The conclusion is that the model produces good prediction results in general and can be used for hydrocarbon reservoir prediction.
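The model-selection step described above (varying hidden-layer sizes and transfer functions) can be sketched as a small grid search over an MLP regressor; the logging features and target below are synthetic stand-ins for the well-log data.

```python
# Grid search over hidden-layer sizes and transfer functions for a BP-style network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(6)
X = rng.normal(size=(220, 6))                        # 220 sets of logging curves (placeholders)
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=220)

search = GridSearchCV(
    MLPRegressor(max_iter=5000, random_state=0),
    param_grid={"hidden_layer_sizes": [(5,), (10,), (10, 5)],
                "activation": ["logistic", "tanh", "relu"]},
    cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_, round(-search.best_score_, 4))
```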
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art parametric benchmark model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
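The comparison can be sketched as fitting several nonparametric regressors to the same features and scoring them on a hold-out set, as below; the three synthetic inputs and the single error measure are placeholders for the paper's variables and its four measures.

```python
# Compare nonparametric regressors on synthetic transaction features.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 3))                       # e.g. size, volatility, turnover (placeholders)
impact = 0.5 * np.sqrt(np.abs(X[:, 0])) + 0.2 * X[:, 1] + 0.05 * rng.normal(size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, impact, test_size=0.25, random_state=0)
models = {"GP": GaussianProcessRegressor(),
          "SVR": SVR(),
          "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, round(mean_absolute_error(y_te, m.predict(X_te)), 4))
```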
Advances in modeling sorption and diffusion of moisture in porous reactive materials.
Harley, Stephen J; Glascoe, Elizabeth A; Lewicki, James P; Maxwell, Robert S
2014-06-23
Water-vapor-uptake experiments were performed on a silica-filled poly(dimethylsiloxane) (PDMS) network and modeled by using two different approaches. The data was modeled by using established methods and the model parameters were used to predict moisture uptake in a sample. The predictions are reasonably good, but not outstanding; many of the shortcomings of the modeling are discussed. A high-fidelity modeling approach is derived and used to improve the modeling of moisture uptake and diffusion. Our modeling approach captures the physics and kinetics of diffusion and adsorption/desorption, simultaneously. It predicts uptake better than the established method; more importantly, it is also able to predict outgassing. The material used for these studies is a filled-PDMS network; physical interpretations concerning the sorption and diffusion of moisture in this network are discussed. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ahammad, S Ziauddin; Gomes, James; Sreekrishnan, T R
2011-09-01
Anaerobic degradation of waste involves different classes of microorganisms, and there are different types of interactions among them for substrates, terminal electron acceptors, and so on. A mathematical model is developed based on the mass balance of different substrates, products, and microbes present in the system to study the interaction between methanogens and sulfate-reducing bacteria (SRB). The performance of the major microbial consortia present in the system, such as propionate-utilizing acetogens, butyrate-utilizing acetogens, acetoclastic methanogens, hydrogen-utilizing methanogens, and SRB, was considered and analyzed in the model. The different substrates consumed and products formed during the process were also considered in the model. Comparison of the experimental observations with the model predictions showed very good predictive capability, and the model predictions were validated statistically. It was observed that the model-predicted values matched the experimental data very closely, with an average error of 3.9%.
Finlay, V; Phillips, M; Allison, G T; Wood, F M; Ching, D; Wicaksono, D; Plowman, S; Hendrie, D; Edgar, D W
2015-11-01
As minor burn patients constitute the vast majority of a developed nation case-mix, streamlining care for this group can promote efficiency from a service-wide perspective. This study tested the hypothesis that a predictive nomogram model that estimates the likelihood of good long-term quality of life (QoL) post-burn is a valid way to optimise patient selection and risk management when applying a streamlined model of care. A sample of 224 burn patients managed by the Burn Service of Western Australia who provided both short- and long-term outcomes was used to estimate the probability of achieving a good QoL, defined as 150 out of a possible 160 points on the Burn Specific Health Scale-Brief (BSHS-B), at least six months from injury. A multivariate logistic regression analysis produced a predictive model provisioned as a nomogram for clinical application. A second, independent cohort of consecutive patients (n=106) was used to validate the predictive merit of the nomogram. Male gender (p=0.02), conservative management (p=0.03), upper limb burn (p=0.04) and high BSHS-B score within one month of burn (p<0.001) were significant predictors of good outcome at six months and beyond. A receiver operating characteristic (ROC) curve analysis demonstrated excellent (90%) accuracy overall. At an 80% probability of good outcome, the false positive risk was 14%. The nomogram was validated by running a second ROC analysis of the model in an independent cohort. The analysis confirmed high (86%) overall accuracy of the model, and the risk of false positives was reduced to 10% at a lower (70%) probability. This affirms the stability of the nomogram model in different patient groups over time. An investigation of the effect of missing data on sample selection determined that a greater proportion of younger patients with smaller TBSA burns were excluded due to loss to follow-up. For clinicians managing comparable burn populations, the BSWA burns nomogram is an effective tool to assist the selection of patients to a streamlined care pathway with the aim of improving efficiency of service delivery. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
Gille, Laure-Anne; Marquis-Favre, Catherine; Morel, Julien
2016-09-01
An in situ survey was performed in 8 French cities in 2012 to study the annoyance due to combined transportation noises. As the European Commission recommends using the exposure-response relationships suggested by Miedema and Oudshoorn [Environmental Health Perspective, 2001] to predict annoyance due to a single transportation noise, these exposure-response relationships were tested against the annoyance due to each transportation noise measured during the French survey. These relationships only enabled a good prediction of the percentages of people highly annoyed by road traffic noise. For the percentages of people annoyed and a little annoyed by road traffic noise, the quality of prediction is weak. For aircraft and railway noises, the prediction of annoyance is not satisfactory either. As a consequence, the annoyance equivalents model of Miedema [The Journal of the Acoustical Society of America, 2004], based on these exposure-response relationships, did not enable a good prediction of annoyance due to combined transportation noises. Local exposure-response relationships were derived, following the full computation suggested by Miedema and Oudshoorn [Environmental Health Perspective, 2001]. They led to a better calculation of the annoyance due to each transportation noise in the French cities. A new version of the annoyance equivalents model was proposed using these new exposure-response relationships. This model enabled a better prediction of the total annoyance due to the combined transportation noises. These results therefore encourage improving the annoyance prediction for noise sources in isolation with local or revised exposure-response relationships, which will also contribute to improving annoyance modeling for combined noises. With this aim in mind, a methodology is proposed to consider noise sensitivity in the exposure-response relationships and in the annoyance equivalents model. The results showed that taking such a variable into account did not improve either the exposure-response relationships or the annoyance equivalents model. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lateral Orbitofrontal Inactivation Dissociates Devaluation-Sensitive Behavior and Economic Choice.
Gardner, Matthew P H; Conroy, Jessica S; Shaham, Michael H; Styer, Clay V; Schoenbaum, Geoffrey
2017-12-06
How do we choose between goods that have different subjective values, like apples and oranges? Neuroeconomics proposes that this is done by reducing complex goods to a single unitary value to allow comparison. This value is computed "on the fly" from the underlying model of the goods space, allowing decisions to meet current needs. This is termed "model-based" behavior to distinguish it from pre-determined, habitual, or "model-free" behavior. The lateral orbitofrontal cortex (OFC) supports model-based behavior in rats and primates, but whether the OFC is necessary for economic choice is less clear. Here we tested this question by optogenetically inactivating the lateral OFC in rats in a classic model-based task and during economic choice. Contrary to predictions, inactivation disrupted model-based behavior without affecting economic choice. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Bansal, P. N.; Arseneaux, P. J.; Smith, A. F.; Turnberg, J. E.; Brooks, B. M.
1985-01-01
Results of dynamic response and stability wind tunnel tests of three 62.2 cm (24.5 in) diameter models of the Prop-Fan, advanced turboprop, are presented. Measurements of dynamic response were made with the rotors mounted on an isolated nacelle, with varying tilt for nonuniform inflow. One model was also tested using a semi-span wing and fuselage configuration for response to realistic aircraft inflow. Stability tests were performed using tunnel turbulence or a nitrogen jet for excitation. Measurements are compared with predictions made using beam analysis methods for the model with straight blades, and finite element analysis methods for the models with swept blades. Correlations between measured and predicted rotating blade natural frequencies for all the models are very good. The IP dynamic response of the straight blade model is reasonably well predicted. The IP response of the swept blades is underpredicted and the wing induced response of the straight blade is overpredicted. Two models did not flutter, as predicted. One swept blade model encountered an instability at a higher RPM than predicted, showing predictions to be conservative.
Passenger Flow Forecasting Research for Airport Terminal Based on SARIMA Time Series Model
NASA Astrophysics Data System (ADS)
Li, Ziyu; Bi, Jun; Li, Zhiyin
2017-12-01
Based on operational data from Kunming Changshui International Airport during 2016, this paper proposes a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to predict passenger flow. The approach considers not only the non-stationarity and autocorrelation of the sequence, but also its daily periodicity. The prediction results can accurately describe the change trend of airport passenger flow and provide scientific decision support for the optimal allocation of airport resources and optimization of the departure process. The result shows that this model is applicable to the short-term prediction of airport terminal departure passenger traffic, with an average error ranging from 1% to 3%. The difference between the predicted and true values of passenger traffic flow is quite small, which indicates that the model has fairly good passenger traffic flow prediction ability.
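A minimal SARIMA sketch with statsmodels is shown below, assuming hourly departure counts with a 24-hour season; the synthetic series and the (p,d,q)(P,D,Q,s) orders are illustrative choices, not the paper's fitted model.

```python
# SARIMA forecast of hourly terminal departure flow with a daily (24-step) season.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(8)
hours = pd.date_range("2016-01-01", periods=24 * 60, freq="H")
daily_cycle = 400 + 200 * np.sin(2 * np.pi * (hours.hour - 6) / 24)
flow = pd.Series(daily_cycle + rng.normal(scale=30, size=len(hours)), index=hours)

model = SARIMAX(flow, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fit = model.fit(disp=False)
forecast = fit.get_forecast(steps=24).predicted_mean   # next day's hourly flow
print(forecast.head())
```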
Contaminant dispersal in bounded turbulent shear flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, J.M.; Bernard, P.S.; Chiang, K.F.
The dispersion of smoke downstream of a line source at the wall and at y+ = 30 in a turbulent boundary layer has been predicted with a non-local model of the scalar fluxes ūc and v̄c. The predicted plume from the wall source has been compared to high Schmidt number experimental measurements using a combination of hot-wire anemometry to obtain velocity component data synchronously with concentration data obtained optically. The predicted plumes from the source at y+ = 30 and at the wall have also been compared to a low Schmidt number direct numerical simulation. Near the source, the non-local flux models give considerably better predictions than models which account solely for mean gradient transport. At a sufficient distance downstream the gradient models give reasonably good predictions.
Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P
2017-05-22
PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in the risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age of 40. PREDICT v2 is an improved prognostication and treatment benefit model compared with v1. The online version should continue to aid clinical decision making in women with early breast cancer.
Modeling ready biodegradability of fragrance materials.
Ceriani, Lidia; Papa, Ester; Kovarich, Simona; Boethling, Robert; Gramatica, Paola
2015-06-01
In the present study, quantitative structure activity relationships were developed for predicting ready biodegradability of approximately 200 heterogeneous fragrance materials. Two classification methods, classification and regression tree (CART) and k-nearest neighbors (kNN), were applied to perform the modeling. The models were validated with multiple external prediction sets, and the structural applicability domain was verified by the leverage approach. The best models had good sensitivity (internal ≥80%; external ≥68%), specificity (internal ≥80%; external 73%), and overall accuracy (≥75%). Results from the comparison with BIOWIN global models, based on group contribution method, show that specific models developed in the present study perform better in prediction than BIOWIN6, in particular for the correct classification of not readily biodegradable fragrance materials. © 2015 SETAC.
Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun
2007-09-01
Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of different sample roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that if the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.
Oh, H K; Yu, M J; Gwon, E M; Koo, J Y; Kim, S G; Koizumi, A
2004-01-01
This paper describes the prediction of flux behavior in an ultrafiltration (UF) membrane system using a Kalman neuro training (KNT) network model. The experimental data were obtained by operating a pilot plant of hollow fiber UF membranes on groundwater for 7 months. The network was trained using operating conditions such as inlet pressure and filtration duration, and feed water quality parameters including turbidity, temperature and UV254. Pre-processing of the raw data allowed the normalized inputs to be used with sigmoid activation functions. The neural network architecture was structured by modifying the number of hidden layers, neurons and learning iterations. A KNT neural network with 3 layers and 5 neurons gave a good prediction of permeate flux, with a correlation coefficient of 0.997 during the learning phase. The validity of the designed model was also evaluated with experimental data not used during training, and the nonlinear flux behavior was accurately estimated in the testing phase, with a correlation coefficient of 0.999 and a low prediction error. This good flux prediction can provide preliminary criteria for membrane design and for setting the proper cleaning cycle in membrane operation. The KNT artificial neural network is also expected to predict the variation of transmembrane pressure during filtration cycles and can be applied to the automation and control of full-scale treatment plants.
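Scikit-learn has no Kalman-trained network, so the sketch below substitutes a standard back-propagation MLP with one hidden layer of 5 sigmoid neurons for the KNT architecture; the feature list follows the abstract, but the data and the training algorithm are stand-ins.

```python
# Sketch under stated assumptions: normalized inputs (inlet pressure, filtration
# duration, turbidity, temperature, UV254) feeding a small sigmoid MLP that
# predicts permeate flux. Standard back-propagation replaces Kalman training.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# columns: inlet pressure (bar), duration (min), turbidity (NTU), temperature (C), UV254
X = rng.uniform([0.5, 10, 0.1, 5, 0.01], [2.0, 60, 5.0, 25, 0.10], size=(300, 5))
flux = 80 - 10 * X[:, 2] + 0.8 * X[:, 3] - 0.3 * X[:, 1] + rng.normal(0, 2, 300)  # synthetic

model = make_pipeline(
    MinMaxScaler(),                                        # normalise inputs for sigmoid units
    MLPRegressor(hidden_layer_sizes=(5,), activation="logistic",
                 max_iter=5000, random_state=0))
model.fit(X[:200], flux[:200])
r = np.corrcoef(model.predict(X[200:]), flux[200:])[0, 1]
print("test-phase correlation coefficient:", round(r, 3))
```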
Predictive power of the GRACE score in a population with diabetes.
Baeza-Román, Anna; de Miguel-Balsa, Eva; Latour-Pérez, Jaime; Carrillo-López, Andrés
2017-12-01
Current clinical practice guidelines recommend risk stratification in patients with acute coronary syndrome (ACS) upon admission to hospital. Diabetes mellitus (DM) is widely recognized as an independent predictor of mortality in these patients, although it is not included in the GRACE risk score. The objective of this study is to validate the GRACE risk score in a contemporary population and particularly in the subgroup of patients with diabetes, and to test the effects of including the DM variable in the model. Retrospective cohort study in patients included in the ARIAM-SEMICYUC registry, with a diagnosis of ACS and with available in-hospital mortality data. We tested the predictive power of the GRACE score, calculating the area under the ROC curve. We assessed the calibration of the score and the predictive ability based on type of ACS and the presence of DM. Finally, we evaluated the effect of including the DM variable in the model by calculating the net reclassification improvement. The GRACE score shows good predictive power for hospital mortality in the study population, with a moderate degree of calibration and no significant differences based on ACS type or the presence of DM. Including DM as a variable did not add any predictive value to the GRACE model. The GRACE score has appropriate predictive power, with good calibration and clinical applicability in the subgroup of diabetic patients. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
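The two headline analyses, discrimination via the area under the ROC curve and the net reclassification improvement from adding a diabetes indicator, can be sketched as follows. The data are synthetic and constructed so that DM adds no information, mirroring the study's conclusion only by assumption.

```python
# Minimal sketch: AUC of a GRACE-style score and category-free NRI from adding DM.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
grace = rng.normal(140, 30, n)                              # GRACE score (synthetic)
dm = rng.binomial(1, 0.3, n)                                # diabetes indicator
p = 1 / (1 + np.exp(-(-9 + 0.05 * grace)))                  # mortality risk driven by GRACE only
died = rng.binomial(1, p)

base = LogisticRegression().fit(grace.reshape(-1, 1), died)
full = LogisticRegression().fit(np.column_stack([grace, dm]), died)
p_base = base.predict_proba(grace.reshape(-1, 1))[:, 1]
p_full = full.predict_proba(np.column_stack([grace, dm]))[:, 1]

def continuous_nri(y, p_old, p_new):
    """Category-free NRI: net proportion of subjects reclassified in the right direction."""
    up = p_new > p_old
    events, nonevents = y == 1, y == 0
    return ((up[events].mean() - (~up)[events].mean())
            + ((~up)[nonevents].mean() - up[nonevents].mean()))

print("AUC (GRACE only):", round(roc_auc_score(died, p_base), 3))
print("NRI from adding DM:", round(continuous_nri(died, p_base, p_full), 3))
```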
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkler, David A., E-mail: dave.winkler@csiro.au
2016-05-15
Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned that they cannot assess the potential hazards of these materials adequately, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping to understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly those that are QSAR-based, have made in understanding and predicting potentially adverse biological effects of nanomaterials, and also the limitations and pitfalls of these methods. - Highlights: • Nanomaterials regulators need good information to make good decisions. • Nanomaterials and their interactions with biology are very complex. • Computational methods use existing data to predict properties of new nanomaterials. • Statistical, data driven modelling methods have been successfully applied to this task. • Much more must be learnt before robust toolkits will be widely usable by regulators.
Molecular modeling of the microstructure evolution during carbon fiber processing
NASA Astrophysics Data System (ADS)
Desai, Saaketh; Li, Chunyu; Shen, Tongtong; Strachan, Alejandro
2017-12-01
The rational design of carbon fibers with desired properties requires quantitative relationships between the processing conditions, microstructure, and resulting properties. We developed a molecular model that combines kinetic Monte Carlo and molecular dynamics techniques to predict the microstructure evolution during the processes of carbonization and graphitization of polyacrylonitrile (PAN)-based carbon fibers. The model accurately predicts the cross-sectional microstructure of the fibers with the molecular structure of the stabilized PAN fibers and physics-based chemical reaction rates as the only inputs. The resulting structures exhibit key features observed in electron microscopy studies such as curved graphitic sheets and hairpin structures. In addition, computed X-ray diffraction patterns are in good agreement with experiments. We predict the transverse moduli of the resulting fibers to be between 1 GPa and 5 GPa, in good agreement with experimental results for high modulus fibers and slightly lower than those of high-strength fibers. The transverse modulus is governed by sliding between graphitic sheets, and the relatively low value for the predicted microstructures can be attributed to their perfect longitudinal texture. Finally, the simulations provide insight into the relationships between chemical kinetics and the final microstructure; we observe that high reaction rates result in porous structures with lower moduli.
Diagnosis of streamflow prediction skills in Oregon using Hydrologic Landscape Classification
A complete understanding of why rainfall-runoff models provide good streamflow predictions at catchments in some regions, but fail to do so in other regions, has still not been achieved. Here, we argue that a hydrologic classification system is a robust conceptual tool that is w...
Does Parsonnet scoring model predict mortality following adult cardiac surgery in India?
Srilata, Moningi; Padhy, Narmada; Padmaja, Durga; Gopinath, Ramachandran
2015-01-01
To validate the Parsonnet scoring model for predicting mortality following adult cardiac surgery in the Indian scenario. A total of 889 consecutive patients undergoing adult cardiac surgery between January 2010 and April 2011 were included in the study. The Parsonnet score was determined for each patient and its predictive ability for in-hospital mortality was evaluated. The validation of the Parsonnet score was performed for the total data and separately for the sub-groups coronary artery bypass grafting (CABG), valve surgery and combined procedures (CABG with valve surgery). Model calibration was assessed using the Hosmer-Lemeshow goodness-of-fit test and discrimination using receiver operating characteristic (ROC) analysis. Independent predictors of mortality were assessed from the variables used in the Parsonnet score by multivariate regression analysis. The overall mortality was 6.3% (56 patients): 7.1% (34 patients) for CABG, 4.3% (16 patients) for valve surgery and 16.2% (6 patients) for combined procedures. The Hosmer-Lemeshow test p value was <0.05 for the total data and also within the sub-groups, suggesting that the outcome predicted using the Parsonnet score did not match the observed outcome. The area under the ROC curve for the total data was 0.699 (95% confidence interval 0.62-0.77) and, when tested separately, it was 0.73 (0.64-0.81) for CABG, 0.79 (0.63-0.92) for valve surgery (good discriminatory ability) and only 0.55 (0.26-0.83) for combined procedures. The independent predictors of mortality determined for the total data were low ejection fraction (odds ratio [OR] 1.7), preoperative intra-aortic balloon pump (OR 10.7), combined procedures (OR 5.1), dialysis dependency (OR 23.4), and re-operation (OR 9.4). The Parsonnet score yielded good predictive value for valve surgeries, moderate predictive value for the total data and for CABG, and poor predictive value for combined procedures.
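The two validation steps used above can be sketched with a small Hosmer-Lemeshow implementation and an AUC calculation. The data here are synthetic and well calibrated by construction; only the procedure, not the study's results, is illustrated.

```python
# Sketch: Hosmer-Lemeshow goodness-of-fit test (deciles of predicted risk) and AUC
# for a set of predicted Parsonnet-style risks against observed deaths.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y, p, groups=10):
    """Return the H-L chi-square statistic and its p-value."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(p)), groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, groups - 2)

rng = np.random.default_rng(4)
p_hat = rng.uniform(0.01, 0.4, 889)                 # synthetic predicted risks
deaths = rng.binomial(1, p_hat)                     # outcomes consistent with the risks
stat, pval = hosmer_lemeshow(deaths, p_hat)
print("H-L p-value:", round(pval, 3), " AUC:", round(roc_auc_score(deaths, p_hat), 3))
```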
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkvord, Sigurd; Flatmark, Kjersti; Department of Cancer and Surgery, Norwegian Radium Hospital, Oslo University Hospital
2010-10-01
Purpose: Tumor response of rectal cancer to preoperative chemoradiotherapy (CRT) varies considerably. In experimental tumor models and clinical radiotherapy, activity of particular subsets of kinase signaling pathways seems to predict radiation response. This study aimed to determine whether tumor kinase activity profiles might predict tumor response to preoperative CRT in locally advanced rectal cancer (LARC). Methods and Materials: Sixty-seven LARC patients were treated with a CRT regimen consisting of radiotherapy, fluorouracil, and, where possible, oxaliplatin. Pretreatment tumor biopsy specimens were analyzed using microarrays with kinase substrates, and the resulting substrate phosphorylation patterns were correlated with tumor response to preoperative treatment as assessed by histomorphologic tumor regression grade (TRG). A predictive model for TRG scores from phosphosubstrate signatures was obtained by partial-least-squares discriminant analysis. Prediction performance was evaluated by leave-one-out cross-validation and use of an independent test set. Results: In the patient population, 73% and 15% were scored as good responders (TRG 1-2) or intermediate responders (TRG 3), whereas 12% were assessed as poor responders (TRG 4-5). In a subset of 7 poor responders and 12 good responders, treatment outcome was correctly predicted for 95%. Application of the prediction model to the remaining patient samples resulted in correct prediction for 85%. Phosphosubstrate signatures generated by poor-responding tumors indicated high kinase activity, which was inhibited by the kinase inhibitor sunitinib, and several discriminating phosphosubstrates represented proteins derived from signaling pathways implicated in radioresistance. Conclusions: Multiplex kinase activity profiling may identify functional biomarkers predictive of tumor response to preoperative CRT in LARC.
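A hedged sketch of the analysis design (partial-least-squares discriminant analysis with leave-one-out cross-validation) is given below on synthetic phosphosubstrate data; the substrate count, effect sizes and threshold are assumptions, not the study's values.

```python
# Illustrative sketch: PLS-DA of phosphosubstrate profiles with leave-one-out CV.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
n_patients, n_substrates = 19, 144
X = rng.normal(size=(n_patients, n_substrates))     # synthetic phosphosubstrate signals
y = np.array([0] * 12 + [1] * 7)                    # 0 = good responder, 1 = poor responder
X[y == 1, :20] += 1.0                               # poor responders: higher kinase activity

correct = 0
for train, test in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=2).fit(X[train], y[train])
    pred = int(pls.predict(X[test]).ravel()[0] > 0.5)   # threshold the PLS-DA score
    correct += int(pred == y[test][0])
print("leave-one-out accuracy:", round(correct / n_patients, 2))
```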
NASA Astrophysics Data System (ADS)
Pôças, Isabel; Gonçalves, João; Costa, Patrícia Malva; Gonçalves, Igor; Pereira, Luís S.; Cunha, Mario
2017-06-01
In this study, hyperspectral reflectance (HySR) data derived from a handheld spectroradiometer were used to assess the water status of three grapevine cultivars in two sub-regions of the Douro wine region during two consecutive years. A large set of potential predictors derived from the HySR data were considered for modelling/predicting the predawn leaf water potential (Ψpd) through different statistical and machine learning techniques. Three HySR vegetation indices were selected as final predictors for the computation of the models, and the in-season time trend was removed from the data by using a time predictor. The vegetation indices selected were the Normalized Reflectance Index for the wavelengths 554 nm and 561 nm (NRI554;561), the water index (WI) for the wavelengths 900 nm and 970 nm, and the D1 index, which is associated with the rate of reflectance increase at the wavelengths of 706 nm and 730 nm. These vegetation indices covered the green, red edge and near infrared domains of the electromagnetic spectrum. A large set of state-of-the-art statistical and machine-learning modelling techniques was tested. Predictive modelling techniques based on the generalized boosted model (GBM), bagged multivariate adaptive regression splines (B-MARS), the generalized additive model (GAM), and Bayesian regularized neural networks (BRNN) showed the best performance for predicting Ψpd, with an average determination coefficient (R2) ranging between 0.78 and 0.80 and RMSE varying between 0.11 and 0.12 MPa. When cultivar Touriga Nacional was used for training the models and the cultivars Touriga Franca and Tinta Barroca for testing (independent validation), the models' performance was good, particularly for GBM (R2 = 0.85; RMSE = 0.09 MPa). Additionally, the comparison of observed and predicted Ψpd showed an equitable dispersion of data from the various cultivars. The results achieved show the good potential of these predictive models based on vegetation indices to support irrigation scheduling in vineyards.
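A sketch of how the selected indices feed a boosted regression model follows, with hypothetical reflectance values. The index definitions are simple interpretations of the abstract (the D1 index is taken here as a reflectance slope between 706 and 730 nm), and the data are synthetic.

```python
# Hedged sketch: compute NRI(554,561), WI(900,970) and a D1-style slope index
# from assumed reflectance values, then fit a gradient-boosting model for Ψpd.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def nri(r554, r561):
    return (r554 - r561) / (r554 + r561)

def water_index(r900, r970):
    return r900 / r970

def d1_index(r706, r730, d_nm=24.0):
    """Simple slope between 706 and 730 nm (one interpretation of D1)."""
    return (r730 - r706) / d_nm

rng = np.random.default_rng(6)
n = 120
refl = {w: rng.uniform(0.05, 0.6, n) for w in (554, 561, 706, 730, 900, 970)}
X = np.column_stack([nri(refl[554], refl[561]),
                     water_index(refl[900], refl[970]),
                     d1_index(refl[706], refl[730]),
                     rng.uniform(0, 1, n)])            # in-season time predictor
psi_pd = -0.2 - 0.8 * X[:, 3] + rng.normal(0, 0.1, n)  # synthetic predawn potential (MPa)

gbm = GradientBoostingRegressor(random_state=0).fit(X[:90], psi_pd[:90])
rmse = np.sqrt(np.mean((gbm.predict(X[90:]) - psi_pd[90:]) ** 2))
print("test RMSE (MPa):", round(rmse, 3))
```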
NASA Astrophysics Data System (ADS)
Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.
2012-04-01
Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purpose of determining whether any substantial differences exist between the two options. This analysis will address recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. These differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.
F-RAG: Generating Atomic Coordinates from RNA Graphs by Fragment Assembly.
Jain, Swati; Schlick, Tamar
2017-11-24
Coarse-grained models represent attractive approaches to analyze and simulate ribonucleic acid (RNA) molecules, for example, for structure prediction and design, as they simplify the RNA structure to reduce the conformational search space. Our structure prediction protocol RAGTOP (RNA-As-Graphs Topology Prediction) represents RNA structures as tree graphs and samples graph topologies to produce candidate graphs. However, for more detailed study and analysis, construction of atomic models from coarse-grained models is required. Here we present our graph-based fragment assembly algorithm (F-RAG) to convert candidate three-dimensional (3D) tree graph models, produced by RAGTOP, into atomic structures. We use our related RAG-3D utilities to partition graphs into subgraphs and search for structurally similar atomic fragments in a data set of RNA 3D structures. The fragments are edited and superimposed using common residues, full atomic models are scored using RAGTOP's knowledge-based potential, and the geometries of top-scoring models are optimized. To evaluate our models, we assess all-atom RMSDs and Interaction Network Fidelity (a measure of residue interactions) with respect to experimentally solved structures and compare our results to other fragment assembly programs. For a set of 50 RNA structures, we obtain atomic models with reasonable geometries and interactions, with particularly good results for RNAs containing junctions. Additional improvements to our protocol and databases are outlined. These results provide a good foundation for further work on RNA structure prediction and design applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Meta-analysis suggests choosy females get sexy sons more than "good genes".
Prokop, Zofia M; Michalczyk, Łukasz; Drobniak, Szymon M; Herdegen, Magdalena; Radwan, Jacek
2012-09-01
Female preferences for specific male phenotypes have been documented across a wide range of animal taxa, including numerous species where males contribute only gametes to offspring production. Yet, selective pressures maintaining such preferences are among the major unknowns of evolutionary biology. Theoretical studies suggest that preferences can evolve if they confer genetic benefits in terms of increased attractiveness of sons ("Fisherian" models) or overall fitness of offspring ("good genes" models). These two types of models predict, respectively, that male attractiveness is heritable and genetically correlated with fitness. In this meta-analysis, we draw general conclusions from over two decades worth of empirical studies testing these predictions (90 studies on 55 species in total). We found evidence for heritability of male attractiveness. However, attractiveness showed no association with traits directly associated with fitness (life-history traits). Interestingly, it did show a positive correlation with physiological traits, which include immunocompetence and condition. In conclusion, our results support "Fisherian" models of preference evolution, while providing equivocal evidence for "good genes." We pinpoint research directions that should stimulate progress in our understanding of the evolution of female choice. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
Developing a predictive tropospheric ozone model for Tabriz
NASA Astrophysics Data System (ADS)
Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi
2013-04-01
Predictive ozone models are becoming indispensable tools, providing a capability for pollution alerts to serve people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, using the following five modeling strategies: three regression-type methods, Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models, Nonlinear Local Prediction (NLP), implementing chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type modeling strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed, while the auto-regression-type strategies regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
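One of the auto-regression strategies, ARIMA, can be sketched on a synthetic hourly ozone series with statsmodels. The model order (2, 0, 1) is an arbitrary illustration, not the order used in the study, and the diurnal series is invented.

```python
# Hedged sketch: ARIMA forecast of an hourly ozone series (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
hours = pd.date_range("2010-08-01", periods=500, freq="h")
o3 = 30 + 15 * np.sin(2 * np.pi * np.arange(500) / 24) + rng.normal(0, 3, 500)  # ppb
series = pd.Series(o3, index=hours)

fit = ARIMA(series[:-24], order=(2, 0, 1)).fit()
forecast = fit.forecast(steps=24)                    # predict the next 24 hours
rmse = float(np.sqrt(np.mean((forecast.values - series[-24:].values) ** 2)))
print("RMSE over the hold-out day (ppb):", round(rmse, 2))
```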
Data mining of tree-based models to analyze freeway accident frequency.
Chang, Li-Yen; Chen, Wen-Chieh
2005-01-01
Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models have their own model assumptions and pre-defined underlying relationship between dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants for freeway accident frequencies. By comparing the prediction performance between the CART and the negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
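The comparison can be sketched with a negative binomial GLM and a regression tree (standing in for CART) on synthetic accident counts; the predictor names follow the abstract, while the data and coefficients are invented.

```python
# Minimal sketch: negative binomial GLM versus a regression tree for accident counts.
import numpy as np
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(8)
n = 300
adt = rng.uniform(20, 120, n)                        # average daily traffic (1000 veh/day)
rain = rng.uniform(0, 30, n)                         # precipitation (mm)
mu = np.exp(-1.0 + 0.02 * adt + 0.03 * rain)
accidents = rng.poisson(mu)                          # synthetic accident counts

X_glm = sm.add_constant(np.column_stack([adt, rain]))
nb = sm.GLM(accidents, X_glm, family=sm.families.NegativeBinomial()).fit()
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(
    np.column_stack([adt, rain]), accidents)

mae_nb = np.mean(np.abs(nb.predict(X_glm) - accidents))
mae_tree = np.mean(np.abs(tree.predict(np.column_stack([adt, rain])) - accidents))
print("MAE  NB GLM:", round(mae_nb, 3), "  tree:", round(mae_tree, 3))
```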
Lee, Seung Yup; Skolnick, Jeffrey
2007-07-01
To improve the accuracy of TASSER models especially in the limit where threading provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm which uses the templates and contact restraints from TASSER generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single domain proteins that are ≤ 200 in length and that cover the PDB at the level of 35% pairwise sequence identity. Overall, TASSER(iter) models have a smaller global average RMSD of 5.48 Å compared to 5.81 Å RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 Å (4.35 Å) for the Easy set and 9.05 Å (9.52 Å) for the Hard set. The largest reduction of average RMSD is for the Medium set where the TASSER(iter) models have an average global RMSD of 5.67 Å compared to 6.72 Å of the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have a RMSD to the native <6.5 Å, TASSER(iter) shows obvious improvement over TASSER models: For the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets where the success rate improves from 32.0 to 34.8%, with the smallest improvement in the Easy targets from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions. 2007 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Paul, Suman; Ali, Muhammad; Chatterjee, Rima
2018-01-01
The compressional wave velocity (Vp) of coal and non-coal lithologies is predicted from five wells in the Bokaro coalfield (CF), India. Shear sonic travel time logs were not recorded for all wells in the study area; shear wave velocity (Vs) is available only for two wells, one from east and one from west Bokaro CF. The major lithologies of this CF are dominated by coal and shaly coal of the Barakar formation. This paper focuses on (a) the relationship between Vp and Vs, (b) the prediction of Vp using regression and neural network modeling, and (c) the estimation of maximum horizontal stress from image logs. Coal is characterized by low acoustic impedance (AI) compared with the overlying and underlying strata. The cross-plot between AI and Vp/Vs is able to identify coal, shaly coal, shale and sandstone in wells in Bokaro CF. The relationship between Vp and Vs is obtained with excellent goodness of fit (R2) ranging from 0.90 to 0.93. Linear multiple regression and multi-layered feed-forward neural network (MLFN) models are developed for predicting Vp from two wells using four input log parameters: gamma ray, resistivity, bulk density and neutron porosity. The regression-model-predicted Vp shows poor to good fit (R2 from 0.28 to 0.79) with the observed velocity, while the MLFN-model-predicted Vp shows satisfactory to good R2 values, varying from 0.62 to 0.92, with the observed velocity. The maximum horizontal stress orientation at a well in west Bokaro CF is studied from the Formation Micro-Imager (FMI) log. Breakouts and drilling-induced fractures (DIFs) are identified from the FMI log. A breakout length of 4.5 m is oriented towards N60°W, whereas the orientation of DIFs over a cumulative length of 26.5 m varies from N15°E to N35°E. The mean maximum horizontal stress in this CF is towards N28°E.
A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.
Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao
2016-07-01
Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive value. Our aim was to develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression. Variables with p value <0.25 in the univariate analysis were further evaluated in multivariate models using backward elimination. Discrimination was assessed using the receiver operating characteristic curve. Goodness of fit was evaluated using McFadden's R2. The net reclassification index (NRI) associated with incorporation of laboratory results was calculated. Results were internally validated. A model similar to existing CRC prediction models, which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy, had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21, with an NRI of 47.6%. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81), with a McFadden's R2 of 0.27 and an NRI of 60.7%. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
A predictive scoring instrument for tuberculosis lost to follow-up outcome
2012-01-01
Background Adherence to tuberculosis (TB) treatment is troublesome, due to the long therapy duration, the quick therapeutic response, which allows patients to disregard the rest of their treatment, and the patients' lack of motivation once they feel improved. The objective of this study was to develop and validate a scoring system to predict the probability of a lost to follow-up outcome in TB patients, as a way to identify patients suitable for directly observed treatment (DOT) and other interventions to improve adherence. Methods Two prospective cohorts were used to develop and validate a logistic regression model. A scoring system was constructed based on the coefficients of factors associated with a lost to follow-up outcome. The probability of a lost to follow-up outcome associated with each score was calculated. Predictions in both cohorts were tested using receiver operating characteristic (ROC) curves. Results The best model to predict a lost to follow-up outcome included the following characteristics: immigration (1 point), living alone (1 point) or in an institution (2 points), previous anti-TB treatment (2 points), poor patient understanding (2 points), intravenous drug use (IDU) (4 points) or unknown IDU status (1 point). Scores of 0, 1, 2, 3, 4 and 5 points were associated with a lost to follow-up probability of 2.2%, 5.4%, 9.9%, 16.4%, 15%, and 28%, respectively. The ROC curve for the validation group demonstrated a good fit (AUC: 0.67 [95% CI 0.65-0.70]). Conclusion This model has a good capacity to predict a lost to follow-up outcome. Its use could help TB programs to determine which patients are good candidates for DOT and other strategies to improve TB treatment adherence. PMID:22938040
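How a points-based score follows from a logistic model can be sketched as below: divide each coefficient by a reference value, round to integers, then map a total score back to a probability. The coefficients and intercept are invented for illustration and are not those fitted in the study.

```python
# Sketch with invented coefficients: logistic regression coefficients -> integer
# points -> probability of a lost to follow-up outcome for a given total score.
import numpy as np

coeffs = {"immigration": 0.55, "living_alone": 0.50, "institution": 1.05,
          "previous_tb_treatment": 1.10, "poor_understanding": 1.00,
          "idu": 1.85, "idu_unknown": 0.45}          # hypothetical log-odds ratios
intercept = -3.8                                     # hypothetical intercept
ref = min(coeffs.values())                           # smallest effect = 1 point

points = {k: int(round(v / ref)) for k, v in coeffs.items()}
print("points per characteristic:", points)

def prob_lost_to_follow_up(score):
    """Probability implied by a total score via the logistic back-transform."""
    return 1 / (1 + np.exp(-(intercept + score * ref)))

for s in range(6):
    print(s, "points ->", round(100 * prob_lost_to_follow_up(s), 1), "%")
```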
Information as a Measure of Model Skill
NASA Astrophysics Data System (ADS)
Roulston, M. S.; Smith, L. A.
2002-12-01
Physicist Paul Davies has suggested that rather than the quest for laws that approximate ever more closely to "truth", science should be regarded as the quest for compressibility. The goodness of a model can be judged by the degree to which it allows us to compress data describing the real world. The "logarithmic scoring rule" is a method for evaluating probabilistic predictions of reality that turns this philosophical position into a practical means of model evaluation. This scoring rule measures the information deficit or "ignorance" of someone in possession of the prediction. A more applied viewpoint is that the goodness of a model is determined by its value to a user who must make decisions based upon its predictions. Any form of decision making under uncertainty can be reduced to a gambling scenario. Kelly showed that the value of a probabilistic prediction to a gambler pursuing the maximum return on their bets depends on their "ignorance", as determined from the logarithmic scoring rule, thus demonstrating a one-to-one correspondence between data compression and gambling returns. Thus information theory provides a way to think about model evaluation that is both philosophically satisfying and practically oriented. P.C.W. Davies, in "Complexity, Entropy and the Physics of Information", Proceedings of the Santa Fe Institute, Addison-Wesley, 1990; J. Kelly, Bell Sys. Tech. Journal, 35, 916-926, 1956.
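A small worked example of these two quantities follows: the ignorance (logarithmic) score of a probabilistic forecast, and the expected log-wealth growth of a Kelly bettor using that forecast at fair odds. The forecast and "true" distributions are invented.

```python
# Worked sketch: logarithmic scoring rule ("ignorance") and the Kelly growth rate.
import numpy as np

def ignorance(p_forecast, outcome_index):
    """Ignorance = -log2 of the probability assigned to the outcome that occurred."""
    return -np.log2(p_forecast[outcome_index])

def kelly_growth_rate(p_true, p_forecast):
    """Expected log2 wealth growth per bet when stakes follow the forecast and
    odds are fair with respect to p_true; equals minus the relative entropy."""
    p_true, p_forecast = np.asarray(p_true), np.asarray(p_forecast)
    return np.sum(p_true * np.log2(p_forecast / p_true))

climatology = np.array([0.3, 0.4, 0.3])              # baseline forecast
model = np.array([0.1, 0.2, 0.7])                    # sharper model forecast
truth = np.array([0.05, 0.15, 0.80])                 # "true" outcome distribution

print("ignorance if outcome 3 occurs (model):", round(ignorance(model, 2), 3))
print("expected ignorance, climatology:", round(-np.sum(truth * np.log2(climatology)), 3))
print("expected ignorance, model      :", round(-np.sum(truth * np.log2(model)), 3))
print("Kelly growth penalty, model    :", round(kelly_growth_rate(truth, model), 3))
```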
Droplet Deformation Prediction With the Droplet Deformation and Breakup Model (DDB)
NASA Technical Reports Server (NTRS)
Vargas, Mario
2012-01-01
The Droplet Deformation and Breakup Model was used to predict deformation of droplets approaching the leading edge stagnation line of an airfoil. The quasi-steady model was solved for each position along the droplet path. A program was developed to solve the non-linear, second order, ordinary differential equation that governs the model. A fourth order Runge-Kutta method was used to solve the equation. Experimental slip velocities from droplet breakup studies were used as input to the model, which required the slip velocity along the particle path. The center-of-mass displacement predictions were compared with the experimental measurements from the droplet breakup studies for droplets with radii in the range of 200 to 700 µm approaching the airfoil at 50 and 90 m/sec. The model predictions were good for the displacement of the center of mass for small and medium-sized droplets. For larger droplets the model predictions did not agree with the experimental results.
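The numerical procedure (reduce a second-order ODE to a first-order system and march it with classical fourth-order Runge-Kutta) can be sketched generically; the equation integrated below is a damped, forced oscillator standing in for the DDB governing equation, not the DDB equation itself.

```python
# Generic sketch of the solution procedure: fixed-step classical RK4 applied to
# a second-order ODE written as a first-order system y = [x, x'].
import numpy as np

def rk4(f, t0, y0, t_end, h):
    """Integrate dy/dt = f(t, y) with fixed-step classical Runge-Kutta 4."""
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    while t < t_end - 1e-12:
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        ts.append(t); ys.append(y.copy())
    return np.array(ts), np.array(ys)

# Placeholder dynamics: x'' + c x' + k x = F(t), with a brief aerodynamic "push".
c, k = 0.8, 25.0
forcing = lambda t: 1.0 if t < 0.05 else 0.0
f = lambda t, y: np.array([y[1], forcing(t) - c * y[1] - k * y[0]])

ts, ys = rk4(f, t0=0.0, y0=[0.0, 0.0], t_end=1.0, h=1e-3)
print("max centre-of-mass displacement (arbitrary units):", round(ys[:, 0].max(), 4))
```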
NASA Technical Reports Server (NTRS)
Abid, R.; Speziale, C. G.
1993-01-01
Turbulent channel flow and homogeneous shear flow have served as basic building block flows for the testing and calibration of Reynolds stress models. A direct theoretical connection is made between homogeneous shear flow in equilibrium and the log-layer of fully-developed turbulent channel flow. It is shown that if a second-order closure model is calibrated to yield good equilibrium values for homogeneous shear flow it will also yield good results for the log-layer of channel flow provided that the Rotta coefficient is not too far removed from one. Most of the commonly used second-order closure models introduce an ad hoc wall reflection term in order to mask deficient predictions for the log-layer of channel flow that arise either from an inaccurate calibration of homogeneous shear flow or from the use of a Rotta coefficient that is too large. Illustrative model calculations are presented to demonstrate this point which has important implications for turbulence modeling.
Brouwer, Marieke T; Thoden van Velzen, Eggo U; Augustinus, Antje; Soethoudt, Han; De Meester, Steven; Ragaert, Kim
2018-01-01
The Dutch post-consumer plastic packaging recycling network has been described in detail (both on the level of packaging types and of materials) from the household potential to the polymeric composition of the recycled milled goods. The compositional analyses of 173 different samples of post-consumer plastic packaging from different locations in the network were combined to indicatively describe the complete network with material flow analysis, data reconciliation techniques and process technological parameters. The derived potential of post-consumer plastic packages in the Netherlands in 2014 amounted to 341 Gg net (or 20.2 kg net·cap⁻¹·a⁻¹). The complete recycling network produced 75.2 Gg milled goods, 28.1 Gg side products and 16.7 Gg process waste. Hence the net recycling chain yield for post-consumer plastic packages equalled 30%. The end-of-life fates for 35 different plastic packaging types were resolved. Additionally, the polymeric compositions of the milled goods and the recovered masses were derived with this model. These compositions were compared with experimentally determined polymeric compositions of recycled milled goods, which confirmed that the model predicts these compositions reasonably well. Also the modelled recovered masses corresponded reasonably well with those measured experimentally. The model clarified the origin of polymeric contaminants in recycled plastics, either sorting faults or packaging components, which gives directions for future improvement measures. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Eck, Marshall; Mukunda, Meera
1988-01-01
A calculational method is described which provides a powerful tool for predicting solid rocket motor (SRM) casing and liquid rocket tankage fragmentation response. The approach properly partitions the available impulse to each major system-mass component. It uses the Pisces code developed by Physics International to couple the forces generated by an Eulerian-modeled gas flow field to a Lagrangian-modeled fuel and casing system. The details of the predictive analytical modeling process and the development of normalized relations for momentum partition as a function of SRM burn time and initial geometry are discussed. Methods for applying similar modeling techniques to liquid-tankage-overpressure failures are also discussed. Good agreement between predictions and observations is obtained for five specific events.
Brown, Fred; Adelson, David; White, Deborah; Hughes, Timothy; Chaudhri, Naeem
2017-01-01
Background Treatment of patients with chronic myeloid leukaemia (CML) has become increasingly difficult in recent years due to the variety of treatment options available and the challenge of deciding on the most appropriate treatment strategy for an individual patient. To facilitate the treatment strategy decision, disease assessment should involve molecular response to initial treatment for an individual patient. Patients predicted not to achieve major molecular response (MMR) at 24 months to frontline imatinib may be better treated with alternative frontline therapies, such as nilotinib or dasatinib. The aims of this study were to i) understand the clinical prediction ‘rules’ for predicting MMR at 24 months for CML patients treated with imatinib using clinical, molecular, and cell count observations (predictive factors collected at diagnosis and categorised based on available knowledge) and ii) develop a predictive model for CML treatment management. This predictive model was developed, based on CML patients undergoing imatinib therapy enrolled in the TIDEL II clinical trial, with an experimentally identified achieving-MMR group and a non-achieving-MMR group, by addressing the challenge as a machine learning problem. The recommended model was validated externally using an independent data set from King Faisal Specialist Hospital and Research Centre, Saudi Arabia. Principal Findings The common prognostic scores yielded similar sensitivity performance in testing and validation datasets and are therefore good predictors of the positive group. The G-mean and F-score values in our models outperformed the common prognostic scores in testing and validation datasets and are therefore good predictors for both the positive and negative groups. Furthermore, a high PPV above 65% indicated that our models are appropriate for making decisions at diagnosis and pre-therapy. Study limitations include that prior knowledge may change based on varying expert opinions; hence, representing the category boundaries of each predictive factor could dramatically change the performance of the models. PMID:28045960
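The evaluation metrics quoted above (sensitivity, specificity, PPV, G-mean and F-score) can be sketched from a binary confusion matrix; the toy labels below are illustrative, with the positive class being achievement of MMR at 24 months.

```python
# Small sketch: G-mean, F-score and PPV from a binary confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

def mmr_metrics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    g_mean = np.sqrt(sensitivity * specificity)
    f_score = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "PPV": ppv, "G-mean": g_mean, "F-score": f_score}

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 1])     # toy outcomes
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 1, 0, 1])     # toy predictions
print(mmr_metrics(y_true, y_pred))
```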
A Predictive Model of Daily Seismic Activity Induced by Mining, Developed with Data Mining Methods
NASA Astrophysics Data System (ADS)
Jakubowski, Jacek
2014-12-01
The article presents the development and evaluation of a predictive classification model of daily seismic energy emissions induced by longwall mining in sector XVI of the Piast coal mine in Poland. The model uses data on tremor energy, basic characteristics of the longwall face and mined output in this sector over the period from July 1987 to March 2011. The predicted binary variable is the occurrence of a daily sum of tremor seismic energies in a longwall that is greater than or equal to the threshold value of 10⁵ J. Three data mining analytical methods were applied: logistic regression, neural networks, and stochastic gradient boosted trees. The boosted trees model was chosen as the best for the purposes of prediction. The validation sample results showed its good predictive capability, taking the complex nature of the phenomenon into account. This may indicate the applied model's suitability for a sequential, short-term prediction of mining-induced seismic activity.
NASA Astrophysics Data System (ADS)
Hayati, M.; Rashidi, A. M.; Rezaei, A.
2012-10-01
In this paper, the applicability of ANFIS as an accurate model for predicting the mass gain during high-temperature oxidation, using experimental data obtained for aluminized nanostructured (NS) nickel, is presented. For developing the model, exposure time and temperature are taken as inputs and the mass gain as output. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the network. We have compared the proposed ANFIS model with experimental data. The predicted data are found to be in good agreement with the experimental data, with a mean relative error of less than 1.1%. Therefore, we can use the ANFIS model to predict the performance of thermal systems in engineering applications, such as modeling the mass gain for NS materials.
Predicting the activity of drugs for a group of imidazopyridine anticoccidial compounds.
Si, Hongzong; Lian, Ning; Yuan, Shuping; Fu, Aiping; Duan, Yun-Bo; Zhang, Kejun; Yao, Xiaojun
2009-10-01
Gene expression programming (GEP) is a novel machine learning technique. GEP is used to build a nonlinear quantitative structure-activity relationship model for the prediction of the IC50 of the imidazopyridine anticoccidial compounds. This model is based on descriptors which are calculated from the molecular structure. Four descriptors are selected from the descriptor pool by the heuristic method (HM) to build a multivariable linear model. The GEP method produced a nonlinear quantitative model with a correlation coefficient and a mean error of 0.96 and 0.24 for the training set, and 0.91 and 0.52 for the test set, respectively. It is shown that the GEP-predicted results are in good agreement with the experimental ones.
Vibrational kinetics in CO electric discharge lasers - Modeling and experiments
NASA Technical Reports Server (NTRS)
Stanton, A. C.; Hanson, R. K.; Mitchner, M.
1980-01-01
A model of CO laser vibrational kinetics is developed, and predicted vibrational distributions are compared with measurements. The experimental distributions were obtained at various flow locations in a transverse CW discharge in supersonic (M = 3) flow. Good qualitative agreement is obtained in the comparisons, including the prediction of a total inversion at low discharge current densities. The major area of discrepancy is an observed loss in vibrational energy downstream of the discharge which is not predicted by the model. This discrepancy may be due to three-dimensional effects in the experiment which are not included in the model. Possible kinetic effects which may contribute to vibrational energy loss are also examined.
Dichotomy between the band and hopping transport in organic crystals: insights from experiments.
Yavuz, I
2017-10-04
The molecular understanding of charge transport in organic crystals has often been tangled with identifying its true dynamical origin. While in two distinct cases complete delocalization and localization of charge carriers are associated with band-like and hopping-like transport, respectively, their possible coalescence poses some mystery. Moreover, the existing models are still controversial at ambient temperatures. Here, we review the issues in charge-transport theories of organic materials and then provide an overview of prominent transport models. We explored ∼60 organic crystals, the single-crystal hole/electron mobilities of which have been predicted by band-like and hopping-like transport models separately. Our comparative results show that at room temperature neither of the models is exclusively capable of accurately predicting mobilities over a very broad range. Hopping-like models predict experimental mobilities around μ ∼ 1 cm² V⁻¹ s⁻¹ well but systematically diverge at high mobilities. Similarly, band-like models are good at μ > ∼50 cm² V⁻¹ s⁻¹ but systematically diverge at lower mobilities. These results suggest the development of a unique and robust room-temperature transport model incorporating a mixture of these two extreme cases, whose relative importance is associated with their predominant regions. We deduce that while band models are beneficial for rationally designing high-mobility organic semiconductors, hopping models are good for elucidating the charge transport of most organic semiconductors.
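As one simple instance of the hopping-like picture (not the specific models compared in the review), a textbook Marcus-theory hopping rate combined with the Einstein relation gives a quick mobility estimate. The coupling, reorganization energy and hop distance below are typical illustrative values, not data from the study.

```python
# Hedged sketch: Marcus nonadiabatic hopping rate and an Einstein-relation
# mobility estimate (illustrative parameter values for an organic semiconductor).
import numpy as np

KB = 8.617333e-5        # Boltzmann constant, eV/K
HBAR = 6.582120e-16     # reduced Planck constant, eV*s

def marcus_rate(coupling_eV, lam_eV, dG_eV=0.0, T=300.0):
    """Marcus electron-transfer (hopping) rate in 1/s."""
    pref = (2 * np.pi / HBAR) * coupling_eV**2 / np.sqrt(4 * np.pi * lam_eV * KB * T)
    return pref * np.exp(-(dG_eV + lam_eV) ** 2 / (4 * lam_eV * KB * T))

def hopping_mobility(rate, hop_distance_cm, T=300.0, dim=3):
    """Einstein relation mu = D / (kB*T/e), with D = rate * a^2 / (2*dim)."""
    D = rate * hop_distance_cm**2 / (2 * dim)        # diffusion coefficient, cm^2/s
    return D / (KB * T)                              # kB*T in eV equals thermal voltage in V

k = marcus_rate(coupling_eV=0.05, lam_eV=0.2)        # typical coupling and reorganization energy
mu = hopping_mobility(k, hop_distance_cm=4e-8)       # ~4 Angstrom hop distance
print("hop rate: %.2e 1/s   mobility: %.2f cm^2/(V s)" % (k, mu))
```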
Forcey, G.M.; Linz, G.M.; Thogmartin, W.E.; Bleier, W.J.
2008-01-01
Blackbirds share wetland habitat with many waterfowl species in Bird Conservation Region 11 (BCR 11), the prairie potholes. Because of similar habitat preferences, there may be associations between blackbird populations and populations of one or more species of waterfowl in BCR 11. This study models populations of red-winged blackbirds and yellow-headed blackbirds as a function of multiple waterfowl species using data from the North American Breeding Bird Survey within BCR 11. For each blackbird species, we created a global model with blackbird abundance modeled as a function of 11 waterfowl species; nuisance effects (year, route, and observer) also were included in the model. Hierarchical Poisson regression models were fit using Markov chain Monte Carlo methods in WinBUGS 1.4.1. Waterfowl abundances were weakly associated with blackbird numbers, and no single waterfowl species showed a strong correlation with any blackbird species. These findings suggest waterfowl abundance from a single species is not likely a good bioindicator of blackbird abundance; however, a global model provided good fit for predicting red-winged blackbird abundance. Increased model complexity may be required for accurate predictions of blackbird abundance; the amount of data required to construct appropriate models may limit this approach for predicting blackbird abundance in the prairie potholes. Copyright © Taylor & Francis Group, LLC.
Modeling maximum daily temperature using a varying coefficient regression model
Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith
2014-01-01
Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...
Application of Grey Model GM(1, 1) to Ultra Short-Term Predictions of Universal Time
NASA Astrophysics Data System (ADS)
Lei, Yu; Guo, Min; Zhao, Danning; Cai, Hongbing; Hu, Dandan
2016-03-01
A mathematical model known as the first-order, one-variable grey differential equation model GM(1, 1) has been employed successfully here for ultra short-term (<10 days) predictions of universal time (UT1-UTC). The predictions are analyzed and compared with those obtained by other methods, and their accuracy is shown to be comparable with that of other prediction methods. The proposed method is able to yield an accurate prediction even when only a few observations are provided; hence it is very valuable for small datasets, since traditional methods, e.g., least-squares (LS) extrapolation, require a longer data span to make a good forecast. In addition, these results can be obtained without making any assumption about the original dataset, and the method is thus highly reliable. Another advantage is that the developed method is easy to use. All these points reveal the great potential of the GM(1, 1) model for UT1-UTC predictions.
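The GM(1, 1) procedure itself is compact: accumulate the series (AGO), fit the two parameters a and b by least squares from the whitened equation, extrapolate the accumulated series, and difference back. The sketch below applies it to a short synthetic UT1-UTC-like series, not the study's data.

```python
# Worked sketch of GM(1, 1): AGO accumulation, least-squares fit of (a, b),
# exponential extrapolation, inverse AGO. Input series must be positive.
import numpy as np

def gm11_forecast(x0, n_ahead):
    """Fit GM(1,1) to the series x0 and forecast n_ahead further values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # whitened equation dx1/dt + a*x1 = b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])         # inverse AGO
    x0_hat[0] = x0[0]
    return x0_hat[len(x0):]

ut1_utc = np.array([0.312, 0.310, 0.309, 0.307, 0.306, 0.304, 0.303, 0.301])  # synthetic, s
print("next 5 predictions:", np.round(gm11_forecast(ut1_utc, 5), 4))
```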
Analysis and modeling of infrasound from a four-stage rocket launch.
Blom, Philip; Marcillo, Omar; Arrowsmith, Stephen
2016-06-01
Infrasound from a four-stage sounding rocket was recorded by several arrays within 100 km of the launch pad. Propagation modeling methods have been applied to the known trajectory to predict infrasonic signals at the ground in order to identify what information might be obtained from such observations. There is good agreement between modeled and observed back azimuths, and predicted arrival times for motor ignition signals match those observed. The signal due to the high-altitude stage ignition is found to be low amplitude, despite predictions of weak attenuation. This lack of signal is possibly due to inefficient aeroacoustic coupling in the rarefied upper atmosphere.
QSAR study of curcumine derivatives as HIV-1 integrase inhibitors.
Gupta, Pawan; Sharma, Anju; Garg, Prabha; Roy, Nilanjan
2013-03-01
A QSAR study was performed on curcumine derivatives as HIV-1 integrase inhibitors using multiple linear regression. A statistically significant model was developed, with a squared correlation coefficient (r2) of 0.891 and a cross-validated r2 (r2cv) of 0.825. The developed model revealed that electronic, shape, size, geometry, substitution information and hydrophilicity were important atomic properties for determining the inhibitory activity of these molecules. The model was also tested successfully by external validation (r2pred = 0.849) as well as Tropsha's tests for model predictability. Furthermore, a domain analysis was carried out to evaluate the prediction reliability for external set molecules. The model was statistically robust and had good predictive power, and it can be successfully utilized for screening new molecules.
A study of material damping in large space structures
NASA Technical Reports Server (NTRS)
Highsmith, A. L.; Allen, D. H.
1989-01-01
A constitutive model was developed for predicting damping as a function of damage in continuous fiber reinforced laminated composites. The damage model is a continuum formulation, and uses internal state variables to quantify damage and its subsequent effect on material response. The model is sensitive to the stacking sequence of the laminate. Given appropriate baseline data from unidirectional material, and damping as a function of damage in one crossply laminate, damping can be predicted as a function of damage in other crossply laminates. Agreement between theory and experiment was quite good. A micromechanics model was also developed for examining the influence of damage on damping. This model explicitly includes crack surfaces. The model provides reasonable predictions of bending stiffness as a function of damage. Damping predictions are not in agreement with experiment. This is thought to be a result of dissipation mechanisms such as friction, which are not presently included in the analysis.
TiC growth in C fiber/Ti alloy composites during liquid infiltration
NASA Technical Reports Server (NTRS)
Warrier, S. G.; Lin, R. Y.
1993-01-01
A cylindrical model is developed for predicting the reaction zone thickness of carbon fiber-reinforced Ti-matrix composites, and good agreement is obtained between its predicted values and experimental results. The reaction-rate constant for TiC formation is estimated to be 1.5 × 10⁻⁹ cm²/s. The model is extended to evaluate the relationship between C-coating thicknesses on SiC fibers and processing times.
Load Measurement in Structural Members Using Guided Acoustic Waves
NASA Astrophysics Data System (ADS)
Chen, Feng; Wilcox, Paul D.
2006-03-01
A non-destructive technique to measure load in structures such as rails and bridge cables by using guided acoustic waves is investigated both theoretically and experimentally. Robust finite element models for predicting the effect of load on guided wave propagation are developed and example results are presented for rods. Reasonably good agreement of experimental results with modelling prediction is obtained. The measurement technique has been developed to perform tests on larger specimens.
Joule-Thomson effect and internal convection heat transfer in turbulent He II flow
NASA Technical Reports Server (NTRS)
Walstrom, P. L.
1988-01-01
The temperature rise in highly turbulent He II flowing in tubing was measured in the temperature range 1.6-2.1 K. The effect of internal convection heat transport on the predicted temperature profiles is calculated from the two-fluid model with mutual friction. The model predictions are in good agreement with the measurements, provided that the pressure gradient term is retained in the expression for internal convection heat flow.
Predictions of the residue cross-sections for the elements Z = 113 and Z = 114
NASA Astrophysics Data System (ADS)
Bouriquet, B.; Abe, Y.; Kosenko, G.
2004-10-01
A good reproduction of experimental excitation functions is obtained for the 1n reactions producing the elements with Z = 108, 110, 111 and 112 by the combined usage of the two-step model for fusion and the statistical decay code KEWPIE. Furthermore, the model provides reliable predictions of the production of the elements with Z = 113 and Z = 114, which will be a useful guide for planning experiments.
A diagnostic model for studying daytime urban air quality trends
NASA Technical Reports Server (NTRS)
Brewer, D. A.; Remsberg, E. E.; Woodbury, G. E.
1981-01-01
A single cell Eulerian photochemical air quality simulation model was developed and validated for selected days of the 1976 St. Louis Regional Air Pollution Study (RAPS) data sets; parameterizations of variables in the model and validation studies using the model are discussed. Good agreement was obtained between measured and modeled concentrations of NO, CO, and NO2 for all days simulated. The maximum concentration of O3 was also predicted well. Predicted species concentrations were relatively insensitive to small variations in CO and NOx emissions and to the concentrations of species which are entrained as the mixed layer rises.
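A hedged, heavily simplified sketch of a single-cell photochemical box model is shown below, using only the NO-NO2-O3 photostationary cycle with representative midday rate values; it is not the RAPS chemistry or the validated model above, just an illustration of how such a cell is integrated and how the O3 steady state emerges.

```python
# Hedged sketch: NO-NO2-O3 photostationary cycle in one well-mixed cell.
# J_NO2 and the NO + O3 rate constant are approximate, representative values.
import numpy as np
from scipy.integrate import solve_ivp

J_NO2 = 8.0e-3            # 1/s, approximate midday NO2 photolysis frequency
K_NO_O3 = 1.8e-14         # cm^3 molecule^-1 s^-1, NO + O3 -> NO2 + O2 near 298 K

def rhs(t, y):
    no, no2, o3 = y
    prod = J_NO2 * no2                 # NO2 + hv -> NO + O (-> O3)
    loss = K_NO_O3 * no * o3           # NO + O3 -> NO2 + O2
    return [prod - loss, loss - prod, prod - loss]

y0 = [2.5e11, 2.5e11, 1.0e12]          # molecules/cm^3 (roughly 10, 10, 40 ppb)
sol = solve_ivp(rhs, (0.0, 1800.0), y0, t_eval=[1800.0])
no, no2, o3 = sol.y[:, -1]
print("O3 after 30 min:", f"{o3:.3e}",
      " photostationary estimate:", f"{J_NO2 * no2 / (K_NO_O3 * no):.3e}")
```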
A burnout prediction model based around char morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao Wu; Edward Lester; Michael Cloke
Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model are based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. Good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.
NASA Astrophysics Data System (ADS)
Ángel Prósper Fernández, Miguel; Casal, Carlos Otero; Canoura Fernández, Felipe; Miguez-Macho, Gonzalo
2017-04-01
Regional meteorological models are becoming a generalized tool for forecasting the wind resource, due to their capacity to simulate the local flow dynamics impacting wind farm production. This study focuses on the production forecast and validation of a real onshore wind farm using high horizontal and vertical resolution WRF (Weather Research and Forecasting) model simulations. The wind farm is located in Galicia, in the northwest of Spain, in a complex terrain region with a high wind resource. Utilizing the Fitch scheme, which is specific to wind farms, a period of one year is simulated with a daily operational forecasting set-up. Power and wind predictions are obtained and compared with real data provided by the management company. Results show that WRF is able to yield good operational wind power predictions for this kind of wind farm, owing to a good representation of the planetary boundary layer behaviour of the region and the good performance of the Fitch scheme under these conditions.
Wang, Yunfeng; Ma, Zhimin; Xu, Chaonan; Wang, ZiKun; Yang, Xinghua
2018-05-15
This study aimed to identify the rules of transition between normotension, prehypertension and hypertension states and to establish a prediction model for the incidence of prehypertension and hypertension. Data from the China Health and Nutrition Survey from 1991 to 2009 were used as training data to develop the model. Data from the year 2011 were used for model validation. The multistate Markov model was developed using the msm package in R. A total of 5265 participants were included at baseline, with an average follow-up of 8.05 ± 5.27 years and 17 640 observations. The ratio of men to women was 1:1.17, and the mean age was 37.54 ± 13.80 years. Within 10 years, in men, the average probabilities of moving from normotension to prehypertension and to hypertension are 34.5 and 35.25%, respectively; from prehypertension, the average probabilities of recovering to normotension and of developing hypertension are 17.78 and 43.85%, respectively. In women, the corresponding average probabilities are 27.49, 28.09, 29.11 and 39.05%. Increasing fat consumption was found to be a protective factor, with a 4.5% lower rate of transition from normotension to prehypertension per quarter-percentage increase. The model showed very good prediction ability within 10 years and provided good prediction of blood pressure in the 2011 cohort (χ² = 0.781, P = 0.676). The multistate Markov model can be a useful tool to identify the rules of transition among multiple states of blood pressure and to predict the prevalence of normotension, prehypertension and hypertension in cohort populations.
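The study fitted its model with the msm package in R; as a language-neutral sketch of the same idea, the snippet below builds a tiny continuous-time Markov chain over the three blood-pressure states with an assumed intensity matrix and derives 10-year transition probabilities by matrix exponentiation. The intensities are hypothetical, not the fitted values.

```python
# Sketch under stated assumptions: 10-year transition probabilities of a
# three-state continuous-time Markov chain (normotension, prehypertension,
# hypertension) from a hypothetical intensity matrix Q.
import numpy as np
from scipy.linalg import expm

# Hypothetical transition intensities per year (rows sum to zero).
Q = np.array([[-0.080,  0.055,  0.025],    # from normotension
              [ 0.020, -0.090,  0.070],    # from prehypertension
              [ 0.000,  0.000,  0.000]])   # hypertension treated as absorbing here

P10 = expm(Q * 10.0)                       # 10-year transition probability matrix
states = ["normotension", "prehypertension", "hypertension"]
for i, row in enumerate(P10):
    print(states[i], "->", dict(zip(states, np.round(row, 3))))
```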
A Severe Sepsis Mortality Prediction Model and Score for Use with Administrative Data
Ford, Dee W.; Goodwin, Andrew J.; Simpson, Annie N.; Johnson, Emily; Nadig, Nandita; Simpson, Kit N.
2016-01-01
Objective: Administrative data are used for research, quality improvement, and health policy in severe sepsis. However, there is no sepsis-specific tool applicable to administrative data with which to adjust for illness severity. Our objective was to develop, internally validate, and externally validate a severe sepsis mortality prediction model and associated mortality prediction score. Design: Retrospective cohort study using 2012 administrative data from five US states. Three cohorts of patients with severe sepsis were created: 1) ICD-9-CM codes for severe sepsis/septic shock, 2) the 'Martin' approach, and 3) the 'Angus' approach. The model was developed and internally validated in the ICD-9-CM cohort and externally validated in the other cohorts. Integer point values for each predictor variable were generated to create a sepsis severity score. Setting: Acute care, non-federal hospitals in NY, MD, FL, MI, and WA. Subjects: Patients in one of three severe sepsis cohorts: 1) explicitly coded (n=108,448), 2) Martin cohort (n=139,094), and 3) Angus cohort (n=523,637). Interventions: None. Measurements and Main Results: Maximum likelihood estimation logistic regression was used to develop a predictive model for in-hospital mortality. Model calibration and discrimination were assessed via the Hosmer-Lemeshow goodness-of-fit (GOF) test and C-statistics, respectively. The primary cohort was subset into risk deciles and observed versus predicted mortality was plotted. GOF demonstrated p>0.05 for each cohort, indicating sound calibration. The C-statistic ranged from a low of 0.709 (sepsis severity score) to a high of 0.838 (Angus cohort), suggesting good to excellent model discrimination. Comparison of observed versus expected mortality was robust, although accuracy decreased in the highest risk decile. Conclusions: Our sepsis severity model and score is a tool that provides reliable risk adjustment for administrative data. PMID:26496452
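A minimal sketch of the evaluation workflow described in this abstract (not the study's code): fitting a logistic model on stand-in data, then assessing discrimination with the C-statistic and calibration with a Hosmer-Lemeshow test over risk deciles.

```python
# Assumed sketch: C-statistic (ROC AUC) and Hosmer-Lemeshow goodness-of-fit
# for a logistic mortality model fitted to synthetic stand-in data.
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 5))                                   # stand-in predictors
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3, 0.0, 0.2]))))

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

def hosmer_lemeshow(y_true, p_hat, groups=10):
    """Hosmer-Lemeshow chi-square statistic over risk deciles."""
    order = np.argsort(p_hat)
    hl = 0.0
    for idx in np.array_split(order, groups):
        obs, exp, n = y_true[idx].sum(), p_hat[idx].sum(), len(idx)
        hl += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return hl, chi2.sf(hl, groups - 2)

hl_stat, hl_p = hosmer_lemeshow(y, p)
print(f"C-statistic = {roc_auc_score(y, p):.3f}, HL p-value = {hl_p:.3f}")
```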
Roozenbeek, Bob; Lingsma, Hester F.; Lecky, Fiona E.; Lu, Juan; Weir, James; Butcher, Isabella; McHugh, Gillian S.; Murray, Gordon D.; Perel, Pablo; Maas, Andrew I.R.; Steyerberg, Ewout W.
2012-01-01
Objective: The International Mission on Prognosis and Analysis of Clinical Trials (IMPACT) and Corticoid Randomisation After Significant Head injury (CRASH) prognostic models predict outcome after traumatic brain injury (TBI) but have not been compared in large datasets. The objective of this study is to externally validate and compare the IMPACT and CRASH prognostic models for prediction of outcome after moderate or severe TBI. Design: External validation study. Patients: We considered 5 new datasets with a total of 9036 patients, comprising three randomized trials and two observational series, containing prospectively collected individual TBI patient data. Measurements: Outcomes were mortality and unfavourable outcome, based on the Glasgow Outcome Score (GOS) at six months after injury. To assess performance, we studied the discrimination of the models (by AUCs) and their calibration (by comparison of the mean observed to predicted outcomes and calibration slopes). Main Results: The highest discrimination was found in the TARN trauma registry (AUCs between 0.83 and 0.87), and the lowest discrimination in the Pharmos trial (AUCs between 0.65 and 0.71). Although differences in predictor effects between development and validation populations were found (calibration slopes varying between 0.58 and 1.53), the differences in discrimination were largely explained by differences in case-mix in the validation studies. Calibration was good: the fraction of observed outcomes generally agreed well with the mean predicted outcome. No meaningful differences were noted in performance between the IMPACT and CRASH models. More complex models discriminated slightly better than simpler variants. Conclusions: Since both the IMPACT and the CRASH prognostic models show good generalizability to more recent data, they are valid instruments to quantify prognosis in TBI. PMID:22511138
Prediction of breakdown strength of cellulosic insulating materials using artificial neural networks
NASA Astrophysics Data System (ADS)
Singh, Sakshi; Mohsin, M. M.; Masood, Aejaz
In this research work, several sets of experiments were performed in a high-voltage laboratory on various cellulosic insulating materials, such as diamond-dotted paper, paper phenolic sheets, cotton phenolic sheets, leatheroid, and presspaper, to measure electrical parameters such as breakdown strength, relative permittivity, and loss tangent. Considering the dependency of breakdown strength on other physical parameters, different Artificial Neural Network (ANN) models are proposed for the prediction of breakdown strength. The ANN model results are compared with those obtained experimentally and also with the values predicted from an empirical relation suggested by Swanson and Dall. The reported results indicate that the breakdown strength predicted from the ANN model is in good agreement with the experimental values.
Microstructure Evolution and Flow Stress Model of a 20Mn5 Hollow Steel Ingot during Hot Compression.
Liu, Min; Ma, Qing-Xian; Luo, Jian-Bin
2018-03-21
20Mn5 steel is widely used in the manufacture of heavy hydro-generator shafts due to its good strength, toughness and wear resistance. However, the hot deformation and recrystallization behaviors of 20Mn5 steel compressed at high temperature have not been studied. In this study, hot compression experiments at temperatures of 850-1200 °C and strain rates of 0.01/s-1/s were conducted using a Gleeble thermal and mechanical simulation machine, and the flow stress curves and microstructure after hot compression were obtained. The effects of temperature and strain rate on microstructure are analyzed. Based on the classical stress-dislocation relation and the kinetics of dynamic recrystallization, a two-stage constitutive model is developed to predict the flow stress of 20Mn5 steel. Comparisons between experimental and predicted flow stress show that the predicted values are in good agreement with the experimental values, which indicates that the proposed constitutive model is reliable and can be used for numerical simulation of hot forging of 20Mn5 hollow steel ingots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maloney, Daniel J; Monazam, Esmail R; Casleton, Kent H
Char samples representing a range of combustion conditions and extents of burnout were obtained from a well-characterized laminar flow combustion experiment. Individual particles from the parent coal and char samples were characterized to determine distributions in particle volume, mass, and density at different extents of burnout. The data were then compared with predictions from a comprehensive char combustion model referred to as the char burnout kinetics model (CBK). The data clearly reflect the particle-to-particle heterogeneity of the parent coal and show a significant broadening in the size and density distributions of the chars resulting from both devolatilization and combustion. Data for chars prepared in a lower oxygen content environment (6% oxygen by vol.) are consistent with zone II type combustion behavior, where most of the combustion occurs near the particle surface. At higher oxygen contents (12% by vol.), the data show indications of more burning occurring in the particle interior. The CBK model does a good job of predicting the general nature of the development of size and density distributions during burning, but the input distribution of particle size and density is critical to obtaining good predictions. A significant reduction in particle size was observed to occur as a result of devolatilization. For comprehensive combustion models to provide accurate predictions, this size reduction phenomenon needs to be included in devolatilization models so that representative char distributions are carried through the calculations.
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.
Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used in other models and compare the results. This also proved that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient.
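A minimal sketch of a hybrid forecaster in the spirit of the SA-D model, under the assumption that a standard multilayer perceptron can stand in for the dendritic neuron model: a seasonal ARIMA captures the linear trend and seasonality, and a neural network is trained on the lagged residuals. The series, model orders and layer size are illustrative only.

```python
# Assumed two-stage hybrid: SARIMA for trend/seasonality, MLP for residuals.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(120)
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)

# Stage 1: SARIMA for the linear trend and seasonality (orders are assumptions).
sarima = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
residuals = y - sarima.fittedvalues

# Stage 2: neural network on lagged residuals for the nonlinear remainder.
lags = 12
Xr = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
yr = residuals[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(Xr, yr)

# Combined one-step-ahead forecast.
forecast = sarima.forecast(1)[0] + mlp.predict(residuals[-lags:].reshape(1, -1))[0]
print(f"hybrid forecast: {forecast:.1f}")
```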
Purposes and methods of scoring earthquake forecasts
NASA Astrophysics Data System (ADS)
Zhuang, J.
2010-12-01
Studies of earthquake prediction and forecasting serve two purposes: one is to give a systematic estimate of earthquake risk in a particular region and period in order to advise governments and enterprises on disaster reduction; the other is to search for reliable precursors that can be used to improve earthquake prediction or forecasts. For the first purpose a complete score is necessary, while for the latter a partial score, which can be used to evaluate whether the forecasts or predictions have some advantage over a well-known model, is needed. This study reviews different scoring methods for evaluating the performance of earthquake prediction and forecasts. In particular, the recently developed gambling scoring method shows its capacity to find good points in an earthquake prediction algorithm or model that are not in a reference model, even if its overall performance is no better than that of the reference model.
Explaining and modeling the concentration and loading of Escherichia coli in a stream-A case study.
Wang, Chaozi; Schneider, Rebecca L; Parlange, Jean-Yves; Dahlke, Helen E; Walter, M Todd
2018-09-01
Escherichia coli (E. coli) levels in streams are a public health indicator. Therefore, being able to explain why E. coli levels are sometimes high and sometimes low is important. Using citizen science data from Fall Creek in central NY, we found that the complementary use of principal component analysis (PCA) and partial least squares (PLS) regression provided insights into the drivers of E. coli and a mechanism for predicting E. coli levels, respectively. We found that stormwater, temperature/season and shallow subsurface flow are the three dominant processes driving the fate and transport of E. coli. PLS regression modeling provided very good predictions under stormwater conditions (R2 = 0.85 for log(E. coli concentration) and R2 = 0.90 for log(E. coli loading)); predictions under baseflow conditions were less robust. But, in our case, both E. coli concentration and E. coli loading were significantly higher under stormwater conditions, so it is probably more important to predict high-flow E. coli hazards than low-flow conditions. Besides previously reported good indicators of in-stream E. coli levels, nitrate-/nitrite-nitrogen and soluble reactive phosphorus were also found to be good indicators of in-stream E. coli levels. These findings suggest management practices to reduce E. coli concentrations and loads in streams and, eventually, reduce the risk of waterborne disease outbreaks. Copyright © 2018. Published by Elsevier B.V.
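A minimal sketch (assumed, not the study's code) of the complementary workflow described above: PCA to explore drivers and PLS regression to predict log E. coli levels from environmental covariates. The covariate names and data are illustrative only.

```python
# Assumed sketch: PCA for exploration, PLS regression for prediction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Hypothetical covariates: discharge, temperature, turbidity, nitrate, SRP
X = rng.normal(size=(200, 5))
log_ecoli = 2.0 + 0.9 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(0, 0.3, 200)

Xs = StandardScaler().fit_transform(X)

# PCA: which combinations of drivers explain most covariate variance?
pca = PCA(n_components=3).fit(Xs)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))

# PLS: predictive model for log(E. coli concentration).
pls = PLSRegression(n_components=3).fit(Xs, log_ecoli)
print("PLS R2:", round(pls.score(Xs, log_ecoli), 2))
```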
Yang, Fen; Wang, Baolian; Liu, Zhihao; Xia, Xuejun; Wang, Weijun; Yin, Dali; Sheng, Li; Li, Yan
2017-01-01
Physiologically based pharmacokinetic (PBPK)/pharmacodynamic (PD) models can contribute to animal-to-human extrapolation and therapeutic dose predictions. Buagafuran is a novel anxiolytic agent, and phase I clinical trials of buagafuran have been completed. In this paper, a potentially effective dose for buagafuran of 30 mg t.i.d. in humans was estimated based on the human brain concentration predicted by PBPK/PD modeling. The software GastroPlus(TM) was used to build the PBPK/PD model for buagafuran in rat, which related the brain tissue concentrations of buagafuran to the number of times the animals entered the open arms in the elevated plus-maze pharmacological model. Buagafuran concentrations in human plasma were fitted and brain tissue concentrations were predicted using a human PBPK model in which the predicted plasma profiles were in good agreement with observations. The results provided supportive data for the rational use of buagafuran in the clinic.
FABRIC FILTER MODEL SENSITIVITY ANALYSIS
The report gives results of a series of sensitivity tests of a GCA fabric filter model, as a precursor to further laboratory and/or field tests. Preliminary tests had shown good agreement with field data. However, the apparent agreement between predicted and actual values was bas...
NASA Technical Reports Server (NTRS)
Stoll, F.; Koenig, D. G.
1983-01-01
Data obtained up to very high angles of attack from a large-scale, subsonic wind-tunnel test of a close-coupled canard-delta-wing fighter model are analyzed. The canard delays wing leading-edge vortex breakdown, even for angles of attack at which the canard is completely stalled. A vortex-lattice method was applied which gave good predictions of lift and pitching moment up to an angle of attack of about 20 deg, where vortex-breakdown effects on performance become significant. Pitch-control inputs generally retain full effectiveness up to the angle of attack of maximum lift, beyond which effectiveness drops off rapidly. A high-angle-of-attack prediction method gives good estimates of lift and drag for the completely stalled aircraft. Roll asymmetry observed at zero sideslip is apparently caused by an asymmetry in the model support structure.
Numerical modeling of friction welding of bi-metal joints for electrical applications
NASA Astrophysics Data System (ADS)
Velu, P. Shenbaga; Hynes, N. Rajesh Jesudoss
2018-05-01
In the manufacturing industries, and especially in electrical engineering applications, the use of non-ferrous materials plays a vital role. Today's engineering applications rely on significant properties such as good corrosion resistance, good mechanical properties, good heat conductivity and high electrical conductivity. The copper-aluminum bi-metal joint is one such combination that meets the requirements of electrical applications. In this work, numerical simulation of the friction welding of AA 6061 T6 alloy to copper was carried out. A finite element model was developed using the numerical simulation tool ABAQUS; with this developed model, the temperature distribution along the length of the dissimilar joint is predicted and the time-temperature profile has also been generated. The developed FEM is helpful in predicting various output parameters during friction welding of this dissimilar joint combination.
Prediction of pelvic organ prolapse using an artificial neural network.
Robinson, Christopher J; Swift, Steven; Johnson, Donna D; Almeida, Jonas S
2008-08-01
The objective of this investigation was to test the ability of a feedforward artificial neural network (ANN) to differentiate patients who have pelvic organ prolapse (POP) from those who retain good pelvic organ support. Following institutional review board approval, patients with POP (n = 87) and controls with good pelvic organ support (n = 368) were identified from the urogynecology research database. Historical and clinical information was extracted from the database. Data analysis included the training of a feedforward ANN, variable selection, and external validation of the model with an independent data set. Twenty variables were used. The median-performing ANN model used a median of 3 (quartile 1: 3 to quartile 3: 5) variables and achieved an area under the receiver operating characteristic curve of 0.90 on the external, independent validation set. Ninety percent sensitivity and 83% specificity were obtained in the external validation by ANN classification. Feedforward ANN modeling is applicable to the identification and prediction of POP.
Liu, Wen; Cheng, Ruochuan; Ma, Yunhai; Wang, Dan; Su, Yanjun; Diao, Chang; Zhang, Jianming; Qian, Jun; Liu, Jin
2018-05-03
Early preoperative diagnosis of central lymph node metastasis (CNM) is crucial to improve survival rates among patients with papillary thyroid carcinoma (PTC). Here, we analyzed clinical data from 2862 PTC patients, developed a scoring system using multivariable logistic regression, and tested it on a validation group. The predictive diagnostic effectiveness of the scoring system was evaluated based on consistency, discrimination ability, and accuracy. The scoring system considered seven variables: gender, age, tumor size, microcalcification, resistance index >0.7, multiple nodular lesions, and extrathyroid extension. The area under the receiver operating characteristic curve (AUC) was 0.742, indicating good discrimination. Using 5 points as a diagnostic threshold, the validation group had an AUC of 0.758, indicating good discrimination and consistency of the scoring system. The sensitivity of this predictive model for preoperative diagnosis of CNM was 4 times higher than that of a direct ultrasound diagnosis. These data indicate that the CNM prediction model would improve preoperative diagnostic sensitivity for CNM in patients with papillary thyroid carcinoma.
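A minimal sketch (not the authors' method) of how a multivariable logistic model can be turned into an integer point score with a diagnostic threshold of the kind described above; the predictor coefficients and point values are hypothetical.

```python
# Assumed sketch: integer point score derived from logistic coefficients.
coefs = {
    "male sex": 0.45, "age < 45": 0.60, "tumor size > 1 cm": 0.90,
    "microcalcification": 0.70, "resistance index > 0.7": 0.55,
    "multiple nodular lesions": 0.40, "extrathyroid extension": 0.95,
}

# Assign points proportional to each coefficient (smallest coefficient = 1 point).
unit = min(coefs.values())
points = {k: int(round(v / unit)) for k, v in coefs.items()}
print("points per predictor:", points)

def predicted_positive(present, threshold=5):
    """Flag predicted central lymph node metastasis if total points reach the threshold."""
    return sum(points[k] for k in present) >= threshold

print(predicted_positive(["male sex", "tumor size > 1 cm", "extrathyroid extension"]))
```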
Prediction of surface distress using neural networks
NASA Astrophysics Data System (ADS)
Hamdi, Hadiwardoyo, Sigit P.; Correia, A. Gomes; Pereira, Paulo; Cortez, Paulo
2017-06-01
Road infrastructure contributes to a healthy economy through the sustainable distribution of goods and services. A road network requires appropriately programmed maintenance treatments in order to keep road assets in good condition, providing maximum safety for road users under a cost-effective approach. Surface distress is the key element for identifying road condition and may be generated by many different factors. In this paper, a new data-driven approach is proposed to predict Surface Distress Index (SDI) values, applied using data obtained from the Integrated Road Management System (IRMS) database. Artificial Neural Networks (ANNs) are used to predict the SDI index from input variables related to surface distress, i.e., crack area and width, potholes, rutting, patching and depression. The achieved results show that the ANN is able to predict SDI with a high correlation factor (R2 = 0.996). Moreover, a sensitivity analysis was applied to the ANN model, revealing the influence of the most relevant input parameters for SDI prediction, namely rutting (59.8%), crack width (29.9%), crack area (5.0%), patching (3.0%), pothole (1.7%) and depression (0.3%).
Silva, P; Crozier, S; Veidt, M; Pearcy, M J
2005-07-01
A hydrogel intervertebral disc (IVD) model consisting of an inner nucleus core and an outer anulus ring was manufactured from 30 and 35% by weight poly(vinyl alcohol) hydrogel (PVA-H) concentrations and subjected to axial compression between saturated porous endplates at 200 N for 11 h 30 min. Repeat experiments (n=4) on different samples (N=2) show good reproducibility of fluid loss and axial deformation. An axisymmetric nonlinear poroelastic finite element model with variable permeability was developed using commercial finite element software to compare axial deformation and predicted fluid loss with experimental data. The FE predictions indicate differential fluid loss similar to that of biological IVDs, with the nucleus losing more water than the anulus, and there is overall good agreement between experimental and finite element predicted fluid loss. The stress distribution pattern indicates important similarities with the biological IVD, including stress transference from the nucleus to the anulus upon sustained loading, and renders the model suitable for use in future studies to better understand the role of fluid and stress in biological IVDs.
A mathematical model for lactate transport to red blood cells.
Wahl, Patrick; Yue, Zengyuan; Zinner, Christoph; Bloch, Wilhelm; Mester, Joachim
2011-03-01
A simple mathematical model for the transport of lactate from plasma to red blood cells (RBCs) during and after exercise is proposed based on our experimental studies of lactate concentrations in RBCs and in plasma. In addition to the influx associated with the plasma-to-RBC lactate concentration gradient, it is argued that an efflux must exist. The efflux rate is assumed to be proportional to the lactate concentration in RBCs. This simple model is justified by the comparison between the model-predicted results and observations: for all 33 cases (11 subjects and 3 different warm-up conditions), the model-predicted time courses of lactate concentrations in RBCs are generally in good agreement with observations, and the model-predicted ratios between lactate concentrations in RBCs and in plasma at the peak of lactate concentration in RBCs are very close to the observed values. Two constants, the influx rate coefficient C1 and the efflux rate coefficient C2, are involved in the present model. They are determined by the best fit to observations. Although the exact electro-chemical mechanism for the efflux remains to be determined in future research, the good agreement of the present model with observations suggests that the efflux must get stronger as the lactate concentration in RBCs increases. The physiological meanings of C1 and C2 as well as their potential applications are discussed.
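The described model can be written as a single ordinary differential equation, dL_RBC/dt = C1·(L_plasma(t) − L_RBC) − C2·L_RBC. The sketch below integrates it for an assumed plasma lactate time course; C1, C2 and the plasma profile are illustrative values, not the fitted coefficients from the study.

```python
# Assumed sketch of the two-rate model: influx proportional to the plasma-RBC
# gradient, efflux proportional to the RBC lactate concentration.
import numpy as np
from scipy.integrate import solve_ivp

C1, C2 = 0.15, 0.05          # influx and efflux rate coefficients (1/min), assumed

def plasma_lactate(t):
    """Hypothetical plasma lactate time course during/after exercise (mmol/L)."""
    return 1.0 + 9.0 * np.exp(-((t - 15.0) / 10.0) ** 2)

def d_rbc(t, L_rbc):
    return C1 * (plasma_lactate(t) - L_rbc) - C2 * L_rbc

sol = solve_ivp(d_rbc, (0, 60), [1.0], dense_output=True)
t = np.linspace(0, 60, 7)
print(np.round(sol.sol(t)[0], 2))   # RBC lactate at 0, 10, ..., 60 min
```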
Intelligent processing for thick composites
NASA Astrophysics Data System (ADS)
Shin, Daniel Dong-Ok
2000-10-01
Manufacturing thick composite parts is associated with adverse curing conditions such as large in-plane temperature gradients and exotherms. The condition is further aggravated because the manufacturer's cure cycle and the existing cure control systems do not adequately counter such effects. In response, a forecast-based thermal control system is developed to provide better cure control for thick composites. An accurate cure kinetics model is crucial for correctly identifying the amount of heat generated in composite process simulation. A new technique for identifying cure parameters for Hercules AS4/3502 prepreg is presented by normalizing the DSC data. The cure kinetics is based on an autocatalytic model for the proposed method, which uses dynamic and isothermal DSC data to determine its parameters. Existing models were also used to determine kinetic parameters but proved inadequate because of the material's temperature-dependent final degree of cure. The model predictions obtained with the new technique showed good agreement with both isothermal and dynamic DSC data. The final degree of cure was also in good agreement with experimental data. A realistic cure simulation model including bleeder ply analysis and compaction is validated with Hercules AS4/3501-6 based laminates. The nonsymmetrical temperature distribution resulting from the presence of bleeder plies agreed well with the model prediction. Some of the discrepancies in the predicted compaction behavior were attributed to inaccurate viscosity and permeability models. The temperature prediction was quite good for the 3 cm laminate. The validated process simulation model, along with the cure kinetics model for AS4/3502 prepreg, was integrated into the thermal control system. 3 cm Hercules AS4/3501-6 and AS4/3502 laminates were fabricated. The resulting cure cycles satisfied all imposed requirements by minimizing exotherms and temperature gradients. Although the duration of the cure cycles increased, this was inevitable since a longer time was required to maintain an acceptable temperature gradient. The derived cure cycles were slightly different from what was anticipated by the offline simulation. Nevertheless, the system adapted to unanticipated events to satisfy the cure requirements.
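A minimal sketch of an autocatalytic cure kinetics form of the general kind referred to above, dα/dt = k(T)·α^m·(1 − α)^n with an Arrhenius rate constant, integrated for an isothermal hold. All parameter values are assumptions for illustration, not the identified AS4/3502 parameters.

```python
# Assumed autocatalytic cure kinetics integrated over an isothermal hold.
import numpy as np
from scipy.integrate import solve_ivp

A, Ea, m, n = 1.0e5, 60e3, 0.5, 1.5   # pre-exponential (1/s), activation energy (J/mol), exponents
R = 8.314

def cure_rate(t, alpha, T):
    k = A * np.exp(-Ea / (R * T))
    # small seed so the purely autocatalytic rate is not identically zero at alpha = 0
    a = np.clip(alpha[0], 1e-6, 1 - 1e-9)
    return [k * a**m * (1 - a)**n]

T_hold = 450.0                         # isothermal hold temperature (K), assumed
sol = solve_ivp(cure_rate, (0, 7200), [0.0], args=(T_hold,), max_step=10.0)
print(f"degree of cure after 2 h at {T_hold} K: {sol.y[0, -1]:.2f}")
```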
A GIS modeling method applied to predicting forest songbird habitat
Dettmers, Randy; Bart, Jonathan
1999-01-01
We have developed an approach for using 'presence' data to construct habitat models. Presence data are those that indicate locations where the target organism is observed to occur, but that cannot be used to define locations where the organism does not occur. Surveys of highly mobile vertebrates often yield these kinds of data. Models developed through our approach yield predictions of the amount and the spatial distribution of good-quality habitat for the target species. This approach was developed primarily for use in a GIS context; thus, the models are spatially explicit and have the potential to be applied over large areas. Our method consists of two primary steps. In the first step, we identify an optimal range of values for each habitat variable to be used as a predictor in the model. To find these ranges, we employ the concept of maximizing the difference between cumulative distribution functions of (1) the values of a habitat variable at the observed presence locations of the target organism, and (2) the values of that habitat variable for all locations across a study area. In the second step, multivariate models of good habitat are constructed by combining these ranges of values, using the Boolean operators 'and' and 'or.' We use an approach similar to forward stepwise regression to select the best overall model. We demonstrate the use of this method by developing species-specific habitat models for nine forest-breeding songbirds (e.g., Cerulean Warbler, Scarlet Tanager, Wood Thrush) studied in southern Ohio. These models are based on species' microhabitat preferences for moisture and vegetation characteristics that can be predicted primarily through the use of abiotic variables. We use slope, land surface morphology, land surface curvature, water flow accumulation downhill, and an integrated moisture index, in conjunction with a land-cover classification that identifies forest/nonforest, to develop these models. The performance of these models was evaluated with an independent data set. Our tests showed that the models performed better than random at identifying where the birds occurred and provided useful information for predicting the amount and spatial distribution of good habitat for the birds we studied. In addition, we generally found positive correlations between the amount of habitat, as predicted by the models, and the number of territories within a given area. This added component provides the possibility, ultimately, of being able to estimate population sizes. Our models represent useful tools for resource managers who are interested in assessing the impacts of alternative management plans that could alter or remove habitat for these birds.
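A minimal sketch (assumed, not the authors' GIS implementation) of the first step: choosing the value range of one habitat variable by comparing the cumulative distribution at presence locations with the cumulative distribution over the whole study area. Variable names and data are illustrative only.

```python
# Assumed sketch: range selection by CDF difference for one habitat variable.
import numpy as np

rng = np.random.default_rng(3)
study_area = rng.uniform(0, 40, 10_000)          # e.g. slope (degrees) across all cells
presence = rng.normal(12, 4, 300)                # slope at observed bird locations

def optimal_range(presence_vals, area_vals, n_grid=200):
    grid = np.linspace(area_vals.min(), area_vals.max(), n_grid)
    cdf_presence = np.searchsorted(np.sort(presence_vals), grid) / len(presence_vals)
    cdf_area = np.searchsorted(np.sort(area_vals), grid) / len(area_vals)
    diff = cdf_presence - cdf_area
    lo = grid[np.argmin(diff)]                   # presence CDF furthest below the area CDF
    hi = grid[np.argmax(diff)]                   # presence CDF furthest above the area CDF
    return min(lo, hi), max(lo, hi)

lo, hi = optimal_range(presence, study_area)
print(f"selected slope range: {lo:.1f} to {hi:.1f} degrees")
# Step 2 would combine such ranges across variables with the Boolean operators 'and'/'or'.
```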
Pavurala, Naresh; Xu, Xiaoming; Krishnaiah, Yellela S R
2017-05-15
Hyperspectral imaging using near infrared spectroscopy (NIRS) integrates spectroscopy and conventional imaging to obtain both spectral and spatial information of materials. The non-invasive and rapid nature of hyperspectral imaging using NIRS makes it a valuable process analytical technology (PAT) tool for in-process monitoring and control of the manufacturing process for transdermal drug delivery systems (TDS). The focus of this investigation was to develop and validate the use of Near Infra-red (NIR) hyperspectral imaging to monitor coat thickness uniformity, a critical quality attribute (CQA) for TDS. Chemometric analysis was used to process the hyperspectral image and a partial least square (PLS) model was developed to predict the coat thickness of the TDS. The goodness of model fit and prediction were 0.9933 and 0.9933, respectively, indicating an excellent fit to the training data and also good predictability. The % Prediction Error (%PE) for internal and external validation samples was less than 5% confirming the accuracy of the PLS model developed in the present study. The feasibility of the hyperspectral imaging as a real-time process analytical tool for continuous processing was also investigated. When the PLS model was applied to detect deliberate variation in coating thickness, it was able to predict both the small and large variations as well as identify coating defects such as non-uniform regions and presence of air bubbles. Published by Elsevier B.V.
Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs
2014-02-01
The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow-bore columns (50×2.1 mm, packed with sub-2 μm particles) providing short analysis times. The performance of the commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE-based predictions. We have demonstrated that the reliability of the predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. On average, the retention time relative errors were <1.0%, while the predicted critical resolution errors were between 6.9 and 17.2%. Because simulated robustness testing requires significantly less experimental work than DoE-based predictions, we think that robustness could now be investigated in the early stage of method development. Moreover, column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2 μm particles. Again, thanks to the modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4 min), by proper adjustment of variables. Copyright © 2013 Elsevier B.V. All rights reserved.
Jin, Xiaochen; Fu, Zhiqiang; Li, Xuehua; Chen, Jingwen
2017-03-22
The octanol-air partition coefficient (KOA) is a key parameter describing the partition behavior of organic chemicals between air and environmental organic phases. As the experimental determination of KOA is costly, time-consuming and sometimes limited by the availability of authentic chemical standards for the compounds to be determined, it becomes necessary to develop credible predictive models for KOA. In this study, a polyparameter linear free energy relationship (pp-LFER) model for predicting KOA at 298.15 K and a novel model incorporating pp-LFERs with temperature (pp-LFER-T model) were developed from 795 log KOA values for 367 chemicals at different temperatures (263.15-323.15 K), and were evaluated with the OECD guidelines on QSAR model validation and applicability domain description. Statistical results show that both models are well-fitted, robust and have good predictive capabilities. Particularly, the pp-LFER model shows a strong predictive ability for polyfluoroalkyl substances and organosilicon compounds, and the pp-LFER-T model maintains a high predictive accuracy within a wide temperature range (263.15-323.15 K).
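A pp-LFER of the kind described is commonly written in the Abraham solvation-equation form; the structure below is sketched as an assumption about the model family and does not reproduce the coefficients fitted in the study:

\log K_{OA} = c + eE + sS + aA + bB + lL

where E, S, A, B and L are solute descriptors (excess molar refraction, dipolarity/polarizability, hydrogen-bond acidity, hydrogen-bond basicity, and the logarithmic hexadecane-air partition coefficient), and c, e, s, a, b and l are fitted system coefficients. A pp-LFER-T variant can, for example, allow these coefficients to vary with temperature (e.g., through 1/T terms), which is one common way to cover a range such as 263.15-323.15 K.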
Systematic study of Reynolds stress closure models in the computations of plane channel flows
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Sarkar, S.
1992-01-01
The roles of pressure-strain and turbulent diffusion models in the numerical calculation of turbulent plane channel flows with second-moment closure models are investigated. Three turbulent diffusion and five pressure-strain models are utilized in the computations. The main characteristics of the mean flow and the turbulent fields are compared against experimental data. All the features of the mean flow are correctly predicted by all but one of the Reynolds stress closure models. The Reynolds stress anisotropies in the log layer are predicted to varying degrees of accuracy (good to fair) by the models. None of the models could predict correctly the extent of relaxation towards isotropy in the wake region near the center of the channel. Results from the direct numerical simulation are used to further clarify this behavior of the models.
Genomic-Enabled Prediction of Ordinal Data with Bayesian Logistic Ordinal Regression.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Burgueño, Juan; Eskridge, Kent
2015-08-18
Most genomic-enabled prediction models developed so far assume that the response variable is continuous and normally distributed. The exception is the probit model, developed for ordered categorical phenotypes. In statistical applications, because of the easy implementation of the Bayesian probit ordinal regression (BPOR) model, Bayesian logistic ordinal regression (BLOR) is rarely implemented in the context of genomic-enabled prediction, where the sample size (n) is much smaller than the number of parameters (p). For this reason, in this paper we propose a BLOR model using the Pólya-Gamma data augmentation approach that produces a Gibbs sampler with full conditional distributions similar to those of the BPOR model, and with the advantage that the BPOR model is a particular case of the BLOR model. We evaluated the proposed model using simulation and two real data sets. Results indicate that our BLOR model is a good alternative for analyzing ordinal data in the context of genomic-enabled prediction with the probit or logit link. Copyright © 2015 Montesinos-López et al.
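For readers who want a concrete starting point, the sketch below fits an ordinal regression with a logit link to simulated marker data using statsmodels; it is a frequentist analogue shown for illustration only, not the paper's Bayesian Pólya-Gamma Gibbs sampler, and the simulated data are placeholders.

```python
# Assumed sketch: ordinal (logit-link) regression on simulated genomic markers.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(4)
n, p = 300, 20                          # small toy setting; real genomic data has p >> n
X = rng.binomial(2, 0.5, size=(n, p)).astype(float)            # marker counts 0/1/2
latent = X @ rng.normal(0, 0.3, p) + rng.logistic(0, 1, n)
y = np.digitize(latent, np.quantile(latent, [0.33, 0.66]))     # 3 ordered classes

res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
probs = res.model.predict(res.params, exog=X)                  # class probabilities per line
print("predicted class of first individual:", probs[0].argmax())
```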
Prediction of acute kidney injury within 30 days of cardiac surgery.
Ng, Shu Yi; Sanagou, Masoumeh; Wolfe, Rory; Cochrane, Andrew; Smith, Julian A; Reid, Christopher Michael
2014-06-01
To predict acute kidney injury after cardiac surgery. The study included 28,422 cardiac surgery patients who had had no preoperative renal dialysis from June 2001 to June 2009 in 18 hospitals. Logistic regression analyses were undertaken to identify the best combination of risk factors for predicting acute kidney injury. Two models were developed, one including the preoperative risk factors and another including the pre-, peri-, and early postoperative risk factors. The area under the receiver operating characteristic curve was calculated, using split-sample internal validation, to assess model discrimination. The incidence of acute kidney injury was 5.8% (1642 patients). The mortality for patients who experienced acute kidney injury was 17.4% versus 1.6% for patients who did not. On validation, the area under the curve for the preoperative model was 0.77, and the Hosmer-Lemeshow goodness-of-fit P value was .06. For the postoperative model, the area under the curve was 0.81 and the Hosmer-Lemeshow P value was .6. Both models had good discrimination and acceptable calibration. Acute kidney injury after cardiac surgery can be predicted using preoperative risk factors alone or, with greater accuracy, using pre-, peri-, and early postoperative risk factors. The ability to identify high-risk individuals can be useful in preoperative patient management and for recruitment of appropriate patients to clinical trials. Prediction in the early stages of postoperative care can guide subsequent intensive care of patients and could also be the basis of a retrospective performance audit tool. Copyright © 2014 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
A test of 3 models of Kirtland's warbler habitat suitability
Mark D. Nelson; Richard R. Buech
1996-01-01
We tested 3 models of Kirtland's warbler (Dendroica kirtlandii) habitat suitability during a period when we believe there was a surplus of good quality breeding habitat. A jack pine canopy-cover model was superior to 2 jack pine stem-density models in predicting Kirtland's warbler habitat use and non-use. Estimated density of birds in high...
The Theory of Planned Behavior as a Model of Heavy Episodic Drinking Among College Students
Collins, Susan E.; Carey, Kate B.
2008-01-01
This study provided a simultaneous, confirmatory test of the theory of planned behavior (TPB) in predicting heavy episodic drinking (HED) among college students. It was hypothesized that past HED, drinking attitudes, subjective norms and drinking refusal self-efficacy would predict intention, which would in turn predict future HED. Participants consisted of 131 college drinkers (63% female) who reported having engaged in HED in the previous two weeks. Participants were recruited and completed questionnaires within the context of a larger intervention study (see Collins & Carey, 2005). Latent factor structural equation modeling was used to test the ability of the TPB to predict HED. Chi-square tests and fit indices indicated good fit for the final structural models. Self-efficacy and attitudes but not subjective norms significantly predicted baseline intention, and intention and past HED predicted future HED. Contrary to hypotheses, however, a structural model excluding past HED provided a better fit than a model including it. Although further studies must be conducted before a definitive conclusion is reached, a TPB model excluding past behavior, which is arguably more parsimonious and theory driven, may provide better prediction of HED among college drinkers than a model including past behavior. PMID:18072832
Force Modelling in Orthogonal Cutting Considering Flank Wear Effect
NASA Astrophysics Data System (ADS)
Rathod, Kanti Bhikhubhai; Lalwani, Devdas I.
2017-05-01
In the present work, an attempt has been made to provide a predictive cutting force model for orthogonal cutting by combining two different force models: a force model for a perfectly sharp tool, extended to consider the effect of edge radius, and a force model for a worn tool. The first force model, for a perfectly sharp tool, is based on Oxley's predictive machining theory for orthogonal cutting; as Oxley's model assumes a perfectly sharp tool, the effect of cutting edge radius (hone radius) is added and an improved model is presented. The second force model, for a worn tool (flank wear), is that proposed by Waldorf. Further, the developed combined force model is also used to predict flank wear width using an inverse approach. The performance of the developed combined total force model is compared with previously published results for AISI 1045 and AISI 4142 materials, and reasonably good agreement is found.
Nguyen, Minh Vu Chuong; Baillet, Athan; Romand, Xavier; Trocmé, Candice; Courtier, Anaïs; Marotte, Hubert; Thomas, Thierry; Soubrier, Martin; Miossec, Pierre; Tébib, Jacques; Grange, Laurent; Toussaint, Bertrand; Lequerré, Thierry; Vittecoq, Olivier; Gaudin, Philippe
2018-06-06
Tumour necrosis factor-alpha inhibitors (TNFi) are effective treatments for rheumatoid arthritis (RA). Responses to treatment are difficult to predict. As these treatments are costly and may induce a number of side effects, we aimed to identify a panel of protein biomarkers that could be used to predict clinical response to TNFi in RA patients. Baseline blood levels of C-reactive protein, platelet factor 4, apolipoprotein A1, prealbumin, α1-antitrypsin, haptoglobin, S100A8/A9 and S100A12 proteins in bDMARD-naive patients at the time of TNFi treatment initiation were assessed in a multicentric prospective French cohort. Patients fulfilling the EULAR good response criteria at 6 months were considered responders. Logistic regression was used to determine the best biomarker set that could predict good clinical response to TNFi. A combination of biomarkers (prealbumin, platelet factor 4 and S100A12) was identified that could predict response to TNFi in RA with a sensitivity of 78%, specificity of 77%, positive predictive value (PPV) of 72%, negative predictive value (NPV) of 82%, positive likelihood ratio (LR+) of 3.35 and negative likelihood ratio (LR-) of 0.28. Lower levels of prealbumin and S100A12 and a higher level of platelet factor 4 than the determined cutoffs at baseline in RA patients are good predictors of response to TNFi treatment globally, as well as to Infliximab, Etanercept and Adalimumab individually. A multivariate model combining 3 biomarkers (prealbumin, platelet factor 4 and S100A12) accurately predicted the response of RA patients to TNFi and has potential for personalized treatment in daily practice. Copyright © 2018. Published by Elsevier Masson SAS.
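A minimal sketch showing how the reported classification metrics (sensitivity, specificity, PPV, NPV, LR+ and LR-) follow from a predicted-response confusion matrix; the counts below are hypothetical, not the cohort data.

```python
# Assumed sketch: diagnostic metrics from a hypothetical confusion matrix.
tp, fn, fp, tn = 40, 10, 12, 38     # true/false positives and negatives (hypothetical)

sens = tp / (tp + fn)               # sensitivity (recall among responders)
spec = tn / (tn + fp)               # specificity
ppv = tp / (tp + fp)                # positive predictive value
npv = tn / (tn + fn)                # negative predictive value
lr_pos = sens / (1 - spec)          # positive likelihood ratio
lr_neg = (1 - sens) / spec          # negative likelihood ratio

print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```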
Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A
2013-01-01
A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. Our aims were to validate this model and update it by adding allergic rhinitis, atopic dermatitis, and sIgE to the peanut components Ara h 1, 2, 3, and 8 as candidate predictors, and to develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with the area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the (updated) models was analyzed in the same way. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration (P < .001). In the updating process, age, history, and the additional candidate predictors did not significantly increase discrimination, which was 94%, leaving only 4 predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When building a model with sIgE to peanut components, Ara h 2 was the only predictor, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, which contained 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
Huhtanen, P; Seppälä, A; Ahvenjärvi, S; Rinne, M
2008-10-01
Eleven 1-pool, seven 2-pool, and three 3-pool models were compared in fitting gas production data and predicting in vivo NDF digestibility and the effective first-order digestion rate of potentially digestible NDF (pdNDF). Isolated NDF from 15 grass silages harvested at different stages of maturity was incubated in triplicate in rumen fluid-buffer solution for 72 h to estimate digestion kinetics from cumulative gas production profiles. In vivo digestibility was estimated by the total fecal collection method in sheep fed at a maintenance level of feeding. The concentration of pdNDF was estimated by a 12-d in situ incubation. The parameter values from the gas production profiles and pdNDF were used in a 2-compartment rumen model to predict pdNDF digestibility, using 50 h of rumen residence time distributed in a ratio of 0.4:0.6 between the non-escapable and escapable pools. The effective first-order digestion rate was computed both from observed in vivo and from model-predicted pdNDF digestibility, assuming the passage kinetics model described above. There were marked differences between the models in fitting the gas production data. The fit improved with an increasing number of pools, suggesting that silage pdNDF is not a homogeneous substrate. Generally, the models predicted in vivo NDF digestibility and digestion rate accurately. However, a good fit to the gas production data did not necessarily translate into improved predictions of the in vivo data. The models overestimating the asymptotic gas volumes tended to underestimate in vivo digestibility. Investigating the time-related residuals during the later phases of fermentation is important when the data are used to estimate the first-order digestion rate of pdNDF. Relatively simple models such as the France model, or even a single exponential model with a discrete lag period, satisfied the minimum criteria for a good model. Furthermore, comparison of feedstuffs on the basis of parameter values is more unequivocal for these simple models than for multiple-pool models.
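A minimal sketch (not the paper's code) of fitting one of the simple models mentioned above, a single exponential with a discrete lag, G(t) = A(1 − exp(−k(t − L))) for t > L, to a synthetic cumulative gas production profile.

```python
# Assumed sketch: fit a single exponential gas production model with discrete lag.
import numpy as np
from scipy.optimize import curve_fit

def exp_lag(t, A, k, L):
    return A * (1.0 - np.exp(-k * np.clip(t - L, 0.0, None)))

t = np.array([2, 4, 8, 12, 18, 24, 36, 48, 72], dtype=float)          # incubation time (h)
gas = np.array([1, 5, 18, 30, 42, 50, 58, 61, 63], dtype=float)       # cumulative gas (ml), synthetic

(A, k, L), _ = curve_fit(exp_lag, t, gas, p0=[60.0, 0.05, 2.0])
print(f"asymptote A = {A:.1f} ml, rate k = {k:.3f} /h, lag L = {L:.1f} h")
```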
Hamadache, Mabrouk; Benkortbi, Othmane; Hanini, Salah; Amrane, Abdeltif; Khaouane, Latifa; Si Moussa, Cherif
2016-02-13
Quantitative Structure-Activity Relationship (QSAR) models are expected to play an important role in the risk assessment of chemicals for humans and the environment. In this study, we developed a validated QSAR model to predict the acute oral toxicity of 329 pesticides to rats, because few QSAR models have been devoted to predicting the Lethal Dose 50 (LD50) of pesticides in rats. This QSAR model is based on 17 molecular descriptors, and is robust, externally predictive and characterized by a good applicability domain. The best results were obtained with a 17/9/1 Artificial Neural Network model trained with the quasi-Newton back propagation (BFGS) algorithm. The prediction accuracy for the external validation set was estimated by the Q2ext and the root mean square error (RMS), which are equal to 0.948 and 0.201, respectively. 98.6% of the external validation set is correctly predicted, and the present model proved to be superior to previously published models. Accordingly, the model developed in this study provides excellent predictions and can be used to predict the acute oral toxicity of pesticides, particularly those that have not been tested, as well as new pesticides. Copyright © 2015 Elsevier B.V. All rights reserved.
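A minimal sketch of a 17/9/1 feed-forward network trained with a quasi-Newton solver, in the spirit of the model described above; scikit-learn's 'lbfgs' solver stands in for BFGS, and the descriptor matrix and targets are random placeholders rather than the pesticide data set.

```python
# Assumed sketch: 17-input, 9-hidden-unit, 1-output network with quasi-Newton training.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.normal(size=(329, 17))                                   # 329 compounds x 17 descriptors
y = X @ rng.normal(0, 0.2, 17) + rng.normal(0, 0.1, 329)         # placeholder log(LD50) values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

net = MLPRegressor(hidden_layer_sizes=(9,), solver="lbfgs", max_iter=5000,
                   random_state=0).fit(scaler.transform(X_tr), y_tr)

pred = net.predict(scaler.transform(X_te))
rmse = np.sqrt(np.mean((y_te - pred) ** 2))
q2_ext = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_tr.mean()) ** 2)
print(f"external Q2 = {q2_ext:.3f}, RMSE = {rmse:.3f}")
```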
Circumferential distortion modeling of the TF30-P-3 compression system
NASA Technical Reports Server (NTRS)
Mazzawy, R. S.; Banks, G. A.
1977-01-01
Circumferential inlet pressure and temperature distortion testing of the TF30-P-3 turbofan engine was conducted. The compression system was modelled at the test conditions according to a multiple-segment parallel compressor model. Aspects of engine operation and distortion configuration modelled include the effects of compressor bleeds, relative pressure-temperature distortion alignment and circumferential distortion extent. Model predictions for limiting distortion amplitudes and flow distributions within the compression system were compared with test results in order to evaluate predicted trends. Relatively good agreement was obtained. The model also identified the low-pressure compressor as the stall-initiating component, which was in agreement with the data.
Nébouy, David; Hébert, Mathieu; Fournel, Thierry; Larina, Nina; Lesur, Jean-Luc
2015-09-01
Recent color printing technologies based on the principle of revealing colors on pre-functionalized achromatic supports by laser irradiation offer advanced functionalities, especially for security applications. However, for such technologies, the color prediction is challenging, compared to classic ink-transfer printing systems. The spectral properties of the coloring materials modified by the lasers are not precisely known and may strongly vary, depending on the laser settings, in a nonlinear manner. We show in this study, through the example of the color laser marking (CLM) technology, based on laser bleaching of a mixture of pigments, that the combination of an adapted optical reflectance model and learning methods to get the model's parameters enables prediction of the spectral reflectance of any printable color with rather good accuracy. Even though the pigment mixture is formulated from three colored pigments, an analysis of the dimensionality of the spectral space generated by CLM printing, thanks to a principal component analysis decomposition, shows that at least four spectral primaries are needed for accurate spectral reflectance predictions. A polynomial interpolation is then used to relate RGB laser intensities with virtual coordinates of new basis vectors. By studying the influence of the number of calibration patches on the prediction accuracy, we can conclude that a reasonable number of 130 patches are enough to achieve good accuracy in this application.
Prediction on carbon dioxide emissions based on fuzzy rules
NASA Astrophysics Data System (ADS)
Pauzi, Herrini; Abdullah, Lazim
2014-06-01
There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most of the conventional methods are not able to provide good forecasting performance due to problems with the non-linearity, uncertainty and complexity of the data. Artificial intelligence techniques are successfully used in modeling air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare prediction performance. Data on five variables (energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity) are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.
Assessing the stability of human locomotion: a review of current measures
Bruijn, S. M.; Meijer, O. G.; Beek, P. J.; van Dieën, J. H.
2013-01-01
Falling poses a major threat to the steadily growing population of the elderly in modern-day society. A major challenge in the prevention of falls is the identification of individuals who are at risk of falling owing to an unstable gait. At present, several methods are available for estimating gait stability, each with its own advantages and disadvantages. In this paper, we review the currently available measures: the maximum Lyapunov exponent (λS and λL), the maximum Floquet multiplier, variability measures, long-range correlations, extrapolated centre of mass, stabilizing and destabilizing forces, foot placement estimator, gait sensitivity norm and maximum allowable perturbation. We explain what these measures represent and how they are calculated, and we assess their validity, divided up into construct validity, predictive validity in simple models, convergent validity in experimental studies, and predictive validity in observational studies. We conclude that (i) the validity of variability measures and λS is best supported across all levels, (ii) the maximum Floquet multiplier and λL have good construct validity, but negative predictive validity in models, negative convergent validity and (for λL) negative predictive validity in observational studies, (iii) long-range correlations lack construct validity and predictive validity in models and have negative convergent validity, and (iv) measures derived from perturbation experiments have good construct validity, but data are lacking on convergent validity in experimental studies and predictive validity in observational studies. In closing, directions for future research on dynamic gait stability are discussed. PMID:23516062
Explaining Cooperation in Groups: Testing Models of Reciprocity and Learning
ERIC Educational Resources Information Center
Biele, Guido; Rieskamp, Jorg; Czienskowski, Uwe
2008-01-01
What are the cognitive processes underlying cooperation in groups? This question is addressed by examining how well a reciprocity model, two learning models, and social value orientation can predict cooperation in two iterated n-person social dilemmas with continuous contributions. In the first of these dilemmas, the public goods game,…
Alchemy and uncertainty: What good are models?
F.L. Bunnell
1989-01-01
Wildlife-habitat models are increasing in abundance, diversity, and use, but symptoms of failure are evident in their application, including misuse, disuse, failure to test, and litigation. Reasons for failure often relate to the different purposes managers and researchers have for using the models to predict and to aid understanding. This paper examines these two...
NASA Astrophysics Data System (ADS)
Xia, Z. M.; Wang, C. G.; Tan, H. F.
2018-04-01
A pseudo-beam model with a modified internal bending moment is presented to predict the elastic properties of graphene, including the Young's modulus and Poisson's ratio. In order to overcome a drawback of existing molecular structural mechanics models, which only account for pure bending (a constant bending moment), the presented model accounts for linear bending moments deduced from the balance equations. Based on this pseudo-beam model, an analytical prediction of the Young's modulus and Poisson's ratio of graphene is derived from the strain energy equations by using Castigliano's second theorem. The elastic properties of graphene are then calculated and compared with results available in the literature, which verifies the feasibility of the pseudo-beam model. Finally, the pseudo-beam model is used to study the twisting wrinkling characteristics of annular graphene. Due to the modifications of the internal bending moment, the wrinkling behavior of the graphene sheet is predicted accurately. The obtained results show that the pseudo-beam model has a good ability to predict the elastic properties of graphene accurately, especially the out-of-plane deformation behavior.
Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad
2018-04-21
In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) was carried out based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model had to be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used to model the growth of the microorganism. In both cases, the model predictions matched the experimental results well during the exponential growth phase, indicating the good ability of the solid-medium weight variation method to model volatile product formation in solid-state fermentation. In addition, better predictions were obtained using the logistic model.
NASA Technical Reports Server (NTRS)
Coats, Timothy William
1994-01-01
Progressive failure is a crucial concern when using laminated composites in structural design. Therefore, the ability to model damage and predict the life of laminated composites is vital. The purpose of this research was to experimentally verify the application of the continuum damage model, a progressive failure theory utilizing continuum damage mechanics, to a toughened material system. Damage due to tension-tension fatigue was documented for the IM7/5260 composite laminates. Crack density and delamination surface area were used to calculate the matrix cracking and delamination internal state variables, respectively, to predict stiffness loss. A damage-dependent finite element code qualitatively predicted trends in transverse matrix cracking, axial splits, and local stress-strain distributions for notched quasi-isotropic laminates. The predictions were similar to the experimental data, and it was concluded that the continuum damage model provided a good prediction of stiffness loss while qualitatively predicting damage growth in notched laminates.
Cometabolic degradation kinetics of TCE and phenol by Pseudomonas putida.
Chen, Yan-Min; Lin, Tsair-Fuh; Huang, Chih; Lin, Jui-Che
2008-08-01
Modeling of cometabolic kinetics is important for better understanding degradation reactions and for in situ application of bioremediation. In this study, a model incorporating cell growth and decay, loss of transformation activity, competitive inhibition between the growth and non-growth substrates, and self-inhibition of the non-growth substrate was proposed to simulate the degradation kinetics of phenol and trichloroethylene (TCE) by Pseudomonas putida. All the intrinsic parameters employed in this study were measured independently and were then used to predict the batch experimental data. The model predictions conformed well to the observed data at different phenol and TCE concentrations. At low TCE concentrations (<2 mg l(-1)), the models with and without self-inhibition of the non-growth substrate both simulated the experimental data well. However, at higher TCE concentrations (>6 mg l(-1)), only the model considering self-inhibition could describe the experimental data, suggesting that self-inhibition by TCE was present in the system. The proposed model was also employed to predict the experimental data from a repeated batch reactor, and good agreement was observed between model predictions and experimental data. The results also indicated that the biomass loss during degradation of TCE below 2 mg l(-1) could be fully recovered in the absence of TCE before the next cycle, so the culture could be used for the next batch experiment for the degradation of phenol and TCE. However, for higher TCE concentrations (>6 mg l(-1)), the recovery of biomass may not be as good as that at lower TCE concentrations.
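The model structure described (growth on phenol, biomass decay, loss of transformation capacity, competitive inhibition between the substrates and self-inhibition of TCE) can be written as a small ODE system. The Python sketch below shows one plausible formulation; the rate expressions and parameter values are illustrative assumptions, not the forms or fitted constants from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the fitted values from the study)
mu_max, Ks, Y, b = 0.4, 5.0, 0.6, 0.02      # growth on phenol (1/h, mg/L, mg/mg, 1/h)
kc, Kc, Ki, Tc = 0.05, 1.0, 8.0, 0.1        # TCE transformation constants

def rates(t, y):
    X, S, C = y                              # biomass, phenol, TCE (mg/L)
    # competitive inhibition between substrates; Haldane-type self-inhibition of TCE
    growth = mu_max * S / (Ks + S + Ks * C / Kc) * X
    tce_rate = kc * C / (Kc + C + C**2 / Ki + Kc * S / Ks) * X
    dX = growth - b * X - Tc * tce_rate      # growth, decay, transformation-capacity loss
    dS = -growth / Y
    dC = -tce_rate
    return [dX, dS, dC]

sol = solve_ivp(rates, (0, 48), [20.0, 100.0, 6.0])
print("final biomass, phenol, TCE:", np.round(sol.y[:, -1], 2))
```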
NASA Technical Reports Server (NTRS)
King, James; Nickling, William G.; Gillies, John A.
2005-01-01
The presence of nonerodible elements is well understood to be a reducing factor for soil erosion by wind, but the limits of its protection of the surface and the prediction of erosion thresholds are complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind-driven particle fluxes are reduced by the poor representation of the effectiveness of vegetation in reducing wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995), in their dust emission model, parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partitioning of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on the entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to the incorporation of the roughness geometry and the alterations to the flow it can cause.
Dueñas, C; Fernández, M C; Carretero, J; Liger, E; Cañete, S
2001-04-01
Measurements of gross-alpha and gross-beta activities were made every week during the years 1992-1997 for airborne particulate samples collected using air filters at a clear site. The data are sufficiently numerous to allow examination of variations over time and, from these measurements, to establish several features that are important for understanding trends in atmospheric radioactivity. Two models were used to predict the gross-alpha and gross-beta activities. Good agreement between the results of these models and the measurements was found.
Tabak, Ying P; Johannes, Richard S; Sun, Xiaowu; Nunez, Carlos M; McDonald, L Clifford
2015-06-01
Objective: To predict the likelihood of hospital-onset Clostridium difficile infection (HO-CDI) based on patient clinical presentation at admission. Design: Retrospective data analysis. Setting: Six US acute care hospitals. Patients: Adult inpatients. Methods: We used clinical data collected at the time of admission in electronic health record (EHR) systems to develop and validate a HO-CDI predictive model. The outcome measure was HO-CDI cases identified by a nonduplicate positive C. difficile toxin assay result with stool specimens collected >48 hours after inpatient admission. We fit a logistic regression model to predict the risk of HO-CDI and validated the model using 1,000 bootstrap simulations. Results: Among 78,080 adult admissions, 323 HO-CDI cases were identified (ie, a rate of 4.1 per 1,000 admissions). The logistic regression model yielded 14 independent predictors, including hospital community-onset CDI pressure, patient age ≥65 years, previous healthcare exposures, CDI in a previous admission, admission to the intensive care unit, albumin ≤3 g/dL, creatinine >2.0 mg/dL, bands >32%, platelets ≤150 or >420 × 10⁹/L, and white blood cell count >11,000/mm³. The model had a c-statistic of 0.78 (95% confidence interval [CI], 0.76-0.81) with good calibration. Among the 79% of patients with risk scores of 0-7, 19 HO-CDIs occurred per 10,000 admissions; for patients with risk scores >20, 623 HO-CDIs occurred per 10,000 admissions (P<.0001). Conclusions: Using clinical parameters available at the time of admission, this HO-CDI model demonstrated good predictive ability, and it may have utility as an early risk identification tool for HO-CDI preventive interventions and outcome comparisons.
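The core workflow here, fitting a logistic model on admission variables, summarizing discrimination with a c-statistic, and bootstrap-validating it, can be sketched in a few lines. The Python example below uses entirely synthetic data and a small number of generic predictors; the optimism-correction style of bootstrap validation is one common reading of such a procedure, not necessarily the exact scheme the authors used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# synthetic stand-ins for admission predictors (age >= 65, ICU admission, low albumin, ...)
n = 5000
X = rng.normal(size=(n, 6))
logit = -4.5 + X @ np.array([0.6, 0.4, 0.5, 0.3, 0.2, 0.4])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_c = roc_auc_score(y, model.predict_proba(X)[:, 1])

# bootstrap validation: refit on resamples, evaluate on the original cohort,
# and use the mean optimism to correct the apparent c-statistic
optimism = []
for _ in range(200):                       # the paper used 1,000 resamples
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot_c = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    test_c = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(boot_c - test_c)

print("apparent c-statistic:", round(apparent_c, 3))
print("optimism-corrected c-statistic:", round(apparent_c - float(np.mean(optimism)), 3))
```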
Predicting heavy metal concentrations in soils and plants using field spectrophotometry
NASA Astrophysics Data System (ADS)
Muradyan, V.; Tepanosyan, G.; Asmaryan, Sh.; Sahakyan, L.; Saghatelyan, A.; Warner, T. A.
2017-09-01
The aim of this study is to predict heavy metal (HM) concentrations in soils and plants using field remote sensing methods. The study sites were the industrial town of Kajaran and the city of Yerevan. The research also included sampling of soils and of leaves of two tree species exposed to different pollution levels, and determination of HM contents under laboratory conditions. The obtained spectral values were then collated with the HM contents of the Kajaran soils and of the tree leaves sampled in Yerevan, and statistical analysis was performed. For soils, Zn and Pb showed negative correlation coefficients (p < 0.01) in the 2498 nm spectral range; for plants, Pb showed a significantly higher correlation at the red edge. Regression models and an artificial neural network (ANN) for HM prediction were developed. Good results were obtained for the best stress-sensitive spectral band with the ANN (R2 0.9, RPD 2.0) and with the Simple Linear Regression (SLR) and Partial Least Squares Regression (PLSR) models (R2 0.7, RPD 1.4). The Multiple Linear Regression (MLR) model was not applicable for predicting Pb and Zn concentrations in soils in this research. Almost all full-spectrum PLS models provided good calibration and validation results (RPD > 1.4). Full-spectrum ANN models are characterized by excellent calibration R2, rRMSE and RPD (0.9, 0.1 and >2.5, respectively). For prediction of Pb and Ni contents in plants, SLR and PLS models were used; the two provided almost the same results. Our findings indicate that it is possible to make coarse direct estimates of HM content in soils and plants using rapid and economical reflectance spectroscopy.
Preferential attachment in multiple trade networks
NASA Astrophysics Data System (ADS)
Foschi, Rachele; Riccaboni, Massimo; Schiavo, Stefano
2014-08-01
In this paper we develop a model for the evolution of multiple networks which is able to replicate the concentrated and sparse nature of world trade data. Our model is an extension of the preferential attachment growth model to the case of multiple networks. Countries trade a variety of goods of different complexity, and every country progressively evolves from trading less sophisticated goods to high-tech goods. The probabilities of capturing more trade opportunities at a given level of complexity and of starting to trade more complex goods are both proportional to the number of existing trade links. We provide a set of theoretical predictions and simulation results. A calibration exercise shows that our model replicates the concentration level of world trade as well as the sparsity pattern of the trade matrix. We also discuss a set of numerical solutions to deal with large multiple networks.
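The attachment rule described, with new opportunities at a given complexity level and entry into the next level both arriving in proportion to a country's existing links, is easy to prototype. The Python toy below is a loose illustration of that mechanism on synthetic layers; the numbers of countries, layers and steps and the upgrade rate are arbitrary assumptions, and the sketch makes no attempt to reproduce the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_countries, n_layers, n_steps = 50, 5, 2000

# degree of each country in each product layer (start with one seed link per layer)
degree = np.ones((n_layers, n_countries))

for _ in range(n_steps):
    layer = rng.integers(n_layers)
    # a new trade opportunity in this layer attaches proportionally to existing links
    p = degree[layer] / degree[layer].sum()
    degree[layer, rng.choice(n_countries, p=p)] += 1
    # occasionally a country starts trading a more complex good, again proportionally
    if rng.random() < 0.1 and layer + 1 < n_layers:
        total = degree[:layer + 1].sum(axis=0)
        degree[layer + 1, rng.choice(n_countries, p=total / total.sum())] += 1

# concentration: a few countries end up holding most links in every layer
for l in range(n_layers):
    share_top5 = np.sort(degree[l])[-5:].sum() / degree[l].sum()
    print(f"layer {l}: top-5 countries hold {share_top5:.0%} of links")
```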
Raji, Olaide Y.; Duffy, Stephen W.; Agbaje, Olorunshola F.; Baker, Stuart G.; Christiani, David C.; Cassidy, Adrian; Field, John K.
2013-01-01
Background: External validation of existing lung cancer risk prediction models is limited. Using such models in clinical practice to guide the referral of patients for computed tomography (CT) screening for lung cancer depends on external validation and evidence of predicted clinical benefit. Objective: To evaluate the discrimination of the Liverpool Lung Project (LLP) risk model and demonstrate its predicted benefit for stratifying patients for CT screening by using data from 3 independent studies from Europe and North America. Design: Case–control and prospective cohort study. Setting: Europe and North America. Patients: Participants in the European Early Lung Cancer (EUELC) and Harvard case–control studies and the LLP population-based prospective cohort (LLPC) study. Measurements: 5-year absolute risks for lung cancer predicted by the LLP model. Results: The LLP risk model had good discrimination in both the Harvard (area under the receiver-operating characteristic curve [AUC], 0.76 [95% CI, 0.75 to 0.78]) and the LLPC (AUC, 0.82 [CI, 0.80 to 0.85]) studies and modest discrimination in the EUELC (AUC, 0.67 [CI, 0.64 to 0.69]) study. The decision utility analysis, which incorporates the harms and benefits of using a risk model to make clinical decisions, indicates that the LLP risk model performed better than smoking duration or family history alone in stratifying high-risk patients for lung cancer CT screening. Limitations: The model cannot assess whether including other risk factors, such as lung function or genetic markers, would improve accuracy. Lack of information on asbestos exposure in the LLPC limited the ability to validate the complete LLP risk model. Conclusion: Validation of the LLP risk model in 3 independent external data sets demonstrated good discrimination and evidence of predicted benefits for stratifying patients for lung cancer CT screening. Further studies are needed to prospectively evaluate model performance and evaluate the optimal population risk thresholds for initiating lung cancer screening. Primary Funding Source: Roy Castle Lung Cancer Foundation. PMID:22910935
Physical and mathematical modelling of ladle metallurgy operations. [steelmaking
NASA Technical Reports Server (NTRS)
El-Kaddah, N.; Szekely, J.
1982-01-01
Experimental measurements are reported of the velocity fields and turbulence parameters in a water model of an argon-stirred ladle. These velocity measurements are complemented by direct heat transfer measurements, obtained by studying the rate at which ice rods immersed in the system melt at various locations. The theoretical work involved the use of the turbulent Navier-Stokes equations in conjunction with the k-epsilon model to predict the local velocity fields and the maps of the turbulence parameters. Theoretical predictions were in reasonably good agreement with the experimentally measured velocity fields; the agreement between the predicted and the measured turbulence parameters was less close, but still satisfactory. The implications of these findings for the modelling of ladle metallurgical operations are discussed.
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2018-05-01
Photoproduction of vector mesons is computed with a dipole model in proton-proton ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole model framework is employed in the calculation of vector meson production in diffractive processes. Parameters of the bCGC model are refitted to the latest inclusive deep inelastic scattering data. Employing the bCGC model and the boosted Gaussian light-cone wave function for vector mesons, we obtain predictions for the rapidity distributions of J/ψ and ψ(2s) mesons in proton-proton ultraperipheral collisions at the LHC. The predictions give a good description of the LHCb experimental data. Predictions for ϕ and ω mesons are also presented in this paper.
Khan, Waseem S; Hamadneh, Nawaf N; Khan, Waqar A
2017-01-01
In this study, a multilayer perceptron neural network (MLPNN) was employed to predict the thermal conductivity of PVP electrospun nanocomposite fibers containing multiwalled carbon nanotubes (MWCNTs) and nickel-zinc ferrites [(Ni0.6Zn0.4)Fe2O4]. This is the second attempt to apply an MLPNN with the prey-predator algorithm to the prediction of thermal conductivity of PVP electrospun nanocomposite fibers. The prey-predator algorithm was used to train the neural networks to find the best models, i.e., those minimizing the sum of squared errors between the experimental testing data and the corresponding model results. The minimal error was found to be 0.0028 for the MWCNT model and 0.00199 for the Ni-Zn ferrite model. The predicted artificial neural network (ANN) responses were analyzed statistically using the z-test, the correlation coefficient, and the error functions for both inclusions. The predicted ANN responses for the PVP electrospun nanocomposite fibers were compared with the experimental data and found to be in good agreement.
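As a workflow illustration only: the study trains its networks with a prey-predator algorithm, which is not reproduced here. The Python sketch below fits a standard gradient-trained MLP regressor (scikit-learn) to hypothetical filler-fraction versus thermal-conductivity data and reports a held-out sum of squared errors, the figure of merit quoted in the abstract; all data and network sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# hypothetical data: filler weight fraction -> thermal conductivity (W/m.K)
w = rng.uniform(0.0, 0.15, 200).reshape(-1, 1)
k = 0.2 + 2.5 * w[:, 0] + 8.0 * w[:, 0] ** 2 + rng.normal(0, 0.01, 200)

X_train, X_test, y_train, y_test = train_test_split(w, k, random_state=0)

# the study trains its MLPs with a prey-predator algorithm; scikit-learn's
# built-in training is used here only to show the overall fit/evaluate workflow
mlp = MLPRegressor(hidden_layer_sizes=(10, 10), solver="lbfgs", max_iter=5000, random_state=0)
mlp.fit(X_train, y_train)

sse = np.sum((mlp.predict(X_test) - y_test) ** 2)
print("sum squared error on held-out data:", round(float(sse), 4))
```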
NASA Technical Reports Server (NTRS)
Solomon, P. M.; De Zafra, R.; Parrish, A.; Barrett, J. W.
1984-01-01
Ground-based observations of a mm-wave spectral line at 278 GHz have yielded stratospheric chlorine monoxide column density diurnal variation records which indicate that the mixing ratio and column density of this compound above 30 km are about 20 percent lower than model predictions based on 2.1 parts/billion of total stratospheric chlorine. The observed day-to-night variation is, however, in good agreement with recent model predictions, both confirming the existence of a nighttime reservoir for chlorine and verifying the predicted general rate of its storage and retrieval.
Iranian risk model as a predictive tool for retinopathy in patients with type 2 diabetes.
Azizi-Soleiman, Fatemeh; Heidari-Beni, Motahar; Ambler, Gareth; Omar, Rumana; Amini, Masoud; Hosseini, Sayed-Mohsen
2015-10-01
Diabetic retinopathy (DR) is the leading cause of blindness in patients with type 1 or type 2 diabetes. The gold standard for the detection of DR requires expensive equipment. This study was undertaken to develop a simple and practical scoring system to predict the probability of DR. A total of 1782 patients who had first-degree relatives with type II diabetes were selected. Eye examinations were performed by an expert ophthalmologist. Biochemical and anthropometric predictors of DR were measured. Logistic regression was used to develop a statistical model to predict DR. Goodness of fit was examined using the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. The risk model demonstrated good calibration and discrimination (ROC area=0.76) in the validation sample. Factors associated with DR in our model were duration of diabetes (odds ratio [OR]=2.14, 95% confidence interval [CI]=1.87 to 2.45); glycated hemoglobin (A1C) (OR=1.21, 95% CI=1.13 to 1.30); fasting plasma glucose (OR=1.83, 95% CI=1.28 to 2.62); systolic blood pressure (OR=1.01, 95% CI=1.00 to 1.02); and proteinuria (OR=1.37, 95% CI=1.01 to 1.85). The factors with a protective effect against DR were body mass index and education level (OR=0.95, 95% CI=0.92 to 0.98). The good performance of our risk model suggests that it may be a useful risk-prediction tool for DR. It consists of positive predictors such as A1C, diabetes duration, sex (male), fasting plasma glucose, systolic blood pressure and proteinuria, as well as negative risk factors such as body mass index and education level. Copyright © 2015 Canadian Diabetes Association. Published by Elsevier Inc. All rights reserved.
Syfert, Mindy M; Smith, Matthew J; Coomes, David A
2013-01-01
Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and importance of sampling bias correction within MaxEnt.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.A. Bamberger; L.M. Liljegren; P.S. Lowery
This document presents an analysis of the mechanisms influencing mixing within double-shell slurry tanks. A research program to characterize mixing of slurries within tanks has been proposed. The research program presents a combined experimental and computational approach to produce correlations describing the tank slurry concentration profile (and therefore uniformity) as a function of mixer pump operating conditions. The TEMPEST computer code was used to simulate both a full-scale (prototype) and scaled (model) double-shell waste tank to predict flow patterns resulting from a stationary jet centered in the tank. The simulation results were used to evaluate flow patterns in the tank and to determine whether flow patterns are similar between the full-scale prototype and an existing 1/12-scale model tank. The flow patterns were sufficiently similar to recommend conducting scoping experiments at 1/12-scale. Also, TEMPEST-modeled velocity profiles of the near-floor jet were compared to experimental measurements of the near-floor jet with good agreement. Reported values of physical properties of double-shell tank slurries were analyzed to evaluate the range of properties appropriate for conducting scaled experiments. One-twelfth scale scoping experiments are recommended to confirm the prioritization of the dimensionless groups (gravitational settling, Froude, and Reynolds numbers) that affect slurry suspension in the tank. Two of the proposed 1/12-scale test conditions were modeled using the TEMPEST computer code to observe the anticipated flow fields. This information will be used to guide selection of sampling probe locations. Additional computer modeling is being conducted to model a particulate-laden, rotating jet centered in the tank. The results of this modeling effort will be compared to the scaled experimental data to quantify the agreement between the code and the 1/12-scale experiment. The scoping experiment results will guide selection of parameters to be varied in the follow-on experiments. Data from the follow-on experiments will be used to develop correlations to describe the slurry concentration profile as a function of mixing pump operating conditions. These data will also be used to further evaluate the computer model applications. If the agreement between the experimental data and the code predictions is good, the computer code will be recommended for use to predict slurry uniformity in the tanks under various operating conditions. If the agreement between the code predictions and experimental results is not good, the experimental data correlations will be used to predict slurry uniformity in the tanks within the range of correlation applicability.
The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation
NASA Technical Reports Server (NTRS)
Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)
2001-01-01
A new class of turbulence model is described for wall-bounded, high Reynolds number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of the Spalart-Allmaras or SST models, do not require specification of wall distance, and have similar or reduced computational effort compared with these models.
Leong, Max K.; Syu, Ren-Guei; Ding, Yi-Lung; Weng, Ching-Feng
2017-01-01
The glycine-binding site of the N-methyl-D-aspartate receptor (NMDAR) subunit GluN1 is a potential pharmacological target for neurodegenerative disorders. A novel combinatorial ensemble docking scheme using ligand and protein conformation ensembles and customized support vector machine (SVM)-based models to select the docked pose and to predict the docking score was generated for predicting the NMDAR GluN1-ligand binding affinity. The predicted root mean square deviation (RMSD) values in pose by SVM-Pose models were found to be in good agreement with the observed values (n = 30, r2 = 0.928–0.988, = 0.894–0.954, RMSE = 0.002–0.412, s = 0.001–0.214), and the predicted pKi values by SVM-Score were found to be in good agreement with the observed values for the training samples (n = 24, r2 = 0.967, = 0.899, RMSE = 0.295, s = 0.170) and test samples (n = 13, q2 = 0.894, RMSE = 0.437, s = 0.202). When subjected to various statistical validations, the developed SVM-Pose and SVM-Score models consistently met the most stringent criteria. A mock test asserted the predictivity of this novel docking scheme. Collectively, this accurate novel combinatorial ensemble docking scheme can be used to predict the NMDAR GluN1-ligand binding affinity for facilitating drug discovery. PMID:28059133
[Predictive model based multimetric index of macroinvertebrates for river health assessment].
Chen, Kai; Yu, Hai Yan; Zhang, Ji Wei; Wang, Bei Xin; Chen, Qiu Wen
2017-06-18
Improving the stability of indices of biotic integrity (IBI; i.e., multimetric indices, MMI) across temporal and spatial scales is one of the most important issues in bioassessment of water ecosystem integrity and in water environment management. Using datasets of field-based macroinvertebrate and physicochemical variables and GIS-based natural predictors (e.g., geomorphology and climate) and land use variables collected at 227 river sites from 2004 to 2011 across Zhejiang Province, China, we used random forests (RF) to adjust for the effects of natural variation at temporal and spatial scales on macroinvertebrate metrics. We then developed natural-variation-adjusted (predictive) and unadjusted (null) MMIs and compared their performance. The core metrics selected for the predictive and null MMIs differed from each other, and the natural variation within the core metrics of the predictive MMI explained by the RF models ranged between 11.4% and 61.2%. The predictive MMI was more precise and accurate, but less responsive and sensitive, than the null MMI. The multivariate nearest-neighbor test determined that 9 test sites and 1 most-degraded site were flagged as outside the environmental space of the reference-site network. We found that the combination of the predictive MMI developed using the predictive model and the nearest-neighbor test performed best and decreased the risks of committing type I errors (designating a water body as being in poor biological condition when it was actually in good condition) and type II errors (designating a water body as being in good biological condition when it was actually in poor condition). Our results provide an effective method to improve the stability and performance of indices of biotic integrity.
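The adjustment step described, modeling each metric from natural predictors with a random forest at reference sites and then scoring sites on the residual between observed and expected values, can be sketched as follows. The Python example uses synthetic predictors and a synthetic metric; the predictor set, sample sizes and the size of the simulated impact are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# synthetic natural predictors (e.g. catchment area, elevation, temperature)
n_ref, n_test = 150, 40
nat_ref = rng.normal(size=(n_ref, 3))
nat_test = rng.normal(size=(n_test, 3))

# a macroinvertebrate metric that partly tracks natural gradients at reference sites
metric_ref = 10 + 2 * nat_ref[:, 0] - 1.5 * nat_ref[:, 2] + rng.normal(0, 1, n_ref)
metric_test = (10 + 2 * nat_test[:, 0] - 1.5 * nat_test[:, 2]
               + rng.normal(0, 1, n_test) - 3.0)   # test sites carry an extra impact signal

# the random forest models the natural expectation; the residual is the adjusted metric
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(nat_ref, metric_ref)
adjusted_ref = metric_ref - rf.predict(nat_ref)
adjusted_test = metric_test - rf.predict(nat_test)

print("mean adjusted metric, reference sites:", round(float(adjusted_ref.mean()), 2))
print("mean adjusted metric, test sites:     ", round(float(adjusted_test.mean()), 2))
```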
Bommert, Andrea; Rahnenführer, Jörg; Lang, Michel
2017-01-01
Finding a good predictive model for a high-dimensional data set can be challenging. For genetic data, it is not only important to find a model with high predictive accuracy, but also that this model uses only a few features and that the selection of these features is stable. This is because, in bioinformatics, the models are used not only for prediction but also for drawing biological conclusions, which makes the interpretability and reliability of the model crucial. We suggest using three target criteria when fitting a predictive model to a high-dimensional data set: the classification accuracy, the stability of the feature selection, and the number of chosen features. As it is unclear which measure is best for evaluating stability, we first compare a variety of stability measures. We conclude that the Pearson correlation has the best theoretical and empirical properties. Also, we find that, for assessing stability, it is most important that a measure includes a correction for chance or for large numbers of chosen features. Then, we analyse Pareto fronts and conclude that it is possible to find models with a stable selection of few features without losing much predictive accuracy.
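One way to read the stability criterion discussed here is to repeat feature selection on resampled data, encode each selection as a 0/1 vector over the features, and average the pairwise Pearson correlations between those vectors. The Python sketch below does exactly that on synthetic data with a generic univariate filter; the selector, the number of resamples and the value of k are illustrative choices, not those from the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=300, n_features=200, n_informative=10, random_state=0)

# repeat feature selection on bootstrap resamples and record 0/1 selection vectors
selections = []
for _ in range(20):
    idx = rng.integers(0, len(y), len(y))
    mask = SelectKBest(f_classif, k=15).fit(X[idx], y[idx]).get_support()
    selections.append(mask.astype(float))

# stability = average pairwise Pearson correlation between the selection vectors
pairs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(selections, 2)]
print("Pearson-correlation stability:", round(float(np.mean(pairs)), 3))
```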
Assessing participation in community-based physical activity programs in Brazil.
Reis, Rodrigo S; Yan, Yan; Parra, Diana C; Brownson, Ross C
2014-01-01
This study aimed to develop and validate a risk prediction model to examine the characteristics associated with participation in community-based physical activity programs in Brazil. We used pooled data from three surveys conducted from 2007 to 2009 in state capitals of Brazil with 6166 adults. A risk prediction model was built with program participation as the outcome. The predictive accuracy of the model was quantified through its discrimination (C statistic) and calibration (Brier score) properties. Bootstrapping methods were used to validate the predictive accuracy of the final model. The final model showed sex (women: odds ratio [OR] = 3.18, 95% confidence interval [CI] = 2.14-4.71), having less than a high school degree (OR = 1.71, 95% CI = 1.16-2.53), reporting good health (OR = 1.58, 95% CI = 1.02-2.24) or very good/excellent health (OR = 1.62, 95% CI = 1.05-2.51), having any comorbidity (OR = 1.74, 95% CI = 1.26-2.39), and perceiving the environment as safe to walk at night (OR = 1.59, 95% CI = 1.18-2.15) as predictors of participation in physical activity programs. Accuracy indices were adequate (C index = 0.778, Brier score = 0.031) and similar to those obtained from bootstrapping (C index = 0.792, Brier score = 0.030). Sociodemographic and health characteristics as well as perceptions of the environment are strong predictors of participation in community-based programs in selected cities of Brazil.
DeepQA: improving the estimation of single protein model quality with deep belief networks.
Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin
2016-12-05
Protein quality assessment (QA), which is useful for ranking and selecting protein models, has long been viewed as one of the major challenges in protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting mostly of low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method, DeepQA, based on a deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that the deep belief network has better performance than Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves state-of-the-art performance on the CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for single-model protein quality assessment and protein structure prediction. The source code, executable, documentation and training/test datasets of DeepQA for Linux are freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/ .
Koenecke, Christian; Göhring, Gudrun; de Wreede, Liesbeth C.; van Biezen, Anja; Scheid, Christof; Volin, Liisa; Maertens, Johan; Finke, Jürgen; Schaap, Nicolaas; Robin, Marie; Passweg, Jakob; Cornelissen, Jan; Beelen, Dietrich; Heuser, Michael; de Witte, Theo; Kröger, Nicolaus
2015-01-01
The aim of this study was to determine the impact of the revised 5-group International Prognostic Scoring System cytogenetic classification on outcome after allogeneic stem cell transplantation in patients with myelodysplastic syndromes or secondary acute myeloid leukemia who were reported to the European Society for Blood and Marrow Transplantation database. A total of 903 patients had sufficient cytogenetic information available at stem cell transplantation to be classified according to the 5-group classification. Poor and very poor risk according to this classification was an independent predictor of shorter relapse-free survival (hazard ratio 1.40 and 2.14), overall survival (hazard ratio 1.38 and 2.14), and significantly higher cumulative incidence of relapse (hazard ratio 1.64 and 2.76), compared to patients with very good, good or intermediate risk. When comparing the predictive performance of a series of Cox models both for relapse-free survival and for overall survival, a model with simplified 5-group cytogenetics (merging very good, good and intermediate cytogenetics) performed best. Furthermore, monosomal karyotype is an additional negative predictor for outcome within patients of the poor, but not the very poor risk group of the 5-group classification. The revised International Prognostic Scoring System cytogenetic classification allows patients with myelodysplastic syndromes to be separated into three groups with clearly different outcomes after stem cell transplantation. Poor and very poor risk cytogenetics were strong predictors of poor patient outcome. The new cytogenetic classification added value to prediction of patient outcome compared to prediction models using only traditional risk factors or the 3-group International Prognostic Scoring System cytogenetic classification. PMID:25552702
Fundamental Algorithms of the Goddard Battery Model
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1985-01-01
The Goddard Space Flight Center (GSFC) is currently producing a computer model to predict nickel-cadmium (NiCd) battery performance in a low Earth orbit (LEO) cycling regime. The model proper is still in development, but its inherent, fundamental algorithms (or methodologies) are defined. At present, the model is closely dependent on empirical data, and the database currently used is of questionable accuracy. Even so, very good correlations have been found between model predictions and actual cycling data. A more accurate and encompassing database has been generated to serve two functions: to show the limitations of the current database, and to be incorporated into the model proper for more accurate predictions. The fundamental algorithms of the model and the present database and its limitations are described, and a brief preliminary analysis of the new database and its verification of the model's methodology is presented.
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated the MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
[Potential distribution of Panax ginseng and its predicted responses to climate change.
Zhao, Ze Fang; Wei, Hai Yan; Guo, Yan Long; Gu, Wei
2016-11-18
This study utilized Panax ginseng as the research object. Based on the BioMod2 platform, with species presence data and 22 climatic variables, the potential geographic distribution of P. ginseng under current conditions in northeast China was simulated with ten species distribution models. Then, using the area under the receiver-operating characteristic curve (ROC) as weights, we built an ensemble model that integrated the results of the 10 models. Using the ensemble model, the future distributions of P. ginseng were also projected for the 2050s and 2070s under the RCP 8.5, RCP 6, RCP 4.5 and RCP 2.6 emission scenarios of the IPCC (Intergovernmental Panel on Climate Change). The results showed that, under present climatic conditions, 10.4% of the study area was identified as suitable habitat, mainly located in the northeast Changbai Mountains and the southeastern Xiaoxing'an Mountains. The model simulations indicated that suitable habitat would change relatively significantly under the different climate change scenarios, and that in general its range would decrease to a certain degree. Meanwhile, the goodness-of-fit, predicted ranges, and weights of the explanatory variables varied among models. According to goodness-of-fit, Maxent had the highest model performance, followed by GAM, RF and ANN, while SRE had the lowest prediction accuracy. In this study we established an ensemble model that can improve the accuracy of existing species distribution models and optimize species distribution predictions.
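The ensemble step, weighting each model's habitat-suitability prediction by its ROC AUC, can be illustrated without the BioMod2 platform. The Python sketch below trains three generic classifiers as stand-ins for the ten distribution models, evaluates their AUCs on held-out presence/absence data, and forms an AUC-weighted average; the data are synthetic and the choice of stand-in models is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# synthetic presence/absence data standing in for occurrences vs climate variables
X, y = make_classification(n_samples=800, n_features=8, n_informative=5, random_state=0)
X_tr, X_ev, y_tr, y_ev = train_test_split(X, y, test_size=0.3, random_state=0)

# three stand-ins for the distribution models in the ensemble (the study used ten)
models = {
    "GLM-like": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
}

aucs, preds = {}, {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    preds[name] = m.predict_proba(X_ev)[:, 1]
    aucs[name] = roc_auc_score(y_ev, preds[name])

# ensemble suitability = AUC-weighted average of the individual model predictions
weights = np.array([aucs[n] for n in models])
ensemble = np.average(np.column_stack([preds[n] for n in models]), axis=1, weights=weights)
print({n: round(a, 3) for n, a in aucs.items()},
      "ensemble AUC:", round(roc_auc_score(y_ev, ensemble), 3))
```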
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-01-01
Hybrids are broadly used in plant breeding, and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and to test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. The Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy, and smaller MSE. However, the AD and DD variances were estimated with high standard errors. Using the same criteria, the progeny gametic-phase-based models performed better in fitting the observations and predicting genetic values. However, the DD variance could not be separated from the dominance variance, and null estimates were obtained for the AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
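The marker-based relationship matrices that such mixed models rely on are typically built directly from the SNP genotypes. As a small, hedged illustration, the Python sketch below constructs a VanRaden-type additive genomic relationship matrix from synthetic 0/1/2 genotype codes; this is one common construction, not necessarily the exact matrices used in the study.

```python
import numpy as np

rng = np.random.default_rng(8)

# synthetic SNP genotypes coded 0/1/2 for a set of parents (rows) and markers (columns)
n_ind, n_snp = 22, 3303
geno = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)

# VanRaden-type additive genomic relationship matrix: G = ZZ' / (2 * sum p(1-p))
p = geno.mean(axis=0) / 2.0                 # allele frequencies
Z = geno - 2.0 * p                          # centre each marker by twice its allele frequency
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

print("G is", G.shape, "with mean diagonal", round(float(np.diag(G).mean()), 2))
```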
Concentrations and fate of decamethylcyclopentasiloxane (D(5)) in the atmosphere.
McLachlan, Michael S; Kierkegaard, Amelie; Hansen, Kaj M; van Egmond, Roger; Christensen, Jesper H; Skjøth, Carsten A
2010-07-15
Decamethylcyclopentasiloxane (D(5)) is a volatile compound used in personal care products that is released to the atmosphere in large quantities. Although D(5) is currently under consideration for regulation, there have been no field investigations of its atmospheric fate. We employed a recently developed, quality assured method to measure D(5) concentration in ambient air at a rural site in Sweden. The samples were collected with daily resolution between January and June 2009. The D(5) concentration ranged from 0.3 to 9 ng m(-3), which is 1-3 orders of magnitude lower than previous reports. The measured data were compared with D(5) concentrations predicted using an atmospheric circulation model that included both OH radical and D(5) chemistry. The model was parametrized using emissions estimates and physical chemical properties determined in laboratory experiments. There was good agreement between the measured and modeled D(5) concentrations. The results show that D(5) is clearly subject to long-range atmospheric transport, but that it is also effectively removed from the atmosphere via phototransformation. Atmospheric deposition has little influence on the atmospheric fate. The good agreement between the model predictions and the field observations indicates that there is a good understanding of the major factors governing D(5) concentrations in the atmosphere.
Resource and competitive dynamics shape the benefits of public goods cooperation in a plant pathogen
Platt, Thomas G.; Fuqua, Clay; Bever, James D.
2012-01-01
Cooperative benefits depend on a variety of ecological factors. Many cooperative bacteria increase the population size of their groups by making a public good available. Increased local population size can alleviate the constraints of kin competition on the evolution of cooperation by enhancing the between-group fitness of cooperators. The cooperative pathogenesis of Agrobacterium tumefaciens causes infected plants to exude opines—resources that provide a nearly exclusive source of nutrient for the pathogen. We experimentally demonstrate that opines provide cooperative A. tumefaciens cells a within-group fitness advantage over saprophytic agrobacteria. Our results are congruent with a resource-consumer competition model, which predicts that cooperative, virulent agrobacteria are at a competitive disadvantage when opines are unavailable, but have an advantage when opines are available at sufficient levels. This model also predicts that freeloading agrobacteria that catabolize opines but cannot infect plants competitively displace the cooperative pathogen from all environments. However, we show that these cooperative public goods also promote increased local population size. A model built from the Price Equation shows that this effect on group size can contribute to the persistence of cooperative pathogenesis despite inherent kin competition for the benefits of pathogenesis. PMID:22671559
Effective equations for matter-wave gap solitons in higher-order transversal states.
Mateo, A Muñoz; Delgado, V
2013-10-01
We demonstrate that an important class of nonlinear stationary solutions of the three-dimensional (3D) Gross-Pitaevskii equation (GPE) exhibiting nontrivial transversal configurations can be found and characterized in terms of an effective one-dimensional (1D) model. Using a variational approach we derive effective equations of lower dimensionality for BECs in (m,n(r)) transversal states (states featuring a central vortex of charge m as well as n(r) concentric zero-density rings at every z plane) which provides us with a good approximate solution of the original 3D problem. Since the specifics of the transversal dynamics can be absorbed in the renormalization of a couple of parameters, the functional form of the equations obtained is universal. The model proposed finds its principal application in the study of the existence and classification of 3D gap solitons supported by 1D optical lattices, where in addition to providing a good estimate for the 3D wave functions it is able to make very good predictions for the μ(N) curves characterizing the different fundamental families. We have corroborated the validity of our model by comparing its predictions with those from the exact numerical solution of the full 3D GPE.
Evaporative segregation in 80% Ni-20% Cr and 60% Fe-40% Ni alloys
NASA Technical Reports Server (NTRS)
Gupta, K. P.; Mukherjee, J. L.; Li, C. H.
1974-01-01
An analytical approach is outlined to calculate the evaporative segregation behavior in metallic alloys. The theoretical predictions are based on a 'normal' evaporation model and have been examined for Fe-Ni and Ni-Cr alloys. A fairly good agreement has been found between the predicted values and the experimental results found in the literature.
Analysis and modeling of infrasound from a four-stage rocket launch
Blom, Philip Stephen; Marcillo, Omar Eduardo; Arrowsmith, Stephen
2016-06-17
Infrasound from a four-stage sounding rocket was recorded by several arrays within 100 km of the launch pad. Propagation modeling methods have been applied to the known trajectory to predict infrasonic signals at the ground in order to identify what information might be obtained from such observations. There is good agreement between modeled and observed back azimuths, and predicted arrival times for motor ignition signals match those observed. The signal due to the high-altitude stage ignition is found to be low amplitude, despite predictions of weak attenuation. This lack of signal is possibly due to inefficient aeroacoustic coupling in the rarefied upper atmosphere.
Predictions of spray combustion interactions
NASA Technical Reports Server (NTRS)
Shuen, J. S.; Solomon, A. S. P.; Faeth, G. M.
1984-01-01
Mean and fluctuating phase velocities, mean particle mass flux, particle size, and mean gas-phase Reynolds stress, composition and temperature were measured in stationary, turbulent, axisymmetric flows which conform to the boundary layer approximations while having well-defined initial and boundary conditions: dilute particle-laden jets, nonevaporating sprays, and evaporating sprays injected into a still air environment. Three models of the processes, typical of current practice, were evaluated. The locally homogeneous flow and deterministic separated flow models did not provide very satisfactory predictions over the present data base. In contrast, the stochastic separated flow model generally provided good predictions and appears to be an attractive approach for treating nonlinear interphase transport processes in turbulent flows containing particles (drops).
NASA Astrophysics Data System (ADS)
Zhao, Xiang-Feng; Shang, De-Guang; Sun, Yu-Juan; Song, Ming-Liang; Wang, Xiao-Wei
2018-01-01
The maximum shear strain and the normal strain excursion on the critical plane are taken as the primary crack-driving-force parameters to establish a new short crack model in this paper. An equivalent strain-based intensity factor is proposed to correlate the short crack growth rate under multiaxial loading. Building on the short crack model, a new method for multiaxial fatigue life prediction is proposed based on crack growth analysis. It is demonstrated that the method can be used under proportional and non-proportional loadings. The predicted results showed good agreement with experimental lives in both the high-cycle and low-cycle regions.
Liu, Xiu-ying; Wang, Li; Chang, Qing-rui; Wang, Xiao-xing; Shang, Yan
2015-07-01
Wuqi County of Shaanxi Province, where vegetation recovery measures have been carried out for years, was taken as the study area. A total of 100 loess samples from 24 different profiles were collected. The total nitrogen (TN) and alkali-hydrolysable nitrogen (AHN) contents of the soil samples were analyzed, and the samples were scanned in the visible/near-infrared (VNIR) region of 350-2500 nm in the laboratory. Calibration models relating TN and AHN contents to the VNIR spectra were developed based on correlation analysis (CA) and partial least squares regression (PLS), and independent samples were used to validate them. The results indicated that the optimum model for predicting TN of loess was established using the first derivative of reflectance, while the best model for predicting AHN of loess was established using normal derivative spectra. The optimum TN model could effectively predict TN in loess from 0 to 40 cm, but the optimum AHN model could only roughly predict AHN at the same depth. This study provides a good method for rapidly predicting TN of loess where vegetation recovery measures have been adopted, but prediction of AHN needs further study.
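The calibration step, regressing a nitrogen content on derivative-transformed VNIR spectra with partial least squares and checking it on held-out samples, can be sketched compactly. The Python example below uses synthetic spectra with an artificial nitrogen-related absorption feature; the wavelength grid, sample count and number of PLS components are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# synthetic VNIR reflectance spectra (350-2500 nm) with a weak nitrogen-related signal
wavelengths = np.arange(350, 2501, 10)
n_samples = 100
tn = rng.uniform(0.2, 1.5, n_samples)                       # total N, g/kg (illustrative)
spectra = (0.4 + 0.1 * np.sin(wavelengths / 300.0)          # baseline spectral shape
           + 0.05 * np.outer(tn, np.exp(-((wavelengths - 2100) / 80.0) ** 2))
           + rng.normal(0, 0.002, (n_samples, wavelengths.size)))

# first-derivative spectra, the transform reported as optimal for the TN model
d1 = np.gradient(spectra, axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(d1, tn, random_state=0)
pls = PLSRegression(n_components=6).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
r2 = 1 - np.sum((pred - y_te) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print("R2 on held-out samples:", round(float(r2), 3))
```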
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and materials informatics is the use of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development, and prediction for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate of the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in materials informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
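The consensus idea, repeatedly fitting on random subsets, keeping the model with the largest set of inliers, and flagging the rest as outliers, is available off the shelf. The Python sketch below runs scikit-learn's RANSACRegressor (with its default linear base model) on a hypothetical descriptor matrix with a few corrupted samples; the data and residual threshold are illustrative choices, not the settings used in the paper.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(6)

# hypothetical descriptor matrix and photovoltaic property with a few outlying cells
n, p = 120, 5
X = rng.normal(size=(n, p))
y = X @ np.array([1.2, -0.8, 0.5, 0.0, 0.3]) + rng.normal(0, 0.2, n)
y[:10] += rng.normal(0, 5, 10)                      # corrupted samples

# RANSAC repeatedly fits on random subsets and keeps the consensus (inlier) model
ransac = RANSACRegressor(residual_threshold=1.0, random_state=0)
ransac.fit(X, y)

print("flagged outliers:", int((~ransac.inlier_mask_).sum()))
print("inlier-model coefficients:", np.round(ransac.estimator_.coef_, 2))
```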
Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi
2018-02-01
To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics features extraction, a radiomics signature was constructed in derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram that integrated the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in validation cohort gave a comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.
Li, Yuqin; You, Guirong; Jia, Baoxiu; Si, Hongzong; Yao, Xiaojun
2014-01-01
Quantitative structure-activity relationship (QSAR) models were developed to predict the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase via the heuristic method (HM) and gene expression programming (GEP). The descriptors of 33 pyrrolidine derivatives were calculated with the software CODESSA, which computes quantum chemical, topological, geometrical, constitutional, and electrostatic descriptors. HM was also used to preselect 5 appropriate molecular descriptors. Linear and nonlinear QSAR models were then developed based on HM and GEP, respectively, and the two prediction models led to good correlation coefficients (R2) of 0.93 and 0.94. The two QSAR models are useful for predicting the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase during the discovery of new anticancer drugs and provide theoretical information for studying new drugs.
Prediction of physical workload in reduced gravity environments
NASA Technical Reports Server (NTRS)
Goldberg, Joseph H.
1987-01-01
The background, development, and application of a methodology to predict human energy expenditure and physical workload in low gravity environments, such as a Lunar or Martian base, is described. Based on a validated model to predict energy expenditures in Earth-based industrial jobs, the model relies on an elemental analysis of the proposed job. Because the job itself need not physically exist, many alternative job designs may be compared in their physical workload. The feasibility of using the model for prediction of low gravity work was evaluated by lowering body and load weights, while maintaining basal energy expenditure. Comparison of model results was made both with simulated low gravity energy expenditure studies and with reported Apollo 14 Lunar EVA expenditure. Prediction accuracy was very good for walking and for cart pulling on slopes less than 15 deg, but the model underpredicted the most difficult work conditions. This model was applied to example core sampling and facility construction jobs, as presently conceptualized for a Lunar or Martian base. Resultant energy expenditures and suggested work-rest cycles were well within the range of moderate work difficulty. Future model development requirements were also discussed.
Universal predictability of mobility patterns in cities
Yan, Xiao-Yong; Zhao, Chen; Fan, Ying; Di, Zengru; Wang, Wen-Xu
2014-01-01
Despite the long history of modelling human mobility, we continue to lack a highly accurate approach with low data requirements for predicting mobility patterns in cities. Here, we present a population-weighted opportunities model without any adjustable parameters to capture the underlying driving force accounting for human mobility patterns at the city scale. We use various mobility data collected from a number of cities with different characteristics to demonstrate the predictive power of our model. We find that insofar as the spatial distribution of population is available, our model offers universal prediction of mobility patterns in good agreement with real observations, including distance distribution, destination travel constraints and flux. By contrast, the models that succeed in modelling mobility patterns in countries are not applicable in cities, which suggests that there is a diversity of human mobility at different spatial scales. Our model has potential applications in many fields relevant to mobility behaviour in cities, without relying on previous mobility measurements. PMID:25232053
Cissen, M; Meijerink, A M; D'Hauwers, K W; Meissner, A; van der Weide, N; Mochtar, M H; de Melker, A A; Ramos, L; Repping, S; Braat, D D M; Fleischer, K; van Wely, M
2016-09-01
Can an externally validated model, based on biological variables, be developed to predict successful sperm retrieval with testicular sperm extraction (TESE) in men with non-obstructive azoospermia (NOA) using a large nationwide cohort? Our prediction model including six variables was able to make a good distinction between men with a good chance and men with a poor chance of obtaining spermatozoa with TESE. Using ICSI in combination with TESE, even men suffering from NOA are able to father their own biological child. Only in approximately half of the patients with NOA can testicular sperm be retrieved successfully. The few models that have been developed to predict the chance of obtaining spermatozoa with TESE were based on small datasets, and none of them has been validated externally. We performed a retrospective nationwide cohort study. Data from 1371 TESE procedures were collected between June 2007 and June 2015 in two fertility centres. All men with NOA undergoing their first TESE procedure as part of a fertility treatment were included. The primary end-point was the presence of one or more spermatozoa (regardless of their motility) in the testicular biopsies. We constructed a model for the prediction of successful sperm retrieval, using univariable and multivariable binary logistic regression analysis and the dataset from one centre. This model was then validated using the dataset from the other centre. The area under the receiver-operating characteristic curve (AUC) was calculated and model calibration was assessed. There were 599 (43.7%) successful sperm retrievals after a first TESE procedure. The prediction model, built after multivariable logistic regression analysis, demonstrated that higher male age, higher levels of serum testosterone and lower levels of FSH and LH were predictive for successful sperm retrieval. Diagnosis of idiopathic NOA and the presence of an azoospermia factor c gene deletion were predictive for unsuccessful sperm retrieval. The AUC was 0.69 (95% confidence interval (CI): 0.66-0.72). The difference between the mean observed chance and the mean predicted chance was <2.0% in all groups, indicating good calibration. In validation, the model had moderate discriminative capacity (AUC 0.65, 95% CI: 0.62-0.72) and moderate calibration: the predicted probability never differed by more than 9.2% from the mean observed probability. The percentage of men with Klinefelter syndrome among men diagnosed with NOA is expected to be higher than in our study population, which is a potential selection bias. The ability of the sperm retrieved to fertilize an oocyte and produce a live birth was not tested. This model can help in clinical decision-making in men with NOA by reliably predicting the chance of obtaining spermatozoa with TESE. This study was partly supported by an unconditional grant from Merck Serono (to D.D.M.B. and K.F.) and by the Department of Obstetrics and Gynaecology of Radboud University Medical Center, Nijmegen, The Netherlands, the Department of Obstetrics and Gynaecology, Jeroen Bosch Hospital, Den Bosch, The Netherlands, and the Department of Obstetrics and Gynaecology, Academic Medical Center, Amsterdam, The Netherlands. Merck Serono had no influence on the concept, design or elaboration of this study. Not applicable. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
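The develop-then-externally-validate workflow described above can be illustrated with a small sketch: a logistic model is fitted on one synthetic centre, applied to a second, and summarized by AUC plus a grouped comparison of predicted and observed probabilities. The predictors, coefficients and cohort sizes below are invented for illustration only.

```python
# Hypothetical sketch of development followed by external validation, with
# discrimination (AUC) and a simple grouped calibration check; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_cohort(n):
    age = rng.normal(35, 8, n)
    fsh = rng.lognormal(2.5, 0.5, n)          # serum FSH, IU/L (synthetic)
    logit = 0.04 * (age - 35) - 0.08 * (fsh - 12)
    success = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return np.column_stack([age, fsh]), success

X_dev, y_dev = make_cohort(800)               # development centre
X_val, y_val = make_cohort(571)               # validation centre

model = LogisticRegression().fit(X_dev, y_dev)
p_val = model.predict_proba(X_val)[:, 1]
print("validation AUC:", round(roc_auc_score(y_val, p_val), 2))

# calibration: compare mean predicted vs. observed success per risk quintile
bins = np.quantile(p_val, [0, .2, .4, .6, .8, 1.0])
group = np.clip(np.digitize(p_val, bins[1:-1]), 0, 4)
for g in range(5):
    m = group == g
    print(f"quintile {g+1}: predicted {p_val[m].mean():.2f}, observed {y_val[m].mean():.2f}")
```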
Ahadian, Samad; Kawazoe, Yoshiyuki
2009-06-04
Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input-output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input-output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down. PMID:20596382
Waste tyre pyrolysis: modelling of a moving bed reactor.
Aylón, E; Fernández-Colino, A; Murillo, R; Grasa, G; Navarro, M V; García, T; Mastral, A M
2010-12-01
This paper describes the development of a new model for waste tyre pyrolysis in a moving bed reactor. This model comprises three different sub-models: a kinetic sub-model that predicts solid conversion in terms of reaction time and temperature, a heat transfer sub-model that calculates the temperature profile inside the particle and the energy flux from the surroundings to the tyre particles and, finally, a hydrodynamic sub-model that predicts the solid flow pattern inside the reactor. These three sub-models have been integrated in order to develop a comprehensive reactor model. Experimental results were obtained in a continuous moving bed reactor and used to validate model predictions, with good agreement between the experimental and simulated results. In addition, a parametric study of the model was carried out, which showed that tyre particle heating is clearly faster than the average particle residence time inside the reactor. This fast particle heating, together with fast reaction kinetics, enables total solid conversion to be achieved in this system, in accordance with the predictive model. Copyright © 2010 Elsevier Ltd. All rights reserved.
Modeling of Fume Formation from Shielded Metal Arc Welding Process
NASA Astrophysics Data System (ADS)
Sivapirakasam, S. P.; Mohan, Sreejith; Santhosh Kumar, M. C.; Surianarayanan, M.
2017-04-01
In this study, a semi-empirical model of the fume formation rate (FFR) from a shielded metal arc welding (SMAW) process has been developed. The model was developed for a DC electrode positive (DCEP) operation and involves calculation of the droplet temperature, the surface area of the droplet, and the partial vapor pressures of the constituents of the droplet to predict the FFR. The model was further extended to predict the FFR from nano-coated electrodes. The model estimates the FFR for Fe and Mn assuming a constant proportion of the other elements in the electrode. The Fe FFR was overestimated, while the Mn FFR was underestimated. The contributions of spatter and other mechanisms in the arc responsible for fume formation were neglected. A good positive correlation was obtained between the predicted and experimental FFR values, which highlights the usefulness of the model.
NASA Astrophysics Data System (ADS)
Manan, Norhafizah A.; Abidin, Basir
2015-02-01
Five percent of patients who went through percutaneous coronary intervention (PCI) experienced major adverse cardiac events (MACE) after the procedure. Risk prediction of MACE following a PCI procedure is therefore helpful. This work describes a review of such prediction models currently in use. A literature search was done on the PubMed and SCOPUS databases. Thirty publications were found, but only 4 studies were chosen based on the data used, design, and outcome of the study. Particular emphasis was placed on the study design, population, sample size, modeling method, predictors, outcomes, and the discrimination and calibration of each model. All the models had acceptable discrimination ability (C-statistic >0.7) and good calibration (Hosmer-Lemeshow P-value >0.05). The most common model used was multivariate logistic regression and the most popular predictor was age.
Application of GA-SVM method with parameter optimization for landslide development prediction
NASA Astrophysics Data System (ADS)
Li, X. Z.; Kong, J. M.
2013-10-01
Prediction of the landslide development process is always a hot issue in landslide research. So far, many methods for landslide displacement series prediction have been proposed. The support vector machine (SVM) has been proved to be a novel algorithm with good performance. However, the performance strongly depends on the right selection of the parameters (C and γ) of the SVM model. In this study, we present an application of the GA-SVM method with parameter optimization to landslide displacement rate prediction. We selected a typical large-scale landslide in a hydro-electric engineering area of Southwest China as a case. On the basis of analyzing the basic characteristics and monitoring data of the landslide, a single-factor GA-SVM model and a multi-factor GA-SVM model of the landslide were built. Moreover, the models were compared with single-factor and multi-factor SVM models of the landslide. The results show that the four models have high prediction accuracies, but the accuracies of the GA-SVM models are slightly higher than those of the SVM models, and the accuracies of the multi-factor models are slightly higher than those of the single-factor models. The accuracy of the multi-factor GA-SVM model is the highest, with the smallest RMSE of 0.0009 and the largest RI of 0.9992.
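A minimal sketch of the GA-SVM idea follows: a toy genetic-style search (selection plus mutation only, no crossover) over the SVR hyperparameters C and γ, scored by cross-validated RMSE on a synthetic displacement-rate series. This is not the authors' implementation; the monitoring data and GA settings are placeholders.

```python
# Minimal sketch of a GA-style search over SVR hyperparameters (C, gamma);
# the displacement series is synthetic and the GA is deliberately simplified.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
t = np.arange(120, dtype=float)
disp_rate = 0.05 * t + 2 * np.sin(t / 6) + rng.normal(0, 0.3, t.size)  # synthetic monitoring data
X, y = t.reshape(-1, 1), disp_rate

def fitness(log_c, log_g):
    model = SVR(C=10 ** log_c, gamma=10 ** log_g)
    return cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error").mean()

pop = rng.uniform([-1, -4], [3, 0], size=(20, 2))       # individuals: (log10 C, log10 gamma)
for generation in range(15):
    scores = np.array([fitness(*ind) for ind in pop])
    parents = pop[np.argsort(scores)][-10:]              # keep the better half
    children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.2, (10, 2))  # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(*ind) for ind in pop])]
print("best C = %.3g, gamma = %.3g" % (10 ** best[0], 10 ** best[1]))
```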
Development of New Transferable Coarse-Grained Models of Hydrocarbons.
An, Yaxin; Bejagam, Karteek K; Deshmukh, Sanket A
2018-06-21
We have utilized an approach that integrates molecular dynamics (MD) simulations with particle swarm optimization (PSO) to accelerate the development of coarse-grained (CG) models of hydrocarbons. Specifically, we have developed new transferable CG beads, which can be used to model the hydrocarbons (C5 to C17) and reproduce their experimental properties with good accuracy. First, the PSO method was used to develop the CG beads of the decane model represented with a 2:1 (2-2-2-2-2) mapping scheme. This was followed by the development of the nonane model described with hybrid 2-2-3-2 and 3:1 (3-3-3) mapping schemes. The force-field (FF) parameters for these three CG models were optimized to reproduce four experimentally observed properties, namely density, enthalpy of vaporization, surface tension, and self-diffusion coefficient at 300 K. The CG MD simulations conducted with these new CG models of decane and nonane, at different timesteps, for various system sizes, and at a range of different temperatures, were able to predict their density, enthalpy of vaporization, surface tension, self-diffusion coefficient, expansibility, and isothermal compressibility with good accuracy. Moreover, comparison of the structural features obtained from the CG MD simulations and from the CG beads of mapped all-atom (AA) trajectories of decane and nonane showed very good agreement. To test the chemical transferability of these models, we constructed models for hydrocarbons ranging from pentane to heptadecane by using different combinations of the CG beads of decane and nonane. The properties of pentane to heptadecane predicted by these new CG models showed excellent agreement with the experimental data.
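The particle swarm optimization step can be sketched as below. Because running CG MD is out of scope here, the `predicted_properties` function is a cheap stand-in for a simulation that would return properties such as density and enthalpy of vaporization; the target values and parameter ranges are illustrative only.

```python
# Minimal particle swarm optimization (PSO) sketch for force-field calibration.
# `predicted_properties` is a placeholder for what would, in the paper's
# workflow, be a CG MD simulation returning density, Hvap, etc.
import numpy as np

rng = np.random.default_rng(3)
targets = np.array([0.73, 38.7])                 # e.g. density (g/cm^3), Hvap (kJ/mol), illustrative

def predicted_properties(params):
    eps, sigma = params                          # hypothetical CG bead parameters
    return np.array([0.2 * eps + 0.1 * sigma, 8.0 * eps + 3.0 * sigma])

def cost(params):
    rel_err = (predicted_properties(params) - targets) / targets
    return float(np.sum(rel_err ** 2))

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(0.1, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)]

print("optimized parameters:", gbest, "cost:", cost(gbest))
```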
Computational simulations of vocal fold vibration: Bernoulli versus Navier-Stokes.
Decker, Gifford Z; Thomson, Scott L
2007-05-01
The use of the mechanical energy (ME) equation for fluid flow, an extension of the Bernoulli equation, to predict the aerodynamic loading on a two-dimensional finite element vocal fold model is examined. Three steady, one-dimensional ME flow models, incorporating different methods of flow separation point prediction, were compared. For two models, determination of the flow separation point was based on fixed ratios of the glottal area at separation to the minimum glottal area; for the third model, the separation point determination was based on fluid mechanics boundary layer theory. Results of flow rate, separation point, and intraglottal pressure distribution were compared with those of an unsteady, two-dimensional, finite element Navier-Stokes model. Cases were considered with a rigid glottal profile as well as with a vibrating vocal fold. For small glottal widths, the three ME flow models yielded good predictions of flow rate and intraglottal pressure distribution, but poor predictions of separation location. For larger orifice widths, the ME models were poor predictors of flow rate and intraglottal pressure, but they satisfactorily predicted separation location. For the vibrating vocal fold case, all models resulted in similar predictions of mean intraglottal pressure, maximum orifice area, and vibration frequency, but vastly different predictions of separation location and maximum flow rate.
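A steady, one-dimensional Bernoulli-type calculation with a fixed separation criterion, of the general kind compared above, can be sketched in a few lines; the glottal geometry, driving pressure and the 1.2x area-ratio criterion below are illustrative assumptions, not the study's values.

```python
# Sketch of a steady 1D Bernoulli-type glottal flow model with a fixed
# separation criterion (flow detaches where the area reaches 1.2 x the minimum).
import numpy as np

rho = 1.2                          # air density, kg/m^3
p_sub, p_sup = 800.0, 0.0          # sub/supraglottal pressure, Pa
x = np.linspace(0.0, 3e-3, 200)    # glottal axis, m
area = 1e-5 * (1.0 + 4.0 * (x / x[-1] - 0.4) ** 2)   # convergent-divergent profile, m^2

a_min, i_min = area.min(), area.argmin()
sep_ratio = 1.2
# separation: first point downstream of the minimum where area >= 1.2 * a_min
downstream = np.where((np.arange(x.size) > i_min) & (area >= sep_ratio * a_min))[0]
i_sep = downstream[0] if downstream.size else x.size - 1

# Bernoulli from the subglottal region to the separation point, where the
# pressure is assumed to have recovered to the supraglottal value
q = area[i_sep] * np.sqrt(2.0 * (p_sub - p_sup) / rho)     # volume flow rate, m^3/s
pressure = p_sub - 0.5 * rho * (q / area) ** 2             # upstream of separation
pressure[i_sep:] = p_sup                                   # jet region after separation

print(f"flow rate: {q * 1000:.2f} L/s, separation at x = {x[i_sep] * 1000:.2f} mm")
```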
Monamele, Gwladys C.; Vernet, Marie-Astrid; Nsaibirni, Robert F. J.; Bigna, Jean Joel R.; Kenmoe, Sebastien; Njankouo, Mohamadou Ripa
2017-01-01
Influenza is associated with highly contagious respiratory infections. Previous research has found that influenza transmission is often associated with climate variables, especially in temperate regions. This study was performed in order to fill the knowledge gap regarding the relationship between the incidence of influenza and three meteorological parameters (temperature, rainfall and humidity) in a tropical setting. This was a retrospective study performed in Yaoundé, Cameroon from January 2009 to November 2015. Weekly proportions of confirmed influenza cases from five sentinel sites were considered as dependent variables, whereas weekly values of mean temperature, average relative humidity and accumulated rainfall were considered as independent variables. A univariate linear regression model was used to determine associations between influenza activity and weather covariates. A time-series method was used to predict future values of influenza activity. The data were divided into 2 parts: the first 71 months were used to calibrate the model, and the last 12 months to test its predictions. Overall, there were 1173 confirmed infections with influenza virus. Linear regression analysis showed no statistically significant association between influenza activity and weather variables. Very weak relationships (-0.1 < r < 0.1) were observed. Three prediction models were obtained for the different viral types (overall positive, influenza A and influenza B). Model 1 (overall influenza) and model 2 (influenza A) fitted well during the estimation period; however, they did not produce good forecasts. Accumulated rainfall was the only external covariate that enabled a good fit of both models. Based on the stationary R², 29.5% and 41.1% of the variation in the series can be explained by models 1 and 2, respectively. This study underlined the fact that influenza in Cameroon is characterized by year-round activity. The meteorological variables selected in this study did not enable good forecasts of future influenza activity and certainly acted as proxies for other factors not considered, such as UV radiation, absolute humidity, air quality and wind. PMID:29088290
Xu, Suxin; Chen, Jiangang; Wang, Bijia; Yang, Yiqi
2015-11-15
Two predictive models were presented for the adsorption affinities and diffusion coefficients of disperse dyes in polylactic acid matrix. Quantitative structure-sorption behavior relationship would not only provide insights into sorption process, but also enable rational engineering for desired properties. The thermodynamic and kinetic parameters for three disperse dyes were measured. The predictive model for adsorption affinity was based on two linear relationships derived by interpreting the experimental measurements with molecular structural parameters and compensation effect: ΔH° vs. dye size and ΔS° vs. ΔH°. Similarly, the predictive model for diffusion coefficient was based on two derived linear relationships: activation energy of diffusion vs. dye size and logarithm of pre-exponential factor vs. activation energy of diffusion. The only required parameters for both models are temperature and solvent accessible surface area of the dye molecule. These two predictive models were validated by testing the adsorption and diffusion properties of new disperse dyes. The models offer fairly good predictive ability. The linkage between structural parameter of disperse dyes and sorption behaviors might be generalized and extended to other similar polymer-penetrant systems. Copyright © 2015 Elsevier Inc. All rights reserved.
Predicting outcome in severe traumatic brain injury using a simple prognostic model.
Sobuwa, Simpiwe; Hartzenberg, Henry Benjamin; Geduld, Heike; Uys, Corrie
2014-06-17
Several studies have made it possible to predict outcome in severe traumatic brain injury (TBI), making such prediction useful as an aid for clinical decision-making in the emergency setting. However, reliable predictive models are lacking for resource-limited prehospital settings such as those in developing countries like South Africa. To develop a simple predictive model for severe TBI using clinical variables in a South African prehospital setting. All consecutive patients admitted to two level-one centres in Cape Town, South Africa, for severe TBI were included. A binary logistic regression model was used, which included three predictor variables: oxygen saturation (SpO₂), Glasgow Coma Scale (GCS) and pupil reactivity. The Glasgow Outcome Scale was used to assess outcome on hospital discharge. A total of 74.4% of the outcomes were correctly predicted by the logistic regression model. The model demonstrated SpO₂ (p=0.019), GCS (p=0.001) and pupil reactivity (p=0.002) as independently significant predictors of outcome in severe TBI. Odds ratios of a good outcome were 3.148 (SpO₂ ≥ 90%), 5.108 (GCS 6-8) and 4.405 (pupils bilaterally reactive). This model is potentially useful for effective prediction of outcome in severe TBI.
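A sketch of such a three-predictor logistic model, with odds ratios read off from the fitted coefficients, is shown below on simulated data; the coefficients are arbitrary and do not reproduce the study's estimates.

```python
# Hypothetical sketch: a three-predictor logistic model of good outcome and the
# corresponding odds ratios; data are simulated, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
spo2_ok = rng.binomial(1, 0.7, n)        # SpO2 >= 90%
gcs_68 = rng.binomial(1, 0.5, n)         # GCS 6-8 (vs 3-5)
pupils_react = rng.binomial(1, 0.6, n)   # bilaterally reactive pupils
logit = -1.5 + 1.1 * spo2_ok + 1.6 * gcs_68 + 1.5 * pupils_react
good_outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([spo2_ok, gcs_68, pupils_react]))
fit = sm.Logit(good_outcome, X).fit(disp=0)
for name, coef in zip(["intercept", "SpO2>=90", "GCS 6-8", "pupils reactive"], fit.params):
    print(f"{name:16s} OR = {np.exp(coef):.2f}")
```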
Neonatal intensive care unit: predictive models for length of stay.
Bender, G J; Koestler, D; Ombao, H; McCourt, M; Alskinis, B; Rubin, L P; Padbury, J F
2013-02-01
Hospital length of stay (LOS) is important to administrators and families of neonates admitted to the neonatal intensive care unit (NICU). A prediction model for NICU LOS was developed using predictors birth weight, gestational age and two severity of illness tools, the score for neonatal acute physiology, perinatal extension (SNAPPE) and the morbidity assessment index for newborns (MAIN). Consecutive admissions (n=293) to a New England regional level III NICU were retrospectively collected. Multiple predictive models were compared for complexity and goodness-of-fit, coefficient of determination (R²) and predictive error. The optimal model was validated prospectively with consecutive admissions (n=615). Observed and expected LOS was compared. The MAIN models had best Akaike's information criterion, highest R² (0.786) and lowest predictive error. The best SNAPPE model underestimated LOS, with substantial variability, yet was fairly well calibrated by birthweight category. LOS was longer in the prospective cohort than the retrospective cohort, without differences in birth weight, gestational age, MAIN or SNAPPE. LOS prediction is improved by accounting for severity of illness in the first week of life, beyond factors known at birth. Prospective validation of both MAIN and SNAPPE models is warranted.
Orruño, Estibalitz; Gagnon, Marie Pierre; Asua, José; Ben Abdeljelil, Anis
2011-01-01
We examined the main factors affecting the intention of physicians to use teledermatology using a modified Technology Acceptance Model (TAM). The investigation was carried out during a teledermatology pilot study conducted in Spain. A total of 276 questionnaires were sent to physicians by email and 171 responded (62%). Cronbach's alpha was acceptably high for all constructs. Theoretical variables were well correlated with each other and with the dependent variable (Intention to Use). Logistic regression indicated that the original TAM model was good at predicting physicians' intention to use teledermatology and that the variables Perceived Usefulness and Perceived Ease of Use were both significant (odds ratios of 8.4 and 7.4, respectively). When other theoretical variables were added, the model was still significant and it also became more powerful. However, the only significant predictor in the modified model was Facilitators with an odds ratio of 9.9. Thus the TAM was good at predicting physicians' intention to use teledermatology. However, the most important variable was the perception of Facilitators to using the technology (e.g. infrastructure, training and support).
Modeling and predicting community responses to events using cultural demographics
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.
2007-04-01
This paper describes a novel capability for modeling and predicting community responses to events (specifically military operations) related to demographics. Demographics in the form of words and/or numbers are used. As an example, State of Alabama annual demographic data for retail sales, auto registration, wholesale trade, shopping goods, and population were used, from which we determined a ranked estimate of the sensitivity of the cultural group response to the demographic parameters. Our algorithm and results are summarized in this paper.
Kinetic model for the collisionless sheath of a collisional plasma
Tang, Xian-Zhu; Guo, Zehua
2016-08-04
Collisional plasmas typically have a mean free path that is still much greater than the Debye length, so the sheath is mostly collisionless. Once the plasma density, temperature, and flow are specified at the sheath entrance, the profile variation of electron and ion density, temperature, flow speed, and conductive heat fluxes inside the sheath is set by collisionless dynamics, and can be predicted by an analytical kinetic model distribution. These predictions are contrasted in this paper with direct kinetic simulations, showing good agreement.
PSO-Assisted Development of New Transferable Coarse-Grained Water Models.
Bejagam, Karteek K; Singh, Samrendra; An, Yaxin; Berry, Carter; Deshmukh, Sanket A
2018-02-15
We have employed a two-to-one mapping scheme to develop three coarse-grained (CG) water models, namely, 1-, 2-, and 3-site CG models. Here, for the first time, particle swarm optimization (PSO) and gradient descent methods were coupled to optimize the force-field parameters of the CG models to reproduce the density, self-diffusion coefficient, and dielectric constant of real water at 300 K. The CG MD simulations of these new models conducted with various timesteps, for different system sizes, and at a range of different temperatures are able to predict the density, self-diffusion coefficient, dielectric constant, surface tension, heat of vaporization, hydration free energy, and isothermal compressibility of real water with excellent accuracy. The 1-site model is ∼3 and ∼4.5 times computationally more efficient than the 2- and 3-site models, respectively. To utilize the speed of the 1-site model and the electrostatic interactions offered by the 2- and 3-site models, CG MD simulations of a 1:1 combination of the 1- and 2-/3-site models were performed at 300 K. These mixture simulations could also predict the properties of real water with good accuracy. Two new CG models of benzene, consisting of beads with and without partial charges, were developed. All three water models showed a good capacity to solvate these benzene models.
A summary of wind power prediction methods
NASA Astrophysics Data System (ADS)
Wang, Yuqi
2018-06-01
The deterministic prediction of wind power, probabilistic prediction and the prediction of wind power ramp events are introduced in this paper. Deterministic prediction includes statistical learning predictions based on historical data and physical model predictions based on NWP data. Due to the great impact of wind power ramp events on the power system, this paper also introduces the prediction of wind power ramp events. Finally, the evaluation indicators for each kind of prediction are given. Wind power prediction can be a good solution to the adverse effects of wind power on the power system caused by its abruptness, intermittency and fluctuation.
Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic
NASA Astrophysics Data System (ADS)
Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.
2011-02-01
Machinable glass ceramic, an advanced ceramic, is an attractive material for producing high-accuracy miniaturized components for many applications in various industries such as aerospace, electronics, biomedical, automotive and environmental communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Much research has been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important to obtain a good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate for the micro end-milling operation.
Predicting the Job and Life Satisfaction of Italian Teachers: Test of a Social Cognitive Model
ERIC Educational Resources Information Center
Lent, Robert W.; Nota, Laura; Soresi, Salvatore; Ginevra, Maria C.; Duffy, Ryan D.; Brown, Steven D.
2011-01-01
This study tested a social cognitive model of work and life satisfaction (Lent & Brown, 2006, 2008) in a sample of 235 Italian school teachers. The model offered good overall fit to the data, though not all individual path coefficients were significant. Three of five predictors (favorable work conditions, efficacy-relevant supports, and…
PREDICTING INDIVIDUAL WELL-BEING THROUGH THE LANGUAGE OF SOCIAL MEDIA.
Schwartz, H Andrew; Sap, Maarten; Kern, Margaret L; Eichstaedt, Johannes C; Kapelner, Adam; Agrawal, Megha; Blanco, Eduardo; Dziurzynski, Lukasz; Park, Gregory; Stillwell, David; Kosinski, Michal; Seligman, Martin E P; Ungar, Lyle H
2016-01-01
We present the task of predicting individual well-being, as measured by a life satisfaction scale, through the language people use on social media. Well-being, which encompasses much more than emotion and mood, is linked with good mental and physical health. The ability to quickly and accurately assess it can supplement multi-million dollar national surveys as well as promote whole body health. Through crowd-sourced ratings of tweets and Facebook status updates, we create message-level predictive models for multiple components of well-being. However, well-being is ultimately attributed to people, so we perform an additional evaluation at the user level, finding that a multi-level cascaded model, using both message-level predictions and user-level features, performs best and outperforms popular lexicon-based happiness models. Finally, we suggest that analyses of language go beyond prediction by identifying the language that characterizes well-being.
Where and why do models fail? Perspectives from Oregon Hydrologic Landscape classification
A complete understanding of why rainfall-runoff models provide good streamflow predictions at catchments in some regions, but fail to do so in other regions, has still not been achieved. Here, we argue that a hydrologic classification system is a robust conceptual tool that is w...
Predicting Nitrogen in Streams : A Comparison of Two Estimates of Fertilizer Application
Decision makers frequently rely on water and air quality models to develop nutrient management strategies. Obviously, the results of these models (e.g., SWAT, SPARROW, CMAQ) are only as good as the nutrient source input data and recently the Nutrient Innovations Task Group has ca...
Geravanchizadeh, Masoud; Fallah, Ali
2015-12-01
A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. The model operates blindly, which means that separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
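The dynamic-time-warping step at the heart of such a template-based recognizer can be illustrated with a minimal implementation; the 12-dimensional frame vectors below are random placeholders for the model's internal representation.

```python
# Minimal dynamic-time-warping (DTW) distance between two feature sequences,
# the kind of matching step used by a template-based recognizer.
import numpy as np

def dtw_distance(a, b):
    """Classic DTW with Euclidean local cost between frame vectors."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

rng = np.random.default_rng(5)
template = rng.normal(size=(40, 12))   # e.g. 12-dim feature vector per frame (placeholder)
test = rng.normal(size=(55, 12))
print("DTW distance:", round(dtw_distance(template, test), 2))
```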
Pérez-Garrido, Alfonso; Morales Helguera, Aliuska; Abellán Guillén, Adela; Cordeiro, M Natália D S; Garrido Escudero, Amalio
2009-01-15
This paper reports a QSAR study for predicting the complexation of a large and heterogeneous variety of substances (233 organic compounds) with beta-cyclodextrins (beta-CDs). Several different theoretical molecular descriptors, calculated solely from the molecular structure of the compounds under investigation, and an efficient variable selection procedure, the Genetic Algorithm, led to models with satisfactory global accuracy and predictivity. The best final QSAR model, however, is based on topological descriptors while also offering a reasonable interpretation. This QSAR model was able to explain ca. 84% of the variance in the experimental activity, and displayed very good internal cross-validation statistics and predictivity on external data. It shows that the driving forces for CD complexation are mainly hydrophobic and steric (van der Waals) interactions. Thus, the results of our study provide a valuable tool for future screening and priority testing of beta-CD guest molecules.
Joshi, Shreedhar S; Anthony, G; Manasa, D; Ashwini, T; Jagadeesh, A M; Borde, Deepak P; Bhat, Seetharam; Manjunath, C N
2014-01-01
To validate the Aristotle basic complexity and Aristotle comprehensive complexity (ABC and ACC) and risk adjustment in congenital heart surgery-1 (RACHS-1) prediction models for in-hospital mortality after surgery for congenital heart disease in a single surgical unit. Patients younger than 18 years who had undergone surgery for congenital heart diseases from July 2007 to July 2013 were enrolled. ABC and ACC scoring and assignment to RACHS-1 categories were done retrospectively from retrieved case files. The discriminative power of the scoring systems was assessed with the area under the curve (AUC) of receiver operating characteristic (ROC) curves. Calibration (goodness of fit of the model) was measured with the Hosmer-Lemeshow modification of the χ2 test. Net reclassification improvement (NRI) and integrated discrimination improvement (IDI) were applied to assess reclassification. A total of 1150 cases were assessed, with an all-cause in-hospital mortality rate of 7.91%. When modeled for multivariate regression analysis, the ABC (χ2 = 8.24, P = 0.08), ACC (χ2 = 4.17, P = 0.57) and RACHS-1 (χ2 = 2.13, P = 0.14) scores showed good overall performance. The AUC was 0.677 with a 95% confidence interval (CI) of 0.61-0.73 for the ABC score, 0.704 (95% CI: 0.64-0.76) for the ACC score, and 0.607 (95% CI: 0.55-0.66) for RACHS-1. ACC had improved predictability in comparison to RACHS-1 and ABC on analysis with NRI and IDI. ACC predicted mortality better than the ABC and RACHS-1 models. A national database will help in developing predictive models unique to our populations; until then, the ACC scoring model can be used to analyze individual performance and compare with other institutes.
Nambi, Vijay; Chambless, Lloyd; He, Max; Folsom, Aaron R; Mosley, Tom; Boerwinkle, Eric; Ballantyne, Christie M
2012-01-01
Carotid intima-media thickness (CIMT) and plaque information can improve coronary heart disease (CHD) risk prediction when added to traditional risk factors (TRF). However, obtaining adequate images of all carotid artery segments (A-CIMT) may be difficult. Of A-CIMT, the common carotid artery intima-media thickness (CCA-IMT) is relatively more reliable and easier to measure. We evaluated whether CCA-IMT is comparable to A-CIMT when added to TRF and plaque information in improving CHD risk prediction in the Atherosclerosis Risk in Communities (ARIC) study. Ten-year CHD risk prediction models using TRF alone, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque were developed for the overall cohort, men, and women. The area under the receiver operator characteristic curve (AUC), per cent of individuals reclassified, net reclassification index (NRI), and model calibration by the Grønnesby-Borgan test were estimated. There were 1722 incident CHD events in 12 576 individuals over a mean follow-up of 15.2 years. The AUC for the TRF-only, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque models was 0.741, 0.754, and 0.753, respectively. Although there was some discordance when the CCA-IMT + plaque- and A-CIMT + plaque-based risk estimation was compared, the NRI and clinical NRI (NRI in the intermediate-risk group) when comparing the CIMT models with the TRF-only model, per cent reclassified, and test for model calibration were not significantly different. Coronary heart disease risk prediction can be improved by adding A-CIMT + plaque or CCA-IMT + plaque information to TRF. Therefore, evaluating the carotid artery for plaque presence and measuring CCA-IMT, which is easier and more reliable than measuring A-CIMT, provide a good alternative to measuring A-CIMT for CHD risk prediction.
Lassale, Camille; Gunter, Marc J.; Romaguera, Dora; Peelen, Linda M.; Van der Schouw, Yvonne T.; Beulens, Joline W. J.; Freisling, Heinz; Muller, David C.; Ferrari, Pietro; Huybrechts, Inge; Fagherazzi, Guy; Boutron-Ruault, Marie-Christine; Affret, Aurélie; Overvad, Kim; Dahm, Christina C.; Olsen, Anja; Roswall, Nina; Tsilidis, Konstantinos K.; Katzke, Verena A.; Kühn, Tilman; Buijsse, Brian; Quirós, José-Ramón; Sánchez-Cantalejo, Emilio; Etxezarreta, Nerea; Huerta, José María; Barricarte, Aurelio; Bonet, Catalina; Khaw, Kay-Tee; Key, Timothy J.; Trichopoulou, Antonia; Bamia, Christina; Lagiou, Pagona; Palli, Domenico; Agnoli, Claudia; Tumino, Rosario; Fasanelli, Francesca; Panico, Salvatore; Bueno-de-Mesquita, H. Bas; Boer, Jolanda M. A.; Sonestedt, Emily; Nilsson, Lena Maria; Renström, Frida; Weiderpass, Elisabete; Skeie, Guri; Lund, Eiliv; Moons, Karel G. M.; Riboli, Elio; Tzoulaki, Ioanna
2016-01-01
Scores of overall diet quality have received increasing attention in relation to disease aetiology; however, their value in risk prediction has been little examined. The objective was to assess and compare the association and predictive performance of 10 diet quality scores on 10-year risk of all-cause, CVD and cancer mortality in 451,256 healthy participants in the European Prospective Investigation into Cancer and Nutrition, followed up for a median of 12.8 years. All dietary scores studied showed significant inverse associations with all outcomes. The range of HRs (95% CI) in the top vs. lowest quartile of dietary scores in a composite model including non-invasive factors (age, sex, smoking, body mass index, education, physical activity and study centre) was 0.75 (0.72–0.79) to 0.88 (0.84–0.92) for all-cause, 0.76 (0.69–0.83) to 0.84 (0.76–0.92) for CVD and 0.78 (0.73–0.83) to 0.91 (0.85–0.97) for cancer mortality. Models with dietary scores alone showed low discrimination, but composite models also including age, sex and other non-invasive factors showed good discrimination and calibration, which varied little between the different diet scores examined. The mean C-statistic of the full models was 0.73, 0.80 and 0.71 for all-cause, CVD and cancer mortality, respectively. Dietary scores have poor predictive performance for 10-year mortality risk when used in isolation but display good predictive ability in combination with other non-invasive common risk factors. PMID:27409582
Snug as a Bug: Goodness of Fit and Quality of Models.
Jupiter, Daniel C
In elucidating risk factors, or attempting to make predictions about the behavior of subjects in our biomedical studies, we often build statistical models. These models are meant to capture some aspect of reality, or some real-world process underlying the phenomena we are examining. However, no model is perfect, and it is thus important to have tools to assess how accurate models are. In this commentary, we delve into the various roles that our models can play. Then we introduce the notion of the goodness of fit of models and lay the groundwork for further study of diagnostic tests for assessing both the fidelity of our models and the statistical assumptions underlying them. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Multimodel predictive system for carbon dioxide solubility in saline formation waters.
Wang, Zan; Small, Mitchell J; Karamalidis, Athanasios K
2013-02-05
The prediction of carbon dioxide solubility in brine at conditions relevant to carbon sequestration (i.e., high temperature, pressure, and salt concentration (T-P-X)) is crucial when this technology is applied. Eleven mathematical models for predicting CO2 solubility in brine are compared and considered for inclusion in a multimodel predictive system. Model goodness of fit is evaluated over the temperature range 304-433 K, pressure range 74-500 bar, and salt concentration range 0-7 m (NaCl equivalent), using 173 published CO2 solubility measurements, particularly selected for those conditions. The performance of each model is assessed using various statistical methods, including the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Different models emerge as best fits for different subranges of the input conditions. A classification tree is generated using machine learning methods to predict the best-performing model under different T-P-X subranges, allowing development of a multimodel predictive system (MMoPS) that selects and applies the model expected to yield the most accurate CO2 solubility prediction. Statistical analysis of the MMoPS predictions, including a stratified 5-fold cross validation, shows that MMoPS outperforms each individual model and increases the overall accuracy of CO2 solubility prediction across the range of T-P-X conditions likely to be encountered in carbon sequestration applications.
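The model-selection step can be sketched as follows: a classification tree is trained to map the conditions (T, P, X) to the label of the best-performing solubility model. The labels here come from an arbitrary synthetic rule rather than the paper's fitted models.

```python
# Sketch of the model-selection idea: a classification tree that, given
# temperature, pressure and salinity, predicts which solubility model to apply.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
n = 500
T = rng.uniform(304, 433, n)          # K
P = rng.uniform(74, 500, n)           # bar
X = rng.uniform(0, 7, n)              # NaCl-equivalent molality

# stand-in for "which candidate model fits best at these conditions"
best_model = np.where(X > 4, "model_A", np.where(T > 380, "model_B", "model_C"))

tree = DecisionTreeClassifier(max_depth=3).fit(np.column_stack([T, P, X]), best_model)
print(tree.predict([[350.0, 200.0, 5.5]]))   # -> which model to use at these T-P-X conditions
```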
Comparison of simplified models in the prediction of two phase flow in pipelines
NASA Astrophysics Data System (ADS)
Jerez-Carrizales, M.; Jaramillo, J. E.; Fuentes, D.
2014-06-01
Prediction of two-phase flow in pipelines is a common task in engineering. It is a complex phenomenon, and many models have been developed to find an approximate solution to the problem. Some older models, such as the Hagedorn & Brown (HB) model, have been highlighted by many authors as giving very good performance. Furthermore, many modifications have been applied to this method to improve its predictions. In this work, two simplified models based on empiricism (HB and Mukherjee and Brill, MB) are considered. One mechanistic model (AN), which is based on the physics of the phenomenon but still needs some correlations called closure relations, is also used. Moreover, a drift-flux model defined in steady state that is flow-pattern dependent (the HK model) is implemented. The implementation of these methods was tested using data published in the scientific literature for vertical upward flows. Furthermore, a comparison of the predictive performance of the four models is made against a well from Campo Escuela Colorado. The difference among the four models is smaller than their difference from the experimental data from the well in Campo Escuela Colorado.
Evaluating and Optimizing Online Advertising: Forget the Click, but There Are Good Proxies.
Dalessandro, Brian; Hook, Rod; Perlich, Claudia; Provost, Foster
2015-06-01
Online systems promise to improve advertisement targeting via the massive and detailed data available. However, there are often too few data on exactly the outcome of interest, such as purchases, for accurate campaign evaluation and optimization (due to low conversion rates, cold-start periods, lack of instrumentation of offline purchases, and long purchase cycles). This paper presents a detailed treatment of proxy modeling, which is based on the identification of a suitable alternative (proxy) target variable when data on the true objective are in short supply (or even completely nonexistent). The paper has a two-fold contribution. First, the potential of proxy modeling is demonstrated clearly, based on a massive-scale experiment across 58 real online advertising campaigns. Second, we assess the value of different specific proxies for evaluating and optimizing online display advertising, showing striking results. The results include bad news and good news. The most commonly cited and used proxy is a click on an ad. The bad news is that across a large number of campaigns, clicks are not good proxies for evaluation or for optimization: clickers do not resemble buyers. The good news is that an alternative sort of proxy performs remarkably well: observed visits to the brand's website. Specifically, predictive models built on brand site visits, which are much more common than purchases, do a remarkably good job of predicting which browsers will make a purchase. The practical bottom line: evaluating and optimizing campaigns using clicks seems wrongheaded; however, there is an easy and attractive alternative: use a well-chosen site-visit proxy instead.
Correlation of Wissler Human Thermal Model Blood Flow and Shiver Algorithms
NASA Technical Reports Server (NTRS)
Bue, Grant; Makinen, Janice; Cognata, Thomas
2010-01-01
The Wissler Human Thermal Model (WHTM) is a thermal math model of the human body that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. The model has been shown to predict core temperature and skin temperatures higher and lower, respectively, than in tests of subjects in a crew escape suit working in controlled hot environments. Conversely, the model predicts core temperature and skin temperatures lower and higher, respectively, than in tests of lightly clad subjects immersed in cold water. The blood flow algorithms of the model have been investigated to allow for more and less flow, respectively, in the cold and hot cases. These changes to the model have yielded better correlation of skin and core temperatures in the cold and hot cases. The algorithm for onset of shiver did not need to be modified to achieve good agreement in cold immersion simulations.
Modeling and prediction of relaxation of polar order in high-activity nonlinear optical polymers
NASA Astrophysics Data System (ADS)
Guenthner, Andrew J.; Lindsay, Geoffrey A.; Wright, Michael E.; Fallis, Stephen; Ashley, Paul R.; Sanghadasa, Mohan
2007-09-01
Mach-Zehnder optical modulators were fabricated using the CLD and FTC chromophores in polymer-on-silicon optical waveguides. Up to 17 months of oven-ageing stability are reported for the poled polymer films. Modulators containing an FTC-polyimide had the best overall ageing performance. To model and extrapolate the ageing data, a relaxation correlation function attributed to A. K. Jonscher was compared to the well-established stretched exponential correlation function. Both models gave a good fit to the data. The Jonscher model predicted a slower relaxation rate in later years. Analysis showed that collecting data for a longer period relative to the relaxation time was more important for generating useful predictions than the precision with which individual model parameters could be estimated. Thus, from a practical standpoint, time-temperature superposition must be assumed in order to generate meaningful predictions. For this purpose, Arrhenius-type expressions were found to relate the model time constants to the ageing temperatures.
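Fitting the stretched exponential (KWW) correlation function to ageing data is straightforward with a nonlinear least-squares routine; the sketch below uses synthetic retention data and illustrative parameter values, not the reported measurements.

```python
# Sketch of fitting a stretched-exponential (KWW) relaxation function to ageing
# data with scipy; the data points are synthetic, not the reported measurements.
import numpy as np
from scipy.optimize import curve_fit

def kww(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

rng = np.random.default_rng(7)
t = np.linspace(0.1, 17.0, 30)                                # months of oven ageing
signal = kww(t, 40.0, 0.45) + rng.normal(0, 0.01, t.size)     # normalized retention (synthetic)

(tau, beta), _ = curve_fit(kww, t, signal, p0=(10.0, 0.5))
print(f"tau = {tau:.1f} months, beta = {beta:.2f}")
print("predicted retention at 5 years:", round(kww(60.0, tau, beta), 3))
```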
Predicting gaseous emissions from small-scale combustion of agricultural biomass fuels.
Fournel, S; Marcos, B; Godbout, S; Heitz, M
2015-03-01
A prediction model of gaseous emissions (CO, CO2, NOx, SO2 and HCl) from small-scale combustion of agricultural biomass fuels was developed in order to rapidly assess their potential to be burned in accordance with current environmental threshold values. The model was established based on calculation of the thermodynamic equilibrium of reactive multicomponent systems using Gibbs free energy minimization. Since this method has been widely used to estimate the composition of the syngas from wood gasification, the model was first validated by comparing its predictions with those of similar models from the literature. The model was then used to evaluate the main gaseous emissions from the combustion of four dedicated energy crops (short-rotation willow, reed canary grass, switchgrass and miscanthus) previously burned in a 29-kW boiler. The predicted values showed good agreement with the experimental results. The model was particularly effective in estimating the influence of harvest season on SO2 emissions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Chan, Chung-Hung; Yusoff, Rozita; Ngoh, Gek-Cheng
2013-09-01
A modeling technique based on absorbed microwave energy was proposed to model microwave-assisted extraction (MAE) of antioxidant compounds from cocoa (Theobroma cacao L.) leaves. By adapting suitable extraction model at the basis of microwave energy absorbed during extraction, the model can be developed to predict extraction profile of MAE at various microwave irradiation power (100-600 W) and solvent loading (100-300 ml). Verification with experimental data confirmed that the prediction was accurate in capturing the extraction profile of MAE (R-square value greater than 0.87). Besides, the predicted yields from the model showed good agreement with the experimental results with less than 10% deviation observed. Furthermore, suitable extraction times to ensure high extraction yield at various MAE conditions can be estimated based on absorbed microwave energy. The estimation is feasible as more than 85% of active compounds can be extracted when compared with the conventional extraction technique. Copyright © 2013 Elsevier Ltd. All rights reserved.
Real estate value prediction using multivariate regression models
NASA Astrophysics Data System (ADS)
Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav
2017-11-01
The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on a lot of factors; hence it is one of the prime fields in which to apply the concepts of machine learning to optimize and predict prices with high accuracy. Therefore, in this paper, we present various important features to use while predicting housing prices with good accuracy. We describe regression models using various features to obtain a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers to the features) is used to make the model fit better. Because these models are expected to be susceptible to over-fitting, ridge regression is used to reduce it. This paper thus points to the best application of regression models, in addition to other techniques, to optimize the result.
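A compact sketch of the approach described above, polynomial feature expansion with ridge regularization to curb over-fitting, is given below; the housing features and price model are synthetic.

```python
# Sketch: polynomial features plus ridge regression on synthetic housing data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 400
sqft = rng.uniform(500, 4000, n)
bedrooms = rng.integers(1, 6, n)
age = rng.uniform(0, 50, n)
price = 50_000 + 120 * sqft + 8_000 * bedrooms - 600 * age + rng.normal(0, 20_000, n)

X = np.column_stack([sqft, bedrooms, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

model = make_pipeline(PolynomialFeatures(degree=2), StandardScaler(), Ridge(alpha=10.0))
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```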
Prediction of dynamical systems by symbolic regression
NASA Astrophysics Data System (ADS)
Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.
2016-07-01
We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
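The "fast function extraction" flavour of symbolic regression mentioned above can be illustrated as sparse linear regression over a library of candidate basis functions; the sketch below recovers the dominant term of a noisy harmonic-oscillator signal. The library and data are illustrative and far smaller than what such tools use in practice.

```python
# Sketch: sparse linear regression over a library of candidate basis functions,
# the generalized-regression idea behind fast function extraction.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
t = np.linspace(0, 10, 200)
x = 2.0 * np.cos(1.5 * t) + rng.normal(0, 0.05, t.size)   # noisy oscillator measurements

# candidate library of simple functions of time
library = {
    "sin(t)": np.sin(t), "cos(t)": np.cos(t),
    "sin(1.5t)": np.sin(1.5 * t), "cos(1.5t)": np.cos(1.5 * t),
    "t": t, "t^2": t ** 2,
}
names = list(library)
Theta = np.column_stack([library[k] for k in names])

fit = Lasso(alpha=0.05).fit(Theta, x)
for name, c in zip(names, fit.coef_):
    if abs(c) > 1e-2:
        print(f"{c:+.2f} * {name}")     # expect the cos(1.5t) term to dominate
```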
Ji, Ruijun; Du, Wanliang; Shen, Haipeng; Pan, Yuesong; Wang, Penglian; Liu, Gaifen; Wang, Yilong; Li, Hao; Zhao, Xingquan; Wang, Yongjun
2014-11-25
Acute ischemic stroke (AIS) is one of the leading causes of death and adult disability worldwide. In the present study, we aimed to develop a web-based risk model for predicting dynamic functional status at discharge, 3-month, 6-month, and 1-year after acute ischemic stroke (Dynamic Functional Status after Acute Ischemic Stroke, DFS-AIS). The DFS-AIS was developed based on the China National Stroke Registry (CNSR), in which eligible patients were randomly divided into derivation (60%) and validation (40%) cohorts. Good functional outcome was defined as modified Rankin Scale (mRS) score ≤ 2 at discharge, 3-month, 6-month, and 1-year after AIS, respectively. Independent predictors of each outcome measure were obtained using multivariable logistic regression. The area under the receiver operating characteristic curve (AUROC) and plot of observed and predicted risk were used to assess model discrimination and calibration. A total of 12,026 patients were included and the median age was 67 (interquartile range: 57-75). The proportion of patients with good functional outcome at discharge, 3-month, 6-month, and 1-year after AIS was 67.9%, 66.5%, 66.9% and 66.9%, respectively. Age, gender, medical history of diabetes mellitus, stroke or transient ischemic attack, current smoking and atrial fibrillation, pre-stroke dependence, pre-stroke statins using, admission National Institutes of Health Stroke Scale score, admission blood glucose were identified as independent predictors of functional outcome at different time points after AIS. The DFS-AIS was developed from sets of predictors of mRS ≤ 2 at different time points following AIS. The DFS-AIS demonstrated good discrimination in the derivation and validation cohorts (AUROC range: 0.837-0.845). Plots of observed versus predicted likelihood showed excellent calibration in the derivation and validation cohorts (all r = 0.99, P < 0.001). When compared to 8 existing models, the DFS-AIS showed significantly better discrimination for good functional outcome and mortality at discharge, 3-month, 6-month, and 1-year after AIS (all P < 0.0001). The DFS-AIS is a valid risk model to predict functional outcome at discharge, 3-month, 6-month, and 1-year after AIS.
Time series models of environmental exposures: Good predictions or good understanding.
Barnett, Adrian G; Stephen, Dimity; Huang, Cunrui; Wolkewitz, Martin
2017-04-01
Time series data are popular in environmental epidemiology as they make use of the natural experiment of how changes in exposure over time might impact on disease. Many published time series papers have used parameter-heavy models that fully explained the second order patterns in disease to give residuals that have no short-term autocorrelation or seasonality. This is often achieved by including predictors of past disease counts (autoregression) or seasonal splines with many degrees of freedom. These approaches give great residuals, but add little to our understanding of cause and effect. We argue that modelling approaches should rely more on good epidemiology and less on statistical tests. This includes thinking about causal pathways, making potential confounders explicit, fitting a limited number of models, and not over-fitting at the cost of under-estimating the true association between exposure and disease. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dünser, Simon; Meyer, Daniel W.
2016-06-01
In most groundwater aquifers, dispersion of tracers is dominated by flow-field inhomogeneities resulting from the underlying heterogeneous conductivity or transmissivity field. This effect is referred to as macrodispersion. Since, in practice, the complete conductivity field is virtually never available beyond a few point measurements, a probabilistic treatment is needed. To quantify the uncertainty in tracer concentrations from a given geostatistical model for the conductivity, Monte Carlo (MC) simulation is typically used. To avoid the excessive computational costs of MC, the polar Markovian velocity process (PMVP) model was recently introduced, delivering predictions at about three orders of magnitude smaller computing times. In artificial test cases, the PMVP model has provided good results in comparison with MC. In this study, we further validate the model in a more challenging and realistic setup. The setup considered is derived from the well-known benchmark macrodispersion experiment (MADE), which is highly heterogeneous and non-stationary with a large number of unevenly scattered conductivity measurements. Validations were done against reference MC and good overall agreement was found. Moreover, simulations of a simplified setup with a single measurement were conducted in order to reassess the model's most fundamental assumptions and to provide guidance for model improvements.
Collins, G S; Altman, D G
2012-07-10
Early identification of colorectal cancer is an unresolved challenge and the predictive value of single symptoms is limited. We evaluated the performance of the QCancer (Colorectal) prediction model for predicting the absolute risk of colorectal cancer in an independent UK cohort of patients from general practice records. A total of 2.1 million patients registered with a general practice surgery between 01 January 2000 and 30 June 2008, aged 30-84 years (3.7 million person-years), with 3712 colorectal cancer cases, were included in the analysis. Colorectal cancer was defined as an incident diagnosis of colorectal cancer during the 2 years after study entry. The results from this independent external validation of the QCancer (Colorectal) prediction model demonstrated good performance on a large cohort of general practice patients. QCancer (Colorectal) had very good discrimination, with an area under the ROC curve of 0.92 (women) and 0.91 (men), and explained 68% (women) and 66% (men) of the variation. QCancer (Colorectal) was well calibrated across all tenths of risk and over all age ranges, with predicted risks closely matching observed risks. QCancer (Colorectal) appears to be a useful tool for identifying undetected cases of colorectal cancer in primary care in the United Kingdom.
Du, Zhicheng; Xu, Lin; Zhang, Wangjian; Zhang, Dingmei; Yu, Shicheng; Hao, Yuantao
2017-10-06
Hand, foot, and mouth disease (HFMD) has caused a substantial burden in China, especially in Guangdong Province. Based on the enhanced surveillance system, we aimed to explore whether the addition of temperature and search engine query data improves the risk prediction of HFMD. Ecological study. Information on the confirmed cases of HFMD, climate parameters and search engine query logs was collected. A total of 1.36 million HFMD cases were identified from the surveillance system during 2011-2014. Analyses were conducted at the aggregate level and no confidential information was involved. A seasonal autoregressive integrated moving average (ARIMA) model with external variables (ARIMAX) was used to predict the HFMD incidence from 2011 to 2014, taking into account temperature and search engine query data (Baidu Index, BDI). Statistics of goodness-of-fit and precision of prediction were used to compare models (1) based on surveillance data only, and with the addition of (2) temperature, (3) BDI, and (4) both temperature and BDI. A high correlation between HFMD incidence and BDI (r = 0.794, p < 0.001) or temperature (r = 0.657, p < 0.001) was observed using both a time series plot and a correlation matrix. A linear effect of BDI (without lag) and a non-linear effect of temperature (1 week lag) on HFMD incidence were found in a distributed lag non-linear model. Compared with the model based on surveillance data only, the ARIMAX model including BDI reached the best goodness-of-fit with an Akaike information criterion (AIC) value of -345.332, whereas the model including both BDI and temperature had the most accurate prediction in terms of the mean absolute percentage error (MAPE) of 101.745%. An ARIMAX model incorporating search engine query data significantly improved the prediction of HFMD. Further studies are warranted to examine whether including search engine query data also improves the prediction of other infectious diseases in other settings. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
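A minimal sketch of an ARIMA model with external regressors (ARIMAX), in the spirit of the comparison above, is given below using statsmodels' SARIMAX. The ARMA order, the simulated weekly series and the train/test split are illustrative assumptions rather than the specification used by the authors, and the seasonal terms are omitted for brevity.

# Hypothetical ARIMAX sketch with temperature and a search-index regressor.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
weeks = 208                                        # four years of weekly data
t = np.arange(weeks)
temperature = 20 + 8 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1, weeks)
bdi = 50 + 30 * np.sin(2 * np.pi * (t - 2) / 52) + rng.normal(0, 5, weeks)
incidence = 1.0 + 0.03 * temperature + 0.02 * bdi + rng.normal(0, 0.2, weeks)

exog = pd.DataFrame({"temperature": temperature, "bdi": bdi})
train, test = slice(0, 156), slice(156, weeks)

# The (1, 0, 1) order is illustrative; seasonal terms are left out here.
fit = SARIMAX(incidence[train], exog=exog.iloc[train], order=(1, 0, 1)).fit(disp=False)
print("AIC:", fit.aic)

forecast = fit.forecast(steps=weeks - 156, exog=exog.iloc[test])
mape = np.mean(np.abs((incidence[test] - forecast) / incidence[test])) * 100
print("MAPE (%):", round(mape, 1))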
Comparison of Primary Models to Predict Microbial Growth by the Plate Count and Absorbance Methods.
Pla, María-Leonor; Oltra, Sandra; Esteban, María-Dolores; Andreu, Santiago; Palop, Alfredo
2015-01-01
The selection of a primary model to describe microbial growth in predictive food microbiology often appears to be subjective. The objective of this research was to check the performance of different mathematical models in predicting growth parameters, both by absorbance and plate count methods. For this purpose, growth curves of three different microorganisms (Bacillus cereus, Listeria monocytogenes, and Escherichia coli) grown under the same conditions, but each with a different initial concentration, were analysed. When measuring the microbial growth of each microorganism by optical density, almost all models provided quite high goodness of fit (r² > 0.93) for all growth curves. The growth rate remained approximately constant for all growth curves of each microorganism when considering one growth model, but differences were found among models. The three-phase linear model provided the lowest variation in growth rate values for all three microorganisms. The Baranyi model gave a marginally higher variation, despite a much better overall fit. When measuring the microbial growth by plate count, similar results were obtained. These results provide insight into predictive microbiology and will help food microbiologists and researchers to choose the proper primary growth predictive model.
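For readers unfamiliar with the primary models named above, the following sketch fits a three-phase linear model and a logistic-type sigmoid to one simulated growth curve and compares the estimated maximum specific growth rates; the data, parameterisations and starting values are illustrative assumptions, not the study's.

# Hypothetical sketch: fit two primary growth models to a simulated curve.
import numpy as np
from scipy.optimize import curve_fit

def three_phase_linear(t, y0, ymax, mu, lag):
    """Log-count: flat lag phase, linear growth at rate mu, flat stationary phase."""
    return np.clip(y0 + mu * np.clip(t - lag, 0, None), y0, ymax)

def logistic(t, y0, ymax, mu, lag):
    """A logistic-type sigmoid parameterised by maximum specific growth rate mu."""
    return y0 + (ymax - y0) / (1 + np.exp(4 * mu * (lag - t) / (ymax - y0) + 2))

rng = np.random.default_rng(3)
t = np.linspace(0, 24, 49)                       # hours
true = three_phase_linear(t, 3.0, 9.0, 0.8, 4.0)
y = true + rng.normal(0, 0.1, t.size)            # simulated log10 counts / OD proxy

for model in (three_phase_linear, logistic):
    p, _ = curve_fit(model, t, y, p0=[3, 9, 0.5, 3])
    r2 = 1 - np.sum((y - model(t, *p)) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"{model.__name__}: mu = {p[2]:.3f} per h, r^2 = {r2:.3f}")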
Microstructure Evolution and Flow Stress Model of a 20Mn5 Hollow Steel Ingot during Hot Compression
Liu, Min; Ma, Qing-Xian; Luo, Jian-Bin
2018-01-01
20Mn5 steel is widely used in the manufacture of heavy hydro-generator shafts due to its good strength, toughness and wear resistance. However, the hot deformation and recrystallization behaviors of 20Mn5 steel compressed at high temperature had not been studied. In this study, hot compression experiments at temperatures of 850–1200 °C and strain rates of 0.01–1 s⁻¹ are conducted using a Gleeble thermo-mechanical simulation machine, and the flow stress curves and microstructures after hot compression are obtained. Effects of temperature and strain rate on microstructure are analyzed. Based on the classical stress-dislocation relation and the kinetics of dynamic recrystallization, a two-stage constitutive model is developed to predict the flow stress of 20Mn5 steel. Comparisons between experimental and predicted flow stresses show that the predicted flow stress values are in good agreement with the experimental values, which indicates that the proposed constitutive model is reliable and can be used for numerical simulation of hot forging of 20Mn5 hollow steel ingots. PMID:29561826
Assessment and prediction of drying shrinkage cracking in bonded mortar overlays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beushausen, Hans, E-mail: hans.beushausen@uct.ac.za; Chilwesa, Masuzyo
2013-11-15
Restrained drying shrinkage cracking was investigated on composite beams consisting of substrate concrete and bonded mortar overlays, and compared to the performance of the same mortars when subjected to the ring test. Stress development and cracking in the composite specimens were analytically modeled and predicted based on the measurement of relevant time-dependent material properties such as drying shrinkage, elastic modulus, tensile relaxation and tensile strength. Overlay cracking in the composite beams could be very well predicted with the analytical model. The ring test provided a useful qualitative comparison of the cracking performance of the mortars. The duration of curing was found to have only a minor influence on crack development. This was ascribed to the fact that prolonged curing has a beneficial effect on tensile strength at the onset of stress development, but at the same time is not beneficial to the values of tensile relaxation and elastic modulus. Highlights: • Parameter study on material characteristics influencing overlay cracking. • Analytical model gives good quantitative indication of overlay cracking. • Ring test presents good qualitative indication of overlay cracking. • Curing duration has little effect on overlay cracking.
Maltarollo, Vinícius G; Homem-de-Mello, Paula; Honorio, Káthia M
2011-10-01
Current research on treatments for metabolic diseases involves a class of biological receptors called peroxisome proliferator-activated receptors (PPARs), which control the metabolism of carbohydrates and lipids. A subclass of these receptors, PPARδ, regulates several metabolic processes, and the substances that activate them are being studied as new drug candidates for the treatment of diabetes mellitus and metabolic syndrome. In this study, several PPARδ agonists with experimental biological activity were selected for a structural and chemical study. Electronic, stereochemical, lipophilic and topological descriptors were calculated for the selected compounds using various theoretical methods, such as density functional theory (DFT). Fisher's weight and principal components analysis (PCA) methods were employed to select the most relevant variables for this study. The partial least squares (PLS) method was used to construct the multivariate statistical model, and the best model obtained had 4 PCs, q² = 0.80 and r² = 0.90, indicating good internal consistency. The prediction residuals calculated for the compounds in the test set had low values, indicating the good predictive capability of our PLS model. The model obtained in this study is reliable and can be used to predict the biological activity of new untested compounds. Docking studies have also confirmed the importance of the molecular descriptors selected for this system.
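A minimal sketch of the modelling step, a PLS model with 4 latent variables reporting a training r² and a leave-one-out cross-validated q², might look as follows; the descriptor matrix and activities are simulated placeholders, not the DFT descriptors of the study.

# Hypothetical PLS sketch with internal validation by leave-one-out q^2.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 20))                    # 40 compounds, 20 descriptors
y = X[:, :4] @ np.array([1.0, -0.5, 0.8, 0.3]) + rng.normal(0, 0.3, 40)

pls = PLSRegression(n_components=4)
pls.fit(X, y)
r2 = pls.score(X, y)

y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"r^2 = {r2:.2f}, q^2 = {q2:.2f}")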
The DoE method as an efficient tool for modeling the behavior of monocrystalline Si-PV module
NASA Astrophysics Data System (ADS)
Kessaissia, Fatma Zohra; Zegaoui, Abdallah; Boutoubat, Mohamed; Allouache, Hadj; Aillerie, Michel; Charles, Jean-Pierre
2018-05-01
The objective of this paper is to apply the Design of Experiments (DoE) method to study and obtain a predictive model of any marketed monocrystalline photovoltaic (mc-PV) module. This technique provides a mathematical model that represents the predicted responses as a function of the input factors and experimental data. The DoE model for characterization and modeling of mc-PV module behavior can therefore be obtained by performing only a limited set of experimental trials. The DoE model of the mc-PV panel evaluates the predicted maximum power as a function of irradiation and temperature in a bounded domain of study for the inputs. For the mc-PV panel, predictive models at both one and two levels were developed, taking into account the main effects and the interaction effects of the considered factors. The DoE method is then implemented in a code developed under Matlab software. The code allows us to simulate, characterize, and validate the predictive model of the mc-PV panel. The calculated results were compared to the experimental data, errors were estimated, and the predictive models were validated against the response surface. Finally, we conclude that the predictive models reproduce the experimental trials with good accuracy.
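A minimal sketch of a two-level, two-factor DoE model of maximum power versus irradiation (G) and temperature (T), including the main effects and the G×T interaction with coded factors, is shown below; the response values are illustrative placeholders, not measurements of an mc-PV module.

# Hypothetical two-level factorial (2^2) DoE model with interaction term.
import numpy as np

# Coded design matrix: constant, G, T, G*T interaction (full 2^2 design).
g = np.array([-1, +1, -1, +1])
t = np.array([-1, -1, +1, +1])
X = np.column_stack([np.ones(4), g, t, g * t])
p_max = np.array([18.0, 42.0, 16.0, 37.0])       # illustrative responses (W)

coef, *_ = np.linalg.lstsq(X, p_max, rcond=None)
print("model: P = {:.2f} {:+.2f}*G {:+.2f}*T {:+.2f}*G*T".format(*coef))

# Predict at an interior point of the coded domain, e.g. G = 0.5, T = -0.2.
x_new = np.array([1.0, 0.5, -0.2, 0.5 * -0.2])
print("predicted P:", x_new @ coef)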
NASA Technical Reports Server (NTRS)
Phillips, M. A.
1973-01-01
Results are presented of an analysis which compares the performance predictions of a thermal model of a multi-panel modular radiator system with thermal vacuum test data. Comparisons between measured and predicted individual panel outlet temperatures and pressure drops and system outlet temperatures have been made over the full range of heat loads, environments and plumbing arrangements expected for the shuttle radiators. Both two-sided and one-sided radiation have been included. The model predictions show excellent agreement with the test data for the maximum design conditions of high load and hot environment. Predictions under the minimum design conditions of low load and cold environment indicate good agreement with the measured data, but evaluation of low-load predictions should consider the possibility of parallel flow instabilities due to main system freezing. Performance predictions under intermediate conditions, in which the majority of the flow is in neither the main nor the prime system, are adequate, although model improvements in this area may be desired. The primary modeling objective of providing an analytical technique for performance predictions of a multi-panel radiator system under the design conditions has been met.
Multivariable Time Series Prediction for the Icing Process on Overhead Power Transmission Line
Li, Peng; Zhao, Na; Zhou, Donghua; Cao, Min; Li, Jingjie; Shi, Xinling
2014-01-01
The design of monitoring and predictive alarm systems is necessary for successful management of overhead power transmission line icing. Given the characteristics of complexity, nonlinearity, and fitfulness in the line icing process, a model based on a multivariable time series is presented here to predict the icing load of a transmission line. In this model, the time effects of micrometeorology parameters on the icing process have been analyzed. Phase-space reconstruction theory and a machine learning method were then applied to establish the prediction model, which fully utilizes the history of multivariable time series data in local monitoring systems to represent the mapping relationship between icing load and micrometeorology factors. Relevant to the characteristic of fitfulness in line icing, simulations were carried out during the same icing process and during different processes to test the model's prediction precision and robustness. According to the simulation results for the Tao-Luo-Xiong Transmission Line, the model demonstrates good prediction accuracy across different processes when the prediction length is less than two hours, and it would be helpful for power grid departments when deciding to take action in advance to address potential icing disasters. PMID:25136653
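The two ingredients described above, a time-delay (phase-space) embedding of the multivariable monitoring series and a machine-learning regressor mapping the embedded history to a future icing load, could be sketched as follows; the embedding dimension, delay, lead time and the simulated micrometeorological series are all assumptions.

# Hypothetical sketch: time-delay embedding plus an SVR regressor.
import numpy as np
from sklearn.svm import SVR

def embed(series, dim, tau):
    """Stack delayed copies of each variable: rows are reconstructed state vectors."""
    n = len(series[0]) - (dim - 1) * tau
    cols = [s[i * tau: i * tau + n] for s in series for i in range(dim)]
    return np.column_stack(cols)

rng = np.random.default_rng(5)
n = 600
temp = 5 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 0.3, n)
humidity = 70 + 10 * np.cos(np.linspace(0, 20, n)) + rng.normal(0, 1, n)
icing = np.maximum(0, 2 - 0.3 * temp + 0.05 * humidity + rng.normal(0, 0.2, n))

dim, tau, horizon = 3, 2, 4                       # embedding and lead time
X = embed([temp, humidity, icing], dim, tau)
y = icing[(dim - 1) * tau + horizon:]
X = X[: len(y)]

model = SVR(kernel="rbf", C=10.0).fit(X[:-100], y[:-100])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[-100:]) - y[-100:]) ** 2)))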
Prediction of mortality rates using a model with stochastic parameters
NASA Astrophysics Data System (ADS)
Tan, Chon Sern; Pooi, Ah Hin
2016-10-01
Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the literature, a time series model based on the multivariate power-normal distribution has been applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The prediction intervals obtained using the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter lengths.
NASA Technical Reports Server (NTRS)
Coats, Timothy W.; Harris, Charles E.
1995-01-01
The durability and damage tolerance of laminated composites are critical design considerations for airframe composite structures. Therefore, the ability to model damage initiation and growth and predict the life of laminated composites is necessary to achieve structurally efficient and economical designs. The purpose of this research is to experimentally verify the application of a continuum damage model to predict progressive damage development in a toughened material system. Damage due to monotonic and tension-tension fatigue loading was documented for IM7/5260 graphite/bismaleimide laminates. Crack density and delamination surface area were used to calculate matrix cracking and delamination internal state variables to predict stiffness loss in unnotched laminates. A damage-dependent finite element code predicted the stiffness loss for notched laminates in good agreement with experimental data. It was concluded that the continuum damage model can adequately predict matrix damage progression in notched and unnotched laminates as a function of loading history and laminate stacking sequence.
NASA Astrophysics Data System (ADS)
Ge, Honghao; Ren, Fengli; Li, Jun; Han, Xiujun; Xia, Mingxu; Li, Jianguo
2017-03-01
A four-phase dendritic model was developed to predict macrosegregation, shrinkage cavity, and porosity during solidification. In this four-phase dendritic model, several important factors, including the dendritic structure of equiaxed crystals, melt convection, crystal sedimentation, nucleation, growth, and shrinkage of the solidified phases, were taken into consideration. Furthermore, a modified shrinkage criterion was established in this model to predict the shrinkage porosity (microporosity) of a 55-ton industrial Fe-3.3 wt pct C ingot. The predicted macrosegregation pattern and shrinkage cavity shape are in good agreement with experimental results. The shrinkage cavity has a significant effect on the formation of positive segregation in the hot top region, which generally forms during the last stage of ingot casting. The dendritic equiaxed grains also play an important role in the formation of A-segregation. A three-dimensional laminar structure of A-segregation in an industrial ingot was predicted for the first time using a 3D simulation.
Description of the University of Auckland Global Mars Mesoscale Meteorological Model (GM4)
NASA Astrophysics Data System (ADS)
Wing, D. R.; Austin, G. L.
2005-08-01
The University of Auckland Global Mars Mesoscale Meteorological Model (GM4) is a numerical weather prediction model of the Martian atmosphere that has been developed through the conversion of the Penn State University / National Center for Atmospheric Research fifth-generation mesoscale model (MM5). The global aspect of this model is self-consistent and overlapping, and forms a continuous domain around the entire planet, removing the need to provide boundary conditions other than at initialisation and yielding independence from the constraint of a Mars general circulation model. A brief overview of the model is given, outlining the key physical processes and setup of the model. Comparisons between data collected from Mars Pathfinder during its 1997 mission and conditions simulated using GM4 have been performed. The diurnal temperature variation predicted by the model shows very good correspondence with the surface truth data, to within 5 K for the majority of the diurnal cycle. Mars Viking data are also compared with the model, with good agreement. As a further means of validation for the model, various seasonal comparisons of surface and vertical atmospheric structure are conducted with the European Space Agency AOPP/LMD Mars Climate Database. Selected simulations over regions of interest will also be presented.
Igne, Benoit; Shi, Zhenqi; Drennen, James K; Anderson, Carl A
2014-02-01
The impact of raw material variability on the prediction ability of a near-infrared calibration model was studied. Calibrations, developed from a quaternary mixture design comprising theophylline anhydrous, lactose monohydrate, microcrystalline cellulose, and soluble starch, were challenged by intentional variation of raw material properties. A design with two theophylline physical forms, three lactose particle sizes, and two starch manufacturers was created to test model robustness. Further challenges to the models were accomplished through environmental conditions. Along with full-spectrum partial least squares (PLS) modeling, variable selection by dynamic backward PLS and genetic algorithms was utilized in an effort to mitigate the effects of raw material variability. In addition to evaluating models based on their prediction statistics, prediction residuals were analyzed by analyses of variance and model diagnostics (Hotelling's T² and Q residuals). Full-spectrum models were significantly affected by lactose particle size. Models developed by selecting variables gave lower prediction errors and proved to be a good approach to limit the effect of changing raw material characteristics. Hotelling's T² and Q residuals provided valuable information that was not detectable when studying only prediction trends. Diagnostic statistics were demonstrated to be critical in the appropriate interpretation of the prediction of quality parameters. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
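The two diagnostics named above, Hotelling's T² and the Q (squared X-residual) statistic, can be computed from a fitted PLS model roughly as in the sketch below; the simulated spectra, the mean-centring convention and the absence of formal control limits are simplifications.

# Hypothetical sketch of Hotelling's T^2 and Q residuals for a PLS model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 200))                    # 60 spectra, 200 wavelengths
y = X[:, 10] * 2 + X[:, 50] + rng.normal(0, 0.1, 60)

pls = PLSRegression(n_components=3, scale=False).fit(X, y)
T = pls.x_scores_                                 # latent-variable scores
P = pls.x_loadings_

# Hotelling's T^2: leverage of each sample in the score space.
t2 = np.sum((T / T.std(axis=0, ddof=1)) ** 2, axis=1)

# Q residuals: part of each (centred) spectrum not captured by the model.
Xc = X - X.mean(axis=0)
E = Xc - T @ P.T
q = np.sum(E ** 2, axis=1)

print("largest T^2:", t2.max(), " largest Q:", q.max())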
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okabe, T.; Takeda, N.; Komotori, J.
1999-11-26
A new model is proposed for multiple matrix cracking in order to take into account the role of matrix-rich regions in the cross section in initiating crack growth. The model is used to predict the matrix cracking stress and the total number of matrix cracks. The model converts the matrix-rich regions into equivalent penny-shaped crack sizes and predicts the matrix cracking stress with a fracture mechanics crack-bridging model. The estimated distribution of matrix cracking stresses is used as statistical input to predict the number of matrix cracks. The results show good agreement with the experimental results from replica observations. Therefore, it is found that the matrix cracking behavior mainly depends on the distribution of matrix-rich regions in the composite.
Development of a reactive-dispersive plume model
NASA Astrophysics Data System (ADS)
Kim, Hyun S.; Kim, Yong H.; Song, Chul H.
2017-04-01
A reactive-dispersive plume model (RDPM) was developed in this study. The RDPM considers two main components of a large-scale point-source plume: i) turbulent dispersion and ii) photochemical reactions. In order to evaluate the simulation performance of the newly developed RDPM, comparisons between the model-predicted and observed mixing ratios were made using the TexAQS II 2006 (Texas Air Quality Study II 2006) power-plant experiment data. Statistical analyses show good correlation (0.61 ≤ R ≤ 0.92) and good agreement in terms of the Index of Agreement (0.70 ≤ R ≤ 0.95). The chemical NOx lifetimes for two power-plant plumes (Monticello and Welsh power plants) were also estimated.
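A small sketch of the two evaluation statistics used above, the Pearson correlation R and Willmott's Index of Agreement, is given below with simulated observed and predicted mixing ratios standing in for the field data.

# Hypothetical sketch of correlation and Index of Agreement (Willmott, 1981).
import numpy as np

def index_of_agreement(obs, pred):
    """1 minus the ratio of squared error to potential error about the observed mean."""
    denom = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1 - np.sum((pred - obs) ** 2) / denom

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 1.5, size=200)               # e.g. NOx mixing ratios (ppb)
pred = obs * 0.9 + rng.normal(0, 0.5, size=200)   # model output with some error

r = np.corrcoef(obs, pred)[0, 1]
print(f"R = {r:.2f}, IOA = {index_of_agreement(obs, pred):.2f}")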
Per Aspera ad Astra: Through Complex Population Modeling to Predictive Theory.
Topping, Christopher J; Alrøe, Hugo Fjelsted; Farrell, Katharine N; Grimm, Volker
2015-11-01
Population models in ecology are often not good at predictions, even if they are complex and seem to be realistic enough. The reason for this might be that Occam's razor, which is key for minimal models exploring ideas and concepts, has been too uncritically adopted for more realistic models of systems. This can tie models too closely to certain situations, thereby preventing them from predicting the response to new conditions. We therefore advocate a new kind of parsimony to improve the application of Occam's razor. This new parsimony balances two contrasting strategies for avoiding errors in modeling: avoiding inclusion of nonessential factors (false inclusions) and avoiding exclusion of sometimes-important factors (false exclusions). It involves a synthesis of traditional modeling and analysis, used to describe the essentials of mechanistic relationships, with elements that are included in a model because they have been reported to be or can arguably be assumed to be important under certain conditions. The resulting models should be able to reflect how the internal organization of populations change and thereby generate representations of the novel behavior necessary for complex predictions, including regime shifts.
Prediction of N-nitrosodimethylamine (NDMA) formation as a disinfection by-product.
Kim, Jongo; Clevenger, Thomas E
2007-06-25
This study investigated the possibility of applying a statistical model to the prediction of N-nitrosodimethylamine (NDMA) formation. The NDMA formation was studied as a function of monochloramine concentration (0.001–5 mM) at fixed dimethylamine (DMA) concentrations of 0.01 mM or 0.05 mM. Excellent linear correlations were observed between the molar ratio of monochloramine to DMA and the NDMA formation on a log scale at pH 7 and 8. When the developed prediction equation was applied to a previously reported study, a good result was obtained. The statistical model appears to adequately predict NDMA concentrations if other NDMA precursors are excluded. Using the predictive tool, a simple and approximate calculation of NDMA formation can be obtained for drinking water systems.
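The kind of relationship described above, a linear fit of NDMA formation against the monochloramine-to-DMA molar ratio on a log scale, could be reproduced along the lines of the following sketch; the slope, intercept and data points are simulated placeholders, not the study's values.

# Hypothetical log-log regression of NDMA formation versus molar ratio.
import numpy as np

rng = np.random.default_rng(8)
ratio = np.logspace(-1, 2, 12)                    # NH2Cl : DMA molar ratio
ndma = 10 ** (0.8 * np.log10(ratio) - 2.0 + rng.normal(0, 0.05, 12))  # arbitrary units

slope, intercept = np.polyfit(np.log10(ratio), np.log10(ndma), 1)
print(f"log10(NDMA) = {slope:.2f} * log10(ratio) {intercept:+.2f}")

# Approximate prediction at a new molar ratio of 5:
print("predicted NDMA:", 10 ** (slope * np.log10(5) + intercept))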
NASA Astrophysics Data System (ADS)
Xu, Lei; Chen, Nengcheng; Zhang, Xiang
2018-02-01
Drought is an extreme natural disaster that can lead to huge socioeconomic losses. Drought prediction months ahead is helpful for early drought warning and preparations. In this study, we developed a statistical model, two weighted dynamic models and a statistical-dynamic (hybrid) model for 1-6 month lead drought prediction in China. Specifically, the statistical component weights climate signals using support vector regression (SVR), the dynamic components consist of the ensemble mean (EM) and Bayesian model averaging (BMA) of the North American Multi-Model Ensemble (NMME) climate models, and the hybrid part combines the statistical and dynamic components by assigning weights based on their historical performances. The results indicate that the statistical and hybrid models show better rainfall predictions than the NMME-EM and NMME-BMA models, which have good predictability only in southern China. In the 2011 China winter-spring drought event, the statistical model predicted the spatial extent and severity of drought nationwide well, although the severity was underestimated in the mid-lower reaches of Yangtze River (MLRYR) region. The NMME-EM and NMME-BMA models largely overestimated rainfall in northern and western China in the 2011 drought. In the 2013 China summer drought, the NMME-EM model forecasted the drought extent and severity in eastern China well, while the statistical and hybrid models falsely detected a negative precipitation anomaly (NPA) in some areas. Model ensembles, such as multiple statistical approaches, multiple dynamic models or multiple hybrid models, were highlighted for drought prediction. These conclusions may be helpful for drought prediction and early drought warnings in China.
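A minimal sketch of the hybrid step, combining an SVR-based statistical forecast with a dynamic-ensemble forecast using weights set by each component's historical error, is given below; the climate indices, the stand-in for the NMME ensemble mean and the hindcast window are all illustrative assumptions.

# Hypothetical sketch of a skill-weighted statistical-dynamic (hybrid) forecast.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)
signals = rng.normal(size=(120, 3))               # climate indices (predictors)
rain = signals @ np.array([0.6, -0.3, 0.2]) + rng.normal(0, 0.4, 120)

svr = SVR(kernel="rbf", C=5.0).fit(signals[:100], rain[:100])
stat_fc = svr.predict(signals[100:])              # statistical component
dyn_fc = rain[100:] + rng.normal(0, 0.6, 20)      # stand-in for the NMME mean

# Weights inversely proportional to each component's RMSE over a hindcast window.
hind_stat = svr.predict(signals[80:100])
hind_dyn = rain[80:100] + rng.normal(0, 0.6, 20)
rmse = lambda f: np.sqrt(np.mean((f - rain[80:100]) ** 2))
w_stat, w_dyn = 1 / rmse(hind_stat), 1 / rmse(hind_dyn)
hybrid = (w_stat * stat_fc + w_dyn * dyn_fc) / (w_stat + w_dyn)
print("hybrid RMSE:", np.sqrt(np.mean((hybrid - rain[100:]) ** 2)))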
Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.
Deng, Li; Wang, Guohua; Chen, Bo
2015-01-01
To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.
Standard solar model. II - g-modes
NASA Technical Reports Server (NTRS)
Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.
1992-01-01
The paper presents the g-mode oscillations of a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is estimated, and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).
Predicting survival across chronic interstitial lung disease: the ILD-GAP model.
Ryerson, Christopher J; Vittinghoff, Eric; Ley, Brett; Lee, Joyce S; Mooney, Joshua J; Jones, Kirk D; Elicker, Brett M; Wolters, Paul J; Koth, Laura L; King, Talmadge E; Collard, Harold R
2014-04-01
Risk prediction is challenging in chronic interstitial lung disease (ILD) because of heterogeneity in disease-specific and patient-specific variables. Our objective was to determine whether mortality is accurately predicted in patients with chronic ILD using the GAP model, a clinical prediction model based on sex, age, and lung physiology, that was previously validated in patients with idiopathic pulmonary fibrosis. Patients with idiopathic pulmonary fibrosis (n=307), chronic hypersensitivity pneumonitis (n=206), connective tissue disease-associated ILD (n=281), idiopathic nonspecific interstitial pneumonia (n=45), or unclassifiable ILD (n=173) were selected from an ongoing database (N=1,012). Performance of the previously validated GAP model was compared with novel prediction models in each ILD subtype and the combined cohort. Patients with follow-up pulmonary function data were used for longitudinal model validation. The GAP model had good performance in all ILD subtypes (c-index, 74.6 in the combined cohort), which was maintained at all stages of disease severity and during follow-up evaluation. The GAP model had similar performance compared with alternative prediction models. A modified ILD-GAP Index was developed for application across all ILD subtypes to provide disease-specific survival estimates using a single risk prediction model. This was done by adding a disease subtype variable that accounted for better adjusted survival in connective tissue disease-associated ILD, chronic hypersensitivity pneumonitis, and idiopathic nonspecific interstitial pneumonia. The GAP model accurately predicts risk of death in chronic ILD. The ILD-GAP model accurately predicts mortality in major chronic ILD subtypes and at all stages of disease.
A Time Dependent Model of HD209458b
NASA Astrophysics Data System (ADS)
Iro, N.; Bézard, B.; Guillot, T.
2004-12-01
We developed a time-dependent radiative model for the atmosphere of HD209458b to investigate its thermal structure and chemical composition. Time-dependent temperature profiles were calculated, using a uniform zonal wind modelled as a solid body rotation. We predict day/night temperature variations of 600 K around 0.1 bar, for a 1 km/s wind velocity, in good agreement with the predictions by Showman & Guillot (2002). On the night side, the low temperature allows the sodium to condense. Depletion of sodium in the morning limb may explain the lower than expected abundance found by Charbonneau et al. (2002).
Melting of genomic DNA: Predictive modeling by nonlinear lattice dynamics
NASA Astrophysics Data System (ADS)
Theodorakopoulos, Nikos
2010-08-01
The melting behavior of long, heterogeneous DNA chains is examined within the framework of the nonlinear lattice dynamics based Peyrard-Bishop-Dauxois (PBD) model. Data for the pBR322 plasmid and the complete T7 phage have been used to obtain model fits and determine parameter dependence on salt content. Melting curves predicted for the complete fd phage and the Y1 and Y2 fragments of the ϕX174 phage without any adjustable parameters are in good agreement with experiment. The calculated probabilities for single base-pair opening are consistent with values obtained from imino proton exchange experiments.
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; Contributors, JET
2017-06-01
This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted Te with TRANSP-GLF23 is affected by plasma collisionality, the dependency of predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows a good match of the predicted Ti with experimental measurements, allowing for a more accurate prediction of the neutron yields. The impact of input data and assumptions prescribed in the simulations is also investigated in this paper. The statistical validation and the assessment of uncertainty levels in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.
Khan, Taimoor; De, Asok
2014-01-01
In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural network model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch to achieve the desired levels of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in the neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporating prior knowledge can be used for predicting the response in the extrapolation region beyond the training patterns. For validation, a prototype is also fabricated and its performance parameters are measured. Very good agreement is attained between measured, simulated, and predicted results.
NASA Technical Reports Server (NTRS)
Norman, I.; Rochelle, W. C.; Kimbrough, B. S.; Ritrivi, C. A.; Ting, P. C.; Dotts, R. L.
1982-01-01
Thermal performance verification of Reusable Surface Insulation (RSI) has been accomplished by comparisons of STS-2 Orbiter Flight Test (OFT) data with Thermal Math Model (TMM) predictions. The OFT data was obtained from Development Flight Instrumentation RSI plug and gap thermocouples. Quartertile RSI TMMs were developed using measured flight data for surface temperature and pressure environments. Reference surface heating rates, derived from surface temperature data, were multiplied by gap heating ratios to obtain tile sidewall heating rates. This TMM analysis resulted in good agreement of predicted temperatures with flight data for thermocouples located in the RSI, Strain Isolation Pad, filler bar and structure.
NASA Technical Reports Server (NTRS)
Weil, Joseph; Sleeman, William C., Jr.
1949-01-01
The effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes are analyzed, and a simple method is presented for computing power-on pitching-moment curves for flap-retracted flight conditions. The methods evolved are based on the results of powered-model wind-tunnel investigations of 28 model configurations. Correlation curves are presented from which the effects of power on the downwash over the tail and the stabilizer effectiveness can be rapidly predicted. The procedures developed enable prediction of power-on longitudinal stability characteristics that are generally in very good agreement with experiment.
NASA Technical Reports Server (NTRS)
Clark, S. K.; Dodge, R. N.; Nybakken, G. H.
1972-01-01
The string theory was evaluated for predicting lateral tire dynamic properties as obtained from scaled model tests. The experimental data and string theory predictions are in generally good agreement using lateral stiffness and relaxation length values obtained from the static or slowly rolling tire. The results indicate that lateral forces and self-aligning torques are linearly proportional to tire lateral stiffness and to the amplitude of either steer or lateral displacement. In addition, the results show that the ratio of input excitation frequency to road speed is the proper independent variable by which frequency should be measured.
NASA Astrophysics Data System (ADS)
Amiraux, Mathieu
Rotorcraft Blade-Vortex Interaction (BVI) remains one of the most challenging flow phenomena to simulate numerically. Over the past decade, the HART-II rotor test and its extensive experimental dataset have been a major database for validation of CFD codes. Its strong BVI signature, with high levels of intrusive noise and vibrations, makes it a difficult test for computational methods. The main challenge is to accurately capture and preserve the vortices which interact with the rotor, while predicting correct blade deformations and loading. This doctoral dissertation presents the application of a coupled CFD/CSD methodology to the problem of helicopter BVI and compares three levels of fidelity for aerodynamic modeling: a hybrid lifting-line/free-wake (wake coupling) method, with a modified compressible unsteady model; a hybrid URANS/free-wake method; and a URANS-based wake capturing method, using multiple overset meshes to capture the entire flow field. To further increase numerical correlation, three helicopter fuselage models are implemented in the framework. The first is a high resolution 3D GPU panel code; the second is an immersed boundary based method, with 3D elliptic grid adaption; the last one uses a body-fitted, curvilinear fuselage mesh. The main contribution of this work is the implementation and systematic comparison of multiple numerical methods to perform BVI modeling. The trade-offs between solution accuracy and computational cost are highlighted for the different approaches. Various improvements have been made to each code to enhance physical fidelity, while advanced technologies, such as GPU computing, have been employed to increase efficiency. The resulting numerical setup covers all aspects of the simulation, creating a truly multi-fidelity and multi-physics framework. Overall, the wake capturing approach showed the best BVI phasing correlation and good blade deflection predictions, with slightly under-predicted aerodynamic loading magnitudes. However, it proved to be much more expensive than the other two methods. Wake coupling with the RANS solver had very good loading magnitude predictions, and therefore good acoustic intensities, with acceptable computational cost. The lifting-line based technique often over-predicted aerodynamic levels, due to the degree of empiricism of the model, but its very short run-times, thanks to GPU technology, make it a very attractive approach.
Björnsson, Marcus A; Simonsson, Ulrika S H
2011-01-01
AIMS To describe pain intensity (PI) measured on a visual analogue scale (VAS) and dropout due to request for rescue medication after administration of naproxcinod, naproxen or placebo in 242 patients after wisdom tooth removal. METHODS Non-linear mixed effects modelling was used to describe the plasma concentrations of naproxen, either formed from naproxcinod or from naproxen itself, and their relationship to PI and dropout. Goodness of fit was assessed by simultaneous simulations of PI and dropout. RESULTS Baseline PI for the typical patient was 52.7 mm. The PI was influenced by placebo effects, using an exponential model, and by naproxen concentrations using a sigmoid Emax model. The typical maximal placebo effect was a decrease in PI of 20.2%, with an onset rate constant of 0.237 h⁻¹. The EC50 was 0.135 µmol l⁻¹. A Weibull time-to-event model was used for the dropout, where the hazard was dependent on the predicted PI and on the PI at baseline. Since the dropout was not at random, it was necessary to include the simulated dropout in visual predictive checks (VPC) of PI. CONCLUSIONS This model describes the relationship between drug effects, PI and the likelihood of dropout after naproxcinod, naproxen and placebo administration. The model provides an opportunity to describe the effects of other doses or formulations after dental extraction. VPCs created by simultaneous simulations of PI and dropout provide a good way of assessing the goodness of fit when there is informative dropout. PMID:21272053
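A rough sketch of the structural pain-intensity model described above, a baseline modified by an exponential-onset placebo effect and a sigmoid Emax drug effect driven by naproxen concentration, is given below; the multiplicative combination of the two effects, the Emax and Hill-coefficient values and the concentration profile are assumptions, while the baseline, maximal placebo effect, onset rate constant and EC50 are the values quoted in the abstract.

# Hypothetical pain-intensity model: baseline, placebo onset, sigmoid Emax drug effect.
import numpy as np

def pain_intensity(t_h, conc_uM, baseline=52.7, pmax=0.202, kon=0.237,
                   emax=1.0, ec50=0.135, gamma=1.0):
    placebo = pmax * (1 - np.exp(-kon * t_h))                 # fractional drop (abstract values)
    drug = emax * conc_uM ** gamma / (ec50 ** gamma + conc_uM ** gamma)  # assumed Emax = 1, gamma = 1
    return baseline * (1 - placebo) * (1 - drug)              # multiplicative combination is an assumption

t = np.linspace(0, 12, 25)                                    # hours post-dose
conc = 0.4 * np.exp(-0.15 * t)                                # assumed concentration profile (umol/L)
print(np.round(pain_intensity(t, conc), 1))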
Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.
2013-01-01
Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as “nowcasts.” During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts to collect data and develop predictive models by multiple agencies and to compile existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources either manually or by use of tools designed specifically to compile and process data for predictive-model development. Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches that had at least 2 years of data (2010-11 and sometimes earlier) and for 1 beach that had 1 year of data. For most models, software designed for model development by the U.S. Environmental Protection Agency (Virtual Beach) was used. The selected model for each beach was based on a combination of explanatory variables including, most commonly, turbidity, day of the year, change in lake level over 24 hours, wave height, wind direction and speed, and antecedent rainfall for various time periods. Forty-two predictive models were validated against data collected during an independent year (2012) and compared to the current method for assessing recreational water quality, which uses the previous day’s E. coli concentration (persistence model). Goals for good predictive-model performance were responses that were at least 5 percent greater than the persistence model and overall correct responses greater than or equal to 80 percent, sensitivities (percentage of exceedances of the bathing-water standard that were correctly predicted by the model) greater than or equal to 50 percent, and specificities (percentage of nonexceedances correctly predicted by the model) greater than or equal to 85 percent. Out of 42 predictive models, 24 models yielded overall correct responses that were at least 5 percent greater than those of the persistence model.
Predictive-model responses met the performance goals more often than the persistence-model responses in terms of overall correctness (28 versus 17 models, respectively), sensitivity (17 versus 4 models), and specificity (34 versus 25 models). Gaining knowledge of each beach and the factors that affect E. coli concentrations is important for developing good predictive models. Collection of additional years of data with a wide range of environmental conditions may also help to improve future model performance. The USGS will continue to work with local agencies in 2013 and beyond to develop and validate predictive models at beaches and improve existing nowcasts, restructuring monitoring activities to accommodate future uncertainties in funding and resources.
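The performance measures defined above, overall correct responses, sensitivity and specificity relative to the bathing-water standard, could be computed as in the sketch below for a predictive model and a persistence model; the threshold value and the simulated concentrations are illustrative placeholders.

# Hypothetical sketch of overall correctness, sensitivity and specificity.
import numpy as np

rng = np.random.default_rng(10)
standard = 235                                     # illustrative E. coli threshold (CFU/100 mL)
observed = rng.lognormal(4.5, 1.0, 300)
predicted = observed * rng.lognormal(0, 0.4, 300)  # model estimates
persistence = np.roll(observed, 1)                 # previous day's value

def performance(estimate):
    exceed_obs, exceed_est = observed > standard, estimate > standard
    overall = np.mean(exceed_obs == exceed_est)
    sensitivity = np.mean(exceed_est[exceed_obs])        # exceedances caught
    specificity = np.mean(~exceed_est[~exceed_obs])      # non-exceedances caught
    return overall, sensitivity, specificity

for name, est in [("predictive", predicted), ("persistence", persistence)]:
    o, se, sp = performance(est)
    print(f"{name}: overall {o:.2f}, sensitivity {se:.2f}, specificity {sp:.2f}")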
Kagan, Leonid; Gershkovich, Pavel; Wasan, Kishor M; Mager, Donald E
2011-06-01
The time course of tissue distribution of amphotericin B (AmB) has not been sufficiently characterized despite its therapeutic importance and an apparent disconnect between plasma pharmacokinetics and clinical outcomes. The goals of this work were to develop and evaluate a physiologically based pharmacokinetic (PBPK) model to characterize the disposition properties of AmB administered as deoxycholate formulation in healthy rats and to examine the utility of the PBPK model for interspecies scaling of AmB pharmacokinetics. AmB plasma and tissue concentration-time data, following single and multiple intravenous administration of Fungizone® to rats, from several publications were combined for construction of the model. Physiological parameters were fixed to literature values. Various structural models for single organs were evaluated, and the whole-body PBPK model included liver, spleen, kidney, lung, heart, gastrointestinal tract, plasma, and remainder compartments. The final model resulted in a good simultaneous description of both single and multiple dose data sets. Incorporation of three subcompartments for spleen and kidney tissues was required for capturing a prolonged half-life in these organs. The predictive performance of the final PBPK model was assessed by evaluating its utility in predicting pharmacokinetics of AmB in mice and humans. Clearance and permeability-surface area terms were scaled with body weight. The model demonstrated good predictions of plasma AmB concentration-time profiles for both species. This modeling framework represents an important basis that may be further utilized for characterization of formulation- and disease-related factors in AmB pharmacokinetics and pharmacodynamics.
NASA Technical Reports Server (NTRS)
Smith, James A.
1992-01-01
The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained on input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured and predicted reflectances using multiple-scattering models are unacceptably sensitive to the initial guess for the desired parameter. In contrast, the neural network reported here generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type are applied to predicted canopy reflectance over a different soil background.
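The inversion strategy described above could be sketched as follows: a small backpropagation network is trained on reflectance/LAI pairs generated by a toy two-band canopy reflectance function over varying soil backgrounds (a placeholder for the multiple-scattering model of the paper), then used to invert reflectance back to LAI.

# Hypothetical sketch of neural-network inversion of LAI from reflectance.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
lai = rng.uniform(0.05, 1.0, 2000)
soil = rng.uniform(0.05, 0.35, 2000)              # varying soil background reflectance
red = soil * np.exp(-0.8 * lai) + 0.03 * lai      # toy reflectance model, not the paper's
nir = soil * np.exp(-0.5 * lai) + 0.45 * (1 - np.exp(-0.9 * lai))
X = np.column_stack([red, nir])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X[:1500], lai[:1500])

pred = net.predict(X[1500:])
ape = np.abs(pred - lai[1500:]) / lai[1500:] * 100
print("median absolute percentage error:", np.median(ape))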
Thakar, Sumit; Sivaraju, Laxminadh; Jacob, Kuruthukulangara S; Arun, Aditya Atal; Aryan, Saritha; Mohan, Dilip; Sai Kiran, Narayanam Anantha; Hegde, Alangar S
2018-01-01
OBJECTIVE Although various predictors of postoperative outcome have been previously identified in patients with Chiari malformation Type I (CMI) with syringomyelia, there is no known algorithm for predicting a multifactorial outcome measure in this widely studied disorder. Using one of the largest preoperative variable arrays used so far in CMI research, the authors attempted to generate a formula for predicting postoperative outcome. METHODS Data from the clinical records of 82 symptomatic adult patients with CMI and altered hindbrain CSF flow who were managed with foramen magnum decompression, C-1 laminectomy, and duraplasty over an 8-year period were collected and analyzed. Various preoperative clinical and radiological variables in the 57 patients who formed the study cohort were assessed in a bivariate analysis to determine their ability to predict clinical outcome (as measured on the Chicago Chiari Outcome Scale [CCOS]) and the resolution of syrinx at the last follow-up. The variables that were significant in the bivariate analysis were further analyzed in a multiple linear regression analysis. Different regression models were tested, and the model with the best prediction of CCOS was identified and internally validated in a subcohort of 25 patients. RESULTS There was no correlation between CCOS score and syrinx resolution (p = 0.24) at a mean ± SD follow-up of 40.29 ± 10.36 months. Multiple linear regression analysis revealed that the presence of gait instability, obex position, and the M-line-fourth ventricle vertex (FVV) distance correlated with CCOS score, while the presence of motor deficits was associated with poor syrinx resolution (p ≤ 0.05). The algorithm generated from the regression model demonstrated good diagnostic accuracy (area under curve 0.81), with a score of more than 128 points demonstrating 100% specificity for clinical improvement (CCOS score of 11 or greater). The model had excellent reliability (κ = 0.85) and was validated with fair accuracy in the validation cohort (area under the curve 0.75). CONCLUSIONS The presence of gait imbalance and motor deficits independently predict worse clinical and radiological outcomes, respectively, after decompressive surgery for CMI with altered hindbrain CSF flow. Caudal displacement of the obex and a shorter M-line-FVV distance correlated with good CCOS scores, indicating that patients with a greater degree of hindbrain pathology respond better to surgery. The proposed points-based algorithm has good predictive value for postoperative multifactorial outcome in these patients.
NASA Astrophysics Data System (ADS)
Song, Di; Kang, Guozheng; Kan, Qianhua; Yu, Chao; Zhang, Chuanzeng
2015-08-01
Based on the experimental observations of the uniaxial low-cycle stress fatigue failure of super-elastic NiTi shape memory alloy microtubes (Song et al 2015 Smart Mater. Struct. 24 075004) and a new definition of the damage variable corresponding to the variation of accumulated dissipation energy, a phenomenological damage model is proposed to describe the damage evolution of the NiTi microtubes during cyclic loading. Then, with a failure criterion of Dc = 1, the fatigue lives of the NiTi microtubes are predicted by the damage-based model; the predicted lives are in good agreement with the experimental ones, and all of the points fall within an error band of a factor of 1.5.
Correlation of AH-1G airframe flight vibration data with a coupled rotor-fuselage analysis
NASA Technical Reports Server (NTRS)
Sangha, K.; Shamie, J.
1990-01-01
The formulation and features of the Rotor-Airframe Comprehensive Analysis Program (RACAP) are described. The analysis employs a frequency domain, transfer matrix approach for the blade structural model, a time domain wake or momentum theory aerodynamic model, and impedance matching for rotor-fuselage coupling. The analysis is applied to the AH-1G helicopter, and a correlation study is conducted on fuselage vibration predictions. The purpose of the study is to evaluate the state-of-the-art in helicopter fuselage vibration prediction technology. The fuselage vibrations predicted using RACAP are fairly good in the vertical direction and somewhat deficient in the lateral/longitudinal directions. Some of these deficiencies are traced to the fuselage finite element model.
Nearshore Tsunami Inundation Model Validation: Toward Sediment Transport Applications
Apotsos, Alex; Buckley, Mark; Gelfenbaum, Guy; Jaffe, Bruce; Vatvani, Deepak
2011-01-01
Model predictions from a numerical model, Delft3D, based on the nonlinear shallow water equations are compared with analytical results and laboratory observations from seven tsunami-like benchmark experiments, and with field observations from the 26 December 2004 Indian Ocean tsunami. The model accurately predicts the magnitude and timing of the measured water levels and flow velocities, as well as the magnitude of the maximum inundation distance and run-up, for both breaking and non-breaking waves. The shock-capturing numerical scheme employed describes well the total decrease in wave height due to breaking, but does not reproduce the observed shoaling near the break point. The maximum water levels observed onshore near Kuala Meurisi, Sumatra, following the 26 December 2004 tsunami are well predicted given the uncertainty in the model setup. The good agreement between the model predictions and the analytical results and observations demonstrates that the numerical solution and wetting and drying methods employed are appropriate for modeling tsunami inundation for breaking and non-breaking long waves. Extension of the model to include sediment transport may be appropriate for long, non-breaking tsunami waves. Using available sediment transport formulations, the sediment deposit thickness at Kuala Meurisi is predicted generally within a factor of 2.
Hermes, Helen E.; Teutonico, Donato; Preuss, Thomas G.; Schneckener, Sebastian
2018-01-01
The environmental fates of pharmaceuticals and the effects of crop protection products on non-target species are subjects that are undergoing intense review. Since measuring the concentrations and effects of xenobiotics on all affected species under all conceivable scenarios is not feasible, standard laboratory animals such as rabbits are tested, and the observed adverse effects are translated to focal species for environmental risk assessments. In that respect, mathematical modelling is becoming increasingly important for evaluating the consequences of pesticides in untested scenarios. In particular, physiologically based pharmacokinetic/toxicokinetic (PBPK/TK) modelling is a well-established methodology used to predict tissue concentrations based on the absorption, distribution, metabolism and excretion of drugs and toxicants. In the present work, a rabbit PBPK/TK model is developed and evaluated with data available from the literature. The model predictions include scenarios of both intravenous (i.v.) and oral (p.o.) administration of small and large compounds. The presented rabbit PBPK/TK model predicts the pharmacokinetics (Cmax, AUC) of the tested compounds with an average 1.7-fold error. This result indicates a good predictive capacity of the model, which enables its use for risk assessment modelling and simulations. PMID:29561908
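A small sketch of how an average fold error like the reported 1.7-fold value is typically computed from predicted and observed pharmacokinetic parameters such as Cmax or AUC; the values below are placeholders, and the geometric-mean (AAFE) formulation is an assumption about the convention used.

```python
import numpy as np

def average_fold_error(predicted, observed):
    """Absolute average fold error: geometric mean of |log10(pred/obs)| as a fold factor."""
    ratio = np.log10(np.asarray(predicted, float) / np.asarray(observed, float))
    return 10 ** np.mean(np.abs(ratio))

# Placeholder Cmax values (mg/L) for a handful of compounds, i.v. and p.o. scenarios.
pred = [1.2, 0.8, 5.5, 0.04, 12.0]
obs = [1.0, 1.1, 4.0, 0.05, 15.0]
print(f"Average fold error: {average_fold_error(pred, obs):.2f}")
```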
Modeling and Analysis of Structural Dynamics for a One-Tenth Scale Model NGST Sunshield
NASA Technical Reports Server (NTRS)
Johnston, John; Lienard, Sebastien; Brodeur, Steve (Technical Monitor)
2001-01-01
New modeling and analysis techniques have been developed for predicting the dynamic behavior of the Next Generation Space Telescope (NGST) sunshield. The sunshield consists of multiple layers of pretensioned, thin-film membranes supported by deployable booms. Modeling the structural dynamic behavior of the sunshield is a challenging aspect of the problem due to the effects of membrane wrinkling. A finite element model of the sunshield was developed using an approximate engineering approach, the cable network method, to account for membrane wrinkling effects. Ground testing of a one-tenth scale model of the NGST sunshield was carried out to provide data for validating the analytical model. A series of analyses was performed to predict the behavior of the sunshield under the ground test conditions. Modal analyses were performed to predict the frequencies and mode shapes of the test article, and transient response analyses were completed to simulate impulse excitation tests. Comparisons were made between analytical predictions and test measurements for the dynamic behavior of the sunshield. In general, the results show good agreement, with the analytical model correctly predicting the approximate frequencies and mode shapes of the significant structural modes.
Oviedo de la Fuente, Manuel; Febrero-Bande, Manuel; Muñoz, María Pilar; Domínguez, Àngela
2018-01-01
This paper proposes a novel approach that uses meteorological information to predict the incidence of influenza in Galicia (Spain). It extends the Generalized Least Squares (GLS) methods in the multivariate framework to functional regression models with dependent errors. These kinds of models are useful when the recent history of the incidence of influenza is not readily available (for instance, because of delays in communication with health informants) and the prediction must be constructed by correcting for the temporal dependence of the residuals and using more accessible variables. A simulation study shows that the GLS estimators yield better estimates of the regression-model parameters than the classical estimators do. They obtain extremely good results from the predictive point of view and are competitive with the classical time series approach for the incidence of influenza. An iterative version of the GLS estimator (called iGLS) was also proposed that can help to model complicated dependence structures. For constructing the model, the distance correlation measure [Formula: see text] was employed to select relevant information for predicting the influenza rate, mixing multivariate and functional variables. These kinds of models are extremely useful to health managers in allocating resources in advance to manage influenza epidemics.
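A minimal sketch of the kind of GLS correction for temporally dependent errors described here, using statsmodels; the AR(1) error structure, the variable names, and the data are assumptions for illustration, and the paper's functional regressors are not reproduced in this scalar example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 150
X = sm.add_constant(rng.normal(size=(n, 2)))           # stand-ins for meteorological predictors
beta = np.array([1.0, 0.5, -0.3])
e = np.zeros(n)
for t in range(1, n):                                  # AR(1) errors mimic temporal dependence
    e[t] = 0.6 * e[t - 1] + rng.normal(scale=0.5)
y = X @ beta + e                                       # stand-in for an influenza-rate series

ols = sm.OLS(y, X).fit()
rho = sm.OLS(ols.resid[1:], ols.resid[:-1]).fit().params[0]             # estimate AR(1) coefficient
sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))    # implied error covariance
gls = sm.GLS(y, X, sigma=sigma).fit()                  # one pass of an iGLS-style scheme
print(ols.params, gls.params)
```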
Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches
NASA Astrophysics Data System (ADS)
H, Vathsala; Koolagudi, Shashidhar G.
2017-10-01
This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers suitable techniques for processing dense datasets. The proposed three-step algorithm comprises closed itemset generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. An application classifying rainfall into flood, excess, normal, deficit, and drought categories, based on 36 predictors consisting of land and ocean variables, is presented. Results show good accuracy over the considered study period of 37 years (1969-2005).
Statistical physics of interacting neural networks
NASA Astrophysics Data System (ADS)
Kinzel, Wolfgang; Metzler, Richard; Kanter, Ido
2001-12-01
Recent results on the statistical physics of time series generation and prediction are presented. A neural network is trained on quasi-periodic and chaotic sequences, and the overlaps with the sequence generator as well as the prediction errors are calculated numerically. For each network there exists a sequence for which it completely fails to make predictions. Two interacting networks show a transition to perfect synchronization. A pool of interacting networks shows good coordination in the minority game, a model of competition in a closed market. Finally, as a demonstration, a perceptron predicts bit sequences produced by human beings.
Granular support vector machines with association rules mining for protein homology prediction.
Tang, Yuchun; Jin, Bo; Zhang, Yan-Qing
2005-01-01
Protein homology prediction between protein sequences is one of the critical problems in computational biology. Such a complex classification problem is common in medical or biological information processing applications. How to build a model with superior generalization capability from training samples is an essential issue for mining knowledge to accurately predict/classify unseen new samples and to effectively support human experts in making correct decisions. A new learning model called granular support vector machines (GSVM) is proposed based on our previous work. GSVM systematically and formally combines the principles from statistical learning theory and granular computing theory and thus provides an interesting new mechanism to address complex classification problems. It works by building a sequence of information granules and then building support vector machines (SVM) in some of these information granules on demand. A good granulation method to find suitable granules is crucial for modeling a GSVM with good performance. In this paper, we also propose an association rules-based granulation method. For the granules induced by association rules with high enough confidence and significant support, we leave them as they are because of their high "purity" and significant effect on simplifying the classification task. For every other granule, a SVM is modeled to discriminate the corresponding data. In this way, a complex classification problem is divided into multiple smaller problems so that the learning task is simplified. The proposed algorithm, here named GSVM-AR, is compared with SVM on the KDDCUP04 protein homology prediction data. The experimental results show that finding the splitting hyperplane is not a trivial task (we should be careful to select the association rules to avoid overfitting) and that GSVM-AR shows significant improvement compared to building one single SVM in the whole feature space. Another advantage is that GSVM-AR is highly practical because it is easy to implement. More importantly and more interestingly, GSVM provides a new mechanism to address complex classification problems.
Lu, Yinghui; Gribok, Andrei V; Ward, W Kenneth; Reifman, Jaques
2010-08-01
We investigated the relative importance and predictive power of different frequency bands of subcutaneous glucose signals for the short-term (0-50 min) forecasting of glucose concentrations in type 1 diabetic patients with data-driven autoregressive (AR) models. The study data consisted of minute-by-minute glucose signals collected from nine deidentified patients over a five-day period using continuous glucose monitoring devices. AR models were developed using single and pairwise combinations of frequency bands of the glucose signal and compared with a reference model including all bands. The results suggest that: for open-loop applications, there is no need to explicitly represent exogenous inputs, such as meals and insulin intake, in AR models; models based on a single-frequency band, with periods between 60-120 min and 150-500 min, yield good predictive power (error <3 mg/dL) for prediction horizons of up to 25 min; models based on pairs of bands produce predictions that are indistinguishable from those of the reference model as long as the 60-120 min period band is included; and AR models can be developed on signals of short length (approximately 300 min), i.e., ignoring long circadian rhythms, without any detriment in prediction accuracy. Together, these findings provide insights into efficient development of more effective and parsimonious data-driven models for short-term prediction of glucose concentrations in diabetic patients.
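A sketch of a single data-driven AR model for short-horizon glucose forecasting of the type studied, using statsmodels' AutoReg on a synthetic minute-by-minute signal; the AR order, horizon, and signal are illustrative assumptions, and the frequency-band decomposition used in the study is not reproduced here.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
t = np.arange(2000)                                    # 2000 minutes of synthetic CGM data
glucose = 120 + 25 * np.sin(2 * np.pi * t / 240) + np.cumsum(rng.normal(scale=0.4, size=t.size))

train, test = glucose[:1700], glucose[1700:]
model = AutoReg(train, lags=30).fit()                  # AR model on the raw (all-band) signal

horizon = 30                                           # 30-minute prediction horizon
forecast = model.predict(start=len(train), end=len(train) + horizon - 1)
rmse = np.sqrt(np.mean((forecast - test[:horizon]) ** 2))
print(f"30-min-ahead RMSE: {rmse:.1f} mg/dL")
```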
NASA Astrophysics Data System (ADS)
Chattopadhyay, Surajit; Chattopadhyay, Goutami
2012-10-01
In the work discussed in this paper we considered total ozone time series over Kolkata (22°34'10.92″N, 88°22'10.92″E), an urban area in eastern India. Using cloud cover, average temperature, and rainfall as the predictors, we developed an artificial neural network, in the form of a multilayer perceptron with sigmoid non-linearity, for prediction of monthly total ozone concentrations from values of the predictors in previous months. We also estimated total ozone from values of the predictors in the same month. Before development of the neural network model we removed multicollinearity by means of principal component analysis. On the basis of the variables extracted by principal component analysis, we developed three artificial neural network models. By rigorous statistical assessment it was found that cloud cover and rainfall can act as good predictors for monthly total ozone when they are considered as the set of input variables for the neural network model constructed in the form of a multilayer perceptron. In general, the artificial neural network has good potential for predicting and estimating monthly total ozone on the basis of the meteorological predictors. It was further observed that during pre-monsoon and winter seasons, the proposed models perform better than during and after the monsoon.
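A hedged sketch of the modelling chain described (principal components of the meteorological predictors feeding a sigmoid multilayer perceptron), using scikit-learn on stand-in data; the predictor columns, network size, and data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(240, 3))            # monthly cloud cover, temperature, rainfall (stand-ins)
y = 260 + 15 * X[:, 0] - 10 * X[:, 2] + rng.normal(scale=5, size=240)   # total ozone (DU)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=2),                 # remove multicollinearity before the network
    MLPRegressor(hidden_layer_sizes=(8,), activation="logistic", max_iter=5000, random_state=0),
)
model.fit(X[:200], y[:200])
print("Hold-out R^2:", model.score(X[200:], y[200:]))
```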
Development of a recursion RNG-based turbulence model
NASA Technical Reports Server (NTRS)
Zhou, YE; Vahala, George; Thangam, S.
1993-01-01
Reynolds stress closure models based on the recursion renormalization group theory are developed for the prediction of turbulent separated flows. The proposed model uses a finite wavenumber truncation scheme to account for the spectral distribution of energy. In particular, the model incorporates effects of both local and nonlocal interactions. The nonlocal interactions are shown to yield a contribution identical to that from the epsilon-renormalization group (RNG), while the local interactions introduce higher order dispersive effects. A formal analysis of the model is presented and its ability to accurately predict separated flows is analyzed from a combined theoretical and computational standpoint. Turbulent flow past a backward facing step is chosen as a test case and the results obtained based on detailed computations demonstrate that the proposed recursion-RNG model with finite cut-off wavenumber can yield very good predictions for the backstep problem.
NASA Astrophysics Data System (ADS)
Mandal, Sumantra; Sivaprasad, P. V.; Venugopal, S.; Murthy, K. P. N.
2006-09-01
An artificial neural network (ANN) model is developed to predict the constitutive flow behaviour of austenitic stainless steels during hot deformation. The input parameters are alloy composition and process variables whereas flow stress is the output. The model is based on a three-layer feed-forward ANN with a back-propagation learning algorithm. The neural network is trained with an in-house database obtained from hot compression tests on various grades of austenitic stainless steels. The performance of the model is evaluated using a wide variety of statistical indices. Good agreement between experimental and predicted data is obtained. The correlation between individual alloying elements and high temperature flow behaviour is investigated by employing the ANN model. The results are found to be consistent with the physical phenomena. The model can be used as a guideline for new alloy development.
Predictive model for risk of cesarean section in pregnant women after induction of labor.
Hernández-Martínez, Antonio; Pascual-Pedreño, Ana I; Baño-Garnés, Ana B; Melero-Jiménez, María R; Tenías-Burillo, José M; Molina-Alarcón, Milagros
2016-03-01
To develop a predictive model for risk of cesarean section in pregnant women after induction of labor. A retrospective cohort study was conducted of 861 induced labors during 2009, 2010, and 2011 at Hospital "La Mancha-Centro" in Alcázar de San Juan, Spain. Multivariate analysis was used with binary logistic regression and areas under the ROC curves to determine predictive ability. Two predictive models were created: model A predicts the outcome at the time the woman is admitted to the hospital (before the decision on the method of induction); and model B predicts the outcome at the time the woman is definitively admitted to the labor room. The predictive factors in the final model were: maternal height, body mass index, nulliparity, Bishop score, gestational age, macrosomia, gender of fetus, and the gynecologist's overall cesarean section rate. The predictive ability of model A was 0.77 [95% confidence interval (CI) 0.73-0.80] and that of model B was 0.79 (95% CI 0.76-0.83). The predictive ability for pregnant women with previous cesarean section was 0.79 (95% CI 0.64-0.94) with model A and 0.80 (95% CI 0.64-0.96) with model B. For a probability of estimated cesarean section ≥80%, models A and B presented a positive likelihood ratio (+LR) for cesarean section of 22 and 20, respectively. Also, for a probability of estimated cesarean section ≤10%, models A and B presented a +LR for vaginal delivery of 13 and 6, respectively. These predictive models have good discriminative ability, both overall and for all subgroups studied. This tool can be useful in clinical practice, especially for pregnant women with previous cesarean section and diabetes.
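A sketch of how positive likelihood ratios can be derived from a logistic model at a given predicted-probability cut-off, in the spirit of models A and B; the covariates, coefficients, and data are synthetic stand-ins rather than the study's fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 861
X = np.column_stack([rng.normal(160, 7, n),          # maternal height (cm)
                     rng.normal(27, 5, n),           # body mass index
                     rng.integers(0, 13, n)])        # Bishop score
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
logit = -0.5 + 1.2 * Xz[:, 1] - 1.5 * Xz[:, 2]       # higher BMI, lower Bishop score -> higher risk
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # 1 = cesarean section (synthetic outcome)

model = LogisticRegression(max_iter=1000).fit(Xz, y)
p = model.predict_proba(Xz)[:, 1]

flag = p >= 0.80                                     # "estimated cesarean section >= 80%"
sensitivity = flag[y == 1].mean()
specificity = 1 - flag[y == 0].mean()
print("+LR for cesarean at the 80% cut-off:", sensitivity / (1 - specificity + 1e-12))
```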
Beal, Eliza W; Tumin, Dmitry; Chakedis, Jeffery; Porter, Erica; Moris, Dimitrios; Zhang, Xu-Feng; Arnold, Mark; Harzman, Alan; Husain, Syed; Schmidt, Carl R; Pawlik, Timothy M
2018-07-01
Given the conflicting nature of reported risk factors for post-discharge venous thromboembolism (VTE) and unclear guidelines for post-discharge pharmacoprophylaxis, we sought to determine risk factors for 30-day post-discharge VTE after colectomy to predict which patients will benefit from post-discharge pharmacoprophylaxis. Patients who underwent colectomy in the American College of Surgeons National Surgical Quality Improvement Project Participant Use Files from 2011 to 2015 were identified. Logistic regression modeling was used. Receiver-operating characteristic curves were used, and the best cut-points were determined using Youden's J index (sensitivity + specificity - 1). The Hosmer-Lemeshow goodness-of-fit test was used to test model calibration. A random sample of 30% of the cohort was used as a validation set. Among 77,823 cases, the overall incidence of VTE after colectomy was 1.9%, with 0.7% of VTE events occurring in the post-discharge setting. Factors associated with post-discharge VTE risk, including body mass index, preoperative albumin, operation time, hospital length of stay, race, smoking status, inflammatory bowel disease, return to the operating room, and postoperative ileus, were included in the logistic regression model. The model demonstrated good calibration (goodness of fit P = 0.7137) and good discrimination (area under the curve (AUC) = 0.68; validation set, AUC = 0.70). A score of ≥-5.00 had the maximum sensitivity and specificity, resulting in 36.63% of patients being treated with prophylaxis for an overall VTE risk of 0.67%. Approximately one-third of post-colectomy VTE events occurred after discharge. Patients with a predicted post-discharge VTE risk of ≥-5.00 should be recommended for extended post-discharge VTE prophylaxis.
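A sketch of choosing a cut-point by Youden's J index (sensitivity + specificity - 1) from an ROC curve, as the authors describe, using scikit-learn; the predictors and data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 4))                       # stand-ins for BMI, albumin, operative time, LOS
logit = -4.0 + 0.8 * X[:, 0] + 0.5 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # rare post-discharge VTE events (synthetic)

model = LogisticRegression().fit(X, y)
scores = model.decision_function(X)                  # linear predictor, analogous to a risk score
fpr, tpr, thresholds = roc_curve(y, scores)
j = tpr - fpr                                        # Youden's J = sensitivity + specificity - 1
best = thresholds[np.argmax(j)]
print(f"AUC={roc_auc_score(y, scores):.2f}, best cut-point={best:.2f}")
```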
Spatially explicit habitat models for 28 fishes from the Upper Mississippi River System (AHAG 2.0)
Ickes, Brian S.; Sauer, J.S.; Richards, N.; Bowler, M.; Schlifer, B.
2014-01-01
Environmental management actions in the Upper Mississippi River System (UMRS) typically require pre-project assessments of predicted benefits under a range of project scenarios. The U.S. Army Corps of Engineers (USACE) now requires certified and peer-reviewed models to conduct these assessments. Previously, habitat benefits were estimated for fish communities in the UMRS using the Aquatic Habitat Appraisal Guide (AHAG v.1.0; AHAG from hereon). This spreadsheet-based model used a habitat suitability index (HSI) approach that drew heavily upon Habitat Evaluation Procedures (HEP; U.S. Fish and Wildlife Service, 1980) by the U.S. Fish and Wildlife Service (USFWS). The HSI approach requires developing species response curves for different environmental variables that seek to broadly represent habitat. The AHAG model uses species-specific response curves assembled from literature values, data from other ecosystems, or best professional judgment. A recent scientific review of the AHAG indicated that the model’s effectiveness is reduced by its dated approach to large river ecosystems, uncertainty regarding its data inputs and rationale for habitat-species response relationships, and lack of field validation (Abt Associates Inc., 2011). The reviewers made two major recommendations: (1) incorporate empirical data from the UMRS into defining the empirical response curves, and (2) conduct post-project biological evaluations to test pre-project benefits estimated by AHAG. Our objective was to address the first recommendation and generate updated response curves for AHAG using data from the Upper Mississippi River Restoration-Environmental Management Program (UMRR-EMP) Long Term Resource Monitoring Program (LTRMP) element. Fish community data have been collected by LTRMP (Gutreuter and others, 1995; Ratcliff and others, in press) for 20 years from 6 study reaches representing 1,930 kilometers of river and >140 species of fish. We modeled a subset of these data (28 different species; occurrences at sampling sites as observed in day electrofishing samples) using multiple logistic regression with presence/absence responses. Each species’ probability of occurrence, at each sample site, was modeled as a function of 17 environmental variables observed at each sample site by LTRMP standardized protocols. The modeling methods used (1) a forward-selection process to identify the most important predictors and their relative contributions to predictions; (2) partial methods on the predictor set to control variance inflation; and (3) diagnostics for LTRMP design elements that may influence model fits. Models were fit for 28 species, representing 3 habitat guilds (Lentic, Lotic, and Generalist). We intended to develop “systemic models” using data from all six LTRMP study reaches simultaneously; however, this proved impossible. Thus, we “regionalized” the models, creating two models for each species: “Upper Reach” models, using data from Pools 4, 8, and 13; and “Lower Reach” models, using data from Pool 26, the Open River Reach of the Mississippi River, and the La Grange reach of the Illinois River. A total of 56 models were attempted. For any given site-scale prediction, each model used data from the three LTRMP study reaches comprising the regional model to make predictions. For example, a site-scale prediction in Pool 8 was made using data from Pools 4, 8, and 13. This is the fundamental nature and trade-off of regionalizing these models for broad management application. 
Model fits were deemed “certifiably good” using the Hosmer and Lemeshow Goodness-of-Fit statistic (Hosmer and Lemeshow, 2000). This test post-partitions model predictions into 10 groups and conducts inferential tests on correspondences between observed and expected probability of occurrence across all partitions, under Chi-square distributional assumptions. This permits an inferential test of how well the models fit and a tool for reporting when they did not (and perhaps why). Our goal was to develop regionalized models, and to assess and describe circumstances when a good fit was not possible. Seven fish species composed the Lentic guild. Good fits were achieved for six Upper Reach models. In the Lower Reach, no model produced good fits for the Lentic guild. This was due to (1) lentic species being much less prominent in the Lower Reach study areas, and (2) those that are more prominent being so principally in the La Grange reach of the Illinois River. Thus, developing Lower Reach models for Lentic species will require parsing La Grange from the other two Lower Reach study areas and fitting separate models. We did not do that as part of this study, but it could be done at a later time. Nine species comprised the Lotic guild. Good fits were achieved for seven Upper Reach models and six Lower Reach models. Four species had good fits for both regions (flathead catfish, blue sucker, sauger, and shorthead redhorse). Three species showed zoogeographic zonation, with a good model fit in one of the regions, but not in the region in which they were absent or rarely occurred (blue catfish, rock bass, and skipjack herring). Twelve species comprised the Generalist guild. Good fits were achieved for five Upper Reach models and eight Lower Reach models. Six species had good fits for both regions (brook silverside, emerald shiner, freshwater drum, logperch, longnose gar, and white bass). Two species showed zoogeographic zonation, with a good model fit in one of the regions, but not in the region in which they were absent or rarely occurred (red shiner and blackstripe topminnow). Poorly fit models were almost always due to the diagnostic variable “field station,” a surrogate for river mile. In these circumstances, the residuals for “field station” were non-randomly distributed and often strongly ordered. This indicates a need either to fit “pool-scale” models for these species and regions, or to explicitly model covariances between “field station” and the other predictors within the existing modeling framework. Further efforts on these models should seek to resolve these issues using one of these two approaches. In total, nine species, representing two of the three guilds (Lotic and Generalist), produced well-fit models for both regions. These nine species should comprise the basis for AHAG 2.0. Additional work, likely requiring downscaling of the regional models to pool-scale models, will be needed to incorporate additional species. Alternatively, a regionalized AHAG could comprise those species, per region, that achieved well-fit models. The number of species and the composition of the regional species pools will differ among regions as a consequence. Each of these alternatives has both pros and cons, and managers are encouraged to consider them fully before further advancing this approach to modeling multi-species habitat suitability.
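A minimal sketch of the Hosmer-Lemeshow goodness-of-fit test used to judge the models “certifiably good”, assuming ten equal-sized groups of predicted probabilities; the grouping rule and the g - 2 degrees of freedom follow the common textbook convention, and the data are synthetic.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_obs, p_pred, groups=10):
    """Chi-square test comparing observed and expected event counts
    across groups (here, deciles) of predicted probability."""
    order = np.argsort(p_pred)
    bins = np.array_split(order, groups)                # ten partitions of the predictions
    stat = 0.0
    for idx in bins:
        obs, exp, n = y_obs[idx].sum(), p_pred[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    p_value = chi2.sf(stat, df=groups - 2)               # common convention: g - 2 degrees of freedom
    return stat, p_value

rng = np.random.default_rng(4)
p = rng.uniform(0.05, 0.9, size=800)                     # predicted occurrence probabilities
y = rng.binomial(1, p)                                   # well-calibrated synthetic observations
print(hosmer_lemeshow(y, p))                             # large p-value -> no evidence of misfit
```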
Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco
2014-11-15
The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices, and provide decision-making support tools for planners. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for simulation of nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) against lysimeter data from the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008 in which only soil hydraulic characteristics, meteorological data and information about the agricultural management were accessible; 2) a calibration for the same period in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011 with the corresponding type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well on all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulties in accurately predicting leaching in low-input farming systems that are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee good predictive power of the model. Nevertheless, all models were able to identify years and crops with high and low leaching rates.
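A sketch of the comparison metrics listed in the abstract (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's r), computed for one model's simulated-versus-observed leaching series; the formulas follow the usual definitions, and the numbers are placeholders.

```python
import numpy as np

def performance_metrics(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    d = 1 - np.sum(err ** 2) / np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    nse = 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)        # model efficiency
    rrse = np.sqrt(np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2))  # root relative squared error
    r = np.corrcoef(obs, sim)[0, 1]                                     # Pearson's linear correlation
    return dict(MAE=mae, RMSE=rmse, d=d, NSE=nse, RRSE=rrse, r=r)

observed = [12.0, 18.5, 7.2, 25.1, 14.8, 9.3]      # annual nitrate leaching, kg N/ha (placeholders)
simulated = [10.5, 20.0, 8.0, 22.4, 16.1, 11.0]
print(performance_metrics(observed, simulated))
```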
Assessing Participation in Community-Based Physical Activity Programs in Brazil
REIS, RODRIGO S.; YAN, YAN; PARRA, DIANA C.; BROWNSON, ROSS C.
2015-01-01
Purpose This study aimed to develop and validate a risk prediction model to examine the characteristics that are associated with participation in community-based physical activity programs in Brazil. Methods We used pooled data from three surveys conducted from 2007 to 2009 in state capitals of Brazil with 6166 adults. A risk prediction model was built considering program participation as an outcome. The predictive accuracy of the model was quantified through discrimination (C statistic) and calibration (Brier score) properties. Bootstrapping methods were used to validate the predictive accuracy of the final model. Results The final model showed sex (women: odds ratio [OR] = 3.18, 95% confidence interval [CI] = 2.14–4.71), having less than high school degree (OR = 1.71, 95% CI = 1.16–2.53), reporting a good health (OR = 1.58, 95% CI = 1.02–2.24) or very good/excellent health (OR = 1.62, 95% CI = 1.05–2.51), having any comorbidity (OR = 1.74, 95% CI = 1.26–2.39), and perceiving the environment as safe to walk at night (OR = 1.59, 95% CI = 1.18–2.15) as predictors of participation in physical activity programs. Accuracy indices were adequate (C index = 0.778, Brier score = 0.031) and similar to those obtained from bootstrapping (C index = 0.792, Brier score = 0.030). Conclusions Sociodemographic and health characteristics as well as perceptions of the environment are strong predictors of participation in community-based programs in selected cities of Brazil. PMID:23846162
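A sketch of quantifying discrimination (C statistic) and calibration (Brier score) for a participation model and checking the estimates with a simple bootstrap, in the spirit of the validation described; the predictors, coefficients, and resampling scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(9)
n = 6166
X = np.column_stack([rng.integers(0, 2, n),              # female
                     rng.integers(0, 2, n),              # less than high school degree
                     rng.integers(0, 2, n)])             # any comorbidity
logit = -4.0 + 1.2 * X[:, 0] + 0.5 * X[:, 1] + 0.6 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))            # program participation (synthetic)

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]
print("Apparent C index:", roc_auc_score(y, p), "Brier score:", brier_score_loss(y, p))

boot_auc = []
for _ in range(200):                                     # bootstrap check of the C index
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    boot_auc.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("Bootstrap C index:", np.mean(boot_auc))
```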
Shen, Weidong; Sakamoto, Naoko; Yang, Limin
2016-07-07
The objectives of this study were to evaluate and model the probability of melanoma-specific death and competing causes of death for patients with melanoma by competing risk analysis, and to build competing risk nomograms to provide individualized and accurate predictive tools. Melanoma data were obtained from the Surveillance Epidemiology and End Results program. All patients diagnosed with primary non-metastatic melanoma during the years 2004-2007 were potentially eligible for inclusion. The cumulative incidence function (CIF) was used to describe the probability of melanoma mortality and competing risk mortality. We used Gray's test to compare differences in CIF between groups. The proportional subdistribution hazard approach by Fine and Gray was used to model CIF. We built competing risk nomograms based on the models that we developed. The 5-year cumulative incidence of melanoma death was 7.1 %, and the cumulative incidence of other causes of death was 7.4 %. We identified that variables associated with an elevated probability of melanoma-specific mortality included older age, male sex, thick melanoma, ulcerated cancer, and positive lymph nodes. The nomograms were well calibrated. C-indexes were 0.85 and 0.83 for nomograms predicting the probability of melanoma mortality and competing risk mortality, which suggests good discriminative ability. This large study cohort enabled us to build a reliable competing risk model and nomogram for predicting melanoma prognosis. Model performance proved to be good. This individualized predictive tool can be used in clinical practice to help treatment-related decision making.
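A sketch of a nonparametric cumulative incidence function (CIF) for one cause in the presence of competing risks, of the kind the authors describe; this is a generic Aalen-Johansen-type estimator on synthetic data, not the Fine and Gray regression model itself.

```python
import numpy as np

def cumulative_incidence(time, event, cause=1):
    """Cumulative incidence function for one cause under competing risks.
    event codes: 0 = censored, 1 = melanoma death, 2 = death from competing causes."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    cif, s_prev, total = [], 1.0, 0.0
    times = np.unique(time[event > 0])
    for t in times:
        n_at_risk = (time >= t).sum()
        d_any = ((time == t) & (event > 0)).sum()
        d_cause = ((time == t) & (event == cause)).sum()
        total += s_prev * d_cause / n_at_risk        # increment weighted by survival just before t
        cif.append(total)
        s_prev *= 1 - d_any / n_at_risk              # update overall event-free survival
    return times, np.array(cif)

rng = np.random.default_rng(10)
t = rng.exponential(60, 2000).round(1)               # synthetic follow-up times (months)
e = rng.choice([0, 1, 2], size=2000, p=[0.85, 0.07, 0.08])
times, cif = cumulative_incidence(t, e, cause=1)
print("5-year cumulative incidence of cause 1:", cif[times <= 60].max().round(3))
```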
Soil moisture dynamics modeling considering multi-layer root zone.
Kumar, R; Shankar, V; Jat, M K
2013-01-01
The uptake of moisture by plants from the soil is a key process for plant growth and for the movement of water in the soil-plant system. A non-linear root water uptake (RWU) model was developed for a multi-layer crop root zone. The model comprised two parts: (1) model formulation and (2) moisture flow prediction. The developed model was tested for its efficiency in predicting moisture depletion in a non-uniform root zone. A field experiment on wheat (Triticum aestivum) was conducted in the sub-temperate, sub-humid agro-climate of Solan, Himachal Pradesh, India. Model-predicted soil moisture parameters, i.e., moisture status at various depths, moisture depletion, and the soil moisture profile in the root zone, are in good agreement with experimental results. The results of the simulation emphasize the utility of the RWU model across different agro-climatic regions. The model can be used for sound irrigation management, especially in water-scarce humid, temperate, arid and semi-arid regions, and can also be integrated with a water transport equation to predict solute uptake by plant biomass.
NASA Technical Reports Server (NTRS)
Schmidt, Rodney C.; Patankar, Suhas V.
1988-01-01
The use of low Reynolds number (LRN) forms of the k-epsilon turbulence model in predicting transitional boundary layer flow characteristic of gas turbine blades is developed. The research presented consists of: (1) an evaluation of two existing models; (2) the development of a modification to current LRN models; and (3) the extensive testing of the proposed model against experimental data. The prediction characteristics and capabilities of the Jones-Launder (1972) and Lam-Bremhorst (1981) LRN k-epsilon models are evaluated with respect to the prediction of transition on flat plates. Next, the mechanism by which the models simulate transition is considered and the need for additional constraints is discussed. Finally, the transition predictions of a new model are compared with a wide range of different experiments, including transitional flows with free-stream turbulence under conditions of flat plate constant velocity, flat plate constant acceleration, flat plate with strongly variable acceleration, and flow around turbine blade test cascades. In general, the calculation procedure yields good agreement with most of the experiments.
Inductive reasoning about causally transmitted properties.
Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D; Tenenbaum, Joshua B
2008-11-01
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates' context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1990-01-01
A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.
Dynamics of Social Group Competition: Modeling the Decline of Religious Affiliation
NASA Astrophysics Data System (ADS)
Abrams, Daniel M.; Yaple, Haley A.; Wiener, Richard J.
2011-08-01
When social groups compete for members, the resulting dynamics may be understandable with mathematical models. We demonstrate that a simple ordinary differential equation (ODE) model is a good fit for religious shift by comparing it to a new international data set tracking religious nonaffiliation. We then generalize the model to include the possibility of nontrivial social interaction networks and examine the limiting case of a continuous system. Analytical and numerical predictions of this generalized system, which is robust to polarizing perturbations, match those of the original ODE model and justify its agreement with real-world data. The resulting predictions highlight possible causes of social shift and suggest future lines of research in both physics and sociology.
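A sketch of the kind of two-group competition ODE the abstract describes, integrated with scipy; the transition-probability form P_yx = c x^a u_x used here and the parameter values are assumptions for illustration, not the parameters fitted to the census data.

```python
import numpy as np
from scipy.integrate import solve_ivp

def group_competition(t, y, a=1.5, c=1.0, ux=0.6):
    """dx/dt for the fraction x affiliated with group X competing with group Y.
    Switching rates grow with the size and perceived utility of the destination group."""
    x = y[0]
    p_yx = c * x ** a * ux              # rate of switching Y -> X
    p_xy = c * (1 - x) ** a * (1 - ux)  # rate of switching X -> Y
    return [(1 - x) * p_yx - x * p_xy]

sol = solve_ivp(group_competition, (0.0, 100.0), [0.4], dense_output=True)
t = np.linspace(0.0, 100.0, 5)
print(sol.sol(t)[0].round(3))           # fraction drifting toward the higher-utility group
```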
A Quantum Probability Model of Causal Reasoning
Trueblood, Jennifer S.; Busemeyer, Jerome R.
2012-01-01
People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects thus proving to be a viable new candidate for modeling human judgment. PMID:22593747
Measurements and empirical model of the acoustic properties of reticulated vitreous carbon.
Muehleisen, Ralph T; Beamer, C Walter; Tinianov, Brandon D
2005-02-01
Reticulated vitreous carbon (RVC) is a highly porous, rigid, open cell carbon foam structure with a high melting point, good chemical inertness, and low bulk thermal conductivity. For the proper design of acoustic devices such as acoustic absorbers and thermoacoustic stacks and regenerators utilizing RVC, the acoustic properties of RVC must be known. From knowledge of the complex characteristic impedance and wave number most other acoustic properties can be computed. In this investigation, the four-microphone transfer matrix measurement method is used to measure the complex characteristic impedance and wave number for 60 to 300 pore-per-inch RVC foams with flow resistivities from 1759 to 10,782 Pa s m⁻² in the frequency range of 330 Hz-2 kHz. The data are found to be poorly predicted by the fibrous material empirical model developed by Delany and Bazley, the open cell plastic foam empirical model developed by Qunli, or the Johnson-Allard microstructural model. A new empirical power law model is developed and is shown to provide good predictions of the acoustic properties over the frequency range of measurement. Uncertainty estimates for the constants of the model are also computed.
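A sketch of fitting an empirical power-law model of the Delany-Bazley type to characteristic impedance data, with the real and imaginary parts modelled as power laws in the dimensionless parameter X = ρ0 f / σ; the coefficients are fitted to synthetic data here, not the published RVC constants.

```python
import numpy as np
from scipy.optimize import curve_fit

rho0, c0 = 1.21, 343.0                          # air density (kg/m^3) and sound speed (m/s)
sigma = 5000.0                                  # assumed flow resistivity, Pa s m^-2
f = np.linspace(330.0, 2000.0, 40)
X = rho0 * f / sigma                            # dimensionless frequency parameter

def power_law(X, a, b, c, d):
    """Normalized characteristic impedance Zc/(rho0*c0) as two power laws in X,
    returned as [real part, negative imaginary part] stacked end to end."""
    return np.concatenate([1 + a * X ** (-b), -c * X ** (-d)])

# Synthetic "measured" impedance standing in for four-microphone transfer matrix data.
z_meas = power_law(X, 0.07, 0.6, 0.10, 0.7) + np.random.default_rng(5).normal(scale=0.01, size=2 * f.size)
coeffs, _ = curve_fit(power_law, X, z_meas, p0=[0.05, 0.5, 0.05, 0.5])
print("Fitted power-law constants a, b, c, d:", coeffs)
```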
A comparison of radiative transfer models for predicting the microwave emission from soils
NASA Technical Reports Server (NTRS)
Schmugge, T. J.; Choudhury, B. J.
1981-01-01
Noncoherent and coherent numerical models for predicting emission from soils are compared. Coherent models use the boundary conditions on the electric fields across the layer boundaries to calculate the radiation intensity, whereas noncoherent models consider radiation intensities directly. Interference may cause the two approaches to give different results when coupling between soil layers in coherent models leads to greater soil moisture sampling depths. Calculations performed at frequencies of 1.4 and 19.4 GHz show little difference between the models at 19.4 GHz, although differences are apparent at the lower frequency. A definition of an effective emissivity is also given for the case when a nonuniform temperature profile is present, and measurements made from a tower show good agreement with calculations from the coherent model.
Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G
2008-09-01
A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
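A sketch of building the order-4 Legendre covariate matrix for the fixed and random regressions on days in milk (DIM 5-365), using numpy; the standardization of DIM to [-1, 1] and the normalization constant are the usual conventions, assumed here.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, order=4, dim_min=5, dim_max=365):
    """Return the n x (order+1) matrix of normalized Legendre polynomials
    evaluated at standardized days in milk."""
    x = -1.0 + 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min)  # map DIM to [-1, 1]
    cols = []
    for k in range(order + 1):
        coef = np.zeros(k + 1)
        coef[k] = 1.0                              # select the k-th Legendre polynomial
        norm = np.sqrt((2 * k + 1) / 2.0)          # normalization commonly used in test-day models
        cols.append(norm * legendre.legval(x, coef))
    return np.column_stack(cols)

Z = legendre_covariates([5, 60, 150, 250, 365])    # covariates for five test-day records
print(Z.round(3))
```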
Du, Tianchuan; Liao, Li; Wu, Cathy H
2016-12-01
Identifying the residues in a protein that are involved in protein-protein interaction and identifying the contact matrix for a pair of interacting proteins are two computational tasks at different levels of an in-depth analysis of protein-protein interaction. Various methods for solving these two problems have been reported in the literature. However, the interacting residue prediction and contact matrix prediction were handled by and large independently in those existing methods, though intuitively good prediction of interacting residues will help with predicting the contact matrix. In this work, we developed a novel protein interacting residue prediction system, contact matrix-interaction profile hidden Markov model (CM-ipHMM), with the integration of contact matrix prediction and the ipHMM interaction residue prediction. We propose to leverage what is learned from the contact matrix prediction and utilize the predicted contact matrix as "feedback" to enhance the interaction residue prediction. The CM-ipHMM model showed significant improvement over the previous method that uses the ipHMM for predicting interaction residues only. It indicates that the downstream contact matrix prediction could help the interaction site prediction.
Ayturk, Ugur M; Puttlitz, Christian M
2011-08-01
The primary objective of this study was to generate a finite element model of the human lumbar spine (L1-L5), verify mesh convergence for each tissue constituent and perform an extensive validation using both kinematic/kinetic and stress/strain data. Mesh refinement was accomplished via convergence of strain energy density (SED) predictions for each spinal tissue. The converged model was validated based on range of motion, intradiscal pressure, facet force transmission, anterolateral cortical bone strain and anterior longitudinal ligament deformation predictions. Changes in mesh resolution had the biggest impact on SED predictions under axial rotation loading. Nonlinearity of the moment-rotation curves was accurately simulated and the model predictions on the aforementioned parameters were in good agreement with experimental data. The validated and converged model will be utilised to study the effects of degeneration on the lumbar spine biomechanics, as well as to investigate the mechanical underpinning of the contemporary treatment strategies.
Modelling the behaviour of additives in gun barrels
NASA Astrophysics Data System (ADS)
Rhodes, N.; Ludwig, J. C.
1986-01-01
A mathematical model which predicts the flow and heat transfer in a gun barrel is described. The model is transient and two-dimensional; equations are solved for the velocities and enthalpies of a gas phase, which arises from the combustion of the propellant and cartridge case, and of the particle additives released from the case, as well as for the volume fractions of the gas and particles. Closure of the equations is obtained using a two-equation turbulence model. Preliminary calculations are described in which the proportions of particle additives in the cartridge case were altered. The model gives a good prediction of the ballistic performance and of the gas-to-wall heat transfer. However, the expected magnitude of the reduction in heat transfer when particles are present is not predicted. The predictions of gas flow invalidate some of the assumptions made regarding case and propellant behavior during combustion, and further work is required to investigate these effects and other possible interactions, both chemical and physical, between the gas and the particles.
Ridge regression for predicting elastic moduli and hardness of calcium aluminosilicate glasses
NASA Astrophysics Data System (ADS)
Deng, Yifan; Zeng, Huidan; Jiang, Yejia; Chen, Guorong; Chen, Jianding; Sun, Luyi
2018-03-01
It is of great significance to design glasses with satisfactory mechanical properties predictively through modeling. Among various modeling methods, data-driven modeling is a reliable approach that can dramatically shorten research duration, cut research costs, and accelerate the development of glass materials. In this work, ridge regression (RR) analysis was used to construct regression models for predicting the compositional dependence of the elastic moduli (shear, bulk, and Young's moduli) and hardness of CaO-Al2O3-SiO2 glasses, based on the ternary diagram of the compositions. The property prediction over a large glass composition space was accomplished with known experimental data for various compositions in the literature, and the simulated results are in good agreement with the measured ones. This regression model can serve as a facile and effective tool for studying the relationship between composition and properties, enabling highly efficient design of glasses to meet requirements for specific elasticity and hardness.
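A sketch of ridge regression on ternary composition fractions for predicting a modulus, using scikit-learn; the compositions and property values are made-up placeholders, not the literature data used in the paper. Because the three molar fractions sum to one, the design matrix is collinear, which is exactly the situation the L2 penalty handles gracefully.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
cao, al2o3 = rng.uniform(0.10, 0.35, 60), rng.uniform(0.05, 0.25, 60)
X = np.column_stack([cao, al2o3, 1.0 - cao - al2o3])             # molar fractions of CaO, Al2O3, SiO2
E = 70 + 40 * cao + 55 * al2o3 + rng.normal(scale=1.0, size=60)  # Young's modulus in GPa (made up)

model = Ridge(alpha=1.0).fit(X, E)        # L2 penalty stabilizes the collinear composition columns
print("Coefficients:", model.coef_.round(2))
print("Cross-validated R^2:", cross_val_score(Ridge(alpha=1.0), X, E, cv=5).mean().round(3))
```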
The EST Model for Predicting Progressive Damage and Failure of Open Hole Bending Specimens
NASA Technical Reports Server (NTRS)
Joseph, Ashith P. K.; Waas, Anthony M.; Pineda, Evan J.
2016-01-01
Progressive damage and failure in open hole composite laminate coupons subjected to flexural loading is modeled using the Enhanced Schapery Theory (EST). Previous studies have demonstrated that EST can accurately predict the strength of open hole coupons under remote tensile and compressive loading states. This homogenized modeling approach uses single composite shell elements to represent the entire laminate in the thickness direction and significantly reduces computational cost. Therefore, when delaminations are not of concern or are active in the post-peak regime, the version of EST presented here is a good engineering tool for predicting deformation response. Standard coupon-level tests provide all the input data needed for the model, and they are interpreted in conjunction with finite element (FE) based simulations. Open hole bending test results for three different IM7/8552 carbon fiber composite layups agree well with EST predictions. The model is able to accurately capture the curvature change and deformation localization in the specimen at and during the post catastrophic load drop event.
Grant, S.W.; Hickey, G.L.; Carlson, E.D.; McCollum, C.N.
2014-01-01
Objective/background A number of contemporary risk prediction models for mortality following elective abdominal aortic aneurysm (AAA) repair have been developed. Before a model is used either in clinical practice or to risk-adjust surgical outcome data, it is important that its performance is assessed in external validation studies. Methods The British Aneurysm Repair (BAR) score, Medicare, and Vascular Governance North West (VGNW) models were validated using an independent prospectively collected sample of multicentre clinical audit data. Consecutive data on 1,124 patients undergoing elective AAA repair at 17 hospitals in the north-west of England and Wales between April 2011 and March 2013 were analysed. The outcome measure was in-hospital mortality. Model calibration (observed to expected ratio with chi-square test, calibration plots, calibration intercept and slope) and discrimination (area under receiver operating characteristic curve [AUC]) were assessed in the overall cohort and procedural subgroups. Results The mean age of the population was 74.4 years (SD 7.7); 193 (17.2%) patients were women and the majority of patients (759, 67.5%) underwent endovascular aneurysm repair. All three models demonstrated good calibration in the overall cohort and procedural subgroups. Overall discrimination was excellent for the BAR score (AUC 0.83, 95% confidence interval [CI] 0.76–0.89), and acceptable for the Medicare and VGNW models, with AUCs of 0.78 (95% CI 0.70–0.86) and 0.75 (95% CI 0.65–0.84) respectively. Only the BAR score demonstrated good discrimination in procedural subgroups. Conclusion All three models demonstrated good calibration and discrimination for the prediction of in-hospital mortality following elective AAA repair and are potentially useful. The BAR score has a number of advantages, which include being developed on the most contemporaneous data, excellent overall discrimination, and good performance in procedural subgroups. Regular model validations and recalibration will be essential. PMID:24837173
ERIC Educational Resources Information Center
Myers, Greeley; Siera, Steven
1980-01-01
Default on guaranteed student loans has been increasing. The use of discriminant analysis as a technique to identify "good" vs. "bad" student loans based on information available from the loan application is discussed. Research to test the ability of models to make such predictions is reported. (Author/MLW)
External intermittency prediction using AMR solutions of RANS turbulence and transported PDF models
NASA Astrophysics Data System (ADS)
Olivieri, D. A.; Fairweather, M.; Falle, S. A. E. G.
2011-12-01
External intermittency in turbulent round jets is predicted using a Reynolds-averaged Navier-Stokes modelling approach coupled to solutions of the transported probability density function (pdf) equation for scalar variables. Solutions to the descriptive equations are obtained using a finite-volume method, combined with an adaptive mesh refinement algorithm, applied in both physical and compositional space. This method contrasts with conventional approaches to solving the transported pdf equation which generally employ Monte Carlo techniques. Intermittency-modified eddy viscosity and second-moment turbulence closures are used to accommodate the effects of intermittency on the flow field, with the influence of intermittency also included, through modifications to the mixing model, in the transported pdf equation. Predictions of the overall model are compared with experimental data on the velocity and scalar fields in a round jet, as well as against measurements of intermittency profiles and scalar pdfs in a number of flows, with good agreement obtained. For the cases considered, predictions based on the second-moment turbulence closure are clearly superior, although both turbulence models give realistic predictions of the bimodal scalar pdfs observed experimentally.
NASA Astrophysics Data System (ADS)
Bazzazi, Abbas Aghajani; Esmaeili, Mohammad
2012-12-01
The adaptive neuro-fuzzy inference system (ANFIS) is a powerful model for solving complex problems. Since ANFIS can capture nonlinear behaviour and easily achieves input-output mapping, it is well suited to prediction problems. Backbreak is one of the undesirable effects of blasting operations, causing instability in mine walls, falling of machinery, improper fragmentation, and reduced drilling efficiency. In this paper, ANFIS was applied to predict backbreak in the Sangan iron mine of Iran. The performance of the model was assessed through the root mean squared error (RMSE), the variance accounted for (VAF) and the correlation coefficient (
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.; Ashpis, D. E.; Volino, R. J.; Corke, T. C.; Thomas, F. O.; Huang, J.; Lake, J. P.; King, P. I.
2007-01-01
A transport equation for the intermittency factor is employed to predict the transitional flows in low-pressure turbines. The intermittent behavior of the transitional flows is taken into account and incorporated into computations by modifying the eddy viscosity, μ_t, with the intermittency factor, γ. Turbulent quantities are predicted using Menter's two-equation turbulence model (SST). The intermittency factor is obtained from a transport equation model which can produce both the experimentally observed streamwise variation of intermittency and a realistic profile in the cross-stream direction. The model had been previously validated against low-pressure turbine experiments with success. In this paper, the model is applied to predictions of three sets of recent low-pressure turbine experiments on the Pack B blade to further validate its predictive capabilities under various flow conditions. Comparisons of computational results with experimental data are provided. Overall, good agreement between the experimental data and computational results is obtained. The new model has been shown to have the capability of accurately predicting transitional flows under a wide range of low-pressure turbine conditions.
Endoscopic third ventriculostomy in the treatment of childhood hydrocephalus.
Kulkarni, Abhaya V; Drake, James M; Mallucci, Conor L; Sgouros, Spyros; Roth, Jonathan; Constantini, Shlomi
2009-08-01
To develop a model to predict the probability of endoscopic third ventriculostomy (ETV) success in the treatment of hydrocephalus on the basis of a child's individual characteristics. We analyzed 618 ETVs performed consecutively on children at 12 international institutions to identify predictors of ETV success at 6 months. A multivariable logistic regression model was developed on 70% of the dataset (training set) and validated on 30% of the dataset (validation set). In the training set, 305/455 ETVs (67.0%) were successful. The regression model (containing patient age, cause of hydrocephalus, and previous cerebrospinal fluid shunt) demonstrated good fit (Hosmer-Lemeshow, P = .78) and discrimination (C statistic = 0.70). In the validation set, 105/163 ETVs (64.4%) were successful and the model maintained good fit (Hosmer-Lemeshow, P = .45), discrimination (C statistic = 0.68), and calibration (calibration slope = 0.88). A simplified ETV Success Score was devised that closely approximates the predicted probability of ETV success. Children most likely to succeed with ETV can now be accurately identified and spared the long-term complications of CSF shunting.
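The development/validation workflow described here (70/30 split, multivariable logistic regression, C statistic, calibration slope) can be illustrated with a small sketch; the covariates and outcomes below are synthetic placeholders, not the ETV cohort or the ETV Success Score.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Placeholder covariates standing in for age category, cause of hydrocephalus, previous shunt.
    X = rng.normal(size=(618, 3))
    y = (rng.random(618) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 2])))).astype(int)

    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)

    p_va = model.predict_proba(X_va)[:, 1]
    print("C statistic:", roc_auc_score(y_va, p_va))

    # Calibration slope: refit the outcome on the predicted log-odds;
    # a slope near 1 indicates good calibration (large C ~ effectively unpenalized).
    logit = np.log(p_va / (1 - p_va))
    slope_model = LogisticRegression(C=1e6).fit(logit.reshape(-1, 1), y_va)
    print("calibration slope:", slope_model.coef_[0, 0])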
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built using rational Krylov model reduction and preserve power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via message passing interface are proposed. The quality of ROMs is iteratively refined according to the error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good-quality ROMs. Frequency averaging is recast as ensemble averaging, and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
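The projection step described above — forced responses spanning a subspace, then projection of the stiffness and mass matrices onto it — can be sketched with dense NumPy arrays. The matrices, force vector and interpolation frequencies below are illustrative assumptions, and damping, the matrix-free construction and the MPI parallelisation are omitted:

    import numpy as np

    def build_rom(K, M, freqs_hz, f):
        """Project K and M onto the subspace spanned by forced responses
        x(w) = (K - w^2 M)^{-1} f at the chosen interpolation frequencies."""
        X = np.column_stack([np.linalg.solve(K - (2 * np.pi * fr) ** 2 * M, f)
                             for fr in freqs_hz])
        Q, _ = np.linalg.qr(X)               # orthonormal basis of the subspace
        return Q.T @ K @ Q, Q.T @ M @ Q, Q   # reduced stiffness, reduced mass, basis

    # Illustrative 100-DOF system and a handful of interpolation frequencies (Hz).
    n = 100
    rng = np.random.default_rng(1)
    A = rng.normal(size=(n, n))
    K = A @ A.T + n * np.eye(n)   # symmetric positive definite stiffness
    M = np.eye(n)                 # unit mass matrix (damping omitted in this sketch)
    f = np.zeros(n); f[0] = 1.0   # point force

    Kr, Mr, Q = build_rom(K, M, freqs_hz=[10, 20, 40, 80], f=f)
    print(Kr.shape)               # (4, 4) reduced-order model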
Sleep Quality Prediction From Wearable Data Using Deep Learning.
Sathyanarayana, Aarti; Joty, Shafiq; Fernandez-Luque, Luis; Ofli, Ferda; Srivastava, Jaideep; Elmagarmid, Ahmed; Arora, Teresa; Taheri, Shahrad
2016-11-04
The importance of sleep is paramount to health. Insufficient sleep can reduce physical, emotional, and mental well-being and can lead to a multitude of health complications among people with chronic conditions. Physical activity and sleep are highly interrelated health behaviors. Our physical activity during the day (ie, awake time) influences our quality of sleep, and vice versa. The current popularity of wearables for tracking physical activity and sleep, including actigraphy devices, can foster the development of new advanced data analytics. This can help to develop new electronic health (eHealth) applications and provide more insights into sleep science. The objective of this study was to evaluate the feasibility of predicting sleep quality (ie, poor or adequate sleep efficiency) given the physical activity wearable data during awake time. In this study, we focused on predicting good or poor sleep efficiency as an indicator of sleep quality. Actigraphy sensors are wearable medical devices used to study sleep and physical activity patterns. The dataset used in our experiments contained the complete actigraphy data from a subset of 92 adolescents over 1 full week. Physical activity data during awake time was used to create predictive models for sleep quality, in particular, poor or good sleep efficiency. The physical activity data from sleep time was used for the evaluation. We compared the predictive performance of traditional logistic regression with more advanced deep learning methods: multilayer perceptron (MLP), convolutional neural network (CNN), simple Elman-type recurrent neural network (RNN), long short-term memory (LSTM-RNN), and a time-batched version of LSTM-RNN (TB-LSTM). Deep learning models were able to predict the quality of sleep (ie, poor or good sleep efficiency) based on wearable data from awake periods. More specifically, the deep learning methods performed better than traditional logistic regression. CNN had the highest specificity and sensitivity, and an overall area under the receiver operating characteristic (ROC) curve (AUC) of 0.9449, which was 46% better than traditional logistic regression (0.6463). Deep learning methods can predict the quality of sleep based on actigraphy data from awake periods. These predictive models can be an important tool for sleep research and to improve eHealth solutions for sleep. ©Aarti Sathyanarayana, Shafiq Joty, Luis Fernandez-Luque, Ferda Ofli, Jaideep Srivastava, Ahmed Elmagarmid, Teresa Arora, Shahrad Taheri. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.11.2016.
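The headline comparison here — traditional logistic regression against neural-network classifiers, scored by AUC — can be reproduced in miniature with scikit-learn; the features below are placeholder daytime-activity summaries rather than the paper's actigraphy pipeline, and a small multilayer perceptron stands in for the deep architectures:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    # Placeholder daytime-activity features and a binary good/poor sleep-efficiency label.
    X = rng.normal(size=(920, 24))                  # e.g. hourly activity counts
    y = (X[:, :6].mean(axis=1) + 0.5 * rng.normal(size=920) > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                      ("MLP", MLPClassifier(hidden_layer_sizes=(64, 32),
                                            max_iter=500, random_state=0))]:
        clf.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(name, "AUC:", round(auc, 3))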
Sleep Quality Prediction From Wearable Data Using Deep Learning
Sathyanarayana, Aarti; Joty, Shafiq; Ofli, Ferda; Srivastava, Jaideep; Elmagarmid, Ahmed; Arora, Teresa; Taheri, Shahrad
2016-01-01
Background The importance of sleep is paramount to health. Insufficient sleep can reduce physical, emotional, and mental well-being and can lead to a multitude of health complications among people with chronic conditions. Physical activity and sleep are highly interrelated health behaviors. Our physical activity during the day (ie, awake time) influences our quality of sleep, and vice versa. The current popularity of wearables for tracking physical activity and sleep, including actigraphy devices, can foster the development of new advanced data analytics. This can help to develop new electronic health (eHealth) applications and provide more insights into sleep science. Objective The objective of this study was to evaluate the feasibility of predicting sleep quality (ie, poor or adequate sleep efficiency) given the physical activity wearable data during awake time. In this study, we focused on predicting good or poor sleep efficiency as an indicator of sleep quality. Methods Actigraphy sensors are wearable medical devices used to study sleep and physical activity patterns. The dataset used in our experiments contained the complete actigraphy data from a subset of 92 adolescents over 1 full week. Physical activity data during awake time was used to create predictive models for sleep quality, in particular, poor or good sleep efficiency. The physical activity data from sleep time was used for the evaluation. We compared the predictive performance of traditional logistic regression with more advanced deep learning methods: multilayer perceptron (MLP), convolutional neural network (CNN), simple Elman-type recurrent neural network (RNN), long short-term memory (LSTM-RNN), and a time-batched version of LSTM-RNN (TB-LSTM). Results Deep learning models were able to predict the quality of sleep (ie, poor or good sleep efficiency) based on wearable data from awake periods. More specifically, the deep learning methods performed better than traditional logistic regression. CNN had the highest specificity and sensitivity, and an overall area under the receiver operating characteristic (ROC) curve (AUC) of 0.9449, which was 46% better than traditional logistic regression (0.6463). Conclusions Deep learning methods can predict the quality of sleep based on actigraphy data from awake periods. These predictive models can be an important tool for sleep research and to improve eHealth solutions for sleep. PMID:27815231
Evaluation of Industry Standard Turbulence Models on an Axisymmetric Supersonic Compression Corner
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2015-01-01
Reynolds-averaged Navier-Stokes computations of a shock-wave/boundary-layer interaction (SWBLI) created by a Mach 2.85 flow over an axisymmetric 30-degree compression corner were carried out. The objectives were to evaluate four turbulence models commonly used in industry for SWBLIs, and to evaluate the suitability of this test case for use in further turbulence model benchmarking. The Spalart-Allmaras model, Menter's Baseline and Shear Stress Transport models, and a low-Reynolds-number k-ε model were evaluated. Results indicate that the models do not accurately predict the separation location, with the SST model predicting the separation onset too early and the other models predicting the onset too late. Overall, the Spalart-Allmaras model did the best job of matching the experimental data. However, there is significant room for improvement, most notably in the prediction of the turbulent shear stress. Density data showed that the simulations did not accurately predict the thermal boundary layer upstream of the SWBLI. The effects of turbulent Prandtl number and wall temperature were studied in an attempt to improve this prediction and understand their effects on the interaction. The data showed that both parameters can significantly affect the separation size and location, but did not improve the agreement with the experiment. This case proved challenging to compute and should provide a good test for future turbulence modeling work.
Recent development of risk-prediction models for incident hypertension: An updated systematic review
Xiao, Lei; Liu, Ya; Wang, Zuoguang; Li, Chuang; Jin, Yongxin; Zhao, Qiong
2017-01-01
Background Hypertension is a leading global health threat and a major cardiovascular disease. Since clinical interventions are effective in delaying the disease progression from prehypertension to hypertension, diagnostic prediction models to identify patient populations at high risk for hypertension are imperative. Methods Both PubMed and Embase databases were searched for eligible reports of either prediction models or risk scores of hypertension. The study data were collected, including risk factors, statistic methods, characteristics of study design and participants, performance measurement, etc. Results From the searched literature, 26 studies reporting 48 prediction models were selected. Among them, 20 reports studied the established models using traditional risk factors, such as body mass index (BMI), age, smoking, blood pressure (BP) level, parental history of hypertension, and biochemical factors, whereas 6 reports used genetic risk score (GRS) as the prediction factor. AUC ranged from 0.64 to 0.97, and C-statistic ranged from 60% to 90%. Conclusions The traditional models are still the predominant risk prediction models for hypertension, but recently, more models have begun to incorporate genetic factors as part of their model predictors. However, these genetic predictors need to be well selected. The current reported models have acceptable to good discrimination and calibration ability, but whether the models can be applied in clinical practice still needs more validation and adjustment. PMID:29084293
Prediction of Classroom Reverberation Time using Neural Network
NASA Astrophysics Data System (ADS)
Liyana Zainudin, Fathin; Kadir Mahamad, Abd; Saon, Sharifah; Nizam Yahya, Musli
2018-04-01
In this paper, an alternative method for predicting the classroom reverberation time (RT) using a neural network (NN) was designed and explored. Classroom models were created using Google SketchUp software. The NN was trained on a dataset from the classroom models with RT values computed using ODEON 12.10 software. The NN was trained separately for 500 Hz, 1000 Hz, and 2000 Hz, because the absorption coefficient, one of the prominent input variables, is frequency dependent. Mean squared error (MSE) and regression (R) values were obtained to examine the NN efficiency. Overall, the NN shows good results, with MSE < 0.005 and R > 0.9. The NN also achieved prediction accuracies of 92.53% for 500 Hz, 93.66% for 1000 Hz, and 93.18% for 2000 Hz, and thus displays good and efficient performance. Nevertheless, the optimum RT value ranges between 0.75 and 0.9 seconds.
Mars atmospheric circulation - Aspects from Viking Landers
NASA Technical Reports Server (NTRS)
Ryan, J. A.
1985-01-01
Winds measured by the two Viking Landers have been filtered and then compared with predictions from the general circulation model and with Orbiter observations of clouds and surface phenomena that indicate wind direction. This was done to determine the degree to which filtered winds may represent aspects of the general circulation. Excellent agreement was found between wind direction data from Lander 1 and the model predictions and Orbiter observations. For Lander 2, agreement was generally good, but there were periods of disagreement which indicate that the filtering did not remove other extraneous effects. It is concluded that Lander 1 gives a good representation of the general circulation at 22.5 deg N latitude but that Lander 2 is suspect. Most wind data from Lander 1 have yet to be analyzed. It appears that, when analyzed, these Lander 1 data (covering 3.5 Mars years) can provide information about interannual variations in the general circulation at the Lander latitude.
Saxena, Anil K; Ram, Siya; Saxena, Mridula; Singh, Nidhi; Prathipati, Philip; Jain, Padam C; Singh, H K; Anand, Nitya
2003-05-01
A series of nineteen substituted 1,2,3,4,6,7,12,12a-octahydropyrazino[2',1':6,1]pyrido[3,4-b]indole analogues of the neuroleptic drug Centbutindole has been studied using quantitative structure-activity relationship analysis. The derived models display good fits to the experimental data (r ≥ 0.75) and good predictive power (r(cv) ≥ 0.688). The best model describes a high correlation between predicted and experimental activity data (r = 0.967). Statistical analysis of the equation populations indicates that hydrophobicity (as measured by pi(R), logP(o/w) and SlogP_VSA8), dipole y, and structural parameters in terms of an indicator variable (In(1)) and globularity are important variables in describing the variation in neuroleptic activity in the series.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.
This study statistically analyzed a grain-size-based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment, including surface area, reactive site concentration, reaction rate, and extent, can be predicted from the field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The result indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result found that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
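The additivity prediction itself — composite reaction properties obtained as a mass-fraction-weighted linear sum over grain size fractions — reduces to a one-line calculation. A sketch with hypothetical fractions and rate constants (not the values estimated in this study):

    import numpy as np

    # Hypothetical grain-size fractions (mass fractions sum to 1) and the
    # U(VI) desorption rate constants estimated for each fraction (1/h).
    mass_fraction = np.array([0.15, 0.30, 0.35, 0.20])   # finest ... 2-8 mm gravel
    rate_constant = np.array([0.40, 0.18, 0.07, 0.02])

    # Additivity prediction for the composite sediment: linear mass-weighted sum.
    composite_rate = float(mass_fraction @ rate_constant)
    print("composite rate constant:", composite_rate)

    # Relative contribution of each fraction to the composite rate,
    # e.g. to judge whether the gravel fraction is negligible or not.
    print("fractional contributions:", mass_fraction * rate_constant / composite_rate)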
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Mei; Wang, Dong, E-mail: wangdong@nju.edu.cn; Wang, Yuankun
In recent years, the phase-space reconstruction method has usually been used for mid- and long-term runoff predictions. However, the traditional phase-space reconstruction method still needs to be improved. Using the genetic algorithm to improve the phase-space reconstruction method, a new nonlinear model of monthly runoff is constructed. The new model does not rely heavily on embedding dimensions. Recognizing that the rainfall–runoff process is complex, affected by a number of factors, more variables (e.g. temperature and rainfall) are incorporated in the model. In order to detect the possible presence of chaos in the runoff dynamics, chaotic characteristics of the model are also analyzed, which shows the model can represent the nonlinear and chaotic characteristics of the runoff. The model is tested for its forecasting performance in four types of experiments using data from six hydrological stations on the Yellow River and the Yangtze River. Results show that the medium- and long-term runoff is satisfactorily forecasted at the hydrological stations. Not only is the forecasting trend accurate, but also the mean absolute percentage error is no more than 15%. Moreover, the forecast results of wet years and dry years are both good, which means that the improved model can overcome the traditional "wet years and dry years predictability barrier," to some extent. The model forecasts for different regions are all good, showing the universality of the approach. Compared with selected conceptual and empirical methods, the model exhibits greater reliability and stability in the long-term runoff prediction. Our study provides new thinking for research on the association between the monthly runoff and other hydrological factors, and also provides a new method for the prediction of the monthly runoff. - Highlights: • The improved phase-space reconstruction model of monthly runoff is established. • Two variables (temperature and rainfall) are incorporated in the model. • Chaotic characteristics of the model are also analyzed. • The forecast results of the mid- and long-term runoff at six stations are accurate.
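The phase-space reconstruction underlying this model is a time-delay embedding of the runoff series. A minimal sketch of that step, leaving out the genetic-algorithm parameter search and the additional temperature/rainfall variables, with an arbitrary delay and embedding dimension:

    import numpy as np

    def delay_embed(series, dim, tau):
        """Time-delay embedding: row j is [x(j), x(j+tau), ..., x(j+(dim-1)*tau)]."""
        n = len(series) - (dim - 1) * tau
        return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

    # Illustrative monthly runoff series with seasonality plus noise.
    t = np.arange(240)
    runoff = 100 + 40 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(3).normal(0, 5, 240)

    dim, tau = 4, 3
    X = delay_embed(runoff, dim, tau)[:-1]     # reconstructed state vectors
    y = runoff[(dim - 1) * tau + 1:]           # runoff one month ahead of each state
    # A nearest-neighbour or regression predictor on (X, y) gives a simple one-step forecast.
    print(X.shape, y.shape)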
Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano
2008-09-01
This paper deals with the simulation of tire/suspension dynamics using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between the output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as part of a vehicle system model to accurately predict elastic bushing and tire dynamic behavior. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, harshness (NVH) performance due to its good computational efficiency and accuracy.
NASA Astrophysics Data System (ADS)
Zainudin, Wan Nur Rahini Aznie; Becker, Ralf; Clements, Adam
2015-12-01
Many market participants in the Australian electricity market have cast doubts on whether the pre-dispatch process is able to give them good and timely quantity and price information. A study by [11] observed a significant bias (mainly indicating that the pre-dispatch process tends to underestimate spot price outcomes), seasonal features of the bias across seasons and/or trading periods, and changes in bias across the years in our sample period (1999 to 2007). In the formal setting of an ordered probit model we establish that some exogenous variables are able to explain increased probabilities of over- or under-prediction of the spot price. It transpires that meteorological data, expected pre-dispatch prices and information on past over- and under-predictions contribute significantly to explaining variation in the probabilities of over- and under-prediction. The results allow us to conjecture that some of the bids and re-bids provided by electricity generators are not made in good faith.
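An ordered probit of the kind used here can be estimated with statsmodels; the three-level over/correct/under-prediction response and the regressors below are placeholders, so this sketches the setup rather than the paper's specification:

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(7)
    n = 2000
    df = pd.DataFrame({
        "temperature": rng.normal(25, 6, n),           # placeholder meteorological regressor
        "predispatch_price": rng.lognormal(3, 0.5, n), # placeholder expected pre-dispatch price
        "lagged_underprediction": rng.integers(0, 2, n),
    })
    latent = 0.05 * df["temperature"] + 0.2 * df["lagged_underprediction"] + rng.normal(size=n)
    # 0 = over-prediction, 1 = roughly correct, 2 = under-prediction of the spot price.
    df["outcome"] = np.digitize(latent, [0.5, 1.5])

    exog = df[["temperature", "predispatch_price", "lagged_underprediction"]]
    model = OrderedModel(df["outcome"], exog, distr="probit")
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())
    print(result.predict(exog)[:5])   # predicted probabilities of the three categories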
Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca
2011-01-01
A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (ISET). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the ISET in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P. PMID:22072945
Modeling and Predicting the Stress Relaxation of Composites with Short and Randomly Oriented Fibers
Obaid, Numaira; Sain, Mohini
2017-01-01
The addition of short fibers has been experimentally observed to slow the stress relaxation of viscoelastic polymers, producing a change in the relaxation time constant. Our recent study attributed this effect of fibers on stress relaxation behavior to the interfacial shear stress transfer at the fiber-matrix interface. This model explained the effect of fiber addition on stress relaxation without the need to postulate structural changes at the interface. In our previous study, we developed an analytical model for the effect of fully aligned short fibers, and the model predictions were successfully compared to finite element simulations. However, in most industrial applications of short-fiber composites, fibers are not aligned, and hence it is necessary to examine the time dependence of viscoelastic polymers containing randomly oriented short fibers. In this study, we propose an analytical model to predict the stress relaxation behavior of short-fiber composites where the fibers are randomly oriented. The model predictions were compared to results obtained from Monte Carlo finite element simulations, and good agreement between the two was observed. The analytical model provides an excellent tool to accurately predict the stress relaxation behavior of randomly oriented short-fiber composites. PMID:29053601
Detecting failure of climate predictions
Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve
2016-01-01
The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1,2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
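For the single-model case, the failure criterion described here — divergence between the empirical distribution of observations and the modelled distribution — can be checked with a Kolmogorov-Smirnov statistic. A sketch with placeholder distributions (not the northern pintail or Arctic sea-ice data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    # Placeholder: the model predicts observations ~ N(0, 1); the real system has drifted.
    predicted_cdf = stats.norm(loc=0.0, scale=1.0).cdf
    observations = rng.normal(loc=0.6, scale=1.0, size=40)   # e.g. 40 years of data

    stat, p_value = stats.kstest(observations, predicted_cdf)
    print("KS statistic:", stat, "p-value:", p_value)
    if p_value < 0.05:
        print("observed distribution diverges from the model: prediction failure flagged")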
Zhou, Shu; Li, Guo-Bo; Huang, Lu-Yi; Xie, Huan-Zhang; Zhao, Ying-Lan; Chen, Yu-Zong; Li, Lin-Li; Yang, Sheng-Yong
2014-08-01
Drug-induced ototoxicity, as a toxic side effect, is an important issue that needs to be considered in drug discovery. Nevertheless, current experimental methods used to evaluate drug-induced ototoxicity are often time-consuming and expensive, indicating that they are not suitable for a large-scale evaluation of drug-induced ototoxicity in the early stage of drug discovery. We thus, in this investigation, established an effective computational prediction model of drug-induced ototoxicity using an optimal support vector machine (SVM) method, GA-CG-SVM. Three GA-CG-SVM models were developed based on three training sets containing agents bearing different risk levels of drug-induced ototoxicity. For comparison, models based on naïve Bayesian (NB) and recursive partitioning (RP) methods were also used on the same training sets. Among all the prediction models, the GA-CG-SVM model II showed the best performance, which offered prediction accuracies of 85.33% and 83.05% for two independent test sets, respectively. Overall, the good performance of the GA-CG-SVM model II indicates that it could be used for the prediction of drug-induced ototoxicity in the early stage of drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
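The underlying classifier is a standard support vector machine; the genetic-algorithm/conjugate-gradient hyperparameter search of GA-CG-SVM is not reproduced here, so the sketch below substitutes an ordinary grid search over C and gamma and uses random placeholder molecular descriptors:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(300, 50))   # placeholder molecular descriptors
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 300) > 0).astype(int)  # ototoxic or not
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Grid search stands in for the GA-CG optimisation of the SVM hyperparameters.
    search = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
                          cv=5).fit(X_tr, y_tr)
    print("best hyperparameters:", search.best_params_)
    print("test accuracy:", accuracy_score(y_te, search.best_estimator_.predict(X_te)))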
Hilkens, N A; Algra, A; Greving, J P
2016-01-01
ESSENTIALS: Prediction models may help to identify patients at high risk of bleeding on antiplatelet therapy. We identified existing prediction models for bleeding and validated them in patients with cerebral ischemia. Five prediction models were identified, all of which had some methodological shortcomings. Performance in patients with cerebral ischemia was poor. Background Antiplatelet therapy is widely used in secondary prevention after a transient ischemic attack (TIA) or ischemic stroke. Bleeding is the main adverse effect of antiplatelet therapy and is potentially life threatening. Identification of patients at increased risk of bleeding may help target antiplatelet therapy. This study sought to identify existing prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy and evaluate their performance in patients with cerebral ischemia. We systematically searched PubMed and Embase for existing prediction models up to December 2014. The methodological quality of the included studies was assessed with the CHARMS checklist. Prediction models were externally validated in the European Stroke Prevention Study 2, comprising 6602 patients with a TIA or ischemic stroke. We assessed discrimination and calibration of included prediction models. Five prediction models were identified, of which two were developed in patients with previous cerebral ischemia. Three studies assessed major bleeding, one studied intracerebral hemorrhage and one gastrointestinal bleeding. None of the studies met all criteria of good quality. External validation showed poor discriminative performance, with c-statistics ranging from 0.53 to 0.64 and poor calibration. A limited number of prediction models is available that predict intracranial hemorrhage or major bleeding in patients on antiplatelet therapy. The methodological quality of the models varied, but was generally low. Predictive performance in patients with cerebral ischemia was poor. In order to reliably predict the risk of bleeding in patients with cerebral ischemia, development of a prediction model according to current methodological standards is needed. © 2015 International Society on Thrombosis and Haemostasis.
Interfacing comprehensive rotorcraft analysis with advanced aeromechanics and vortex wake models
NASA Astrophysics Data System (ADS)
Liu, Haiying
This dissertation describes three aspects of the comprehensive rotorcraft analysis. First, a physics-based methodology for the modeling of hydraulic devices within multibody-based comprehensive models of rotorcraft systems is developed. This newly proposed approach can predict the fully nonlinear behavior of hydraulic devices, and pressure levels in the hydraulic chambers are coupled with the dynamic response of the system. The proposed hydraulic device models are implemented in a multibody code and calibrated by comparing their predictions with test bench measurements for the UH-60 helicopter lead-lag damper. Predicted peak damping forces were found to be in good agreement with measurements, while the model did not predict the entire time history of damper force to the same level of accuracy. The proposed model evaluates relevant hydraulic quantities such as chamber pressures, orifice flow rates, and pressure relief valve displacements. This model could be used to design lead-lag dampers with desirable force and damping characteristics. The second part of this research is in the area of computational aeroelasticity, in which an interface between computational fluid dynamics (CFD) and computational structural dynamics (CSD) is established. This interface enables data exchange between CFD and CSD with the goal of achieving accurate airloads predictions. In this work, a loose coupling approach based on the delta-airloads method is developed in a finite-element method based multibody dynamics formulation, DYMORE. To validate this aerodynamic interface, a CFD code, OVERFLOW-2, is loosely coupled with a CSD program, DYMORE, to compute the airloads of different flight conditions for Sikorsky UH-60 aircraft. This loose coupling approach has good convergence characteristics. The predicted airloads are found to be in good agreement with the experimental data, although not for all flight conditions. In addition, the tight coupling interface between the CFD program, OVERFLOW-2, and the CSD program, DYMORE, is also established. The ability to accurately capture the wake structure around a helicopter rotor is crucial for rotorcraft performance analysis. In the third part of this thesis, a new representation of the wake vortex structure based on Non-Uniform Rational B-Spline (NURBS) curves and surfaces is proposed to develop an efficient model for prescribed and free wakes. NURBS curves and surfaces are able to represent complex shapes with remarkably little data. The proposed formulation has the potential to reduce the computational cost associated with the use of Helmholtz's law and the Biot-Savart law when calculating the induced flow field around the rotor. An efficient free-wake analysis will considerably decrease the computational cost of comprehensive rotorcraft analysis, making the approach more attractive to routine use in industrial settings.
Three-dimensional viscous rotor flow calculations using a viscous-inviscid interaction approach
NASA Technical Reports Server (NTRS)
Chen, Ching S.; Bridgeman, John O.
1990-01-01
A three-dimensional viscous-inviscid interaction analysis was developed to predict the performance of rotors in hover and in forward flight at subsonic and transonic tip speeds. The analysis solves the full-potential and boundary-layer equations by finite-difference numerical procedures. Calculations were made for several different model rotor configurations. The results were compared with predictions from a two-dimensional integral method and with experimental data. The comparisons show good agreement between predictions and test data.
Extra-Pair Mating and Evolution of Cooperative Neighbourhoods
Eliassen, Sigrunn; Jørgensen, Christian
2014-01-01
A striking but unexplained pattern in biology is the promiscuous mating behaviour in socially monogamous species. Although females commonly solicit extra-pair copulations, the adaptive reason has remained elusive. We use evolutionary modelling of breeding ecology to show that females benefit because extra-pair paternity incentivizes males to shift focus from a single brood towards the entire neighbourhood, as they are likely to have offspring there. Male-male cooperation towards public goods and dear enemy effects of reduced territorial aggression evolve from selfish interests, and lead to safer and more productive neighbourhoods. The mechanism provides adaptive explanations for the common empirical observations that females engage in extra-pair copulations, that neighbours dominate as extra-pair sires, and that extra-pair mating correlates with predation mortality and breeding density. The models predict cooperative behaviours at breeding sites where males cooperate more towards public goods than females. Where maternity certainty makes females care for offspring at home, paternity uncertainty and a potential for offspring in several broods make males invest in communal benefits and public goods. The models further predict that benefits of extra-pair mating affect whole nests or neighbourhoods, and that cuckolding males are often cuckolded themselves. Derived from ecological mechanisms, these new perspectives point towards the evolution of sociality in birds, with relevance also for mammals and primates including humans. PMID:24987839
Johnson, Douglas H.; Cook, R.D.
2013-01-01
In her AAAS News & Notes piece "Can the Southwest manage its thirst?" (26 July, p. 362), K. Wren quotes Ajay Kalra, who advocates a particular method for predicting Colorado River streamflow "because it eschews complex physical climate models for a statistical data-driven modeling approach." A preference for data-driven models may be appropriate in this individual situation, but it is not so generally. Data-driven models often come with a warning against extrapolating beyond the range of the data used to develop the models. When the future is like the past, data-driven models can work well for prediction, but it is easy to over-model local or transient phenomena, often leading to predictive inaccuracy (1). Mechanistic models are built on established knowledge of the process that connects the response variables with the predictors, using information obtained outside of an extant data set. One may shy away from a mechanistic approach when the underlying process is judged to be too complicated, but good predictive models can be constructed with statistical components that account for ingredients missing in the mechanistic analysis. Models with sound mechanistic components are more generally applicable and robust than data-driven models.
Aircraft interior noise prediction using a structural-acoustic analogy in NASTRAN modal synthesis
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Sullivan, Brenda M.; Marulo, Francesco
1988-01-01
The noise induced inside a cylindrical fuselage model by shaker excitation is investigated theoretically and experimentally. The NASTRAN modal-synthesis program is used in the theoretical analysis, and the predictions are compared with experimental measurements in extensive graphs. Good general agreement is obtained, but the need for further refinements to account for acoustic-cavity damping and structural-acoustic interaction is indicated.
Prospective comparison of severity scores for predicting mortality in community-acquired pneumonia.
Luque, Sonia; Gea, Joaquim; Saballs, Pere; Ferrández, Olivia; Berenguer, Nuria; Grau, Santiago
2012-06-01
Specific prognostic models for community-acquired pneumonia (CAP) to guide treatment decisions have been developed, such as the Pneumonia Severity Index (PSI) and the Confusion, Urea nitrogen, Respiratory rate, Blood pressure and age ≥ 65 years index (CURB-65). Additionally, general models are available, such as the Mortality Probability Model (MPM-II). So far, which score performs better in CAP remains controversial. The objective was to compare PSI, CURB-65 and the general model, MPM-II, for predicting 30-day mortality in patients admitted with CAP. Prospective observational study including all consecutive patients hospitalised with a confirmed diagnosis of CAP and treated according to the hospital guidelines. Comparison of the overall discriminatory power of the models was performed by calculating the area under the receiver operating characteristic curve (AUC of the ROC curve) and calibration through the goodness-of-fit test. One hundred and fifty-two patients were included (mean age 73.0 years; 69.1% male; 75.0% with more than one comorbid condition). Seventy-five percent of the patients were classified as high-risk subjects according to the PSI, versus 61.2% according to the CURB-65. The 30-day mortality rate was 11.8%. All three scores obtained acceptable and similar AUC values for predicting mortality. Although all rules showed good calibration, calibration seemed better for CURB-65. CURB-65 also revealed the highest positive likelihood ratio. CURB-65 performs similarly to PSI or MPM-II for predicting 30-day mortality in patients with CAP. Consequently, this simple model can be regarded as a valid alternative to the more complex rules.
Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.
2015-01-01
Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
A first European scale multimedia fate modelling of BDE-209 from 1970 to 2020.
Earnshaw, Mark R; Jones, Kevin C; Sweetman, Andy J
2015-01-01
The European Variant Berkeley Trent (EVn-BETR) multimedia fugacity model is used to test the validity of previously derived emission estimates and predict environmental concentrations of the main decabromodiphenyl ether congener, BDE-209. The results are presented here and compared with measured environmental data from the literature. Future multimedia concentration trends are predicted using three emission scenarios (Low, Realistic and High) in the dynamic unsteady state mode covering the period 1970-2020. The spatial and temporal distributions of emissions are evaluated. It is predicted that BDE-209 atmospheric concentrations peaked in 2004 and will decline to negligible levels by 2025. Freshwater concentrations should have peaked in 2011, one year after the emissions peak, with sediment concentrations peaking in 2013. Predicted atmospheric concentrations are in good agreement with measured data for the Realistic (best estimate of emissions) and High (worst case scenario) emission scenarios. The Low emission scenario consistently underestimates measured data. The German unilateral ban on the use of DecaBDE in the textile industry is simulated in an additional scenario, the effects of which are mainly observed within Germany with only a small effect on the surrounding areas. Overall, the EVn-BETR model predicts atmospheric concentrations reasonably well, within a factor of 5 and 1.2 for the Realistic and High emission scenarios respectively, providing partial validation for the original emission estimate. Total mean MEC:PEC shows the High emission scenario predicts the best fit between air, freshwater and sediment data. An alternative spatial distribution of emissions is tested, based on higher consumption in EBFRIP member states, resulting in improved agreement between MECs and PECs in comparison with the Uniform spatial distribution based on population density. Despite good agreement between modelled and measured point data, more long-term monitoring datasets are needed to compare predicted trends in concentration to determine the rate of change of POPs within the environment. Copyright © 2014 Elsevier Ltd. All rights reserved.
Applications of statistical physics to technology price evolution
NASA Astrophysics Data System (ADS)
McNerney, James
Understanding how changing technology affects the prices of goods is a problem with both rich phenomenology and important policy consequences. Using methods from statistical physics, I model technology-driven price evolution. First, I examine a model for the price evolution of individual technologies. The price of a good often follows a power law equation when plotted against its cumulative production. This observation turns out to have significant consequences for technology policy aimed at mitigating climate change, where technologies are needed that achieve low carbon emissions at low cost. However, no theory adequately explains why technology prices follow power laws. To understand this behavior, I simplify an existing model that treats technologies as machines composed of interacting components. I find that the power law exponent of the price trajectory is inversely related to the number of interactions per component. I extend the model to allow for more realistic component interactions and make a testable prediction. Next, I conduct a case-study on the cost evolution of coal-fired electricity. I derive the cost in terms of various physical and economic components. The results suggest that commodities and technologies fall into distinct classes of price models, with commodities following martingales, and technologies following exponentials in time or power laws in cumulative production. I then examine the network of money flows between industries. This work is a precursor to studying the simultaneous evolution of multiple technologies. Economies resemble large machines, with different industries acting as interacting components with specialized functions. To begin studying the structure of these machines, I examine 20 economies with an emphasis on finding common features to serve as targets for statistical physics models. I find they share the same money flow and industry size distributions. I apply methods from statistical physics to show that industries cluster the same way according to industry type. Finally, I use these industry money flows to model the price evolution of many goods simultaneously, where network effects become important. I derive a prediction for which goods tend to improve most rapidly. The fastest-improving goods are those with the highest mean path lengths in the money flow network.
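The empirical regularity referred to here — price falling as a power law of cumulative production — is usually estimated as a straight line in log-log space. A minimal sketch with synthetic data and an assumed exponent:

    import numpy as np

    rng = np.random.default_rng(2)
    cumulative_production = np.logspace(0, 4, 60)     # units produced (arbitrary scale)
    true_exponent = -0.3
    price = 100 * cumulative_production ** true_exponent * np.exp(rng.normal(0, 0.05, 60))

    # Fit log(price) = log(a) + b * log(cumulative production).
    b, log_a = np.polyfit(np.log(cumulative_production), np.log(price), 1)
    print("estimated exponent:", b)                    # close to -0.3
    print("price drop per doubling of production:", 1 - 2 ** b)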
Wang, S; Sun, Z; Wang, S
1996-11-01
A prospective follow-up study of 539 advanced gastric carcinoma patients after resection was undertaken between 1 January 1980 and 31 December 1989, with a follow-up rate of 95.36%. A multivariate analysis of possible factors influencing survival of these patients was performed, and predicting models of their survival rates were established by the Cox proportional hazards model. The results showed that the major significant prognostic factors influencing survival of these patients were the rate and station of lymph node metastases, type of operation, hepatic metastases, size of tumor, age and location of tumor. The most important factor was the rate of lymph node metastases. According to the regression coefficients, the predicting value (PV) of each patient was calculated; all patients were then divided into five risk groups according to PV, and predicting models of survival rates after resection were established for each group. The goodness-of-fit of the estimated predicting models of survival rates was checked by fitting curves and residual plots, and the estimated models tallied with the actual situation. The results suggest that patients with advanced gastric cancer after resection without lymph node metastases and hepatic metastases had a better prognosis, and their survival probability may be predicted according to the predicting model of survival rates.
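The workflow described here — a Cox proportional hazards fit, a prognostic value computed from the regression coefficients, and stratification into risk groups — can be sketched with the lifelines package (assumed available); the covariates and survival times below are synthetic placeholders, not the gastric-cancer cohort:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(9)
    n = 539
    df = pd.DataFrame({
        "lymph_node_rate": rng.uniform(0, 1, n),      # placeholder prognostic factors
        "hepatic_metastasis": rng.integers(0, 2, n),
        "tumour_size_cm": rng.normal(5, 2, n),
    })
    hazard = np.exp(1.5 * df["lymph_node_rate"] + 0.8 * df["hepatic_metastasis"])
    df["time_months"] = rng.exponential(60 / hazard)
    df["event"] = (rng.random(n) < 0.8).astype(int)   # 1 = death observed, 0 = censored

    cph = CoxPHFitter().fit(df, duration_col="time_months", event_col="event")
    cph.print_summary()

    # Prognostic value PV = sum(beta_i * x_i) up to centering, then split into risk groups.
    df["PV"] = cph.predict_log_partial_hazard(df)
    df["risk_group"] = pd.qcut(df["PV"], q=5, labels=False)   # five groups by PV quintile
    print(df.groupby("risk_group")["event"].mean())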
NASA Astrophysics Data System (ADS)
Dodla, Venkata B.; Srinivas, Desamsetti; Dasari, Hari Prasad; Gubbala, Chinna Satyanarayana
2016-05-01
Tropical cyclone prediction, in terms of intensification and movement, is important for disaster management and mitigation. Hitherto, research studies have focused on this issue, leading to improvements in numerical models, initial data through data assimilation, physical parameterizations and the application of ensemble prediction. The Weather Research and Forecasting (WRF) model is the state-of-the-art model for cyclone prediction. In the present study, prediction of tropical cyclone Phailin (2013), which formed in the North Indian Ocean (NIO), has been made with and without data assimilation using the WRF model to assess the impacts of data assimilation. The WRF model was designed with two nested domains of 15 and 5 km resolution. Numerical experiments were made without and with the assimilation of scatterometer winds and radiances from ATOVS and ATMS. Model performance was assessed with respect to the movement and intensification of the cyclone. The ATOVS data assimilation experiment produced the best prediction, with track errors of less than 100 km up to 60 hours, and captured the pre-deepening and deepening periods accurately. The Control and SCAT wind assimilation experiments showed good tracks, but with errors of 150-200 km, and predicted gradual deepening from the beginning instead of the observed sudden deepening.
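Track error — the great-circle distance between predicted and best-track cyclone centres at each forecast hour — is the headline metric here. A minimal haversine sketch; the positions below are invented placeholders, not the actual Phailin track:

    import numpy as np

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in km between two (lat, lon) points given in degrees."""
        R = 6371.0
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dphi = np.radians(lat2 - lat1)
        dlmb = np.radians(lon2 - lon1)
        a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
        return 2 * R * np.arcsin(np.sqrt(a))

    # Hypothetical predicted vs. observed (best-track) cyclone centres at 12 h intervals.
    predicted = [(13.0, 92.0), (14.2, 90.5), (15.6, 88.9), (17.1, 87.2)]
    observed = [(13.1, 91.8), (14.5, 90.1), (16.0, 88.4), (17.4, 86.9)]
    for h, ((plat, plon), (olat, olon)) in enumerate(zip(predicted, observed)):
        print(f"{12 * (h + 1):3d} h track error: {haversine_km(plat, plon, olat, olon):6.1f} km")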
Ribeiro, Ilda Patrícia; Caramelo, Francisco; Esteves, Luísa; Menoita, Joana; Marques, Francisco; Barroso, Leonor; Miguéis, Jorge; Melo, Joana Barbosa; Carreira, Isabel Marques
2017-10-24
The head and neck squamous cell carcinoma (HNSCC) population consists mainly of high-risk for recurrence and locally advanced stage patients. Increased knowledge of the HNSCC genomic profile can improve early diagnosis and treatment outcomes. The development of models to identify consistent genomic patterns that distinguish HNSCC patients that will recur and/or develop metastasis after treatment is of utmost importance to decrease mortality and improve survival rates. In this study, we used array comparative genomic hybridization data from HNSCC patients to implement a robust model to predict HNSCC recurrence/metastasis. This predictive model showed a good accuracy (>80%) and was validated in an independent population from TCGA data portal. This predictive genomic model comprises chromosomal regions from 5p, 6p, 8p, 9p, 11q, 12q, 15q and 17p, where several upstream and downstream members of signaling pathways that lead to an increase in cell proliferation and invasion are mapped. The introduction of genomic predictive models in clinical practice might contribute to a more individualized clinical management of the HNSCC patients, reducing recurrences and improving patients' quality of life. The power of this genomic model to predict the recurrence and metastases development should be evaluated in other HNSCC populations.
Elissen, Arianne M J; Struijs, Jeroen N; Baan, Caroline A; Ruwaard, Dirk
2015-05-01
To support providers and commissioners in accurately assessing their local populations' health needs, this study produces an overview of Dutch predictive risk models for health care, focusing specifically on the type, combination and relevance of included determinants for achieving the Triple Aim (improved health, better care experience, and lower costs). We conducted a mixed-methods study combining document analyses, interviews and a Delphi study. Predictive risk models were identified based on a web search and expert input. Participating in the study were Dutch experts in predictive risk modelling (interviews; n=11) and experts in healthcare delivery, insurance and/or funding methodology (Delphi panel; n=15). Ten predictive risk models were analysed, comprising 17 unique determinants. Twelve were considered relevant by experts for estimating community health needs. Although some compositional similarities were identified between models, the combination and operationalisation of determinants varied considerably. Existing predictive risk models provide a good starting point, but optimally balancing resources and targeting interventions on the community level will likely require a more holistic approach to health needs assessment. Development of additional determinants, such as measures of people's lifestyle and social network, may require policies pushing the integration of routine data from different (healthcare) sources. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra
2013-03-01
In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) for RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability than the LOO.
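The contrast drawn here, Monte Carlo cross-validation (repeated random train/test splits) versus leave-one-out, maps directly onto scikit-learn's ShuffleSplit and LeaveOneOut. A sketch with synthetic data and an ordinary least-squares model standing in for the GLSR prediction equation:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import ShuffleSplit, LeaveOneOut, cross_val_score

    rng = np.random.default_rng(4)
    # Placeholder catchment descriptors (e.g. log area, rainfall intensity) and log flood quantile.
    X = rng.normal(size=(96, 2))
    y = 1.0 + 0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.4, 96)
    model = LinearRegression()

    mccv = ShuffleSplit(n_splits=200, test_size=0.2, random_state=0)   # Monte Carlo CV
    loo = LeaveOneOut()
    for name, cv in [("MCCV", mccv), ("LOO", loo)]:
        scores = cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error")
        print(name, "mean prediction MSE:", -scores.mean())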
Reduction of initial shock in decadal predictions using a new initialization strategy
NASA Astrophysics Data System (ADS)
He, Yujun; Wang, Bin; Liu, Mimi; Liu, Li; Yu, Yongqiang; Liu, Juanjuan; Li, Ruizhe; Zhang, Cheng; Xu, Shiming; Huang, Wenyu; Liu, Qun; Wang, Yong; Li, Feifei
2017-08-01
A novel full-field initialization strategy based on the dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar) is proposed to alleviate the well-known initial shock occurring in the early years of decadal predictions. It generates consistent initial conditions, which best fit the monthly mean oceanic analysis data along the coupled model trajectory in 1-month windows. Three indices to measure the initial shock intensity are also proposed. Results indicate that this method does reduce the initial shock in decadal predictions by the Flexible Global Ocean-Atmosphere-Land System model, Grid-point version 2 (FGOALS-g2) compared with the three-dimensional variational data assimilation-based nudging full-field initialization for the same model, and is comparable to or even better than the initialization strategies used for other models in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Better hindcasts of global mean surface air temperature anomalies can be obtained than in other FGOALS-g2 experiments. Due to the good model response to external forcing and the reduction of initial shock, higher decadal prediction skill is achieved than in other CMIP5 models.
Thermo-mechanical simulations of early-age concrete cracking with durability predictions
NASA Astrophysics Data System (ADS)
Havlásek, Petr; Šmilauer, Vít; Hájková, Karolina; Baquerizo, Luis
2017-09-01
Concrete performance is strongly affected by mix design, thermal boundary conditions, evolving mechanical properties, and internal/external restraints, with consequences for possible cracking and impaired durability. Thermo-mechanical simulations are able to capture those relevant phenomena and boundary conditions for predicting temperature, strains, stresses or cracking in reinforced concrete structures. In this paper, we propose a weakly coupled thermo-mechanical model for early-age concrete with an affinity-based hydration model for the thermal part, taking into account concrete mix design, cement type and thermal boundary conditions. The mechanical part uses the B3/B4 model for concrete creep and shrinkage with an isotropic damage model for cracking, able to predict a crack width. All models have been implemented in the open-source OOFEM software package. Validations of the thermo-mechanical simulations are presented for several massive concrete structures, showing excellent temperature predictions. Likewise, strain validation demonstrates good predictions for a restrained reinforced concrete wall and a concrete beam. Durability predictions stem from the induction time of reinforcement corrosion, caused by carbonation and/or chloride ingress and influenced by crack width. Reinforcement corrosion in the concrete struts of a bridge serves for validation.
Harrison, David A; Parry, Gareth J; Carpenter, James R; Short, Alasdair; Rowan, Kathy
2007-04-01
To develop a new model to improve risk prediction for admissions to adult critical care units in the UK. Prospective cohort study. The setting was 163 adult, general critical care units in England, Wales, and Northern Ireland, December 1995 to August 2003. Patients were 216,626 critical care admissions. None. The performance of different approaches to modeling physiologic measurements was evaluated, and the best methods were selected to produce a new physiology score. This physiology score was combined with other information relating to the critical care admission-age, diagnostic category, source of admission, and cardiopulmonary resuscitation before admission-to develop a risk prediction model. Modeling interactions between diagnostic category and physiology score enabled the inclusion of groups of admissions that are frequently excluded from risk prediction models. The new model showed good discrimination (mean c index 0.870) and fit (mean Shapiro's R 0.665, mean Brier's score 0.132) in 200 repeated validation samples and performed well when compared with recalibrated versions of existing published risk prediction models in the cohort of patients eligible for all models. The hypothesis of perfect fit was rejected for all models, including the Intensive Care National Audit & Research Centre (ICNARC) model, as is to be expected in such a large cohort. The ICNARC model demonstrated better discrimination and overall fit than existing risk prediction models, even following recalibration of these models. We recommend it be used to replace previously published models for risk adjustment in the UK.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.
Confronting uncertainty in flood damage predictions
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno
2015-04-01
Reliable flood damage models are a prerequisite for the practical usefulness of model results. Traditional uni-variate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising means to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data collected through computer-aided telephone interviews after the floods of 2002, 2005 and 2006 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample according to catchments, which allows us to study model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, represented by the proportion of observations that fall within the 5%-95% predictive interval. The reliability of the probabilistic predictions within the validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
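The evaluation criteria described above can be illustrated with a short sketch; the building-level damage records are synthetic, and the spread of the per-tree predictions of a bagged-tree ensemble is used as a stand-in for a full predictive distribution.

import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

# Synthetic building-level records: five stand-in predictors and a relative damage in [0, 1].
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 5))
rel_damage = np.clip(0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.05, 2000), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, rel_damage, test_size=0.3, random_state=1)
ens = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200, random_state=1).fit(X_tr, y_tr)

# per-tree predictions give an empirical predictive distribution for each building
per_tree = np.stack([tree.predict(X_te) for tree in ens.estimators_])
lo, hi = np.percentile(per_tree, [5, 95], axis=0)
mean_pred = per_tree.mean(axis=0)

print(f"mean bias: {np.mean(mean_pred - y_te):+.4f}")
print(f"mean absolute error: {np.mean(np.abs(mean_pred - y_te)):.4f}")
print(f"coverage of 5-95% interval: {np.mean((y_te >= lo) & (y_te <= hi)):.2%}")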
Fiber orientation in injection molded long carbon fiber thermoplastic composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jin; Nguyen, Ba Nghiep; Mathur, Raj N.
2015-03-23
A set of edge-gated and center-gated plaques were injection molded with long carbon fiber-reinforced thermoplastic composites, and the fiber orientation was measured at different locations of the plaques. Autodesk Simulation Moldflow Insight (ASMI) software was used to simulate the injection molding of these plaques and to predict the fiber orientation, using the anisotropic rotary diffusion and the reduced strain closure models. The phenomenological parameters of the orientation models were carefully identified by fitting to the measured orientation data. The fiber orientation predictions show very good agreement with the experimental data.
Techno-economic analysis of a transient plant-based platform for monoclonal antibody production
Nandi, Somen; Kwong, Aaron T.; Holtz, Barry R.; Erwin, Robert L.; Marcel, Sylvain; McDonald, Karen A.
2016-01-01
Plant-based biomanufacturing of therapeutic proteins is a relatively new platform with a small number of commercial-scale facilities, but offers advantages of linear scalability, reduced upstream complexity, reduced time to market, and potentially lower capital and operating costs. In this study we present a detailed process simulation model for a large-scale new “greenfield” biomanufacturing facility that uses transient agroinfiltration of Nicotiana benthamiana plants grown hydroponically indoors under light-emitting diode lighting for the production of a monoclonal antibody. The model was used to evaluate the total capital investment, annual operating cost, and cost of goods sold as a function of mAb expression level in the plant (g mAb/kg fresh weight of the plant) and production capacity (kg mAb/year). For the Base Case design scenario (300 kg mAb/year, 1 g mAb/kg fresh weight, and 65% recovery in downstream processing), the model predicts a total capital investment of $122 million and cost of goods sold of $121/g including depreciation. Compared with traditional biomanufacturing platforms that use mammalian cells grown in bioreactors, the model predicts significant reductions in capital investment and >50% reduction in cost of goods compared with published values at similar production scales. The simulation model can be modified or adapted by others to assess the profitability of alternative designs, implement different process assumptions, and help guide process development and optimization.
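The Base Case mass balance quoted above can be reproduced in a few lines; the sketch below uses only the figures given in the abstract and treats the downstream recovery as a single lumped yield, which is a simplification of the full process simulation.

# Back-of-the-envelope mass balance for the Base Case (figures quoted in the abstract;
# the lumped recovery treatment is an illustrative simplification, not the published model).
annual_mab_kg = 300.0          # production target, kg purified mAb per year
expression_g_per_kg_fw = 1.0   # g mAb per kg fresh-weight biomass
dsp_recovery = 0.65            # downstream processing recovery
cogs_per_g = 121.0             # $ per g, including depreciation

biomass_kg_fw = annual_mab_kg * 1000.0 / (expression_g_per_kg_fw * dsp_recovery)
annual_cogs = annual_mab_kg * 1000.0 * cogs_per_g

print(f"fresh-weight biomass needed: {biomass_kg_fw / 1e3:.0f} t/year")
print(f"annual cost of goods sold:   ${annual_cogs / 1e6:.1f} M/year")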
A zero-equation turbulence model for two-dimensional hybrid Hall thruster simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappelli, Mark A., E-mail: cap@stanford.edu; Young, Christopher V.; Cha, Eunsun
2015-11-15
We present a model for electron transport across the magnetic field of a Hall thruster and integrate this model into 2-D hybrid particle-in-cell simulations. The model is based on a simple scaling of the turbulent electron energy dissipation rate and the assumption that this dissipation results in Ohmic heating. Implementing the model into 2-D hybrid simulations is straightforward and leverages the existing framework for solving the electron fluid equations. The model recovers the axial variation in the mobility seen in experiments, predicting the generation of a transport barrier which anchors the region of plasma acceleration. The predicted xenon neutral and ion velocities are found to be in good agreement with laser-induced fluorescence measurements.
Network model for thermal conductivities of unidirectional fiber-reinforced composites
NASA Astrophysics Data System (ADS)
Wang, Yang; Peng, Chaoyi; Zhang, Weihua
2014-12-01
An empirical network model has been developed to predict the in-plane thermal conductivities along arbitrary directions for unidirectional fiber-reinforced composite laminae. Measurements of thermal conductivities along different orientations were carried out. Good agreement was observed between values predicted by the network model and the experimental data; compared with established analytical models, the newly proposed network model gives values with higher precision. Therefore, this network model helps to provide a wider and more comprehensive understanding of the heat transmission characteristics of fiber-reinforced composites and can be used as guidance to design and fabricate laminated composites with specific directional or locational thermal conductivities for structures that simultaneously perform mechanical and thermal functions, i.e. multifunctional structures (MFS).
A control-theory model for human decision-making
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.
1971-01-01
A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.
Owens, Robert L; Edwards, Bradley A; Eckert, Danny J; Jordan, Amy S; Sands, Scott A; Malhotra, Atul; White, David P; Loring, Stephen H; Butler, James P; Wellman, Andrew
2015-06-01
Both anatomical and nonanatomical traits are important in obstructive sleep apnea (OSA) pathogenesis. We have previously described a model combining these traits, but have not determined its diagnostic accuracy to predict OSA. A valid model, and knowledge of the published effect sizes of trait manipulation, would also allow us to predict the number of patients with OSA who might be effectively treated without using positive airway pressure (PAP). Fifty-seven subjects with and without OSA underwent standard clinical and research sleep studies to measure OSA severity and the physiological traits important for OSA pathogenesis, respectively. The traits were incorporated into a physiological model to predict OSA. The model validity was determined by comparing the model prediction of OSA to the clinical diagnosis of OSA. The effect of various trait manipulations was then simulated to predict the proportion of patients treated by each intervention. The model had good sensitivity (80%) and specificity (100%) for predicting OSA. A single intervention on one trait would be predicted to treat OSA in approximately one quarter of all patients. Combination therapy with two interventions was predicted to treat OSA in ∼50% of patients. An integrative model of physiological traits can be used to predict population-wide and individual responses to non-PAP therapy. Many patients with OSA would be expected to be treated based on known trait manipulations, making a strong case for the importance of non-anatomical traits in OSA pathogenesis and the effectiveness of non-PAP therapies. © 2015 Associated Professional Sleep Societies, LLC.
Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong
2011-09-01
One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models, but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained on different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e., Opus-CA, Opus-PSP, DFIRE, RAPDF, and Cheng Score, are synthesized by a sampling method. Finally, another integrated RBF model ranks the structural models according to features of the sampling distribution. We tested the proposed method on two different datasets, including the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test results show that our method outperforms any individual scoring function on both best-model selection and overall correlation between the predicted and actual rankings of structural quality.
Transition Heat Transfer Modeling Based on the Characteristics of Turbulent Spots
NASA Technical Reports Server (NTRS)
Simon, Fred; Boyle, Robert
1998-01-01
While turbulence models are being developed which show promise for simulating the transition region on a turbine blade or vane, it is believed that the best approach with the greatest potential for practical use is the use of models which incorporate the physics of the turbulent spots present in the transition region. This type of modeling results in a prediction of transition region intermittency which, when incorporated in turbulence models, gives a good to excellent prediction of the transition region heat transfer. Some models are presented which show how turbulent spot characteristics and behavior can be employed to predict the effect of pressure gradient and Mach number on the transition region. The models predict the spot formation rate which is needed, in addition to the transition onset location, in the Narasimha concentrated breakdown intermittency equation. A simplified approach is taken for modeling turbulent spot growth and interaction in the transition region which utilizes the turbulent spot variables governing transition length and spot generation rate. The models are expressed in terms of spot spreading angle, dimensionless spot velocity, dimensionless spot area, disturbance frequency and Mach number. The models are used in conjunction with a computer code to predict the effects of pressure gradient and Mach number on the transition region and compared with VKI experimental turbine data.
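For reference, the concentrated-breakdown intermittency distribution mentioned above is usually written in the following standard form (generic notation, which may differ slightly from the symbols used in the paper):

\gamma(x) = 1 - \exp\!\left[-\frac{n\,\sigma\,(x - x_t)^2}{U}\right], \qquad x \ge x_t,

where x_t is the transition onset location, n the spot generation rate per unit span, \sigma Emmons' dimensionless spot propagation parameter, and U the free-stream velocity.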
Evolutionary model of an anonymous consumer durable market
NASA Astrophysics Data System (ADS)
Kaldasch, Joachim
2011-07-01
An analytic model is presented that considers the evolution of a market of durable goods. The model suggests that after introduction goods always spread according to a Bass diffusion. However, this phase is followed by a diffusion process for durable consumer goods governed by a variation-selection-reproduction mechanism, and the growth dynamics can be described by a replicator equation. The theory suggests that products play the role of species in biological evolutionary models. It implies that the evolution of man-made products can be arranged into an evolutionary tree. The model suggests that each product can be characterized by its product fitness. The fitness space contains elements of both sides of the market, supply and demand. The unit sales of products with a higher product fitness compared to the mean fitness increase. Durables with a constant fitness advantage replace other goods according to a logistic law. The model predicts in particular that the mean price of durable goods exhibits an exponential decrease over a long time period. The evolutionary diffusion process is directly related to this price decline and is governed by the Gompertz equation; it is therefore denoted as Gompertz diffusion. Describing aggregate sales as the sum of first, multiple and replacement purchases, the product life cycle can be derived. Replacement purchases cause periodic variations of the sales, determined by the finite lifetime of the good (Juglar cycles). The model suggests that both Bass and Gompertz diffusion may contribute to the product life cycle of a consumer durable. The theory contains the standard equilibrium view of a market as a special case; whether an equilibrium or an evolutionary description is more appropriate depends on the time scale. The evolutionary framework is also used to derive the size, growth rate and price distributions of manufacturing business units. It predicts that the size distribution of the business units (products) is lognormal, while the growth rates exhibit a Laplace distribution. Large price deviations from the mean price are also governed by a Laplace distribution (fat tails). These results are in agreement with empirical findings. The explicit comparison of the time evolution of consumer durables with empirical investigations confirms the close relationship between price decline and Gompertz diffusion, while the product life cycle can be described qualitatively over a long time period.
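For orientation, the two governing equations invoked above are reproduced here in their standard textbook forms (generic notation, not necessarily that of the paper): the replicator equation for the market share x_i of product i with fitness f_i, and the Gompertz law for the diffusion variable y(t),

\frac{dx_i}{dt} = x_i \left( f_i - \bar{f} \right), \qquad \bar{f} = \sum_j x_j f_j, \qquad\qquad \frac{dy}{dt} = -k\, y \ln\frac{y}{y_{\max}},

where \bar{f} is the share-weighted mean fitness, k a rate constant and y_{\max} the saturation level.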
NASA Astrophysics Data System (ADS)
Matamala, R.; Fan, Z.; Jastrow, J. D.; Liang, C.; Calderon, F.; Michaelson, G.; Ping, C. L.; Mishra, U.; Hofmann, S. M.
2016-12-01
The large amounts of organic matter stored in permafrost-region soils are preserved in a relatively undecomposed state by the cold and wet environmental conditions limiting decomposer activity. With pending climate changes and the potential for warming of Arctic soils, there is a need to better understand the amount and potential susceptibility to mineralization of the carbon stored in the soils of this region. Studies have suggested that soil C:N ratio or other indicators based on the molecular composition of soil organic matter could be good predictors of potential decomposability. In this study, we investigated the capability of Fourier-transform mid-infrared (MidIR) spectroscopy to predict the evolution of carbon dioxide (CO2) produced by Arctic tundra soils during a 60-day laboratory incubation. Soils collected from four tundra sites on the Coastal Plain and Arctic Foothills of the North Slope of Alaska were separated into active-layer organic, active-layer mineral, and upper permafrost and incubated at 1, 4, 8 and 16 °C. Carbon dioxide production was measured throughout the incubations. Total soil organic carbon (SOC) and total nitrogen (TN) concentrations, salt (0.5 M K2SO4) extractable organic matter (SEOM), and MidIR spectra of the soils were measured before and after incubation. Multivariate partial least squares (PLS) modeling was used to predict cumulative CO2 production, decay rates, and the other measurements. MidIR reliably estimated SOC, TN and SEOM concentrations. The MidIR prediction models of CO2 production were very good for active-layer mineral and upper permafrost soils and good for the active-layer organic soils. SEOM was also a very good predictor of CO2 produced during the incubations. Analysis of the standardized beta coefficients from the PLS models of CO2 production for the three soil layers indicated a small number (nine) of influential spectral bands. Of these, bands associated with O-H and N-H stretch, carbonates, and ester C-O appeared to be most important for predicting CO2 production for both active-layer mineral and upper permafrost soils. Further analysis of these influential bands and their relationships to SEOM in soil will be explored. Our results show that the MidIR spectra contain valuable information that can be related to the decomposability of soils.
The relationship between morphological and behavioral mimicry in hover flies (Diptera: Syrphidae).
Penney, Heather D; Hassall, Christopher; Skevington, Jeffrey H; Lamborn, Brent; Sherratt, Thomas N
2014-02-01
Palatable (Batesian) mimics of unprofitable models could use behavioral mimicry to compensate for the ease with which they can be visually discriminated or to augment an already close morphological resemblance. We evaluated these contrasting predictions by assaying the behavior of 57 field-caught species of mimetic hover flies (Diptera: Syrphidae) and quantifying their morphological similarity to a range of potential hymenopteran models. A purpose-built phylogeny for the hover flies was used to control for potential lack of independence due to shared evolutionary history. Those hover fly species that engage in behavioral mimicry (mock stinging, leg waving, wing wagging) were all large wasp mimics within the genera Spilomyia and Temnostoma. While the behavioral mimics assayed were good morphological mimics, not all good mimics were behavioral mimics. Therefore, while the behaviors may have evolved to augment good morphological mimicry, they do not advantage all good mimics.
Fighting the good fight: the relationship between belief in evil and support for violent policies.
Campbell, Maggie; Vollhardt, Johanna Ray
2014-01-01
The rhetoric of good and evil is prevalent in many areas of society and is often used to garner support for "redemptive violence" (i.e., using violence to rid and save the world from evil). While evil is discussed in psychological literature, beliefs about good and evil have not received adequate empirical attention as predictors of violent versus peaceful intergroup attitudes. In four survey studies, we developed and tested novel measures of belief in evil and endorsement of redemptive violence. Across four different samples, belief in evil predicted greater support for violence and lesser support for nonviolent responses. These effects were, in most cases, mediated by endorsement of redemptive violence. Structural equation modeling suggested that need for cognitive closure predicts belief in evil, and that the effect of belief in evil on support for violence is independent of right-wing authoritarianism, religious fundamentalism, and dangerous world beliefs.
NASA Astrophysics Data System (ADS)
Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.
2013-09-01
This paper presents a soft-computing-based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against the experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results. In comparison to the experimental data, the proposed ANFIS model has MRE% < 1.53% and 2.85% for the training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR of the IR-IECF device.
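The four performance measures listed above are straightforward to reproduce; the sketch below computes them for a pair of measured and predicted neutron production rates (the numbers are synthetic placeholders, not data from the IR-IECF device).

import numpy as np

# Synthetic measured and predicted NPR values (e.g. n/s), purely illustrative.
measured = np.array([1.2e6, 2.5e6, 4.1e6, 6.0e6, 8.3e6])
predicted = np.array([1.25e6, 2.4e6, 4.2e6, 5.9e6, 8.2e6])

r = np.corrcoef(measured, predicted)[0, 1]                       # correlation coefficient
mae = np.mean(np.abs(predicted - measured))                      # mean absolute error
mre_pct = 100.0 * np.mean(np.abs(predicted - measured) / measured)  # mean relative error percentage
rmse = np.sqrt(np.mean((predicted - measured) ** 2))             # root mean square error

print(f"R = {r:.4f}, MAE = {mae:.3g}, MRE% = {mre_pct:.2f}, RMSE = {rmse:.3g}")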
Validation of the Integrated Medical Model Using Historical Space Flight Data
NASA Technical Reports Server (NTRS)
Kerstman, Eric L.; Minard, Charles G.; FreiredeCarvalho, Mary H.; Walton, Marlei E.; Myers, Jerry G., Jr.; Saile, Lynn G.; Lopez, Vilma; Butler, Douglas J.; Johnson-Throop, Kathy A.
2010-01-01
The Integrated Medical Model (IMM) utilizes Monte Carlo methodologies to predict the occurrence of medical events, utilization of resources, and clinical outcomes during space flight. Real-world data may be used to demonstrate the accuracy of the model. For this analysis, IMM predictions were compared to data from historical shuttle missions, not yet included as model source input. Initial goodness of fit testing on International Space Station data suggests that the IMM may overestimate the number of occurrences for three of the 83 medical conditions in the model. The IMM did not underestimate the occurrence of any medical condition. Initial comparisons with shuttle data demonstrate the importance of understanding crew preference (i.e., preferred analgesic) for accurately predicting the utilization of resources. The initial analysis demonstrates the validity of the IMM for its intended use and highlights areas for improvement.
Forecasting stochastic neural network based on financial empirical mode decomposition.
Wang, Jie; Wang, Jun
2017-06-01
In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to account for the weight of the occurrence time of the historical data. Linear regression is used to assess the predictive capability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing its predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results for real stock index series, and the empirical results show that the proposed model indeed displays good performance in forecasting stock market fluctuations.
Prediction of gestational age based on genome-wide differentially methylated regions.
Bohlin, J; Håberg, S E; Magnus, P; Reese, S E; Gjessing, H K; Magnus, M C; Parr, C L; Page, C M; London, S J; Nystad, W
2016-10-07
We explored the association between gestational age and cord blood DNA methylation at birth and whether DNA methylation could be effective in predicting gestational age due to limitations with the presently used methods. We used data from the Norwegian Mother and Child Birth Cohort study (MoBa) with Illumina HumanMethylation450 data measured for 1753 newborns in two batches: MoBa 1, n = 1068; and MoBa 2, n = 685. Gestational age was computed using both ultrasound and the last menstrual period. We evaluated associations between DNA methylation and gestational age and developed a statistical model for predicting gestational age using MoBa 1 for training and MoBa 2 for predictions. The prediction model was additionally used to compare ultrasound and last menstrual period-based gestational age predictions. Furthermore, both CpGs and associated genes detected in the training models were compared to those detected in a published prediction model for chronological age. There were 5474 CpGs associated with ultrasound gestational age after adjustment for a set of covariates, including estimated cell type proportions, and Bonferroni-correction for multiple testing. Our model predicted ultrasound gestational age more accurately than it predicted last menstrual period gestational age. DNA methylation at birth appears to be a good predictor of gestational age. Ultrasound gestational age is more strongly associated with methylation than last menstrual period gestational age. The CpGs linked with our gestational age prediction model, and their associated genes, differed substantially from the corresponding CpGs and genes associated with a chronological age prediction model.
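As an illustration of how such a predictor can be trained, the sketch below fits a cross-validated penalized regression of gestational age on CpG beta values; the data are synthetic and the elastic-net penalty is an assumption made for illustration, not a description of the published model.

import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

# Synthetic stand-in for Illumina 450K beta values and gestational age in days.
rng = np.random.default_rng(7)
n_samples, n_cpgs = 1000, 2000
betas = rng.uniform(0, 1, size=(n_samples, n_cpgs))
true_w = np.zeros(n_cpgs)
true_w[:50] = rng.normal(0, 4, 50)                            # only a few informative CpGs
ga_days = 280 + betas @ true_w + rng.normal(0, 5, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(betas, ga_days, test_size=0.3, random_state=7)
model = ElasticNetCV(l1_ratio=0.5, cv=5, n_alphas=30, max_iter=5000).fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"CpGs with non-zero weights: {np.sum(model.coef_ != 0)}")
print(f"median absolute error: {np.median(np.abs(pred - y_te)):.1f} days")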
NASA Astrophysics Data System (ADS)
Vatandoost, Hossein; Norouzi, Mahmood; Masoud Sajjadi Alehashem, Seyed; Smoukov, Stoyan K.
2017-06-01
Tension-compression operation in MR elastomers (MREs) offers both the most compact design and superior stiffness in many vertical load-bearing applications, such as MRE bearing isolators in bridges and buildings, suspension systems and engine mounts in cars, and vibration control equipment. It suffers, however, from a lack of good computational models to predict device performance, and as a result shear-mode MREs are widely used in the industry, despite their low stiffness and load-bearing capacity. We start with a comprehensive review of modeling of MREs and their dynamic characteristics, showing that previous studies have mostly focused on the dynamic behavior of MREs in shear mode, even though the MRE strength and MR effect are greatly decreased at high strain amplitudes, due to the increasing distance between the magnetic particles. Moreover, the characteristic parameters of the current models assume that either frequency, strain, or magnetic field is constant; hence, new model parameters must be recalculated for new loading conditions. This is an experimentally time consuming and computationally expensive task, and no models capture the full dynamic behavior of the MREs at all loading conditions. In this study, we present an experimental setup to test MREs in a coupled tension-compression mode, as well as a novel phenomenological model which fully predicts the stress-strain material behavior as a function of magnetic flux density, loading frequency and strain. We use a training set of experiments to find the experimentally derived model parameters, from which we can predict, by interpolation, the MRE behavior in a relatively large continuous range of frequency, strain and magnetic field. We also challenge the model to extrapolate and compare its predictions to additional experiments outside the training data set, with good agreement. Further development of this model would allow design and control of engineering structures equipped with tension-compression MREs and all the advantages they offer.
Sagarduy, José Luis Ybarra; López, Julio Alfonso Piña; Ramírez, Mónica Teresa González; Dávila, Luis Enrique Fierros
2017-09-04
The objective of this study was to test the ability of variables of a psychological model to predict antiretroviral therapy medication adherence behavior. We conducted a cross-sectional study among 172 persons living with HIV/AIDS (PLWHA), who completed four self-administered assessments: 1) the Psychological Variables and Adherence Behaviors Questionnaire, 2) the Stress-Related Situation Scale to assess the personality variable, 3) the Zung Depression Scale, and 4) the Duke-UNC Functional Social Support Questionnaire. Structural equation modeling was used to construct a model to predict medication adherence behaviors. Of all the participants, 141 (82%) were considered 100% adherent to antiretroviral therapy. Structural equation modeling confirmed the direct effect that personality (decision-making and tolerance of frustration) has on motives to behave, or act accordingly, which was in turn directly related to medication adherence behaviors. In addition, these behaviors had a direct and significant effect on viral load, as well as an indirect effect on CD4 cell count. The final model demonstrates the congruence between theory and data (χ²/df = 1.480, goodness of fit index = 0.97, adjusted goodness of fit index = 0.94, comparative fit index = 0.98, root mean square error of approximation = 0.05), accounting for 55.7% of the variance. The results of this study support our theoretical model as a conceptual framework for the prediction of medication adherence behaviors in persons living with HIV/AIDS. Implications for designing, implementing, and evaluating intervention programs based on the model are discussed.
Payne, Courtney E; Wolfrum, Edward J
2015-01-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. It is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
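A minimal sketch of a PLS-2 calibration of the kind described above is shown below; the spectra and constituent values are synthetic and the number of latent variables is arbitrary, so this illustrates the workflow rather than the reported models.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic NIR spectra and four composition responses (e.g. glucan, xylan, lignin, ash, wt%).
rng = np.random.default_rng(3)
n_samples, n_wavelengths, n_responses = 400, 700, 4
spectra = rng.normal(size=(n_samples, n_wavelengths))
loadings = rng.normal(size=(n_wavelengths, n_responses))
composition = spectra @ loadings * 0.01 + 30 + rng.normal(0, 0.5, size=(n_samples, n_responses))

X_tr, X_te, Y_tr, Y_te = train_test_split(spectra, composition, test_size=0.25, random_state=3)
pls2 = PLSRegression(n_components=10).fit(X_tr, Y_tr)   # a PLS-1 model is the same call with one response column
Y_hat = pls2.predict(X_te)

rmsep = np.sqrt(np.mean((Y_hat - Y_te) ** 2, axis=0))   # prediction error per constituent
print("RMSEP per constituent:", np.round(rmsep, 2))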
Dong, Ni; Huang, Helai; Zheng, Liang
2015-09-01
In zone-level crash prediction, accounting for spatial dependence has become an extensively studied topic. This study proposes a Support Vector Machine (SVM) model to address complex, large and multi-dimensional spatial data in crash prediction. A Correlation-based Feature Selector (CFS) was applied to evaluate candidate factors possibly related to zonal crash frequency when handling high-dimensional spatial data. To demonstrate the proposed approaches and to compare them with the Bayesian spatial model with a conditional autoregressive prior (CAR), a dataset from Hillsborough County, Florida was employed. The results showed that SVM models accounting for spatial proximity outperform the non-spatial model in terms of model fitting and predictive performance, which indicates the reasonableness of considering cross-zonal spatial correlations. The best predictive capability is associated with the model that considers centroid-distance proximity, uses the RBF kernel and sets 10% of the whole dataset aside as testing data, which further exhibits the capacity of SVM models for addressing comparatively complex spatial data in regional crash prediction modeling. Moreover, SVM models exhibit better goodness-of-fit than CAR models when the whole dataset is used as the sample. A sensitivity analysis of the centroid-distance-based spatial SVM models was conducted to capture the impacts of explanatory variables on the mean predicted probabilities of crash occurrence. The results conform to the coefficient estimates of the CAR models, which supports the use of the SVM model as an alternative in regional safety modeling.
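A minimal sketch of an RBF-kernel SVM for zonal crash frequency, with a centroid-distance-derived spatial feature and a 10% hold-out, appears below; the zones, covariates and crash counts are synthetic, and the feature construction is a crude simplification of the spatial-proximity treatment described above.

import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic zones: centroids, a few covariates (e.g. VMT, intersections) and crash counts.
rng = np.random.default_rng(5)
n_zones = 500
centroids = rng.uniform(0, 50, size=(n_zones, 2))
covariates = rng.normal(size=(n_zones, 4))
crashes = np.exp(0.6 * covariates[:, 0] + 0.3 * covariates[:, 1] + rng.normal(0, 0.3, n_zones))

# Crude spatial feature: mean crash count of the 5 nearest zones by centroid distance.
# (In practice this should be built from training zones only to avoid leakage.)
d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
neighbour_mean = crashes[np.argsort(d, axis=1)[:, :5]].mean(axis=1)

X = np.column_stack([covariates, neighbour_mean])
X_tr, X_te, y_tr, y_te = train_test_split(X, crashes, test_size=0.10, random_state=5)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1)).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"hold-out MAE: {np.mean(np.abs(pred - y_te)):.2f} crashes/zone")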
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lauer, Mark A.; Poirier, David R.; Erdmann, Robert G.
2014-09-01
This report covers the modeling of seven directionally solidified samples, five under normal gravitational conditions and two in microgravity. A model is presented to predict macrosegregation during the melting phases of samples solidified under microgravitational conditions. The results of this model are compared against two samples processed in microgravity and good agreement is found. A second model is presented that captures thermosolutal convection during directional solidification. Results for this model are compared across several experiments and quantitative comparisons are made between the model and the experimentally obtained radial macrosegregation profiles with good agreement being found. Changes in cross section were present in some samples and micrographs of these are qualitatively compared with the results of the simulations. It is found that macrosegregation patterns can be affected by changing the mold material.
Some Recent Developments in Turbulence Closure Modeling
NASA Astrophysics Data System (ADS)
Durbin, Paul A.
2018-01-01
Turbulence closure models are central to a good deal of applied computational fluid dynamical analysis. Closure modeling endures as a productive area of research. This review covers recent developments in elliptic relaxation and elliptic blending models, unified rotation and curvature corrections, transition prediction, hybrid simulation, and data-driven methods. The focus is on closure models in which transport equations are solved for scalar variables, such as the turbulent kinetic energy, a timescale, or a measure of anisotropy. Algebraic constitutive representations are reviewed for their role in relating scalar closures to the Reynolds stress tensor. Seamless and nonzonal methods, which invoke a single closure model, are reviewed, especially detached eddy simulation (DES) and adaptive DES. Other topics surveyed include data-driven modeling and intermittency and laminar fluctuation models for transition prediction. The review concludes with an outlook.
Myint, Kyaw Z.; Xie, Xiang-Qun
2015-01-01
This chapter focuses on the fingerprint-based artificial neural networks QSAR (FANN-QSAR) approach to predict biological activities of structurally diverse compounds. Three types of fingerprints, namely ECFP6, FP2, and MACCS, were used as inputs to train the FANN-QSAR models. The results were benchmarked against known 2D and 3D QSAR methods, and the derived models were used to predict cannabinoid (CB) ligand binding activities as a case study. In addition, the FANN-QSAR model was used as a virtual screening tool to search a large NCI compound database for lead cannabinoid compounds. We discovered several compounds with good CB2 binding affinities ranging from 6.70 nM to 3.75 μM. The studies proved that the FANN-QSAR method is a useful approach to predict bioactivities or properties of ligands and to find novel lead compounds for drug discovery research.
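The fingerprint-to-activity pipeline described above can be sketched as follows; the sketch uses RDKit Morgan fingerprints of radius 3 (the circular-fingerprint analogue of ECFP6) feeding a small feed-forward network, with a toy SMILES list and random activities standing in for the cannabinoid training set.

import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.neural_network import MLPRegressor

# Toy training set: SMILES strings and fake activities (e.g. pKi values), purely illustrative.
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC", "c1ccc2ccccc2c1"] * 20
activities = np.random.default_rng(2).normal(6.0, 1.0, len(smiles))

def ecfp6(smi, n_bits=2048):
    """Morgan fingerprint with radius 3, the circular-fingerprint analogue of ECFP6."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=n_bits)
    return np.array([int(b) for b in fp.ToBitString()], dtype=np.int8)

X = np.array([ecfp6(s) for s in smiles])
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=2).fit(X, activities)

query = ecfp6("CC(=O)Oc1ccccc1C(=O)O")     # aspirin as an arbitrary query molecule
print("predicted activity:", round(float(model.predict(query.reshape(1, -1))[0]), 2))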
Kumar, Atul; Samadder, S R
2017-10-01
Accurate prediction of the quantity of household solid waste generation is very much essential for effective management of municipal solid waste (MSW). In actual practice, modelling methods are often found useful for precise prediction of MSW generation rates. In this study, two models have been proposed that establish the relationships between the household solid waste generation rate and socioeconomic parameters, such as household size, total family income, education, occupation and fuel used in the kitchen. The multiple linear regression technique was applied to develop the two models, one for the prediction of the biodegradable MSW generation rate and the other for the non-biodegradable MSW generation rate for individual households of the city of Dhanbad, India. The results of the two models showed that the coefficients of determination (R²) were 0.782 for the biodegradable waste generation rate and 0.676 for the non-biodegradable waste generation rate using the selected independent variables. The accuracy tests of the developed models showed convincing results, as the predicted values were very close to the observed values. Validation of the developed models with a new set of data indicated a good fit for actual prediction purposes, with predicted R² values of 0.76 and 0.64 for the biodegradable and non-biodegradable MSW generation rates, respectively.
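A minimal sketch of this type of multiple linear regression is given below; the household records, variables and coefficients are synthetic placeholders rather than the Dhanbad survey data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic household records: size, income, education years and a daily biodegradable waste rate.
rng = np.random.default_rng(11)
n = 600
household_size = rng.integers(1, 9, n)
income = rng.normal(30000, 8000, n)                    # monthly family income (arbitrary units)
education_years = rng.integers(0, 18, n)
waste_kg_day = (0.12 * household_size + 8e-6 * income
                + 0.01 * education_years + rng.normal(0, 0.1, n))

X = np.column_stack([household_size, income, education_years])
X_tr, X_va, y_tr, y_va = train_test_split(X, waste_kg_day, test_size=0.3, random_state=11)

mlr = LinearRegression().fit(X_tr, y_tr)
print("coefficients:", np.round(mlr.coef_, 5), "intercept:", round(float(mlr.intercept_), 3))
print("validation R^2:", round(r2_score(y_va, mlr.predict(X_va)), 3))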
NASA Astrophysics Data System (ADS)
Liu, Zhenchen; Lu, Guihua; He, Hai; Wu, Zhiyong; He, Jian
2018-01-01
Reliable drought prediction is fundamental for water resource managers to develop and implement drought mitigation measures. Considering that drought development is closely related to the spatial-temporal evolution of large-scale circulation patterns, we developed a conceptual prediction model of seasonal drought processes based on atmospheric and oceanic standardized anomalies (SAs). Empirical orthogonal function (EOF) analysis is first applied to drought-related SAs at 200 and 500 hPa geopotential height (HGT) and sea surface temperature (SST). Subsequently, SA-based predictors are built based on the spatial pattern of the first EOF modes. This drought prediction model is essentially the synchronous statistical relationship between 90-day-accumulated atmospheric-oceanic SA-based predictors and SPI3 (3-month standardized precipitation index), calibrated using a simple stepwise regression method. Predictor computation is based on forecast atmospheric-oceanic products retrieved from the NCEP Climate Forecast System Version 2 (CFSv2), indicating the lead time of the model depends on that of CFSv2. The model can make seamless drought predictions for operational use after a year-to-year calibration. Model application to four recent severe regional drought processes in China indicates its good performance in predicting seasonal drought development, despite its weakness in predicting drought severity. Overall, the model can be a worthy reference for seasonal water resource management in China.
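The EOF step can be illustrated compactly with a principal component analysis of standardized anomaly fields; the gridded field below is synthetic and only shows how a leading EOF pattern and its time series would become an SA-based predictor in the regression against SPI3.

import numpy as np
from sklearn.decomposition import PCA

# Synthetic gridded field standing in for 500 hPa HGT standardized anomalies.
rng = np.random.default_rng(8)
n_times, n_lat, n_lon = 240, 20, 30
field = rng.normal(size=(n_times, n_lat, n_lon))

X = field.reshape(n_times, -1)                 # time x space matrix
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each grid point

pca = PCA(n_components=3).fit(X)
eof1 = pca.components_[0].reshape(n_lat, n_lon)  # leading spatial pattern (EOF1)
pc1 = pca.transform(X)[:, 0]                     # its principal-component time series

print("explained variance of first three modes:", np.round(pca.explained_variance_ratio_, 3))
# pc1 (or the projection of forecast anomalies onto eof1) would be an SA-based predictor
# entering a stepwise/least-squares regression against SPI3.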
Processing and utilization of LiDAR data as a support for a good management of DDBR
NASA Astrophysics Data System (ADS)
Nichersu, I.; Grigoras, I.; Constantinescu, A.; Mierla, M.; Tifanov, C.
2012-04-01
The Danube Delta Biosphere Reserve (DDBR) has a surface of 5,800 km2 and is situated in South-East Europe, in the east of Romania. This paper considers data related to the elevation surfaces of the DDBR (the Digital Terrain Model, DTM, and the Digital Surface Model, DSM). To produce such elevation models for the entire DDBR, the Light Detection And Ranging (LiDAR) method was used. The raw LiDAR data (x, y, z) for each point were transformed into grid formats for the DTM and DSM. Based on these data, multiple GIS analyses can be carried out for management purposes: 1D2D hydraulic modeling scenarios, flooding regime and protection, biomass volume estimation, and GIS biodiversity processing. These analyses are very useful in the management planning process. The 1D2D hydraulic modeling scenarios are used by the administrative authority to predict the direction of fluvial water flow and the places where flooding could occur, as well as the area of terrain that would be covered by flood water. The flooding regime gives information about the frequency and intensity of floods, and the duration of water remanence can also be predicted. The protection aspect of the flooding regime is directly related to the socio-cultural communities and all their annexes that are at risk of being flooded; this raises the problem of building dykes and other flood protection systems. The biomass volume is derived from the LiDAR point cloud returns that describe only the vegetation; the volume of biomass is an important item in the management of a Biosphere Reserve, and the vegetation returns can also help in identifying groups of vegetation associations. All this information, corroborated with other data, builds good premises for good management. Keywords: Danube Delta Biosphere Reserve, LiDAR data, DTM, DSM, flooding, management
Automobile exhaust as a means of suicide: an experimental study with a proposed model.
Morgen, C; Schramm, J; Kofoed, P; Steensberg, J; Theilade, P
1998-07-01
Experiments were conducted to investigate the concentration of carbon monoxide (CO) in a car cabin during simulated suicide attempts with different vehicles and different starting situations, and a mathematical model describing the concentration of CO in the cabin was constructed. Three cars were set up to donate the exhaust. The first vehicle did not have a catalyst, the second was equipped with a malfunctioning three-way catalyst, and the third was equipped with a well-functioning three-way catalyst. The three different starting situations were cold, tepid and warm engine starts, respectively. Measurements of the CO concentrations were made both in the cabin and in the exhaust pipe. Lethal concentrations were measured in the cabin using all three vehicles as the donor car, including the vehicle with the well-functioning catalyst. The model results in most cases gave a good prediction of the CO concentration in the cabin. Four case studies of cars used for suicides are described. In each case, measurements of CO were made in both the cabin and the exhaust under different starting conditions, and the mathematical model was tested on these cases. In most cases the model predictions were good.
NASA Technical Reports Server (NTRS)
Komerath, Narayanan M.; Schreiber, Olivier A.
1987-01-01
The wake model was implemented on a VAX 750 and a Microvax II workstation, with online graphics capability provided by a DISSPLA graphics package. The rotor model used by Beddoes was significantly extended to include azimuthal variations due to forward flight and a simplified scheme for locating critical points where vortex elements are placed. A test case was obtained for validation of the predictions of induced velocity. Comparison of the results indicates that the code requires some more features before satisfactory predictions can be made over the whole rotor disk. Specifically, shed vorticity due to the azimuthal variation of blade loading must be incorporated into the model. Interactions between vortices shed from the four blades of the model rotor must be included. The Scully code for calculating the velocity field is being modified in parallel with these efforts to enable comparison with experimental data. To date, some comparisons with flow visualization data obtained at Georgia Tech have been performed and show good agreement for the isolated rotor case. Comparison of time-resolved velocity data obtained at Georgia Tech also shows good agreement. Modifications are being implemented to enable generation of time-averaged results for comparison with NASA data.
The effect of dilatancy on the unloading behavior of Mt. Helen tuff
DOE Office of Scientific and Technical Information (OSTI.GOV)
Attia, A.V.; Rubin, M.B.
1993-11-01
In order to understand the role of rock dilatancy in modeling the response of partially saturated rock formations to underground nuclear explosions, we have developed a thermodynamically consistent model for a porous material, partially saturated with fluid. This model gives good predictions of the unloading behavior of dry, partially saturated, and fully saturated Mt. Helen tuff, as measured by Heard.
Chen, Tao; Lian, Guoping; Kattou, Panayiotis
2016-07-01
The purpose was to develop a mechanistic mathematical model for predicting the pharmacokinetics of topically applied solutes penetrating through the skin and into the blood circulation. The model could be used to support the design of transdermal drug delivery systems and skin care products, and risk assessment of occupational or consumer exposure. A recently reported skin penetration model [Pharm Res 32 (2015) 1779] was integrated with the kinetic equations for dermis-to-capillary transport and systemic circulation. All model parameters were determined separately from the molecular, microscopic and physiological bases, without fitting to the in vivo data to be predicted. Published clinical studies of nicotine were used for model demonstration. The predicted plasma kinetics is in good agreement with observed clinical data. The simulated two-dimensional concentration profile in the stratum corneum vividly illustrates the local sub-cellular disposition kinetics, including tortuous lipid pathway for diffusion and the "reservoir" effect of the corneocytes. A mechanistic model for predicting transdermal and systemic kinetics was developed and demonstrated with published clinical data. The integrated mechanistic approach has significantly extended the applicability of a recently reported microscopic skin penetration model by providing prediction of solute concentration in the blood.
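To make the coupling between dermal input and systemic circulation tangible, here is a deliberately simplified two-compartment sketch: a well-mixed skin depot feeding plasma through a first-order dermis-to-capillary rate, with first-order elimination. All rate constants, the dose and the distribution volume are hypothetical placeholders; the published model resolves the skin microscopically rather than as one compartment.

import numpy as np
from scipy.integrate import solve_ivp

k_skin_to_blood = 0.4   # 1/h, dermis-to-capillary transfer (assumed)
k_elim = 0.6            # 1/h, systemic elimination (assumed)
V_plasma = 40.0         # L, apparent volume of distribution (assumed)
dose_mg = 15.0          # mg absorbed into the skin depot (assumed)

def rhs(t, y):
    a_skin, a_plasma = y                     # amounts in mg
    return [-k_skin_to_blood * a_skin,
            k_skin_to_blood * a_skin - k_elim * a_plasma]

t_eval = np.linspace(0, 24, 97)
sol = solve_ivp(rhs, (0, 24), [dose_mg, 0.0], t_eval=t_eval)

conc_ng_ml = sol.y[1] / V_plasma * 1e3       # mg/L -> ng/mL
t_max = t_eval[np.argmax(conc_ng_ml)]
print(f"Cmax ~ {conc_ng_ml.max():.1f} ng/mL at t ~ {t_max:.1f} h")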
Kattou, Panayiotis; Lian, Guoping; Glavin, Stephen; Sorrell, Ian; Chen, Tao
2017-10-01
The development of a new two-dimensional (2D) model to predict follicular permeation, with integration into a recently reported multi-scale model of transdermal permeation is presented. The follicular pathway is modelled by diffusion in sebum. The mass transfer and partition properties of solutes in lipid, corneocytes, viable dermis, dermis and systemic circulation are calculated as reported previously [Pharm Res 33 (2016) 1602]. The mass transfer and partition properties in sebum are collected from existing literature. None of the model input parameters was fit to the clinical data with which the model prediction is compared. The integrated model has been applied to predict the published clinical data of transdermal permeation of caffeine. The relative importance of the follicular pathway is analysed. Good agreement of the model prediction with the clinical data has been obtained. The simulation confirms that for caffeine the follicular route is important; the maximum bioavailable concentration of caffeine in systemic circulation with open hair follicles is predicted to be 20% higher than that when hair follicles are blocked. The follicular pathway contributes to not only short time fast penetration, but also the overall systemic bioavailability. With such in silico model, useful information can be obtained for caffeine disposition and localised delivery in lipid, corneocytes, viable dermis, dermis and the hair follicle. Such detailed information is difficult to obtain experimentally.
Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research
Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi
2016-01-01
The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods; however, all of them have shortcomings. This paper analyzes existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probabilities of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to have good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, which makes it suitable for real-time traffic flow prediction.
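A minimal sketch of the transfer-probability idea appears below: the flow entering the target road at the next time step is estimated as the probability-weighted sum of current upstream flows. The probabilities and counts are invented, and the parameter estimation step (the Newton interior-point optimization) is omitted.

import numpy as np

upstream_flow_t = np.array([420.0, 310.0, 150.0])  # vehicles/interval on the upstream links (assumed)
transfer_prob = np.array([0.55, 0.30, 0.10])       # share of each upstream flow turning onto the target road (assumed)
side_inflow = 25.0                                 # vehicles entering from minor access roads (assumed)

target_flow_next = float(transfer_prob @ upstream_flow_t) + side_inflow
print(f"predicted target-road flow at t+1: {target_flow_next:.0f} vehicles/interval")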
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell is studied for the large-scale galaxy distribution. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
A generalized preferential attachment model for business firms growth rates. I. Empirical evidence
NASA Astrophysics Data System (ADS)
Pammolli, F.; Fu, D.; Buldyrev, S. V.; Riccaboni, M.; Matia, K.; Yamasaki, K.; Stanley, H. E.
2007-05-01
We introduce a model of proportional growth to explain the distribution P(g) of business firm growth rates. The model predicts that P(g) is Laplace in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products, to firms, to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.
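In formula form, the two regimes stated above can be summarized as follows (generic notation; \sigma denotes the width of the central part and \bar g the mean growth rate):

P(g) \approx \frac{1}{\sqrt{2}\,\sigma} \exp\!\left( -\frac{\sqrt{2}\,|g - \bar g|}{\sigma} \right) \ \ \text{(central, Laplace)}, \qquad P(g) \sim |g|^{-\zeta}, \ \zeta = 3 \ \ \text{(tails)}.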
Quantifying the Effect of Polymer Blending through Molecular Modelling of Cyanurate Polymers
Crawford, Alasdair O.; Hamerton, Ian; Cavalli, Gabriel; Howlin, Brendan J.
2012-01-01
Modification of polymer properties by blending is a common practice in the polymer industry. We report here a study of blends of cyanurate polymers by molecular modelling which shows that the final experimentally determined properties can be predicted from first-principles modelling to a good degree of accuracy. There is always a compromise between simulation length, accuracy and speed of prediction. A comparison of simulation times shows that 125 ps of molecular dynamics simulation at each temperature provides the optimum compromise for models of this size with current technology. This study opens up the possibility of computer-aided design of polymer blends with desired physical and mechanical properties.
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Lakshmanan, B.
1993-01-01
A high-speed shear layer is studied using a compressibility-corrected Reynolds stress turbulence model which employs a newly developed model for the pressure-strain correlation. The MacCormack explicit predictor-corrector method is used for solving the governing equations and the turbulence transport equations. The stiffness arising from the source terms in the turbulence equations is handled by a semi-implicit numerical technique. Results obtained using the new model show a sharper reduction in growth rate with increasing convective Mach number. Some improvements were also noted in the prediction of the normalized streamwise stress and Reynolds shear stress. The computed results are in good agreement with the experimental data.
Light aircraft sound transmission studies - Noise reduction model
NASA Technical Reports Server (NTRS)
Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.
1987-01-01
Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also assessed. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
Kostal, Jakub; Voutchkova-Kostal, Adelina
2016-01-19
Using computer models to accurately predict toxicity outcomes is considered to be a major challenge. However, state-of-the-art computational chemistry techniques can now be incorporated in predictive models, supported by advances in mechanistic toxicology and the exponential growth of computing resources witnessed over the past decade. The CADRE (Computer-Aided Discovery and REdesign) platform relies on quantum-mechanical modeling of molecular interactions that represent key biochemical triggers in toxicity pathways. Here, we present an external validation exercise for CADRE-SS, a variant developed to predict the skin sensitization potential of commercial chemicals. CADRE-SS is a hybrid model that evaluates skin permeability using Monte Carlo simulations, assigns reactive centers in a molecule and possible biotransformations via expert rules, and determines reactivity with skin proteins via quantum-mechanical modeling. The results were promising with an overall very good concordance of 93% between experimental and predicted values. Comparison to performance metrics yielded by other tools available for this endpoint suggests that CADRE-SS offers distinct advantages for first-round screenings of chemicals and could be used as an in silico alternative to animal tests where permissible by legislative programs.
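Performance figures such as the 93% concordance quoted above come straight from a confusion matrix. The short sketch below uses invented counts purely to show the bookkeeping; it is not the CADRE-SS validation data.

    # Concordance, sensitivity and specificity from a 2x2 confusion matrix.
    # The counts below are invented for illustration only.
    tp, fn, fp, tn = 52, 3, 4, 41   # hypothetical sensitiser / non-sensitiser calls

    concordance = (tp + tn) / (tp + fn + fp + tn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"concordance = {concordance:.2%}, "
          f"sensitivity = {sensitivity:.2%}, specificity = {specificity:.2%}")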
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey William; Devaud, Cecile
2017-05-01
A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.
Numerical simulation of experiments in the Giant Planet Facility
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.
1979-01-01
Utilizing a series of existing computer codes, ablation experiments in the Giant Planet Facility are numerically simulated. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.
A rational account of pedagogical reasoning: teaching by, and learning from, examples.
Shafto, Patrick; Goodman, Noah D; Griffiths, Thomas L
2014-06-01
Much of learning and reasoning occurs in pedagogical situations--situations in which a person who knows a concept chooses examples for the purpose of helping a learner acquire the concept. We introduce a model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher's examples. We present three experiments testing the model predictions for rule-based, prototype, and causally structured concepts. The model shows good quantitative and qualitative fits to the data across all three experiments, predicting novel qualitative phenomena in each case. We conclude by discussing implications for understanding concept learning and implications for theoretical claims about the role of pedagogy in human learning.
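The core of pedagogical models of this kind is a pair of mutually recursive distributions: the teacher samples examples in proportion to the probability that the learner will infer the intended concept, and the learner updates assuming the examples were chosen that way. The sketch below iterates this fixed point on a tiny invented hypothesis/example space; it follows the general recursion rather than the specific experimental designs in the paper, and the consistency matrix is an assumption made up for illustration.

    # Fixed-point sketch of pedagogical sampling on a toy hypothesis space.
    # Rows = hypotheses, columns = examples; lik[h, d] = P(d consistent with h).
    import numpy as np

    lik = np.array([[1.0, 1.0, 0.0],    # toy consistency matrix (invented)
                    [1.0, 0.0, 1.0],
                    [1.0, 1.0, 1.0]])
    prior = np.full(3, 1.0 / 3.0)

    teacher = lik / lik.sum(axis=1, keepdims=True)   # start from weak (random) teaching
    for _ in range(50):
        # Learner: P(h | d) proportional to P_teacher(d | h) * P(h)
        learner = teacher * prior[:, None]
        learner /= learner.sum(axis=0, keepdims=True)
        # Teacher: P(d | h) proportional to P_learner(h | d), restricted to consistent d
        teacher = learner * lik
        teacher /= teacher.sum(axis=1, keepdims=True)

    print("Teacher's example-choice probabilities per hypothesis:")
    print(np.round(teacher, 3))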
Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.
2011-01-01
Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.
Application of stiffened cylinder analysis to ATP interior noise studies
NASA Technical Reports Server (NTRS)
Wilby, E. G.; Wilby, J. F.
1983-01-01
An analytical model developed to predict the interior noise of propeller driven aircraft was applied to experimental configurations for a Fairchild Swearingen Metro II fuselage exposed to simulated propeller excitation. The floor structure of the test fuselage was of unusual construction, being mounted on air springs. As a consequence, the analytical model was extended to include a floor treatment transmission coefficient which could be used to describe vibration attenuation through the mounts. Good agreement was obtained between measured and predicted noise reductions when the floor treatment transmission loss was about 20 dB - a value which is consistent with the vibration attenuation provided by the mounts. The analytical model was also adapted to allow the prediction of noise reductions associated with boundary layer excitation as well as propeller and reverberant noise.
Evaluation of Data-Driven Models for Predicting Solar Photovoltaics Power Output
Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas
2017-09-10
This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.
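The accuracy metric quoted above, CV-RMSE, is simply the root-mean-square prediction error normalised by the mean of the observations. A minimal sketch of the computation, using synthetic numbers rather than the study's PV data, is given below.

    # CV-RMSE of hourly power predictions (synthetic data for illustration).
    import numpy as np

    rng = np.random.default_rng(1)
    observed = 50.0 + 20.0 * rng.random(8760)             # synthetic hourly PV output, kW
    predicted = observed + rng.normal(0.0, 3.0, 8760)     # synthetic model predictions

    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    cv_rmse = 100.0 * rmse / observed.mean()
    print(f"CV-RMSE = {cv_rmse:.1f}%")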
Web-based decision support system to predict risk level of long term rice production
NASA Astrophysics Data System (ADS)
Mukhlash, Imam; Maulidiyah, Ratna; Sutikno; Setiyono, Budi
2017-09-01
Appropriate decision making in risk management of rice production is very important in agricultural planning, especially for Indonesia, which is an agricultural country. A good decision can only be obtained if the required supporting data are available and appropriate methods are used. This study aims to develop a Decision Support System that can be used to predict the risk level of rice production in several districts that are centres of rice production in East Java. The decision support system is web-based so that the information can be easily accessed and understood. The components of the system are data management, model management, and the user interface. This research uses OLS regression and Copula models: the OLS model is used to predict rainfall, while the Copula model is used to predict harvested area. Experimental results show that the models successfully predict the harvested area of rice production in these districts at any given time, based on the conditions and climate of a region. Furthermore, the system can predict the amount of rice production together with the associated level of risk, and it generates long-term predictions of the production risk level for several districts, which can be used as decision support for the authorities.
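The two model families mentioned above can be sketched with standard tools. The snippet below is a generic illustration on synthetic data, not the system's actual data or fitted models: an ordinary least squares fit stands in for the rainfall regression, and a Gaussian copula built from rank-transformed margins stands in for the harvested-area dependence model; all variable names and values are assumptions.

    # Generic OLS and Gaussian-copula sketch on synthetic data (illustration only).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 120   # e.g. monthly records (synthetic)

    # OLS: rainfall ~ climate index
    climate_index = rng.normal(size=n)
    rainfall = 150.0 + 35.0 * climate_index + rng.normal(0.0, 20.0, n)
    X = np.column_stack([np.ones(n), climate_index])
    beta, *_ = np.linalg.lstsq(X, rainfall, rcond=None)
    print(f"OLS coefficients (intercept, slope): {beta.round(2)}")

    # Gaussian copula: dependence between rainfall and harvested area
    harvested_area = 800.0 + 2.0 * rainfall + rng.normal(0.0, 60.0, n)
    u = stats.rankdata(rainfall) / (n + 1)          # pseudo-observations in (0, 1)
    v = stats.rankdata(harvested_area) / (n + 1)
    z = stats.norm.ppf(np.column_stack([u, v]))     # transform to normal scores
    rho = np.corrcoef(z.T)[0, 1]                    # copula correlation parameter
    print(f"Estimated Gaussian-copula correlation: {rho:.2f}")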
McAlpine, Donna D; McCreedy, Ellen; Alang, Sirry
2018-06-01
Self-rated health is a valid measure of health that predicts quality of life, morbidity, and mortality. Its predictive value reflects a conceptualization of health that goes beyond a traditional medical model. However, less is known about self-rated mental health (SRMH). Using data from the Medical Expenditure Panel Survey (N = 2,547), we examine how rating your mental health as good - despite meeting criteria for a mental health problem - predicts outcomes. We found that 62% of people with a mental health problem rated their mental health positively. Persons who rated their mental health as good (compared to poor) had 30% lower odds of having a mental health problem at follow-up. Even without treatment, persons with a mental health problem did better if they perceived their mental health positively. SRMH might comprise information beyond the experience of symptoms. Understanding the unobserved information individuals incorporate into SRMH will help us improve screening and treatment interventions.
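A figure such as "30% lower odds" is the kind of quantity that comes out of a simple odds-ratio calculation or a logistic regression coefficient. The sketch below uses an invented 2x2 table purely to show the arithmetic; it is not the MEPS sample analysed in the study.

    # Odds ratio for still having a mental health problem at follow-up,
    # comparing people who rated their mental health good vs. poor at baseline.
    # Counts are invented for illustration; they are not the MEPS data.

    # rows: baseline SRMH (good, poor); columns: problem at follow-up (yes, no)
    good_yes, good_no = 300, 700
    poor_yes, poor_no = 350, 570

    odds_good = good_yes / good_no
    odds_poor = poor_yes / poor_no
    odds_ratio = odds_good / odds_poor
    print(f"odds ratio = {odds_ratio:.2f} "
          f"({(1 - odds_ratio):.0%} lower odds for those rating their health as good)")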