Yajima, Airi; Uesawa, Yoshihiro; Ogawa, Chiaki; Yatabe, Megumi; Kondo, Naoki; Saito, Shinichiro; Suzuki, Yoshihiko; Atsuda, Kouichiro; Kagaya, Hajime
2015-05-01
There exist various useful predictive models, such as the Cockcroft-Gault model, for estimating creatinine clearance (CLcr). However, the prediction of renal function is difficult in patients with cancer treated with cisplatin. Therefore, we attempted to construct a new model for predicting CLcr in such patients. Japanese patients with head and neck cancer who had received cisplatin-based chemotherapy were used as subjects. A multiple regression equation was constructed as a model for predicting CLcr values based on background and laboratory data. A model for predicting CLcr, which included body surface area, serum creatinine and albumin, was constructed. The model exhibited good performance prior to cisplatin therapy. In addition, it performed better than previously reported models after cisplatin therapy. The predictive model constructed in the present study displayed excellent potential and was useful for estimating the renal function of patients treated with cisplatin therapy.
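For readers who want to experiment with this kind of model, the sketch below fits a multiple regression of CLcr on body surface area, serum creatinine and albumin. The data and coefficients are synthetic placeholders, not the published equation.

```python
# Illustrative sketch: multiple regression predicting CLcr from body surface
# area (BSA), serum creatinine (SCr) and albumin (ALB). Synthetic data only;
# the coefficients are NOT those of the published model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 100
bsa = rng.normal(1.6, 0.15, n)        # m^2
scr = rng.normal(0.9, 0.25, n)        # mg/dL
alb = rng.normal(3.8, 0.4, n)         # g/dL
# Hypothetical generating relationship with noise (for illustration only)
clcr = 40 * bsa - 30 * scr + 8 * alb + rng.normal(0, 8, n)

X = np.column_stack([bsa, scr, alb])
model = LinearRegression().fit(X, clcr)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2:", r2_score(clcr, model.predict(X)))
```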
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes
Zhang, Hong; Pei, Yun
2016-01-01
Quantitative prediction of construction noise is crucial for evaluating construction plans and making decisions that address noise levels. Considering the limitations of existing methods for measuring or predicting construction noise, particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting construction noise in terms of the equivalent continuous level. Noise-calculating models covering synchronization, propagation and the equivalent continuous level are presented. A simulation framework for modeling the factors affecting noise and calculating the equivalent continuous noise, which incorporates the noise-calculating models into the simulation strategy, is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
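The equivalent continuous level (Leq) underlying this method can be reproduced in a few lines: energy-sum simultaneous sources, attenuate with distance, and time-weight the resulting segments. The sketch below assumes spherical spreading and uses invented equipment levels; it illustrates the noise-calculating models, not the authors' simulation framework.

```python
# Minimal sketch (not the authors' framework): combine point-source levels,
# distance attenuation and time-weighting into an equivalent continuous level
# Leq over a working period. All levels and distances are hypothetical.
import math

def attenuate(level_db, r, r_ref=1.0):
    # Spherical spreading from a point source: -20*log10(r/r_ref)
    return level_db - 20 * math.log10(r / r_ref)

def combine(levels_db):
    # Energy summation of simultaneous sources
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db))

def leq(segments):
    # segments: list of (duration_s, level_db), e.g. produced by a
    # discrete-event simulation of equipment activities
    total_t = sum(t for t, _ in segments)
    energy = sum(t * 10 ** (L / 10) for t, L in segments)
    return 10 * math.log10(energy / total_t)

# Example: excavator (101 dB at 1 m) and dump truck (96 dB at 1 m),
# observed at 25 m and 40 m, overlapping for part of an hour.
exc = attenuate(101, 25)
trk = attenuate(96, 40)
segments = [(1200, combine([exc, trk])),  # both running, 20 min
            (1800, exc),                  # excavator only, 30 min
            (600, 40.0)]                  # idle/background, 10 min
print(f"Leq over the hour: {leq(segments):.1f} dB(A)")
```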
Liao, C; Peng, Z Y; Li, J B; Cui, X W; Zhang, Z H; Malakar, P K; Zhang, W J; Pan, Y J; Zhao, Y
2015-03-01
The aim of this study was to simultaneously construct PCR-DGGE-based predictive models of Listeria monocytogenes and Vibrio parahaemolyticus on cooked shrimp at 4 and 10°C. Calibration curves were established to correlate the peak density of DGGE bands with microbial counts. Microbial counts derived from the PCR-DGGE and plate methods were fitted with the Baranyi model to obtain molecular and traditional predictive models. For L. monocytogenes, which grew at 4 and 10°C, molecular predictive models were constructed. They showed good evaluation statistics, with correlation coefficients R² > 0.92 and bias (Bf) and accuracy (Af) factors of 1.0 ≤ Bf ≤ Af ≤ 1.1. Moreover, no significant difference was found between the molecular and traditional predictive models when compared on lag phase (λ), maximum growth rate (μmax) and growth data (P > 0.05). For V. parahaemolyticus, which was inactivated at 4 and 10°C, the molecular models differed significantly from the traditional models. Taken together, these results suggest that DNA-based PCR-DGGE can be used to construct growth models but is not yet appropriate for inactivation models. This is the first report of using PCR-DGGE to simultaneously construct multiple molecular models. Microbial predictive models based on traditional plate methods have long been known to be time-consuming and labour-intensive. Denaturing gradient gel electrophoresis (DGGE) has been widely used as a semiquantitative method to describe complex microbial communities. In our study, we developed DGGE to quantify bacterial counts and simultaneously established two molecular predictive models to describe the growth and survival of two bacteria (Listeria monocytogenes and Vibrio parahaemolyticus) at 4 and 10°C. We demonstrated that PCR-DGGE could be used to construct growth models. This work provides a new approach to constructing molecular predictive models and thereby facilitates predictive microbiology and quantitative microbial risk assessment (QMRA).
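A minimal sketch of the model-fitting step follows: a Baranyi-type primary growth model fitted to log counts, whether obtained from plate counting or from DGGE band intensities converted through a calibration curve. The data points and starting values are invented for illustration.

```python
# Sketch of fitting a Baranyi-type growth curve to log counts; synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def baranyi(t, y0, ymax, mumax, lam):
    # Baranyi & Roberts primary growth model; h0 = mumax * lag
    h0 = mumax * lam
    A = t + (1.0 / mumax) * np.log(np.exp(-mumax * t)
                                   + np.exp(-h0)
                                   - np.exp(-mumax * t - h0))
    return y0 + mumax * A - np.log(1 + (np.exp(mumax * A) - 1)
                                   / np.exp(ymax - y0))

t = np.array([0, 24, 48, 72, 96, 120, 144], dtype=float)      # hours
logN = np.array([3.0, 3.1, 3.6, 4.5, 5.4, 6.0, 6.2])          # log CFU/g
popt, _ = curve_fit(baranyi, t, logN, p0=[3.0, 6.3, 0.05, 20.0])
y0, ymax, mumax, lam = popt
print(f"mu_max = {mumax:.3f} log CFU/h, lag = {lam:.1f} h")
```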
He, Y J; Li, X T; Fan, Z Q; Li, Y L; Cao, K; Sun, Y S; Ouyang, T
2018-01-23
Objective: To construct a dynamic enhanced MR-based predictive model for early assessment of pathological complete response (pCR) to neoadjuvant therapy in breast cancer, and to evaluate the clinical benefit of the model using a decision curve. Methods: From December 2005 to December 2007, 170 patients with breast cancer treated with neoadjuvant therapy were identified, and their MR images before neoadjuvant therapy and at the end of the first cycle of neoadjuvant therapy were collected. A logistic regression model was used to detect independent factors for predicting pCR and to construct the predictive model accordingly; receiver operating characteristic (ROC) and decision curves were then used to evaluate the predictive model. Results: ΔArea(max) and Δslope(max) were independent predictive factors for pCR, with OR = 0.942 (95% CI: 0.918-0.967) and 0.961 (95% CI: 0.940-0.987), respectively. The area under the ROC curve (AUC) for the constructed model was 0.886 (95% CI: 0.820-0.951). The decision curve showed that for threshold probabilities above 0.4, the predictive model presented increasing net benefit as the threshold probability increased. Conclusions: The constructed predictive model for pCR is of potential clinical value, with an AUC > 0.85. Meanwhile, decision curve analysis indicates the constructed predictive model has a net benefit of 3 to 8 percent in the likely range of probability thresholds from 80% to 90%.
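The decision-curve evaluation mentioned above reduces to a net-benefit calculation at each threshold probability. The sketch below uses a synthetic logistic model as a stand-in for the published ΔArea/Δslope model.

```python
# Hedged sketch of a decision-curve (net benefit) calculation for a binary
# pCR predictor; the logistic model and data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 170
x = rng.normal(size=(n, 2))                      # e.g. changes in area / slope
p_true = 1 / (1 + np.exp(-(0.8 * x[:, 0] + 0.6 * x[:, 1] - 1.0)))
y = rng.binomial(1, p_true)

prob = LogisticRegression().fit(x, y).predict_proba(x)[:, 1]

def net_benefit(y_true, p_hat, pt):
    # Net benefit = TP/N - FP/N * pt/(1-pt) at threshold probability pt
    pred_pos = p_hat >= pt
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    n_obs = len(y_true)
    return tp / n_obs - fp / n_obs * pt / (1 - pt)

for pt in (0.2, 0.4, 0.6, 0.8):
    nb_model = net_benefit(y, prob, pt)
    nb_all = net_benefit(y, np.ones_like(prob), pt)   # "treat all" strategy
    print(f"pt={pt:.1f}  model NB={nb_model:+.3f}  treat-all NB={nb_all:+.3f}")
```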
Bai, Wenming; Yoshimura, Norio; Takayanagi, Masao; Che, Jingai; Horiuchi, Naomi; Ogiwara, Isao
2016-06-28
Nondestructive prediction of the ingredient contents of farm products is useful for shipping and selling the products with guaranteed quality. Here, near-infrared spectroscopy is used to nondestructively predict the total sugar, total organic acid, and total anthocyanin content of each blueberry. The technique is expected to enable the selection of only delicious blueberries from the harvest. The near-infrared absorption spectra of blueberries are measured in diffuse reflectance mode at positions away from the calyx. The ingredient contents of each blueberry, determined by high-performance liquid chromatography, are used to construct models that predict the ingredient contents from the observed spectra. Partial least squares regression is used to construct the models. The pretreatments applied to the observed spectra and the wavelength regions used for analysis must be properly selected. The constructed models must be validated to confirm that the ingredient contents are predicted with practical accuracy. Here we present a protocol to construct and validate models for the nondestructive prediction of ingredient contents in blueberries by near-infrared spectroscopy.
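As an illustration of the modelling step, the sketch below applies a Savitzky-Golay second-derivative pretreatment and partial least squares regression with cross-validated selection of latent variables. The spectra and reference values are simulated placeholders.

```python
# Sketch: PLS regression of a reference ingredient content on pretreated
# NIR spectra. Spectra and sugar values are simulated, not real measurements.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 80, 200
spectra = rng.normal(size=(n_samples, n_wavelengths))          # stand-in spectra
sugar = 2.0 * spectra[:, 50] + 1.5 * spectra[:, 120] + rng.normal(0, 0.3, n_samples)

# Pretreatment: Savitzky-Golay second derivative of each spectrum
X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)

def cv_r2(k):
    pred = cross_val_predict(PLSRegression(n_components=k), X, sugar, cv=5)
    return r2_score(sugar, pred)

best_k = max(range(1, 11), key=cv_r2)       # cross-validated latent variables
model = PLSRegression(n_components=best_k).fit(X, sugar)
print("latent variables:", best_k, "cross-validated R^2:", round(cv_r2(best_k), 3))
```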
NASA Astrophysics Data System (ADS)
Saad, Ahmed S.; Hamdy, Abdallah M.; Salama, Fathy M.; Abdelkawy, Mohamed
2016-10-01
The effect of data manipulation in the preprocessing step preceding the construction of chemometric models was assessed. The same set of UV spectral data was used to construct PLS and PCR models directly and after mathematical manipulation according to the well-known first- and second-derivative, ratio-spectra, and first- and second-derivative-of-ratio-spectra spectrophotometric methods; the optimal working wavelength ranges were carefully selected for each model before construction. Unexpectedly, the number of latent variables used for model construction varied among the different methods. The prediction power of the different models was compared using a validation set of 8 mixtures prepared according to a multilevel multifactor design, and the results were statistically compared using a two-way ANOVA test. The root mean square error of prediction (RMSEP) was used for further comparison of predictability among the constructed models. Although no significant difference was found between results obtained using the Partial Least Squares (PLS) and Principal Component Regression (PCR) models, discrepancies among results were attributed to variation in the discrimination power of the adopted spectrophotometric methods on the spectral data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Kunwar P.; Gupta, Shikha
Robust global models capable of discriminating positive and non-positive carcinogens and predicting the carcinogenic potency of chemicals in rodents were developed. A dataset of 834 structurally diverse chemicals extracted from the Carcinogenic Potency Database (CPDB) was used, containing 466 positive and 368 non-positive carcinogens. Twelve non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals and nonlinearity in the data were evaluated using the Tanimoto similarity index and Brock–Dechert–Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) models were constructed for classification and function optimization problems using the carcinogenicity end point in rat. Validation of the models was performed using internal and external procedures employing a wide series of statistical checks. The PNN constructed using five descriptors rendered a classification accuracy of 92.09% in the complete rat data. The PNN model rendered classification accuracies of 91.77%, 80.70% and 92.08% in the mouse, hamster and pesticide data, respectively. The GRNN constructed with nine descriptors yielded a correlation coefficient of 0.896 between the measured and predicted carcinogenic potency, with a mean squared error (MSE) of 0.44 in the complete rat data. The rat carcinogenicity model (GRNN) applied to the mouse and hamster data yielded correlation coefficients and MSEs of 0.758, 0.71 and 0.760, 0.46, respectively. The results suggest wide applicability of the inter-species models in predicting the carcinogenic potency of chemicals. Both the PNN and GRNN (inter-species) models constructed here can be useful tools in predicting the carcinogenicity of new chemicals for regulatory purposes. - Graphical abstract: Figure (a) shows classification accuracies (positive and non-positive carcinogens) in rat, mouse, hamster, and pesticide data yielded by the optimal PNN model. Figure (b) shows generalization and predictive abilities of the interspecies GRNN model to predict the carcinogenic potency of diverse chemicals. - Highlights: • Global robust models constructed for carcinogenicity prediction of diverse chemicals. • Tanimoto/BDS test revealed structural diversity of chemicals and nonlinearity in data. • PNN/GRNN successfully predicted carcinogenicity/carcinogenic potency of chemicals. • Developed interspecies PNN/GRNN models for carcinogenicity prediction. • Proposed models can be used as tool to predict carcinogenicity of new chemicals.
A novel methodology to estimate the evolution of construction waste in construction sites.
Katz, Amnon; Baum, Hadassa
2011-02-01
This paper focuses on the accumulation of construction waste generated throughout the erection of new residential buildings. A special methodology was developed to provide a model that predicts the flow of construction waste. The amount of waste and its constituents produced on 10 relatively large construction sites (7000-32,000 m² of built area) were monitored periodically for a limited time. A model that predicts the accumulation of construction waste was developed based on these field observations. According to the model, waste accumulates in an exponential manner, i.e. smaller amounts are generated during the early stages of construction and increasing amounts are generated towards the end of the project. The total amount of waste from these sites was estimated at 0.2 m³ per 1 m² of floor area. A good correlation was found between the model predictions and actual data from the field survey.
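An exponential accumulation curve of the kind described can be fitted in a few lines; the cumulative waste observations below are invented, and the functional form W(p) = a(exp(bp) - 1) is one plausible choice rather than the authors' exact equation.

```python
# Sketch of fitting an exponential waste-accumulation curve to periodic site
# measurements (cumulative m^3 of waste vs. fraction of project completed).
import numpy as np
from scipy.optimize import curve_fit

progress = np.array([0.1, 0.25, 0.4, 0.55, 0.7, 0.85, 1.0])   # project fraction
waste_m3 = np.array([30, 90, 180, 330, 560, 900, 1400])       # cumulative waste

def accumulation(p, a, b):
    # Exponential accumulation: slow early, accelerating towards completion
    return a * (np.exp(b * p) - 1.0)

(a, b), _ = curve_fit(accumulation, progress, waste_m3, p0=[100, 2.0])
print(f"W(p) = {a:.1f} * (exp({b:.2f} * p) - 1)")
print("predicted total at completion:", round(accumulation(1.0, a, b), 1), "m^3")
```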
Multi-scale modeling of tsunami flows and tsunami-induced forces
NASA Astrophysics Data System (ADS)
Qin, X.; Motley, M. R.; LeVeque, R. J.; Gonzalez, F. I.
2016-12-01
The modeling of tsunami flows and tsunami-induced forces in coastal communities that incorporates the constructed environment is challenging for many numerical modelers because of the scale and complexity of the physical problem. A two-dimensional (2D) depth-averaged model can be efficient for modeling waves offshore but may not be accurate enough to predict the complex flow, with its transient variation in the vertical direction, around constructed environments on land. On the other hand, a more complex three-dimensional model is much more computationally expensive and can become impractical due to the size of the problem and the meshing requirements near the built environment. In this study, a 2D depth-integrated model and a 3D Reynolds-Averaged Navier-Stokes (RANS) model are built to model a 1:50 model-scale, idealized community, representative of Seaside, OR, USA, for which existing experimental data are available for comparison. Numerical results from the two models are compared with each other as well as with the experimental measurements. Both models predict the flow parameters (water level, velocity, and momentum flux in the vicinity of the buildings) accurately in general, except for the time period near the initial impact, where the depth-averaged model can fail to capture the complexities in the flow. Forces predicted using direct integration of the predicted pressure on structural surfaces from the 3D model and using the momentum flux from the 2D model with the constructed environment are compared, indicating that force prediction from the 2D model is not always reliable in such a complicated case. Force predictions from integration of the pressure are also compared with forces predicted from bare-earth momentum flux calculations to reveal the importance of incorporating the constructed environment in force prediction models.
FHWA roadway construction noise model, version 1.0: user's guide
DOT National Transportation Integrated Search
2006-01-01
The Roadway Construction Noise Model (RCNM) is the Federal Highway Administration's (FHWA) national model for the prediction of construction noise. Because construction is often conducted in close proximity to residences and businesses...
Singh, Kunwar P; Gupta, Shikha; Ojha, Priyanka; Rai, Premanjali
2013-04-01
The research aims to develop an artificial intelligence (AI)-based model to predict the adsorptive removal of 2-chlorophenol (CP) in aqueous solution by coconut shell carbon (CSC) using four operational variables (pH of solution, adsorbate concentration, temperature, and contact time), and to investigate their effects on the adsorption process. Accordingly, based on a factorial design, 640 batch experiments were conducted. Nonlinearities in the experimental data were checked using Brock-Dechert-Scheinkman (BDS) statistics. Five nonlinear models were constructed to predict the adsorptive removal of CP in aqueous solution by CSC using the four variables as input. Performances of the constructed models were evaluated and compared using statistical criteria. BDS statistics revealed strong nonlinearity in the experimental data. The performance of all the models constructed here was satisfactory. Radial basis function network (RBFN) and multilayer perceptron network (MLPN) models performed better than the generalized regression neural network, support vector machine, and gene expression programming models. Sensitivity analysis revealed that contact time had the highest effect on adsorption, followed by solution pH, temperature, and CP concentration. The study concluded that all the models constructed here were capable of capturing the nonlinearity in the data. The better generalization and predictive performance of the RBFN and MLPN models suggested that these can be used to predict the adsorption of CP in aqueous solution using CSC.
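As a rough stand-in for the neural network models compared above, the sketch below trains a multilayer perceptron on the four operational variables; the adsorption data are synthetic and the architecture is arbitrary.

```python
# Hedged sketch: MLP mapping (pH, concentration, temperature, contact time)
# to percent removal of 2-chlorophenol. Training data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 640
pH = rng.uniform(2, 10, n)
conc = rng.uniform(50, 200, n)          # mg/L
temp = rng.uniform(10, 40, n)           # deg C
time = rng.uniform(5, 240, n)           # min
removal = (60 + 15 * np.log1p(time / 30) - 2.5 * (pH - 6) ** 2 / 4
           - 0.05 * conc + 0.3 * temp + rng.normal(0, 3, n)).clip(0, 100)

X = np.column_stack([pH, conc, temp, time])
X_tr, X_te, y_tr, y_te = train_test_split(X, removal, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("test R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```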
Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Kubo, H.
2013-12-01
Constructing the source model of huge subduction earthquakes is a very important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships between the large-slip area of heterogeneous slip models, the total SMGA size, and seismic moment for subduction earthquakes, and found a systematic change in the ratio of SMGA to large-slip area with seismic moment. They concluded that this tendency would be caused by the difference in the period range of the source modeling analysis. In this paper, we try to construct a methodology for building source models for strong ground motion prediction of huge subduction earthquakes. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype source model for huge subduction earthquakes and validate it by strong ground motion modeling.
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low-quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
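One simple way to realize "uniform coverage of the genetic space" is farthest-point sampling in principal-component space, sketched below on simulated marker data; this illustrates the idea rather than the study's exact sampling algorithm.

```python
# Sketch: build a training set that covers the genetic space evenly by
# projecting genotypes onto marker principal components and greedily picking
# the genotype farthest from those already selected. Simulated markers only.
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import cdist

rng = np.random.default_rng(4)
markers = rng.integers(0, 3, size=(500, 1000)).astype(float)   # 500 genotypes x 1000 SNPs
pcs = PCA(n_components=10).fit_transform(markers)

def uniform_training_set(pcs, n_train):
    # Farthest-point sampling: repeatedly add the genotype farthest from the
    # genotypes already selected, giving even coverage of the genetic space.
    start = int(np.argmin(((pcs - pcs.mean(axis=0)) ** 2).sum(axis=1)))
    selected = [start]
    while len(selected) < n_train:
        dist_to_set = cdist(pcs, pcs[selected]).min(axis=1)
        dist_to_set[selected] = -np.inf
        selected.append(int(np.argmax(dist_to_set)))
    return np.array(selected)

train_idx = uniform_training_set(pcs, n_train=100)
print("selected training genotypes:", train_idx[:10], "...")
```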
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian; Aur, Katherine Anderson; Preston, Leiph
This report shows the results of constructing predictive atmospheric models for the Source Physics Experiments 1-6. Historic atmospheric data are combined with topography to construct an atmospheric model that corresponds to the predicted (or actual) time of a given SPE event. The models are ultimately used to construct atmospheric Green's functions to be used for subsequent analysis. We present three atmospheric models for each SPE event: an average model based on ten one-hour snapshots of the atmosphere and two extrema models corresponding to the warmest, coolest, windiest, etc. atmospheric snapshots. The atmospheric snapshots consist of wind, temperature, and pressure profiles of the atmosphere for a one-hour time window centered at the time of the predicted SPE event, as well as nine additional snapshots for each of the nine preceding years, centered at the time and day of the SPE event.
Vesicular stomatitis forecasting based on Google Trends
Lu, Yi; Zhou, GuangYa; Chen, Qin
2018-01-01
Background Vesicular stomatitis (VS) is an important viral disease of livestock. The main feature of VS is irregular blisters on the lips, tongue, oral mucosa, hoof crown and nipple. Humans can also be infected with vesicular stomatitis and develop meningitis. This study analyses the 2014 American VS outbreaks in order to accurately predict vesicular stomatitis outbreak trends. Methods American VS outbreak data were collected from the OIE. Data for VS keywords were obtained by inputting 24 disease-related keywords into Google Trends. After calculating the Pearson and Spearman correlation coefficients, a relationship was found between outbreaks and keywords derived from Google Trends. Finally, predictive models were constructed based on qualitative classification and quantitative regression. Results For the regression model, the Pearson correlation coefficients between the predicted and actual outbreaks are 0.953 and 0.948, respectively. For the qualitative classification model, we constructed five classification predictive models and chose the best one as the result. The SN (sensitivity), SP (specificity) and ACC (prediction accuracy) values of the best classification predictive model are 78.52%, 72.5% and 77.14%, respectively. Conclusion This study applied Google search data to construct a qualitative classification model and a quantitative regression model. The results show that the method is effective and that these two models give more accurate forecasts. PMID:29385198
Prediction of brittleness based on anisotropic rock physics model for kerogen-rich shale
NASA Astrophysics Data System (ADS)
Qian, Ke-Ran; He, Zhi-Liang; Chen, Ye-Quan; Liu, Xi-Wu; Li, Xiang-Yang
2017-12-01
The construction of a shale rock physics model and the selection of an appropriate brittleness index (BI) are two significant steps that influence the accuracy of brittleness prediction. On one hand, the existing models of kerogen-rich shale are controversial, so a reasonable rock physics model needs to be built. On the other hand, several types of equations already exist for predicting the BI, and their feasibility needs to be carefully considered. This study constructed a kerogen-rich rock physics model by applying the self-consistent approximation and the differential effective medium theory to model intercoupled clay and kerogen mixtures. The feasibility of our model was confirmed by comparison with classical models, showing better accuracy. Templates were constructed based on our model to link physical properties and the BI. Different equations for the BI had different sensitivities, making them suitable for different types of formations. Equations based on Young's modulus were sensitive to variations in lithology, while those using Lamé's coefficients were sensitive to porosity and pore fluids. Physical information must be considered to improve brittleness prediction.
NASA Astrophysics Data System (ADS)
Wu, Jie; Yan, Quan-sheng; Li, Jian; Hu, Min-yi
2016-04-01
In bridge construction, geometry control is critical to ensure that the final constructed bridge has a shape consistent with the design. A common method is to predict the deflections of the bridge during each construction phase through the associated finite element models, so that the cambers of the bridge during the different construction phases can be determined beforehand. These finite element models are mostly based on the design drawings and nominal material properties. However, the errors of these bridge models can be large due to significant uncertainties in the actual properties of the materials used in construction. Therefore, the predicted cambers may not be accurate enough to ensure agreement of the bridge geometry with the design, especially for long-span bridges. In this paper, an improved geometry control method is described, which incorporates finite element (FE) model updating during the construction process based on measured bridge deflections. A method based on the Kriging model and Latin hypercube sampling is proposed to perform the FE model updating due to its simplicity and efficiency. The proposed method has been applied to a long-span continuous girder concrete bridge during its construction. Results show that the method is effective in reducing construction error and ensuring the accuracy of the geometry of the final constructed bridge.
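The Kriging-plus-Latin-hypercube updating loop can be sketched compactly: sample the uncertain parameters, fit a Gaussian-process surrogate to the FE responses, and tune the parameters to match the measured deflection. The "FE model", parameter ranges and measurement below are toy placeholders.

```python
# Hedged sketch of Kriging-based model updating with Latin hypercube sampling.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fe_deflection(theta):
    # Placeholder for an expensive FE analysis: mid-span deflection (mm)
    e_ratio, rho_ratio = theta
    return 55.0 / e_ratio + 12.0 * rho_ratio

# 1. Design of experiments over the parameter ranges with a Latin hypercube
sampler = qmc.LatinHypercube(d=2, seed=0)
theta_samples = qmc.scale(sampler.random(30), [0.8, 0.8], [1.2, 1.2])
deflections = np.array([fe_deflection(t) for t in theta_samples])

# 2. Fit the Kriging (Gaussian-process) surrogate
gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.1, 0.1]),
                              normalize_y=True).fit(theta_samples, deflections)

# 3. Update parameters so the surrogate matches the measured deflection
measured = 64.0   # mm, hypothetical site measurement
obj = lambda t: (gp.predict(t.reshape(1, -1))[0] - measured) ** 2
res = minimize(obj, x0=[1.0, 1.0], bounds=[(0.8, 1.2), (0.8, 1.2)])
print("updated E ratio, rho ratio:", res.x)
```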
Cross-Linguistic Influence in French-English Bilingual Children's Possessive Constructions
ERIC Educational Resources Information Center
Nicoladis, Elena
2012-01-01
The purpose of this article was to test the predictions of a speech production model of cross-linguistic influence in French-English bilingual children. A speech production model predicts bidirectional influence (i.e., bilinguals' greater use of periphrastic constructions like the hat of the dog relative to monolinguals in English and reversed…
Wilcox, D.A.; Xie, Y.
2007-01-01
Integrated, GIS-based, wetland predictive models were constructed to assist in predicting the responses of wetland plant communities to proposed new water-level regulation plans for Lake Ontario. The modeling exercise consisted of four major components: 1) building individual site wetland geometric models; 2) constructing generalized wetland geometric models representing specific types of wetlands (rectangle model for drowned river mouth wetlands, half ring model for open embayment wetlands, half ellipse model for protected embayment wetlands, and ellipse model for barrier beach wetlands); 3) assigning wetland plant profiles to the generalized wetland geometric models that identify associations between past flooding / dewatering events and the regulated water-level changes of a proposed water-level-regulation plan; and 4) predicting relevant proportions of wetland plant communities and the time durations during which they would be affected under proposed regulation plans. Based on this conceptual foundation, the predictive models were constructed using bathymetric and topographic wetland models and technical procedures operating on the platform of ArcGIS. An example of the model processes and outputs for the drowned river mouth wetland model using a test regulation plan illustrates the four components and, when compared against other test regulation plans, provided results that met ecological expectations. The model results were also compared to independent data collected by photointerpretation. Although data collections were not directly comparable, the predicted extent of meadow marsh in years in which photographs were taken was significantly correlated with extent of mapped meadow marsh in all but barrier beach wetlands. The predictive model for wetland plant communities provided valuable input into International Joint Commission deliberations on new regulation plans and was also incorporated into faunal predictive models used for that purpose.
Engoren, Milo; Habib, Robert H; Dooner, John J; Schwann, Thomas A
2013-08-01
As many as 14% of patients undergoing coronary artery bypass surgery are readmitted within 30 days. Readmission is usually the result of morbidity and may lead to death. The purpose of this study is to develop and compare statistical and genetic programming models to predict readmission. Patients were divided into separate Construction and Validation populations. Using 88 variables, logistic regression, genetic programs, and artificial neural nets were used to develop predictive models. Models were first constructed and tested on the Construction population, then validated on the Validation population. Areas under the receiver operating characteristic curves (AU ROC) were used to compare the models. Two hundred and two patients (7.6%) in the 2,644-patient Construction group and 216 (8.0%) of the 2,711-patient Validation group were re-admitted within 30 days of CABG surgery. Logistic regression predicted readmission with AU ROC = .675 ± .021 in the Construction group. Genetic programs significantly improved the accuracy (AU ROC = .767 ± .001, p < .001). Artificial neural nets were less accurate, with AU ROC = 0.597 ± .001 in the Construction group. The predictive accuracy of all three techniques fell in the Validation group. However, the accuracy of genetic programming (AU ROC = .654 ± .001) was still trivially, but statistically non-significantly, better than that of logistic regression (AU ROC = .644 ± .020, p = .61). Genetic programming and logistic regression provide alternative methods to predict readmission that are similarly accurate.
Key Questions in Building Defect Prediction Models in Practice
NASA Astrophysics Data System (ADS)
Ramler, Rudolf; Wolfmaier, Klaus; Stauder, Erwin; Kossak, Felix; Natschläger, Thomas
The information about which modules of a future version of a software system are defect-prone is a valuable planning aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. However, constructing effective defect prediction models in an industrial setting involves a number of key questions. In this paper we discuss ten key questions identified in context of establishing defect prediction in a large software development project. Seven consecutive versions of the software system have been used to construct and validate defect prediction models for system test planning. Furthermore, the paper presents initial empirical results from the studied project and, by this means, contributes answers to the identified questions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Kunwar P.; Gupta, Shikha
Ensemble-learning-based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish quantitative structure–toxicity relationships (QSTR) for the prediction of the toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using the Tanimoto similarity index. DTB and DTF models, supplemented with stochastic gradient boosting and bagging algorithms, were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was drawn to the prediction ability and robustness of the models, investigated both in external and 10-fold cross-validation processes. In the complete data, the optimal DTB and DTF models rendered accuracies of 98.90% and 98.83% in two-category and 98.14% and 98.14% in four-category toxicity classifications. Both models further yielded classification accuracies of 100% in the external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R²) of 0.945 and 0.944 between the measured and predicted toxicities, with mean squared errors (MSEs) of 0.059 and 0.064 in the complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R² and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting the toxicity of new chemicals for regulatory purposes. These approaches provide a useful strategy and robust tools for screening the ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the constructed (c) DTB and (d) DTF regression models to predict the T. pyriformis toxicity of diverse chemicals. - Highlights: • Ensemble learning (EL) based models constructed for toxicity prediction of chemicals • Predictive models used a few simple non-quantum mechanical molecular descriptors. • EL-based DTB/DTF models successfully discriminated toxic and non-toxic chemicals. • DTB/DTF regression models precisely predicted toxicity of chemicals in multi-species. • Proposed EL based models can be used as tool to predict toxicity of new chemicals.
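A hedged stand-in for the boosted and bagged tree models is shown below using scikit-learn's gradient boosting and random forest regressors on simulated descriptors; it mirrors the workflow (train/test split, R² and MSE) rather than the exact DTB/DTF implementations.

```python
# Illustrative stand-in for boosted/bagged tree QSTR models on synthetic
# molecular descriptors and toxicities.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(5)
X = rng.normal(size=(1450, 8))                      # 8 molecular descriptors
y = 0.9 * X[:, 0] - 0.6 * X[:, 3] + 0.4 * X[:, 5] ** 2 + rng.normal(0, 0.3, 1450)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("boosted trees", GradientBoostingRegressor(random_state=0)),
                    ("bagged trees", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: R^2={r2_score(y_te, pred):.3f} "
          f"MSE={mean_squared_error(y_te, pred):.3f}")
```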
Multifactorial disease risk calculator: Risk prediction for multifactorial disease pedigrees.
Campbell, Desmond D; Li, Yiming; Sham, Pak C
2018-03-01
Construction of multifactorial disease models from epidemiological findings and their application to disease pedigrees for risk prediction is nontrivial for all but the simplest of cases. Multifactorial Disease Risk Calculator is a web tool facilitating this. It provides a user-friendly interface, extending a reported methodology based on a liability-threshold model. Multifactorial disease models incorporating all the following features in combination are handled: quantitative risk factors (including polygenic scores), categorical risk factors (including major genetic risk loci), stratified age of onset curves, and the partition of the population variance in disease liability into genetic, shared, and unique environment effects. It allows the application of such models to disease pedigrees. Pedigree-related outputs are (i) individual disease risk for pedigree members, (ii) n-year risk for unaffected pedigree members, and (iii) the disease pedigree's joint liability distribution. Risk prediction for each pedigree member is based on using the constructed disease model to appropriately weigh evidence on disease risk available from personal attributes and family history. Evidence is used to construct the disease pedigree's joint liability distribution. From this, lifetime and n-year risk can be predicted. Example disease models and pedigrees are provided at the website and are used in accompanying tutorials to illustrate the features available. The website is built on an R package which provides the functionality for pedigree validation, disease model construction, and risk prediction. Website: http://grass.cgs.hku.hk:3838/mdrc/current.
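The liability-threshold calculation at the core of such a model can be written down directly: convert prevalence to a threshold and condition on the liability explained by known risk factors. The numbers below are illustrative, and the sketch ignores family-history evidence, which the tool handles through the joint liability distribution (the tool itself is R-based; Python is used here only for illustration).

```python
# Worked sketch of a liability-threshold risk calculation. Illustrative only.
from scipy.stats import norm

def liability_risk(prevalence, explained_var, score_z):
    """Risk for an individual whose known risk factors sit `score_z` SDs above
    the mean, where those factors explain `explained_var` of liability variance."""
    threshold = norm.ppf(1.0 - prevalence)          # liability threshold T
    mean_liab = score_z * explained_var ** 0.5      # contribution to liability
    resid_sd = (1.0 - explained_var) ** 0.5         # unexplained liability SD
    return 1.0 - norm.cdf((threshold - mean_liab) / resid_sd)

print(f"baseline risk:  {liability_risk(0.01, 0.2, 0.0):.3%}")
print(f"risk at +2 SD:  {liability_risk(0.01, 0.2, 2.0):.3%}")
```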
A computational cognitive model of self-efficacy and daily adherence in mHealth.
Pirolli, Peter
2016-12-01
Mobile health (mHealth) applications provide an excellent opportunity for collecting rich, fine-grained data necessary for understanding and predicting day-to-day health behavior change dynamics. A computational predictive model (ACT-R-DStress) is presented and fit to individual daily adherence in 28-day mHealth exercise programs. The ACT-R-DStress model refines the psychological construct of self-efficacy. To explain and predict the dynamics of self-efficacy and predict individual performance of targeted behaviors, the self-efficacy construct is implemented as a theory-based neurocognitive simulation of the interaction of behavioral goals, memories of past experiences, and behavioral performance.
Psychopathy and Deviant Workplace Behavior: A Comparison of Two Psychopathy Models.
Carre, Jessica R; Mueller, Steven M; Schleicher, Karly M; Jones, Daniel N
2018-04-01
Although psychopathy is an interpersonally harmful construct, few studies have compared different psychopathy models in predicting different types of workplace deviance. We examined how the Triarchic Psychopathy Model (TRI-PM) and the Self-Report Psychopathy-Short Form (SRP-SF) predicted deviant workplace behaviors in two forms: sexual harassment and deviant work behaviors. Using structural equation modeling, the latent factor of psychopathy was predictive of both types of deviant workplace behavior. Specifically, the SRP-SF significantly predicted both measures of deviant workplace behavior. With respect to the TRI-PM, meanness and disinhibition significantly predicted higher scores on the workplace deviance and workplace sexual harassment measures. Future research needs to investigate the influence of psychopathy on deviant workplace behaviors and consider the measures used when investigating these constructs.
Fei, Yang; Hu, Jian; Gao, Kun; Tu, Jianfeng; Li, Wei-Qin; Wang, Wei
2017-06-01
To construct a radial basis function (RBF) artificial neural network (ANN) model to predict the incidence of acute pancreatitis (AP)-induced portal vein thrombosis (PVT). The analysis included 353 patients with AP who had been admitted between January 2011 and December 2015. An RBF ANN model and a logistic regression model were each constructed based on eleven factors relevant to AP. Statistical indexes were used to evaluate the predictive value of the two models. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy of the RBF ANN model for predicting PVT were 73.3%, 91.4%, 68.8%, 93.0% and 87.7%, respectively. There were significant differences between the RBF ANN and logistic regression models in these parameters (P<0.05). In addition, a comparison of the areas under the receiver operating characteristic curves of the two models showed a statistically significant difference (P<0.05). The RBF ANN model is more likely than the logistic regression model to predict the occurrence of PVT induced by AP. D-dimer, AMY, Hct and PT were important predictive factors for AP-induced PVT.
A Global Model for Bankruptcy Prediction
Alaminos, David; del Castillo, Agustín; Fernández, Manuel Ángel
2016-01-01
The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy. PMID:27880810
USDA-ARS's Scientific Manuscript database
The objective of this study was to develop a one-step approach to directly construct predictive models for describing the growth of Salmonella Enteritidis (SE) in liquid egg white (LEW) and egg yolk (LEY). A five-strain cocktail of SE, induced to resist rifampicin at 100 mg/L, ...
Real-time emissions from construction equipment compared with model predictions.
Heidari, Bardia; Marr, Linsey C
2015-02-01
The construction industry is a large source of greenhouse gases and other air pollutants. Measuring and monitoring real-time emissions will provide practitioners with information to assess environmental impacts and improve the sustainability of construction. We employed a portable emission measurement system (PEMS) for real-time measurement of carbon dioxide (CO2), nitrogen oxides (NOx), hydrocarbons, and carbon monoxide (CO) emissions from construction equipment to derive emission rates (mass of pollutant emitted per unit time) and emission factors (mass of pollutant emitted per unit volume of fuel consumed) under real-world operating conditions. Measurements were compared with emissions predicted by methodologies used in three models: NONROAD2008, OFFROAD2011, and a modal statistical model. Measured emission rates agreed with model predictions for some pieces of equipment but were up to 100 times lower for others. Much of the difference was driven by lower fuel consumption rates than predicted. Emission factors during idling and hauling were significantly different from each other and from those of other moving activities, such as digging and dumping. It appears that operating conditions introduce considerable variability in emission factors. Results of this research will aid researchers and practitioners in improving current emission estimation techniques, frameworks, and databases.
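A toy calculation clarifies the two metrics: an emission rate is mass per unit time, while an emission factor normalizes the same mass by fuel consumed. The numbers below are hypothetical PEMS-style readings.

```python
# Emission rate (g/s) vs. emission factor (g/L of fuel) from the same
# hypothetical second-by-second measurements.
second_by_second_g_per_s = [0.8, 1.1, 0.9, 1.4, 1.2]    # NOx mass rate, g/s
fuel_rate_L_per_s = [0.004, 0.006, 0.005, 0.008, 0.007] # fuel consumption, L/s

total_mass_g = sum(second_by_second_g_per_s)             # over 5 s of operation
total_fuel_L = sum(fuel_rate_L_per_s)
print("average emission rate:", total_mass_g / 5, "g/s")
print("emission factor:", round(total_mass_g / total_fuel_L, 1), "g/L fuel")
```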
Construction and evaluation of FiND, a fall risk prediction model of inpatients from nursing data.
Yokota, Shinichiroh; Ohe, Kazuhiko
2016-04-01
To construct and evaluate an easy-to-use fall risk prediction model based on the daily condition of inpatients, using secondary-use data from an electronic medical record system. The present authors scrutinized electronic medical record system data and created a dataset for analysis that included inpatient fall report data and Intensity of Nursing Care Needs data. The authors divided the analysis dataset into training data and testing data, constructed the fall risk prediction model FiND from the training data, and tested the model using the testing data. The dataset for analysis contained 1,230,604 records from 46,241 patients. The sensitivity of the model constructed from the training data was 71.3% and the specificity was 66.0%. The verification result from the testing dataset was almost equivalent to the theoretical value. Although the model's accuracy did not surpass that of models developed in previous research, the authors believe FiND will be useful in medical institutions all over Japan because it is composed of few variables (only age, sex, and the Intensity of Nursing Care Needs items) and its accuracy on unknown data was demonstrated.
Cheung, Rebecca Y M; Cummings, E Mark; Zhang, Zhiyong; Davies, Patrick T
2016-11-01
Recognizing the significance of interacting family subsystems, the present study addresses how interparental conflict is linked to adolescent emotional security as a function of parental gender. A total of 272 families with a child at 12.60 years of age (133 boys, 139 girls) were invited to participate each year for three consecutive years. A multi-informant method was used, along with trivariate models to test the associations among mothers, fathers, and their adolescent children's behaviors. The findings from separate models of destructive and constructive interparental conflict revealed intricate linkages among family members. In the model of destructive interparental conflict, mothers and fathers predicted each other's conflict behaviors over time. Moreover, adolescents' exposure to negativity expressed by either parent dampened their emotional security. Consistent with child effects models, adolescent emotional insecurity predicted fathers' destructive conflict behaviors. As for the model of constructive interparental conflict, fathers predicted mothers' conflict behaviors over time. Adolescents' exposure to fathers' constructive conflict behaviors also enhanced their sense of emotional security. Consistent with child effects models, adolescent emotional security predicted mothers' and fathers' constructive conflict behaviors. These findings extend the family and adolescent literature by indicating that family processes are multidirectional, involving multiple dyads in the study of parents' and adolescents' functioning. Contributions of these findings to the understanding of interparental conflict and emotional security in adolescence are discussed.
NASA Astrophysics Data System (ADS)
Huang, Darong; Bai, Xing-Rong
Based on wavelet transform and neural network theory, a traffic-flow prediction model for use in the optimal control of intelligent traffic systems is constructed. First, we extract the scale and wavelet coefficients from the online measured raw traffic-flow data via the wavelet transform. Second, an artificial neural network model for traffic-flow prediction is constructed and trained using the coefficient sequences as inputs and the raw data as outputs. Simultaneously, we design the operating principle of the optimal control system for the traffic-flow forecasting model, the network topological structure and the data transmission model. Finally, a simulated example shows that the technique is effective and accurate. The theoretical results indicate that the wavelet neural network prediction model and algorithms have broad prospects for practical application.
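The wavelet-plus-neural-network idea can be prototyped as follows: decompose a sliding window of flow measurements with a discrete wavelet transform and feed the coefficients to a small neural network that predicts the next observation. The series, wavelet choice and network size are assumptions for illustration.

```python
# Sketch: wavelet features of a sliding window feeding an MLP that predicts
# the next traffic-flow value. The flow series is synthetic.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(6)
t = np.arange(2000)
flow = 400 + 150 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 20, t.size)  # veh/5 min

window = 64
def wavelet_features(segment):
    # Approximation + detail coefficients of a 3-level discrete wavelet transform
    return np.concatenate(pywt.wavedec(segment, "db4", level=3))

X = np.array([wavelet_features(flow[i:i + window]) for i in range(len(flow) - window)])
y = flow[window:]

split = 1500
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=0))
model.fit(X[:split], y[:split])
mae = mean_absolute_error(y[split:], model.predict(X[split:]))
print(f"test MAE: {mae:.1f} veh/5 min")
```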
Jhin, Changho; Hwang, Keum Taek
2014-01-01
Radical scavenging activity of anthocyanins is well known, but only a few studies have been conducted using a quantum chemical approach. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting the radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-applied QSAR models were developed using quantum chemical descriptors of anthocyanins calculated by the semi-empirical PM6 and PM7 methods. Electron affinity (A) and electronegativity (χ) of the flavylium cation, and ionization potential (I) of the quinoidal base, were significantly correlated with the radical scavenging activities of anthocyanins. These descriptors were used as independent variables for the QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized by 100 learning epochs. The constructed models using descriptors calculated by both PM6 and PM7 had good prediction efficiency, with Q² values of 0.82 and 0.86, respectively. PMID:25153627
Social network models predict movement and connectivity in ecological landscapes
Fletcher, R.J.; Acevedo, M.A.; Reichert, Brian E.; Pias, Kyle E.; Kitchens, W.M.
2011-01-01
Network analysis is on the rise across scientific disciplines because of its ability to reveal complex, and often emergent, patterns and dynamics. Nonetheless, a growing concern in network analysis is the use of limited data for constructing networks. This concern is strikingly relevant to ecology and conservation biology, where network analysis is used to infer connectivity across landscapes. In this context, movement among patches is the crucial parameter for interpreting connectivity but because of the difficulty of collecting reliable movement data, most network analysis proceeds with only indirect information on movement across landscapes rather than using observed movement to construct networks. Statistical models developed for social networks provide promising alternatives for landscape network construction because they can leverage limited movement information to predict linkages. Using two mark-recapture datasets on individual movement and connectivity across landscapes, we test whether commonly used network constructions for interpreting connectivity can predict actual linkages and network structure, and we contrast these approaches to social network models. We find that currently applied network constructions for assessing connectivity consistently, and substantially, overpredict actual connectivity, resulting in considerable overestimation of metapopulation lifetime. Furthermore, social network models provide accurate predictions of network structure, and can do so with remarkably limited data on movement. Social network models offer a flexible and powerful way for not only understanding the factors influencing connectivity but also for providing more reliable estimates of connectivity and metapopulation persistence in the face of limited data.
Tsai, Tsung-Yuan; Li, Jing-Sheng; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Li, Guoan
2013-01-01
The statistical shape model (SSM) method that uses 2D images of the knee joint to predict the 3D joint surface model has been reported in the literature. In this study, we constructed an SSM database using 152 human CT knee joint models, including the femur, tibia and patella, and analyzed the characteristics of each principal component of the SSM. The surface models of two in vivo knees were predicted using the SSM and their 2D bi-plane fluoroscopic images. The predicted models were compared to their CT joint models. The differences between the predicted 3D knee joint surfaces and the CT image-based surfaces were 0.30 ± 0.81 mm, 0.34 ± 0.79 mm and 0.36 ± 0.59 mm for the femur, tibia and patella, respectively (average ± standard deviation). The computational time for each bone of the knee joint was within 30 seconds using a personal computer. The analysis of this study indicated that the SSM method could be a useful tool to construct 3D surface models of the knee with sub-millimeter accuracy in real time. Thus it may have a broad application in computer-assisted knee surgeries that require 3D surface models of the knee. PMID:24156375
Grobman, William A.; Lai, Yinglei; Landon, Mark B.; Spong, Catherine Y.; Leveno, Kenneth J.; Rouse, Dwight J.; Varner, Michael W.; Moawad, Atef H.; Simhan, Hyagriv N.; Harper, Margaret; Wapner, Ronald J.; Sorokin, Yoram; Miodovnik, Menachem; Carpenter, Marshall; O'sullivan, Mary J.; Sibai, Baha M.; Langer, Oded; Thorp, John M.; Ramin, Susan M.; Mercer, Brian M.
2010-01-01
Objective To construct a predictive model for vaginal birth after cesarean (VBAC) that combines factors that can be ascertained only as the pregnancy progresses with those known at initiation of prenatal care. Study design Using multivariable modeling, we constructed a predictive model for VBAC that included patient factors known at the initial prenatal visit as well as those that only became evident as the pregnancy progressed to the admission for delivery. Results 9616 women were analyzed. The regression equation for VBAC success included multiple factors that could not be known at the first prenatal visit. The area under the curve for this model was significantly greater (P < .001) than that of a model that included only factors available at the first prenatal visit. Conclusion A prediction model for VBAC success that incorporates factors that can be ascertained only as the pregnancy progresses adds to the predictive accuracy of a model that uses only factors available at a first prenatal visit. PMID:19813165
ERIC Educational Resources Information Center
Malloch, Douglas C.; Michael, William B.
1981-01-01
This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectancy theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…
DOT National Transportation Integrated Search
1995-06-30
Topographic surface modeling using a Geographic Information System (GIS) can be useful for the prediction of soil erosion resulting from highway construction projects. The assumption is that terrain, along with other parameters, will influence the po...
Improved Predictions of the Geographic Distribution of Invasive Plants Using Climatic Niche Models.
Ramírez-Albores, Jorge E; Bustamante, Ramiro O; Badano, Ernesto I
2016-01-01
Climatic niche models for invasive plants are usually constructed with occurrence records taken from literature and collections. Because these data neither discriminate among life-cycle stages of plants (adult or juvenile) nor the origin of individuals (naturally established or man-planted), the resulting models may mispredict the distribution ranges of these species. We propose that more accurate predictions could be obtained by modelling climatic niches with data of naturally established individuals, particularly with occurrence records of juvenile plants because this would restrict the predictions of models to those sites where climatic conditions allow the recruitment of the species. To test this proposal, we focused on the Peruvian peppertree (Schinus molle), a South American species that has largely invaded Mexico. Three climatic niche models were constructed for this species using a high-resolution dataset gathered in the field. The first model included all occurrence records, irrespective of the life-cycle stage or origin of peppertrees (generalized niche model). The second model only included occurrence records of naturally established mature individuals (adult niche model), while the third model was constructed with occurrence records of naturally established juvenile plants (regeneration niche model). When models were compared, the generalized climatic niche model predicted the presence of peppertrees in sites located farther beyond the climatic thresholds that naturally established individuals can tolerate, suggesting that human activities influence the distribution of this invasive species. The adult and regeneration climatic niche models concurred in their predictions about the distribution of peppertrees, suggesting that naturally established adult trees only occur in sites where climatic conditions allow the recruitment of juvenile stages. These results support the proposal that climatic niches of invasive plants should be modelled with data of naturally established individuals because this improves the accuracy of predictions about their distribution ranges.
Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng
2014-01-01
Objective: Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods: To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results: We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. Conclusion: This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng
2014-04-01
Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 h in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. Copyright © 2013 Elsevier Inc. All rights reserved.
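The pipeline-as-dependency-graph idea described above can be illustrated with a small sketch (not PARAMO itself, which runs on Map-Reduce): tasks are declared with their prerequisites, topologically ordered, and independent tasks are dispatched in parallel. The task names and the thread-pool executor are illustrative assumptions, using only Python's standard library (graphlib requires Python 3.9+).

```python
# Minimal sketch of dependency-graph task scheduling in the spirit of PARAMO
# (illustrative only; PARAMO itself runs on Map-Reduce). Task names and the
# trivial run() body are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

def run(task):
    print(f"running {task}")
    return task

# Each key depends on the tasks in its value set.
graph = {
    "feature_construction": {"cohort_construction"},
    "cross_validation":     {"feature_construction"},
    "feature_selection":    {"cross_validation"},
    "classification":       {"feature_selection"},
}

ts = TopologicalSorter(graph)
ts.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    while ts.is_active():
        ready = list(ts.get_ready())      # tasks whose prerequisites are done
        results = pool.map(run, ready)    # run this batch in parallel
        for task, _ in zip(ready, results):
            ts.done(task)                 # mark finished, unlocking successors
```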
Modeling the prediction of business intelligence system effectiveness.
Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I
2016-01-01
Although business intelligence (BI) technologies are continually evolving, the capability to apply them has become an indispensable resource for enterprises operating in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE, and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, is necessary to improve BI implementation and ensure its success. The study identified the critical prediction indicators of BISE that are important for forecasting BI performance, highlighted five classification and prediction rules of BISE derived from decision tree structures, and presented a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can help enterprises improve BISE while effectively managing BI solution implementation, and they also speak to academics for whom theory is important.
Valente, Bruno D.; Morota, Gota; Peñagaricano, Francisco; Gianola, Daniel; Weigel, Kent; Rosa, Guilherme J. M.
2015-01-01
The term “effect” in additive genetic effect suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability. PMID:25908318
Rowlinson, Steve; Jia, Yunyan Andrea
2014-04-01
Existing heat stress risk management guidelines recommended by international standards are not practical for the construction industry, which needs site supervision staff to make instant managerial decisions to mitigate heat risks. The ability of the predicted heat strain (PHS) model [ISO 7933 (2004). Ergonomics of the thermal environment - analytical determination and interpretation of heat stress using calculation of the predicted heat strain. Geneva: International Standard Organisation] to predict maximum allowable exposure time (Dlim) has now enabled development of localized, action-triggering and threshold-based guidelines for implementation by lay frontline staff on construction sites. This article presents a protocol for development of two heat stress management tools by applying the PHS model to its full potential. One of the tools is developed to facilitate managerial decisions on an optimized work-rest regimen for paced work. The other tool is developed to enable workers' self-regulation during self-paced work.
NASA Astrophysics Data System (ADS)
Kruse Christensen, Nikolaj; Ferre, Ty Paul A.; Fiandaca, Gianluca; Christensen, Steen
2017-03-01
We present a workflow for efficient construction and calibration of large-scale groundwater models that includes the integration of airborne electromagnetic (AEM) data and hydrological data. In the first step, the AEM data are inverted to form a 3-D geophysical model. In the second step, the 3-D geophysical model is translated, using a spatially dependent petrophysical relationship, to form a 3-D hydraulic conductivity distribution. The geophysical models and the hydrological data are used to estimate spatially distributed petrophysical shape factors. The shape factors primarily work as translators between resistivity and hydraulic conductivity, but they can also compensate for structural defects in the geophysical model. The method is demonstrated for a synthetic case study with sharp transitions among various types of deposits. Besides demonstrating the methodology, we demonstrate the importance of using geophysical regularization constraints that conform well to the depositional environment. This is done by inverting the AEM data using either smoothness (smooth) constraints or minimum gradient support (sharp) constraints, where the use of sharp constraints conforms best to the environment. The dependency on AEM data quality is also tested by inverting the geophysical model using data corrupted with four different levels of background noise. Subsequently, the geophysical models are used to construct competing groundwater models for which the shape factors are calibrated. The performance of each groundwater model is tested with respect to four types of prediction that are beyond the calibration base: a pumping well's recharge area and groundwater age, respectively, are predicted by applying the same stress as for the hydrologic model calibration; and head and stream discharge are predicted for a different stress situation. As expected, in this case the predictive capability of a groundwater model is better when it is based on a sharp geophysical model instead of a smoothness constraint. This is true for predictions of recharge area, head change, and stream discharge, while we find no improvement for prediction of groundwater age. Furthermore, we show that the model prediction accuracy improves with AEM data quality for predictions of recharge area, head change, and stream discharge, while there appears to be no accuracy improvement for the prediction of groundwater age.
A real-time prediction model for post-irradiation malignant cervical lymph nodes.
Lo, W-C; Cheng, P-W; Shueng, P-W; Hsieh, C-H; Chang, Y-L; Liao, L-J
2018-04-01
To establish a real-time predictive scoring model based on sonographic characteristics for identifying malignant cervical lymph nodes (LNs) in cancer patients after neck irradiation. One-hundred forty-four irradiation-treated patients underwent ultrasonography and ultrasound-guided fine-needle aspirations (USgFNAs), and the resultant data were used to construct a real-time and computerised predictive scoring model. This scoring system was further compared with our previously proposed prediction model. A predictive scoring model, 1.35 × (L axis) + 2.03 × (S axis) + 2.27 × (margin) + 1.48 × (echogenic hilum) + 3.7, was generated by stepwise multivariate logistic regression analysis. Neck LNs were considered to be malignant when the score was ≥ 7, corresponding to a sensitivity of 85.5%, specificity of 79.4%, positive predictive value (PPV) of 82.3%, negative predictive value (NPV) of 83.1%, and overall accuracy of 82.6%. When this new model and the original model were compared, the areas under the receiver operating characteristic curve (c-statistic) were 0.89 and 0.81, respectively (P < .05). A real-time sonographic predictive scoring model was constructed to provide prompt and reliable guidance for USgFNA biopsies to manage cervical LNs after neck irradiation. © 2017 John Wiley & Sons Ltd.
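The published regression score can be applied directly once the sonographic inputs are coded; a minimal sketch follows. How each predictor is coded (centimetre axes, 0/1 indicators for irregular margin and absent echogenic hilum) is an assumption for illustration, not taken from the paper.

```python
# Hypothetical application of the reported scoring rule; the coding of each
# sonographic feature (axis units, binary indicators) is an assumption.
def lymph_node_score(l_axis_cm, s_axis_cm, irregular_margin, absent_hilum):
    return (1.35 * l_axis_cm + 2.03 * s_axis_cm
            + 2.27 * int(irregular_margin) + 1.48 * int(absent_hilum) + 3.7)

def is_suspicious(score, cutoff=7.0):
    # Nodes scoring at or above the cutoff are classified as malignant.
    return score >= cutoff

s = lymph_node_score(1.8, 1.1, irregular_margin=True, absent_hilum=True)
print(round(s, 2), is_suspicious(s))
```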
A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.
Ni, Qianwu; Chen, Lei
2017-01-01
Correct prediction of protein structural class is beneficial to investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, based on various features, it is still a great challenge to select a proper classification algorithm and extract essential features to participate in classification. In this study, a feature and algorithm selection method was presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physicochemical features were adopted as features, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The classes predicted by these algorithms and the true class of each protein were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. From the algorithm list, algorithms were taken one by one to build an ensemble prediction model. Finally, we selected the ensemble prediction model with the best performance as the optimal ensemble prediction model. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. Both the feature selection and algorithm selection procedures are helpful for building an ensemble prediction model that yields better performance. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
A simplified building airflow model for agent concentration prediction.
Jacques, David R; Smith, David A
2010-11-01
A simplified building airflow model is presented that can be used to predict the spread of a contaminant agent from a chemical or biological attack. If the dominant means of agent transport throughout the building is an air-handling system operating at steady-state, a linear time-invariant (LTI) model can be constructed to predict the concentration in any room of the building as a result of either an internal or external release. While the model does not capture weather-driven and other temperature-driven effects, it is suitable for concentration predictions under average daily conditions. The model is easily constructed using information that should be accessible to a building manager, supplemented with assumptions based on building codes and standard air-handling system design practices. The results of the model are compared with a popular multi-zone model for a simple building and are demonstrated for building examples containing one or more air-handling systems. The model can be used for rapid concentration prediction to support low-cost placement strategies for chemical and biological detection sensors.
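The steady-state air-handling assumption described above leads naturally to a discrete-time linear time-invariant update of zone concentrations. The sketch below is a minimal illustration with an invented three-zone transfer matrix and release pattern; it is not the authors' model or the multi-zone benchmark it was compared against.

```python
# Minimal discrete-time LTI sketch of agent concentration in building zones:
# c[k+1] = A @ c[k] + B * release[k]. The 3-zone transfer matrix A and the
# release pattern are invented for illustration only.
import numpy as np

A = np.array([[0.90, 0.05, 0.02],   # fraction retained / exchanged per step
              [0.08, 0.88, 0.03],
              [0.02, 0.07, 0.95]])
B = np.array([1.0, 0.0, 0.0])       # release enters zone 0

c = np.zeros(3)                      # concentrations in each zone
history = []
for k in range(60):                  # 60 time steps
    release = 1.0 if k < 5 else 0.0  # short internal release
    c = A @ c + B * release
    history.append(c.copy())

print(np.round(history[-1], 4))      # concentrations decay after the release
```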
Self-Determination Theory and Outpatient Follow-Up After Psychiatric Hospitalization.
Sripada, Rebecca K; Bowersox, Nicholas W; Ganoczy, Dara; Valenstein, Marcia; Pfeiffer, Paul N
2016-08-01
The objective of this study was to assess whether the constructs of self-determination theory (autonomy, competence, and relatedness) are associated with adherence to outpatient follow-up appointments after psychiatric hospitalization. A total of 242 individuals discharged from inpatient psychiatric treatment within the Veterans Health Administration completed surveys assessing self-determination theory constructs as well as measures of depression and barriers to treatment. Medical records were used to count the number of mental health visits and no-shows in the 14 weeks following discharge. Logistic regression models assessed the association between survey items assessing theory constructs and attendance at mental healthcare visits. In multivariate models, none of the self-determination theory factors predicted outpatient follow-up attendance. The constructs of self-determination theory as measured by a single self-report survey may not reliably predict adherence to post-hospital care. Need factors such as depression may be more strongly predictive of treatment adherence.
Myszkowska, Dorota
2013-03-01
The aim of the study was to construct a model forecasting the birch pollen season characteristics in Cracow on the basis of an 18-year data series. The study was performed using the volumetric method (Lanzoni/Burkard trap). The 98/95% method was used to calculate the pollen season. Spearman's correlation test was applied to find the relationship between the meteorological parameters and pollen season characteristics. To construct the predictive model, backward stepwise multiple regression analysis was used, accounting for the multi-collinearity of variables. The predictive models best fitted the pollen season start and end, especially models containing two independent variables. The peak concentration value was predicted with a higher prediction error. The accuracy of the models predicting the pollen season characteristics was also higher for 2009 than for 2010. Both the multi-variable model and the one-variable model for the beginning of the pollen season included air temperature during the last 10 days of February, while the multi-variable model also included humidity at the beginning of April. The models forecasting the end of the pollen season were based on temperature in March-April, while the peak day was predicted using the temperature during the last 10 days of March.
Improved Modeling of Open Waveguide Aperture Radiators for use in Conformal Antenna Arrays
NASA Astrophysics Data System (ADS)
Nelson, Gregory James
Open waveguide apertures have been used as radiating elements in conformal arrays. Individual radiating element model patterns are used in constructing overall array models. The existing models for these aperture radiating elements may not accurately predict the array pattern for TEM waves which are not on boresight for each radiating element. In particular, surrounding structures can affect the far field patterns of these apertures, which ultimately affects the overall array pattern. New models of open waveguide apertures are developed here with the goal of accounting for the surrounding structure effects on the aperture far field patterns such that the new models make accurate pattern predictions. These aperture patterns (both E plane and H plane) are measured in an anechoic chamber and the manner in which they deviate from existing model patterns is studied. Using these measurements as a basis, existing models for both E and H planes are updated with new factors and terms which allow the prediction of far field open waveguide aperture patterns with improved accuracy. These new and improved individual radiator models are then used to predict overall conformal array patterns. Arrays of open waveguide apertures are constructed and measured in a similar fashion to the individual aperture measurements. These measured array patterns are compared with the newly modeled array patterns to verify the improved accuracy of the new models as compared with the performance of existing models in making array far field pattern predictions. The array pattern lobe characteristics are then studied for predicting fully circularly conformal arrays of varying radii. The lobe metrics that are tracked are angular location and magnitude as the radii of the conformal arrays are varied. A constructed, measured array that is close to conforming to a circular surface is compared with a fully circularly conformal modeled array pattern prediction, with the predicted lobe angular locations and magnitudes tracked, plotted and tabulated. The close match between the patterns of the measured array and the modeled circularly conformal array verifies the validity of the modeled circularly conformal array pattern predictions.
Tsai, Tsung-Yuan; Li, Jing-Sheng; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Li, Guoan
2015-01-01
The statistical shape model (SSM) method that uses 2D images of the knee joint to predict the three-dimensional (3D) joint surface model has been reported in the literature. In this study, we constructed a SSM database using 152 human computed tomography (CT) knee joint models, including the femur, tibia and patella and analysed the characteristics of each principal component of the SSM. The surface models of two in vivo knees were predicted using the SSM and their 2D bi-plane fluoroscopic images. The predicted models were compared to their CT joint models. The differences between the predicted 3D knee joint surfaces and the CT image-based surfaces were 0.30 ± 0.81 mm, 0.34 ± 0.79 mm and 0.36 ± 0.59 mm for the femur, tibia and patella, respectively (average ± standard deviation). The computational time for each bone of the knee joint was within 30 s using a personal computer. The analysis of this study indicated that the SSM method could be a useful tool to construct 3D surface models of the knee with sub-millimeter accuracy in real time. Thus, it may have a broad application in computer-assisted knee surgeries that require 3D surface models of the knee.
Sodium intake prediction with health promotion model constructs in rural hypertensive patients.
Kamran, Aziz; Sharifirad, Gholamreza; Shafaeei, Yousef; Azadbakht, Leila
2015-01-01
Hypertension is the most common cause of cardiovascular disease, and the growing epidemic is a serious warning to pay more attention to this disease. The aims of this study were to examine the relationships between the health promotion model (HPM) constructs and sodium intake, and to determine the predictive power of the HPM constructs as the possible mediators of sodium intake in rural Iranian hypertensive patients. This cross-sectional study was conducted on 671 hypertensive patients in Ardabil, Iran in 2013. The data were obtained during a 25-40 min face-to-face conversation by validated and reliable instruments. The nutritional data were assessed with Nutritionist version 4 (N4) software. Descriptive statistics and Spearman's correlations were calculated using SPSS Statistics version 18.0. Structural equation modeling was conducted using AMOS version 18. Sodium intake was negatively correlated with perceived benefits (r = -0.707; P < 0.01), perceived self-efficacy (r = -0.719; P < 0.01), situational influences (r = -0.590; P < 0.01), interpersonal influences (r = -0.637; P < 0.01), commitment to action (r = -0.605; P < 0.01), affect related to behavior (r = -0.499; P < 0.01), and positively associated with the perceived barriers score (r = 0.563; P < 0.01). The structural equation modeling showed that the model explained 63.0% of the variation in sodium intake. HPM constructs were significantly associated with sodium intake, and dietary perceptions based on HPM constructs predicted an acceptable proportion of the variation in sodium intake. Therefore, we suggest using these model constructs to improve the effectiveness of nutritional interventions.
Modeling of fugitive dust emission for construction sand and gravel processing plant.
Lee, C H; Tang, L W; Chang, C T
2001-05-15
Due to rapid economic development in Taiwan, a large quantity of construction sand and gravel is needed to support domestic civil construction projects. However, a construction sand and gravel processing plant is often a major source of air pollution, due to its associated fugitive dust emission. To predict the amount of fugitive dust emitted from this kind of processing plant, a semiempirical model was developed in this study. This model was developed on the basis of the actual dust emission data (i.e., total suspended particulate, TSP) and four on-site operating parameters (i.e., wind speed (u), soil moisture (M), soil silt content (s), and number (N) of trucks) measured at a construction sand and gravel processing plant. On the basis of the on-site measured data and an SAS nonlinear regression program, the expression of this model is E = 0.011 · u^2.653 · M^-1.875 · s^0.060 · N^0.896, where E is the amount (kg/ton) of dust emitted during the production of each ton of gravel and sand. This model can serve as a facile tool for predicting the fugitive dust emission from a construction sand and gravel processing plant.
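Because the model is a closed-form power law, it can be evaluated directly. The sketch below simply codes the reported equation; the sample input values are illustrative, and the units of u, M, s and N follow the paper's on-site measurements.

```python
# Direct evaluation of the reported semiempirical equation
# E = 0.011 * u**2.653 * M**-1.875 * s**0.060 * N**0.896  (E in kg/ton).
# The sample values below are illustrative only; units follow the study.
def fugitive_dust_emission(u, M, s, N):
    return 0.011 * u**2.653 * M**-1.875 * s**0.060 * N**0.896

print(round(fugitive_dust_emission(u=3.0, M=2.5, s=10.0, N=20), 3))
```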
2011-12-02
construction and validation of predictive computer models such as those used in Time-domain Analysis Simulation for Advanced Tracking (TASAT), a... characterization data, successful construction and validation of predictive computer models was accomplished. And an investigation in pose determination from...
NASA Astrophysics Data System (ADS)
Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher
2015-07-01
Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some of the well-known methods. The resultant simulations clearly demonstrate the superiority and potentiality of the proposed technique in terms of the quality performance and accuracy of substructure preservation in the construct, as well as the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.
Pasotti, Lorenzo; Bellato, Massimo; Casanova, Michela; Zucca, Susanna; Cusella De Angelis, Maria Gabriella; Magni, Paolo
2017-01-01
The study of simplified, ad-hoc constructed model systems can help to elucidate whether quantitatively characterized biological parts can be effectively re-used in composite circuits to yield predictable functions. Synthetic systems designed from the bottom-up can enable the building of complex interconnected devices via a rational approach, supported by mathematical modelling. However, such a process is affected by different, usually non-modelled, sources of unpredictability, such as cell burden. Here, we analyzed a set of synthetic transcriptional cascades in Escherichia coli. We aimed to test the predictive power of a simple Hill function activation/repression model (no-burden model, NBM) and of a recently proposed model, including Hill functions and the modulation of protein expression by cell load (burden model, BM). To test the bottom-up approach, the circuit collection was divided into training and test sets, used to learn individual component functions and test the predicted output of interconnected circuits, respectively. Among the constructed configurations, two test set circuits showed unexpected logic behaviour. Both NBM and BM were able to predict the quantitative output of interconnected devices with expected behaviour, but only the BM was also able to predict the output of one circuit with unexpected behaviour. Moreover, considering training and test set data together, the BM captures circuit output with higher accuracy than the NBM, which is unable to capture the experimental output exhibited by some of the circuits even qualitatively. Finally, resource usage parameters, estimated via the BM, guided the successful construction of new corrected variants of the two circuits showing unexpected behaviour. Superior descriptive and predictive capabilities were achieved by considering resource limitation in the modelling, but further efforts are needed to improve the accuracy of models for biological engineering.
Construction of ground-state preserving sparse lattice models for predictive materials simulations
NASA Astrophysics Data System (ADS)
Huang, Wenxuan; Urban, Alexander; Rong, Ziqin; Ding, Zhiwei; Luo, Chuan; Ceder, Gerbrand
2017-08-01
First-principles based cluster expansion models are the dominant approach in ab initio thermodynamics of crystalline mixtures enabling the prediction of phase diagrams and novel ground states. However, despite recent advances, the construction of accurate models still requires a careful and time-consuming manual parameter tuning process for ground-state preservation, since this property is not guaranteed by default. In this paper, we present a systematic and mathematically sound method to obtain cluster expansion models that are guaranteed to preserve the ground states of their reference data. The method builds on the recently introduced compressive sensing paradigm for cluster expansion and employs quadratic programming to impose constraints on the model parameters. The robustness of our methodology is illustrated for two lithium transition metal oxides with relevance for Li-ion battery cathodes, i.e., Li2xFe2(1-x)O2 and Li2xTi2(1-x)O2, for which the construction of cluster expansion models with compressive sensing alone has proven to be challenging. We demonstrate that our method not only guarantees ground-state preservation on the set of reference structures used for the model construction, but also show that out-of-sample ground-state preservation up to relatively large supercell size is achievable through a rapidly converging iterative refinement. This method provides a general tool for building robust, compressed and constrained physical models with predictive power.
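A schematic of the constrained fit can be written as an L1-regularized least-squares problem with linear ground-state inequalities, solved here with the third-party cvxpy package. The correlation matrix, reference energies and constraint matrix below are random placeholders; the paper's actual construction of the constraints and cluster hierarchy is more elaborate than this sketch.

```python
# Schematic of a compressive-sensing cluster-expansion fit with linear
# ground-state-preservation constraints, posed as a constrained quadratic
# program. X (cluster correlations), E (reference energies) and G (constraints
# keeping each known ground state below its competitors) are placeholders.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_structs, n_clusters, n_constraints = 40, 15, 10
X = rng.normal(size=(n_structs, n_clusters))   # cluster correlations
E = rng.normal(size=n_structs)                 # reference (DFT-like) energies
G = rng.normal(size=(n_constraints, n_clusters))

w = cp.Variable(n_clusters)                    # effective cluster interactions
objective = cp.Minimize(cp.sum_squares(X @ w - E) + 0.1 * cp.norm1(w))
constraints = [G @ w <= -1e-3]                 # ground-state inequalities
cp.Problem(objective, constraints).solve()
print(np.round(w.value, 3))
```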
Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.
Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh
2014-07-01
This study develops a procedure that is related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty that is associated with different model structures with varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performed two-stage Monte-Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then the estimation of CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model; WWQM) were compared based on data that were collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure because in this case, the more simplistic representation (first-order K-C model) of reality results in a higher uncertainty in the prediction made by the model. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management. Copyright © 2014 Elsevier Ltd. All rights reserved.
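The bookkeeping behind a characteristic CV can be sketched in a few lines: sample parameters, keep the behavioral sets whose (informal) likelihood clears a threshold, and report the coefficient of variation of their predictions. The toy model, likelihood measure and threshold below are placeholders, not those of the wetland models compared in the study.

```python
# Sketch of GLUE-style bookkeeping behind a characteristic CV. The toy model,
# likelihood measure and acceptance threshold are placeholders.
import numpy as np

rng = np.random.default_rng(1)
observed = 10.0

def toy_model(k):                  # stand-in for a wetland model run
    return 12.0 * np.exp(-0.3 * k)

samples = rng.uniform(0.1, 1.5, size=5000)            # Monte-Carlo parameters
preds = np.array([toy_model(k) for k in samples])
likelihood = 1.0 / (1.0 + (preds - observed) ** 2)    # informal likelihood
behavioral = preds[likelihood > 0.8]                   # behavior parameter sets

cv = behavioral.std() / behavioral.mean()              # characteristic CV
print(f"{len(behavioral)} behavioral sets, CV = {cv:.3f}")
```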
Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal
2017-01-01
Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models was tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module and area give overall better and statistically valid age estimates. These models integrate punctual nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase of individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.
An in-premise model for Legionella exposure during showering events
An exposure model was constructed to predict the critical Legionella densities in an engineered water system that might result in infection from inhalation of aerosols containing the pathogen while showering. The model predicted the Legionella densities in the shower air, water ...
Deriving the expected utility of a predictive model when the utilities are uncertain.
Cooper, Gregory F; Visweswaran, Shyam
2005-01-01
Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.
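The idea of propagating utility uncertainty can be sketched by sampling the utilities from assumed ranges and recomputing the expected utility of each decision. All probabilities and utility ranges below are illustrative assumptions, not values from the coronary artery disease models evaluated in the paper.

```python
# Sketch of evaluating a model-guided decision's expected utility when the
# utilities themselves are uncertain: sample utilities from assumed ranges and
# propagate them through the decision. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
p_disease = 0.30                      # model's predicted probability of disease

def expected_utility(u_tp, u_fp, u_fn, u_tn, treat):
    if treat:
        return p_disease * u_tp + (1 - p_disease) * u_fp
    return p_disease * u_fn + (1 - p_disease) * u_tn

# Uncertain utilities drawn from assumed plausible ranges
u_tp = rng.uniform(0.80, 0.95, 10000)   # treated and diseased
u_fp = rng.uniform(0.85, 1.00, 10000)   # treated but healthy
u_fn = rng.uniform(0.00, 0.40, 10000)   # untreated but diseased
u_tn = np.ones(10000)                   # untreated and healthy

eu_treat = expected_utility(u_tp, u_fp, u_fn, u_tn, treat=True)
eu_wait = expected_utility(u_tp, u_fp, u_fn, u_tn, treat=False)
print(f"treat: {eu_treat.mean():.3f} +/- {eu_treat.std():.3f}, "
      f"wait: {eu_wait.mean():.3f} +/- {eu_wait.std():.3f}")
```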
Bouwhuis, Stef; Geuskens, Goedele A; Boot, Cécile R L; Bongers, Paulien M; van der Beek, Allard J
2017-08-01
To construct prediction models for transitions to combination multiple job holding (MJH) (multiple jobs as an employee) and hybrid MJH (being an employee and self-employed), among employees aged 45-64. A total of 5187 employees in the Netherlands completed online questionnaires annually between 2010 and 2013. We applied logistic regression analyses with a backward elimination strategy to construct prediction models. Transitions to combination MJH and hybrid MJH were best predicted by a combination of factors including: demographics, health and mastery, work characteristics, work history, skills and knowledge, social factors, and financial factors. Not having a permanent contract and a poor household financial situation predicted both transitions. Some predictors only predicted combination MJH, e.g., working part-time, or hybrid MJH, e.g., work-home interference. A wide variety of factors predict combination MJH and/or hybrid MJH. The prediction model approach allowed for the identification of predictors that have not been previously studied. © 2017 Wiley Periodicals, Inc.
Assessing the stability of human locomotion: a review of current measures
Bruijn, S. M.; Meijer, O. G.; Beek, P. J.; van Dieën, J. H.
2013-01-01
Falling poses a major threat to the steadily growing population of the elderly in modern-day society. A major challenge in the prevention of falls is the identification of individuals who are at risk of falling owing to an unstable gait. At present, several methods are available for estimating gait stability, each with its own advantages and disadvantages. In this paper, we review the currently available measures: the maximum Lyapunov exponent (λS and λL), the maximum Floquet multiplier, variability measures, long-range correlations, extrapolated centre of mass, stabilizing and destabilizing forces, foot placement estimator, gait sensitivity norm and maximum allowable perturbation. We explain what these measures represent and how they are calculated, and we assess their validity, divided up into construct validity, predictive validity in simple models, convergent validity in experimental studies, and predictive validity in observational studies. We conclude that (i) the validity of variability measures and λS is best supported across all levels, (ii) the maximum Floquet multiplier and λL have good construct validity, but negative predictive validity in models, negative convergent validity and (for λL) negative predictive validity in observational studies, (iii) long-range correlations lack construct validity and predictive validity in models and have negative convergent validity, and (iv) measures derived from perturbation experiments have good construct validity, but data are lacking on convergent validity in experimental studies and predictive validity in observational studies. In closing, directions for future research on dynamic gait stability are discussed. PMID:23516062
A univariate model of river water nitrate time series
NASA Astrophysics Data System (ADS)
Worrall, F.; Burt, T. P.
1999-01-01
Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels and predictions were tested against data held back from the model construction process - predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
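The fit-then-forecast workflow described above can be sketched with a seasonal ARIMA from the third-party statsmodels package on a synthetic monthly nitrate-like series; the model orders, data and holdout scheme are placeholders rather than those used for the four catchments.

```python
# Sketch of the fit/forecast workflow on a synthetic monthly nitrate-like
# series using statsmodels. Orders and data are placeholders only.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
months = pd.date_range("1985-01", periods=120, freq="MS")
season = 3.0 * np.cos(2 * np.pi * months.month / 12)        # winter peak
series = pd.Series(20 + season + rng.normal(scale=1.0, size=120), index=months)

train, test = series.iloc[:-12], series.iloc[-12:]           # hold back a year
model = SARIMAX(train, order=(1, 0, 0), seasonal_order=(1, 0, 0, 12)).fit(disp=False)
forecast = model.forecast(steps=12)

mape = (abs(forecast - test) / test).mean() * 100            # mean % error
print(f"mean absolute percentage error: {mape:.1f}%")
```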
Lack of complete and appropriate human data requires prediction of the hazards for exposed human populations by extrapolation from available animal and in vitro data. Predictive models for the toxicity of chemicals can be constructed by linking kinetic and mode of action data uti...
Predictive modelling of Lactobacillus casei KN291 survival in fermented soy beverage.
Zielińska, Dorota; Kołożyn-Krajewska, Danuta; Goryl, Antoni; Motyl, Ilona
2014-02-01
The aim of the study was to construct and verify predictive growth and survival models of a potentially probiotic bacterium in fermented soy beverage. The research material included natural soy beverage (Polgrunt, Poland) and the strain of lactic acid bacteria (LAB), Lactobacillus casei KN291. To construct predictive models for the growth and survival of L. casei KN291 bacteria in the fermented soy beverage, we designed an experiment which allowed the collection of CFU data. Fermented soy beverage samples were stored at various temperature conditions (5, 10, 15, and 20°C) for 28 days. On the basis of the obtained data concerning the survival of L. casei KN291 bacteria in soy beverage at different temperature and time conditions, two non-linear models (r^2 = 0.68-0.93) and two surface models (r^2 = 0.76-0.79) were constructed; these models described the behaviour of the bacteria in the product to a satisfactory extent. Verification of the surface models was carried out utilizing the validation data, collected at 7°C over 28 days. It was found that the applied models were well fitted and carried only small systematic errors, as evidenced by the accuracy factor (Af), bias factor (Bf) and mean squared error (MSE). The constructed microbiological growth and survival models of L. casei KN291 in fermented soy beverage enable the estimation of the product's shelf-life period, which in this case is defined by the requirement for the level of the bacteria to be above 10^6 CFU/cm^3. The constructed models may be useful as a tool for manufacturers of probiotic foods to estimate shelf-life periods.
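The validation indices cited (Af, Bf, MSE) have standard definitions in predictive microbiology (Ross-style bias and accuracy factors); a sketch of computing them from paired observed and predicted counts follows. The paired counts are invented examples, not data from the study.

```python
# Sketch of the validation indices named above -- bias factor (Bf), accuracy
# factor (Af) and MSE -- using their standard predictive-microbiology
# definitions. The paired counts below are invented examples.
import numpy as np

observed = np.array([2.0e7, 8.5e6, 4.1e6, 1.9e6])   # CFU/cm^3
predicted = np.array([1.7e7, 9.3e6, 3.6e6, 2.2e6])

log_ratio = np.log10(predicted / observed)
Bf = 10 ** log_ratio.mean()                 # systematic over/under-prediction
Af = 10 ** np.abs(log_ratio).mean()         # average spread around the 1:1 line
mse = np.mean((np.log10(predicted) - np.log10(observed)) ** 2)
print(f"Bf = {Bf:.3f}, Af = {Af:.3f}, MSE(log10) = {mse:.4f}")
```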
A. Weiskittel; D. Maguire; R. Monserud
2007-01-01
Hybrid models offer the opportunity to improve future growth projections by combining advantages of both empirical and process-based modeling approaches. Hybrid models have been constructed in several regions and their performance relative to a purely empirical approach has varied. A hybrid model was constructed for intensively managed Douglas-fir plantations in the...
Vaegter, Katarina Kebbon; Lakic, Tatevik Ghukasyan; Olovsson, Matts; Berglund, Lars; Brodin, Thomas; Holte, Jan
2017-03-01
To construct a prediction model for live birth after in vitro fertilization/intracytoplasmic sperm injection (IVF/ICSI) treatment and single-embryo transfer (SET) after 2 days of embryo culture. Prospective observational cohort study. University-affiliated private infertility center. SET in 8,451 IVF/ICSI treatments in 5,699 unselected consecutive couples during 1999-2014. A total of 100 basal patient characteristics and treatment data were analyzed for associations with live birth after IVF/ICSI (adjusted for repeated treatments) and subsequently combined for prediction model construction. Live birth rate (LBR) and performance of live birth prediction model. Embryo score, treatment history, ovarian sensitivity index (OSI; number of oocytes/total dose of FSH administered), female age, infertility cause, endometrial thickness, and female height were all independent predictors of live birth. A prediction model (training data set; n = 5,722) based on these variables showed moderate discrimination, but predicted LBR with high accuracy in subgroups of patients, with LBR estimates ranging from <10% to >40%. Outcomes were similar in an internal validation data set (n = 2,460). Based on 100 variables prospectively recorded during a 15-year period, a model for live birth prediction after strict SET was constructed and showed excellent calibration in internal validation. For the first time, female height qualified as a predictor of live birth after IVF/ICSI. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Mendel, Raymond M.; Dickinson, Terry L.
Vroom's cognitive model, which proposes to both explain and predict an individual's level of work productivity by drawing on the construct motivation, is discussed and three hypotheses generated: (1) that Vroom's model does predict performance in a non-industrial setting; (2) that it predicts self-perceived performance better than measures…
Oviedo de la Fuente, Manuel; Febrero-Bande, Manuel; Muñoz, María Pilar; Domínguez, Àngela
2018-01-01
This paper proposes a novel approach that uses meteorological information to predict the incidence of influenza in Galicia (Spain). It extends the Generalized Least Squares (GLS) methods in the multivariate framework to functional regression models with dependent errors. These kinds of models are useful when the recent history of the incidence of influenza is not readily available (for instance, because of delays in communication with health informants) and the prediction must be constructed by correcting for the temporal dependence of the residuals and using more accessible variables. A simulation study shows that the GLS estimators render better estimates of the parameters associated with the regression model than the classical models do. They obtain extremely good results from the predictive point of view and are competitive with the classical time series approach for the incidence of influenza. An iterative version of the GLS estimator (called iGLS) was also proposed that can help to model complicated dependence structures. For constructing the model, the distance correlation measure [Formula: see text] was employed to select relevant information for predicting the influenza rate, mixing multivariate and functional variables. These kinds of models are extremely useful to health managers in allocating resources in advance to manage influenza epidemics.
Trends in highway construction costs in Louisiana : technical summary.
DOT National Transportation Integrated Search
1999-09-01
The objectives of this study are to observe past trends in highway construction costs in Louisiana, identify factors that determine these costs, quantify their impact, and establish a model that can be used to predict future construction cost in Loui...
Scoring and staging systems using cox linear regression modeling and recursive partitioning.
Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H
2006-01-01
Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
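The core "score with a Cox model, then cut the score into stages" step can be sketched with the third-party lifelines package and quantile cuts; the synthetic covariates and the choice of three stages are assumptions, and the cross-validation and amalgamation algorithms proposed in the paper are not reproduced here.

```python
# Sketch of scoring with a Cox model and cutting the score into stages, using
# the third-party lifelines package. Synthetic data and three stages are
# placeholders; the paper's amalgamation/cross-validation steps are omitted.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "albumin": rng.normal(3.5, 0.5, n),
    "bilirubin": rng.lognormal(0.5, 0.6, n),
})
risk = 0.8 * df["bilirubin"] - 1.0 * df["albumin"]
df["time"] = rng.exponential(scale=np.exp(-0.3 * risk))   # synthetic survival
df["event"] = rng.integers(0, 2, n)                       # 1 = event observed

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
score = np.asarray(cph.predict_partial_hazard(df)).ravel()  # risk score
df["stage"] = pd.qcut(score, q=3, labels=["I", "II", "III"])
print(df["stage"].value_counts())
```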
Prediction of barrier island restoration response and its interactions with the natural environment
NASA Astrophysics Data System (ADS)
Plant, N. G.; Stockdon, H. F.; Flocks, J.; Sallenger, A. H.; Long, J. W.; Cormier, J. M.; Guy, K.; Thompson, D. M.
2012-12-01
A 2-meter high sand berm was constructed along Chandeleur Island, Louisiana, in an attempt to provide protection against the Deepwater Horizon oil spill. Berm construction started in June 2010 and ended in April 2011. Variations in both island morphology and construction of the 15-km long berm resulted in the development of four different morphologies: a berm built on a submerged island platform to the north of the existing island, a berm built seaward of the existing island, a berm built along the island shoreline, and portions of the island where no berm was constructed. These different morphologies provide a natural laboratory for testing the understanding of berm and barrier island response to storms. In particular, the ability to predict berm evolution using statistical modeling of the interactions between the island, berm, and oceanographic processes was tested. This particular test was part of a broader USGS research effort to understand processes that bridge the gap between short-term storm response and longer-term geologic and climate interactions that shape barrier-island systems. Berm construction and subsequent berm and island evolution were monitored using satellite and aerial remote sensing and topographic and bathymetric surveys. To date, significant berm evolution occurred in the north (including terminal erosion, overwash, and a large breach), center (overwash and numerous breaches), and south (overwash). The response of the central portion of the berm to winter and tropical storms was significant such that none of the residual berm remained within its construction footprint. The evolution of the central portion of the berm was well predicted using a statistical modeling approach that used predicted and modeled wave conditions to identify the likelihood of overwash events. Comparison of different modeled evolution scenarios to the one that was observed showed that berm response was sensitive to the frequency and severity of winter and tropical storms. These findings demonstrate an observation and modeling approach that can be applied to understanding and managing other natural and restored barrier islands.
Applying the age-shift approach to model responses to midrotation fertilization
Colleen A. Carlson; Thomas R. Fox; H. Lee Allen; Timothy J. Albaugh
2010-01-01
Growth and yield models used to evaluate midrotation fertilization economics require adjustments to account for the typically observed responses. This study investigated the use of age-shift models to predict midrotation fertilizer responses. Age-shift prediction models were constructed from a regional study consisting of 43 installations of a nitrogen (N) by...
Seven lessons from manyfield inflation in random potentials
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David
2018-01-01
We study inflation in models with many interacting fields subject to randomly generated scalar potentials. We use methods from non-equilibrium random matrix theory to construct the potentials and an adaption of the `transport method' to evolve the two-point correlators during inflation. This construction allows, for the first time, for an explicit study of models with up to 100 interacting fields supporting a period of `approximately saddle-point' inflation. We determine the statistical predictions for observables by generating over 30,000 models with 2–100 fields supporting at least 60 efolds of inflation. These studies lead us to seven lessons: i) Manyfield inflation is not single-field inflation, ii) The larger the number of fields, the simpler and sharper the predictions, iii) Planck compatibility is not rare, but future experiments may rule out this class of models, iv) The smoother the potentials, the sharper the predictions, v) Hyperparameters can transition from stiff to sloppy, vi) Despite tachyons, isocurvature can decay, vii) Eigenvalue repulsion drives the predictions. We conclude that many of the `generic predictions' of single-field inflation can be emergent features of complex inflation models.
NASA Astrophysics Data System (ADS)
Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann
2018-03-01
Accurate daily river flow forecasting is essential in many applications of water resources such as hydropower operation, agricultural planning and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurement asymmetry). In fact, one of the potential solutions to the measurement asymmetry issue is data re-sampling: either only the hydrological data or only the balanced part of the hydro-meteorological data set is considered during the forecasting process. However, the main disadvantage is that potentially relevant information from the left-out data may be lost. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model that is implemented over the non-re-sampled data. The introduced modeling approach must be capable of exploiting the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained over the re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of rainfall and 24 years of daily river flow measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) are trained. Upon validation, the second model showed superior performance and accuracy, with the ability to preserve higher day-to-day variability for 1, 3 and 6 days ahead. In fact, for the longest lead period, the C-FSM and TPC-FSM were able to explain 84.6% and 86.5% of the actual river flow variation, respectively. Overall, the results indicate that the TPC-FSM model provides a better tool for capturing extreme flows in streamflow prediction.
Identifying pollution sources and predicting urban air quality using ensemble learning methods
NASA Astrophysics Data System (ADS)
Singh, Kunwar P.; Gupta, Shikha; Rai, Premanjali
2013-12-01
In this study, principal components analysis (PCA) was performed to identify air pollution sources and tree-based ensemble learning models were constructed to predict the urban air quality of Lucknow (India) using the air quality and meteorological databases pertaining to a period of five years. PCA identified vehicular emissions and fuel combustion as major air pollution sources. The air quality indices revealed that the air quality was unhealthy during the summer and winter. Ensemble models were constructed to discriminate between the seasonal air qualities, to identify the factors responsible for the discrimination, and to predict the air quality indices. Accordingly, single decision tree (SDT), decision tree forest (DTF), and decision treeboost (DTB) models were constructed and their generalization and predictive performance was evaluated in terms of several statistical parameters and compared with the conventional machine learning benchmark, support vector machines (SVM). The DT and SVM models discriminated the seasonal air quality, rendering misclassification rates (MR) of 8.32% (SDT), 4.12% (DTF), 5.62% (DTB), and 6.18% (SVM) in the complete data. The AQI and CAQI regression models yielded a correlation between measured and predicted values and root mean squared error of 0.901, 6.67 and 0.825, 9.45 (SDT); 0.951, 4.85 and 0.922, 6.56 (DTF); 0.959, 4.38 and 0.929, 6.30 (DTB); 0.890, 7.00 and 0.836, 9.16 (SVR) in the complete data. The DTF and DTB models outperformed the SVM in both classification and regression, which could be attributed to the incorporation of the bagging and boosting algorithms in these models. The proposed ensemble models successfully predicted the urban ambient air quality and can be used as effective tools for its management.
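The model comparison described above can be sketched with scikit-learn stand-ins for the single tree, bagged forest, boosted ensemble and SVM, reporting the same kind of metrics (correlation and RMSE); the data are synthetic placeholders rather than the Lucknow air quality records, and scikit-learn replaces the specific implementations used in the study.

```python
# Sketch comparing a single tree, a bagged forest, a boosted ensemble and an
# SVM regressor on synthetic data, reporting correlation and RMSE as above.
# Data and model settings are placeholders only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=600, n_features=8, noise=15.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "single tree": DecisionTreeRegressor(random_state=0),
    "forest (bagging)": RandomForestRegressor(random_state=0),
    "boosted trees": GradientBoostingRegressor(random_state=0),
    "SVM": SVR(C=100.0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    r = np.corrcoef(y_te, pred)[0, 1]
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name:18s} r = {r:.3f}  RMSE = {rmse:.1f}")
```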
Mathematical model for predicting human vertebral fracture
NASA Technical Reports Server (NTRS)
Benedict, J. V.
1973-01-01
Mathematical model has been constructed to predict dynamic response of tapered, curved beam columns inasmuch as the human spine closely resembles this form. Model takes into consideration effects of impact force, mass distribution, and material properties. Solutions were verified by dynamic tests on a curved, tapered, elastic polyethylene beam.
Gregorini, P; Beukes, P C; Hanigan, M D; Waghorn, G; Muetzel, S; McNamara, J P
2013-08-01
Molly is a deterministic, mechanistic, dynamic model representing the digestion, metabolism, and production of a dairy cow. This study compared the predictions of enteric methane production from the original version of Molly (MollyOrigin) and 2 new versions of Molly. Updated versions included new ruminal fiber digestive parameters and animal hormonal parameters (Molly84) and a revised version of digestive and ruminal parameters (Molly85), using 3 different ruminal volatile fatty acid (VFA) stoichiometry constructs to describe the VFA pattern and methane (CH4) production (g of CH4/d). The VFA stoichiometry constructs were the original forage and mixed-diet VFA constructs and a new VFA stoichiometry based on a more recent and larger set of data that includes lactate and valerate production, amylolytic and cellulolytic bacteria, as well as protozoal pools. The models' outputs were challenged using data from 16 dairy cattle 26 mo old [standard error of the mean (SEM)=1.7], 82 (SEM=8.7) d in milk, producing 17 (SEM=0.2) kg of milk/d, and fed fresh-cut ryegrass [dry matter intake=12.3 (SEM=0.3) kg of DM/d] in respiration chambers. Mean observed CH4 production was 266±5.6 SEM (g/d). Mean predicted values for CH4 production were 287 and 258 g/d for MollyOrigin without and with the new VFA construct. Model Molly84 predicted 295 and 288 g of CH4/d with and without the new VFA settings. Model Molly85 predicted the same CH4 production (276 g/d) with or without the new VFA construct. The incorporation of the new VFA construct did not consistently reduce the low prediction error across the versions of Molly evaluated in the present study. The improvements in the Molly versions from MollyOrigin to Molly84 to Molly85 resulted in a decrease in mean square prediction error from 8.6 to 8.3 to 4.3% using the forage diet setting. The majority of the mean square prediction error was apportioned to random bias (e.g., 43, 65, and 70% in MollyOrigin, Molly84, and Molly85, respectively, on the forage setting, showing that with the updated versions a greater proportion of error was random). The slope bias was less than 2% in all cases. We concluded that, of the versions of Molly used for pastoral systems, Molly85 has the capability to predict CH4 production from grass-fed dairy cows with the highest accuracy. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
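The mean-bias / slope-bias / random-error split reported above follows the standard decomposition of mean square prediction error; a sketch of computing the three fractions from paired observed and predicted values follows, with illustrative numbers rather than the chamber data.

```python
# Sketch of the standard decomposition of mean square prediction error into
# mean bias, slope bias and random (disturbance) components, as reported for
# the Molly comparisons above. The paired values are illustrative only.
import numpy as np

obs = np.array([250., 270., 262., 281., 255., 274.])   # observed CH4, g/d
pred = np.array([262., 266., 275., 290., 248., 283.])  # predicted CH4, g/d

mspe = np.mean((pred - obs) ** 2)
r = np.corrcoef(obs, pred)[0, 1]
s_o, s_p = obs.std(), pred.std()                        # population SDs

mean_bias = (pred.mean() - obs.mean()) ** 2
slope_bias = (s_p - r * s_o) ** 2
random_err = (1 - r ** 2) * s_o ** 2
for name, part in [("mean bias", mean_bias), ("slope bias", slope_bias),
                   ("random", random_err)]:
    print(f"{name:10s} {100 * part / mspe:5.1f}% of MSPE")
```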
Eric J. Gustafson
2013-01-01
Researchers and natural resource managers need predictions of how multiple global changes (e.g., climate change, rising levels of air pollutants, exotic invasions) will affect landscape composition and ecosystem function. Ecological predictive models used for this purpose are constructed using either a mechanistic (process-based) or a phenomenological (empirical)...
ERIC Educational Resources Information Center
Cheng, Meng-Fei; Brown, David E.
2010-01-01
This study explores the spontaneous explanatory models children construct, critique, and revise in the context of tasks in which children need to predict, observe, and explain phenomena involving magnetism. It further investigates what conceptual resources students use, and in what ways they use them, to construct explanatory models, and the…
Zoellner, Jamie M; Porter, Kathleen J; Chen, Yvonnes; Hedrick, Valisa E; You, Wen; Hickman, Maja; Estabrooks, Paul A
2017-05-01
Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving SSB behaviours. Using SIPsmartER data, this study explores prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data of 155 intervention participants. Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. TPB constructs explained 32% of the variance cross sectionally and 20% prospectively in BI; and explained 13-20% of variance cross sectionally and 6% prospectively. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6-38%) and behaviour (average 30%, range 6-55%) were significant. Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases.
Anger, hostility, and hospitalizations in patients with heart failure.
Keith, Felicia; Krantz, David S; Chen, Rusan; Harris, Kristie M; Ware, Catherine M; Lee, Amy K; Bellini, Paula G; Gottlieb, Stephen S
2017-09-01
Heart failure patients have a high hospitalization rate, and anger and hostility are associated with coronary heart disease morbidity and mortality. Using structural equation modeling, this prospective study assessed the predictive validity of anger and hostility traits for cardiovascular and all-cause rehospitalizations in patients with heart failure. 146 heart failure patients were administered the STAXI and Cook-Medley Hostility Inventory to measure anger, hostility, and their component traits. Hospitalizations were recorded for up to 3 years following baseline. Causes of hospitalizations were categorized as heart failure, total cardiac, noncardiac, and all-cause (sum of cardiac and noncardiac). Measurement models were separately fit for Anger and Hostility, followed by a Confirmatory Factor Analysis to estimate the relationship between the Anger and Hostility constructs. An Anger model consisted of State Anger, Trait Anger, Anger Expression Out, and Anger Expression In, and a Hostility model included Cynicism, Hostile Affect, Aggressive Responding, and Hostile Attribution. The latent construct of Anger did not predict any of the hospitalization outcomes, but Hostility significantly predicted all-cause hospitalizations. Analyses of individual trait components of each of the 2 models indicated that Anger Expression Out predicted all-cause and noncardiac hospitalizations, and Trait Anger predicted noncardiac hospitalizations. None of the individual components of Hostility were related to rehospitalizations or death. The construct of Hostility and several components of Anger are predictive of hospitalizations that were not specific to cardiac causes. Mechanisms common to a variety of health problems, such as self-care and risky health behaviors, may be involved in these associations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Mysid Population Responses to Resource Limitation Differ from those Predicted by Cohort Studies
Effects of anthropogenic stressors on animal populations are often evaluated by assembling vital rate responses from isolated cohort studies into a single demographic model. However, models constructed from cohort studies are difficult to translate into ecological predictions be...
DOT National Transportation Integrated Search
2008-04-01
The objective of this study was to develop resilient modulus prediction models for possible application in the quality control/quality assurance (QC/QA) procedures during and after the construction of pavement layers. Field and laboratory testing pro...
ERIC Educational Resources Information Center
Standage, Martyn; Duda, Joan L.; Ntoumanis, Nikos
2003-01-01
Examines a study of student motivation in physical education that incorporated constructs from achievement goal and self-determination theories. Self-determined motivation was found to positively predict, whereas amotivation was a negative predictor of leisure-time physical activity intentions. (Contains 86 references and 3 tables.) (GCP)
Fu, Jicheng; Jones, Maria; Jan, Yih-Kuen
2014-01-01
Wheelchair tilt and recline functions are two of the most desirable features for relieving seating pressure to decrease the risk of pressure ulcers. The effective guidance on wheelchair tilt and recline usage is therefore critical to pressure ulcer prevention. The aim of this study was to demonstrate the feasibility of using machine learning techniques to construct an intelligent model to provide personalized guidance to individuals with spinal cord injury (SCI). The motivation stems from the clinical evidence that the requirements of individuals vary greatly and that no universal guidance on tilt and recline usage could possibly satisfy all individuals with SCI. We explored all aspects involved in constructing the intelligent model and proposed approaches tailored to suit the characteristics of this preliminary study, such as the way of modeling research participants, using machine learning techniques to construct the intelligent model, and evaluating the performance of the intelligent model. We further improved the intelligent model's prediction accuracy by developing a two-phase feature selection algorithm to identify important attributes. Experimental results demonstrated that our approaches held the promise: they could effectively construct the intelligent model, evaluate its performance, and refine the participant model so that the intelligent model's prediction accuracy was significantly improved.
Bankruptcy Prediction in the Construction Industry: Financial Ratio Analysis
1989-08-01
financial reporting between the two industries. Using this information, an effort will be made to modify the models so that they are applicable to the construction industry. Keywords: Analysis of variance,
Hu, Jun; Ji, Ming-liang; Qian, Bang-ping; Qiu, Yong; Wang, Bin; Yu, Yang; Zhu, Ze-Zhang; Jiang, Jun
2014-11-01
A retrospective radiographical study. To construct a predictive model for pelvic tilt (PT) based on the sacrofemoral-pubic (SFP) angle in patients with thoracolumbar kyphosis secondary to ankylosing spondylitis (AS). PT is a key pelvic parameter in the regulation of spinal sagittal alignment that can be used to plan the appropriate osteotomy angle in patients with AS with thoracolumbar kyphosis. However, it can be difficult to measure PT in patients whose femoral heads are poorly visualized on lateral radiographs. Previous studies showed that the SFP angle could be used to evaluate PT in adult patients with scoliosis; however, this method has not been validated in patients with AS. A total of 115 patients with AS with thoracolumbar kyphosis were included. Full-length anteroposterior and lateral spine radiographs were available for all patients, with spinal and pelvic anatomical landmarks clearly identified. PT, SFP angle, and global kyphosis were measured. The patients were randomly divided into group A (n=65) and group B (n=50). In group A, the predictive model for PT was constructed from the results of linear regression analysis. In group B, the predictive ability and accuracy of the model were investigated. In group A, Pearson correlation analysis revealed a strong correlation between the SFP angle and PT (r=0.852; P<0.001). The predictive model for PT was constructed as PT=72.3-0.82×(SFP angle). In group B, PT was predicted by the model with a mean error of 4.6° (SD=4.5°) and a predictive value of 78%. PT can be accurately predicted from the SFP angle using the current model, PT=72.3-0.82×(SFP angle), when the femoral heads are poorly visualized on lateral radiographs in patients with AS with thoracolumbar kyphosis. Level of Evidence: 4.
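A minimal sketch of the construction/validation procedure described above: fit a simple linear regression of PT on the SFP angle in one group and evaluate the mean prediction error and the proportion of predictions within a tolerance in the other. The angle data, noise level, and 10° tolerance are hypothetical; only the reported equation PT = 72.3 - 0.82 × (SFP angle) comes from the study.

```python
import numpy as np

def fit_pt_model(sfp_a, pt_a):
    """Fit PT = intercept + slope * SFP by least squares."""
    slope, intercept = np.polyfit(sfp_a, pt_a, deg=1)
    return slope, intercept

def validate(slope, intercept, sfp_b, pt_b, tolerance=10.0):
    """Mean absolute error, its SD, and fraction of predictions within tolerance."""
    pred = intercept + slope * np.asarray(sfp_b)
    err = np.abs(pred - np.asarray(pt_b))
    return err.mean(), err.std(), np.mean(err <= tolerance)

# Hypothetical data standing in for groups A and B.
rng = np.random.default_rng(0)
sfp_a = rng.uniform(40, 70, 65); pt_a = 72.3 - 0.82 * sfp_a + rng.normal(0, 4, 65)
sfp_b = rng.uniform(40, 70, 50); pt_b = 72.3 - 0.82 * sfp_b + rng.normal(0, 4, 50)

slope, intercept = fit_pt_model(sfp_a, pt_a)
print(f"fitted model: PT = {intercept:.1f} {slope:+.2f} * SFP")
print("mean error, SD, fraction within tolerance:", validate(slope, intercept, sfp_b, pt_b))
```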
Lorente, Laura; Salanova, Marisa; Martínez, Isabel M; Vera, María
2014-06-01
Traditionally, research focussing on psychosocial factors in the construction industry has focused mainly on the negative aspects of health and on results such as occupational accidents. This study, however, focuses on the specific relationships among the different positive psychosocial factors shared by construction workers that could be responsible for occupational well-being and outcomes such as performance. The main objective of this study was to test whether personal resources predict self-rated job performance through job resources and work engagement. Following the predictions of Bandura's Social Cognitive Theory and the motivational process of the Job Demands-Resources Model, we expect that the relationship between personal resources and performance will be fully mediated by job resources and work engagement. The sample consists of 228 construction workers. Structural equation modelling supports the research model. Personal resources (i.e. self-efficacy, mental and emotional competences) play a predicting role in the perception of job resources (i.e. job control and supervisor social support), which in turn leads to work engagement and self-rated performance. This study emphasises the crucial role that personal resources play in determining how people perceive job resources by determining the levels of work engagement and, hence, their self-rated job performance. Theoretical and practical implications are discussed. © 2014 International Union of Psychological Science.
Li, Xiaochuan; Bai, Xuedong; Wu, Yaohong; Ruan, Dike
2016-03-15
To construct and validate a model to predict responsible nerve roots in lumbar degenerative disease with diagnostic doubt (DD). From January 2009 to January 2013, 163 patients with DD were assigned to the construction (n = 106) or validation sample (n = 57) according to their admission times to hospital. Outcome was assessed according to the Japanese Orthopedic Association (JOA) recovery rate as excellent, good, fair, or poor; the first two results were considered an effective clinical outcome (ECO). Baseline patient and clinical characteristics were considered secondary variables. A multivariate logistic regression model was used to construct a model with ECO as the dependent variable and the other factors as explanatory variables. The odds ratios (ORs) of each risk factor were adjusted and transformed into a scoring system. The area under the curve (AUC) was calculated and validated in both internal and external samples. Moreover, a calibration plot and the predictive ability of this scoring system were also tested for further validation. The proportion of patients with DD with ECOs was around 76% in both the construction and validation models (76.4 and 75.5%, respectively). The risk factors associated with an ECO were a higher preoperative visual analogue pain scale (VAS) score (OR = 1.56, p < 0.01), stenosis levels of L4/5 or L5/S1 (OR = 1.44, p = 0.04), stenosis locations involving the neuroforamen (OR = 1.95, p = 0.01), neurological deficit (OR = 1.62, p = 0.01), and greater VAS improvement after selective nerve root block (SNRB) (OR = 3.42, p = 0.02). The internal area under the curve (AUC) was 0.85, and the external AUC was 0.72, with a good calibration plot of prediction accuracy. Besides, the predictive ability for ECOs was not different from the actual results (p = 0.532). We have constructed and validated a predictive model for confirming responsible nerve roots in patients with DD. The associated risk factors were preoperative VAS score, stenosis levels of L4/5 or L5/S1, stenosis locations involving the neuroforamen, neurological deficit, and VAS improvement after SNRB. A tool such as this is beneficial in the preoperative counselling of patients, shared surgical decision making, and ultimately improving safety in spine surgery.
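The sketch below illustrates, on hypothetical data, the general workflow the abstract describes: fit a multivariate logistic regression for ECO, convert the adjusted coefficients into an integer scoring system, and check discrimination via the AUC. The variable names mirror the listed risk factors, but the data, coefficients, and point-assignment rule are assumptions, not the published model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 106
df = pd.DataFrame({
    "preop_vas": rng.integers(3, 10, n),
    "stenosis_l45_l5s1": rng.integers(0, 2, n),
    "foraminal_stenosis": rng.integers(0, 2, n),
    "neuro_deficit": rng.integers(0, 2, n),
    "snrb_vas_improvement": rng.integers(0, 6, n),
})
# Simulate an outcome loosely consistent with the ~76% ECO rate reported above.
logit = (0.4 * df["preop_vas"] + 0.4 * df["stenosis_l45_l5s1"]
         + 0.7 * df["foraminal_stenosis"] + 0.5 * df["neuro_deficit"]
         + 1.2 * df["snrb_vas_improvement"] - 5.0)
df["eco"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df.drop(columns="eco"))
fit = sm.Logit(df["eco"].astype(int), X).fit(disp=False)
odds_ratios = np.exp(fit.params.drop("const"))

# Simple points system: scale each coefficient relative to the smallest one and round.
points = (fit.params.drop("const") / fit.params.drop("const").abs().min()).round()
score = (df.drop(columns="eco") * points).sum(axis=1)
print(odds_ratios)
print("AUC of the integer score:", roc_auc_score(df["eco"], score))
```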
Reid, Allecia E.; Aiken, Leona S.
2011-01-01
The purpose of this research was to select from the health belief model (HBM), theories of reasoned action (TRA) and planned behaviour (TPB), information-motivation-behavioural skills model (IMB), and social cognitive theory (SCT) the strongest longitudinal predictors of women’s condom use and to combine these constructs into a single integrated model of condom use. The integrated model was evaluated for prediction of condom use among young women who had steady versus casual partners. At Time 1, all constructs of the five models and condom use were assessed in an initial and a replication sample (n= 193, n= 161). Condom use reassessed 8 weeks later (Time 2) served as the main outcome. Information from IMB, perceived susceptibility, benefits, and barriers from HBM, self-efficacy and self-evaluative expectancies from SCT, and partner norm and attitudes from TPB served as indirect or direct predictors of condom use. All paths replicated across samples. Direct predictors of behaviour varied with relationship status: self-efficacy significantly predicted condom use for women with casual partners, while attitude and partner norm predicted for those with steady partners. Integrated psychosocial models, rich in constructs and relationships drawn from multiple theories of behaviour, may provide a more complete characterization of health protective behaviour. PMID:21678166
Visscher, H; Ross, C J D; Rassekh, S R; Sandor, G S S; Caron, H N; van Dalen, E C; Kremer, L C; van der Pal, H J; Rogers, P C; Rieder, M J; Carleton, B C; Hayden, M R
2013-08-01
The use of anthracyclines as effective antineoplastic drugs is limited by the occurrence of cardiotoxicity. Multiple genetic variants predictive of anthracycline-induced cardiotoxicity (ACT) in children were recently identified. The current study aimed to assess replication of these findings in an independent cohort of children. Twenty-three variants were tested for association with ACT in an independent cohort of 218 patients. Predictive models including genetic and clinical risk factors were constructed in the original cohort and assessed in the current replication cohort. We confirmed the association of rs17863783 in UGT1A6 with ACT in the replication cohort (P = 0.0062, odds ratio (OR) 7.98). Additional evidence for association of rs7853758 (P = 0.058, OR 0.46) and rs885004 (P = 0.058, OR 0.42) in SLC28A3 was found (combined P = 1.6 × 10⁻⁵ and P = 3.0 × 10⁻⁵, respectively). A previously constructed prediction model did not significantly improve risk prediction in the replication cohort over clinical factors alone. However, an improved prediction model constructed using replicated genetic variants as well as clinical factors discriminated significantly better between cases and controls than clinical factors alone in both the original (AUC 0.77 vs. 0.68, P = 0.0031) and replication cohorts (AUC 0.77 vs. 0.69, P = 0.060). We validated genetic variants in two genes predictive of ACT in an independent cohort. A prediction model combining replicated genetic variants as well as clinical risk factors might be able to identify high- and low-risk patients who could benefit from alternative treatment options. Copyright © 2013 Wiley Periodicals, Inc.
Adolescents' protection motivation and smoking behaviour.
Thrul, Johannes; Stemmler, Mark; Bühler, Anneke; Kuntsche, Emmanuel
2013-08-01
The protection motivation theory (PMT) is a well-known theory of behaviour change. This study tested the applicability of the sub-constructs of threat and coping appraisal in predicting adolescents' smoking-related behavioural intentions and smoking behaviour longitudinally. Adolescents (N = 494) aged 11-16 years and not currently smoking at baseline participated in the study. Predictive validity of PMT constructs was tested in a path analysis model. Self-efficacy significantly predicted behavioural intention at baseline, which significantly predicted behavioural intention at follow-up, which in turn predicted smoking behaviour at follow-up. The effect of self-efficacy on behavioural intention at follow-up was mediated by behavioural intention at baseline and the effect of self-efficacy on smoking behaviour was mediated by behavioural intention at baseline and follow-up. In conclusion, we found support for one part of the PMT, namely for the predictive validity of the coping appraisal construct self-efficacy in predicting adolescents' smoking-related behavioural intention and smoking behaviour. These results fail to support the appropriateness of the PMT's construct threat appraisal in longitudinally predicting adolescents' smoking as well as the applicability of communicating fear and negative information as preventive interventions for this target group.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows compared with conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that the ANN's point predictions lack reliability, since the uncertainty of the predictions is not quantified, and this limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques in a neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) the maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m³/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set (Fig. 1) shows that most of the observed flows lie within the constructed prediction interval, and therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that, when the ensemble mean is considered as the forecast, peak flows are predicted with improved accuracy compared with traditional single-point-forecast ANNs. Fig. 1: Prediction interval for a selected hydrograph.
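A much-simplified stand-in for the two-stage scheme described above (plainly not the GA-based optimization of the study): train an ANN, generate an ensemble by perturbing its trained weights to mimic parameter variability, and summarize the spread as a prediction interval reported by its width and coverage. The data, network size, and perturbation scale are arbitrary assumptions.

```python
import copy
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (300, 3))                                  # e.g. lagged rainfall/flow inputs
y = 50 * X[:, 0] + 30 * X[:, 1] ** 2 + rng.normal(0, 3, 300)     # synthetic "flow"

base = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

def ensemble_predictions(model, X, n_members=100, scale=0.05):
    """Perturb the trained weights to mimic the stage-2 parameter variability."""
    preds = []
    for _ in range(n_members):
        m = copy.deepcopy(model)
        m.coefs_ = [w + rng.normal(0, scale * np.abs(w).mean() + 1e-9, w.shape)
                    for w in m.coefs_]
        m.intercepts_ = [b + rng.normal(0, scale, b.shape) for b in m.intercepts_]
        preds.append(m.predict(X))
    return np.array(preds)

P = ensemble_predictions(base, X)
lower, upper = np.percentile(P, [2.5, 97.5], axis=0)
coverage = np.mean((y >= lower) & (y <= upper))
print("mean interval width:", np.mean(upper - lower), "coverage:", coverage)
```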
Evaluation of a Computational Model of Situational Awareness
NASA Technical Reports Server (NTRS)
Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)
2000-01-01
Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.
Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme
2012-09-01
A quantitative determinants-of-exposure analysis of respirable crystalline silica (RCS) levels in the construction industry was performed using a database compiled from an extensive literature review. Statistical models were developed to predict work-shift exposure levels by trade. Monte Carlo simulation was used to recreate exposures derived from summarized measurements which were combined with single measurements for analysis. Modeling was performed using Tobit models within a multimodel inference framework, with year, sampling duration, type of environment, project purpose, project type, sampling strategy and use of exposure controls as potential predictors. 1346 RCS measurements were included in the analysis, of which 318 were non-detects and 228 were simulated from summary statistics. The model containing all the variables explained 22% of total variability. Apart from trade, sampling duration, year and strategy were the most influential predictors of RCS levels. The use of exposure controls was associated with an average decrease of 19% in exposure levels compared to none, and increased concentrations were found for industrial, demolition and renovation projects. Predicted geometric means for year 1999 were the highest for drilling rig operators (0.238 mg m⁻³) and tunnel construction workers (0.224 mg m⁻³), while the estimated exceedance fraction of the ACGIH TLV by trade ranged from 47% to 91%. The predicted geometric means in this study indicated important overexposure compared to the TLV. However, the low proportion of variability explained by the models suggests that the construction trade is only a moderate predictor of work-shift exposure levels. The impact of the different tasks performed during a work shift should also be assessed to provide better management and control of RCS exposure levels on construction sites.
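A hedged sketch of the kind of censored (Tobit-style) regression referred to above, fitted by maximum likelihood with scipy on left-censored log-exposure data; the single predictor, detection limit, and simulated values are assumptions and do not reproduce the published multimodel analysis.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
n = 200
x = rng.integers(0, 2, n)                      # e.g. exposure control used (0/1)
log_y = 0.5 - 0.2 * x + rng.normal(0, 0.8, n)  # true log10 RCS level
lod = -0.5                                     # log10 detection limit (hypothetical)
censored = log_y < lod
obs = np.where(censored, lod, log_y)           # non-detects recorded at the limit

def neg_loglik(theta):
    """Tobit-style likelihood: normal density for detects, normal CDF for non-detects."""
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * x
    ll_obs = stats.norm.logpdf(obs[~censored], mu[~censored], sigma)
    ll_cens = stats.norm.logcdf((lod - mu[censored]) / sigma)
    return -(ll_obs.sum() + ll_cens.sum())

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, log_sigma = res.x
print("intercept, control effect, sigma:", b0, b1, np.exp(log_sigma))
print("estimated % change in level with controls:", 100 * (10 ** b1 - 1))
```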
Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions
NASA Technical Reports Server (NTRS)
Balmes, Etienne
1993-01-01
An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.
Schrodi, Steven J.; Mukherjee, Shubhabrata; Shan, Ying; Tromp, Gerard; Sninsky, John J.; Callear, Amy P.; Carter, Tonia C.; Ye, Zhan; Haines, Jonathan L.; Brilliant, Murray H.; Crane, Paul K.; Smelser, Diane T.; Elston, Robert C.; Weeks, Daniel E.
2014-01-01
Translation of results from genetic findings to inform medical practice is a highly anticipated goal of human genetics. The aim of this paper is to review and discuss the role of genetics in medically-relevant prediction. Germline genetics presages disease onset and therefore can contribute prognostic signals that augment laboratory tests and clinical features. As such, the impact of genetic-based predictive models on clinical decisions and therapy choice could be profound. However, given that (i) medical traits result from a complex interplay between genetic and environmental factors, (ii) the underlying genetic architectures for susceptibility to common diseases are not well-understood, and (iii) replicable susceptibility alleles, in combination, account for only a moderate amount of disease heritability, there are substantial challenges to constructing and implementing genetic risk prediction models with high utility. In spite of these challenges, concerted progress has continued in this area with an ongoing accumulation of studies that identify disease predisposing genotypes. Several statistical approaches with the aim of predicting disease have been published. Here we summarize the current state of disease susceptibility mapping and pharmacogenetics efforts for risk prediction, describe methods used to construct and evaluate genetic-based predictive models, and discuss applications. PMID:24917882
Simulation of Surface Erosion on a Logging Road in the Jackson Demonstration State Forest
Teresa Ish; David Tomberlin
2007-01-01
In constructing management models for the control of sediment delivery to streams, we have used a simulation model of road surface erosion known as the Watershed Erosion Prediction Project (WEPP) model, developed by the USDA Forest Service. This model predicts discharge, erosion, and sediment delivery at the road segment level, based on a stochastic climate simulator...
Abd-El-Fattah, Sabry M
2010-11-01
In this project, 119 undergraduates responded to a questionnaire tapping three psychological constructs implicated in Garrison's model of self-directed learning: self-management, self-monitoring, and motivation. Mediation analyses showed that these psychological constructs are interrelated and that motivation mediates the relationship between self-management and self-monitoring. Path modeling analyses revealed that self-management and self-monitoring significantly predicted academic achievement over two semesters with self-management being the strongest predictor. Motivation significantly predicted academic achievement over the second semester only. Implications of these findings for self-directed learning and academic achievement in a traditional classroom setting are discussed.
Abizari, Abdul-Razak; Pilime, Nerisa; Armar-Klemesu, Margaret; Brouwer, Inge D.
2013-01-01
Background Cowpeas are important staple legumes among the rural poor in northern Ghana. Our objectives were to assess the iron and zinc content of cowpea landraces and identify factors that predict the intention of mothers/caregivers to give cowpeas to their schoolchildren. Methods and Findings We performed biochemical analysis on 14 landraces of cowpeas and assessed the opinion of 120 caregiver-child pairs on constructs based on the combined model of the Theory of Planned Behaviour and Health Belief Model. We used correlations and multiple regressions to measure simple associations between constructs and identify predictive constructs. Cowpea landraces contained iron and zinc in the range of 4.9–8.2 mg/100 g d.w and 2.7–4.1 mg/100 g d.w respectively. The landraces also contained high amounts of phytate (477–1110 mg/100 g d.w) and polyphenol (327–1055 mg/100 g d.w). Intention of mothers was strongly associated (rs = 0.72, P<0.001) with and predicted (β = 0.63, P<0.001) behaviour. The constructs, barriers (β = –0.42, P = 0.001) and attitudes towards behaviour (β = 0.25, P<0.028), significantly predicted intention albeit the predictive ability of the model was weak. Conclusions We conclude that some cowpea landraces from northern Ghana have appreciable amounts of iron and zinc but probably with poor bioavailability. Attitudes towards giving cowpeas and perception of barriers are important predictors of caregivers’ intention to give cowpeas to their schoolchildren. Finally our results suggest that increasing knowledge on nutritional benefits of cowpeas may increase health values caregivers hold for their children in support of giving cowpeas to schoolchildren. PMID:23951289
Modeling the Etiology of Adolescent Substance Use: A Test of the Social Development Model
Catalano, Richard F.; Kosterman, Rick; Hawkins, J. David; Newcomb, Michael D.; Abbott, Robert D.
2007-01-01
The social development model is a general theory of human behavior that seeks to explain antisocial behaviors through specification of predictive developmental relationships. It incorporates the effects of empirical predictors (“risk factors” and “protective factors”) for antisocial behavior and attempts to synthesize the most strongly supported propositions of control theory, social learning theory, and differential association theory. This article examines the power of social development model constructs measured at ages 9 to 10 and 13 to 14 to predict drug use at ages 17 to 18. The sample of 590 is from the longitudinal panel of the Seattle Social Development Project, which in 1985 sampled fifth grade students from high crime neighborhoods in Seattle, Washington. Structural equation modeling techniques were used to examine the fit of the model to the data. Although all but one path coefficient were significant and in the expected direction, the model did not fit the data as well as expected (CFI=.87). We next specified second-order factors for each path to capture the substantial common variance in the constructs' opportunities, involvement, and rewards. This model fit the data well (CFI=.90). We conclude that the social development model provides an acceptable fit to predict drug use at ages 17 to 18. Implications for the temporal nature of key constructs and for prevention are discussed. PMID:17848978
Estimation of wind erosion from construction of a railway in arid northwest China
USDA-ARS?s Scientific Manuscript database
A state-of-the-art wind erosion simulation model, the Wind Erosion Prediction System and the United States Environmental Protection Agency’s AP-42 emission factors formula, were combined together to evaluate wind-blown dust emissions from various construction units from a railway construction projec...
DOT National Transportation Integrated Search
2009-05-01
The primary objective of this research was to develop models that predict the resilient modulus of cohesive and granular soils from the test results of various in-situ test devices for possible application in QA/QC during construction of pavement str...
Albaek, Mads O; Gernaey, Krist V; Hansen, Morten S; Stocks, Stuart M
2011-08-01
The purpose of this article is to demonstrate how a model can be constructed such that the progress of a submerged fed-batch fermentation of a filamentous fungus can be predicted with acceptable accuracy. The studied process was enzyme production with Aspergillus oryzae in 550 L pilot plant stirred tank reactors. Different conditions of agitation and aeration were employed, as well as two different impeller geometries. The limiting factor for productivity was oxygen supply to the fermentation broth, and the carbon substrate feed flow rate was controlled by the dissolved oxygen tension. In order to predict the available oxygen transfer in the system, the stoichiometry of the reaction equation including maintenance substrate consumption was first determined. A viscosity prediction model, based mainly on the biomass concentration, was then constructed, because the rising viscosity of the fermentation broth due to hyphal growth of the fungus leads to significantly lower mass transfer towards the end of the fermentation process. Each compartment of the model was shown to predict the experimental results well. The overall model can be used to predict key process parameters at varying fermentation conditions. Copyright © 2011 Wiley Periodicals, Inc.
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression for estimating adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
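To make the Spiegelhalter and Knill-Jones idea concrete, the sketch below (simulated data, not the obstructive-airways case study) computes each test's unadjusted likelihood ratios, uses the per-patient log-LRs as predictors in a logistic regression, and reads the fitted coefficients as shrinkage factors that allow for interdependence between tests.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
disease = rng.random(n) < 0.3
# Two correlated binary "tests" (e.g. a history item and an examination finding).
t1 = (rng.random(n) < np.where(disease, 0.8, 0.3)).astype(int)
t2 = (rng.random(n) < np.where(t1 == 1, 0.7, 0.2)).astype(int)

def log_lr(test, disease):
    """Unadjusted log likelihood ratios for a positive and a negative result."""
    lr_pos = test[disease].mean() / test[~disease].mean()
    lr_neg = (1 - test[disease].mean()) / (1 - test[~disease].mean())
    return np.where(test == 1, np.log(lr_pos), np.log(lr_neg))

X = sm.add_constant(np.column_stack([log_lr(t1, disease), log_lr(t2, disease)]))
fit = sm.Logit(disease.astype(int), X).fit(disp=False)
# A coefficient near 1 means little adjustment is needed; below 1 means shrinkage.
print("shrinkage factors for the two tests:", fit.params[1:])
```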
Beyond the three-component model of organizational commitment.
Solinger, Omar N; van Olffen, Woody; Roe, Robert A
2008-01-01
This article offers a conceptual critique of the three-component model (TCM) of organizational commitment (Allen & Meyer, 1990) and proposes a reconceptualization based on standard attitude theory. The authors use the attitude-behavior model by Eagly and Chaiken (1993) to demonstrate that the TCM combines fundamentally different attitudinal phenomena. They argue that general organizational commitment can best be understood as an attitude regarding the organization, while normative and continuance commitment are attitudes regarding specific forms of behavior (i.e., staying or leaving). The conceptual analysis shows that the TCM fails to qualify as general model of organizational commitment but instead represents a specific model for predicting turnover. The authors suggest that the use of the TCM be restricted to this purpose and that Eagly and Chaiken's model be adopted as a generic commitment model template from which a range of models for predicting specific organizational behaviors can be extracted. Finally, they discuss the definition and measurement of the organizational commitment attitude. Covering the affective, cognitive, and behavioral facets of this attitude helps to enhance construct validity and to differentiate the construct from other constructs. 2008 APA
Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne
2014-01-01
Background In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level. Such customization matches the specific tasks/expectations of a health IT system while retaining comparability at the construct level, and provides evidence of its factorial validity and internal consistency reliability through exploratory factor analysis. Objective In this study, we advanced the development of Health-ITUES to examine its construct validity and predictive validity. Methods The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users’ perception toward the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first and second order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). Results The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in ‘Quality of Work Life’, ‘Perceived Usefulness’, ‘Perceived Ease of Use’, and ‘User Control’, respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. Conclusions The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. PMID:24567081
Predicting School Enrollments Using the Modified Regression Technique.
ERIC Educational Resources Information Center
Grip, Richard S.; Young, John W.
This report is based on a study in which a regression model was constructed to increase accuracy in enrollment predictions. A model, known as the Modified Regression Technique (MRT), was used to examine K-12 enrollment over the past 20 years in 2 New Jersey school districts of similar size and ethnicity. To test the model's accuracy, MRT was…
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and the production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data from eighteen residential buildings. The resulting statistical model relates the dependent variable (i.e., the amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R² value of 0.694, which means that it explains approximately 69% of the variation in waste generation in similar construction projects. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable. Copyright © 2015 Elsevier Ltd. All rights reserved.
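A minimal sketch of the modelling step described above, with hypothetical design/production variables: fit a multiple regression of waste generated on the candidate predictors and report the adjusted R² (the study's best model reached 0.694).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 18                                   # eighteen residential buildings, as in the study
df = pd.DataFrame({
    "built_area_m2": rng.uniform(3000, 15000, n),
    "n_floors": rng.integers(8, 25, n),
    "masonry_ratio": rng.uniform(0.2, 0.8, n),
    "prefab_ratio": rng.uniform(0.0, 0.6, n),
})
# Synthetic response standing in for the measured amount of waste generated.
df["waste_m3"] = (0.05 * df["built_area_m2"] + 80 * df["masonry_ratio"]
                  - 60 * df["prefab_ratio"] + rng.normal(0, 50, n))

model = smf.ols("waste_m3 ~ built_area_m2 + n_floors + masonry_ratio + prefab_ratio",
                data=df).fit()
print(model.params)
print("adjusted R-squared:", model.rsquared_adj)
```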
Event-based total suspended sediment particle size distribution model
NASA Astrophysics Data System (ADS)
Thompson, Jennifer; Sattar, Ahmed M. A.; Gharabaghi, Bahram; Warner, Richard C.
2016-05-01
One of the most challenging modelling tasks in hydrology is prediction of the total suspended sediment particle size distribution (TSS-PSD) in stormwater runoff generated from exposed soil surfaces at active construction sites and surface mining operations. The main objective of this study is to employ gene expression programming (GEP) and artificial neural networks (ANN) to develop a new model with the ability to more accurately predict the TSS-PSD by taking advantage of both event-specific and site-specific factors in the model. To compile the data for this study, laboratory scale experiments using rainfall simulators were conducted on fourteen different soils to obtain TSS-PSD. This data is supplemented with field data from three construction sites in Ontario over a period of two years to capture the effect of transport and deposition within the site. The combined data sets provide a wide range of key overlooked site-specific and storm event-specific factors. Both parent soil and TSS-PSD in runoff are quantified by fitting each to a lognormal distribution. Compared to existing regression models, the developed model more accurately predicted the TSS-PSD using a more comprehensive list of key model input parameters. Employment of the new model will increase the efficiency of deployment of required best management practices, designed based on TSS-PSD, to minimize potential adverse effects of construction site runoff on aquatic life in the receiving watercourses.
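The lognormal characterization mentioned above can be sketched as follows (synthetic diameters, assumed parameterization): fit a lognormal distribution to a sample of particle diameters and report the geometric mean, geometric standard deviation, and the fraction finer than a chosen size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic particle diameters in micrometres standing in for a measured PSD.
diameters_um = rng.lognormal(mean=np.log(20.0), sigma=0.9, size=500)

# Fit a two-parameter lognormal (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(diameters_um, floc=0)
geo_mean = scale              # exp(mu) when loc is fixed at 0
geo_sd = np.exp(shape)        # exp(sigma)
print("geometric mean diameter (um):", geo_mean)
print("geometric standard deviation:", geo_sd)

# Fraction finer than a given size, e.g. the percentage of TSS below 75 um.
print("% finer than 75 um:", 100 * stats.lognorm.cdf(75, shape, loc=0, scale=scale))
```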
Zoellner, Jamie M.; Porter, Kathleen J.; Chen, Yvonnes; Hedrick, Valisa E.; You, Wen; Hickman, Maja; Estabrooks, Paul A.
2017-01-01
Objective Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving SSB behaviours. Using SIPsmartER data, this study explores prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Design Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data of 155 intervention participants. Main Outcome Measures Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. Results TPB constructs explained 32% of the variance cross sectionally and 20% prospectively in BI; and explained 13–20% of variance cross sectionally and 6% prospectively. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6–38%) and behaviour (average 30%, range 6–55%) were significant. Conclusion Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases. PMID:28165771
Ding, Feng; Yang, Xianhai; Chen, Guosong; Liu, Jining; Shi, Lili; Chen, Jingwen
2017-10-01
The partition coefficients between bovine serum albumin (BSA) and water (K_BSA/w) for ionogenic organic chemicals (IOCs) differ greatly from those of neutral organic chemicals (NOCs). For NOCs, several excellent models have been developed to predict logK_BSA/w. However, it was found that conventional descriptors are inappropriate for modeling the logK_BSA/w of IOCs. Thus, alternative approaches are urgently needed to develop predictive models for the K_BSA/w of IOCs. In this study, molecular descriptors that can be used to characterize ionization effects (e.g., chemical-form-adjusted descriptors) were calculated and used to develop predictive models for the logK_BSA/w of IOCs. The models developed had high goodness-of-fit, robustness, and predictive ability. The predictor variables selected to construct the models included the chemical-form-adjusted average of the negative potentials on the molecular surface (V_s-adj⁻), the chemical-form-adjusted molecular dipole moment (dipolemoment_adj), and the logarithm of the n-octanol/water distribution coefficient (logD). As these molecular descriptors can be calculated directly from molecular structures, the developed model can easily be used to fill the logK_BSA/w data gap for other IOCs within the applicability domain. Furthermore, the chemical-form-adjusted descriptors calculated in this study could also be used to construct predictive models for other endpoints of IOCs. Copyright © 2017 Elsevier Inc. All rights reserved.
Li, Guowei; Thabane, Lehana; Delate, Thomas; Witt, Daniel M.; Levine, Mitchell A. H.; Cheng, Ji; Holbrook, Anne
2016-01-01
Objectives To construct and validate a prediction model for individual combined benefit and harm outcomes (stroke with no major bleeding, major bleeding with no stroke, neither event, or both) in patients with atrial fibrillation (AF) with and without warfarin therapy. Methods Using the Kaiser Permanente Colorado databases, we included patients newly diagnosed with AF between January 1, 2005 and December 31, 2012 for model construction and validation. The primary outcome was a prediction model for the composite of stroke or major bleeding using polytomous logistic regression (PLR) modelling. The secondary outcome was a prediction model for all-cause mortality using Cox regression modelling. Results We included 9074 patients: 4537 warfarin users and 4537 non-users. In the derivation cohort (n = 4632), 136 strokes (2.94%), 280 major bleeds (6.04%) and 1194 deaths (25.78%) occurred. In the prediction models, warfarin use was not significantly associated with the risk of stroke, but increased the risk of major bleeding and decreased the risk of death. Both the PLR and Cox models were robust, were internally and externally validated, and showed acceptable model performance. Conclusions In this study, we introduce a new methodology for predicting individual combined benefit and harm outcomes associated with warfarin therapy for patients with AF. Should this approach be validated in other patient populations, it has potential advantages over existing risk stratification approaches as a patient-physician aid for shared decision-making. PMID:27513986
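A minimal sketch (not the Kaiser Permanente model) of polytomous logistic regression for a four-category combined outcome, using statsmodels' MNLogit with the "neither event" category as reference; the covariates, synthetic outcome mechanism, and sample size are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "warfarin": rng.integers(0, 2, n),
    "age": rng.normal(72, 9, n).round(),
    "prior_stroke": rng.integers(0, 2, n),
})
# Synthetic outcome: 0 = neither, 1 = stroke only, 2 = major bleeding only, 3 = both.
weights = np.column_stack([
    np.full(n, 4.0),
    np.exp(-2 + 0.03 * (df["age"] - 70) + 0.8 * df["prior_stroke"]),
    np.exp(-2 + 0.5 * df["warfarin"]),
    np.full(n, 0.1),
])
p = weights / weights.sum(axis=1, keepdims=True)
outcome = np.array([rng.choice(4, p=row) for row in p])

X = sm.add_constant(df)
fit = sm.MNLogit(outcome, X).fit(disp=False)
# Relative risk ratios of each event category vs. the reference ("neither event").
print(np.exp(fit.params))
# Predicted probabilities of the four outcomes for the first few patients.
print(fit.predict(X)[:5].round(3))
```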
The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...
An automated construction of error models for uncertainty quantification and model calibration
NASA Astrophysics Data System (ADS)
Josset, L.; Lunati, I.
2015-12-01
To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or «proxy»), which provide an inexact but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy into exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and the risk of overfitting, we follow a leave-one-out cross-validation. Also, the definition of a stopping criterion is central to an automated construction: we use a stability measure based on bootstrap techniques to stop the iterative procedure when the iterative model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, and these are used to assess the performance of the method. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Mathematical Geosciences, 2013. [2] Josset, L., D. Ginsbourger, and I. Lunati, Functional error modeling for uncertainty quantification in hydrogeology, Water Resources Research, 2015. [3] Josset, L., V. Demyanov, A. H. Elsheikh, and I. Lunati, Accelerating Monte Carlo Markov chains with proxy and error models, Computers & Geosciences, 2015 (in press).
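A simplified sketch of the iterative construction described above, with placeholder "exact" and "proxy" solvers and a linear error model; it omits the dimension selection by leave-one-out cross-validation and the bootstrap-based stopping rule, using a simple stability threshold on the corrected predictions instead.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
realizations = rng.uniform(0, 2, 200)           # stand-in for geostatistical realizations

def exact_solver(r):                             # placeholder for the expensive simulation
    return 2.0 * r + rng.normal(0, 0.05)

def proxy_solver(r):                             # placeholder for the cheap, biased proxy
    return 1.6 * r

proxy = proxy_solver(realizations)
exact_cache = {}                                 # never repeat an expensive run

def exact_value(i):
    if i not in exact_cache:
        exact_cache[i] = exact_solver(realizations[i])
    return exact_cache[i]

learning_idx = list(range(5))                    # small initial learning set
previous = None
for new_idx in range(5, 60):
    X = proxy[learning_idx].reshape(-1, 1)
    y = np.array([exact_value(i) for i in learning_idx])
    error_model = LinearRegression().fit(X, y)   # maps proxy response -> exact response
    corrected = error_model.predict(proxy.reshape(-1, 1))
    if previous is not None and np.abs(corrected - previous).max() < 0.05:
        print("stopped after", len(learning_idx), "exact runs")
        break
    previous = corrected
    learning_idx.append(new_idx)                 # add one more exact run per iteration

print("mean of corrected predictions:", corrected.mean())
print("mean of raw proxy predictions:", proxy.mean())
```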
Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation
NASA Astrophysics Data System (ADS)
Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi
2016-09-01
We propose a statistical modeling method for wind power output for very short-term prediction. The modeling method, based on a nonlinear model, has a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
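A hedged sketch of the cascade structure described above, using empirical quantile mapping as a stand-in for the static output-distribution-matching part (the paper's title points to a Box-Cox transformation, which is not reproduced here) and an AR(3) model for the linear dynamic part; the wind power series is synthetic.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
# Synthetic, non-Gaussian "wind power output" series.
power = np.clip(rng.gamma(2.0, 1.5, 1000) + 0.3 * np.sin(np.arange(1000) / 20), 0, None)

# Static part: map the observed output to a standard normal via empirical quantiles.
ranks = stats.rankdata(power) / (len(power) + 1)
gaussianized = stats.norm.ppf(ranks)

# Linear dynamic part: autoregressive model on the Gaussianized series.
ar = AutoReg(gaussianized, lags=3).fit()
pred_gauss = ar.predict(start=3, end=len(gaussianized) - 1)

# Inverse static map: back-transform predictions with the empirical quantile function.
sorted_power = np.sort(power)
def inverse_map(z):
    u = stats.norm.cdf(z)
    return np.quantile(sorted_power, u)

one_step_pred = inverse_map(pred_gauss)
print("RMSE of one-step-ahead prediction:",
      np.sqrt(np.mean((one_step_pred - power[3:]) ** 2)))
```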
Ng, Jacky Y K; Chan, Alan H S
2018-05-14
The shortage of construction workers in Hong Kong is expected to worsen in the future due to the aging population and increasing construction activity. Construction work is dangerous, and to help reduce the premature loss of construction workers due to work-related disabilities, this study measured the work ability of 420 Hong Kong construction workers with the Work Ability Index (WAI), which can be used to predict present and future work performance. Given the importance of the WAI, the effects of individual and work-related factors on the WAI were examined in order to develop and validate a WAI model that predicts how these factors affect work ability. The findings will be useful for formulating a pragmatic intervention program to improve the work ability of construction workers and keep them in the work force.
Designing and benchmarking the MULTICOM protein structure prediction system
2013-01-01
Background Predicting protein structure from sequence is one of the most significant and challenging problems in bioinformatics. Numerous bioinformatics techniques and tools have been developed to tackle almost every aspect of protein structure prediction ranging from structural feature prediction, template identification and query-template alignment to structure sampling, model quality assessment, and model refinement. How to synergistically select, integrate and improve the strengths of the complementary techniques at each prediction stage and build a high-performance system is becoming a critical issue for constructing a successful, competitive protein structure predictor. Results Over the past several years, we have constructed a standalone protein structure prediction system MULTICOM that combines multiple sources of information and complementary methods at all five stages of the protein structure prediction process including template identification, template combination, model generation, model assessment, and model refinement. The system was blindly tested during the ninth Critical Assessment of Techniques for Protein Structure Prediction (CASP9) in 2010 and yielded very good performance. In addition to studying the overall performance on the CASP9 benchmark, we thoroughly investigated the performance and contributions of each component at each stage of prediction. Conclusions Our comprehensive and comparative study not only provides useful and practical insights about how to select, improve, and integrate complementary methods to build a cutting-edge protein structure prediction system but also identifies a few new sources of information that may help improve the design of a protein structure prediction system. Several components used in the MULTICOM system are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:23442819
Development of Interpretable Predictive Models for BPH and Prostate Cancer.
Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, J A
2015-01-01
Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician distinguish between PC and benign prostatic hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. Statistical dependence with PC and BPH was found for prostate volume (P-value < 0.001), PSA (P-value < 0.001), international prostate symptom score (IPSS; P-value < 0.001), digital rectal examination (DRE; P-value < 0.001), age (P-value < 0.002), antecedents (P-value < 0.006), and meat consumption (P-value < 0.08). The two predictive models that were constructed selected a subset of these, namely volume, PSA, DRE, and IPSS, obtaining an area under the ROC curve (AUC) between 72% and 80% for both PC and BPH prediction. PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without either of these pathologies. Our decision tree and logistic regression models outperform the AUCs obtained in comparable studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced.
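A minimal sketch (hypothetical data) of the validation strategy described above: leave-one-out evaluation of a decision tree and a logistic regression predicting PC from volume, PSA, DRE, and IPSS, compared by AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 150
X = np.column_stack([
    rng.normal(45, 15, n),          # prostate volume (mL)
    rng.normal(6, 3, n),            # PSA (ng/mL)
    rng.integers(0, 2, n),          # abnormal DRE (0/1)
    rng.integers(0, 30, n),         # IPSS
])
# Synthetic outcome: 1 = prostate cancer, 0 = no cancer.
logit = 0.4 * (X[:, 1] - 6) - 0.03 * (X[:, 0] - 45) + 1.0 * X[:, 2] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

loo = LeaveOneOut()
for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    prob = cross_val_predict(clf, X, y, cv=loo, method="predict_proba")[:, 1]
    print(name, "LOO AUC:", round(roc_auc_score(y, prob), 3))
```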
Prediction of the properties anhydrite construction mixtures based on neural network approach
NASA Astrophysics Data System (ADS)
Fedorchuk, Y. M.; Zamyatin, N. V.; Smirnov, G. V.; Rusina, O. N.; Sadenova, M. A.
2017-08-01
The article considers the application of a neural-network-based modeling approach to the components of anhydrite mixtures as part of managing the technological processes for producing construction products based on fluoranhydrite.
A New Approach to Predict the Fish Fillet Shelf-Life in Presence of Natural Preservative Agents.
Giuffrida, Alessandro; Giarratana, Filippo; Valenti, Davide; Muscolino, Daniele; Parisi, Roberta; Parco, Alessio; Marotta, Stefania; Ziino, Graziella; Panebianco, Antonio
2017-04-13
Three data sets concerning the behaviour of the spoilage flora of fillets treated with natural preservative substances (NPS) were used to construct a new kind of mathematical predictive model. This model, unlike previous ones, allows the antibacterial effect of the NPS to be expressed separately from the prediction of the growth rate. This approach, based on the introduction of a parameter into the predictive primary model, produced a good fit to the observed data and allowed the increase in the shelf-life of the fillets to be characterized quantitatively.
An Evaluation of the Pavement Condition Index Prediction Model for Rigid Airfield Pavements
1982-09-01
[Scanned report front matter and table-of-contents fragments; recoverable references include the United States Army Corps of Engineers Construction Engineering Research Laboratory, sections on pavement condition, pavement design/construction, aircraft traffic and climate conditions, and data obtained from Wright-Patterson AFB.]
NASA Astrophysics Data System (ADS)
Liu, Sijun; Chen, Jiaping; Wang, Jianming; Wu, Zhuchao; Wu, Weihua; Xu, Zhiwei; Hu, Wenbiao; Xu, Fei; Tong, Shilu; Shen, Hongbing
2017-10-01
Hand, foot, and mouth disease (HFMD) is a significant public health issue in China, and accurate prediction of epidemics can improve the effectiveness of HFMD control. This study aims to develop a weather-based forecasting model for HFMD using information on climatic variables and HFMD surveillance in Nanjing, China. Daily data on HFMD cases and meteorological variables between 2010 and 2015 were acquired from the Nanjing Center for Disease Control and Prevention and the China Meteorological Data Sharing Service System, respectively. A multivariate seasonal autoregressive integrated moving average (SARIMA) model was developed and validated by dividing the HFMD infection data into two datasets: the data from 2010 to 2013 were used to construct the model and those from 2014 to 2015 were used to validate it. Moreover, weekly predictions were made for the data between 1 January 2014 and 31 December 2015, and leave-one-week-out prediction was used to validate the predictive performance of the model. A SARIMA(2,0,0)₅₂ model including the average temperature at a lag of 1 week appeared to be the best model (R² = 0.936, BIC = 8.465), and showed non-significant autocorrelations in the residuals. In the validation of the constructed model, the predicted values matched the observed values reasonably well between 2014 and 2015, with a high agreement rate between the predicted and observed values (sensitivity 80%, specificity 96.63%). This study suggests that the SARIMA model with average temperature could be used as an important tool for early detection and prediction of HFMD outbreaks in Nanjing, China.
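A sketch of the weather-based SARIMA approach on synthetic weekly data, using statsmodels SARIMAX with lagged mean temperature as an exogenous regressor; the notation SARIMA(2,0,0)₅₂ is interpreted here as a non-seasonal AR(2) with a 52-week seasonal period (a seasonal AR(1) term is added purely for illustration), so the specification is an assumption rather than the authors' exact model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
weeks = pd.date_range("2010-01-04", periods=6 * 52, freq="W-MON")
temp = 16 + 12 * np.sin(2 * np.pi * np.arange(len(weeks)) / 52) + rng.normal(0, 2, len(weeks))
cases = np.maximum(0, 120 + 6 * temp + rng.normal(0, 30, len(weeks))).round()
# Lag the temperature by one week, drop the first (undefined) row.
df = pd.DataFrame({"cases": cases, "temp_lag1": np.roll(temp, 1)}, index=weeks).iloc[1:]

train, test = df.iloc[: 4 * 52], df.iloc[4 * 52:]
model = SARIMAX(train["cases"], exog=train[["temp_lag1"]],
                order=(2, 0, 0), seasonal_order=(1, 0, 0, 52))
fit = model.fit(disp=False)

# Apply the fitted parameters to the full series to obtain one-step-ahead
# (leave-one-week-out style) predictions over the validation period.
applied = fit.apply(df["cases"], exog=df[["temp_lag1"]])
pred = applied.get_prediction(start=test.index[0]).predicted_mean
print("one-step-ahead validation RMSE:",
      np.sqrt(np.mean((pred - test["cases"]) ** 2)))
```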
Applying a health action model to predict and improve healthy behaviors in coal miners.
Vahedian-Shahroodi, Mohammad; Tehrani, Hadi; Mohammadi, Faeze; Gholian-Aval, Mahdi; Peyman, Nooshin
2018-05-01
One of the most important ways to prevent work-related diseases in occupations such as mining is to promote healthy behaviors among miners. This study aimed to predict and promote healthy behaviors among coal miners by using a health action model (HAM). The study was conducted on 200 coal miners in Iran in two steps. In the first step, a descriptive study was implemented to determine predictive constructs and effectiveness of HAM on behavioral intention. The second step involved a quasi-experimental study to determine the effect of an HAM-based education intervention. This intervention was implemented by the researcher and the head of the safety unit based on the predictive construct specified in the first step over 12 sessions of 60 min. The data was collected using an HAM questionnaire and a checklist of healthy behavior. The results of the first step of the study showed that attitude, belief, and normative constructs were meaningful predictors of behavioral intention. Also, the results of the second step revealed that the mean score of attitude and behavioral intention increased significantly after conducting the intervention in the experimental group, while the mean score of these constructs decreased significantly in the control group. The findings of this study showed that HAM-based educational intervention could improve the healthy behaviors of mine workers. Therefore, it is recommended to extend the application of this model to other working groups to improve healthy behaviors.
Mansberger, Steven L; Sheppler, Christina R; McClure, Tina M; Vanalstine, Cory L; Swanson, Ingrid L; Stoumbos, Zoey; Lambert, William E
2013-09-01
To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n = 58). Principal components analysis provided assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach's alphas (>.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P < .05) with adherence and a correlation coefficient (R²) of .40. Test-retest reliability was 90%. The GTCAT shows excellent repeatability, content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence.
Acoustical transmission-line model of the middle-ear cavities and mastoid air cells.
Keefe, Douglas H
2015-04-01
An acoustical transmission line model of the middle-ear cavities and mastoid air cell system (MACS) was constructed for the adult human middle ear with normal function. The air-filled cavities comprised the tympanic cavity, aditus, antrum, and MACS. A binary symmetrical airway branching model of the MACS was constructed using an optimization procedure to match the average total volume and surface area of human temporal bones. The acoustical input impedance of the MACS was calculated using a recursive procedure, and used to predict the input impedance of the middle-ear cavities at the location of the tympanic membrane. The model also calculated the ratio of the acoustical pressure in the antrum to the pressure in the middle-ear cavities at the location of the tympanic membrane. The predicted responses were sensitive to the magnitude of the viscothermal losses within the MACS. These predicted input impedance and pressure ratio functions explained the presence of multiple resonances reported in published data, which were not explained by existing MACS models.
NASA Astrophysics Data System (ADS)
Quan, Guo-zheng; Zhan, Zong-yang; Wang, Tong; Xia, Yu-feng
2017-01-01
The response of true stress to strain rate, temperature and strain is a complex three-dimensional (3D) issue, and the accurate description of such constitutive relationships contributes significantly to optimum process design. To obtain the true stress-strain data of the ultra-high-strength steel BR1500HS, a series of isothermal hot tensile tests was conducted over a wide temperature range of 973-1,123 K and a strain rate range of 0.01-10 s⁻¹ on a Gleeble 3800 testing machine. The constitutive relationships were then modeled by an optimally constructed and well-trained back-propagation artificial neural network (BP-ANN). Evaluation of the BP-ANN model revealed that it has admirable performance in characterizing and predicting the flow behaviors of BR1500HS. A comparison of the improved Arrhenius-type constitutive equation and the BP-ANN model shows that the latter has higher accuracy. Consequently, the developed BP-ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions. A 3D continuous interaction space for temperature, strain rate, strain and stress was then constructed based on these predicted data. The developed 3D continuous interaction space for hot working parameters contributes to fully revealing the intrinsic relationships of BR1500HS steel.
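A minimal sketch of a BP-ANN constitutive model of this kind, fitted with scikit-learn's multilayer perceptron, is given below; the data file, its column layout, and the network size are assumptions, and scikit-learn's gradient-based training stands in for whatever training scheme the authors used.

```python
# Minimal sketch of an ANN constitutive model: predict flow stress from
# temperature, strain rate and strain. The array layout [T_K, strain_rate,
# strain, stress_MPa] is a hypothetical stand-in for the Gleeble measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

data = np.loadtxt("br1500hs_flow_curves.csv", delimiter=",", skiprows=1)
X, y = data[:, :3], data[:, 3]
X[:, 1] = np.log10(X[:, 1])      # spread the 0.01-10 s^-1 strain-rate range evenly

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20),
                                   solver="lbfgs", max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```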
Quinn, Francis; Johnston, Marie; Johnston, Derek W
2013-01-01
Previous research has supported an integrated biomedical and behavioural model explaining activity limitations. However, further tests of this model are required at the within-person level, because while it proposes that the constructs are related within individuals, it has primarily been tested between individuals in large group studies. We aimed to test the integrated model at the within-person level. Six correlational N-of-1 studies in participants with arthritis, chronic pain and walking limitations were carried out. Daily measures of theoretical constructs were collected using a hand-held computer (PDA), the activity was assessed by self-report and accelerometer and the data were analysed using time-series analysis. The biomedical model was not supported as pain impairment did not predict activity, so the integrated model was supported partially. Impairment predicted intention to move around, while perceived behavioural control (PBC) and intention predicted activity. PBC did not predict activity limitation in the expected direction. The integrated model of disability was partially supported within individuals, especially the behavioural elements. However, results suggest that different elements of the model may drive activity (limitations) for different individuals. The integrated model provides a useful framework for understanding disability and suggests interventions, and the utility of N-of-1 methodology for testing theory is illustrated.
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Tien Bui, Dieu; Pourghasemi, Hamid Reza; Indra, Prakash; Dholakia, M. B.
2017-04-01
The objective of this study is to compare the prediction performance of three techniques, Functional Trees (FT), Multilayer Perceptron Neural Networks (MLP Neural Nets), and Naïve Bayes (NB), for landslide susceptibility assessment in the Uttarakhand area (India). Firstly, a landslide inventory map with 430 landslide locations in the study area was constructed from various sources. Landslide locations were then randomly split into two parts: (i) 70% of the landslide locations were used for training the models and (ii) 30% were employed for the validation process. Secondly, a total of eleven landslide conditioning factors, including slope angle, slope aspect, elevation, curvature, lithology, soil, land cover, distance to roads, distance to lineaments, distance to rivers, and rainfall, were used in the analysis to elucidate the spatial relationship between these factors and landslide occurrences. Feature selection with the Linear Support Vector Machine (LSVM) algorithm was employed to assess the prediction capability of these conditioning factors in the landslide models. Subsequently, the NB, MLP Neural Nets, and FT models were constructed using the training dataset. Finally, success rate and prediction rate curves were employed to validate and compare the predictive capability of the three models. Overall, all three models performed very well for landslide susceptibility assessment. Of these, the MLP Neural Nets and FT models had almost the same predictive capability, with the MLP Neural Nets (AUC = 0.850) slightly better than the FT model (AUC = 0.849). The NB model (AUC = 0.838) had the lowest predictive capability. Landslide susceptibility maps were finally developed using the three models. These maps would be helpful to planners and engineers for development activities and land-use planning.
[Application of ARIMA model on prediction of malaria incidence].
Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai
2016-01-29
The aim was to predict the incidence of local malaria in Hubei Province by applying the autoregressive integrated moving average (ARIMA) model. SPSS 13.0 software was used to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1)(1, 1, 0)12 model was identified as the best, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence data fell within the 95% CI of the values predicted by the model, and the prediction performance of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
Lee, Byeong-Ju; Zhou, Yaoyao; Lee, Jae Soung; Shin, Byeung Kon; Seo, Jeong-Ah; Lee, Doyup; Kim, Young-Suk
2018-01-01
The ability to determine the origin of soybeans is an important issue, as including this information in the labeling of agricultural food products became mandatory in South Korea in 2017. This study was carried out to construct a prediction model for discriminating Chinese and Korean soybeans using Fourier-transform infrared (FT-IR) spectroscopy and multivariate statistical analysis. The optimal prediction models for discriminating soybean samples were obtained by selecting appropriate scaling methods, normalization methods, variable influence on projection (VIP) cutoff values, and wave-number regions. The optimal partial-least-squares regression (PLSR) prediction model was constructed using second derivatives, vector normalization, unit variance scaling, and the 4000–400 cm⁻¹ region (excluding water vapor and carbon dioxide). The PLSR model for discriminating Chinese and Korean soybean samples had the best predictability when no VIP cutoff value was applied. For identifying Chinese soybean samples, the PLSR model with the lowest root-mean-square error of prediction was obtained using a VIP cutoff value of 1.5. The optimal PLSR prediction model for discriminating Korean soybean samples was also obtained using a VIP cutoff value of 1.5. This is the first study to combine FT-IR spectroscopy with normalization methods, VIP cutoff values, and selected wave-number regions for discriminating Chinese and Korean soybeans. PMID:29689113
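A minimal sketch of the preprocessing-plus-PLSR pipeline (second-derivative spectra, vector normalization, unit-variance scaling) is shown below; the spectra file, label coding, number of latent variables, and Savitzky-Golay window are assumptions, and the VIP-based variable filtering step is omitted.

```python
# Minimal sketch: Savitzky-Golay second derivative + vector normalization of
# FT-IR spectra, then a PLS model discriminating the two origins (coded 0/1).
# The spectra matrix and labels are hypothetical stand-ins.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

spectra = np.load("soybean_ftir.npy")      # rows: samples, columns: wavenumbers
origin = np.load("origin_labels.npy")      # 0 = Chinese, 1 = Korean

X = savgol_filter(spectra, window_length=15, polyorder=2, deriv=2, axis=1)
X /= np.linalg.norm(X, axis=1, keepdims=True)          # vector normalization

X_tr, X_te, y_tr, y_te = train_test_split(X, origin, test_size=0.3,
                                          stratify=origin, random_state=0)
pls = PLSRegression(n_components=5, scale=True)        # 'scale' gives unit variance
pls.fit(X_tr, y_tr)
y_hat = (pls.predict(X_te).ravel() > 0.5).astype(int)  # threshold the PLS score
print("Correctly classified:", (y_hat == y_te).mean())
```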
Beukinga, Roelof J; Hulshoff, Jan B; van Dijk, Lisanne V; Muijs, Christina T; Burgerhof, Johannes G M; Kats-Ugurlu, Gursah; Slart, Riemer H J A; Slump, Cornelis H; Mul, Véronique E M; Plukker, John Th M
2017-05-01
Adequate prediction of tumor response to neoadjuvant chemoradiotherapy (nCRT) in esophageal cancer (EC) patients is important for more personalized treatment. The current best clinical method to predict pathologic complete response is SUVmax in 18F-FDG PET/CT imaging. To improve the prediction of response, we constructed a model to predict complete response to nCRT in EC based on pretreatment clinical parameters and 18F-FDG PET/CT-derived textural features. Methods: From a prospectively maintained single-institution database, we reviewed 97 consecutive patients with locally advanced EC and a pretreatment 18F-FDG PET/CT scan between 2009 and 2015. All patients were treated with nCRT (carboplatin/paclitaxel/41.4 Gy) followed by esophagectomy. We analyzed clinical, geometric, and pretreatment textural features extracted from both 18F-FDG PET and CT. The current most accurate prediction model with SUVmax as a predictor variable was compared with 6 different response prediction models constructed using least absolute shrinkage and selection operator (LASSO) regularized logistic regression. Internal validation was performed to estimate the models' performance. Pathologic response was defined as complete versus incomplete response (Mandard tumor regression grade 1 vs. 2-5). Results: Pathologic examination revealed 19 (19.6%) complete and 78 (80.4%) incomplete responders. LASSO regularization selected the clinical parameters histologic type and clinical T stage, the 18F-FDG PET-derived textural feature long run low gray level emphasis, and the CT-derived textural feature run percentage. Introducing these variables into a logistic regression analysis gave an area under the receiver-operating-characteristic curve (AUC) of 0.78, compared with 0.58 for the SUVmax model. The discrimination slopes were 0.17 and 0.01, respectively. After internal validation, the AUCs decreased to 0.74 and 0.54, respectively. Conclusion: The predictive values of the constructed models were superior to the standard method (SUVmax). These results can be considered an initial step in predicting tumor response to nCRT in locally advanced EC. Further research in refining the predictive value of these models is needed to justify omission of surgery. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
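A minimal sketch of a LASSO-regularized logistic regression with cross-validated AUC, in the spirit of the models described, is given below; the feature files and the cross-validation settings are assumptions, and the study's six-model setup and internal validation procedure are not reproduced.

```python
# Minimal sketch: L1-regularized logistic regression for complete-response
# prediction with cross-validated AUC. Feature and label files are hypothetical
# stand-ins for the clinical parameters and PET/CT textural features.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X = np.load("features.npy")              # clinical + radiomic features
y = np.load("complete_response.npy")     # 1 = complete, 0 = incomplete responder

model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=20, penalty="l1", solver="liblinear",
                         cv=5, scoring="roc_auc", max_iter=5000))
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUC: %.2f +/- %.2f" % (aucs.mean(), aucs.std()))
```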
Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme
2013-05-01
Many construction activities can put workers at risk of breathing silica-containing dusts, and there is an important body of literature documenting exposure levels using a task-based strategy. In this study, statistical modeling was used to analyze a data set containing 1466 task-based, personal respirable crystalline silica (RCS) measurements gathered from 46 sources to estimate exposure levels during construction tasks and the effects of determinants of exposure. Monte Carlo simulation was used to recreate individual exposures from summary parameters, and the statistical modeling involved multimodel inference with Tobit models containing combinations of the following exposure variables: sampling year, sampling duration, construction sector, project type, workspace, ventilation, and controls. Exposure levels by task were predicted based on the median reported duration by activity, the year 1998, the absence of source control methods, and an equal distribution of the other determinants of exposure. The model containing all the variables explained 60% of the variability and was identified as the best approximating model. Of the 27 tasks contained in the data set, abrasive blasting, masonry chipping, scabbling concrete, tuck pointing, and tunnel boring had estimated geometric means above 0.1 mg m⁻³ based on the exposure scenario developed. Water-fed tools and local exhaust ventilation were associated with reductions of 71 and 69% in exposure levels compared with no controls, respectively. The predictive model developed can be used to estimate RCS concentrations for many construction activities in a wide range of circumstances.
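A minimal sketch of a Tobit (left-censored) regression fitted by maximum likelihood, the model family used in the study, is given below on simulated data; the design matrix, censoring point, and coefficients are synthetic illustrations, not the published exposure model.

```python
# Minimal sketch of a Tobit (left-censored) regression fitted by maximum
# likelihood, as used for exposure data with values below a detection limit.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([-1.0, 0.5, -0.8, 0.3])
y_latent = X @ beta_true + rng.normal(scale=0.7, size=n)
lod = -1.5                                   # censoring point (e.g., log of the LOD)
censored = y_latent < lod
y = np.where(censored, lod, y_latent)        # censored observations set to the LOD

def neg_loglik(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    ll_obs = norm.logpdf(y[~censored], loc=mu[~censored], scale=sigma)
    ll_cens = norm.logcdf((lod - mu[censored]) / sigma)   # P(latent < LOD)
    return -(ll_obs.sum() + ll_cens.sum())

res = minimize(neg_loglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
print("estimated coefficients:", res.x[:-1])
```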
Updraft Fixed Bed Gasification Aspen Plus Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
2007-09-27
The updraft fixed bed gasification model provides predictive modeling capabilities for updraft fixed bed gasifiers, when devolatilization data is available. The fixed bed model is constructed using Aspen Plus, process modeling software, coupled with a FORTRAN user kinetic subroutine. Current updraft gasification models created in Aspen Plus have limited predictive capabilities and must be "tuned" to reflect a generalized gas composition as specified in literature or by the gasifier manufacturer. This limits the applicability of the process model.
Translational systems pharmacology‐based predictive assessment of drug‐induced cardiomyopathy
Messinis, Dimitris E.; Melas, Ioannis N.; Hur, Junguk; Varshney, Navya; Alexopoulos, Leonidas G.
2018-01-01
Drug‐induced cardiomyopathy contributes to drug attrition. We compared two pipelines of predictive modeling: (1) applying elastic net (EN) to differentially expressed genes (DEGs) of drugs; (2) applying integer linear programming (ILP) to construct each drug's signaling pathway starting from its targets to downstream proteins, to transcription factors, and to its DEGs in human cardiomyocytes, and then subjecting the genes/proteins in the drugs' signaling networks to EN regression. We classified 31 drugs with availability of DEGs into 13 toxic and 18 nontoxic drugs based on a clinical cardiomyopathy incidence cutoff of 0.1%. The ILP‐augmented modeling increased prediction accuracy from 79% to 88% (sensitivity: 88%; specificity: 89%) under leave‐one‐out cross validation. The ILP‐constructed signaling networks of drugs were better predictors than DEGs. Per literature, the microRNAs that reportedly regulate expression of our six top predictors are of diagnostic value for natural heart failure or doxorubicin‐induced cardiomyopathy. This translational predictive modeling might uncover potential biomarkers. PMID:29341478
NASA Astrophysics Data System (ADS)
Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Santos-Filho, Osvaldo A.; Esposito, Emilio X.; Hopfinger, Anton J.; Tseng, Yufeng J.
2008-06-01
In previous studies we have developed categorical QSAR models for predicting skin-sensitization potency based on 4D-fingerprint (4D-FP) descriptors and in vivo murine local lymph node assay (LLNA) measures. Only 4D-FP descriptors derived from the ground-state (GMAX) structures of the molecules were used to build those QSAR models. In this study we generated 4D-FP descriptors from the first-excited-state (EMAX) structures of the molecules. The GMAX, EMAX and the combined ground- and excited-state 4D-FP descriptors (GEMAX) were employed in building categorical QSAR models. Logistic regression (LR) and partial least squares coupled logistic regression (PLS-CLR), found to be effective model-building methods for the LLNA skin-sensitization measures in our previous studies, were used again here. This also permitted comparison of the prior ground-state models with those involving first-excited-state 4D-FP descriptors. Three types of categorical QSAR models were constructed for each of the GMAX, EMAX and GEMAX datasets: a binary model (2-state), an ordinal model (3-state) and a binary-binary model (two-2-state). No significant differences exist among the LR 2-state models constructed for the three datasets. However, the PLS-CLR 3-state and 2-state models based on the EMAX and GEMAX datasets have higher predictivity than those constructed using only the GMAX dataset. These EMAX and GMAX categorical models are also more significant and predictive than the corresponding models built in our previous QSAR studies of LLNA skin-sensitization measures.
Microarray-based cancer prediction using soft computing approach.
Wang, Xiaosheng; Gotoh, Osamu
2009-05-26
One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models, based on single genes or gene pairs, using a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply these simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular prediction of cancer and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
Assessing waveform predictions of recent three-dimensional velocity models of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Bao, Xueyang; Shen, Yang
2016-04-01
Accurate velocity models are essential for both the determination of earthquake locations and source moments and the interpretation of Earth structures. With the increasing number of three-dimensional velocity models, it has become necessary to assess the models for accuracy in predicting seismic observations. Six models of the crustal and uppermost mantle structures in Tibet and surrounding regions are investigated in this study. Regional Rayleigh and Pn (or Pnl) waveforms from two ground truth events, including one nuclear explosion and one natural earthquake located in the study area, are simulated by using a three-dimensional finite-difference method. Synthetics are compared to observed waveforms in multiple period bands of 20-75 s for Rayleigh waves and 1-20 s for Pn/Pnl waves. The models are evaluated based on the phase delays and cross-correlation coefficients between synthetic and observed waveforms. A model generated from full-wave ambient noise tomography best predicts Rayleigh waves throughout the data set, as well as Pn/Pnl waves traveling from the Tarim Basin to the stations located in central Tibet. In general, the models constructed from P wave tomography are not well suited to predict Rayleigh waves, and vice versa. Possible causes of the differences between observed and synthetic waveforms, and frequency-dependent variations of the "best matching" models with the smallest prediction errors are discussed. This study suggests that simultaneous prediction for body and surface waves requires an integrated velocity model constructed with multiple seismic waveforms and consideration of other important properties, such as anisotropy.
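A minimal sketch of the kind of waveform-fit metrics used here, the normalized cross-correlation coefficient and the corresponding time (phase) delay between observed and synthetic traces, is given below; the toy traces and sampling interval are assumptions.

```python
# Minimal sketch of waveform-fit metrics: maximum normalized cross-correlation
# and the associated time shift between an observed and a synthetic trace.
import numpy as np

def xcorr_delay(observed, synthetic, dt):
    """Return (max normalized cross-correlation, time shift in seconds).
    A negative shift means the synthetic arrives later than the observed trace."""
    obs = (observed - observed.mean()) / np.std(observed)
    syn = (synthetic - synthetic.mean()) / np.std(synthetic)
    cc = np.correlate(obs, syn, mode="full") / len(obs)
    k = np.argmax(cc)
    lag = k - (len(obs) - 1)
    return cc[k], lag * dt

dt = 0.05                                              # 20 samples per second
t = np.arange(0, 60, dt)
observed = np.sin(2 * np.pi * t / 25.0)                # toy 25 s Rayleigh wave
synthetic = np.sin(2 * np.pi * (t - 1.2) / 25.0)       # synthetic delayed by 1.2 s
print(xcorr_delay(observed, synthetic, dt))
```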
Jaspers, Arne; De Beéck, Tim Op; Brink, Michel S; Frencken, Wouter G P; Staes, Filip; Davis, Jesse J; Helsen, Werner F
2018-05-01
Machine learning may contribute to understanding the relationship between the external load and internal load in professional soccer. Therefore, the relationship between external load indicators (ELIs) and the rating of perceived exertion (RPE) was examined using machine learning techniques on a group and individual level. Training data were collected from 38 professional soccer players over 2 seasons. The external load was measured using global positioning system technology and accelerometry. The internal load was obtained using the RPE. Predictive models were constructed using 2 machine learning techniques, artificial neural networks and least absolute shrinkage and selection operator (LASSO) models, and 1 naive baseline method. The predictions were based on a large set of ELIs. Using each technique, 1 group model involving all players and 1 individual model for each player were constructed. These models' performance on predicting the reported RPE values for future training sessions was compared with the naive baseline's performance. Both the artificial neural network and LASSO models outperformed the baseline. In addition, the LASSO model made more accurate predictions for the RPE than did the artificial neural network model. Furthermore, decelerations were identified as important ELIs. Regardless of the applied machine learning technique, the group models resulted in equivalent or better predictions for the reported RPE values than the individual models. Machine learning techniques may have added value in predicting RPE for future sessions to optimize training design and evaluation. These techniques may also be used in conjunction with expert knowledge to select key ELIs for load monitoring.
Predicting the Development of Analytical and Creative Abilities in Upper Elementary Grades
ERIC Educational Resources Information Center
Gubbels, Joyce; Segers, Eliane; Verhoeven, Ludo
2017-01-01
In some models, intelligence has been described as a multidimensional construct comprising both analytical and creative abilities. In addition, intelligence is considered to be dynamic rather than static. A structural equation model was used to examine the predictive role of cognitive (visual short-term memory, verbal short-term memory, selective…
Response of Douglas-fir advance regeneration to overstory removal
J. Chris Maranto; Dennis E. Ferguson; David L. Adams
2008-01-01
A statistical model is presented that predicts periodic height growth for released Pseudotsuga menziesii var. glauca [Beissn.] Franco advance regeneration in central Idaho. Individual tree and site variables were used to construct a model that predicts 5-year height growth for years 6 through 10 after release. Habitat type and height growth prior to...
The Validity of the Three-Component Model of Organizational Commitment in a Chinese Context.
ERIC Educational Resources Information Center
Cheng, Yuqiu; Stockdale, Margaret S.
2003-01-01
The construct validity of a three-component model of organizational commitment was tested with 226 Chinese employees. Affective and normative commitment significantly predicted job satisfaction; all three components predicted turnover intention. Compared with Canadian (n=603) and South Korean (n=227) samples, normative and affective commitment…
NASA Astrophysics Data System (ADS)
Kinnell, P. I. A.
2015-09-01
Trenouth and Gharabaghi (2015) present two models that replace the EI30 index used as the event erosivity index in the USLE/RUSLE with ones that include runoff and values of EI30 raised to powers that differ from 1.0 as the event erosivity factor in modelling soil loss for construction sites. Their analysis of the application of these models focused on data from 5 locations as a whole but did not show how the models worked at each location. Practically, the ability to predict sediment yields at a specific location is more relevant than the capacity of a model to predict sediment yields globally. Also, the mathematical structure of their proposed models shows little regard for the physical processes involved in causing erosion and sediment yield. There is still a need to develop event-based empirical models for construction sites that are robust because they give proper consideration to the erosion processes involved, and that take account of the fact that sediment yield is usually determined from measurements of suspended load, whereas soil loss at the scale for which the USLE/RUSLE model was developed includes both suspended load and bed load.
NASA Astrophysics Data System (ADS)
Peoples, Shelagh
The purpose of this study was to determine which of three competing models will provide, reliable, interpretable, and responsive measures of elementary students' understanding of the nature of science (NOS). The Nature of Science Instrument-Elementary (NOSI-E), a 28-item Rasch-based instrument, was used to assess students' NOS understanding. The NOS construct was conceptualized using five construct dimensions (Empirical, Inventive, Theory-laden, Certainty and Socially & Culturally Embedded). The competing models represent three internal models for the NOS construct. One postulate is that the NOS construct is unidimensional where one latent construct explains the relationship between the 28 items of the NOSI-E. Alternatively, the NOS construct is composed of five independent unidimensional constructs (the consecutive approach). Lastly, the NOS construct is multidimensional and composed of five inter-related but separate dimensions. A validity argument was developed that hypothesized that the internal structure of the NOS construct is best represented by the multidimensional Rasch model. Four sets of analyses were performed in which the three representations were compared. These analyses addressed five validity aspects (content, substantive, generalizability, structural and external) of construct validity. The vast body of evidence supported the claim that the NOS construct is composed of five separate but inter-related dimensions that is best represented by the multidimensional Rasch model. The results of the multidimensional analyses indicated that the items of the five subscales were of excellent technical quality, exhibited no differential item functioning (based on gender), had an item hierarchy that conformed to theoretical expectations; and together formed subscales of reasonable reliability (> 0.7 on each subscale) that were responsive to change in the construct. Theory-laden scores from the multidimensional model predicted students' science achievement with scores from all five NOS dimensions significantly predicting students' perceptions of the constructivist nature of their classroom learning environment. The NOSI-E instrument is a theoretically grounded scale that can measure elementary students' NOS understanding and appears suitable for use in science education research.
Ng, Jacky Y. K.
2018-01-01
The shortage in Hong Kong of construction workers is expected to worsen in future due to the aging population and increasing construction activity. Construction work is dangerous and to help reduce the premature loss of construction workers due to work-related disabilities, this study measured the work ability of 420 Hong Kong construction workers with a Work Ability Index (WAI) which can be used to predict present and future work performance. Given the importance of WAI, in this study the effects of individual and work-related factors on WAI were examined to develop and validate a WAI model to predict how individual and work-related factors affect work ability. The findings will be useful for formulating a pragmatic intervention program to improve the work ability of construction workers and keep them in the work force. PMID:29758018
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic
2018-07-01
Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous in modeling radiation transfer in high temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing the spectral construction with prior correlated spectrum FSK method formulations. Further the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model is defined. The work presents predictions using the Rank Correlated FSK method and previous FSK methods in three different example problems. Line-by-line benchmark predictions are used to assess the accuracy.
Chouinard, Maud-Christine; Robichaud-Ekstrand, Sylvie
2007-02-01
Several authors have questioned the transtheoretical model. Determining the predictive value of each cognitive-behavioural element within this model could explain the multiple successes reported in smoking cessation programmes. The purpose of this study was to predict point-prevalent smoking abstinence at 2 and 6 months, using the constructs of the transtheoretical model, when applied to a pooled sample of individuals who were hospitalized for a cardiovascular event. The study follows a predictive correlation design. Recently hospitalized patients (n=168) with cardiovascular disease were pooled from a randomized, controlled trial. Independent variables of the predictive transtheoretical model comprise stages and processes of change, pros and cons to quit smoking (decisional balance), self-efficacy, and social support. These were evaluated at baseline, 2 and 6 months. Compared to smokers, individuals who abstained from smoking at 2 and 6 months were more confident at baseline to remain non-smokers, perceived less pros and cons to continue smoking, utilized less consciousness raising and self-re-evaluation experiential processes of change, and received more positive reinforcement from their social network with regard to their smoke-free behaviour. Self-efficacy and stages of change at baseline were predictive of smoking abstinence after 6 months. Other variables found to be predictive of smoking abstinence at 6 months were an increase in self-efficacy; an increase in positive social support behaviour and a decrease of the pros within the decisional balance. The results partially support the predictive value of the transtheoretical model constructs in smoking cessation for cardiovascular disease patients.
A scoring system to predict breast cancer mortality at 5 and 10 years.
Paredes-Aracil, Esther; Palazón-Bru, Antonio; Folgado-de la Rosa, David Manuel; Ots-Gutiérrez, José Ramón; Compañ-Rosique, Antonio Fernando; Gil-Guillén, Vicente Francisco
2017-03-24
Although predictive models exist for mortality in breast cancer (BC) (generally all-cause mortality), they are not applicable to all patients and their statistical methodology is not the most powerful for developing a predictive model. Consequently, we developed a predictive model specific for BC mortality at 5 and 10 years that resolves the above issues. This cohort study included 287 patients diagnosed with BC in a Spanish region in 2003-2016. The main outcome was time to BC death. Secondary variables were age, personal history of breast surgery, personal history of any cancer/BC, premenopause, postmenopause, grade, estrogen receptor, progesterone receptor, c-erbB2, TNM stage, multicentricity/multifocality, diagnosis and treatment. A points system was constructed to predict BC mortality at 5 and 10 years. The model was internally validated by bootstrapping. The points system was integrated into a mobile application for Android. Mean follow-up was 8.6 ± 3.5 years and 55 patients died of BC. The points system included age, personal history of BC, grade, TNM stage and multicentricity. Validation was satisfactory, in both discrimination and calibration. In conclusion, we constructed and internally validated a scoring system for predicting BC mortality at 5 and 10 years. External validation studies are needed for its use in other geographical areas.
A stochastic model for optimizing composite predictors based on gene expression profiles.
Ramanathan, Murali
2003-07-01
This project was done to develop a mathematical model for optimizing composite predictors based on gene expression profiles from DNA arrays and proteomics. The problem was amenable to a formulation and solution analogous to the portfolio optimization problem in mathematical finance: it requires the optimization of a quadratic function subject to linear constraints. The performance of the approach was compared to that of neighborhood analysis using a data set containing cDNA array-derived gene expression profiles from 14 multiple sclerosis patients receiving intramuscular interferon-beta1a. The Markowitz portfolio model predicts that the covariance between genes can be exploited to construct an efficient composite. The model predicts that a composite is not needed for maximizing the mean value of a treatment effect: only a single gene is needed, but the usefulness of the effect measure may be compromised by high variability. The model optimized the composite to yield the highest mean for a given level of variability or the least variability for a given mean level. The choices that meet this optimization criterion lie on a curve of composite mean vs. composite variability referred to as the "efficient frontier." When a composite is constructed using the model, it outperforms the composite constructed using the neighborhood analysis method. The Markowitz portfolio model may find potential applications in constructing composite biomarkers and in the pharmacogenomic modeling of treatment effects derived from gene expression endpoints.
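A minimal sketch of the underlying Markowitz-style optimization, minimizing the composite variance w'Σw subject to a fixed composite mean and weights summing to one, is given below with synthetic per-gene means and covariances; the gene count, target mean, and solver settings are assumptions.

```python
# Minimal sketch of the portfolio-style composite: choose gene weights that
# minimize composite variance for a target mean treatment effect, with weights
# summing to one. Per-gene means and covariance are synthetic stand-ins.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_genes = 8
mu = rng.normal(1.0, 0.4, n_genes)        # mean treatment effect per gene
A = rng.normal(size=(n_genes, n_genes))
sigma = A @ A.T / n_genes                 # positive semi-definite covariance

def composite_variance(w):
    return w @ sigma @ w

target_mean = 1.1
cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "eq", "fun": lambda w: w @ mu - target_mean}]
res = minimize(composite_variance, x0=np.full(n_genes, 1.0 / n_genes),
               constraints=cons, method="SLSQP")
w_opt = res.x
print("weights:", np.round(w_opt, 3),
      "mean:", w_opt @ mu, "variance:", composite_variance(w_opt))
```

Sweeping `target_mean` over a range of values and recording the minimized variance at each point traces out the "efficient frontier" referred to in the abstract.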
Research on reverse logistics location under uncertainty environment based on grey prediction
NASA Astrophysics Data System (ADS)
Zhenqiang, Bao; Congwei, Zhu; Yuqin, Zhao; Quanke, Pan
This article constructs a reverse logistics network for an uncertain environment, integrates the reverse logistics network with the distribution network, and forms a closed-loop network. An optimization model based on cost is established to help the intermediate center, the manufacturing center and the remanufacturing center make location decisions. A grey model GM(1,1) is used to predict the product holdings of the collection points; the prediction results are then fed into the cost optimization model and a solution is obtained. Finally, an example is given to verify the effectiveness and feasibility of the model.
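A minimal sketch of a GM(1,1) grey forecast of the kind described is given below; the holdings series is an illustrative assumption.

```python
# Minimal sketch of a GM(1,1) grey prediction model for the product holdings
# of a collection point; the short series is a hypothetical stand-in.
import numpy as np

def gm11_forecast(x0, n_ahead):
    """Fit GM(1,1) to the positive series x0 and forecast n_ahead further values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey parameters
    k = np.arange(1, len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # accumulated response
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # back to original scale
    return np.concatenate([[x0[0]], x0_hat])

holdings = [212, 231, 247, 268, 284, 305]                # illustrative data
print(gm11_forecast(holdings, n_ahead=3))
```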
Crimmins, Theresa M; Crimmins, Michael A; Gerst, Katharine L; Rosemartin, Alyssa H; Weltzin, Jake F
2017-01-01
In support of science and society, the USA National Phenology Network (USA-NPN) maintains a rapidly growing, continental-scale, species-rich dataset of plant and animal phenology observations that, with over 10 million records, is the largest such database in the United States. The aim of this study was to explore the potential that exists in the broad and rich volunteer-collected dataset maintained by the USA-NPN for constructing models predicting the timing of phenological transitions across species' ranges within the continental United States. Contributed voluntarily by professional and citizen scientists, these opportunistically collected observations are characterized by spatial clustering, inconsistent spatial and temporal sampling, and short temporal depth (2009-present). Whether data exhibiting such limitations can be used to develop predictive models appropriate for use across large geographic regions has not yet been explored. We constructed predictive models for the phenophases that are most abundant in the database and also relevant to management applications, for all species with available data, regardless of plant growth habit, location, geographic extent, or temporal depth of the observations. We implemented a very basic model formulation: thermal time models with a fixed start date. Sufficient data were available to construct 107 individual species × phenophase models. Remarkably, given the limited temporal depth of this dataset and the simple modeling approach used, fifteen of these models (14%) met our criteria for model fit and error. The majority of these models represented the "breaking leaf buds" and "leaves" phenophases and shrub or tree growth forms. The accumulated growing degree day (GDD) thresholds that emerged ranged from 454 GDDs (Amelanchier canadensis, breaking leaf buds) to 1,300 GDDs (Prunus serotina, open flowers). Such candidate thermal time thresholds can be used to produce real-time and short-term forecast maps of the timing of these phenophase transitions. In addition, many of the candidate models that emerged were suitable for use across the majority of the species' geographic ranges. Real-time and forecast maps of phenophase transitions could support a wide range of natural resource management applications, including invasive plant management, issuing asthma and allergy alerts, and anticipating frost damage to crops in vulnerable states. Our finding that several viable thermal time threshold models that work across the majority of species' ranges could be constructed from the USA-NPN database provides clear evidence of the great potential in this dataset to develop more enhanced predictive models for additional species and phenophases. Further, the candidate models that emerged have immediate utility for supporting a wide range of management applications.
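A minimal sketch of a fixed-start-date thermal time model of the kind described, accumulating growing degree days and reporting the first date a candidate threshold is crossed, is shown below; the weather file, column names, and base temperature are assumptions (the 454 GDD threshold for Amelanchier canadensis is quoted from the text).

```python
# Minimal sketch of a fixed-start-date thermal time model: accumulate growing
# degree days (GDD) from the start of the record and report the first day a
# candidate threshold is crossed. The weather file and base temperature are
# hypothetical assumptions.
import pandas as pd

def predicted_onset(daily_mean_temp_c, threshold_gdd, t_base=0.0):
    """daily_mean_temp_c: Series indexed by date for one site-year."""
    gdd = (daily_mean_temp_c - t_base).clip(lower=0.0).cumsum()
    crossed = gdd[gdd >= threshold_gdd]
    return crossed.index[0] if len(crossed) else None   # None: threshold never reached

weather = pd.read_csv("site_daily_temps.csv", parse_dates=["date"],
                      index_col="date")
print(predicted_onset(weather["tmean_c"], threshold_gdd=454.0))
```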
Kim, Jung Kwon; Ha, Seung Beom; Jeon, Chan Hoo; Oh, Jong Jin; Cho, Sung Yong; Oh, Seung-June; Kim, Hyeon Hoe; Jeong, Chang Wook
2016-01-01
Purpose: Shock-wave lithotripsy (SWL) is accepted as the first-line treatment modality for uncomplicated upper urinary tract stones; however, validated prediction models with regard to stone-free rates (SFRs) are still needed. We aimed to develop nomograms predicting SFRs after the first and within the third session of SWL. Computed tomography (CT) information was also modeled for constructing nomograms. Materials and Methods: From March 2006 to December 2013, 3028 patients were treated with SWL for ureter and renal stones at our three tertiary institutions. Four cohorts were constructed: Total-development, Total-validation, CT-development, and CT-validation cohorts. The nomograms were developed using multivariate logistic regression models with significant variables selected in a univariate logistic regression model. A C-index was used to assess the discrimination accuracy of the nomograms and calibration plots were used to analyze the consistency of prediction. Results: The SFR after the first and within the third session was 48.3% and 68.8%, respectively. Significant variables were sex, stone location, stone number, and maximal stone diameter in the Total-development cohort, with mean Hounsfield unit (HU) and grade of hydronephrosis (HN) as additional parameters in the CT-development cohort. The C-indices were 0.712 and 0.723 for after the first and within the third session of SWL in the Total-development cohort, and 0.755 and 0.756 in the CT-development cohort, respectively. The calibration plots showed good correspondence. Conclusions: We constructed and validated nomograms to predict SFR after SWL. To the best of our knowledge, these are the first graphical nomograms to be modeled with CT information. They may be useful for patient counseling and treatment decision-making. PMID:26890006
NASA Astrophysics Data System (ADS)
Spitulnik, Michele Wisnudel
Science education reforms advocate inquiry as a way to build explanations and make informed decisions. Based on this call this dissertation (1) defines flexible scientific understanding by elaborating on content, inquiry and epistemic understandings; (2) describes an inquiry based unit that integrates dynamic modeling software; (3) examines students' understandings as they construct models; and (4) identifies instructional strategies that support inquiry and model building. A curriculum unit was designed to engage students in inquiry by identifying problems and constructing models to represent, explain and predict phenomena. Ninth grade students in a public mid-western high school worked in teams of 2-3 to ask questions, find information and reflect on the purposes of models. Data sources including classroom video, observations, interviews, student models and handouts were used to formulate cases that examine how two groups construct understanding. A teacher case study identifies the teaching strategies that support understanding. Categories within content, inquiry and epistemic understandings were used to analyze student understandings and teaching supports. The findings demonstrate that students can build flexible understanding by constructing models. Students built: (1) content understanding by identifying key ideas and building relationships and explanations of phenomena; (2) inquiry understanding by defining problems, constructing models and developing positions; and (3) epistemic understanding by describing the purposes of models as generalizing phenomena, testing hypotheses and making predictions. However, students demonstrated difficulty in using evidence to defend scientific arguments. Strategies that support flexible understanding were also identified. Content supports include: setting expectations for explanations; using examples to illustrate explanations; modeling questions; and providing feedback that prompts detailed explanations. Supports for inquiry are setting expectations for data gathering; using examples that illustrate model building; modeling the development of an argument; and providing feedback to promote coherent models. Epistemic supports include: using examples to illustrate purposes and assumptions within models, and providing feedback as students evaluate their models. The dissertation demonstrates that teaching strategies impact student understanding but are challenging to implement. When strategies are not used, students do not necessarily construct desired outcomes such as, using evidence to support arguments.
Reduced order modeling of fluid/structure interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph
2009-11-01
This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.
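A minimal sketch of the basic POD/Galerkin projection recipe underlying such ROMs, extracting a basis from snapshots via the SVD and projecting a linear full-order operator onto it, is given below; the snapshot and operator files are assumptions, and the stability-preserving inner products and boundary treatment developed in the report are not reproduced.

```python
# Minimal sketch of a POD-Galerkin ROM: build a POD basis from high-fidelity
# snapshots, project a (given) linear operator onto it, and integrate the small
# reduced system. Snapshot matrix and full-order operator are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

snapshots = np.load("snapshots.npy")       # columns are full-order states x(t_k)
A_full = np.load("full_operator.npy")      # dx/dt = A_full @ x (linearized model)

# POD basis: leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1   # energy cutoff
Phi = U[:, :r]

A_rom = Phi.T @ A_full @ Phi               # Galerkin-projected operator
x0 = snapshots[:, 0]
sol = solve_ivp(lambda t, a: A_rom @ a, (0.0, 1.0), Phi.T @ x0, rtol=1e-8)
x_rom = Phi @ sol.y                        # reconstruct full-order approximation
print("reduced dimension:", r)
```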
The transferability of safety-driven access management models for application to other sites.
DOT National Transportation Integrated Search
2001-01-01
Several research studies have produced mathematical models that predict the safety impacts of selected access management techniques. Since new models require substantial resources to construct, this study evaluated five existing models with regard to...
Technological aspects of lift-slab method in high-rise-building construction.
NASA Astrophysics Data System (ADS)
Gaidukov, Pavel V.; Pugach, Evgeny M.
2018-03-01
The article considers the efficiency of slab-lifting technology for high-rise building construction. The main problem addressed is identifying the organizational and technological capabilities that justify applying the method. Lift-slab technology is compared with conventional sequential concrete-frame erection: for the former, the governing parameters are defined, and for the latter, an organizational model is developed; this model also defines the boundaries within which each method is applicable. A mathematical model is then created that describes the boundary conditions for using these technologies and allows construction efficiency to be predicted for buildings with different numbers of storeys.
Ritenberga, Olga; Sofiev, Mikhail; Siljamo, Pilvi; Saarto, Annika; Dahl, Aslog; Ekebom, Agneta; Sauliene, Ingrida; Shalaboda, Valentina; Severova, Elena; Hoebeke, Lucie; Ramfjord, Hallvard
2018-02-15
The paper suggests a methodology for predicting the next-year seasonal pollen index (SPI, a sum of daily-mean pollen concentrations) over large regions and demonstrates its performance for birch in Northern and North-Eastern Europe. A statistical model is constructed using meteorological, geophysical and biological characteristics of the previous year. A cluster analysis of multi-annual data from the European Aeroallergen Network (EAN) revealed several large regions in Europe where the observed SPI exhibits similar patterns of multi-annual variability. We built the model for the northern cluster of stations, which covers Finland, Sweden, the Baltic States, part of Belarus, and probably Russia and Norway, where the lack of data did not allow a conclusive analysis. The constructed model was capable of predicting the SPI with a correlation coefficient reaching up to 0.9 for some stations; the odds ratio is infinitely high for 50% of the sites inside the region, and the fraction of predictions falling within a factor of 2 of the observations stays within 40-70%. In particular, the model successfully reproduced both the bi-annual cycle of the SPI and the years when this cycle breaks down. Copyright © 2017 Elsevier B.V. All rights reserved.
2014-01-01
Background: Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results: MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions: Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
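A minimal sketch of the pairwise (clustering) scoring idea, ranking each model in a pool by its average, optionally single-model-score-weighted, similarity to all other models, is given below; the precomputed similarity matrix and the single-model scores are assumptions, not the MULTICOM implementation.

```python
# Minimal sketch of consensus (pairwise) model quality assessment: score each
# model by its average structural similarity to the rest of the pool, optionally
# weighting the average by a per-model single-model QA score. The similarity
# matrix (e.g., TM-score or GDT-TS) and QA scores are hypothetical stand-ins.
import numpy as np

def consensus_scores(pairwise_sim, single_scores=None):
    """pairwise_sim: symmetric (n, n) matrix with 1.0 on the diagonal."""
    n = pairwise_sim.shape[0]
    w = np.ones(n) if single_scores is None else np.asarray(single_scores)
    off_diag = ~np.eye(n, dtype=bool)
    scores = np.empty(n)
    for i in range(n):
        mask = off_diag[i]                              # exclude self-similarity
        scores[i] = np.average(pairwise_sim[i, mask], weights=w[mask])
    return scores

sim = np.load("pairwise_tm_scores.npy")
qa = np.load("single_model_scores.npy")
print("top-ranked model index:", np.argmax(consensus_scores(sim, qa)))
```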
Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali
2013-09-01
The research aims to develop global modeling tools capable of categorizing structurally diverse chemicals in various toxicity classes according to the EEC and European Community directives, and to predict their acute toxicity in fathead minnow using set of selected molecular descriptors. Accordingly, artificial intelligence approach based classification and regression models, such as probabilistic neural networks (PNN), generalized regression neural networks (GRNN), multilayer perceptron neural network (MLPN), radial basis function neural network (RBFN), support vector machines (SVM), gene expression programming (GEP), and decision tree (DT) were constructed using the experimental toxicity data. Diversity and non-linearity in the chemicals' data were tested using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Predictive and generalization abilities of various models constructed here were compared using several statistical parameters. PNN and GRNN models performed relatively better than MLPN, RBFN, SVM, GEP, and DT. Both in two and four category classifications, PNN yielded a considerably high accuracy of classification in training (95.85 percent and 90.07 percent) and validation data (91.30 percent and 86.96 percent), respectively. GRNN rendered a high correlation between the measured and model predicted -log LC50 values both for the training (0.929) and validation (0.910) data and low prediction errors (RMSE) of 0.52 and 0.49 for two sets. Efficiency of the selected PNN and GRNN models in predicting acute toxicity of new chemicals was adequately validated using external datasets of different fish species (fathead minnow, bluegill, trout, and guppy). The PNN and GRNN models showed good predictive and generalization abilities and can be used as tools for predicting toxicities of structurally diverse chemical compounds. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Fang, Kaizheng; Mu, Daobin; Chen, Shi; Wu, Borong; Wu, Feng
2012-06-01
In this study, a prediction model based on an artificial neural network is constructed for surface temperature simulation of a nickel-metal hydride battery. The model is a back-propagation network trained by the Levenberg-Marquardt algorithm. At each ambient temperature of 10 °C, 20 °C, 30 °C and 40 °C, an 8 Ah cylindrical Ni-MH battery is charged at rates of 1 C, 3 C and 5 C to an SOC of 110% in order to provide data for model training. Linear regression, together with the mean square error and absolute error, is used to check the quality of the model training. The results show that the constructed model is of excellent training quality, guaranteeing prediction accuracy. The surface temperature of the battery during charging is then predicted by the model at ambient temperatures of 50 °C, 60 °C and 70 °C, and the results agree well with experimental data. The battery surface temperature is calculated to exceed 90 °C at an ambient temperature of 60 °C if the battery is overcharged at 5 C, which might cause battery safety issues.
An S4 model inspired from self-complementary neutrino mixing
NASA Astrophysics Data System (ADS)
Zhang, Xinyi
2018-03-01
We build an S4 model for neutrino masses and mixings based on the self-complementary (SC) neutrino mixing pattern. The SC mixing is constructed from the self-complementarity relation plus δ_CP = −π/2. We elaborately construct the model at a percent level of accuracy to reproduce the structure given by the SC mixing. After performing a numerical study on the model's parameter space, we find that in the case of normal ordering, the model can give predictions on the observables that are compatible with their 3σ ranges, and give predictions for the not-yet observed quantities like the lightest neutrino mass m_1 ∈ [0.003, 0.010] eV and the Dirac CP violating phase δ_CP ∈ [256.72°, 283.33°].
Wavelet synthetic method for turbulent flow.
Zhou, Long; Rauh, Cornelia; Delgado, Antonio
2015-07-01
Based on the idea of random cascades on wavelet dyadic trees and the energy cascade model known as the wavelet p-model, a series of velocity increments in two-dimensional space is constructed at different levels of scale. Dynamics is imposed on the generated scales by solving the Euler equation in the Lagrangian framework. A dissipation model is used to compensate for the shortcoming of the p-model, which makes predictions only in the inertial range. Wavelet reconstruction as well as multiresolution analysis are then performed on each scale. As a result, a type of isotropic velocity field is created. The statistical properties show that the constructed velocity fields share many important features with real turbulence. The pertinence of this approach for predicting flow intermittency is also discussed.
Niche construction game cancer cells play
NASA Astrophysics Data System (ADS)
Bergman, Aviv; Gligorijevic, Bojana
2015-10-01
The niche construction concept was originally defined in evolutionary biology as the continuous interplay between natural selection via environmental conditions and the modification of those conditions by the organism itself. Processes unfolding during cancer metastasis include the construction of niches, which cancer cells use for more efficient survival, transport into new environments and preparation of remote sites for their arrival. Many elegant experiments have recently illustrated, for example, premetastatic niche construction, but there is practically no mathematical modeling that applies the niche construction framework. To create models useful for understanding the role of niche construction in cancer progression, we argue that (a) genetic, (b) phenotypic and (c) ecological levels are to be included. While the model proposed here is phenomenological in its current form, it can be converted into a predictive outcome model via experimental measurement of the model parameters. Here we give an overview of an experimentally formulated problem in cancer metastasis and propose how the niche construction framework can be utilized and broadened to model it. Other life science disciplines, such as host-parasite coevolution, may also benefit from adapting the niche construction framework, to satisfy a growing need for theoretical consideration of data collected by experimental biology.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
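The within-model/between-model variance split that HBMA applies hierarchically follows the law of total variance; a minimal, non-hierarchical sketch on made-up numbers is shown below (not the authors' code or data).

```python
import numpy as np

def bma_decompose(means, variances, post_prob):
    """Bayesian model averaging of a scalar prediction.

    means, variances : per-model predictive means and (within-model) variances
    post_prob        : posterior model probabilities (sum to 1)
    Returns the BMA mean, within-model variance, between-model variance and total.
    """
    p = np.asarray(post_prob, float)
    m = np.asarray(means, float)
    v = np.asarray(variances, float)
    bma_mean = np.sum(p * m)
    within = np.sum(p * v)                       # expected within-model variance
    between = np.sum(p * (m - bma_mean) ** 2)    # variance of the model means
    return bma_mean, within, between, within + between

# toy example: three candidate geological conceptualizations of a head prediction
print(bma_decompose(means=[12.1, 11.4, 13.0],
                    variances=[0.20, 0.35, 0.15],
                    post_prob=[0.5, 0.3, 0.2]))
```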
Predictive factors of telemedicine service acceptance and behavioral intention of physicians.
Rho, Mi Jung; Choi, In Young; Lee, Jaebeom
2014-08-01
Despite the proliferation of telemedicine technology, telemedicine service acceptance has been slow in actual healthcare settings. The purpose of this research is to develop a theoretical model for explaining the predictive factors influencing physicians' willingness to use telemedicine technology to provide healthcare services. We developed the Telemedicine Service Acceptance model based on the technology acceptance model (TAM) with the inclusion of three predictive constructs from the previously published telemedicine literature: (1) accessibility of medical records and of patients as clinical factors, (2) self-efficacy as an individual factor and (3) perceived incentives as regulatory factors. A survey was conducted, and structural equation modeling was applied to evaluate the empirical validity of the model and causal relationships within the model using the data collected from 183 physicians. Our results confirmed the validity of the original TAM constructs: the perceived usefulness of telemedicine directly impacted the behavioral intention to use it, and the perceived ease of use directly impacted both the perceived usefulness and the behavioral intention to use it. In addition, new predictive constructs were found to have ramifications on TAM variables: the accessibility of medical records and of patients directly impacted the perceived usefulness of telemedicine, self-efficacy had a significant positive effect on both the perceived ease of use and the perceived usefulness of telemedicine, and perceived incentives were found to be important with respect to the intention to use telemedicine technology. This study demonstrated that the Telemedicine Service Acceptance model was feasible and could explain the acceptance of telemedicine services by physicians. These results identified important factors for increasing the involvement of physicians in telemedicine practice. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Xiong, Qingrong; Baychev, Todor G; Jivkov, Andrey P
2016-09-01
Pore network models have been applied widely for simulating a variety of different physical and chemical processes, including phase exchange, non-Newtonian displacement, non-Darcy flow, reactive transport and thermodynamically consistent oil layers. The realism of such modelling, i.e. the credibility of its predictions, depends to a large extent on the quality of the correspondence between the pore space of a given medium and the pore network constructed as its representation. The main experimental techniques for pore space characterisation, including direct imaging, mercury intrusion porosimetry and gas adsorption, are first summarised. A review of the main pore network construction techniques is then presented. Particular focus is given to how such constructions are adapted to the data from experimentally characterised pore systems. Current applications of pore network models are considered, with special emphasis on the effects of adsorption, dissolution and precipitation, as well as biomass growth, on transport coefficients. Pore network models are found to be a valuable tool for understanding and predicting meso-scale phenomena, linking single-pore processes, where other techniques are more accurate, and the homogenised continuum porous media used by the engineering community. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Zhang, Huai-zhu; Lin, Jun; Zhang, Huai-Zhu
2014-06-01
In the present paper, outlier detection methods for the determination of oil yield in oil shale using near-infrared (NIR) diffuse reflection spectroscopy were studied. During quantitative analysis with near-infrared spectroscopy, environmental changes and operator errors both produce outliers. The presence of outliers affects the overall distribution of the samples and leads to a decrease in predictive capability. Thus, the detection of outliers is important for the construction of high-quality calibration models. The methods principal component analysis-Mahalanobis distance (PCA-MD) and resampling by half-means (RHM) were applied to the discrimination and elimination of outliers in this work. The threshold for MD and the confidence level for RHM were optimized using the performance of partial least squares (PLS) models constructed after the elimination of outliers. Compared with the model constructed using the full data set, the RMSEP values of the models constructed after applying PCA-MD with a threshold equal to the sum of the mean and standard deviation of the MD, RHM with a confidence level of 85%, and the combination of PCA-MD and RHM were reduced by 48.3%, 27.5% and 44.8%, respectively. The predictive ability of the calibration model was thus improved effectively.
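A minimal sketch of the PCA-Mahalanobis-distance screening described above, using the mean-plus-one-standard-deviation threshold on synthetic spectra; component count and data are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_mahalanobis_outliers(spectra, n_components=5):
    """Flag outlier spectra whose Mahalanobis distance in PCA score space
    exceeds mean(MD) + std(MD), mirroring the threshold described above."""
    scores = PCA(n_components=n_components).fit_transform(spectra)
    center = scores.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(scores, rowvar=False))
    diff = scores - center
    md = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
    return md > md.mean() + md.std(), md

# toy usage with synthetic "NIR spectra" and two deliberately shifted samples
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 200))
X[:2] += 4.0
mask, md = pca_mahalanobis_outliers(X)
print("flagged samples:", np.where(mask)[0])
```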
Mansberger, Steven L.; Sheppler, Christina R.; McClure, Tina M.; VanAlstine, Cory L.; Swanson, Ingrid L.; Stoumbos, Zoey; Lambert, William E.
2013-01-01
Purpose: To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. Methods: We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n=58). Principal components analysis provided assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Results: Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach’s alphas (>.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P <.05) with adherence and a correlation coefficient (R2) of .40. Test-retest reliability was 90%. Conclusion: The GTCAT shows excellent repeatability, content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence. PMID:24072942
Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.
2016-01-01
Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers. PMID:27379444
Gupta, Shikha; Basant, Nikita; Mohan, Dinesh; Singh, Kunwar P
2016-07-01
The persistence and the removal of organic chemicals from the atmosphere are largely determined by their reactions with the OH radical and O3. Experimental determinations of the kinetic rate constants of OH and O3 with a large number of chemicals are tedious and resource intensive and development of computational approaches has widely been advocated. Recently, ensemble machine learning (EML) methods have emerged as unbiased tools to establish relationship between independent and dependent variables having a nonlinear dependence. In this study, EML-based, temperature-dependent quantitative structure-reactivity relationship (QSRR) models have been developed for predicting the kinetic rate constants for OH (kOH) and O3 (kO3) reactions with diverse chemicals. Structural diversity of chemicals was evaluated using a Tanimoto similarity index. The generalization and prediction abilities of the constructed models were established through rigorous internal and external validation performed employing statistical checks. In test data, the EML QSRR models yielded correlation (R²) of ≥0.91 between the measured and the predicted reactivities. The applicability domains of the constructed models were determined using methods based on descriptors range, Euclidean distance, leverage, and standardization approaches. The prediction accuracies for the higher reactivity compounds were relatively better than those of the low reactivity compounds. Proposed EML QSRR models performed well and outperformed the previous reports. The proposed QSRR models can make predictions of rate constants at different temperatures. The proposed models can be useful tools in predicting the reactivities of chemicals towards OH radical and O3 in the atmosphere.
ERIC Educational Resources Information Center
Xu, Xiaohe; Tung, Yuk-Ying; Dunaway, R. Gregory
2000-01-01
This article constructs a model to predict the likelihood of parental use of corporal punishment on children in two-parent families. Reports that corporal punishment is primarily determined by cultural, human, and social capital that are available to, or already acquired by parents. Discusses an integrated, resource-based theory for predicting use…
Predicting sugar consumption: Application of an integrated dual-process, dual-phase model.
Hagger, Martin S; Trost, Nadine; Keech, Jacob J; Chan, Derwin K C; Hamilton, Kyra
2017-09-01
Excess consumption of added dietary sugars is related to multiple metabolic problems and adverse health conditions. Identifying the modifiable social cognitive and motivational constructs that predict sugar consumption is important to inform behavioral interventions aimed at reducing sugar intake. We tested the efficacy of an integrated dual-process, dual-phase model derived from multiple theories to predict sugar consumption. Using a prospective design, university students (N = 90) completed initial measures of the reflective (autonomous and controlled motivation, intentions, attitudes, subjective norm, perceived behavioral control), impulsive (implicit attitudes), volitional (action and coping planning), and behavioral (past sugar consumption) components of the proposed model. Self-reported sugar consumption was measured two weeks later. A structural equation model revealed that intentions, implicit attitudes, and, indirectly, autonomous motivation to reduce sugar consumption had small, significant effects on sugar consumption. Attitudes, subjective norm, and, indirectly, autonomous motivation to reduce sugar consumption predicted intentions. There were no effects of the planning constructs. Model effects were independent of the effects of past sugar consumption. The model identified the relative contribution of reflective and impulsive components in predicting sugar consumption. Given the prominent role of the impulsive component, interventions that assist individuals in managing cues-to-action and behavioral monitoring are likely to be effective in regulating sugar consumption. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Seay, Christopher; Wang, Ruoyan; Fortney, Jonathan
2018-01-01
We construct a grid of brown dwarf model atmospheres spanning a wide range of atmospheric metallicity (0.3x ≤ met ≤ 100x), C/O ratios (0.25x ≤ C/O ≤ 2.5x), and cloud properties, encompassing atmospheres of effective temperatures 200 ≤ Teff ≤ 2400 K and gravities 2.5 ≤ log g ≤ 5.5. We produce the expected temperature-pressure profiles and emergent spectra from an atmosphere in radiative-convective equilibrium. We can then compare our predicted spectra to observations and retrieval results to aid in their predictions and influence future missions and telescopic observations. In our poster we briefly describe our modeling methodology and present our progress on model grid construction, spanning solar and subsolar C/O and metallicity.
Jones, Rachael M; Simmons, Catherine; Boelter, Fred
2011-06-01
Drywall finishing is a dusty construction activity. We describe a mathematical model that predicts the time-weighted average (TWA) concentration of respirable and total dust in the personal breathing zone of the sander and in the area surrounding joint compound sanding activities. The model represents spatial variation in dust concentrations using two zones, and temporal variation using an exponential function. Interzone flux and the relationship between respirable and total dust are described using empirical factors. For model evaluation, we measured dust concentrations in two field studies, covering three workers from a commercial contracting crew and one unskilled worker. Data from the field studies confirm that the model assumptions and parameterization are reasonable and thus validate the modeling approach. Predicted TWA dust concentrations were in concordance with measured values for the contracting crew, but underestimated measured values for the unskilled worker. Further characterization of skill-related exposure factors is indicated.
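The description above corresponds to the generic two-zone (near-field/far-field) exposure model. The sketch below uses placeholder parameters (not the paper's empirical factors or exponential source term) to show how a time-weighted average concentration falls out of the zone balance equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters (not the paper's): generation rate G (mg/min),
# near-field volume Vn and far-field volume Vf (m^3), interzone airflow
# beta (m^3/min) and room supply/exhaust flow Q (m^3/min)
G, Vn, Vf, beta, Q = 5.0, 8.0, 120.0, 5.0, 15.0

def two_zone(t, c):
    cn, cf = c
    dcn = (G + beta * (cf - cn)) / Vn          # near-field mass balance
    dcf = (beta * (cn - cf) - Q * cf) / Vf     # far-field mass balance
    return [dcn, dcf]

t_end = 120.0                                  # a 2-hour sanding task (min)
sol = solve_ivp(two_zone, (0, t_end), [0.0, 0.0], dense_output=True)
t = np.linspace(0, t_end, 1000)
cn, cf = sol.sol(t)
print("near-field TWA (mg/m^3):", np.trapz(cn, t) / t_end)
print("far-field  TWA (mg/m^3):", np.trapz(cf, t) / t_end)
```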
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyun Jin; Han, Seungbong; Kim, Young Seok, E-mail: ysk@amc.seoul.kr
Purpose: A nomogram is a predictive statistical model that generates the continuous probability of a clinical event such as death or recurrence. The aim of the study was to construct a nomogram to predict 5-year overall survival after postoperative radiation therapy for stage IB to IIA cervical cancer. Methods and Materials: The clinical data from 1702 patients with early-stage cervical cancer, treated at 10 participating hospitals from 1990 to 2011, were reviewed to develop a prediction nomogram based on the Cox proportional hazards model. Demographic, clinical, and pathologic variables were included and analyzed to formulate the nomogram. The discrimination and calibration power of the model was measured using a concordance index (c-index) and calibration curve. Results: The median follow-up period for surviving patients was 75.6 months, and the 5-year overall survival probability was 87.1%. The final model was constructed using the following variables: age, number of positive pelvic lymph nodes, parametrial invasion, lymphovascular invasion, and the use of concurrent chemotherapy. The nomogram predicted the 5-year overall survival with a c-index of 0.69, which was superior to the predictive power of the International Federation of Gynecology and Obstetrics (FIGO) staging system (c-index of 0.54). Conclusions: A survival-predicting nomogram that offers an accurate level of prediction and discrimination was developed based on a large multi-center study. The model may be more useful than the FIGO staging system for counseling individual patients regarding prognosis.
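As a hedged illustration of the modeling workflow (Cox proportional hazards fit, c-index, individual survival probabilities), the sketch below uses the third-party lifelines package on synthetic data; the covariate names mirror the abstract but the values are fabricated for demonstration only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# synthetic stand-in data with the covariates named in the abstract
rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "age": rng.integers(25, 75, n),
    "positive_pelvic_nodes": rng.poisson(1.0, n),
    "parametrial_invasion": rng.integers(0, 2, n),
    "lymphovascular_invasion": rng.integers(0, 2, n),
    "concurrent_chemo": rng.integers(0, 2, n),
})
risk = (0.02 * df["age"] + 0.4 * df["positive_pelvic_nodes"]
        + 0.6 * df["parametrial_invasion"]).to_numpy()
df["time_months"] = rng.exponential(120 * np.exp(-risk))
df["event"] = rng.integers(0, 2, n)            # 1 = death observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
print("c-index:", cph.concordance_index_)
# per-patient 5-year (60-month) survival, the quantity a nomogram displays
print(cph.predict_survival_function(df.iloc[:3], times=[60]))
```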
NASA Astrophysics Data System (ADS)
Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam
2017-11-01
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice (‘span’). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 ± 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 ± 0.01. Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.
3D Printed Vascular Networks Enhance Viability in High-Volume Perfusion Bioreactor.
Ball, Owen; Nguyen, Bao-Ngoc B; Placone, Jesse K; Fisher, John P
2016-12-01
There is a significant clinical need for engineered bone graft substitutes that can quickly, effectively, and safely repair large segmental bone defects. One emerging field of interest involves the growth of engineered bone tissue in vitro within bioreactors, the most promising of which are perfusion bioreactors. Using bioreactor systems, tissue engineered bone constructs can be fabricated in vitro. However, these engineered constructs lack inherent vasculature and once implanted, quickly develop a necrotic core, where no nutrient exchange occurs. Here, we utilized COMSOL modeling to predict oxygen diffusion gradients throughout aggregated alginate constructs, which allowed for the computer-aided design of printable vascular networks, compatible with any large tissue engineered construct cultured in a perfusion bioreactor. We investigated the effect of 3D printed macroscale vascular networks with various porosities on the viability of human mesenchymal stem cells in vitro, using both gas-permeable, and non-gas permeable bioreactor growth chamber walls. Through the use of 3D printed vascular structures in conjunction with a tubular perfusion system bioreactor, cell viability was found to increase by as much as 50% in the core of these constructs, with in silico modeling predicting construct viability at steady state.
Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne
2014-10-01
In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level. Such customization matches the specific tasks/expectations of a health IT system while retaining comparability at the construct level, and provides evidence of its factorial validity and internal consistency reliability through exploratory factor analysis. In this study, we advanced the development of Health-ITUES to examine its construct validity and predictive validity. The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users' perception toward the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first and second order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in 'Quality of Work Life', 'Perceived Usefulness', 'Perceived Ease of Use', and 'User Control', respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
ERIC Educational Resources Information Center
Freund, Philipp Alexander; Holling, Heinz
2011-01-01
The interpretation of retest scores is problematic because they are potentially affected by measurement and predictive bias, which impact construct validity, and because their size differs as a function of various factors. This paper investigates the construct stability of scores on a figural matrices test and models retest effects at the level of…
23 CFR 772.17 - Traffic noise prediction.
Code of Federal Regulations, 2011 CFR
2011-04-01
23 CFR 772.17 (Abatement of Highway Traffic Noise and Construction Noise), Traffic noise prediction: (a) Any analysis required by this subpart must use the FHWA Traffic Noise Model (FHWA TNM), which is described in...
The Theory of Planned Behavior as a Predictor of HIV Testing Intention.
Ayodele, Olabode
2017-03-01
This investigation tests the theory of planned behavior (TPB) as a predictor of HIV testing intention among Nigerian university undergraduate students. A cross-sectional study of 392 students was conducted using a self-administered structured questionnaire that measured socio-demographics, perceived risk of human immunodeficiency virus (HIV) infection, and TPB constructs. Analysis was based on 273 students who had never been tested for HIV. Hierarchical multiple regression analysis assessed the applicability of the TPB in predicting HIV testing intention and additional predictive value of perceived risk of HIV infection. The prediction model containing TPB constructs explained 35% of the variance in HIV testing intention, with attitude and perceived behavioral control making significant and unique contributions to intention. Perceived risk of HIV infection contributed marginally (2%) but significantly to the final prediction model. Findings supported the TPB in predicting HIV testing intention. Although future studies must determine the generalizability of these results, the findings highlight the importance of perceived behavioral control, attitude, and perceived risk of HIV infection in the prediction of HIV testing intention among students who have not previously tested for HIV.
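A minimal sketch of the hierarchical (two-step) regression logic described above, on synthetic construct scores; variable names and effect sizes are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic stand-in data for the constructs (1-7 Likert-style scores)
rng = np.random.default_rng(5)
n = 273
df = pd.DataFrame({
    "attitude": rng.uniform(1, 7, n),
    "subjective_norm": rng.uniform(1, 7, n),
    "pbc": rng.uniform(1, 7, n),               # perceived behavioral control
    "perceived_risk": rng.uniform(1, 7, n),
})
df["intention"] = (0.4 * df["attitude"] + 0.3 * df["pbc"]
                   + 0.1 * df["perceived_risk"] + rng.normal(0, 1, n))

# step 1: TPB constructs only; step 2: add perceived risk and inspect Delta R^2
step1 = smf.ols("intention ~ attitude + subjective_norm + pbc", df).fit()
step2 = smf.ols("intention ~ attitude + subjective_norm + pbc + perceived_risk", df).fit()
print("R2 TPB only     :", round(step1.rsquared, 3))
print("R2 + risk       :", round(step2.rsquared, 3))
print("Delta R2 (risk) :", round(step2.rsquared - step1.rsquared, 3))
```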
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John R.; Brooks, Dusty Marie
In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
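A basic nonparametric bootstrap band for the mean measurement-prediction difference conveys the core idea (though not the report's semi-parametric, functional-data procedure); the profiles below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
depth = np.linspace(0, 1, 50)                      # normalized through-wall depth
# synthetic measured and predicted weld-residual-stress profiles (MPa)
true = 300 * np.cos(2 * np.pi * depth)
measured = true + rng.normal(0, 40, size=(12, 50))        # 12 measurement repeats
predicted = true + 25 + rng.normal(0, 60, size=(7, 50))   # 7 model predictions

def boot_mean_diff(n_boot=2000):
    """Bootstrap the mean (measured - predicted) profile by resampling profiles."""
    out = np.empty((n_boot, depth.size))
    for b in range(n_boot):
        m = measured[rng.integers(0, len(measured), len(measured))]
        p = predicted[rng.integers(0, len(predicted), len(predicted))]
        out[b] = m.mean(axis=0) - p.mean(axis=0)
    return out

band = np.percentile(boot_mean_diff(), [2.5, 97.5], axis=0)
# agreement check: where the 95% band excludes zero, measurement and model disagree
disagree = (band[0] > 0) | (band[1] < 0)
print("fraction of depths with significant disagreement:", disagree.mean())
```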
Prediction equations of forced oscillation technique: the insidious role of collinearity.
Narchi, Hassib; AlBlooshi, Afaf
2018-03-27
Many studies have reported reference data for the forced oscillation technique (FOT) in healthy children. The prediction equations for FOT parameters were derived from multivariable regression models examining the effect of age, gender, weight and height on each parameter. As many of these variables are likely to be correlated, collinearity might have affected the accuracy of the models, potentially resulting in misleading, erroneous or difficult-to-interpret conclusions. The aims of this work were to review all FOT publications in children since 2005 and analyze whether collinearity was considered in the construction of the published prediction equations, to compare these prediction equations with our own study, and to analyse, in our study, how collinearity between the explanatory variables might affect the predicted equations if it were not considered in the model. The results showed that none of the ten reviewed studies stated whether collinearity had been checked for. Half of the reports had also included in their equations variables which are physiologically correlated, such as age, weight and height. The predicted resistance varied by up to 28% among these studies. In our study, multicollinearity was identified between the explanatory variables initially considered for the regression model (age, weight and height). Ignoring it would have resulted in inaccuracies in the coefficients of the equation, their signs (positive or negative), their 95% confidence intervals, their significance levels and the model goodness of fit. In conclusion, with inaccurately constructed and improperly reported models, understanding the results and reproducing the models for future research might be compromised.
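Collinearity of the kind discussed here is routinely screened with variance inflation factors (VIFs); a short sketch on synthetic, deliberately correlated age/height/weight data follows.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# synthetic pediatric anthropometrics: age, height and weight are strongly correlated
rng = np.random.default_rng(7)
age = rng.uniform(3, 16, 300)                         # years
height = 85 + 6.0 * age + rng.normal(0, 6, 300)       # cm
weight = 0.4 * height - 20 + rng.normal(0, 4, 300)    # kg
X = sm.add_constant(pd.DataFrame({"age": age, "height": height, "weight": weight}))

vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
print(vif.round(1))   # VIF > 5-10 flags the collinearity the paper warns about
```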
THE DYNAMIC RESPONSE OF THERMOMETER-WELL ASSEMBLIES.
parameter models of the thermometric system were constructed and gave acceptable agreement with the experimental results. These models can be used to predict the dynamic behavior of any similar thermometer system. (Author)
Analyzing asset management data using data and text mining.
DOT National Transportation Integrated Search
2014-07-01
Predictive models using text from a sample of competitively bid California highway projects have been used to predict a construction project's likely level of cost overrun. A text description of the project and the text of the five largest project line...
Miller-Graff, Laura E; Cummings, E Mark; Bergman, Kathleen N
2016-10-01
The role of emotional security in promoting positive adjustment following exposure to marital conflict has been identified in a large number of empirical investigations, yet to date, no interventions have explicitly addressed the processes that predict child adjustment after marital conflict. The current study evaluated a randomized controlled trial of a family intervention program aimed at promoting constructive marital conflict behaviors thereby increasing adolescent emotional security and adjustment. Families (n = 225) were randomized into 1 of 4 conditions: Parent-Adolescent (n = 75), Parent-Only (n = 75), Self-Study (n = 38) and No Treatment (n = 37). Multi-informant and multi-method assessments were conducted at baseline, post-treatment and 6-month follow-up. Effects of treatment on destructive and constructive conflict behaviors were evaluated using multilevel models where observations were nested within individuals over time. Process models assessing the impact of constructive and destructive conflict behaviors on emotional insecurity and adolescent adjustment were evaluated using path modeling. Results indicated that the treatment was effective in increasing constructive conflict behaviors (d = 0.89) and decreasing destructive conflict behaviors (d = -0.30). For the Parent-Only Group, post-test constructive conflict behaviors directly predicted lower levels of adolescent externalizing behaviors at 6-month follow-up. Post-test constructive conflict skills also indirectly affected adolescent internalizing behaviors through adolescent emotional security. These findings support the use of a brief psychoeducational intervention in improving post-treatment conflict and emotional security about interparental relationships.
Automated antibody structure prediction using Accelrys tools: Results and best practices
Fasnacht, Marc; Butenhof, Ken; Goupil-Lamy, Anne; Hernandez-Guzman, Francisco; Huang, Hongwei; Yan, Lisa
2014-01-01
We describe the methodology and results from our participation in the second Antibody Modeling Assessment experiment. During the experiment we predicted the structure of eleven unpublished antibody Fv fragments. Our prediction methods centered on template-based modeling; potential templates were selected from an antibody database based on their sequence similarity to the target in the framework regions. Depending on the quality of the templates, we constructed models of the antibody framework regions either using a single, chimeric or multiple template approach. The hypervariable loop regions in the initial models were rebuilt by grafting the corresponding regions from suitable templates onto the model. For the H3 loop region, we further refined models using ab initio methods. The final models were subjected to constrained energy minimization to resolve severe local structural problems. The analysis of the models submitted show that Accelrys tools allow for the construction of quite accurate models for the framework and the canonical CDR regions, with RMSDs to the X-ray structure on average below 1 Å for most of these regions. The results show that accurate prediction of the H3 hypervariable loops remains a challenge. Furthermore, model quality assessment of the submitted models show that the models are of quite high quality, with local geometry assessment scores similar to that of the target X-ray structures. Proteins 2014; 82:1583–1598. © 2014 The Authors. Proteins published by Wiley Periodicals, Inc. PMID:24833271
Newman, J; Egan, T; Harbourne, N; O'Riordan, D; Jacquier, J C; O'Sullivan, M
2014-08-01
Sensory evaluation can be problematic for ingredients with a bitter taste during the research and development phase of new food products. In this study, 19 dairy protein hydrolysates (DPH) were analysed by an electronic tongue and characterised by their physicochemical properties; the data obtained from these methods were correlated with bitterness intensity as scored by a trained sensory panel, and each model was also assessed for its predictive capability. The physicochemical characteristics of the DPHs investigated were the degree of hydrolysis (DH%), and data relating to peptide size and relative hydrophobicity from size exclusion chromatography (SEC) and reverse phase (RP) HPLC. Partial least squares (PLS) regression was used to construct the prediction models. All PLS regressions had good correlations (0.78 to 0.93), with the strongest being the combination of data obtained from SEC and RP-HPLC. However, the PLS model with the strongest predictive power was based on the e-tongue, which had the PLS regression with the lowest root mean predicted residual error sum of squares (PRESS) in the study. The results show that the PLS models constructed with the e-tongue and with the combination of SEC and RP-HPLC have the potential to be used for the prediction of bitterness, thus reducing the reliance on sensory analysis of DPHs in future food research. Copyright © 2014 Elsevier B.V. All rights reserved.
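A hedged sketch of the PLS-with-PRESS workflow on synthetic e-tongue data (sensor counts, sample sizes and values are placeholders, not the study's measurements):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# synthetic stand-in: 19 hydrolysates x e-tongue sensor responses, panel bitterness scores
rng = np.random.default_rng(8)
X = rng.normal(size=(19, 7))                    # 7 e-tongue sensor signals
y = X @ rng.normal(size=7) + rng.normal(0, 0.3, 19)

def press(n_components):
    """Predicted residual error sum of squares under leave-one-out cross-validation."""
    pls = PLSRegression(n_components=n_components)
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
    return np.sum((y - y_cv) ** 2)

for k in range(1, 6):
    print(f"{k} latent variables: PRESS = {press(k):.2f}")
```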
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages over the two currently dominant strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.
Nonstationary time series prediction combined with slow feature analysis
NASA Astrophysics Data System (ADS)
Wang, G.; Chen, X.
2015-07-01
Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when constructing the climate dynamics. This paper presents a new technique of obtaining the driving forces of a time series from the slow feature analysis (SFA) approach, and then introduces them into a predictive model to predict nonstationary time series. The basic theory of the technique is to consider the driving forces as state variables and to incorporate them into the predictive model. Experiments using a modified logistic time series and winter ozone data in Arosa, Switzerland, were conducted to test the model. The results showed improved prediction skills.
Predicting local field potentials with recurrent neural networks.
Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter
2016-08-01
We present a recurrent neural network using LSTM (Long Short-Term Memory) that is capable of modeling and predicting local field potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs 1, 10, and 100 milliseconds forward in time. Our results show that prediction using LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.
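A minimal sketch of such an LSTM forecaster in PyTorch, trained on a synthetic multi-channel signal rather than patient LFP recordings; the architecture and hyperparameters are assumptions, not the authors'.

```python
import torch
import torch.nn as nn

class LFPForecaster(nn.Module):
    """LSTM that maps a window of multi-channel samples to the signal
    `horizon` steps ahead -- a minimal sketch, not the paper's architecture."""
    def __init__(self, n_channels=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_channels)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # prediction for the target step

# toy training loop on synthetic data, forecasting 10 samples ahead
horizon, window, n_ch = 10, 200, 8
signal = torch.randn(5000, n_ch).cumsum(0)    # random-walk stand-in for LFP traces
X = torch.stack([signal[i:i + window] for i in range(0, 4000, 20)])
Y = torch.stack([signal[i + window + horizon] for i in range(0, 4000, 20)])

model = LFPForecaster(n_ch)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y)
    loss.backward()
    opt.step()
print("final MSE:", float(loss))
```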
Predicting future forestland area: a comparison of econometric approaches.
SoEun Ahn; Andrew J. Plantinga; Ralph J. Alig
2000-01-01
Predictions of future forestland area are an important component of forest policy analyses. In this article, we test the ability of econometric land use models to accurately forecast forest area. We construct a panel data set for Alabama consisting of county-level time-series observations for the period 1964 to 1992. We estimate models using restricted data sets, namely,...
This program is part of a larger program called ECOHAB: Florida that includes this study as well as physical oceanography, circulation patterns, and shelf scale modeling for predicting the occurrence and transport of Karenia brevis (=Gymnodinium breve) red tides. The physical par...
Fei, Y; Hu, J; Li, W-Q; Wang, W; Zong, G-Q
2017-03-01
Essentials: Predicting the occurrence of portosplenomesenteric vein thrombosis (PSMVT) is difficult. We studied 72 patients with acute pancreatitis. Artificial neural network modeling was more accurate than logistic regression in predicting PSMVT. Additional predictive factors may be incorporated into artificial neural networks. Objective: To construct and validate artificial neural networks (ANNs) for predicting the occurrence of portosplenomesenteric venous thrombosis (PSMVT) and to compare the predictive ability of the ANNs with that of logistic regression. Methods: The ANN and logistic regression models were constructed using simple clinical and laboratory data from 72 acute pancreatitis (AP) patients. The models were first trained on 48 randomly chosen patients and validated on the remaining 24 patients. Accuracy and performance characteristics were compared between the two approaches using SPSS 17.0 software. Results: The training set and validation set did not differ on any of the 11 variables. After training, the back-propagation network training error converged to 1 × 10⁻²⁰, and the network retained excellent pattern recognition ability. When the ANN model was applied to the validation set, it revealed a sensitivity of 80%, a specificity of 85.7%, a positive predictive value of 77.6% and a negative predictive value of 90.7%. The accuracy was 83.3%. Differences were found between the ANN and logistic regression models in these parameters (10.0% [95% CI, -14.3 to 34.3%], 14.3% [95% CI, -8.6 to 37.2%], 15.7% [95% CI, -9.9 to 41.3%], 11.8% [95% CI, -8.2 to 31.8%] and 22.6% [95% CI, -1.9 to 47.1%], respectively). When ANN modeling was used to identify PSMVT, the area under the receiver operating characteristic curve was 0.849 (95% CI, 0.807-0.901), demonstrating better overall properties than logistic regression modeling (AUC = 0.716; 95% CI, 0.679-0.761). Conclusions: ANN modeling was a more accurate tool than logistic regression for predicting the occurrence of PSMVT following AP. More clinical factors or biomarkers may be incorporated into ANN modeling to improve its predictive ability. © 2016 International Society on Thrombosis and Haemostasis.
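To illustrate the ANN-versus-logistic-regression comparison (not the authors' variables, software or data), a compact scikit-learn sketch with a 48/24 train/validation split follows.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# synthetic stand-in for the 11 clinical/laboratory variables and PSMVT outcome
rng = np.random.default_rng(9)
X = rng.normal(size=(72, 11))
y = (X[:, 0] + 0.8 * X[:, 3] - 0.5 * X[:, 7] + rng.normal(0, 1, 72) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=48, random_state=0)

ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
for name, clf in [("ANN", ann), ("logistic", logit)]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name:8s} validation AUC = {auc:.2f}")
```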
Alternative approaches to predicting methane emissions from dairy cows.
Mills, J A N; Kebreab, E; Yates, C M; Crompton, L A; Cammell, S B; Dhanoa, M S; Agnew, R E; France, J
2003-12-01
Previous attempts to apply statistical models, which correlate nutrient intake with methane production, have been of limited value where predictions are obtained for nutrient intakes and diet types outside those used in model construction. Dynamic mechanistic models have proved more suitable for extrapolation, but they remain computationally expensive and are not applied easily in practical situations. The first objective of this research focused on employing conventional techniques to generate statistical models of methane production appropriate to United Kingdom dairy systems. The second objective was to evaluate these models and a model published previously using both United Kingdom and North American data sets. Thirdly, nonlinear models were considered as alternatives to the conventional linear regressions. The United Kingdom calorimetry data used to construct the linear models also were used to develop the three nonlinear alternatives that were all of modified Mitscherlich (monomolecular) form. Of the linear models tested, an equation from the literature proved most reliable across the full range of evaluation data (root mean square prediction error = 21.3%). However, the Mitscherlich models demonstrated the greatest degree of adaptability across diet types and intake level. The most successful model for simulating the independent data was a modified Mitscherlich equation with the steepness parameter set to represent dietary starch-to-ADF ratio (root mean square prediction error = 20.6%). However, when such data were unavailable, simpler Mitscherlich forms relating dry matter or metabolizable energy intake to methane production remained better alternatives relative to their linear counterparts.
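A hedged sketch of fitting a monomolecular (Mitscherlich) curve to synthetic calorimetry-style data; the parameterization below is one common form, not necessarily the exact published equations.

```python
import numpy as np
from scipy.optimize import curve_fit

def mitscherlich(mei, a, c):
    """Monomolecular (Mitscherlich) curve: methane output rises toward an
    asymptote `a` with metabolizable energy intake (MEI); one common form only."""
    return a * (1.0 - np.exp(-c * mei))

# synthetic calorimetry-style data: MEI (MJ/d) vs methane energy output (MJ/d)
rng = np.random.default_rng(10)
mei = rng.uniform(50, 300, 60)
ch4 = 45.0 * (1 - np.exp(-0.008 * mei)) + rng.normal(0, 1.5, 60)

(a_hat, c_hat), _ = curve_fit(mitscherlich, mei, ch4, p0=[40.0, 0.01])
pred = mitscherlich(mei, a_hat, c_hat)
rmspe = 100 * np.sqrt(np.mean(((ch4 - pred) / ch4) ** 2))
print(f"a = {a_hat:.1f} MJ/d, c = {c_hat:.4f}, RMSPE = {rmspe:.1f}%")
```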
NASA Astrophysics Data System (ADS)
Zhan, Weiwei; Fan, Xuanmei; Huang, Runqiu; Pei, Xiangjun; Xu, Qiang; Li, Weile
2017-06-01
Rock avalanches are extremely rapid, massive flow-like movements of fragmented rock. The travel path of rock avalanches may in some cases be confined by channels; such cases are referred to as channelized rock avalanches. Channelized rock avalanches are potentially dangerous due to their difficult-to-predict travel distance. In this study, we constructed a dataset with detailed characteristic parameters of 38 channelized rock avalanches triggered by the 2008 Wenchuan earthquake, using visual interpretation of remote sensing imagery, field investigation and literature review. Based on this dataset, we assessed the influence of different factors on the runout distance and developed prediction models for channelized rock avalanches using the multivariate regression method. The results suggested that the movement of channelized rock avalanches was dominated by the landslide volume, total relief and channel gradient. The performance of both models was then tested with an independent validation dataset of eight rock avalanches induced by the 2008 Wenchuan earthquake, the Ms 7.0 Lushan earthquake and heavy rainfall in 2013, showing acceptably good prediction results. Therefore, the travel-distance prediction models for channelized rock avalanches constructed in this study are applicable and reliable for predicting the runout of similar rock avalanches in other regions.
Mathur, Rinku; Adlakha, Neeru
2014-06-01
Phylogenetic trees give information about the vertical relationships of ancestors and descendants, whereas phylogenetic networks are used to visualize the horizontal relationships among different organisms. In order to predict reticulate events, there is a need to construct phylogenetic networks. Here, a linear programming (LP) model has been developed for the construction of a phylogenetic network. The model is validated using data sets of chloroplast 16S rRNA sequences of photosynthetic organisms and Influenza A/H5N1 viruses. The results obtained are in agreement with those obtained by earlier researchers.
New, Leslie; Bjerre, Emily; Millsap, Brian A.; Otto, Mark C.; Runge, Michael C.
2015-01-01
Wind power is a major candidate in the search for clean, renewable energy. Beyond the technical and economic challenges of wind energy development are environmental issues that may restrict its growth. Avian fatalities due to collisions with rotating turbine blades are a leading concern and there is considerable uncertainty surrounding avian collision risk at wind facilities. This uncertainty is not reflected in many models currently used to predict the avian fatalities that would result from proposed wind developments. We introduce a method to predict fatalities at wind facilities, based on pre-construction monitoring. Our method can directly incorporate uncertainty into the estimates of avian fatalities and can be updated if information on the true number of fatalities becomes available from post-construction carcass monitoring. Our model considers only three parameters: hazardous footprint, bird exposure to turbines and collision probability. By using a Bayesian analytical framework we account for uncertainties in these values, which are then reflected in our predictions and can be reduced through subsequent data collection. The simplicity of our approach makes it accessible to ecologists concerned with the impact of wind development, as well as to managers, policy makers and industry interested in its implementation in real-world decision contexts. We demonstrate the utility of our method by predicting golden eagle (Aquila chrysaetos) fatalities at a wind installation in the United States. Using pre-construction data, we predicted 7.48 eagle fatalities year⁻¹ (95% CI: (1.1, 19.81)). The U.S. Fish and Wildlife Service uses the 80th quantile (11.0 eagle fatalities year⁻¹) in their permitting process to ensure there is only a 20% chance a wind facility exceeds the authorized fatalities. Once data were available from two years of post-construction monitoring, we updated the fatality estimate to 4.8 eagle fatalities year⁻¹ (95% CI: (1.76, 9.4); 80th quantile, 6.3). In this case, the increased precision in the fatality prediction lowered the level of authorized take, and thus lowered the required amount of compensatory mitigation.
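The three-parameter structure described above lends itself to a simple Monte Carlo sketch: draw the hazardous footprint, exposure and collision probability from priors, multiply, and summarize the predictive distribution (including the 80th quantile used in permitting). The priors and units below are purely illustrative, not the USFWS values.

```python
import numpy as np

rng = np.random.default_rng(11)
n_draws = 100_000

# illustrative priors only (shapes and scales are assumptions, not agency values)
exposure = rng.gamma(shape=4.0, scale=0.5, size=n_draws)   # eagle exposure rate
footprint = rng.normal(0.12, 0.01, n_draws)                # hazardous footprint of the facility
collision_prob = rng.beta(2.0, 18.0, n_draws)              # probability a passage ends in collision

# expected fatalities per year under these illustrative units
fatalities = exposure * footprint * collision_prob * 365
lo, med, q80, hi = np.percentile(fatalities, [2.5, 50, 80, 97.5])
print(f"median {med:.2f}, 80th percentile {q80:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```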
New, Leslie; Bjerre, Emily; Millsap, Brian; Otto, Mark C.; Runge, Michael C.
2015-01-01
Wind power is a major candidate in the search for clean, renewable energy. Beyond the technical and economic challenges of wind energy development are environmental issues that may restrict its growth. Avian fatalities due to collisions with rotating turbine blades are a leading concern and there is considerable uncertainty surrounding avian collision risk at wind facilities. This uncertainty is not reflected in many models currently used to predict the avian fatalities that would result from proposed wind developments. We introduce a method to predict fatalities at wind facilities, based on pre-construction monitoring. Our method can directly incorporate uncertainty into the estimates of avian fatalities and can be updated if information on the true number of fatalities becomes available from post-construction carcass monitoring. Our model considers only three parameters: hazardous footprint, bird exposure to turbines and collision probability. By using a Bayesian analytical framework we account for uncertainties in these values, which are then reflected in our predictions and can be reduced through subsequent data collection. The simplicity of our approach makes it accessible to ecologists concerned with the impact of wind development, as well as to managers, policy makers and industry interested in its implementation in real-world decision contexts. We demonstrate the utility of our method by predicting golden eagle (Aquila chrysaetos) fatalities at a wind installation in the United States. Using pre-construction data, we predicted 7.48 eagle fatalities year-1 (95% CI: (1.1, 19.81)). The U.S. Fish and Wildlife Service uses the 80th quantile (11.0 eagle fatalities year-1) in their permitting process to ensure there is only a 20% chance a wind facility exceeds the authorized fatalities. Once data were available from two-years of post-construction monitoring, we updated the fatality estimate to 4.8 eagle fatalities year-1 (95% CI: (1.76, 9.4); 80th quantile, 6.3). In this case, the increased precision in the fatality prediction lowered the level of authorized take, and thus lowered the required amount of compensatory mitigation. PMID:26134412
Crimmins, Michael A.; Gerst, Katharine L.; Rosemartin, Alyssa H.; Weltzin, Jake F.
2017-01-01
Purpose In support of science and society, the USA National Phenology Network (USA-NPN) maintains a rapidly growing, continental-scale, species-rich dataset of plant and animal phenology observations that with over 10 million records is the largest such database in the United States. The aim of this study was to explore the potential that exists in the broad and rich volunteer-collected dataset maintained by the USA-NPN for constructing models predicting the timing of phenological transition across species’ ranges within the continental United States. Contributed voluntarily by professional and citizen scientists, these opportunistically collected observations are characterized by spatial clustering, inconsistent spatial and temporal sampling, and short temporal depth (2009-present). Whether data exhibiting such limitations can be used to develop predictive models appropriate for use across large geographic regions has not yet been explored. Methods We constructed predictive models for phenophases that are the most abundant in the database and also relevant to management applications for all species with available data, regardless of plant growth habit, location, geographic extent, or temporal depth of the observations. We implemented a very basic model formulation—thermal time models with a fixed start date. Results Sufficient data were available to construct 107 individual species × phenophase models. Remarkably, given the limited temporal depth of this dataset and the simple modeling approach used, fifteen of these models (14%) met our criteria for model fit and error. The majority of these models represented the “breaking leaf buds” and “leaves” phenophases and represented shrub or tree growth forms. Accumulated growing degree day (GDD) thresholds that emerged ranged from 454 GDDs (Amelanchier canadensis-breaking leaf buds) to 1,300 GDDs (Prunus serotina-open flowers). Such candidate thermal time thresholds can be used to produce real-time and short-term forecast maps of the timing of these phenophase transitions. In addition, many of the candidate models that emerged were suitable for use across the majority of the species’ geographic ranges. Real-time and forecast maps of phenophase transitions could support a wide range of natural resource management applications, including invasive plant management, issuing asthma and allergy alerts, and anticipating frost damage for crops in vulnerable states. Implications Our finding that several viable thermal time threshold models that work across the majority of the species’ ranges could be constructed from the USA-NPN database provides clear evidence that great potential exists in this dataset to develop enhanced predictive models for additional species and phenophases. Further, the candidate models that emerged have immediate utility for supporting a wide range of management applications. PMID:28829783
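The "thermal time model with a fixed start date" named in the abstract reduces to a short calculation, sketched below in Python. Daily growing degree days accumulate from a fixed start date above a base temperature, and a phenophase transition is predicted when the sum crosses a species-specific threshold. The 454 GDD value for Amelanchier canadensis breaking leaf buds comes from the abstract; the base temperature, start date, and temperature series are assumptions.

```python
# Hedged sketch: a fixed-start-date growing degree day (GDD) model of the kind
# described in the abstract, applied to synthetic daily temperatures.
import numpy as np

def accumulated_gdd(tmin, tmax, t_base=0.0):
    """Daily GDD = max(mean daily temperature - base temperature, 0), accumulated."""
    gdd = np.maximum((tmin + tmax) / 2.0 - t_base, 0.0)
    return np.cumsum(gdd)

def predicted_onset_day(tmin, tmax, threshold, t_base=0.0, start_doy=1):
    """Day of year when accumulated GDD (from the fixed start date) first exceeds the threshold."""
    cum = accumulated_gdd(tmin, tmax, t_base)
    if cum[-1] < threshold:
        return None
    return start_doy + int(np.argmax(cum >= threshold))

# Synthetic daily temperatures (deg C) for Jan 1 - Jun 30
days = np.arange(181)
tmax = 5 + 20 * np.sin(np.pi * days / 365) + np.random.default_rng(0).normal(0, 3, days.size)
tmin = tmax - 8

print(predicted_onset_day(tmin, tmax, threshold=454))   # 454 GDD threshold from the abstract
```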
NASA Astrophysics Data System (ADS)
Balachandran, Prasanna V.; Emery, Antoine A.; Gubernatis, James E.; Lookman, Turab; Wolverton, Chris; Zunger, Alex
2018-04-01
We apply machine learning (ML) methods to a database of 390 experimentally reported ABO3 compounds to construct two statistical models that predict possible new perovskite materials and possible new cubic perovskites. The first ML model classified the 390 compounds into 254 perovskites and 136 that are not perovskites with a 90% average cross-validation (CV) accuracy; the second ML model further classified the perovskites into 22 known cubic perovskites and 232 known noncubic perovskites with a 94% average CV accuracy. We find that the most effective chemical descriptors affecting our classification include largely geometric constructs such as the A and B Shannon ionic radii, the tolerance and octahedral factors, the A-O and B-O bond lengths, and the A and B Villars' Mendeleev numbers. We then construct an additional list of 625 ABO3 compounds assembled from charge conserving combinations of A and B atoms absent from our list of known compounds. Then, using the two ML models constructed on the known compounds, we predict that 235 of the 625 exist in a perovskite structure with a confidence greater than 50% and among them that 20 exist in the cubic structure (albeit, the latter with only ~50% confidence). We find that the new perovskites are most likely to occur when the A and B atoms are a lanthanide or actinide, when the A atom is an alkali, alkali earth, or late transition metal atom, or when the B atom is a p-block atom. We also compare the ML findings with the density functional theory calculations and convex hull analyses in the Open Quantum Materials Database (OQMD), which predicts the T = 0 K ground-state stability of all the ABO3 compounds. We find that OQMD predicts 186 of 254 of the perovskites in the experimental database to be thermodynamically stable within 100 meV/atom of the convex hull and predicts 87 of the 235 ML-predicted perovskite compounds to be thermodynamically stable within 100 meV/atom of the convex hull, including 6 of these to be in cubic structures. We suggest these 87 as the most promising candidates for future experimental synthesis of novel perovskites.
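A minimal sketch of the two-stage classification workflow follows, assuming scikit-learn. Placeholder descriptors and labels stand in for the authors' curated ABO3 database and geometric features, and the random forest is simply a stand-in classifier rather than the specific ML models used in the paper.

```python
# Hedged sketch: stage 1 classifies perovskite vs. not; stage 2 classifies the
# perovskites into cubic vs. noncubic, each with cross-validated accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_compounds, n_descriptors = 390, 8                    # sizes mirror the abstract
X = rng.normal(size=(n_compounds, n_descriptors))      # placeholder descriptors (radii, tolerance factor, ...)
y_perov = rng.integers(0, 2, n_compounds)              # placeholder labels: 1 = perovskite

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv_acc = cross_val_score(clf, X, y_perov, cv=5, scoring="accuracy")
print("stage 1 (perovskite / not) CV accuracy: %.2f +/- %.2f" % (cv_acc.mean(), cv_acc.std()))

# Stage 2: restrict to known/predicted perovskites and classify cubic vs. noncubic
mask = y_perov == 1
y_cubic = rng.integers(0, 2, mask.sum())               # placeholder labels: 1 = cubic
cv_acc2 = cross_val_score(clf, X[mask], y_cubic, cv=5, scoring="accuracy")
print("stage 2 (cubic / noncubic) CV accuracy: %.2f" % cv_acc2.mean())
```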
Nguyen, X Cuong; Chang, S Woong; Nguyen, Thi Loan; Ngo, H Hao; Kumar, Gopalakrishnan; Banu, J Rajesh; Vu, M Cuong; Le, H Sinh; Nguyen, D Duc
2018-09-15
A pilot-scale hybrid constructed wetland with vertical flow and horizontal flow in series was constructed and used to investigate organic material and nutrient removal rate constants for wastewater treatment and establish a practical predictive model for use. For this purpose, the performance of multiple parameters was statistically evaluated during the process and predictive models were suggested. The measurement of the kinetic rate constant was based on the use of the first-order derivation and Monod kinetic derivation (Monod) paired with a plug flow reactor (PFR) and a continuously stirred tank reactor (CSTR). Both the Lindeman, Merenda, and Gold (LMG) analysis and Bayesian model averaging (BMA) method were employed for identifying the relative importance of variables and their optimal multiple regression (MR). The results showed that the first-order-PFR (M2) model did not fit the data (P > 0.05 and R² < 0.5), whereas the first-order-CSTR (M1) model for the chemical oxygen demand (CODCr) and Monod-CSTR (M3) model for the CODCr and ammonium nitrogen (NH4-N) showed a high correlation with the experimental data (R² > 0.5). The pollutant removal rates in the case of M1 were 0.19 m/d (CODCr) and those for M3 were 25.2 g/m²·d for CODCr and 2.63 g/m²·d for NH4-N. By applying a multi-variable linear regression method, the optimal empirical models were established for predicting the final effluent concentration of five days' biochemical oxygen demand (BOD5) and NH4-N. In general, the hydraulic loading rate was considered an important variable having a high value of relative importance, which appeared in all the optimal predictive models. Copyright © 2018 Elsevier Ltd. All rights reserved.
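For readers who want the arithmetic behind the first-order CSTR rate constants, a minimal sketch follows. It back-calculates a first-order areal rate constant and an areal removal rate from influent/effluent concentrations and the hydraulic loading rate; the numbers are illustrative, and the Monod and regression models of the study are not reproduced.

```python
# Hedged sketch: first-order CSTR rate constant and areal removal rate from a
# steady-state mass balance; inputs are illustrative, not the study's data.
def k_first_order_cstr(c_in, c_out, hlr):
    """First-order areal rate constant (m/d) for a CSTR at steady state.

    Mass balance per unit area: hlr*(C_in - C_out) = k*C_out, so k = hlr*(C_in/C_out - 1).
    """
    return hlr * (c_in / c_out - 1.0)

def areal_removal_rate(c_in, c_out, hlr):
    """Mass removed per unit wetland area (g/m2/d) for concentrations in g/m3 and HLR in m/d."""
    return hlr * (c_in - c_out)

print(k_first_order_cstr(c_in=250.0, c_out=80.0, hlr=0.09))   # assumed COD values, m/d
print(areal_removal_rate(c_in=250.0, c_out=80.0, hlr=0.09))   # g/m2/d
```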
Reliability Prediction Models for Discrete Semiconductor Devices
1988-07-01
Factors found to influence failure rate were device construction, semiconductor material, junction temperature, electrical stress, and circuit application.
Engineering a Functional Small RNA Negative Autoregulation Network with Model-Guided Design.
Hu, Chelsea Y; Takahashi, Melissa K; Zhang, Yan; Lucks, Julius B
2018-05-22
RNA regulators are powerful components of the synthetic biology toolbox. Here, we expand the repertoire of synthetic gene networks built from these regulators by constructing a transcriptional negative autoregulation (NAR) network out of small RNAs (sRNAs). NAR network motifs are core motifs of natural genetic networks, and are known for reducing network response time and steady state signal. Here we use cell-free transcription-translation (TX-TL) reactions and a computational model to design and prototype sRNA NAR constructs. Using parameter sensitivity analysis, we design a simple set of experiments that allow us to accurately predict NAR function in TX-TL. We transfer successful network designs into Escherichia coli and show that our sRNA transcriptional network reduces both network response time and steady-state gene expression. This work broadens our ability to construct increasingly sophisticated RNA genetic networks with predictable function.
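The qualitative behaviour the abstract reports, a faster response and a lower steady state under negative autoregulation, can be illustrated with a generic ODE model, sketched below with SciPy. This is a textbook NAR motif with arbitrary parameters, not the authors' sRNA/TX-TL model.

```python
# Hedged sketch: compare an unregulated gene with a negatively autoregulated one.
import numpy as np
from scipy.integrate import solve_ivp

alpha, delta, K, n_hill = 10.0, 0.5, 2.0, 2.0   # arbitrary production, decay, repression parameters

def unregulated(t, x):
    return [alpha - delta * x[0]]

def nar(t, x):
    # Hill-type self-repression of production
    return [alpha / (1.0 + (x[0] / K) ** n_hill) - delta * x[0]]

t_eval = np.linspace(0, 20, 200)
sol_u = solve_ivp(unregulated, (0, 20), [0.0], t_eval=t_eval)
sol_n = solve_ivp(nar, (0, 20), [0.0], t_eval=t_eval)

print("unregulated steady state ~", sol_u.y[0][-1])
print("NAR steady state ~", sol_n.y[0][-1])   # lower, and reached sooner relative to its own plateau
```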
Brownian motion with adaptive drift for remaining useful life prediction: Revisited
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by using Kalman filtering once a new measurement was available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption used in the state space modelling was that in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which caused a contradiction with the predicted drift coefficient evolution driven by an additive Gaussian process noise. In this paper, to alleviate such an underlying assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion that theoretically explains the main reasons why the constructed state space model can result in high remaining useful life prediction accuracies is provided. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
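A minimal sketch of the general idea follows, assuming a linear Brownian degradation model: a scalar Kalman filter re-estimates the drift from successive increments, and the remaining useful life is then approximated by the inverse-Gaussian mean first hitting time. The noise levels, threshold and simulated path are assumptions, and the paper's specific reformulation of the update phase is not reproduced.

```python
# Hedged sketch: adaptive drift estimation with a scalar Kalman filter, then a
# remaining-useful-life estimate from the mean first hitting time.
import numpy as np

rng = np.random.default_rng(2)
dt, sigma_b, q, r = 1.0, 0.4, 1e-3, 0.4 ** 2     # time step, diffusion, process/observation noise
true_b, threshold = 0.25, 30.0

# Simulated degradation path: x_k = x_{k-1} + b*dt + sigma_b*sqrt(dt)*noise
x = np.cumsum(true_b * dt + sigma_b * np.sqrt(dt) * rng.normal(size=80))

b_hat, P = 0.1, 1.0                              # prior mean/variance of the drift coefficient
for k in range(1, len(x)):
    P = P + q                                    # predict: drift evolves as a random walk
    z = x[k] - x[k - 1]                          # observed increment ~ N(b*dt, r)
    K = P * dt / (dt * P * dt + r)               # Kalman gain
    b_hat = b_hat + K * (z - b_hat * dt)         # update drift estimate
    P = (1.0 - K * dt) * P

rul = max(threshold - x[-1], 0.0) / b_hat        # inverse-Gaussian mean first-hitting-time estimate
print("estimated drift:", b_hat, "estimated RUL:", rul)
```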
Stewart, James A.; Kohnert, Aaron A.; Capolungo, Laurent; ...
2018-03-06
The complexity of radiation effects in a material’s microstructure makes developing predictive models a difficult task. In principle, a complete list of all possible reactions between defect species being considered can be used to elucidate damage evolution mechanisms and its associated impact on microstructure evolution. However, a central limitation is that many models use a limited and incomplete catalog of defect energetics and associated reactions. Even for a given model, estimating its input parameters remains a challenge, especially for complex material systems. Here, we present a computational analysis to identify the extent to which defect accumulation, energetics, and irradiation conditions can be determined via forward and reverse regression models constructed and trained from large data sets produced by cluster dynamics simulations. A global sensitivity analysis, via Sobol’ indices, concisely characterizes parameter sensitivity and demonstrates how this can be connected to variability in defect evolution. Based on this analysis and depending on the definition of what constitutes the input and output spaces, forward and reverse regression models are constructed and allow for the direct calculation of defect accumulation, defect energetics, and irradiation conditions. Here, this computational analysis, exercised on a simplified cluster dynamics model, demonstrates the ability to design predictive surrogate and reduced-order models, and provides guidelines for improving model predictions within the context of forward and reverse engineering of mathematical models for radiation effects in a material's microstructure.
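A minimal sketch of variance-based global sensitivity analysis of the kind the abstract describes follows, using the pick-freeze (Saltelli) estimator of first-order Sobol' indices on a toy function that stands in for a cluster dynamics output. Dedicated packages handle this more carefully; only the structure of the analysis is shown.

```python
# Hedged sketch: Monte Carlo first-order Sobol' indices via the pick-freeze scheme.
import numpy as np

def model(x):
    # Toy stand-in for a cluster-dynamics output (e.g. defect accumulation)
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(0)
N, d = 100_000, 3
A = rng.uniform(0, 1, size=(N, d))          # two independent input sample matrices
B = rng.uniform(0, 1, size=(N, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # replace only the i-th input
    fABi = model(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var_y  # Saltelli-style first-order estimator
    print(f"S_{i + 1} ~ {S_i:.3f}")
```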
Martínez Vega, Mabel V; Sharifzadeh, Sara; Wulfsohn, Dvoralai; Skov, Thomas; Clemmensen, Line Harder; Toldam-Andersen, Torben B
2013-12-01
Visible-near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) for soluble solids content (SSC) and acidity prediction, in the wavelength range 400-1100 nm. A total of 196 middle-early season and 219 late season apples (Malus domestica Borkh.) cvs 'Aroma' and 'Holsteiner Cox' samples were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) models were used to build prediction models. Furthermore, we compared three sub-sample arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CV(SSC) = 13%. The model showed consistently low errors and bias (PLS/EN: R²cal = 0.60/0.60; SEC = 0.88/0.88 °Brix; Biascal = 0.00/0.00; R²val = 0.33/0.44; SEP = 1.14/1.03; Biasval = 0.04/0.03). However, the prediction of acidity and SSC (CV = 5%) for the late cultivar 'Holsteiner Cox' produced inferior results compared with 'Aroma'. It was possible to construct local SSC and acidity calibration models for early season apple cultivars with CVs of SSC and acidity around 10%. The overall model performance of these data sets also depends on the proper selection of training and test sets. The 'smooth fractionator' protocol provided an objective method for obtaining training and test sets that capture the existing variability of the fruit samples for construction of visible-NIR prediction models. The implication is that by using such 'efficient' sampling methods for obtaining an initial sample of fruit that represents the variability of the population and for sub-sampling to form training and test sets, it should be possible to use relatively small sample sizes to develop spectral predictions of fruit quality. Using feature selection and elastic net appears to improve the SSC model performance in terms of R², RMSECV and RMSEP for 'Aroma' apples. © 2013 Society of Chemical Industry.
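A minimal sketch of an elastic net calibration of SSC on visible-NIR spectra with a held-out test set follows, assuming scikit-learn. The spectra and SSC values are synthetic placeholders, and a random split replaces the study's 'smooth fractionator' sub-sampling.

```python
# Hedged sketch: elastic net calibration of SSC against spectra, reporting SEP
# and the number of retained bands on a held-out test set.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_fruit, n_bands = 200, 350                       # roughly 400-1100 nm at 2 nm steps
spectra = rng.normal(size=(n_fruit, n_bands))     # placeholder spectra
ssc = 12 + spectra[:, 50] - 0.5 * spectra[:, 200] + rng.normal(0, 0.5, n_fruit)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, ssc, test_size=0.3, random_state=0)
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X_tr, y_tr)

sep = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))   # standard error of prediction
print("retained bands:", np.count_nonzero(model.coef_), "SEP:", round(sep, 2), "deg Brix")
```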
Prediction and visualization of redox conditions in the groundwater of Central Valley, California
NASA Astrophysics Data System (ADS)
Rosecrans, Celia Z.; Nolan, Bernard T.; Gronberg, JoAnn M.
2017-03-01
Regional-scale, three-dimensional continuous probability models were constructed for aspects of redox conditions in the groundwater system of the Central Valley, California. These models yield grids depicting the probability that groundwater in a particular location will have dissolved oxygen (DO) concentrations less than selected threshold values representing anoxic groundwater conditions, or will have dissolved manganese (Mn) concentrations greater than selected threshold values representing secondary drinking water-quality contaminant levels (SMCL) and health-based screening levels (HBSL). The probability models were constrained by the alluvial boundary of the Central Valley to a depth of approximately 300 m. Probability distribution grids can be extracted from the 3-D models at any desired depth, and are of interest to water-resource managers, water-quality researchers, and groundwater modelers concerned with the occurrence of natural and anthropogenic contaminants related to anoxic conditions. Models were constructed using a Boosted Regression Trees (BRT) machine learning technique that produces many trees as part of an additive model, handles many variables, automatically incorporates interactions, and is resistant to collinearity. Machine learning methods for statistical prediction are becoming increasingly popular in that they do not require assumptions associated with traditional hypothesis testing. Models were constructed using measured dissolved oxygen and manganese concentrations sampled from 2767 wells within the alluvial boundary of the Central Valley, and over 60 explanatory variables representing regional-scale soil properties, soil chemistry, land use, aquifer textures, and aquifer hydrologic properties. Models were trained on a USGS dataset of 932 wells, and evaluated on an independent hold-out dataset of 1835 wells from the California Division of Drinking Water. We used cross-validation to assess the predictive performance of models of varying complexity, as a basis for selecting final models. Trained models were applied to cross-validation testing data and a separate hold-out dataset to evaluate model predictive performance by emphasizing three model metrics of fit: Kappa, accuracy, and the area under the receiver operating characteristic curve (ROC). The final trained models were used for mapping predictions at discrete depths to a depth of 304.8 m. Trained DO and Mn models had accuracies of 86-100%, Kappa values of 0.69-0.99, and ROC values of 0.92-1.0. Model accuracies for cross-validation testing datasets were 82-95% and ROC values were 0.87-0.91, indicating good predictive performance. Kappas for the cross-validation testing dataset were 0.30-0.69, indicating fair to substantial agreement between testing observations and model predictions. Hold-out data were available for the manganese model only and indicated accuracies of 89-97%, ROC values of 0.73-0.75, and Kappa values of 0.06-0.30. The predictive performance of both the DO and Mn models was reasonable, considering all three of these fit metrics and the low percentages of low-DO and high-Mn events in the data.
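A minimal sketch of a boosted-tree model that outputs the probability of DO falling below a threshold follows, in the spirit of the BRT models described above. scikit-learn's gradient boosting stands in for the authors' BRT implementation, and the wells, predictors and labels are synthetic.

```python
# Hedged sketch: gradient boosting probability model for anoxic conditions,
# evaluated with ROC AUC and Kappa on a hold-out set.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_wells, n_predictors = 2767, 20                   # well count mirrors the abstract
X = rng.normal(size=(n_wells, n_predictors))       # placeholder soil, land-use, texture, depth predictors
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.5 * X[:, 2])))
y = rng.binomial(1, p)                             # 1 = DO below threshold (anoxic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
brt = GradientBoostingClassifier(n_estimators=500, learning_rate=0.05,
                                 max_depth=3, subsample=0.75, random_state=0)
brt.fit(X_tr, y_tr)

prob = brt.predict_proba(X_te)[:, 1]               # probability values at hold-out wells
print("ROC AUC:", round(roc_auc_score(y_te, prob), 3))
print("Kappa  :", round(cohen_kappa_score(y_te, (prob > 0.5).astype(int)), 3))
```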
A study on specialist or special disease clinics based on big data.
Fang, Zhuyuan; Fan, Xiaowei; Chen, Gong
2014-09-01
Correlation analysis and processing of massive medical information can be implemented through big data technology to find the relevance of different factors in the life cycle of a disease and to provide the basis for scientific research and clinical practice. This paper explores the concept of constructing a big medical data platform and introduces clinical model construction. Medical data can be collected and consolidated by distributed computing technology. Through analysis techniques such as artificial neural networks and grey models, a medical model can be built. Big data tools, such as Hadoop, can be used to construct early prediction and intervention models as well as clinical decision-making models for specialist and special disease clinics. This establishes a new model for common clinical research for specialist and special disease clinics.
NASA Astrophysics Data System (ADS)
Santosa, H.; Hobara, Y.
2017-01-01
The electric field amplitude of the very low frequency (VLF) transmitter in Hawaii (NPM) has been continuously recorded at Chofu (CHF), Tokyo, Japan. The VLF amplitude variability indicates lower ionospheric perturbation in the D region (60-90 km altitude range) around the NPM-CHF propagation path. We carried out the prediction of daily nighttime mean VLF amplitude by using a Nonlinear Autoregressive with Exogenous Input Neural Network (NARX NN). The NARX NN model, which was built based on the daily input variables of various physical parameters such as stratospheric temperature, total column ozone, cosmic rays, and the Dst and Kp indices, possesses good accuracy during the model building. The fitted model was constructed within the training period from 1 January 2011 to 4 February 2013 by using three algorithms, namely, Bayesian Neural Network (BRANN), Levenberg Marquardt Neural Network (LMANN), and Scaled Conjugate Gradient (SCG). The LMANN has the largest Pearson correlation coefficient (r) of 0.94 and smallest root-mean-square error (RMSE) of 1.19 dB. The models constructed by using LMANN were applied to predict the VLF amplitude from 5 February 2013 to 31 December 2013. As a result, the one-step (1 day) ahead predicted nighttime VLF amplitude has an r of 0.93 and RMSE of 2.25 dB. We conclude that the model built according to the proposed methodology provides good predictions of the electric field amplitude of VLF waves for the NPM-CHF (midlatitude) propagation path.
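A minimal sketch of a NARX-style one-step-ahead predictor follows: the next day's amplitude is regressed on lagged amplitudes plus exogenous drivers using a small neural network. A scikit-learn MLP stands in for the BRANN/LMANN/SCG networks of the study, and all series are synthetic.

```python
# Hedged sketch: one-step-ahead prediction from lagged amplitude and exogenous inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T, lag = 800, 3
exog = rng.normal(size=(T, 5))                       # 5 daily exogenous drivers (placeholders)
amp = np.zeros(T)
for t in range(1, T):                                # synthetic amplitude with memory
    amp[t] = 0.7 * amp[t - 1] + exog[t] @ np.array([0.5, -0.3, 0.2, 0.1, -0.2]) + rng.normal(0, 0.5)

# Feature matrix: [amp_{t-1}, ..., amp_{t-lag}, exog_t] for each target amp_t
X = np.hstack([np.column_stack([amp[lag - k - 1:T - k - 1] for k in range(lag)]),
               exog[lag:T]])
y = amp[lag:T]

split = int(0.8 * len(y))
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
nn.fit(X[:split], y[:split])
pred = nn.predict(X[split:])
print("r =", np.corrcoef(pred, y[split:])[0, 1],
      "RMSE =", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```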
NASA Astrophysics Data System (ADS)
Wernet, A. K.; Beighley, R. E.
2006-12-01
Soil erosion is a powerful process that continuously alters the Earth's landscape. Human activities, such as construction and agricultural practices, and natural events, such as forest fires and landslides, disturb the landscape and intensify erosion processes, leading to sudden increases in runoff sediment concentrations and degraded stream water quality. Understanding soil erosion and sediment transport processes is of great importance to researchers and practicing engineers, who routinely use models to predict soil erosion and sediment movement for varied land use and climate change scenarios. However, existing erosion models are limited in their applicability to construction sites, which have highly variable soil conditions (density, moisture, surface roughness, and best management practices) that change often in both space and time. The goal of this research is to improve the understanding, predictive capabilities and integration of treatment methodologies for controlling soil erosion and sediment export from construction sites. This research combines modeling with field monitoring and laboratory experiments to quantify: (a) spatial and temporal distribution of soil conditions on construction sites, (b) soil erosion due to event rainfall, and (c) potential offsite discharge of sediment with and without treatment practices. Field sites in southern California were selected to monitor the effects of common construction activities (e.g., cut/fill, grading, foundations, roads) on soil conditions and sediment discharge. Laboratory experiments were performed in the Soil Erosion Research Laboratory (SERL), part of the Civil and Environmental Engineering department at San Diego State University, to quantify the impact of individual factors leading to sediment export. SERL experiments utilize a 3-m by 10-m tilting soil bed with soil depths up to 1 m, slopes ranging from 0 to 50 percent, and rainfall rates up to 150 mm/hr (6 in/hr). Preliminary modeling, field and laboratory results are presented.
Liao, Quan; Yao, Jianhua; Yuan, Shengang
2007-05-01
Prediction of toxicity is important because experimental measurement of toxicity is typically time-consuming and expensive. In this paper, the Recursive Partitioning (RP) method was used to select descriptors. RP and Support Vector Machines (SVM) were then used to construct structure-toxicity relationship models (an RP model and an SVM model, respectively). The performances of the two models are different. The prediction accuracies of the RP model are 80.2% for mutagenic compounds in MDL's toxicity database, 83.4% for compounds in CMC, and 84.9% for agrochemicals in an in-house database, respectively. Those of the SVM model are 81.4%, 87.0% and 87.3%, respectively.
Inter-fraction variations in respiratory motion models
NASA Astrophysics Data System (ADS)
McClelland, J. R.; Hughes, S.; Modat, M.; Qureshi, A.; Ahmad, S.; Landau, D. B.; Ourselin, S.; Hawkes, D. J.
2011-01-01
Respiratory motion can vary dramatically between the planning stage and the different fractions of radiotherapy treatment. Motion predictions used when constructing the radiotherapy plan may be unsuitable for later fractions of treatment. This paper presents a methodology for constructing patient-specific respiratory motion models and uses these models to evaluate and analyse the inter-fraction variations in the respiratory motion. The internal respiratory motion is determined from the deformable registration of Cine CT data and related to a respiratory surrogate signal derived from 3D skin surface data. Three different models for relating the internal motion to the surrogate signal have been investigated in this work. Data were acquired from six lung cancer patients. Two full datasets were acquired for each patient, one before the course of radiotherapy treatment and one at the end (approximately 6 weeks later). Separate models were built for each dataset. All models could accurately predict the respiratory motion in the same dataset, but had large errors when predicting the motion in the other dataset. Analysis of the inter-fraction variations revealed that most variations were spatially varying base-line shifts, but changes to the anatomy and the motion trajectories were also observed.
Tavousi, Mahmoud; Montazeri, Ali; Hidarnia, Alireza; Hajizadeh, Ebrahim; Taremian, Farhad; Haerimehrizi, Aliasghar
2015-08-01
The theory of reasoned action (TRA) is one of the most common models in predicting health-related behaviors and is used more often in health education studies. This study aimed to add two control constructs (perceived behavioral control - PBC and self-efficacy - SE) to the TRA and compare them using the structural equation modeling (SEM) for substance use avoidance among Iranian male adolescents in order to find out which model was a better fit in predicting the intention. This was a cross-sectional study carried out in Tehran, Iran. Data were collected from a random sample of high school male students (15-19 years of age) using a questionnaire containing items related to the TRA plus items reflecting two additional constructs (SE and PBC). In all, 433 students completed the questionnaires. The results obtained from SEM indicated a better fit to the data for the TRA with SE compared to the TPB (TRA with PBC) and TRA (χ2/df=2.55, RMSEA=0.072, CFI=0.96, NFI=0.94, NNFI=0.95, SRMR=0.058). Comparing SE and PBC, the results showed that self-efficacy was a better control construct in improving the TRA and predicting substance use avoidance intention (41%). The TRA with SE had a better model fit than TPB and the original version of the TRA.
A voxel-based finite element model for the prediction of bladder deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.
2012-01-15
Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct an FE model with a high-quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in a previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: An FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean Dice similarity coefficient to manual contours and <0.02 cm difference in mean standard deviation of residual errors). The average equation solving time (without manual intervention) for the first two types of hexahedral meshes increased to 2.3 h and 2.6 h compared to the 1.1 h needed for the tetrahedral mesh; however, the low-resolution nonuniform hexahedral mesh dramatically decreased the equation solving time to 3 min without reducing accuracy. Conclusions: Voxel-based mesh generation allows fast, automatic, and robust creation of finite element bladder models directly from binary segmentation images without user intervention. Even the low-resolution voxel-based hexahedral mesh yields comparable accuracy in bladder shape prediction and is more than 20 times faster than the tetrahedral mesh. This approach makes it more feasible and accessible to apply the FE method to model bladder deformation in adaptive radiotherapy.
Glassman, Tavis; Braun, Robert E; Dodd, Virginia; Miller, Jeffrey M; Miller, E Maureen
2010-04-01
This study assessed the extent to which the Theory of Planned Behavior (TPB) correctly predicted college students' motivation to consume alcohol on game day based on alcohol consumption rates. Three cohorts of 1,000 participants each (N = 3,000) were randomly selected and invited to complete an anonymous web-based survey the Monday following one of three designated college home football games. Path analyses were conducted to determine which of the TPB constructs were most effective in predicting Behavioral Intention and alcohol consumption among social, high-risk, and extreme drinkers. Social drinkers, high-risk, and those drinkers who engage in Extreme Ritualistic Alcohol Consumption (ERAC) were defined as males who consumed 1-4, 5-9, or 10 or more drinks on game day (1-3, 4-8, or nine or more drinks for females), respectively. Attitude Towards the Behavior and Subjective Norm constructs predicted participants' intentions to consume alcohol and corresponding behavior among all three classifications of drinkers, whereas the Perceived Behavioral Control (PBC) construct inconsistently predicted intention and alcohol consumption. Based on Behavioral Intention, the proportion of variance the TPB model explained decreased as participants' alcohol consumption increased. It appears that the TPB constructs Attitude Toward the Behavior and Subjective Norm can effectively be utilized when designing universal prevention interventions targeting game day alcohol consumption among college students. However, the applicability of the PBC construct remains in question. While select constructs in the TPB appear to have predictive ability, the usefulness of the complete theoretical framework is limited when trying to predict high-risk drinking and ERAC. These findings suggest that other behavioral theories should be considered when addressing the needs of high-risk and extreme drinkers.
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent deterioration severities of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by the hard division approach. Considering the influence of both historical and real time data, a dynamic prediction method is introduced into the Markov model by a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment is designed based on a Bently-RK4 rotor testbed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
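A minimal sketch of the state-based remaining-life idea follows: a degradation index is divided into states, a Markov transition matrix is estimated from the state sequence, and the expected number of steps to the worst state is computed. K-means replaces the paper's Fuzzy C-Means division, and the dynamic weighting and multi-scale treatment are omitted; the degradation index is synthetic.

```python
# Hedged sketch: state division of a degradation index and expected steps to
# the final (worst) state from the estimated transition matrix.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
index = np.cumsum(np.abs(rng.normal(0.05, 0.05, 600)))        # synthetic monotone degradation index

n_states = 5
km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(index.reshape(-1, 1))
order = np.argsort(km.cluster_centers_.ravel())               # relabel states by increasing severity
relabel = np.empty(n_states, dtype=int)
relabel[order] = np.arange(n_states)
states = relabel[km.labels_]

P = np.zeros((n_states, n_states))                            # empirical transition counts
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P = P / P.sum(axis=1, keepdims=True)

# Expected number of steps to first reach the worst state from each earlier state
Q = P[:-1, :-1]
steps = np.linalg.solve(np.eye(n_states - 1) - Q, np.ones(n_states - 1))
print("expected remaining steps from each state:", steps.round(1))
```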
Learning Instance-Specific Predictive Models
Visweswaran, Shyam; Cooper, Gregory F.
2013-01-01
This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including naive Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average on all performance measures than all the comparison algorithms. PMID:25045325
Predictions of Cockpit Simulator Experimental Outcome Using System Models
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1984-01-01
This study involved predicting the outcome of a cockpit simulator experiment where pilots used cockpit displays of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. The experiments were run on the NASA Ames Research Center multicab cockpit simulator facility. Prior to the experiments, a mathematical model of the pilot/aircraft/CDTI flight system was developed which included relative in-trail and vertical dynamics between aircraft in the approach string. This model was used to construct a digital simulation of the string dynamics including response to initial position errors. The model was then used to predict the outcome of the in-trail following cockpit simulator experiments. Outcome included performance and sensitivity to different separation criteria. The experimental results were then used to evaluate the model and its prediction accuracy. Lessons learned in this modeling and prediction study are noted.
NASA Technical Reports Server (NTRS)
Lydon, Thomas J.; Fox, Peter A.; Sofia, Sabatino
1993-01-01
We have constructed a series of models of Alpha Centauri A and Alpha Centauri B for the purposes of testing the effects of convection modeling both by means of the mixing-length theory (MLT), and by means of parameterization of energy fluxes based upon numerical simulations of turbulent compressible convection. We demonstrate that while MLT, through its adjustable parameter alpha, can be used to match any given values of luminosities and radii, our treatment of convection, which lacks any adjustable parameters, makes specific predictions of stellar radii. Since the predicted radii of the Alpha Centauri system fall within the errors of the observed radii, our treatment of convection is applicable to other stars in the H-R diagram in addition to the sun. A second set of models is constructed using MLT, adjusting alpha to yield not the 'measured' radii but, instead, the radii predictions of our revised treatment of convection. We conclude by assessing the appropriateness of using a single value of alpha to model a wide variety of stars.
Dhana, Klodian; Ikram, M Arfan; Hofman, Albert; Franco, Oscar H; Kavousi, Maryam
2015-03-01
Body mass index (BMI) has been used to simplify cardiovascular risk prediction models by substituting total cholesterol and high-density lipoprotein cholesterol. In the elderly, the ability of BMI as a predictor of cardiovascular disease (CVD) declines. We aimed to find the most predictive anthropometric measure for CVD risk to construct a non-laboratory-based model and to compare it with the model including laboratory measurements. The study included 2675 women and 1902 men aged 55-79 years from the prospective population-based Rotterdam Study. We used Cox proportional hazard regression analysis to evaluate the association of BMI, waist circumference, waist-to-hip ratio and a body shape index (ABSI) with CVD, including coronary heart disease and stroke. The performance of the laboratory-based and non-laboratory-based models was evaluated by studying the discrimination, calibration, correlation and risk agreement. Among men, ABSI was the most informative measure associated with CVD, therefore ABSI was used to construct the non-laboratory-based model. Discrimination of the non-laboratory-based model was not different from that of the laboratory-based model (c-statistic: 0.680 vs 0.683, p=0.71); both models were well calibrated (15.3% observed CVD risk vs 16.9% and 17.0% predicted CVD risks by the non-laboratory-based and laboratory-based models, respectively) and Spearman rank correlation and the agreement between non-laboratory-based and laboratory-based models were 0.89 and 91.7%, respectively. Among women, none of the anthropometric measures were independently associated with CVD. Among the middle-aged and elderly, where the ability of BMI to predict CVD declines, the non-laboratory-based model, based on ABSI, could predict CVD risk as accurately as the laboratory-based model among men. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
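For reference, a body shape index can be computed as sketched below, using the common literature definition ABSI = WC / (BMI^(2/3) * height^(1/2)) with waist circumference and height in metres. The abstract does not state the formula, so this definition is an assumption, and the example values are arbitrary.

```python
# Hedged sketch: A Body Shape Index (ABSI) from waist circumference, weight and height.
def absi(waist_m: float, weight_kg: float, height_m: float) -> float:
    bmi = weight_kg / height_m ** 2
    return waist_m / (bmi ** (2.0 / 3.0) * height_m ** 0.5)

print(round(absi(waist_m=0.98, weight_kg=82.0, height_m=1.76), 4))   # typical values land near 0.08
```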
An Open Singularity-Free Cosmological Model with Inflation
NASA Astrophysics Data System (ADS)
Karaca, Koray; Bayin, Selçuk
In the light of recent observations which point to an open universe (Ω_0 < 1), we construct an open singularity-free cosmological model by reconsidering a model originally constructed for a closed universe. Our model starts from a nonsingular state called prematter, governed by an inflationary equation of state P = (γ_p − 1)ρ, where γ_p (≈ 10^-3) is a small positive parameter representing the initial vacuum dominance of the universe. Unlike in the closed models, the universe cannot be initially static; hence, it starts with an initial expansion rate represented by the initial value of the Hubble constant H(0). Therefore, our model is a two-parameter universe model (γ_p, H(0)). Comparing the predictions of this model for the present properties of the universe with the recent observational results, we argue that the model constructed in this work could be used as a realistic universe model.
Howard, Matt C
2018-01-01
The current article performs the first focused investigation into the construct of perceived self-esteem instability (P-SEI). Four studies investigate the construct's measurement, nomological net, and theoretical dynamics. Study 1 confirms the factor structure of a P-SEI Measure, supporting that P-SEI can be adequately measured. Study 2 identifies an initial nomological net surrounding P-SEI, showing that the construct is strongly related to stable aspects of the self (i.e., neuroticism and core self-evaluations). In Studies 3 and 4, the Conservation of Resources Theory is applied to develop and test five hypotheses. These studies show that P-SEI is predicted by self-esteem level and stressors, and the relationship of certain stressors is moderated by self-esteem contingencies. P-SEI also predicts stress, depression, anxiety, and certain defensive postures. From these studies and the integration of Conservation of Resources Theory, we suggest that P-SEI emerges through an interaction between environmental influences and personal resources, and we provide a theoretical model to better understand the construct of P-SEI. We suggest that this theory-driven model can prompt the initial field of study on P-SEI.
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.
2017-12-01
The efficiency of many hydrogeological applications such as reactive-transport and contaminant remediation vastly depends on the macroscopic mixing occurring in the aquifer. In the case of remediation activities, it is fundamental to enhance and control mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction, heterogeneity, and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for understanding the mixing process is not well studied. This is partially because to understand and quantify mixing, one needs to perform multiple runs of high-fidelity numerical simulations for various subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors. As a result, it may not be feasible to use them to study the importance and impact of model inputs on mixing. Hence, there is a pressing need to develop computationally efficient models to accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning. These approaches can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables. However, the method in which ROMs are constructed is different. Here, we present a physics-informed ML framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, F-test, and mutual information are used to evaluate the importance of model inputs. Second, SVMs are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield. The dependence of the scaling-law parameters on model inputs is evaluated using cluster analysis. We demonstrate application of the developed method for model analyses of reactive-transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable for analyses of alternative site remediation scenarios.
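A minimal sketch of the two-step strategy the abstract outlines follows: rank input importance (here with a random forest) and fit an SVM-based reduced-order model on the top-ranked inputs. The "high-fidelity" outputs are a synthetic function standing in for simulation results, and the physics-informed aspects of the framework are not reproduced.

```python
# Hedged sketch: feature ranking followed by an SVM surrogate (ROM) on the top inputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_runs, n_inputs = 500, 6                        # e.g. pumping rate, anisotropy, heterogeneity, ...
X = rng.uniform(0, 1, size=(n_runs, n_inputs))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=n_runs)   # stand-in for degree of mixing

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
ranked = np.argsort(rf.feature_importances_)[::-1]
top = ranked[:2]                                 # keep the most influential inputs
print("input ranking:", ranked, "importances:", rf.feature_importances_.round(3))

rom = SVR(kernel="rbf", C=10.0, epsilon=0.01)
r2 = cross_val_score(rom, X[:, top], y, cv=5, scoring="r2")
print("ROM cross-validated R^2: %.3f" % r2.mean())
```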
Brislin, Sarah J.; Drislane, Laura E.; Smith, Shannon Toney; Edens, John F.; Patrick, Christopher J.
2015-01-01
Psychopathy is conceptualized by the triarchic model as encompassing three distinct phenotypic constructs: boldness, meanness, and disinhibition. In the current study, the Multidimensional Personality Questionnaire (MPQ), a normal-range personality measure, was evaluated for representation of these three constructs. Consensus ratings were used to identify MPQ items most related to each triarchic (Tri) construct. Scale measures were developed from items indicative of each construct, and scores for these scales were evaluated for convergent and discriminant validity in community (N = 176) and incarcerated samples (N = 240). Across the two samples, MPQ-Tri scale scores demonstrated good internal consistencies and relationships with criterion measures of various types consistent with predictions based on the triarchic model. Findings are discussed in terms of their implications for further investigation of the triarchic model constructs in preexisting datasets that include the MPQ, in particular longitudinal and genetically informative datasets. PMID:25642934
Personalized Modeling for Prediction with Decision-Path Models
Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.
2015-01-01
Deriving predictive models in medicine typically relies on a population approach where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model called decision-path model that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Tree (CART) that is a population decision tree to predict seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision path models is a new approach for predictive modeling that can perform better than a population approach. PMID:26098570
Structure-activity relationships of cannabinoids: A joint CoMFA and pseudoreceptor modelling study
NASA Astrophysics Data System (ADS)
Schmetzer, Silke; Greenidge, Paulette; Kovar, Karl-Artur; Schulze-Alexandru, Meike; Folkers, Gerd
1997-05-01
A cannabinoid pseudoreceptor model for the CB1-receptor has been constructed for 31 cannabinoids using the molecular modelling software YAK. Additionally, two CoMFA studies were performed on these ligands, the first of which was conducted prior to the building of the pseudoreceptor. Its pharmacophore is identical with the initial superposition of ligands used for pseudoreceptor construction. In contrast, the ligand alignment for the second CoMFA study was taken directly from the final cannabinoid pseudoreceptor model. This altered alignment gives markedly improved cross-validated r² values compared with those obtained from the original alignment, with values of 0.79 and 0.63, respectively, for five components. However, the pharmacophore alignment has the better predictive ability. Both the CoMFA and pseudoreceptor methods predict the free energy of binding of test ligands well.
Ontology-based tools to expedite predictive model construction.
Haug, Peter; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Ferraro, Jeffrey
2014-01-01
Large amounts of medical data are collected electronically during the course of caring for patients using modern medical information systems. This data presents an opportunity to develop clinically useful tools through data mining and observational research studies. However, the work necessary to make sense of this data and to integrate it into a research initiative can require substantial effort from medical experts as well as from experts in medical terminology, data extraction, and data analysis. This slows the process of medical research. To reduce the effort required for the construction of computable, diagnostic predictive models, we have developed a system that hybridizes a medical ontology with a large clinical data warehouse. Here we describe components of this system designed to automate the development of preliminary diagnostic models and to provide visual clues that can assist the researcher in planning for further analysis of the data behind these models.
Event-based soil loss models for construction sites
NASA Astrophysics Data System (ADS)
Trenouth, William R.; Gharabaghi, Bahram
2015-05-01
The elevated rates of soil erosion stemming from land clearing and grading activities during urban development, can result in excessive amounts of eroded sediments entering waterways and causing harm to the biota living therein. However, construction site event-based soil loss simulations - required for reliable design of erosion and sediment controls - are one of the most uncertain types of hydrologic models. This study presents models with improved degree of accuracy to advance the design of erosion and sediment controls for construction sites. The new models are developed using multiple linear regression (MLR) on event-based permutations of the Universal Soil Loss Equation (USLE) and artificial neural networks (ANN). These models were developed using surface runoff monitoring datasets obtained from three sites - Greensborough, Cookstown, and Alcona - in Ontario and datasets mined from the literature for three additional sites - Treynor, Iowa, Coshocton, Ohio and Cordoba, Spain. The predictive MLR and ANN models can serve as both diagnostic and design tools for the effective sizing of erosion and sediment controls on active construction sites, and can be used for dynamic scenario forecasting when considering rapidly changing land use conditions during various phases of construction.
Diameter Growth Models for Inventory Applications
Ronald E. McRoberts; Christopher W. Woodall; Veronica C. Lessard; Margaret R. Holdaway
2002-01-01
Distance-independent, individual-tree, diameter growth models were constructed to update information for forest inventory plots measured in previous years. The models are nonlinear in the parameters and were calibrated using weighted nonlinear least squares techniques and forest inventory plot data. Analyses of residuals indicated that model predictions compare favorably to...
2014-01-01
Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped into organism-specific ones based on the organism's genome annotation and protein homology. However, this simple knowledge-based mapping method might produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model together in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known, individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct the metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614
Wang, Shuangquan; Sun, Huiyong; Liu, Hui; Li, Dan; Li, Youyong; Hou, Tingjun
2016-08-01
Blockade of human ether-à-go-go related gene (hERG) channel by compounds may lead to drug-induced QT prolongation, arrhythmia, and Torsades de Pointes (TdP), and therefore reliable prediction of hERG liability in the early stages of drug design is quite important to reduce the risk of cardiotoxicity-related attritions in the later development stages. In this study, pharmacophore modeling and machine learning approaches were combined to construct classification models to distinguish hERG active from inactive compounds based on a diverse data set. First, an optimal ensemble of pharmacophore hypotheses that had good capability to differentiate hERG active from inactive compounds was identified by the recursive partitioning (RP) approach. Then, the naive Bayesian classification (NBC) and support vector machine (SVM) approaches were employed to construct classification models by integrating multiple important pharmacophore hypotheses. The integrated classification models showed improved predictive capability over any single pharmacophore hypothesis, suggesting that the broad binding polyspecificity of hERG can only be well characterized by multiple pharmacophores. The best SVM model achieved the prediction accuracies of 84.7% for the training set and 82.1% for the external test set. Notably, the accuracies for the hERG blockers and nonblockers in the test set reached 83.6% and 78.2%, respectively. Analysis of significant pharmacophores helps to understand the multimechanisms of action of hERG blockers. We believe that the combination of pharmacophore modeling and SVM is a powerful strategy to develop reliable theoretical models for the prediction of potential hERG liability.
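A minimal sketch of the pharmacophore-plus-SVM idea follows, assuming scikit-learn: each compound is represented by a binary vector of pharmacophore-hypothesis matches and an RBF-kernel SVM separates blockers from nonblockers. The match matrix and labels are synthetic placeholders for the authors' data set.

```python
# Hedged sketch: SVM classification of hERG blockers from pharmacophore-match features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_cmpds, n_hypotheses = 800, 12
X = rng.integers(0, 2, size=(n_cmpds, n_hypotheses)).astype(float)        # 1 = compound maps to hypothesis
y = (X[:, :4].sum(axis=1) + rng.normal(0, 0.8, n_cmpds) > 2).astype(int)  # 1 = hERG blocker (synthetic rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
svm.fit(X_tr, y_tr)

print("test accuracy:", round(svm.score(X_te, y_te), 3))
```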
Virtual Beach (VB) is a decision support tool that constructs site-specific statistical models to predict fecal indicator bacteria (FIB) at recreational beaches. Although primarily designed for making decisions regarding beach closures or issuance of swimming advisories based on...
London, Michael; Larkum, Matthew E; Häusser, Michael
2008-11-01
Synaptic information efficacy (SIE) is a statistical measure to quantify the efficacy of a synapse. It measures how much information is gained, on the average, about the output spike train of a postsynaptic neuron if the input spike train is known. It is a particularly appropriate measure for assessing the input-output relationship of neurons receiving dynamic stimuli. Here, we compare the SIE of simulated synaptic inputs measured experimentally in layer 5 cortical pyramidal neurons in vitro with the SIE computed from a minimal model constructed to fit the recorded data. We show that even with a simple model that is far from perfect in predicting the precise timing of the output spikes of the real neuron, the SIE can still be accurately predicted. This arises from the ability of the model to predict output spikes influenced by the input more accurately than those driven by the background current. This indicates that in this context, some spikes may be more important than others. Lastly we demonstrate another aspect where using mutual information could be beneficial in evaluating the quality of a model, by measuring the mutual information between the model's output and the neuron's output. The SIE, thus, could be a useful tool for assessing the quality of models of single neurons in preserving input-output relationship, a property that becomes crucial when we start connecting these reduced models to construct complex realistic neuronal networks.
The Non-Axisymmetric Milky Way
NASA Technical Reports Server (NTRS)
Spergel, David N.
1996-01-01
The Dwek et al. model represents the current state-of-the-art model for the stellar structure of our Galaxy. The improvements we have made to this model take a number of forms: (1) the construction of a more detailed dust model so that we can extend our modeling to the galactic plane; (2) simultaneous fits to the bulge and the disk; (3) the construction of the first self-consistent model for a galactic bar; and (4) the development and application of algorithms for constructing nonparametric bar models. The improved Galaxy model has enabled a number of exciting science projects. In Zhao et al., we show that the number and duration of microlensing events seen by the OGLE and MACHO collaborations towards the bulge were consistent with the predictions of our bar model. In Malhotra et al., we constructed an infrared Tully-Fisher (TF) relation for the local group. We found the tightest TF relation ever seen in any band and in any group of galaxies. The tightness of the correlation places strong constraints on galaxy formation models and provides an independent check of the Cepheid distance scale.
Evolution of the Radial Abundance Gradient and Cold Gas along the Milky Way Disk
NASA Astrophysics Data System (ADS)
Chen, Q. S.; Chang, R. X.; Yin, J.
2014-03-01
We have constructed a phenomenological model of the chemical evolution of the Milky Way disk, and treated the molecular and atomic gas separately. Using this model, we explore the radial profiles of oxygen abundance, the surface density of cold gas, and their time evolutions. It is shown that the model predictions are very sensitive to the adopted infall time-scale. By comparing the model predictions with the observations, we find that the model adopting the star formation law based on H_2 can properly predict the observed radial distributions of cold gas and oxygen abundance gradient along the disk. We also compare the model results with the predictions of the model which adopts the instantaneous recycling approximation (IRA), and find that the IRA assumption has little influence on the model results, especially in the low-density gas region.
NASA Astrophysics Data System (ADS)
Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang
2018-06-01
To address the difficulty of predicting the quality of sintered ores, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation using the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanism and conservation of matter in the sintering process. Because the process is simplified in the mechanism models, they cannot describe the high nonlinearity of the process, so errors are inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.
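A minimal sketch of the hybrid idea described above, assuming a toy process: a simplified mechanism model leaves a systematic residual, and a small extreme learning machine (random hidden layer, least-squares output weights) learns to compensate it. The paper's time-weighting scheme is omitted, and all variable names, dimensions, and data are illustrative rather than the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine regressor: random hidden layer,
    output weights solved by (ridge-regularised) least squares."""
    def __init__(self, n_hidden=50, alpha=1e-3, seed=0):
        self.n_hidden, self.alpha = n_hidden, alpha
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.solve(H.T @ H + self.alpha * np.eye(self.n_hidden), H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy hybrid setup: a linearised "mechanism model" misses part of the true quality index,
# and the ELM learns to compensate the residual error from process variables.
n = 400
X = rng.normal(size=(n, 3))                                  # process variables (illustrative)
true_quality = 80 + 2 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * X[:, 2] ** 2
mechanism_pred = 80 + 2 * X[:, 0]                            # simplified mechanism prediction
residual = true_quality - mechanism_pred

elm = ELM(n_hidden=50).fit(X[:300], residual[:300])
hybrid_pred = mechanism_pred[300:] + elm.predict(X[300:])
print("Mechanism-only RMSE:", round(float(np.sqrt(np.mean((mechanism_pred[300:] - true_quality[300:]) ** 2))), 3))
print("Hybrid RMSE:", round(float(np.sqrt(np.mean((hybrid_pred - true_quality[300:]) ** 2))), 3))
```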
The role of emotion regulation in predicting personality dimensions.
Borges, Lauren M; Naugle, Amy E
2017-11-01
Dimensional models of personality have been widely acknowledged in the field as alternatives to a trait-based system of nomenclature. While the importance of dimensional models has been established, less is known about the constructs underlying these personality dimensions. Emotion regulation is one such potential construct. The goal of the current study was to examine the relationship between personality dimensions and emotion regulation. More specifically, the predictive capacity of emotion regulation in accounting for personality dimensions and symptoms on the Schedule for Nonadaptive and Adaptive Personality-2, above and beyond a measure of general distress, was evaluated. Emotion regulation was found to be predictive of most personality dimensions and symptoms of most personality disorders. Consistent with hypotheses, emotion regulation variables associated with undercontrol of emotions were most predictive of traits associated with Cluster B personality disorders, whereas Cluster A and C traits were most associated with emotion regulation related to overcontrol of emotions. These findings provide preliminary evidence that some personality dimensions not previously assessed in relation to emotion regulation are strongly predicted by emotion regulation variables. Thus, the present study facilitates an initial step in understanding the relationship between personality dimensions and a multidimensional model of emotion regulation. Copyright © 2017 John Wiley & Sons, Ltd.
Category-Specific Neural Oscillations Predict Recall Organization During Memory Search
Morton, Neal W.; Kahana, Michael J.; Rosenberg, Emily A.; Baltuch, Gordon H.; Litt, Brian; Sharan, Ashwini D.; Sperling, Michael R.; Polyn, Sean M.
2013-01-01
Retrieved-context models of human memory propose that as material is studied, retrieval cues are constructed that allow one to target particular aspects of past experience. We examined the neural predictions of these models by using electrocorticographic/depth recordings and scalp electroencephalography (EEG) to characterize category-specific oscillatory activity, while participants studied and recalled items from distinct, neurally discriminable categories. During study, these category-specific patterns predict whether a studied item will be recalled. In the scalp EEG experiment, category-specific activity during study also predicts whether a given item will be recalled adjacent to other same-category items, consistent with the proposal that a category-specific retrieval cue is used to guide memory search. Retrieved-context models suggest that integrative neural circuitry is involved in the construction and maintenance of the retrieval cue. Consistent with this hypothesis, we observe category-specific patterns that rise in strength as multiple same-category items are studied sequentially, and find that individual differences in this category-specific neural integration during study predict the degree to which a participant will use category information to organize memory search. Finally, we track the deployment of this retrieval cue during memory search: Category-specific patterns are stronger when participants organize their responses according to the category of the studied material. PMID:22875859
Brackman, Emily H; Morris, Blair W; Andover, Margaret S
2016-01-01
The interpersonal psychological theory of suicide (IPTS) provides a useful framework for considering the relationship between non-suicidal self-injury (NSSI) and suicide. Researchers propose that NSSI increases acquired capability for suicide. We predicted that both NSSI frequency and the IPTS acquired capability construct (decreased fear of death and increased pain tolerance) would separately interact with suicidal ideation to predict suicide attempts. Undergraduate students (N = 113) completed self-report questionnaires, and a subsample (n = 66) also completed a pain sensitivity task. NSSI frequency significantly moderated the association between suicidal ideation and suicide attempts. However, in a separate model, acquired capability did not moderate this relationship. Our understanding of the relationship between suicidal ideation and suicidal behavior can be enhanced by factors associated with NSSI that are distinct from the acquired capability construct.
Temporality of couple conflict and relationship perceptions.
Johnson, Matthew D; Horne, Rebecca M; Hardy, Nathan R; Anderson, Jared R
2018-05-03
Using 5 waves of longitudinal survey data gathered from 3,405 couples, the present study investigates the temporal associations between self-reported couple conflict (frequency and each partner's constructive and withdrawing behaviors) and relationship perceptions (satisfaction and perceived instability). Autoregressive cross-lagged model results revealed couple conflict consistently predicted future relationship perceptions: More frequent conflict and withdrawing behaviors and fewer constructive behaviors foretold reduced satisfaction, and conflict frequency and withdrawal heightened perceived instability. Relationship perceptions also shaped future conflict, but in surprising ways: Perceptions of instability were linked with less frequent conflict, and male partner instability predicted fewer withdrawing behaviors for female partners. Higher satisfaction from male partners also predicted more frequent and less constructive conflict behavior in the future. These findings illustrate complex bidirectional linkages between relationship perceptions and couple conflict behaviors in the development of couple relations. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A Feature Fusion Based Forecasting Model for Financial Time Series
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model outperforms two other similar models in prediction. PMID:24971455
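A minimal sketch of the described ICA, CCA, and SVM pipeline on synthetic price data; scikit-learn's FastICA, CCA, and SVR stand in for the paper's implementation, and the lag count, component numbers, and SVM settings are assumptions, not the authors' choices.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two feature groups described in the abstract:
# lagged closing prices and a block of technical indicators.
n_days = 500
close = np.cumsum(rng.normal(0, 1, n_days)) + 100.0
lags = np.column_stack([close[i:n_days - 10 + i] for i in range(10)])   # 10 lagged prices
tech = rng.normal(size=(n_days - 10, 39))                               # 39 technical variables (placeholder)
target = close[10:]                                                     # next-day closing price

# Step 1: denoise each feature block with independent component analysis.
ica_price = FastICA(n_components=5, random_state=0).fit_transform(lags)
ica_tech = FastICA(n_components=5, random_state=0).fit_transform(tech)

# Step 2: fuse the two blocks with canonical correlation analysis.
u, v = CCA(n_components=3).fit_transform(ica_price, ica_tech)
fused = np.hstack([u, v])

# Step 3: regress the next-day close on the fused features with a support vector machine.
split = int(0.8 * len(target))
svr = SVR(kernel="rbf", C=10.0).fit(fused[:split], target[:split])
print("Test RMSE:", np.sqrt(np.mean((svr.predict(fused[split:]) - target[split:]) ** 2)))
```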
Kane, Michael J.; Meier, Matt E.; Smeekens, Bridget A.; Gross, Georgina M.; Chun, Charlotte A.; Silvia, Paul J.; Kwapil, Thomas R.
2016-01-01
A large correlational study took a latent-variable approach to the generality of executive control by testing the individual-differences structure of executive-attention capabilities and assessing their prediction of schizotypy, a multidimensional construct (with negative, positive, disorganized, and paranoid factors) conveying risk for schizophrenia. Although schizophrenia is convincingly linked to executive deficits, the schizotypy literature is equivocal. Subjects completed tasks of working memory capacity (WMC), attention restraint (inhibiting prepotent responses), and attention constraint (focusing visual attention amid distractors), the latter two in an effort to fractionate the “inhibition” construct. We also assessed mind-wandering propensity (via in-task thought probes) and coefficient of variation in response times (RT CoV) from several tasks as more novel indices of executive attention. WMC, attention restraint, attention constraint, mind wandering, and RT CoV were correlated but separable constructs, indicating some distinctions among “attention control” abilities; WMC correlated more strongly with attentional restraint than constraint, and mind wandering correlated more strongly with attentional restraint, attentional constraint, and RT CoV than with WMC. Across structural models, no executive construct predicted negative schizotypy and only mind wandering and RT CoV consistently (but modestly) predicted positive, disorganized, and paranoid schizotypy; stalwart executive constructs in the schizophrenia literature — WMC and attention restraint — showed little to no predictive power, beyond restraint’s prediction of paranoia. Either executive deficits are consequences rather than risk factors for schizophrenia, or executive failures barely precede or precipitate diagnosable schizophrenia symptoms. PMID:27454042
A Simulation Model Articulation of the REA Ontology
NASA Astrophysics Data System (ADS)
Laurier, Wim; Poels, Geert
This paper demonstrates how the REA enterprise ontology can be used to construct simulation models for business processes, value chains and collaboration spaces in supply chains. These models support various high-level and operational management simulation applications, e.g. the analysis of enterprise sustainability and day-to-day planning. First, the basic constructs of the REA ontology and the ExSpect modelling language for simulation are introduced. Second, collaboration space, value chain and business process models and their conceptual dependencies are shown, using the ExSpect language. Third, an exhibit demonstrates the use of value chain models in predicting the financial performance of an enterprise.
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to a Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the types of substrates, enzymes, and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
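A hedged sketch of fitting a Weibull-type saccharification curve and reading off the characteristic time λ and shape n; the curve form y(t) = y_max(1 − exp(−(t/λ)^n)), the y_max parameter, and the data points are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull-type saccharification curve parameterised by the characteristic time lam
# and shape n (as in the abstract); y_max (final yield) is an extra assumed parameter.
def weibull_yield(t, y_max, lam, n):
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

# Synthetic hydrolysis data (hours vs. glucose yield, g/L) for illustration only.
t = np.array([2, 4, 8, 12, 24, 48, 72, 96], dtype=float)
y = np.array([3.1, 5.8, 9.9, 12.4, 16.8, 19.6, 20.5, 20.9])

popt, pcov = curve_fit(weibull_yield, t, y, p0=[20.0, 20.0, 1.0])
y_max, lam, n = popt
print(f"y_max = {y_max:.2f} g/L, lambda = {lam:.1f} h, n = {n:.2f}")
# A smaller lambda indicates faster overall conversion, the performance reading suggested in the abstract.
```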
Bozcuk, H; Yıldız, M; Artaç, M; Kocer, M; Kaya, Ç; Ulukal, E; Ay, S; Kılıç, M P; Şimşek, E H; Kılıçkaya, P; Uçar, S; Coskun, H S; Savas, B
2015-06-01
There is clinical need to predict risk of febrile neutropenia before a specific cycle of chemotherapy in cancer patients. Data on 3882 chemotherapy cycles in 1089 consecutive patients with lung, breast, and colon cancer from four teaching hospitals were used to construct a predictive model for febrile neutropenia. A final nomogram derived from the multivariate predictive model was prospectively confirmed in a second cohort of 960 consecutive cases and 1444 cycles. The following factors were used to construct the nomogram: previous history of febrile neutropenia, pre-cycle lymphocyte count, type of cancer, cycle of current chemotherapy, and patient age. The predictive model had a concordance index of 0.95 (95 % confidence interval (CI) = 0.91-0.99) in the derivation cohort and 0.85 (95 % CI = 0.80-0.91) in the external validation cohort. A threshold of 15 % for the risk of febrile neutropenia in the derivation cohort was associated with a sensitivity of 0.76 and specificity of 0.98. These figures were 1.00 and 0.49 in the validation cohort if a risk threshold of 50 % was chosen. This nomogram is helpful in the prediction of febrile neutropenia after chemotherapy in patients with lung, breast, and colon cancer. Usage of this nomogram may help decrease the morbidity and mortality associated with febrile neutropenia and deserves further validation.
Toyabe, Shin-ichi
2014-01-01
Inpatient falls are the most common adverse events that occur in a hospital, and about 3 to 10% of falls result in serious injuries such as bone fractures and intracranial haemorrhages. We previously reported that bone fractures and intracranial haemorrhages were the two major fall-related injuries and that the risk assessment score for osteoporotic bone fracture was significantly associated not only with bone fractures after falls but also with intracranial haemorrhage after falls. Based on these results, we tried to establish a risk assessment tool for predicting fall-related severe injuries in a hospital. Possible risk factors related to fall-related serious injuries were extracted from data on inpatients admitted to a tertiary-care university hospital by using multivariate Cox's regression analysis and multiple logistic regression analysis. We found that fall risk score and fracture risk score were the two significant factors, and we constructed models to predict fall-related severe injuries incorporating these factors. When the prediction model was applied to another independent dataset, the constructed model could detect patients with fall-related severe injuries efficiently. The new assessment system could identify patients prone to severe injuries after falls in a reproducible fashion. PMID:25168984
Virtual Beach (VB) is a decision support tool that constructs site-specific statistical models to predict fecal indicator bacteria (FIB) at locations of exposure. Although primarily designed for making decisions regarding beach closures or issuance of swimming advisories based on...
Chemically-induced vascular toxicity during embryonic development may cause a wide range of adverse effects. To identify putative vascular disrupting chemicals (pVDCs), a predictive signature was constructed from U.S. EPA ToxCast high-throughput screening (HTS) assays that map to...
Ye, Jiang-Feng; Zhao, Yu-Xin; Ju, Jian; Wang, Wei
2017-10-01
To assess the value of the Bedside Index for Severity in Acute Pancreatitis (BISAP), the Modified Early Warning Score (MEWS), serum Ca2+ and red cell distribution width (RDW) for predicting the severity grade of acute pancreatitis (AP), and to develop and verify a more accurate scoring system to predict the severity of AP. In 302 patients with AP, we calculated BISAP and MEWS scores and conducted single-factor logistic regression analyses of the relationships of BISAP, RDW, MEWS, and serum Ca2+ with the severity of AP. The variables with statistical significance in the single-factor logistic regression were entered into a multi-factor logistic regression model; forward stepwise regression was used to screen variables and build a multi-factor prediction model. A receiver operating characteristic (ROC) curve was constructed, and the area under the ROC curve (AUC) was used to compare the multi- and single-factor prediction models in predicting the severity of AP. The internal validity of the model was verified through bootstrapping. Among the 302 patients with AP, 209 had mild acute pancreatitis (MAP) and 93 had severe acute pancreatitis (SAP). Single-factor logistic regression analysis showed that BISAP, MEWS and serum Ca2+ are predictors of the severity of AP (P<0.001), whereas RDW is not (P>0.05). Multi-factor logistic regression analysis showed that BISAP and serum Ca2+ are independent predictors of AP severity (P<0.001), while MEWS is not (P>0.05); BISAP is negatively related to serum Ca2+ (r=-0.330, P<0.001). The constructed model is: ln[p/(1-p)] = 7.306 + 1.151 × BISAP - 4.516 × serum Ca2+. The predictive ability for SAP follows the order combined BISAP and serum Ca2+ model > serum Ca2+ > BISAP. The difference in predictive ability between BISAP and serum Ca2+ alone was not statistically significant (P>0.05), whereas the newly built prediction model was significantly better than either BISAP or serum Ca2+ individually (P<0.01). Bootstrap verification of the internal validity of the models was favorable. BISAP and serum Ca2+ have high predictive value for the severity of AP, but the model combining BISAP and serum Ca2+ is markedly superior to either alone. Furthermore, this model is simple, practical and appropriate for clinical use. Copyright © 2016. Published by Elsevier Masson SAS.
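A small sketch applying the reported equation, under the assumption that its left-hand side is the log-odds of severe acute pancreatitis and that serum Ca2+ is expressed in mmol/L; the example patients are invented.

```python
import numpy as np

def sap_probability(bisap_score, serum_ca_mmol_l):
    """Predicted probability of severe acute pancreatitis from the combined model,
    assuming the reported equation gives the log-odds (logit) of SAP."""
    logit = 7.306 + 1.151 * bisap_score - 4.516 * serum_ca_mmol_l
    return 1.0 / (1.0 + np.exp(-logit))

# Illustrative patients (BISAP score, serum Ca2+ in mmol/L); values are made up.
for bisap, ca in [(0, 2.4), (2, 2.1), (4, 1.8)]:
    print(f"BISAP={bisap}, Ca={ca}: P(SAP) = {sap_probability(bisap, ca):.2f}")
```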
Making predictions of mangrove deforestation: a comparison of two methods in Kenya.
Rideout, Alasdair J R; Joshi, Neha P; Viergever, Karin M; Huxham, Mark; Briers, Robert A
2013-11-01
Deforestation of mangroves is of global concern given their importance for carbon storage, biogeochemical cycling and the provision of other ecosystem services, but the links between rates of loss and potential drivers or risk factors are rarely evaluated. Here, we identified key drivers of mangrove loss in Kenya and compared two different approaches to predicting risk. Risk factors tested included various possible predictors of anthropogenic deforestation, related to population, suitability for land use change and accessibility. Two approaches were taken to modelling risk: a quantitative statistical approach and a qualitative categorical ranking approach. A quantitative model linking rates of loss to risk factors was constructed based on generalized least squares regression and using mangrove loss data from 1992 to 2000. Population density, soil type and proximity to roads were the most important predictors. In order to validate this model, it was used to generate a map of losses of Kenyan mangroves predicted to have occurred between 2000 and 2010. The qualitative categorical model was constructed using data from the same selection of variables, with the coincidence of different risk factors in particular mangrove areas used in an additive manner to create a relative risk index which was then mapped. Quantitative predictions of loss were significantly correlated with the actual loss of mangroves between 2000 and 2010, and the categorical risk index values were also highly correlated with the quantitative predictions. Hence, in this case the relatively simple categorical modelling approach was of similar predictive value to the more complex quantitative model of mangrove deforestation. The advantages and disadvantages of each approach are discussed, and the implications for mangroves are outlined. © 2013 Blackwell Publishing Ltd.
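A hedged sketch contrasting the two approaches on synthetic patches: an ordinary least-squares fit stands in for the paper's generalized least squares model, and a three-point additive index mimics the categorical ranking; all drivers, coefficients, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic mangrove patches with three standardised risk factors standing in for
# the drivers named in the abstract: population density, soil suitability, road proximity.
n = 150
pop, soil, roads = rng.normal(size=(3, n))
loss_rate = 0.5 * pop + 0.3 * soil + 0.2 * roads + rng.normal(0, 0.3, n)  # toy 1992-2000 loss

# Quantitative approach: linear regression of loss on risk factors
# (ordinary least squares here, as a stand-in for generalized least squares).
X = np.column_stack([np.ones(n), pop, soil, roads])
coef, *_ = np.linalg.lstsq(X, loss_rate, rcond=None)
predicted_loss = X @ coef

# Qualitative approach: additive categorical index (1 point per factor above its median).
risk_index = sum((f > np.median(f)).astype(int) for f in (pop, soil, roads))

print("Correlation between quantitative prediction and categorical index:",
      round(np.corrcoef(predicted_loss, risk_index)[0, 1], 2))
```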
NASA Astrophysics Data System (ADS)
Ahluwalia, Arti
2017-02-01
About two decades ago, West and coworkers established a model which predicts that metabolic rate follows a three-quarter-power relationship with the mass of an organism, based on the premise that tissues are supplied nutrients through a fractal distribution network. Quarter-power scaling is widely considered a universal law of biology and it is generally accepted that were in-vitro cultures to obey allometric metabolic scaling, they would have more predictive potential and could, for instance, provide a viable substitute for animals in research. This paper outlines a theoretical and computational framework for establishing quarter-power scaling in three-dimensional spherical constructs in-vitro, starting where fractal distribution ends. Allometric scaling in non-vascular spherical tissue constructs was assessed using models of Michaelis-Menten oxygen consumption and diffusion. The models demonstrate that physiological scaling is maintained when about 5 to 60% of the construct is exposed to oxygen concentrations less than the Michaelis-Menten constant, with a significant concentration gradient in the sphere. The results have important implications for the design of downscaled in-vitro systems with physiological relevance.
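A rough illustration of the kind of oxygen-gradient calculation involved, using a zero-order simplification of Michaelis-Menten consumption in a sphere (the profile is clipped at zero where the approximation predicts depletion); the paper's full Michaelis-Menten diffusion models require a numerical treatment, and every parameter value here is an assumption.

```python
import numpy as np

# Assumed parameters (not the paper's) for oxygen diffusion-consumption
# in a non-vascularised spherical construct.
D = 2.0e-9      # oxygen diffusivity (m^2/s)
q = 1.5e-2      # zero-order volumetric consumption rate (mol m^-3 s^-1)
Km = 6.0e-3     # Michaelis-Menten constant (mol m^-3)
C0 = 0.2        # surface oxygen concentration (mol m^-3)
R = 5.0e-4      # construct radius (m)

# Steady-state zero-order profile C(r) = C0 - q/(6D) (R^2 - r^2),
# clipped at zero where this crude approximation predicts complete depletion.
r = np.linspace(0.0, R, 500)
C = np.maximum(C0 - q / (6.0 * D) * (R**2 - r**2), 0.0)

# Fraction of the construct volume where oxygen falls below Km,
# the regime the paper links to physiological (quarter-power) scaling.
below = C < Km
frac = np.trapz(4.0 * np.pi * r[below]**2, r[below]) / (4.0 / 3.0 * np.pi * R**3) if below.any() else 0.0
print(f"Centre concentration: {C[0]:.3f} mol/m^3; fraction of volume with C < Km: {frac:.2f}")
```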
A Short Guide to the Climatic Variables of the Last Glacial Maximum for Biogeographers.
Varela, Sara; Lima-Ribeiro, Matheus S; Terribile, Levi Carina
2015-01-01
Ecological niche models are widely used for mapping the distribution of species during the last glacial maximum (LGM). Although the selection of the variables and General Circulation Models (GCMs) used for constructing those maps determine the model predictions, we still lack a discussion about which variables and which GCM should be included in the analysis and why. Here, we analyzed the climatic predictions for the LGM of 9 different GCMs in order to help biogeographers to select their GCMs and climatic layers for mapping the species ranges in the LGM. We 1) map the discrepancies between the climatic predictions of the nine GCMs available for the LGM, 2) analyze the similarities and differences between the GCMs and group them to help researchers choose the appropriate GCMs for calibrating and projecting their ecological niche models (ENM) during the LGM, and 3) quantify the agreement of the predictions for each bioclimatic variable to help researchers avoid the environmental variables with a poor consensus between models. Our results indicate that, in absolute values, GCMs have a strong disagreement in their temperature predictions for temperate areas, while the uncertainties for the precipitation variables are in the tropics. In spite of the discrepancies between model predictions, temperature variables (BIO1-BIO11) are highly correlated between models. Precipitation variables (BIO12-BIO19) show no correlation between models, and specifically, BIO14 (precipitation of the driest month) and BIO15 (Precipitation Seasonality (Coefficient of Variation)) show the highest level of discrepancy between GCMs. Following our results, we strongly recommend the use of different GCMs for constructing or projecting ENMs, particularly when predicting the distribution of species that inhabit the tropics and the temperate areas of the Northern and Southern Hemispheres, because climatic predictions for those areas vary greatly among GCMs. We also recommend the exclusion of BIO14 and BIO15 from ENMs because those variables show a high level of discrepancy between GCMs. Thus, by excluding them, we decrease the level of uncertainty of our predictions. All the climatic layers produced for this paper are freely available in http://ecoclimate.org/.
A Short Guide to the Climatic Variables of the Last Glacial Maximum for Biogeographers
Varela, Sara; Lima-Ribeiro, Matheus S.; Terribile, Levi Carina
2015-01-01
Ecological niche models are widely used for mapping the distribution of species during the last glacial maximum (LGM). Although the selection of the variables and General Circulation Models (GCMs) used for constructing those maps determine the model predictions, we still lack a discussion about which variables and which GCM should be included in the analysis and why. Here, we analyzed the climatic predictions for the LGM of 9 different GCMs in order to help biogeographers to select their GCMs and climatic layers for mapping the species ranges in the LGM. We 1) map the discrepancies between the climatic predictions of the nine GCMs available for the LGM, 2) analyze the similarities and differences between the GCMs and group them to help researchers choose the appropriate GCMs for calibrating and projecting their ecological niche models (ENM) during the LGM, and 3) quantify the agreement of the predictions for each bioclimatic variable to help researchers avoid the environmental variables with a poor consensus between models. Our results indicate that, in absolute values, GCMs have a strong disagreement in their temperature predictions for temperate areas, while the uncertainties for the precipitation variables are in the tropics. In spite of the discrepancies between model predictions, temperature variables (BIO1-BIO11) are highly correlated between models. Precipitation variables (BIO12- BIO19) show no correlation between models, and specifically, BIO14 (precipitation of the driest month) and BIO15 (Precipitation Seasonality (Coefficient of Variation)) show the highest level of discrepancy between GCMs. Following our results, we strongly recommend the use of different GCMs for constructing or projecting ENMs, particularly when predicting the distribution of species that inhabit the tropics and the temperate areas of the Northern and Southern Hemispheres, because climatic predictions for those areas vary greatly among GCMs. We also recommend the exclusion of BIO14 and BIO15 from ENMs because those variables show a high level of discrepancy between GCMs. Thus, by excluding them, we decrease the level of uncertainty of our predictions. All the climatic layers produced for this paper are freely available in http://ecoclimate.org/. PMID:26068930
Predicting language outcomes for children learning AAC: Child and environmental factors
Brady, Nancy C.; Thiemann-Bourque, Kathy; Fleming, Kandace; Matthews, Kris
2014-01-01
Purpose: To investigate a model of language development for nonverbal preschool-age children learning to communicate with AAC. Method: Ninety-three preschool children with intellectual disabilities were assessed at Time 1, and 82 of these children were assessed one year later at Time 2. The outcome variable was the number of different words the children produced (with speech, sign or SGD). Children's intrinsic predictor for language was modeled as a latent variable consisting of cognitive development, comprehension, play, and nonverbal communication complexity. Adult input at school and home, and amount of AAC instruction were proposed mediators of vocabulary acquisition. Results: A confirmatory factor analysis revealed that measures converged as a coherent construct, and an SEM model indicated that the intrinsic child predictor construct predicted the number of different words children produced. The amount of input received at home, but not at school, was a significant mediator. Conclusions: Our hypothesized model accurately reflected a latent construct of Intrinsic Symbolic Factor (ISF). Children who evidenced higher initial levels of ISF and more adult input at home produced more words one year later. Findings support the need to assess multiple child variables, and suggest interventions directed to the indicators of ISF and input. PMID:23785187
Machine Learning for Flood Prediction in Google Earth Engine
NASA Astrophysics Data System (ADS)
Kuhn, C.; Tellman, B.; Max, S. A.; Schwarz, B.
2015-12-01
With the increasing availability of high-resolution satellite imagery, dynamic flood mapping in near real time is becoming a reachable goal for decision-makers. This talk describes a newly developed framework for predicting biophysical flood vulnerability using public data, cloud computing and machine learning. Our objective is to define an approach to flood inundation modeling using statistical learning methods deployed in a cloud-based computing platform. Traditionally, static flood extent maps grounded in physically based hydrologic models can require hours of human expertise to construct at significant financial cost. In addition, desktop modeling software and limited local server storage can impose restraints on the size and resolution of input datasets. Data-driven, cloud-based processing holds promise for predictive watershed modeling at a wide range of spatio-temporal scales. However, these benefits come with constraints. In particular, parallel computing limits a modeler's ability to simulate the flow of water across a landscape, rendering traditional routing algorithms unusable in this platform. Our project pushes these limits by testing the performance of two machine learning algorithms, Support Vector Machine (SVM) and Random Forests, at predicting flood extent. Constructed in Google Earth Engine, the model mines a suite of publicly available satellite imagery layers to use as algorithm inputs. Results are cross-validated using MODIS-based flood maps created using the Dartmouth Flood Observatory detection algorithm. Model uncertainty highlights the difficulty of deploying unbalanced training data sets based on rare extreme events.
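A minimal sketch of the two classifiers compared in the talk, trained on synthetic pixel features with scikit-learn rather than in Google Earth Engine; the predictor layers, labelling rule, and hyperparameters are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Synthetic pixel-level predictors standing in for satellite-derived layers
# (e.g. elevation, slope, distance to channel, antecedent precipitation).
n_pixels = 2000
X = rng.normal(size=(n_pixels, 4))
# Toy labelling rule: low, flat pixels near the channel flood more often (plus noise).
y = ((-1.2 * X[:, 0] - 0.8 * X[:, 1] - X[:, 2] + rng.normal(0, 0.8, n_pixels)) > 0).astype(int)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {acc.mean():.3f}")
```

In practice the labels would come from the MODIS-based flood maps mentioned above rather than a synthetic rule.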
Using Machine Learning in Adversarial Environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren Leon Davis
Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. We propose to embed machine learning within a game theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers' response and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting attackers' reaction, with the goal of computing optimal moving target defense. One important challenge is to construct a model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, and by engaging experimental subjects with red teaming experience in trying to actively circumvent an intrusion detection system, and learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data.
Numerical Model for Predicting and Managing Heat Dissipation from a Neural Probe
2013-05-10
[Figure: predicted temperature versus distance from the probe centerline, comparing the 3-D model (x and y directions) with the 2-D model (r). Surrounding fragments mention an aluminum sample holder constructed by the USNA Fabrication Lab and a hydroxyethyl cellulose gel considered as the biosimulant.]
Examining the Latent Structure of the Delis-Kaplan Executive Function System.
Karr, Justin E; Hofer, Scott M; Iverson, Grant L; Garcia-Barrera, Mauricio A
2018-05-04
The current study aimed to determine whether the Delis-Kaplan Executive Function System (D-KEFS) taps into three executive function factors (inhibition, shifting, fluency) and to assess the relationship between these factors and tests of executive-related constructs less often measured in latent variable research: reasoning, abstraction, and problem solving. Participants included 425 adults from the D-KEFS standardization sample (20-49 years old; 50.1% female; 70.1% White). Eight alternative measurement models were compared based on model fit, with test scores assigned a priori to three factors: inhibition (Color-Word Interference, Tower), shifting (Trail Making, Sorting, Design Fluency), and fluency (Verbal/Design Fluency). The Twenty Questions, Word Context, and Proverb Tests were predicted in separate structural models. The three-factor model fit the data well (CFI = 0.938; RMSEA = 0.047), although a two-factor model, with shifting and fluency merged, fit similarly well (CFI = 0.929; RMSEA = 0.048). A bifactor model fit best (CFI = 0.977; RMSEA = 0.032) and explained the most variance in shifting indicators, but rarely converged among 5,000 bootstrapped samples. When the three first-order factors simultaneously predicted the criterion variables, only shifting was uniquely predictive (p < .05; R2 = 0.246-0.408). The bifactor significantly predicted all three criterion variables (p < .001; R2 = 0.141-0.242). Results supported a three-factor D-KEFS model (i.e., inhibition, shifting, and fluency), although shifting and fluency were highly related (r = 0.696). The bifactor showed superior fit, but converged less often than other models. Shifting best predicted tests of reasoning, abstraction, and problem solving. These findings support the validity of D-KEFS scores for measuring executive-related constructs and provide a framework through which clinicians can interpret D-KEFS results.
Constructing an everywhere and locally relevant predictive model of the West-African critical zone
NASA Astrophysics Data System (ADS)
Hector, B.; Cohard, J. M.; Pellarin, T.; Maxwell, R. M.; Cappelaere, B.; Demarty, J.; Grippa, M.; Kergoat, L.; Lebel, T.; Mamadou, O.; Mougin, E.; Panthou, G.; Peugeot, C.; Vandervaere, J. P.; Vischel, T.; Vouillamoz, J. M.
2017-12-01
Considering water resources and hydrologic hazards, West Africa is among the regions most vulnerable to both climatic change (e.g., the observed intensification of precipitation) and anthropogenic change. With a population growth rate of about 3% per year, the region is experiencing rapid land-use change and increased pressure on surface water and groundwater resources, with observed consequences for the hydrological cycle (the water table rise of the Sahelian paradox, an increase in flood occurrence, etc.). Managing large hydrosystems (such as transboundary aquifers or river basins like the Niger river) requires anticipating such changes. However, the region significantly lacks the observations needed to construct and validate critical zone (CZ) models able to predict future hydrologic regimes, and it comprises hydrosystems that span strong environmental gradients (e.g. geological, climatic, ecological) with highly different dominant hydrological processes. We address these issues by constructing a high-resolution (1 km²) regional-scale physically-based model using ParFlow-CLM, which allows modeling a wide range of processes without prior knowledge of their relative dominance. Our approach combines modeling at multiple scales, from local to meso and regional, within the same theoretical framework. Local and meso-scale models are evaluated against the rich AMMA-CATCH CZ observation database, which covers 3 supersites with contrasted environments in Benin (Lat.: 9.8°N), Niger (Lat.: 13.3°N) and Mali (Lat.: 15.3°N). At the regional scale, the lack of a relevant map of soil hydrodynamic parameters is addressed using remote sensing data assimilation. Our first results show the model's ability to reproduce the known dominant hydrological processes (runoff generation, ET, groundwater recharge…) across the major West-African regions and allow us to conduct virtual experiments to explore the impact of global changes on the hydrosystems. This approach is a first step toward the construction of a reference model for studying regional CZ sensitivity to global changes and will help to identify the required prior parameters and to construct meta-models for deeper investigations of interactions within the CZ.
Robinson, Natalie G; Masser, Barbara M; White, Katherine M; Hyde, Melissa K; Terry, Deborah J
2008-12-01
With an increasing demand for blood and blood products in Australia, there is a continual need to recruit blood donors. As such, it is important to investigate the factors that impact on nondonors' decision-making processes with regard to donating blood for the first time. Previous research has established the efficacy of the theory of planned behavior (TPB) in predicting blood donor intentions. The current research aimed to test a TPB model augmented with constructs implicated in previous blood donor research; specifically descriptive norm, moral norm, anticipated regret, and donation anxiety. Participants completed measures assessing the standard TPB variables of attitude, subjective norm, and perceived behavioral control (PBC) as well as descriptive norm, moral norm, donation anxiety, and anticipated regret. Path analysis examined the utility of the augmented TPB model to predict 195 non-blood donors' intentions to donate blood. A final revised model provided a very good fit to the data and included attitude, PBC, moral norm, descriptive norm, anticipated regret, and donation anxiety as direct predictors of intention, with these factors accounting for 70 percent of the variance in intentions to donate blood. A revised TPB model provided a more efficacious predictor of nondonors' intentions to donate than the standard TPB model and highlights the role that norm-based factors and affective-laden constructs play in predicting non-blood donors' intentions to donate.
Jung, Chan Sik; Koh, Sang-Hyun; Nam, Youngwoo; Ahn, Jeong Joon; Lee, Cha Young; Choi, Won I L
2015-08-01
Monochamus saltuarius Gebler is a vector that transmits the pine wood nematode, Bursaphelenchus xylophilus, to Korean white pine, Pinus koraiensis, in Korea. To reduce the damage caused by this nematode in pine forests, timely control measures are needed to suppress the cerambycid beetle population. This study sought to construct a forecasting model to predict beetle emergence based on spring temperature. Logs of Korean white pine were infested with M. saltuarius in 2009, and the infested logs were overwintered. In February 2010, the infested logs were moved into incubators held at constant temperatures of 16, 20, 23, 25, 27, 30 or 34°C until all adults had emerged. The developmental rate of the beetles was estimated by linear and nonlinear equations, and a forecasting model for emergence of the beetle was constructed by pooling data based on normalized developmental rate. The lower threshold temperature for development was 8.3°C. The forecasting model predicted the emergence pattern of M. saltuarius collected from four areas in the northern Republic of Korea relatively well. The median emergence dates predicted by the model were 2.2-5.9 d earlier than the observed median dates. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
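A sketch of the linear portion of a developmental-rate analysis of this kind: fit rate = a + bT, then take the lower threshold as -a/b and the thermal constant as 1/b. The rate data below are invented for illustration (chosen to give a threshold near the reported value), not the study's measurements.

```python
import numpy as np

# Illustrative mean development rates (1/days to emergence) at constant rearing
# temperatures, covering the roughly linear part of the temperature response.
temps = np.array([16.0, 20.0, 23.0, 25.0, 27.0])          # deg C
rates = np.array([0.010, 0.015, 0.019, 0.021, 0.024])     # 1/day

b, a = np.polyfit(temps, rates, 1)       # rate = a + b * T
t_lower = -a / b                         # lower developmental threshold (deg C)
thermal_constant = 1.0 / b               # degree-days required for emergence

print(f"Lower threshold ~ {t_lower:.1f} C, thermal constant ~ {thermal_constant:.0f} degree-days")
```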
Prediction and visualization of redox conditions in the groundwater of Central Valley, California
Rosecrans, Celia Z.; Nolan, Bernard T.; Gronberg, JoAnn M.
2017-01-01
Regional-scale, three-dimensional continuous probability models were constructed for aspects of redox conditions in the groundwater system of the Central Valley, California. These models yield grids depicting the probability that groundwater in a particular location will have dissolved oxygen (DO) concentrations less than selected threshold values representing anoxic groundwater conditions, or will have dissolved manganese (Mn) concentrations greater than selected threshold values representing secondary drinking water-quality contaminant levels (SMCL) and health-based screening levels (HBSL). The probability models were constrained by the alluvial boundary of the Central Valley to a depth of approximately 300 m. Probability distribution grids can be extracted from the 3-D models at any desired depth, and are of interest to water-resource managers, water-quality researchers, and groundwater modelers concerned with the occurrence of natural and anthropogenic contaminants related to anoxic conditions. Models were constructed using a Boosted Regression Trees (BRT) machine learning technique that produces many trees as part of an additive model, can handle many variables, automatically incorporates interactions, and is resistant to collinearity. Machine learning methods for statistical prediction are becoming increasingly popular in that they do not require the assumptions associated with traditional hypothesis testing. Models were constructed using measured dissolved oxygen and manganese concentrations sampled from 2767 wells within the alluvial boundary of the Central Valley, and over 60 explanatory variables representing regional-scale soil properties, soil chemistry, land use, aquifer textures, and aquifer hydrologic properties. Models were trained on a USGS dataset of 932 wells, and evaluated on an independent hold-out dataset of 1835 wells from the California Division of Drinking Water. We used cross-validation to assess the predictive performance of models of varying complexity, as a basis for selecting final models. Trained models were applied to cross-validation testing data and a separate hold-out dataset to evaluate model predictive performance, emphasizing three metrics of fit: Kappa, accuracy, and the area under the receiver operating characteristic (ROC) curve. The final trained models were used for mapping predictions at discrete depths to a depth of 304.8 m. Trained DO and Mn models had accuracies of 86–100%, Kappa values of 0.69–0.99, and ROC values of 0.92–1.0. Model accuracies for cross-validation testing datasets were 82–95% and ROC values were 0.87–0.91, indicating good predictive performance. Kappas for the cross-validation testing dataset were 0.30–0.69, indicating fair to substantial agreement between testing observations and model predictions. Hold-out data were available for the manganese model only and indicated accuracies of 89–97%, ROC values of 0.73–0.75, and Kappa values of 0.06–0.30. The predictive performance of both the DO and Mn models was reasonable, considering all three of these fit metrics and the low percentages of low-DO and high-Mn events in the data.
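A hedged sketch of the modeling step, with scikit-learn's GradientBoostingClassifier standing in for the boosted regression trees package actually used; the wells, predictors, and anoxia rule are synthetic, and the evaluation simply reports the same three fit metrics named in the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, cohen_kappa_score, accuracy_score

rng = np.random.default_rng(1)

# Synthetic wells: depth, % fine-grained texture, recharge index, % agricultural land use.
n = 3000
X = np.column_stack([rng.uniform(10, 300, n),      # well depth (m)
                     rng.uniform(0, 100, n),       # fines (%)
                     rng.normal(0, 1, n),          # recharge index
                     rng.uniform(0, 100, n)])      # agricultural land (%)
# Toy rule: deeper, finer-grained, low-recharge settings are more likely anoxic.
logit = 0.02 * X[:, 0] + 0.03 * X[:, 1] - 1.0 * X[:, 2] + 0.01 * X[:, 3] - 6.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(X_tr, y_tr)

p = brt.predict_proba(X_te)[:, 1]
pred = (p > 0.5).astype(int)
print("ROC AUC:", round(roc_auc_score(y_te, p), 3))
print("Accuracy:", round(accuracy_score(y_te, pred), 3))
print("Kappa:", round(cohen_kappa_score(y_te, pred), 3))
```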
Andersen, Claus E; Raaschou-Nielsen, Ole; Andersen, Helle Primdal; Lind, Morten; Gravesen, Peter; Thomsen, Birthe L; Ulbak, Kaare
2007-01-01
A linear regression model has been developed for the prediction of indoor (222)Rn in Danish houses. The model provides proxy radon concentrations for about 21,000 houses in a Danish case-control study on the possible association between residential radon and childhood cancer (primarily leukaemia). The model was calibrated against radon measurements in 3116 houses. An independent dataset with 788 house measurements was used for model performance assessment. The model includes nine explanatory variables, of which the most important ones are house type and geology. All explanatory variables are available from central databases. The model was fitted to log-transformed radon concentrations and it has an R(2) of 40%. The uncertainty associated with individual predictions of (untransformed) radon concentrations is about a factor of 2.0 (one standard deviation). The comparison with the independent test data shows that the model makes sound predictions and that errors of radon predictions are only weakly correlated with the estimates themselves (R(2) = 10%).
Latent constructs model explaining the attachment-linked variation in autobiographical remembering.
Öner, Sezin; Gülgöz, Sami
2016-01-01
In the current study, we proposed a latent constructs model to characterise the qualitative aspects of autobiographical remembering and investigated the structural relations in the model that may vary across individuals. Primarily, we focused on the memories of romantic relationships and argued that attachment anxiety and avoidance would be reflected in the ways that individuals encode, rehearse, or remember autobiographical memories in close relationships. Participants reported two positive and two negative relationship-specific memories and rated the characteristics for each memory. As predicted, the basic memory model yielded appropriate fit, indicating that event characteristics (EC) predicted the frequency of rehearsal (RC) and phenomenology at retrieval (PC). When attachment variables were integrated, the model showed that rehearsal mediated the link between anxiety and PC, especially for negative memories. On the other hand, for avoidance EC was the key factor mediating the link between avoidance and RC, as well as PC. Findings were discussed with respect to autobiographical memory functions emphasising a systematically, integrated framework.
Distributed and Lumped Parameter Models for the Characterization of High Throughput Bioreactors
Conoscenti, Gioacchino; Cutrì, Elena; Tuan, Rocky S.; Raimondi, Manuela T.; Gottardi, Riccardo
2016-01-01
Next generation bioreactors are being developed to generate multiple human cell-based tissue analogs within the same fluidic system, to better recapitulate the complexity and interconnection of human physiology [1, 2]. The effective development of these devices requires a solid understanding of their interconnected fluidics, to predict the transport of nutrients and waste through the constructs and improve the design accordingly. In this work, we focus on a specific model of bioreactor, with multiple input/outputs, aimed at generating osteochondral constructs, i.e., a biphasic construct in which one side is cartilaginous in nature, while the other is osseous. We next develop a general computational approach to model the microfluidics of a multi-chamber, interconnected system that may be applied to human-on-chip devices. This objective requires overcoming several challenges at the level of computational modeling. The main one consists of addressing the multi-physics nature of the problem that combines free flow in channels with hindered flow in porous media. Fluid dynamics is also coupled with advection-diffusion-reaction equations that model the transport of biomolecules throughout the system and their interaction with living tissues and C constructs. Ultimately, we aim at providing a predictive approach useful for the general organ-on-chip community. To this end, we have developed a lumped parameter approach that allows us to analyze the behavior of multi-unit bioreactor systems with modest computational effort, provided that the behavior of a single unit can be fully characterized. PMID:27669413
Distributed and Lumped Parameter Models for the Characterization of High Throughput Bioreactors.
Iannetti, Laura; D'Urso, Giovanna; Conoscenti, Gioacchino; Cutrì, Elena; Tuan, Rocky S; Raimondi, Manuela T; Gottardi, Riccardo; Zunino, Paolo
Next generation bioreactors are being developed to generate multiple human cell-based tissue analogs within the same fluidic system, to better recapitulate the complexity and interconnection of human physiology [1, 2]. The effective development of these devices requires a solid understanding of their interconnected fluidics, to predict the transport of nutrients and waste through the constructs and improve the design accordingly. In this work, we focus on a specific model of bioreactor, with multiple input/outputs, aimed at generating osteochondral constructs, i.e., a biphasic construct in which one side is cartilaginous in nature, while the other is osseous. We next develop a general computational approach to model the microfluidics of a multi-chamber, interconnected system that may be applied to human-on-chip devices. This objective requires overcoming several challenges at the level of computational modeling. The main one consists of addressing the multi-physics nature of the problem that combines free flow in channels with hindered flow in porous media. Fluid dynamics is also coupled with advection-diffusion-reaction equations that model the transport of biomolecules throughout the system and their interaction with living tissues and C constructs. Ultimately, we aim at providing a predictive approach useful for the general organ-on-chip community. To this end, we have developed a lumped parameter approach that allows us to analyze the behavior of multi-unit bioreactor systems with modest computational effort, provided that the behavior of a single unit can be fully characterized.
Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad
2016-02-01
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
Mares-García, Emma; Palazón-Bru, Antonio; Folgado-de la Rosa, David Manuel; Pereira-Expósito, Avelino; Martínez-Martín, Álvaro; Cortés-Castell, Ernesto; Gil-Guillén, Vicente Francisco
2017-01-01
Other studies have assessed nonadherence to proton pump inhibitors (PPIs), but none has developed a screening test for its detection. To construct and internally validate a predictive model for nonadherence to PPIs. This prospective observational study with a one-month follow-up was carried out in 2013 in Spain, and included 302 patients with a prescription for PPIs. The primary variable was nonadherence to PPIs (pill count). Secondary variables were gender, age, antidepressants, type of PPI, non-guideline-recommended prescription (NGRP) of PPIs, and total number of drugs. With the secondary variables, a binary logistic regression model to predict nonadherence was constructed and adapted to a points system. The ROC curve, with its area (AUC), was calculated and the optimal cut-off point was established. The points system was internally validated through 1,000 bootstrap samples and implemented in a mobile application (Android). The points system had three prognostic variables: total number of drugs, NGRP of PPIs, and antidepressants. The AUC was 0.87 (95% CI [0.83-0.91], p < 0.001). The test yielded a sensitivity of 0.80 (95% CI [0.70-0.87]) and a specificity of 0.82 (95% CI [0.76-0.87]). The three parameters were very similar in the bootstrap validation. A points system to predict nonadherence to PPIs has been constructed, internally validated and implemented in a mobile application. Provided similar results are obtained in external validation studies, we will have a screening tool to detect nonadherence to PPIs.
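A small sketch of how a logistic model like this one is typically adapted to a bedside points system: each coefficient is scaled by a reference increment and rounded. The coefficients, reference increment, and example patient below are hypothetical, not the study's values.

```python
import numpy as np

# Hypothetical logistic-regression coefficients for non-adherence (not the study's):
# intercept, total number of drugs, non-guideline-recommended PPI prescription (0/1),
# antidepressant use (0/1).
beta = {"intercept": -3.0, "n_drugs": 0.35, "ngrp": 1.4, "antidepressant": 0.9}

def to_points(coef, reference=0.35):
    """Round a coefficient to points relative to a reference increment, the usual
    way a regression model is converted to a simple scoring system."""
    return round(coef / reference)

points = {k: to_points(v) for k, v in beta.items() if k != "intercept"}
print("Points per predictor:", points)

def risk(n_drugs, ngrp, antidepressant):
    lp = (beta["intercept"] + beta["n_drugs"] * n_drugs
          + beta["ngrp"] * ngrp + beta["antidepressant"] * antidepressant)
    return 1 / (1 + np.exp(-lp))

print("Predicted non-adherence risk (6 drugs, NGRP, on antidepressant):",
      round(risk(6, 1, 1), 2))
```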
Overview of the 1986--1987 atomic mass predictions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.
1988-07-01
The need for a comprehensive update of earlier sets of atomic mass predictions is documented. A project that grew from this need and which resulted in the preparation of the 1986--1987 Atomic Mass Predictions is summarized. Ten sets of new mass predictions and expository text from a variety of types of mass models are combined with the latest evaluation of experimentally determined atomic masses. The methodology employed in constructing these mass predictions is outlined. The models are compared with regard to their reproduction of the experimental mass surface and their use of varying numbers of adjustable parameters. Plots are presented, for each set of predictions, of differences between model calculations and the measured masses. These plots may be used to estimate the reliability of the new mass predictions in unmeasured regions that border the experimentally known mass surface. copyright 1988 Academic Press, Inc.
Britton, Gary I.; Davey, Graham C. L.
2017-01-01
Emerging evidence suggests that many of the clinical constructs used to help understand and explain obsessive-compulsive (OC) symptoms, and negative mood, may be causally interrelated. One approach to understanding this interrelatedness is a motivational systems approach. This approach suggests that rather than considering clinical constructs and negative affect as separable entities, they are all features of an integrated threat management system, and as such are highly coordinated and interdependent. The aim of the present study was to examine if clinical constructs related to OC symptoms and negative mood are best treated as separable or, alternatively, if these clinical constructs and negative mood are best seen as indicators of an underlying superordinate variable, as would be predicted by a motivational systems approach. A sample of 370 student participants completed measures of mood and the clinical constructs of inflated responsibility, intolerance of uncertainty, not just right experiences, and checking stop rules. An exploratory factor analysis suggested two plausible factor structures: one where all construct items and negative mood items loaded onto one underlying superordinate variable, and a second structure comprising five factors, where each item loaded onto a factor representative of what the item was originally intended to measure. A confirmatory factor analysis showed that the five-factor model was preferable to the one-factor model, suggesting the four constructs and negative mood are best conceptualized as separate variables. Given that the predictions of a motivational systems approach were not supported in the current study, other possible explanations for the causal interrelatedness between clinical constructs and negative mood are discussed. PMID:28959224
A Predictive Model of Anesthesia Depth Based on SVM in the Primary Visual Cortex
Shi, Li; Li, Xiaoyuan; Wan, Hong
2013-01-01
In this paper, a novel model for predicting anesthesia depth is put forward based on local field potentials (LFPs) in the primary visual cortex (V1 area) of rats. The model is constructed using a Support Vector Machine (SVM) to realize online prediction and classification of anesthesia depth. The raw LFP signal was first decomposed by wavelet transform into scaling components; those containing higher-frequency information were best suited for precise analysis of anesthetic depth. Secondly, the characteristics of anesthetized states were extracted by complexity analysis, and two frequency-domain parameters were selected. These extracted features were used as the input vector of the predictive model. Finally, we collected the anesthesia samples from LFP recordings obtained during visual stimulus experiments on Long Evans rats. Our results indicate that the predictive model is accurate and computationally fast, and that it is also well suited for online prediction. PMID:24044024
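To make the classification step concrete, the hedged sketch below trains an RBF-kernel SVM on synthetic feature vectors standing in for the wavelet, complexity and frequency-domain features; it illustrates the general SVM workflow, not the authors' exact feature extraction, labels or tuning.

```python
# Minimal sketch of an SVM classifier over anesthesia-depth features
# (synthetic feature vectors; not the paper's LFP data or feature set).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 4))   # e.g. complexity measure, two frequency-domain parameters, wavelet energy
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # awake vs anesthetized (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```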
Uzun, Harun; Yıldız, Zeynep; Goldfarb, Jillian L; Ceylan, Selim
2017-06-01
As biomass becomes more integrated into our energy feedstocks, the ability to predict its combustion enthalpies from routine data such as carbon, ash, and moisture content enables rapid decisions about utilization. The present work constructs a novel artificial neural network model with a 3-3-1 tangent sigmoid architecture to predict biomasses' higher heating values from only their proximate analyses, requiring minimal specificity as compared to models based on elemental composition. The model presented has a considerably higher correlation coefficient (0.963) and lower root mean square (0.375), mean absolute (0.328), and mean bias errors (0.010) than other models presented in the literature which, at least when applied to the present data set, tend to under-predict the combustion enthalpy. Copyright © 2017 Elsevier Ltd. All rights reserved.
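A minimal sketch of a comparable 3-3-1 tangent-sigmoid network is given below, assuming scikit-learn and synthetic proximate-analysis inputs; the input variables, data and resulting scores are placeholders rather than the paper's training set or reported statistics.

```python
# Minimal sketch of a 3-3-1 tanh network mapping proximate analysis to higher
# heating value (synthetic data; illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 150
X = np.column_stack([
    rng.uniform(30, 60, n),   # fixed carbon, %
    rng.uniform(2, 30, n),    # ash, %
    rng.uniform(5, 15, n),    # moisture, %
])
hhv = 0.35 * X[:, 0] - 0.10 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(scale=0.5, size=n)  # placeholder MJ/kg

net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(3,), activation="tanh", max_iter=5000, random_state=0),
)
net.fit(X[:120], hhv[:120])
print("R^2 on held-out samples:", round(net.score(X[120:], hhv[120:]), 3))
```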
Sutphin, David M.; Bliss, James D.
1990-01-01
On the basis of differences derived from genetic, descriptive, and grade-tonnage data, graphite deposits are classified here into three deposit types: disseminated flake, amorphous (microcrystalline), and graphite vein. Descriptive models have been constructed for each of these deposit types, and grade-tonnage models are constructed for the disseminated flake and amorphous deposit types. Grade and tonnage data are also used to construct grade-tonnage models that assist in predicting the size and grade of undiscovered graphite deposits. The median tonnage and carbon grade of disseminated flake deposits are 240 000 tonnes and 9% carbon; for amorphous deposits, they are 130 000 tonnes and 40% carbon. The differences in grade between the disseminated flake and amorphous deposit types are statistically significant, whereas the differences in the amount of contained carbon are not.
Predicting use of effective vegetable parenting practices with the Model of Goal Directed Behavior
USDA-ARS?s Scientific Manuscript database
Our objective was to model effective vegetable parenting practices using the Model of Goal Directed Vegetable Parenting Practices construct scales. An internet survey was conducted with 307 parents (mostly mothers) of preschoolers in Houston, Texas to assess their agreement with effective vegetable ...
Comparison of Two Analysis Approaches for Measuring Externalized Mental Models
ERIC Educational Resources Information Center
Al-Diban, Sabine; Ifenthaler, Dirk
2011-01-01
Mental models are basic cognitive constructs that are central for understanding phenomena of the world and predicting future events. Our comparison of two analysis approaches, SMD and QFCA, for measuring externalized mental models reveals different levels of abstraction and different perspectives. The advantages of the SMD include possibilities…
Modeling patients' acceptance of provider-delivered e-health.
Wilson, E Vance; Lankton, Nancy K
2004-01-01
Health care providers are beginning to deliver a range of Internet-based services to patients; however, it is not clear which of these e-health services patients need or desire. The authors propose that patients' acceptance of provider-delivered e-health can be modeled in advance of application development by measuring the effects of several key antecedents to e-health use and applying models of acceptance developed in the information technology (IT) field. This study tested three theoretical models of IT acceptance among patients who had recently registered for access to provider-delivered e-health. An online questionnaire administered items measuring perceptual constructs from the IT acceptance models (intrinsic motivation, perceived ease of use, perceived usefulness/extrinsic motivation, and behavioral intention to use e-health) and five hypothesized antecedents (satisfaction with medical care, health care knowledge, Internet dependence, information-seeking preference, and health care need). Responses were collected and stored in a central database. All tested IT acceptance models performed well in predicting patients' behavioral intention to use e-health. Antecedent factors of satisfaction with provider, information-seeking preference, and Internet dependence uniquely predicted constructs in the models. Information technology acceptance models provide a means to understand which aspects of e-health are valued by patients and how this may affect future use. In addition, antecedents to the models can be used to predict e-health acceptance in advance of system development.
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of stable and consistent AR models and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
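For readers unfamiliar with the stability condition mentioned above, the sketch below checks whether an AR(p) coefficient vector satisfies the classical requirement that all roots of the characteristic polynomial lie inside the unit circle; the coefficients are illustrative, since the paper obtains its parameters algebraically from long-time average statistics rather than from data.

```python
# Minimal sketch of the classical AR stability check.
import numpy as np

def is_stable_ar(phi):
    """phi = [phi_1, ..., phi_p] for x_t = sum_k phi_k * x_{t-k} + noise."""
    # Characteristic polynomial: z^p - phi_1*z^(p-1) - ... - phi_p
    coeffs = np.concatenate(([1.0], -np.asarray(phi, dtype=float)))
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

print(is_stable_ar([0.5, 0.3]))   # True: stationary AR(2)
print(is_stable_ar([1.2, 0.1]))   # False: explosive
```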
ERIC Educational Resources Information Center
Parkes, Kelly A.; Jones, Brett D.
2012-01-01
The primary purpose of this study was to examine whether any of the six motivational constructs in the expectancy-value model of motivation (i.e., expectancy, ability perceptions, intrinsic interest value, attainment value, social utility value, and cost) would predict whether students intended to have a career teaching classroom music or…
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
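A heavily simplified sketch of the multi-fidelity idea follows: a Gaussian process is fitted to the discrepancy between a cheap low-fidelity model and a few expensive high-fidelity evaluations, giving a posterior mean and uncertainty for the high-fidelity output. The one-dimensional toy functions are assumptions for illustration and do not reproduce the paper's reduced basis construction or greedy sampling of training inputs.

```python
# Toy multi-fidelity correction with a Gaussian process on the discrepancy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def low_fidelity(x):   # cheap surrogate
    return np.sin(2 * np.pi * x)

def high_fidelity(x):  # "expensive" model, evaluated only a few times
    return np.sin(2 * np.pi * x) + 0.3 * x**2

X_train = np.linspace(0, 1, 6).reshape(-1, 1)
residual = high_fidelity(X_train.ravel()) - low_fidelity(X_train.ravel())

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-6), normalize_y=True)
gp.fit(X_train, residual)

X_test = np.linspace(0, 1, 101).reshape(-1, 1)
corr_mean, corr_std = gp.predict(X_test, return_std=True)
pred_mean = low_fidelity(X_test.ravel()) + corr_mean   # estimated high-fidelity output
print("max predictive std:", round(corr_std.max(), 4))
```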
2012-09-01
[Abstract not available; only front-matter fragments remain. The source is a 2012 report on prediction models in AutoMap, supported by NSF IGERT 9972762, ARI W91WAW07C0063, and ARL/CTA, drawing on datasets generated for nationally funded initiatives and made available through the Linguistic Data Consortium (LDC).]
Thiels, Cornelius A; Yu, Denny; Abdelrahman, Amro M; Habermann, Elizabeth B; Hallbeck, Susan; Pasupathy, Kalyan S; Bingener, Juliane
2017-01-01
Reliable prediction of operative duration is essential for improving patient and care team satisfaction, optimizing resource utilization and reducing cost. Current operative scheduling systems are unreliable and contribute to costly over- and underestimation of operative time. We hypothesized that the inclusion of patient-specific factors would improve the accuracy in predicting operative duration. We reviewed all elective laparoscopic cholecystectomies performed at a single institution between 01/2007 and 06/2013. Concurrent procedures were excluded. Univariate analysis evaluated the effect of age, gender, BMI, ASA, laboratory values, smoking, and comorbidities on operative duration. Multivariable linear regression models were constructed using the significant factors (p < 0.05). The patient factors model was compared to the traditional surgical scheduling system estimates, which use historical surgeon-specific and procedure-specific operative duration. External validation was done using the ACS-NSQIP database (n = 11,842). A total of 1801 laparoscopic cholecystectomy patients met inclusion criteria. Female sex was associated with reduced operative duration (-7.5 min, p < 0.001 vs. male sex), while increasing BMI (+5.1 min BMI 25-29.9, +6.9 min BMI 30-34.9, +10.4 min BMI 35-39.9, +17.0 min BMI 40+, all p < 0.05 vs. normal BMI), increasing ASA (+7.4 min ASA III, +38.3 min ASA IV, all p < 0.01 vs. ASA I), and elevated liver function tests (+7.9 min, p < 0.01 vs. normal) were predictive of increased operative duration on univariate analysis. A model was then constructed using these predictive factors. The traditional surgical scheduling system was poorly predictive of actual operative duration (R2 = 0.001) compared to the patient factors model (R2 = 0.08). The model remained predictive on external validation (R2 = 0.14). The addition of surgeon as a variable in the institutional model further improved the predictive ability of the model (R2 = 0.18). The use of routinely available pre-operative patient factors improves the prediction of operative duration during cholecystectomy.
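A minimal sketch of such a patient-factors model is given below, assuming statsmodels and synthetic covariates (sex, BMI, ASA class, elevated liver function tests); the simulated coefficients are not the study's estimates.

```python
# Minimal sketch of a patient-factors linear model for operative duration
# (synthetic data; coefficients are illustrative, not the reported values).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "female":   rng.integers(0, 2, n),
    "bmi":      rng.uniform(20, 45, n),
    "asa":      rng.integers(1, 4, n),
    "lft_high": rng.integers(0, 2, n),
})
df["duration_min"] = (60 - 7.5 * df.female + 0.8 * (df.bmi - 25)
                      + 7 * (df.asa - 1) + 8 * df.lft_high
                      + rng.normal(scale=15, size=n))

model = smf.ols("duration_min ~ female + bmi + C(asa) + lft_high", data=df).fit()
print(round(model.rsquared, 3))   # analogue of the reported R2 comparison
print(model.params.round(1))
```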
NASA Astrophysics Data System (ADS)
Gottschalk, Ian P.; Hermans, Thomas; Knight, Rosemary; Caers, Jef; Cameron, David A.; Regnery, Julia; McCray, John E.
2017-12-01
Geophysical data have proven to be very useful for lithological characterization. However, quantitatively integrating the information gained from acquiring geophysical data generally requires colocated lithological and geophysical data for constructing a rock-physics relationship. In this contribution, the issue of integrating noncolocated geophysical and lithological data is addressed, and the results are applied to simulate groundwater flow in a heterogeneous aquifer in the Prairie Waters Project North Campus aquifer recharge site, Colorado. Two methods of constructing a rock-physics transform between electrical resistivity tomography (ERT) data and lithology measurements are assessed. In the first approach, maximum likelihood estimation (MLE) is used to fit a bimodal lognormal distribution to horizontal cross-sections of the ERT resistivity histogram. In the second approach, a spatial bootstrap is applied to approximate the rock-physics relationship. The rock-physics transforms provide soft data for multiple-point statistics (MPS) simulations. Subsurface models are used to run groundwater flow and tracer test simulations. Each model's uncalibrated, predicted breakthrough time is evaluated based on its agreement with measured subsurface travel time values from infiltration basins to selected groundwater recovery wells. We find that incorporating geophysical information into uncalibrated flow models reduces the difference with observed values, as compared to flow models without geophysical information incorporated. The integration of geophysical data also narrows the variance of predicted tracer breakthrough times substantially. Accuracy is highest and variance is lowest in breakthrough predictions generated by the MLE-based rock-physics transform. Calibrating the ensemble of geophysically constrained models would help produce a suite of realistic flow models for predictive purposes at the site. We find that the success of breakthrough predictions is highly sensitive to the definition of the rock-physics transform; it is therefore important to model this transfer function accurately.
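To illustrate the first rock-physics approach, the sketch below fits a two-component (bimodal) lognormal mixture to synthetic resistivity values by maximum likelihood with SciPy; in the study this would be done per horizontal cross-section of the ERT model, and the data and starting values here are placeholders.

```python
# Maximum likelihood fit of a two-component lognormal mixture (synthetic data).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
rho = np.concatenate([rng.lognormal(mean=3.0, sigma=0.30, size=400),   # e.g. finer material
                      rng.lognormal(mean=4.5, sigma=0.25, size=600)])  # e.g. coarser material

def neg_log_lik(params):
    w, m1, s1, m2, s2 = params
    pdf = (w * stats.lognorm.pdf(rho, s1, scale=np.exp(m1))
           + (1 - w) * stats.lognorm.pdf(rho, s2, scale=np.exp(m2)))
    return -np.sum(np.log(pdf + 1e-300))

res = optimize.minimize(neg_log_lik, x0=[0.5, 2.5, 0.5, 5.0, 0.5],
                        bounds=[(0.01, 0.99), (0, 8), (0.05, 2), (0, 8), (0.05, 2)])
w, m1, s1, m2, s2 = res.x
print("mixture weight:", round(w, 2), "log-means:", round(m1, 2), round(m2, 2))
```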
Integration of QUARK and I-TASSER for ab initio protein structure prediction in CASP11
Zhang, Wenxuan; Yang, Jianyi; He, Baoji; Walker, Sara Elizabeth; Zhang, Hongjiu; Govindarajoo, Brandon; Virtanen, Jouko; Xue, Zhidong; Shen, Hong-Bin; Zhang, Yang
2015-01-01
We tested two pipelines developed for template-free protein structure prediction in the CASP11 experiment. First, the QUARK pipeline constructs structure models by reassembling fragments of continuously distributed lengths excised from unrelated proteins. Five free-modeling (FM) targets have the model successfully constructed by QUARK with a TM-score above 0.4, including the first model of T0837-D1, which has a TM-score=0.736 and RMSD=2.9 Å to the native. Detailed analysis showed that the success is partly attributed to the high-resolution contact map prediction derived from fragment-based distance-profiles, which are mainly located between regular secondary structure elements and loops/turns and help guide the orientation of secondary structure assembly. In the Zhang-Server pipeline, weakly scoring threading templates are re-ordered by the structural similarity to the ab initio folding models, which are then reassembled by I-TASSER based structure assembly simulations; 60% more domains with length up to 204 residues, compared to the QUARK pipeline, were successfully modeled by the I-TASSER pipeline with a TM-score above 0.4. The robustness of the I-TASSER pipeline can stem from the composite fragment-assembly simulations that combine structures from both ab initio folding and threading template refinements. Despite the promising cases, challenges still exist in long-range beta-strand folding, domain parsing, and the uncertainty of secondary structure prediction; the latter of which was found to affect nearly all aspects of FM structure predictions, from fragment identification, target classification, structure assembly, to final model selection. Significant efforts are needed to solve these problems before real progress on FM could be made. PMID:26370505
Integration of QUARK and I-TASSER for Ab Initio Protein Structure Prediction in CASP11.
Zhang, Wenxuan; Yang, Jianyi; He, Baoji; Walker, Sara Elizabeth; Zhang, Hongjiu; Govindarajoo, Brandon; Virtanen, Jouko; Xue, Zhidong; Shen, Hong-Bin; Zhang, Yang
2016-09-01
We tested two pipelines developed for template-free protein structure prediction in the CASP11 experiment. First, the QUARK pipeline constructs structure models by reassembling fragments of continuously distributed lengths excised from unrelated proteins. Five free-modeling (FM) targets have the model successfully constructed by QUARK with a TM-score above 0.4, including the first model of T0837-D1, which has a TM-score = 0.736 and RMSD = 2.9 Å to the native. Detailed analysis showed that the success is partly attributed to the high-resolution contact map prediction derived from fragment-based distance-profiles, which are mainly located between regular secondary structure elements and loops/turns and help guide the orientation of secondary structure assembly. In the Zhang-Server pipeline, weakly scoring threading templates are re-ordered by the structural similarity to the ab initio folding models, which are then reassembled by I-TASSER based structure assembly simulations; 60% more domains with length up to 204 residues, compared to the QUARK pipeline, were successfully modeled by the I-TASSER pipeline with a TM-score above 0.4. The robustness of the I-TASSER pipeline can stem from the composite fragment-assembly simulations that combine structures from both ab initio folding and threading template refinements. Despite the promising cases, challenges still exist in long-range beta-strand folding, domain parsing, and the uncertainty of secondary structure prediction; the latter of which was found to affect nearly all aspects of FM structure predictions, from fragment identification, target classification, structure assembly, to final model selection. Significant efforts are needed to solve these problems before real progress on FM could be made. Proteins 2016; 84(Suppl 1):76-86. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.
Li, Ke; Zhang, Peng; Crittenden, John C; Guhathakurta, Subhrajit; Chen, Yongsheng; Fernando, Harindra; Sawhney, Anil; McCartney, Peter; Grimm, Nancy; Kahhat, Ramzy; Joshi, Himanshu; Konjevod, Goran; Choi, Yu-Jin; Fonseca, Ernesto; Allenby, Braden; Gerrity, Daniel; Torrens, Paul M
2007-07-15
To encourage sustainable development, engineers and scientists need to understand the interactions among social decision-making, development and redevelopment, land, energy and material use, and their environmental impacts. In this study, a framework that connects these interactions was proposed to guide more sustainable urban planning and construction practices. Focusing on the rapidly urbanizing setting of Phoenix, Arizona, complexity models and deterministic models were assembled into a metamodel, called Sustainable Futures 2100, which was used to predict land use and development, to quantify construction material demands, to analyze the life cycle environmental impacts, and to simulate future ground-level ozone formation.
Hagger, Martin S; Hardcastle, Sarah J; Hingley, Catherine; Strickland, Ella; Pang, Jing; Watts, Gerald F
2016-06-01
Patients with familial hypercholesterolemia (FH) are at markedly increased risk of coronary artery disease. Regular participation in three self-management behaviors, physical activity, healthy eating, and adherence to medication, can significantly reduce this risk in FH patients. We aimed to predict intentions to engage in these self-management behaviors in FH patients using a multi-theory, integrated model that makes the distinction between beliefs about illness and beliefs about self-management behaviors. Using a cross-sectional, correlational design, patients (N = 110) diagnosed with FH from a clinic in Perth, Western Australia, self-completed a questionnaire that measured constructs from three health behavior theories: the common sense model of illness representations (serious consequences, timeline, personal control, treatment control, illness coherence, emotional representations); theory of planned behavior (attitudes, subjective norms, perceived behavioral control); and social cognitive theory (self-efficacy). Structural equation models for each self-management behavior revealed consistent and statistically significant effects of attitudes on intentions across the three behaviors. Subjective norms predicted intentions for healthy eating only, and self-efficacy predicted intentions for physical activity only. There were no effects for the perceived behavioral control and common sense model constructs in any model. Attitudes feature prominently in determining intentions to engage in self-management behaviors in FH patients. The prominence of these attitudinal beliefs about self-management behaviors, as opposed to illness beliefs, suggests that addressing these beliefs may be a priority in the management of FH.
[Study on brand traceability of vinegar based on near infrared spectroscopy technology].
Guan, Xiao; Liu, Jing; Gu, Fang-Qing; Yang, Yong-Jian
2014-09-01
In the present paper, 152 vinegar samples from four different brands were chosen as research targets, and their near infrared spectra were collected in diffuse reflection mode and transmission mode, respectively. Brand traceability models for edible vinegar were then constructed. The effects of the collection mode and spectral pretreatment methods on the precision of the traceability models were investigated intensively. Models constructed by the PLS1-DA modeling method using the spectra of 114 training samples were applied to predict 38 test samples; the R2, RMSEC and RMSEP of the model based on transmission mode data were 0.92, 0.113 and 0.127, respectively, with a recognition rate of 76.32%, and those based on diffuse reflection mode data were 0.97, 0.102 and 0.119, with a recognition rate of 86.84%. The results demonstrated that near infrared spectroscopy combined with PLS1-DA can be used to establish brand traceability models for edible vinegar, and that diffuse reflection mode is more beneficial for the predictive ability of the model.
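As a rough sketch of the discriminant step, the code below runs a PLS regression on one-hot brand indicators over synthetic spectra and assigns each test sample to the brand with the largest predicted indicator. This multi-response formulation stands in for the paper's PLS1-DA, and the 114/38 split simply mirrors the reported partition; the spectra are random placeholders.

```python
# PLS-DA style brand classification on synthetic "spectra".
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n, n_wavelengths, n_brands = 152, 200, 4
X = rng.normal(size=(n, n_wavelengths))
y = rng.integers(0, n_brands, n)
Y = np.eye(n_brands)[y]                      # one-hot brand indicators

pls = PLSRegression(n_components=8).fit(X[:114], Y[:114])
pred = pls.predict(X[114:]).argmax(axis=1)   # predicted brand for the 38 test samples
print("recognition rate:", round(float((pred == y[114:]).mean()), 2))
```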
Kothe, Emily J; Mullan, Barbara A
2015-09-01
The theory of planned behaviour (TPB) has been criticized for not including interactions between major constructs thought to underlie behaviour. This study investigated the application of the TPB to the prediction of fruit and vegetable consumption across three prospective cohorts. The primary aim of the study was to investigate whether interactions between major constructs in the theory would increase the ability of the model to predict intention to consume fruit and vegetables (i.e., attitude × perceived behavioural control [PBC], subjective norm × PBC, subjective norm × attitude) and self-reported fruit and vegetable intake (i.e., PBC × intention). Secondary data analysis from three cohorts: One predictive study (cohort 1) and two intervention studies (cohorts 2 and 3). Participants completed a TPB measure at baseline and a measure of fruit and vegetable intake at 1 week (cohort 1; n = 90) or 1 month (cohorts 2 and 3; n = 296). Attitude moderated the impact of PBC on intention. PBC moderated the impact of intention on behaviour at 1 week but not 1 month. The variance accounted for by the interactions was small. However, the presence of interactions between constructs within the TPB demonstrates a need to consider interactions between variables within the TPB in both theoretical and applied research using the model. © 2014 The British Psychological Society.
[Application of an artificial neural network in the design of sustained-release dosage forms].
Wei, X H; Wu, J J; Liang, W Q
2001-09-01
To use the artificial neural network (ANN) in the Matlab 5.1 toolboxes to predict the formulations of sustained-release tablets. The solubilities of nine drugs and various HPMC:dextrin ratios for 63 tablet formulations were used as the ANN model input, and the in vitro cumulative release at six sampling times was used as the output. The ANN model was constructed by selecting the optimal number of iterations (25) and a model structure with one hidden layer of five nodes. The optimized ANN model was used to predict formulations based on desired target in vitro dissolution-time profiles. ANN-predicted profiles based on ANN-predicted formulations were closely similar to the target profiles. The ANN could be used for predicting the dissolution profiles of sustained-release dosage forms and for the design of optimal formulations.
The 2 × 2 Standpoints Model of Achievement Goals
Korn, Rachel M.; Elliot, Andrew J.
2016-01-01
In the present research, we proposed and tested a 2 × 2 standpoints model of achievement goals grounded in the development-demonstration and approach-avoidance distinctions. Three empirical studies are presented. Study 1 provided evidence supporting the structure and psychometric properties of a newly developed measure of the goals of the 2 × 2 standpoints model. Study 2 documented the predictive utility of these goal constructs for intrinsic motivation: development-approach and development-avoidance goals were positive predictors, and demonstration-avoidance goals were a negative predictor of intrinsic motivation. Study 3 documented the predictive utility of these goal constructs for performance attainment: Demonstration-approach goals were a positive predictor and demonstration-avoidance goals were a negative predictor of exam performance. The conceptual and empirical contributions of the present research were discussed within the broader context of existing achievement goal theory and research. PMID:27242641
Exploring Student, Family, and School Predictors of Self-Determination Using NLTS2 Data
ERIC Educational Resources Information Center
Shogren, Karrie A.; Garnier Villarreal, Mauricio; Dowsett, Chantelle; Little, Todd D.
2016-01-01
This study conducted secondary analysis of data from the National Longitudinal Transition Study-2 (NLTS2) to examine the degree to which student, family, and school constructs predicted self-determination outcomes. Multi-group structural equation modeling was used to examine predictive relationships between 5 student, 4 family, and 7 school…
Exploring Student, Family, and School Predictors of Self-Determination Using NLTS2 Data
ERIC Educational Resources Information Center
Shogren, Karrie A.; Garnier Villarreal, Mauricio; Dowsett, Chantelle; Little, Todd D.
2016-01-01
This study conducted secondary analysis of data from the National Longitudinal Transition Study-2 (NLTS2) to examine the degree to which student, family, and school constructs predicted self-determination outcomes. Multi-group structural equation modeling was used to examine predictive relationships between 5 student, 4 family, and 7 school…
ERIC Educational Resources Information Center
Kiel, Elizabeth J.; Buss, Kristin A.
2014-01-01
Two recent advances in the study of fearful temperament (behavioural inhibition) include the validation of dysregulated fear as a temperamental construct that more specifically predicts later social withdrawal and anxiety, and the use of conceptual and statistical models that place parenting as a mechanism of development from temperament to these…
O’Brien, Michael; Lyubchich, Vyacheslav; Roberts, Jason J.; Halpin, Patrick N.; Rice, Aaron N.; Bailey, Helen
2017-01-01
Offshore windfarms provide renewable energy, but activities during the construction phase can affect marine mammals. To understand how the construction of an offshore windfarm in the Maryland Wind Energy Area (WEA) off Maryland, USA, might impact harbour porpoises (Phocoena phocoena), it is essential to determine their poorly understood year-round distribution. Although habitat-based models can help predict the occurrence of species in areas with limited or no sampling, they require validation to determine the accuracy of the predictions. Incorporating more than 18 months of harbour porpoise detection data from passive acoustic monitoring, generalized auto-regressive moving average and generalized additive models were used to investigate harbour porpoise occurrence within and around the Maryland WEA in relation to temporal and environmental variables. Acoustic detection metrics were compared to habitat-based density estimates derived from aerial and boat-based sightings to validate the model predictions. Harbour porpoises occurred significantly more frequently during January to May, and foraged significantly more often in the evenings to early mornings at sites within and outside the Maryland WEA. Harbour porpoise occurrence peaked at sea surface temperatures of 5°C and chlorophyll a concentrations of 4.5 to 7.4 mg m-3. The acoustic detections were significantly correlated with the predicted densities, except at the most inshore site. This study provides insight into previously unknown fine-scale spatial and temporal patterns in distribution of harbour porpoises offshore of Maryland. The results can be used to help inform future monitoring and mitigate the impacts of windfarm construction and other human activities. PMID:28467455
Wingfield, Jessica E; O'Brien, Michael; Lyubchich, Vyacheslav; Roberts, Jason J; Halpin, Patrick N; Rice, Aaron N; Bailey, Helen
2017-01-01
Offshore windfarms provide renewable energy, but activities during the construction phase can affect marine mammals. To understand how the construction of an offshore windfarm in the Maryland Wind Energy Area (WEA) off Maryland, USA, might impact harbour porpoises (Phocoena phocoena), it is essential to determine their poorly understood year-round distribution. Although habitat-based models can help predict the occurrence of species in areas with limited or no sampling, they require validation to determine the accuracy of the predictions. Incorporating more than 18 months of harbour porpoise detection data from passive acoustic monitoring, generalized auto-regressive moving average and generalized additive models were used to investigate harbour porpoise occurrence within and around the Maryland WEA in relation to temporal and environmental variables. Acoustic detection metrics were compared to habitat-based density estimates derived from aerial and boat-based sightings to validate the model predictions. Harbour porpoises occurred significantly more frequently during January to May, and foraged significantly more often in the evenings to early mornings at sites within and outside the Maryland WEA. Harbour porpoise occurrence peaked at sea surface temperatures of 5°C and chlorophyll a concentrations of 4.5 to 7.4 mg m-3. The acoustic detections were significantly correlated with the predicted densities, except at the most inshore site. This study provides insight into previously unknown fine-scale spatial and temporal patterns in distribution of harbour porpoises offshore of Maryland. The results can be used to help inform future monitoring and mitigate the impacts of windfarm construction and other human activities.
NASA Astrophysics Data System (ADS)
Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.
2018-03-01
Finding software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction is required not only to state the existence of defects, but also to provide a prioritized list of which modules require more intensive testing, so that the allocation of test resources can be managed efficiently. Learning to rank is one approach that can provide defect module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using Chaotic Gaussian Particle Swarm Optimization achieve better accuracy on 5 data sets, tie on 5 data sets and perform worse on 1 data set. Thus, we conclude that applying Chaotic Gaussian Particle Swarm Optimization in the learning-to-rank approach can improve the accuracy of the defect module ranking in data sets that have high-dimensional features.
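The sketch below illustrates only the chaotic ingredient in general terms: a particle-swarm velocity update whose random factors come from a logistic chaotic map, with a small Gaussian perturbation of the global best. It is an assumption-laden toy on a placeholder fitness function, not the authors' learning-to-rank objective or exact update rules.

```python
# Toy chaotic Gaussian PSO step (illustrative only).
import numpy as np

rng = np.random.default_rng(6)
dim, n_particles = 5, 20
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, gbest = pos.copy(), pos[0].copy()

def fitness(x):                 # placeholder objective (lower is better)
    return np.sum(x**2, axis=-1)

chaos = rng.uniform(0.1, 0.9, (n_particles, dim))   # logistic-map state
w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(100):
    chaos = 4.0 * chaos * (1.0 - chaos)             # logistic map, r = 4
    vel = (w * vel
           + c1 * chaos * (pbest - pos)
           + c2 * (1 - chaos) * (gbest + rng.normal(scale=0.05, size=dim) - pos))
    pos = pos + vel
    improved = fitness(pos) < fitness(pbest)
    pbest[improved] = pos[improved]
    gbest = pbest[np.argmin(fitness(pbest))]
print("best fitness:", round(float(fitness(gbest)), 4))
```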
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster from the empirical distribution of the errors of all instances belonging to that cluster, and is propagated from the clusters to individual examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using the computed prediction limits as targets, and finally this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating the prediction interval. A new method for evaluating the performance of prediction-interval estimation is proposed as well.
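A condensed sketch of the clustering step follows: a small fuzzy c-means routine partitions the input space, per-cluster empirical error quantiles are computed, and the limits are propagated to each example through its membership grades. The data are synthetic, and the final step of regressing the limits onto inputs for out-of-sample use is omitted.

```python
# Fuzzy c-means clustering of inputs + per-cluster empirical error quantiles.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 2))                      # model inputs
errors = rng.normal(scale=1 + np.abs(X[:, 0]))     # residuals with input-dependent spread

def fuzzy_cmeans(X, c=3, m=2.0, iters=100):
    U = rng.dirichlet(np.ones(c), size=len(X))     # membership grades
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

U, _ = fuzzy_cmeans(X)
hard = U.argmax(axis=1)
lo_q = np.array([np.quantile(errors[hard == k], 0.05) for k in range(U.shape[1])])
hi_q = np.array([np.quantile(errors[hard == k], 0.95) for k in range(U.shape[1])])

lower, upper = U @ lo_q, U @ hi_q                  # propagate by membership grades
print("mean interval width:", round(float((upper - lower).mean()), 2))
```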
Del Rio-Chanona, Ehecatl A; Liu, Jiao; Wagner, Jonathan L; Zhang, Dongda; Meng, Yingying; Xue, Song; Shah, Nilay
2018-02-01
Biodiesel produced from microalgae has been extensively studied due to its potentially outstanding advantages over traditional transportation fuels. In order to facilitate its industrialization and improve the process profitability, it is vital to construct highly accurate models capable of predicting the complex behavior of the investigated biosystem for process optimization and control, which forms the goal of the current research. Three original contributions are described in this paper. Firstly, a dynamic model is constructed to simulate the complicated effect of light intensity, nutrient supply and light attenuation on both biomass growth and biolipid production. Secondly, chlorophyll fluorescence, an instantly measurable variable and indicator of photosynthetic activity, is embedded into the model to monitor and update model accuracy, especially for the purpose of future process optimal control, and its correlation with intracellular nitrogen content is quantified, which to the best of our knowledge has not been addressed before. Thirdly, a thorough experimental verification is conducted under different scenarios, including both continuous illumination and light/dark cycle conditions, to test the model's predictive capability, particularly for long-term operation, and it is concluded that the current model is characterized by a high level of predictive capability. Based on the model, the optimal light intensity for algal biomass growth and lipid synthesis is estimated. This work, therefore, paves the way for future process design and real-time optimization. © 2017 Wiley Periodicals, Inc.
Su, Guosheng; Christensen, Ole F.; Ostersen, Tage; Henryon, Mark; Lund, Mogens S.
2012-01-01
Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms or farm animals. However, non-additive genetic effects may have an important contribution to the total genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relationship matrices were constructed from genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study for the first time proposed a method to construct a dominance relationship matrix using SNP markers and demonstrated it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance and epistatic variation, and to assess the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: 1) a simple additive genetic model (MA), 2) a model including both additive and additive by additive epistatic genetic effects (MAE), 3) a model including both additive and dominance genetic effects (MAD), and 4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive by additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved the unbiasedness of genomic predictions. PMID:23028912
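The sketch below shows one way to build additive and dominance genomic relationship matrices from a 0/1/2-coded SNP matrix. The additive part follows the familiar VanRaden-type construction; the dominance coding is a standard heterozygote-centred parameterization from the genomic-BLUP literature and may differ in detail from the construction introduced in the paper.

```python
# Additive (G) and dominance (D) genomic relationship matrices from SNP genotypes.
import numpy as np

rng = np.random.default_rng(8)
n_ind, n_snp = 100, 500
M = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)   # genotypes coded 0/1/2

p = M.mean(axis=0) / 2.0          # allele frequencies
q = 1.0 - p

# Additive relationship matrix
Z = M - 2.0 * p
G = Z @ Z.T / np.sum(2.0 * p * q)

# Dominance relationship matrix (heterozygote-centred coding)
W = np.where(M == 1, 2.0 * p * q, np.where(M == 2, -2.0 * q**2, -2.0 * p**2))
D = W @ W.T / np.sum((2.0 * p * q) ** 2)

print(G.shape, D.shape, round(float(np.diag(G).mean()), 2), round(float(np.diag(D).mean()), 2))
```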
Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S
2015-11-13
The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as the Kovats retention index) of a solute on a chromatographic column. Commonly, multiple linear regression and artificial neural networks are used in QSRR development in gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure-based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that, given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study I) and adamantane derivatives (case study II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both case studies have yielded high (>0.9) values of the coefficient of determination (R(2)) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science. Copyright © 2015 Elsevier B.V. All rights reserved.
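A minimal GP-based QSRR sketch follows, assuming the third-party gplearn package; the descriptors and retention indices are synthetic, so the evolved expression and scores are purely illustrative.

```python
# Genetic-programming symbolic regression for a QSRR-style problem (toy data).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(9)
n = 120
X = rng.uniform(0, 1, size=(n, 4))        # four molecular descriptors (placeholders)
kri = 800 + 450 * X[:, 0] + 200 * X[:, 1] * X[:, 2] + rng.normal(scale=10, size=n)

gp = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X[:90], kri[:90])
print(gp._program)                         # the evolved closed-form expression
print("test R^2:", round(gp.score(X[90:], kri[90:]), 3))
```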
Elsaadany, Mostafa; Yan, Karen Chang; Yildirim-Ayan, Eda
2017-06-01
Successful tissue engineering and regenerative therapy necessitate extensive knowledge of the mechanical milieu in engineered tissues and the resident cells. In this study, we have merged two powerful analysis tools, namely finite element analysis and stochastic analysis, to understand the mechanical strain within the tissue scaffold and residing cells and to predict the cell viability upon applying mechanical strains. A continuum-based multi-length scale finite element model (FEM) was created to simulate physiologically relevant equiaxial strain exposure on a cell-embedded tissue scaffold and to calculate the strain transferred to the tissue scaffold (macro-scale) and residing cells (micro-scale) upon various equiaxial strains. The data from the FEM were used to predict cell viability under various equiaxial strain magnitudes using stochastic damage criterion analysis. Model validation was conducted by mechanically straining cardiomyocyte-encapsulated collagen constructs using a custom-built mechanical loading platform (EQUicycler). The FEM quantified the strain gradients over the radial and longitudinal directions of the scaffolds and the cells residing in different areas of interest. With the use of the experimental viability data, the stochastic damage criterion, and the average cellular strains obtained from the multi-length scale models, cellular viability was predicted and successfully validated. This methodology can provide a valuable tool for characterizing the mechanical stimulation of bioreactors used in tissue engineering applications, quantifying mechanical strain and predicting variations in cellular viability due to applied mechanical strain.
Hydrogen-bond coordination in organic crystal structures: statistics, predictions and applications.
Galek, Peter T A; Chisholm, James A; Pidcock, Elna; Wood, Peter A
2014-02-01
Statistical models to predict the number of hydrogen bonds that might be formed by any donor or acceptor atom in a crystal structure have been derived using organic structures in the Cambridge Structural Database. This hydrogen-bond coordination behaviour has been uniquely defined for more than 70 unique atom types, and has led to the development of a methodology to construct hypothetical hydrogen-bond arrangements. Comparing the constructed hydrogen-bond arrangements with known crystal structures shows promise in the assessment of structural stability, and some initial examples of industrially relevant polymorphs, co-crystals and hydrates are described.
Zheng, Jia; Goodyear, Laurie J.
2016-01-01
The development of animal models with construct, face, and predictive validity to accurately model human depression has been a major challenge. One proposed rodent model is the 5 d repeated forced swim stress (5d-RFSS) paradigm, which progressively increases floating during individual swim sessions. The onset and persistence of this floating behavior has been anthropomorphically characterized as a measure of depression. This interpretation has been under debate because a progressive increase in floating over time may reflect an adaptive learned behavioral response promoting survival, and not depression (Molendijk and de Kloet, 2015). To assess construct and face validity, we applied 5d-RFSS to C57BL/6J and BALB/cJ mice, two mouse strains commonly used in neuropsychiatric research, and measured a combination of emotional, homeostatic, and psychomotor symptoms indicative of a depressive-like state. We also compared the efficacy of 5d-RFSS and chronic social defeat stress (CSDS), a validated depression model, to induce a depressive-like state in C57BL/6J mice. In both strains, 5d-RFSS progressively increased floating behavior that persisted for at least 4 weeks. 5d-RFSS did not alter sucrose preference, body weight, appetite, locomotor activity, anxiety-like behavior, or immobility behavior during a tail-suspension test compared with nonstressed controls. In contrast, CSDS altered several of these parameters, suggesting a depressive-like state. Finally, predictive validity was assessed using voluntary wheel running (VWR), a known antidepressant intervention. Four weeks of VWR after 5d-RFSS normalized floating behavior toward nonstressed levels. These observations suggest that 5d-RFSS has no construct or face validity but might have predictive validity to model human depression. PMID:28058270
Mul, Joram D; Zheng, Jia; Goodyear, Laurie J
2016-01-01
The development of animal models with construct, face, and predictive validity to accurately model human depression has been a major challenge. One proposed rodent model is the 5 d repeated forced swim stress (5d-RFSS) paradigm, which progressively increases floating during individual swim sessions. The onset and persistence of this floating behavior has been anthropomorphically characterized as a measure of depression. This interpretation has been under debate because a progressive increase in floating over time may reflect an adaptive learned behavioral response promoting survival, and not depression (Molendijk and de Kloet, 2015). To assess construct and face validity, we applied 5d-RFSS to C57BL/6J and BALB/cJ mice, two mouse strains commonly used in neuropsychiatric research, and measured a combination of emotional, homeostatic, and psychomotor symptoms indicative of a depressive-like state. We also compared the efficacy of 5d-RFSS and chronic social defeat stress (CSDS), a validated depression model, to induce a depressive-like state in C57BL/6J mice. In both strains, 5d-RFSS progressively increased floating behavior that persisted for at least 4 weeks. 5d-RFSS did not alter sucrose preference, body weight, appetite, locomotor activity, anxiety-like behavior, or immobility behavior during a tail-suspension test compared with nonstressed controls. In contrast, CSDS altered several of these parameters, suggesting a depressive-like state. Finally, predictive validity was assessed using voluntary wheel running (VWR), a known antidepressant intervention. Four weeks of VWR after 5d-RFSS normalized floating behavior toward nonstressed levels. These observations suggest that 5d-RFSS has no construct or face validity but might have predictive validity to model human depression.
NASA Astrophysics Data System (ADS)
Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya
2013-03-01
This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters namely ends per inch, picks per inch, warp count and weft count have been used as inputs for artificial neural network (ANN) and regression models. Out of the four regression models tried, interaction model showed very good prediction performance with a meager mean absolute error of 2.017 %. However, ANN models demonstrated superiority over the regression models both in terms of correlation coefficient and mean absolute error. The ANN model with 10 nodes in the single hidden layer showed very good correlation coefficient of 0.982 and 0.929 and mean absolute error of only 0.923 and 2.043 % for training and testing data respectively.
Kane, Michael J; Meier, Matt E; Smeekens, Bridget A; Gross, Georgina M; Chun, Charlotte A; Silvia, Paul J; Kwapil, Thomas R
2016-08-01
A large correlational study took a latent-variable approach to the generality of executive control by testing the individual-differences structure of executive-attention capabilities and assessing their prediction of schizotypy, a multidimensional construct (with negative, positive, disorganized, and paranoid factors) conveying risk for schizophrenia. Although schizophrenia is convincingly linked to executive deficits, the schizotypy literature is equivocal. Subjects completed tasks of working memory capacity (WMC), attention restraint (inhibiting prepotent responses), and attention constraint (focusing visual attention amid distractors), the latter 2 in an effort to fractionate the "inhibition" construct. We also assessed mind-wandering propensity (via in-task thought probes) and coefficient of variation in response times (RT CoV) from several tasks as more novel indices of executive attention. WMC, attention restraint, attention constraint, mind wandering, and RT CoV were correlated but separable constructs, indicating some distinctions among "attention control" abilities; WMC correlated more strongly with attentional restraint than constraint, and mind wandering correlated more strongly with attentional restraint, attentional constraint, and RT CoV than with WMC. Across structural models, no executive construct predicted negative schizotypy and only mind wandering and RT CoV consistently (but modestly) predicted positive, disorganized, and paranoid schizotypy; stalwart executive constructs in the schizophrenia literature (WMC and attention restraint) showed little to no predictive power, beyond restraint's prediction of paranoia. Either executive deficits are consequences rather than risk factors for schizophrenia, or executive failures barely precede or precipitate diagnosable schizophrenia symptoms. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Shepherd, Micah; Leishman, Timothy W.; Utami, Sentagi
2005-09-01
Brigham Young University has recently constructed a planetarium with a 38-ft.-diameter dome. The facility also serves as a classroom. Since planetariums typically have poor acoustics due to their domed ceiling structures, acoustical recommendations were requested before its construction. The recommendations were made in an attempt to create an acceptable listening environment for lectures and other listening events. They were based in part on computer models and auralizations intended to predict the effectiveness of several acoustical treatments on the outer walls and on the dome itself. The recommendations were accepted and the planetarium was completed accordingly. A series of acoustical measurements was subsequently made in the room and the resulting acoustical parameters were mapped over the floor plan. This paper discusses these results and compares them with the predictions of the computer models.
Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L
2018-06-19
This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic or systemic alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of the data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R2. Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R2 = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy, but predicting the patients that will respond to chemotherapy is currently not possible. • Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging. • A prediction model was constructed that predicts volumetric response with 20% error, suggesting that quantitative imaging holds promise to better select patients for specific treatments.
Martinez-Fiestas, Myriam; Rodríguez-Garzón, Ignacio; Delgado-Padial, Antonio; Lucas-Ruiz, Valeriano
2017-09-01
This article presents a cross-cultural study on perceived risk in the construction industry. Worker samples from three different countries were studied: Spain, Peru and Nicaragua. The main goal was to explain how construction workers perceive their occupational hazard and to analyze how this is related to their national culture. The model used to measure perceived risk was the psychometric paradigm. The results show three very similar profiles, indicating that risk perception is independent of nationality. A cultural analysis was conducted using the Hofstede model. The results of this analysis and the relation to perceived risk showed that risk perception in construction is independent of national culture. Finally, a multiple lineal regression analysis was conducted to determine what qualitative attributes could predict the global quantitative size of risk perception. All of the findings have important implications regarding the management of safety in the workplace.
Potential Predictability and Prediction Skill for Southern Peru Summertime Rainfall
NASA Astrophysics Data System (ADS)
WU, S.; Notaro, M.; Vavrus, S. J.; Mortensen, E.; Block, P. J.; Montgomery, R. J.; De Pierola, J. N.; Sanchez, C.
2016-12-01
The central Andes receive over 50% of annual climatological rainfall during the short period of January-March. This summertime rainfall exhibits strong interannual and decadal variability, including severe drought events that incur devastating societal impacts and cause agricultural communities and mining facilities to compete for limited water resources. Improved seasonal prediction skill for summertime rainfall would aid in water resource planning and allocation across water-limited southern Peru. While various underlying mechanisms have been proposed by past studies for the drivers of interannual variability in summertime rainfall across southern Peru, such as the El Niño-Southern Oscillation (ENSO), the Madden-Julian Oscillation (MJO), and extratropical forcings, operational forecasts continue to be largely based on rudimentary ENSO-based indices, such as NINO3.4, justifying further exploration of predictive skill. In order to bridge this gap between the understanding of driving mechanisms and the operational forecast, we performed systematic studies of the predictability and prediction skill of southern Peru summertime rainfall by constructing statistical forecast models using the best available weather station and reanalysis datasets. First, by assuming the first two empirical orthogonal functions (EOFs) of summertime rainfall are predictable, the potential predictability skill was evaluated for southern Peru. Then, we constructed a simple regression model, based on the time series of tropical Pacific sea-surface temperatures (SSTs), and a more advanced Linear Inverse Model (LIM), based on the EOFs of tropical ocean SSTs and large-scale atmosphere variables from reanalysis. Our results show that the LIM consistently outperforms the more rudimentary regression models on the forecast skill of the domain-averaged precipitation index and individual station indices. The improvement in forecast correlation skill ranges from 10% to over 200% for different stations. Further analysis shows that this advantage of the LIM is likely to arise from its representation of local zonal winds and the position of the Intertropical Convergence Zone (ITCZ).
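As a toy illustration of the simpler statistical baseline (not the LIM itself), the sketch below extracts leading EOFs of a synthetic predictor field with PCA, regresses a rainfall index onto them, and evaluates hindcast correlation on held-out years; the fields, index and skill are placeholders.

```python
# EOF (PCA) predictors + linear regression hindcast of a rainfall index (toy data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n_years, n_grid = 35, 400
sst = rng.normal(size=(n_years, n_grid))          # SST anomaly fields (years x grid points)
rain = 0.8 * sst[:, :20].mean(axis=1) + rng.normal(scale=0.5, size=n_years)  # rainfall index

pcs = PCA(n_components=3).fit_transform(sst)          # leading EOF time series
model = LinearRegression().fit(pcs[:-5], rain[:-5])   # train on all but the last 5 years
skill = np.corrcoef(model.predict(pcs[-5:]), rain[-5:])[0, 1]
print("hindcast correlation on held-out years:", round(float(skill), 2))
```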
A review and update of the Virginia Department of Transportation cash flow forecasting model.
DOT National Transportation Integrated Search
1996-01-01
This report details the research done to review and update components of the VDOT cash flow forecasting model. Specifically, the study updated the monthly factors submodel used to predict payments on construction contracts. For the other submodel rev...
REVIEWS OF TOPICAL PROBLEMS: Physics of pulsar magnetospheres
NASA Astrophysics Data System (ADS)
Beskin, Vasilii S.; Gurevich, Aleksandr V.; Istomin, Yakov N.
1986-10-01
A self-consistent model of the magnetosphere of a pulsar is constructed. This model is based on a successive solution of the equations describing global properties of the magnetosphere and on a comparison of the basic predictions of the developed theory and observational data.
Comparing species distribution models constructed with different subsets of environmental predictors
Bucklin, David N.; Basille, Mathieu; Benscoter, Allison M.; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.; Speroterra, Carolina; Watling, James I.
2014-01-01
Our results indicate that additional predictors have relatively minor effects on the accuracy of climate-based species distribution models and minor to moderate effects on spatial predictions. We suggest that implementing species distribution models with only climate predictors may provide an effective and efficient approach for initial assessments of environmental suitability.
Outward Bound Outcome Model Validation and Multilevel Modeling
ERIC Educational Resources Information Center
Luo, Yuan-Chun
2011-01-01
This study was intended to measure construct validity for the Outward Bound Outcomes Instrument (OBOI) and to predict outcome achievement from individual characteristics and course attributes using multilevel modeling. A sample of 2,340 participants was collected by Outward Bound USA between May and September 2009 using the OBOI. Two phases of…
Modeling carbon and nitrogen biogeochemistry in forest ecosystems
Changsheng Li; Carl Trettin; Ge Sun; Steve McNulty; Klaus Butterbach-Bahl
2005-01-01
A forest biogeochemical model, Forest-DNDC, was developed to quantify carbon sequestration in and trace gas emissions from forest ecosystems. Forest-DNDC was constructed by integrating two existing models, PnET and DNDC, with several new features including nitrification, a forest litter layer, and soil freezing and thawing. PnET is a forest physiological model predicting...
Modeling as an Anchoring Scientific Practice for Explaining Friction Phenomena
ERIC Educational Resources Information Center
Neilson, Drew; Campbell, Todd
2017-01-01
Through examining the day-to-day work of scientists, researchers in science studies have revealed how models are a central sense-making practice of scientists as they construct and critique explanations about how the universe works. Additionally, they allow predictions to be made using the tenets of the model. Given this, alongside research…
FOOTPRINT: A New Tool to Predict the Potential Impact of Biofuels on BTEX Plumes
Ahsanuzzaman et al. (2008) used the Deeb et al. (2002) conceptual model to construct a simple screening model to estimate the area of a plume of benzene produced from a release of gasoline containing ethanol. The screening model estimates the plume area, or footprint of the plum...
The construction of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms to provide access to predictive models, we have attempted to discriminate the influence of the quality versus quantity of data available ...
Impact damage of composite plates
NASA Technical Reports Server (NTRS)
Lal, K. M.; Goglia, G. L.
1983-01-01
A simple model to study low-velocity transverse impact of thin plates made of fiber-reinforced composite material, in particular T300/5208 graphite-epoxy, was discussed. This model predicts the coefficient of restitution, which is a measure of the energy absorbed by the target during an impact event. The model is constructed on the assumption that the plate is inextensible in the fiber direction and that the material is incompressible in the z-direction. Such a plate essentially deforms by shear; hence this model neglects bending deformations of the plate. The coefficient of restitution is predicted to increase with large interlaminar shear strength and low transverse shear modulus of the laminate. Predictions are compared with the test results of impacted circular and rectangular clamped plates. Experimentally measured values of the coefficient of restitution are found to agree with the predicted values within a reasonable error.
Medium-term electric power demand forecasting based on economic-electricity transmission model
NASA Astrophysics Data System (ADS)
Li, Wenfeng; Bao, Fangmin; Bai, Hongkun; Liu, Wei; Liu, Yongmin; Mao, Yubin; Wang, Jiangbo; Liu, Junhui
2018-06-01
Electric demand forecasting is basic work for ensuring the safe operation of a power system. Based on the theories of experimental economics and econometrics, this paper introduces the Prognoz Platform 7.2 intelligent adaptive modeling platform and constructs an economic-electricity transmission model (EETM) that considers economic development scenarios and the dynamic adjustment of industrial structure to predict a region's annual electricity demand, achieving accurate prediction of the whole society's electricity consumption. First, based on the theories of experimental economics and econometrics, the paper identifies the economic indicator variables that most strongly drive the growth of electricity consumption and builds an annual regional macroeconomic forecast model that takes into account the dynamic adjustment of industrial structure. Second, it puts forward an economic-electricity directed conduction theory and constructs an economic-power transfer function to realize group forecasts of electricity consumption for the primary industry plus rural residents, urban residents, the secondary industry, and the tertiary industry. By comparison with the actual economic and electricity values for Henan province in 2016, the validity of the EETM is demonstrated, and the province's electricity consumption for 2017 to 2018 is finally predicted.
Exploring Human Diseases and Biological Mechanisms by Protein Structure Prediction and Modeling.
Wang, Juexin; Luttrell, Joseph; Zhang, Ning; Khan, Saad; Shi, NianQing; Wang, Michael X; Kang, Jing-Qiong; Wang, Zheng; Xu, Dong
2016-01-01
Protein structure prediction and modeling provide a tool for understanding protein functions by computationally constructing protein structures from amino acid sequences and analyzing them. With help from protein prediction tools and web servers, users can obtain the three-dimensional protein structure models and gain knowledge of functions from the proteins. In this chapter, we will provide several examples of such studies. As an example, structure modeling methods were used to investigate the relation between mutation-caused misfolding of protein and human diseases including epilepsy and leukemia. Protein structure prediction and modeling were also applied in nucleotide-gated channels and their interaction interfaces to investigate their roles in brain and heart cells. In molecular mechanism studies of plants, rice salinity tolerance mechanism was studied via structure modeling on crucial proteins identified by systems biology analysis; trait-associated protein-protein interactions were modeled, which sheds some light on the roles of mutations in soybean oil/protein content. In the age of precision medicine, we believe protein structure prediction and modeling will play more and more important roles in investigating biomedical mechanism of diseases and drug design.
IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.
Huang, Lihan
2017-12-04
The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated with user-friendly graphical user interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
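The core idea of a one-step global fit is that all isothermal curves are fitted simultaneously, with the primary-model rate tied to temperature by a secondary model. The sketch below illustrates that idea under stated assumptions (a log-linear primary model and a Ratkowsky square-root secondary model, with synthetic data); it is not the USDA IPMP implementation.

    # One-step global fit sketch: shared parameters fitted across all isothermal curves
    import numpy as np
    from scipy.optimize import least_squares

    def primary(t, y0, mu):                 # exponential-phase primary model (log counts)
        return y0 + mu * t

    def mu_ratkowsky(T, b, Tmin):           # secondary model: sqrt(mu) = b * (T - Tmin)
        return (b * (T - Tmin)) ** 2

    # observations from three isothermal curves (synthetic placeholders)
    temps = np.repeat([10.0, 15.0, 20.0], 5)
    times = np.tile(np.linspace(0, 24, 5), 3)
    logN = 2.0 + mu_ratkowsky(temps, 0.02, 2.0) * times \
           + np.random.default_rng(2).normal(0, 0.05, 15)

    def residuals(p):
        y0, b, Tmin = p
        return primary(times, y0, mu_ratkowsky(temps, b, Tmin)) - logN

    fit = least_squares(residuals, x0=[1.0, 0.01, 0.0])   # minimize the global error
    print("estimated y0, b, Tmin:", fit.x)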
Xiaoyong, Wu; Xuzhao, Li; Deliang, Yu; Pengfei, Yu; Zhenning, Hang; Bin, Bai; zhengyan, Li; Fangning, Pang; Shiqi, Wang; Qingchuan, Zhao
2017-01-01
Identifying patients at high risk of tube feeding intolerance (TFI) after gastric cancer surgery may prevent the occurrence of TFI; however, a predictive model is lacking. We therefore analyzed the incidence of TFI and its associated risk factors after gastric cancer surgery in 225 gastric cancer patients divided into without-TFI (n = 114) and with-TFI (n = 111) groups. A total of 49.3% of patients experienced TFI after gastric cancer surgery. Multivariate analysis identified a history of functional constipation (FC), a preoperative American Society of Anesthesiologists (ASA) score of III, a high pain score at 6 hours postoperation, and a high white blood cell (WBC) count on the first day after surgery as independent risk factors for TFI. The area under the curve (AUC) was 0.756, with an optimal cut-off value of 0.5410. In order to identify patients at high risk of TFI after gastric cancer surgery, we constructed a predictive nomogram model based on the selected independent risk factors to indicate the probability of developing TFI. When the nomogram was used for screening, a predicted probability > 0.5410 indicated a high-risk patient with a 70.1% likelihood of developing TFI. These high-risk individuals should take measures to prevent TFI before feeding with enteral nutrition. PMID:29245951
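A multivariable logistic model of this kind, its AUC and an optimal probability cut-off can be sketched as below; the four predictors and the patient data are placeholders, not the study's data, and the cut-off is chosen here with the Youden index as one common (assumed) convention.

    # Sketch: logistic risk model, ROC AUC, and an optimal probability cut-off
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(3)
    X = rng.normal(size=(225, 4))     # e.g. FC history, ASA III, 6-h pain score, day-1 WBC (illustrative)
    y = (X @ np.array([0.8, 0.6, 0.5, 0.7]) + rng.normal(size=225)) > 0

    clf = LogisticRegression().fit(X, y)
    prob = clf.predict_proba(X)[:, 1]
    fpr, tpr, thr = roc_curve(y, prob)
    cutoff = thr[np.argmax(tpr - fpr)]          # Youden index picks the optimal cut-off
    print(f"AUC = {roc_auc_score(y, prob):.3f}, optimal probability cut-off = {cutoff:.4f}")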
Herman, Christine; Karolak, Wojtek; Yip, Alexandra M; Buth, Karen J; Hassan, Ansar; Légaré, Jean-Francois
2009-10-01
We sought to develop a predictive model based exclusively on preoperative factors to identify patients at risk for PrlICULOS following coronary artery bypass grafting (CABG). Retrospective analysis was performed on patients undergoing isolated CABG at a single center between June 1998 and December 2002. PrlICULOS was defined as initial admission to ICU exceeding 72 h. A parsimonious risk-predictive model was constructed on the basis of preoperative factors, with subsequent internal validation. Of 3483 patients undergoing isolated CABG between June 1998 and December 2002, 411 (11.8%) experienced PrlICULOS. Overall in-hospital mortality was higher among these patients (14.4% vs. 1.2%, P
Rane, Smita; Prabhakar, Bala
2013-07-01
The aim of this study was to investigate the combined influence of 3 independent variables in the preparation of paclitaxel containing pH-sensitive liposomes. A 3 factor, 3 levels Box-Behnken design was used to derive a second order polynomial equation and construct contour plots to predict responses. The independent variables selected were molar ratio phosphatidylcholine:diolylphosphatidylethanolamine (X1), molar concentration of cholesterylhemisuccinate (X2), and amount of drug (X3). Fifteen batches were prepared by thin film hydration method and evaluated for percent drug entrapment, vesicle size, and pH sensitivity. The transformed values of the independent variables and the percent drug entrapment were subjected to multiple regression to establish full model second order polynomial equation. F was calculated to confirm the omission of insignificant terms from the full model equation to derive a reduced model polynomial equation to predict the dependent variables. Contour plots were constructed to show the effects of X1, X2, and X3 on the percent drug entrapment. A model was validated for accurate prediction of the percent drug entrapment by performing checkpoint analysis. The computer optimization process and contour plots predicted the levels of independent variables X1, X2, and X3 (0.99, -0.06, 0, respectively), for maximized response of percent drug entrapment with constraints on vesicle size and pH sensitivity.
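The second-order polynomial used with a Box-Behnken design (linear, interaction and squared terms of X1, X2, X3 regressed on percent drug entrapment) can be fitted as in the sketch below; the coded design points and responses are illustrative placeholders, not the study's fifteen batches.

    # Sketch: fit a second-order (quadratic) response-surface model to a 3-factor Box-Behnken design
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    # coded factor levels (-1, 0, +1) for the 15-run design
    X = np.array([[-1,-1,0],[1,-1,0],[-1,1,0],[1,1,0],
                  [-1,0,-1],[1,0,-1],[-1,0,1],[1,0,1],
                  [0,-1,-1],[0,1,-1],[0,-1,1],[0,1,1],
                  [0,0,0],[0,0,0],[0,0,0]], dtype=float)
    y = np.array([55,62,58,70,50,65,52,68,57,60,54,66,72,71,73], dtype=float)  # % entrapment (illustrative)

    quad = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(quad.fit_transform(X), y)
    print(dict(zip(quad.get_feature_names_out(["X1", "X2", "X3"]), model.coef_.round(2))))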
Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann
2003-01-01
Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.
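A comparison of this kind can be sketched as follows for the decision-tree half of the study (the genetic-programming models are omitted here); the curriculum scores and pass/fail labels are synthetic stand-ins for the 37-driver dataset.

    # Sketch: decision tree vs. logistic regression on a pass/fail classification task
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(4)
    scores = rng.uniform(50, 100, size=(37, 6))               # curriculum scores (illustrative)
    passed = (scores.mean(axis=1) + rng.normal(0, 5, 37)) > 75

    for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
                      ("logistic regression", LogisticRegression(max_iter=1000))]:
        acc = cross_val_score(clf, scores, passed, cv=5).mean()
        print(f"{name}: cross-validated accuracy = {acc:.2f}")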
Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model.
Winslow, Brent D; Nguyen, Nam; Venta, Kimberly E
2017-01-01
Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.
Updating the Standard Spatial Observer for Contrast Detection
NASA Technical Reports Server (NTRS)
Ahumada, Albert J.; Watson, Andrew B.
2011-01-01
Watson and Ahumada (2005) constructed a Standard Spatial Observer (SSO) model for foveal luminance contrast signal detection based on the ModelFest data (Watson, 1999). Here we propose two changes to the model: dropping the oblique effect from the CSF and using the cone density data of Curcio et al. (1990) to estimate the variation of sensitivity with eccentricity. Dropping the complex images, and using medians to exclude outlier data points, the SSO model now accounts for essentially all the predictable variance in the data, with an RMS prediction error of only 0.67 dB.
Predicting community composition from pairwise interactions
NASA Astrophysics Data System (ADS)
Friedman, Jonathan; Higgins, Logan; Gore, Jeff
The ability to predict the structure of complex, multispecies communities is crucial for understanding the impact of species extinction and invasion on natural communities, as well as for engineering novel, synthetic communities. Communities are often modeled using phenomenological models, such as the classical generalized Lotka-Volterra (gLV) model. While a lot of our intuition comes from such models, their predictive power has rarely been tested experimentally. To directly assess the predictive power of this approach, we constructed synthetic communities comprised of up to 8 soil bacteria. We measured the outcome of competition between all species pairs, and used these measurements to predict the composition of communities composed of more than 2 species. The pairwise competitions resulted in a diverse set of outcomes, including coexistence, exclusion, and bistability, and displayed evidence for both interference and facilitation. Most pair outcomes could be captured by the gLV framework, and the composition of multispecies communities could be predicted for communities composed solely of such pairs. Our results demonstrate the predictive ability and utility of simple phenomenology, which enables accurate predictions in the absence of mechanistic details.
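The generalized Lotka-Volterra framework mentioned above has the standard form dx_i/dt = r_i x_i (1 + Σ_j a_ij x_j); the sketch below integrates it numerically with illustrative growth rates and an illustrative interaction matrix, not the measured pairwise parameters from the experiments.

    # Minimal gLV sketch: integrate dx_i/dt = r_i * x_i * (1 + sum_j a_ij * x_j)
    import numpy as np
    from scipy.integrate import solve_ivp

    r = np.array([1.0, 0.8, 1.2])                 # intrinsic growth rates (illustrative)
    A = np.array([[-1.0, -0.4, -0.2],             # a_ii < 0: self-limitation
                  [-0.5, -1.0, -0.3],
                  [-0.3, -0.6, -1.0]])

    def glv(t, x):
        return r * x * (1.0 + A @ x)

    sol = solve_ivp(glv, t_span=(0, 50), y0=[0.05, 0.05, 0.05])
    print("predicted equilibrium composition:", sol.y[:, -1].round(3))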
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Ruyck, Kim, E-mail: kim.deruyck@UGent.be; Sabbe, Nick; Oberije, Cary
2011-10-01
Purpose: To construct a model for the prediction of acute esophagitis in lung cancer patients receiving chemoradiotherapy by combining clinical data, treatment parameters, and genotyping profile. Patients and Methods: Data were available for 273 lung cancer patients treated with curative chemoradiotherapy. Clinical data included gender, age, World Health Organization performance score, nicotine use, diabetes, chronic disease, tumor type, tumor stage, lymph node stage, tumor location, and medical center. Treatment parameters included chemotherapy, surgery, radiotherapy technique, tumor dose, mean fractionation size, mean and maximal esophageal dose, and overall treatment time. A total of 332 genetic polymorphisms were considered in 112 candidate genes. The predicting model was achieved by lasso logistic regression for predictor selection, followed by classic logistic regression for unbiased estimation of the coefficients. Performance of the model was expressed as the area under the curve of the receiver operating characteristic and as the false-negative rate in the optimal point on the receiver operating characteristic curve. Results: A total of 110 patients (40%) developed acute esophagitis Grade ≥2 (Common Terminology Criteria for Adverse Events v3.0). The final model contained chemotherapy treatment, lymph node stage, mean esophageal dose, gender, overall treatment time, radiotherapy technique, rs2302535 (EGFR), rs16930129 (ENG), rs1131877 (TRAF3), and rs2230528 (ITGB2). The area under the curve was 0.87, and the false-negative rate was 16%. Conclusion: Prediction of acute esophagitis can be improved by combining clinical, treatment, and genetic factors. A multicomponent prediction model for acute esophagitis with a sensitivity of 84% was constructed with two clinical parameters, four treatment parameters, and four genetic polymorphisms.
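The two-stage procedure described here (L1-penalised logistic regression for predictor selection, then an unpenalised refit scored by ROC AUC) can be sketched as below; the predictor matrix, outcomes and regularisation strength are synthetic placeholders, not the study's data.

    # Sketch: lasso selection followed by classic logistic regression and AUC
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(273, 50))                 # clinical, treatment and SNP predictors (illustrative)
    y = (X[:, :4] @ np.array([1.0, 0.8, 0.6, 0.5]) + rng.normal(size=273)) > 0

    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
    selected = np.flatnonzero(lasso.coef_[0])      # predictors with non-zero lasso coefficients

    final = LogisticRegression(max_iter=1000).fit(X[:, selected], y)
    auc = roc_auc_score(y, final.predict_proba(X[:, selected])[:, 1])
    print(f"{selected.size} predictors retained, apparent AUC = {auc:.2f}")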
CORECLUST: identification of the conserved CRM grammar together with prediction of gene regulation.
Nikulova, Anna A; Favorov, Alexander V; Sutormin, Roman A; Makeev, Vsevolod J; Mironov, Andrey A
2012-07-01
Identification of transcriptional regulatory regions and tracing their internal organization are important for understanding the eukaryotic cell machinery. Cis-regulatory modules (CRMs) of higher eukaryotes are believed to possess a regulatory 'grammar', or preferred arrangement of binding sites, that is crucial for proper regulation and thus tends to be evolutionarily conserved. Here, we present a method CORECLUST (COnservative REgulatory CLUster STructure) that predicts CRMs based on a set of positional weight matrices. Given regulatory regions of orthologous and/or co-regulated genes, CORECLUST constructs a CRM model by revealing the conserved rules that describe the relative location of binding sites. The constructed model may be consequently used for the genome-wide prediction of similar CRMs, and thus detection of co-regulated genes, and for the investigation of the regulatory grammar of the system. Compared with related methods, CORECLUST shows better performance at identification of CRMs conferring muscle-specific gene expression in vertebrates and early-developmental CRMs in Drosophila.
Early warning model based on correlated networks in global crude oil markets
NASA Astrophysics Data System (ADS)
Yu, Jia-Wei; Xie, Wen-Jie; Jiang, Zhi-Qiang
2018-01-01
Applying network tools to predicting and warning of systemic risks provides a novel avenue for managing risk in financial markets. Here, we construct a series of global crude oil correlated networks based on the historical prices of 57 crude oils covering the period from 1993 to 2012. Two systemic risk indicators are constructed based on the density and modularity of the correlated networks. The local maximums of the risk indicators are found to have the ability to predict the trends of oil prices. In our sample periods, the indicator based on the network density sends five signals and the indicator based on the modularity index sends four signals. The four signals sent by both indicators are able to warn of drops in future oil prices, and the signal sent only by the network density indicator is followed by a huge rise in oil prices. Our results deepen the application of network measures in building early warning models of systemic risks and can be applied to predict the trends of future prices in financial markets.
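The two indicators can be illustrated as below: build a correlation network over a rolling window of price series, then track its edge density and the modularity of a detected community partition. The price series, correlation threshold and window length are placeholders, not the paper's construction.

    # Sketch: correlation-network density and modularity over a rolling window
    import numpy as np
    import networkx as nx
    from networkx.algorithms import community

    rng = np.random.default_rng(6)
    prices = rng.normal(size=(250, 57)).cumsum(axis=0)            # 57 synthetic price series
    returns = np.diff(np.log(np.abs(prices) + 10), axis=0)

    corr = np.corrcoef(returns[-100:].T)                          # most recent 100-day window
    G = nx.Graph((i, j) for i in range(57) for j in range(i + 1, 57) if corr[i, j] > 0.1)
    G.add_nodes_from(range(57))

    comms = community.greedy_modularity_communities(G)
    print(f"density = {nx.density(G):.3f}, modularity = {community.modularity(G, comms):.3f}")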
Circulating tumour DNA methylation markers for diagnosis and prognosis of hepatocellular carcinoma
NASA Astrophysics Data System (ADS)
Xu, Rui-Hua; Wei, Wei; Krawczyk, Michal; Wang, Wenqiu; Luo, Huiyan; Flagg, Ken; Yi, Shaohua; Shi, William; Quan, Qingli; Li, Kang; Zheng, Lianghong; Zhang, Heng; Caughey, Bennett A.; Zhao, Qi; Hou, Jiayi; Zhang, Runze; Xu, Yanxin; Cai, Huimin; Li, Gen; Hou, Rui; Zhong, Zheng; Lin, Danni; Fu, Xin; Zhu, Jie; Duan, Yaou; Yu, Meixing; Ying, Binwu; Zhang, Wengeng; Wang, Juan; Zhang, Edward; Zhang, Charlotte; Li, Oulan; Guo, Rongping; Carter, Hannah; Zhu, Jian-Kang; Hao, Xiaoke; Zhang, Kang
2017-11-01
An effective blood-based method for the diagnosis and prognosis of hepatocellular carcinoma (HCC) has not yet been developed. Circulating tumour DNA (ctDNA) carrying cancer-specific genetic and epigenetic aberrations may enable a noninvasive `liquid biopsy' for diagnosis and monitoring of cancer. Here, we identified an HCC-specific methylation marker panel by comparing HCC tissue and normal blood leukocytes and showed that methylation profiles of HCC tumour DNA and matched plasma ctDNA are highly correlated. Using cfDNA samples from a large cohort of 1,098 HCC patients and 835 normal controls, we constructed a diagnostic prediction model that showed high diagnostic specificity and sensitivity (P < 0.001) and was highly correlated with tumour burden, treatment response, and stage. Additionally, we constructed a prognostic prediction model that effectively predicted prognosis and survival (P < 0.001). Together, these findings demonstrate in a large clinical cohort the utility of ctDNA methylation markers in the diagnosis, surveillance, and prognosis of HCC.
Prevention through Design Adoption Readiness Model (PtD ARM): An integrated conceptual model.
Weidman, Justin; Dickerson, Deborah E; Koebel, Charles T
2015-01-01
Prevention through Design (PtD), eliminating hazards at the design-stage of tools and systems, is the optimal method of mitigating occupational health and safety risks. A recent National Institute of Safety and Health initiative has established a goal to increase adoption of PtD innovation in industry. The construction industry has traditionally lagged behind other sectors in the adoption of innovation, in general; and of safety and health prevention innovation, in particular. Therefore, as a first step toward improving adoption trends in this sector, a conceptual model was developed to describe the parameters and causal relationships that influence and predict construction stakeholder "adoption readiness" for PtD technology innovation. This model was built upon three well-established theoretical frameworks: the Health Belief Model, the Diffusion of Innovation Model, and the Technology Acceptance Model. Earp and Ennett's model development methodology was employed to build a depiction of the key constructs and directionality and magnitude of relationships among them. Key constructs were identified from the literature associated with the three theoretical frameworks, with special emphasis given to studies related to construction or OHS technology adoption. A conceptual model is presented. Recommendations for future research are described and include confirmatory structural equation modeling of model parameters and relationships, additional descriptive investigation of barriers to adoption in some trade sectors, and design and evaluation of an intervention strategy.
Predicting use of effective vegetable parenting practices with the Model of Goal Directed Behavior.
Diep, Cassandra S; Beltran, Alicia; Chen, Tzu-An; Thompson, Debbe; O'Connor, Teresia; Hughes, Sheryl; Baranowski, Janice; Baranowski, Tom
2015-06-01
To model effective vegetable parenting practices using the Model of Goal Directed Vegetable Parenting Practices construct scales. An Internet survey was conducted with parents of pre-school children to assess their agreement with effective vegetable parenting practices and Model of Goal Directed Vegetable Parenting Practices items. Block regression modelling was conducted using the composite score of effective vegetable parenting practices scales as the outcome variable and the Model of Goal Directed Vegetable Parenting Practices constructs as predictors in separate and sequential blocks: demographics, intention, desire (intrinsic motivation), perceived barriers, autonomy, relatedness, self-efficacy, habit, anticipated emotions, perceived behavioural control, attitudes and lastly norms. Backward deletion was employed at the end for any variable not significant at P<0·05. Houston, TX, USA. Three hundred and seven parents (mostly mothers) of pre-school children. Significant predictors in the final model in order of relationship strength included habit of active child involvement in vegetable selection, habit of positive vegetable communications, respondent not liking vegetables, habit of keeping a positive vegetable environment and perceived behavioural control of having a positive influence on child's vegetable consumption. The final model's adjusted R² was 0·486. This was the first study to test scales from a behavioural model to predict effective vegetable parenting practices. Further research needs to assess these Model of Goal Directed Vegetable Parenting Practices scales for their (i) predictiveness of child consumption of vegetables in longitudinal samples and (ii) utility in guiding design of vegetable parenting practices interventions.
Ye, Hao; Luo, Heng; Ng, Hui Wen; Meehan, Joe; Ge, Weigong; Tong, Weida; Hong, Huixiao
2016-01-01
ToxCast data have been used to develop models for predicting in vivo toxicity. To predict the in vivo toxicity of a new chemical using a ToxCast data based model, its ToxCast bioactivity data are needed but not normally available. The capability of predicting ToxCast bioactivity data is necessary to fully utilize ToxCast data in the risk assessment of chemicals. We aimed to understand and elucidate the relationships between the chemicals and bioactivity data of the assays in ToxCast and to develop a network analysis based method for predicting ToxCast bioactivity data. We conducted modularity analysis on a quantitative network constructed from ToxCast data to explore the relationships between the assays and chemicals. We further developed Nebula (neighbor-edges based and unbiased leverage algorithm) for predicting ToxCast bioactivity data. Modularity analysis on the network constructed from ToxCast data yielded seven modules. Assays and chemicals in the seven modules were distinct. Leave-one-out cross-validation yielded a Q(2) of 0.5416, indicating ToxCast bioactivity data can be predicted by Nebula. Prediction domain analysis showed some types of ToxCast assay data could be more reliably predicted by Nebula than others. Network analysis is a promising approach to understand ToxCast data. Nebula is an effective algorithm for predicting ToxCast bioactivity data, helping fully utilize ToxCast data in the risk assessment of chemicals. Published by Elsevier Ltd.
Prediction of chemo-response in serous ovarian cancer.
Gonzalez Bosquet, Jesus; Newtson, Andreea M; Chung, Rebecca K; Thiel, Kristina W; Ginader, Timothy; Goodheart, Michael J; Leslie, Kimberly K; Smith, Brian J
2016-10-19
Nearly one-third of serous ovarian cancer (OVCA) patients will not respond to initial treatment with surgery and chemotherapy and die within one year of diagnosis. If patients who are unlikely to respond to current standard therapy can be identified up front, enhanced tumor analyses and treatment regimens could potentially be offered. Using the Cancer Genome Atlas (TCGA) serous OVCA database, we previously identified a robust molecular signature of 422-genes associated with chemo-response. Our objective was to test whether this signature is an accurate and sensitive predictor of chemo-response in serous OVCA. We first constructed prediction models to predict chemo-response using our previously described 422-gene signature that was associated with response to treatment in serous OVCA. Performance of all prediction models were measured with area under the curves (AUCs, a measure of the model's accuracy) and their respective confidence intervals (CIs). To optimize the prediction process, we determined which elements of the signature most contributed to chemo-response prediction. All prediction models were replicated and validated using six publicly available independent gene expression datasets. The 422-gene signature prediction models predicted chemo-response with AUCs of ~70 %. Optimization of prediction models identified the 34 most important genes in chemo-response prediction. These 34-gene models had improved performance, with AUCs approaching 80 %. Both 422-gene and 34-gene prediction models were replicated and validated in six independent datasets. These prediction models serve as the foundation for the future development and implementation of a diagnostic tool to predict response to chemotherapy for serous OVCA patients.
An integrated theory of language production and comprehension.
Pickering, Martin J; Garrod, Simon
2013-08-01
Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor the upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.
Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)
Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.
2017-01-01
This paper describes Gaussian process regression (GPR) models presented in predictive model markup language (PMML). PMML is an extensible-markup-language (XML) -based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed to various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125
Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).
Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S
2017-01-01
This paper describes Gaussian process regression (GPR) models presented in predictive model markup language (PMML). PMML is an extensible-markup-language (XML) -based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed to various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
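The two features the PMML 4.3 GPR element is described as carrying, a predictive mean together with uncertainty bounds, can be illustrated with the sketch below; scikit-learn is used here only as a stand-in (the PMML serialization itself is not shown), and the data and kernel are illustrative.

    # Sketch: GPR prediction with mean and 95% confidence bounds
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    X = np.linspace(0, 10, 30).reshape(-1, 1)
    y = np.sin(X).ravel() + np.random.default_rng(7).normal(0, 0.1, 30)

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, y)
    X_new = np.array([[2.5], [7.5]])
    mean, std = gpr.predict(X_new, return_std=True)
    for x, m, s in zip(X_new.ravel(), mean, std):
        print(f"x = {x:.1f}: prediction = {m:.2f} +/- {1.96 * s:.2f} (95% bound)")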
Murphy, Brittany L; L Hoskin, Tanya; Heins, Courtney Day N; Habermann, Elizabeth B; Boughey, Judy C
2017-09-01
Axillary node status after neoadjuvant chemotherapy (NAC) influences the axillary surgical staging procedure as well as recommendations regarding reconstruction and radiation. Our aim was to construct a clinical preoperative prediction model to identify the likelihood of patients being node negative after NAC. Using the National Cancer Database (NCDB) from January 2010 to December 2012, we identified cT1-T4c, N0-N3 breast cancer patients treated with NAC. The effects of patient and tumor factors on pathologic node status were assessed by multivariable logistic regression separately for clinically node negative (cN0) and clinically node positive (cN+) disease, and two models were constructed. Model performance was validated in a cohort of NAC patients treated at our institution (January 2013-July 2016), and model discrimination was assessed by estimating the area under the curve (AUC). Of 16,153 NCDB patients, 6659 (41%) were cN0 and 9494 (59%) were cN+. Factors associated with pathologic nodal status and included in the models were patient age, tumor grade, biologic subtype, histology, clinical tumor category, and, in cN+ patients only, clinical nodal category. The validation dataset included 194 cN0 and 180 cN+ patients. The cN0 model demonstrated good discrimination, with an AUC of 0.73 (95% confidence interval [CI] 0.72-0.74) in the NCDB and 0.77 (95% CI 0.68-0.85) in the external validation, while the cN+ patient model AUC was 0.71 (95% CI 0.70-0.72) in the NCDB and 0.74 (95% CI 0.67-0.82) in the external validation. We constructed two models that showed good discrimination for predicting ypN0 status following NAC in cN0 and cN+ patients. These clinically useful models can guide surgical planning after NAC.
Kiviniemi, Marc T.; Bennett, Alyssa; Zaiter, Marie; Marshall, James R.
2010-01-01
Compliance with colorectal cancer screening recommendations requires considerable conscious effort on the part of the individual patient, making an individual's decisions about engagement in screening an important contributor to compliance or noncompliance. The objective of this paper was to examine the effectiveness of individual-level behavior theories and their associated constructs in accounting for engagement in colorectal cancer screening behavior. We reviewed the literature examining constructs from formal models of individual-level health behavior as factors associated with compliance with screening for colorectal cancer. All published studies examining one or more constructs from the health belief model, theory of planned behavior, transtheoretical model, or social cognitive theory and their relation to screening behavior or behavioral intentions were included in the analysis. By and large, results of studies supported the theory-based predictions for the influence of constructs on cancer screening behavior. However, the evidence base for many of these relations, especially for models other than the health belief model, is quite limited. Suggestions are made for future research on individual-level determinants of colorectal cancer screening. PMID:21954045
Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.
2014-01-01
Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376
ERIC Educational Resources Information Center
Kieffer, Kevin M.; Schinka, John A.; Curtiss, Glenn
2004-01-01
This study examined the contributions of the 5-Factor Model (FFM; P. T. Costa & R. R. McCrae, 1992) and RIASEC (J. L. Holland, 1994) constructs of consistency, differentiation, and person-environment congruence in predicting job performance ratings in a large sample (N = 514) of employees. Hierarchical regression analyses conducted separately by…
2008-12-01
1979; Wasserman and Faust, 1994). SNA thus relies heavily on graph theory to make predictions about network structure and thus social behavior...becomes a tool for increasing the specificity of theory, thinking through the theoretical implications, and generating testable predictions. In...to summarize Construct and its roots in constructural sociological theory. We discover that the (LPM) provides a mathematical bridge between
NASA Astrophysics Data System (ADS)
Falugi, P.; Olaru, S.; Dumur, D.
2010-08-01
This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures that adapt the partition of the state space is presented. For practical implementation, the construction of suitable (explicit) descriptions of the control law is described through concrete algorithms.
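To give a flavour of the LMI machinery underlying such approaches, the sketch below solves a basic feasibility problem for a polytopic model, finding a common quadratic Lyapunov matrix P with A_i^T P + P A_i < 0 at every vertex. This is not the paper's specific LMI formulation (which additionally encodes constraints and the predictive control law); cvxpy and the vertex matrices are assumptions for illustration.

    # Sketch: common quadratic Lyapunov LMI for a polytopic model (cvxpy)
    import cvxpy as cp
    import numpy as np

    A_vertices = [np.array([[0.0, 1.0], [-2.0, -1.0]]),
                  np.array([[0.0, 1.0], [-3.0, -1.5]])]   # illustrative polytope vertices

    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-3
    constraints = [P >> eps * np.eye(2)]
    constraints += [A.T @ P + P @ A << -eps * np.eye(2) for A in A_vertices]

    cp.Problem(cp.Minimize(0), constraints).solve()
    print("feasible P =\n", P.value)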
NASA Astrophysics Data System (ADS)
Goudarzi, Nasser
2016-04-01
In this work, two new and powerful chemometrics methods are applied for the modeling and prediction of the 19F chemical shift values of some fluorinated organic compounds. The radial basis function-partial least squares (RBF-PLS) and random forest (RF) methods are employed to construct the models to predict the 19F chemical shifts. In this study, no separate variable selection method was used, since the RF method can serve as both a variable selection and a modeling technique. Effects of the important parameters affecting the RF prediction power, such as the number of trees (nt) and the number of randomly selected variables to split each node (m), were investigated. The root-mean-square errors of prediction (RMSEP) for the training set and the prediction set for the RBF-PLS and RF models were 44.70, 23.86, 29.77, and 23.69, respectively. Also, the correlation coefficients of the prediction set for the RBF-PLS and RF models were 0.8684 and 0.9313, respectively. The results obtained reveal that the RF model can be used as a powerful chemometrics tool for quantitative structure-property relationship (QSPR) studies.
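The random-forest half of this workflow, tuning the number of trees (nt) and the number of variables tried at each split (m) and reporting RMSEP, can be sketched as below; the descriptor matrix and shift values are synthetic, not DRAGON descriptors or the study's compounds.

    # Sketch: random forest regression with nt and m parameters, scored by RMSEP
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    X = rng.normal(size=(200, 30))                                    # molecular descriptors (placeholder)
    shift = 30 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 5, 200)       # 19F chemical shift (ppm, placeholder)

    X_tr, X_te, y_tr, y_te = train_test_split(X, shift, test_size=0.3, random_state=0)
    rf = RandomForestRegressor(n_estimators=500,    # nt, number of trees
                               max_features=10,     # m, variables tried at each split
                               random_state=0).fit(X_tr, y_tr)

    rmsep = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
    print(f"RMSEP (prediction set) = {rmsep:.1f} ppm")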
Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi
2018-02-01
To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics features extraction, a radiomics signature was constructed in derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram that integrated the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in validation cohort gave a comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.
DOT National Transportation Integrated Search
2015-01-01
Not long after the construction of a pavement or a new pavement surface, various forms of deterioration begin to accumulate due to the harsh effects of traffic loading combined with weathering action. In a recent NEXTRANS project, a pavement crac...
Prediction of physical workload in reduced gravity environments
NASA Technical Reports Server (NTRS)
Goldberg, Joseph H.
1987-01-01
The background, development, and application of a methodology to predict human energy expenditure and physical workload in low gravity environments, such as a Lunar or Martian base, is described. Based on a validated model to predict energy expenditures in Earth-based industrial jobs, the model relies on an elemental analysis of the proposed job. Because the job itself need not physically exist, many alternative job designs may be compared in their physical workload. The feasibility of using the model for prediction of low gravity work was evaluated by lowering body and load weights, while maintaining basal energy expenditure. Comparison of model results was made both with simulated low gravity energy expenditure studies and with reported Apollo 14 Lunar EVA expenditure. Prediction accuracy was very good for walking and for cart pulling on slopes less than 15 deg, but the model underpredicted the most difficult work conditions. This model was applied to example core sampling and facility construction jobs, as presently conceptualized for a Lunar or Martian base. Resultant energy expenditures and suggested work-rest cycles were well within the range of moderate work difficulty. Future model development requirements were also discussed.
Goya Jorge, Elizabeth; Rayar, Anita Maria; Barigye, Stephen J; Jorge Rodríguez, María Elisa; Sylla-Iyarreta Veitía, Maité
2016-06-07
A quantitative structure-activity relationship (QSAR) study of the 2,2-diphenyl-1-picrylhydrazyl (DPPH•) radical scavenging ability of 1373 chemical compounds, using DRAGON molecular descriptors (MD) and a neural network technique based on the multilayer perceptron (MLP), was developed. The built model demonstrated a satisfactory performance for the training set (R² = 0.713) and the test set (Q²ext = 0.654), respectively. To gain greater insight into the relevance of the MD contained in the MLP model, sensitivity and principal component analyses were performed. Moreover, structural and mechanistic interpretation was carried out to comprehend the relationship of the variables in the model with the modeled property. The constructed MLP model was employed to predict the radical scavenging ability for a group of coumarin-type compounds. Finally, in order to validate the model's predictions, an in vitro assay for one of the compounds (4-hydroxycoumarin) was performed, showing a satisfactory proximity between the experimental and predicted pIC50 values.
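An MLP-based QSAR fit of this general shape, training on descriptors and reporting training R² and external-test Q², can be sketched as below; the descriptors, pIC50 values, network size and split are assumptions for illustration only.

    # Sketch: MLP regression QSAR with training R^2 and external-test Q^2
    import numpy as np
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(9)
    X = rng.normal(size=(1373, 40))                                   # descriptor matrix (placeholder)
    pic50 = X[:, :3] @ np.array([0.9, -0.5, 0.4]) + rng.normal(0, 0.5, 1373)

    X_tr, X_te, y_tr, y_te = train_test_split(X, pic50, test_size=0.25, random_state=0)
    mlp = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                     random_state=0)).fit(X_tr, y_tr)
    print(f"R^2 (training) = {r2_score(y_tr, mlp.predict(X_tr)):.3f}, "
          f"Q^2 (test) = {r2_score(y_te, mlp.predict(X_te)):.3f}")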
Considerations on the Use of 3-D Geophysical Models to Predict Test Ban Monitoring Observables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, D B; Zucca, J J; McCallen, D B
2007-07-09
The use of 3-D geophysical models to predict nuclear test ban monitoring observables (phase travel times, amplitudes, dispersion, etc.) is widely anticipated to provide improvements in the basic seismic monitoring functions of detection, association, location, discrimination and yield estimation. A number of questions arise when contemplating a transition from 1-D, 2-D and 2.5-D models to constructing and using 3-D models, among them: (1) Can a 3-D geophysical model or a collection of 3-D models provide measurably improved predictions of seismic monitoring observables over existing 1-D models, or 2-D and 2 1/2-D models currently under development? (2) Is a single model that can predict all observables achievable, or must separate models be devised for each observable? How should joint inversion of disparate observable data be performed, if required? (3) What are the options for model representation? Are multi-resolution models essential? How does representation affect the accuracy and speed of observable predictions? (4) How should model uncertainty be estimated, represented and how should it be used? Are stochastic models desirable? (5) What data types should be used to construct the models? What quality control regime should be established? (6) How will 3-D models be used in operations? Will significant improvements in the basic monitoring functions result from the use of 3-D models? Will the calculation of observables through 3-D models be fast enough for real-time use or must a strategy of pre-computation be employed? (7) What are the theoretical limits to 3-D model development (resolution, uncertainty) and performance in predicting monitoring observables? How closely can those limits be approached with projected data availability, station distribution and inverse methods? (8) What priorities should be placed on the acquisition of event ground truth information, deployment of new stations, development of new inverse techniques, exploitation of large-scale computing and other activities in the pursuit of 3-D model development and use? In this paper, we examine what technical issues must be addressed to answer these questions. Although convened for a somewhat broader purpose, the June 2007 Workshop on Multi-resolution 3D Earth Models held in Berkeley, CA also touched on this topic. Results from the workshop are summarized in this paper.
Arntzen, J W
2006-05-04
The aim of the study was to identify the conditions under which spatial-environmental models can be used for the improved understanding of species distributions, under the explicit criterion of model predictive performance. I constructed distribution models for 17 amphibian and 21 reptile species in Portugal from atlas data and 13 selected ecological variables with stepwise logistic regression and a geographic information system. Models constructed for Portugal were extrapolated over Spain and tested against range maps and atlas data. Descriptive model precision ranged from 'fair' to 'very good' for 12 species showing a range border inside Portugal ('edge species', kappa (k) 0.35-0.89, average 0.57) and was at best 'moderate' for 26 species with a countrywide Portuguese distribution ('non-edge species', k = 0.03-0.54, average 0.29). The accuracy of the prediction for Spain was significantly related to the precision of the descriptive model for the group of edge species and not for the countrywide species. In the latter group, data were consistently better captured by the single variable search-effort than by the panel of environmental data. Atlas data in presence-absence format are often inadequate to model the distribution of species if the considered area does not include part of the range border. Conversely, distribution models for edge species, especially those displaying high precision, may help in the correct identification of parameters underlying the species range and assist with the informed choice of conservation measures.
2009-01-01
Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input–output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input–output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down. PMID:20596382
Ahadian, Samad; Kawazoe, Yoshiyuki
2009-06-04
Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input-output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input-output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down.
NASA Astrophysics Data System (ADS)
Mukhin, Dmitry; Gavrilov, Andrey; Loskutov, Evgeny; Feigin, Alexander
2016-04-01
We suggest a method for empirical forecast of climate dynamics based on the reconstruction of reduced dynamical models in the form of random dynamical systems [1,2] derived from observational time series. The construction of a proper embedding - the set of variables determining the phase space the model works in - is no doubt the most important step in such modeling, but this task is non-trivial due to the huge dimension of time series of typical climatic fields. An appropriate expansion of the observational time series is needed, yielding a number of principal components considered as phase variables that are efficient for the construction of a low-dimensional evolution operator. We emphasize two main features the reduced models should have for capturing the main dynamical properties of the system: (i) taking into account time-lagged teleconnections in the atmosphere-ocean system and (ii) reflecting the nonlinear nature of these teleconnections. In accordance with these principles, in this report we present a methodology that combines a new way of constructing an embedding by spatio-temporal data expansion with nonlinear model construction on the basis of artificial neural networks. The methodology is applied to NCEP/NCAR reanalysis data, including fields of sea level pressure, geopotential height, and wind speed covering the Northern Hemisphere. Its efficiency for the interannual forecast of various climate phenomena, including ENSO, PDO, NAO and strong blocking conditions over the mid-latitudes, is demonstrated. Also, we investigate the ability of the models to reproduce and predict the evolution of qualitative features of the dynamics, such as spectral peaks, critical transitions and statistics of extremes. This research was supported by the Government of the Russian Federation (Agreement No. 14.Z50.31.0033 with the Institute of Applied Physics RAS). [1] Y. I. Molkov, E. M. Loskutov, D. N. Mukhin, and A. M. Feigin, "Random dynamical models from time series," Phys. Rev. E, vol. 85, no. 3, p. 036216, 2012. [2] D. Mukhin, D. Kondrashov, E. Loskutov, A. Gavrilov, A. Feigin, and M. Ghil, "Predicting Critical Transitions in ENSO models. Part II: Spatially Dependent Models," J. Clim., vol. 28, no. 5, pp. 1962-1976, 2015.
ERIC Educational Resources Information Center
Knezek, Gerald; Christensen, Rhonda
2016-01-01
An expansion of the Will, Skill, Tool Model of Technology Integration to include teacher's pedagogical style is proposed by the authors as a means of advancing the predictive power of the model for level of classroom technology integration to beyond 90%. Suggested advantages to this expansion include more precise identification of areas to be…
NASA Astrophysics Data System (ADS)
Luo, Junhui; Wu, Chao; Liu, Xianlin; Mi, Decai; Zeng, Fuquan; Zeng, Yongjun
2018-01-01
At present, soft foundation settlement is mostly predicted with the exponential-curve and hyperbola deferred approximation methods, and the correlation between their results is poor. The application of neural networks in this area also has limitations, and none of the models used in existing cases adopted the Takagi-Sugeno (TS) fuzzy neural network, whose calculation combines the characteristics of fuzzy systems and neural networks so that the two approaches complement each other. At the same time, a developed and optimized calculation program is convenient for engineering designers. Taking the prediction and analysis of soft foundation settlement of gully soft soil in the granite area of the Guangxi Guihe road as an example, a fuzzy neural network model is established and verified to explore its applicability. The TS fuzzy neural network is used to construct the prediction model of settlement and deformation, and the corresponding time response function is established to calculate and analyze the settlement of the soft foundation. The results show that the model's short-term settlement predictions are accurate and that its final settlement prediction has engineering reference value.
The dynamic financial distress prediction method of EBW-VSTW-SVM
NASA Astrophysics Data System (ADS)
Sun, Jie; Li, Hui; Chang, Pei-Chann; He, Kai-Yu
2016-07-01
Financial distress prediction (FDP) plays an important role in corporate financial risk management. Most former research in this field tried to construct effective static FDP (SFDP) models, which are difficult to embed into enterprise information systems because they are based on horizontal data-sets collected outside the modelling enterprise and define financial distress as absolute conditions such as bankruptcy or insolvency. This paper proposes an approach for dynamic evaluation and prediction of financial distress based on entropy-based weighting (EBW), the support vector machine (SVM) and an enterprise's vertical sliding time window (VSTW). The dynamic FDP (DFDP) method, named EBW-VSTW-SVM, keeps updating the FDP model dynamically as time goes on and needs only the historic financial data of the modelling enterprise itself, so it is easier to embed into enterprise information systems. The EBW-VSTW-SVM method consists of four steps, namely evaluation of vertical relative financial distress (VRFD) based on EBW, construction of the training data-set for DFDP modelling according to the VSTW, training of the DFDP model based on SVM, and DFDP for the future time point. We carry out case studies for two listed pharmaceutical companies and experimental analysis for some other companies to simulate the sliding of the enterprise vertical time window. The results indicate that the proposed approach is feasible and efficient and can help managers improve corporate financial management.
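A hedged sketch of the four DFDP steps on synthetic quarterly ratios is given below. The entropy-weighting formula, the below-median rule used to label vertical relative financial distress, and the 20-quarter window length are simplifying assumptions, not the paper's exact formulation.

```python
# Hedged sketch of EBW-VSTW-SVM on synthetic quarterly financial ratios.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
ratios = rng.random((40, 6))                           # 40 quarters x 6 financial ratios

# Step 1: entropy-based weights (EBW) for the ratios.
p = ratios / ratios.sum(axis=0)
entropy = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(len(ratios))
weights = (1 - entropy) / (1 - entropy).sum()

# Step 2: vertical relative financial distress (VRFD): flag below-median weighted scores.
score = ratios @ weights
distress = (score < np.median(score)).astype(int)

# Step 3: train the SVM on a vertical sliding time window (here the last 20 quarters).
window = 20
clf = SVC(kernel="rbf", C=10.0).fit(ratios[-window - 1:-1], distress[-window - 1:-1])

# Step 4: DFDP for the next time point (the most recent quarter).
print("predicted distress flag:", clf.predict(ratios[-1:])[0])
```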
Coffey, C Adam; Cox, Jennifer; Kopkin, Megan R
2018-02-01
Few studies have examined the extent to which psychopathic traits relate to the commission of mild to moderate acts of deviance, such as vandalism and minor traffic violations. Given that psychopathy is now studied in community populations, the relationship between psychopathic traits and less severe deviant behaviors, which are more normative among noninstitutionalized samples, warrants investigation. The current study examined the relationships between the triarchic model of psychopathy (Patrick, Fowles & Krueger, 2009) and seven forms of deviant behavior (drug use, alcohol use, theft, vandalism, school misconduct, assault, and general deviance) in a nationally representative sample. Triarchic disinhibition positively predicted each form of normative deviance. Boldness positively predicted drug and alcohol use as well as general deviance, while meanness negatively predicted school misconduct. Boldness and disinhibition also positively predicted overall lifetime engagement in deviant behavior. Implications are discussed, including support of the role of boldness within the psychopathy construct.
Myers, Hector F; Wyatt, Gail E; Ullman, Jodie B; Loeb, Tamra B; Chin, Dorothy; Prause, Nicole; Zhang, Muyu; Williams, John K; Slavich, George M; Liu, Honghu
2015-05-01
This study examined the utility of a lifetime cumulative adversities and trauma model in predicting the severity of mental health symptoms of depression, anxiety, and posttraumatic stress disorder. We also tested whether ethnicity and gender moderate the effects of this stress exposure construct on mental health using multigroup structural equation modeling. A sample of 500 low-socioeconomic status African American and Latino men and women with histories of adversities and trauma were recruited and assessed with a standard battery of self-report measures of stress and mental health. Multiple-group structural equation models indicated good overall model fit. As hypothesized, experiences of discrimination, childhood family adversities, childhood sexual abuse, other childhood trauma, and chronic stresses all loaded on the latent cumulative burden of adversities and trauma construct (CBAT). The CBAT stress exposure index in turn predicted the mental health status latent variable. Although there were several significant univariate ethnic and gender differences, and ethnic and gender differences were observed on several paths, there were no significant ethnic differences in the final model fit of the data. These findings highlight the deleterious consequences of cumulative stress and trauma for mental health and underscore a need to assess these constructs in selecting appropriate clinical interventions for reducing mental health disparities and improving human health. (c) 2015 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Yu; Guo, Yanzhi; Kuang, Qifan; Pu, Xuemei; Ji, Yue; Zhang, Zhihang; Li, Menglong
2015-04-01
The assessment of binding affinity between ligands and target proteins plays an essential role in the drug discovery and design process. As an alternative to widely used scoring approaches, machine learning methods have also been proposed for fast prediction of binding affinity with promising results, but most of them were developed as all-purpose models despite the specific functions of different protein families, even though proteins from different functional families have different structures and physicochemical features. In this study, we propose a random forest method to predict the protein-ligand binding affinity based on a comprehensive feature set covering protein sequence, binding pocket, ligand structure and intermolecular interaction. Feature processing and compression were implemented separately for the different protein family datasets, which indicates that different features contribute to different models, so an individual representation for each protein family is necessary. Three family-specific models were constructed for three important protein target families: HIV-1 protease, trypsin and carbonic anhydrase. As a comparison, two generic models including diverse protein families were also built. The evaluation results show that the models on family-specific datasets perform better than those on the generic datasets; the Pearson and Spearman correlation coefficients (Rp and Rs) on the test sets are 0.740, 0.874, 0.735 and 0.697, 0.853, 0.723 for HIV-1 protease, trypsin and carbonic anhydrase, respectively. Comparisons with other methods further demonstrate that individual representation and model construction for each protein family is a more reasonable way to predict the affinity of a particular protein family.
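The family-specific strategy can be illustrated with a minimal sketch: one random forest regressor is trained and evaluated per protein family, and Pearson and Spearman correlations are reported on held-out data. The feature matrices and affinities below are synthetic placeholders for the descriptor set described in the abstract.

```python
# Family-specific random forest affinity models on synthetic descriptors.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
for family in ("HIV-1 protease", "trypsin", "carbonic anhydrase"):
    X = rng.standard_normal((300, 50))                         # sequence/pocket/ligand features
    y = X[:, :5].sum(axis=1) + 0.3 * rng.standard_normal(300)  # synthetic binding affinity
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{family}: Rp={pearsonr(y_te, pred)[0]:.2f}, Rs={spearmanr(y_te, pred)[0]:.2f}")
```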
Morsink, Maarten C; Dukers, Danny F
2009-03-01
Animal models have been widely used for studying the physiology and pharmacology of psychiatric and neurological diseases. The concepts of face, construct, and predictive validity are used as indicators to estimate the extent to which the animal model mimics the disease. Currently, we used these three concepts to design a theoretical assignment to integrate the teaching of neurophysiology, neuropharmacology, and experimental design. For this purpose, seven case studies were developed in which animal models for several psychiatric and neurological diseases were described and in which neuroactive drugs used to treat or study these diseases were introduced. Groups of undergraduate students were assigned to one of these case studies and asked to give a classroom presentation in which 1) the disease and underlying pathophysiology are described, 2) face and construct validity of the animal model are discussed, and 3) a pharmacological experiment with the associated neuroactive drug to assess predictive validity is presented. After evaluation of the presentations, we found that the students had gained considerable insight into disease phenomenology, its underlying neurophysiology, and the mechanism of action of the neuroactive drug. Moreover, the assignment was very useful in the teaching of experimental design, allowing an in-depth discussion of experimental control groups and the prediction of outcomes in these groups if the animal model were to display predictive validity. Finally, the highly positive responses in the student evaluation forms indicated that the assignment was of great interest to the students. Hence, the currently developed case studies constitute a very useful tool for teaching neurophysiology, neuropharmacology, and experimental design.
Predicting grain yield using canopy hyperspectral reflectance in wheat breeding data.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; de Los Campos, Gustavo; Alvarado, Gregorio; Suchismita, Mondal; Rutkoski, Jessica; González-Pérez, Lorena; Burgueño, Juan
2017-01-01
Modern agriculture uses hyperspectral cameras to obtain hundreds of reflectance measurements at discrete narrow bands covering the whole visible light spectrum and part of the infrared and ultraviolet spectra, depending on the camera. This information is used to construct vegetation indices (VIs) (e.g., the green normalized difference vegetation index or GNDVI, the simple ratio or SRa, etc.), which are used for the prediction of primary traits (e.g., biomass). However, these indices use only some bands and are cultivar-specific; therefore they lose considerable information and are not robust for all cultivars. This study proposes models that use all available bands as predictors to increase prediction accuracy; we compared these approaches with eight conventional VIs constructed using only some bands. The data set comes from CIMMYT's global wheat program and comprises 1170 genotypes evaluated for grain yield (ton/ha) in five environments (Drought, Irrigated, EarlyHeat, Melgas and Reduced Irrigated); the reflectance data were measured in 250 discrete narrow bands ranging between 392 and 851 nm. The proposed models for the simultaneous analysis of all bands were ordinary least squares (OLS), Bayes B, principal components with Bayes B, functional B-spline, functional Fourier and functional partial least squares. The results of these models were compared with OLS performed using each of the eight VIs, individually and combined, as predictors. We found that using all bands simultaneously increased prediction accuracy more than using VIs alone. The spline and Fourier models had the best prediction accuracy at each of the nine time-points under study. Combining image data collected at different time-points led to a small increase in prediction accuracy relative to models that use data from a single time-point. Also, using only the bands with heritabilities larger than 0.5 in the Drought environment as predictor variables improved prediction accuracy.
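A toy version of the central comparison is sketched below: a single vegetation index versus a penalized regression on all 250 bands, with ridge regression used purely as a simple stand-in for the Bayesian and functional models named above. The band positions used for GNDVI and the synthetic yield signal are assumptions.

```python
# Toy contrast between a single VI and an all-band model (ridge as a stand-in).
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
bands = rng.random((1170, 250))                      # 1170 genotypes x 250 narrow bands
true_w = 0.1 * rng.standard_normal(250)              # synthetic band effects
grain_yield = bands @ true_w + rng.normal(scale=0.5, size=1170)

green, nir = bands[:, 60], bands[:, 200]             # illustrative band positions for GNDVI
gndvi = ((nir - green) / (nir + green)).reshape(-1, 1)

r2_vi = cross_val_score(LinearRegression(), gndvi, grain_yield, cv=5).mean()
r2_all = cross_val_score(RidgeCV(alphas=np.logspace(-3, 3, 13)), bands, grain_yield, cv=5).mean()
print(f"GNDVI-only R2 = {r2_vi:.2f}, all-band R2 = {r2_all:.2f}")
```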
ERIC Educational Resources Information Center
Jones, Brett D.; Sahbaz, Sumeyra; Schram, Asta B.; Chittum, Jessica R.
2017-01-01
We investigated students' perceptions related to psychological constructs in their science classes and the influence of these perceptions on their science identification and science career goals. Participants included 575 middle school students from two countries (334 students in the U.S. and 241 students in Iceland). Students completed a…
Takahashi, Hiro; Kobayashi, Takeshi; Honda, Hiroyuki
2005-01-15
To establish prognostic predictors of various diseases using DNA microarray analysis technology, it is desirable to selectively identify genes that are significant for constructing the prognostic model, and it is also necessary to eliminate non-specific genes or genes with errors before constructing the model. We applied projective adaptive resonance theory (PART) to gene screening for DNA microarray data. Genes selected by PART were subjected to our FNN-SWEEP modeling method for the construction of a cancer class prediction model. The model performance was evaluated through comparison with a conventional signal-to-noise (S2N) screening method and the nearest shrunken centroids (NSC) method. The FNN-SWEEP predictor with PART screening could discriminate classes of acute leukemia in blinded data with 97.1% accuracy and classes of lung cancer with 90.0% accuracy, whereas the predictor with S2N achieved only 85.3 and 70.0% and the predictor with NSC 88.2 and 90.0%, respectively. The results demonstrate that PART was superior for gene screening. The software is available upon request from the authors. honda@nubio.nagoya-u.ac.jp
Gambling and the Reasoned Action Model: Predicting Past Behavior, Intentions, and Future Behavior.
Dahl, Ethan; Tagler, Michael J; Hohman, Zachary P
2018-03-01
Gambling is a serious concern for society because it is highly addictive and is associated with a myriad of negative outcomes. The current study applied the Reasoned Action Model (RAM) to understand and predict gambling intentions and behavior. Although prior studies have taken a reasoned action approach to understand gambling, no prior study has fully applied the RAM or used the RAM to predict future gambling. Across two studies the RAM was used to predict intentions to gamble, past gambling behavior, and future gambling behavior. In study 1 the model significantly predicted intentions and past behavior in both a college student and Amazon Mechanical Turk sample. In study 2 the model predicted future gambling behavior, measured 2 weeks after initial measurement of the RAM constructs. This study stands as the first to show the utility of the RAM in predicting future gambling behavior. Across both studies, attitudes and perceived normative pressure were the strongest predictors of intentions to gamble. These findings provide increased understanding of gambling and inform the development of gambling interventions based on the RAM.
Empirical prediction of the onset dates of South China Sea summer monsoon
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Li, Tim
2017-03-01
The onset of South China Sea summer monsoon (SCSSM) signifies the commencement of the wet season over East Asia. Predicting the SCSSM onset date is of significant importance. In this study, we establish two different statistical models, namely the physical-empirical model (PEM) and the spatial-temporal projection model (STPM) to predict the SCSSM onset. The PEM is constructed from the seasonal prediction perspective. Observational diagnoses reveal that the early onset of the SCSSM is preceded by (a) a warming tendency in middle and lower troposphere (850-500 hPa) over central Siberia from January to March, (b) a La Niña-like zonal dipole sea surface temperature pattern over the tropical Pacific in March, and (c) a dipole sea level pressure pattern with negative center in subtropics and positive center over high latitude of Southern Hemisphere in January. The PEM built on these predictors achieves a cross-validated reforecast temporal correlation coefficient (TCC) skill of 0.84 for the period of 1979-2004, and an independent forecast TCC skill of 0.72 for the period 2005-2014. The STPM is built on the extended-range forecast perspective. Pentad data are used to predict a zonal wind index over the South China Sea region. Similar to PEM, the STPM is constructed using 1979-2004 data. Based on the forecasted zonal wind index, the independent forecast of the SCSSM onset dates achieves a TCC skill of 0.90 for 2005-2014. The STPM provides more detailed information for the intraseasonal evolution during the period of the SCSSM onset (pentad 25-35). The two models proposed herein are expected to facilitate the real-time prediction of the SCSSM onset.
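A minimal sketch of the physical-empirical modelling step is shown below: a multiple regression of onset date on three precursor indices, evaluated with a leave-one-out cross-validated correlation analogous to the TCC reported above. The predictor series are synthetic placeholders, not the diagnosed fields.

```python
# Leave-one-out cross-validated physical-empirical regression (synthetic predictors).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
n_years = 26                                     # a 1979-2004-style training period
X = rng.standard_normal((n_years, 3))            # Siberian warming tendency, SST dipole, SLP dipole
onset = 28 + X @ np.array([1.5, -2.0, 1.0]) + rng.standard_normal(n_years)  # onset pentad

pred = cross_val_predict(LinearRegression(), X, onset, cv=LeaveOneOut())
print(f"cross-validated TCC: {np.corrcoef(onset, pred)[0, 1]:.2f}")
```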
NASA Astrophysics Data System (ADS)
Banerjee, Priyanka; Preissner, Robert
2018-04-01
The taste of chemical compounds present in food stimulates us to take in nutrients and avoid poisons. However, the perception of taste greatly depends on genetic as well as evolutionary perspectives. The aim of this work was the development and validation of a machine learning model based on molecular fingerprints to discriminate between the sweet and bitter taste of molecules. BitterSweetForest is the first open access model based on a KNIME workflow that provides a platform for the prediction of bitter and sweet taste of chemical compounds using molecular fingerprints and a Random Forest based classifier. The constructed model yielded an accuracy of 95% and an AUC of 0.98 in cross-validation. On an independent test set, BitterSweetForest achieved an accuracy of 96% and an AUC of 0.98 for bitter and sweet taste prediction. The constructed model was further applied to predict the bitter and sweet taste of natural compounds, approved drugs and an acute toxicity compound data set. BitterSweetForest predicted 70% of the natural product space as bitter and 10% as sweet with a confidence score of 0.60 and above. 77% of the approved drug set was predicted as bitter and 2% as sweet with confidence scores of 0.75 and above. Similarly, 75% of the compounds from the acute oral toxicity class were predicted only as bitter with a minimum confidence score of 0.75, revealing that toxic compounds are mostly bitter. Furthermore, we applied a Bayesian-based feature analysis method to identify the most frequently occurring chemical features that discriminate between sweet and bitter compounds, using the feature space of a circular fingerprint.
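The published BitterSweetForest is a KNIME workflow; the snippet below is only a hedged Python analogue of the same idea, pairing circular (Morgan) fingerprints with a random forest classifier. The SMILES strings and taste labels are a tiny illustrative set, and any real application would use the full training data and cross-validation.

```python
# Hedged Python analogue of a fingerprint + random forest taste classifier.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

data = [("CN1CCC[C@H]1c1cccnc1", 1),            # nicotine, bitter
        ("Cn1cnc2c1c(=O)n(C)c(=O)n2C", 1),      # caffeine, bitter
        ("C(C1C(C(C(C(O1)O)O)O)O)O", 0),        # glucose, sweet
        ("C(C(C(C(CO)O)O)O)O", 0)]              # xylitol, sweet

def fingerprint(smiles, n_bits=1024):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)  # circular fingerprint
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

X = np.array([fingerprint(s) for s, _ in data])
y = np.array([label for _, label in data])
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
print(clf.predict_proba(X)[:, 1])               # P(bitter); a real study would cross-validate
```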
Banerjee, Priyanka; Preissner, Robert
2018-01-01
The taste of a chemical compound present in food stimulates us to take in nutrients and avoid poisons. However, the perception of taste greatly depends on genetic as well as evolutionary perspectives. The aim of this work was the development and validation of a machine learning model based on molecular fingerprints to discriminate between the sweet and bitter taste of molecules. BitterSweetForest is the first open access model based on a KNIME workflow that provides a platform for the prediction of bitter and sweet taste of chemical compounds using molecular fingerprints and a Random Forest based classifier. The constructed model yielded an accuracy of 95% and an AUC of 0.98 in cross-validation. On an independent test set, BitterSweetForest achieved an accuracy of 96% and an AUC of 0.98 for bitter and sweet taste prediction. The constructed model was further applied to predict the bitter and sweet taste of natural compounds, approved drugs and an acute toxicity compound data set. BitterSweetForest predicted 70% of the natural product space as bitter and 10% as sweet with a confidence score of 0.60 and above. 77% of the approved drug set was predicted as bitter and 2% as sweet with a confidence score of 0.75 and above. Similarly, 75% of the compounds from the acute oral toxicity class were predicted only as bitter with a minimum confidence score of 0.75, revealing that toxic compounds are mostly bitter. Furthermore, we applied a Bayesian-based feature analysis method to identify the most frequently occurring chemical features that discriminate between sweet and bitter compounds, using the feature space of a circular fingerprint. PMID:29696137
ERIC Educational Resources Information Center
Cheng, Sheung-Tak; Chan, Alfred C. M.
2007-01-01
Two theoretical models were constructed to illustrate how stressful events, family and friends support, depression, substance use, and death attitude mutually influence one another to create cumulative risks for suicide. The models were evaluated using structural equation modeling. Results showed that suicidality was strongly predicted by death attitude,…
ERIC Educational Resources Information Center
Morsink, Maarten C.; Dukers, Danny F.
2009-01-01
Animal models have been widely used for studying the physiology and pharmacology of psychiatric and neurological diseases. The concepts of face, construct, and predictive validity are used as indicators to estimate the extent to which the animal model mimics the disease. Currently, we used these three concepts to design a theoretical assignment to…
This presentation will examine the impact of data quality on the construction of QSAR models being developed within the EPA's National Center for Computational Toxicology. We have developed a public-facing platform to provide access to predictive models. As part of the work we ha...
Peter B. Woodbury; James E. Smith; David A. Weinstein; John A. Laurence
1998-01-01
Most models of the potential effects of climate change on forest growth have produced deterministic predictions. However, there are large uncertainties in data on regional forest condition, estimates of future climate, and quantitative relationships between environmental conditions and forest growth rate. We constructed a new model to analyze these uncertainties...
Lu, Jingtao; Goldsmith, Michael-Rock; Grulke, Christopher M; Chang, Daniel T; Brooks, Raina D; Leonard, Jeremy A; Phillips, Martin B; Hypes, Ethan D; Fair, Matthew J; Tornero-Velez, Rogelio; Johnson, Jeffre; Dary, Curtis C; Tan, Yu-Mei
2016-02-01
Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-vetted models can be a great resource for the construction of models pertaining to new chemicals. A PBPK knowledgebase was compiled and developed from existing PBPK-related articles and used to develop new models. From 2,039 PBPK-related articles published between 1977 and 2013, 307 unique chemicals were identified for use as the basis of our knowledgebase. Keywords related to species, gender, developmental stages, and organs were analyzed from the articles within the PBPK knowledgebase. A correlation matrix of the 307 chemicals in the PBPK knowledgebase was calculated based on pharmacokinetic-relevant molecular descriptors. Chemicals in the PBPK knowledgebase were ranked based on their correlation toward ethylbenzene and gefitinib. Next, multiple chemicals were selected to represent exact matches, close analogues, or non-analogues of the target case study chemicals. Parameters, equations, or experimental data relevant to existing models for these chemicals and their analogues were used to construct new models, and model predictions were compared to observed values. This compiled knowledgebase provides a chemical structure-based approach for identifying PBPK models relevant to other chemical entities. Using suitable correlation metrics, we demonstrated that models of chemical analogues in the PBPK knowledgebase can guide the construction of PBPK models for other chemicals.
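The analogue-ranking step lends itself to a short sketch: correlate a query chemical's pharmacokinetic-relevant descriptors against the descriptor matrix of the knowledgebase chemicals and rank the closest matches. The descriptor values and chemical identifiers below are invented placeholders.

```python
# Rank knowledgebase chemicals by descriptor correlation to a query chemical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
chemicals = [f"chem_{i:03d}" for i in range(307)]                 # placeholder identifiers
descriptors = pd.DataFrame(rng.standard_normal((307, 12)), index=chemicals)

query = rng.standard_normal(12)                                   # descriptors of the new chemical
similarity = descriptors.apply(lambda row: np.corrcoef(row, query)[0, 1], axis=1)
print(similarity.sort_values(ascending=False).head(5))            # closest analogue candidates
```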
Grulke, Christopher M.; Chang, Daniel T.; Brooks, Raina D.; Leonard, Jeremy A.; Phillips, Martin B.; Hypes, Ethan D.; Fair, Matthew J.; Tornero-Velez, Rogelio; Johnson, Jeffre; Dary, Curtis C.; Tan, Yu-Mei
2016-01-01
Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-vetted models can be a great resource for the construction of models pertaining to new chemicals. A PBPK knowledgebase was compiled and developed from existing PBPK-related articles and used to develop new models. From 2,039 PBPK-related articles published between 1977 and 2013, 307 unique chemicals were identified for use as the basis of our knowledgebase. Keywords related to species, gender, developmental stages, and organs were analyzed from the articles within the PBPK knowledgebase. A correlation matrix of the 307 chemicals in the PBPK knowledgebase was calculated based on pharmacokinetic-relevant molecular descriptors. Chemicals in the PBPK knowledgebase were ranked based on their correlation toward ethylbenzene and gefitinib. Next, multiple chemicals were selected to represent exact matches, close analogues, or non-analogues of the target case study chemicals. Parameters, equations, or experimental data relevant to existing models for these chemicals and their analogues were used to construct new models, and model predictions were compared to observed values. This compiled knowledgebase provides a chemical structure-based approach for identifying PBPK models relevant to other chemical entities. Using suitable correlation metrics, we demonstrated that models of chemical analogues in the PBPK knowledgebase can guide the construction of PBPK models for other chemicals. PMID:26871706
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrez, Loujaine; Ghanem, Roger; Aitharaju, Venkat
Design of non-crimp fabric (NCF) composites entails major challenges pertaining to (1) the complex fine-scale morphology of the constituents, (2) the manufacturing-produced inconsistency of this morphology spatially, and thus (3) the ability to build reliable, robust, and efficient computational surrogate models to account for this complex nature. Traditional approaches to construct computational surrogate models have been to average over the fluctuations of the material properties at different scale lengths. This fails to account for the fine-scale features and fluctuations in morphology, material properties of the constituents, as well as fine-scale phenomena such as damage and cracks. In addition, it fails to accurately predict the scatter in macroscopic properties, which is vital to the design process and behavior prediction. In this work, funded in part by the Department of Energy, we present an approach for addressing these challenges by relying on polynomial chaos representations of both input parameters and material properties at different scales. Moreover, we emphasize the efficiency and robustness of integrating the polynomial chaos expansion with multiscale tools to perform multiscale assimilation, characterization, propagation, and prediction, all of which are necessary to construct the data-driven surrogate models required to design under the uncertainty of composites. These data-driven constructions provide an accurate map from parameters (and their uncertainties) at all scales and the system-level behavior relevant for design. While this perspective is quite general and applicable to all multiscale systems, NCF composites present a particular hierarchy of scales that permits the efficient implementation of these concepts.
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
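Of the three algorithms, the regularization approach is the easiest to sketch: a cross-validated lasso selects and weights items from a large pool against an outcome, which is one way of controlling expected prediction error. The item responses below are synthetic, and a continuous risk score stands in for the binary mortality outcome used in the article.

```python
# Criterion-keyed scale construction via cross-validated lasso (regularization).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)
items = rng.integers(1, 6, size=(2000, 300)).astype(float)        # 300 Likert-type items
risk = items[:, :10].sum(axis=1) + rng.normal(scale=5, size=2000) # synthetic criterion

scale = LassoCV(cv=5, random_state=0).fit(items, risk)
kept = np.flatnonzero(scale.coef_)
print(f"{kept.size} items retained; cross-validated alpha = {scale.alpha_:.3f}")
```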
Wasserkampf, A; Silva, M N; Santos, I C; Carraça, E V; Meis, J J M; Kremers, S P J; Teixeira, P J
2014-12-01
This study analyzed psychosocial predictors of the Theory of Planned Behavior (TPB) and Self-Determination Theory (SDT) and evaluated their associations with short- and long-term moderate plus vigorous physical activity (MVPA) and lifestyle physical activity (PA) outcomes in women who underwent a weight-management program. 221 participants (age 37.6 ± 7.02 years) completed a 12-month SDT-based lifestyle intervention and were followed-up for 24 months. Multiple linear regression analyses tested associations between psychosocial variables and self-reported short- and long-term PA outcomes. Regression analyses showed that control constructs of both theories were significant determinants of short- and long-term MVPA, whereas affective and self-determination variables were strong predictors of short- and long-term lifestyle PA. Regarding short-term prediction models, TPB constructs were stronger in predicting MVPA, whereas SDT was more effective in predicting lifestyle PA. For long-term models, both forms of PA were better predicted by SDT in comparison to TPB. These results highlight the importance of comparing health behavior theories to identify the mechanisms involved in the behavior change process. Control and competence constructs are crucial during early adoption of structured PA behaviors, whereas affective and intrinsic sources of motivation are more involved in incidental types of PA, particularly in relation to behavioral maintenance. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Development of the neurovascular unit (NVU) involves interactions between endothelial cells, pericytes, neuroprogenitor cells, and microglia. We constructed an in silico model of the developing neuroepithelium in CompuCell3D which recapitulated a suite of critical signaling pathw...
Analysis of functional importance of binding sites in the Drosophila gap gene network model.
Kozlov, Konstantin; Gursky, Vitaly V; Kulakovskiy, Ivan V; Dymova, Arina; Samsonova, Maria
2015-01-01
The statistical thermodynamics based approach provides a promising framework for construction of the genotype-phenotype map in many biological systems. Among the important aspects of a good model connecting the DNA sequence information with that of a molecular phenotype (gene expression) is the selection of regulatory interactions and relevant transcription factor binding sites. As the model may predict different levels of functional importance for specific binding sites in different genomic and regulatory contexts, it is essential to formulate and study such models under different modeling assumptions. We elaborate a two-layer model for the Drosophila gap gene network and include in the model a combined set of transcription factor binding sites and a concentration-dependent regulatory interaction between the gap genes hunchback and Kruppel. We show that the new variants of the model are more consistent in terms of gene expression predictions for various genetic constructs in comparison to previous work. We quantify the functional importance of binding sites by calculating their impact on gene expression in the model and calculate how these impacts correlate across all sites under different modeling assumptions. The assumption about the dual interaction between hb and Kr leads to the most consistent modeling results but, on the other hand, may obscure the existence of indirect interactions between binding sites in the regulatory regions of distinct genes. The analysis confirms the previously formulated regulation concept of many weak binding sites working in concert. The model predicts a more or less uniform distribution of functionally important binding sites over the sets of experimentally characterized regulatory modules and other open chromatin domains.
Cerruela García, G; García-Pedrajas, N; Luque Ruiz, I; Gómez-Nieto, M Á
2018-03-01
This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared to classical ensemble methods using four molecular datasets and eight different models for the representation of the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves the classification results with respect to classical ensemble methods. Therefore, we show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Y.
1993-01-01
Based on model approaches, three conifer species, red pine, Norway spruce and Scots pine, grown in plantations at Pack Demonstration Forest in the southeastern Adirondack mountains of New York, were chosen to study growth response to different environmental changes, including silvicultural treatments and changes in climate and chemical environment. Detailed stem analysis data provided a basis for constructing tree growth models. These models were organized into three groups: morphological, dynamic and predictive. The morphological model was designed to evaluate the relationship between tree attributes and the interactive influences of intrinsic and extrinsic factors on the annual increments. Three types of morphological patterns have been characterized: space-time patterns of whole-stem rings, intrinsic wood deposition patterns along the tree-stem, and bolewood allocation ratio patterns along the tree-stem. The dynamic model reflects the growth process as a system which responds to extrinsic signal inputs, including fertilization pulses, spacing effects and climatic disturbance, as well as intrinsic feedback. Growth signals indicative of climatic effects were used to construct growth-climate models using both multivariate analysis and Kalman filter methods. The predictive model utilized GCMs and growth-climate relationships to forecast tree growth responses in relation to future scenarios of CO2-induced climate change. Prediction results indicate that different conifer species have individualistic growth responses to future climatic change and suggest possible changes in the future growth and distribution of naturally occurring conifers in this region.
Prediction of coal grindability from exploration data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez, M.; Hazen, K.
1970-08-01
A general prediction model for the Hardgrove grindability index was constructed from 735 coal samples using the proximate analysis, heating value, and sulfur content. The coals used to develop the general model ranged in volatile matter from 12.8 to 49.2 percent, dry basis, and had grindability indexes ranging from 35 to 121. A restricted model applicable to bituminous coals having grindabilities in the 40 to 110 range was developed from the proximate analysis and the petrographic composition of the coal. The prediction of coal grindability within a single seam was also investigated. The results reported support the belief that mechanical properties of the coal are related to both chemical and petrographic factors of the coal. The mechanical properties of coal may be forecast in advance of mining, because the variables used as input to the prediction models can be measured from drill core samples collected during exploration.
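A hedged sketch of such a general prediction model is given below: an ordinary multiple regression of the Hardgrove grindability index on proximate-analysis variables, heating value, and sulfur. The coal records are synthetic stand-ins for the 735 samples, and the coefficients carry no physical meaning.

```python
# Multiple regression of Hardgrove grindability index on synthetic coal analyses.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n = 735
X = np.column_stack([
    rng.uniform(12.8, 49.2, n),     # volatile matter, % dry basis
    rng.uniform(2.0, 30.0, n),      # ash, %
    rng.uniform(10000, 15000, n),   # heating value, Btu/lb
    rng.uniform(0.5, 5.0, n),       # sulfur, %
])
hgi = 70 - 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.002 * X[:, 2] + rng.normal(scale=5, size=n)

model = LinearRegression().fit(X, hgi)
print("fitted coefficients:", np.round(model.coef_, 3), " R^2:", round(model.score(X, hgi), 2))
```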
Reheating predictions in gravity theories with derivative coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalianis, Ioannis; Koutsoumbas, George; Ntrekis, Konstantinos
2017-02-01
We investigate the inflationary predictions of a simple Horndeski theory in which the inflaton scalar field has a non-minimal derivative coupling (NMDC) to the Einstein tensor. The NMDC is well motivated for the construction of successful models of inflation; nevertheless, its inflationary predictions are not observationally distinct. We show that it is possible to probe the effects of the NMDC on the CMB observables by taking into account both the dynamics of the inflationary slow-roll phase and the subsequent reheating. We perform a comparative study between representative inflationary models with canonical fields minimally coupled to gravity and models with NMDC. We find that inflation models with dominant NMDC generically predict a higher reheating temperature and a different range for the tilt of the scalar perturbation spectrum n_s and the tensor-to-scalar ratio r, potentially testable by current and future CMB experiments.
CAD-model-based vision for space applications
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.
1988-01-01
A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given view point from the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.
Application of a Hybrid Model for Predicting the Incidence of Tuberculosis in Hubei, China
Zhang, Guoliang; Huang, Shuqiong; Duan, Qionghong; Shu, Wen; Hou, Yongchun; Zhu, Shiyu; Miao, Xiaoping; Nie, Shaofa; Wei, Sheng; Guo, Nan; Shan, Hua; Xu, Yihua
2013-01-01
Background: A prediction model for tuberculosis incidence is needed in China which may be used as a decision-supportive tool for planning health interventions and allocating health resources. Methods: The autoregressive integrated moving average (ARIMA) model was first constructed with the data of the tuberculosis report rate in Hubei Province from Jan 2004 to Dec 2011. The data from Jan 2012 to Jun 2012 were used to validate the model. Then the generalized regression neural network (GRNN)-ARIMA combination model was established based on the constructed ARIMA model. Finally, the fitting and prediction accuracy of the two models was evaluated. Results: A total of 465,960 cases were reported between Jan 2004 and Dec 2011 in Hubei Province. The report rate of tuberculosis was highest in 2005 (119.932 per 100,000 population) and lowest in 2010 (84.724 per 100,000 population). The time series of the tuberculosis report rate shows a gradual secular decline and a striking seasonal variation. The ARIMA (2, 1, 0) × (0, 1, 1)_12 model was selected from several plausible ARIMA models. The residual mean square errors of the GRNN-ARIMA model and the ARIMA model were 0.4467 and 0.6521 in the training part, and 0.0958 and 0.1133 in the validation part, respectively. The mean absolute error and mean absolute percentage error of the hybrid model were also less than those of the ARIMA model. Discussion and Conclusions: The gradual decline in the tuberculosis report rate may be attributed to the effect of intensive measures on tuberculosis. The striking seasonal variation may have resulted from several factors. We suppose that a delay in the surveillance system may also have contributed to the variation. According to the fitting and prediction accuracy, the hybrid model outperforms the traditional ARIMA model, which may facilitate the allocation of health resources in China. PMID:24223232
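A hedged sketch of the hybrid idea follows: fit the reported seasonal ARIMA, then model its residuals with a kernel regressor standing in for the GRNN (scikit-learn offers no GRNN, so KernelRidge is used purely as an analogue). The monthly series is synthetic, not the Hubei surveillance data.

```python
# Seasonal ARIMA plus a kernel-regression correction of its residuals (synthetic series).
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(7)
t = np.arange(96)                                              # 8 years of monthly data
rate = 110 - 0.2 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=2, size=96)

arima = SARIMAX(rate, order=(2, 1, 0), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
residual_model = KernelRidge(kernel="rbf", gamma=0.5).fit(t.reshape(-1, 1), arima.resid)
hybrid_fit = arima.fittedvalues + residual_model.predict(t.reshape(-1, 1))

print("RMSE, ARIMA only:", round(float(np.sqrt(np.mean(arima.resid ** 2))), 3))
print("RMSE, hybrid    :", round(float(np.sqrt(np.mean((rate - hybrid_fit) ** 2))), 3))
```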
Using simplifications of reality in the real world: Robust benefits of models for decision making
NASA Astrophysics Data System (ADS)
Hunt, R. J.
2008-12-01
Models are by definition simplifications of reality; the degree and nature of simplification, however, is debated. One view is "the world is 3D, heterogeneous, and transient, thus good models are too" - the more a model directly simulates the complexity of the real world the better it is considered to be. An alternative view is to only use simple models up front because real-world complexity can never be truly known. A third view is to construct and calibrate as many models as there are predictions. A fourth is to build highly parameterized models and either look at an ensemble of results, or use mathematical regularization to identify an optimal most reasonable parameter set and fit. Although each view may have utility for a given decision-making process, there are common threads that perhaps run through all views. First, the model-construction process itself can help the decision-making process because it raises the discussion of opposing parties from one of contrasting professional opinions to discussion of reasonable types and ranges of model inputs and processes. Secondly, no matter what view is used to guide the model building, model predictions for the future might be expected to perform poorly in the future due to unanticipated future changes and stressors to the underlying system simulated. Although this does not reduce the obligation of the modeler to build representative tools for the system, it should serve to temper expectations of model performance. Finally, perhaps the most under-appreciated utility of models is for calculating the reduction in prediction uncertainty resulting from different data collection strategies - an attractive feature separate from the calculation and minimization of absolute prediction uncertainty itself. This type of model output facilitates focusing on efficient use of current and future monitoring resources - something valued by many decision-makers regardless of background, system managed, and societal context.
Povey, Jane F; O'Malley, Christopher J; Root, Tracy; Martin, Elaine B; Montague, Gary A; Feary, Marc; Trim, Carol; Lang, Dietmar A; Alldread, Richard; Racher, Andrew J; Smales, C Mark
2014-08-20
Despite many advances in the generation of high producing recombinant mammalian cell lines over the last few decades, cell line selection and development is often slowed by the inability to predict a cell line's phenotypic characteristics (e.g. growth or recombinant protein productivity) at larger scale (large volume bioreactors) using data from early cell line construction at small culture scale. Here we describe the development of an intact cell MALDI-ToF mass spectrometry fingerprinting method for mammalian cells early in the cell line construction process whereby the resulting mass spectrometry data are used to predict the phenotype of mammalian cell lines at larger culture scale using a Partial Least Squares Discriminant Analysis (PLS-DA) model. Using MALDI-ToF mass spectrometry, a library of mass spectrometry fingerprints was generated for individual cell lines at the 96 deep well plate stage of cell line development. The growth and productivity of these cell lines were evaluated in a 10L bioreactor model of Lonza's large-scale (up to 20,000L) fed-batch cell culture processes. Using the mass spectrometry information at the 96 deep well plate stage and phenotype information at the 10L bioreactor scale a PLS-DA model was developed to predict the productivity of unknown cell lines at the 10L scale based upon their MALDI-ToF fingerprint at the 96 deep well plate scale. This approach provides the basis for the very early prediction of cell lines' performance in cGMP manufacturing-scale bioreactors and the foundation for methods and models for predicting other mammalian cell phenotypes from rapid, intact-cell mass spectrometry based measurements. Copyright © 2014 Elsevier B.V. All rights reserved.
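PLS-DA is commonly implemented as a PLS regression onto a dummy-coded class label, and that is the route sketched below. The spectra and productivity classes are synthetic placeholders for the MALDI-ToF fingerprints and 10L bioreactor phenotypes described above.

```python
# PLS-DA as PLS regression on a dummy-coded class label (synthetic spectra).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
spectra = rng.random((96, 400))                                  # 96 clones x 400 m/z bins
high_producer = (spectra[:, :20].mean(axis=1) > 0.5).astype(int) # synthetic 10L phenotype class

X_tr, X_te, y_tr, y_te = train_test_split(spectra, high_producer, random_state=0)
plsda = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = (plsda.predict(X_te).ravel() > 0.5).astype(int)           # threshold the PLS score
print("held-out accuracy:", round(float((pred == y_te).mean()), 2))
```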
Yu, Ya-Hui; Xia, Wei-Xiong; Shi, Jun-Li; Ma, Wen-Juan; Li, Yong; Ye, Yan-Fang; Liang, Hu; Ke, Liang-Ru; Lv, Xing; Yang, Jing; Xiang, Yan-Qun; Guo, Xiang
2016-06-29
For patients with nasopharyngeal carcinoma (NPC) who undergo re-irradiation with intensity-modulated radiotherapy (IMRT), lethal nasopharyngeal necrosis (LNN) is a severe late adverse event. The purpose of this study was to identify risk factors for LNN and develop a model to predict LNN after radical re-irradiation with IMRT in patients with recurrent NPC. Patients who underwent radical re-irradiation with IMRT for locally recurrent NPC between March 2001 and December 2011 and who had no evidence of distant metastasis were included in this study. Clinical characteristics, including recurrent carcinoma conditions and dosimetric features, were evaluated as candidate risk factors for LNN. Logistic regression analysis was used to identify independent risk factors and construct the predictive scoring model. Among 228 patients enrolled in this study, 204 were at risk of developing LNN based on risk analysis. Of the 204 patients treated, 31 (15.2%) developed LNN. Logistic regression analysis showed that female sex (P = 0.008), necrosis before re-irradiation (P = 0.008), accumulated total prescription dose to the gross tumor volume (GTV) ≥145.5 Gy (P = 0.043), and recurrent tumor volume ≥25.38 cm³ (P = 0.009) were independent risk factors for LNN. A model to predict LNN was then constructed that included these four independent risk factors. A model that includes sex, necrosis before re-irradiation, accumulated total prescription dose to GTV, and recurrent tumor volume can effectively predict the risk of developing LNN in NPC patients who undergo radical re-irradiation with IMRT.
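A minimal sketch of a four-factor logistic model and a crude additive point score derived from it is shown below. The patient records are simulated and the effect sizes are assumptions; they are not the published model.

```python
# Four-factor logistic model and a crude point score on simulated patients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 204
X = np.column_stack([
    rng.integers(0, 2, n),    # female sex
    rng.integers(0, 2, n),    # necrosis before re-irradiation
    rng.integers(0, 2, n),    # accumulated dose to GTV >= 145.5 Gy
    rng.integers(0, 2, n),    # recurrent tumor volume >= 25.38 cm^3
])
logit = -2.5 + X @ np.array([0.9, 0.9, 0.7, 1.0])                 # assumed effect sizes
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))                  # simulated LNN outcomes

model = LogisticRegression().fit(X, y)
points = np.round(model.coef_.ravel() / np.abs(model.coef_).min())
print("risk points per factor:", points, "intercept:", round(float(model.intercept_[0]), 2))
```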
Comparison of free-piston Stirling engine model predictions with RE1000 engine test data
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.
1984-01-01
Predictions of a free-piston Stirling engine model are compared with RE1000 engine test data taken at NASA-Lewis Research Center. The model validation and the engine testing are being done under a joint interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA-Lewis. A kinematic code developed at Lewis was upgraded to permit simulation of free-piston engine performance; it was further upgraded and modified at Lewis and is currently being validated. The model predicts engine performance by numerical integration of equations for each control volume in the working space. Piston motions are determined by numerical integration of the force balance on each piston or can be specified as Fourier series. In addition, the model Fourier analyzes the various piston forces to permit the construction of phasor force diagrams. The paper compares predicted and experimental values of power and efficiency and shows phasor force diagrams for the RE1000 engine displacer and piston. Further development plans for the model are also discussed.
Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan
2015-10-01
Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies. © The Author(s) 2015.
Helbling, Damian E; Johnson, David R; Lee, Tae Kwon; Scheidegger, Andreas; Fenner, Kathrin
2015-03-01
The rates at which wastewater treatment plant (WWTP) microbial communities biotransform specific substrates can differ by orders of magnitude among WWTP communities. Differences in taxonomic compositions among WWTP communities may predict differences in the rates of some types of biotransformations. In this work, we present a novel framework for establishing predictive relationships between specific bacterial 16S rRNA sequence abundances and biotransformation rates. We selected ten WWTPs with substantial variation in their environmental and operational metrics and measured the in situ ammonia biotransformation rate constants in nine of them. We isolated total RNA from samples from each WWTP and analyzed 16S rRNA sequence reads. We then developed multivariate models between the measured abundances of specific bacterial 16S rRNA sequence reads and the ammonia biotransformation rate constants. We constructed model scenarios that systematically explored the effects of model regularization, model linearity and non-linearity, and aggregation of 16S rRNA sequences into operational taxonomic units (OTUs) as a function of sequence dissimilarity threshold (SDT). A large percentage (greater than 80%) of model scenarios resulted in well-performing and significant models at intermediate SDTs of 0.13-0.14 and 0.26. The 16S rRNA sequences consistently selected into the well-performing and significant models at those SDTs were classified as Nitrosomonas and Nitrospira groups. We then extend the framework by applying it to the biotransformation rate constants of ten micropollutants measured in batch reactors seeded with the ten WWTP communities. We identified phylogenetic groups that were robustly selected into all well-performing and significant models constructed with biotransformation rates of isoproturon, propachlor, ranitidine, and venlafaxine. These phylogenetic groups can be used as predictive biomarkers of WWTP microbial community activity towards these specific micropollutants. This work is an important step towards developing tools to predict biotransformation rates in WWTPs based on taxonomic composition. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardiansyah, Deni
2016-09-15
Purpose: The aim of this study was to investigate the accuracy of PET-based treatment planning for predicting the time-integrated activity coefficients (TIACs). Methods: The parameters of a physiologically based pharmacokinetic (PBPK) model were fitted to the biokinetic data of 15 patients to derive assumed true parameters and were used to construct true mathematical patient phantoms (MPPs). Biokinetics of 150 MBq 68Ga-DOTATATE-PET was simulated with different noise levels [fractional standard deviation (FSD) 10%, 1%, 0.1%, and 0.01%], and seven combinations of measurements at 30 min, 1 h, and 4 h p.i. PBPK model parameters were fitted to the simulated noisy PET data using population-based Bayesian parameters to construct predicted MPPs. Therapy simulations were performed as 30 min infusion of 90Y-DOTATATE of 3.3 GBq in both true and predicted MPPs. Prediction accuracy was then calculated as the relative variability v_organ between TIACs from both MPPs. Results: Large variability values of one time-point protocols [e.g., FSD = 1%, 240 min p.i., v_kidneys = (9 ± 6)%, and v_tumor = (27 ± 26)%] show inaccurate prediction. Accurate TIAC prediction of the kidneys was obtained for the case of two measurements (1 and 4 h p.i.), e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 10)%, or three measurements, e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 9)%. Conclusions: 68Ga-DOTATATE-PET measurements could possibly be used to predict the TIACs of 90Y-DOTATATE when using a PBPK model and population-based Bayesian parameters. The two time-point measurement at 1 and 4 h p.i. with a noise up to FSD = 1% allows an accurate prediction of the TIACs in kidneys.
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
Horner, Marc; Muralikrishnan, R.
2010-01-01
Purpose: A computational fluid dynamics (CFD) study examined the impact of particle size on the dissolution rate and residence of intravitreal suspension depots of Triamcinolone Acetonide (TAC). Methods: A model for the rabbit eye was constructed using insights from high-resolution NMR imaging studies (Sawada 2002). The current model was compared to other published simulations in its ability to predict clearance of various intravitreally injected materials. Suspension depots were constructed by explicitly rendering individual particles in various configurations: 4 or 16 mg drug confined to a 100 μL spherical depot, or 4 mg exploded to fill the entire vitreous. Particle size was reduced systematically in each configuration. The convective diffusion/dissolution process was simulated using a multiphase model. Results: Release rate became independent of particle diameter below a certain value. The size-independent limits occurred for particle diameters ranging from 77 to 428 μm, depending upon the depot configuration. Residence time predicted for the spherical depots in the size-independent limit was comparable to that observed in vivo. Conclusions: Since the size-independent limit was several-fold greater than the particle size of commercially available pharmaceutical TAC suspensions, differences in particle size amongst such products are predicted to be immaterial to their duration or performance. PMID:20467888
NASA Astrophysics Data System (ADS)
Zapata, Brian Jarvis
As military and diplomatic representatives of the United States are deployed throughout the world, they must frequently make use of local, existing facilities; it is inevitable that some of these will be load bearing unreinforced masonry (URM) structures. Although generally suitable for conventional design loads, load bearing URM presents a unique hazard, with respect to collapse, when exposed to blast loading. There is therefore a need to study the blast resistance of load bearing URM construction in order to better protect US citizens assigned to dangerous locales. To address this, the Department of Civil and Environmental Engineering at the University of North Carolina at Charlotte conducted three blast tests inside a decommissioned, coal-fired, power plant prior to its scheduled demolition. The power plant's walls were constructed of URM and provided an excellent opportunity to study the response of URM walls in-situ. Post-test analytical studies investigated the ability of existing blast load prediction methodologies to model the case of a cylindrical charge with a low height of burst. It was found that even for the relatively simple blast chamber geometries of these tests, simplified analysis methods predicted blast impulses with an average net error of 22%. The study suggested that existing simplified analysis methods would benefit from additional development to better predict blast loads from cylinders detonated near the ground's surface. A hydrocode, CTH, was also used to perform two and three-dimensional simulations of the blast events. In order to use the hydrocode, Jones Wilkins Lee (JWL) equation of state (EOS) coefficients were developed for the experiment's Unimax dynamite charges; a novel energy-scaling technique was developed which permits the derivation of new JWL coefficients from an existing coefficient set. The hydrocode simulations were able to simulate blast impulses with an average absolute error of 34.5%. Moreover, the hydrocode simulations provided highly resolved spatio-temporal blast loading data for subsequent structural simulations. Equivalent single-degree-of-freedom (ESDOF) structural response models were then used to predict the out-of-plane deflections of blast chamber walls. A new resistance function was developed which permits a URM wall to crack at any height; numerical methodologies were also developed to compute transformation factors required for use in the ESDOF method. When combined with the CTH derived blast loading predictions, the ESDOF models were able to predict out-of-plane deflections with reasonable accuracy. Further investigations were performed using finite element models constructed in LS-DYNA; the models used elastic elements combined with contacts possessing a tension/shear cutoff and the ability to simulate fracture energy release. Using the CTH predicted blast loads and carefully selected constitutive parameters, the LS-DYNA models were able to both qualitatively and quantitatively predict blast chamber wall deflections and damage patterns. Moreover, the finite element models suggested several modes of response which cannot be modeled by current ESDOF methods; the effect of these response modes on the accuracy of ESDOF predictions warrants further study.
Time prediction of failure for a type of lamp using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the predicted average failure time of a type of lamp. The estimate is for a parametric model, the general composite hazard rate model. The exponential distribution, which has a constant hazard function, is used as the baseline random-time model. We discuss an example of survival model estimation for a composite hazard function with an exponential baseline. The model is estimated by fitting its parameters through construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for the lamp type: the data are grouped into several intervals, the average failure value is computed for each interval, and the average failure time is calculated from these intervals. The p-value obtained from the goodness-of-fit test is 0.3296.
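A minimal sketch of the exponential (constant-hazard) baseline step is given below, assuming synthetic failure times; the grouping into intervals and the composite hazard component are not shown.

    import numpy as np

    # Exponential baseline: the MLE of the rate is 1 / mean(t), and the
    # predicted average failure time is 1 / rate. Failure times are synthetic.
    failure_times = np.array([120, 340, 560, 210, 480, 90, 300], dtype=float)  # hours
    rate_hat = 1.0 / failure_times.mean()          # MLE of the constant hazard
    mean_failure_time = 1.0 / rate_hat             # predicted average failure time
    survival = lambda t: np.exp(-rate_hat * t)     # fitted survival function
    print(mean_failure_time, survival(200.0))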
McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel change information in natural settings for model validation is difficult because it can be expensive and, under most channel-forming flows, the resulting channel change is generally small. Several channel restoration projects on the Kootenai River, ID, designed in part to armor large meanders with several large spurs constructed of wooden piles, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high channel-forming flows post-construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are underpredicted.
Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong
2015-01-01
In IaaS (infrastructure as a service) cloud environments, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurately predicting resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. The architecture of the prediction model is then constructed, and several base predictors are adopted to compose the ensemble model. The structure and learning algorithm of the fuzzy neural network are then investigated. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, different criteria are adopted to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands. PMID:25691896
ERIC Educational Resources Information Center
Knowlden, Adam P.; Sharma, Manoj; Bernard, Amy L.
2012-01-01
The purpose of this study was to operationalize the constructs of the Theory of Planned Behavior (TPB) to predict the sleep intentions and behaviors of undergraduate college students attending a Midwestern University. Data collection spanned three phases. The first phase included a semi-structured qualitative interview (n = 11), readability by…
Health belief model and reasoned action theory in predicting water saving behaviors in yazd, iran.
Morowatisharifabad, Mohammad Ali; Momayyezi, Mahdieh; Ghaneian, Mohammad Taghi
2012-01-01
People's behaviors and intentions about healthy behaviors depend on their beliefs, values, and knowledge about the issue. Various models of health education are used in determining predictors of different healthy behaviors, but their efficacy for cultural behaviors, such as water saving behaviors, has not been studied. The study was conducted to explain water saving behaviors in Yazd, Iran on the basis of the Health Belief Model and Reasoned Action Theory. The cross-sectional study used random cluster sampling to recruit 200 heads of households to collect the data. The survey questionnaire was tested for its content validity and reliability. Analysis of data included descriptive statistics, simple correlation, and hierarchical multiple regression. Simple correlations between water saving behaviors and Reasoned Action Theory and Health Belief Model constructs were statistically significant. Health Belief Model and Reasoned Action Theory constructs explained 20.80% and 8.40% of the variance in water saving behaviors, respectively. Perceived barriers were the strongest predictor. Additionally, there was a statistically positive correlation between water saving behaviors and intention. In designing interventions aimed at water waste prevention, barriers to water saving behaviors should be addressed first, followed by people's attitude towards water saving. The Health Belief Model constructs, with the exception of perceived severity and benefits, are more powerful than Reasoned Action Theory in predicting water saving behavior and may be used as a framework for educational interventions aimed at improving water saving behaviors.
Health Belief Model and Reasoned Action Theory in Predicting Water Saving Behaviors in Yazd, Iran
Morowatisharifabad, Mohammad Ali; Momayyezi, Mahdieh; Ghaneian, Mohammad Taghi
2012-01-01
Background: People's behaviors and intentions about healthy behaviors depend on their beliefs, values, and knowledge about the issue. Various models of health education are used in determining predictors of different healthy behaviors, but their efficacy for cultural behaviors, such as water saving behaviors, has not been studied. The study was conducted to explain water saving behaviors in Yazd, Iran on the basis of the Health Belief Model and Reasoned Action Theory. Methods: The cross-sectional study used random cluster sampling to recruit 200 heads of households to collect the data. The survey questionnaire was tested for its content validity and reliability. Analysis of data included descriptive statistics, simple correlation, and hierarchical multiple regression. Results: Simple correlations between water saving behaviors and Reasoned Action Theory and Health Belief Model constructs were statistically significant. Health Belief Model and Reasoned Action Theory constructs explained 20.80% and 8.40% of the variance in water saving behaviors, respectively. Perceived barriers were the strongest predictor. Additionally, there was a statistically positive correlation between water saving behaviors and intention. Conclusion: In designing interventions aimed at water waste prevention, barriers to water saving behaviors should be addressed first, followed by people's attitude towards water saving. The Health Belief Model constructs, with the exception of perceived severity and benefits, are more powerful than Reasoned Action Theory in predicting water saving behavior and may be used as a framework for educational interventions aimed at improving water saving behaviors. PMID:24688927
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, and it is an area that has recently experienced great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit general covariance structures to be accommodated, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
Do bioclimate variables improve performance of climate envelope models?
Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.
2012-01-01
Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.
Predictive model for CO2 generation and decay in building envelopes
NASA Astrophysics Data System (ADS)
Aglan, Heshmat A.
2003-01-01
Understanding carbon dioxide generation and decay patterns in buildings with high occupancy levels is useful for identifying their indoor air quality, air change rates, percent fresh air makeup, and occupancy pattern, and for determining how a variable air volume system can be modulated to offset undesirable CO2 levels. A mathematical model governing the generation and decay of CO2 in building envelopes with forced ventilation due to high occupancy is developed. The model has been verified experimentally in a newly constructed energy-efficient healthy house. It was shown that the model accurately predicts the CO2 concentration at any time during the generation and decay processes.
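The paper's model itself is not reproduced here, but a minimal single-zone, well-mixed mass-balance sketch of CO2 generation and decay under forced ventilation is shown below; all parameter values are illustrative assumptions.

    import numpy as np

    # Single-zone, well-mixed CO2 mass balance: V dC/dt = G + Q*(C_out - C).
    # Parameter values are illustrative, not those of the paper's test house.
    V = 300.0        # zone volume, m^3
    Q = 0.15         # ventilation rate, m^3/s
    G = 0.005        # CO2 generation by occupants, m^3/s
    C_out = 400e-6   # outdoor CO2 fraction
    C0 = 400e-6      # initial indoor CO2 fraction

    def co2(t):
        # Analytical solution of the mass balance for constant G and Q.
        C_ss = C_out + G / Q                     # steady-state concentration
        return C_ss + (C0 - C_ss) * np.exp(-Q * t / V)

    for t in (0, 600, 1800, 3600):               # seconds
        print(t, round(co2(t) * 1e6), "ppm")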
2016-01-01
Modeling and prediction of polar organic chemical integrative sampler (POCIS) sampling rates (Rs) for 73 compounds using artificial neural networks (ANNs) is presented for the first time. Two models were constructed: the first was developed ab initio using a genetic algorithm (GSD-model) to shortlist 24 descriptors covering constitutional, topological, geometrical and physicochemical properties, and the second model was adapted for Rs prediction from a previous chromatographic retention model (RTD-model). Mechanistic evaluation of descriptors showed that the models did not require comprehensive a priori information to predict Rs. Average prediction errors for the verification and blind test sets were 0.03 ± 0.02 L d⁻¹ (RTD-model) and 0.03 ± 0.03 L d⁻¹ (GSD-model) relative to experimentally determined Rs. Prediction variability in replicated models was the same or less than for measured Rs. The networks were externally validated using a measured Rs data set of six benzodiazepines. The RTD-model performed best in comparison to the GSD-model for these compounds (average absolute errors of 0.0145 ± 0.008 L d⁻¹ and 0.0437 ± 0.02 L d⁻¹, respectively). Improvements to the generalizability of the modeling approaches will be reliant on standardized guidelines for Rs measurement. The use of in silico tools for Rs determination represents a more economical approach than laboratory calibrations. PMID:27363449
Bian, Cheng; Xu, Shuman; Wang, Heng; Li, Niannian; Wu, Jingya; Zhao, Yunwu; Li, Peng; Lu, Hua
2015-01-01
The high prevalence of risky, irrational drug use behaviors means that outpatients face high risks of drug resistance and even death. This study represents the first application of the Information-Motivation-Behavioral Skills (IMB) model to rational drug use behavior among second-level hospital outpatients from three prefecture-level cities in Anhui, China. Using the IMB model, our study examined predictors of rational drug use behavior and determined the associations between the model constructs. This study was conducted with a sample of 1,214 outpatients aged 18 years and older in Anhui second-level hospitals and applied structural equation modeling (SEM) to test predictive relations among the IMB model variables related to rational drug use behavior. Age, information and motivation had significant direct effects on rational drug use behavior. Behavioral skills as an intermediate variable also significantly predicted more rational drug use behavior. Female gender, higher educational level, more information and more motivation predicted more behavioral skills. In addition, there were significant indirect impacts on rational drug use behavior mediated through behavioral skills. The IMB-based model explained the relationships between the constructs and the rational drug use behavior of outpatients in detail, and it suggests that future interventions among second-level hospital outpatients should consider demographic characteristics and should focus on improving motivation and behavioral skills in addition to disseminating knowledge.
Wang, Heng; Li, Niannian; Wu, Jingya; Zhao, Yunwu; Li, Peng; Lu, Hua
2015-01-01
Background: The high prevalence of risky, irrational drug use behaviors means that outpatients face high risks of drug resistance and even death. This study represents the first application of the Information-Motivation-Behavioral Skills (IMB) model to rational drug use behavior among second-level hospital outpatients from three prefecture-level cities in Anhui, China. Using the IMB model, our study examined predictors of rational drug use behavior and determined the associations between the model constructs. Methods: This study was conducted with a sample of 1,214 outpatients aged 18 years and older in Anhui second-level hospitals and applied structural equation modeling (SEM) to test predictive relations among the IMB model variables related to rational drug use behavior. Results: Age, information and motivation had significant direct effects on rational drug use behavior. Behavioral skills as an intermediate variable also significantly predicted more rational drug use behavior. Female gender, higher educational level, more information and more motivation predicted more behavioral skills. In addition, there were significant indirect impacts on rational drug use behavior mediated through behavioral skills. Conclusions: The IMB-based model explained the relationships between the constructs and the rational drug use behavior of outpatients in detail, and it suggests that future interventions among second-level hospital outpatients should consider demographic characteristics and should focus on improving motivation and behavioral skills in addition to disseminating knowledge. PMID:26275301
Computer-Aided Construction of Chemical Kinetic Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, William H.
2014-12-31
The combustion chemistry of even simple fuels can be extremely complex, involving hundreds or thousands of kinetically significant species. The most reasonable way to deal with this complexity is to use a computer not only to numerically solve the kinetic model, but also to construct the kinetic model in the first place. Because these large models contain so many numerical parameters (e.g. rate coefficients, thermochemistry) one never has sufficient data to uniquely determine them all experimentally. Instead one must work in “predictive” mode, using theoretical rather than experimental values for many of the numbers in the model, and, as appropriate, refining the most sensitive numbers through experiments. Predictive chemical kinetics is exactly what is needed for computer-aided design of combustion systems based on proposed alternative fuels, particularly for early assessment of the value and viability of proposed new fuels before those fuels are commercially available. This project was aimed at making accurate predictive chemical kinetics practical; this is a challenging goal which requires a range of science advances. The project spanned a wide range from quantum chemical calculations on individual molecules and elementary-step reactions, through the development of improved rate/thermo calculation procedures, the creation of algorithms and software for constructing and solving kinetic simulations, the invention of methods for model reduction while maintaining error control, and finally comparisons with experiment. Many of the parameters in the models were derived from quantum chemistry calculations, and the models were compared with experimental data measured in our lab or in collaboration with others.
NASA Astrophysics Data System (ADS)
Luo, Junhui; Mi, Decai; Ye, Qiongyao; Deng, Shengqiang; Zeng, Fuquan; Zeng, Yongjun
2018-01-01
Carbonaceous rock disintegrates easily, softens, swells and is environmentally sensitive, so it is classified as soft surrounding rock; deformation during excavation and the long-term stability of the surrounding rock are common problems in the construction of carbonaceous rock tunnels. Accordingly, the displacement, temperature and osmotic pressure of the carbonaceous surrounding rock were monitored and measured in a tunnel of the Guangxi Hebai highway. Based on the monitoring data, the creep mechanism of the surrounding rock was studied using the Singh-Mitchell model, and the deformation of the surrounding rock was predicted before the tunnel entered operation. The results show that the Singh-Mitchell creep model can effectively analyse and predict the deformation development of the surrounding rock without considering temperature and osmotic pressure, and it can provide a reference for the construction of carbonaceous rock tunnels and for preventive and reinforcement measures.
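For orientation, a minimal sketch of the Singh-Mitchell creep law (strain rate decaying as a power of time) is shown below; the parameter values are illustrative assumptions, not the values fitted from the monitoring data.

    import numpy as np

    # Singh-Mitchell form: eps_dot(t) = A * exp(alpha * D) * (t1 / t)**m,
    # integrated numerically to give the creep strain history.
    A, alpha, m = 1e-4, 2.0, 0.8   # illustrative model parameters
    D = 0.5                        # deviatoric stress level (0..1)
    t1 = 1.0                       # reference time (days)

    t = np.linspace(t1, 365.0, 2000)                        # days
    eps_rate = A * np.exp(alpha * D) * (t1 / t) ** m        # creep strain rate
    increments = 0.5 * (eps_rate[1:] + eps_rate[:-1]) * np.diff(t)  # trapezoidal rule
    eps = np.concatenate(([0.0], np.cumsum(increments)))
    print("predicted creep strain after one year:", eps[-1])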
NASA Astrophysics Data System (ADS)
Yu, Jianbo
2015-12-01
Prognostics is an efficient means of achieving zero-downtime performance, maximum productivity and proactive maintenance of machines. Prognostics aims to assess and predict the time evolution of machine health degradation so that machine failures can be predicted and prevented. A novel prognostics system is developed based on a data-model-fusion scheme using a Bayesian inference-based self-organizing map (SOM) and an integration of logistic regression (LR) and high-order particle filtering (HOPF). In this prognostics system, a baseline SOM is constructed to model the data distribution space of a healthy machine under the assumption that predictable fault patterns are not available. A Bayesian inference-based probability (BIP) derived from the baseline SOM is developed as a quantitative indication of machine health degradation. BIP is capable of offering a failure probability for the monitored machine, which has an intuitive interpretation related to the health degradation state. Based on these historic BIPs, the constructed LR and its modeling noise constitute a high-order Markov process (HOMP) to describe machine health propagation. HOPF is used to solve the HOMP estimation to predict the evolution of the machine health in the form of a probability density function (PDF). An on-line model update scheme is developed to adapt the Markov process to machine health dynamics quickly. The experimental results on a bearing test-bed illustrate the potential applications of the proposed system as an effective and simple tool for machine health prognostics.
Stuart, K; Adderley, N J; Marshall, T; Rayman, G; Sitch, A; Manley, S; Ghosh, S; Toulis, K A; Nirantharakumar, K
2017-10-01
To explore whether a quantitative approach to identifying hospitalized patients with diabetes at risk of hypoglycaemia would be feasible through incorporation of routine biochemical, haematological and prescription data. A retrospective cross-sectional analysis of all diabetic admissions (n=9584) from 1 January 2014 to 31 December 2014 was performed. Hypoglycaemia was defined as a blood glucose level of <4 mmol/l. The prediction model was constructed using multivariable logistic regression, populated by clinically important variables and routine laboratory data. Using a prespecified variable selection strategy, it was shown that the occurrence of inpatient hypoglycaemia could be predicted by a combined model taking into account background medication (type of insulin, use of sulfonylureas), ethnicity (black and Asian), age (≥75 years), type of admission (emergency) and laboratory measurements (estimated GFR, C-reactive protein, sodium and albumin). Receiver-operating curve analysis showed that the area under the curve was 0.733 (95% CI 0.719 to 0.747). The threshold chosen to maximize both sensitivity and specificity was 0.15. The area under the curve obtained from internal validation did not differ from the primary model [0.731 (95% CI 0.717 to 0.746)]. The inclusion of routine biochemical data, available at the time of admission, can add prognostic value to demographic and medication history. The predictive performance of the constructed model indicates potential clinical utility for the identification of patients at risk of hypoglycaemia during their inpatient stay. © 2017 Diabetes UK.
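A minimal sketch of the general modelling approach (multivariable logistic regression with an ROC-AUC check and a 0.15 risk threshold) is given below; the synthetic features stand in for the medication, demographic and laboratory variables and are not the study's data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Synthetic admissions: six stand-in predictors (e.g. insulin use, age>=75,
    # eGFR, CRP, sodium, albumin) and a binary hypoglycaemia outcome.
    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 6))
    logit = -2.0 + X @ np.array([0.8, 0.5, -0.6, 0.4, -0.3, -0.4])
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    model = LogisticRegression(max_iter=1000).fit(X, y)
    risk = model.predict_proba(X)[:, 1]
    print("AUC:", roc_auc_score(y, risk))
    flagged = risk >= 0.15        # threshold analogous to the one chosen in the paper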
Prognostic models for renal cell carcinoma recurrence: external validation in a Japanese population.
Utsumi, Takanobu; Ueda, Takeshi; Fukasawa, Satoshi; Komaru, Atsushi; Sazuka, Tomokazu; Kawamura, Koji; Imamoto, Takashi; Nihei, Naoki; Suzuki, Hiroyoshi; Ichikawa, Tomohiko
2011-09-01
The aim of the present study was to compare the accuracy of three prognostic models in predicting recurrence-free survival among Japanese patients who underwent nephrectomy for non-metastatic renal cell carcinoma (RCC). Patients originated from two centers: Chiba University Hospital (n = 152) and Chiba Cancer Center (n = 65). The following data were collected: age, sex, clinical presentation, Eastern Cooperative Oncology Group performance status, surgical technique, 1997 tumor-node-metastasis stage, clinical and pathological tumor size, histological subtype, disease recurrence, and progression. Three western models, including Yaycioglu's model, Cindolo's model and Kattan's nomogram, were used to predict recurrence-free survival. The predictive accuracy of these models was validated using Harrell's concordance index. Concordance indices were 0.795 and 0.745 for Kattan's nomogram, 0.700 and 0.634 for Yaycioglu's model, and 0.700 and 0.634 for Cindolo's model, respectively. Furthermore, the constructed calibration plots of Kattan's nomogram overestimated the predicted probability of recurrence-free survival after 5 years compared with the actual probability. Our findings suggest that, despite working better than other predictive tools, Kattan's nomogram needs to be used with caution when applied to Japanese patients who have undergone nephrectomy for non-metastatic RCC. © 2011 The Japanese Urological Association.
Chun, Ting Sie; Malek, M A; Ismail, Amelia Ritahani
2015-01-01
The development of effluent removal prediction is crucial in providing a planning tool necessary for the future development and construction of a septic sludge treatment plant (SSTP), especially in developing countries. In order to investigate the expected compliance with the required standard, the prediction of effluent quality, namely biological oxygen demand, chemical oxygen demand and total suspended solids, of an SSTP was modelled using an artificial intelligence approach. In this paper, we adopt the clonal selection algorithm (CSA) to set up a prediction model, with a well-established method, namely the least-squares support vector machine (LS-SVM), as a baseline model. The test results of the case study showed that the prediction of the CSA-based SSTP model worked well and provided model performance as satisfactory as the LS-SVM model. The CSA approach requires fewer control and training parameters for model simulation compared with the LS-SVM approach. The ability of the CSA approach to handle limited data samples, non-linear sample functions and multidimensional pattern recognition makes it a powerful tool for modelling the prediction of effluent removals in an SSTP.
Construction schedule simulation of a diversion tunnel based on the optimized ventilation time.
Wang, Xiaoling; Liu, Xuepeng; Sun, Yuefeng; An, Juan; Zhang, Jing; Chen, Hongchao
2009-06-15
In former studies, the methods for estimating ventilation time in construction schedule simulation were all empirical. In real construction schedules, however, many factors affect the ventilation time. Therefore, in this paper 3D unsteady quasi-single-phase models are proposed to optimize the ventilation time for different tunneling lengths. The effect of buoyancy is considered in the momentum equation of the CO transport model, while the effects of inter-phase drag, lift force, and virtual mass force are taken into account in the momentum source of the dust transport model. The prediction by the present model of airflow in a diversion tunnel is confirmed by the experimental values reported by Nakayama [Nakayama, In-situ measurement and simulation by CFD of methane gas distribution at a heading faces, Shigen-to-Sozai 114 (11) (1998) 769-775]. The construction ventilation of the diversion tunnel of the XinTangfang power station in China is used as a case study. The distributions of airflow, CO and dust in the diversion tunnel are analyzed. A theoretical method for GIS-based dynamic visual simulation of the construction processes of underground structure groups is presented that combines cyclic operation network simulation, system simulation, network plan optimization, and GIS-based 3D visualization of construction processes. Based on the optimized ventilation time, the construction schedule of the diversion tunnel is simulated using this method.
A Seamless, High-Resolution, Coastal Digital Elevation Model (DEM) for Southern California
Barnard, Patrick L.; Hoover, Daniel
2010-01-01
A seamless, 3-meter digital elevation model (DEM) was constructed for the entire Southern California coastal zone, extending 473 km from Point Conception to the Mexican border. The goal was to integrate the most recent, high-resolution datasets available (for example, Light Detection and Ranging (Lidar) topography, multibeam and single beam sonar bathymetry, and Interferometric Synthetic Aperture Radar (IfSAR) topography) into a continuous surface from at least the 20-m isobath to the 20-m elevation contour. This dataset was produced to provide critical boundary conditions (bathymetry and topography) for a modeling effort designed to predict the impacts of severe winter storms on the Southern California coast (Barnard and others, 2009). The hazards model, run in real-time or with prescribed scenarios, incorporates atmospheric information (wind and pressure fields) with a suite of state-of-the-art physical process models (tide, surge, and wave) to enable detailed prediction of water levels, run-up, wave heights, and currents. Research-grade predictions of coastal flooding, inundation, erosion, and cliff failure are also included. The DEM was constructed to define the general shape of nearshore, beach and cliff surfaces as accurately as possible, with less emphasis on the detailed variations in elevation inland of the coast and on bathymetry inside harbors. As a result this DEM should not be used for navigation purposes.
Yoon, Sung Ho; Turkarslan, Serdar; Reiss, David J.; Pan, Min; Burn, June A.; Costa, Kyle C.; Lie, Thomas J.; Slagel, Joseph; Moritz, Robert L.; Hackett, Murray; Leigh, John A.; Baliga, Nitin S.
2013-01-01
Methanogens catalyze the critical methane-producing step (called methanogenesis) in the anaerobic decomposition of organic matter. Here, we present the first predictive model of global gene regulation of methanogenesis in a hydrogenotrophic methanogen, Methanococcus maripaludis. We generated a comprehensive list of genes (protein-coding and noncoding) for M. maripaludis through integrated analysis of the transcriptome structure and a newly constructed Peptide Atlas. The environment and gene-regulatory influence network (EGRIN) model of the strain was constructed from a compendium of transcriptome data that was collected over 58 different steady-state and time-course experiments that were performed in chemostats or batch cultures under a spectrum of environmental perturbations that modulated methanogenesis. Analyses of the EGRIN model have revealed novel components of methanogenesis that included at least three additional protein-coding genes of previously unknown function as well as one noncoding RNA. We discovered that at least five regulatory mechanisms act in a combinatorial scheme to intercoordinate key steps of methanogenesis with different processes such as motility, ATP biosynthesis, and carbon assimilation. Through a combination of genetic and environmental perturbation experiments we have validated the EGRIN-predicted role of two novel transcription factors in the regulation of phosphate-dependent repression of formate dehydrogenase—a key enzyme in the methanogenesis pathway. The EGRIN model demonstrates regulatory affiliations within methanogenesis as well as between methanogenesis and other cellular functions. PMID:24089473
Shao, Wei; Liu, Mingxia; Zhang, Daoqiang
2016-01-01
The systematic study of subcellular location patterns is very important for fully characterizing the human proteome. Nowadays, with the great advances in automated microscopic imaging, accurate bioimage-based classification methods to predict protein subcellular locations are highly desired. All existing models were constructed on the independent parallel hypothesis, where the cellular component classes are positioned independently in a multi-class classification engine, so the important structural information of cellular compartments is missed. To address this problem and develop more accurate models, we proposed a novel cell structure-driven classifier construction approach (SC-PSorter) that employs prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error correcting output coding framework. Then, we construct multiple SC-PSorter-based classifiers corresponding to the columns of the error correcting output coding codeword matrix using a multi-kernel support vector machine classification approach. Finally, we perform the classifier ensemble by combining those multiple SC-PSorter-based classifiers via majority voting. We evaluate our method on a collection of 1636 immunohistochemistry images from the Human Protein Atlas database. The experimental results show that our method achieves an overall accuracy of 89.0%, which is 6.4% higher than the state-of-the-art method. The dataset and code can be downloaded from https://github.com/shaoweinuaa/. Contact: dqzhang@nuaa.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Modeling of autocatalytic hydrolysis of adefovir dipivoxil in solid formulations.
Dong, Ying; Zhang, Yan; Xiang, Bingren; Deng, Haishan; Wu, Jingfang
2011-04-01
The stability and hydrolysis kinetics of a phosphate prodrug, adefovir dipivoxil, in solid formulations were studied. The stability relationship between five solid formulations was explored. An autocatalytic mechanism for hydrolysis could be proposed according to the kinetic behavior, which fits the Prout-Tompkins model well. Because the classical kinetic models could hardly describe and predict the hydrolysis kinetics of adefovir dipivoxil in solid formulations accurately at high temperature, a feedforward multilayer perceptron (MLP) neural network was constructed to model the hydrolysis kinetics. The built-in approaches in Weka, such as lazy classifiers and rule-based learners (IBk, KStar, DecisionTable and M5Rules), were used to verify the performance of the MLP. The predictability of the models was evaluated by 10-fold cross-validation and an external test set. The results reveal that the MLP should be of general applicability, providing an alternative and efficient way to model and predict autocatalytic hydrolysis kinetics for phosphate prodrugs.
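A minimal sketch of a Prout-Tompkins (autocatalytic) fit is shown below; the transformation ln(alpha/(1-alpha)) = k*t + c makes the fit linear, and the degradation fractions used are synthetic, for illustration only.

    import numpy as np

    # Synthetic degradation data: time points and fraction hydrolysed.
    t = np.array([0.5, 1, 2, 4, 8, 16], dtype=float)        # weeks
    alpha = np.array([0.02, 0.04, 0.08, 0.18, 0.45, 0.85])  # fraction hydrolysed

    # Prout-Tompkins: ln(alpha / (1 - alpha)) = k*t + c, fitted by linear least squares.
    y = np.log(alpha / (1 - alpha))
    k, c = np.polyfit(t, y, 1)
    alpha_pred = 1 / (1 + np.exp(-(k * t + c)))             # back-transform to fractions
    print("rate constant k =", k, "max abs error =", np.abs(alpha_pred - alpha).max())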
Using the domain identification model to study major and career decision-making processes
NASA Astrophysics Data System (ADS)
Tendhar, Chosang; Singh, Kusum; Jones, Brett D.
2018-03-01
The purpose of this study was to examine the extent to which (1) a domain identification model could be used to predict students' engineering major and career intentions and (2) the MUSIC Model of Motivation components could be used to predict domain identification. The data for this study were collected from first-year engineering students. We used a structural equation model to test the hypothesised relationship between variables in the partial domain identification model. The findings suggested that engineering identification significantly predicted engineering major intentions and career intentions and had the highest effect on those two variables compared to other motivational constructs. Furthermore, results suggested that success, interest, and caring are plausible contributors to students' engineering identification. Overall, there is strong evidence that the domain identification model can be used as a lens to study career decision-making processes in engineering, and potentially, in other fields as well.
Predicting the distribution of bed material accumulation using river network sediment budgets
NASA Astrophysics Data System (ADS)
Wilkinson, Scott N.; Prosser, Ian P.; Hughes, Andrew O.
2006-10-01
Assessing the spatial distribution of bed material accumulation in river networks is important for determining the impacts of erosion on downstream channel form and habitat and for planning erosion and sediment management. A model that constructs spatially distributed budgets of bed material sediment is developed to predict the locations of accumulation following land use change. For each link in the river network, GIS algorithms are used to predict bed material supply from gullies, river banks, and upstream tributaries and to compare total supply with transport capacity. The model is tested in the 29,000 km2 Murrumbidgee River catchment in southeast Australia. It correctly predicts the presence or absence of accumulation in 71% of river links, which is significantly better performance than previous models, which do not account for spatial variability in sediment supply and transport capacity. Representing transient sediment storage is important for predicting smaller accumulations. Bed material accumulation is predicted in 25% of the river network, indicating its importance as an environmental problem in Australia.
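A toy sketch of the link-by-link budget logic (compare total bed material supply with transport capacity and flag accumulation where supply exceeds capacity) is given below; the network, loads and units are illustrative, not the Murrumbidgee data.

    # Each river link receives bed material from gullies, banks and upstream links;
    # material in excess of transport capacity is stored (accumulates) in the link.
    links = {
        "A": {"gully": 10.0, "bank": 5.0, "upstream": [],    "capacity": 20.0},
        "B": {"gully": 8.0,  "bank": 2.0, "upstream": ["A"], "capacity": 12.0},
        "C": {"gully": 1.0,  "bank": 1.0, "upstream": ["B"], "capacity": 30.0},
    }

    outflow = {}
    def route(link):
        # Memoized downstream routing so each link is evaluated once.
        if link in outflow:
            return outflow[link]
        d = links[link]
        supply = d["gully"] + d["bank"] + sum(route(u) for u in d["upstream"])
        stored = max(0.0, supply - d["capacity"])       # bed material accumulation
        outflow[link] = supply - stored                 # sediment passed downstream
        print(link, "accumulates" if stored > 0 else "transports", round(stored, 1), "kt/yr")
        return outflow[link]

    for name in links:
        route(name)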
How long will my mouse live? Machine learning approaches for prediction of mouse life span.
Swindell, William R; Harper, James M; Miller, Richard A
2008-09-01
Prediction of individual life span based on characteristics evaluated at middle-age represents a challenging objective for aging research. In this study, we used machine learning algorithms to construct models that predict life span in a stock of genetically heterogeneous mice. Life-span prediction accuracy of 22 algorithms was evaluated using a cross-validation approach, in which models were trained and tested with distinct subsets of data. Using a combination of body weight and T-cell subset measures evaluated before 2 years of age, we show that the life-span quartile to which an individual mouse belongs can be predicted with an accuracy of 35.3% (+/-0.10%). This result provides a new benchmark for the development of life-span-predictive models, but improvement can be expected through identification of new predictor variables and development of computational approaches. Future work in this direction can provide tools for aging research and will shed light on associations between phenotypic traits and longevity.
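A minimal sketch of cross-validated life-span quartile prediction is shown below; a random forest classifier is used purely for illustration, and the predictors and data are synthetic stand-ins for the body weight and T-cell measures.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic mid-life measures and life spans; quartile labels are the target.
    rng = np.random.default_rng(1)
    n = 400
    X = rng.normal(size=(n, 5))                       # stand-in predictors
    lifespan = 600 + 60 * X[:, 0] - 40 * X[:, 1] + rng.normal(0, 80, n)
    quartile = np.digitize(lifespan, np.quantile(lifespan, [0.25, 0.5, 0.75]))

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, quartile, cv=5)  # train/test on distinct subsets
    print("mean CV accuracy:", scores.mean())         # chance level is 25%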
New Ebb-Tidal Delta at an Old Inlet, Shark River Inlet, New Jersey
2011-01-01
This study was conducted to examine interacting beach and inlet processes and to test numerical simulation models for predicting morphology change at inlets. [Figure 4: A) Shark River Inlet, February-March 1920, post early construction (1915), during rehabilitation of the original State-built curved jetties; B) Shark River Inlet, 23 January 1933, post construction of the curved jetties and land reclamation of the flood …]
Fukuda, Haruhisa; Kuroki, Manabu
2016-03-01
To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
Ockhuijsen, Henrietta D L; van Smeden, Maarten; van den Hoogen, Agnes; Boivin, Jacky
2017-06-01
To examine construct and criterion validity of the Dutch SCREENIVF among women and men undergoing a fertility treatment. A prospective longitudinal study nested in a randomized controlled trial. University hospital. Couples, 468 women and 383 men, undergoing an IVF/intracytoplasmic sperm injection (ICSI) treatment in a fertility clinic, completed the SCREENIVF. Construct and criteria validity of the SCREENIVF. The comparative fit index and root mean square error of approximation for women and men show a good fit of the factor model. Across time, the sensitivity for Hospital Anxiety and Depression Scale subscale in women ranged from 61%-98%, specificity 53%-65%, predictive value of a positive test (PVP) 13%-56%, predictive value of a negative test (PVN) 70%-99%. The sensitivity scores for men ranged from 38%-100%, specificity 71%-75%, PVP 9%-27%, PVN 92%-100%. A prediction model revealed that for women 68.7% of the variance in the Hospital Anxiety and Depression Scale on time 1 and 42.5% at time 2 and 38.9% at time 3 was explained by the predictors, the sum score scales of the SCREENIVF. For men, 58.1% of the variance in the Hospital Anxiety and Depression Scale on time 1 and 46.5% at time 2 and 37.3% at time 3 was explained by the predictors, the sum score scales of the SCREENIVF. The SCREENIVF has good construct validity but the concurrent validity is better than the predictive validity. SCREENIVF will be most effectively used in fertility clinics at the start of treatment and should not be used as a predictive tool. Copyright © 2017 American Society for Reproductive Medicine. All rights reserved.
NASA Technical Reports Server (NTRS)
Petot, D.; Loiseau, H.
1982-01-01
Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered with focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described leads to the model's coefficients in a very satisfactory manner.
The Will, Skill, Tool Model of Technology Integration: Adding Pedagogy as a New Model Construct
ERIC Educational Resources Information Center
Knezek, Gerald; Christensen, Rhonda
2015-01-01
An expansion of the Will, Skill, Tool Model of Technology Integration to include teacher's pedagogical style is proposed by the authors as a means of advancing the predictive power for level of classroom technology integration to beyond 90%. Suggested advantages to this expansion include more precise identification of areas to be targeted for…
Anger and the ABC model underlying Rational-Emotive Behavior Therapy.
Ziegler, Daniel J; Smith, Phillip N
2004-06-01
The ABC model underlying Ellis's Rational-Emotive Behavior Therapy predicts that people who think more irrationally should display greater trait anger than do people who think less irrationally. This study tested this prediction regarding the ABC model. 186 college students were administered the Survey of Personal Beliefs and the State-Trait Anger Expression Inventory-Second Edition to measure irrational thinking and trait anger, respectively. Students who scored higher on Overall Irrational Thinking and Low Frustration Tolerance scored significantly higher on Trait Anger than did those who scored lower on Overall Irrational Thinking and Low Frustration Tolerance. This indicates support for the ABC model, especially Ellis's construct of irrational beliefs which is central to the model.
Acceptance and relationship context: a model of substance use disorder treatment outcome.
Gifford, Elizabeth V; Ritsher, Jennifer B; McKellar, John D; Moos, Rudolf H
2006-08-01
This study presented and tested a model of behavior change in long-term substance use disorder recovery, the acceptance and relationship context (ARC) model. The model specifies that acceptance-based behavior and constructive social relationships lead to recovery, and that treatment programs with supportive, involved relationships facilitate the development of these factors. This study used a prospective longitudinal naturalistic design and controlled for baseline levels of study variables. The model was tested on a sample of 2549 patients in 15 residential substance use disorder treatment programs. Acceptance-based responding (ABR), social relationship quality (SRQ), treatment program alliance (TPA) and substance use-related impairment were assessed using interviews and self-report questionnaires. TPA predicted ABR and SRQ and, in turn, ABR predicted better 2-year and 5-year treatment outcomes. The baseline-controlled model accounted for 41% of the variance in outcome at 2-year follow-up and 28% of the variance in outcome at 5-year follow-up. In conclusion, patients from treatment programs with an affiliative relationship network are more likely to respond adaptively to internal states previously associated with substance use, develop constructive social relationships and achieve long-term treatment benefits.
Mealier, Anne-Laure; Pointeau, Gregoire; Mirliaz, Solène; Ogawa, Kenji; Finlayson, Mark; Dominey, Peter F
2017-01-01
It has been proposed that, starting from meaning that the child derives directly from shared experience with others, adult narrative enriches this meaning and its structure, providing causal links between unseen intentional states and actions. This would require a means for representing meaning from experience (a situation model) and a mechanism that allows information to be extracted from sentences and mapped onto the situation model derived from experience, thus enriching that representation. We present a hypothesis and theory concerning how the language processing infrastructure for grammatical constructions can naturally be extended to narrative constructions to provide a mechanism for using language to enrich meaning derived from physical experience. Toward this aim, the grammatical construction models are augmented with additional structures for representing relations between events across sentences. Simulation results demonstrate proof of concept for how the narrative construction model supports multiple successive levels of meaning creation, which allows the system to learn about the intentionality of mental states, and argument substitution, which allows extensions to metaphorical language and analogical problem solving. Cross-linguistic validity of the system is demonstrated in Japanese. The narrative construction model is then integrated into the cognitive system of a humanoid robot that provides the memory systems and world interaction required for representing meaning in a situation model. In this context, proof of concept is demonstrated for how the system enriches meaning in the situation model that has been directly derived from experience. In terms of links to empirical data, the model predicts strong usage-based effects: that is, the narrative constructions used by children will be highly correlated with those that they experience. It also relies on the notion of narrative or discourse function words. Both of these are validated in the experimental literature.
Ste-Marie, Diane M; Carter, Michael J; Law, Barbi; Vertes, Kelly; Smith, Victoria
2016-09-01
Research has shown learning advantages for self-controlled practice contexts relative to yoked (i.e., experimenter-imposed) contexts; yet, explanations for this phenomenon remain relatively untested. We examined, via path analysis, whether self-efficacy and intrinsic motivation are important constructs for explaining self-controlled learning benefits. The path model was created using theory-based and empirically supported relationships to examine causal links between these psychological constructs and physical performance. We hypothesised that self-efficacy and intrinsic motivation would have greater predictive power for learning under self-controlled compared to yoked conditions. Participants learned double-mini trampoline progressions, and measures of physical performance, self-efficacy and intrinsic motivation were collected over two practice days and a delayed retention day. The self-controlled group (M = 2.04, SD = .98) completed significantly more skill progressions in retention than their yoked counterparts (M = 1.3, SD = .65). The path model displayed adequate fit, and similar significant path coefficients were found for both groups wherein each variable was predominantly predicted by its preceding time point (e.g., self-efficacy time 1 predicts self-efficacy time 2). Interestingly, the model was not moderated by group; thus, failing to support the hypothesis that self-efficacy and intrinsic motivation have greater predictive power for learning under self-controlled relative to yoked conditions.
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa
2012-01-01
Measurement of compressional, shear, and Stoneley wave velocities, carried out with dipole sonic imager (DSI) logs, provides invaluable data for geophysical interpretation, geomechanical studies and hydrocarbon reservoir characterization. The present study proposes an improved methodology for establishing a quantitative relationship between conventional well logs and sonic wave velocities. First, sonic wave velocities were predicted from conventional well logs using artificial neural network, fuzzy logic, and neuro-fuzzy algorithms. Subsequently, a committee machine with intelligent systems was constructed by virtue of a hybrid genetic algorithm-pattern search technique, while the outputs of the artificial neural network, fuzzy logic and neuro-fuzzy models were used as inputs of the committee machine. It is capable of improving the accuracy of the final prediction by integrating the outputs of the aforementioned intelligent systems. The hybrid genetic algorithm-pattern search tool, embodied in the structure of the committee machine, assigns a weight factor to each individual intelligent system, indicating its contribution to the overall prediction of the DSI parameters. This methodology was implemented in the Asmari formation, which is the major carbonate reservoir rock of an Iranian oil field. A group of 1,640 data points was used to construct the intelligent model, and a group of 800 data points was employed to assess the reliability of the proposed model. The results showed that the committee machine with intelligent systems performed more effectively than the individual intelligent systems performing alone.
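A minimal sketch of the committee idea, finding weights for several predictors that minimize the error of their weighted combination, is given below; a generic constrained optimizer stands in for the hybrid genetic algorithm-pattern search, and the predictions are synthetic.

    import numpy as np
    from scipy.optimize import minimize

    # Synthetic "measured" sonic velocities and three imperfect predictor outputs
    # (stand-ins for the ANN, fuzzy logic and neuro-fuzzy models).
    rng = np.random.default_rng(2)
    truth = rng.normal(3.0, 0.5, 200)
    preds = np.stack([truth + rng.normal(0, s, 200) for s in (0.10, 0.15, 0.25)])

    def mse(w):
        # Mean squared error of the weighted combination of predictors.
        return np.mean((w @ preds - truth) ** 2)

    res = minimize(mse, x0=np.full(3, 1 / 3),
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
                   bounds=[(0, 1)] * 3)
    print("committee weights:", res.x)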
Ridge regression for predicting elastic moduli and hardness of calcium aluminosilicate glasses
NASA Astrophysics Data System (ADS)
Deng, Yifan; Zeng, Huidan; Jiang, Yejia; Chen, Guorong; Chen, Jianding; Sun, Luyi
2018-03-01
It is of great significance to design glasses with satisfactory mechanical properties predictively through modeling. Among various modeling methods, data-driven modeling is such a reliable approach that can dramatically shorten research duration, cut research cost and accelerate the development of glass materials. In this work, the ridge regression (RR) analysis was used to construct regression models for predicting the compositional dependence of CaO-Al2O3-SiO2 glass elastic moduli (Shear, Bulk, and Young’s moduli) and hardness based on the ternary diagram of the compositions. The property prediction over a large glass composition space was accomplished with known experimental data of various compositions in the literature, and the simulated results are in good agreement with the measured ones. This regression model can serve as a facile and effective tool for studying the relationship between the compositions and the property, enabling high-efficient design of glasses to meet the requirements for specific elasticity and hardness.
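A minimal sketch of ridge regression on composition fractions is shown below; the CaO/Al2O3/SiO2 fractions and modulus values are synthetic placeholders, not the literature data compiled in the study.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    # Synthetic glasses: composition fractions summing to one and a Young's modulus
    # generated from a simple linear rule plus noise.
    rng = np.random.default_rng(3)
    comp = rng.dirichlet(np.ones(3), size=150)              # [CaO, Al2O3, SiO2]
    E = 60 + 30 * comp[:, 0] + 45 * comp[:, 1] + rng.normal(0, 2, 150)  # GPa

    model = Ridge(alpha=1.0)                                # L2-penalized regression
    print("CV R^2:", cross_val_score(model, comp, E, cv=5).mean())
    model.fit(comp, E)
    print("predicted E for a 20/15/65 glass:", model.predict([[0.20, 0.15, 0.65]])[0])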
Prediction models for Arabica coffee beverage quality based on aroma analyses and chemometrics.
Ribeiro, J S; Augusto, F; Salva, T J G; Ferreira, M M C
2012-11-15
In this work, soft modeling based on chemometric analyses of coffee beverage sensory data and the chromatographic profiles of volatile roasted coffee compounds is proposed to predict the scores of acidity, bitterness, flavor, cleanliness, body, and overall quality of the coffee beverage. A partial least squares (PLS) regression method was used to construct the models. The ordered predictor selection (OPS) algorithm was applied to select the compounds for the regression model of each sensory attribute in order to take only significant chromatographic peaks into account. The prediction errors of these models, using 4 or 5 latent variables, were equal to 0.28, 0.33, 0.35, 0.33, 0.34 and 0.41, for each of the attributes and compatible with the errors of the mean scores of the experts. Thus, the results proved the feasibility of using a similar methodology in on-line or routine applications to predict the sensory quality of Brazilian Arabica coffee. Copyright © 2012 Elsevier B.V. All rights reserved.
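A minimal sketch of PLS calibration of one sensory attribute against chromatographic peak areas is given below; the peak matrix and scores are synthetic, and the OPS variable-selection step is omitted.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Synthetic data: 60 coffees x 40 volatile peak areas and a flavor score.
    rng = np.random.default_rng(4)
    peaks = rng.lognormal(size=(60, 40))
    flavor = peaks[:, :5].sum(axis=1) * 0.1 + rng.normal(0, 0.3, 60)

    pls = PLSRegression(n_components=5)                     # latent-variable regression
    pred = cross_val_predict(pls, peaks, flavor, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((pred - flavor) ** 2))         # cross-validated error
    print("RMSECV:", rmsecv)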
Richardson, Miles; Hunt, Thomas E; Richardson, Cassandra
2014-12-01
This paper presents a methodology to control construction task complexity and examines the relationships between construction performance and spatial and mathematical abilities in children. The study included three groups of children (N = 96): ages 7-8, 10-11, and 13-14 years. Each group constructed seven pre-specified objects. The study replicated and extended previous findings indicating that the extent of component symmetry and variety, the number of components for each object, and the number available for selection significantly predicted construction task difficulty. Results showed that this methodology is a valid and reliable technique for assessing and predicting construction play task difficulty. Furthermore, construction play performance predicted mathematical attainment independently of spatial ability.
Noh, Wonjung; Seomun, Gyeongae
2015-06-01
This study was conducted to develop key performance indicators (KPIs) for home care nursing (HCN) based on a balanced scorecard, and to construct a performance prediction model of strategic objectives using the Bayesian Belief Network (BBN). This methodological study included four steps: establishment of KPIs, performance prediction modeling, development of a performance prediction model using BBN, and simulation of a suggested nursing management strategy. An HCN expert group and a staff group participated. The content validity index was analyzed using STATA 13.0, and BBN was analyzed using HUGIN 8.0. We generated a list of KPIs composed of 4 perspectives, 10 strategic objectives, and 31 KPIs. In the validity test of the performance prediction model, the factor with the greatest variance for increasing profit was maximum cost reduction of HCN services. The factor with the smallest variance for increasing profit was a minimum image improvement for HCN. During sensitivity analysis, the probability of the expert group did not affect the sensitivity. Furthermore, simulation of a 10% image improvement predicted the most effective way to increase profit. KPIs of HCN can estimate financial and non-financial performance. The performance prediction model for HCN will be useful to improve performance.
Baba, Hiromi; Takahara, Jun-ichi; Yamashita, Fumiyoshi; Hashida, Mitsuru
2015-11-01
The solvent effect on skin permeability is important for assessing the effectiveness and toxicological risk of new dermatological formulations in pharmaceuticals and cosmetics development. The solvent effect occurs by diverse mechanisms, which could be elucidated by efficient and reliable prediction models. However, such prediction models have been hampered by the small variety of permeants and mixture components archived in databases and by low predictive performance. Here, we propose a solution to both problems. We first compiled a novel large database of 412 samples from 261 structurally diverse permeants and 31 solvents reported in the literature. The data were carefully screened to ensure their collection under consistent experimental conditions. To construct a high-performance predictive model, we then applied support vector regression (SVR) and random forest (RF) with greedy stepwise descriptor selection to our database. The models were internally and externally validated. The SVR achieved higher performance statistics than RF. The (externally validated) determination coefficient, root mean square error, and mean absolute error of SVR were 0.899, 0.351, and 0.268, respectively. Moreover, because all descriptors are fully computational, our method can predict as-yet unsynthesized compounds. Our high-performance prediction model offers an attractive alternative to permeability experiments for pharmaceutical and cosmetic candidate screening and optimizing skin-permeable topical formulations.
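A hedged sketch of the SVR modeling step is shown below. The descriptor matrix and log-permeability targets are random placeholders, the greedy stepwise descriptor selection is omitted, and a simple train/test split stands in for the internal and external validation scheme.

```python
# Minimal sketch of support vector regression on computed molecular descriptors.
# Descriptors and targets are synthetic; no descriptor selection is shown.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(412, 12))                        # 412 samples x 12 descriptors (synthetic)
y = X @ rng.normal(size=12) * 0.3 + rng.normal(0, 0.3, 412)   # synthetic log permeability

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)).fit(X_tr, y_tr)
resid = model.predict(X_te) - y_te
print("external RMSE:", np.sqrt(np.mean(resid ** 2)).round(3))
```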
Admission Models for At-Risk Graduate Students in Different Academic Disciplines.
ERIC Educational Resources Information Center
Nelson, C. Van; Nelson, Jacquelyn S.; Malone, Bobby G.
In this study, models were constructed for eight academic areas, including applied sciences, communication sciences, education, physical sciences, life sciences, humanities and arts, psychology, and social sciences, to predict whether or not an at-risk graduate student would be successful in obtaining a master's degree. Records were available for…
Modelling Question Difficulty in an A Level Physics Examination
ERIC Educational Resources Information Center
Crisp, Victoria; Grayson, Rebecca
2013-01-01
"Item difficulty modelling" is a technique used for a number of purposes such as to support future item development, to explore validity in relation to the constructs that influence difficulty and to predict the difficulty of items. This research attempted to explore the factors influencing question difficulty in a general qualification…
Learning about Ecological Systems by Constructing Qualitative Models with DynaLearn
ERIC Educational Resources Information Center
Leiba, Moshe; Zuzovsky, Ruth; Mioduser, David; Benayahu, Yehuda; Nachmias, Rafi
2012-01-01
A qualitative model of a system is an abstraction that captures ordinal knowledge and predicts the set of qualitatively possible behaviours of the system, given a qualitative description of its structure and initial state. This paper examines an innovative approach to science education using an interactive learning environment that supports…
PERCEPTUAL SYSTEMS IN READING--THE PREDICTION OF A TEMPORAL EYE-VOICE SPAN CONSTANT. PAPER.
ERIC Educational Resources Information Center
GEYER, JOHN JACOB
A STUDY WAS CONDUCTED TO DELINEATE HOW PERCEPTION OCCURS DURING ORAL READING. FROM AN ANALYSIS OF CLASSICAL AND MODERN RESEARCH, A HEURISTIC MODEL WAS CONSTRUCTED WHICH DELINEATED THE DIRECTLY INTERACTING SYSTEMS POSTULATED AS FUNCTIONING DURING ORAL READING. THE MODEL AS OUTLINED WAS DIFFERENTIATED LOGICALLY INTO THREE MAJOR PROCESSING…
Using Indigenous Materials for Construction
2015-07-01
Theoretical models were devised for prediction of the structural attributes of indigenous ferrocement sheets and sandwich composite panels comprising the...indigenous ferrocement skins and aerated concrete core. Structural designs were developed for these indigenous sandwich composite panels in typical...indigenous materials and building systems developed in the project were evaluated. Numerical modeling capabilities were developed for structural
Impact of a Flexible Evaluation System on Effort and Timing of Study
ERIC Educational Resources Information Center
Pacharn, Parunchana; Bay, Darlene; Felton, Sandra
2012-01-01
This paper examines results of a flexible grading system that allows each student to influence the weight allocated to each performance measure. We construct a stylized model to determine students' optimal responses. Our analytical model predicts different optimal strategies for students with varying academic abilities: a frontloading strategy for…
Integrating the Demonstration Orientation and Standards-Based Models of Achievement Goal Theory
ERIC Educational Resources Information Center
Wynne, Heather Marie
2014-01-01
Achievement goal theory and thus, the empirical measures stemming from the research, are currently divided on two conceptual approaches, namely the reason versus aims-based models of achievement goals. The factor structure and predictive utility of goal constructs from the Patterns of Adaptive Learning Strategies (PALS) and the latest two versions…
Using Model Ecosystems to Predict the Environmental Behavior of Pesticides
ERIC Educational Resources Information Center
Booth, Gary M.
1977-01-01
Describes construction of a model ecosystem using a 10-gallon aquarium with a sand, water and air interface. Pesticides are placed on sorghum plants grown on the terrestrial portion. After 30 days, movement of the pesticide is traced using radioisotope techniques, from terrestrial to aquatic organisms. Details for calculating concentration factors…
Li, Jing-Sheng; Tsai, Tsung-Yuan; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Freiberg, Andrew; Rubash, Harry E.; Li, Guoan
2014-01-01
Using computed tomography (CT) or magnetic resonance (MR) images to construct 3D knee models has been widely used in biomedical engineering research. The statistical shape modeling (SSM) method is an alternative that provides a fast, cost-efficient, and subject-specific knee modeling technique. This study aimed to evaluate the feasibility of using a combined dual-fluoroscopic imaging system (DFIS) and SSM method to investigate in vivo knee kinematics. Three subjects were studied during treadmill walking. The data were compared with the kinematics obtained using a CT-based modeling technique. Geometric root-mean-square (RMS) errors between the knee models constructed using the SSM and CT-based modeling techniques were 1.16 mm and 1.40 mm for the femur and tibia, respectively. For the kinematics of the knee during the treadmill gait, the SSM model can predict the knee kinematics with RMS errors within 3.3 deg for rotation and within 2.4 mm for translation throughout the stance phase of the gait cycle compared with those obtained using the CT-based knee models. The data indicated that the combined DFIS and SSM technique could be used for quick evaluation of knee joint kinematics. PMID:25320846
Hendry, Melissa C; Douglas, Kevin S; Winter, Elizabeth A; Edens, John F
2013-01-01
Much of the risk assessment literature has focused on the predictive validity of risk assessment tools. However, these tools often comprise a list of risk factors that are themselves complex constructs, and focusing on the quality of measurement of individual risk factors may improve the predictive validity of the tools. The present study illustrates this concern using the Antisocial Features and Aggression scales of the Personality Assessment Inventory (Morey, 1991). In a sample of 1,545 prison inmates and offenders undergoing treatment for substance abuse (85% male), we evaluated (a) the factorial validity of the ANT and AGG scales, (b) the utility of original ANT and AGG scales and newly derived ANT and AGG scales for predicting antisocial outcomes (recidivism and institutional infractions), and (c) whether items with a stronger relationship to the underlying constructs (higher factor loadings) were in turn more strongly related to antisocial outcomes. Confirmatory factor analyses (CFAs) indicated that ANT and AGG items were not structured optimally in these data in terms of correspondence to the subscale structure identified in the PAI manual. Exploratory factor analyses were conducted on a random split-half of the sample to derive optimized alternative factor structures, and cross-validated in the second split-half using CFA. Four-factor models emerged for both the ANT and AGG scales, and, as predicted, the size of item factor loadings was associated with the strength with which items were associated with institutional infractions and community recidivism. This suggests that the quality by which a construct is measured is associated with its predictive strength. Implications for risk assessment are discussed. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel
2017-06-01
Even when the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression for prediction of physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, inclusion in the calibration sets of the whole variability of diesel samples from diverse production origins still remains as an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of samples of biodiesel/diesel blends from diverse origins, based on a binary code, principal components analysis (PCA) and the Kennard-Stones algorithm. Results show that using this methodology the models can keep their robustness over time. PLS calculations have been done using a specialized chemometric software as well as the software of the NIR instrument installed in plant, and both produced RMSEP under reproducibility values of the reference methods. The models have been proved for on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl esters (FAME) content, cloud point, boiling point at 95% of recovery, flash point and sulphur.
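The calibration-set selection can be illustrated with a short sketch of the Kennard-Stone algorithm applied in PCA score space, as below. The spectra are random placeholders and the binary-code grouping of production origins is omitted.

```python
# Minimal sketch of Kennard-Stone selection of a calibration set in PCA score space.
# Spectra are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA

def kennard_stone(X, n_select):
    """Pick n_select rows of X that span the space as evenly as possible."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    chosen = list(np.unravel_index(np.argmax(dist), dist.shape))  # start with the two farthest points
    while len(chosen) < n_select:
        remaining = [i for i in range(len(X)) if i not in chosen]
        # add the sample whose nearest chosen neighbour is farthest away
        next_idx = max(remaining, key=lambda i: dist[i, chosen].min())
        chosen.append(next_idx)
    return chosen

spectra = np.random.default_rng(3).normal(size=(120, 500))   # 120 NIR spectra (synthetic)
scores = PCA(n_components=5).fit_transform(spectra)
calibration_idx = kennard_stone(scores, n_select=40)
print("calibration samples:", calibration_idx[:10], "...")
```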
Ball milling: An experimental support to the energy transfer evaluated by the collision model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magini, M.; Iasonna, A.; Padella, F.
1996-01-01
In recent years several attempts have been made in order to understand the fundamentals of the ball milling process. The aim of these approaches is to establish predictive capabilities for this process, i.e. the possibility of obtaining a given product by suitably choosing the milling conditions. Maurice and Courtney have modeled ball milling in a planetary and in a vibratory mill including parameters like impact times, areas of the colliding surfaces (derived from Hertzian collision theory), powder strain rates and pressure peak during collision. Burgio et al derived the kinematic equations of a ball moving on a planetary mill and the consequent ball-to-powder energy transfer occurring in a single collision event. The fraction of input energy transferred to the powder was subsequently estimated by an analysis of the collision event. Finally an energy map was constructed which was the basis for a model with predictive capabilities. The aim of the present article is to show that the arguments used to construct the model of the milling process have substantial experimental support.
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced-order models (ROMs) that preserve power flow information in the original systems over a specified frequency band are built using rational Krylov model reduction. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via message passing interface are proposed. The quality of the ROMs is iteratively refined according to an error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good-quality ROMs. Frequency averaging is recast as ensemble averaging and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
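The projection step can be illustrated with a minimal sketch: forced responses computed at a few interpolation frequencies span the reduction basis, and the stiffness and mass matrices are projected onto it. A small synthetic, undamped system stands in for the Harwell-Boeing testbeds; this is not the dissertation's matrix-free algorithm.

```python
# Minimal sketch of projection-based model reduction with a basis built from
# forced responses; system matrices and load are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
n = 200
K = np.diag(rng.uniform(1.0, 10.0, n)) + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1)
M = np.eye(n)
f = rng.normal(size=n)                                # spatial load pattern

freqs = [1.0, 2.0, 4.0]                               # interpolation frequencies (rad/s)
responses = [np.linalg.solve(K - w**2 * M, f) for w in freqs]
V, _ = np.linalg.qr(np.column_stack(responses))       # orthonormal reduction basis

K_r, M_r, f_r = V.T @ K @ V, V.T @ M @ V, V.T @ f     # reduced-order matrices
w_test = 1.5
x_full = np.linalg.solve(K - w_test**2 * M, f)
x_rom = V @ np.linalg.solve(K_r - w_test**2 * M_r, f_r)
print("relative error at w=1.5:", np.linalg.norm(x_rom - x_full) / np.linalg.norm(x_full))
```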
Kamalikhah, Tahereh; Morowatisharifabad, Mohammad Ali; Rezaei-Moghaddam, Farid; Ghasemi, Mohammad; Gholami-Fesharaki, Mohammad; Goklani, Salma
2016-09-01
Individuals suffering from chronic low back pain (CLBP) experience major physical, social, and occupational disruptions. Strong evidence confirms the effectiveness of Alexander technique (AT) training for CLBP. The present study applied an integrative model (IM) of behavioral prediction for improvement of AT training. This was a quasi-experimental study of female teachers with nonspecific LBP in southern Tehran in 2014. Group A contained 42 subjects and group B had 35 subjects. In group A, AT lessons were designed based on IM constructs, while in group B, AT lessons only were taught. The validity and reliability of the AT questionnaire were confirmed using content validity (CVR 0.91, CVI 0.96) and Cronbach's α (0.80). The IM constructs of both groups were measured after the completion of training. Statistical analysis used independent and paired samples t-tests and the univariate generalized linear model (GLM). Significant differences were recorded before and after intervention (P < 0.001) for the model constructs of intention, perceived risk, direct attitude, behavioral beliefs, and knowledge in both groups. Direct attitude and behavioral beliefs in group A were higher than in group B after the intervention (P < 0.03). The educational framework provided by IM for AT training improved attitude and behavioral beliefs that can facilitate the adoption of AT behavior and decreased CLBP.
Improved prediction models for PCC pavement performance-related specifications
DOT National Transportation Integrated Search
2000-01-01
Performance-related specifications (PRS) for the acceptance of newly constructed jointed plain concrete pavements (JPCP) have been developed over the past decade. The main objectives of this study were to improve the distress and smoothness predictio...
Bonetti, Debbie; Johnston, Marie; Clarkson, Jan E; Grimshaw, Jeremy; Pitts, Nigel B; Eccles, Martin; Steen, Nick; Thomas, Ruth; Maclennan, Graeme; Glidewell, Liz; Walker, Anne
2010-04-08
Psychological models are used to understand and predict behaviour in a wide range of settings, but have not been consistently applied to health professional behaviours, and the contribution of differing theories is not clear. This study explored the usefulness of a range of models to predict an evidence-based behaviour -- the placing of fissure sealants. Measures were collected by postal questionnaire from a random sample of general dental practitioners (GDPs) in Scotland. Outcomes were behavioural simulation (scenario decision-making), and behavioural intention. Predictor variables were from the Theory of Planned Behaviour (TPB), Social Cognitive Theory (SCT), Common Sense Self-regulation Model (CS-SRM), Operant Learning Theory (OLT), Implementation Intention (II), Stage Model, and knowledge (a non-theoretical construct). Multiple regression analysis was used to examine the predictive value of each theoretical model individually. Significant constructs from all theories were then entered into a 'cross theory' stepwise regression analysis to investigate their combined predictive value. Behavioural simulation - theory level variance explained was: TPB 31%; SCT 29%; II 7%; OLT 30%. Neither CS-SRM nor stage explained significant variance. In the cross theory analysis, habit (OLT), timeline acute (CS-SRM), and outcome expectancy (SCT) entered the equation, together explaining 38% of the variance. Behavioural intention - theory level variance explained was: TPB 30%; SCT 24%; OLT 58%, CS-SRM 27%. GDPs in the action stage had significantly higher intention to place fissure sealants. In the cross theory analysis, habit (OLT) and attitude (TPB) entered the equation, together explaining 68% of the variance in intention. The study provides evidence that psychological models can be useful in understanding and predicting clinical behaviour. Taking a theory-based approach enables the creation of a replicable methodology for identifying factors that may predict clinical behaviour and so provide possible targets for knowledge translation interventions. Results suggest that more evidence-based behaviour may be achieved by influencing beliefs about the positive outcomes of placing fissure sealants and building a habit of placing them as part of patient management. However a number of conceptual and methodological challenges remain.
Rivis, Amanda; Abraham, Charles; Snook, Sarah
2011-05-01
The present study examined the predictive utility of constructs specified by the theory of planned behaviour (TPB) and prototype willingness model (PWM) for young and older male drivers' willingness to drive while intoxicated. A cross-sectional questionnaire was employed. Two hundred male drivers, recruited via a street survey, voluntarily completed measures of attitude, subjective norm, perceived behavioural control, prototype perceptions, and willingness. Findings showed that the TPB and PWM variables explained 65% of the variance in young male drivers' willingness and 47% of the variance in older male drivers' willingness, with the interaction between prototype favourability and similarity contributing 7% to the variance explained in older males' willingness to drive while intoxicated. The findings possess implications for theory, research, and anti-drink driving campaigns. ©2010 The British Psychological Society.
Zamboni, B D; Crawford, I; Williams, P G
2000-12-01
The current study explored the relationship between communication and assertiveness in general and sexual contexts and examined each construct's differential ability to predict reported condom use among college students. The results suggest that the constructs are positively related to each other, but general communication does not predict sexual assertiveness. Although sexual assertiveness is a better predictor of condom use than general assertiveness, general communication, and sexual communication, it needs to be considered within the context of other variables (e.g., normative beliefs regarding condom use). HIV prevention programs and models of health behavior should incorporate individual characteristics such as sexual assertiveness. The results of this study suggest that sexual assertiveness, social norm perceptions of condom use, self-efficacy for HIV prevention, and condom attitudes are among the critical variables that should be examined in an integrated model of sexual health behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewsuk, Kevin Gregory; Arguello, Jose Guadalupe, Jr.; Reiterer, Markus W.
2006-02-01
The ease and ability to predict sintering shrinkage and densification with the Skorohod-Olevsky viscous sintering (SOVS) model within a finite-element (FE) code have been improved with the use of an Arrhenius-type viscosity function. The need for a better viscosity function was identified by evaluating SOVS model predictions made using a previously published polynomial viscosity function. Predictions made using the original, polynomial viscosity function do not accurately reflect experimentally observed sintering behavior. To more easily and better predict sintering behavior using FE simulations, a thermally activated viscosity function based on creep theory was used with the SOVS model. In comparison with the polynomial viscosity function, SOVS model predictions made using the Arrhenius-type viscosity function are more representative of experimentally observed viscosity and sintering behavior. Additionally, the effects of changes in heating rate on densification can easily be predicted with the Arrhenius-type viscosity function. Another attribute of the Arrhenius-type viscosity function is that it provides the potential to link different sintering models. For example, the apparent activation energy, Q, for densification used in the construction of the master sintering curve for a low-temperature cofire ceramic dielectric has been used as the apparent activation energy for material flow in the Arrhenius-type viscosity function to predict heating rate-dependent sintering behavior using the SOVS model.
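The thermally activated viscosity function referred to above has the Arrhenius form eta(T) = eta0 exp(Q/RT). The short sketch below evaluates it for illustrative parameter values; the pre-exponential factor and apparent activation energy are placeholders, not the values used with the SOVS finite-element model.

```python
# Minimal sketch of an Arrhenius-type (thermally activated) viscosity function.
# eta0 and Q are hypothetical placeholder values.
import numpy as np

R = 8.314            # J/(mol K)
eta0 = 1.0e2         # Pa s, hypothetical pre-exponential factor
Q = 300e3            # J/mol, hypothetical apparent activation energy

def viscosity(T_kelvin):
    """Thermally activated (Arrhenius-type) viscosity: decreases as T rises."""
    return eta0 * np.exp(Q / (R * np.asarray(T_kelvin)))

for T in (1000.0, 1100.0, 1200.0):
    print(f"T = {T:.0f} K, eta = {viscosity(T):.3e} Pa s")
```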
Comparison of Approaches to the Prediction of Surface Wave Phase Velocity
NASA Astrophysics Data System (ADS)
Godfrey, K. E.; Dalton, C. A.; Hjorleifsdottir, V.; Ekstrom, G.
2017-12-01
Global seismic models provide crucial information about the state, composition, and dynamics of the Earth's interior, and in the shallow mantle these models are primarily constrained by observations of surface waves. Models developed by different groups have been constructed using different data sets and different techniques. While these models exhibit good agreement on the long-wavelength features, there is less consistency in the patterns and amplitude of smaller-scale heterogeneity. Here we investigate how approximations in the theoretical treatment of wave propagation and excitation influence the interpretation of measured phase delays and the tomographic images that result from inverting them. Synthetic seismograms were generated using SPECFEM3D_GLOBE for 42 earthquakes, 134 receiver locations, and two 3-D models of elastic Earth structure: S362ANI (Kustowski et al., 2008) and a rougher model constructed by adding realistic small-scale structure to S362ANI. Fundamental-mode Rayleigh and Love wave phase delays in the period range 35-250 seconds were measured using the approach of Ekström et al. (1997), for which PREM is the assumed reference Earth model. These measurements were compared to phase-delay predictions generated for the great-circle ray approximation, exact ray theory, and finite-frequency theory. We find that for both 3-D earth models exact ray theory provides the best fit to the measurements at short periods. At longer periods finite frequency theory provides the best fit. For the smooth earth model, the differences in fit for the various predictions are less significant at long periods than at shorter periods. The differences at long periods become more significant with increasing model roughness. In all cases, the agreement between predictions and measurements is best for paths located away from nodes in the source radiation pattern. The ability of the measured phase delays to recover the input Earth models is assessed through tests that explore the influence of parameterization, regularization, and crustal corrections.
Atomic density functional and diagram of structures in the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ankudinov, V. E., E-mail: vladimir@ankudinov.org; Galenko, P. K.; Kropotin, N. V.
2016-02-15
The phase field crystal model provides a continual description of the atomic density over the diffusion time of reactions. We consider a homogeneous structure (liquid) and a perfect periodic crystal, which are constructed from the one-mode approximation of the phase field crystal model. A diagram of 2D structures is constructed from the analytic solutions of the model using atomic density functionals. The diagram predicts equilibrium atomic configurations for transitions from the metastable state and includes the domains of existence of homogeneous, triangular, and striped structures corresponding to a liquid, a body-centered cubic crystal, and a longitudinal cross section of cylindrical tubes. The method developed here is employed for constructing the diagram for the homogeneous liquid phase and the body-centered iron lattice. The expression for the free energy is derived analytically from density functional theory. The specific features of approximating the phase field crystal model are compared with the approximations and conclusions of the weak crystallization and 2D melting theories.
Young Women’s Dynamic Family Size Preferences in the Context of Transitioning Fertility
Yeatman, Sara; Sennott, Christie; Culpepper, Steven
2013-01-01
Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways. PMID:23619999
Young women's dynamic family size preferences in the context of transitioning fertility.
Yeatman, Sara; Sennott, Christie; Culpepper, Steven
2013-10-01
Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways.
Valerio, Laura; North, Ace; Collins, C. Matilda; Mumford, John D.; Facchinelli, Luca; Spaccapelo, Roberta; Benedict, Mark Q.
2016-01-01
The persistence of transgenes in the environment is a consideration in risk assessments of transgenic organisms. Combining mathematical models that predict the frequency of transgenes and experimental demonstrations can validate the model predictions, or can detect significant biological deviations that were neither apparent nor included as model parameters. In order to assess the correlation between predictions and observations, models were constructed to estimate the frequency of a transgene causing male sexual sterility in simulated populations of a malaria mosquito Anopheles gambiae that were seeded with transgenic females at various proportions. Concurrently, overlapping-generation laboratory populations similar to those being modeled were initialized with various starting transgene proportions, and the subsequent proportions of transgenic individuals in populations were determined weekly until the transgene disappeared. The specific transgene being tested contained a homing endonuclease gene expressed in testes, I-PpoI, that cleaves the ribosomal DNA and results in complete male sexual sterility with no effect on female fertility. The transgene was observed to disappear more rapidly than the model predicted in all cases. The period before ovipositions that contained no transgenic progeny ranged from as little as three weeks after cage initiation to as long as 11 weeks. PMID:27669312
Ren, Y Y; Zhou, L C; Yang, L; Liu, P Y; Zhao, B W; Liu, H X
2016-09-01
The paper highlights the use of the logistic regression (LR) method in the construction of acceptable statistically significant, robust and predictive models for the classification of chemicals according to their aquatic toxic modes of action. Essentials accounting for a reliable model were all considered carefully. The model predictors were selected by stepwise forward discriminant analysis (LDA) from a combined pool of experimental data and chemical structure-based descriptors calculated by the CODESSA and DRAGON software packages. Model predictive ability was validated both internally and externally. The applicability domain was checked by the leverage approach to verify prediction reliability. The obtained models are simple and easy to interpret. In general, LR performs much better than LDA and seems to be more attractive for the prediction of the more toxic compounds, i.e. compounds that exhibit excess toxicity versus non-polar narcotic compounds and more reactive compounds versus less reactive compounds. In addition, model fit and regression diagnostics was done through the influence plot which reflects the hat-values, studentized residuals, and Cook's distance statistics of each sample. Overdispersion was also checked for the LR model. The relationships between the descriptors and the aquatic toxic behaviour of compounds are also discussed.
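A minimal sketch of the logistic-regression classification step is given below, under stated assumptions: two synthetic descriptors and invented class labels replace the real data, and the stepwise LDA descriptor selection and leverage-based applicability domain check are not reproduced.

```python
# Minimal sketch of a logistic-regression classifier for toxic mode of action.
# Descriptors and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
X = rng.normal(size=(150, 2))                     # e.g. logKow and a reactivity descriptor
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.5, 150) > 0).astype(int)

clf = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
clf.fit(X, y)
print("class probability for a new compound:", clf.predict_proba([[0.5, -0.2]])[0].round(2))
```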
Newman, M C; McCloskey, J T; Tatara, C P
1998-01-01
Ecological risk assessment can be enhanced with predictive models for metal toxicity. Modeling of published data was done under the simplifying assumption that intermetal trends in toxicity reflect relative metal-ligand complex stabilities. This idea has been invoked successfully since 1904 but has yet to be applied widely in quantitative ecotoxicology. Intermetal trends in toxicity were successfully modeled with ion characteristics reflecting metal binding to ligands for a wide range of effects. Most models were useful for predictive purposes based on an F-ratio criterion and cross-validation, but anomalous predictions did occur if speciation was ignored. In general, models for metals with the same valence (i.e., divalent metals) were better than those combining mono-, di-, and trivalent metals. The softness parameter (sigma p) and the absolute value of the log of the first hydrolysis constant (|log KOH|) were especially useful in model construction. Also, delta E0 contributed substantially to several of the two-variable models. In contrast, quantitative attempts to predict metal interactions in binary mixtures based on metal-ligand complex stabilities were not successful. PMID:9860900
Diverse binding site structures revealed in homology models of polyreactive immunoglobulins
NASA Astrophysics Data System (ADS)
Ramsland, Paul A.; Guddat, Luke W.; Edmundson, Allen B.; Raison, Robert L.
1997-09-01
We describe here computer-assisted homology models of the combining site structure of three polyreactive immunoglobulins. Template-based models of Fv (VL-VH) fragments were derived for the surface IgM expressed by the malignant CD5 positive B cells from three patients with chronic lymphocytic leukaemia (CLL). The conserved framework regions were constructed using crystal coordinates taken from highly homologous human variable domain structures (Pot and Hil). Complementarity determining regions (CDRs) were predicted by grafting loops, taken from known immunoglobulin structures, onto the Fv framework models. The CDR templates were chosen, where possible, to be of the same length and of high residue identity or similarity. LCDR1, 2 and 3 as well as HCDR1 and 2 for the Fv were constructed using this strategy. For HCDR3 prediction, a database containing the Cartesian coordinates of 30 of these loops was compiled from unliganded antibody X-ray crystallographic structures, and an HCDR3 of the same length as that of the B CLL Fv was selected as a template. In one case (Yar), the resulting HCDR3 model gave unfavourable interactions when incorporated into the Fv model. This HCDR3 was therefore modelled using an alternative strategy of construction of the loop stems, using a previously described HCDR3 conformation (Pot), followed by chain closure with a β-turn. The template models were subjected to positional refinement using energy minimisation and molecular dynamics simulations (X-PLOR). An electrostatic surface description (GRASP) did not reveal a common structural feature within the binding sites of the three polyreactive Fv. Thus, polyreactive immunoglobulins may recognise similar and multiple antigens through a diverse array of binding site structures.
Hill, Rachel T; Matthews, Russell A; Walsh, Benjamin M
2016-12-01
Implicit to the definitions of both family-supportive supervision (FSS) and family-supportive organization perceptions (FSOP) is the argument that these constructs may manifest at a higher (e.g. group or organizational) level. In line with these conceptualizations, grounded in tenets of conservation of resources theory, we argue that FSS and FSOP, as universal resources, are emergent constructs at the organizational level, which have cross-level effects on work-family conflict and turnover intentions. To test our theoretically derived hypotheses, a multilevel model was examined in which FSS and FSOP at the unit level predict individual work-to-family conflict, which in turn predicts turnover intentions. Our hypothesized model was generally supported. Collectively, our results point to FSOP serving as an explanatory mechanism of the effects that mutual perceptions of FSS have on individual experiences of work-to-family conflict and turnover intentions. Lagged (i.e. over time) cross-level effects of the model were also confirmed in supplementary analyses. Our results extend our theoretical understanding of FSS and FSOP by demonstrating the utility of conceptualizing them as universal resources, opening up a variety of avenues for future research. Copyright © 2015 John Wiley & Sons, Ltd.
Pedersen, Sue D.; Brar, Sony; Faris, Peter; Corenblum, Bernard
2007-01-01
OBJECTIVE To construct and validate a questionnaire for use in diagnosis of polycystic ovary syndrome (PCOS). DESIGN All participants completed a questionnaire, which asked clinical questions designed to assist in the diagnosis of PCOS, before their appointments with an endocrinologist. Following completion of the questionnaire, the endocrinologist (blinded to the answers) made or excluded a diagnosis of PCOS using clinical criteria and biochemical data as indicated. Questions were then evaluated for their power to predict PCOS, and a model was constructed using the most reliable items to establish a system to predict a diagnosis of PCOS. SETTING An outpatient reproductive endocrinology clinic in Calgary, Alta. PARTICIPANTS Adult women patients who had been referred to the clinic. Fifty patients with PCOS and 50 patients without PCOS were included in the study. MAIN OUTCOME MEASURES Demographic information, medical history, related diagnoses, menstrual history, and fertility history. RESULTS A history of infrequent menses, hirsutism, obesity, and acne were strongly predictive of a diagnosis of PCOS, whereas a history of failed pregnancy attempts was not useful. A history of nipple discharge outside of pregnancy strongly predicted no diagnosis of PCOS. We constructed a 4-item questionnaire for use in diagnosis of PCOS; the questionnaire yielded a sensitivity of 85% and a specificity of 85% on multivariate logistic regression and a sensitivity of 77% and a specificity of 94% using the 4-item questionnaire. Predictive accuracy was validated using a second sample of 117 patients, in addition to internal validation using bootstrap analysis. CONCLUSION We have constructed a simple clinical tool to help diagnose PCOS. This questionnaire can be easily incorporated into family physicians’ busy practices. PMID:17872783
Knowledge of the trophic structure of biota in aquatic sites offers potential for the construction of models to allow the prediction of contaminant bioaccumulation. Measurements of trophic position have been conducted using stable-nitrogen isotope ratios (δ15N) measured in fish m...
Johnson, Douglas H.; Cook, R.D.
2013-01-01
In her AAAS News & Notes piece "Can the Southwest manage its thirst?" (26 July, p. 362), K. Wren quotes Ajay Kalra, who advocates a particular method for predicting Colorado River streamflow "because it eschews complex physical climate models for a statistical data-driven modeling approach." A preference for data-driven models may be appropriate in this individual situation, but it is not so generally. Data-driven models often come with a warning against extrapolating beyond the range of the data used to develop the models. When the future is like the past, data-driven models can work well for prediction, but it is easy to over-model local or transient phenomena, often leading to predictive inaccuracy (1). Mechanistic models are built on established knowledge of the process that connects the response variables with the predictors, using information obtained outside of an extant data set. One may shy away from a mechanistic approach when the underlying process is judged to be too complicated, but good predictive models can be constructed with statistical components that account for ingredients missing in the mechanistic analysis. Models with sound mechanistic components are more generally applicable and robust than data-driven models.
Sahoo, B K; Sapra, B K; Gaware, J J; Kanse, S D; Mayya, Y S
2011-06-01
In recognition of the fact that building materials are an important source of indoor radon, second only to soil, surface radon exhalation fluxes have been extensively measured from the samples of these materials. Based on this flux data, several researchers have attempted to predict the inhalation dose attributable to radon emitted from walls and ceilings made up of these materials. However, an important aspect not considered in this methodology is the enhancement of the radon flux from the wall or the ceiling constructed using the same building material. This enhancement occurs mainly because of the change in the radon diffusion process from the former to the latter configuration. To predict the true radon flux from the wall based on the flux data of building material samples, we now propose a semi-empirical model involving radon diffusion length and the physical dimensions of the samples as well as wall thickness as other input parameters. This model has been established by statistically fitting the ratio of the solution to radon diffusion equations for the cases of three-dimensional cuboidal shaped building materials (such as brick, concrete block) and one dimensional wall system to a simple mathematical function. The model predictions have been validated against the measurements made at a new construction site. This model provides an alternative tool (substitute to conventional 1-D model) to estimate radon flux from a wall without relying on ²²⁶Ra content, radon emanation factor and bulk density of the samples. Moreover, it may be very useful in the context of developing building codes for radon regulation in new buildings. Copyright © 2011 Elsevier B.V. All rights reserved.
A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change.
Ashraf, M Irfan; Meng, Fan-Rui; Bourque, Charles P-A; MacLean, David A
2015-01-01
Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero means the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm(2) 5-year(-1) and volume: 0.0008 m(3) 5-year(-1)). Model variability described by root mean squared error (RMSE) in basal area prediction was 40.53 cm(2) 5-year(-1) and 0.0393 m(3) 5-year(-1) in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change. Artificial intelligence technology has substantial potential in forest modelling.
A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change
Ashraf, M. Irfan; Meng, Fan-Rui; Bourque, Charles P.-A.; MacLean, David A.
2015-01-01
Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero means the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm2 5-year-1 and volume: 0.0008 m3 5-year-1). Model variability described by root mean squared error (RMSE) in basal area prediction was 40.53 cm2 5-year-1 and 0.0393 m3 5-year-1 in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change. Artificial intelligence technology has substantial potential in forest modelling. PMID:26173081
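The validation statistics quoted above (model efficiency, BIAS, RMSE) can be computed as in the short sketch below; the observed and predicted growth values are placeholder numbers, not the Nova Scotia data.

```python
# Minimal sketch of the validation statistics: model efficiency (1 - SSE/SST,
# 1 = perfect fit, < 0 = worse than the mean), BIAS and RMSE. Data are placeholders.
import numpy as np

observed = np.array([12.1, 8.4, 15.0, 9.7, 11.3, 14.2])   # BA growth, cm^2 per 5 years
predicted = np.array([11.8, 9.0, 14.1, 10.2, 11.0, 13.6])

residuals = predicted - observed
efficiency = 1.0 - np.sum(residuals**2) / np.sum((observed - observed.mean())**2)
bias = residuals.mean()
rmse = np.sqrt(np.mean(residuals**2))
print(f"model efficiency = {efficiency:.2f}, BIAS = {bias:.3f}, RMSE = {rmse:.3f}")
```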
Wu, Baojian; Morrow, John Kenneth; Singh, Rashim; Zhang, Shuxing; Hu, Ming
2011-02-01
Glucuronidation is often recognized as one of the rate-determining factors that limit the bioavailability of flavonols. Hence, design and synthesis of more bioavailable flavonols would benefit from the establishment of predictive models of glucuronidation using kinetic parameters [e.g., K(m), V(max), intrinsic clearance (CL(int)) = V(max)/K(m)] derived for flavonols. This article aims to construct position (3-OH)-specific comparative molecular field analysis (CoMFA) models to describe UDP-glucuronosyltransferase (UGT) 1A9-mediated glucuronidation of flavonols, which can be used to design poor UGT1A9 substrates. The kinetics of recombinant UGT1A9-mediated 3-O-glucuronidation of 30 flavonols was characterized, and kinetic parameters (K(m), V(max), CL(int)) were obtained. The observed K(m), V(max), and CL(int) values of 3-O-glucuronidation ranged from 0.04 to 0.68 μM, 0.04 to 12.95 nmol/mg/min, and 0.06 to 109.60 ml/mg/min, respectively. To model UGT1A9-mediated glucuronidation, 30 flavonols were split into the training (23 compounds) and test (7 compounds) sets. These flavonols were then aligned by mapping the flavonols to specific common feature pharmacophores, which were used to construct CoMFA models of V(max) and CL(int), respectively. The derived CoMFA models possessed good internal and external consistency and showed statistical significance and substantive predictive abilities (V(max) model: q(2) = 0.738, r(2) = 0.976, r(pred)(2) = 0.735; CL(int) model: q(2) = 0.561, r(2) = 0.938, r(pred)(2) = 0.630). The contour maps derived from CoMFA modeling clearly indicate structural characteristics associated with rapid or slow 3-O-glucuronidation. In conclusion, the approach of coupling CoMFA analysis with a pharmacophore-based structural alignment is viable for constructing a predictive model for regiospecific glucuronidation rates of flavonols by UGT1A9.
Identification of informative features for predicting proinflammatory potentials of engine exhausts.
Wang, Chia-Chi; Lin, Ying-Chi; Lin, Yuan-Chung; Jhang, Syu-Ruei; Tung, Chun-Wei
2017-08-18
The immunotoxicity of engine exhausts is of high concern to human health due to the increasing prevalence of immune-related diseases. However, the evaluation of immunotoxicity of engine exhausts is currently based on expensive and time-consuming experiments. It is desirable to develop efficient methods for immunotoxicity assessment. To accelerate the development of safe alternative fuels, this study proposed a computational method for identifying informative features for predicting proinflammatory potentials of engine exhausts. A principal component regression (PCR) algorithm was applied to develop prediction models. The informative features were identified by a sequential backward feature elimination (SBFE) algorithm. A total of 19 informative chemical and biological features were successfully identified by SBFE algorithm. The informative features were utilized to develop a computational method named FS-CBM for predicting proinflammatory potentials of engine exhausts. FS-CBM model achieved a high performance with correlation coefficient values of 0.997 and 0.943 obtained from training and independent test sets, respectively. The FS-CBM model was developed for predicting proinflammatory potentials of engine exhausts with a large improvement on prediction performance compared with our previous CBM model. The proposed method could be further applied to construct models for bioactivities of mixtures.
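The combination of principal component regression with backward feature elimination can be sketched as below. The feature matrix is synthetic, and the full sequential elimination loop is reduced to a single illustrative pass.

```python
# Minimal sketch of principal component regression with one backward-elimination
# step; features and targets are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 19))                       # 60 exhaust samples x 19 features
y = X[:, :4] @ np.array([0.8, -0.5, 0.3, 0.6]) + rng.normal(0, 0.2, 60)

pcr = make_pipeline(PCA(n_components=5), LinearRegression())
baseline = cross_val_score(pcr, X, y, cv=5, scoring="r2").mean()

# one backward-elimination step: drop the feature whose removal hurts least
drop_scores = {j: cross_val_score(pcr, np.delete(X, j, axis=1), y, cv=5, scoring="r2").mean()
               for j in range(X.shape[1])}
best_drop = max(drop_scores, key=drop_scores.get)
print(f"baseline R2 = {baseline:.2f}; dropping feature {best_drop} gives R2 = {drop_scores[best_drop]:.2f}")
```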
Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R
2016-12-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms-supervised principal components, regularization, and boosting-can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach-or perhaps because of them-SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
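The regularization example can be made concrete with a short sketch: a lasso penalty tuned by cross-validation selects a sparse subset of items, which is one way of trading model complexity against expected prediction error. The item responses and outcome below are random placeholders, not a real personality item pool.

```python
# Minimal sketch of cross-validated regularized regression for criterion-keyed
# scale construction; item responses and outcome are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(8)
items = rng.normal(size=(500, 100))                  # 500 respondents x 100 items
outcome = items[:, :10] @ rng.normal(size=10) + rng.normal(0, 1.0, 500)

scale = LassoCV(cv=5).fit(items, outcome)            # penalty strength picked by CV
kept = np.flatnonzero(scale.coef_)
print(f"items retained in the criterion-keyed scale: {kept.size} of {items.shape[1]}")
```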
Generalized role for the cerebellum in encoding internal models: evidence from semantic processing.
Moberget, Torgeir; Gullesen, Eva Hilland; Andersson, Stein; Ivry, Richard B; Endestad, Tor
2014-02-19
The striking homogeneity of cerebellar microanatomy is strongly suggestive of a corresponding uniformity of function. Consequently, theoretical models of the cerebellum's role in motor control should offer important clues regarding cerebellar contributions to cognition. One such influential theory holds that the cerebellum encodes internal models, neural representations of the context-specific dynamic properties of an object, to facilitate predictive control when manipulating the object. The present study examined whether this theoretical construct can shed light on the contribution of the cerebellum to language processing. We reasoned that the cerebellum might perform a similar coordinative function when the context provided by the initial part of a sentence can be highly predictive of the end of the sentence. Using functional MRI in humans we tested two predictions derived from this hypothesis, building on previous neuroimaging studies of internal models in motor control. First, focal cerebellar activation-reflecting the operation of acquired internal models-should be enhanced when the linguistic context leads terminal words to be predictable. Second, more widespread activation should be observed when such predictions are violated, reflecting the processing of error signals that can be used to update internal models. Both predictions were confirmed, with predictability and prediction violations associated with increased blood oxygenation level-dependent signal in the posterior cerebellum (Crus I/II). Our results provide further evidence for cerebellar involvement in predictive language processing and suggest that the notion of cerebellar internal models may be extended to the language domain.
Effect of thermal noise on vesicles and capsules in shear flow.
Abreu, David; Seifert, Udo
2012-07-01
We add thermal noise consistently to reduced models of undeformable vesicles and capsules in shear flow and derive analytically the corresponding stochastic equations of motion. We calculate the steady-state probability distribution function and construct the corresponding phase diagrams for the different dynamical regimes. For fluid vesicles, we predict that at small shear rates thermal fluctuations induce a tumbling motion for any viscosity contrast. For elastic capsules, due to thermal mixing, an intermittent regime appears in regions where deterministic models predict only pure tank treading or tumbling.
Modeling polyvinyl chloride Plasma Modification by Neural Networks
NASA Astrophysics Data System (ADS)
Wang, Changquan
2018-03-01
Neural network models were constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using a uniform design. Discharge voltage, discharge gas gap and treatment time were used as the neural network input layer parameters. The measured values of contact angle were used as the output layer parameter. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based on the neural networks. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for discharge plasma surface modification analysis.
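A minimal sketch of such a network is given below, assuming a small feed-forward regressor with one hidden layer; the training points are invented placeholders rather than the uniform-design experiments, and scikit-learn's MLPRegressor stands in for the authors' network.

```python
# Minimal sketch of a small feed-forward network mapping discharge voltage,
# gas gap and treatment time to contact angle; training data are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.array([[8.0, 2.0, 30], [10.0, 2.0, 60], [12.0, 3.0, 90],
              [8.0, 3.0, 120], [10.0, 4.0, 30], [12.0, 4.0, 60]], dtype=float)  # kV, mm, s
y = np.array([78.0, 70.0, 62.0, 66.0, 74.0, 64.0])    # contact angle in degrees (hypothetical)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
print("predicted contact angle:", model.predict([[11.0, 3.0, 75]]).round(1))
```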
The construction phase’s influence to the moving ability of cross-sections of woven structure
NASA Astrophysics Data System (ADS)
Inogamdjanov, D.; Daminov, A.; Kasimov, O.
2017-10-01
The purpose of this study is to establish a basis for predicting the properties of single-layer flat woven fabrics as a function of changes in the construction phase. A structural model of the cross-section of a single-layered fabric is described based on Pierce's model. The transformation of the yarn form (straight, semi-arch, and arch) with changing yarn tension is considered according to Novikov's theory. The contributions of the warp and weft yarns to the movement index, and their total moving ability in cross-sections at all structural phases of the fabric, are summarized.
Starobinsky-like inflation and neutrino masses in a no-scale SO(10) model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, John; Theoretical Physics Department, CERN,CH-1211 Geneva 23; Garcia, Marcos A.G.
2016-11-08
Using a no-scale supergravity framework, we construct an SO(10) model that makes predictions for cosmic microwave background observables similar to those of the Starobinsky model of inflation, and incorporates a double-seesaw model for neutrino masses consistent with oscillation experiments and late-time cosmology. We pay particular attention to the behaviour of the scalar fields during inflation and the subsequent reheating.
Modeling listeners' emotional response to music.
Eerola, Tuomas
2012-10-01
An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention in recent years, and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlation design. The construction of the computational model is divided into four separate phases, with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of the listeners' self-reports of the emotions expressed by music and the models show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.
Foundations for computer simulation of a low pressure oil flooded single screw air compressor
NASA Astrophysics Data System (ADS)
Bein, T. W.
1981-12-01
The necessary logic to construct a computer model for predicting the performance of an oil-flooded, single-screw air compressor is developed. The geometric variables and relationships used to describe the general single-screw mechanism are developed. The governing equations describing the processes are developed from their primary relationships. The assumptions used in the development are also defined and justified. The computer model predicts the internal pressure, temperature, and flow rates through the leakage paths throughout the compression cycle of the single-screw compressor. The model uses empirical external values as the basis for the internal predictions. The computed values are compared with the empirical values, and conclusions are drawn based on the results. Recommendations are made for future efforts to improve the computer model and to verify some of the conclusions drawn.
Antecedents of eating disorders and muscle dysmorphia in a non-clinical sample.
Lamanna, J; Grieve, F G; Derryberry, W Pitt; Hakman, M; McClure, A
2010-01-01
Muscle Dysmorphia (MD) has recently been conceptualized as the male form of Eating Disorders (ED), although it is not currently classified as an ED. The current study compares etiological models of MD symptomatology and ED symptomatology. It was hypothesized that sociocultural influences on appearance (SIA) would predict body dissatisfaction (BD), and that this relationship would be mediated by self-esteem (SE) and perfectionism (P); that BD would predict negative affect (NA); and that NA would predict MD and ED symptomatology. Two hundred forty-seven female and 101 male college students at a mid-south university completed the study. All participants completed measures assessing each of the constructs, and multiple regression analyses were conducted to test each model's fit. In both models, most predictor paths were significant. These results suggest similarity in symptomatology and etiological models between ED and MD.
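The mediation-style paths described above can be examined with ordinary least-squares regressions; the sketch below shows one hedged way to do so with statsmodels on simulated data. The variable names, effect sizes and the simple a/b/c' decomposition are assumptions for illustration, not the study's actual analysis.

```python
# Simplified sketch (not the study's analysis): testing a mediation-style path
# in which sociocultural influence on appearance (SIA) predicts body
# dissatisfaction (BD) partly through self-esteem (SE), using OLS regressions.
# All variable names and data here are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 348  # roughly the reported sample size (247 + 101)

sia = rng.normal(0, 1, n)
se = -0.4 * sia + rng.normal(0, 1, n)            # SIA -> lower self-esteem
bd = 0.3 * sia - 0.5 * se + rng.normal(0, 1, n)  # both predict dissatisfaction

def ols(y, x_cols):
    X = sm.add_constant(np.column_stack(x_cols))
    return sm.OLS(y, X).fit()

# Path a: SIA -> SE; paths b and c': SE and SIA -> BD
path_a = ols(se, [sia])
path_bc = ols(bd, [sia, se])

print("a  (SIA -> SE):       ", round(path_a.params[1], 3))
print("c' (SIA -> BD | SE):  ", round(path_bc.params[1], 3))
print("b  (SE -> BD | SIA):  ", round(path_bc.params[2], 3))
print("indirect effect a*b:  ", round(path_a.params[1] * path_bc.params[2], 3))
```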
Predicting DNA hybridization kinetics from sequence
NASA Astrophysics Data System (ADS)
Zhang, Jinny X.; Fang, John Z.; Duan, Wei; Wu, Lucia R.; Zhang, Angela W.; Dalchau, Neil; Yordanov, Boyan; Petersen, Rasmus; Phillips, Andrew; Zhang, David Yu
2018-01-01
Hybridization is a key molecular process in biology and biotechnology, but so far there is no predictive model for accurately determining hybridization rate constants based on sequence information. Here, we report a weighted neighbour voting (WNV) prediction algorithm, in which the hybridization rate constant of an unknown sequence is predicted on the basis of its similarity to reactions with known rate constants. To construct this algorithm we first performed 210 fluorescence kinetics experiments to observe the hybridization kinetics of 100 different DNA target and probe pairs (36 nt sub-sequences of the CYCS and VEGF genes) at temperatures ranging from 28 to 55 °C. Automated feature selection and weighting optimization resulted in a final six-feature WNV model, which can predict hybridization rate constants of new sequences to within a factor of 3 with ∼91% accuracy, based on leave-one-out cross-validation. Accurate prediction of hybridization kinetics allows the design of efficient probe sequences for genomics research.
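A minimal sketch of the weighted-neighbour-voting idea is given below: the log rate constant of a query is taken as a similarity-weighted average over reactions with known constants. The toy features (GC content, length, temperature), the Gaussian weighting and the feature weights are assumptions; they are not the six features or the optimized weights reported in the paper.

```python
# Hedged sketch of a weighted-neighbour-voting style predictor: the (log) rate
# constant of a query sequence is a similarity-weighted average over reactions
# with known rate constants. Features and weights are illustrative only.
import numpy as np

def featurize(seq, temp_c):
    """Toy features: GC fraction, normalized length, normalized temperature."""
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    return np.array([gc, len(seq) / 50.0, temp_c / 60.0])

def wnv_predict(query_feat, known_feats, known_logk, feature_weights, bandwidth=0.2):
    """Weighted neighbour voting on log10 rate constants."""
    diffs = (known_feats - query_feat) * feature_weights
    dist = np.sqrt((diffs ** 2).sum(axis=1))
    w = np.exp(-(dist / bandwidth) ** 2)   # closer reactions vote more strongly
    return (w * known_logk).sum() / w.sum()

# Hypothetical training data: sequence, temperature (°C), measured log10(k)
train = [("ATGCGTACGTTAGCATGCCA", 37.0, 6.1),
         ("GGCCGCGTTATACGCGGCTA", 37.0, 5.4),
         ("ATATATTTAAATATATATTA", 55.0, 6.8)]
feature_weights = np.array([2.0, 0.5, 1.0])  # assumed weights, normally optimized

known_feats = np.array([featurize(s, t) for s, t, _ in train])
known_logk = np.array([logk for _, _, logk in train])

query = featurize("ATGCCGTTAGCAATGCGTCA", 45.0)
print("Predicted log10(k):",
      round(wnv_predict(query, known_feats, known_logk, feature_weights), 2))
```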
Automated adaptive inference of phenomenological dynamical models.
Daniels, Bryan C; Nemenman, Ilya
2015-08-21
Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
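The following sketch is a deliberately simplified illustration of the central idea, adapting model complexity to the available data via an information criterion, and is not the authors' phenomenological-network framework; the polynomial model class, the noise level and the use of BIC are assumptions.

```python
# A much-simplified illustration of the core idea (not the authors' framework):
# fit phenomenological models of increasing complexity and let an information
# criterion decide how much complexity the data actually support.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 25)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size)   # sparse, noisy data

def fit_bic(degree):
    """Fit a polynomial of the given degree and return its BIC."""
    coeffs = np.polyfit(t, y, degree)
    resid = y - np.polyval(coeffs, t)
    n, k = t.size, degree + 1
    sigma2 = np.mean(resid ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return k * np.log(n) - 2 * log_lik

bics = {d: fit_bic(d) for d in range(1, 10)}
best = min(bics, key=bics.get)
print("BIC by model complexity:", {d: round(b, 1) for d, b in bics.items()})
print("Selected complexity (polynomial degree):", best)
```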
Modeling and dynamic environment analysis technology for spacecraft
NASA Astrophysics Data System (ADS)
Fang, Ren; Zhaohong, Qin; Zhong, Zhang; Zhenhao, Liu; Kai, Yuan; Long, Wei
Spacecraft sustain complex and severe vibration and acoustic environments during flight. Predicting the resulting structural responses, including numerical prediction of fluctuating pressure, model updating, and random vibration and acoustic analysis, plays an important role during the design, manufacture and ground testing of spacecraft. In this paper, Monotonically Integrated Large Eddy Simulation (MILES) is introduced to predict the fluctuating pressure on the fairing. The exact flow structures of the fairing wall surface under different Mach numbers are obtained, and a spacecraft model is then constructed using the finite element method (FEM). According to the modal test data, the model is updated by the penalty method. On this basis, the random vibration and acoustic responses of the fairing and satellite are analyzed by different methods. The simulated results agree well with the experimental ones, which shows the validity of the modeling and dynamic environment analysis technology. This information can better support test planning, the definition of test conditions and the design of optimal structures.
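As a point of reference for the random-vibration responses mentioned above, the sketch below evaluates the classic Miles single-degree-of-freedom approximation, which is often used for quick response estimates alongside FEM-based analyses; it is not the method used in the paper, and all input numbers are hypothetical.

```python
# Not the FEM/LES workflow described above: just the classic Miles single-DOF
# approximation often used for quick random-vibration response estimates,
# shown here to illustrate what a "random vibration response" prediction means.
# All numbers are hypothetical.
import math

def miles_grms(fn_hz, q_factor, asd_g2_per_hz):
    """RMS acceleration response of a lightly damped SDOF system to a flat
    base-input acceleration spectral density around its natural frequency."""
    return math.sqrt(math.pi / 2.0 * fn_hz * q_factor * asd_g2_per_hz)

fn = 120.0   # natural frequency of the component (Hz), assumed
Q = 25.0     # amplification factor (1 / (2*zeta)), assumed
asd = 0.04   # input acceleration spectral density at fn (g^2/Hz), assumed

print(f"Approximate response: {miles_grms(fn, Q, asd):.1f} g RMS")
```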
Tsunashima, Ryo; Naoi, Yasuto; Shimazu, Kenzo; Kagara, Naofumi; Shimoda, Masashi; Tanei, Tomonori; Miyake, Tomohiro; Kim, Seung Jin; Noguchi, Shinzaburo
2018-05-04
Prediction models for late (>5 years) recurrence in ER-positive breast cancer need to be developed for the accurate selection of patients for extended hormonal therapy. We attempted to develop such a prediction model focusing on the differences in gene expression between breast cancers with early and late recurrence. For the training set, 779 ER-positive breast cancers treated with tamoxifen alone for 5 years were selected from the databases (GSE6532, GSE12093, GSE17705, and GSE26971). For the validation set, 221 ER-positive breast cancers treated with adjuvant hormonal therapy for 5 years, with or without chemotherapy, at our hospital were included. Gene expression was assayed by DNA microarray analysis (Affymetrix U133 Plus 2.0). With the 42 genes differentially expressed between the early and late recurrence breast cancers in the training set, a prediction model (42GC) for late recurrence was constructed. As expected, the patients classified by 42GC into the late recurrence-like group showed a significantly (P = 0.006) higher late recurrence rate, but they also showed a significantly (P = 1.62 × 10⁻¹³) lower rate of early recurrence than the non-late recurrence-like group. These observations were confirmed in the validation set, i.e., P = 0.020 for late recurrence and P = 5.70 × 10⁻⁵ for early recurrence. We developed a unique prediction model (42GC) for late recurrence by focusing on the biological differences between breast cancers with early and late recurrence. Interestingly, patients in the late recurrence-like group defined by 42GC were at low risk for early recurrence.
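The sketch below illustrates, in a heavily simplified form, how a gene signature such as 42GC might be derived and applied: genes are ranked by differential expression between early- and late-recurrence tumours, and new samples are scored by a nearest-centroid rule. The simulated expression matrix, the t-test ranking and the centroid classifier are assumptions, not the published 42GC procedure.

```python
# Illustrative sketch only (not the 42GC pipeline): select genes differentially
# expressed between early- and late-recurrence tumours, then score new samples
# against the resulting signature with a nearest-centroid rule.
# The expression matrix and labels below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_genes, n_early, n_late = 2000, 60, 40
expr = rng.normal(0, 1, (n_genes, n_early + n_late))
expr[:50, n_early:] += 1.0                       # 50 genes shifted in late recurrence
labels = np.array([0] * n_early + [1] * n_late)  # 0 = early, 1 = late recurrence

# Differential expression: two-sample t-test per gene, keep the strongest hits
t, p = stats.ttest_ind(expr[:, labels == 1], expr[:, labels == 0], axis=1)
signature = np.argsort(p)[:42]                   # 42-gene signature, by analogy

# Nearest-centroid classifier on the signature genes
centroid_late = expr[np.ix_(signature, np.where(labels == 1)[0])].mean(axis=1)
centroid_early = expr[np.ix_(signature, np.where(labels == 0)[0])].mean(axis=1)

def classify(sample):
    d_late = np.linalg.norm(sample[signature] - centroid_late)
    d_early = np.linalg.norm(sample[signature] - centroid_early)
    return "late-recurrence-like" if d_late < d_early else "non-late-recurrence-like"

new_sample = rng.normal(0, 1, n_genes)
print(classify(new_sample))
```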
Boik, John C; Newman, Robert A
2008-01-01
Background: Quantitative structure-activity relationship (QSAR) models have become popular tools to help identify promising lead compounds in anticancer drug development. Few QSAR studies have investigated multitask learning, however. Multitask learning is an approach that allows distinct but related data sets to be used in training. In this paper, a suite of three QSAR models is developed to identify compounds that are likely to (a) exhibit cytotoxic behavior against cancer cells, (b) exhibit high rat LD50 values (low systemic toxicity), and (c) exhibit low to modest human oral clearance (favorable pharmacokinetic characteristics). Models were constructed using Kernel Multitask Latent Analysis (KMLA), an approach that can effectively handle a large number of correlated data features, nonlinear relationships between features and responses, and multitask learning. Multitask learning is particularly useful when the number of available training records is small relative to the number of features, as was the case with the oral clearance data. Results: Multitask learning modestly but significantly improved the classification precision for the oral clearance model. For the cytotoxicity model, which was constructed using a large number of records, multitask learning did not affect precision but did reduce computation time. The models developed here were used to predict activities for 115,000 natural compounds. Hundreds of natural compounds, particularly in the anthraquinone and flavonoid groups, were predicted to be cytotoxic, have high LD50 values, and have low to moderate oral clearance. Conclusion: Multitask learning can be useful in some QSAR models. A suite of QSAR models was constructed and used to screen a large drug library for compounds likely to be cytotoxic to multiple cancer cell lines in vitro, have low systemic toxicity in rats, and have favorable pharmacokinetic properties in humans. PMID:18554402
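Because KMLA itself is not a standard library routine, the sketch below illustrates multitask learning with a simpler, well-known device: feature augmentation (a shared block plus one block per task) combined with ridge regression trained jointly on all tasks. The descriptors, activities and task sizes are simulated assumptions, chosen to mirror the situation in which the smallest task (oral clearance) benefits most from sharing.

```python
# Hedged illustration of multitask learning in a QSAR-like setting, using the
# feature-augmentation trick (shared block plus per-task block) with ridge
# regression -- not the KMLA method used in the paper.
# Descriptors and activities below are simulated placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_features = 30
w_shared = rng.normal(0, 1, n_features)            # structure shared across tasks

def make_task(n_samples, task_shift):
    X = rng.normal(0, 1, (n_samples, n_features))  # molecular descriptors
    w_task = w_shared + rng.normal(0, task_shift, n_features)
    y = X @ w_task + rng.normal(0, 0.5, n_samples)
    return X, y

tasks = [make_task(200, 0.2),   # cytotoxicity: plenty of records
         make_task(200, 0.2),   # rat LD50
         make_task(25, 0.2)]    # oral clearance: few records, benefits most

def augment(X, task_id, n_tasks):
    """[shared block | zeros ... | task-specific block | ... zeros]"""
    blocks = [X] + [X if t == task_id else np.zeros_like(X) for t in range(n_tasks)]
    return np.hstack(blocks)

X_all = np.vstack([augment(X, t, len(tasks)) for t, (X, _) in enumerate(tasks)])
y_all = np.concatenate([y for _, y in tasks])

model = Ridge(alpha=1.0).fit(X_all, y_all)

# Predict activity for new compounds on the small task (task 2, oral clearance)
X_new = rng.normal(0, 1, (5, n_features))
print(model.predict(augment(X_new, 2, len(tasks))))
```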