Sample records for introducing model predictive

  1. Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions

    NASA Technical Reports Server (NTRS)

    Balmes, Etienne

    1993-01-01

    An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given, which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally, a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.

  2. The Real World Significance of Performance Prediction

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu

    2012-01-01

    In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…

  3. An Excel Solver Exercise to Introduce Nonlinear Regression

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Business students taking business analytics courses that have significant predictive modeling components, such as marketing research, data mining, forecasting, and advanced financial modeling, are introduced to nonlinear regression using application software that is a "black box" to the students. Thus, although correct models are…

  4. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
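
    The sensitivity spectrum described above can be made concrete with a small numerical experiment. The sketch below is illustrative only: the model, data grid, and parameter values are invented, and NumPy is assumed. It computes the eigenvalues of the Gauss-Newton Hessian J^T J for a toy sum-of-exponentials model; eigenvalue ratios spanning several decades are the signature of sloppiness the abstract refers to.

    ```python
    import numpy as np

    t = np.linspace(0.1, 3.0, 50)

    def model(theta):
        # y(t) = exp(-k1*t) + exp(-k2*t), parameterized by log-rates
        k1, k2 = np.exp(theta)
        return np.exp(-k1 * t) + np.exp(-k2 * t)

    theta0 = np.log([1.0, 1.1])   # nearly degenerate rates -> a sloppy direction

    # Finite-difference Jacobian of the model output w.r.t. the parameters
    eps = 1e-6
    J = np.empty((t.size, theta0.size))
    for j in range(theta0.size):
        d = np.zeros_like(theta0)
        d[j] = eps
        J[:, j] = (model(theta0 + d) - model(theta0 - d)) / (2 * eps)

    H = J.T @ J   # Gauss-Newton Hessian, proportional to the Fisher information
    eig = np.linalg.eigvalsh(H)
    print("eigenvalues:", eig)
    print("decades spanned:", np.log10(eig.max() / eig.min()))
    ```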

  4. Application of "Earl's Assessment 'as', Assessment 'for', and Assessment 'of' Learning Model" with Orthopaedic Assessment Clinical Competence

    ERIC Educational Resources Information Center

    Lafave, Mark R.; Katz, Larry; Vaughn, Norman

    2013-01-01

    Context: In order to study the efficacy of assessment methods, a theoretical framework of Earl's model of assessment was introduced. Objective: (1) Introduce the predictive learning assessment model (PLAM) as an application of Earl's model of learning; (2) test Earl's model of learning through the use of the Standardized Orthopedic Assessment Tool…

  6. A summary of wind power prediction methods

    NASA Astrophysics Data System (ADS)

    Wang, Yuqi

    2018-06-01

    The deterministic prediction of wind power, the probabilistic prediction, and the prediction of wind power ramp events are introduced in this paper. Deterministic prediction includes the prediction of statistical learning based on historical data and the prediction of physical models based on NWP data. Due to the great impact of wind power ramp events on the power system, this paper also introduces the prediction of wind power ramp events. Finally, the evaluation indicators for all kinds of prediction are given. Wind power prediction can be a good remedy for the adverse effects that the abrupt, intermittent, and undulating nature of wind power has on the power system.

  7. Threshold models for genome-enabled prediction of ordinal categorical traits in plant breeding.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo; Eskridge, Kent; Crossa, José

    2014-12-23

    Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9-14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. Copyright © 2015 Montesinos-López et al.
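
    The threshold idea in this abstract can be illustrated compactly: an unobserved continuous liability is cut at fixed thresholds to produce the ordinal scores. The sketch below is a toy stand-in, not the authors' TGBLUP implementation; NumPy and SciPy are assumed, and all values are invented.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n = 278                                    # e.g., number of lines
    u = rng.normal(0.0, 1.0, n)                # genomic values (GBLUP would supply these)
    liability = u + rng.normal(0.0, 1.0, n)    # latent scale = genomic value + residual

    thresholds = np.array([-1.5, -0.5, 0.5, 1.5])        # 4 cuts -> 5 ordered scores
    score = 1 + np.searchsorted(thresholds, liability)   # ordinal score in 1..5

    def category_probs(u_i):
        """P(score = k | u_i) under a probit link, k = 1..5."""
        cuts = np.concatenate(([-np.inf], thresholds, [np.inf]))
        return norm.cdf(cuts[1:] - u_i) - norm.cdf(cuts[:-1] - u_i)

    print(np.round(category_probs(0.8), 3))   # category probabilities for one line
    ```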

  8. Prediction of pilot opinion ratings using an optimal pilot model. [of aircraft handling qualities in multiaxis tasks

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1977-01-01

    A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.

  9. Assimilation of Satellite Data to Improve Cloud Simulation in WRF Model

    NASA Astrophysics Data System (ADS)

    Park, Y. H.; Pour Biazar, A.; McNider, R. T.

    2012-12-01

    A simple approach has been introduced to improve cloud simulation spatially and temporally in a meteorological model. The first step of this approach is to use Geostationary Operational Environmental Satellite (GOES) observations to identify clouds and estimate the cloud structure. Then, by comparing GOES observations to the model cloud field, we identify areas in which the model has under-predicted or over-predicted clouds. Next, by introducing subsidence in areas with over-prediction and lifting in areas with under-prediction, erroneous clouds are removed and new clouds are formed. The technique estimates the vertical velocity needed for the cloud correction and then uses a one-dimensional variational scheme (1D-Var) to calculate the horizontal divergence components and the consequent horizontal wind components needed to sustain such vertical velocity. Finally, the new horizontal winds are provided as a nudging field to the model. This nudging provides the dynamical support needed to create/clear clouds in a sustainable manner. The technique was implemented and tested in the Weather Research and Forecasting (WRF) model and resulted in substantial improvement in model-simulated clouds. Some of the results are presented here.
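
    The continuity-equation step described above admits a short numerical illustration: for a prescribed vertical-velocity profile, the sustaining horizontal divergence follows from incompressible continuity, div_h = -dw/dz. The profile below is invented and NumPy is assumed; this is a sketch of the principle, not the 1D-Var scheme itself.

    ```python
    import numpy as np

    z = np.linspace(0.0, 10_000.0, 101)                      # height [m]
    w_target = 0.5 * np.exp(-((z - 3000.0) / 1500.0) ** 2)   # lifting peak [m/s]

    div_h = -np.gradient(w_target, z)   # horizontal divergence [1/s] from continuity
    # Convergence (div_h < 0) below the updraft peak feeds the lifting;
    # divergence (div_h > 0) above it lets the air spread out.
    print(div_h[z < 3000].mean(), div_h[z > 3000].mean())
    ```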

  10. A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares

    PubMed Central

    Yang, Chuanlei; Wang, Yinyan; Wang, Hechun

    2018-01-01

    To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine. However, due to the variable diffuser vane angle (DVA), predicting the performance of a VGC is more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model was proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build a polynomial relationship between the coefficients and the relative speed and DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients while achieving acceptable fit accuracy. The prediction model was validated with sample data, and to demonstrate its superiority in compressor performance prediction, its results were compared with those of the look-up table and back-propagation neural network (BPNN) methods. The validation and comparison results show that the prediction accuracy of the newly developed model is acceptable and that it is much more suitable than the look-up table and BPNN methods under the same conditions for VGC performance prediction. Moreover, the new prediction model provides a novel and effective solution for the VGC and can be used to improve the accuracy of thermodynamic models for turbocharged diesel engines in the future. PMID:29410849

  11. A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares.

    PubMed

    Li, Xu; Yang, Chuanlei; Wang, Yinyan; Wang, Hechun

    2018-01-01

    To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine. However, due to the variable diffuser vane angle (DVA), predicting the performance of a VGC is more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model was proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build a polynomial relationship between the coefficients and the relative speed and DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients while achieving acceptable fit accuracy. The prediction model was validated with sample data, and to demonstrate its superiority in compressor performance prediction, its results were compared with those of the look-up table and back-propagation neural network (BPNN) methods. The validation and comparison results show that the prediction accuracy of the newly developed model is acceptable and that it is much more suitable than the look-up table and BPNN methods under the same conditions for VGC performance prediction. Moreover, the new prediction model provides a novel and effective solution for the VGC and can be used to improve the accuracy of thermodynamic models for turbocharged diesel engines in the future.
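
    A hedged sketch of the two-stage idea shared by this record and the preceding one: each measured speed line is fitted with a conic (elliptical) equation by linear least squares, and the fitted coefficients are then regressed on relative speed and diffuser vane angle (the paper's PLS step is only indicated schematically here). Data are synthetic and NumPy is assumed.

    ```python
    import numpy as np

    def fit_conic(x, y):
        """Fit A*x^2 + B*x*y + C*y^2 + D*x + E*y = 1 to one speed line."""
        M = np.column_stack([x**2, x * y, y**2, x, y])
        coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
        return coef   # [A, B, C, D, E]

    # Synthetic speed line: noisy points on an ellipse
    rng = np.random.default_rng(0)
    t = np.linspace(0.2, 1.4, 25)
    x = 3.0 + 1.2 * np.cos(t) + 0.01 * rng.normal(size=t.size)   # corrected mass flow
    y = 2.0 + 0.6 * np.sin(t) + 0.01 * rng.normal(size=t.size)   # pressure ratio
    print(np.round(fit_conic(x, y), 3))

    # Stage 2 (schematic): repeat the fit at several (speed, DVA) settings, then
    # regress each conic coefficient on polynomial terms of speed n and angle a,
    # e.g. features [1, n, a, n**2, n*a, a**2], so that lines at unseen settings
    # can be predicted.
    ```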

  12. Evaluating the accuracy of recent electron transport models at predicting Hall thruster plasma dynamics

    NASA Astrophysics Data System (ADS)

    Cappelli, Mark; Young, Christopher

    2016-10-01

    We present continued efforts towards introducing physical models for cross-magnetic field electron transport into Hall thruster discharge simulations. In particular, we seek to evaluate whether such models accurately capture ion dynamics, both averaged and resolved in time, through comparisons with measured ion velocity distributions which are now becoming available for several devices. Here, we describe a turbulent electron transport model that is integrated into 2-D hybrid fluid/PIC simulations of a 72 mm diameter laboratory thruster operating at 400 W. We also compare this model's predictions with one recently proposed by Lafleur et al. Introducing these models into 2-D hybrid simulations is relatively straightforward and leverages the existing framework for solving the electron fluid equations. The models are tested for their ability to capture the time-averaged experimental discharge current and its fluctuations due to ionization instabilities. Model predictions are also more rigorously evaluated against recent laser-induced fluorescence measurements of time-resolved ion velocity distributions.

  13. Quicksilver: Fast predictive image registration - A deep learning approach.

    PubMed

    Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc

    2017-09-01

    This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled during the testing time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.
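
    As a rough structural illustration (not the authors' architecture), the sketch below wires up a tiny 3-D encoder-decoder that maps a pair of image patches to a momentum patch, mirroring the patch-wise momentum prediction described above. PyTorch is assumed; layer sizes and patch dimensions are invented.

    ```python
    import torch
    import torch.nn as nn

    class PatchMomentumNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),   # moving + fixed patch
                nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(16, 3, 4, stride=2, padding=1),     # 3-channel momentum
            )

        def forward(self, moving, fixed):
            x = torch.cat([moving, fixed], dim=1)
            return self.decoder(self.encoder(x))

    net = PatchMomentumNet()
    moving = torch.randn(1, 1, 16, 16, 16)   # one 16^3 patch from each image
    fixed = torch.randn(1, 1, 16, 16, 16)
    momentum = net(moving, fixed)
    print(momentum.shape)                    # torch.Size([1, 3, 16, 16, 16])
    ```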

  14. Applications for predictive microbiology to food packaging

    USDA-ARS?s Scientific Manuscript database

    Predictive microbiology has been used for several years in the food industry to predict microbial growth, inactivation and survival. Predictive models provide a useful tool in risk assessment, HACCP set-up and GMP for the food industry to enhance microbial food safety. This report introduces the c...

  15. Classification and Prediction of RF Coupling inside A-320 and A-319 Airplanes using Feed Forward Neural Networks

    NASA Technical Reports Server (NTRS)

    Jafri, Madiha; Ely, Jay; Vahala, Linda

    2006-01-01

    Neural Network Modeling is introduced in this paper to classify and predict Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. Modeled results are compared with measured data and a plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.

  16. Development of a coupled hydrological - hydrodynamic model for probabilistic catchment flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Quinn, Niall; Freer, Jim; Coxon, Gemma; Dunne, Toby; Neal, Jeff; Bates, Paul; Sampson, Chris; Smith, Andy; Parkin, Geoff

    2017-04-01

    Computationally efficient flood inundation modelling systems capable of representing important hydrological and hydrodynamic flood generating processes over relatively large regions are vital for those interested in flood preparation, response, and real-time forecasting. However, such systems are currently not readily available. This can be particularly important where flood predictions from intense rainfall are considered, as the processes leading to flooding often involve localised, non-linear, spatially connected hillslope-catchment responses. Therefore, this research introduces a novel hydrological-hydraulic modelling framework for the provision of probabilistic flood inundation predictions across catchment to regional scales that explicitly accounts for spatial variability in rainfall-runoff and routing processes. Approaches have been developed to automate the provision of required input datasets and estimate essential catchment characteristics from freely available, national datasets. This is an essential component of the framework, as when making predictions over multiple catchments or at relatively large scales, where data are often scarce, obtaining local information and manually incorporating it into the model quickly becomes infeasible. An extreme flooding event in the town of Morpeth, NE England, in 2008 was used as a first case-study evaluation of the modelling framework introduced. The results demonstrated a high degree of prediction accuracy when comparing modelled and reconstructed event characteristics, while the efficiency of the modelling approach enabled the generation of relatively large ensembles of realisations from which uncertainty within the prediction may be represented. This research supports previous literature highlighting the importance of probabilistic forecasting, particularly during extreme events, which can often be poorly characterised or even missed by deterministic predictions due to the inherent uncertainty in any model application. Future research will aim to further evaluate the robustness of the approaches introduced by applying the modelling framework to a variety of historical flood events across UK catchments. Furthermore, the flexibility and efficiency of the framework are ideally suited to examining the propagation of errors through the model, which will help gain a better understanding of the dominant sources of uncertainty currently impacting flood inundation predictions.

  17. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented, and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
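
    One of the named techniques, the stochastic (perturbed-observation) ensemble Kalman filter update, is compact enough to sketch. Dimensions, the observation operator, and noise levels below are invented; NumPy is assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_state, n_obs, n_ens = 10, 3, 50

    X = rng.normal(size=(n_state, n_ens))   # forecast ensemble (one member per column)
    H = np.zeros((n_obs, n_state))
    H[[0, 1, 2], [0, 4, 9]] = 1.0           # observe three state components
    R = 0.1 * np.eye(n_obs)                 # observation-error covariance
    y = rng.normal(size=n_obs)              # observation vector

    A = X - X.mean(axis=1, keepdims=True)   # ensemble anomalies
    P = A @ A.T / (n_ens - 1)               # sample forecast covariance
    K = P @ H.T @ np.linalg.solve(H @ P @ H.T + R, np.eye(n_obs))   # Kalman gain

    # Perturbed-observation update: each member assimilates a noisy copy of y
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    Xa = X + K @ (Y - H @ X)                # analysis ensemble
    print(Xa.mean(axis=1)[:3])
    ```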

  18. Warped Linear Prediction of Physical Model Excitations with Applications in Audio Compression and Instrument Synthesis

    NASA Astrophysics Data System (ADS)

    Glass, Alexis; Fukudome, Kimitoshi

    2004-12-01

    A sound recording of a plucked string instrument is encoded and resynthesized using two stages of prediction. In the first stage of prediction, a simple physical model of a plucked string is estimated and the instrument excitation is obtained. The second stage of prediction compensates for the simplicity of the model in the first stage by encoding either the instrument excitation or the model error using warped linear prediction. These two methods of compensation are compared with each other and with the case of single-stage warped linear prediction; adjustments are introduced, and their applications to instrument synthesis and MPEG-4 audio compression within the structured audio format are discussed.

  19. Propagule pressure and stream characteristics influence introgression: Cutthroat and rainbow trout in British Columbia

    USGS Publications Warehouse

    Bennett, S.N.; Olson, J.R.; Kershner, J.L.; Corbett, P.

    2010-01-01

    Hybridization and introgression between introduced and native salmonids threaten the continued persistence of many inland cutthroat trout species. Environmental models have been developed to predict the spread of introgression, but few studies have assessed the role of propagule pressure. We used an extensive set of fish stocking records and geographic information system (GIS) data to produce a spatially explicit index of potential propagule pressure exerted by introduced rainbow trout in the Upper Kootenay River, British Columbia, Canada. We then used logistic regression and the information-theoretic approach to test the ability of a set of environmental and spatial variables to predict the level of introgression between native westslope cutthroat trout and introduced rainbow trout. Introgression was assessed using between four and seven co-dominant, diagnostic nuclear markers at 45 sites in 31 different streams. The best model for predicting introgression included our GIS propagule pressure index and an environmental variable that accounted for the biogeoclimatic zone of the site (r² = 0.62). This model was 1.4 times more likely to explain introgression than the next-best model, which consisted of only the propagule pressure index variable. We created a composite model based on the model-averaged results of the seven top models that included environmental, spatial, and propagule pressure variables. The propagule pressure index had the highest importance weight (0.995) of all variables tested and was negatively related to sites with no introgression. This study used an index of propagule pressure and demonstrated that propagule pressure had the greatest influence on the level of introgression between a native and introduced trout in a human-induced hybrid zone. © 2010 by the Ecological Society of America.

  20. Predicting healthcare trajectories from medical records: A deep learning approach.

    PubMed

    Pham, Trang; Tran, Truyen; Phung, Dinh; Venkatesh, Svetha

    2017-05-01

    Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
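
    One ingredient attributed to DeepCare above, moderating memory by the irregular gaps between care episodes, can be caricatured in a few lines. The exponential decay form and time constant below are illustrative assumptions, not the published parameterization; NumPy is assumed.

    ```python
    import numpy as np

    def decay_memory(c_prev, delta_t, tau=30.0):
        """Attenuate the carried cell state by the gap (in days) to the next episode."""
        return c_prev * np.exp(-delta_t / tau)

    c = np.ones(8)                 # cell state after some care episode
    for gap in [1.0, 7.0, 180.0]:  # days until the next recorded episode
        print(gap, np.round(decay_memory(c, gap)[0], 3))
    # A 1-day gap preserves the memory; a 180-day gap nearly resets it.
    ```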

  1. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    Aiming at the problems of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling, and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experiment system was developed, and the prediction results of LSR (least squares regression), ANN, and ABC-NN were compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.

  2. Application of linear regression analysis in accuracy assessment of rolling force calculations

    NASA Astrophysics Data System (ADS)

    Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.

    1998-10-01

    Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows separating systematic and random prediction errors from those related to measurement. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
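
    The proposed use of linear regression can be sketched directly: regress measured values on model predictions, read systematic error off the slope and intercept, and bound the random error by the residual scatter. The data below are synthetic; NumPy is assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    predicted = rng.uniform(8.0, 20.0, 200)                        # model output [MN]
    measured = 0.95 * predicted + 0.4 + rng.normal(0.0, 0.3, 200)  # mill measurements

    slope, intercept = np.polyfit(predicted, measured, 1)
    residuals = measured - (slope * predicted + intercept)
    print(f"slope = {slope:.3f}      (gain-type systematic error if != 1)")
    print(f"intercept = {intercept:.3f}  (offset-type systematic error if != 0)")
    print(f"random error s.d. = {residuals.std(ddof=2):.3f}")
    ```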

  3. Ghirardi-Rimini-Weber model with massive flashes

    NASA Astrophysics Data System (ADS)

    Tilloy, Antoine

    2018-01-01

    I introduce a modification of the Ghirardi-Rimini-Weber (GRW) model in which the flashes (or space-time collapse events) source a classical gravitational field. The resulting semiclassical theory of Newtonian gravity preserves the statistical interpretation of quantum states of matter in contrast with mean field approaches. It can be seen as a discrete version of recent proposals of consistent hybrid quantum classical theories. The model is in agreement with known experimental data and introduces new falsifiable predictions: (1) single particles do not self-interact, (2) the 1/r gravitational potential of Newtonian gravity is cut off at short (≲10⁻⁷ m) distances, and (3) gravity makes spatial superpositions decohere at a rate inversely proportional to that coming from the vanilla GRW model. Together, the last two predictions make the model experimentally falsifiable for all values of its parameters.

  4. A mathematical model for predicting fire spread in wildland fuels

    Treesearch

    Richard C. Rothermel

    1972-01-01

    A mathematical fire model for predicting rate of spread and intensity that is applicable to a wide range of wildland fuels and environment is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require a prior knowledge of the burning characteristics of the fuel.
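
    The surface-area weighting mentioned above can be illustrated as follows; the fuel-class numbers are placeholders, not Rothermel's published fuel-model values, and NumPy is assumed.

    ```python
    import numpy as np

    sigma = np.array([3500.0, 109.0, 30.0])   # surface-to-volume ratio per size class [1/ft]
    load = np.array([0.034, 0.011, 0.023])    # oven-dry fuel loading per class [lb/ft^2]

    area = sigma * load                       # relative surface area per class
    f = area / area.sum()                     # surface-area weighting factors

    sigma_char = (f * sigma).sum()            # characteristic S/V ratio of the fuel bed
    print(f, sigma_char)
    ```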

  5. Word of Mouth : An Agent-based Approach to Predictability of Stock Prices

    NASA Astrophysics Data System (ADS)

    Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko

    This paper addresses how communication processes among investors affect stock price formation, and especially the emerging predictability of stock prices, in financial markets. An agent-based model, called the word-of-mouth model, is introduced for analyzing the problem. This model provides a simple but sufficiently versatile description of the informational diffusion process and lucidly explains the predictability of small-sized stocks, a stylized fact in financial markets that is difficult to resolve with traditional models. Our model also provides a rigorous examination of the under-reaction hypothesis to informational shocks.

  6. Building Models to Predict Hint-or-Attempt Actions of Students

    ERIC Educational Resources Information Center

    Castro, Francisco Enrique Vicente; Adjei, Seth; Colombo, Tyler; Heffernan, Neil

    2015-01-01

    A great deal of research in educational data mining is geared towards predicting student performance. Bayesian Knowledge Tracing, Performance Factors Analysis, and the different variations of these have been introduced and have had some success at predicting student knowledge. It is worth noting, however, that very little has been done to…

  7. Sound transmission in the chest under surface excitation - An experimental and computational study with diagnostic applications

    PubMed Central

    Peng, Ying; Dai, Zoujun; Mansy, Hansen A.; Sandler, Richard H.; Balk, Robert A.; Royston, Thomas J.

    2014-01-01

    Chest physical examination often includes performing chest percussion, which involves introducing sound stimulus to the chest wall and detecting an audible change. This approach relies on observations that underlying acoustic transmission, coupling, and resonance patterns can be altered by chest structure changes due to pathologies. More accurate detection and quantification of these acoustic alterations may provide further useful diagnostic information. To elucidate the physical processes involved, a realistic computer model of sound transmission in the chest is helpful. In the present study, a computational model was developed and validated by comparing its predictions with results from animal and human experiments which involved applying acoustic excitation to the anterior chest while detecting skin vibrations at the posterior chest. To investigate the effect of pathology on sound transmission, the computational model was used to simulate the effects of pneumothorax on sounds introduced at the anterior chest and detected at the posterior. Model predictions and experimental results showed similar trends. The model also predicted wave patterns inside the chest, which may be used to assess results of elastography measurements. Future animal and human tests may expand the predictive power of the model to include acoustic behavior for a wider range of pulmonary conditions. PMID:25001497

  8. Measurement Error and Bias in Value-Added Models. Research Report. ETS RR-17-25

    ERIC Educational Resources Information Center

    Kane, Michael T.

    2017-01-01

    By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value-added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of…

  9. Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.

    PubMed

    Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia

    2016-02-01

    To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental observations was used, obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a useful reference for modeling the cattle manure pyrolysis process and other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
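
    Unlike standard SVR, LS-SVM training reduces to solving one linear system, which keeps a sketch short. The following toy example (synthetic stand-ins for the pyrolysis features and yield; NumPy assumed) solves the usual LS-SVM dual system with an RBF kernel.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.uniform(size=(33, 3))                          # 33 experiments, 3 features
    y = 0.6 - 0.3 * X[:, 0] + 0.05 * rng.normal(size=33)   # synthetic biochar yield

    def rbf(A, B, s=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * s * s))

    gamma = 10.0                 # regularization constant
    n = len(y)
    K = rbf(X, X)
    # LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] @ [b; alpha] = [0; y]
    M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    y_hat = K @ alpha + b        # in-sample predictions
    print("mean absolute error:", float(np.abs(y_hat - y).mean()))
    ```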

  10. Interoceptive predictions in the brain

    PubMed Central

    Barrett, Lisa Feldman; Simmons, W. Kyle

    2016-01-01

    Intuition suggests that perception follows sensation and therefore bodily feelings originate in the body. However, recent evidence goes against this logic: interoceptive experience may largely reflect limbic predictions about the expected state of the body that are constrained by ascending visceral sensations. In this Opinion article, we introduce the Embodied Predictive Interoception Coding model, which integrates an anatomical model of corticocortical connections with Bayesian active inference principles, to propose that agranular visceromotor cortices contribute to interoception by issuing interoceptive predictions. We then discuss how disruptions in interoceptive predictions could function as a common vulnerability for mental and physical illness. PMID:26016744

  11. A thermal sensation prediction tool for use by the profession

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fountain, M.E.; Huizenga, C.

    1997-12-31

    As part of a recent ASHRAE research project (781-RP), a thermal sensation prediction tool has been developed. This paper introduces the tool, describes the component thermal sensation models, and presents examples of how the tool can be used in practice. Since the main end product of the HVAC industry is the comfort of occupants indoors, tools for predicting occupant thermal response can be an important asset to designers of indoor climate control systems. The software tool presented in this paper incorporates several existing models for predicting occupant comfort.

  12. Experimental evaluation of models for predicting Cherenkov light intensities from short-cooled nuclear fuel assemblies

    NASA Astrophysics Data System (ADS)

    Branger, E.; Grape, S.; Jansson, P.; Jacobsson Svärd, S.

    2018-02-01

    The Digital Cherenkov Viewing Device (DCVD) is a tool used by nuclear safeguards inspectors to verify irradiated nuclear fuel assemblies in wet storage based on the recording of Cherenkov light produced by the assemblies. One type of verification involves comparing the measured light intensity from an assembly with a predicted intensity based on assembly declarations. Crucial for such analyses is the performance of the prediction model used, and recently new modelling methods have been introduced to allow for enhanced prediction capabilities by taking the irradiation history into account and by including the cross-talk radiation from neighbouring assemblies in the predictions. In this work, the performance of three models for Cherenkov-light intensity prediction is evaluated by applying them to a set of short-cooled PWR 17×17 assemblies for which experimental DCVD measurements and operator-declared irradiation data were available: (1) a two-parameter model, based on total burnup and cooling time, previously used by safeguards inspectors; (2) a newly introduced gamma-spectrum-based model, which incorporates cycle-wise burnup histories; and (3) the latter gamma-spectrum-based model with the addition of contributions from neighbouring assemblies. The results show that the two gamma-spectrum-based models provide significantly higher precision for the measured inventory than the two-parameter model, lowering the standard deviation of the relative difference between measured and predicted intensities from 15.2% to 8.1% and 7.8%, respectively. The results show some systematic differences between assemblies of different designs (produced by different manufacturers) in spite of their similar PWR 17×17 geometries, and possible ways to address such differences, which may allow for even higher prediction capabilities, are discussed. Still, it is concluded that the gamma-spectrum-based models enable confident verification of the fuel assembly inventory at the currently used detection limit for partial defects, namely a 30% discrepancy between measured and predicted intensities, while some false detections occur with the two-parameter model. The results also indicate that the gamma-spectrum-based prediction methods are accurate enough that the 30% discrepancy limit could potentially be lowered.

  13. Nonstationary time series prediction combined with slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, G.; Chen, X.

    2015-07-01

    Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when reconstructing the climate dynamics. This paper presents a new technique for obtaining the driving forces of a time series from the slow feature analysis (SFA) approach, and then introduces them into a predictive model to predict nonstationary time series. The basic idea of the technique is to consider the driving forces as state variables and to incorporate them into the predictive model. Experiments using a modified logistic time series and winter ozone data from Arosa, Switzerland, were conducted to test the model. The results showed improved prediction skill.
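
    The SFA step can be sketched for the linear case: whiten the signals, then take the direction whose time derivative has minimal variance and read its output as the slowly varying driving force. The toy signal below is invented; NumPy is assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 2000
    force = np.sin(2 * np.pi * np.arange(n) / n)    # slow hidden driving force
    x = np.column_stack([force + 0.3 * rng.normal(size=n),
                         -force + 0.3 * rng.normal(size=n)])
    x = x - x.mean(axis=0)

    # Whiten the signals
    d, E = np.linalg.eigh(np.cov(x.T))
    z = x @ (E @ np.diag(d ** -0.5) @ E.T)

    # Slowest direction: minimal variance of the time derivative
    dd, dE = np.linalg.eigh(np.cov(np.diff(z, axis=0).T))
    slow = z @ dE[:, 0]                             # extracted driving force
    print(abs(np.corrcoef(slow, force)[0, 1]))      # close to 1: driver recovered
    ```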

  14. Modeling Ability Differentiation in the Second-Order Factor Model

    ERIC Educational Resources Information Center

    Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…

  15. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

    Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODEs). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model for chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way of determining the individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used for rational experimental design to turn initially highly uncertain model predictions into certain ones. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
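
    The spirit of the PLS index can be conveyed with a toy: evaluate a model prediction over parameter samples collected along each parameter's likelihood profile and compare the induced spreads. In a real profile the remaining parameters are re-optimized at each point; the sketch below skips that and invents the samples, so it is schematic only (NumPy assumed).

    ```python
    import numpy as np

    def predict(theta):
        """Model prediction of interest, here y at t = 2 for a 2-exponential model."""
        k1, k2 = theta
        return np.exp(-k1 * 2.0) + np.exp(-k2 * 2.0)

    # Invented stand-ins for profile-likelihood samples inside a confidence region
    profile_k1 = [(k1, 1.0) for k1 in np.linspace(0.5, 1.5, 21)]   # k1 varied
    profile_k2 = [(1.0, k2) for k2 in np.linspace(0.9, 1.1, 21)]   # k2 varied

    for name, prof in [("k1", profile_k1), ("k2", profile_k2)]:
        preds = np.array([predict(th) for th in prof])
        print(name, "prediction spread:", round(float(np.ptp(preds)), 4))
    # The parameter whose profile induces the larger spread contributes more to
    # the prediction uncertainty and is the better target for a new experiment.
    ```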

  16. Adapting APSIM to model the physiology and genetics of complex adaptive traits in field crops.

    PubMed

    Hammer, Graeme L; van Oosterom, Erik; McLean, Greg; Chapman, Scott C; Broad, Ian; Harland, Peter; Muchow, Russell C

    2010-05-01

    Progress in molecular plant breeding is limited by the ability to predict plant phenotype based on its genotype, especially for complex adaptive traits. Suitably constructed crop growth and development models have the potential to bridge this predictability gap. A generic cereal crop growth and development model is outlined here. It is designed to exhibit reliable predictive skill at the crop level while also introducing sufficient physiological rigour for complex phenotypic responses to become emergent properties of the model dynamics. The approach quantifies capture and use of radiation, water, and nitrogen within a framework that predicts the realized growth of major organs based on their potential and whether the supply of carbohydrate and nitrogen can satisfy that potential. The model builds on existing approaches within the APSIM software platform. Experiments on diverse genotypes of sorghum that underpin the development and testing of the adapted crop model are detailed. Genotypes differing in height were found to differ in biomass partitioning among organs and a tall hybrid had significantly increased radiation use efficiency: a novel finding in sorghum. Introducing these genetic effects associated with plant height into the model generated emergent simulated phenotypic differences in green leaf area retention during grain filling via effects associated with nitrogen dynamics. The relevance to plant breeding of this capability in complex trait dissection and simulation is discussed.

  17. A Combined High and Low Cycle Fatigue Model for Life Prediction of Turbine Blades

    PubMed Central

    Yue, Peng; Yu, Zheng-Yong; Wang, Qingyuan

    2017-01-01

    Combined high and low cycle fatigue (CCF) generally induces the failure of aircraft gas turbine attachments. Based on the aero-engine load spectrum, accurate assessment of fatigue damage due to the interaction of high cycle fatigue (HCF) resulting from high frequency vibrations and low cycle fatigue (LCF) from ground-air-ground engine cycles is of critical importance for ensuring the structural integrity of engine components such as turbine blades. In this paper, the influence of combined damage accumulation on the expected CCF life is investigated for turbine blades. The CCF behavior of a turbine blade is usually studied by testing with four load-controlled parameters, including high cycle stress amplitude and frequency, and low cycle stress amplitude and frequency. Accordingly, a new damage accumulation model is proposed based on Miner's rule to consider the coupled damage due to HCF-LCF interaction by introducing the four load parameters. Five experimental datasets of turbine blade alloys and turbine blades were introduced for model validation and for comparison among the proposed model and the Miner, Manson-Halford, and Trufyakov-Kovalchuk models. Results show that the proposed model provides more accurate predictions than the others, with lower mean and standard deviation of model prediction errors. PMID:28773064

  18. A Combined High and Low Cycle Fatigue Model for Life Prediction of Turbine Blades.

    PubMed

    Zhu, Shun-Peng; Yue, Peng; Yu, Zheng-Yong; Wang, Qingyuan

    2017-06-26

    Combined high and low cycle fatigue (CCF) generally induces the failure of aircraft gas turbine attachments. Based on the aero-engine load spectrum, accurate assessment of fatigue damage due to the interaction of high cycle fatigue (HCF) resulting from high frequency vibrations and low cycle fatigue (LCF) from ground-air-ground engine cycles is of critical importance for ensuring the structural integrity of engine components such as turbine blades. In this paper, the influence of combined damage accumulation on the expected CCF life is investigated for turbine blades. The CCF behavior of a turbine blade is usually studied by testing with four load-controlled parameters, including high cycle stress amplitude and frequency, and low cycle stress amplitude and frequency. Accordingly, a new damage accumulation model is proposed based on Miner's rule to consider the coupled damage due to HCF-LCF interaction by introducing the four load parameters. Five experimental datasets of turbine blade alloys and turbine blades were introduced for model validation and for comparison among the proposed model and the Miner, Manson-Halford, and Trufyakov-Kovalchuk models. Results show that the proposed model provides more accurate predictions than the others, with lower mean and standard deviation of model prediction errors.
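
    The baseline Miner's-rule bookkeeping that both versions of this record build on is simple enough to show inline. The sketch below accumulates LCF and HCF damage linearly per flight; the coupled HCF-LCF correction introduced in the paper is not reproduced, and all numbers are invented.

    ```python
    # Linear damage accumulation (Miner's rule): failure is flagged when D >= 1.
    n_lcf_per_flight = 1              # one major throttle (ground-air-ground) cycle
    N_lcf = 2.0e4                     # cycles to failure at the LCF stress amplitude
    f_hcf, t_flight = 200.0, 3600.0   # vibration frequency [Hz], flight time [s]
    n_hcf_per_flight = f_hcf * t_flight
    N_hcf = 1.0e9                     # cycles to failure at the HCF stress amplitude

    d_per_flight = n_lcf_per_flight / N_lcf + n_hcf_per_flight / N_hcf
    flights_to_failure = 1.0 / d_per_flight
    print(f"damage/flight = {d_per_flight:.2e}, life ~ {flights_to_failure:.0f} flights")
    ```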

  19. Propagation of Bayesian Belief for Near-Real Time Statistical Assessment of Geosynchronous Satellite Status Based on Non-Resolved Photometry Data

    DTIC Science & Technology

    2014-09-01

    of the BRDF for the Body and Panel. In order to provide a continuously updated baseline, the Photometry Model application is performed using a...brightness to its predicted brightness. The brightness predictions can be obtained using any analytical model chosen by the user. The inference for a...the analytical model as possible; and to mitigate the effect of bias that could be introduced by the choice of analytical model. It considers that a

  20. Irruptive dynamics of introduced caribou on Adak Island, Alaska: an evaluation of Riney-Caughley model predictions

    USGS Publications Warehouse

    Ricca, Mark A.; Van Vuren, Dirk H.; Weckerly, Floyd W.; Williams, Jeffrey C.; Miles, A. Keith

    2014-01-01

    Large mammalian herbivores introduced to islands without predators are predicted to undergo irruptive population and spatial dynamics, but only a few well-documented case studies support this paradigm. We used the Riney-Caughley model as a framework to test predictions of irruptive population growth and spatial expansion of caribou (Rangifer tarandus granti) introduced to Adak Island in the Aleutian archipelago of Alaska in 1958 and 1959. We utilized a time series of spatially explicit counts conducted on this population intermittently over a 54-year period. Population size increased from 23 released animals to approximately 2900 animals in 2012. Population dynamics were characterized by two distinct periods of irruptive growth separated by a long time period of relative stability, and the catalyst for the initial irruption was more likely related to annual variation in hunting pressure than weather conditions. An unexpected pattern resembling logistic population growth occurred between the peak of the second irruption in 2005 and the next survey conducted seven years later in 2012. Model simulations indicated that an increase in reported harvest alone could not explain the deceleration in population growth, yet high levels of unreported harvest combined with increasing density-dependent feedbacks on fecundity and survival were the most plausible explanation for the observed population trend. No studies of introduced island Rangifer have measured a time series of spatial use to the extent described in this study. Spatial use patterns during the post-calving season strongly supported Riney-Caughley model predictions, whereby high-density core areas expanded outwardly as population size increased. During the calving season, caribou displayed marked site fidelity across the full range of population densities despite availability of other suitable habitats for calving. Finally, dispersal and reproduction on neighboring Kagalaska Island represented a new dispersal front for irruptive dynamics and a new challenge for resource managers. The future demography of caribou on both islands is far from certain, yet sustained and significant hunting pressure should be a vital management tool.

  1. Reframed Genome-Scale Metabolic Model to Facilitate Genetic Design and Integration with Expression Data.

    PubMed

    Gu, Deqing; Jian, Xingxing; Zhang, Cheng; Hua, Qiang

    2017-01-01

    Genome-scale metabolic network models (GEMs) have played important roles in the design of genetically engineered strains and have helped biologists to decipher metabolism. However, due to the complex gene-reaction relationships that exist in model systems, most algorithms have limited capabilities with respect to directly predicting accurate genetic designs for metabolic engineering. In particular, methods that predict reaction knockout strategies leading to overproduction are often impractical in terms of gene manipulations. Recently, we proposed a method named logical transformation of model (LTM) to simplify the gene-reaction associations by introducing intermediate pseudo reactions, which makes it possible to generate genetic designs. Here, we propose an alternative method to relieve researchers from deciphering complex gene-reaction associations by adding pseudo gene-controlling reactions. In comparison to LTM, this new method introduces fewer pseudo reactions and generates a much smaller model system, named gModel. We show that gModel allows two seldom-reported applications: identification of minimal genomes and design of minimal cell factories within a modified OptKnock framework. In addition, gModel can be used to integrate expression data directly and improve the performance of the E-Fmin method for predicting fluxes. In conclusion, the model transformation procedure will facilitate genetic research based on GEMs, extending their applications.

  2. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
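
    The preprocessing stage described above can be sketched as per-channel high-pass filtering (subtracting a running mean with a channel-specific time constant) followed by half-wave rectification. Grid sizes, time constants, and the input below are illustrative; NumPy is assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_freq, n_time, dt = 32, 500, 0.005            # spectrogram grid, 5 ms time bins
    spec = rng.normal(size=(n_freq, n_time)).cumsum(axis=1) * 0.05   # toy input

    taus = np.linspace(0.1, 1.0, n_freq)           # frequency-dependent constants [s]
    adapted = np.zeros_like(spec)
    mean_level = spec[:, 0].copy()
    for t in range(n_time):
        mean_level += (dt / taus) * (spec[:, t] - mean_level)   # running mean estimate
        adapted[:, t] = np.maximum(spec[:, t] - mean_level, 0.0)  # high-pass + rectify

    # 'adapted' would now feed the standard linear-nonlinear (LN) stage.
    print(adapted.shape, adapted.min())
    ```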

  3. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    DTIC Science & Technology

    1981-06-01

    Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching...System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse...line analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS

  4. On inflation, cosmological constant, and SUSY breaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linde, Andrei

    2016-11-02

    We consider a broad class of inflationary models of two unconstrained chiral superfields, the stabilizer S and the inflaton Φ, which can describe inflationary models with nearly arbitrary potentials. These models include, in particular, the recently introduced theories of cosmological attractors, which provide an excellent fit to the latest Planck data. We show that by adding to the superpotential of the fields S and Φ a small term depending on a nilpotent chiral superfield P, one can break SUSY and introduce a small cosmological constant without affecting the main predictions of the original inflationary scenario.

  5. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model via a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
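
    The state-division step can be illustrated with fuzzy C-means on a synthetic degradation index, followed by counting a Markov transition matrix from the resulting state sequence. The dynamic weighting and multi-scale parts of the paper are omitted; NumPy and the scikit-fuzzy package are assumed.

    ```python
    import numpy as np
    from skfuzzy.cluster import cmeans   # scikit-fuzzy, an assumed dependency

    rng = np.random.default_rng(6)
    index = np.cumsum(np.abs(rng.normal(0.01, 0.02, 500)))   # rising degradation index

    n_states = 4
    cntr, u, *_ = cmeans(index[None, :], c=n_states, m=2.0, error=1e-5, maxiter=200)
    rank = np.argsort(cntr.ravel()).argsort()    # severity rank of each cluster
    states = rank[u.argmax(axis=0)]              # hard state label per sample

    # Count one-step transitions between states
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
    print(np.round(P, 2))   # mass on/above the diagonal: monotone degradation
    ```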

  6. Learning Instance-Specific Predictive Models

    PubMed Central

    Visweswaran, Shyam; Cooper, Gregory F.

    2013-01-01

    This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict the target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures, and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision trees, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average than all the comparison algorithms on all performance measures. PMID:25045325

  7. A mechanistic model for spread of livestock-associated methicillin-resistant Staphylococcus aureus (LA-MRSA) within a pig herd

    PubMed Central

    Toft, Nils; Boklund, Anette; Espinosa-Gongora, Carmen; Græsbøll, Kaare; Larsen, Jesper; Halasa, Tariq

    2017-01-01

    Before an efficient control strategy for livestock-associated methicillin-resistant Staphylococcus aureus (LA-MRSA) in pigs can be decided upon, it is necessary to obtain a better understanding of how LA-MRSA spreads and persists within a pig herd once it is introduced. To aid in this, we present a mechanistic stochastic discrete-event simulation model for the spread of LA-MRSA within a farrow-to-finish sow herd. The model was individual-based and included three disease compartments: susceptible, intermittent shedder, and persistent shedder of MRSA. The model was used for studying transmission dynamics and within-farm prevalence after different introductions of LA-MRSA into a farm. The spread of LA-MRSA throughout the farm mainly followed the movement of pigs. After the spread of LA-MRSA had reached equilibrium, the prevalence of LA-MRSA shedders was predicted to be highest in the farrowing unit, independent of how LA-MRSA was introduced. LA-MRSA took longer to spread to the whole herd if introduced in the finisher stable rather than by gilts in the mating stable. The more LA-MRSA-positive animals introduced, the shorter the time before the prevalence in the herd stabilised. Introduction of a low number of intermittently shedding pigs was predicted to frequently result in LA-MRSA fading out. The model is a potential decision support tool for assessing short- and long-term consequences of proposed intervention strategies or surveillance options for LA-MRSA within pig herds. PMID:29182655
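
    To make the individual-based shedder-state idea concrete, here is a toy discrete-time simulation of one pen; all rates, names, and the single-pen scope are illustrative assumptions, not the paper's fitted herd model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pen(n_pigs=20, days=365, beta=0.05, p_persist=0.2,
                 clear_rate=0.1, n_seed=2):
    """Toy individual-based spread of a shedder state within a single pen.
    States: 0 = susceptible, 1 = intermittent shedder, 2 = persistent shedder.
    All rates are illustrative, not the paper's fitted values."""
    state = np.zeros(n_pigs, dtype=int)
    state[:n_seed] = 1                      # seed a few intermittent shedders
    prevalence = []
    for _ in range(days):
        shedders = int(np.count_nonzero(state > 0))
        p_inf = 1.0 - np.exp(-beta * shedders / n_pigs)
        for i in range(n_pigs):
            if state[i] == 0 and rng.random() < p_inf:
                # New carriers become persistent shedders with prob p_persist.
                state[i] = 2 if rng.random() < p_persist else 1
            elif state[i] == 1 and rng.random() < clear_rate:
                state[i] = 0                # intermittent shedders may clear
        prevalence.append(shedders / n_pigs)
    return prevalence                       # 0.0 at the end means fade-out
```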

  8. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called the grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, the proposed model improves prediction accuracy and the hit rate of directional prediction. The proposed model involves three main steps: decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) to simplify the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; and integrating all predicted IMFs into the final ensemble prediction with another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods, and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models in terms of prediction accuracy and hit rate of directional prediction.
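
    A compressed sketch of the decompose-predict-ensemble pipeline, assuming the PyEMD package (whose CEEMDAN is a close relative of CEEMD) and substituting fixed SVR hyperparameters and a plain sum for the paper's GWO-tuned steps:

```python
import numpy as np
from PyEMD import CEEMDAN          # pip install EMD-signal; CEEMDAN stands in for CEEMD
from sklearn.svm import SVR

def decompose_predict_ensemble(series, lag=7):
    """Decompose the series into IMFs, forecast each IMF one step ahead with
    an SVR on lagged values, then sum the IMF forecasts."""
    imfs = CEEMDAN().ceemdan(np.asarray(series, dtype=float))  # (n_imfs, n)
    forecast = 0.0
    for imf in imfs:
        X = np.array([imf[i:i + lag] for i in range(len(imf) - lag)])
        y = imf[lag:]
        model = SVR(C=10.0, gamma="scale").fit(X, y)  # GWO would tune C, gamma
        forecast += model.predict(imf[-lag:].reshape(1, -1))[0]
    return forecast
```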

  9. Structure Prediction and Analysis of Neuraminidase Sequence Variants

    ERIC Educational Resources Information Center

    Thayer, Kelly M.

    2016-01-01

    Analyzing protein structure has become an integral aspect of understanding systems of biochemical import. The laboratory experiment endeavors to introduce protein folding to ascertain structures of proteins for which the structure is unavailable, as well as to critically evaluate the quality of the prediction obtained. The model system used is the…

  10. Predicting long-term forest development following hemlock mortality

    Treesearch

    Jennifer C. Jenkins; Charles D. Canham; Paul K. Barten

    2000-01-01

    The hemlock woolly adelgid (Adelges tsugae Annand.), an introduced pest specializing on eastern hemlock (Tsuga canadensis (L.) Carr.), threatens to cause widespread hemlock mortality in New England forests. In this study, we used a stem-based model of forest dynamics (SORTIE) to predict forest development in a northeastern forest...

  11. Calibration and prediction of removal function in magnetorheological finishing.

    PubMed

    Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng

    2010-01-20

    A calibrated and predictive model of the removal function has been established based on the analysis of a magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in an MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material differs from that of the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, by applying this model to MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately, making the MRF figuring process deterministic and controllable. Therefore, all the results indicate that the calibrated and predictive model of the removal function can improve finishing determinacy and increase the model's applicability in an MRF process.
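
    The calibration idea reduces to a single scaling; a worked toy example (all numbers made up) might look like:

```python
# Spot test on the reference part: model prediction vs. measured removal rate.
modeled_peak = 0.82   # um/min, model prediction for the spot part (made-up)
measured_peak = 0.74  # um/min, measured removal rate (made-up)
k = measured_peak / modeled_peak  # efficiency coefficient, ~0.90 here

# Predict the removal function for a workpiece of a different material by
# scaling that material's modeled removal function with the same coefficient.
modeled_new = 1.10    # um/min, model prediction for the new material (made-up)
predicted_new = k * modeled_new
print(f"k = {k:.3f}, predicted removal rate = {predicted_new:.3f} um/min")
```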

  12. Prediction of moisture variation during composting process: A comparison of mathematical models.

    PubMed

    Wang, Yongjiang; Ai, Ping; Cao, Hongliang; Liu, Zhigang

    2015-10-01

    This study was carried out to develop and compare three models for simulating the moisture content during composting. Model 1 described changes in water content using mass balance, while Model 2 introduced a liquid-gas transferred water term. Model 3 predicted changes in moisture content without complex degradation kinetics. Average deviations for Models 1-3 were 8.909, 7.422 and 5.374 kg m⁻³, while standard deviations were 10.299, 8.374 and 6.095, respectively. The results showed that Model 1 is complex and involves more state variables, but can be used to reveal the effect of humidity on moisture content. Model 2 tested the hypothesis of liquid-gas transfer and was shown to be capable of predicting moisture content during composting. Model 3 could predict water content well without considering degradation kinetics.

  13. Personalized Modeling for Prediction with Decision-Path Models

    PubMed Central

    Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.

    2015-01-01

    Deriving predictive models in medicine typically relies on a population approach, where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model, called a decision-path model, that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive decision-path models. We compared the performance of these methods to that of Classification And Regression Trees (CART), a population decision tree method, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision-path models is a new approach to predictive modeling that can perform better than a population approach. PMID:26098570

  14. Hemodynamics-Driven Deposition of Intraluminal Thrombus in Abdominal Aortic Aneurysms

    PubMed Central

    Di Achille, P.; Tellides, G.; Humphrey, J.D.

    2016-01-01

    Accumulating evidence suggests that intraluminal thrombus plays many roles in the natural history of abdominal aortic aneurysms. There is, therefore, a pressing need for computational models that can describe and predict the initiation and progression of thrombus in aneurysms. In this paper, we introduce a phenomenological metric for thrombus deposition potential and use hemodynamic simulations based on medical images from six patients to identify best-fit values of the two key model parameters. We then introduce a shape optimization method to predict the associated radial growth of the thrombus into the lumen based on the expectation that thrombus initiation will create a thrombogenic surface, which in turn will promote growth until increasing hemodynamically induced frictional forces prevent any further cell or protein deposition. Comparisons between predicted and actual intraluminal thrombus in the six patient-specific aneurysms suggest that this phenomenological description provides a good first estimate of thrombus deposition. We submit further that, because the biologically active region of the thrombus appears to be confined to a thin luminal layer, predictions of morphology alone may be sufficient to inform fluid-solid-growth models of aneurysmal growth and remodeling. PMID:27569676

  15. Improvement of solar-cycle prediction: Plateau of solar axial dipole moment

    NASA Astrophysics Data System (ADS)

    Iijima, H.; Hotta, H.; Imada, S.; Kusano, K.; Shiota, D.

    2017-11-01

    Aims: We report the small temporal variation of the axial dipole moment near the solar minimum and its application to the solar-cycle prediction by the surface flux transport (SFT) model. Methods: We measure the axial dipole moment using the photospheric synoptic magnetogram observed by the Wilcox Solar Observatory (WSO), the ESA/NASA Solar and Heliospheric Observatory Michelson Doppler Imager (MDI), and the NASA Solar Dynamics Observatory Helioseismic and Magnetic Imager (HMI). We also use the SFT model for the interpretation and prediction of the observed axial dipole moment. Results: We find that the observed axial dipole moment becomes approximately constant during the period of several years before each cycle minimum, which we call the axial dipole moment plateau. The cross-equatorial magnetic flux transport is found to be small during the period, although a significant number of sunspots are still emerging. The results indicate that the newly emerged magnetic flux does not contribute to the build-up of the axial dipole moment near the end of each cycle. This is confirmed by showing that the time variation of the observed axial dipole moment agrees well with that predicted by the SFT model without introducing new emergence of magnetic flux. These results allow us to predict the axial dipole moment at the Cycle 24/25 minimum using the SFT model without introducing new flux emergence. The predicted axial dipole moment at the Cycle 24/25 minimum is 60-80 percent of that at the Cycle 23/24 minimum, which suggests that the amplitude of Cycle 25 will be even weaker than that of the current Cycle 24. Conclusions: The plateau of the solar axial dipole moment is an important feature for the longer-term prediction of the solar cycle based on the SFT model.

  16. Soft Wall Ion Channel in Continuum Representation with Application to Modeling Ion Currents in α-Hemolysin

    PubMed Central

    Simakov, Nikolay A.

    2010-01-01

    A soft repulsion (SR) model of short-range interactions between mobile ions and protein atoms is introduced in the framework of a continuum representation of the protein and solvent. The Poisson-Nernst-Planck (PNP) theory of ion transport through biological channels is modified to incorporate this soft-wall protein model. Two sets of SR parameters are introduced: the first is parameterized for all essential amino acid residues using all-atom molecular dynamics simulations; the second is a truncated Lennard-Jones potential. We have further designed an energy-based algorithm for the determination of the ion-accessible volume, which is appropriate for a particular system discretization. The effects of these models of short-range interaction were tested by computing current-voltage characteristics of the α-hemolysin channel. The introduced SR potentials significantly improve the prediction of channel selectivity. In addition, we studied the effect of the choice of some space-dependent diffusion coefficient distributions on the predicted current-voltage properties. We conclude that the diffusion coefficient distributions largely affect total currents and have little effect on rectification, selectivity or reversal potential. The PNP-SR algorithm is implemented in a new efficient parallel Poisson, Poisson-Boltzmann and PNP equation solver, also incorporated in the graphical molecular modeling package HARLEM. PMID:21028776

  17. Educators' Perceptions of the Substitution, Augmentation, Modification, Redefinition Model for Technology Integration

    ERIC Educational Resources Information Center

    Savignano, Mark Angelo

    2017-01-01

    The Substitution, Augmentation, Modification, Redefinition (SAMR) model (Puentedura, 2006) claims that the use of technology can predict student outcomes. School districts and educational institutions have been adopting this model in the hope of enhancing the educational experience and outcomes for their students (SAMR Model, n.d.).…

  18. QCT/FEA predictions of femoral stiffness are strongly affected by boundary condition modeling

    PubMed Central

    Rossman, Timothy; Kushvaha, Vinod; Dragomir-Daescu, Dan

    2015-01-01

    Quantitative computed tomography-based finite element models of proximal femora must be validated with cadaveric experiments before using them to assess fracture risk in osteoporotic patients. During validation it is essential to carefully assess whether the boundary condition modeling matches the experimental conditions. This study evaluated proximal femur stiffness results predicted by six different boundary condition methods on a sample of 30 cadaveric femora and compared the predictions with experimental data. The average stiffness varied by 280% among the six boundary conditions. Compared with the experimental data, the predictions ranged from overestimating the average stiffness by 65% to underestimating it by 41%. In addition, we found that the boundary condition that distributed the load to the contact surfaces similar to the expected contact mechanics predictions had the best agreement with experimental stiffness. We concluded that boundary condition modeling introduced large variations in proximal femur stiffness predictions. PMID:25804260

  19. Bayesian Unimodal Density Regression for Causal Inference

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2011-01-01

    Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…

  20. Research on Fault Rate Prediction Method of T/R Component

    NASA Astrophysics Data System (ADS)

    Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu

    2017-07-01

    The T/R component is an important part of large phased-array radar antennas; because of its large numbers and high fault rate, fault prediction for it is of great significance. Aiming at the problems of the traditional grey model GM(1,1) in practical operation, this paper establishes a discrete grey model based on the original model, introduces an optimization factor to optimize the background value, and adds a linear term to the prediction model, yielding an improved discrete grey model with linear regression. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, a simple solution procedure, and a wider scope of application.
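
    For reference, a minimal GM(1,1) sketch with an adjustable background-value weight lam (lam = 0.5 recovers the traditional model; tuning lam is one simple reading of the paper's background-value optimization factor, which is an assumption here):

```python
import numpy as np

def gm11_predict(x0, steps=1, lam=0.5):
    """GM(1,1) grey model with an adjustable background-value weight."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated (AGO) series
    z = lam * x1[:-1] + (1.0 - lam) * x1[1:]          # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # time-response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse AGO
    return x0_hat[-steps:]                            # the `steps` forecasts
```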

  1. Potential impact of initialization on decadal predictions as assessed for CMIP5 models

    NASA Astrophysics Data System (ADS)

    Branstator, Grant; Teng, Haiyan

    2012-06-01

    To investigate the potential for initialization to improve decadal range predictions, we quantify the initial-value predictability of upper 300 m temperature in the two northern ocean basins for 12 models from the Coupled Model Intercomparison Project phase 5 (CMIP5), and we contrast it with the forced predictability in Representative Concentration Pathways (RCP) 4.5 climate change projections. We use a recently introduced method that produces predictability estimates from long control runs. Many initial states are considered, and we find that, on average, 1) initialization has the potential to improve skill in the first 5 years in the North Pacific and the first 9 years in the North Atlantic, and 2) the impact from initialization becomes secondary compared to the impact of RCP4.5 forcing after 6.5 and 8 years in the two basins, respectively. Model-to-model and spatial variations in these limits are, however, substantial.

  2. Study on China’s Earthquake Prediction by Mathematical Analysis and its Application in Catastrophe Insurance

    NASA Astrophysics Data System (ADS)

    Jianjun, X.; Bingjie, Y.; Rongji, W.

    2018-03-01

    The purpose of this paper was to improve the level of catastrophe insurance. Firstly, earthquake predictions were carried out using a mathematical analysis method. Secondly, foreign catastrophe insurance policies and models were compared. Thirdly, suggestions on catastrophe insurance for China were discussed. Future studies should pay more attention to earthquake prediction by introducing big data.

  3. The Answering Process for Multiple-Choice Questions in Collaborative Learning: A Mathematical Learning Model Analysis

    ERIC Educational Resources Information Center

    Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro

    2014-01-01

    In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…

  4. A New Ductility Exhaustion Model for High Temperature Low Cycle Fatigue Life Prediction of Turbine Disk Alloys

    NASA Astrophysics Data System (ADS)

    Zhu, Shun-Peng; Huang, Hong-Zhong; Li, Haiqing; Sun, Rui; Zuo, Ming J.

    2011-06-01

    Based on ductility exhaustion theory and the generalized energy-based damage parameter, a new viscosity-based life prediction model is introduced to account for the mean strain/stress effects in the low cycle fatigue regime. The loading waveform parameters and cyclic hardening effects are also incorporated within this model. It is assumed that damage accrues by means of viscous flow and ductility consumption is only related to plastic strain and creep strain under high temperature low cycle fatigue conditions. In the developed model, dynamic viscosity is used to describe the flow behavior. This model provides a better prediction of Superalloy GH4133's fatigue behavior when compared to Goswami's ductility model and the generalized damage parameter. Under non-zero mean strain conditions, moreover, the proposed model provides more accurate predictions of Superalloy GH4133's fatigue behavior than that with zero mean strains.

  5. Predicting invasiveness of species in trade: Climate match, trophic guild and fecundity influence establishment and impact of non-native freshwater fishes

    USGS Publications Warehouse

    Howeth, Jennifer G.; Gantz, Crysta A.; Angermeier, Paul; Frimpong, Emmanuel A.; Hoff, Michael H.; Keller, Reuben P.; Mandrak, Nicholas E.; Marchetti, Michael P.; Olden, Julian D.; Romagosa, Christina M.; Lodge, David M.

    2016-01-01

    Aim: Impacts of non-native species have motivated development of risk assessment tools for identifying introduced species likely to become invasive. Here, we develop trait-based models for the establishment and impact stages of freshwater fish invasion, and use them to screen non-native species common in international trade. We also determine which species in the aquarium, biological supply, live bait, live food and water garden trades are likely to become invasive. Results are compared to historical patterns of non-native fish establishment to assess the relative importance over time of pathways in causing invasions. Location: Laurentian Great Lakes region. Methods: Trait-based classification trees for the establishment and impact stages of invasion were developed from data on freshwater fish species that established or failed to establish in the Great Lakes. Fishes in trade were determined from import data from Canadian and United States regulatory agencies, assigned to specific trades and screened through the developed models. Results: Climate match between a species' native range and the Great Lakes region predicted establishment success with 75-81% accuracy. Trophic guild and fecundity predicted potential harmful impacts of established non-native fishes with 75-83% accuracy. Screening outcomes suggest the water garden trade poses the greatest risk of introducing new invasive species, followed by the live food and aquarium trades. Analysis of historical patterns of introduction pathways demonstrates the increasing importance of these trades relative to other pathways. Comparisons among trades reveal that model predictions parallel historical patterns; all fishes previously introduced from the water garden trade have established. The live bait, biological supply, aquarium and live food trades have also contributed established non-native fishes. Main conclusions: Our models predict invasion risk of potential fish invaders to the Great Lakes region and could help managers prioritize efforts among species and pathways to minimize such risk. Similar approaches could be applied to other taxonomic groups and geographic regions.

  6. Naïve Bayes classification in R.

    PubMed

    Zhang, Zhongheng

    2016-06-01

    Naïve Bayes classification is a simple probabilistic classification method based on Bayes' theorem with the assumption of independence between features. The model is trained on a training dataset and then makes predictions via the predict() function. This article introduces two functions, naiveBayes() and train(), for performing Naïve Bayes classification.
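
    The article's functions are R's naiveBayes() (from e1071) and train() (from caret); for consistency with the other sketches in this document, here is a rough Python analogue using scikit-learn, with an illustrative dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# GaussianNB assumes conditional independence of features, like naiveBayes().
model = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))  # predictions via model.predict(X_te)
```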

  7. Ecological prediction with nonlinear multivariate time-frequency functional data models

    USGS Publications Warehouse

    Yang, Wen-Hsi; Wikle, Christopher K.; Holan, Scott H.; Wildhaber, Mark L.

    2013-01-01

    Time-frequency analysis has become a fundamental component of many scientific inquiries. Due to improvements in technology, the amount of high-frequency signals that are collected for ecological and other scientific processes is increasing at a dramatic rate. In order to facilitate the use of these data in ecological prediction, we introduce a class of nonlinear multivariate time-frequency functional models that can identify important features of each signal as well as the interaction of signals corresponding to the response variable of interest. Our methodology is of independent interest and utilizes stochastic search variable selection to improve model selection and performs model averaging to enhance prediction. We illustrate the effectiveness of our approach through simulation and by application to predicting spawning success of shovelnose sturgeon in the Lower Missouri River.

  8. Nonlinear-programming mathematical modeling of coal blending for power plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang Longhua; Zhou Junhu; Yao Qiang

    At present most coal blending work is guided by experience or by linear programming (LP), which cannot properly reflect the complicated characteristics of coal. Experimental and theoretical research shows that most coal blend properties cannot always be expressed as a linear function of the properties of the individual coals in the blend. The authors introduced nonlinear functions or processes (including neural networks and fuzzy mathematics), based on experiments directed by the authors and other researchers, to quantitatively describe the complex coal blend parameters. Finally, nonlinear programming (NLP) mathematical modeling of coal blending is introduced and utilized in the Hangzhou Coal Blending Center. Predictions based on the new method differed from those based on LP modeling. The authors conclude that it is very important to introduce NLP modeling, instead of LP modeling, into the work of coal blending.
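
    A minimal sketch of a nonlinear-programming blend optimization of the kind described, using scipy.optimize; the cost and sulfur figures and the interaction term are made-up placeholders for the paper's neural-network/fuzzy property models:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data for three coals: cost and sulfur content (illustrative values only).
cost = np.array([42.0, 55.0, 61.0])   # $/ton
sulfur = np.array([1.8, 0.9, 0.6])    # percent

def blend_sulfur(w):
    """Hypothetical nonlinear blending rule: linear mix plus a pairwise
    interaction term, standing in for a learned property model."""
    return w @ sulfur + 0.15 * (w[0] * w[1] + w[1] * w[2] + w[0] * w[2])

res = minimize(
    lambda w: w @ cost,                  # objective: minimize blend cost
    x0=np.full(3, 1 / 3),
    bounds=[(0.0, 1.0)] * 3,
    constraints=[
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # weights sum to 1
        {"type": "ineq", "fun": lambda w: 1.0 - blend_sulfur(w)}, # sulfur <= 1.0%
    ],
    method="SLSQP",
)
print(res.x, blend_sulfur(res.x))
```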

  9. Accounting for nonsampling error in estimates of HIV epidemic trends from antenatal clinic sentinel surveillance

    PubMed Central

    Eaton, Jeffrey W.; Bao, Le

    2017-01-01

    Objectives: The aim of the study was to propose and demonstrate an approach to allow additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Design: Mathematical model fitted to surveillance data with Bayesian inference. Methods: We introduce a variance inflation parameter σ²_infl that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence. It is additive to the sampling error variance. Three approaches are tested for estimating σ²_infl using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in the UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Results: Introducing the additional variance parameter σ²_infl increased the error variance around ANC-SS prevalence observations by a median of 2.7 times (interquartile range 1.9-3.8). Using only sampling error in ANC-SS prevalence (σ²_infl = 0), coverage of 95% prediction intervals was 69% in out-of-sample prediction tests. This increased to 90% after introducing the additional variance parameter σ²_infl. The revised probabilistic model improved model fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating σ²_infl did not increase the computational cost of model fitting. Conclusions: We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package model. This approach may prove useful for incorporating other data sources, such as routine prevalence from prevention of mother-to-child transmission testing, into future epidemic estimates. PMID:28296801
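
    In its simplest form, the inflation idea just adds a non-sampling variance term to the sampling variance inside the likelihood; a schematic Gaussian version (a simplification of the actual EPP likelihood) is:

```python
import numpy as np

def loglik_anc(prev_obs, prev_model, sampling_var, infl_var):
    """Gaussian working likelihood for ANC-SS prevalence in which the error
    variance is the sampling variance plus an additive non-sampling
    (inflation) term; scale and distributional form are simplifications."""
    total_var = np.asarray(sampling_var) + infl_var
    resid = np.asarray(prev_obs) - np.asarray(prev_model)
    return -0.5 * np.sum(np.log(2 * np.pi * total_var) + resid**2 / total_var)
```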

  10. Remote sensing-based measurement of Living Environment Deprivation: Improving classical approaches with machine learning

    PubMed Central

    Arribas-Bel, Daniel; Patino, Jorge E; Duque, Juan C

    2017-01-01

    This paper provides evidence on the usefulness of very high spatial resolution (VHR) imagery in gathering socioeconomic information in urban settlements. We use land cover, spectral, structure and texture features extracted from a Google Earth image of Liverpool (UK) to evaluate their potential to predict Living Environment Deprivation at a small statistical area level. We also contribute to the methodological literature on the estimation of socioeconomic indices with remote-sensing data by introducing elements from modern machine learning. In addition to classical approaches such as Ordinary Least Squares (OLS) regression and a spatial lag model, we explore the potential of the Gradient Boost Regressor and Random Forests to improve predictive performance and accuracy. In addition to novel predicting methods, we also introduce tools for model interpretation and evaluation such as feature importance and partial dependence plots, or cross-validation. Our results show that Random Forest proved to be the best model, with an R² of around 0.54, followed by the Gradient Boost Regressor with 0.5. Both the spatial lag model and the OLS fall behind, with significantly lower performances of 0.43 and 0.3, respectively. PMID:28464010
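
    A compact sketch of the kind of model comparison the paper reports; the model names match the paper, while the synthetic data stands in for the image-derived features and deprivation index:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the image features and the deprivation index.
X, y = make_regression(n_samples=300, n_features=20, noise=20.0, random_state=0)

for name, model in [
    ("OLS", LinearRegression()),
    ("Gradient Boost", GradientBoostingRegressor(random_state=0)),
    ("Random Forest", RandomForestRegressor(n_estimators=300, random_state=0)),
]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.2f}")
```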

  11. Calculation of precise firing statistics in a neural network model

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  12. Climatic niche shift predicts thermal trait response in one but not both introductions of the Puerto Rican lizard Anolis cristatellus to Miami, Florida, USA

    PubMed Central

    Kolbe, Jason J; VanMiddlesworth, Paul S; Losin, Neil; Dappen, Nathan; Losos, Jonathan B

    2012-01-01

    Global change is predicted to alter environmental conditions for populations in numerous ways; for example, invasive species often experience substantial shifts in climatic conditions during introduction from their native to non-native ranges. Whether these shifts elicit a phenotypic response, and how adaptation and phenotypic plasticity contribute to phenotypic change, are key issues for understanding biological invasions and how populations may respond to local climate change. We combined modeling, field data, and a laboratory experiment to test for changing thermal tolerances during the introduction of the tropical lizard Anolis cristatellus from Puerto Rico to Miami, Florida. Species distribution models and bioclimatic data analyses showed lower minimum temperatures, and greater seasonal and annual variation in temperature for Miami compared to Puerto Rico. Two separate introductions of A. cristatellus occurred in Miami about 12 km apart, one in South Miami and the other on Key Biscayne, an offshore island. As predicted from the shift in the thermal climate and the thermal tolerances of other Anolis species in Miami, laboratory acclimation and field acclimatization showed that the introduced South Miami population of A. cristatellus has diverged from its native-range source population by acquiring low-temperature acclimation ability. By contrast, the introduced Key Biscayne population showed little change compared to its source. Our analyses predicted an adaptive response for introduced populations, but our comparisons to native-range sources provided evidence for thermal plasticity in one introduced population but not the other. The rapid acquisition of thermal plasticity by A. cristatellus in South Miami may be advantageous for its long-term persistence there and expansion of its non-native range. Our results also suggest that the common assumption of no trait variation when modeling non-native species distributions is invalid. PMID:22957158

  13. Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants

    PubMed Central

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-01-01

    Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi–Socie model is introduced for model comparison and its additional material parameter k is found to not be a constant and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings. PMID:28792487

  14. A superstatistical model of metastasis and cancer survival

    NASA Astrophysics Data System (ADS)

    Leon Chen, L.; Beck, Christian

    2008-05-01

    We introduce a superstatistical model for the progression statistics of malignant cancer cells. The metastatic cascade is modeled as a complex nonequilibrium system with several macroscopic pathways and inverse-chi-square distributed parameters of the underlying Poisson processes. The predictions of the model are in excellent agreement with observed survival-time probability distributions of breast cancer patients.

  15. Solubility of organic compounds in octanol: Improved predictions based on the geometrical fragment approach.

    PubMed

    Mathieu, Didier

    2017-09-01

    Two new models are introduced to predict the solubility of chemicals in octanol (S_oct), taking advantage of the extensive character of log(S_oct) through a decomposition of molecules into so-called geometrical fragments (GF). They are extensively validated and their compliance with regulatory requirements is demonstrated. The first model requires just a molecular formula as input. Despite its extreme simplicity, it performs as well as an advanced random forest model involving 86 descriptors, with a root mean square error (RMSE) of 0.64 log units for an external test set of 100 molecules. For the second one, which requires the melting point T_m as input, introducing GF descriptors reduces the RMSE from about 0.7 to <0.5 log units, a performance that could previously be obtained only through the use of Abraham descriptors. A script is provided for easy application of the models, taking into account the limits of their applicability domains.

  16. Reynolds-Stress and Triple-Product Models Applied to a Flow with Rotation and Curvature

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.

    2016-01-01

    Turbulence models of increasing complexity, up to triple-product terms, are applied to the flow in a rotating pipe. The rotating pipe is a challenging case for turbulence models as it contains significant rotational and curvature effects. The flow field starts with classic fully developed pipe flow with a stationary pipe wall. This well-defined condition is then subjected to a section of pipe with a rotating wall. The rotating wall introduces a second velocity scale and creates Reynolds shear stresses in the radial-circumferential and circumferential-axial planes. Furthermore, the wall rotation introduces a flow stabilization and actually reduces the turbulent kinetic energy as the flow moves along the rotating wall section. It is shown in the present work that the Reynolds stress models are capable of predicting a significant reduction in the turbulent kinetic energy, but the triple-product model improves the predictions of the centerline turbulent kinetic energy, which is governed by convection, dissipation and transport terms, as the production terms vanish on the pipe axis.

  17. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.

  18. Modeling and predicting intertidal variations of the salinity field in the Bay/Delta

    USGS Publications Warehouse

    Knowles, Noah; Uncles, Reginald J.

    1995-01-01

    One approach to simulating daily to monthly variability in the bay is the development of an intertidal model using tidally-averaged equations and a time step on the order of a day. An intertidal numerical model of the bay's physics, capable of portraying seasonal and inter-annual variability, would have several uses. Observations are limited in time and space, so simulation could help fill the gaps. Also, the ability to simulate multi-year episodes (e.g., an extended drought) could provide insight into the response of the ecosystem to such events. Finally, such a model could be used in a forecast mode wherein predicted delta flow is used as model input, and the predicted salinity distribution is output, with estimates days and months in advance. This note briefly introduces such a tidally-averaged model (Uncles and Peterson, in press) and a corresponding predictive scheme for baywide forecasting.

  1. Interpretable Deep Models for ICU Outcome Prediction

    PubMed Central

    Che, Zhengping; Purushotham, Sanjay; Khemani, Robinder; Liu, Yan

    2016-01-01

    The exponential surge in health care data, such as longitudinal data from electronic health records (EHR) and sensor data from the intensive care unit (ICU), is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack the interpretability that is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance as strong as deep learning models. Experimental results on a pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches on mortality and ventilator-free-days prediction tasks but can also provide interpretable models to clinicians. PMID:28269832
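
    A toy sketch of the mimic-learning recipe, with a small MLP standing in for the deep model and synthetic data in place of the ICU dataset (all sizes and hyperparameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

# Teacher: a deep(ish) model trained on the hard labels.
teacher = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                        random_state=0).fit(X, y)
soft = teacher.predict_proba(X)[:, 1]  # soft labels carry the teacher's knowledge

# Student: gradient boosting trees regress onto the soft labels, yielding an
# interpretable mimic whose feature importances clinicians can inspect.
student = GradientBoostingRegressor(random_state=0).fit(X, soft)
print(student.feature_importances_[:5])
```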

  2. Operational Dust Prediction

    NASA Technical Reports Server (NTRS)

    Benedetti, Angela; Baldasano, Jose M.; Basart, Sara; Benincasa, Francesco; Boucher, Olivier; Brooks, Malcolm E.; Chen, Jen-Ping; Colarco, Peter R.; Gong, Sunlin; Huneeus, Nicolas; et al.

    2014-01-01

    Over the last few years, numerical prediction of dust aerosol concentration has become prominent at several research and operational weather centres due to growing interest from diverse stakeholders, such as solar energy plant managers, health professionals, aviation and military authorities, and policymakers. Dust prediction in numerical weather prediction-type models faces a number of challenges owing to the complexity of the system. At the centre of the problem is the vast range of scales required to fully account for all of the physical processes related to dust. Another limiting factor is the paucity of suitable dust observations available for model evaluation and assimilation. This chapter discusses in detail numerical prediction of dust, with examples from systems that are currently providing dust forecasts in near real-time or are part of international efforts to establish daily provision of dust forecasts based on multi-model ensembles. The various models are introduced and described, along with an overview of the importance of dust prediction activities and a historical perspective. Assimilation and evaluation aspects of dust prediction are also discussed.

  3. Proactive Supply Chain Performance Management with Predictive Analytics

    PubMed Central

    Stefanovic, Nenad

    2014-01-01

    Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on a specialized metamodel which allows modelling of any supply chain configuration and at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model, which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment. PMID:25386605

  4. Latent spatial models and sampling design for landscape genetics

    Treesearch

    Ephraim M. Hanks; Melvin B. Hooten; Steven T. Knick; Sara J. Oyler-McCance; Jennifer A. Fike; Todd B. Cross; Michael K. Schwartz

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial...

  5. IRT-ZIP Modeling for Multivariate Zero-Inflated Count Data

    ERIC Educational Resources Information Center

    Wang, Lijuan

    2010-01-01

    This study introduces an item response theory-zero-inflated Poisson (IRT-ZIP) model to investigate psychometric properties of multiple items and predict individuals' latent trait scores for multivariate zero-inflated count data. In the model, two link functions are used to capture two processes of the zero-inflated count data. Item parameters are…

  6. Predicting Microstructure and Microsegregation in Multicomponent Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Yan, Xinyan; Ding, Ling; Chen, ShuangLin; Xie, Fanyou; Chu, M.; Chang, Y. Austin

    Accurate predictions of microstructure and microsegregation in metallic alloys are highly important for applications such as alloy design and process optimization. Restricted assumptions concerning the phase diagram could easily lead to erroneous predictions. The best approach is to couple microsegregation modeling with phase diagram computations. A newly developed numerical model for the prediction of microstructure and microsegregation in multicomponent alloys during dendritic solidification is introduced. The micromodel is directly coupled with phase diagram calculations using a user-friendly and robust phase diagram calculation engine, PANDAT. Solid-state back diffusion, undercooling and coarsening effects are included in this model, and the experimentally measured cooling curves are used as the inputs to carry out the calculations. This model has been used to predict the microstructure and microsegregation in two multicomponent aluminum alloys, 2219 and 7050.

  7. Genetic programming based quantitative structure-retention relationships for the prediction of Kovats retention indices.

    PubMed

    Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S

    2015-11-13

    The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as the Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in QSRR development in gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that, given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study I) and adamantane derivatives (case study II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess an excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both case studies have yielded high (>0.9) values of the coefficient of determination (R²) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science.
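
    A minimal sketch of GP-based symbolic regression using the gplearn package (an assumption; the authors do not name their implementation), with synthetic data standing in for descriptors and KRIs:

```python
from gplearn.genetic import SymbolicRegressor
from sklearn.datasets import make_regression

# Synthetic stand-in for molecular descriptors (X) and retention indices (y).
X, y = make_regression(n_samples=200, n_features=4, noise=5.0, random_state=0)

gp = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X, y)        # evolves both the model structure and its parameters
print(gp._program)  # the evolved symbolic expression
```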

  8. Species distribution models may misdirect assisted migration: insights from the introduction of Douglas-fir to Europe.

    PubMed

    Boiffin, Juliette; Badeau, Vincent; Bréda, Nathalie

    2017-03-01

    Species distribution models (SDMs), which statistically relate species occurrence to climatic variables, are widely used to identify areas suitable for species growth under future climates and to plan for assisted migration. When SDMs are projected across time or space, it is assumed that species climatic requirements remain constant. However, empirical evidence supporting this assumption is rare, and SDM predictions could be biased. Historical human-aided movements of tree species can shed light on the reliability of SDM predictions in planning for assisted migration. We used Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco), a North American conifer introduced into Europe during the mid-19th century, as a case study to test niche conservatism. We combined transcontinental data sets of Douglas-fir occurrence and climatic predictors to compare the realized niches between native and introduced ranges. We calibrated an SDM in the native range and compared areas predicted to be climatically suitable with observed presences. The realized niches in the native and introduced ranges showed very limited overlap. The SDM calibrated in North America had very high predictive power in the native range, but failed to predict climatic suitability in Europe, where Douglas-fir grows in climates that have no analogue in the native range. We review the ecological mechanisms and silvicultural practices that can trigger such shifts in realized niches. Retrospective analysis of tree species introduction revealed that the assumption of niche conservatism is erroneous. As a result, distributions predicted by SDMs are substantially biased. There is a high risk that assisted migration programs may be misdirected and target inadequate species or introduction zones.

  9. Multimodel Ensemble Methods for Prediction of Wake-Vortex Transport and Decay

    NASA Technical Reports Server (NTRS)

    Korner, Stephan; Ahmad, Nashat N.; Holzapfel, Frank; VanValkenburg, Randal L.

    2017-01-01

    Several multimodel ensemble methods are selected and further developed to improve the deterministic and probabilistic prediction skills of individual wake-vortex transport and decay models. The different multimodel ensemble methods are introduced, and their suitability for wake applications is demonstrated. The selected methods include direct ensemble averaging, Bayesian model averaging, and Monte Carlo simulation. The different methodologies are evaluated employing data from wake-vortex field measurement campaigns conducted in the United States and Germany.

  10. Robust functional regression model for marginal mean and subject-specific inferences.

    PubMed

    Cao, Chunzheng; Shi, Jian Qing; Lee, Youngjo

    2017-01-01

    We introduce flexible robust functional regression models, using various heavy-tailed processes, including a Student t-process. We propose efficient algorithms in estimating parameters for the marginal mean inferences and in predicting conditional means as well as interpolation and extrapolation for the subject-specific inferences. We develop bootstrap prediction intervals (PIs) for conditional mean curves. Numerical studies show that the proposed model provides a robust approach against data contamination or distribution misspecification, and the proposed PIs maintain the nominal confidence levels. A real data application is presented as an illustrative example.

  11. Theoretical Prediction of Magnetism in C-doped TlBr

    NASA Astrophysics Data System (ADS)

    Zhou, Yuzhi; Haller, E. E.; Chrzan, D. C.

    2014-05-01

    We predict that C, N, and O dopants in TlBr can display large, localized magnetic moments. Density functional theory based electronic structure calculations show that the moments arise from partial filling of the crystal-field-split localized p states of the dopant atoms. A simple model is introduced to explain the magnitude of the moments.

  13. Predicting Negative Discipline in Traditional Families: A Multi-Dimensional Stress Model.

    ERIC Educational Resources Information Center

    Fisher, Philip A.

    An attempt is made to integrate existing theories of family violence by introducing the concept of family role stress. Role stressors may be defined as factors inhibiting the enactment of family roles. Multiple regression analyses were performed on data from 190 families to test a hypothesis involving the prediction of negative discipline at…

  14. A Hybrid RANS/LES Approach for Predicting Jet Noise

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.

  15. Financial technical indicator based on chaotic bagging predictors for adaptive stock selection in Japanese and American markets

    NASA Astrophysics Data System (ADS)

    Suzuki, Tomoya; Ohkura, Yuushi

    2016-01-01

    To examine the predictability and profitability of financial markets, we introduce three ideas that improve traditional technical analysis so that investment timings can be detected more quickly. First, a nonlinear prediction model enhances detection power by learning complex behavioral patterns hidden in financial markets. Second, the bagging algorithm is applied to quantify the confidence in predictions and to compose new technical indicators. Third, a two-step selection procedure improves investment performance: the first step selects the more predictable stocks during the learning period, and the second step adaptively and dynamically selects the most confident stock, i.e., the one showing the most significant technical signal, for each investment. Finally, investment simulations based on real financial data show that these ideas are successful in dealing with complex financial markets.
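
    A minimal sketch of the second idea, using bootstrap-aggregated trees so that the spread of member predictions serves as a confidence measure; the features and targets below are synthetic stand-ins for the paper's market data:

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(1)

    # Hypothetical lagged-return features and next-step returns
    X = rng.normal(size=(300, 5))
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=300)

    bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50).fit(X, y)

    x_new = rng.normal(size=(1, 5))
    member_preds = np.array([est.predict(x_new)[0] for est in bag.estimators_])

    # Agreement among bootstrap members serves as a confidence indicator
    print("prediction:", member_preds.mean(), "spread:", member_preds.std())
    ```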

  16. An age-specific biokinetic model for iodine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leggett, Richard Wayne

    This study reviews age-specific biokinetic data for iodine in humans and extends to pre-adult ages the baseline parameter values of the author’s previously published model for systemic iodine in adult humans. Compared with the ICRP’s current age-specific model for iodine introduced in Publication 56 (1989), the present model provides a more detailed description of the behavior of iodine in the human body; predicts greater cumulative (integrated) activity in the thyroid for short-lived isotopes of iodine; predicts similar cumulative activity in the thyroid for isotopes with half-time greater than a few hours; and, for most iodine isotopes, predicts much greater cumulative activity in salivary glands, stomach wall, liver, and kidneys.

  17. An age-specific biokinetic model for iodine

    DOE PAGES

    Leggett, Richard Wayne

    2017-10-26

    This study reviews age-specific biokinetic data for iodine in humans and extends to pre-adult ages the baseline parameter values of the author’s previously published model for systemic iodine in adult humans. Compared with the ICRP’s current age-specific model for iodine introduced in Publication 56 (1989), the present model provides a more detailed description of the behavior of iodine in the human body; predicts greater cumulative (integrated) activity in the thyroid for short-lived isotopes of iodine; predicts similar cumulative activity in the thyroid for isotopes with half-time greater than a few hours; and, for most iodine isotopes, predicts much greater cumulative activity in salivary glands, stomach wall, liver, and kidneys.

  18. An exponential filter model predicts lightness illusions

    PubMed Central

    Zeman, Astrid; Brooks, Kevin R.; Ghebreab, Sennay

    2015-01-01

    Lightness, or perceived reflectance of a surface, is influenced by surrounding context. This is demonstrated by the Simultaneous Contrast Illusion (SCI), where a gray patch is perceived lighter against a black background and vice versa. Conversely, assimilation is where the lightness of the target patch moves toward that of the bounding areas and can be demonstrated in White's effect. Blakeslee and McCourt (1999) introduced an oriented difference-of-Gaussian (ODOG) model that is able to account for both contrast and assimilation in a number of lightness illusions and that has been subsequently improved using localized normalization techniques. We introduce a model inspired by image statistics that is based on a family of exponential filters, with kernels spanning across multiple sizes and shapes. We include an optional second stage of normalization based on contrast gain control. Our model was tested on a well-known set of lightness illusions that have previously been used to evaluate ODOG and its variants, and model lightness values were compared with typical human data. We investigate whether predictive success depends on filters of a particular size or shape and whether pooling information across filters can improve performance. The best single filter correctly predicted the direction of lightness effects for 21 out of 27 illusions. Combining two filters together increased the best performance to 23, with asymptotic performance at 24 for an arbitrarily large combination of filter outputs. While normalization improved prediction magnitudes, it only slightly improved overall scores in direction predictions. The prediction performance of 24 out of 27 illusions equals that of the best performing ODOG variant, with greater parsimony. Our model shows that V1-style orientation-selectivity is not necessary to account for lightness illusions and that a low-level model based on image statistics is able to account for a wide range of both contrast and assimilation effects. PMID:26157381

  19. Nonstationary time series prediction combined with slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, G.; Chen, X.

    2015-01-01

    Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. These external driving forces should therefore be taken into account when reconstructing the climate dynamics. This paper presents a new technique that first extracts the driving force of a time series using the Slow Feature Analysis (SFA) approach and then introduces this driving force into a predictive model to forecast the nonstationary series. In essence, the main idea of the technique is to treat the driving forces as state variables and incorporate them into the prediction model. To test the method, experiments using a modified logistic time series and winter ozone data from Arosa, Switzerland, were conducted. The results showed improved and effective prediction skill.
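
    A minimal linear-SFA sketch under stated assumptions (finite-difference derivatives, delay embedding of a scalar series, a toy drift as the hidden driving force); the paper's SFA implementation may differ, for instance by using a nonlinear expansion:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def linear_sfa(x, n_components=1):
        """Return the slowest linear features of a multivariate signal x (T, d)."""
        x = x - x.mean(axis=0)
        xdot = np.diff(x, axis=0)          # finite-difference time derivative
        A = xdot.T @ xdot / len(xdot)      # covariance of derivatives
        B = x.T @ x / len(x)               # covariance of the signal
        evals, evecs = eigh(A, B)          # slowness = generalized eigenvalues
        return (x @ evecs)[:, :n_components]

    # Toy nonstationary series: fast dynamics modulated by a slow drift
    rng = np.random.default_rng(0)
    t = np.linspace(0, 20 * np.pi, 4000)
    drift = np.sin(0.05 * t)                              # hidden driving force
    series = np.sin(t) * (1 + 0.5 * drift) + drift + 0.01 * rng.normal(size=len(t))

    # Delay embedding turns the scalar series into a multivariate signal
    d, T = 10, len(series)
    embedded = np.column_stack([series[i:T - d + i] for i in range(d)])
    slow = linear_sfa(embedded)            # estimate of the driving force
    ```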

  20. The P-chain: relating sentence production and its disorders to comprehension and acquisition

    PubMed Central

    Dell, Gary S.; Chang, Franklin

    2014-01-01

    This article introduces the P-chain, an emerging framework for theory in psycholinguistics that unifies research on comprehension, production and acquisition. The framework proposes that language processing involves incremental prediction, which is carried out by the production system. Prediction necessarily leads to prediction error, which drives learning, including both adaptive adjustment to the mature language processing system as well as language acquisition. To illustrate the P-chain, we review the Dual-path model of sentence production, a connectionist model that explains structural priming in production and a number of facts about language acquisition. The potential of this and related models for explaining acquired and developmental disorders of sentence production is discussed. PMID:24324238

  1. The P-chain: relating sentence production and its disorders to comprehension and acquisition.

    PubMed

    Dell, Gary S; Chang, Franklin

    2014-01-01

    This article introduces the P-chain, an emerging framework for theory in psycholinguistics that unifies research on comprehension, production and acquisition. The framework proposes that language processing involves incremental prediction, which is carried out by the production system. Prediction necessarily leads to prediction error, which drives learning, including both adaptive adjustment to the mature language processing system as well as language acquisition. To illustrate the P-chain, we review the Dual-path model of sentence production, a connectionist model that explains structural priming in production and a number of facts about language acquisition. The potential of this and related models for explaining acquired and developmental disorders of sentence production is discussed.

  2. Genetic Model Fitting in IQ, Assortative Mating & Components of IQ Variance.

    ERIC Educational Resources Information Center

    Capron, Christiane; Vetta, Adrian R.; Vetta, Atam

    1998-01-01

    The biometrical school of scientists who fit models to IQ data traces its intellectual ancestry to R. Fisher (1918), but its genetic models have no predictive value. Fisher himself was critical of the concept of heritability, because assortative mating, such as for IQ, introduces complexities into the study of a genetic trait. (SLD)

  3. Using HFire for spatial modeling of fire in shrublands

    Treesearch

    Seth H. Peterson; Marco E. Morais; Jean M. Carlson; Philip E. Dennison; Dar A. Roberts; Max A. Moritz; David R. Weise

    2009-01-01

    An efficient raster fire-spread model named HFire is introduced. HFire can simulate single-fire events or long-term fire regimes, using the same fire-spread algorithm. This paper describes the HFire algorithm, benchmarks the model using a standard set of tests developed for FARSITE, and compares historical and predicted fire spread perimeters for three southern...

  4. New Correlation Methods of Evaporation Heat Transfer in Horizontal Microfine Tubes

    NASA Astrophysics Data System (ADS)

    Makishi, Osamu; Honda, Hiroshi

    A stratified flow model and an annular flow model of evaporation heat transfer in horizontal microfin tubes have been proposed. In the stratified flow model, the contributions of thin film evaporation and nucleate boiling in the groove above a stratified liquid were predicted by a previously reported numerical analysis and a newly developed correlation, respectively. The contributions of nucleate boiling and forced convection in the stratified liquid region were predicted by the new correlation and the Carnavos equation, respectively. In the annular flow model, the contributions of nucleate boiling and forced convection were predicted by the new correlation and the Carnavos equation in which the equivalent Reynolds number was introduced, respectively. A flow pattern transition criterion proposed by Kattan et al. was incorporated to predict the circumferential average heat transfer coefficient in the intermediate region by use of the two models. The predictions of the heat transfer coefficient compared well with available experimental data for ten tubes and four refrigerants.

  5. A LATIN-based model reduction approach for the simulation of cycling damage

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Mainak; Fau, Amelie; Nackenhorst, Udo; Néron, David; Ladevèze, Pierre

    2017-11-01

    The objective of this article is to introduce a new method, including model order reduction, for the life prediction of structures subjected to cyclic damage. Contrary to classical incremental schemes for damage computation, a non-incremental technique, the LATIN method, is used herein as the solution framework. This approach makes it possible to introduce a PGD model reduction technique, which leads to a drastic reduction of the computational cost. The proposed framework is exemplified for structures subjected to cyclic loading, where damage is considered to be isotropic and micro-defect closure effects are taken into account. A difficulty in using the LATIN method here comes from the state laws, which cannot be transformed into linear relations through an internal variable transformation. A specific treatment of this issue is introduced in this work.

  6. Handbook of Analytical Methods for Textile Composites

    NASA Technical Reports Server (NTRS)

    Cox, Brian N.; Flanagan, Gerry

    1997-01-01

    The purpose of this handbook is to introduce models and computer codes for predicting the properties of textile composites. The handbook includes several models for predicting the stress-strain response all the way to ultimate failure; methods for assessing work of fracture and notch sensitivity; and design rules for avoiding certain critical mechanisms of failure, such as delamination, by proper textile design. The following textiles received some treatment: 2D woven, braided, and knitted/stitched laminates and 3D interlock weaves, and braids.

  7. Computational intelligence in earth sciences and environmental applications: issues and challenges.

    PubMed

    Cherkassky, V; Krasnopolsky, V; Solomatine, D P; Valdes, J

    2006-03-01

    This paper introduces a generic theoretical framework for predictive learning, and relates it to data-driven and learning applications in earth and environmental sciences. The issues of data quality, selection of the error function, incorporation of the predictive learning methods into the existing modeling frameworks, expert knowledge, model uncertainty, and other application-domain specific problems are discussed. A brief overview of the papers in the Special Issue is provided, followed by discussion of open issues and directions for future research.

  8. Care 3 model overview and user's guide, first revision

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Petersen, P. L.

    1985-01-01

    A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution on the CDC Cyber 170 series computers, the DEC VAX-11/700 series computers, and most machines that compile ANSI Standard FORTRAN 77.

  9. Self-Esteem Maintenance in Family Dynamics.

    ERIC Educational Resources Information Center

    Tesser, Abraham

    1980-01-01

    In this study, a self-esteem maintenance model is introduced and is used to test predictions about sibling identification, sibling friction, and the closeness of father-son relationships as they relate to sibling performance. (Author/SS)

  10. Galileo Redux or, How Do Nonrigid, Extended Bodies Fall?

    ERIC Educational Resources Information Center

    Newburgh, Ronald; Andes, George M.

    1995-01-01

    Presents a model for the Slinky that allows for calculations that agree with observed behavior and predictions that suggest further experimentation. Offers an opportunity for introducing nonrigid bodies within the Galilean framework. (JRH)

  11. Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches

    USGS Publications Warehouse

    Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.

    2013-01-01

    At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
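
    A minimal sketch of the PLS-plus-threshold idea; the environmental covariates are synthetic and the threshold value is illustrative:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)

    # Hypothetical correlated environmental covariates (turbidity, rainfall,
    # wave height, ...) and log FIB concentrations
    X = rng.normal(size=(200, 8))
    y = X[:, :3] @ np.array([0.8, 0.5, 0.3]) + 0.5 * rng.normal(size=200)

    pls = PLSRegression(n_components=3).fit(X, y)
    y_hat = pls.predict(X).ravel()

    # Decision threshold separating predicted exceedances from non-exceedances;
    # in the paper this threshold is the tuning parameter of model selection.
    regulatory_standard = 1.0
    decision_threshold = 0.8      # set below the standard to favor sensitivity
    advisory = y_hat > decision_threshold
    print("posted advisories:", advisory.sum())
    ```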

  12. Statistical variation in progressive scrambling

    NASA Astrophysics Data System (ADS)

    Clark, Robert D.; Fox, Peter C.

    2004-07-01

    The two methods most often used to evaluate the robustness and predictivity of partial least squares (PLS) models are cross-validation and response randomization. Both methods may be overly optimistic for data sets that contain redundant observations, however. The kinds of perturbation analysis widely used for evaluating model stability in the context of ordinary least squares regression are only applicable when the descriptors are independent of each other and errors are independent and normally distributed; neither assumption holds for QSAR in general and for PLS in particular. Progressive scrambling is a novel, non-parametric approach to perturbing models in the response space in a way that does not disturb the underlying covariance structure of the data. Here, we introduce adjustments for two of the characteristic values produced by a progressive scrambling analysis - the deprecated predictivity (Q_s^{*2}) and the standard error of prediction (SDEP_s^*) - that correct for the effect of the introduced perturbation. We also explore the statistical behavior of the adjusted values (Q_0^{*2} and SDEP_0^*) and of the sensitivity to perturbation (dq^2/dr_{yy'}^2). It is shown that the three statistics are all robust for stable PLS models, in terms of the stochastic component of their determination and of their variation due to sampling effects involved in training set selection.

  13. A probabilistic neural network based approach for predicting the output power of wind turbines

    NASA Astrophysics Data System (ADS)

    Tabatabaei, Sajad

    2017-03-01

    Reliable tools for handling the uncertainty of wind speed forecasts are in strong demand as wind power penetration of power systems grows. Traditional models that generate only point forecasts are no longer considered sufficient. This paper therefore uses the concept of prediction intervals (PIs) to assess the uncertainty of wind power generation in power systems. Because the forecasting errors cannot be modelled properly by standard probability distributions, the paper uses a recently introduced non-parametric approach called lower upper bound estimation (LUBE) to build the PIs. In the proposed LUBE method, a PI combination-based fuzzy framework is used to overcome the performance instability of the neural networks (NNs) used in LUBE. Compared with other methods, this formulation better satisfies the PI coverage probability and PI normalised average width (PINAW) criteria. Since this nonlinear problem is highly complex, a new heuristic optimisation algorithm incorporating a novel modification is introduced to solve it. Based on data sets taken from a wind farm in Australia, the feasibility and satisfactory performance of the suggested method have been demonstrated.
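
    The two PI quality criteria named above are standard and easy to compute; a minimal sketch, with made-up observations and interval bounds:

    ```python
    import numpy as np

    def pi_coverage(y, lower, upper):
        """PICP: fraction of observations falling inside their intervals."""
        return np.mean((y >= lower) & (y <= upper))

    def pinaw(y, lower, upper):
        """PI normalised average width, normalised by the target range."""
        return np.mean(upper - lower) / (y.max() - y.min())

    # Hypothetical wind power observations and prediction interval bounds
    y = np.array([0.42, 0.55, 0.61, 0.35, 0.48])
    lower = y - 0.10
    upper = y + 0.12
    print(pi_coverage(y, lower, upper), pinaw(y, lower, upper))
    ```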

  14. The predictability of consumer visitation patterns

    NASA Astrophysics Data System (ADS)

    Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban

    2013-04-01

    We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population.
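
    A minimal sketch of the first-order Markov predictor described above, on a made-up visit sequence; population-level transition counts could be blended into `transitions` in the same way:

    ```python
    from collections import Counter, defaultdict

    # Hypothetical sequence of merchant visits for one shopper
    visits = ["grocer", "cafe", "grocer", "gym", "cafe", "grocer", "cafe"]

    # Estimate first-order transition counts
    transitions = defaultdict(Counter)
    for here, nxt in zip(visits, visits[1:]):
        transitions[here][nxt] += 1

    def predict_next(location):
        """Most probable next merchant given the current one."""
        counts = transitions[location]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("cafe"))   # -> 'grocer'
    ```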

  15. The predictability of consumer visitation patterns

    PubMed Central

    Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban

    2013-01-01

    We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population. PMID:23598917

  16. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  17. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations is formulated. Finally, a model based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  18. Analysis of Mining-Induced Subsidence Prediction by Exponent Knothe Model Combined with Insar and Leveling

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Zhang, Liguo; Tang, Yixian; Zhang, Hong

    2018-04-01

    The principle of the exponent Knothe model is introduced in detail, and the variation of mining subsidence with time is analysed based on the model's formulas for subsidence, subsidence velocity and subsidence acceleration. Five scenes of radar images and six levelling measurements were collected to extract ground deformation characteristics in a coal mining area. The unknown parameters of the exponent Knothe model were then estimated by combining the levelling data with line-of-sight deformation information obtained by the InSAR technique. Comparing the fitting and prediction results obtained from combined InSAR and levelling with those obtained from levelling alone shows that the combined approach is clearly more accurate. The InSAR measurements can therefore significantly improve the fitting and prediction accuracy of the exponent Knothe model.

  19. Can We Predict Patient Wait Time?

    PubMed

    Pianykh, Oleg S; Rosenthal, Daniel I

    2015-10-01

    The importance of patient wait-time management and predictability can hardly be overestimated: For most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen as derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work. Moreover, these models proved to be equally accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on the monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions for patient wait time. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
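
    A minimal sketch of a line-size model in the spirit described above: ordinary least squares of wait time on the number of patients ahead, with made-up data (the paper's models and predictor sets are richer):

    ```python
    import numpy as np

    # Hypothetical records: patients ahead in line vs. observed wait (minutes)
    line_size = np.array([0, 1, 2, 3, 5, 8, 10])
    wait_min = np.array([4, 9, 15, 21, 33, 52, 66])

    # Ordinary least squares fit of a line-size model
    slope, intercept = np.polyfit(line_size, wait_min, 1)

    def predicted_wait(n_ahead):
        return intercept + slope * n_ahead

    print(f"predicted wait with 6 ahead: {predicted_wait(6):.0f} min")
    ```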

  20. A predictive framework for evaluating models of semantic organization in free recall

    PubMed Central

    Morton, Neal W; Polyn, Sean M.

    2016-01-01

    Research in free recall has demonstrated that semantic associations reliably influence the organization of search through episodic memory. However, the specific structure of these associations and the mechanisms by which they influence memory search remain unclear. We introduce a likelihood-based model-comparison technique, which embeds a model of semantic structure within the context maintenance and retrieval (CMR) model of human memory search. Within this framework, model variants are evaluated in terms of their ability to predict the specific sequence in which items are recalled. We compare three models of semantic structure, latent semantic analysis (LSA), global vectors (GloVe), and word association spaces (WAS), and find that models using WAS have the greatest predictive power. Furthermore, we find evidence that semantic and temporal organization is driven by distinct item and context cues, rather than a single context cue. This finding provides an important constraint for theories of memory search. PMID:28331243

  1. On the selection of significant variables in a model for the deteriorating process of facades

    NASA Astrophysics Data System (ADS)

    Serrat, C.; Gibert, V.; Casas, J. R.; Rapinski, J.

    2017-10-01

    In previous works the authors of this paper introduced a predictive system that uses survival analysis techniques to study time-to-failure in the facades of a building stock. The approach is population based, in order to obtain information on the evolution of the stock across time and to help the manager in the decision-making process on global maintenance strategies. For decision making it is crucial to determine those covariates - such as materials, morphology and characteristics of the facade, orientation or environmental conditions - that play a significant role in the progression of different failures. The proposed platform also incorporates an open source GIS plugin that includes survival and test moduli, allowing the investigator to model the time until a lesion appears, taking into account the variables collected during the inspection process. The aim of this paper is twofold: a) to briefly introduce the predictive system, as well as the inspection and analysis methodologies, and b) to introduce and illustrate the modeling strategy for the deterioration process of an urban front. The illustration focuses on the city of L’Hospitalet de Llobregat (Barcelona, Spain), in which more than 14,000 facades have been inspected and analyzed.
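
    A minimal sketch of the survival-analysis step using a Cox proportional hazards model from the lifelines package; the column names and toy inspection records are hypothetical, and the authors' platform may use different estimators:

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical inspection data: one row per facade
    df = pd.DataFrame({
        "years_to_lesion": [12, 30, 25, 8, 40, 18, 22, 15],  # observed/censored time
        "lesion_observed": [1, 0, 1, 1, 0, 1, 0, 1],         # 1 = failure seen
        "orientation_north": [1, 0, 1, 0, 0, 1, 1, 0],
        "rendered_finish": [0, 1, 1, 1, 0, 0, 1, 0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years_to_lesion", event_col="lesion_observed")
    cph.print_summary()   # hazard ratios show which covariates matter
    ```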

  2. Tachyon cosmology, supernovae data, and the big brake singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keresztes, Z.; Gergely, L. A.; Gorini, V.

    2009-04-15

    We compare the existing observational data on type Ia supernovae with the evolutions of the Universe predicted by a one-parameter family of tachyon models which we have introduced recently [Phys. Rev. D 69, 123512 (2004)]. Among the set of the trajectories of the model which are compatible with the data there is a consistent subset for which the Universe ends up in a new type of soft cosmological singularity dubbed big brake. This opens up yet another scenario for the future history of the Universe besides the one predicted by the standard ΛCDM model.

  3. Predicting transmittance spectra of electrophotographic color prints

    NASA Astrophysics Data System (ADS)

    Mourad, Safer; Emmel, Patrick; Hersch, Roger D.

    2000-12-01

    For dry toner electrophotographic color printers, we present a numerical simulation model describing the color printer response based on a physical characterization of the different electrophotographic process steps. The proposed model introduces a Cross Transfer Efficiency designed to predict the color transmittance spectra of multi-color prints by taking into account the transfer influence of each deposited color toner layer upon the other layers. The simulation model leads to a better understanding of the factors that affect printing quality. In order to avoid the additional optical non-linearities produced by light reflection on paper, we have limited the present investigation to transparency prints. The proposed model succeeded in predicting the transmittance spectra of printed wedges combining two color toner layers with a mean deviation of less than ΔE = 2.5 in CIELAB.

  4. Assessing Tinto's Model of Institutional Departure Using American Indian and Alaskan Native Longitudinal Data. ASHE Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Pavel, D. Michael

    This paper on postsecondary outcomes illustrates a technique to determine whether or not mainstream models are appropriate for predicting educational outcomes of American Indians (AIs) and Alaskan Native (ANs). It introduces a prominent statistical procedure to assess models with empirical data and shows how the results can have implications for…

  5. The growth of business firms: theoretical framework and empirical evidence.

    PubMed

    Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S V; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H Eugene

    2005-12-27

    We introduce a model of proportional growth to explain the distribution P_g(g) of business-firm growth rates. The model predicts that P_g(g) is exponential in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships.
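
    A simplified Gibrat-style simulation (not the paper's full model with unit entry and exit) showing how proportional growth of heterogeneous numbers of constituent units produces heavier-than-Gaussian tails in the growth-rate distribution:

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(3)

    # Number of constituent units per firm (a stand-in for the paper's
    # entry/exit dynamics, which are not reproduced here)
    units = rng.geometric(p=0.1, size=20000)

    def grow(k):
        u = rng.lognormal(0.0, 1.0, k)     # unit sizes
        s = rng.lognormal(0.0, 0.2, k)     # independent proportional shocks
        return u.sum(), (u * s).sum()

    before, after = np.array([grow(k) for k in units]).T
    g = np.log(after / before)             # firm growth rates

    # Heavier-than-Gaussian tails show up as positive excess kurtosis
    print("excess kurtosis of growth rates:", kurtosis(g))
    ```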

  6. Structural reliability analysis under evidence theory using the active learning kriging model

    NASA Astrophysics Data System (ADS)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
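
    A sketch of active learning with a Gaussian-process (kriging) surrogate in which the next training point is chosen by the classic U learning function, which targets points whose sign prediction is least certain; the performance function, stopping level, and learning function here are illustrative and may differ from the paper's exact criterion:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def g(x):                       # toy performance function; failure when g < 0
        return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

    rng = np.random.default_rng(4)
    X_train = rng.uniform(-2, 2, size=(8, 2))         # initial design
    candidates = rng.uniform(-2, 2, size=(2000, 2))   # Monte Carlo pool

    for _ in range(20):                               # active learning loop
        gp = GaussianProcessRegressor(kernel=RBF()).fit(X_train, g(X_train))
        mu, sigma = gp.predict(candidates, return_std=True)
        U = np.abs(mu) / np.maximum(sigma, 1e-12)     # sign-uncertainty measure
        if U.min() > 2.0:                             # signs confidently predicted
            break
        X_train = np.vstack([X_train, candidates[np.argmin(U)]])

    mu, _ = gp.predict(candidates, return_std=True)
    print("estimated failure probability:", np.mean(mu < 0))
    ```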

  7. Prediction of AL and Dst Indices from ACE Measurements Using Hybrid Physics/Black-Box Techniques

    NASA Astrophysics Data System (ADS)

    Spencer, E.; Rao, A.; Horton, W.; Mays, L.

    2008-12-01

    ACE measurements of the solar wind velocity, IMF and proton density are used to drive a hybrid physics/black-box model of the nightside magnetosphere. The core physics is contained in a low-order nonlinear dynamical model of the nightside magnetosphere called WINDMI. The model is augmented by wavelet-based nonlinear mappings between the solar wind quantities and the input into the physics model, followed by further wavelet-based mappings of the model output field-aligned currents onto the ground-based magnetometer measurements of the AL index and Dst index. The black-box mappings are introduced at the input stage to account for uncertainties in the way the solar wind quantities are transported from the ACE spacecraft at L1 to the magnetopause. Similar mappings are introduced at the output stage to account for a spatially and temporally varying westward auroral electrojet geometry. The parameters of the model are tuned using a genetic algorithm and trained on the large geomagnetic storm dataset of October 3-7, 2000. Its predictive performance is then evaluated on subsequent storm datasets, in particular the April 15-24, 2002 storm. This work is supported by grant NSF 7020201

  8. Characteristics of Creep Damage for 60Sn-40Pb Solder Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Y.; Chow, C.L.; Fang, H.E.

    This paper presents a viscoplasticity model taking into account the effects of change in grain or phase size and damage on the characterization of creep damage in 60Sn-40Pb solder. Based on the theory of damage mechanics, a two-scalar damage model is developed for isotropic materials by introducing the free energy equivalence principle. The damage evolution equations are derived in terms of the damage energy release rates. In addition, a failure criterion is developed based on the postulation that a material element is said to have ruptured when the total damage accumulated in the element reaches a critical value. The damage-coupled viscoplasticity model is discretized and coded in a general-purpose finite element program known as ABAQUS through its user-defined material subroutine UMAT. To illustrate the application of the model, several example cases are introduced to analyze, both numerically and experimentally, the tensile creep behaviors of the material at three stress levels. The model is then applied to predict the deformation of a notched specimen under monotonic tension at room temperature (22 C). The results demonstrate that the proposed model can successfully predict the viscoplastic behavior of the solder material.

  9. A 1D-2D coupled SPH-SWE model applied to open channel flow simulations in complicated geometries

    NASA Astrophysics Data System (ADS)

    Chang, Kao-Hua; Sheu, Tony Wen-Hann; Chang, Tsang-Jung

    2018-05-01

    In this study, a one- and two-dimensional (1D-2D) coupled model is developed to solve the shallow water equations (SWEs). The solutions are obtained using a Lagrangian meshless method called smoothed particle hydrodynamics (SPH) to simulate shallow water flows in converging, diverging and curved channels. A buffer zone is introduced to exchange information between the 1D and 2D SPH-SWE models. Interpolated water discharge values and water surface levels at the internal boundaries are prescribed as the inflow/outflow boundary conditions in the two SPH-SWE models. In addition, instead of using the SPH summation operator, we directly solve the continuity equation by introducing a diffusive term to suppress oscillations in the predicted water depth. The performance of the two approaches in calculating the water depth is comprehensively compared through a case study of a straight channel. Additionally, three benchmark cases involving converging, diverging and curved channels are adopted to demonstrate the ability of the proposed 1D and 2D coupled SPH-SWE model through comparisons with measured data and predicted mesh-based numerical results. The proposed model provides satisfactory accuracy and guaranteed convergence.

  10. Predicting Mercury's precession using simple relativistic Newtonian dynamics

    NASA Astrophysics Data System (ADS)

    Friedman, Y.; Steiner, J. M.

    2016-03-01

    We present a new simple relativistic model for planetary motion describing accurately the anomalous precession of the perihelion of Mercury and its origin. The model is based on transforming Newton's classical equation for planetary motion from absolute to real spacetime influenced by the gravitational potential and introducing the concept of influenced direction.

  11. Forecasting Enrollments with Fuzzy Time Series.

    ERIC Educational Resources Information Center

    Song, Qiang; Chissom, Brad S.

    The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…

  12. Research on prediction of agricultural machinery total power based on grey model optimized by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xie, Yan; Li, Mu; Zhou, Jin; Zheng, Chang-zheng

    2009-07-01

    Agricultural machinery total power is an important index for reflecting and evaluating the level of agricultural mechanization. It is the power source of agricultural production and one of the main factors in enhancing comprehensive agricultural production capacity, expanding the scale of production and increasing farmers' income. Its demand is affected by natural, economic, technological, social and other "grey" factors. Therefore, grey system theory can be used to analyze the development of agricultural machinery total power. A method based on a genetic algorithm optimizing the grey modeling process is introduced in this paper. This method makes full use of the advantages of the grey prediction model and of the genetic algorithm's capacity for global optimization, so the resulting prediction model is more accurate. Using data from a province, a GM(1,1) model for predicting agricultural machinery total power was built based on grey system theory and the genetic algorithm. The result indicates that the model can be used as an effective tool for predicting agricultural machinery total power.
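
    A minimal GM(1,1) implementation for reference; the genetic-algorithm optimization of the modeling process described in the paper is omitted, and the input series is made up:

    ```python
    import numpy as np

    def gm11_forecast(x0, n_ahead=1):
        """Grey GM(1,1) forecast of a short, positive time series x0."""
        x1 = np.cumsum(x0)                                # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # develop. coeff, grey input
        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # time response function
        return np.diff(x1_hat)[-n_ahead:]                 # restore by differencing

    # Hypothetical total-power series (e.g. in 10^4 kW) for five years
    print(gm11_forecast(np.array([32.0, 35.1, 37.9, 41.2, 44.8]), n_ahead=2))
    ```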

  13. A coupled ductile fracture phase-field model for crystal plasticity

    NASA Astrophysics Data System (ADS)

    Hernandez Padilla, Carlos Alberto; Markert, Bernd

    2017-07-01

    Nowadays crack initiation and evolution play a key role in the design of mechanical components. In the past few decades, several numerical approaches have been developed with the objective to predict these phenomena. The objective of this work is to present a simplified, nonetheless representative phenomenological model to predict the crack evolution of ductile fracture in single crystals. The proposed numerical approach is carried out by merging a conventional elasto-plastic crystal plasticity model and a phase-field model modified to predict ductile fracture. A two-dimensional initial boundary value problem of ductile fracture is introduced considering a single-crystal setup and Nickel-base superalloy material properties. The model is implemented into the finite element context subjected to a quasi-static uniaxial tension test. The results are then qualitatively analyzed and briefly compared to current benchmark results in the literature.

  14. Mathematical models for predicting human mobility in the context of infectious disease spread: introducing the impedance model.

    PubMed

    Sallah, Kankoé; Giorgi, Roch; Bengtsson, Linus; Lu, Xin; Wetter, Erik; Adrien, Paul; Rebaudet, Stanislas; Piarroux, Renaud; Gaudart, Jean

    2017-11-22

    Mathematical models of human mobility have demonstrated a great potential for infectious disease epidemiology in contexts of data scarcity. While the commonly used gravity model involves parameter tuning and is thus difficult to implement without reference data, the more recent radiation model based on population densities is parameter-free, but biased. In this study we introduce the new impedance model, by analogy with electricity. Previous research has compared models on the basis of a few specific available spatial patterns; here, we use a systematic simulation-based approach to assess model performance. Five hundred spatial patterns were generated using various area sizes and location coordinates, and model performance was evaluated on these patterns. For simulated data, the comparison measures were average root mean square error (aRMSE) and bias criteria. Modeling of the 2010 Haiti cholera epidemic with a basic susceptible-infected-recovered (SIR) framework allowed an empirical evaluation by assessing the goodness-of-fit of the observed epidemic curve. The new, parameter-free impedance model outperformed previous models on simulated data according to the average aRMSE and bias criteria. The impedance model achieved better performance with heterogeneous population densities and small destination populations. As a proof of concept, the basic compartmental SIR framework was used to confirm the results obtained with the impedance model in predicting the spread of cholera in Haiti in 2010. The proposed impedance model provides accurate estimations of human mobility, especially when the population distribution is highly heterogeneous, and can therefore help to achieve more accurate predictions of disease spread in the context of an epidemic.
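
    For reference, the parameter-free radiation model mentioned above as a baseline is simple to state (the impedance model itself is the paper's contribution and is not reproduced here); the populations in the example are made up:

    ```python
    def radiation_flow(T_i, m_i, n_j, s_ij):
        """Expected flux from i to j under the radiation model (Simini et al.),
        the parameter-free baseline discussed above.

        T_i  -- total trips leaving location i
        m_i  -- population of origin i
        n_j  -- population of destination j
        s_ij -- population within radius r_ij of i, excluding i and j
        """
        return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

    # Toy example with made-up populations
    print(radiation_flow(T_i=1000, m_i=50_000, n_j=20_000, s_ij=100_000))
    ```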

  15. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.

  16. The use of predictive models to optimize risk of decisions.

    PubMed

    Baranyi, József; Buss da Silva, Nathália

    2017-01-02

    The purpose of this paper is to set up a mathematical framework that risk assessors and regulators could use to quantify the "riskiness" of a particular recommendation (choice/decision). The mathematical theory introduced here can be used for decision support systems. We point out that efficient use of predictive models in decision making for food microbiology needs to consider three major points: (1) the uncertainty and variability of the information on which the decision is to be made; (2) the validity of the predictive models aiding the assessor; and (3) the cost generated by the difference between the a priori choice and the a posteriori outcome. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. A Diffusive-Particle Theory of Free Recall

    PubMed Central

    Fumarola, Francesco

    2017-01-01

    Diffusive models of free recall have been recently introduced in the memory literature, but their potential remains largely unexplored. In this paper, a diffusive model of short-term verbal memory is considered, in which the psychological state of the subject is encoded as the instantaneous position of a particle diffusing over a semantic graph. The model is particularly suitable for studying the dependence of free-recall observables on the semantic properties of the words to be recalled. Besides predicting some well-known experimental features (forward asymmetry, semantic clustering, word-length effect), a novel prediction is obtained on the relationship between the contiguity effect and the syllabic length of words; shorter words, by way of their wider semantic range, are predicted to be characterized by stronger forward contiguity. A fresh analysis of archival free-recall data confirms this prediction. PMID:29085521

  18. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks.

    PubMed

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results.

  19. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks

    PubMed Central

    De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616

  20. Modelling and prediction for chaotic fir laser attractor using rational function neural network.

    PubMed

    Cho, S

    2001-02-01

    Many real-world systems, such as irregular ECG signals, the volatility of currency exchange rates and heated fluid reactions, exhibit the highly complex nonlinear characteristic known as chaos. These chaotic systems cannot be treated satisfactorily using linear system theory because of their high dimensionality and irregularity. This research focuses on the prediction and modelling of a chaotic FIR (far infrared) laser system for which the underlying equations are not given. This paper proposes a method for predicting and modelling a chaotic FIR laser time series using a rational function neural network. Three network architectures are also presented: the TDNN (time-delayed neural network), the RBF (radial basis function) network and the RF (rational function) network. Comparisons of the networks' performance show the improvements introduced by the RF network in terms of reduced network complexity and better predictability.
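
    A minimal sketch of one-step-ahead chaotic series prediction via delay embedding; a logistic-map series stands in for the FIR laser data, and a generic MLP stands in for the paper's rational function network:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Surrogate chaotic series (logistic map) standing in for the laser data
    x = np.empty(1200)
    x[0] = 0.3
    for t in range(1199):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

    # Delay embedding: predict x[t] from the d previous values
    d = 5
    X = np.column_stack([x[i:len(x) - d + i] for i in range(d)])
    y = x[d:]

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X[:1000], y[:1000])
    print("test R^2:", model.score(X[1000:], y[1000:]))
    ```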

  1. Prediction of clinical behaviour and treatment for cancers.

    PubMed

    Futschik, Matthias E; Sullivan, Mike; Reeve, Anthony; Kasabov, Nikola

    2003-01-01

    Prediction of clinical behaviour and treatment for cancers is based on the integration of clinical and pathological parameters. Recent reports have demonstrated that gene expression profiling provides a powerful new approach for determining disease outcome. If clinical and microarray data each contain independent information then it should be possible to combine these datasets to gain more accurate prognostic information. Here, we have used existing clinical information and microarray data to generate a combined prognostic model for outcome prediction for diffuse large B-cell lymphoma (DLBCL). A prediction accuracy of 87.5% was achieved. This constitutes a significant improvement compared to the previously most accurate prognostic model with an accuracy of 77.6%. The model introduced here may be generally applicable to the combination of various types of molecular and clinical data for improving medical decision support systems and individualising patient care.

  2. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction

    NASA Astrophysics Data System (ADS)

    Su, X.

    2017-12-01

    A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, which means that nonstationary processes, such as inversion and deformation during cloud movement, are largely ignored. It therefore remains a hard task to predict cloud movement in a timely and correct manner. Since deep learning models perform well in learning spatiotemporal features, we can regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to handle spatiotemporal features, and we build an end-to-end model to solve the forecasting problem. In this model, both the input and the output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data, and it performs well.
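
    A minimal convolutional GRU cell in PyTorch, in the spirit of the convolutional-GRU variant described (the record does not specify the paper's architecture details); the layer sizes are hypothetical:

    ```python
    import torch
    import torch.nn as nn

    class ConvGRUCell(nn.Module):
        """GRU cell whose state transitions are 2-D convolutions, so the hidden
        state keeps the spatial layout of the cloud-image sequence."""

        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            p = k // 2
            self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
            self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

        def forward(self, x, h):
            z, r = torch.chunk(torch.sigmoid(self.gates(torch.cat([x, h], 1))), 2, 1)
            h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
            return (1 - z) * h + z * h_tilde

    # One step on a toy 64x64 single-channel "satellite frame"
    cell = ConvGRUCell(in_ch=1, hid_ch=16)
    x = torch.randn(1, 1, 64, 64)
    h = torch.zeros(1, 16, 64, 64)
    h = cell(x, h)
    print(h.shape)    # torch.Size([1, 16, 64, 64])
    ```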

  3. An analytical approach for predicting pilot induced oscillations

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  4. Two models for identification and predicting behaviour of an induction motor system

    NASA Astrophysics Data System (ADS)

    Kuo, Chien-Hsun

    2018-01-01

    System identification or modelling is the process of building mathematical models of dynamical systems based on the available input and output data from the systems. This paper introduces system identification using ARX (Auto Regressive with eXogeneous input) and ARMAX (Auto Regressive Moving Average with eXogeneous input) models. Through the identified system model, the predicted output can be compared with the measured one to help prevent motor faults from developing into a catastrophic machine failure, and to avoid the unnecessary costs and delays caused by unscheduled repairs. An induction motor system is used as an illustrative example, for which numerical and experimental results are shown.
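
    A minimal sketch of ARX identification by least squares on simulated input/output data; the model orders and the toy system are illustrative:

    ```python
    import numpy as np

    def fit_arx(y, u, na=2, nb=2):
        """Least-squares fit of an ARX model:
        y[t] = -a1*y[t-1] - ... - a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]"""
        n = max(na, nb)
        rows = []
        for t in range(n, len(y)):
            rows.append(np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]]))
        Phi = np.array(rows)
        theta = np.linalg.lstsq(Phi, y[n:], rcond=None)[0]
        return theta[:na], theta[na:]        # (a coefficients, b coefficients)

    # Toy input/output data from a known system, then recover its coefficients
    rng = np.random.default_rng(5)
    u = rng.normal(size=500)
    y = np.zeros(500)
    for t in range(2, 500):
        y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.8 * u[t - 1] + 0.01 * rng.normal()

    a, b = fit_arx(y, u)
    print("a:", a, "b:", b)   # expect roughly a = [-1.2, 0.5], b = [0.8, 0.0]
    ```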

  5. Development of a recursion RNG-based turbulence model

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Vahala, George; Thangam, S.

    1993-01-01

    Reynolds stress closure models based on recursion renormalization group theory are developed for the prediction of turbulent separated flows. The proposed model uses a finite wavenumber truncation scheme to account for the spectral distribution of energy. In particular, the model incorporates effects of both local and nonlocal interactions. The nonlocal interactions are shown to yield a contribution identical to that from the epsilon-renormalization group (RNG), while the local interactions introduce higher-order dispersive effects. A formal analysis of the model is presented, and its ability to accurately predict separated flows is analyzed from a combined theoretical and computational standpoint. Turbulent flow past a backward-facing step is chosen as a test case, and the results obtained from detailed computations demonstrate that the proposed recursion-RNG model with a finite cut-off wavenumber can yield very good predictions for the backstep problem.

  6. Canonical Statistical Model for Maximum Expected Immission of Wire Conductor in an Aperture Enclosure

    NASA Technical Reports Server (NTRS)

    Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.

    2016-01-01

    Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full-wave simulation results are used to validate the foundational model.

  7. Reliability prediction of ontology-based service compositions using Petri net and time series models.

    PubMed

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of Web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing a non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
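
    The ARMA fitting-and-forecasting step alone can be sketched as follows, assuming the statsmodels package and a hypothetical response-time series; the NMSPN mapping is not shown.

    ```python
    # Fit an ARMA(1,1) model to a hypothetical response-time series and forecast
    # the next values (assumes the statsmodels package; ARMA == ARIMA with d = 0).
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    response_ms = 200 + np.cumsum(rng.standard_normal(100))   # toy latency history

    fit = ARIMA(response_ms, order=(1, 0, 1)).fit()
    print(fit.forecast(steps=5))   # predicted response times -> future firing rates
    ```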

  8. Biomechanics of injury prediction for anthropomorphic manikins - preliminary design considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engin, A.E.

    1996-12-31

    Anthropomorphic manikins are used in automobile safety research as well as in aerospace-related applications. There is now a strong need to advance biomechanics knowledge to determine appropriate criteria for injury likelihood prediction as functions of manikin-measured responses. In this paper, three regions of a manikin, namely, the head, knee joint, and lumbar spine, are taken as examples to introduce preliminary design considerations for injury prediction by means of responses of theoretical models and strategically placed sensing devices.

  9. Laszlo Tisza and the two-fluid model of superfluidity

    NASA Astrophysics Data System (ADS)

    Balibar, Sébastien

    2017-11-01

    The "two-fluid model" of superfluidity was first introduced by Laszlo Tisza in 1938. On that year, Tisza published the principles of his model as a brief note in Nature and two articles in French in the Comptes rendus de l'Académie des sciences, followed in 1940 by two other articles in French in the Journal de physique et le Radium. In 1941, the two-fluid model was reformulated by Lev Landau on a more rigorous basis. Successive experiments confirmed the revolutionary idea introduced by Tisza: superfluid helium is indeed a surprising mixture of two fluids with independent velocity fields. His prediction of the existence of heat waves, a consequence of his model, was also confirmed. Then, it took several decades for the superfluidity of liquid helium to be fully understood.

  10. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    NASA Astrophysics Data System (ADS)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of second multiplication curve fitting, the number and scale of Chinese E-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with Matlab software. The validity of the preventing-increase model is confirmed through numerical experiments. The experimental results show that the precision of the preventing-increase model is good.
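
    Assuming the preventing-increase model is of the usual saturating (logistic) form — an assumption on our part, since the abstract does not give the equation — the fitting step could look like this in Python rather than Matlab:

    ```python
    # Fit a logistic ("retarded growth") curve to hypothetical yearly site counts
    # (assumes scipy; the functional form is our assumption, see above).
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        return K / (1 + np.exp(-r * (t - t0)))   # K: saturation level, r: growth rate

    t = np.arange(10.0)
    counts = np.array([12, 20, 33, 52, 78, 104, 126, 141, 150, 155], float)
    (K, r, t0), _ = curve_fit(logistic, t, counts, p0=[160.0, 1.0, 4.0])
    print(K, r, t0, logistic(10.0, K, r, t0))    # parameters and next-year prediction
    ```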

  11. Ideal glass transitions in thin films: An energy landscape perspective

    NASA Astrophysics Data System (ADS)

    Truskett, Thomas M.; Ganesan, Venkat

    2003-07-01

    We introduce a mean-field model for the potential energy landscape of a thin fluid film confined between parallel substrates. The model predicts how the number of accessible basins on the energy landscape and, consequently, the film's ideal glass transition temperature depend on bulk pressure, film thickness, and the strength of the fluid-fluid and fluid-substrate interactions. The predictions are in qualitative agreement with the experimental trends for the kinetic glass transition temperature of thin films, suggesting the utility of landscape-based approaches for studying the behavior of confined fluids.

  12. Progress Toward Improving Jet Noise Predictions in Hot Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Kenzakowski, Donald C.

    2007-01-01

    An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.

  13. [Gaussian process regression and its application in near-infrared spectroscopy analysis].

    PubMed

    Feng, Ai-Ming; Fang, Li-Min; Lin, Min

    2011-06-01

    Gaussian process (GP) regression is applied in the present paper as a chemometric method to explore the complicated relationship between near-infrared (NIR) spectra and ingredients. After outliers were detected by the Monte Carlo cross-validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing and derivatives, were tried for the best performance of the models. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique, and the characteristic wavelengths obtained were further employed as input for modeling. A public dataset with 80 NIR spectra of corn was used as an example for evaluating the new algorithm. The optimal models for oil, starch and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP) and correlation coefficient (r). The models give good calibration ability, with r values above 0.99, and the prediction ability is also satisfactory, with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
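
    A minimal sketch of the GP regression step, assuming scikit-learn and a synthetic stand-in for the corn NIR data:

    ```python
    # GP regression sketch on a synthetic stand-in for the 80-sample corn NIR set
    # (assumes scikit-learn; wavelengths/ingredients are random placeholders).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(2)
    X = rng.standard_normal((80, 50))                         # spectra x wavelengths
    y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(80)  # "ingredient" content

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X[:60], y[:60])                                   # calibration set
    y_hat, y_std = gpr.predict(X[60:], return_std=True)       # prediction + uncertainty
    print(np.corrcoef(y[60:], y_hat)[0, 1])                   # correlation coefficient r
    ```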

  14. Forming limit prediction by an evolving non-quadratic yield criterion considering the anisotropic hardening and r-value evolution

    NASA Astrophysics Data System (ADS)

    Lian, Junhe; Shen, Fuhui; Liu, Wenqi; Münstermann, Sebastian

    2018-05-01

    Constitutive model development has been driven toward very accurate, fine-resolution descriptions of material behaviour in response to changes in various environmental variables. The evolving features of anisotropic behaviour during deformation have therefore drawn particular attention due to their possible impact on the sheet metal forming industry. An evolving non-associated Hill48 (enHill48) model was recently proposed and applied to forming limit prediction by coupling with the modified maximum force criterion. On the one hand, that study showed the significance of including the anisotropic evolution for accurate forming limit prediction. On the other hand, it also illustrated that the enHill48 model introduced an instability region that suddenly decreases the formability. Therefore, in this study, an alternative model that is based on the associated flow rule and provides similar anisotropic predictive capability is extended to capture the evolving effects and further applied to forming limit prediction. The final results are compared with experimental data as well as with the results of the enHill48 model.

  15. Sirius PSB: a generic system for analysis of biological sequences.

    PubMed

    Koh, Chuan Hock; Lin, Sharene; Jedd, Gregory; Wong, Limsoon

    2009-12-01

    Computational tools are essential components of modern biological research. For example, BLAST searches can be used to identify related proteins based on sequence homology, or, when a new genome is sequenced, prediction models can be used to annotate functional sites such as transcription start sites, translation initiation sites and polyadenylation sites, and to predict protein localization. Here we present Sirius Prediction Systems Builder (PSB), a new computational tool for sequence analysis, classification and searching. Sirius PSB has four main operations: (1) building a classifier, (2) deploying a classifier, (3) searching for proteins similar to query proteins, and (4) preliminary and post-prediction analysis. Sirius PSB supports all these operations via a simple and interactive graphical user interface. Besides being a convenient tool, Sirius PSB also introduces two novelties in sequence analysis. Firstly, a genetic algorithm is used to identify interesting features in the feature space. Secondly, instead of the conventional method of searching for similar proteins via sequence similarity, we introduce searching via feature similarity. To demonstrate the capabilities of Sirius PSB, we have built two prediction models - one for the recognition of Arabidopsis polyadenylation sites and another for the subcellular localization of proteins. Both systems are competitive against current state-of-the-art models when evaluated on public datasets. More notably, the time and effort required to build each model is greatly reduced with the assistance of Sirius PSB. Furthermore, we show that under certain conditions when BLAST is unable to find related proteins, Sirius PSB can identify functionally related proteins based on their biophysical similarities. Sirius PSB and its related supplements are available at: http://compbio.ddns.comp.nus.edu.sg/~sirius.

  16. Multi-dimensional classification of GABAergic interneurons with Bayesian network-modeled label uncertainty.

    PubMed

    Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro

    2014-01-01

    Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.

  17. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2014-09-30

    Majda, based on earlier theoretical work. 1. Dynamic Stochastic Superresolution of sparsely observed turbulent systems, M. Branicki (postdoc) ... of numerical models. Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that ... resolving subgrid-scale turbulence through Dynamic Stochastic Superresolution utilizing aliased grids is a potential breakthrough for practical online ...

  18. Predicting Flu Season Requirements: An Undergraduate Modeling Project

    ERIC Educational Resources Information Center

    Kramlich, Gary R., II; Braunstein Fierson, Janet L.; Wright, J. Adam

    2010-01-01

    This project was designed to be used in a freshman calculus class whose students had already been introduced to logistic functions and basic data modeling techniques. It need not be limited to such an audience, however; it has also been implemented in a topics in mathematics class for college upperclassmen. Originally intended to be presented in…

  19. Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Kubo, H.

    2013-12-01

    Constructing source models of huge subduction earthquakes is an important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships of the large slip areas of heterogeneous slip models and total SMGA sizes with respect to seismic moment for subduction earthquakes, and found a systematic change in the ratio of SMGA size to large slip area with seismic moment. They concluded that this tendency would be caused by differences in the period ranges of the source modeling analyses. In this paper, we try to construct a methodology for building source models of huge subduction earthquakes for strong ground motion prediction. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype of the source model for huge subduction earthquakes and validate the source model by strong ground motion modeling.

  20. Forecast Method of Solar Irradiance with Just-In-Time Modeling

    NASA Astrophysics Data System (ADS)

    Suzuki, Takanobu; Goto, Yusuke; Terazono, Takahiro; Wakao, Shinji; Oozeki, Takashi

    PV power output mainly depends on solar irradiance, which is affected by various meteorological factors, so predicting future solar irradiance is required for the efficient operation of PV systems. In this paper, we develop a novel approach to solar irradiance forecasting in which we combine a black-box model (Just-In-Time modeling) with a physical model (GPV data). We investigate the predictive accuracy of solar irradiance over the wide controlled area of each electric power company by utilizing measured data from 44 observation points throughout Japan provided by JMA and 64 points around Kanto provided by NEDO. Finally, we propose a method for forecasting solar irradiance at points where compiling a database is difficult, and we consider the influence of different GPV default times on solar irradiance prediction.
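
    The core of Just-In-Time (lazy learning) modeling is to fit a small local model on the stored samples most similar to the current query. A minimal sketch, with hypothetical features standing in for the GPV/measurement database:

    ```python
    # Just-In-Time prediction sketch: retrieve the k nearest stored samples and fit
    # a local linear model per query (numpy only; data are hypothetical).
    import numpy as np

    def jit_predict(X_db, y_db, x_query, k=20):
        idx = np.argsort(np.linalg.norm(X_db - x_query, axis=1))[:k]  # similar cases
        Phi = np.hstack([np.ones((k, 1)), X_db[idx]])                 # bias + features
        w, *_ = np.linalg.lstsq(Phi, y_db[idx], rcond=None)           # local fit
        return np.concatenate([[1.0], x_query]) @ w                   # local prediction

    rng = np.random.default_rng(3)
    X_db = rng.standard_normal((1000, 4))          # stored GPV-like feature vectors
    y_db = X_db @ np.array([0.5, -0.2, 0.1, 0.3]) + 0.05 * rng.standard_normal(1000)
    print(jit_predict(X_db, y_db, rng.standard_normal(4)))
    ```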

  1. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high-resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high-resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show that (1) estimated transition probabilities agree with simulated values and (2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs; the simulated BTCs fall within the predicted range. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
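
    The SMM's core mechanic — a correlated random walk over velocity classes — can be sketched as follows; the transition matrix and class velocities below are illustrative stand-ins for the values that the paper estimates from BTC data:

    ```python
    # Correlated random walk at the heart of the SMM (numpy; P and the class
    # velocities are illustrative, not values estimated from real BTCs).
    import numpy as np

    rng = np.random.default_rng(4)
    v = np.array([0.1, 0.5, 1.0, 2.0])            # representative class velocities
    P = np.array([[0.70, 0.20, 0.08, 0.02],       # row i: transition probabilities
                  [0.20, 0.50, 0.20, 0.10],       # from velocity class i
                  [0.05, 0.20, 0.50, 0.25],
                  [0.02, 0.08, 0.30, 0.60]])
    dx = 0.01                                     # fixed spatial transition length

    def travel_time(n_steps=1000):
        t, s = 0.0, rng.integers(4)
        for _ in range(n_steps):
            t += dx / v[s]                        # time to cross one step
            s = rng.choice(4, p=P[s])             # correlated jump to the next class
        return t

    arrivals = np.sort([travel_time() for _ in range(2000)])   # synthetic BTC arrivals
    print(arrivals[:5])
    ```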

  2. A mechanistic investigation of the algae growth "Droop" model.

    PubMed

    Lemesle, V; Mailleret, L

    2008-06-01

    In this work a mechanistic explanation of the classical algae growth model built by M. R. Droop in the late sixties is proposed. We first recall the history of the construction of the "predictive" variable-yield Droop model as well as the meaning of the introduced cell quota. We then introduce some theoretical hypotheses on the biological phenomena involved in nutrient storage by the algae, which lead us to a "conceptual" model. Though more complex than Droop's, our model remains accessible to a complete mathematical study: its comparison with the Droop model shows that both have the same asymptotic behavior. However, while Droop's cell quota comes from experimental biochemical measurements not related to intra-cellular biological phenomena, its analogue in our model follows directly from our theoretical hypotheses. This new model should then be viewed as a re-interpretation of Droop's work from a theoretical biologist's point of view.
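
    For reference, the classical chemostat form of the Droop model (a standard textbook formulation; the paper's notation may differ) reads:

    ```latex
    \begin{aligned}
    \dot{s} &= D\,(s_{\mathrm{in}} - s) - \rho(s)\,x, & \rho(s) &= \rho_{\max}\,\frac{s}{K_s + s},\\
    \dot{Q} &= \rho(s) - \mu(Q)\,Q, & \mu(Q) &= \bar{\mu}\,\Bigl(1 - \frac{Q_{\min}}{Q}\Bigr),\\
    \dot{x} &= \bigl(\mu(Q) - D\bigr)\,x, &&
    \end{aligned}
    ```

    where s is the external substrate concentration, Q the internal cell quota, x the biomass, D the dilution rate, and growth stops when the quota falls to Q_min.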

  3. Predictive Capabilities of Multiphysics and Multiscale Models in Modeling Solidification of Steel Ingots and DC Casting of Aluminum

    NASA Astrophysics Data System (ADS)

    Combeau, Hervé; Založnik, Miha; Bedel, Marie

    2016-08-01

    Prediction of solidification defects, such as macrosegregation and inhomogeneous microstructures, constitutes a key issue for industry. The development of models of casting processes needs to account for several imbricated length scales and different physical phenomena. For example, the kinetics of the growth of microstructures needs to be coupled with the multiphase flow at the process scale. We introduce such a state-of-the-art model and outline its principles. We present the most recent applications of the model to casting of a heavy steel ingot and to direct chill casting of a large Al alloy sheet ingot. Their ability to help in the understanding of complex phenomena, such as the competition between nucleation and growth of grains in the presence of convection of the liquid and of grain motion is shown, and its predictive capabilities are discussed. Key issues for future developments and research are addressed.

  4. Stoichio-Kinetic Modeling of Fenton Chemistry in a Meat-Mimetic Aqueous-Phase Medium.

    PubMed

    Oueslati, Khaled; Promeyrat, Aurélie; Gatellier, Philippe; Daudin, Jean-Dominique; Kondjoyan, Alain

    2018-05-31

    Fenton reaction kinetics, which involve an Fe(II)/Fe(III) oxidative redox cycle, were studied in a liquid medium that mimics meat composition. Muscle antioxidants (enzymes, peptides, and vitamins) were added one by one to the medium to determine their respective effects on the formation of superoxide and hydroxyl radicals. A stoichio-kinetic mathematical model was used to predict the formation of these radicals under different iron and H2O2 concentrations and temperature conditions. The difference between experimental and predicted results was mainly due to iron reactivity, which had to be taken into account in the model, and to uncertainties in some of the rate constant values introduced in the model. This stoichio-kinetic model will be useful for predicting oxidation during meat processing, provided it can be extended to take into account the presence of myoglobin in the muscle.
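
    The stoichio-kinetic approach amounts to integrating mass-action ODEs for the reaction network. A toy sketch with only the two core Fenton redox steps, assuming scipy and illustrative order-of-magnitude rate constants (not the paper's fitted values):

    ```python
    # Mass-action ODE sketch with just the two core Fenton steps (assumes scipy):
    #   Fe2+ + H2O2 -> Fe3+ + OH.  (k1)      Fe3+ + H2O2 -> Fe2+ + HO2.  (k2)
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2 = 60.0, 0.01   # M^-1 s^-1, roughly the accepted orders of magnitude

    def rhs(t, c):
        fe2, fe3, h2o2, oh = c
        r1, r2 = k1 * fe2 * h2o2, k2 * fe3 * h2o2
        # OH. is tracked as cumulative production (its own consumption is neglected).
        return [-r1 + r2, r1 - r2, -r1 - r2, r1]

    sol = solve_ivp(rhs, (0.0, 600.0), [1e-4, 0.0, 1e-3, 0.0])   # 10 min, initial M
    print(sol.y[:, -1])   # [Fe2+], [Fe3+], [H2O2], cumulative [OH.] at t = 600 s
    ```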

  5. A New Stress-Based Model of Political Extremism

    PubMed Central

    Canetti-Nisim, Daphna; Halperin, Eran; Sharvit, Keren; Hobfoll, Stevan E.

    2011-01-01

    Does exposure to terrorism lead to hostility toward minorities? Drawing on theories from clinical and social psychology, we propose a stress-based model of political extremism in which psychological distress—which is largely overlooked in political scholarship—and threat perceptions mediate the relationship between exposure to terrorism and attitudes toward minorities. To test the model, a representative sample of 469 Israeli Jewish respondents was interviewed on three occasions at six-month intervals. Structural Equation Modeling indicated that exposure to terrorism predicted psychological distress (t1), which predicted perceived threat from Palestinian citizens of Israel (t2), which, in turn, predicted exclusionist attitudes toward Palestinian citizens of Israel (t3). These findings provide solid evidence and a mechanism for the hypothesis that terrorism introduces nondemocratic attitudes threatening minority rights. It suggests that psychological distress plays an important role in political decision making and should be incorporated in models drawing upon political psychology. PMID:22140275

  6. Modeling and dynamic environment analysis technology for spacecraft

    NASA Astrophysics Data System (ADS)

    Fang, Ren; Zhaohong, Qin; Zhong, Zhang; Zhenhao, Liu; Kai, Yuan; Long, Wei

    Spacecraft sustain complex and severe vibration and acoustic environments during flight. Predicting the resulting structural responses, including numerical prediction of fluctuating pressure, model updating, and random vibration and acoustic analysis, plays an important role during the design, manufacture and ground testing of spacecraft. In this paper, Monotony Integrative Large Eddy Simulation (MILES) is introduced to predict the fluctuating pressure on the fairing. The exact flow structures on the fairing wall surface under different Mach numbers are obtained, and then a spacecraft model is constructed using the finite element method (FEM). According to the modal test data, the model is updated by the penalty method. On this basis, the random vibration and acoustic responses of the fairing and satellite are analyzed by different methods. The simulated results agree well with the experimental ones, which shows the validity of the modeling and dynamic environment analysis technology. This information can better support test planning, defining test conditions and designing optimal structures.

  7. Forecasting stochastic neural network based on financial empirical mode decomposition.

    PubMed

    Wang, Jie; Wang, Jun

    2017-06-01

    In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to weight the occurrence time of the historical data. Linear regression is used to assess the predictive ability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing the predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results of real stock index series, and the empirical results show that the proposed model indeed displays good performance in forecasting stock market fluctuations. Copyright © 2017 Elsevier Ltd. All rights reserved.
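
    The decomposition step alone can be sketched as below, assuming the third-party PyEMD (EMD-signal) package; in the paper's scheme each extracted mode would then feed the STNN predictor, which is not shown here.

    ```python
    # Decompose a toy "price" series into intrinsic mode functions (assumes the
    # PyEMD / EMD-signal package; the STNN forecasting stage is not shown).
    import numpy as np
    from PyEMD import EMD

    t = np.linspace(0.0, 1.0, 500)
    price = np.sin(8 * np.pi * t) + 0.5 * np.sin(40 * np.pi * t) + 0.1 * t

    imfs = EMD().emd(price)    # rows: fastest oscillation first, residual trend last
    print(imfs.shape)          # each mode can then be modeled and forecast separately
    ```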

  8. Predicting perceptual quality of images in realistic scenario using deep filter banks

    NASA Astrophysics Data System (ADS)

    Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang

    2018-03-01

    Classical image perceptual quality assessment models usually resort to natural scene statistics methods, which are based on the assumption that certain reliable statistical regularities hold for undistorted images and are corrupted by introduced distortions. However, these models usually fail to accurately predict the degradation severity of images in realistic scenarios, since complex, multiple, and interactive authentic distortions usually appear in them. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from the filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation, and finally a linear support vector regression model is trained to map image representations onto images' subjective perceptual quality scores. The experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.

  9. Orthogonal Gaussian process models

    DOE PAGES

    Plumlee, Matthew; Joseph, V. Roshan

    2017-01-01

    Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.

  11. A generalized preferential attachment model for business firms growth rates. I. Empirical evidence

    NASA Astrophysics Data System (ADS)

    Pammolli, F.; Fu, D.; Buldyrev, S. V.; Riccaboni, M.; Matia, K.; Yamasaki, K.; Stanley, H. E.

    2007-05-01

    We introduce a model of proportional growth to explain the distribution P(g) of business firm growth rates. The model predicts that P(g) is Laplace in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products, to firms, to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.

  12. From the Cover: The growth of business firms: Theoretical framework and empirical evidence

    NASA Astrophysics Data System (ADS)

    Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S. V.; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H. Eugene

    2005-12-01

    We introduce a model of proportional growth to explain the distribution Pg(g) of business-firm growth rates. The model predicts that Pg(g) is exponential in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships.

  13. Discharge in Long Air Gaps: Modelling and applications

    NASA Astrophysics Data System (ADS)

    Beroual, A.; Fofana, I.

    2016-06-01

    Discharge in Long Air Gaps: Modelling and applications presents self-consistent predictive dynamic models of positive and negative discharges in long air gaps. Equivalent models are also derived to predict lightning parameters based on the similarities between long air gap discharges and lightning flashes. Macroscopic air gap discharge parameters are calculated by solving electrical, empirical and physical equations, and comparisons between computed and experimental results for various test configurations are presented and discussed. This book is intended to provide a fresh perspective by contributing an innovative approach to this research domain, and universities with programs in high-voltage engineering will find this volume to be a working example of how to introduce the basics of electric discharge phenomena.

  14. Experimental Evaluation of Tuned Chamber Core Panels for Payload Fairing Noise Control

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Allen, Albert R.; Herlan, Jonathan W.; Rosenthal, Bruce N.

    2015-01-01

    Analytical models have been developed to predict the sound absorption and sound transmission loss of tuned chamber core panels. The panels are constructed of two facesheets sandwiching a corrugated core. When ports are introduced through one facesheet, the long chambers within the core can be used as an array of low-frequency acoustic resonators. To evaluate the accuracy of the analytical models, absorption and sound transmission loss tests were performed on flat panels. Measurements show that the acoustic resonators embedded in the panels improve both the absorption and transmission loss of the sandwich structure at frequencies near the natural frequency of the resonators. Analytical predictions for absorption closely match measured data. However, transmission loss predictions miss important features observed in the measurements. This suggests that higher-fidelity analytical or numerical models will be needed to supplement transmission loss predictions in the future.

  15. Applying Mondrian Cross-Conformal Prediction To Estimate Prediction Confidence on Large Imbalanced Bioactivity Data Sets.

    PubMed

    Sun, Jiangming; Carlsson, Lars; Ahlberg, Ernst; Norinder, Ulf; Engkvist, Ola; Chen, Hongming

    2017-07-24

    Conformal prediction has been proposed as a more rigorous way to define prediction confidence than other applicability domain concepts that have previously been used for QSAR modeling. One main advantage of such a method is that it provides a prediction region, potentially with multiple predicted labels, which contrasts with the single-valued (regression) or single-label (classification) output predictions of standard QSAR modeling algorithms. Standard conformal prediction might not be suitable for imbalanced data sets. Therefore, Mondrian cross-conformal prediction (MCCP), which combines Mondrian inductive conformal prediction with cross-fold calibration sets, has been introduced. In this study, the MCCP method was applied to 18 publicly available data sets with imbalance levels varying from 1:10 to 1:1000 (ratio of active to inactive compounds). Our results show that MCCP in general performed well on bioactivity data sets with various imbalance levels. More importantly, the method not only provides confidence of prediction and prediction regions compared to standard machine learning methods but also produces valid predictions for the minority class. In addition, a compound-similarity-based nonconformity measure was investigated. Our results demonstrate that although it gives valid predictions, its efficiency is much worse than that of model-dependent metrics.
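
    A numpy sketch of the Mondrian (class-conditional) idea at the heart of MCCP: calibration scores are kept per class, so the validity guarantee holds separately for the minority class. The nonconformity measure and data below are hypothetical placeholders:

    ```python
    # Class-conditional (Mondrian) inductive conformal prediction sketch (numpy).
    # Scores stand in for P(active) from any fitted classifier; eps is the
    # significance level, and the returned label sets may contain 0, 1 or both.
    import numpy as np

    def mondrian_regions(cal_scores, cal_labels, test_scores, eps=0.2):
        regions = []
        for s in test_scores:
            region = set()
            for label in (0, 1):
                # Nonconformity: distance of the predicted probability from the
                # label, compared only against calibration examples of that class.
                alphas = np.abs(cal_scores[cal_labels == label] - label)
                p_val = (np.sum(alphas >= abs(s - label)) + 1) / (len(alphas) + 1)
                if p_val > eps:
                    region.add(label)
            regions.append(region)
        return regions

    rng = np.random.default_rng(5)
    cal_scores = rng.uniform(size=1000)
    cal_labels = (cal_scores + 0.2 * rng.standard_normal(1000) > 0.5).astype(int)
    print(mondrian_regions(cal_scores, cal_labels, [0.05, 0.50, 0.95]))
    ```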

  16. A rational account of pedagogical reasoning: teaching by, and learning from, examples.

    PubMed

    Shafto, Patrick; Goodman, Noah D; Griffiths, Thomas L

    2014-06-01

    Much of learning and reasoning occurs in pedagogical situations--situations in which a person who knows a concept chooses examples for the purpose of helping a learner acquire the concept. We introduce a model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher's examples. We present three experiments testing the model predictions for rule-based, prototype, and causally structured concepts. The model shows good quantitative and qualitative fits to the data across all three experiments, predicting novel qualitative phenomena in each case. We conclude by discussing implications for understanding concept learning and implications for theoretical claims about the role of pedagogy in human learning. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    PubMed

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
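
    A minimal sketch of the idea with a toy one-parameter "mechanistic" model and a random-walk Metropolis sampler (numpy only; the model, prior and noise level are hypothetical stand-ins for a real nutrition model):

    ```python
    # Random-walk Metropolis sketch: calibrate the rate constant k of a toy model
    # y = 5*(1 - exp(-k*x)) against noisy observations (numpy only).
    import numpy as np

    rng = np.random.default_rng(6)
    x = np.linspace(0.0, 10.0, 30)
    y_obs = 5 * (1 - np.exp(-0.4 * x)) + 0.2 * rng.standard_normal(30)

    def log_post(k, sigma=0.2):
        if k <= 0:
            return -np.inf                      # prior support: k > 0 (flat otherwise)
        resid = y_obs - 5 * (1 - np.exp(-k * x))
        return -0.5 * np.sum(resid**2) / sigma**2

    chain, k = [], 1.0
    for _ in range(5000):
        k_new = k + 0.05 * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(k_new) - log_post(k):
            k = k_new                           # accept the proposal
        chain.append(k)
    post = np.array(chain[1000:])               # discard burn-in
    print(post.mean(), np.percentile(post, [2.5, 97.5]))   # posterior summary for k
    ```

    Predictions would then be drawn by running the model over the retained posterior samples, which yields the posterior predictive distribution described above.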

  18. Zee-Babu type model with U(1)Lμ-Lτ gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-05-01

    We extend the Zee-Babu model, introducing local U(1)Lμ-Lτ symmetry with several singly charged bosons. We find a predictive neutrino mass texture in a simple hypothesis in which mixings among singly charged bosons are negligible. Also, lepton-flavor violations are less constrained compared with the original model. Then, we explore the testability of the model, focusing on doubly charged boson physics at the LHC and the International Linear Collider.

  19. Modeling of propulsive jet plumes--extension of modeling capabilities by utilizing wall curvature effects

    NASA Astrophysics Data System (ADS)

    Doerr, S. E.

    1984-06-01

    Modeling of aerodynamic interference effects of propulsive jet plumes, by using inert gases as substitute propellants, introduces design limits. To extend the range of modeling capabilities, nozzle wall curvature effects may be utilized. Numerical calculations, using the Method of Characteristics, were made and experimental data were taken to evaluate the merits of the theoretical predictions. A bibliography, listing articles that led to the present report, is included.

  20. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work aimed at enhancing turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  1. Millimeter wave attenuation prediction using a piecewise uniform rain rate model

    NASA Technical Reports Server (NTRS)

    Persinger, R. R.; Stutzman, W. L.; Bostian, C. W.; Castle, R. E., Jr.

    1980-01-01

    A piecewise uniform rain rate distribution model is introduced as a quasi-physical model of real rain along earth-space millimeter wave propagation paths. It permits calculation of the total attenuation from the specific attenuation in a simple fashion. The model predictions are verified by comparison with direct attenuation measurements for several frequencies, elevation angles, and locations. Also, coupled with the Rice-Holmberg rain rate model, attenuation statistics are predicted from rainfall accumulation data.
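
    With a piecewise uniform rain rate, the total attenuation reduces to a sum of specific-attenuation contributions (the usual aR^b power law) over the path segments; the coefficients below are illustrative values of roughly the 10 GHz range, not the paper's:

    ```python
    # Total attenuation over piecewise uniform rain: sum a*R^b (specific
    # attenuation, dB/km) times segment length; coefficients are illustrative.
    import numpy as np

    a, b = 0.0101, 1.276
    rain_mm_h = np.array([10.0, 25.0, 5.0])   # uniform rain rate in each segment
    seg_km = np.array([1.2, 0.8, 2.0])        # path length within each segment

    total_db = np.sum(a * rain_mm_h**b * seg_km)
    print(total_db)                           # total path attenuation in dB
    ```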

  2. Predicting the probability of mortality of gastric cancer patients using decision tree.

    PubMed

    Mohammadzadeh, F; Noorkojuri, H; Pourhoseingholi, M A; Saadat, S; Baghestani, A R

    2015-06-01

    Gastric cancer is the fourth most common cancer worldwide. This motivated us to investigate and introduce gastric cancer risk factors utilizing statistical methods. The aim of this study was to identify the most important factors influencing the mortality of patients suffering from gastric cancer and to introduce a classification approach based on a decision tree model for predicting the probability of mortality from this disease. Data on 216 patients with gastric cancer, who were registered at Taleghani hospital in Tehran, Iran, were analyzed. First, patients were divided into two groups: dead and alive. Then, to fit a decision tree model to our data, we randomly selected 20% of the dataset as the test sample, with the remaining dataset serving as the training sample. Finally, the validity of the model was examined using sensitivity, specificity, diagnostic accuracy and the area under the receiver operating characteristic curve. CART version 6.0 and SPSS version 19.0 software were used for the analysis of the data. Diabetes, ethnicity, tobacco, tumor size, surgery, pathologic stage, age at diagnosis, exposure to chemical weapons and alcohol consumption were determined to be factors affecting mortality from gastric cancer. The sensitivity, specificity and accuracy of the decision tree were 0.72, 0.75 and 0.74, respectively. These indices show that the decision tree model has acceptable accuracy for predicting the probability of mortality in gastric cancer patients. Thus, a simple decision tree consisting of factors affecting gastric cancer mortality may help clinicians as a reliable and practical tool to predict the probability of mortality in these patients.
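
    A sketch of the same classify-and-validate workflow using scikit-learn in place of CART 6.0, with a random placeholder for the nine patient factors:

    ```python
    # Decision-tree workflow sketch with scikit-learn in place of CART 6.0
    # (random placeholders for the 216 patients x 9 risk factors).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix, roc_auc_score

    rng = np.random.default_rng(7)
    X = rng.standard_normal((216, 9))
    y = (X[:, 0] + X[:, 1] + rng.standard_normal(216) > 0).astype(int)  # 1 = died

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    tree = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)

    tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
    print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
    print("AUC", roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]))
    ```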

  3. Practical simplifications for radioimmunotherapy dosimetric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, S.; DeNardo, G.L.; O'Donnell, R.T.

    1999-01-01

    Radiation dosimetry is potentially useful for assessment and prediction of efficacy and toxicity for radionuclide therapy. The usefulness of these dose estimates relies on the establishment of a dose-response model using accurate pharmacokinetic data and a radiation dosimetric model. Due to the complexity of radiation dose estimation, many practical simplifications have been introduced in dosimetric modeling for clinical trials of radioimmunotherapy. Although research efforts are generally needed to improve the simplifications used at each stage of model development, practical simplifications are often possible for specific applications without significant consequences for the dose-response model. In the development of dosimetric methods for radioimmunotherapy, practical simplifications in the dosimetric models were introduced. This study evaluated the magnitude of uncertainty associated with practical simplifications for: (1) organ mass of the MIRD phantom; (2) radiation contribution from the target alone; (3) interpolation of S values; (4) macroscopic tumor uniformity; and (5) fit of tumor pharmacokinetic data.

  4. Analysis of errors introduced by geographic coordinate systems on weather numeric prediction modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Yanni; Cervone, Guido; Barkley, Zachary

    Most atmospheric models, including the Weather Research and Forecasting (WRF) model, use a spherical geographic coordinate system to internally represent input data and perform computations. However, most geographic information system (GIS) input data used by the models are based on a spheroid datum because it better represents the actual geometry of the earth. WRF and other atmospheric models use these GIS input layers as if they were in a spherical coordinate system without accounting for the difference in datum. When GIS layers are not properly reprojected, latitudinal errors of up to 21 km in the midlatitudes are introduced. Recent studies have suggested that for very high-resolution applications, the difference in datum in the GIS input data (e.g., terrain land use, orography) should be taken into account. However, the magnitude of errors introduced by the difference in coordinate systems remains unclear. This research quantifies the effect of using a spherical vs. a spheroid datum for the input GIS layers used by WRF to study greenhouse gas transport and dispersion in northeast Pennsylvania.

  7. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    NASA Astrophysics Data System (ADS)

    Xavier, Marcelo A.; Trimboli, M. Scott

    2015-07-01

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest that significant performance improvements might be achieved by extending the result to electrochemical models.
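
    A hedged sketch of the kind of constrained optimization solved at each MPC step, using a linearized 1-RC equivalent circuit and the cvxpy package; all cell parameters, limits and the linear OCV curve are illustrative assumptions, not the paper's 2nd-order model:

    ```python
    # Constrained fast-charge planning on a linearized 1-RC equivalent circuit
    # (assumes cvxpy; in receding-horizon MPC this problem is re-solved each step
    # and only the first current command is applied).
    import numpy as np
    import cvxpy as cp

    dt, Q = 1.0, 3600.0                 # sample time [s], capacity [A s]
    R0, R1, C1 = 0.05, 0.02, 1000.0     # ohmic and RC-branch parameters (illustrative)
    a1 = np.exp(-dt / (R1 * C1))        # discrete RC dynamics
    b1 = R1 * (1 - a1)
    v0, k_ocv = 3.4, 0.7                # linearized OCV(soc) ~ v0 + k_ocv * soc

    N = 120                             # prediction horizon [samples]
    soc = cp.Variable(N + 1)            # state of charge
    vrc = cp.Variable(N + 1)            # RC-branch (diffusion) voltage
    i = cp.Variable(N)                  # charging current [A]

    cons = [soc[0] == 0.2, vrc[0] == 0.0]
    for t in range(N):
        cons += [soc[t + 1] == soc[t] + (dt / Q) * i[t],      # coulomb counting
                 vrc[t + 1] == a1 * vrc[t] + b1 * i[t],       # RC relaxation
                 i[t] >= 0, i[t] <= 5.0,                      # input constraint
                 v0 + k_ocv * soc[t] + vrc[t] + R0 * i[t] <= 4.2]  # output constraint

    prob = cp.Problem(cp.Maximize(soc[N]), cons)   # charge as far as possible
    prob.solve()
    print(soc.value[-1], i.value[:5])
    ```

    The direct feed-through term mentioned in the abstract corresponds to the R0*i[t] contribution in the voltage constraint, which acts on the output instantaneously rather than through the state.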

  8. Toward a comprehensive model of antisocial development: a dynamic systems approach.

    PubMed

    Granic, Isabela; Patterson, Gerald R

    2006-01-01

    The purpose of this article is to develop a preliminary comprehensive model of antisocial development based on dynamic systems principles. The model is built on the foundations of behavioral research on coercion theory. First, the authors focus on the principles of multistability, feedback, and nonlinear causality to reconceptualize real-time parent-child and peer processes. Second, they model the mechanisms by which these real-time processes give rise to negative developmental outcomes, which in turn feed back to determine real-time interactions. Third, they examine mechanisms of change and stability in early- and late-onset antisocial trajectories. Finally, novel clinical designs and predictions are introduced. The authors highlight new predictions and present studies that have tested aspects of the model.

  9. Quantum decay model with exact explicit analytical solution

    NASA Astrophysics Data System (ADS)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  10. Application of predictive modelling techniques in industry: from food design up to risk assessment.

    PubMed

    Membré, Jeanne-Marie; Lambert, Ronald J W

    2008-11-30

    In this communication, examples of applications of predictive microbiology in industrial contexts (i.e. Nestlé and Unilever) are presented which cover a range of applications in food safety from formulation and process design to consumer safety risk assessment. A tailor-made, private expert system, developed to support safe product/process design assessment is introduced as an example of how predictive models can be deployed for use by non-experts. Its use in conjunction with other tools and software available in the public domain is discussed. Specific applications of predictive microbiology techniques are presented relating to investigations of either growth or limits to growth with respect to product formulation or process conditions. An example of a probabilistic exposure assessment model for chilled food application is provided and its potential added value as a food safety management tool in an industrial context is weighed against its disadvantages. The role of predictive microbiology in the suite of tools available to food industry and some of its advantages and constraints are discussed.

  11. Comprehensive model of a hermetic reciprocating compressor

    NASA Astrophysics Data System (ADS)

    Yang, B.; Ziviani, D.; Groll, E. A.

    2017-08-01

    A comprehensive simulation model is presented to predict the performance of a hermetic reciprocating compressor and to reveal the underlying mechanisms when the compressor is running. The presented model is composed of sub-models simulating the in-cylinder compression process, piston ring/journal bearing frictional power loss, single phase induction motor and the overall compressor energy balance among different compressor components. The valve model, leakage through piston ring model and in-cylinder heat transfer model are also incorporated into the in-cylinder compression process model. A numerical algorithm solving the model is introduced. The predicted results of the compressor mass flow rate and input power consumption are compared to the published compressor map values. Future work will focus on detailed experimental validation of the model and parametric studies investigating the effects of structural parameters, including the stroke-to-bore ratio, on the compressor performance.

  12. Development of a Nonlinear Soft-Sensor Using a GMDH Network for a Refinery Crude Distillation Tower

    NASA Astrophysics Data System (ADS)

    Fujii, Kenzo; Yamamoto, Toru

    In atmospheric distillation processes, stabilization of the process is required in order to optimize the crude-oil composition according to product market conditions. However, process control systems sometimes fall into unstable states when unexpected disturbances are introduced, and these unusual phenomena have had an undesirable effect on certain products. Furthermore, a useful chemical engineering model has not yet been established for these phenomena, which remains a serious problem in the atmospheric distillation process. This paper describes a new modeling scheme to predict unusual phenomena in the atmospheric distillation process using the GMDH (Group Method of Data Handling) network, a type of network model. With the GMDH network, the model structure can be determined systematically. However, the least squares method has commonly been utilized to determine the weight coefficients (model parameters), and sufficient estimation accuracy cannot always be expected, because only the sum of squared errors between the measured values and estimates is evaluated. Therefore, instead of evaluating the sum of squared errors, the sum of absolute values of errors is introduced, and the Levenberg-Marquardt method is employed to determine the model parameters. The effectiveness of the proposed method is evaluated by foaming prediction in the crude oil switching operation in the atmospheric distillation process.

  13. Application of Fracture Distribution Prediction Model in Xihu Depression of East China Sea

    NASA Astrophysics Data System (ADS)

    Yan, Weifeng; Duan, Feifei; Zhang, Le; Li, Ming

    2018-02-01

    Each type of logging data responds differently to changes in formation characteristics, and fractures produce outliers in the logs. For this reason, the development of fractures in a formation can be characterized by fine analysis of the logging curves. Well logs such as resistivity, sonic transit time, density, neutron porosity and gamma ray, which are classified as conventional well logs, are the most sensitive to formation fractures. Because the traditional fracture prediction model, which uses a simple weighted average of different logging data to calculate a comprehensive fracture index, is susceptible to subjective factors and can deviate substantially, a statistical method is introduced. Combining the responses of conventional logging data to the development of formation fractures, a prediction model based on membership functions is established; its essence is to analyse logging data with fuzzy mathematics theory. Fracture predictions for a well in the NX block of the Xihu depression obtained with the two models are compared with imaging logging results, which shows that the accuracy of the membership-function-based fracture prediction model is better than that of the traditional model. Furthermore, its predictions are highly consistent with the imaging logs and reflect the development of fractures much better. This can provide a reference for engineering practice.
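
    The membership-function idea can be sketched as follows: each log's anomaly is mapped through a fuzzy membership function, and the per-log memberships are fused into a single fracture index (the ranges, logs and fusion rule below are illustrative, not the paper's calibration):

    ```python
    # Fuzzy membership-function fracture index sketch (numpy; illustrative only).
    import numpy as np

    def membership(x, lo, hi):
        """Linear ramp: 0 below lo, 1 above hi (a simple fuzzy membership function)."""
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    # Hypothetical anomaly magnitudes extracted from three conventional logs
    # at three depths (normalized units).
    resistivity_drop = np.array([0.1, 0.6, 0.9])
    sonic_increase = np.array([0.2, 0.5, 0.8])
    density_drop = np.array([0.0, 0.4, 0.7])

    m = np.vstack([membership(resistivity_drop, 0.2, 0.8),
                   membership(sonic_increase, 0.1, 0.7),
                   membership(density_drop, 0.1, 0.6)])
    fracture_index = m.mean(axis=0)        # fuse memberships (here: arithmetic mean)
    print(fracture_index)                  # values near 1 flag likely fractured zones
    ```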

  14. Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models

    PubMed Central

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of Web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing a non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as input to the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
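    As a minimal sketch of the time-series step only (the Petri-net part of the framework is omitted), the snippet below fits an ARMA model to a synthetic response-time series of one service component and forecasts the next values; statsmodels' ARIMA with d = 0 serves as the ARMA fit.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(42)
        # synthetic historical response times (ms) with mild autocorrelation
        resp_times = 120 + np.convolve(rng.normal(0, 5, 300), [1.0, 0.4], mode="same")

        arma = ARIMA(resp_times, order=(1, 0, 1)).fit()  # ARMA(1,1) = ARIMA(1,0,1)
        next_resp = arma.forecast(steps=5)               # predicted response times
        print(next_resp)  # these would parameterize the NMSPN firing rates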

  15. Potential ecological and economic consequences of climate-driven agricultural and silvicultural transformations in central Siberia

    NASA Astrophysics Data System (ADS)

    Tchebakova, Nadezhda M.; Zander, Evgeniya V.; Pyzhev, Anton I.; Parfenova, Elena I.; Soja, Amber J.

    2014-05-01

    Increased warming predicted by general circulation models (GCMs) by the end of the century is expected to dramatically impact Siberian forests. Both natural climate-change-driven disturbances (weather, wildfire, infestation) and anthropogenic disturbances (legal/illegal logging) have increased, and their impact on the Siberian boreal forest has been mounting over the last three decades. The Siberian BioClimatic Model (SiBCliM) was used to simulate Siberian forests, and the resultant maps show a severely decreased forest that has shifted northwards and changed in composition. Predicted drier climates would enhance the risks of high fire danger and thawing permafrost, both of which challenge contemporary ecosystems. Our current goal is to evaluate the ecological and economic consequences of climate warming, to optimise economic loss/gain effects in forestry versus agriculture, and to question the relative economic value of supporting forestry, agriculture or mixed agro-forestry at the southern forest border in central Siberia, which is predicted to undergo the most noticeable land-cover and land-use changes. We developed and used forest and agricultural bioclimatic models to predict forest shifts, the introduction of novel tree species and their climatypes in a warmer climate, and/or the introduction of potential novel agriculture with a new variety of crops by the end of the century. We applied two strategies to estimate climate change effects, motivated by forest disturbance. One is a genetic means of assisting trees and forests to be harmonized with a changing climate by developing management strategies for seed transfer to locations that are best ecologically suited to the genotypes in future climates. The second strategy is the establishment of agricultural lands in new forest-steppe and steppe habitats, because the forests would retreat northwards. Currently, food, forage, and biofuel crops primarily reside in the steppe and forest-steppe zones, which are known to have favorable climatic and soil resources. During this century, traditional Siberian crops are predicted to gradually shift northwards, and new crops, currently non-existent but potentially important in a warmer climate, could be introduced in the extreme south. The economic effect of climate change impacts on agriculture in a future warmer climate was estimated based on a production function approach and the Ricardian model. The production function estimated the climate impacts of temperature, precipitation and carbon dioxide levels. The Ricardian model examined climate impacts on the net rent or value of farmland in various regions. The models produced the optimal distribution of agricultural lands between the crop, livestock, and forestry sectors to compensate for economic losses in forestry in potential land-use areas, depending on climatic change.

  16. Light-transmittance predictions under multiple-light-scattering conditions. I. Direct problem: hybrid-method approximation.

    PubMed

    Czerwiński, M; Mroczka, J; Girasole, T; Gouesbet, G; Gréhan, G

    2001-03-20

    Our aim is to present a method of predicting light transmittances through dense three-dimensional layered media. A hybrid method is introduced as a combination of the four-flux method with coefficients predicted from a Monte Carlo statistical model, to take into account the actual three-dimensional geometry of the problem under study. We present the principles of the hybrid method, illustrative results of numerical simulations, and their comparison with results obtained from the Bouguer-Lambert-Beer law and from Monte Carlo simulations.

  17. Comparison of Models for Ball Bearing Dynamic Capacity and Life

    NASA Technical Reports Server (NTRS)

    Gupta, Pradeep K.; Oswald, Fred B.; Zaretsky, Erwin V.

    2015-01-01

    Generalized formulations for the dynamic capacity and life of ball bearings, based on the models introduced by Lundberg and Palmgren and by Zaretsky, have been developed and implemented in the bearing dynamics computer code ADORE. Unlike the original Lundberg-Palmgren dynamic capacity equation, where the elastic properties are part of the life constant, the generalized formulations permit variation of the elastic properties of the interacting materials. The updated Lundberg-Palmgren model thus allows prediction of life as a function of elastic properties. For elastic properties similar to those of AISI 52100 bearing steel, the original and updated Lundberg-Palmgren models provide identical results. A comparison between the Lundberg-Palmgren and Zaretsky models shows that at relatively light loads the Zaretsky model predicts a much higher life than the Lundberg-Palmgren model. As the load increases, the Zaretsky model shows a much faster drop-off in life, because it is much more sensitive to load than the Lundberg-Palmgren model. The generalized implementation, in which all model parameters can be varied, provides an effective tool for future model validation and enhancement of bearing life prediction capabilities.
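    For context, at fixed geometry and materials both models reduce to a load-life power law; the exponents below are the values usually quoted for ball bearings (this framing is a common summary, not taken from the paper):

        L_{10} = \left( \frac{C_D}{P_{eq}} \right)^{p}, \qquad p_{LP} = 3 \ (\text{ball bearings}), \quad p_{Z} \approx 4,

    where L_{10} is the life that 90% of a bearing population survives, C_D the dynamic capacity, and P_{eq} the equivalent load; the larger Zaretsky exponent is consistent with the stronger load sensitivity noted above.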

  18. Fuzzy time series forecasting model with natural partitioning length approach for predicting the unemployment rate under different degree of confidence

    NASA Astrophysics Data System (ADS)

    Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud

    2017-08-01

    Fuzzy time series forecasting models have been proposed since 1993 to cater for data in the form of linguistic values. Many improvements and modifications have been made to the model, such as enhancements to the length of intervals and the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms as discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study: first-order and second-order fuzzy relations. The proposed model can produce forecasted values under different degrees of confidence.

  19. A probabilistic framework to infer brain functional connectivity from anatomical connections.

    PubMed

    Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel

    2011-01-01

    We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.

  20. Pollen dispersal slows geographical range shift and accelerates ecological niche shift under climate change

    PubMed Central

    Aguilée, Robin; Raoul, Gaël; Rousset, François; Ronce, Ophélie

    2016-01-01

    Species may survive climate change by migrating to track favorable climates and/or adapting to different climates. Several quantitative genetics models predict that species escaping extinction will change their geographical distribution while keeping the same ecological niche. We introduce pollen dispersal in these models, which affects gene flow but not directly colonization. We show that plant populations may escape extinction because of both spatial range and ecological niche shifts. Exact analytical formulas predict that increasing pollen dispersal distance slows the expected spatial range shift and accelerates the ecological niche shift. There is an optimal distance of pollen dispersal, which maximizes the sustainable rate of climate change. These conclusions hold in simulations relaxing several strong assumptions of our analytical model. Our results imply that, for plants with long distance of pollen dispersal, models assuming niche conservatism may not accurately predict their future distribution under climate change. PMID:27621443

  2. Possibilities of GIS for Plant Introduction

    USDA-ARS?s Scientific Manuscript database

    The Ecological Atlas of Russia and Neighboring Countries was used to predict the distribution of introduced species. Acer negundo was used as an example and an ecological model was developed using the natural distribution of the species in North America and some American ecological variables: namely...

  3. Introducing Perception and Modelling of Spatial Randomness in Classroom

    ERIC Educational Resources Information Center

    De Nóbrega, José Renato

    2017-01-01

    A strategy to facilitate understanding of spatial randomness is described, using student activities developed in sequence: looking at spatial patterns, simulating approximate spatial randomness using a grid of equally-likely squares, using binomial probabilities for approximations and predictions and then comparing with given Poisson…

  4. Experiment-specific cosmic microwave background calculations made easier - Approximation formula for smoothed delta T/T windows

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.

    1993-01-01

    Simple and easy-to-implement elementary-function approximations are introduced for the spectral window functions needed in calculations of model predictions of the cosmic microwave background (CMB) anisotropy. These approximations allow the investigator to obtain model delta T/T predictions in terms of single integrals over the power spectrum of cosmological perturbations, avoiding the need to perform the additional integrations. The high accuracy of these approximations is demonstrated here for CDM-theory-based calculations of the expected delta T/T signal in several experiments searching for the CMB anisotropy.

  5. Predicting the Activity Coefficients of Free-Solvent for Concentrated Globular Protein Solutions Using Independently Determined Physical Parameters

    PubMed Central

    McBride, Devin W.; Rodgers, Victor G. J.

    2013-01-01

    The activity coefficient is largely considered an empirical parameter that was traditionally introduced to correct for the non-ideality observed in thermodynamic properties such as osmotic pressure. Here, the activity coefficient of free-solvent is related to physically realistic parameters, and a mathematical expression is developed to directly predict the activity coefficient of free-solvent for aqueous protein solutions up to near-saturation concentrations. The model is based on the free-solvent model, which has previously been shown to provide excellent predictions of the osmotic pressure of concentrated and crowded globular proteins in aqueous solution up to near-saturation concentrations. The model uses only independently determined, physically realizable quantities: mole fraction, solvent-accessible surface area, and ion binding. Predictions are presented for the activity coefficients of free-solvent for near-saturated protein solutions containing either bovine serum albumin or hemoglobin. As a verification step, the predictability of the model for the activity coefficient of sucrose solutions was evaluated. The predicted activity coefficients of free-solvent are compared to activity coefficients calculated from osmotic pressure data. It is observed that the predicted activity coefficients become increasingly dependent on the solute-solvent parameters as the protein concentration approaches saturation. PMID:24324733
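    For reference, the standard thermodynamic link between the free-solvent activity coefficient and osmotic pressure, in generic notation (not taken from the paper):

        \Pi = -\frac{RT}{\bar{V}_1}\,\ln a_1 = -\frac{RT}{\bar{V}_1}\,\ln(\gamma_1 x_1),

    where \Pi is the osmotic pressure, \bar{V}_1 the partial molar volume of the solvent, x_1 the solvent mole fraction, and \gamma_1 the free-solvent activity coefficient, so predicting \gamma_1 is equivalent to predicting \Pi.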

  6. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and, by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383

  7. Desert mammal populations are limited by introduced predators rather than future climate change

    PubMed Central

    Wardle, Glenda M.; Dickman, Chris R.

    2017-01-01

    Climate change is predicted to place up to one in six species at risk of extinction in coming decades, but extinction probability is likely to be influenced further by biotic interactions such as predation. We use structural equation modelling to integrate results from remote camera trapping and long-term (17–22 years) regional-scale (8000 km2) datasets on vegetation and small vertebrates (greater than 38 880 captures) to explore how biotic processes and two key abiotic drivers influence the structure of a diverse assemblage of desert biota in central Australia. We use our models to predict how changes in rainfall and wildfire are likely to influence the cover and productivity of the dominant vegetation and the impacts of predators on their primary rodent prey over a 100-year timeframe. Our results show that, while vegetation cover may decline due to climate change, the strongest negative effect on prey populations in this desert system is top-down suppression from introduced predators. PMID:29291051

  8. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
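    As a minimal sketch of the three-grid procedure described (the safety factor and sample values are illustrative, and this is the textbook formulation, not the paper's coarse-grid alternative):

        import numpy as np

        def gci(f_fine, f_mid, f_coarse, r, fs=1.25):
            """Observed order, Richardson estimate, and grid convergence index
            for a time-averaged quantity computed on grids of spacing h, r*h,
            r^2*h with constant refinement ratio r."""
            p = np.log(abs(f_coarse - f_mid) / abs(f_mid - f_fine)) / np.log(r)
            f_exact = f_fine + (f_fine - f_mid) / (r**p - 1.0)  # Richardson extrapolation
            e21 = abs((f_fine - f_mid) / f_fine)                # relative error, fine pair
            return p, f_exact, fs * e21 / (r**p - 1.0)          # GCI of the fine grid

        # e.g. a time-averaged void fraction on three successively refined grids
        print(gci(0.452, 0.460, 0.481, r=2.0))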

  9. Low Data Drug Discovery with One-Shot Learning

    PubMed Central

    2017-01-01

    Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds (Ma, J. et al. J. Chem. Inf. Model. 2015, 55, 263–274). However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the iterative refinement long short-term memory, that, when combined with graph convolutional neural networks, significantly improves learning of meaningful distance metrics over small molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep learning in drug discovery (Ramsundar, B. deepchem.io. https://github.com/deepchem/deepchem, 2016). PMID:28470045

  10. An emission-weighted proximity model for air pollution exposure assessment.

    PubMed

    Zou, Bin; Wilson, J Gaines; Zhan, F Benjamin; Zeng, Yongnian

    2009-08-15

    Among the most common spatial models for estimating personal exposure are Traditional Proximity Models (TPMs). Though TPMs are straightforward to configure and interpret, they are prone to extensive errors in exposure estimates and do not provide prospective estimates. To resolve these inherent problems with TPMs, we introduce here a novel Emission Weighted Proximity Model (EWPM), which takes into consideration the emissions from all sources potentially influencing the receptors. EWPM performance was evaluated by comparing the normalized exposure risk values of sulfur dioxide (SO2) calculated by EWPM with those calculated by TPM and with monitored observations over a one-year period in two large Texas counties. In order to investigate whether the limitations of TPM in potential exposure risk prediction without recorded incidence can be overcome, we also introduce a hybrid framework, a 'Geo-statistical EWPM': a synthesis of ordinary kriging geo-statistical interpolation and EWPM. The prediction results are presented as two potential exposure risk prediction maps. The performance of these two exposure maps in predicting individual SO2 exposure risk was validated with 10 virtual cases in prospective exposure scenarios. Risk values for EWPM were clearly more agreeable with the observed concentrations than those from TPM. Over the entire study area, the mean SO2 exposure risk from EWPM was higher relative to TPM (1.00 vs. 0.91). The mean biases of the exposure risk values of the 10 virtual cases between EWPM and 'Geo-statistical EWPM' are much smaller than those between TPM and 'Geo-statistical TPM' (5.12 vs. 24.63). EWPM appears to portray individual exposure more accurately than TPM. The 'Geo-statistical EWPM' effectively augments the standard proximity model and makes it possible to predict individual risk in future exposure scenarios resulting in adverse health effects from environmental pollution.
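    As a minimal sketch of the emission-weighting idea (coordinates, emission rates, and the inverse-distance form are illustrative, not the paper's exact scheme):

        import numpy as np

        # hypothetical annual SO2 emission rates (tons/yr) and source locations (km)
        emissions = np.array([1200.0, 300.0, 800.0])
        sources = np.array([[0.0, 0.0], [5.0, 2.0], [1.0, 4.0]])

        def ewpm_risk(receptor, sources, emissions):
            # each source contributes its emission rate divided by distance,
            # so large emitters dominate nearby exposure estimates
            d = np.maximum(np.linalg.norm(sources - receptor, axis=1), 0.1)
            return np.sum(emissions / d)

        receptor = np.array([1.0, 1.0])
        print(ewpm_risk(receptor, sources, emissions))

    A traditional proximity model, by contrast, scores only nearness to the closest source and ignores how much that source actually emits.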

  11. Learning to Predict Combinatorial Structures

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar

    2009-12-01

    The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.

  12. Support Vector Machines for Differential Prediction

    PubMed Central

    Kuusisto, Finn; Santos Costa, Vitor; Nassif, Houssam; Burnside, Elizabeth; Page, David; Shavlik, Jude

    2015-01-01

    Machine learning is continually being applied to a growing set of fields, including the social sciences, business, and medicine. Some fields present problems that are not easily addressed using standard machine learning approaches and, in particular, there is growing interest in differential prediction. In this type of task we are interested in producing a classifier that specifically characterizes a subgroup of interest by maximizing the difference in predictive performance for some outcome between subgroups in a population. We discuss adapting maximum margin classifiers for differential prediction. We first introduce multiple approaches that do not affect the key properties of maximum margin classifiers, but which also do not directly attempt to optimize a standard measure of differential prediction. We next propose a model that directly optimizes a standard measure in this field, the uplift measure. We evaluate our models on real data from two medical applications and show excellent results. PMID:26158123
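    As a minimal post-hoc sketch of an uplift-style evaluation (one common formalization, not necessarily the exact measure optimized in the paper; data, subgroup labels, and model are synthetic): score all cases with one classifier and compare, at each targeting fraction, the observed positive rate between the two subgroups.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))
        group = rng.integers(0, 2, 1000)  # subgroup-of-interest indicator
        y = (X[:, 0] + 0.8 * group * X[:, 1] + rng.normal(0, 1, 1000) > 0).astype(int)

        clf = LogisticRegression().fit(np.c_[X, group], y)
        scores = clf.predict_proba(np.c_[X, group])[:, 1]

        def positive_rate_at(frac, mask):
            # observed positive rate among the top `frac` scored cases of a subgroup
            top = y[mask][np.argsort(-scores[mask])][: max(1, int(frac * mask.sum()))]
            return top.mean()

        for frac in (0.1, 0.2, 0.5):
            uplift = positive_rate_at(frac, group == 1) - positive_rate_at(frac, group == 0)
            print(f"top {frac:.0%}: uplift = {uplift:+.3f}")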

  14. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    NASA Astrophysics Data System (ADS)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improving seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of the different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors, to take advantage of the direct simulation of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provides little additional benefit.
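    The Bayesian model averaging step combines the candidate models' forecast densities, weighted by each model's posterior probability; in generic notation (not the paper's):

        p(y \mid D) = \sum_{k=1}^{K} p(y \mid M_k, D)\, p(M_k \mid D),

    where D is the historical data and M_k the candidate model using the k-th lagged climate index (or climate-model output) as predictor, so models that cross-validate well receive larger weights.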

  15. Probabilistic modeling of discourse-aware sentence processing.

    PubMed

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2013-07-01

    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.

  16. Concepts and tools for predictive modeling of microbial dynamics.

    PubMed

    Bernaerts, Kristel; Dens, Els; Vereecken, Karen; Geeraerd, Annemie H; Standaert, Arnout R; Devlieghere, Frank; Debevere, Johan; Van Impe, Jan F

    2004-09-01

    Description of microbial cell (population) behavior as influenced by dynamically changing environmental conditions intrinsically needs dynamic mathematical models. In the past, major effort has been put into the modeling of microbial growth and inactivation within a constant environment (static models). In the early 1990s, differential equation models (dynamic models) were introduced in the field of predictive microbiology. Here, we present a general dynamic model-building concept describing microbial evolution under dynamic conditions. Starting from an elementary model building block, the model structure can be gradually complexified to incorporate increasing numbers of influencing factors. Based on two case studies, the fundamentals of both macroscopic (population) and microscopic (individual) modeling approaches are revisited. These illustrations deal with the modeling of (i) microbial lag under variable temperature conditions and (ii) interspecies microbial interactions mediated by lactic acid production (product inhibition). Current and future research trends should address the need for (i) more specific measurements at the cell and/or population level, (ii) measurements under dynamic conditions, and (iii) more comprehensive (mechanistically inspired) model structures. In the context of quantitative microbial risk assessment, complexity of the mathematical model must be kept under control. An important challenge for the future is determination of a satisfactory trade-off between predictive power and manageability of predictive microbiology models.
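    A widely used elementary building block in this literature is the Baranyi-type growth model, shown here as representative context (the paper's own case-study equations are not reproduced):

        \frac{dN}{dt} = \mu_{\max}\,\frac{Q}{1+Q}\,N\left(1 - \frac{N}{N_{\max}}\right),
        \qquad
        \frac{dQ}{dt} = \mu_{\max}\,Q,

    where N is the cell density, N_{\max} the carrying capacity, \mu_{\max} the maximum specific growth rate, and Q a physiological state variable whose low initial value produces the lag phase; making \mu_{\max} a function of time-varying temperature turns this static building block into a dynamic model.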

  17. The Acoustic Analogy: A Powerful Tool in Aeroacoustics with Emphasis on Jet Noise Prediction

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Doty, Michael J.; Hunter, Craig A.

    2004-01-01

    The acoustic analogy introduced by Lighthill to study jet noise is now over 50 years old. In the present paper, Lighthill's acoustic analogy is revisited, together with a brief evaluation of the state of the art of the subject and an exploration of possible further improvements in jet noise prediction from analytical methods, computational fluid dynamics (CFD) predictions, and measurement techniques. Experimental Particle Image Velocimetry (PIV) data are used both to evaluate turbulent statistics from Reynolds-averaged Navier-Stokes (RANS) CFD and to propose correlation models for the Lighthill stress tensor. The NASA Langley Jet3D code is used to study the effect of these models on jet noise prediction. From the analytical investigation, a retarded-time correction is shown that improves, by approximately 8 dB, the over-prediction of aft-arc jet noise by Jet3D. In the experimental investigation, the PIV data agree well with the CFD mean flow predictions, with room for improvement in the Reynolds stress predictions. Initial modifications to the form of the Jet3D correlation model, suggested by the PIV data, showed no noticeable improvements in jet noise prediction.
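    For reference, Lighthill's analogy rearranges the exact flow equations into an inhomogeneous wave equation, so jet noise prediction reduces to modeling the stress tensor T_{ij}, the quantity targeted by the PIV-based correlation models above:

        \frac{\partial^2 \rho'}{\partial t^2} - c_0^2\,\nabla^2 \rho'
            = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
        \qquad
        T_{ij} = \rho u_i u_j + \left(p' - c_0^2 \rho'\right)\delta_{ij} - \tau_{ij},

    where \rho' and p' are the density and pressure fluctuations, c_0 the ambient sound speed, and \tau_{ij} the viscous stress.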

  18. Early prediction of extreme stratospheric polar vortex states based on causal precursors

    NASA Astrophysics Data System (ADS)

    Kretschmer, Marlene; Runge, Jakob; Coumou, Dim

    2017-08-01

    Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important to improve forecasts of winter weather including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r2 = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states for lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improving long-lead predictions.

  19. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.

  20. On the Connection Between One-and Two-Equation Models of Turbulence

    NASA Technical Reports Server (NTRS)

    Menter, F. R.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    A formalism will be presented that allows the transformation of two-equation eddy viscosity turbulence models into one-equation models. The transformation is based on an assumption that is widely accepted over a large range of boundary layer flows and that has been shown to actually improve predictions when incorporated into two-equation models of turbulence. Based on that assumption, a new one-equation turbulence model will be derived. The new model will be tested in great detail against a previously introduced one-equation model and against its parent two-equation model.

  1. Hierarchical modeling of molecular energies using a deep neural network

    NASA Astrophysics Data System (ADS)

    Lubbers, Nicholas; Smith, Justin S.; Barros, Kipton

    2018-06-01

    We introduce the Hierarchically Interacting Particle Neural Network (HIP-NN) to model molecular properties from datasets of quantum calculations. Inspired by a many-body expansion, HIP-NN decomposes properties, such as energy, as a sum over hierarchical terms. These terms are generated from a neural network—a composition of many nonlinear transformations—acting on a representation of the molecule. HIP-NN achieves state-of-the-art performance on a dataset of 131k ground-state organic molecules and predicts energies with a 0.26 kcal/mol mean absolute error. With minimal tuning, our model is also competitive on a dataset of molecular dynamics trajectories. In addition to enabling accurate energy predictions, the hierarchical structure of HIP-NN helps to identify regions of model uncertainty.

  2. Use of simulated satellite radiances from a mesoscale numerical model to understand kinematic and dynamic processes

    NASA Technical Reports Server (NTRS)

    Kalb, Michael; Robertson, Franklin; Jedlovec, Gary; Perkey, Donald

    1987-01-01

    Techniques by which mesoscale numerical weather prediction model output and radiative transfer codes are combined to simulate the radiance fields that a given passive temperature/moisture satellite sensor would see if viewing the evolving model atmosphere are introduced. The goals are to diagnose the dynamical atmospheric processes responsible for recurring patterns in observed satellite radiance fields, and to develop techniques to anticipate the ability of satellite sensor systems to depict atmospheric structures and provide information useful for numerical weather prediction (NWP). The concept of linking radiative transfer and dynamical NWP codes is demonstrated with time sequences of simulated radiance imagery in the 24 TIROS vertical sounder channels derived from model integrations for March 6, 1982.

  3. New approach to the calculation of pistachio powder hysteresis

    NASA Astrophysics Data System (ADS)

    Tavakolipour, Hamid; Mokhtarian, Mohsen

    2016-04-01

    Moisture sorption isotherms for pistachio powder were determined by the gravimetric method at temperatures of 15, 25, 35 and 40°C. Selected mathematical models were tested to determine the most suitable model for predicting the isotherm curve. The results show that the Caurie model had the most satisfactory goodness of fit. A further purpose of this research was to introduce a new methodology for determining the amount of hysteresis at different temperatures, using the best predictive isotherm model and a definite-integration method. The results demonstrated that maximum hysteresis is associated with the multi-layer water region (water activity in the range 0.2-0.6), which corresponds to the capillary condensation region, and that this phenomenon decreases with increasing temperature.
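    As a minimal sketch of the definite-integration idea (the fitted curves below are hypothetical exponential stand-ins, not the Caurie fits from the paper): quantify hysteresis as the area between the desorption and adsorption branches of the fitted isotherm over the water-activity range of interest.

        import numpy as np
        from scipy.integrate import quad

        def isotherm(aw, a, b):
            # hypothetical fitted moisture content X(aw); stands in for the Caurie fit
            return a * np.exp(b * aw)

        ads = dict(a=0.020, b=3.1)  # illustrative adsorption-branch parameters
        des = dict(a=0.025, b=3.0)  # illustrative desorption-branch parameters

        area, _ = quad(lambda aw: isotherm(aw, **des) - isotherm(aw, **ads), 0.2, 0.6)
        print(f"hysteresis area over aw in [0.2, 0.6]: {area:.4f}")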

  4. IVUS-Based Computational Modeling and Planar Biaxial Artery Material Properties for Human Coronary Plaque Vulnerability Assessment

    PubMed Central

    Liu, Haofei; Cai, Mingchao; Yang, Chun; Zheng, Jie; Bach, Richard; Kural, Mehmet H.; Billiar, Kristen L.; Muccigrosso, David; Lu, Dongsi; Tang, Dalin

    2012-01-01

    Image-based computational modeling has been introduced for vulnerable atherosclerotic plaques to identify critical mechanical conditions which may be used for better plaque assessment and rupture predictions. In vivo patient-specific coronary plaque models are lagging due to limitations on non-invasive image resolution, flow data, and vessel material properties. A framework is proposed to combine intravascular ultrasound (IVUS) imaging, biaxial mechanical testing and computational modeling with fluid-structure interactions and anisotropic material properties to acquire better and more complete plaque data and make more accurate plaque vulnerability assessment and predictions. Impact of pre-shrink-stretch process, vessel curvature and high blood pressure on stress, strain, flow velocity and flow maximum principal shear stress was investigated. PMID:22428362

  5. Path Loss Prediction Over the Lunar Surface Utilizing a Modified Longley-Rice Irregular Terrain Model

    NASA Technical Reports Server (NTRS)

    Foore, Larry; Ida, Nathan

    2007-01-01

    This study introduces the use of a modified Longley-Rice irregular terrain model and digital elevation data representative of an analogue lunar site for the prediction of RF path loss over the lunar surface. The results are validated by theoretical models and past Apollo studies. The model is used to approximate the path loss deviation from theoretical attenuation over a reflecting sphere. Analysis of the simulation results provides statistics on the fade depths for frequencies of interest, and correspondingly a method for determining the maximum range of communications for various coverage confidence intervals. Communication system engineers and mission planners are provided a link margin and path loss policy for communication frequencies of interest.

  6. A semi-supervised learning approach for RNA secondary structure prediction.

    PubMed

    Yonemoto, Haruka; Asai, Kiyoshi; Hamada, Michiaki

    2015-08-01

    RNA secondary structure prediction is a key technology in RNA bioinformatics. Most algorithms for RNA secondary structure prediction use probabilistic models, in which the model parameters are trained with reliable RNA secondary structures. Because of the difficulty of determining RNA secondary structures by experimental procedures, such as NMR or X-ray crystal structural analyses, there are still many RNA sequences that could be useful for training whose secondary structures have not been experimentally determined. In this paper, we introduce a novel semi-supervised learning approach for training parameters in a probabilistic model of RNA secondary structures in which we employ not only RNA sequences with annotated secondary structures but also ones with unknown secondary structures. Our model is based on a hybrid of generative (stochastic context-free grammars) and discriminative models (conditional random fields) that has been successfully applied to natural language processing. Computational experiments indicate that the accuracy of secondary structure prediction is improved by incorporating RNA sequences with unknown secondary structures into training. To our knowledge, this is the first study of a semi-supervised learning approach for RNA secondary structure prediction. This technique will be useful when the number of reliable structures is limited. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A framework for predicting impacts on ecosystem services ...

    EPA Pesticide Factsheets

    Protection of ecosystem services is increasingly emphasized as a risk-assessment goal, but there are wide gaps between current ecological risk-assessment endpoints and potential effects on services provided by ecosystems. The authors present a framework that links common ecotoxicological endpoints to chemical impacts on populations and communities and the ecosystem services that they provide. This framework builds on considerable advances in mechanistic effects models designed to span multiple levels of biological organization and account for various types of biological interactions and feedbacks. For illustration, the authors introduce 2 case studies that employ well-developed and validated mechanistic effects models: the inSTREAM individual-based model for fish populations and the AQUATOX ecosystem model. They also show how dynamic energy budget theory can provide a common currency for interpreting organism-level toxicity. They suggest that a framework based on mechanistic models that predict impacts on ecosystem services resulting from chemical exposure, combined with economic valuation, can provide a useful approach for informing environmental management. The authors highlight the potential benefits of using this framework as well as the challenges that will need to be addressed in future work. The framework introduced here represents an ongoing initiative supported by the National Institute of Mathematical and Biological Synthesis (NIMBioS; http://www.nimbi

  8. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    NASA Astrophysics Data System (ADS)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (see the figure caption below). Monte Carlo simulations with these models are then used to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit time; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure caption: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.

  9. Fitting the low-frequency Raman spectra to boson peak models: glycerol, triacetin and polystyrene

    NASA Astrophysics Data System (ADS)

    Kirillov, S. A.; Perova, T. S.; Faurskov Nielsen, O.; Praestgaard, E.; Rasmussen, U.; Kolomiyets, T. M.; Voyiatzis, G. A.; Anastasiadis, S. H.

    1999-04-01

    A computational approach was elaborated to explicitly account for the Rayleigh line wing, the boson peak, and vibrational contributions to the low-frequency Raman spectra of amorphous solids and viscous liquids. It was shown that the low-frequency Raman spectra of glycerol and polystyrene consist of a Rayleigh contribution of Lorentzian form and a boson peak whose profile follows the predictions of the theory of Martin and Brenig in the version of Malinovsky and Sokolov. In the case of triacetin, the boson peaks decay faster on their high-frequency side than this theory predicts. Their form can be successfully modeled with a newly introduced empirical function intermediate between the Martin-Brenig and Malinovsky-Sokolov predictions.

  10. Boosting Learning Algorithm for Stock Price Forecasting

    NASA Astrophysics Data System (ADS)

    Wang, Chengzhang; Bai, Xiaoming

    2018-03-01

    To tackle the complexity and uncertainty of stock market behavior, many studies have introduced machine learning algorithms to forecast stock prices. The ANN (artificial neural network) is one of the most successful and promising applications. We propose a boosting-ANN model in this paper to predict the stock closing price. On the basis of boosting theory, multiple weak prediction machines, i.e. ANNs, are assembled to build a stronger predictor, i.e. the boosting-ANN model. New error criteria for the weak learning machines and rules for updating the weights are adopted in this study. We select technical factors from financial markets as forecasting input variables. The final results demonstrate that the boosting-ANN model works better than other models for stock price forecasting.

  11. One-Dimensional Modelling of Internal Ballistics

    NASA Astrophysics Data System (ADS)

    Monreal-González, G.; Otón-Martínez, R. A.; Velasco, F. J. S.; García-Cascáles, J. R.; Ramírez-Fernández, F. J.

    2017-10-01

    A one-dimensional model is introduced in this paper for problems of internal ballistics involving solid propellant combustion. First, the work presents the physical approach and the equations adopted. Closure relationships accounting for the physical phenomena taking place during combustion (interfacial friction, interfacial heat transfer, combustion) are discussed in depth. Secondly, the numerical method proposed is presented. Finally, numerical results provided by this code (UXGun) are compared with results of experimental tests and with the output of a well-known zero-dimensional code. The model provides successful results in firing tests of artillery guns, predicting with good accuracy the maximum pressure in the chamber and the muzzle velocity, which highlights its capabilities as a prediction/design tool for internal ballistics.

  12. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    NASA Astrophysics Data System (ADS)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to a new local modeling technique called “Just-In-Time (JIT) modeling”. To apply JIT modeling online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of the manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
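    As a minimal sketch of the just-in-time (local) modeling idea behind LOM (database layout, neighborhood size, and the plant function are illustrative; LOM's stepwise selection and quantization are not reproduced): for each query, retrieve the nearest stored samples and fit a small local model on the spot.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X_db = rng.uniform(-1, 1, size=(5000, 3))          # stored plant inputs
        y_db = np.sin(X_db[:, 0]) + 0.5 * X_db[:, 1] ** 2  # stored plant outputs

        index = NearestNeighbors(n_neighbors=50).fit(X_db)

        def jit_predict(x_query):
            # build a throwaway local linear model from the query's neighborhood
            _, idx = index.kneighbors(x_query.reshape(1, -1))
            local = LinearRegression().fit(X_db[idx[0]], y_db[idx[0]])
            return local.predict(x_query.reshape(1, -1))[0]

        print(jit_predict(np.array([0.2, -0.4, 0.9])))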

  13. Use of machine learning methods to reduce predictive error of groundwater models.

    PubMed

    Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal

    2014-01-01

    Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameters, and data lead to both random and systematic error, even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies: the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterizations, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of structure in the error of the physically-based model. © 2013, National GroundWater Association.
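    As a minimal sketch of the complementary-DDM idea (arrays and the single feature are illustrative; the paper also uses instance-based weighting and cluster preprocessing, omitted here): train a support vector regression on the residuals of the physically-based model, then add the learned correction to new predictions.

        import numpy as np
        from sklearn.svm import SVR

        # observed heads, physical-model predictions, and an input feature
        # (e.g. pumping stress) at calibration times -- all illustrative
        h_obs = np.array([10.2, 10.5, 9.8, 10.1, 9.6, 10.4])
        h_model = np.array([10.0, 10.8, 9.5, 10.3, 9.9, 10.1])
        features = np.array([[0.10], [0.40], [0.20], [0.30], [0.50], [0.20]])

        ddm = SVR(kernel="rbf", C=10.0).fit(features, h_obs - h_model)

        # corrected prediction = physical model output + predicted error
        h_new_model, x_new = 10.2, np.array([[0.25]])
        print(h_new_model + ddm.predict(x_new)[0])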

  14. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    PubMed

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

    In this article, we introduce an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline silicon. The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. The order of solar power output as influenced by external conditions, from smallest to largest, is: multi-, mono-, and amor-crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For the multi- and amorphous crystalline cells, three or four hidden-layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with eight hidden-layer units.

  15. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xavier, MA; Trimboli, MS

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models. (C) 2015 Elsevier B.V. All rights reserved.
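    As generic context (notation, weights and constraints are not the paper's), a constrained MPC charging problem with the direct feed-through term D u_k that models the ohmic resistance takes the form:

        \min_{u_0,\dots,u_{N-1}} \sum_{k=0}^{N-1} \left\| y_k - r_k \right\|_Q^2 + \left\| \Delta u_k \right\|_R^2
        \quad \text{s.t.} \quad
        x_{k+1} = A x_k + B u_k, \quad
        y_k = C x_k + D u_k, \quad
        u_{\min} \le u_k \le u_{\max}, \quad
        y_{\min} \le y_k \le y_{\max},

    where u_k is the charging current, y_k the terminal voltage, and r_k the reference; without the D term, the near-instantaneous voltage jump across the ohmic resistance could not be constrained within the horizon.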

  16. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    PubMed Central

    Li, Xiaoqing; Wang, Yu

    2018-01-01

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, the autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, with the mean absolute error increasing only from 3.402 mm to 5.847 mm as the prediction step grows; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model yields superior prediction accuracy because it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring systems based on sensor data. PMID:29351254
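    As a minimal sketch of the ARIMA-GARCH portion of the pipeline (the Kalman pre-filtering step is omitted, the series is synthetic, and the orders are illustrative), using statsmodels and the arch package:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from arch import arch_model

        rng = np.random.default_rng(1)
        deform = np.cumsum(rng.normal(0, 0.5, 500))  # synthetic deformation series (mm)

        arima = ARIMA(deform, order=(2, 1, 1)).fit()              # linear recursive part
        garch = arch_model(arima.resid, vol="GARCH", p=1, q=1).fit(disp="off")

        mean_fc = arima.forecast(steps=5)                         # ARIMA mean forecast
        var_fc = garch.forecast(horizon=5).variance.values[-1]    # GARCH variance forecast
        print(mean_fc, var_fc)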

  17. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    PubMed

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, with the mean absolute error increasing only from 3.402 mm to 5.847 mm as the prediction step increases; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring systems based on sensor data.
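
    The three-stage pipeline is straightforward to prototype. The sketch below chains a scalar Kalman filter, an ARIMA model, and a GARCH(1,1) on the residuals; the noise variances, model orders, and the statsmodels/arch package choices are our assumptions, and the series is synthetic rather than GNSS data.

```python
# A minimal sketch (not the paper's implementation) of the three-stage
# Kalman -> ARIMA -> GARCH pipeline on a univariate deformation-like series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(1)
raw = np.cumsum(rng.normal(0, 0.5, 400)) + rng.normal(0, 2.0, 400)

# 1) Scalar Kalman filter (local-level model) to denoise the raw data.
q, r, x, p = 0.25, 4.0, raw[0], 1.0          # assumed noise variances
denoised = []
for z in raw:
    p += q
    k = p / (p + r)
    x += k * (z - x)
    p *= (1 - k)
    denoised.append(x)
denoised = np.asarray(denoised)

# 2) Linear recursive ARIMA model on the denoised deformation.
arima = ARIMA(denoised, order=(1, 1, 1)).fit()

# 3) GARCH(1,1) on the ARIMA residuals to capture heteroscedasticity.
garch = arch_model(arima.resid, vol="GARCH", p=1, q=1).fit(disp="off")

print(arima.forecast(5))                             # 5-step mean prediction
print(garch.forecast(horizon=5).variance.iloc[-1])   # 5-step variance
```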

  18. Simulation of Chronic Liver Injury Due to Environmental Chemicals

    EPA Science Inventory

    US EPA Virtual Liver (v-Liver) is a cellular systems model of hepatic tissues to predict the effects of chronic exposure to chemicals. Tens of thousands of chemicals are currently in commerce and hundreds more are introduced every year. Few of these chemicals have been adequate...

  19. Genetic analysis across differential spatial scales reveals multiple dispersal mechanisms for the invasive hydrozoan Cordylophora in the Great Lakes

    EPA Science Inventory

    Understanding patterns of post-establishment spread by invasive species is critically important for the design of effective management strategies and the development of appropriate theoretical models predicting spatial expansion of introduced populations. Here we explore genetic ...

  20. Mortality risk prediction in burn injury: Comparison of logistic regression with machine learning approaches.

    PubMed

    Stylianou, Neophytos; Akbarov, Artur; Kontopantelis, Evangelos; Buchan, Iain; Dunn, Ken W

    2015-08-01

    Predicting mortality from burn injury has traditionally employed logistic regression models. Alternative machine learning methods have been introduced in some areas of clinical prediction as the necessary software and computational facilities have become accessible. Here we compare logistic regression and machine learning predictions of mortality from burn injury. An established logistic mortality model was compared to machine learning methods (artificial neural network, support vector machine, random forests and naïve Bayes) using a population-based (England & Wales) case-cohort registry. Predictive performance was evaluated using the area under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, and Youden's index. All methods had comparable discriminatory abilities and similar sensitivities, specificities and positive predictive values. Although some machine learning methods performed marginally better than logistic regression, the differences were seldom statistically significant and clinically insubstantial. Random forests were marginally better for high positive predictive value and reasonable sensitivity. Neural networks yielded slightly better prediction overall. Logistic regression gives an optimal mix of performance and interpretability. The established logistic regression model of burn mortality performs well against more complex alternatives. Clinical prediction with a small set of strong, stable, independent predictors is unlikely to gain much from machine learning outside specialist research contexts. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
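
    A comparison of this kind can be set up in a few lines. The sketch below evaluates logistic regression against one machine learning alternative on the metrics the study reports; the synthetic, class-imbalanced data stands in for the registry, which is not public here.

```python
# A minimal sketch (synthetic data, not the registry) comparing logistic
# regression with a machine learning method on AUC, sensitivity,
# specificity, PPV and Youden's index.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(random_state=0))]:
    prob = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, prob >= 0.5).ravel()
    sens, spec, ppv = tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)
    print(f"{name}: AUC={roc_auc_score(y_te, prob):.3f} sens={sens:.2f} "
          f"spec={spec:.2f} PPV={ppv:.2f} Youden={sens + spec - 1:.2f}")
```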

  1. Performance comparison of the Prophecy (forecasting) Algorithm in FFT form for unseen feature and time-series prediction

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger; Handley, James

    2013-06-01

    We introduce a generalized numerical prediction and forecasting algorithm. We have previously published it for malware byte sequence feature prediction and generalized distribution modeling for disparate test article analysis. We show how non-trivial non-periodic extrapolation of a numerical sequence (forecast and backcast) from the starting data is possible. Our ancestor-progeny prediction can yield new options for evolutionary programming. Our equations enable analytical integrals and derivatives to any order. Interpolation is controllable from smooth continuous to fractal structure estimation. We show how our generalized trigonometric polynomial can be derived using a Fourier transform.
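
    The sketch below illustrates the general idea of fitting a trigonometric polynomial to a numeric sequence by least squares and then evaluating it outside the fitted range to forecast and backcast. It is a generic illustration of that idea only, not the authors' Prophecy algorithm; the harmonic count and test signal are assumptions.

```python
# A minimal sketch: least-squares fit of a generalized trigonometric
# polynomial, then non-trivial extrapolation beyond the data.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100, dtype=float)
y = np.sin(0.21 * t) + 0.5 * np.cos(0.07 * t) + rng.normal(0, 0.05, t.size)

K = 8                                     # number of harmonics (assumed)
freqs = np.arange(1, K + 1) * np.pi / t.size

def design(tt):
    """Design matrix: constant term plus cos/sin pairs at each frequency."""
    cols = [np.ones_like(tt)]
    for w in freqs:
        cols += [np.cos(w * tt), np.sin(w * tt)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
t_future = np.arange(100, 120, dtype=float)   # forecast beyond the data
print(design(t_future) @ coef)
```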

  2. Modelling safety of multistate systems with ageing components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna

    An innovative approach to the safety analysis of multistate ageing systems is presented. Basic notions of ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems' lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multistate system risk function and the moment of the system exceeding the critical safety state are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system are presented as well.

  3. Modeling Unproductive Behavior in Online Homework in Terms of Latent Student Traits: An Approach Based on Item Response Theory

    NASA Astrophysics Data System (ADS)

    Gönülateş, Emre; Kortemeyer, Gerd

    2017-04-01

    Homework is an important component of most physics courses. One of the functions it serves is to provide meaningful formative assessment in preparation for examinations. However, correlations between homework and examination scores tend to be low, likely due to unproductive student behavior such as copying and random guessing of answers. In this study, we attempt to model these two counterproductive learner behaviors within the framework of Item Response Theory in order to provide an ability measurement that strongly correlates with examination scores. We find that introducing additional item parameters leads to worse predictions of examination grades, while introducing additional learner traits is a more promising approach.
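
    The abstract does not give the paper's exact parameterization, but the modeling idea can be sketched: extend a standard 2PL item response function with a per-learner trait rather than a per-item parameter. Below, a learner-level guessing propensity g lifts the lower asymptote; treating g as a learner trait (instead of the 3PL's item-level guessing parameter) is our illustrative assumption.

```python
# A minimal sketch of adding a learner trait (guessing propensity g) to a
# 2PL item response function. The paper's parameterization may differ.
import numpy as np

def p_correct(theta, g, a, b):
    """Probability of a correct response for ability theta, learner
    guessing trait g, item discrimination a and item difficulty b."""
    p_2pl = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return g + (1.0 - g) * p_2pl   # random guessing lifts the lower asymptote

# Example: a guessing-prone learner vs. a non-guesser on the same item.
print(p_correct(theta=-1.0, g=0.25, a=1.2, b=0.0))
print(p_correct(theta=-1.0, g=0.0,  a=1.2, b=0.0))
```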

  4. Soft-Matter Resistive Sensor for Measuring Shear and Pressure Stresses

    NASA Astrophysics Data System (ADS)

    Tepayotl-Ramirez, Daniel; Roberts, Peter; Majidi, Carmel

    2013-03-01

    Building on emerging paradigms in soft-matter electronics, we introduce liquid-phase electronic sensors that simultaneously measure elastic pressure and shear deformation. The sensors are composed of a sheet of elastomer that is embedded with fluidic channels containing eutectic Gallium-Indium (EGaIn), a metal alloy that is liquid at room temperature. Applying pressure or shear traction to the surface of the surrounding elastomer causes the elastomer to elastically deform and changes the geometry and electrical properties of the embedded liquid-phase circuit elements. We introduce analytic models that predict the electrical response of the sensor to prescribed surface tractions. These models are validated with both Finite Element Analysis (FEA) and experimental measurements.

  5. Unsaturated consolidation theory for the prediction of long-term municipal solid waste landfill settlement.

    PubMed

    Liu, Chia-Nan; Chen, Rong-Her; Chen, Kuo-Sheng

    2006-02-01

    The understanding of long-term landfill settlement is important for landfill design and rehabilitation. However, suitable models that can consider both the mechanical and biodecomposition mechanisms in predicting long-term landfill settlement are generally not available. In this paper, a model based on unsaturated consolidation theory that considers the biodegradation process is introduced to simulate landfill settlement behaviour. The details of the problem formulation and the derivation of the solution for the formulated differential equation of gas pressure are presented. A step-by-step analytical procedure employing this approach for estimating settlement is proposed. The proposed model generally captures the typical features of short-term and long-term settlement behaviour, and yields results that are comparable with the field measurements.

  6. Modeling Clinical Outcomes in Prostate Cancer: Application and Validation of the Discrete Event Simulation Approach.

    PubMed

    Pan, Feng; Reifsnider, Odette; Zheng, Ying; Proskorovsky, Irina; Li, Tracy; He, Jianming; Sorensen, Sonja V

    2018-04-01

    The treatment landscape in prostate cancer has changed dramatically with the emergence of new medicines in the past few years. The traditional survival partition model (SPM) cannot accurately predict long-term clinical outcomes because it is limited in its ability to capture the key consequences associated with this changing treatment paradigm. The objective of this study was to introduce and validate a discrete-event simulation (DES) model for prostate cancer. A DES model was developed to simulate overall survival (OS) and other clinical outcomes based on patient characteristics, treatment received, and disease progression history. We tested and validated this model with clinical trial data from the abiraterone acetate phase III trial (COU-AA-302). The model was constructed with interim data (55% death) and validated with the final data (96% death). Predicted OS values were also compared with those from the SPM. The DES model's predicted time to chemotherapy and OS are highly consistent with the final observed data. The model accurately predicts the OS hazard ratio from the final data cut (predicted: 0.74; 95% confidence interval [CI] 0.64-0.85 and final actual: 0.74; 95% CI 0.6-0.88). The log-rank test to compare the observed and predicted OS curves indicated no statistically significant difference between observed and predicted curves. However, the predictions from the SPM based on interim data deviated significantly from the final data. Our study showed that a DES model with properly developed risk equations presents considerable improvements over the more traditional SPM in flexibility and predictive accuracy of long-term outcomes. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. Combining a generic process-based productivity model classification method to predict the presence and absence species in the Pacific Northwest, U.S.A

    Treesearch

    Nicholas C. Coops; Richard H. Waring; Todd A. Schroeder

    2009-01-01

    Although long-lived tree species experience considerable environmental variation over their life spans, their geographical distributions reflect sensitivity mainly to mean monthly climatic conditions. We introduce an approach that incorporates a physiologically based growth model to illustrate how a half-dozen tree species differ in their responses to monthly variation...

  8. Quality Control Analysis of Selected Aspects of Programs Administered by the Bureau of Student Financial Assistance. Task 1 and Quality Control Sample; Error-Prone Modeling Analysis Plan.

    ERIC Educational Resources Information Center

    Saavedra, Pedro; And Others

    Parameters and procedures for developing an error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications are introduced. Specifications to adapt these general parameters to secondary data analysis of the Validation, Edits, and Applications Processing Systems…

  9. Modelling and predicting the simultaneous growth of Escherichia coli and lactic acid bacteria in milk.

    PubMed

    Ačai, P; Valík, L'; Medved'ová, A; Rosskopf, F

    2016-09-01

    Modelling and predicting the simultaneous competitive growth of Escherichia coli and a starter culture of lactic acid bacteria (Fresco 1010, Chr. Hansen, Hørsholm, Denmark) was studied in milk at different temperatures and Fresco inoculum concentrations. The lactic acid bacteria (LAB) were able to induce an early stationary state in E. coli. The developed model described and tested the growth inhibition of E. coli (with an initial inoculum concentration of 10^3 CFU/mL) once the LAB reached maximum density, under different temperature conditions (ranging from 12 °C to 30 °C) and for various inoculum sizes of LAB (ranging from approximately 10^3 to 10^7 CFU/mL). The predictive ability of the microbial competition model (the Baranyi and Roberts model coupled with the Gimenez and Dalgaard model) was first assessed using only parameters estimated from the individual growth of E. coli and the LAB, and then with competition coefficients evaluated from the co-culture growth of E. coli and LAB in milk. Both the results and their statistical indices showed that the model incorporating average values of the competition coefficients improved the prediction of E. coli behaviour in co-culture with LAB. © The Author(s) 2015.
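
    The qualitative behaviour, LAB reaching maximum density and forcing E. coli into an early stationary state, can be reproduced with a much simpler competition model. The sketch below uses a logistic/Jameson-style stand-in rather than the full Baranyi-Roberts/Gimenez-Dalgaard formulation, and all rates and densities are assumed values.

```python
# A minimal sketch of two-population competition in which LAB growth halts
# E. coli growth as LAB approach their maximum density. Simplified stand-in,
# not the paper's model.
import numpy as np
from scipy.integrate import solve_ivp

mu_e, mu_l = 0.30, 0.45        # max specific growth rates, 1/h (assumed)
Nmax_e, Nmax_l = 1e9, 1e9      # maximum densities, CFU/mL (assumed)
alpha = 1.0                    # competition coefficient of LAB on E. coli

def rhs(t, N):
    Ne, Nl = N
    dNl = mu_l * Nl * (1 - Nl / Nmax_l)                        # LAB logistic
    dNe = mu_e * Ne * (1 - Ne / Nmax_e - alpha * Nl / Nmax_l)  # inhibited
    return [dNe, dNl]

sol = solve_ivp(rhs, (0, 48), [1e3, 1e5])  # E. coli 10^3, LAB 10^5 CFU/mL
Ne, Nl = sol.y[:, -1]
print(f"after 48 h: E. coli ~ {Ne:.2e} CFU/mL, LAB ~ {Nl:.2e} CFU/mL")
```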

  10. A deep auto-encoder model for gene expression prediction.

    PubMed

    Xie, Rui; Wen, Jia; Quitadamo, Andrew; Cheng, Jianlin; Shi, Xinghua

    2017-11-17

    Gene expression is a key intermediate level through which genotypes lead to a particular trait. Gene expression is affected by various factors, including the genotypes of genetic variants. With the aim of delineating the genetic impact on gene expression, we built a deep auto-encoder model to assess how well genetic variants contribute to gene expression changes. This new deep learning model is a regression-based predictive model based on the MultiLayer Perceptron and Stacked Denoising Auto-encoder (MLP-SAE). The model is trained using a stacked denoising auto-encoder for feature selection and a multilayer perceptron framework for backpropagation. We further improve the model by introducing dropout to prevent overfitting and improve performance. To demonstrate the usage of this model, we apply MLP-SAE to a real genomic dataset with genotypes and gene expression profiles measured in yeast. Our results show that the MLP-SAE model with dropout outperforms other models, including Lasso, Random Forests, and the MLP-SAE model without dropout. Using the MLP-SAE model with dropout, we show that gene expression quantifications predicted by the model solely from genotypes align well with true gene expression patterns. We provide a deep auto-encoder model for predicting gene expression from SNP genotypes. This study demonstrates that deep learning is appropriate for tackling another genomic problem, i.e., building predictive models to understand genotypes' contribution to gene expression. With the emerging availability of richer genomic data, we anticipate that deep learning models will play a bigger role in modeling and interpreting genomics.
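
    The two ingredients the abstract names, denoising-autoencoder pretraining and a dropout-regularized regressor, are easy to prototype. The PyTorch sketch below is our own minimal rendering, not the released MLP-SAE code; the genotype matrix, layer sizes, and corruption rate are assumptions.

```python
# A minimal sketch: denoising-autoencoder pretraining on genotype vectors,
# then an MLP regressor with dropout predicting expression from the encoding.
import torch
import torch.nn as nn

n_snps, n_genes, hidden = 200, 10, 64
X = torch.randint(0, 3, (500, n_snps)).float()  # hypothetical 0/1/2 genotypes
Y = torch.randn(500, n_genes)                   # hypothetical expression

enc = nn.Sequential(nn.Linear(n_snps, hidden), nn.ReLU())
dec = nn.Linear(hidden, n_snps)

# 1) Denoising pretraining: reconstruct clean genotypes from corrupted input.
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
for _ in range(200):
    noisy = X * (torch.rand_like(X) > 0.2)      # mask 20% of entries
    loss = nn.functional.mse_loss(dec(enc(noisy)), X)
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Supervised regression head with dropout on the pretrained encoder.
head = nn.Sequential(nn.Dropout(0.5), nn.Linear(hidden, n_genes))
opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(head(enc(X)), Y)
    opt.zero_grad(); loss.backward(); opt.step()
print("final training MSE:", float(loss))
```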

  11. Understanding the origin of the solar cyclic activity for an improved earth climate prediction

    NASA Astrophysics Data System (ADS)

    Turck-Chièze, Sylvaine; Lambert, Pascal

    This review is dedicated to the processes that could explain the origin of the grand extrema of solar activity, with the aim of reaching a more suitable estimate and prediction of the temporal solar variability and its real impact on Earth climate models. The development of this new field is stimulated by the SoHO helioseismic measurements and by recent solar modelling improvements which aim to describe the dynamical processes from the core to the surface. We first recall assumptions on the potential different solar variabilities. Then, we introduce stellar seismology and summarize the main SoHO results which are relevant for this field. Finally, we mention the dynamical processes which are presently introduced in new solar models. We believe that knowledge of two important elements, (1) the magnetic field interplay between the radiative zone and the convective zone and (2) the role of gravity waves, would allow us to understand the origin of the grand minima and maxima observed during the last millennium. Complementary observables such as acoustic and gravity modes, radius, and spectral irradiance from far UV to visible, in parallel with the development of 1D-2D-3D simulations, will improve this field. PICARD, SDO, and DynaMICCS are key projects for a prediction of the next century's variability. Helioseismic indicators constitute the first necessary information to properly describe the Sun-Earth climatic connection.

  12. Toward a theory of topopatric speciation: The role of genetic assortative mating

    NASA Astrophysics Data System (ADS)

    Schneider, David M.; do Carmo, Eduardo; Martins, Ayana B.; de Aguiar, Marcus A. M.

    2014-09-01

    We discuss a minimalist model of assortative mating for sexually reproducing haploid individuals with two biallelic loci. Assortativeness is introduced in the model by preventing mating between individuals whose alleles differ at both loci. Using methods of dynamical systems and population genetics we provide a full description of the evolution of the system for the case of very large populations. We derive the equations governing the evolution of haplotype frequencies and study the equilibrium solutions, stability, and speed of convergence to equilibrium. We find a constant of motion which allows us to introduce a geometrical construction that makes it straightforward to predict the fate of initial conditions. Finally, we discuss the consequences of this class of assortative mating models, including their possible extensions and implications for sympatric and topopatric speciation.

  13. Hyperbolastic growth models: theory and application

    PubMed Central

    Tabatabai, Mohammad; Williams, David Keith; Bursac, Zoran

    2005-01-01

    Background Mathematical models describing growth kinetics are very important for predicting many biological phenomena such as tumor volume, speed of disease progression, and determination of an optimal radiation and/or chemotherapy schedule. Growth models such as logistic, Gompertz, Richards, and Weibull have been extensively studied and applied to a wide range of medical and biological studies. We introduce a class of three and four parameter models called "hyperbolastic models" for accurately predicting and analyzing self-limited growth behavior that occurs e.g. in tumors. To illustrate the application and utility of these models and to gain a more complete understanding of them, we apply them to two sets of data considered in previously published literature. Results The results indicate that volumetric tumor growth follows the principle of hyperbolastic growth model type III, and in both applications at least one of the newly proposed models provides a better fit to the data than the classical models used for comparison. Conclusion We have developed a new family of growth models that predict the volumetric growth behavior of multicellular tumor spheroids with a high degree of accuracy. We strongly believe that the family of hyperbolastic models can be a valuable predictive tool in many areas of biomedical and epidemiological research such as cancer or stem cell growth and infectious disease outbreaks. PMID:15799781

  14. Topographies and dynamics on multidimensional potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Ball, Keith Douglas

    The stochastic master equation is a valuable tool for elucidating potential energy surface (PES) details that govern structural relaxation in clusters, bulk systems, and protein folding. This work develops a comprehensive framework for studying non-equilibrium relaxation dynamics using the master equation. Since our master equations depend upon accurate partition function models for use in Rice-Ramsperger-Kassel-Marcus (RRKM) transition state theory, this work introduces several such models employing various harmonic and anharmonic approximations and compares their predicted equilibrium population distributions with those determined from molecular dynamics. This comparison is performed for the fully-delineated surfaces (KCl)5 and Ar9 to evaluate model performance for potential surfaces with long- and short-range interactions, respectively. For each system, several models perform better than a simple harmonic approximation. While no model gives acceptable results for all minima, and optimal modeling strategies differ for (KCl)5 and Ar9, a particular one-parameter model gives the best agreement with simulation for both systems. We then construct master equations from these models and compare their isothermal relaxation predictions for (KCl)5 and Ar9 with molecular dynamics simulations. This is the first comprehensive test of its kind of the kinetic performance of partition function models. Our results show that accurate modeling of transition-state partition functions is more important for (KCl)5 than for Ar9 in reproducing simulation results, due to a marked stiffening anharmonicity in the transition-state normal modes of (KCl)5. For both systems, several models yield qualitative agreement with simulation over a large temperature range. To examine the robustness of the master equation when applied to larger systems, for which full topographical descriptions would be either impossible or infeasible, we compute relaxation predictions for Ar11 using a master equation constructed from data representing the full PES, and compare these predictions to those of reduced master equations based on statistical samples of the full PES. We introduce a sampling method which generates random, Boltzmann-weighted, energetically 'downhill' sequences. The study reveals that, at moderate temperatures, the slowest relaxation timescale converges as the number of sequences in a sample grows to ~1000. Furthermore, the asymptotic timescale is comparable to the full-PES value.

  15. The design, analysis and experimental evaluation of an elastic model wing

    NASA Technical Reports Server (NTRS)

    Cavin, R. K., III; Thisayakorn, C.

    1974-01-01

    An elastic orbiter model was developed to evaluate the effectiveness of aeroelasticity computer programs. The elasticity properties were introduced by constructing beam-like straight wings for the wind tunnel model. A standard influence coefficient mathematical model was used to estimate aeroelastic effects analytically. In general, good agreement was obtained between the empirical and analytical estimates of the deformed shape. However, in the static aeroelasticity case, it was found that the physical wing exhibited less bending and more twist than was predicted by theory.

  16. Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.

    PubMed

    Kaplan, Adam; Lock, Eric F

    2017-01-01

    Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, for prediction of differing response types. We conduct simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.

  17. Can We Predict Individual Combined Benefit and Harm of Therapy? Warfarin Therapy for Atrial Fibrillation as a Test Case

    PubMed Central

    Li, Guowei; Thabane, Lehana; Delate, Thomas; Witt, Daniel M.; Levine, Mitchell A. H.; Cheng, Ji; Holbrook, Anne

    2016-01-01

    Objectives To construct and validate a prediction model for individual combined benefit and harm outcomes (stroke with no major bleeding, major bleeding with no stroke, neither event, or both) in patients with atrial fibrillation (AF) with and without warfarin therapy. Methods Using the Kaiser Permanente Colorado databases, we included patients newly diagnosed with AF between January 1, 2005 and December 31, 2012 for model construction and validation. The primary outcome was a prediction model of the composite of stroke or major bleeding using polytomous logistic regression (PLR) modelling. The secondary outcome was a prediction model of all-cause mortality using Cox regression modelling. Results We included 9074 patients: 4537 warfarin users and 4537 non-users. In the derivation cohort (n = 4632), 136 strokes (2.94%), 280 major bleeding events (6.04%) and 1194 deaths (25.78%) occurred. In the prediction models, warfarin use was not significantly associated with the risk of stroke, but increased the risk of major bleeding and decreased the risk of death. Both the PLR and Cox models were robust, internally and externally validated, and had acceptable model performance. Conclusions In this study, we introduce a new methodology for predicting individual combined benefit and harm outcomes associated with warfarin therapy for patients with AF. Should this approach be validated in other patient populations, it has potential advantages over existing risk stratification approaches as a patient-physician aid for shared decision-making. PMID:27513986

  18. A Model for Predicting Grain Boundary Cracking in Polycrystalline Viscoplastic Materials Including Scale Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, D.H.; Helms, K.L.E.; Hurtado, L.D.

    1999-04-06

    A model is developed herein for predicting the mechanical response of inelastic crystalline solids. Particular emphasis is given to the development of microstructural damage along grain boundaries, and the interaction of this damage with intragranular inelasticity caused by dislocation dissipation mechanisms. The model is developed within the concepts of continuum mechanics, with special emphasis on the development of internal boundaries in the continuum by utilizing a cohesive zone model based on fracture mechanics. In addition, the crystalline grains are assumed to be characterized by nonlinear viscoplastic mechanical material behavior in order to account for dislocation generation and migration. Due to the nonlinearities introduced by the crack growth and viscoplastic constitution, a numerical algorithm is utilized to solve representative problems. Implementation of the model to a finite element computational algorithm is therefore briefly described. Finally, sample calculations are presented for a polycrystalline titanium alloy with particular focus on effects of scale on the predicted response.

  19. Predicting Turbulent Convective Heat Transfer in Three-Dimensional Duct Flows

    NASA Technical Reports Server (NTRS)

    Rokni, M.; Gatski, T. B.

    1999-01-01

    The performance of an explicit algebraic stress model is assessed in predicting the turbulent flow and forced heat transfer in straight ducts, with square, rectangular, trapezoidal and triangular cross-sections, under fully developed conditions over a range of Reynolds numbers. Isothermal conditions are imposed on the duct walls, and the turbulent heat fluxes are modeled by gradient-diffusion type models. At high Reynolds numbers (≥ 10^5), wall functions are used for the velocity and temperature fields, while at low Reynolds numbers damping functions are introduced into the models. Hydraulic parameters such as the friction factor and Nusselt number are well predicted even when damping functions are used, and the present formulation imposes minimal demand on the number of grid points without any convergence or stability problems. A comparison between the models is presented in terms of the hydraulic parameters, friction factor and Nusselt number, as well as in terms of the secondary flow patterns occurring within the ducts.

  20. A method to identify and analyze biological programs through automated reasoning

    PubMed Central

    Yordanov, Boyan; Dunn, Sara-Jane; Kugler, Hillel; Smith, Austin; Martello, Graziano; Emmott, Stephen

    2016-01-01

    Predictive biology is elusive because rigorous, data-constrained, mechanistic models of complex biological systems are difficult to derive and validate. Current approaches tend to construct and examine static interaction network models, which are descriptively rich, but often lack explanatory and predictive power, or dynamic models that can be simulated to reproduce known behavior. However, in such approaches implicit assumptions are introduced as typically only one mechanism is considered, and exhaustively investigating all scenarios is impractical using simulation. To address these limitations, we present a methodology based on automated formal reasoning, which permits the synthesis and analysis of the complete set of logical models consistent with experimental observations. We test hypotheses against all candidate models, and remove the need for simulation by characterizing and simultaneously analyzing all mechanistic explanations of observed behavior. Our methodology transforms knowledge of complex biological processes from sets of possible interactions and experimental observations to precise, predictive biological programs governing cell function. PMID:27668090

  1. QSAR Methods.

    PubMed

    Gini, Giuseppina

    2016-01-01

    In this chapter, we introduce the basics of computational chemistry and discuss how computational methods have been extended to some biological properties and, in particular, toxicology. For about two decades, chemical experimentation has increasingly been replaced by modeling and virtual experimentation, using a large core of mathematics, chemistry, physics, and algorithms. We then see how animal experiments, aimed at providing a standardized result about a biological property, can be mimicked by new in silico methods. Our emphasis here is on toxicology and on predicting properties from chemical structures. Two main streams of such models are available: models that consider the whole molecular structure to predict a value, namely QSAR (Quantitative Structure Activity Relationships), and models that find relevant substructures to predict a class, namely SAR. The term in silico discovery is applied to chemical design, to computational toxicology, and to drug discovery. We discuss how experimental practice in biological science is moving more and more toward modeling and simulation. Such virtual experiments confirm hypotheses, provide data for regulation, and help in designing new chemicals.

  2. A New Stress-Based Model of Political Extremism: Personal Exposure to Terrorism, Psychological Distress, and Exclusionist Political Attitudes.

    PubMed

    Canetti-Nisim, Daphna; Halperin, Eran; Sharvit, Keren; Hobfoll, Stevan E

    2009-06-01

    Does exposure to terrorism lead to hostility toward minorities? Drawing on theories from clinical and social psychology, we propose a stress-based model of political extremism in which psychological distress-which is largely overlooked in political scholarship-and threat perceptions mediate the relationship between exposure to terrorism and attitudes toward minorities. To test the model, a representative sample of 469 Israeli Jewish respondents was interviewed on three occasions at six-month intervals. Structural Equation Modeling indicated that exposure to terrorism predicted psychological distress (t1), which predicted perceived threat from Palestinian citizens of Israel (t2), which, in turn, predicted exclusionist attitudes toward Palestinian citizens of Israel (t3). These findings provide solid evidence and a mechanism for the hypothesis that terrorism introduces nondemocratic attitudes threatening minority rights. It suggests that psychological distress plays an important role in political decision making and should be incorporated in models drawing upon political psychology.

  3. Improvement and Application of the Softened Strut-and-Tie Model

    NASA Astrophysics Data System (ADS)

    Fan, Guoxi; Wang, Debin; Diao, Yuhong; Shang, Huaishuai; Tang, Xiaocheng; Sun, Hai

    2017-11-01

    Previous experimental research indicates that reinforced concrete beam-column joints play an important role in the mechanical properties of moment-resisting frame structures and therefore require proper design. The aims of this paper are to predict the joint carrying capacity and crack development theoretically. Thus, a rational model needs to be developed. Based on these considerations, the softened strut-and-tie model is introduced and analyzed. Four adjustments are made, comprising modifications of the depth of the diagonal strut, the inclination angle of the diagonal compression strut, the smeared stress of mild steel bars embedded in concrete, and the softening coefficient. After that, the carrying capacity of the beam-column joint and crack development are predicted using the improved softened strut-and-tie model. Based on the test results, the improved softened strut-and-tie model can be used to predict the joint carrying capacity and crack development with sufficient accuracy.

  4. The importance of radiation for semiempirical water-use efficiency models

    NASA Astrophysics Data System (ADS)

    Boese, Sven; Jung, Martin; Carvalhais, Nuno; Reichstein, Markus

    2017-06-01

    Water-use efficiency (WUE) is a fundamental property for the coupling of carbon and water cycles in plants and ecosystems. Existing model formulations predicting this variable differ in the type of response of WUE to the atmospheric vapor pressure deficit of water (VPD). We tested a representative WUE model on the ecosystem scale at 110 eddy covariance sites of the FLUXNET initiative by predicting evapotranspiration (ET) based on gross primary productivity (GPP) and VPD. We found that introducing an intercept term in the formulation increases model performance considerably, indicating that an additional factor needs to be considered. We demonstrate that this intercept term varies seasonally and we subsequently associate it with radiation. Replacing the constant intercept term with a linear function of global radiation was found to further improve model predictions of ET. Our new semiempirical ecosystem WUE formulation indicates that, averaged over all sites, this radiation term accounts for up to half (39-47 %) of transpiration. These empirical findings challenge the current understanding of water-use efficiency on the ecosystem scale.
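
    The model comparison described above can be reproduced schematically by fitting ET from GPP and VPD with and without a radiation-dependent intercept. The functional form below (ET ~ GPP·sqrt(VPD)/u + a + b·Rg) and all data are our stand-in assumptions; the paper's exact formulations may differ.

```python
# A minimal sketch: compare an ET model with no intercept against one whose
# intercept is a linear function of global radiation Rg.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
gpp = rng.uniform(1, 12, 300)            # gC m-2 d-1 (hypothetical)
vpd = rng.uniform(0.2, 3.0, 300)         # kPa
rg = rng.uniform(50, 300, 300)           # W m-2
et = gpp * np.sqrt(vpd) / 8.0 + 0.004 * rg + rng.normal(0, 0.2, 300)

def no_intercept(X, u):
    g, d, r = X
    return g * np.sqrt(d) / u

def rad_intercept(X, u, a, b):
    g, d, r = X
    return g * np.sqrt(d) / u + a + b * r

for f, p0 in ((no_intercept, [5.0]), (rad_intercept, [5.0, 0.0, 0.0])):
    popt, _ = curve_fit(f, (gpp, vpd, rg), et, p0=p0)
    rmse = np.sqrt(np.mean((f((gpp, vpd, rg), *popt) - et) ** 2))
    print(f.__name__, "RMSE:", round(rmse, 3))
```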

  5. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in the parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on the evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
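
    The two-step loop of (i) sampling parameters for the ensemble and (ii) feeding back relative merits can be sketched very compactly. The toy likelihood below stands in for verification against observations; none of this is the EPPES implementation itself.

```python
# A minimal sketch of the EPPES idea: draw ensemble parameter values from a
# Gaussian proposal, score each member, and feed the relative merits back
# into the proposal distribution. Toy likelihood, not an NWP model.
import numpy as np

rng = np.random.default_rng(4)
true_theta = np.array([1.5, -0.7])          # unknown "closure parameters"

def skill(theta):                           # higher is better (toy stand-in)
    return np.exp(-0.5 * np.sum((theta - true_theta) ** 2) / 0.1)

mean, cov = np.zeros(2), np.eye(2)          # initial proposal distribution
for cycle in range(50):                     # one cycle ~ one forecast window
    ensemble = rng.multivariate_normal(mean, cov, size=20)
    w = np.array([skill(th) for th in ensemble])
    w /= w.sum()
    mean = w @ ensemble                     # importance-weighted update
    diff = ensemble - mean
    cov = (w[:, None] * diff).T @ diff + 1e-4 * np.eye(2)
print("estimated parameters:", mean.round(2))
```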

  6. Postcombustion and its influences in 135 MWe CFB boilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaohua Li; Hairui Yang; Hai Zhang

    2009-09-15

    In the cyclone of a circulating fluidized bed (CFB) boiler, a noticeable increase in flue gas temperature, caused by the combustion of combustible gas and unburnt carbon, is often found. This phenomenon is defined as post combustion, and it can introduce overheating of reheated and superheated steam and extra heat loss in the exhaust flue gas. In this paper, mathematical modeling and field measurements of post combustion in 135 MWe commercial CFB boilers were conducted. A novel one-dimensional combustion model taking post combustion into account was developed. With this model, the overall combustion performance, including the size distribution of various ashes, the temperature and carbon content profiles along the furnace height, and the heat release fractions in the cyclone and furnace, was predicted. Field measurements were conducted by sampling gas and solids at different positions in the boiler under different loads. The measured data and the corresponding model-calculated results were compared. Both the predictions and field measurements showed that post combustion introduced a flue gas temperature increase in the cyclone of the 135 MWe CFB boiler in the range of 20-50 °C when a low-volatile bituminous coal was fired. Although it had little influence on ash size distribution, post combustion had a remarkable influence on the carbon content and temperature profiles in the furnace. Moreover, it introduced about 4-7% of the boiler's total heat release in the cyclone. This fraction increased slightly with total air flow rate and boiler load. Model calculations were also conducted on two other 135 MWe CFB boilers burning lignite and anthracite coal, respectively. The results confirmed that post combustion is sensitive to coal type and becomes more severe as the volatile content of the coal decreases. 15 refs., 11 figs., 4 tabs.

  7. Recent Turbulence Model Advances Applied to Multielement Airfoil Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2000-01-01

    A one-equation linear turbulence model and a two-equation nonlinear explicit algebraic stress model (EASM) are applied to the flow over a multielement airfoil. The effects of the K-epsilon and K-omega forms of the two-equation model are explored, and the K-epsilon form is shown to be deficient in the wall-bounded regions of adverse pressure gradient flows. A new K-omega form of EASM is introduced. Nonlinear terms present in EASM are shown to improve predictions of turbulent shear stress behind the trailing edge of the main element and near midflap. Curvature corrections are applied to both the one- and two-equation turbulence models and yield only relatively small local differences in the flap region, where the flow field undergoes the greatest curvature. Predictions of maximum lift are essentially unaffected by the turbulence model variations studied.

  8. A numerical study of wave-current interaction through surface and bottom stresses: Coastal ocean response to Hurricane Fran of 1996

    NASA Astrophysics Data System (ADS)

    Xie, L.; Pietrafesa, L. J.; Wu, K.

    2003-02-01

    A three-dimensional wave-current coupled modeling system is used to examine the influence of waves on coastal currents and sea level. This coupled modeling system consists of the wave model WAM (Cycle 4) and the Princeton Ocean Model (POM). The results from this study show that it is important to incorporate surface wave effects into coastal storm surge and circulation models. Specifically, we find that (1) storm surge models without coupled surface waves generally underestimate not only the peak surge but also the coastal water level drop, which can also have a substantial impact on the coastal environment; (2) introducing the wave-induced surface stress effect into storm surge models can significantly improve storm surge prediction; (3) incorporating wave-induced bottom stress into the coupled wave-current model further improves storm surge prediction; and (4) calibration of the wave module according to minimum error in significant wave height does not necessarily result in an optimum wave module in a wave-current coupled system for current and storm surge prediction.

  9. A cluster expansion model for predicting activation barrier of atomic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rehman, Tafizur; Jaipal, M.; Chatterjee, Abhijit, E-mail: achatter@iitk.ac.in

    2013-06-15

    We introduce a procedure based on cluster expansion models for predicting the activation barriers of atomic processes encountered while studying the dynamics of a material system using the kinetic Monte Carlo (KMC) method. Starting with an interatomic potential description, a mathematical derivation is presented to show that the local environment dependence of the activation barrier can be captured using cluster interaction models. Next, we develop a systematic procedure for training the cluster interaction model on-the-fly, which involves: (i) obtaining activation barriers for a handful of local environments using nudged elastic band (NEB) calculations, (ii) identifying the local environment by analyzing the NEB results, and (iii) estimating the cluster interaction model parameters from the activation barrier data. Once a cluster expansion model has been trained, it is used to predict activation barriers without requiring any additional NEB calculations. Numerical studies are performed to validate the cluster expansion model by studying hop processes in Ag/Ag(100). We show that the use of a cluster expansion model with KMC enables the efficient generation of an accurate process rate catalog.
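
    In its simplest linear form, step (iii) reduces to a least-squares fit of interaction coefficients against a handful of computed barriers. The sketch below uses random 0/1 occupancies and synthetic "NEB" barriers as stand-ins; the cluster set and energy scales are assumptions, not the paper's.

```python
# A minimal sketch: express an activation barrier as a linear cluster
# expansion over neighboring-site occupancies, fit to a few NEB-like values.
import numpy as np

rng = np.random.default_rng(5)
n_sites, n_neb = 12, 60
occ = rng.integers(0, 2, size=(n_neb, n_sites))     # local environments (0/1)
true_J = rng.normal(0, 0.05, n_sites)               # interaction terms, eV
E0 = 0.45                                           # reference barrier, eV
barriers = E0 + occ @ true_J + rng.normal(0, 0.005, n_neb)  # stand-in data

# Least-squares fit of the cluster-expansion coefficients.
A = np.column_stack([np.ones(n_neb), occ])
coef, *_ = np.linalg.lstsq(A, barriers, rcond=None)

new_env = rng.integers(0, 2, n_sites)               # unseen local environment
print("predicted barrier (eV):", round(coef[0] + new_env @ coef[1:], 3))
```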

  10. Predicting movement of nursery hosts using a linear network model

    Treesearch

    Steve McKelvey; Frank Koch; Bill Smith

    2008-01-01

    There is widespread concern among scientists and land managers that Phytophthora ramorum may be accidentally introduced into oak-dominated eastern U.S. forests through the transfer of the pathogen from infected nursery plants to susceptible understory forest species (for example, Rhododendron spp.) at the forest-urban interface....

  11. Contributions of Dynamic Systems Theory to Cognitive Development

    ERIC Educational Resources Information Center

    Spencer, John P.; Austin, Andrew; Schutte, Anne R.

    2012-01-01

    We examine the contributions of dynamic systems theory to the field of cognitive development, focusing on modeling using dynamic neural fields. After introducing central concepts of dynamic field theory (DFT), we probe empirical predictions and findings around two examples--the DFT of infant perseverative reaching that explains Piaget's A-not-B…

  12. Rebar: Reinforcing a Matching Estimator with Predictions from High-Dimensional Covariates

    ERIC Educational Resources Information Center

    Sales, Adam C.; Hansen, Ben B.; Rowan, Brian

    2018-01-01

    In causal matching designs, some control subjects are often left unmatched, and some covariates are often left unmodeled. This article introduces "rebar," a method using high-dimensional modeling to incorporate these commonly discarded data without sacrificing the integrity of the matching design. After constructing a match, a researcher…

  13. 20180312 - Ensemble QSAR Modeling to Predict Multispecies Fish Toxicity Points of Departure (SOT)

    EPA Science Inventory

    Due to the large quantity of new chemicals being developed and potentially introduced into aquatic ecosystems, there is a need to prioritize chemicals with the greatest likelihood of ecological hazard for further research. To this end, a useful in silico estimation of ecotoxicity...

  14. Systems biology as a conceptual framework for research in family medicine; use in predicting response to influenza vaccination.

    PubMed

    Majnarić-Trtica, Ljiljana; Vitale, Branko

    2011-10-01

    To introduce systems biology as a conceptual framework for research in family medicine, based on empirical data from a case study on the prediction of influenza vaccination outcomes. This concept is primarily oriented towards planning preventive interventions and includes systematic data recording, a multi-step research protocol and predictive modelling. Factors known to affect responses to influenza vaccination include older age, past exposure to influenza viruses, and chronic diseases; however, constructing useful prediction models remains a challenge because of the need to identify health parameters that are appropriate for general use in modelling patients' responses. The sample consisted of 93 patients aged 50-89 years (median 69), with multiple medical conditions, who were vaccinated against influenza. Literature searches identified potentially predictive health-related parameters, including age, gender, diagnoses of the main chronic ageing diseases, anthropometric measures, and haematological and biochemical tests. By applying data mining algorithms, patterns were identified in the data set. Candidate health parameters selected in this way were then combined with information on past influenza virus exposure to build the prediction model using logistic regression. A highly significant prediction model was obtained, indicating that a systems biology approach makes it possible to address unresolved, complex medical uncertainties. Adopting this systems biology approach can be expected to be useful in identifying the most appropriate target groups for other preventive programmes.

  15. Analytical model for local scour prediction around hydrokinetic turbine foundations

    NASA Astrophysics Data System (ADS)

    Musa, M.; Heisel, M.; Hill, C.; Guala, M.

    2017-12-01

    Marine and hydrokinetic renewable energy is an emerging sustainable and secure technology which produces clean energy by harnessing water currents, mostly in tidal and fluvial waterways. Hydrokinetic turbines are typically anchored at the bottom of the channel, which can be erodible or non-erodible. Recent experiments demonstrated the interactions between operating turbines and an erodible surface with sediment transport, resulting in a remarkable localized erosion-deposition pattern significantly larger than those observed around static in-river constructions such as bridge piers. Predicting the local scour geometry at the base of hydrokinetic devices is extremely important for foundation design, installation, operation, and maintenance (IO&M), and long-term structural integrity. An analytical modeling framework is proposed, applying the phenomenological theory of turbulence to the flow structures that promote the scouring process at the base of a turbine. The evolution of scour is directly linked to device operating conditions through the turbine drag force, which is inferred to locally dictate the energy dissipation rate in the scour region. The predictive model is validated using experimental data obtained at the University of Minnesota's St. Anthony Falls Laboratory (SAFL), covering two sediment mobility regimes (clear water and live bed), different turbine designs, hydraulic parameters, grain size distributions and bedform types. The model is applied to a potential prototype-scale deployment in the lower Mississippi River, demonstrating its practical relevance and endorsing the feasibility of hydrokinetic energy power plants in large sandy rivers. Multi-turbine deployments are further studied experimentally by monitoring both the local and non-local geomorphic effects introduced by a twelve-turbine staggered array model installed in a wide channel at SAFL. Local scour behind each turbine is well captured by the theoretical predictive model. However, multi-turbine configurations introduce subtle large-scale effects that deepen local scour within the first two rows of the array and develop spatially as a two-dimensional oscillation of the mean bed downstream of the entire array.

  16. Using radiance predicted by the P3 approximation in a spherical geometry to predict tissue optical properties

    NASA Astrophysics Data System (ADS)

    Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John

    2001-01-01

    For photodynamic therapy of solid tumors, such as prostatic carcinoma, to be achieved, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo, are accurate but far too time consuming for clinical application. However, radiance predicted by the P3 approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3 approximation accurately predicts optical parameters in intralipid/methylene blue based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into the fluence predicted by both the P3 approximation and Grosjean theory, correlate well with experimental data. The P3 approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3 approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed with which accurate calculations can be performed.

  17. A Prediction Model for Functional Outcomes in Spinal Cord Disorder Patients Using Gaussian Process Regression.

    PubMed

    Lee, Sunghoon Ivan; Mortazavi, Bobak; Hoffman, Haydn A; Lu, Derek S; Li, Charles; Paak, Brian H; Garst, Jordan H; Razaghy, Mehrdad; Espinal, Marie; Park, Eunjeong; Lu, Daniel C; Sarrafzadeh, Majid

    2016-01-01

    Predicting the functional outcomes of spinal cord disorder patients after medical treatments, such as a surgical operation, has always been of great interest. Accurate posttreatment prediction is especially beneficial for clinicians, patients, caregivers, and therapists. This paper introduces a prediction method for postoperative functional outcomes based on a novel use of Gaussian process regression. The proposed method specifically considers the restricted value range of the target variables by modeling the Gaussian process with a truncated Normal distribution, which significantly improves the prediction results. The prediction is assisted by target tracking examinations using a highly portable and inexpensive handgrip device, which greatly contributes to the prediction performance. The proposed method has been validated on a dataset collected from a clinical pilot cohort of 15 patients with cervical spinal cord disorder. The results show that the proposed method can accurately predict postoperative functional outcomes, the Oswestry disability index and target tracking scores, from the patient's preoperative information, with mean absolute errors of 0.079 and 0.014 (out of 1.0), respectively.
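
    The sketch below shows the basic shape of such a predictor with scikit-learn. The paper models the GP on a truncated Normal to respect the bounded score range; scikit-learn offers no truncated-Normal GP, so here we merely clip the predictive mean to [0, 1], which is an acknowledged simplification, and the features are hypothetical.

```python
# A minimal sketch of GP regression for bounded functional scores
# (clipping approximates, but does not reproduce, the paper's
# truncated-Normal treatment).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, (40, 3))        # preoperative features (hypothetical)
y = np.clip(0.3 + 0.5 * X[:, 0] + rng.normal(0, 0.05, 40), 0, 1)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)
mu, sd = gpr.predict(rng.uniform(0, 1, (5, 3)), return_std=True)
print(np.clip(mu, 0, 1))              # keep predictions in the valid range
```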

  18. Nonlinear optimization of acoustic energy harvesting using piezoelectric devices.

    PubMed

    Lallart, Mickaël; Guyomar, Daniel; Richard, Claude; Petit, Lionel

    2010-11-01

    In the first part of the paper, a single degree-of-freedom model of a vibrating membrane with piezoelectric inserts is introduced and is initially applied to the case when a plane wave is incident with frequency close to one of the resonance frequencies. The model is a prototype of a device which converts ambient acoustical energy to electrical energy with the use of piezoelectric devices. The paper then proposes an enhancement of the energy harvesting process using a nonlinear processing of the output voltage of piezoelectric actuators, and suggests that this improves the energy conversion and reduces the sensitivity to frequency drifts. A theoretical discussion is given for the electrical power that can be expected making use of various models. This and supporting experimental results suggest that a nonlinear optimization approach allows a gain of up to 10 in harvested energy and a doubling of the bandwidth. A model is introduced in the latter part of the paper for predicting the behavior of the energy-harvesting device with changes in acoustic frequency, this model taking into account the damping effect and the frequency changes introduced by the nonlinear processes in the device.

  19. Prediction of chemical biodegradability using support vector classifier optimized with differential evolution.

    PubMed

    Cao, Qi; Leung, K M

    2014-09-22

    Reliable computer models for the prediction of chemical biodegradability from molecular descriptors and fingerprints are very important for making health and environmental decisions. Coupling of the differential evolution (DE) algorithm with the support vector classifier (SVC) in order to optimize the main parameters of the classifier resulted in an improved classifier called the DE-SVC, which is introduced in this paper for use in chemical biodegradability studies. The DE-SVC was applied to predict the biodegradation of chemicals on the basis of extensive sample data sets and known structural features of molecules. Our optimization experiments showed that DE can efficiently find the proper parameters of the SVC. The resulting classifier possesses strong robustness and reliability compared with grid search, genetic algorithm, and particle swarm optimization methods. The classification experiments conducted here showed that the DE-SVC exhibits better classification performance than models previously used for such studies. It is a more effective and efficient prediction model for chemical biodegradability.
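
    The core DE-SVC idea, letting differential evolution pick the classifier's main parameters by maximizing cross-validated performance, fits in a short script. Synthetic data stands in for the biodegradability descriptors, and the search bounds are our assumptions.

```python
# A minimal sketch of the DE-SVC idea: tune the SVC's C and gamma with
# differential evolution against cross-validated accuracy.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

def neg_cv_accuracy(params):
    log_c, log_gamma = params                 # search in log10 space
    clf = SVC(C=10 ** log_c, gamma=10 ** log_gamma)
    return -cross_val_score(clf, X, y, cv=5).mean()

result = differential_evolution(neg_cv_accuracy, bounds=[(-2, 3), (-4, 1)],
                                seed=0, maxiter=10)
print("best log10(C), log10(gamma):", result.x.round(2),
      "CV accuracy:", round(-result.fun, 3))
```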

  20. Modeling of Complex Coupled Fluid-Structure Interaction Systems in Arbitrary Water Depth

    DTIC Science & Technology

    2009-01-01

    basin. For the particle finite-element method (PFEM) near-field fluid model we completed: (4) the development of a fully-coupled fluid/flexible...method (PFEM) based framework for the ALE-RANS solver [1]. We presented the theory of ALE-RANS with a k- turbulence closure model and several numerical...implemented by PFEM (Task (4)). In this work a universal wall function (UWF) is introduced and implemented to more accurately predict the boundary

  1. Predicting and understanding law-making with word vectors and an ensemble model.

    PubMed

    Nay, John J

    2017-01-01

    Out of nearly 70,000 bills introduced in the U.S. Congress from 2001 to 2015, only 2,513 were enacted. We developed a machine learning approach to forecasting the probability that any bill will become law. Starting in 2001 with the 107th Congress, we trained models on data from previous Congresses, predicted all bills in the current Congress, and repeated until the 113th Congress served as the test. For prediction we scored each sentence of a bill with a language model that embeds legislative vocabulary into a high-dimensional, semantic-laden vector space. This language representation enables our investigation into which words increase the probability of enactment for any topic. To test the relative importance of text and context, we compared the text model to a context-only model that uses variables such as whether the bill's sponsor is in the majority party. To test the effect of changes to bills after their introduction on our ability to predict their final outcome, we compared using the bill text and meta-data available at the time of introduction with using the most recent data. At the time of introduction context-only predictions outperform text-only, and with the newest data text-only outperforms context-only. Combining text and context always performs best. We conducted a global sensitivity analysis on the combined model to determine important variables predicting enactment.
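
    A toy sketch of the text-plus-context combination, with TF-IDF features standing in for the paper's word-vector language model and synthetic bills: fit one classifier on text, one on context variables, and average their enactment probabilities.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        bills = ["a bill to amend the internal revenue code",
                 "a bill to designate a federal facility",
                 "a bill to authorize appropriations",
                 "a bill to rename a post office"]
        context = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])  # e.g. sponsor in majority, bipartisan
        enacted = np.array([0, 1, 1, 0])

        vec = TfidfVectorizer()
        X_text = vec.fit_transform(bills)
        text_model = LogisticRegression().fit(X_text, enacted)
        ctx_model = LogisticRegression().fit(context, enacted)

        # Combined score: simple average of the two probability estimates.
        p_enact = 0.5 * (text_model.predict_proba(X_text)[:, 1]
                         + ctx_model.predict_proba(context)[:, 1])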

  2. Labor estimation by informational objective assessment (LEIOA) for preterm delivery prediction.

    PubMed

    Malaina, Iker; Aranburu, Larraitz; Martínez, Luis; Fernández-Llebrez, Luis; Bringas, Carlos; De la Fuente, Ildefonso M; Pérez, Martín Blás; González, Leire; Arana, Itziar; Matorras, Roberto

    2018-05-01

    To introduce LEIOA, a new screening method to forecast which patients admitted to hospital with suspected threatened premature delivery will give birth within 7 days, so that it can assist prognosis and treatment jointly with other clinical tools. From 2010 to 2013, 286 tocographies from women with gestational ages between 24 and 37 weeks were collected and studied. We then developed a new predictive model based on uterine contractions that combines the generalized Hurst exponent and the approximate entropy through logistic regression (the LEIOA model). We compared it with a model using exclusively obstetric variables and afterwards combined the two to evaluate the gain. Finally, a cross-validation was performed. Combining LEIOA with the medical model increased predictive values by 12% on average with respect to the medical model alone, giving a sensitivity of 0.937, a specificity of 0.747, a positive predictive value of 0.907 and a negative predictive value of 0.819. In addition, adding LEIOA reduced the percentage of cases incorrectly classified by the medical model by almost 50%. Given the significant increase in predictive parameters and the reduction in incorrectly classified cases when LEIOA was combined with the medical variables, we conclude that it could be a very useful tool for improving estimation of the immediacy of preterm delivery.
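
    A sketch of the feature-plus-logistic-regression pipeline under stated assumptions: approximate entropy computed from each uterine-contraction series feeds a logistic classifier (the generalized Hurst exponent, the model's other complexity feature, is omitted here for brevity). Data are synthetic.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def approx_entropy(x, m=2, r_frac=0.2):
            # Textbook approximate entropy of a 1-D series, O(n^2).
            x = np.asarray(x, dtype=float)
            r = r_frac * x.std()
            def phi(mm):
                emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
                dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                return np.log((dist <= r).mean(axis=1)).mean()
            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(0)
        signals = [rng.standard_normal(200) for _ in range(20)]   # toy tocography traces
        labels = np.array([0, 1] * 10)                            # delivery in < 7 days?
        feats = np.array([[approx_entropy(s)] for s in signals])
        clf = LogisticRegression().fit(feats, labels)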

  4. Modeling the compensatory response of an invasive tree to specialist insect herbivory

    USGS Publications Warehouse

    Zhang, Bo; Liu, Xin; DeAngelis, Donald L.; Zhai, Lu; Rayamajhi, Min B.; Ju, Shu

    2018-01-01

    The severity of the effects of herbivory on plant fitness can be moderated by the ability of plants to compensate for biomass loss. Compensation is an important component of the ecological fitness in many plants, and has been shown to reduce the effects of pests on agricultural plant yields. It can also reduce the effectiveness of biocontrol through introduced herbivores in controlling weedy invasive plants. This study used a modeling approach to predict the effect of different levels of foliage herbivory by biological control agents introduced to control the invasive tree Melaleuca quinquennervia (melaleuca) in Florida. It is assumed in the model that melaleuca can optimally change its carbon and nitrogen allocation strategies in order to compensate for the effects of herbivory. The model includes reallocation of more resources to production and maintenance of photosynthetic tissues at the expense of roots. This compensation is shown to buffer the severity of the defoliation effect, but the model predicts a limit on the maximum herbivory that melaleuca can tolerate and survive. The model also shows that the level of available limiting nutrient (e.g., soil nitrogen) may play an important role in a melaleuca’s ability to compensate for herbivory. This study has management implications for the best ways to maximize the level of damage using biological control or other means of defoliation.

  5. Shared Mechanisms in the Estimation of Self-Generated Actions and the Prediction of Other's Actions by Humans.

    PubMed

    Ikegami, Tsuyoshi; Ganesh, Gowrishankar

    2017-01-01

    The question of how humans predict outcomes of observed motor actions by others is a fundamental problem in cognitive and social neuroscience. Previous theoretical studies have suggested that the brain uses parts of the forward model (used to estimate sensory outcomes of self-generated actions) to predict outcomes of observed actions. However, this hypothesis has remained controversial due to the lack of direct experimental evidence. To address this issue, we analyzed the behavior of darts experts in a learning paradigm and used computational modeling to examine how outcome prediction of observed actions affected the participants' ability to estimate their own actions. We recruited darts experts because sports experts are known to have accurate outcome estimation of their own actions as well as prediction of actions observed in others. We first show that learning to predict the outcomes of observed dart throws degrades an expert's ability both to produce his own darts actions and to estimate the outcome of his own throws (self-estimation). Next, we introduce a state-space model to explain the trial-by-trial changes in darts performance and self-estimation through our experiment. The model-based analysis reveals that the change in an expert's self-estimation is explained only by considering a change in the individual's forward model, showing that an improvement in an expert's ability to predict outcomes of observed actions affects the individual's forward model. These results suggest that parts of the same forward model are utilized in humans both to estimate outcomes of self-generated actions and to predict outcomes of observed actions.

  6. Progress towards a more predictive model for hohlraum radiation drive and symmetry

    NASA Astrophysics Data System (ADS)

    Jones, O. S.; Suter, L. J.; Scott, H. A.; Barrios, M. A.; Farmer, W. A.; Hansen, S. B.; Liedahl, D. A.; Mauche, C. W.; Moore, A. S.; Rosen, M. D.; Salmonson, J. D.; Strozzi, D. J.; Thomas, C. A.; Turnbull, D. P.

    2017-05-01

    For several years, we have been calculating the radiation drive in laser-heated gold hohlraums using flux-limited heat transport with a limiter of 0.15, tabulated values of local thermodynamic equilibrium gold opacity, and an approximate model for non-local-thermodynamic-equilibrium (NLTE) gold emissivity (DCA_2010). This model has been successful in predicting the radiation drive in vacuum hohlraums, but for gas-filled hohlraums used to drive capsule implosions, the model consistently predicts too much drive and capsule bang times earlier than measured. In this work, we introduce a new model that brings the calculated bang time into better agreement with the measured bang time. The new model employs (1) a numerical grid that is fully converged in space, energy, and time, (2) a modified approximate NLTE model that includes more physics and is in better agreement with more detailed offline emissivity models, and (3) a reduced flux limiter value of 0.03. We applied this model to gas-filled hohlraum experiments using high density carbon and plastic ablator capsules that had hohlraum He fill gas densities ranging from 0.06 to 1.6 mg/cc and hohlraum diameters of 5.75 or 6.72 mm. The new model predicts bang times to within ±100 ps for most experiments with low to intermediate fill densities (up to 0.85 mg/cc). This model predicts higher temperatures in the plasma than the old model and also predicts that at higher gas fill densities, a significant amount of inner beam laser energy escapes the hohlraum through the opposite laser entrance hole.

  7. Action Unit Models of Facial Expression of Emotion in the Presence of Speech

    PubMed Central

    Shah, Miraj; Cooper, David G.; Cao, Houwei; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

    Automatic recognition of emotion using facial expressions in the presence of speech poses a unique challenge because talking reveals clues for the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression where speech is either present (talking) or absent (silent). The corpus is uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision level fusion classifier to combine models of emotion from talking and silent faces as well as from audio to recognize five basic emotions: anger, disgust, fear, happy and sad. Our results strongly indicate that emotion prediction in the presence of speech from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves accuracy of prediction in the talking setting. The advantages are most pronounced when silent and talking face models are fused with predictions from audio features. In this multi-modal prediction both the combination of modalities and the separate models of talking and silent facial expression of emotion contribute to the improvement. PMID:25525561

  8. Simulation of Silicon Photomultiplier Signals

    NASA Astrophysics Data System (ADS)

    Seifert, Stefan; van Dam, Herman T.; Huizenga, Jan; Vinke, Ruud; Dendooven, Peter; Lohner, Herbert; Schaart, Dennis R.

    2009-12-01

    In a silicon photomultiplier (SiPM), also referred to as multi-pixel photon counter (MPPC), many Geiger-mode avalanche photodiodes (GM-APDs) are connected in parallel so as to combine the photon counting capabilities of each of these so-called microcells into a proportional light sensor. The discharge of a single microcell is relatively well understood and electronic models exist to simulate this process. In this paper we introduce an extended model that is able to simulate the simultaneous discharge of multiple cells. This model is used to predict the SiPM signal in response to fast light pulses as a function of the number of fired cells, taking into account the influence of the input impedance of the SiPM preamplifier. The model predicts that the electronic signal is not proportional to the number of fired cells if the preamplifier input impedance is not zero. This effect becomes more important for SiPMs with lower parasitic capacitance (which otherwise is a favorable property). The model is validated by comparing its predictions to experimental data obtained with two different SiPMs (Hamamatsu S10362-11-25u and Hamamatsu S10362-33-25c) illuminated with ps laser pulses. The experimental results are in good agreement with the model predictions.

  9. Network reconstruction based on proteomic data and prior knowledge of protein connectivity using graph theory.

    PubMed

    Stavrakas, Vassilis; Melas, Ioannis N; Sakellaropoulos, Theodore; Alexopoulos, Leonidas G

    2015-01-01

    Modeling of signal transduction pathways is instrumental for understanding cell function. Considerable effort has gone into modeling signaling pathways so as to accurately represent the signaling events inside the cell's biochemical microenvironment in a way that is meaningful to biologists. In this article, we propose a method to interrogate such pathways in order to produce cell-specific signaling models. We integrate available prior knowledge of protein connectivity, in the form of a Prior Knowledge Network (PKN), with phosphoproteomic data to construct predictive models of the protein connectivity of the interrogated cell type. Several computational methodologies focusing on logic modeling of pathways using optimization formulations or machine learning algorithms have been published over the past few years. Here, we introduce a light and fast approach that uses a breadth-first traversal of the graph to identify the shortest pathways and score proteins in the PKN, fitting the dependencies extracted from the experimental design. The pathways are then combined through a heuristic formulation to produce a final topology that handles inconsistencies between the PKN and the experimental scenarios. Our results show that the algorithm we developed is efficient and accurate for the construction of medium and large-scale signaling networks. We demonstrate the applicability of the proposed approach by interrogating a manually curated interaction graph model of EGF/TNFA stimulation against synthetic experimental data. To avoid the possibility of erroneous predictions, we performed a cross-validation analysis. Finally, we validate that the introduced approach generates predictive topologies, comparable to the ILP formulation. Overall, an efficient approach based on graph theory is presented herein to interrogate protein-protein interaction networks and to provide meaningful biological insights.
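
    A minimal sketch of the breadth-first shortest-path scoring idea using networkx; the signed-network handling, experimental fitting, and heuristic combination steps of the actual method are omitted, and node names are illustrative.

        import networkx as nx

        # Toy prior-knowledge network (directed protein interactions).
        pkn = nx.DiGraph()
        pkn.add_edges_from([("EGF", "EGFR"), ("EGFR", "ERK"), ("EGFR", "PI3K"),
                            ("PI3K", "AKT"), ("TNFA", "NFKB")])

        # Score proteins by how often they lie on stimulus-to-readout shortest
        # paths (nx.shortest_path uses BFS on unweighted graphs).
        scores = {n: 0 for n in pkn}
        for stimulus, readout in [("EGF", "ERK"), ("EGF", "AKT")]:
            for n in nx.shortest_path(pkn, stimulus, readout):
                scores[n] += 1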

  10. Avoiding drift related to linear analysis update with Lagrangian coordinate models

    NASA Astrophysics Data System (ADS)

    Wang, Yiguo; Counillon, Francois; Bertino, Laurent

    2015-04-01

    When applying data assimilation to Lagrangian coordinate models, it is beneficial to correct the model grid (position, volume). In an isopycnal-coordinate ocean model, this information is provided by the layer thickness, which can be massless but must remain positive (a truncated Gaussian distribution). A linear Gaussian analysis does not ensure positivity for such a variable. Methods have been proposed to handle this issue - e.g. post-processing, anamorphosis or resampling - but none ensures conservation of the mean, which is imperative in climate applications. Here, a framework is introduced to test a new method, which proceeds as follows. First, layers for which the analysis yields negative values are iteratively grouped with neighboring layers, resulting in a probability density function with a larger mean and smaller standard deviation that prevents the appearance of negative values. Second, the analysis increments of the grouped layer are uniformly distributed, which prevents massless layers from becoming filled and vice versa. The new method is proved fully conservative with e.g. OI or 3DVAR, but a small drift remains with ensemble-based methods (e.g. EnKF, DEnKF, …) during the update of the ensemble anomaly. However, the resulting drift is small (an order of magnitude smaller than with post-processing) and the increase in computational cost moderate. The new method is demonstrated with a realistic application in the Norwegian Climate Prediction Model (NorCPM), which provides climate prediction by assimilating sea surface temperature with the ensemble Kalman filter in a fully coupled Earth system model (NorESM) with an isopycnal ocean model (MICOM). Over a 25-year analysis period, the new method does not impair the predictive skill of the system, but corrects the artificial steric drift introduced by data assimilation and provides estimates in good agreement with IPCC AR5.
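
    A numpy sketch of the grouping step under stated assumptions: any layer whose raw linear increment would drive its thickness negative is merged with a neighboring layer, summing both thickness and increment so that the total (and hence the mean) is conserved, until all updated thicknesses are non-negative. Values are toy numbers, not model output.

        import numpy as np

        thickness = np.array([5.0, 0.3, 8.0])    # background layer thicknesses
        increment = np.array([1.0, -0.8, 0.5])   # raw linear analysis increments

        updated = thickness + increment
        while (updated < 0).any():
            i = int(np.argmin(updated))                       # offending layer
            j = i + 1 if i + 1 < len(thickness) else i - 1    # neighbor to merge with
            thickness[j] += thickness[i]
            increment[j] += increment[i]
            thickness = np.delete(thickness, i)
            increment = np.delete(increment, i)
            updated = thickness + increment                   # totals unchanged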

  11. Modeling time-to-event (survival) data using classification tree analysis.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (i.e., easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.

  12. Graph wavelet alignment kernels for drug virtual screening.

    PubMed

    Smalter, Aaron; Huan, Jun; Lushington, Gerald

    2009-06-01

    In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing graph local topology. We design a novel graph kernel function that utilizes the topology features to build predictive models for chemicals via a support vector machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel using a set of chemical structure-activity prediction benchmarks. Our results indicate that the use of the kernel function yields performance profiles comparable to, and sometimes exceeding, those of existing state-of-the-art chemical classification approaches. In addition, our results also show that the use of wavelet functions significantly decreases the computational cost of graph kernel computation, with a more than tenfold speedup.

  13. Multiscale Fatigue Life Prediction for Composite Panels

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Yarrington, Phillip W.; Arnold, Steven M.

    2012-01-01

    Fatigue life prediction capabilities have been incorporated into the HyperSizer Composite Analysis and Structural Sizing Software. The fatigue damage model is introduced at the fiber/matrix constituent scale through HyperSizer's coupling with NASA's MAC/GMC micromechanics software. This enables prediction of micro-scale damage progression throughout stiffened and sandwich panels as a function of cycles, leading ultimately to simulated panel failure. The fatigue model implementation uses a cycle-jumping technique: rather than applying a specified number of additional cycles, a local damage increment is specified and the number of additional cycles needed to reach this increment is calculated. In this way, the effect of stress redistribution due to damage-induced stiffness change is captured, but the fatigue simulations remain computationally efficient. The model is compared to experimental fatigue life data for two composite facesheet/foam core sandwich panels, demonstrating very good agreement.
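
    A toy sketch of the cycle-jumping idea (not HyperSizer/MAC-GMC code): freeze the current stress state, pick a local damage increment dD, and advance the cycle count by dN = dD / (dD/dN) before re-solving stresses. The damage-rate expression below is a placeholder.

        # Advance fatigue damage D in fixed increments, jumping over cycles.
        D, N = 0.0, 0.0
        dD = 0.05                             # prescribed local damage increment
        while D < 1.0:                        # D = 1 taken as local failure
            rate = 1e-6 * (1.0 + 5.0 * D)     # placeholder for dD/dN from micromechanics
            N += dD / rate                    # cycles needed for this increment
            D += dD                           # stresses would be redistributed here
        print(f"simulated failure after {N:.3g} cycles")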

  14. KTX circuit model and discharge waveform prediction

    NASA Astrophysics Data System (ADS)

    Bai, Wei; Lan, T.; Mao, W. Z.; You, W.; Li, H.; Liu, A. D.; Xie, J. L.; Wan, S. D.; Liu, W. D.; Yang, L.; Fu, P.; Xiao, C. J.; Ding, W. X.

    2013-10-01

    The Keda Torus eXperiment (KTX) is a reversed field pinch (RFP) device under construction at the University of Science and Technology of China. The KTX power supply system includes the Ohmic heating, field shaping and toroidal power supply systems, which produce the Ohmic field, equilibrium field and toroidal field, respectively. The detailed circuit model is introduced in this poster. A further purpose is to predict the discharge waveforms using the modified Bessel function model (MBFM), which describes the evolution of the plasma current and magnetic flux in an RFP based on Taylor relaxation theory. Furthermore, the power supply requirements of the external field shaping windings are also predicted with the model, which will be very helpful for the design of the plasma equilibrium control system. Supported by ITER-China program (No. 2011GB106000), NNSFC (Nos. 10990210, 10990211, 10335060 and 10905057), CPSF (No. 20080440104), YIF (No. WK2030040019) and KIPCAS (No. kjcx-yw-n28).
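
    A sketch of the MBFM field profiles under the usual Bessel-function-model assumptions, B_z = B0*J0(2*Theta*r/a) and B_theta = B0*J1(2*Theta*r/a); the parameter values are toy numbers, not KTX design figures.

        import numpy as np
        from scipy.special import j0, j1

        B0, a, theta = 0.2, 0.4, 1.6        # axis field [T], minor radius [m], pinch parameter
        r = np.linspace(0.0, a, 100)
        Bz = B0 * j0(2.0 * theta * r / a)   # reverses sign near the edge for large theta
        Btheta = B0 * j1(2.0 * theta * r / a)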

  15. Mass Transport through Nanostructured Membranes: Towards a Predictive Tool

    PubMed Central

    Darvishmanesh, Siavash; Van der Bruggen, Bart

    2016-01-01

    This study proposes a new mechanism to understand the transport of solvents through nanostructured membranes from a fundamental point of view. The findings are used to develop readily applicable mathematical models to predict solvent fluxes and solute rejections through solvent resistant membranes used for nanofiltration. The new model was developed based on a pore-flow type of transport. New parameters found to be of fundamental importance were introduced to the equation, i.e., the affinity of the solute and the solvent for the membrane expressed as the hydrogen-bonding contribution of the solubility parameter for the solute, solvent and membrane. A graphical map was constructed to predict the solute rejection based on the hydrogen-bonding contribution of the solubility parameter. The model was evaluated with performance data from the literature. Both the solvent flux and the solute rejection calculated with the new approach were similar to values reported in the literature. PMID:27918434

  16. Shear wave prediction using committee fuzzy model constrained by lithofacies, Zagros basin, SW Iran

    NASA Astrophysics Data System (ADS)

    Shiroodi, Sadjad Kazem; Ghafoori, Mohammad; Ansari, Hamid Reza; Lashkaripour, Golamreza; Ghanadian, Mostafa

    2017-02-01

    The main purpose of this study is to introduce geological controlling factors that improve an intelligence-based model for estimating shear wave velocity from seismic attributes. The proposed method includes three main steps in the framework of geological events in a complex sedimentary succession located in the Persian Gulf. First, the best attributes were selected from the extracted seismic data. Second, these attributes were transformed into shear wave velocity using fuzzy inference systems (FIS) such as Sugeno's fuzzy inference (SFIS), adaptive neuro-fuzzy inference (ANFIS) and optimized fuzzy inference (OFIS). Finally, a committee fuzzy machine (CFM) based on bat-inspired algorithm (BA) optimization was applied to combine the previous predictions into an enhanced solution. In order to show the geological effect on improving the prediction, the main classes of predominant lithofacies in the reservoir of interest, including shale, sand, and carbonate, were selected and the proposed algorithm was performed with and without the lithofacies constraint. The results showed a good agreement between real and predicted shear wave velocity in the lithofacies-based model compared to the model without lithofacies, especially in sand and carbonate.

  17. De novo protein structure prediction by dynamic fragment assembly and conformational space annealing.

    PubMed

    Lee, Juyong; Lee, Jinhyuk; Sasaki, Takeshi N; Sasai, Masaki; Seok, Chaok; Lee, Jooyoung

    2011-08-01

    Ab initio protein structure prediction is a challenging problem that requires both an accurate energetic representation of a protein structure and an efficient conformational sampling method for successful protein modeling. In this article, we present an ab initio structure prediction method which combines a recently suggested novel way of fragment assembly, dynamic fragment assembly (DFA) and conformational space annealing (CSA) algorithm. In DFA, model structures are scored by continuous functions constructed based on short- and long-range structural restraint information from a fragment library. Here, DFA is represented by the full-atom model by CHARMM with the addition of the empirical potential of DFIRE. The relative contributions between various energy terms are optimized using linear programming. The conformational sampling was carried out with CSA algorithm, which can find low energy conformations more efficiently than simulated annealing used in the existing DFA study. The newly introduced DFA energy function and CSA sampling algorithm are implemented into CHARMM. Test results on 30 small single-domain proteins and 13 template-free modeling targets of the 8th Critical Assessment of protein Structure Prediction show that the current method provides comparable and complementary prediction results to existing top methods. Copyright © 2011 Wiley-Liss, Inc.

  18. Comparative Study of Shrinkage and Non-Shrinkage Model of Food Drying

    NASA Astrophysics Data System (ADS)

    Shahari, N.; Jamil, N.; Rasmani, KA.

    2016-08-01

    A single-phase heat and mass model is commonly used to represent the moisture and temperature distribution during the drying of food. Several effects of the drying process, such as physical and structural changes, have been considered in order to better understand the movement of water and temperature. However, the comparison between heat and mass equations with and without structural change (in terms of shrinkage), which can affect the accuracy of the prediction model, has been little investigated. In this paper, two mathematical models describing heat and mass transfer in food, with and without the assumption of structural change, were analysed. The equations were solved using the finite difference method, with a converted coordinate system introduced into the numerical computations for the shrinkage model. The results show that the shrinkage model predicts a higher temperature at a given time than the non-shrinkage model. Furthermore, the predicted moisture content decreased faster at a given time when the shrinkage effect was included in the model.
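
    For the non-shrinkage case, the moisture part of such a model reduces to a 1-D diffusion equation that can be stepped with an explicit finite-difference scheme, as in this toy sketch (coefficients and boundary values are placeholders):

        import numpy as np

        nx, nt = 51, 2000
        L, D = 0.01, 1e-9                  # half-thickness [m], moisture diffusivity [m^2/s]
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / D               # explicit stability: dt <= dx^2 / (2 D)
        M = np.ones(nx)                    # normalized moisture content
        for _ in range(nt):
            M[1:-1] += D * dt / dx**2 * (M[2:] - 2.0 * M[1:-1] + M[:-2])
            M[0] = M[1]                    # symmetry at the slab center
            M[-1] = 0.1                    # equilibrium moisture at the drying surface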

  19. Intermittency in small-scale turbulence: a velocity gradient approach

    NASA Astrophysics Data System (ADS)

    Meneveau, Charles; Johnson, Perry

    2017-11-01

    Intermittency of small-scale motions is a ubiquitous facet of turbulent flows, and predicting this phenomenon from reduced models derived from first principles remains an important open problem. Here, a multiple-time-scale stochastic model is introduced for the Lagrangian evolution of the full velocity gradient tensor in fluid turbulence at arbitrarily high Reynolds numbers. This low-dimensional model differs fundamentally from prior shell models and other empirically motivated models of intermittency because the nonlinear gradient self-stretching and rotation term A², vital to the energy cascade and intermittency development, is represented exactly from the Navier-Stokes equations. With only one adjustable parameter needed to determine the model's effective Reynolds number, numerical solutions of the resulting set of stochastic differential equations show that the model predicts anomalous scaling for moments of the velocity gradient components and negative derivative skewness. It also predicts signature topological features of the velocity gradient tensor, such as vorticity alignment trends with the eigendirections of the strain rate. This research was made possible by a graduate Fellowship from the National Science Foundation and by a Grant from The Gulf of Mexico Research Initiative.
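
    The exactly represented nonlinear term is the restricted Euler dynamics dA/dt = -(A² - tr(A²)/3 I). A crude sketch of integrating it follows, with simple linear damping standing in for the model's far more elaborate stochastic closure; time step and damping time are toy values.

        import numpy as np

        rng = np.random.default_rng(1)
        A = 0.1 * rng.standard_normal((3, 3))
        A -= np.trace(A) / 3.0 * np.eye(3)     # trace-free: incompressibility
        dt, tau = 1e-3, 1.0                    # time step, damping time (toy values)
        for _ in range(1000):
            A2 = A @ A
            nonlinear = -(A2 - np.trace(A2) / 3.0 * np.eye(3))   # restricted Euler term
            A += dt * (nonlinear - A / tau)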

  20. High-Fidelity Microstructural Characterization and Performance Modeling of Aluminized Composite Propellant

    DOE PAGES

    Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.

    2017-10-27

    Image processing and stereological techniques were used to characterize the heterogeneity of composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and individual components of the composite propellant were determined from a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. In the presented model the presence of aluminum particles within the propellant was introduced. The thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface and aluminum combustion is considered in the gas phase using a single global reaction. In conclusion, properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.

  3. Use of a machine learning framework to predict substance use disorder treatment success.

    PubMed

    Acion, Laura; Kelmansky, Diana; van der Laan, Mark; Sahker, Ethan; Jones, DeShauna; Arndt, Stephan

    2017-01-01

    There are several methods for building prediction models. The wealth of currently available modeling techniques usually forces the researcher to judge, a priori, what will likely be the best method. Super learning (SL) is a methodology that facilitates this decision by combining all identified prediction algorithms pertinent for a particular prediction problem. SL generates a final model that is at least as good as any of the other models considered for predicting the outcome. The overarching aim of this work is to introduce SL to analysts and practitioners. This work compares the performance of logistic regression, penalized regression, random forests, deep learning neural networks, and SL to predict successful substance use disorder (SUD) treatment. A nationwide database including 99,013 SUD treatment patients was used. All algorithms were evaluated using the area under the receiver operating characteristic curve (AUC) in a test sample that was not included in the training sample used to fit the prediction models. AUC for the models ranged between 0.793 and 0.820. SL was superior to all but one of the algorithms compared. An explanation of SL steps is provided. SL is the first step in targeted learning, an analytic framework that yields double robust effect estimation and inference with fewer assumptions than the usual parametric methods. Different aspects of SL depending on the context, its function within the targeted learning framework, and the benefits of this methodology in the addiction field are discussed.
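
    A compact sketch of the stacking idea behind SL using scikit-learn; the actual SuperLearner procedure combines cross-validated predictions from a richer library of learners, and everything here is a synthetic stand-in.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        sl = StackingClassifier(
            estimators=[("lr", LogisticRegression(max_iter=1000)),
                        ("rf", RandomForestClassifier(n_estimators=100))],
            final_estimator=LogisticRegression(),  # meta-learner over CV predictions
        )
        sl.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, sl.predict_proba(X_te)[:, 1])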

  4. Understanding valence-shell electron-pair repulsion (VSEPR) theory using origami molecular models

    NASA Astrophysics Data System (ADS)

    Endah Saraswati, Teguh; Saputro, Sulistyo; Ramli, Murni; Praseptiangga, Danar; Khasanah, Nurul; Marwati, Sri

    2017-01-01

    Valence-shell electron-pair repulsion (VSEPR) theory is conventionally used to predict molecular geometry. However, it is difficult to explore the full implications of this theory by simply drawing chemical structures. Here, we introduce origami modelling as a more accessible approach for exploration of the VSEPR theory. Our technique is simple, readily accessible and inexpensive compared with other sophisticated methods such as computer simulation or commercial three-dimensional modelling kits. This method can be implemented in chemistry education at both the high school and university levels. We discuss the example of a simple molecular structure prediction for ammonia (NH3). Using the origami model, both molecular shape and the scientific justification can be visualized easily. This ‘hands-on’ approach to building molecules will help promote understanding of VSEPR theory.

  5. Improved design of constrained model predictive tracking control for batch processes against unknown uncertainties.

    PubMed

    Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong

    2017-07-01

    In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model that facilitates process state and tracking error augmentation with additional tuning is first proposed. A subsequent controller design is then formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method gives the controller design more degrees of freedom for tuning, so that improved tracking control can be achieved; this is important since uncertainties inevitably exist in practice and cause model/plant mismatch. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains.

    PubMed

    Bulashevska, Alla; Eils, Roland

    2006-06-14

    The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information alone, there is a need for further research to improve prediction accuracy. A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve prediction accuracy for classes with few training sequences and is therefore useful for datasets with an imbalanced distribution of classes. This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with previously reported approaches in terms of prediction accuracy, as empirical results indicate. The code for the software is available upon request.
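
    A toy sketch of the Markov-chain component: estimate one transition matrix per location class from training sequences, then assign a query sequence to the class with the highest log-likelihood (the hierarchical ensemble and Bayesian combination layers are omitted). Alphabet and sequences are illustrative, not real proteins.

        import numpy as np

        alphabet = {c: i for i, c in enumerate("ACDE")}   # toy 4-letter alphabet

        def transition_matrix(seqs, k=4):
            T = np.ones((k, k))                           # Laplace smoothing
            for s in seqs:
                for a, b in zip(s, s[1:]):
                    T[alphabet[a], alphabet[b]] += 1
            return T / T.sum(axis=1, keepdims=True)

        def log_likelihood(seq, T):
            return sum(np.log(T[alphabet[a], alphabet[b]]) for a, b in zip(seq, seq[1:]))

        models = {"cytoplasm": transition_matrix(["ACDE", "AACD", "ADCE"]),
                  "membrane": transition_matrix(["EEDC", "EDCA", "EECD"])}
        query = "ACDD"
        predicted = max(models, key=lambda c: log_likelihood(query, models[c]))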

  7. Evaluation of new collision-pair selection models in DSMC

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Hassan; Roohi, Ehsan

    2017-10-01

    The current paper investigates new collision-pair selection procedures for the direct simulation Monte Carlo (DSMC) method. Collision-partner selection based on a random choice among nearest-neighbor particles and on deterministic selection of the nearest neighbor has already been introduced as a family of schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and the direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made for appropriate test cases, including fluctuations in a homogeneous gas, 2D equilibrium flow, and the Fourier flow problem. Distribution functions for the number of particles and collisions in a cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between the relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model to predict the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For the new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and of avoiding repetitive collisions is investigated via prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in the different test cases.
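
    A minimal sketch of deterministic nearest-neighbor collision-partner selection within one DSMC cell (toy particle data; the new time-spacing and relative-direction criteria introduced in the paper are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(2)
        pos = rng.uniform(0.0, 1.0, size=(20, 3))   # particle positions in a cell
        i = int(rng.integers(20))                   # first collision candidate
        d = np.linalg.norm(pos - pos[i], axis=1)
        d[i] = np.inf                               # exclude self-pairing
        j = int(np.argmin(d))                       # nearest neighbor as partner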

  8. Assessing the Impact of Electrostatic Drag on Processive Molecular Motor Transport.

    PubMed

    Smith, J Darby; McKinley, Scott A

    2018-06-04

    The bidirectional movement of intracellular cargo is usually described as a tug-of-war among opposite-directed families of molecular motors. While tug-of-war models have enjoyed some success, recent evidence suggests underlying motor interactions are more complex than previously understood. For example, these tug-of-war models fail to predict the counterintuitive phenomenon that inhibiting one family of motors can decrease the functionality of opposite-directed transport. In this paper, we use a stochastic differential equations modeling framework to explore one proposed physical mechanism, called microtubule tethering, that could play a role in this "co-dependence" among antagonistic motors. This hypothesis includes the possibility of a trade-off: weakly bound trailing molecular motors can serve as tethers for cargoes and processive motors, thereby enhancing motor-cargo run lengths along microtubules; however, this introduces a cost of processing at a lower mean velocity. By computing the small- and large-time mean-squared displacement of our theoretical model and comparing our results to experimental observations of dynein and its "helper protein" dynactin, we find some supporting evidence for microtubule tethering interactions. We extrapolate these findings to predict how dynein-dynactin might interact with the opposite-directed kinesin motors and introduce a criterion for when the trade-off is beneficial in simple systems.

  9. A consistent transported PDF model for treating differential molecular diffusion

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Zhang, Pei

    2016-11-01

    Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.

  10. Multiscale Cancer Modeling

    PubMed Central

    Macklin, Paul; Cristini, Vittorio

    2013-01-01

    Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight on the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirazi, M.A.; Davis, L.R.

    To obtain improved prediction of heated plume characteristics from a surface jet, an integral analysis computer model was modified, and a comprehensive set of field and laboratory data available from the literature was gathered, analyzed, and correlated for estimating the magnitude of certain coefficients that are normally introduced in these analyses to achieve closure. The parameters so estimated include the coefficients for entrainment, turbulent exchange, drag, and shear. Since considerable scatter appeared in the data, even after appropriate subgrouping to narrow the influence of various flow conditions, only statistical procedures could be applied to find the best fit. This and other analyses of its type have been widely used in industry and government for the prediction of thermal plumes from steam power plants. Although the present model has many shortcomings, a recent independent and exhaustive assessment of such predictions revealed that, in comparison with other analyses of its type, the present analysis predicts field situations more successfully.

  12. Estimation of brain network ictogenicity predicts outcome from epilepsy surgery

    NASA Astrophysics Data System (ADS)

    Goodfellow, M.; Rummel, C.; Abela, E.; Richardson, M. P.; Schindler, K.; Terry, J. R.

    2016-07-01

    Surgery is a valuable option for pharmacologically intractable epilepsy. However, significant post-operative improvements are not always attained. This is due in part to our incomplete understanding of the seizure generating (ictogenic) capabilities of brain networks. Here we introduce an in silico, model-based framework to study the effects of surgery within ictogenic brain networks. We find that factors conventionally determining the region of tissue to resect, such as the location of focal brain lesions or the presence of epileptiform rhythms, do not necessarily predict the best resection strategy. We validate our framework by analysing electrocorticogram (ECoG) recordings from patients who have undergone epilepsy surgery. We find that when post-operative outcome is good, model predictions for optimal strategies align better with the actual surgery undertaken than when post-operative outcome is poor. Crucially, this allows the prediction of optimal surgical strategies and the provision of quantitative prognoses for patients undergoing epilepsy surgery.

  13. Improved fuzzy PID controller design using predictive functional control structure.

    PubMed

    Wang, Yuzhong; Jin, Qibing; Zhang, Ridong

    2017-11-01

    In the conventional PID scheme, the overall control performance may be unsatisfactory due to limited degrees of freedom under various kinds of uncertainty. To overcome this disadvantage, a novel PID control method that inherits the advantages of fuzzy PID control and predictive functional control (PFC) is presented and verified on the temperature model of a coke furnace. Based on the framework of PFC, the prediction of future process behavior is first obtained using the current process input signal. Then, fuzzy PID control based on the multi-step prediction is introduced to acquire the optimal control law. Finally, the case study on a temperature model of a coke furnace shows the effectiveness of the fuzzy PID control scheme when compared with conventional PID control and fuzzy self-adaptive PID control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Life extending control: An interdisciplinary engineering thrust

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Merrill, Walter C.

    1991-01-01

    The concept of Life Extending Control (LEC) is introduced. Possible extensions to the cyclic damage prediction approach are presented based on the identification of a model from elementary forms. Several candidate elementary forms are presented. These extensions will result in a continuous or differential form of the damage prediction model. Two possible approaches to the LEC based on the existing cyclic damage prediction method, the measured variables LEC and the estimated variables LEC, are defined. Here, damage estimates or measurements would be used directly in the LEC. A simple hydraulic actuator driven position control system example is used to illustrate the main ideas behind LEC. Results from a simple hydraulic actuator example demonstrate that overall system performance (dynamic plus life) can be maximized by accounting for component damage in the control design.

  15. The statistical analysis of multi-environment data: modeling genotype-by-environment interaction and its genetic basis

    PubMed Central

    Malosetti, Marcos; Ribaut, Jean-Marcel; van Eeuwijk, Fred A.

    2013-01-01

    Genotype-by-environment interaction (GEI) is an important phenomenon in plant breeding. This paper presents a series of models for describing, exploring, understanding, and predicting GEI. All models depart from a two-way table of genotype by environment means. First, a series of descriptive and explorative models/approaches are presented: Finlay–Wilkinson model, AMMI model, GGE biplot. All of these approaches have in common that they merely try to group genotypes and environments and do not use other information than the two-way table of means. Next, factorial regression is introduced as an approach to explicitly introduce genotypic and environmental covariates for describing and explaining GEI. Finally, QTL modeling is presented as a natural extension of factorial regression, where marker information is translated into genetic predictors. Tests for regression coefficients corresponding to these genetic predictors are tests for main effect QTL expression and QTL by environment interaction (QEI). QTL models for which QEI depends on environmental covariables form an interesting model class for predicting GEI for new genotypes and new environments. For realistic modeling of genotypic differences across multiple environments, sophisticated mixed models are necessary to allow for heterogeneity of genetic variances and correlations across environments. The use and interpretation of all models is illustrated by an example data set from the CIMMYT maize breeding program, containing environments differing in drought and nitrogen stress. To help readers to carry out the statistical analyses, GenStat® programs, 15th Edition and Discovery® version, are presented as “Appendix.” PMID:23487515

  16. A glucose model based on support vector regression for the prediction of hypoglycemic events under free-living conditions.

    PubMed

    Georga, Eleni I; Protopappas, Vasilios C; Ardigò, Diego; Polyzos, Demosthenes; Fotiadis, Dimitrios I

    2013-08-01

    The prevention of hypoglycemic events is of paramount importance in the daily management of insulin-treated diabetes. The use of short-term prediction algorithms of the subcutaneous (s.c.) glucose concentration may contribute significantly toward this direction. The literature suggests that, although the recent glucose profile is a prominent predictor of hypoglycemia, the overall patient's context greatly impacts its accurate estimation. The objective of this study is to evaluate the performance of a support vector regression (SVR) s.c. glucose model on hypoglycemia prediction. We extend our SVR model to predict separately the nocturnal events during sleep and the non-nocturnal (i.e., diurnal) ones over 30-min and 60-min horizons using information on recent glucose profile, meals, insulin intake, and physical activities for a hypoglycemic threshold of 70 mg/dL. We also introduce herein additional variables accounting for recurrent nocturnal hypoglycemia due to antecedent hypoglycemia, exercise, and sleep. SVR predictions are compared with those from two other machine learning techniques. The method is assessed on a dataset of 15 patients with type 1 diabetes under free-living conditions. Nocturnal hypoglycemic events are predicted with 94% sensitivity for both horizons and with time lags of 5.43 min and 4.57 min, respectively. As concerns the diurnal events, when physical activities are not considered, the sensitivity is 92% and 96% for a 30-min and 60-min horizon, respectively, with both time lags being less than 5 min. However, when such information is introduced, the diurnal sensitivity decreases by 8% and 3%, respectively. Both nocturnal and diurnal predictions show a high (>90%) precision. Results suggest that hypoglycemia prediction using SVR can be accurate and performs better in most diurnal and nocturnal cases compared with other techniques. It is advised that the problem of hypoglycemia prediction should be handled differently for nocturnal and diurnal periods as regards input variables and interpretation of results.
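
    A toy sketch of the core regression step: SVR trained on lagged glucose samples to predict 30 minutes ahead, with an alarm when the prediction crosses the 70 mg/dL threshold. The full model also uses meal, insulin, activity, and sleep inputs; the trace below is synthetic.

        import numpy as np
        from sklearn.svm import SVR

        glucose = 100 + 20 * np.sin(np.linspace(0, 20, 400))   # toy CGM trace, 5-min samples
        lags, horizon = 6, 6                                   # 30 min history, 30 min ahead
        X = np.array([glucose[i:i + lags]
                      for i in range(len(glucose) - lags - horizon)])
        y = glucose[lags + horizon:]
        model = SVR(kernel="rbf", C=10.0).fit(X, y)
        hypo_alarm = bool(model.predict(X[-1:]) < 70.0)        # 70 mg/dL threshold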

  17. Predicting remaining life by fusing the physics of failure modeling with diagnostics

    NASA Astrophysics Data System (ADS)

    Kacprzynski, G. J.; Sarlashkar, A.; Roemer, M. J.; Hess, A.; Hardman, B.

    2004-03-01

    Technology that enables failure prediction of critical machine components (prognostics) has the potential to significantly reduce maintenance costs and increase availability and safety. This article summarizes a research effort funded through the U.S. Defense Advanced Research Projects Agency and Naval Air System Command aimed at enhancing prognostic accuracy through more advanced physics-of-failure modeling and intelligent utilization of relevant diagnostic information. H-60 helicopter gear is used as a case study to introduce both stochastic sub-zone crack initiation and three-dimensional fracture mechanics lifing models along with adaptive model updating techniques for tuning key failure mode variables at a local material/damage site based on fused vibration features. The overall prognostic scheme is aimed at minimizing inherent modeling and operational uncertainties via sensed system measurements that evolve as damage progresses.

  18. Application of various FLD modelling approaches

    NASA Astrophysics Data System (ADS)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  19. Assessment and prediction of air quality using fuzzy logic and autoregressive models

    NASA Astrophysics Data System (ADS)

    Carbajal-Hernández, José Juan; Sánchez-Fernández, Luis P.; Carrasco-Ochoa, Jesús A.; Martínez-Trinidad, José Fco.

    2012-12-01

    In recent years, artificial intelligence methods have been used for the treatment of environmental problems. This work presents two models for the assessment and prediction of air quality. First, we develop a new computational model for air quality assessment in order to evaluate toxic compounds that can harm sensitive people in urban areas, affecting their normal activities. In this model we propose a Sigma operator to statistically assess air quality parameters using their historical data and to determine their negative impact on air quality based on toxicity limits, frequency averages and deviations of toxicological tests. We also introduce a fuzzy inference system that classifies the parameters through a reasoning process and integrates them into an air quality index describing the pollution level in five stages: excellent, good, regular, bad and dangerous. The second model proposed in this work predicts air quality concentrations using an autoregressive model, providing a predicted air quality index based on the fuzzy inference system previously developed. Using data from the Mexico City Atmospheric Monitoring System, we perform a comparison among air quality indices developed by environmental agencies and similar models. Our results show that our models are an appropriate tool for assessing site pollution and for providing guidance to improve contingency actions in urban areas.
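
    A minimal sketch of the forecasting half of this approach, an autoregressive model fitted by ordinary least squares; the synthetic ozone series, the order p and the helper names are illustrative assumptions:

        # Minimal AR(p) sketch for one-step-ahead concentration forecasting.
        import numpy as np

        def fit_ar(series, p):
            # rows t = p..n-1; regressors series[t-1], ..., series[t-p]
            X = np.column_stack([series[p - k - 1:len(series) - k - 1]
                                 for k in range(p)])
            X = np.column_stack([np.ones(len(X)), X])      # intercept
            coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
            return coef

        def forecast_next(series, coef):
            p = len(coef) - 1
            return coef[0] + coef[1:] @ series[-1:-p - 1:-1]  # newest lag first

        rng = np.random.default_rng(1)
        o3 = 60 + np.cumsum(rng.normal(0, 2, 500))          # synthetic hourly ozone
        coef = fit_ar(o3, p=3)
        print("next-hour forecast:", forecast_next(o3, coef))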

  20. Bayesian Maximum Entropy Integration of Ozone Observations and Model Predictions: A National Application.

    PubMed

    Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William

    2016-04-19

    To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and for the first time accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius rv was performed, and the R2 between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario using ozone observations only, in contrast with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach is able to extract more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the R2 increase is over 12 times larger for the DM8A and over 3.5 times larger for the D24A ozone concentrations.

  1. Study on Development of 1D-2D Coupled Real-time Urban Inundation Prediction model

    NASA Astrophysics Data System (ADS)

    Lee, Seungsoo

    2017-04-01

    In recent years, abnormal weather conditions due to climate change have caused damage around the world, so countermeasures for flood defense are an urgent task. In this research, a study on the development of a 1D-2D coupled real-time urban inundation prediction model using predicted precipitation data based on remote sensing technology is conducted. A one-dimensional (1D) sewerage system analysis model, introduced by Lee et al. (2015), is used to simulate inlet and overflow phenomena by interacting with surface flow as well as flows in conduits. A two-dimensional (2D) grid mesh refinement method is applied to depict road networks with an effective calculation time. The 2D surface model is coupled with the 1D sewerage analysis model in order to consider bi-directional flow between the two. A parallel computing method, OpenMP, is also applied to reduce calculation time. The model is evaluated by application to the 25 August 2014 extreme rainfall event, which caused severe inundation damage in Busan, Korea. The Oncheoncheon basin is selected as the study basin, and observed radar data are taken as the predicted rainfall data. The model shows acceptable calculation speed and accuracy. It is therefore expected that the model can be used in a real-time urban inundation forecasting system to minimize damage.

  2. A Lagrangian Transport Eulerian Reaction Spatial (LATERS) Markov Model for Prediction of Effective Bimolecular Reactive Transport

    NASA Astrophysics Data System (ADS)

    Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi

    2017-11-01

    Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, a Lagrangian framework can have benefits, while in other cases an Eulerian one has advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.
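
    A toy sketch of the Spatial Markov idea the transport side builds on: particle travel times over successive fixed-length steps are correlated through a transition matrix between velocity classes. The two-class discretization and all rates below are invented for illustration:

        # Toy Spatial Markov sketch: step travel times correlated via a
        # transition matrix between "fast" and "slow" states. Numbers invented.
        import numpy as np

        rng = np.random.default_rng(2)
        dt_state = np.array([1.0, 4.0])          # mean travel time per step
        P = np.array([[0.8, 0.2],                # P[i, j]: prob. of state j next
                      [0.3, 0.7]])

        n_particles, n_steps = 10_000, 20
        state = rng.integers(0, 2, n_particles)  # initial state per particle
        arrival = np.zeros(n_particles)
        for _ in range(n_steps):
            arrival += rng.exponential(dt_state[state])
            # sample next state conditioned on the current one (Markov step)
            state = np.array([rng.choice(2, p=P[s]) for s in state])

        print("mean arrival time:", arrival.mean())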

  3. Formation enthalpies for transition metal alloys using machine learning

    NASA Astrophysics Data System (ADS)

    Ubaru, Shashanka; Miedlar, Agnieszka; Saad, Yousef; Chelikowsky, James R.

    2017-06-01

    The enthalpy of formation is an important thermodynamic property. Developing fast and accurate methods for its prediction is of practical interest in a variety of applications. Materials informatics techniques based on machine learning have recently been introduced in the literature as an inexpensive means of exploiting materials data, and can be used to examine a variety of thermodynamic properties. We investigate the use of such machine learning tools for predicting the formation enthalpies of binary intermetallic compounds that contain at least one transition metal. We consider certain easily available properties of the constituent elements, complemented by some basic properties of the compounds, to predict the formation enthalpies. We show how choosing these properties (input features) based on a literature study (using prior physics knowledge) appears to outperform machine-learning-based feature selection methods such as sensitivity analysis and LASSO (least absolute shrinkage and selection operator). A nonlinear kernel-based support vector regression method is employed to perform the predictions. The predictive ability of our model is illustrated via several experiments on a dataset containing 648 binary alloys. We train and validate the model using formation enthalpies calculated with the Miedema model, a popular semiempirical model for predicting the formation enthalpies of metal alloys.
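
    A hedged sketch contrasting the two feature-selection routes the abstract compares, LASSO against a physics-informed preselected subset, each feeding a kernel SVR; the data and the "physics" subset are synthetic stand-ins:

        # Contrast LASSO feature selection with a preselected subset; both
        # feed a kernel SVR. Data and the "physics" indices are synthetic.
        import numpy as np
        from sklearn.linear_model import LassoCV
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(648, 12))                 # e.g., elemental descriptors
        y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.1, 648)

        lasso = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
        physics = [0, 3, 7]                            # hypothetical literature set

        for name, cols in [("lasso", selected), ("physics", physics)]:
            score = cross_val_score(SVR(kernel="rbf", C=10.0),
                                    X[:, cols], y, cv=5).mean()
            print(name, "R2:", round(score, 3))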

  4. How ecology shapes exploitation: a framework to predict the behavioural response of human and animal foragers along exploration-exploitation trade-offs.

    PubMed

    Monk, Christopher T; Barbier, Matthieu; Romanczuk, Pawel; Watson, James R; Alós, Josep; Nakayama, Shinnosuke; Rubenstein, Daniel I; Levin, Simon A; Arlinghaus, Robert

    2018-06-01

    Understanding how humans and other animals behave in response to changes in their environments is vital for predicting population dynamics and the trajectory of coupled social-ecological systems. Here, we present a novel framework for identifying emergent social behaviours in foragers (including humans engaged in fishing or hunting) in predator-prey contexts based on the exploration difficulty and exploitation potential of a renewable natural resource. A qualitative framework is introduced that predicts when foragers should behave territorially, search collectively, act independently or switch among these states. To validate it, we derived quantitative predictions from two models of different structure: a generic mathematical model, and a lattice-based evolutionary model emphasising exploitation and exclusion costs. These models independently identified that the exploration difficulty and exploitation potential of the natural resource controls the social behaviour of resource exploiters. Our theoretical predictions were finally compared to a diverse set of empirical cases focusing on fisheries and aquatic organisms across a range of taxa, substantiating the framework's predictions. Understanding social behaviour for given social-ecological characteristics has important implications, particularly for the design of governance structures and regulations to move exploited systems, such as fisheries, towards sustainability. Our framework provides concrete steps in this direction. © 2018 John Wiley & Sons Ltd/CNRS.

  5. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
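
    As a sketch of the underlying idea, an information measure rather than linear cross-correlation used to score candidate lags, here with scikit-learn's mutual-information estimator standing in for the paper's XEF; signals are synthetic:

        # Information-based lag selection: score each candidate lag by its
        # mutual information with the target instead of by cross-correlation.
        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(4)
        x = rng.normal(size=1000)
        y = np.roll(x, 5) ** 2 + rng.normal(0, 0.1, 1000)  # nonlinear, lag 5

        max_lag = 10
        lags = np.column_stack([np.roll(x, k)
                                for k in range(1, max_lag + 1)])[max_lag:]
        mi = mutual_info_regression(lags, y[max_lag:])
        print("best lag:", int(np.argmax(mi)) + 1)         # expect 5

        # A plain cross-correlation (XCF) scan would miss this dependence,
        # since corr(x, x**2) is near zero for a symmetric distribution.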

  6. A link prediction method for heterogeneous networks based on BP neural network

    NASA Astrophysics Data System (ADS)

    Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu

    2018-04-01

    Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as various complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendations and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links for heterogeneous networks. More specifically, the concept of the meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, the solution algorithm of the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on datasets from a gene-disease network and a combat network are conducted to verify the effectiveness and feasibility of the proposed MPBP. The results show that MPBP performs very well and is superior to the baseline methods.
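
    A minimal sketch of the supervised step: a three-layer BP (backpropagation) network classifying candidate links from meta-path feature vectors; the Poisson features and the synthetic labels are placeholders for real meta-path counts:

        # Three-layer BP network on meta-path-style features (placeholders).
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        X = rng.poisson(3.0, size=(2000, 6)).astype(float)  # meta-path counts
        w = np.array([0.8, -0.5, 0.3, 0.0, 0.6, -0.2])
        p = 1 / (1 + np.exp(-(X - 3) @ w))
        y = rng.random(2000) < p                            # 1 = link exists

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0).fit(X_tr, y_tr)
        print("test accuracy:", round(clf.score(X_te, y_te), 3))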

  7. Methods of Information Geometry to model complex shapes

    NASA Astrophysics Data System (ADS)

    De Sanctis, A.; Gattone, S. A.

    2016-09-01

    In this paper, a new statistical method to model patterns emerging in complex systems is proposed. A framework for shape analysis of 2-dimensional landmark data is introduced, in which each landmark is represented by a bivariate Gaussian distribution. From information geometry we know that the Fisher-Rao metric endows the statistical manifold of parameters of a family of probability distributions with a Riemannian metric. This approach thus allows one to reconstruct the intermediate steps in the evolution between observed shapes by computing the geodesic, with respect to the Fisher-Rao metric, between the corresponding distributions. Furthermore, the geodesic path can be used for shape predictions. As an application, we study the evolution of the rat skull shape. A future application in ophthalmology is introduced.
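
    For intuition, in the univariate case the Fisher-Rao geometry of normal distributions is hyperbolic and the geodesic distance has a closed form; the sketch below uses that simplification, whereas the paper itself works with bivariate Gaussians:

        # Closed-form Fisher-Rao distance between two univariate normals
        # N(mu, sigma^2): up to scaling, the statistical manifold is the
        # hyperbolic upper half-plane with coordinates (mu / sqrt(2), sigma).
        import math

        def fisher_rao_normal(mu1, s1, mu2, s2):
            num = (mu1 - mu2) ** 2 / 2 + (s1 - s2) ** 2
            return math.sqrt(2) * math.acosh(1 + num / (2 * s1 * s2))

        print(fisher_rao_normal(0.0, 1.0, 0.0, 2.0))   # pure scale change
        print(fisher_rao_normal(0.0, 1.0, 3.0, 1.0))   # pure location change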

  8. Comparisons Between Experimental and Semi-theoretical Cutting Forces of CCS Disc Cutters

    NASA Astrophysics Data System (ADS)

    Xia, Yimin; Guo, Ben; Tan, Qing; Zhang, Xuhui; Lan, Hao; Ji, Zhiyong

    2018-05-01

    This paper focuses on comparisons between the experimental and semi-theoretical forces of CCS disc cutters acting on different rocks. The experimental forces obtained from LCM tests were used to evaluate the prediction accuracy of a semi-theoretical CSM model. The results show that the CSM model reliably predicts the normal forces acting on red sandstone and granite, but underestimates the normal forces acting on marble. Some additional LCM test data from the literature were collected to further explore the ability of the CSM model to predict the normal forces acting on rocks of different strengths. The CSM model underestimates the normal forces acting on soft rocks, semi-hard rocks and hard rocks by approximately 38, 38 and 10%, respectively, but very accurately predicts those acting on very hard and extremely hard rocks. A calibration factor is introduced to modify the normal forces estimated by the CSM model. The overall trend of the calibration factor is characterized by an exponential decrease with increasing rock uniaxial compressive strength. The mean fitting ratios between the normal forces estimated by the modified CSM model and the experimental normal forces acting on soft rocks, semi-hard rocks and hard rocks are 1.076, 0.879 and 1.013, respectively. The results indicate that the prediction accuracy and the reliability of the CSM model have been improved.
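
    A sketch of the calibration idea: fit an exponentially decaying factor k(UCS) and apply it multiplicatively to the CSM estimate. The data points and fitted form below are illustrative assumptions, not the paper's fit:

        # Fit an exponentially decaying calibration factor k(UCS) with scipy;
        # the data points are invented, not the paper's measurements.
        import numpy as np
        from scipy.optimize import curve_fit

        ucs = np.array([30., 60., 90., 120., 150., 200.])   # MPa (hypothetical)
        k = np.array([1.60, 1.45, 1.25, 1.12, 1.05, 1.00])  # measured/CSM ratio

        def model(x, a, b, c):
            return a * np.exp(-b * x) + c

        (a, b, c), _ = curve_fit(model, ucs, k, p0=(1.0, 0.01, 1.0))
        # Corrected normal force: F = k(UCS) * F_CSM
        print(f"k(UCS) = {a:.3f} * exp(-{b:.4f} * UCS) + {c:.3f}")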

  9. Development of GP and GEP models to estimate an environmental issue induced by blasting operation.

    PubMed

    Faradonbeh, Roohollah Shirani; Hasanipanah, Mahdi; Amnieh, Hassan Bakhshandeh; Armaghani, Danial Jahed; Monjezi, Masoud

    2018-05-21

    Air overpressure (AOp) is one of the most adverse effects induced by blasting in surface mines and civil projects. Thus, proper evaluation and estimation of AOp is important for minimizing the environmental problems resulting from blasting. The main aim of this study is to estimate the AOp produced by blasting operations in the Miduk copper mine, Iran, by developing two artificial intelligence models, i.e., genetic programming (GP) and gene expression programming (GEP). The accuracy of the GP and GEP models is then compared to multiple linear regression (MLR) and three empirical models. For this purpose, 92 blasting events were investigated, and the AOp values were carefully measured. Moreover, in each operation, the values of maximum charge per delay and distance from blast points, two parameters that strongly influence AOp, were measured. The performance of the predictive models was then evaluated in terms of variance account for (VAF), coefficient of determination (CoD), and root mean square error (RMSE). Finally, it was found that the GEP, with a VAF of 94.12%, CoD of 0.941, and RMSE of 0.06, is a more precise model than the other predictive models for AOp prediction in the Miduk copper mine, and it can be introduced as a new powerful tool for estimating the AOp resulting from blasting.
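
    The three scores used to rank the models are standard; a compact sketch of how they are computed (the example arrays are invented placeholders):

        # VAF, CoD (R2) and RMSE written out explicitly.
        import numpy as np

        def vaf(y, yhat):            # variance accounted for, in percent
            return 100.0 * (1.0 - np.var(y - yhat) / np.var(y))

        def cod(y, yhat):            # coefficient of determination (R2)
            ss_res = np.sum((y - yhat) ** 2)
            ss_tot = np.sum((y - np.mean(y)) ** 2)
            return 1.0 - ss_res / ss_tot

        def rmse(y, yhat):
            return float(np.sqrt(np.mean((y - yhat) ** 2)))

        y = np.array([120., 125., 118., 130., 127.])     # measured AOp, invented
        yhat = np.array([121., 124., 120., 128., 126.])  # predictions, invented
        print(vaf(y, yhat), cod(y, yhat), rmse(y, yhat))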

  10. A comparison of two adaptive multivariate analysis methods (PLSR and ANN) for winter wheat yield forecasting using Landsat-8 OLI images

    NASA Astrophysics Data System (ADS)

    Chen, Pengfei; Jing, Qi

    2017-02-01

    The assumption that a non-linear method is more reasonable than a linear method when canopy reflectance is used to establish a yield prediction model was proposed and tested in this study. For this purpose, partial least squares regression (PLSR) and artificial neural networks (ANN), representing linear and non-linear analysis methods respectively, were applied and compared for wheat yield prediction. Multi-period Landsat-8 OLI images were collected at two different wheat growth stages, and a field campaign was conducted to obtain grain yields at selected sampling sites in 2014. The field data were divided into a calibration database and a testing database. Using the calibration data, a cross-validation concept was introduced for the PLSR and ANN model construction to prevent over-fitting. All models were tested using the testing data. The ANN yield-prediction model produced R2, RMSE and RMSE% values of 0.61, 979 kg ha-1, and 10.38%, respectively, in the testing phase, performing better than the PLSR yield-prediction model, which produced R2, RMSE, and RMSE% values of 0.39, 1211 kg ha-1, and 12.84%, respectively. The non-linear method was therefore suggested as the better method for yield prediction.
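
    A hedged sketch of the study design, comparing a linear (PLSR) and a non-linear (ANN) regressor under cross-validation; the reflectance matrix and yields are synthetic placeholders:

        # Cross-validated PLSR vs ANN comparison on synthetic reflectances.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        X = rng.uniform(0, 1, size=(120, 8))       # multi-date band reflectances
        y = 5000 * np.tanh(2 * X[:, 0]) + 800 * X[:, 3] + rng.normal(0, 200, 120)

        for name, est in [("PLSR", PLSRegression(n_components=4)),
                          ("ANN", MLPRegressor(hidden_layer_sizes=(16,),
                                               max_iter=5000, random_state=0))]:
            r2 = cross_val_score(est, X, y, cv=5, scoring="r2").mean()
            print(name, "cross-validated R2:", round(r2, 2))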

  11. Constrained Active Learning for Anchor Link Prediction Across Multiple Heterogeneous Social Networks

    PubMed Central

    Zhu, Junxing; Zhang, Jiawei; Wu, Quanyuan; Jia, Yan; Zhou, Bin; Wei, Xiaokai; Yu, Philip S.

    2017-01-01

    Nowadays, people are usually involved in multiple heterogeneous social networks simultaneously. Discovering the anchor links between the accounts owned by the same users across different social networks is crucial for many important inter-network applications, e.g., cross-network link transfer and cross-network recommendation. Many different supervised models have been proposed to predict anchor links so far, but they are effective only when the labeled anchor links are abundant. However, in real scenarios, such a requirement can hardly be met and most anchor links are unlabeled, since manually labeling the inter-network anchor links is quite costly and tedious. To overcome such a problem and utilize the numerous unlabeled anchor links in model building, in this paper, we introduce the active learning based anchor link prediction problem. Different from the traditional active learning problems, due to the one-to-one constraint on anchor links, if an unlabeled anchor link a=(u,v) is identified as positive (i.e., existing), all the other unlabeled anchor links incident to account u or account v will be negative (i.e., non-existing) automatically. Viewed from this perspective, asking for the labels of potential positive anchor links in the unlabeled set will be rewarding in the active anchor link prediction problem. Various novel anchor link information gain measures are defined in this paper, based on which several constrained active anchor link prediction methods are introduced. Extensive experiments have been done on real-world social network datasets to compare the performance of these methods with state-of-the-art anchor link prediction methods. The experimental results show that the proposed Mean-entropy-based Constrained Active Learning (MC) method can outperform other methods with significant advantages. PMID:28771201

  12. Constrained Active Learning for Anchor Link Prediction Across Multiple Heterogeneous Social Networks.

    PubMed

    Zhu, Junxing; Zhang, Jiawei; Wu, Quanyuan; Jia, Yan; Zhou, Bin; Wei, Xiaokai; Yu, Philip S

    2017-08-03

    Nowadays, people are usually involved in multiple heterogeneous social networks simultaneously. Discovering the anchor links between the accounts owned by the same users across different social networks is crucial for many important inter-network applications, e.g., cross-network link transfer and cross-network recommendation. Many different supervised models have been proposed to predict anchor links so far, but they are effective only when the labeled anchor links are abundant. However, in real scenarios, such a requirement can hardly be met and most anchor links are unlabeled, since manually labeling the inter-network anchor links is quite costly and tedious. To overcome such a problem and utilize the numerous unlabeled anchor links in model building, in this paper, we introduce the active learning based anchor link prediction problem. Different from the traditional active learning problems, due to the one-to-one constraint on anchor links, if an unlabeled anchor link a = (u, v) is identified as positive (i.e., existing), all the other unlabeled anchor links incident to account u or account v will be negative (i.e., non-existing) automatically. Viewed from this perspective, asking for the labels of potential positive anchor links in the unlabeled set will be rewarding in the active anchor link prediction problem. Various novel anchor link information gain measures are defined in this paper, based on which several constrained active anchor link prediction methods are introduced. Extensive experiments have been done on real-world social network datasets to compare the performance of these methods with state-of-the-art anchor link prediction methods. The experimental results show that the proposed Mean-entropy-based Constrained Active Learning (MC) method can outperform other methods with significant advantages.

  13. Electromagnetic field strength prediction in an urban environment: A useful tool for the planning of LMSS

    NASA Technical Reports Server (NTRS)

    Vandooren, G. A. J.; Herben, M. H. A. J.; Brussaard, G.; Sforza, M.; Poiaresbaptista, J. P. V.

    1993-01-01

    A model for the prediction of the electromagnetic field strength in an urban environment is presented. The ray model, which is based on the Uniform Theory of Diffraction (UTD), includes the effects of the non-perfect conductivity of the obstacles and their surface roughness. The urban environment is transformed into a list of standardized obstacles that have various shapes and material properties. The model is capable of accurately predicting the field strength in the urban environment by calculating different types of wave contributions such as reflected, edge-diffracted and corner-diffracted waves, and combinations thereof. Also, antenna weight functions are introduced to simulate the spatial filtering by the mobile antenna. Communication channel parameters such as signal fading, time delay profiles, Doppler shifts and delay-Doppler spectra can be derived from the ray-tracing procedure using post-processing routines. The model has been tested against results from scaled measurements at 50 GHz and proves to be accurate.

  14. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing a rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; the multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Based on various statistics, including Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that the proposed ANN model generates very good predictions for Mumbai and Kolkata. The results are supported by the linearly distributed coordinates in the scatterplots.

  15. Correlation Imaging Reveals Specific Crowding Dynamics of Kinesin Motor Proteins

    NASA Astrophysics Data System (ADS)

    Miedema, Daniël M.; Kushwaha, Vandana S.; Denisov, Dmitry V.; Acar, Seyda; Nienhuis, Bernard; Peterman, Erwin J. G.; Schall, Peter

    2017-10-01

    Molecular motor proteins fulfill the critical function of transporting organelles and other building blocks along the biopolymer network of the cell's cytoskeleton, but crowding effects are believed to crucially affect this motor-driven transport due to motor interactions. Physical transport models, like the paradigmatic, totally asymmetric simple exclusion process (TASEP), have been used to predict these crowding effects based on simple exclusion interactions, but verifying them in experiments remains challenging. Here, we introduce a correlation imaging technique to precisely measure the motor density, velocity, and run length along filaments under crowding conditions, enabling us to elucidate the physical nature of crowding and test TASEP model predictions. Using the kinesin motor proteins kinesin-1 and OSM-3, we identify crowding effects in qualitative agreement with TASEP predictions, and we achieve excellent quantitative agreement by extending the model with motor-specific interaction ranges and crowding-dependent detachment probabilities. These results confirm the applicability of basic nonequilibrium models to the intracellular transport and highlight motor-specific strategies to deal with crowding.
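
    For readers unfamiliar with the baseline model, a minimal open-boundary TASEP simulation is sketched below; the lattice size and the injection/exit rates are arbitrary illustration values, without the motor-specific extensions introduced in the paper:

        # Minimal TASEP: motors enter at rate alpha, hop forward at unit rate
        # under simple exclusion, and exit at rate beta (random-sequential
        # updates). Parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(7)
        L, alpha, beta, sweeps = 200, 0.3, 0.5, 5000
        lattice = np.zeros(L, dtype=bool)

        for _ in range(sweeps * L):
            i = rng.integers(-1, L)                       # -1 = injection attempt
            if i == -1:
                if not lattice[0] and rng.random() < alpha:
                    lattice[0] = True
            elif i == L - 1:
                if lattice[-1] and rng.random() < beta:
                    lattice[-1] = False
            elif lattice[i] and not lattice[i + 1]:
                lattice[i], lattice[i + 1] = False, True  # exclusion-limited hop

        print("steady-state density:", lattice.mean())    # ~alpha in this phase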

  16. Analytical and numerical techniques for predicting the interfacial stresses of wavy carbon nanotube/polymer composites

    NASA Astrophysics Data System (ADS)

    Yazdchi, K.; Salehi, M.; Shokrieh, M. M.

    2009-03-01

    By introducing a new simplified 3D representative volume element for wavy carbon nanotubes, an analytical model is developed to study the stress transfer in single-walled carbon nanotube-reinforced polymer composites. Based on the pull-out modeling technique, the effects of waviness, aspect ratio, and Poisson ratio on the axial and interfacial shear stresses are analyzed in detail. The results of the present analytical model are in good agreement with corresponding results for straight nanotubes.

  17. Testing the generality of above-ground biomass allometry across plant functional types at the continent scale.

    PubMed

    Paul, Keryn I; Roxburgh, Stephen H; Chave, Jerome; England, Jacqueline R; Zerihun, Ayalsew; Specht, Alison; Lewis, Tom; Bennett, Lauren T; Baker, Thomas G; Adams, Mark A; Huxtable, Dan; Montagu, Kelvin D; Falster, Daniel S; Feller, Mike; Sochacki, Stan; Ritson, Peter; Bastin, Gary; Bartle, John; Wildy, Dan; Hobbs, Trevor; Larmour, John; Waterworth, Rob; Stewart, Hugh T L; Jonson, Justin; Forrester, David I; Applegate, Grahame; Mendham, Daniel; Bradford, Matt; O'Grady, Anthony; Green, Daryl; Sudmeyer, Rob; Rance, Stan J; Turner, John; Barton, Craig; Wenk, Elizabeth H; Grove, Tim; Attiwill, Peter M; Pinkard, Elizabeth; Butler, Don; Brooksbank, Kim; Spencer, Beren; Snowdon, Peter; O'Brien, Nick; Battaglia, Michael; Cameron, David M; Hamilton, Steve; McAuthur, Geoff; Sinclair, Jenny

    2016-06-01

    Accurate ground-based estimation of the carbon stored in terrestrial ecosystems is critical to quantifying the global carbon budget. Allometric models provide cost-effective methods for biomass prediction. But do such models vary with ecoregion or plant functional type? We compiled 15 054 measurements of individual tree or shrub biomass from across Australia to examine the generality of allometric models for above-ground biomass prediction. This provided a robust case study because Australia includes ecoregions ranging from arid shrublands to tropical rainforests, and has a rich history of biomass research, particularly in planted forests. Regardless of ecoregion, for five broad categories of plant functional type (shrubs; multistemmed trees; trees of the genus Eucalyptus and closely related genera; other trees of high wood density; and other trees of low wood density), relationships between biomass and stem diameter were generic. Simple power-law models explained 84-95% of the variation in biomass, with little improvement in model performance when other plant variables (height, bole wood density), or site characteristics (climate, age, management) were included. Predictions of stand-based biomass from allometric models of varying levels of generalization (species-specific, plant functional type) were validated using whole-plot harvest data from 17 contrasting stands (range: 9-356 Mg ha-1). Losses in efficiency of prediction were <1% if generalized models were used in place of species-specific models. Furthermore, application of generalized multispecies models did not introduce significant bias in biomass prediction in 92% of the 53 species tested. Further, overall efficiency of stand-level biomass prediction was 99%, with a mean absolute prediction error of only 13%. Hence, for cost-effective prediction of biomass across a wide range of stands, we recommend use of generic allometric models based on plant functional types. Development of new species-specific models is only warranted when gains in accuracy of stand-based predictions are relatively high (e.g. high-value monocultures). © 2015 John Wiley & Sons Ltd.
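
    A sketch of the generic power-law fit such models reduce to, including the usual log-back-transform bias correction; the simulated diameters, biomasses and coefficients are illustrative only:

        # Power-law allometry B = a * D^b fitted on log-log axes, with a
        # standard log-bias correction factor. Data are simulated stand-ins.
        import numpy as np

        rng = np.random.default_rng(8)
        D = rng.uniform(5, 60, 300)                            # stem diameter (cm)
        B = 0.08 * D ** 2.5 * np.exp(rng.normal(0, 0.3, 300))  # biomass (kg)

        b, ln_a = np.polyfit(np.log(D), np.log(B), 1)
        sigma2 = np.var(np.log(B) - (ln_a + b * np.log(D)))
        cf = np.exp(sigma2 / 2)                  # log-bias correction factor

        predict = lambda d: cf * np.exp(ln_a) * d ** b
        print(f"B = {cf * np.exp(ln_a):.3f} * D^{b:.2f}")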

  18. Research on orbit prediction for solar-based calibration proper satellite

    NASA Astrophysics Data System (ADS)

    Chen, Xuan; Qi, Wenwen; Xu, Peng

    2018-03-01

    Utilizing a mathematical model of orbital mechanics, orbit prediction forecasts a space target's orbit information at a given time based on the orbit at an initial moment. Proper satellite radiometric calibration and the calibration orbit prediction process are introduced briefly. On the basis of research on the calibration space position design method and the radiative transfer model, an orbit prediction method for proper satellite radiometric calibration is proposed to select the appropriate calibration arc for the remote sensor and to predict the orbit information of the proper satellite and the remote sensor. By analyzing the orbit constraints of proper satellite calibration, the GF-1 sun-synchronous orbit is chosen as the proper satellite orbit in order to simulate the calibration visibility duration for different satellites to be calibrated. The results of simulation and analysis provide the basis for improving the radiometric calibration accuracy of satellite remote sensors, laying the foundation for high-precision, high-frequency radiometric calibration.

  19. Student Ranking Differences within Institutions Using Old and New SAT Scores

    ERIC Educational Resources Information Center

    Marini, Jessica P.; Beard, Jonathan; Shaw, Emily J.

    2018-01-01

    Admission offices at colleges and universities often use SAT® scores to make decisions about applicants for their incoming class. Many institutions use prediction models to quantify a student's potential for success using various measures, including SAT scores (NACAC, 2016). In March 2016, the College Board introduced a redesigned SAT that better…

  20. Using Novel Word Context Measures to Predict Human Ratings of Lexical Proficiency

    ERIC Educational Resources Information Center

    Berger, Cynthia M.; Crossley, Scott A.; Kyle, Kristopher

    2017-01-01

    This study introduces a model of lexical proficiency based on novel computational indices related to word context. The indices come from an updated version of the Tool for the Automatic Analysis of Lexical Sophistication (TAALES) and include associative, lexical, and semantic measures of word context. Human ratings of holistic lexical proficiency…

  1. Interface modeling for predicting atmospheric transport of biota

    Treesearch

    Gary L. Achtemeier

    2002-01-01

    The influx of foreign organisms and the growing resistance of resident organisms to chemical controls are coming at a time of increasing world population and need for greater efficiency in food production in the face of changing world climate. Rapid transportation and increased world trade have introduced foreign pests into American agricultural areas. Pesticides are...

  2. An Improved Model for Nucleation-Limited Ice Formation in Living Cells during Freezing

    PubMed Central

    Zhao, Gang; He, Xiaoming

    2014-01-01

    Ice formation in living cells is a lethal event during freezing and its characterization is important to the development of optimal protocols for not only cryopreservation but also cryotherapy applications. Although the model for the probability of ice formation (PIF) in cells developed by Toner et al. has been widely used to predict nucleation-limited intracellular ice formation (IIF), our data on freezing HeLa cells suggest that this model can give misleading predictions of PIF when the maximum PIF in cells during freezing is less than 1 (PIF ranges from 0 to 1). We introduce a new model to overcome this problem by incorporating a critical cell volume to modify Toner's original model. We further reveal that this critical cell volume is dependent on the mechanisms of ice nucleation in cells during freezing, i.e., surface-catalyzed nucleation (SCN) and volume-catalyzed nucleation (VCN). Taken together, the improved PIF model may be valuable for a better understanding of the mechanisms of ice nucleation in cells during freezing and more accurate prediction of PIF for cryopreservation and cryotherapy applications. PMID:24852166

  3. Cost-effective computational method for radiation heat transfer in semi-crystalline polymers

    NASA Astrophysics Data System (ADS)

    Boztepe, Sinan; Gilblas, Rémi; de Almeida, Olivier; Le Maoult, Yannick; Schmidt, Fabrice

    2018-05-01

    This paper introduces a cost-effective numerical model for the infrared (IR) heating of semi-crystalline polymers. For the numerical and experimental studies presented here, semi-crystalline polyethylene (PE) was used. The optical properties of PE were experimentally analyzed under varying temperature, and the obtained results were used as input in the numerical studies. The model was built on an optically homogeneous medium assumption, while the strong variation in the thermo-optical properties of semi-crystalline PE under heating was taken into account. Thus, the change in the amount of radiative energy absorbed by the PE medium, induced by its temperature-dependent thermo-optical properties, was introduced in the model. The computational study was carried out as an iterative closed-loop computation, where the absorbed radiation was computed using an in-house radiation heat transfer algorithm, RAYHEAT, and the computed results were transferred into the commercial software COMSOL Multiphysics to solve the transient heat transfer problem and predict the temperature field. The predicted temperature field was used to iterate the thermo-optical properties of PE that vary under heating. In order to analyze the accuracy of the numerical model, experimental analyses were carried out performing IR-thermographic measurements during the heating of a PE plate. The applicability of the model in terms of computational cost, number of numerical inputs and accuracy was highlighted.

  4. Prediction of road traffic death rate using neural networks optimised by genetic algorithm.

    PubMed

    Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari

    2015-01-01

    Road traffic injuries (RTIs) are recognised as a major cause of public health problems at global, regional and national levels. Prediction of the road traffic death rate will therefore be helpful in its management. Based on this fact, we used an artificial neural network model optimised through a genetic algorithm to predict mortality. In this study, a five-fold cross-validation procedure on a data set containing a total of 178 countries was used to verify the performance of the models. The best-fit model was selected according to the root mean square error (RMSE). The genetic algorithm, a powerful optimisation technique that had not previously been applied to mortality prediction to this extent, showed high performance. The lowest RMSE obtained was 0.0808. Such satisfactory results can be attributed to the use of the genetic algorithm as a powerful optimiser that selects the best input feature set to be fed into the neural networks. Seven factors were identified with high accuracy as the most influential on the road traffic mortality rate. The results show that our model is very promising and may play a useful role in developing a better method for assessing the influence of road traffic mortality risk factors.

  5. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    PubMed

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web-based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open-access and user-friendly option to obtain discrete-time, predictive survival models at the individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.

  6. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science

    PubMed Central

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web-based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open-access and user-friendly option to obtain discrete-time, predictive survival models at the individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets. PMID:27532883

  7. Radar backscatter from the sea: Controlled experiments

    NASA Astrophysics Data System (ADS)

    Moore, R. K.

    1992-04-01

    The subwindowing method of modelling synthetic-aperture-radar (SAR) imaging of ocean waves was extended to allow wave propagation in arbitrary directions. Simulated images show that the SAR image response to swells that are imaged by velocity bunching is reduced by random smearing due to wind-generated waves. The magnitude of this response is not accurately predicted by introducing a finite coherence time in the radar backscatter. The smearing does not affect the imaging of waves by surface radar cross-section modulation, and is independent of the wind direction. Adjusting the focus of the SAR processor introduces an offset in the image response of the surface scatterers. When adjusted by one-half the azimuthal phase velocity of the wave, this compensates for the incoherent advance of the wave being imaged, leading to a higher image contrast. The azimuthal cut-off and range rotation of the spectral peak are predicted when the imaging of wind-generated wave trains is simulated. The simulated images suggest that velocity bunching and azimuthal smearing are strongly interdependent, and cannot be included in a model separately.

  8. Phonon Conduction in Silicon Nanobeam Labyrinths

    DOE PAGES

    Park, Woosung; Romano, Giuseppe; Ahn, Ethan C.; ...

    2017-07-24

    Here we study single-crystalline silicon nanobeams having 470 nm width and 80 nm thickness cross section, where we produce tortuous thermal paths (i.e., labyrinths) by introducing slits to control the impact of the unobstructed “line-of-sight” (LOS) between the heat source and heat sink. The labyrinths range from straight nanobeams with a complete LOS along the entire length to nanobeams in which the LOS is partially to entirely blocked by slits of width s = 95, 195, 245, 295 and 395 nm. The measured thermal conductivity of the samples decreases monotonically from ~47 W m-1 K-1 for the straight beam to ~31 W m-1 K-1 for a slit width of 395 nm. A model prediction through a combination of the Boltzmann transport equation and ab initio calculations shows excellent agreement with the experimental data, to within ~8%. The model prediction for the most tortuous path (s = 395 nm) is reduced by ~14% compared to a straight beam of equivalent cross section. This study suggests that LOS is an important metric for characterizing and interpreting phonon propagation in nanostructures.

  9. Numerical prediction of fiber orientation in injection-molded short-fiber/thermoplastic composite parts with experimental validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thi, Thanh Binh Nguyen; Morioka, Mizuki; Yokoyama, Atsushi

    Numerical prediction of the fiber orientation in short-glass-fiber (GF) reinforced polyamide 6 (PA6) composites with fiber weight concentrations of 30%, 50%, and 70% manufactured by the injection molding process is presented. The fiber orientation was also directly observed and measured through X-ray computed tomography. During the injection molding process of a short-fiber/thermoplastic composite, the fiber orientation is produced by the flow states and the fiber-fiber interaction. The Folgar-Tucker equation is well known for modeling the fiber orientation in a concentrated suspension; it adds to Jeffery's equation a diffusive type of term, introducing a phenomenological coefficient to account for the fiber-fiber interaction. Our model for the fiber-fiber interaction was proposed by modifying the rotary diffusion term of the Folgar-Tucker equation. This model was presented in a conference paper of the 29th International Conference of the Polymer Processing Society published by AIP conference proceedings. For modeling the fiber interaction, a fiber dynamic simulation was introduced in order to obtain a global fiber interaction coefficient, which is a function of the fiber concentration, aspect ratio, and angular velocity. The fiber orientation is predicted by using the proposed fiber interaction model incorporated into the computer-aided engineering simulation package C-Mold. An experimental program has been carried out in which the fiber orientation distribution was measured in a 100 x 100 x 2 mm injection-molded plate and a 100 x 80 x 2 mm injection-molded weld, analyzed with a high-resolution 3D X-ray computed tomography system XVA-160α and calculated by X-ray computed tomography imaging. The numerical prediction shows good agreement with the experimental validation, and the complex fiber orientation in the injection-molded weld was investigated.

  10. Numerical prediction of fiber orientation in injection-molded short-fiber/thermoplastic composite parts with experimental validation

    NASA Astrophysics Data System (ADS)

    Thi, Thanh Binh Nguyen; Morioka, Mizuki; Yokoyama, Atsushi; Hamanaka, Senji; Yamashita, Katsuhisa; Nonomura, Chisato

    2015-05-01

    Numerical prediction of the fiber orientation in short-glass-fiber (GF) reinforced polyamide 6 (PA6) composites with fiber weight concentrations of 30%, 50%, and 70% manufactured by the injection molding process is presented. The fiber orientation was also directly observed and measured through X-ray computed tomography. During the injection molding process of a short-fiber/thermoplastic composite, the fiber orientation is produced by the flow states and the fiber-fiber interaction. The Folgar-Tucker equation is well known for modeling the fiber orientation in a concentrated suspension; it adds to Jeffery's equation a diffusive type of term, introducing a phenomenological coefficient to account for the fiber-fiber interaction. Our model for the fiber-fiber interaction was proposed by modifying the rotary diffusion term of the Folgar-Tucker equation. This model was presented in a conference paper of the 29th International Conference of the Polymer Processing Society published by AIP conference proceedings. For modeling the fiber interaction, a fiber dynamic simulation was introduced in order to obtain a global fiber interaction coefficient, which is a function of the fiber concentration, aspect ratio, and angular velocity. The fiber orientation is predicted by using the proposed fiber interaction model incorporated into the computer-aided engineering simulation package C-Mold. An experimental program has been carried out in which the fiber orientation distribution was measured in a 100 x 100 x 2 mm injection-molded plate and a 100 x 80 x 2 mm injection-molded weld, analyzed with a high-resolution 3D X-ray computed tomography system XVA-160α and calculated by X-ray computed tomography imaging. The numerical prediction shows good agreement with the experimental validation, and the complex fiber orientation in the injection-molded weld was investigated.
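
    A minimal sketch of integrating the standard Folgar-Tucker equation in simple shear with the quadratic closure A4:D ≈ A(A:D); the shape factor xi, the interaction coefficient Ci and the time stepping are illustrative choices, not the modified model of the paper:

        # Euler integration of the Folgar-Tucker orientation-tensor equation
        # in simple shear, quadratic closure for the fourth-order term.
        import numpy as np

        gdot, xi, Ci = 1.0, 1.0, 0.01
        L = np.array([[0., gdot, 0.], [0., 0., 0.], [0., 0., 0.]])  # velocity grad.
        D, W = (L + L.T) / 2, (L - L.T) / 2
        I = np.eye(3)

        A = I / 3                          # isotropic initial orientation
        dt, steps = 1e-3, 20000
        for _ in range(steps):
            A4_D = A * np.tensordot(A, D)  # quadratic closure: A4:D ~ A (A:D)
            dA = (W @ A - A @ W) + xi * (D @ A + A @ D - 2 * A4_D) \
                 + 2 * Ci * gdot * (I - 3 * A)
            A = A + dt * dA

        print("A11, A22, A12:", A[0, 0], A[1, 1], A[0, 1])
        print("trace:", np.trace(A))       # should remain ~1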

  11. Risk terrain modeling predicts child maltreatment.

    PubMed

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Predicting birth weight with conditionally linear transformation models.

    PubMed

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.

  13. Micromechanical modeling of rate-dependent behavior of Connective tissues.

    PubMed

    Fallah, A; Ahmadian, M T; Firozbakhsh, K; Aghdam, M M

    2017-03-07

    In this paper, a constitutive and micromechanical model for the prediction of the rate-dependent behavior of connective tissues (CTs) is presented. Connective tissues are considered as nonlinear viscoelastic materials. The rate-dependent behavior of CTs is incorporated into the model using the well-known quasi-linear viscoelasticity (QLV) theory. A planar wavy representative volume element (RVE) is considered based on histological evidence of the tissue microstructure. The model parameters are identified based on experiments available in the literature. The constitutive model was implemented in ABAQUS by means of a UMAT subroutine. Results show that the monotonic uniaxial test predictions of the presented model at different strain rates for rat tail tendon (RTT) and human patellar tendon (HPT) are in good agreement with experimental data. Results of an incremental stress-relaxation test are also presented to investigate both the instantaneous and viscoelastic behavior of connective tissues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Predicting potential global distributions of two Miscanthus grasses: implications for horticulture, biofuel production, and biological invasions.

    PubMed

    Hager, Heather A; Sinasac, Sarah E; Gedalof, Ze'ev; Newman, Jonathan A

    2014-01-01

    In many regions, large proportions of the naturalized and invasive non-native floras were originally introduced deliberately by humans. Pest risk assessments are now used in many jurisdictions to regulate the importation of species and usually include an estimation of the potential distribution in the import area. Two species of Asian grass (Miscanthus sacchariflorus and M. sinensis) that were originally introduced to North America as ornamental plants have since escaped cultivation. These species and their hybrid offspring are now receiving attention for large-scale production as biofuel crops in North America and elsewhere. We evaluated their potential global climate suitability for cultivation and potential invasion using the niche model CLIMEX and evaluated the models' sensitivity to the parameter values. We then compared the sensitivity of projections of future climatically suitable area under two climate models and two emissions scenarios. The models indicate that the species have been introduced to most of the potential global climatically suitable areas in the northern but not the southern hemisphere. The more narrowly distributed species (M. sacchariflorus) is more sensitive to changes in model parameters, which could have implications for modelling species of conservation concern. Climate projections indicate likely contractions in potential range in the south, but expansions in the north, particularly in introduced areas where biomass production trials are under way. Climate sensitivity analysis shows that projections differ more between the selected climate change models than between the selected emissions scenarios. Local-scale assessments are required to overlay suitable habitat with climate projections to estimate areas of cultivation potential and invasion risk.

  15. Validating models of target acquisition performance in the dismounted soldier context

    NASA Astrophysics Data System (ADS)

    Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.

    2018-04-01

    The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral paradigm.
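
    As background, TTP-based probability predictions are typically driven through a logistic "target transfer probability function"; the sketch below uses the commonly published form and constants, which should be treated as assumptions rather than anything taken from this paper:

        # Logistic target transfer probability function used with the TTP
        # metric; V is the delivered TTP value, V50 the task-calibrated
        # constant. Constants are the commonly cited published values.
        def ttpf(V, V50):
            E = 1.51 + 0.24 * (V / V50)        # empirical steepness term
            r = (V / V50) ** E
            return r / (1.0 + r)

        for v in [0.5, 1.0, 2.0]:
            print(f"V/V50 = {v}: P = {ttpf(v, 1.0):.2f}")   # P = 0.5 at V = V50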

  16. Researches of fruit quality prediction model based on near infrared spectrum

    NASA Astrophysics Data System (ADS)

    Shen, Yulin; Li, Lian

    2018-04-01

    With rising standards for food quality and safety, people pay more attention to the internal quality of fruit, so measuring fruit internal quality is increasingly important. Nondestructive analysis of soluble solids content (SSC) and total acid content (TAC) is a vital and effective quality measurement in global fresh produce markets, so in this paper we aim to establish a novel fruit internal quality prediction model for near-infrared spectra based on SSC and TAC. First, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP AdaBoost strong classifier, PCA + ELM and PCA + LS-SVM classifiers are designed and implemented. Second, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to select the training and test samples automatically. Third, we obtain the optimal models by comparing 15 candidate prediction models under a multi-classifier competition mechanism; specifically, nonparametric estimation is introduced to measure the effectiveness of each model, with the reliability and variance of the nonparametric estimate used to evaluate the prediction results and the estimated value and confidence interval serving as references. The experimental results demonstrate that this approach achieves an effective evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the nonparametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.
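
    Of the steps above, the Kennard-Stone split is the most self-contained; a minimal sketch follows, assuming plain Euclidean distances between spectra. The paper's preprocessing and model stack are omitted, and the data here are synthetic stand-ins for NIR spectra.

        import numpy as np

        def kennard_stone(X, n_select):
            """Kennard-Stone sample selection: start from the two most
            distant samples, then repeatedly add the sample whose minimum
            distance to the already-selected set is largest."""
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            i, j = np.unravel_index(np.argmax(d), d.shape)
            selected = [int(i), int(j)]
            while len(selected) < n_select:
                remaining = [k for k in range(len(X)) if k not in selected]
                min_d = d[np.ix_(remaining, selected)].min(axis=1)
                selected.append(remaining[int(np.argmax(min_d))])
            return selected

        rng = np.random.default_rng(0)
        spectra = rng.normal(size=(50, 200))       # synthetic spectra
        train_idx = kennard_stone(spectra, 35)     # 35 training, 15 test
        print(train_idx[:10])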

  17. Mathematical modeling and simulation in animal health. Part I: Moving beyond pharmacokinetics.

    PubMed

    Riviere, J E; Gabrielsson, J; Fink, M; Mochel, J

    2016-06-01

    The application of mathematical modeling to problems in animal health has a rich history in the form of pharmacokinetic modeling applied to problems in veterinary medicine. Advances in modeling and simulation beyond pharmacokinetics have the potential to streamline and speed up drug research and development programs. To foster these goals, a series of manuscripts will be published with the following aims: (i) expand the application of modeling and simulation to issues in veterinary pharmacology; (ii) bridge the gap between the level of modeling and simulation practiced in human and veterinary pharmacology; (iii) explore how modeling and simulation concepts can be used to improve our understanding of common issues not readily addressed in human pharmacology (e.g. breed differences, tissue residue depletion, vast weight ranges among adults within a single species, interspecies differences, small animal species research where data collection is limited to sparse sampling, availability of different sampling matrices); and (iv) describe how quantitative pharmacology approaches could help in understanding key pharmacokinetic and pharmacodynamic characteristics of a drug candidate, with the goal of providing explicit, reproducible, and predictive evidence for optimizing drug development plans, enabling critical decision making, and eventually bringing safe and effective medicines to patients. This study introduces these concepts, presents new approaches to modeling and simulation, and clearly articulates basic assumptions and good practices. The driving force behind these activities is to create predictive models that are based on solid physiological and pharmacological principles while respecting the limitations that are fundamental to applying mathematical and statistical models to biological systems. © 2015 John Wiley & Sons Ltd.

  18. A new predictive multi-zone model for HCCI engine combustion

    DOE PAGES

    Bissoli, Mattia; Frassoldati, Alessio; Cuoci, Alberto; ...

    2016-06-30

    Here, this work introduces a new predictive multi-zone model for the description of combustion in Homogeneous Charge Compression Ignition (HCCI) engines. The model exploits the existing OpenSMOKE++ computational suite to handle detailed kinetic mechanisms, providing reliable predictions of the in-cylinder auto-ignition processes. All the elements with a significant impact on combustion performance and emissions, such as turbulence, heat and mass exchanges, crevices, residual burned gases, and thermal and feed stratification, are taken into account. Compared to other computational approaches, this model improves the description of mixture stratification phenomena by coupling a wall heat transfer model derived from CFD applications with a proper turbulence model. Furthermore, the calibration of this multi-zone model requires only three parameters, which can be derived from a non-reactive CFD simulation: these adaptive variables depend only on the engine geometry and remain fixed across a wide range of operating conditions, allowing the prediction of auto-ignition, pressure traces and pollutants. This computational framework enables the use of detailed kinetic mechanisms, as well as Rate of Production Analysis (RoPA) and Sensitivity Analysis (SA), to investigate the complex chemistry involved in the auto-ignition and pollutant formation processes. In the final sections of the paper, these capabilities are demonstrated through comparison with experimental data.

  19. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades.

    PubMed

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-05-08

    As a fracture-critical component of an aircraft engine, the turbine blade-to-disk attachment requires accurate life prediction to ensure engine structural integrity and reliability. Fatigue failure of a turbine blade often occurs under multiaxial cyclic loadings at high temperatures. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction; no extra fitted material constants are needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data for GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both of the proposed damage parameters and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with its field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but is effective for both the FS and SWT models.

  20. Proton exchange membrane fuel cell model for aging predictions: Simulated equivalent active surface area loss and comparisons with durability tests

    NASA Astrophysics Data System (ADS)

    Robin, C.; Gérard, M.; Quinaud, M.; d'Arbigny, J.; Bultel, Y.

    2016-09-01

    The prediction of Proton Exchange Membrane Fuel Cell (PEMFC) lifetime is one of the major challenges in optimizing both material properties and the dynamic control of the fuel cell system. In this study, using a multiscale modeling approach, a mechanistic catalyst dissolution model is coupled to a dynamic PEMFC cell model to predict the performance loss of the PEMFC. Results are compared to two 2000-h experimental aging tests. More precisely, an original approach is introduced to estimate the loss of an equivalent active surface area during an aging test. Indeed, when the computed electrochemical catalyst surface area profile is fitted to experimental measurements from cyclic voltammetry, the computed performance loss of the PEMFC is underestimated. To predict the performance loss measured by polarization curves during the aging test, an equivalent active surface area is instead obtained by model inversion. This methodology successfully recovers the experimental cell voltage decay over time. The model parameters are fitted from the polarization curves so that they include the global degradation. Moreover, the model captures the aging heterogeneities along the surface of the cell observed experimentally. Finally, a second 2000-h durability test in dynamic operating conditions validates the approach.

  1. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades

    PubMed Central

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-01-01

    As a fracture-critical component of an aircraft engine, the turbine blade-to-disk attachment requires accurate life prediction to ensure engine structural integrity and reliability. Fatigue failure of a turbine blade often occurs under multiaxial cyclic loadings at high temperatures. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction; no extra fitted material constants are needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data for GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both of the proposed damage parameters and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with its field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but is effective for both the FS and SWT models. PMID:28772873

  2. Utilization of the NSQIP-Pediatric Database in Development and Validation of a New Predictive Model of Pediatric Postoperative Wound Complications.

    PubMed

    Maizlin, Ilan I; Redden, David T; Beierle, Elizabeth A; Chen, Mike K; Russell, Robert T

    2017-04-01

    Surgical wound classification, introduced in 1964, stratifies the risk of surgical site infection (SSI) based on a clinical estimate of the inoculum of bacteria encountered during the procedure. Recent literature has questioned the accuracy of predicting SSI risk based on wound classification. We hypothesized that a more specific model founded on patient and perioperative factors would more accurately predict the risk of SSI. Using all observations from the 2012 to 2014 National Surgical Quality Improvement Program-Pediatric (NSQIP-P) Participant Use Files, we randomized patients into model creation and model validation datasets. Potential perioperative predictive factors were assessed with univariate analysis for each of 4 outcomes: wound dehiscence, superficial wound infection, deep wound infection, and organ space infection. A multiple logistic regression model with step-wise backwards elimination was performed. A receiver operating characteristic curve with c-statistic was generated to assess the model discrimination for each outcome. A total of 183,233 patients were included. All perioperative NSQIP factors were evaluated for clinical pertinence. Of the original 43 perioperative predictive factors selected, 6 to 9 predictors for each outcome were significantly associated with postoperative SSI. The predictive accuracy of our model compared favorably with the traditional wound classification for each outcome of interest. The proposed NSQIP-P model demonstrated significantly better predictive ability for postoperative SSIs than the current wound classification system. This model will allow providers to counsel families and patients about these risks more effectively, and to reflect true risks for individual surgical patients more accurately to hospitals and payers. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
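
    The modeling recipe described above (split into creation/validation sets, fit a logistic model per outcome, report discrimination via the c-statistic) can be sketched in a few lines. The predictors and outcome below are synthetic stand-ins for the NSQIP-P fields, not the paper's actual variables.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 9))                    # 9 perioperative factors
        logit = X @ rng.normal(size=9) - 3.0              # rare-event intercept
        y = rng.random(5000) < 1 / (1 + np.exp(-logit))   # synthetic SSI outcome

        # Creation/validation split mirrors the randomization described above.
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
        print("c-statistic:", round(auc, 3))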

  3. The HIrisPlex-S system for eye, hair and skin colour prediction from DNA: Introduction and forensic developmental validation.

    PubMed

    Chaitanya, Lakshmi; Breslin, Krystal; Zuñiga, Sofia; Wirken, Laura; Pośpiech, Ewelina; Kukla-Bartoszek, Magdalena; Sijen, Titia; Knijff, Peter de; Liu, Fan; Branicki, Wojciech; Kayser, Manfred; Walsh, Susan

    2018-07-01

    Forensic DNA Phenotyping (FDP), i.e. the prediction of human externally visible traits from DNA, has become a fast growing subfield within forensic genetics due to the intelligence information it can provide from DNA traces. FDP outcomes can help focus police investigations in search of unknown perpetrators, who are generally unidentifiable with standard DNA profiling. Therefore, we previously developed and forensically validated the IrisPlex DNA test system for eye colour prediction and the HIrisPlex system for combined eye and hair colour prediction from DNA traces. Here we introduce and forensically validate the HIrisPlex-S DNA test system (S for skin) for the simultaneous prediction of eye, hair, and skin colour from trace DNA. This FDP system consists of two SNaPshot-based multiplex assays targeting a total of 41 SNPs via a novel multiplex assay for 17 skin colour predictive SNPs and the previous HIrisPlex assay for 24 eye and hair colour predictive SNPs, 19 of which also contribute to skin colour prediction. The HIrisPlex-S system further comprises three statistical prediction models, the previously developed IrisPlex model for eye colour prediction based on 6 SNPs, the previous HIrisPlex model for hair colour prediction based on 22 SNPs, and the recently introduced HIrisPlex-S model for skin colour prediction based on 36 SNPs. In the forensic developmental validation testing, the novel 17-plex assay performed in full agreement with the Scientific Working Group on DNA Analysis Methods (SWGDAM) guidelines, as previously shown for the 24-plex assay. Sensitivity testing of the 17-plex assay revealed complete SNP profiles from as little as 63 pg of input DNA, equalling the previously demonstrated sensitivity threshold of the 24-plex HIrisPlex assay. Testing of simulated forensic casework samples such as blood, semen, saliva stains, of inhibited DNA samples, of low quantity touch (trace) DNA samples, and of artificially degraded DNA samples as well as concordance testing, demonstrated the robustness, efficiency, and forensic suitability of the new 17-plex assay, as previously shown for the 24-plex assay. Finally, we provide an update to the publicly available HIrisPlex website https://hirisplex.erasmusmc.nl/, now allowing the estimation of individual probabilities for 3 eye, 4 hair, and 5 skin colour categories from HIrisPlex-S input genotypes. The HIrisPlex-S DNA test represents the first forensically validated tool for skin colour prediction, and reflects the first forensically validated tool for simultaneous eye, hair and skin colour prediction from DNA. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Using ‘particle in a box’ models to calculate energy levels in semiconductor quantum well structures

    NASA Astrophysics Data System (ADS)

    Ebbens, A. T.

    2018-07-01

    Although infinite potential ‘particle in a box’ models are widely used to introduce quantised energy levels, their predictions cannot be quantitatively compared with atomic emission spectra. Here, this problem is overcome by describing how both infinite and finite potential well models can be used to calculate the confined energy levels of semiconductor quantum wells. This is done using physics and mathematics concepts that are accessible to pre-university students. The results of the models are compared with experimental data and their accuracy discussed.
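
    The infinite-well case reduces to a single formula, E_n = n²π²ħ²/(2mL²), which the sketch below evaluates for a quantum well. The 10 nm width and GaAs electron effective mass (0.067 m_e) are illustrative assumptions; the finite-well case described above instead requires solving a transcendental equation and yields lower levels.

        import numpy as np
        from scipy import constants as c

        def infinite_well_levels(width_m, m_eff, n_max=3):
            """Confined energies E_n = n^2 pi^2 hbar^2 / (2 m L^2) for an
            infinite potential well, returned in eV."""
            n = np.arange(1, n_max + 1)
            e_joule = (n**2 * np.pi**2 * c.hbar**2) / (2 * m_eff * width_m**2)
            return e_joule / c.e

        # Illustrative: 10 nm GaAs well, electron effective mass 0.067 m_e
        # (ground state comes out near 56 meV).
        print(infinite_well_levels(10e-9, 0.067 * c.m_e))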

  5. Neutrino mass in flavor dependent gauged lepton model

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-03-01

    We study a neutrino model that introduces an additional nontrivial gauged lepton symmetry, in which the neutrino masses are induced at the two-loop level while the masses of the first- and second-generation charged leptons of the standard model are induced at the one-loop level. As a result of the model structure, we can predict one massless active neutrino, and there is a dark matter candidate. We then discuss the neutrino mass matrix, the muon anomalous magnetic moment, lepton flavor violations, oblique parameters, and the relic density of dark matter, taking into account the experimental constraints.

  6. Can species distribution models really predict the expansion of invasive species?

    PubMed

    Barbet-Massin, Morgane; Rome, Quentin; Villemant, Claire; Courchamp, Franck

    2018-01-01

    Predictive studies are of paramount importance for biological invasions, one of the biggest threats to biodiversity. To help design and better prioritize management strategies, species distribution models (SDMs) are often used to predict the potential invasive range of introduced species. Yet SDMs have been regularly criticized due to several strong limitations, such as violation of the equilibrium assumption during the invasion process. Unfortunately, validation studies with independent data are too scarce to assess the predictive accuracy of SDMs in invasion biology. Biological invasions nevertheless allow the usefulness of SDMs to be tested, by retrospectively assessing whether they would have accurately predicted the latest ranges of invasion. Here, we assess the predictive accuracy of SDMs in predicting the expansion of invasive species. We used temporal occurrence data for the Asian hornet Vespa velutina nigrithorax, a species native to China that is invading Europe at a very fast rate. Specifically, we compared occurrence data from the last stage of invasion (independent validation points) to the climate suitability distribution predicted from models calibrated with data from the early stage of invasion. Despite the invasive species not yet being at equilibrium, the predicted climate suitability of the validation points was high. SDMs can thus adequately predict the spread of V. v. nigrithorax, which appears to be, at least partially, climatically driven. In the case of V. v. nigrithorax, the predictive accuracy of SDMs was slightly but significantly better when models were calibrated with invasive data only, excluding native data. Although more validation studies of other invasion cases are needed to generalize our results, our findings are an important step towards validating the use of SDMs in invasion biology.

  7. Can species distribution models really predict the expansion of invasive species?

    PubMed Central

    Rome, Quentin; Villemant, Claire; Courchamp, Franck

    2018-01-01

    Predictive studies are of paramount importance for biological invasions, one of the biggest threats to biodiversity. To help design and better prioritize management strategies, species distribution models (SDMs) are often used to predict the potential invasive range of introduced species. Yet SDMs have been regularly criticized due to several strong limitations, such as violation of the equilibrium assumption during the invasion process. Unfortunately, validation studies with independent data are too scarce to assess the predictive accuracy of SDMs in invasion biology. Biological invasions nevertheless allow the usefulness of SDMs to be tested, by retrospectively assessing whether they would have accurately predicted the latest ranges of invasion. Here, we assess the predictive accuracy of SDMs in predicting the expansion of invasive species. We used temporal occurrence data for the Asian hornet Vespa velutina nigrithorax, a species native to China that is invading Europe at a very fast rate. Specifically, we compared occurrence data from the last stage of invasion (independent validation points) to the climate suitability distribution predicted from models calibrated with data from the early stage of invasion. Despite the invasive species not yet being at equilibrium, the predicted climate suitability of the validation points was high. SDMs can thus adequately predict the spread of V. v. nigrithorax, which appears to be, at least partially, climatically driven. In the case of V. v. nigrithorax, the predictive accuracy of SDMs was slightly but significantly better when models were calibrated with invasive data only, excluding native data. Although more validation studies of other invasion cases are needed to generalize our results, our findings are an important step towards validating the use of SDMs in invasion biology. PMID:29509789

  8. Computational biology for cardiovascular biomarker discovery.

    PubMed

    Azuaje, Francisco; Devaux, Yvan; Wagner, Daniel

    2009-07-01

    Computational biology is essential in the process of translating biological knowledge into clinical practice, as well as in the understanding of biological phenomena based on the resources and technologies originating from the clinical environment. One such key contribution of computational biology is the discovery of biomarkers for predicting clinical outcomes using 'omic' information. This process involves the predictive modelling and integration of different types of data and knowledge for screening, diagnostic or prognostic purposes. Moreover, this requires the design and combination of different methodologies based on statistical analysis and machine learning. This article introduces key computational approaches and applications to biomarker discovery based on different types of 'omic' data. Although we emphasize applications in cardiovascular research, the computational requirements and advances discussed here are also relevant to other domains. We will start by introducing some of the contributions of computational biology to translational research, followed by an overview of methods and technologies used for the identification of biomarkers with predictive or classification value. The main types of 'omic' approaches to biomarker discovery will be presented with specific examples from cardiovascular research. This will include a review of computational methodologies for single-source and integrative data applications. Major computational methods for model evaluation will be described together with recommendations for reporting models and results. We will present recent advances in cardiovascular biomarker discovery based on the combination of gene expression and functional network analyses. The review will conclude with a discussion of key challenges for computational biology, including perspectives from the biosciences and clinical areas.

  9. Analysis and Modeling of Ground Operations at Hub Airports

    NASA Technical Reports Server (NTRS)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
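
    The taxi-out process described above can be illustrated with a toy single-server FIFO queue: aircraft join the runway queue at pushback and depart one at a time. The 90 s runway occupancy time and the Poisson pushback stream are assumed illustrative values, not parameters from the paper.

        import random

        def simulate_taxi_out(pushback_times, runway_time=90.0):
            """Toy FIFO departure queue: returns taxi-out time
            (queue wait + runway occupancy) per aircraft, in seconds."""
            free_at, taxi_out = 0.0, []
            for t in sorted(pushback_times):
                start = max(t, free_at)            # wait while runway is busy
                free_at = start + runway_time      # runway blocked until done
                taxi_out.append(free_at - t)
            return taxi_out

        random.seed(1)
        t, pushbacks = 0.0, []
        for _ in range(10):                        # Poisson-like pushback stream
            t += random.expovariate(1 / 60.0)      # mean 60 s between pushbacks
            pushbacks.append(t)
        print([round(x) for x in simulate_taxi_out(pushbacks)])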

  10. Use of mathematical modelling to assess the impact of vaccines on antibiotic resistance.

    PubMed

    Atkins, Katherine E; Lafferty, Erin I; Deeny, Sarah R; Davies, Nicholas G; Robotham, Julie V; Jit, Mark

    2018-06-01

    Antibiotic resistance is a major global threat to the provision of safe and effective health care. To control antibiotic resistance, vaccines have been proposed as an essential intervention, complementing improvements in diagnostic testing, antibiotic stewardship, and drug pipelines. The decision to introduce or amend vaccination programmes is routinely based on mathematical modelling. However, few mathematical models address the impact of vaccination on antibiotic resistance. We reviewed the literature using PubMed to identify all studies that used an original mathematical model to quantify the impact of a vaccine on antibiotic resistance transmission within a human population. We reviewed the models from the resulting studies in the context of a new framework to elucidate the pathways through which vaccination might impact antibiotic resistance. We identified eight mathematical modelling studies; the state of the literature highlighted important gaps in our understanding. Notably, studies are limited in the range of pathways represented, their geographical scope, and the vaccine-pathogen combinations assessed. Furthermore, to translate model predictions into public health decision making, more work is needed to understand how model structure and parameterisation affect model predictions, and how to embed these predictions within economic frameworks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. One-dimensional model of inertial pumping

    NASA Astrophysics Data System (ADS)

    Kornilovitch, Pavel E.; Govyadinov, Alexander N.; Markel, David P.; Torniainen, Erik D.

    2013-02-01

    A one-dimensional model of inertial pumping is introduced and solved. The pump is driven by a high-pressure vapor bubble generated by a microheater positioned asymmetrically in a microchannel. The bubble is approximated as a short-term impulse delivered to the two fluidic columns inside the channel. Fluid dynamics is described by a Newton-like equation with a variable mass, but without the mass derivative term. Because of smaller inertia, the short column refills the channel faster and accumulates a larger mechanical momentum. After bubble collapse the total fluid momentum is nonzero, resulting in a net flow. Two different versions of the model are analyzed in detail, analytically and numerically. In the symmetrical model, the pressure at the channel-reservoir connection plane is assumed constant, whereas in the asymmetrical model it is reduced by a Bernoulli term. For low and intermediate vapor bubble pressures, both models predict the existence of an optimal microheater location. The predicted net flow in the asymmetrical model is smaller by a factor of about 2. For unphysically large vapor pressures, the asymmetrical model predicts saturation of the effect, while in the symmetrical model net flow increases indefinitely. Pumping is reduced by nonzero viscosity, but to a different degree depending on the microheater location.
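
    A toy Euler integration of the mechanism described above is sketched below, assuming the stated Newton-like equation m(t) dv/dt = F with variable column mass and no mass-derivative term. The bubble is reduced to an instantaneous impulse J on each column; viscosity, the Bernoulli correction of the asymmetrical model, and the finite impulse duration are all omitted, and every parameter value is illustrative only.

        import numpy as np

        rho, A, L, P0 = 1000.0, 1e-10, 1e-3, 1e5     # fluid, channel (SI units)
        x0, J, dt = 0.3e-3, 1e-10, 1e-8              # heater offset, impulse, step

        a, b = x0, x0                                # menisci bounding the bubble
        v1 = -J / (rho * A * a)                      # short column pushed outward
        v2 = +J / (rho * A * (L - b))                # long column pushed outward
        for step in range(1_000_000):
            m1, m2 = rho * A * a, rho * A * (L - b)  # variable column masses
            v1 += P0 * A / m1 * dt                   # reservoir pressure restores
            v2 -= P0 * A / m2 * dt
            a += v1 * dt
            b += v2 * dt
            if step > 0 and a >= b:                  # bubble collapse: columns meet
                break
        # The lighter (short) column returns faster, so the total momentum
        # at collapse is nonzero, giving the net flow discussed above.
        print("net momentum at collapse:", m1 * v1 + m2 * v2, "kg m/s")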

  12. One-dimensional model of inertial pumping.

    PubMed

    Kornilovitch, Pavel E; Govyadinov, Alexander N; Markel, David P; Torniainen, Erik D

    2013-02-01

    A one-dimensional model of inertial pumping is introduced and solved. The pump is driven by a high-pressure vapor bubble generated by a microheater positioned asymmetrically in a microchannel. The bubble is approximated as a short-term impulse delivered to the two fluidic columns inside the channel. Fluid dynamics is described by a Newton-like equation with a variable mass, but without the mass derivative term. Because of smaller inertia, the short column refills the channel faster and accumulates a larger mechanical momentum. After bubble collapse the total fluid momentum is nonzero, resulting in a net flow. Two different versions of the model are analyzed in detail, analytically and numerically. In the symmetrical model, the pressure at the channel-reservoir connection plane is assumed constant, whereas in the asymmetrical model it is reduced by a Bernoulli term. For low and intermediate vapor bubble pressures, both models predict the existence of an optimal microheater location. The predicted net flow in the asymmetrical model is smaller by a factor of about 2. For unphysically large vapor pressures, the asymmetrical model predicts saturation of the effect, while in the symmetrical model net flow increases indefinitely. Pumping is reduced by nonzero viscosity, but to a different degree depending on the microheater location.

  13. Modeling of SBS Phase Conjugation in Multimode Step Index Fibers

    DTIC Science & Technology

    2008-03-01

    cavity or in an external amplifier. Since pumping is never a perfectly efficient process, some heat will be introduced, and for very high pump powers...modes it supports, and the incident pump power. While theoretical investigations of SBS PCMs have been conducted by a number of authors, the model...predictions about the phase conjugate fidelity that could be expected from a given pump intensity input coupled into a specific fiber. A numerical

  14. Predicting when biliary excretion of parent drug is a major route of elimination in humans.

    PubMed

    Hosey, Chelsea M; Broccatelli, Fabio; Benet, Leslie Z

    2014-09-01

    Biliary excretion is an important route of elimination for many drugs, yet measuring the extent of biliary elimination is difficult, invasive, and variable. Biliary elimination has been quantified for few drugs with a limited number of subjects, who are often diseased patients. An accurate prediction of which drugs or new molecular entities are significantly eliminated in the bile may predict potential drug-drug interactions, pharmacokinetics, and toxicities. The Biopharmaceutics Drug Disposition Classification System (BDDCS) characterizes significant routes of drug elimination, identifies potential transporter effects, and is useful in understanding drug-drug interactions. Class 1 and 2 drugs are primarily eliminated in humans via metabolism and will not exhibit significant biliary excretion of parent compound. In contrast, class 3 and 4 drugs are primarily excreted unchanged in the urine or bile. Here, we characterize the significant elimination route of 105 orally administered class 3 and 4 drugs. We introduce and validate a novel model, predicting significant biliary elimination using a simple classification scheme. The model is accurate for 83% of 30 drugs collected after model development. The model corroborates the observation that biliarily eliminated drugs have high molecular weights, while demonstrating the necessity of considering route of administration and extent of metabolism when predicting biliary excretion. Interestingly, a predictor of potential metabolism significantly improves predictions of major elimination routes of poorly metabolized drugs. This model successfully predicts the major elimination route for poorly permeable/poorly metabolized drugs and may be applied prior to human dosing.

  15. Governing Laws of Complex System Predictability under Co-evolving Uncertainty Sources: Theory and Nonlinear Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Perdigão, R. A. P.

    2017-12-01

    Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecast, analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics over time, without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a priori evaluation of predictability evolution and its challenges is made, including aspects of the model architecture and intervening variables that may require optimization before any model runs are initiated. It further brings out universal features of the error dynamics that are elusive to any case-specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical-physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.

  16. Shared Mechanisms in the Estimation of Self-Generated Actions and the Prediction of Other’s Actions by Humans

    PubMed Central

    Ganesh, Gowrishankar

    2017-01-01

    The question of how humans predict the outcomes of observed motor actions by others is a fundamental problem in cognitive and social neuroscience. Previous theoretical studies have suggested that the brain uses parts of the forward model (used to estimate sensory outcomes of self-generated actions) to predict outcomes of observed actions. However, this hypothesis has remained controversial due to the lack of direct experimental evidence. To address this issue, we analyzed the behavior of darts experts in a paradigm in which they learned to predict the outcomes of observed throws, and utilized computational modeling to examine how outcome prediction of observed actions affected the participants’ ability to estimate their own actions. We recruited darts experts because sports experts are known to have an accurate outcome estimation of their own actions as well as prediction of actions observed in others. We first show that learning to predict the outcomes of observed dart throws deteriorates an expert’s ability both to produce his own darts throws and to estimate the outcomes of his own throws (self-estimation). Next, we introduce a state-space model to explain the trial-by-trial changes in darts performance and self-estimation through our experiment. The model-based analysis reveals that the change in an expert’s self-estimation is explained only by considering a change in the individual’s forward model, showing that an improvement in an expert’s ability to predict outcomes of observed actions affects the individual’s forward model. These results suggest that parts of the same forward model are utilized in humans to both estimate outcomes of self-generated actions and predict outcomes of observed actions. PMID:29340300
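
    A minimal sketch of a trial-by-trial state-space learning model of the kind referenced above is given below: an internal state x (here, an estimation bias) is updated from the observed error e on every trial. The retention factor a and learning rate b are illustrative assumptions, not the paper's fitted parameters.

        import numpy as np

        a, b = 0.98, 0.15                          # retention, learning rate
        x = 1.0                                    # initial estimation bias
        rng = np.random.default_rng(0)
        errors = []
        for trial in range(60):
            e = x + rng.normal(0.0, 0.05)          # observed outcome error
            x = a * x - b * e                      # error-driven state update
            errors.append(e)
        print("first vs last |error|:",
              round(abs(errors[0]), 3), round(abs(errors[-1]), 3))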

  17. [Research advances in mathematical model of coniferous trees cold hardiness].

    PubMed

    Zhang, Gang; Wang, Ai-Fang

    2007-07-01

    Plant cold hardiness has complicated attributes. This paper reviewed advances in establishing dynamic models of coniferous tree cold hardiness, presenting the advantages and disadvantages of the models and suggesting further studies. In the models established initially, temperature was considered the only environmental factor affecting cold hardiness, and the concept of a stationary level of cold hardiness was introduced. Because of the obvious prediction errors of these models, the stationary level of cold hardiness was later modeled by assuming an additive effect of temperature and photoperiod on the increase of cold hardiness. Furthermore, the responses of the annual development phases of cold hardiness to the environment were considered. Model researchers have paid more attention to the additive-effect models and have run experiments to test the additivity principle. However, research results on Scots pine (Pinus sylvestris) indicated that its organs did not support the presumption of an additive response of cold hardiness to temperature and photoperiod, and that the interaction between environmental factors should be taken into account. Mathematical models of cold hardiness still need to be developed and improved.

  18. Simulating physiological interactions in a hybrid system of mathematical models.

    PubMed

    Kretschmer, Jörn; Haunsberger, Thomas; Drost, Erick; Koch, Edmund; Möller, Knut

    2014-12-01

    Mathematical models can be deployed to simulate physiological processes of the human organism. Exploiting these simulations, reactions of a patient to changes in the therapy regime can be predicted. Based on these predictions, medical decision support systems (MDSS) can help in optimizing medical therapy. An MDSS designed to support mechanical ventilation in critically ill patients should consider not only respiratory mechanics but also other systems of the human organism, such as gas exchange and blood circulation. A specially designed framework allows combining three model families (respiratory mechanics, cardiovascular dynamics and gas exchange) to predict the outcome of a therapy setting. Elements of the three model families are dynamically combined to form a complex model system with interacting submodels. Tests revealed that complex model combinations are not computationally feasible. In most patients, cardiovascular physiology could be simulated by simplified models, decreasing computational costs. Thus, a simplified cardiovascular model that is able to reproduce basic physiological behavior is introduced. This model consists purely of difference equations and does not require special algorithms to be solved numerically. The model is based on a beat-to-beat model which has been extended to react to the intrathoracic pressure levels present during mechanical ventilation. This reaction to intrathoracic pressure has been tuned to mimic the behavior of a complex 19-compartment model. Tests revealed that the model closely reproduces the general system behavior of the 19-compartment model. Blood pressures were calculated with a maximum deviation of 1.8% in systolic pressure and 3.5% in diastolic pressure, leading to a simulation error of 0.3% in cardiac output. The gas exchange submodel, which reacts to changes in cardiac output, showed a resulting deviation of less than 0.1%. The proposed model is therefore usable in combinations where the cardiovascular simulation does not have to be detailed. Computing costs were decreased dramatically, by a factor of 186 compared to a model combination employing the 19-compartment model.

  19. Perceptual quality prediction on authentically distorted images using a bag of features approach

    PubMed Central

    Ghadiyaram, Deepti; Bovik, Alan C.

    2017-01-01

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417

  20. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  1. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigated agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients that adapt evapotranspiration over the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop-coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that the structural model uncertainty among reference evapotranspiration estimates is far more important than the parametric uncertainty introduced by the crop coefficients, which are used to estimate irrigation water requirements following the single-crop-coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm set by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model-averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
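
    The weighting idea can be sketched as follows: each ensemble member is weighted by its closeness to the weighted consensus, iterated to convergence, and the weighted average is then used to compute an exceedance frequency. This is a simplification of the Giorgi and Mearns REA method (which also includes bias/convergence criteria), and all numbers below are synthetic.

        import numpy as np

        def rea_weights(preds, tol=1e-6, iters=50):
            """Simplified REA: iteratively weight each model by the inverse
            of its mean distance to the weighted consensus."""
            w = np.ones(preds.shape[0]) / preds.shape[0]
            for _ in range(iters):
                consensus = (w[:, None] * preds).sum(axis=0)
                dist = np.abs(preds - consensus).mean(axis=1) + tol
                w_new = (1.0 / dist) / (1.0 / dist).sum()
                if np.allclose(w, w_new):
                    break
                w = w_new
            return w

        # 6 ET models x 5 crop-coefficient sets -> 30 members of annual
        # irrigation water requirement (mm) over 100 synthetic grid cells.
        rng = np.random.default_rng(0)
        members = rng.normal(380, 60, size=(30, 100))
        w = rea_weights(members)
        rea_avg = (w[:, None] * members).sum(axis=0)
        print("P(IWR > 400 mm):", (rea_avg > 400).mean())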

  2. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is a current research focus in the field of fatigue damage. Introducing the inverse heat-source method into parameter identification of fatigue dissipation energy models is a new approach to calculating fatigue dissipation energy. This paper reviews research advances in computational inverse heat-source methods and in regularization techniques for solving the inverse problem, as well as existing heat-source solution methods for the fatigue process; it then discusses prospects for applying inverse heat-source methods in the fatigue damage field, laying the foundation for further improving the effectiveness of rapid fatigue dissipation energy prediction.

  3. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.

    2017-04-01

    To handle time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effects of the nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid an excessively high model updating frequency, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment; once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method reduces computation effectively, improves prediction accuracy by making use of process information, and reflects the process characteristics accurately.
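
    The time-difference idea can be sketched in a few lines: train PLS on first differences so slow drifts cancel, then reconstruct the level prediction from the last known output. The moving-window recursion and the confidence-triggered adaptive update are omitted, and the data below are synthetic.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 8)).cumsum(axis=0)       # drifting inputs
        y = X[:, :3].sum(axis=1) + rng.normal(0, 0.1, 500)

        dX, dy = np.diff(X, axis=0), np.diff(y)            # time differences
        pls = PLSRegression(n_components=3).fit(dX[:400], dy[:400])

        # One-step-ahead prediction: previous measured output plus the
        # predicted difference (the previous output is assumed available,
        # as is typical for delayed lab analyses in soft sensing).
        dy_hat = pls.predict(dX[400:]).ravel()
        y_hat = y[400:-1] + dy_hat
        print("RMSE:", np.sqrt(np.mean((y_hat - y[401:]) ** 2)))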

  4. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

    PubMed Central

    Scellier, Benjamin; Bengio, Yoshua

    2017-01-01

    We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task. PMID:28522969

  5. Metabolic Model-Based Integration of Microbiome Taxonomic and Metabolomic Profiles Elucidates Mechanistic Links between Ecological and Metabolic Variation.

    PubMed

    Noecker, Cecilia; Eng, Alexander; Srinivasan, Sujatha; Theriot, Casey M; Young, Vincent B; Jansson, Janet K; Fredricks, David N; Borenstein, Elhanan

    2016-01-01

    Multiple molecular assays now enable high-throughput profiling of the ecology, metabolic capacity, and activity of the human microbiome. However, to date, analyses of such multi-omic data typically focus on statistical associations, often ignoring extensive prior knowledge of the mechanisms linking these various facets of the microbiome. Here, we introduce a comprehensive framework to systematically link variation in metabolomic data with community composition by utilizing taxonomic, genomic, and metabolic information. Specifically, we integrate available and inferred genomic data, metabolic network modeling, and a method for predicting community-wide metabolite turnover to estimate the biosynthetic and degradation potential of a given community. Our framework then compares variation in predicted metabolic potential with variation in measured metabolites' abundances to evaluate whether community composition can explain observed shifts in the community metabolome, and to identify key taxa and genes contributing to the shifts. Focusing on two independent vaginal microbiome data sets, each pairing 16S community profiling with large-scale metabolomics, we demonstrate that our framework successfully recapitulates observed variation in 37% of metabolites. Well-predicted metabolite variation tends to result from disease-associated metabolism. We further identify several disease-enriched species that contribute significantly to these predictions. Interestingly, our analysis also detects metabolites for which the predicted variation negatively correlates with the measured variation, suggesting environmental control points of community metabolism. Applying this framework to gut microbiome data sets reveals similar trends, including prediction of bile acid metabolite shifts. This framework is an important first step toward a system-level multi-omic integration and an improved mechanistic understanding of the microbiome activity and dynamics in health and disease. Studies characterizing both the taxonomic composition and metabolic profile of various microbial communities are becoming increasingly common, yet new computational methods are needed to integrate and interpret these data in terms of known biological mechanisms. Here, we introduce an analytical framework to link species composition and metabolite measurements, using a simple model to predict the effects of community ecology on metabolite concentrations and evaluating whether these predictions agree with measured metabolomic profiles. We find that a surprisingly large proportion of metabolite variation in the vaginal microbiome can be predicted based on species composition (including dramatic shifts associated with disease), identify putative mechanisms underlying these predictions, and evaluate the roles of individual bacterial species and genes. Analysis of gut microbiome data using this framework recovers similar community metabolic trends. This framework lays the foundation for model-based multi-omic integrative studies, ultimately improving our understanding of microbial community metabolism.
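
    The core step of the framework can be illustrated under simplified assumptions: a taxon-by-metabolite matrix of net production potential (+1 produces, -1 consumes; derived in the paper from genomes and metabolic network models) is combined with 16S relative abundances to give community-level metabolic potential scores, which are then correlated with measured metabolite variation across samples. All numbers below are synthetic.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)
        n_taxa, n_mets, n_samples = 20, 12, 30
        G = rng.choice([-1, 0, 1], size=(n_taxa, n_mets),
                       p=[0.2, 0.6, 0.2])                  # net potential matrix
        abundance = rng.dirichlet(np.ones(n_taxa),
                                  size=n_samples)          # samples x taxa

        potential = abundance @ G                          # samples x metabolites
        measured = potential + rng.normal(0, 0.05, potential.shape)

        for j in range(3):                                 # per-metabolite agreement
            rho, p = spearmanr(potential[:, j], measured[:, j])
            print(f"metabolite {j}: rho={rho:.2f}, p={p:.3f}")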

  6. Metabolic Model-Based Integration of Microbiome Taxonomic and Metabolomic Profiles Elucidates Mechanistic Links between Ecological and Metabolic Variation

    PubMed Central

    Noecker, Cecilia; Eng, Alexander; Srinivasan, Sujatha; Theriot, Casey M.; Young, Vincent B.; Jansson, Janet K.; Fredricks, David N.

    2016-01-01

    ABSTRACT Multiple molecular assays now enable high-throughput profiling of the ecology, metabolic capacity, and activity of the human microbiome. However, to date, analyses of such multi-omic data typically focus on statistical associations, often ignoring extensive prior knowledge of the mechanisms linking these various facets of the microbiome. Here, we introduce a comprehensive framework to systematically link variation in metabolomic data with community composition by utilizing taxonomic, genomic, and metabolic information. Specifically, we integrate available and inferred genomic data, metabolic network modeling, and a method for predicting community-wide metabolite turnover to estimate the biosynthetic and degradation potential of a given community. Our framework then compares variation in predicted metabolic potential with variation in measured metabolites’ abundances to evaluate whether community composition can explain observed shifts in the community metabolome, and to identify key taxa and genes contributing to the shifts. Focusing on two independent vaginal microbiome data sets, each pairing 16S community profiling with large-scale metabolomics, we demonstrate that our framework successfully recapitulates observed variation in 37% of metabolites. Well-predicted metabolite variation tends to result from disease-associated metabolism. We further identify several disease-enriched species that contribute significantly to these predictions. Interestingly, our analysis also detects metabolites for which the predicted variation negatively correlates with the measured variation, suggesting environmental control points of community metabolism. Applying this framework to gut microbiome data sets reveals similar trends, including prediction of bile acid metabolite shifts. This framework is an important first step toward a system-level multi-omic integration and an improved mechanistic understanding of the microbiome activity and dynamics in health and disease. IMPORTANCE Studies characterizing both the taxonomic composition and metabolic profile of various microbial communities are becoming increasingly common, yet new computational methods are needed to integrate and interpret these data in terms of known biological mechanisms. Here, we introduce an analytical framework to link species composition and metabolite measurements, using a simple model to predict the effects of community ecology on metabolite concentrations and evaluating whether these predictions agree with measured metabolomic profiles. We find that a surprisingly large proportion of metabolite variation in the vaginal microbiome can be predicted based on species composition (including dramatic shifts associated with disease), identify putative mechanisms underlying these predictions, and evaluate the roles of individual bacterial species and genes. Analysis of gut microbiome data using this framework recovers similar community metabolic trends. This framework lays the foundation for model-based multi-omic integrative studies, ultimately improving our understanding of microbial community metabolism. PMID:27239563

  7. Introducing etch kernels for efficient pattern sampling and etch bias prediction

    NASA Astrophysics Data System (ADS)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka

    2018-01-01

    Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, pure empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels that compute the litho-etch biases measuring the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to obtaining a robust etch model. This work proposes a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to represent the finest details of the resist geometry and to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch bias derived from scanning electron microscope contours to train artificial neural networks to predict etch bias. The method, applied to contact and line/space layers, shows an improvement in etch model prediction accuracy over the standard etch model. This work emphasizes the importance of the etch kernel definition for characterizing and predicting complex etch effects.

  8. Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks.

    PubMed

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2017-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method finds a test sequence's saliency map, which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering that recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that CNN-RNN makes predictions by modeling both motifs as well as dependencies among them.
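
    The saliency-map idea above lends itself to a compact illustration: back-propagate the output score to the one-hot input and read off per-nucleotide importances. The sketch below uses a tiny, made-up convolutional scorer in PyTorch; it is not the DeMo Dashboard code, and all layer sizes are illustrative.

```python
# Minimal saliency-map sketch for a one-hot DNA sequence (illustrative model).
import torch
import torch.nn as nn

class TinyTFBSNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(4, 8, kernel_size=5, padding=2)  # channels: A, C, G, T
        self.head = nn.Linear(8, 1)

    def forward(self, x):                          # x: (batch, 4, seq_len)
        h = torch.relu(self.conv(x)).mean(dim=2)   # global average pooling
        return self.head(h).squeeze(-1)            # binding score per sequence

model = TinyTFBSNet()
seq = torch.zeros(1, 4, 50)                        # random one-hot test sequence
seq[0, torch.randint(0, 4, (50,)), torch.arange(50)] = 1.0
seq.requires_grad_(True)

model(seq).sum().backward()                        # first-order derivatives w.r.t. input
saliency = seq.grad.abs().sum(dim=1)               # per-nucleotide importance, shape (1, 50)
print(saliency)
```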

  9. Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks

    PubMed Central

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2018-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method finds a test sequence’s saliency map, which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering that recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that CNN-RNN makes predictions by modeling both motifs as well as dependencies among them. PMID:27896980

  10. Predicting coronary artery disease using different artificial neural network models.

    PubMed

    Colak, M Cengiz; Colak, Cemil; Kocatürk, Hasan; Sağiroğlu, Seref; Barutçu, Irfan

    2008-08-01

    Eight different learning algorithms for creating artificial neural network (ANN) models, and the resulting ANN models for the prediction of coronary artery disease (CAD), are introduced. This work was carried out as a retrospective case-control study. Overall, 124 consecutive patients who had been diagnosed with CAD by coronary angiography (at least one coronary stenosis > 50% in major epicardial arteries) were enrolled in the work. The 113 people with angiographically normal coronary arteries (group 2) were taken as control subjects. A multilayered perceptron ANN architecture was applied. The ANN models trained with the different learning algorithms were evaluated on 237 records, divided into training (n=171) and testing (n=66) data sets. The performance of prediction was evaluated by sensitivity, specificity and accuracy values based on standard definitions. The results demonstrate that ANN models trained with eight different learning algorithms are promising because of high (greater than 71%) sensitivity, specificity and accuracy values in the prediction of CAD. Accuracy, sensitivity and specificity values varied between 83.63% and 100%, 86.46% and 100%, and 74.67% and 100% for training, respectively. For testing, the values were more than 71% for sensitivity, 76% for specificity and 81% for accuracy. It may be proposed that the use of learning algorithms other than backpropagation and larger sample sizes can improve the performance of prediction. The proposed ANN models trained with these learning algorithms could be used as a promising approach for predicting CAD without the need for invasive diagnostic methods and could help in prognostic clinical decision-making.
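
    As a concrete illustration of the evaluation described above, the sketch below trains a multilayer perceptron on a synthetic stand-in data set with the same 171/66 split and derives sensitivity, specificity, and accuracy from the confusion matrix. The features, labels, and network size are placeholders, not the study's data or settings.

```python
# MLP classifier with sensitivity/specificity/accuracy from the confusion
# matrix; data are synthetic placeholders for the clinical records.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(237, 10))                       # 10 placeholder clinical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 237)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=171, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)  # 'lbfgs' is one of several solvers
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity={tp/(tp+fn):.2f}  specificity={tn/(tn+fp):.2f}  "
      f"accuracy={(tp+tn)/(tp+tn+fp+fn):.2f}")
```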

  11. Analytical methods to predict liquid congealing in ram air heat exchangers during cold operation

    NASA Astrophysics Data System (ADS)

    Coleman, Kenneth; Kosson, Robert

    1989-07-01

    Ram air heat exchangers used to cool liquids such as lube oils or ethylene glycol/water solutions can be subject to congealing in very cold ambient conditions, resulting in a loss of cooling capability. Two-dimensional, transient analytical models have been developed to explore this phenomenon with both continuous and staggered fin cores. Staggered fin predictions are compared to flight test data from the E-2C Allison T56 engine lube oil system during winter conditions. For simpler calculations, a viscosity ratio correction was introduced and found to provide reasonable cold-ambient performance predictions for the staggered fin core, using a one-dimensional approach.

  12. Risk factors for Apgar score using artificial neural networks.

    PubMed

    Ibrahim, Doaa; Frize, Monique; Walker, Robin C

    2006-01-01

    Artificial Neural Networks (ANNs) have been used in identifying the risk factors for many medical outcomes. In this paper, the risk factors for a low Apgar score are introduced. This is the first time, to our knowledge, that ANNs have been used for Apgar score prediction. The medical domain of interest is the perinatal database provided by the Perinatal Partnership Program of Eastern and Southeastern Ontario (PPPESO). The ability of feed-forward back-propagation ANNs to generate a strong predictive model with the most influential variables is tested. Finally, minimal sets of variables (risk factors) that are important in predicting the Apgar score outcome without degrading the ANN performance are identified.

  13. Decay of standard-model-like Higgs boson h → μτ in a 3-3-1 model with inverse seesaw neutrino masses

    NASA Astrophysics Data System (ADS)

    Nguyen, T. Phong; Le, T. Thuy; Hong, T. T.; Hue, L. T.

    2018-04-01

    By adding new gauge singlets of neutral leptons, the improved versions of the 3-3-1 models with right-handed neutrinos have been recently introduced in order to explain recent experimental neutrino oscillation data through the inverse seesaw mechanism. We prove that these models predict promising signals of lepton-flavor-violating decays of the standard-model-like Higgs boson h_1^0 → μτ, eτ, which are suppressed in the original versions. One-loop contributions to these decay amplitudes are introduced in the unitary gauge. Based on a numerical investigation, we find that the branching ratios of the decays h_1^0 → μτ, eτ can reach values of 10^-5 in the regions of parameter space satisfying the current experimental data of the decay μ → eγ. The value of 10^-4 appears when the Yukawa couplings of leptons are close to the perturbative limit. Some interesting properties of these regions of parameter space are also discussed.

  14. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third statistical moments, spectra, and probability density functions of the scalar field, have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.

  15. Predicting mining activity with parallel genetic algorithms

    USGS Publications Warehouse

    Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.

    2005-01-01

    We explore several different techniques in our quest to improve the overall model performance of a genetic algorithm calibrated probabilistic cellular automata. We use the Kappa statistic to measure correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.

  16. On Verifying Currents and Other Features in the Hawaiian Islands Region Using Fully Coupled Ocean/Atmosphere Mesoscale Prediction System Compared to Global Ocean Model and Ocean Observations

    NASA Astrophysics Data System (ADS)

    Jessen, P. G.; Chen, S.

    2014-12-01

    This poster introduces and evaluates ocean features in the Hawaii, USA region using the U.S. Navy's fully Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS-OS™) coupled to the Navy Coastal Ocean Model (NCOM). It also outlines some challenges in verifying ocean currents in the open ocean. The system is evaluated using in situ ocean data and initial forcing fields from the operational global Hybrid Coordinate Ocean Model (HYCOM). Verification shows difficulties in modelling downstream currents off the Hawaiian islands (Hawaii's wake). Comparing HYCOM with NCOM current fields shows some displacement of small features such as eddies. Generally, there is fair agreement between HYCOM and NCOM in salinity and temperature fields, and good agreement in SSH fields.

  17. Estimation of mechanical properties of nanomaterials using artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.

    2014-09-01

    Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we introduce the application of multi-gene genetic programming (MGGP) and support vector regression (SVR) to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of CNT compressive strength made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful method for predicting the compressive strength of carbon nanotubes.
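
    To make the regression step concrete, the sketch below fits a support vector regressor mapping (temperature, diameter) to compressive strength on synthetic stand-in data; the toy data-generating law and all hyperparameters are assumptions for illustration, not values from the paper.

```python
# SVR sketch: (temperature, diameter) -> compressive strength on toy data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
temperature = rng.uniform(100.0, 1200.0, 200)       # K (placeholder range)
diameter = rng.uniform(0.5, 3.0, 200)               # nm (placeholder range)
strength = 100 - 0.03 * temperature + 15 / diameter + rng.normal(0, 2, 200)  # toy law

X = np.column_stack([temperature, diameter])
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5)).fit(X, strength)
print(model.predict([[300.0, 1.0]]))                # predicted strength at 300 K, 1 nm
```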

  18. Elucidating Inherent Uncertainties in Data Assimilation for Predictions Incorporating Non-stationary Processes - Focus on Predictive Phenology

    NASA Astrophysics Data System (ADS)

    Lowman, L.; Barros, A. P.

    2017-12-01

    Data assimilation (DA) is the widely accepted procedure for estimating parameters within predictive models because of the adaptability and uncertainty quantification offered by Bayesian methods. DA applications in phenology modeling offer critical insights into how extreme weather or changes in climate impact the vegetation life cycle. Changes in leaf onset and senescence, root phenology, and intermittent leaf shedding imply large changes in the surface radiative, water, and carbon budgets at multiple scales. Models of leaf phenology require concurrent atmospheric and soil conditions to determine how biophysical plant properties respond to changes in temperature, light and water demand. Presently, climatological records for the fraction of photosynthetically active radiation (FPAR) and leaf area index (LAI), the modelled states indicative of plant phenology, are not available. Further, DA models are typically trained on short periods of record (e.g., less than 10 years). Using limited records within a DA framework imposes non-stationarity on the estimated parameters and the resulting predicted model states. This talk discusses how the uncertainty introduced by the inherent non-stationarity of the modeled processes propagates through a land-surface hydrology model coupled to a predictive phenology model. How water demand is accounted for in the upscaling of DA model inputs, and the choice of analysis period, serve as key sources of uncertainty in the FPAR and LAI predictions. Parameters estimated from different DA periods effectively calibrate a plant water-use strategy within the land-surface hydrology model. For example, when extreme droughts are included in the DA period, the plants are trained to take up water, transpire, and assimilate carbon under favorable conditions and to shut down quickly at the onset of water stress.

  19. 2018 update to the HIV-TRePS system: the development of new computational models to predict HIV treatment outcomes, with or without a genotype, with enhanced usability for low-income settings.

    PubMed

    Revell, Andrew D; Wang, Dechao; Perez-Elias, Maria-Jesus; Wood, Robin; Cogill, Dolphina; Tempelman, Hugo; Hamers, Raph L; Reiss, Peter; van Sighem, Ard I; Rehm, Catherine A; Pozniak, Anton; Montaner, Julio S G; Lane, H Clifford; Larder, Brendan A

    2018-06-08

    Optimizing antiretroviral drug combination on an individual basis can be challenging, particularly in settings with limited access to drugs and genotypic resistance testing. Here we describe our latest computational models to predict treatment responses, with or without a genotype, and compare their predictive accuracy with that of genotyping. Random forest models were trained to predict the probability of virological response to a new therapy introduced following virological failure using up to 50 000 treatment change episodes (TCEs) without a genotype and 18 000 TCEs including genotypes. Independent data sets were used to evaluate the models. This study tested the effects on model accuracy of relaxing the baseline data timing windows, the use of a new filter to exclude probable non-adherent cases and the addition of maraviroc, tipranavir and elvitegravir to the system. The no-genotype models achieved area under the receiver operator characteristic curve (AUC) values of 0.82 and 0.81 using the standard and relaxed baseline data windows, respectively. The genotype models achieved AUC values of 0.86 with the new non-adherence filter and 0.84 without. Both sets of models were significantly more accurate than genotyping with rules-based interpretation, which achieved AUC values of only 0.55-0.63, and were marginally more accurate than previous models. The models were able to identify alternative regimens that were predicted to be effective for the vast majority of cases in which the new regimen prescribed in the clinic failed. These latest global models predict treatment responses accurately even without a genotype and have the potential to help optimize therapy, particularly in resource-limited settings.
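
    A minimal sketch of the core modelling step, a random-forest classifier whose probability outputs are scored by AUC, is given below; the features and labels are synthetic placeholders rather than real treatment change episodes.

```python
# Random-forest response prediction evaluated by AUC, in the spirit of the
# TCE models described above; features and labels are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))        # e.g. baseline viral load, CD4, drug history
y = (X[:, :5].sum(axis=1) + rng.normal(size=2000)) > 0  # synthetic response label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.2f}")
```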

  20. Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.

    PubMed

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

    Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems for potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS show good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. © 2012 Diabetes Technology Society.
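
    The recursive, moving-window flavour of the approach can be sketched with an ordinary PLS regressor re-fitted on a sliding window of lagged glucose values and a simple threshold alarm on the six-step-ahead prediction. This is a simplified stand-in for the RARPLS algorithm, with all window sizes, thresholds, and the synthetic glucose trace chosen for illustration.

```python
# Moving-window PLS sketch of a six-step-ahead glucose predictor with a
# threshold alarm; a simplified stand-in for RARPLS, not the authors' code.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def lagged_matrix(cgm, lags):
    """Row t holds [g(t-lags+1), ..., g(t)]."""
    return np.column_stack([cgm[i:len(cgm) - lags + i + 1] for i in range(lags)])

rng = np.random.default_rng(1)
cgm = 110 + 50 * np.sin(np.linspace(0, 20, 400)) + rng.normal(0, 3, 400)  # toy trace

lags, horizon, window, threshold = 6, 6, 60, 70.0   # 6 steps = 30 min at 5-min sampling
X = lagged_matrix(cgm, lags)[:-horizon]
y = cgm[lags + horizon - 1:]                        # glucose 'horizon' steps ahead

for t in range(window, len(X)):
    pls = PLSRegression(n_components=3)
    pls.fit(X[t - window:t], y[t - window:t])       # re-fit on the sliding window
    y_hat = float(pls.predict(X[t:t + 1]).ravel()[0])
    if y_hat < threshold:
        print(f"step {t}: predicted glucose {y_hat:.0f} -> early hypoglycemia alarm")
```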

  1. Hypoglycemia Early Alarm Systems Based on Recursive Autoregressive Partial Least Squares Models

    PubMed Central

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

    Background Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems for potential hypoglycemia. Methods A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Results Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS show good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. Conclusions The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. PMID:23439179

  2. Prediction of the Wall Factor of Arbitrary Particle Settling through Various Fluid Media in a Cylindrical Tube Using Artificial Intelligence

    PubMed Central

    Li, Mingzhong; Xue, Jianquan; Li, Yanchao; Tang, Shukai

    2014-01-01

    Considering the influence of particle shape and the rheological properties of the fluid, two artificial intelligence methods (Artificial Neural Network and Support Vector Machine) were used to predict the wall factor, which is widely introduced to deduce the net hydrodynamic drag force of confining boundaries on settling particles. 513 data points were culled from the experimental data of previous studies and divided into a training set and a test set. Particles with various shapes were divided into three kinds: sphere, cylinder, and rectangular prism; feature parameters of each kind of particle were extracted, and prediction models for spheres and cylinders were established using artificial neural networks. Due to the small number of rectangular prism samples, a support vector machine, which is better suited to small-sample problems, was used to predict their wall factor. The characteristic dimension was presented to describe the shape and size of the diverse particles, and a comprehensive prediction model for particles with arbitrary shapes was established to cover all types of conditions. Comparisons were conducted between the predicted values and the experimental results. PMID:24772024

  3. Prediction of the wall factor of arbitrary particle settling through various fluid media in a cylindrical tube using artificial intelligence.

    PubMed

    Li, Mingzhong; Zhang, Guodong; Xue, Jianquan; Li, Yanchao; Tang, Shukai

    2014-01-01

    Considering the influence of particle shape and the rheological properties of the fluid, two artificial intelligence methods (Artificial Neural Network and Support Vector Machine) were used to predict the wall factor, which is widely introduced to deduce the net hydrodynamic drag force of confining boundaries on settling particles. 513 data points were culled from the experimental data of previous studies and divided into a training set and a test set. Particles with various shapes were divided into three kinds: sphere, cylinder, and rectangular prism; feature parameters of each kind of particle were extracted, and prediction models for spheres and cylinders were established using artificial neural networks. Due to the small number of rectangular prism samples, a support vector machine, which is better suited to small-sample problems, was used to predict their wall factor. The characteristic dimension was presented to describe the shape and size of the diverse particles, and a comprehensive prediction model for particles with arbitrary shapes was established to cover all types of conditions. Comparisons were conducted between the predicted values and the experimental results.

  4. Prediction of Sea Surface Temperature Using Long Short-Term Memory

    NASA Astrophysics Data System (ADS)

    Zhang, Qin; Wang, Hui; Dong, Junyu; Zhong, Guoqiang; Sun, Xin

    2017-10-01

    This letter adopts long short-term memory (LSTM) to predict sea surface temperature (SST); to our knowledge, this is the first attempt to use a recurrent neural network to solve the SST prediction problem and to make one-week and one-month daily predictions. We formulate the SST prediction problem as a time series regression problem. LSTM is a special kind of recurrent neural network which introduces a gate mechanism into the vanilla RNN to prevent the vanishing or exploding gradient problem. It has a strong ability to model the temporal relationships of time series data and can handle the long-term dependency problem well. The proposed network architecture is composed of two kinds of layers: an LSTM layer and a fully connected dense layer. The LSTM layer is utilized to model the time series relationship. The fully connected layer is utilized to map the output of the LSTM layer to a final prediction. We explore the optimal setting of this architecture by experiments and report the accuracy for the coastal seas of China to confirm the effectiveness of the proposed method. In addition, we also show its online updating characteristics.
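
    The described architecture, an LSTM layer feeding a fully connected layer, can be sketched directly. The toy seasonal series, layer sizes, and training loop below are illustrative choices, not the paper's data or settings.

```python
# Minimal sketch: LSTM layer + fully connected layer mapping a window of past
# SST values to the next value; the toy series and sizes are illustrative.
import torch
import torch.nn as nn

class SSTPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)          # map last LSTM state to prediction

    def forward(self, x):                       # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])           # (batch, 1)

t = torch.arange(0, 2000, dtype=torch.float32)
sst = 20 + 5 * torch.sin(2 * torch.pi * t / 365)    # seasonal stand-in for daily SST
window = 30
X = torch.stack([sst[i:i + window] for i in range(len(sst) - window)]).unsqueeze(-1)
y = sst[window:].unsqueeze(-1)

model = SSTPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```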

  5. Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Morita, Kazuki

    2018-03-01

    We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical-grade silicon refining process. This model was based on the hypothesis of reversible reactions. The reaction rate coefficient retains the same form, but errors in the terminal boron concentration can be introduced when irreversible reactions are assumed instead. Experimental data from published studies were used to develop a model that fits the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species, and the prediction was confirmed by thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.
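
    The abstract states that dependence only in words; one consistent reading, written here as an assumption rather than the authors' fitted equation, is that k scales with both stated factors:

```latex
% Assumed illustrative form; the proportionality constant is a fitted
% parameter not given in the abstract.
k \;\propto\; p(\mathrm{H_2O})^{2}\,\sqrt{p(\mathrm{H_2})}
```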

  6. Bounded rationality alters the dynamics of paediatric immunization acceptance.

    PubMed

    Oraby, Tamer; Bauch, Chris T

    2015-06-02

    Interactions between disease dynamics and vaccinating behaviour have been explored in many coupled behaviour-disease models. Cognitive effects such as risk perception, framing, and subjective probabilities of adverse events can be important determinants of vaccinating behaviour, and represent departures from the purely "rational" decision model that are often described as "bounded rationality". However, the impact of such cognitive effects in the context of paediatric infectious disease vaccines has received relatively little attention. Here, we develop a disease-behaviour model that accounts for bounded rationality through prospect theory. We analyze the model and compare its predictions to a reduced model that lacks bounded rationality. We find that, in general, introducing bounded rationality increases the dynamical richness of the model and makes it harder to eliminate a paediatric infectious disease. In some cases, a low-cost, highly efficacious vaccine can be refused even when the rational decision model predicts acceptance. Injunctive social norms can prevent vaccine refusal if vaccine acceptance is sufficiently high at the beginning of the vaccination campaign. Cognitive processes can have major impacts on the predictions of behaviour-disease models, and further study of such processes in the context of vaccination is thus warranted.

  7. Prediction of enzyme activity with neural network models based on electronic and geometrical features of substrates.

    PubMed

    Szaleniec, Maciej

    2012-01-01

    Artificial Neural Networks (ANNs) are introduced as robust and versatile tools in quantitative structure-activity relationship (QSAR) modeling. Their application to the modeling of enzyme reactivity is discussed, along with methodological issues. Methods of input variable selection, optimization of network internal structure, data set division and model validation are discussed. The application of ANNs in the modeling of enzyme activity over the last 20 years is briefly recounted. The discussed methodology is exemplified by the case of ethylbenzene dehydrogenase (EBDH). Intelligent Problem Solver and genetic algorithms are applied for input vector selection, whereas k-means clustering is used to partition the data into training and test cases. The obtained models exhibit high correlation between the predicted and experimental values (R² > 0.9). Sensitivity analyses and study of the response curves are used as tools for the physicochemical interpretation of the models in terms of the EBDH reaction mechanism. Neural networks are shown to be a versatile tool for the construction of robust QSAR models that can be applied to a range of aspects important in drug design and the prediction of biological activity.

  8. Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Morita, Kazuki

    2018-06-01

    We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical-grade silicon refining process. This model was based on the hypothesis of reversible reactions. The reaction rate coefficient retains the same form, but errors in the terminal boron concentration can be introduced when irreversible reactions are assumed instead. Experimental data from published studies were used to develop a model that fits the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species, and the prediction was confirmed by thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.

  9. QoS prediction for web services based on user-trust propagation model

    NASA Astrophysics Data System (ADS)

    Thinh, Le-Van; Tu, Truong-Dinh

    2017-10-01

    Web service providers and users play an important online role; however, the rapidly growing number of service providers and users creates many Web services with similar functions. This is an active area of research, and researchers seek to propose solutions that recommend the best service to users. Collaborative filtering (CF) algorithms are widely used in recommendation systems, although these are less effective for cold-start users. Recently, some recommender systems have been developed based on social network models, and the results show that social network models perform better than CF, especially for cold-start users. However, most social network-based recommendations do not consider the user's mood. This is a hidden source of information, and is very useful in improving prediction efficiency. In this paper, we introduce a new model called User-Trust Propagation (UTP). The model uses a combination of trust and the mood of users to predict the QoS value, and matrix factorisation (MF) is used to train the model. The experimental results show that the proposed model gives better accuracy than other models, especially for the cold-start problem.
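
    The training step rests on matrix factorisation; the sketch below shows plain MF on a sparse user x service QoS matrix trained by SGD, leaving out the trust-propagation and mood terms described above. All dimensions, rates, and the synthetic data are illustrative.

```python
# Plain matrix factorisation on a sparse user x service QoS matrix (SGD);
# the UTP model's trust and mood terms are omitted in this sketch.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_services, k = 20, 30, 4
Q = rng.uniform(0.1, 2.0, size=(n_users, n_services))   # observed QoS (e.g. response time)
mask = rng.random(Q.shape) < 0.3                        # only 30% of entries observed

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_services, k))
lr, reg = 0.05, 0.02

for _ in range(200):                                    # SGD over observed entries
    for i, j in zip(*np.nonzero(mask)):
        err = Q[i, j] - U[i] @ V[j]
        ui = U[i].copy()                                # keep old value for V's gradient
        U[i] += lr * (err * V[j] - reg * U[i])
        V[j] += lr * (err * ui - reg * V[j])

pred = U @ V.T                                          # fills in unobserved QoS values
rmse = np.sqrt(np.mean((pred[mask] - Q[mask]) ** 2))
print(f"training RMSE: {rmse:.3f}")
```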

  10. Bounded rationality alters the dynamics of paediatric immunization acceptance

    PubMed Central

    Oraby, Tamer; Bauch, Chris T.

    2015-01-01

    Interactions between disease dynamics and vaccinating behaviour have been explored in many coupled behaviour-disease models. Cognitive effects such as risk perception, framing, and subjective probabilities of adverse events can be important determinants of vaccinating behaviour, and represent departures from the purely “rational” decision model that are often described as “bounded rationality”. However, the impact of such cognitive effects in the context of paediatric infectious disease vaccines has received relatively little attention. Here, we develop a disease-behaviour model that accounts for bounded rationality through prospect theory. We analyze the model and compare its predictions to a reduced model that lacks bounded rationality. We find that, in general, introducing bounded rationality increases the dynamical richness of the model and makes it harder to eliminate a paediatric infectious disease. In some cases, a low-cost, highly efficacious vaccine can be refused even when the rational decision model predicts acceptance. Injunctive social norms can prevent vaccine refusal if vaccine acceptance is sufficiently high at the beginning of the vaccination campaign. Cognitive processes can have major impacts on the predictions of behaviour-disease models, and further study of such processes in the context of vaccination is thus warranted. PMID:26035413

  11. Adhesion design maps for bio-inspired attachment systems.

    PubMed

    Spolenak, Ralph; Gorb, Stanislav; Arzt, Eduard

    2005-01-01

    Fibrous surface structures can improve the adhesion of objects to other surfaces. Animals, such as flies and geckos, take advantage of this principle by developing "hairy" contact structures which ensure controlled and repeatable adhesion and detachment. Mathematical models for fiber adhesion predict pronounced dependencies of contact performance on the geometry and the elastic properties of the fibers. In this paper the limits of such contacts imposed by fiber strength, fiber condensation, compliance, and ideal contact strength are modeled for spherical contact tips. Based on this, we introduce the concept of "adhesion design maps" which visualize the predicted mechanical behavior. The maps are useful for understanding biological systems and for guiding experimentation to achieve optimum artificial contacts.

  12. Development of the Semi-implicit Time Integration in KIM-SH

    NASA Astrophysics Data System (ADS)

    NAM, H.

    2015-12-01

    The Korea Institute of Atmospheric Prediction Systems (KIAPS) was founded in 2011 by the Korea Meteorological Administration (KMA) to develop Korea's own global Numerical Weather Prediction (NWP) system as a nine-year (2011-2019) project. KIM-SH is the KIAPS integrated model, a spectral-element model based on HOMME. In KIM-SH, explicit time integration schemes are currently employed. We introduce three- and two-time-level semi-implicit schemes in KIM-SH as the time integration. Explicit schemes, however, tend to be unstable and require very small timesteps, while semi-implicit schemes are very stable and can take much larger timesteps. We define the linear and reference values and then, following the definition of the semi-implicit scheme, apply GMRES as the linear solver. The numerical results from experiments will be introduced along with the current development status of the time integration in KIM-SH. Several numerical examples are shown to confirm the efficiency and reliability of the proposed schemes.
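
    The essential mechanics, treating the stiff linear part implicitly and handing the resulting linear system to GMRES, can be sketched in a few lines. The 1-D diffusion/advection stand-in below is purely illustrative and has nothing to do with the KIM-SH dynamical core.

```python
# One semi-implicit step: (I - dt*L) u^{n+1} = u^n + dt*N(u^n), solved by GMRES.
# L is a stiff linear diffusion operator; N is the explicit (non-stiff) part.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres

n = 200
dx, dt, nu = 1.0 / n, 0.01, 1e-3
L = nu * diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
u = np.exp(-((np.linspace(0, 1, n) - 0.5) ** 2) / 0.01)   # initial Gaussian bump

A = identity(n) - dt * L
nonlinear = lambda v: -v * np.gradient(v, dx)              # toy advection term
for step in range(10):
    u, info = gmres(A, u + dt * nonlinear(u))              # implicit solve each step
    assert info == 0, "GMRES did not converge"
print(u.max())
```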

  13. Low Data Drug Discovery with One-Shot Learning.

    PubMed

    Altae-Tran, Han; Ramsundar, Bharath; Pappu, Aneesh S; Pande, Vijay

    2017-04-26

    Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds (Ma, J. et al. J. Chem. Inf. 2015, 55, 263-274). However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the iterative refinement long short-term memory, that, when combined with graph convolutional neural networks, significantly improves learning of meaningful distance metrics over small-molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep-learning in drug discovery (Ramsundar, B. deepchem.io. https://github.com/deepchem/deepchem, 2016).

  14. In silico assessment of the acute toxicity of chemicals: recent advances and new model for multitasking prediction of toxic effect.

    PubMed

    Kleandrova, Valeria V; Luan, Feng; Speck-Planche, Alejandro; Cordeiro, M Natália D S

    2015-01-01

    The assessment of acute toxicity is one of the most important stages to ensure the safety of chemicals with potential applications in pharmaceutical sciences, biomedical research, or any other industrial branch. A huge and indiscriminate number of toxicity assays have been carried out on laboratory animals. In this sense, computational approaches involving models based on quantitative-structure activity/toxicity relationships (QSAR/QSTR) can help to rationalize time and financial costs. Here, we discuss the most significant advances in the last 6 years focused on the use of QSAR/QSTR models to predict acute toxicity of drugs/chemicals in laboratory animals, employing large and heterogeneous datasets. The advantages and drawbacks of the different QSAR/QSTR models are analyzed. As a contribution to the field, we introduce the first multitasking (mtk) QSTR model for simultaneous prediction of acute toxicity of compounds by considering different routes of administration, diverse breeds of laboratory animals, and the reliability of the experimental conditions. The mtk-QSTR model was based on artificial neural networks (ANN), allowing the classification of compounds as toxic or non-toxic. This model correctly classified more than 94% of the 1646 cases present in the whole dataset, and its applicability was demonstrated by performing predictions of different chemicals such as drugs, dietary supplements, and molecules which could serve as nanocarriers for drug delivery. The predictions given by the mtk-QSTR model are in very good agreement with the experimental results.

  15. Protein structure modeling and refinement by global optimization in CASP12.

    PubMed

    Hong, Seung Hwan; Joung, InSuk; Flores-Canales, Jose C; Manavalan, Balachandran; Cheng, Qianyi; Heo, Seungryong; Kim, Jong Yun; Lee, Sun Young; Nam, Mikyung; Joo, Keehyoung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2018-03-01

    For protein structure modeling in the CASP12 experiment, we have developed a new protocol based on our previous CASP11 approach. The global optimization method of conformational space annealing (CSA) was applied to 3 stages of modeling: multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain re-modeling. For better template selection and model selection, we updated our model quality assessment (QA) method with the newly developed SVMQA (support vector machine for quality assessment). For 3D chain building, we updated our energy function by including restraints generated from predicted residue-residue contacts. New energy terms for the predicted secondary structure and predicted solvent accessible surface area were also introduced. For difficult targets, we proposed a new method, LEEab, where the template term played a less significant role than it did in LEE, complemented by increased contributions from other terms such as the predicted contact term. For TBM (template-based modeling) targets, LEE performed better than LEEab, but for FM targets, LEEab was better. For model refinement, we modified our CASP11 molecular dynamics (MD) based protocol by using explicit solvents and tuning down restraint weights. Refinement results from MD simulations that used a new augmented statistical energy term in the force field were quite promising. Finally, when using inaccurate information (such as the predicted contacts), it was important to use the Lorentzian function for which the maximal penalty arising from wrong information is always bounded. © 2017 Wiley Periodicals, Inc.
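
    The abstract does not spell out the functional form, but a commonly used bounded Lorentzian-type restraint, given here as an assumed illustration of the stated property, makes the point: the penalty grows near the target value yet saturates, so a wrong restraint can only do bounded damage.

```latex
% Assumed illustrative form: w is the restraint weight, r_0 the restrained
% distance, sigma a width; E(r) -> w as |r - r_0| grows, so it is bounded.
E(r) \;=\; w\,\frac{(r - r_0)^2}{(r - r_0)^2 + \sigma^2}
```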

  16. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs

    PubMed Central

    2017-01-01

    Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package. PMID:29107980

  17. Bubbles and denaturation in DNA

    NASA Astrophysics Data System (ADS)

    van Erp, T. S.; Cuesta-López, S.; Peyrard, M.

    2006-08-01

    The local opening of DNA is an intriguing phenomenon from a statistical-physics point of view, but is also essential for its biological function. For instance, the transcription and replication of our genetic code cannot take place without the unwinding of the DNA double helix. Although these biological processes are driven by proteins, there might well be a relation between these biological openings and the spontaneous bubble formation due to thermal fluctuations. Mesoscopic models, like the Peyrard-Bishop-Dauxois (PBD) model, have fairly accurately reproduced some experimental denaturation curves and the sharp phase transition in the thermodynamic limit. It is, hence, tempting to see whether these models could be used to predict the biological activity of DNA. In a previous study, we introduced a method that allows us to obtain very accurate results on this subject, which showed that some previous claims in this direction, based on molecular-dynamics studies, were premature. This could either imply that the present PBD model should be improved or that biological activity can only be predicted in a more complex framework that involves interactions with proteins and superhelical stresses. In this article, we give a detailed description of the statistical method introduced before. Moreover, for several DNA sequences, we give a thorough analysis of the bubble statistics as a function of position and bubble size, and of the so-called l-denaturation curves that can be measured experimentally. These show that some important experimental observations are missing in the present model. We discuss how the present model could be improved.

  18. Why are Anxiety and Depressive Symptoms Comorbid in Youth? A Multi-Wave, Longitudinal Examination of Competing Etiological Models

    PubMed Central

    Cohen, Joseph R.; Young, Jami F.; Gibb, Brandon E.; Hankin, Benjamin L.; Abela, John R. Z.

    2014-01-01

    Background The present study sought to clarify the development of comorbid emotional distress by comparing different explanations for how youth develop anxiety and depressive symptoms. Specifically, we introduced the diathesis-anxiety approach (whether cognitive vulnerabilities interact with anxiety symptoms), and compared it to a causal model (anxiety symptoms predicting depressive symptoms) and a correlated liabilities model (whether cognitive vulnerabilities interact with stressors to predict both anxiety and depressive symptoms) to examine which model best explained the relation between depressive and anxiety symptoms in youth. Methods 678 3rd (n=208), 6th (n=245), and 9th (n=225) grade girls (n=380) and boys (n=298) completed self-report measures at baseline assessing cognitive vulnerabilities (rumination and self-criticism), stressors, and depressive and anxiety symptoms. Every 3 months over the next 18 months, youth completed follow-up measures of symptoms and stressors. Results While limited support was found for a causal (p > .10) or correlated-liabilities model (p > .05) of comorbidity, findings did support a diathesis-anxiety approach for both self-criticism (t(2494) = 3.36, p < .001) and rumination (t(2505) = 2.40, p < .05). Limitations The present study’s findings are based on self-report measures and make inferences concerning comorbidity from a community sample. Conclusions These results may help clarify past research concerning comorbidity by introducing a diathesis-anxiety approach as a viable model for understanding which youth are most at risk of developing comorbid emotional distress. PMID:24751303

  19. Prediction of Human Activity by Discovering Temporal Sequence Patterns.

    PubMed

    Li, Kang; Fu, Yun

    2014-08-01

    Early prediction of ongoing human activity has become more valuable in a large variety of time-critical applications. To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions and interacting objects. Unlike early detection of short-duration simple actions, we propose a novel framework for long-duration complex activity prediction by discovering three key aspects of activity: Causality, Context-cue, and Predictability. The major contributions of our work include: (1) a general framework is proposed to systematically address the problem of complex activity prediction by mining temporal sequence patterns; (2) a probabilistic suffix tree (PST) is introduced to model causal relationships between constituent actions, where both large- and small-order Markov dependencies between action units are captured; (3) the context-cue, especially interactive object information, is modeled through sequential pattern mining (SPM), where a series of action and object co-occurrences is encoded as a complex symbolic sequence; (4) we also present a predictive accumulative function (PAF) to depict the predictability of each kind of activity. The effectiveness of our approach is evaluated on two experimental scenarios with two data sets each: action-only prediction and context-aware prediction. Our method achieves superior performance for predicting global activity classes and local action units.
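
    The PST component can be illustrated with a minimal variable-order predictor: count action suffixes up to a maximum order and predict the next action unit from the longest context seen in training. The symbols and toy sequences below are placeholders, and real PSTs additionally prune nodes and smooth probabilities.

```python
# Minimal PST-flavoured next-action predictor with back-off to shorter contexts.
from collections import defaultdict

def train_pst(sequences, max_order=3):
    counts = defaultdict(lambda: defaultdict(int))   # context -> next-symbol counts
    for seq in sequences:
        for i in range(len(seq)):
            for order in range(0, max_order + 1):
                if i - order < 0:
                    break
                counts[tuple(seq[i - order:i])][seq[i]] += 1
    return counts

def predict_next(counts, history, max_order=3):
    for order in range(min(max_order, len(history)), -1, -1):  # longest match first
        context = tuple(history[len(history) - order:])
        if context in counts:
            dist = counts[context]
            total = sum(dist.values())
            return {sym: c / total for sym, c in dist.items()}
    return {}

activities = [["reach", "grasp", "pour", "place"],
              ["reach", "grasp", "drink", "place"],
              ["reach", "grasp", "pour", "place"]]
pst = train_pst(activities)
print(predict_next(pst, ["reach", "grasp"]))   # e.g. {'pour': 0.67, 'drink': 0.33}
```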

  20. Nonlinear model predictive control applied to the separation of praziquantel in simulated moving bed chromatography.

    PubMed

    Andrade Neto, A S; Secchi, A R; Souza, M B; Barreto, A G

    2016-10-28

    An adaptive nonlinear model predictive control scheme for a simulated moving bed unit for the enantioseparation of praziquantel is presented. A first-principles model was applied in the proposed purity control scheme. The main concern with this kind of model in a control framework is the computational effort required to solve it; however, a sufficiently fast solution was achieved. In order to evaluate the controller's performance, several cases were simulated, including external pump and switching valve malfunctions. The problem of plant-model mismatch was also investigated, and for that reason a parameter estimation step was introduced into the control strategy. In every studied scenario, the controller was able to maintain the purity levels at their set points, which were set to 99% and 98.6% for extract and raffinate, respectively. Additionally, fast responses and smooth actuation were achieved. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Sociophysics:. a Review of Galam Models

    NASA Astrophysics Data System (ADS)

    Galam, Serge

    We review a series of models of sociophysics introduced by Galam and Galam et al. over the last 25 years. The models are divided into five different classes, which deal respectively with democratic voting in bottom-up hierarchical systems, decision making, fragmentation versus coalitions, terrorism, and opinion dynamics. For each class the connection to the original physical model and techniques is outlined, underlining both the similarities and the differences. Emphasis is put on the numerous novel and counterintuitive results obtained with respect to the associated social and political framework. Using these models, several major real political events were successfully predicted, including the victory of the French extreme right party in the 2002 first round of the French presidential elections, the fifty-fifty voting in several democratic countries (Germany, Italy, Mexico), and the victory of the "no" to the 2005 French referendum on the European constitution. The perspectives and the challenges of making sociophysics a predictive solid field of science are discussed.

  2. The adaptive safety analysis and monitoring system

    NASA Astrophysics Data System (ADS)

    Tu, Haiying; Allanach, Jeffrey; Singh, Satnam; Pattipati, Krishna R.; Willett, Peter

    2004-09-01

    The Adaptive Safety Analysis and Monitoring (ASAM) system is a hybrid model-based software tool for assisting intelligence analysts to identify terrorist threats, to predict possible evolution of the terrorist activities, and to suggest strategies for countering terrorism. The ASAM system provides a distributed processing structure for gathering, sharing, understanding, and using information to assess and predict terrorist network states. In combination with counter-terrorist network models, it can also suggest feasible actions to inhibit potential terrorist threats. In this paper, we will introduce the architecture of the ASAM system, and discuss the hybrid modeling approach embedded in it, viz., Hidden Markov Models (HMMs) to detect and provide soft evidence on the states of terrorist network nodes based on partial and imperfect observations, and Bayesian networks (BNs) to integrate soft evidence from multiple HMMs. The functionality of the ASAM system is illustrated by way of application to the Indian Airlines Hijacking, as modeled from open sources.

  3. Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.

    PubMed

    Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y

    2007-01-01

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. Caffeine concentration in blood is an important indicator for checking the therapeutic efficacy of the treatment against apnoea. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data on an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reducing bias and the length of credibility intervals). The updating of the pharmacokinetic model using body mass and caffeine concentration data is studied; it shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood after a given treatment, if data are collected on the treated neonate.
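
    The sequential particle update can be sketched generically: weight a cloud of parameter particles by the likelihood of each incoming measurement and resample when the weights degenerate. The one-compartment elimination model, priors, and measurements below are illustrative assumptions, not the paper's caffeine model.

```python
# Sequential importance resampling sketch for updating a parameter posterior
# as measurements arrive; the model and all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 5000
ke = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=n_particles)  # elimination-rate prior
w = np.ones(n_particles) / n_particles

c0, sigma_obs = 10.0, 0.5
times = [6.0, 12.0, 24.0]
observations = [5.4, 3.1, 0.9]          # hypothetical measured concentrations

for t, obs in zip(times, observations):
    pred = c0 * np.exp(-ke * t)                          # model prediction per particle
    w *= np.exp(-0.5 * ((obs - pred) / sigma_obs) ** 2)  # Gaussian likelihood update
    w /= w.sum()
    if 1.0 / np.sum(w**2) < n_particles / 2:             # resample when degenerate
        idx = rng.choice(n_particles, size=n_particles, p=w)
        ke, w = ke[idx], np.ones(n_particles) / n_particles
    print(f"t={t:5.1f} h  posterior mean ke = {np.sum(w * ke):.3f} 1/h")
```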

  4. Conditional dissipation of scalars in homogeneous turbulence: Closure for MMC modelling

    NASA Astrophysics Data System (ADS)

    Wandel, Andrew P.

    2013-08-01

    While the mean and unconditional variance should be predicted well by any reasonable turbulent combustion model, these are generally not sufficient for the accurate modelling of complex phenomena such as extinction/reignition. An additional criterion has recently been introduced: accurate modelling of the dissipation timescales associated with fluctuations of scalars about their conditional mean (conditional dissipation timescales). Analysis of Direct Numerical Simulation (DNS) results for a passive scalar shows that the conditional dissipation timescale is of the order of the integral timescale and smaller than the unconditional dissipation timescale. A model is proposed: the conditional dissipation timescale is proportional to the integral timescale. This model is used in Multiple Mapping Conditioning (MMC) modelling for a passive scalar case and a reactive scalar case, comparing to DNS results for both. The results show that this model improves the accuracy of MMC predictions so as to match the DNS results more closely using a relatively coarse spatial resolution compared to other turbulent combustion models.

  5. A Spatial Modeling Approach to Predicting the Secondary Spread of Invasive Species Due to Ballast Water Discharge

    PubMed Central

    Sieracki, Jennifer L.; Bossenbroek, Jonathan M.; Chadderton, W. Lindsay

    2014-01-01

    Ballast water in ships is an important contributor to the secondary spread of invasive species in the Laurentian Great Lakes. Here, we use a model previously created to determine the role ballast water management has played in the secondary spread of viral hemorrhagic septicemia virus (VHSV) to identify the future spread of one current and two potential invasive species in the Great Lakes, the Eurasian Ruffe (Gymnocephalus cernuus), killer shrimp (Dikerogammarus villosus), and golden mussel (Limnoperna fortunei), respectively. Model predictions for Eurasian Ruffe have been used to direct surveillance efforts within the Great Lakes and DNA evidence of ruffe presence was recently reported from one of three high risk port localities identified by our model. Predictions made for killer shrimp and golden mussel suggest that these two species have the potential to become rapidly widespread if introduced to the Great Lakes, reinforcing the need for proactive ballast water management. The model used here is flexible enough to be applied to any species capable of being spread by ballast water in marine or freshwater ecosystems. PMID:25470822

  6. Uncertainty Quantification and Certification Prediction of Low-Boom Supersonic Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.

    2014-01-01

    The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic, transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used to reduce the computational cost of propagating mixed inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field to ground-level propagation model. A methodology has also been introduced to quantify the plausibility of a design to pass a certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.

  7. The importance of radiation for semiempirical water-use efficiency models

    DOE PAGES

    Boese, Sven; Jung, Martin; Carvalhais, Nuno; ...

    2017-06-22

    Water-use efficiency (WUE) is a fundamental property for the coupling of carbon and water cycles in plants and ecosystems. Existing model formulations predicting this variable differ in the type of response of WUE to the atmospheric vapor pressure deficit of water (VPD). We tested a representative WUE model on the ecosystem scale at 110 eddy covariance sites of the FLUXNET initiative by predicting evapotranspiration (ET) based on gross primary productivity (GPP) and VPD. We found that introducing an intercept term in the formulation increases model performance considerably, indicating that an additional factor needs to be considered. We demonstrate that this intercept term varies seasonally and we subsequently associate it with radiation. Replacing the constant intercept term with a linear function of global radiation was found to further improve model predictions of ET. Our new semiempirical ecosystem WUE formulation indicates that, averaged over all sites, this radiation term accounts for up to half (39–47 %) of transpiration. These empirical findings challenge the current understanding of water-use efficiency on the ecosystem scale.
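
    The structure of such a formulation can be illustrated with a toy least-squares fit (Python; the functional form ET ≈ GPP·√VPD/uWUE + b0 + b1·Rg follows a common underlying-WUE convention and is our assumption here, as are the synthetic data):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    GPP = rng.uniform(2, 12, n)        # gross primary productivity (synthetic)
    VPD = rng.uniform(0.5, 3.0, n)     # vapor pressure deficit, kPa (synthetic)
    Rg = rng.uniform(50, 300, n)       # global radiation, W m^-2 (synthetic)

    # Synthetic "observed" ET with a radiation-dependent intercept built in.
    ET = GPP * np.sqrt(VPD) / 8.0 + 0.002 * Rg + rng.normal(0, 0.05, n)

    # Design matrix [GPP*sqrt(VPD), 1, Rg]; fit 1/uWUE, b0, b1 by least squares.
    X = np.column_stack([GPP * np.sqrt(VPD), np.ones(n), Rg])
    (inv_uwue, b0, b1), *_ = np.linalg.lstsq(X, ET, rcond=None)
    print("uWUE:", 1.0 / inv_uwue, "intercept terms:", b0, b1)

    # Share of modeled ET attributable to the radiation term (our synthetic
    # number is not comparable to the paper's 39-47% transpiration figure).
    print("radiation share:", (b1 * Rg).sum() / (X @ [inv_uwue, b0, b1]).sum())
    ```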

  8. The applications of machine learning algorithms in the modeling of estrogen-like chemicals.

    PubMed

    Liu, Huanxiang; Yao, Xiaojun; Gramatica, Paola

    2009-06-01

    Increasing concern is being shown by the scientific community, government regulators, and the public about endocrine-disrupting chemicals that, in the environment, adversely affect human and wildlife health through a variety of mechanisms, mainly estrogen receptor-mediated mechanisms of toxicity. Because of the large number of such chemicals in the environment, there is a great need for an effective means of rapidly assessing endocrine-disrupting activity in the toxicology assessment process. When researchers face the challenging task of screening large libraries of molecules for biological activity, the benefits of computational predictive models based on quantitative structure-activity relationships for identifying possible estrogens become immediately obvious. Recently, in order to improve the accuracy of prediction, machine learning techniques have been introduced to build more effective predictive models. In this review we focus on recent advances in the use of these methods in modeling estrogen-like chemicals. The advantages and disadvantages of the machine learning algorithms used in solving this problem, the importance of validation and performance assessment of the built models, and their applicability domains are discussed.

  9. Predicting Electrostatic Forces in RNA Folding

    PubMed Central

    Tan, Zhi-Jie; Chen, Shi-Jie

    2016-01-01

    Metal ion-mediated electrostatic interactions are critical to RNA folding. Although considerable progress has been made in mechanistic studies, the problem of accurately predicting ion effects in RNA folding remains unsolved, mainly due to the complexity of several potentially important issues such as ion correlation and dehydration effects. In this chapter, after giving a brief overview of the experimental findings and theoretical approaches, we focus on a recently developed model, the tightly bound ion (TBI) model, for ion electrostatics in RNA folding. The model is unique because it can treat ion correlation and fluctuation effects for realistic RNA 3D structures. For monovalent ion (such as Na+) solutions, where ion correlation is weak, TBI and the Poisson–Boltzmann (PB) theory give the same results, and the results agree with the experimental data. For multivalent ion (such as Mg2+) solutions, where ion correlation can be strong, however, TBI gives much better predictions than PB. Moreover, the model suggests an ion correlation-induced mechanism for the unusual efficiency of Mg2+ ions in the stabilization of RNA tertiary folds. In this chapter, after introducing the theoretical framework of the TBI model, we describe how to apply the model to predict ion-binding properties and ion-dependent folding stabilities. PMID:20946803

  10. Likelihood of achieving air quality targets under model uncertainties.

    PubMed

    Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W

    2011-01-01

    Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced-form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances the ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
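
    A stripped-down version of the probabilistic attainment test (Python; the reduced-form response function, input distributions, and the 75 ppb target are all illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    # Uncertain photochemical-model inputs, sampled as in the Monte Carlo step
    # (the distributions here are hypothetical).
    emis_scale = rng.lognormal(mean=0.0, sigma=0.2, size=n)   # emission rates
    rate_scale = rng.normal(loc=1.0, scale=0.1, size=n)       # reaction rates

    # Reduced-form model: baseline ozone minus the response to a control
    # strategy, modulated by the uncertain inputs (purely illustrative form).
    baseline = 82.0                                  # ppb
    control_benefit = 9.0 * emis_scale * rate_scale  # ppb reduction
    future_ozone = baseline - control_benefit

    # Likelihood that the control strategy achieves a fixed 75 ppb target.
    print("P(attainment) =", np.mean(future_ozone <= 75.0))
    ```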

  11. On the effects of alternative optima in context-specific metabolic model predictions

    PubMed Central

    Nikoloski, Zoran

    2017-01-01

    The integration of experimental data into genome-scale metabolic models can greatly improve flux predictions. This is achieved by restricting predictions to a more realistic context-specific domain, like a particular cell or tissue type. Several computational approaches to integrate data have been proposed, generally obtaining context-specific (sub)models or flux distributions. However, these approaches may lead to a multitude of equally valid but potentially different models or flux distributions, due to possible alternative optima in the underlying optimization problems. Although this issue introduces ambiguity in context-specific predictions, it has not been generally recognized, especially in the case of model reconstructions. In this study, we analyze the impact of alternative optima in four state-of-the-art context-specific data integration approaches, providing flux distributions, metabolic models, or both. To this end, we present three computational methods and apply them to two particular case studies: leaf-specific predictions from the integration of gene expression data in a metabolic model of Arabidopsis thaliana, and liver-specific reconstructions derived from a human model with various experimental data sources. The application of these methods allows us to obtain the following results: (i) We sample the space of alternative flux distributions in the leaf- and the liver-specific case and quantify the ambiguity of the predictions. In addition, we show how the inclusion of ℓ1-regularization during data integration reduces the ambiguity in both cases. (ii) We generate sets of alternative leaf- and liver-specific models that are optimal to each one of the evaluated model reconstruction approaches. We demonstrate that alternative models of the same context contain a marked fraction of disparate reactions. Further, we show that a careful balance between model sparsity and metabolic functionality helps in reducing the discrepancies between alternative models. Finally, our findings indicate that alternative optima must be taken into account for rendering the context-specific metabolic model predictions less ambiguous. PMID:28557990
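
    The ambiguity described here stems from degenerate optima of the underlying linear programs. A toy flux variability check (Python with SciPy; the four-reaction network is invented for illustration) exposes two equally optimal flux distributions:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: v1: -> A; v2: A -> B; v3: A -> B (parallel route); v4: B ->
    S = np.array([[1, -1, -1, 0],    # steady state of metabolite A
                  [0,  1,  1, -1]])  # steady state of metabolite B
    bounds = [(0, 10), (0, None), (0, None), (0, None)]

    # Step 1: maximize the "biomass" flux v4 (linprog minimizes, hence -v4).
    opt = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
    v4_opt = -opt.fun

    # Step 2: flux variability of v2 at the fixed optimum: the range of
    # equally optimal solutions that makes context-specific predictions
    # ambiguous.
    S_fix = np.vstack([S, [0, 0, 0, 1]])     # pin v4 to its optimal value
    b_fix = [0, 0, v4_opt]
    lo = linprog(c=[0, 1, 0, 0], A_eq=S_fix, b_eq=b_fix, bounds=bounds)
    hi = linprog(c=[0, -1, 0, 0], A_eq=S_fix, b_eq=b_fix, bounds=bounds)
    print(f"optimal v4 = {v4_opt:.1f}, v2 ranges over [{lo.fun:.1f}, {-hi.fun:.1f}]")
    ```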

  12. On the effects of alternative optima in context-specific metabolic model predictions.

    PubMed

    Robaina-Estévez, Semidán; Nikoloski, Zoran

    2017-05-01

    The integration of experimental data into genome-scale metabolic models can greatly improve flux predictions. This is achieved by restricting predictions to a more realistic context-specific domain, like a particular cell or tissue type. Several computational approaches to integrate data have been proposed, generally obtaining context-specific (sub)models or flux distributions. However, these approaches may lead to a multitude of equally valid but potentially different models or flux distributions, due to possible alternative optima in the underlying optimization problems. Although this issue introduces ambiguity in context-specific predictions, it has not been generally recognized, especially in the case of model reconstructions. In this study, we analyze the impact of alternative optima in four state-of-the-art context-specific data integration approaches, providing flux distributions, metabolic models, or both. To this end, we present three computational methods and apply them to two particular case studies: leaf-specific predictions from the integration of gene expression data in a metabolic model of Arabidopsis thaliana, and liver-specific reconstructions derived from a human model with various experimental data sources. The application of these methods allows us to obtain the following results: (i) We sample the space of alternative flux distributions in the leaf- and the liver-specific case and quantify the ambiguity of the predictions. In addition, we show how the inclusion of ℓ1-regularization during data integration reduces the ambiguity in both cases. (ii) We generate sets of alternative leaf- and liver-specific models that are optimal to each one of the evaluated model reconstruction approaches. We demonstrate that alternative models of the same context contain a marked fraction of disparate reactions. Further, we show that a careful balance between model sparsity and metabolic functionality helps in reducing the discrepancies between alternative models. Finally, our findings indicate that alternative optima must be taken into account for rendering the context-specific metabolic model predictions less ambiguous.

  13. Fast integration-based prediction bands for ordinary differential equation models.

    PubMed

    Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel

    2016-04-15

    To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties in the model's parameters and, in turn, to uncertainties in predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components, and a lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches for propagating parameter uncertainties to predictions are hardly feasible. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands that assess the model's uncertainty over the whole time course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis illustrates the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org. Supplementary data are available at Bioinformatics online.
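
    For intuition, the brute-force alternative that the paper's integration-based approach avoids can be sketched as follows (Python with SciPy; the one-state decay model and the parameter distribution are invented):

    ```python
    import numpy as np
    from scipy.integrate import odeint

    # One-state decay model dx/dt = -k*x, a hypothetical stand-in for a
    # biochemical network model.
    def rhs(x, t, k):
        return -k * x

    t = np.linspace(0, 10, 101)
    rng = np.random.default_rng(4)

    # Parameter uncertainty from a (pretend) fit: k ~ lognormal around 0.5.
    ks = rng.lognormal(mean=np.log(0.5), sigma=0.15, size=500)

    # Propagate each draw through the ODE and collect point-wise quantiles;
    # the paper obtains such bands far more cheaply by integrating an
    # auxiliary ODE system instead of repeating this sampling.
    trajs = np.array([odeint(rhs, y0=1.0, t=t, args=(k,)).ravel() for k in ks])
    lower, upper = np.percentile(trajs, [2.5, 97.5], axis=0)
    print("95% prediction band at t=5:", lower[50], upper[50])
    ```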

  14. The balanced ideological antipathy model: explaining the effects of ideological attitudes on inter-group antipathy across the political spectrum.

    PubMed

    Crawford, Jarret T; Mallinas, Stephanie R; Furman, Bryan J

    2015-12-01

    We introduce the balanced ideological antipathy (BIA) model, which challenges assumptions that right-wing authoritarianism (RWA) and social dominance orientation (SDO) predict inter-group antipathy per se. Rather, the effects of RWA and SDO on antipathy should depend on the target's political orientation and political objectives, the specific components of RWA, and the type of antipathy expressed. Consistent with the model, two studies (N = 585) showed that the Traditionalism component of RWA positively and negatively predicted both political intolerance and prejudice toward tradition-threatening and -reaffirming groups, respectively, whereas SDO positively and negatively predicted prejudice (and to some extent political intolerance) toward hierarchy-attenuating and -enhancing groups, respectively. Critically, the Conservatism component of RWA positively predicted political intolerance (but not prejudice) toward each type of target group, suggesting it captures the anti-democratic impulse at the heart of authoritarianism. Recommendations for future research on the relationship between ideological attitudes and inter-group antipathy are discussed. © 2015 by the Society for Personality and Social Psychology, Inc.

  15. Modeling and analysis of the chip formation and transient cutting force during elliptical vibration cutting process

    NASA Astrophysics Data System (ADS)

    Lin, Jieqiong; Guan, Liang; Lu, Mingming; Han, Jinguo; Kan, Yudi

    2017-12-01

    In traditional diamond cutting, the cutting force is usually large, which affects tool life and machining quality. Elliptical vibration cutting (EVC), one of the ultra-precision machining technologies, has many advantages, such as reduced cutting force and extended tool life. However, the transient cutting force in EVC is difficult to predict because of the unique elliptical motion trajectory of the tool. Studying chip formation helps in predicting the cutting force: the geometric features of the chip have important effects on the cutting force, yet few researchers have studied chip formation in EVC. To investigate the time-varying cutting force of EVC, a geometric feature model of the chip is established based on an analysis of chip formation, and the effects of cutting parameters on the chip geometry are analyzed. To predict the transient force quickly and effectively, the chip geometry is introduced into the cutting force model. The calculated results show that the error between the cutting force predicted here and that reported in the literature is less than 2%, which demonstrates the feasibility of the approach.
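
    The elliptical tool trajectory responsible for the time-varying engagement is straightforward to write down (Python; the amplitudes, frequency, and cutting speed are illustrative values, not the paper's parameters):

    ```python
    import numpy as np

    # Elliptical vibration cutting: the tool tip superimposes an elliptical
    # vibration on the nominal cutting motion (parameters are hypothetical).
    f = 40_000.0                 # vibration frequency, Hz
    a, b = 4e-6, 2e-6            # ellipse semi-axes, m (cutting/depth directions)
    v_c = 0.5                    # nominal cutting speed, m/s

    t = np.linspace(0, 3 / f, 600)                # three vibration cycles
    x = v_c * t + a * np.sin(2 * np.pi * f * t)   # cutting direction
    z = b * np.cos(2 * np.pi * f * t)             # depth direction

    # The tool cuts only while moving forward relative to the workpiece; the
    # sign changes of dx/dt delimit the intermittent engagement that shapes
    # the chip geometry and hence the transient force.
    dxdt = np.gradient(x, t)
    print("fraction of cycle engaged:", np.mean(dxdt > 0))
    ```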

  16. Eigenspace perturbations for uncertainty estimation of single-point turbulence closures

    NASA Astrophysics Data System (ADS)

    Iaccarino, Gianluca; Mishra, Aashwin Ananda; Ghili, Saman

    2017-02-01

    Reynolds-averaged Navier-Stokes (RANS) models represent the workhorse for predicting turbulent flows in complex industrial applications. However, RANS closures introduce a significant degree of epistemic uncertainty in predictions due to the potential lack of validity of the assumptions utilized in model formulation. Estimating this uncertainty is a fundamental requirement for building confidence in such predictions. We outline a methodology to estimate this structural uncertainty, incorporating perturbations to the eigenvalues and the eigenvectors of the modeled Reynolds stress tensor. The mathematical foundations of this framework are derived and explained. The framework is then applied to a set of separated turbulent flows and compared with numerical and experimental data, and its predictions are contrasted against those of the eigenvalue-only perturbation methodology. For separated flows, this framework yields a significant enhancement over the established eigenvalue perturbation methodology in explaining the discrepancy against experimental observations and high-fidelity simulations. Furthermore, uncertainty bounds of potential engineering utility can be estimated by performing five specific RANS simulations, reducing the computational expenditure on such an exercise.
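
    A minimal version of the eigenvalue perturbation step (Python/NumPy; the sample Reynolds stress tensor, the perturbation magnitude delta, and the choice of the one-component limiting state are illustrative): decompose the anisotropy tensor, shift its eigenvalues toward a limiting state, and reconstruct a perturbed stress.

    ```python
    import numpy as np

    # A sample modeled Reynolds stress tensor (hypothetical, symmetric).
    R = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.5, 0.1],
                  [0.0, 0.1, 1.0]])
    k = 0.5 * np.trace(R)                       # turbulent kinetic energy

    # Anisotropy tensor and its eigendecomposition.
    a = R / (2 * k) - np.eye(3) / 3.0
    lam, V = np.linalg.eigh(a)                  # eigenvalues sorted ascending

    # Perturb eigenvalues toward the one-component limiting state
    # (-1/3, -1/3, 2/3) by a fraction delta (illustrative magnitude).
    lam_1c = np.array([-1/3, -1/3, 2/3])
    delta = 0.3
    lam_pert = (1 - delta) * lam + delta * lam_1c

    # Reconstruct the perturbed Reynolds stress used to bound RANS predictions
    # (eigenvector perturbations, the paper's extension, are not shown here).
    R_pert = 2 * k * (V @ np.diag(lam_pert) @ V.T + np.eye(3) / 3.0)
    print(np.round(R_pert, 3))
    ```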

  17. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground in engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods for managing product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on predicting failure probabilities of mechanical components and on optimizing reliability through life-cycle cost analysis. This paper reviews the existing methods, harnesses their best features, and simplifies the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life-cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
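
    The core load-resistance Monte Carlo step fits in a few lines (Python; the distributions, their parameters, and the cost figure are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 1_000_000

    # Load effect and component resistance as random variables (hypothetical
    # distributions; in practice these come from load spectra and material data).
    load = rng.normal(loc=300.0, scale=40.0, size=n)          # e.g. stress, MPa
    resistance = rng.lognormal(mean=np.log(450.0), sigma=0.08, size=n)

    # Failure occurs when the load exceeds the resistance.
    p_fail = np.mean(load > resistance)
    print(f"estimated failure probability: {p_fail:.2e}")

    # For risk management, the failure probability feeds an expected
    # life-cycle cost term (cost of failure here is an arbitrary figure).
    print("expected failure cost:", p_fail * 2.0e6)
    ```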

  18. Evaluation of the Acceptance of Audience Response System by Corporations Using the Technology Acceptance Model

    NASA Astrophysics Data System (ADS)

    Chu, Hsing-Hui; Lu, Ta-Jung; Wann, Jong-Wen

    The purpose of this research is to explore enterprises' acceptance of the Audience Response System (ARS) using the Technology Acceptance Model (TAM). The findings show that (1) IT characteristics and facilitating conditions can serve as external variables of TAM; (2) the degree of E-business has a significant positive correlation with employees' behavioral intention; (3) TAM is a good model for predicting and explaining IT acceptance; and (4) demographic variables and industry and firm characteristics have no significant correlation with ARS acceptance. The results provide useful information for managers and ARS providers: (1) ARS providers should focus on creating different usages to enhance interactivity and employees' intention to use; (2) managers should pay attention to building sound internal facilitating conditions for introducing IT; (3) managers should set up strategic stages for introducing IT according to the degree of E-business; and (4) providers should increase product promotion and leverage academia and government to promote ARS.

  19. Predicting Flory-Huggins χ from Simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Wenlin; Gomez, Enrique D.; Milner, Scott T.

    2017-07-01

    We introduce a method, based on a novel thermodynamic integration scheme, to extract Flory-Huggins χ parameters as small as 10^-3 kT for polymer blends from molecular dynamics (MD) simulations. We obtain χ for the archetypical coarse-grained model of nonpolar polymer blends: flexible bead-spring chains with different Lennard-Jones interactions between A and B monomers. Using these χ values and a lattice version of self-consistent field theory (SCFT), we predict the shape of planar interfaces for phase-separated binary blends. Our SCFT results agree with MD simulations, validating both the predicted χ values and our thermodynamic integration method. Combined with atomistic simulations, our method can be applied to predict χ for new polymers from their chemical structures.
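
    Thermodynamic integration itself has a generic skeleton (Python; the ⟨dU/dλ⟩ values are synthetic stand-ins for ensemble averages that MD runs at each coupling value λ would supply, and the identification of the result with χ is schematic rather than the paper's exact mapping):

    ```python
    import numpy as np

    # Coupling parameter lambda switches A-B contacts from "like" to "unlike";
    # at each lambda an MD ensemble average <dU/dlambda> would be measured.
    lam = np.linspace(0.0, 1.0, 11)
    dU_dlam = 0.004 + 0.002 * lam        # synthetic stand-ins, in units of kT

    # Free-energy difference from thermodynamic integration (trapezoidal rule).
    delta_F = np.sum(0.5 * (dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lam))

    # Schematic identification of chi with the per-contact free-energy cost;
    # a value of order 1e-3 kT is what the method is designed to resolve.
    print(f"chi ~ {delta_F:.4f} kT")
    ```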

  20. Evaluation of biogeographical factors in the native range to improve the success of biological control agents in the introduced range

    USDA-ARS?s Scientific Manuscript database

    Biogeographical factors associated with Arundo donax in its native range were evaluated in reference to its key herbivore, an armored scale, Rhizaspidiotus donacis. Climate modeling from location data in Spain and France accurately predicted the native range of the scale in the warmer, drier parts o...

  1. Research efforts on fuels, fuel models, and fire behavior in eastern hardwood forests

    Treesearch

    Thomas A. Waldrop; Lucy Brudnak; Ross J. Phillips; Patrick H. Brose

    2006-01-01

    Although fire was historically important to most eastern hardwood systems, its reintroduction by prescribed burning programs has been slow. As a result, less information is available on these systems to fire managers. Recent research and nationwide programs are beginning to produce usable products to predict fuel accumulation and fire behavior. We introduce some of...

  2. Providing Confidence in Regional Maps in Predicting Where Nonnative Species are Invading the Forested Landscape

    Treesearch

    Dennis M. Jacobs; Victor A. Rudis

    2005-01-01

    Nonnative invasive plant species introduced to the South during the past century threaten forest resources. Knowing their extent is important for strategic management and planning. We used U.S. Department of Agriculture, Forest Service, Forest Inventory and Analysis (FIA) field observations at ground-sampled locations to model the geographic occurrence probability...

  3. Models of Vocabulary Acquisition: Direct Tests and Text-Derived Simulations of Vocabulary Growth

    ERIC Educational Resources Information Center

    Biemiller, Andrew; Rosenstein, Mark; Sparks, Randall; Landauer, Thomas K.; Foltz, Peter W.

    2014-01-01

    Determining word meanings that ought to be taught or introduced is important for educators. A sequence for vocabulary growth can be inferred from many sources, including testing children's knowledge of word meanings at various ages, predicting from print frequency, or adult-recalled Age of Acquisition. A new approach, Word Maturity, is based on…

  4. Providing confidence in regional maps in predicting where nonnative species are invading the forested landscape.

    Treesearch

    Dennis M. Jacobs; Victor A. Rudis

    2005-01-01

    Nonnative invasive plant species introduced to the South during the past century threaten forest resources. Knowing their extent is important for strategic management and planning. We used U.S. Department of Agriculture, Forest Service, Forest Inventory and Analysis (FIA) field observations at ground-sampled locations to model the geographic occurrence probability...

  5. Extending the Simultaneous-Sequential Paradigm to Measure Perceptual Capacity for Features and Words

    ERIC Educational Resources Information Center

    Scharff, Alec; Palmer, John; Moore, Cathleen M.

    2011-01-01

    In perception, divided attention refers to conditions in which multiple stimuli are relevant to an observer. To measure the effect of divided attention in terms of perceptual capacity, we introduce an extension of the simultaneous-sequential paradigm. The extension makes predictions for fixed-capacity models as well as for unlimited-capacity…

  6. Comparison of integrated clustering methods for accurate and stable prediction of building energy consumption data

    DOE PAGES

    Hsu, David

    2015-09-27

    Clustering methods are often used to model energy consumption for two reasons. First, clustering is often used to process data and to improve the predictive accuracy of subsequent energy models. Second, stable clusters that are reproducible with respect to non-essential changes can be used to group, target, and interpret observed subjects. However, it is well known that clustering methods are highly sensitive to the choice of algorithms and variables. This can lead to misleading assessments of predictive accuracy and mis-interpretation of clusters in policymaking. This paper therefore introduces two methods to the modeling of energy consumption in buildings: clusterwise regression, also known as latent class regression, which integrates clustering and regression simultaneously; and cluster validation methods to measure stability. Using a large dataset of multifamily buildings in New York City, clusterwise regression is compared to common two-stage algorithms that use K-means and model-based clustering with linear regression. Predictive accuracy is evaluated using 20-fold cross validation, and the stability of the perturbed clusters is measured using the Jaccard coefficient. These results show that there seems to be an inherent tradeoff between prediction accuracy and cluster stability. This paper concludes by discussing which clustering methods may be appropriate for different analytical purposes.
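
    A compact version of the two-stage baseline the paper compares against (Python with scikit-learn; the synthetic building data and the number of clusters are assumptions, and clusterwise/latent-class regression itself needs an EM-style fit not shown here):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(6)
    n = 2000
    X = rng.normal(size=(n, 3))                 # building features (synthetic)
    group = (X[:, 0] > 0).astype(int)           # hidden segment
    y = np.where(group == 1, 3 * X[:, 1], -2 * X[:, 1]) + rng.normal(0, 0.5, n)

    # Two-stage approach: cluster first, then fit a regression per cluster.
    rmses = []
    for train, test in KFold(n_splits=20, shuffle=True, random_state=0).split(X):
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[train])
        labels_test = km.predict(X[test])
        pred = np.empty(len(test))
        for c in range(2):
            tr_c = train[km.labels_ == c]
            model = LinearRegression().fit(X[tr_c], y[tr_c])
            mask = labels_test == c
            if mask.any():
                pred[mask] = model.predict(X[test][mask])
        rmses.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    print("20-fold CV RMSE:", np.mean(rmses))
    ```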

  7. Niche similarities among introduced and native mountain ungulates.

    PubMed

    Lowrey, B; Garrott, R A; McWhirter, D E; White, P J; DeCesare, N J; Stewart, S T

    2018-03-24

    The niche concept provides a strong foundation for theoretical and applied research among a broad range of disciplines. When two ecologically similar species are sympatric, theory predicts they will occupy distinct ecological niches to reduce competition. Capitalizing on the increasing availability of spatial data, we built from single species habitat suitability models to a multispecies evaluation of the niche partitioning hypothesis with sympatric mountain ungulates: native bighorn sheep (BHS; Ovis canadensis) and introduced mountain goats (MTG; Oreamnos americanus) in the northeast Greater Yellowstone Area. We characterized seasonal niches using two-stage resource selection functions with a used-available design and descriptive summaries of the niche attributes associated with used GPS locations. We evaluated seasonal similarity in niche space according to confidence interval overlap of model coefficients and similarity in geographic space by comparing model predicted values with Schoener's D metric. Our sample contained 37,962 summer locations from 53 individuals (BHS = 31, MTG = 22), and 79,984 winter locations from 57 individuals (BHS = 35, MTG = 22). Slope was the most influential niche component for both species and seasons, and showed the strongest evidence of niche partitioning. Bighorn sheep occurred on steeper slopes than mountain goats in summer and mountain goats occurred on steeper slopes in winter. The pattern of differential selection among species was less prevalent for the remaining covariates, indicating similarity in niche space. Model predictions in geographic space showed broad seasonal similarity (summer D = 0.88, winter D = 0.87), as did niche characterizations from used GPS locations. The striking similarities in seasonal niches suggest that introduced mountain goats will continue to increase their spatial overlap with native bighorn. Our results suggest that reducing densities of mountain goats in hunted areas where they are sympatric with bighorn sheep and impeding their expansion may reduce the possibility of competition and disease transfer. Additional studies that specifically investigate partitioning at finer scales and along dietary or temporal niche axes will help to inform an adaptive management approach. © 2018 by the Ecological Society of America.
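
    Schoener's D, used here to compare the species' predicted distributions, is simple to compute from two habitat-suitability maps (Python; the random maps are placeholders for the RSF model predictions):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Predicted relative habitat suitability for two species over the same
    # grid (placeholder values standing in for the RSF predictions).
    p_bhs = rng.random((100, 100))
    p_mtg = 0.9 * p_bhs + 0.1 * rng.random((100, 100))  # similar by design

    # Normalize each map so it sums to 1, then apply Schoener's D:
    # D = 1 - 0.5 * sum|p1 - p2|; D = 1 means identical use of space.
    p1 = p_bhs / p_bhs.sum()
    p2 = p_mtg / p_mtg.sum()
    D = 1.0 - 0.5 * np.abs(p1 - p2).sum()
    print(f"Schoener's D = {D:.2f}")
    ```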

  8. Design of the Next Generation Aircraft Noise Prediction Program: ANOPP2

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard V., Dr.; Burley, Casey L.

    2011-01-01

    The requirements, constraints, and design of NASA's next generation Aircraft NOise Prediction Program (ANOPP2) are introduced. Similar to its predecessor (ANOPP), ANOPP2 provides the U.S. Government with an independent aircraft system noise prediction capability that can be used as a stand-alone program or within larger trade studies that include performance, emissions, and fuel burn. The ANOPP2 framework is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. ANOPP2 integrates noise prediction and propagation methods, including those found in ANOPP, into a unified system that is compatible for use within general aircraft analysis software. The design of the system is described in terms of its functionality and capability to perform predictions accounting for distributed sources, installation effects, and propagation through a non-uniform atmosphere including refraction and the influence of terrain. The philosophy of mixed fidelity noise prediction through the use of nested Ffowcs Williams and Hawkings surfaces is presented and specific issues associated with its implementation are identified. Demonstrations for a conventional twin-aisle and an unconventional hybrid wing body aircraft configuration are presented to show the feasibility and capabilities of the system. Isolated model-scale jet noise predictions are also presented using high-fidelity and reduced order models, further demonstrating ANOPP2's ability to provide predictions for model-scale test configurations.

  9. Perspective Space as a Model for Distance and Size Perception.

    PubMed

    Erkelens, Casper J

    2017-01-01

    In the literature, perspective space has been introduced as a model of visual space. Perspective space is grounded on the perspective nature of visual space during both binocular and monocular vision. A single parameter, that is, the distance of the vanishing point, transforms the geometry of physical space into that of perspective space. The perspective-space model predicts perceived angles, distances, and sizes. The model is compared with other models for distance and size perception. Perspective space predicts that perceived distance and size as a function of physical distance are described by hyperbolic functions. Alternatively, power functions have been widely used to describe perceived distance and size. Comparison of power and hyperbolic functions shows that both functions are equivalent within the range of distances that have been judged in experiments. Two models describing perceived distance on the ground plane appear to be equivalent with the perspective-space model too. The conclusion is that perspective space unifies a number of models of distance and size perception.
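
    The hyperbolic prediction can be made concrete with a small numerical comparison (Python; the specific hyperbolic expression d' = D·d/(D + d) and the fitted power law are our illustrative reading of the description, with an arbitrary vanishing-point distance D):

    ```python
    import numpy as np

    D = 50.0                          # vanishing-point distance, m (illustrative)
    d = np.linspace(1, 100, 200)      # physical distances

    # Perspective-space prediction: perceived distance saturates hyperbolically.
    d_perceived = D * d / (D + d)

    # A power function, as widely used in the literature, fitted over the
    # same range for comparison (log-log least squares).
    slope, intercept = np.polyfit(np.log(d), np.log(d_perceived), 1)
    d_power = np.exp(intercept) * d ** slope

    # Within the experimentally judged range the two stay close, echoing the
    # equivalence claim above.
    print("max relative difference:",
          np.max(np.abs(d_power - d_perceived) / d_perceived))
    ```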

  10. Perspective Space as a Model for Distance and Size Perception

    PubMed Central

    2017-01-01

    In the literature, perspective space has been introduced as a model of visual space. Perspective space is grounded on the perspective nature of visual space during both binocular and monocular vision. A single parameter, that is, the distance of the vanishing point, transforms the geometry of physical space into that of perspective space. The perspective-space model predicts perceived angles, distances, and sizes. The model is compared with other models for distance and size perception. Perspective space predicts that perceived distance and size as a function of physical distance are described by hyperbolic functions. Alternatively, power functions have been widely used to describe perceived distance and size. Comparison of power and hyperbolic functions shows that both functions are equivalent within the range of distances that have been judged in experiments. Two models describing perceived distance on the ground plane appear to be equivalent with the perspective-space model too. The conclusion is that perspective space unifies a number of models of distance and size perception. PMID:29225765

  11. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model combines a grey prediction model with a Markov chain and performs well for non-stationary, volatile data sequences. However, the state division in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of each observation belonging to each state, reflecting preference degrees across states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. On this basis, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with a GM(1,1) model based on background value optimization and with the traditional Grey-Markov forecasting model.
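
    For reference, the underlying GM(1,1) grey model, including the background value z(k) that the paper optimizes, can be sketched in a few lines (Python; the data series is arbitrary, and the fixed weight 0.5 is exactly what background value optimization replaces):

    ```python
    import numpy as np

    x0 = np.array([112.0, 119.0, 125.0, 131.0, 140.0])   # raw series (arbitrary)
    x1 = np.cumsum(x0)                                   # accumulated series (AGO)

    # Background values z(k) = 0.5*(x1(k) + x1(k-1)); the 0.5 weight is the
    # quantity that background value optimization tunes.
    z = 0.5 * (x1[1:] + x1[:-1])

    # Least-squares estimate of the development coefficient a and grey input b
    # in the whitenization equation dx1/dt + a*x1 = b.
    B = np.column_stack([-z, np.ones(len(z))])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)

    # Time response function and one-step-ahead forecast.
    k = np.arange(len(x0) + 1)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat)
    print("next-step forecast:", x0_hat[-1])
    ```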

  12. Forecasting zoonotic cutaneous leishmaniasis using meteorological factors in eastern Fars province, Iran: a SARIMA analysis.

    PubMed

    Tohidinik, Hamid Reza; Mohebali, Mehdi; Mansournia, Mohammad Ali; Niakan Kalhori, Sharareh R; Ali-Akbarpour, Mohsen; Yazdani, Kamran

    2018-05-22

    To predict the occurrence of zoonotic cutaneous leishmaniasis (ZCL) and evaluate the effect of climatic variables on disease incidence in the east of Fars province, Iran using the Seasonal Autoregressive Integrated Moving Average (SARIMA) model. The Box-Jenkins approach was applied to fit the SARIMA model for ZCL incidence from 2004 to 2015. Then the model was used to predict the number of ZCL cases for the year 2016. Finally, we assessed the relation of meteorological variables (rainfall, rainy days, temperature, hours of sunshine and relative humidity) with ZCL incidence. SARIMA(2,0,0) (2,1,0)12 was the preferred model for predicting ZCL incidence in the east of Fars province (validation Root Mean Square Error, RMSE = 0.27). It showed that ZCL incidence in a given month can be estimated by the number of cases occurring 1 and 2 months, as well as 12 and 24 months earlier. The predictive power of SARIMA models was improved by the inclusion of rainfall at a lag of 2 months (β = -0.02), rainy days at a lag of 2 months (β = -0.09) and relative humidity at a lag of 8 months (β = 0.13) as external regressors (P-values < 0.05). The latter was the best climatic variable for predicting ZCL cases (validation RMSE = 0.26). Time series models can be useful tools to predict the trend of ZCL in Fars province, Iran; thus, they can be used in the planning of public health programmes. Introducing meteorological variables into the models may improve their precision. © 2018 John Wiley & Sons Ltd.
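
    The model class described is directly available in statsmodels; the sketch below mirrors the reported structure (Python; the synthetic monthly series and the lag-8 humidity regressor are placeholders for the ZCL and meteorological data):

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(8)
    months = pd.date_range("2004-01-01", "2015-12-01", freq="MS")
    n = len(months)

    # Placeholder series: seasonal incidence plus noise, and a humidity
    # regressor lagged 8 months as in the study.
    humidity = 50 + 10 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 2, n)
    incidence = 5 + 3 * np.sin(2 * np.pi * (np.arange(n) - 2) / 12) \
        + rng.normal(0, 0.5, n)
    humidity_lag8 = pd.Series(humidity, index=months).shift(8).bfill()

    # SARIMA(2,0,0)(2,1,0)_12 with an external regressor.
    model = SARIMAX(incidence, exog=humidity_lag8.to_numpy(),
                    order=(2, 0, 0), seasonal_order=(2, 1, 0, 12))
    fit = model.fit(disp=False)

    # Forecast 12 months ahead (assuming future humidity is known/predicted).
    future_exog = humidity_lag8.to_numpy()[-12:].reshape(-1, 1)
    print(fit.forecast(steps=12, exog=future_exog)[:3])
    ```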

  13. Real-time emissions from construction equipment compared with model predictions.

    PubMed

    Heidari, Bardia; Marr, Linsey C

    2015-02-01

    The construction industry is a large source of greenhouse gases and other air pollutants. Measuring and monitoring real-time emissions will provide practitioners with information to assess environmental impacts and improve the sustainability of construction. We employed a portable emission measurement system (PEMS) for real-time measurement of carbon dioxide (CO2), nitrogen oxides (NOx), hydrocarbon (HC), and carbon monoxide (CO) emissions from construction equipment to derive emission rates (mass of pollutant emitted per unit time) and emission factors (mass of pollutant emitted per unit volume of fuel consumed) under real-world operating conditions. Measurements were compared with emissions predicted by methodologies used in three models: NONROAD2008, OFFROAD2011, and a modal statistical model. Measured emission rates agreed with model predictions for some pieces of equipment but were up to 100 times lower for others. Much of the difference was driven by lower fuel consumption rates than predicted. Emission factors during idling and hauling were significantly different from each other and from those of other moving activities, such as digging and dumping. It appears that operating conditions introduce considerable variability in emission factors. Results of this research will aid researchers and practitioners in improving current emission estimation techniques, frameworks, and databases.

  14. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles.

    PubMed

    Roth, Jenny; Steffens, Melanie C; Vignoles, Vivian L

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  15. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    PubMed Central

    Roth, Jenny; Steffens, Melanie C.; Vignoles, Vivian L.

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias. PMID:29681878

  16. Performance of statistical models to predict mental health and substance abuse cost.

    PubMed

    Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K

    2006-10-26

    Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model on the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used, and the Gamma with square-root link model had convergence problems with small samples. Models with a square-root transformation or link fit the data best. This function (whether used as a transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
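
    The two transformation-based candidates can be compared on synthetic skewed cost data (Python; the gamma-distributed costs and simple risk adjusters are invented, and retransformation/smearing corrections are omitted for brevity):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 20_000
    age = rng.uniform(20, 80, n)
    dx = rng.integers(0, 2, size=(n, 3))          # 3 binary MH/SA categories
    X = np.column_stack([np.ones(n), age / 10, dx])

    # Heavily right-skewed "observed" costs, as is typical for MH/SA data.
    mu = np.exp(X @ np.array([6.0, 0.05, 0.4, 0.6, 0.3]))
    cost = rng.gamma(shape=2.0, scale=mu / 2.0)

    def fit_predict(transform, inverse):
        beta, *_ = np.linalg.lstsq(X, transform(cost), rcond=None)
        return inverse(X @ beta)

    pred_sqrt = fit_predict(np.sqrt, np.square)   # Square-root Normal model
    pred_log = fit_predict(np.log, np.exp)        # Log Normal model

    for name, pred in [("sqrt", pred_sqrt), ("log", pred_log)]:
        rmse = np.sqrt(np.mean((pred - cost) ** 2))
        mape = np.mean(np.abs(pred - cost))       # mean absolute prediction error
        print(f"{name}: RMSE={rmse:,.0f}  MAPE={mape:,.0f}")
    ```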

  17. Impact of Initial Condition Errors and Precipitation Forecast Bias on Drought Simulation and Prediction in the Huaihe River Basin

    NASA Astrophysics Data System (ADS)

    Xu, H.; Luo, L.; Wu, Z.

    2016-12-01

    Drought, regarded as one of the major natural disasters worldwide, is not always easy to detect and forecast. Hydrological models coupled with Numerical Weather Prediction (NWP) have become a relatively effective approach for drought monitoring and prediction. The accuracy of the hydrological initial conditions (ICs) and the skill of the NWP precipitation forecast can both heavily affect the quality and skill of the hydrological forecast. In this study, the Variable Infiltration Capacity (VIC) model and the Global Environmental Multi-scale (GEM) model were used to investigate the roles of IC accuracy and NWP forecast skill in hydrological predictions. A reverse-ESP-type experiment was conducted for a number of drought events in the Huaihe river basin. The experiment suggests that errors in ICs indeed affect drought simulation by VIC and thus drought monitoring. Although errors introduced in the ICs diminish gradually, their influence can sometimes last beyond 12 months. Using the soil moisture anomaly percentage index (SMAPI) as the metric of drought severity for the study region, we quantify the timescale over which ICs influence the forecasts; this timescale is directly related to the magnitude of the introduced IC error and the average precipitation intensity. To explore how systematic bias correction of GEM-forecasted precipitation affects precipitation and hydrological forecasts, we used both station and gridded observations to remove biases in the forecast data. Drought simulations driven by the different corrected precipitation inputs were then conducted with VIC, demonstrating short-term rolling drought prediction using the better-performing corrected precipitation forecasts.

  18. Off the beaten path: a new approach to realistically model the orbital decay of supermassive black holes in galaxy formation simulations

    NASA Astrophysics Data System (ADS)

    Tremmel, M.; Governato, F.; Volonteri, M.; Quinn, T. R.

    2015-08-01

    We introduce a sub-grid force correction term to better model the dynamical friction experienced by a supermassive black hole (SMBH) as it orbits within its host galaxy. This new approach accurately follows an SMBH's orbital decay and drastically improves over commonly used `advection' methods. The force correction introduced here naturally scales with the force resolution of the simulation and converges as resolution is increased. In controlled experiments, we show how the orbital decay of the SMBH closely follows analytical predictions when particle masses are significantly smaller than that of the SMBH. In a cosmological simulation of the assembly of a small galaxy, we show how our method allows for realistic black hole orbits. This approach overcomes the limitations of the advection scheme, where black holes are rapidly and artificially pushed towards the halo centre and then forced to merge, regardless of their orbits. We find that SMBHs from merging dwarf galaxies can spend significant time away from the centre of the remnant galaxy. Improving the modelling of SMBH orbital decay will help in making robust predictions of the growth, detectability and merger rates of SMBHs, especially at low galaxy masses or at high redshift.
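
    The analytical predictions referred to above are typically those of Chandrasekhar's dynamical friction formula; a minimal evaluation looks like this (Python; the mass, velocity, density, dispersion, and Coulomb logarithm values are illustrative, not the paper's setup):

    ```python
    import numpy as np
    from math import erf

    G = 4.3e-6  # gravitational constant in kpc (km/s)^2 / Msun

    def df_deceleration(m_bh, v, rho, sigma, ln_lambda):
        """Chandrasekhar dynamical friction deceleration, (km/s)^2 per kpc."""
        x = v / (np.sqrt(2.0) * sigma)
        maxwell = erf(x) - 2.0 * x / np.sqrt(np.pi) * np.exp(-x * x)
        return 4.0 * np.pi * G**2 * m_bh * rho * ln_lambda * maxwell / v**2

    # Illustrative numbers: a 1e6 Msun black hole moving at 50 km/s through a
    # stellar background of density 1e7 Msun/kpc^3 with dispersion 30 km/s.
    print(df_deceleration(1e6, 50.0, 1e7, 30.0, np.log(10.0)))
    ```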

  19. An experimental and theoretical analysis of a foil-air bearing rotor system

    NASA Astrophysics Data System (ADS)

    Bonello, P.; Hassan, M. F. Bin

    2018-01-01

    Although there is considerable research on the experimental testing of foil-air bearing (FAB) rotor systems, only a small fraction has been correlated with simulations from a full nonlinear model that links the rotor, air film and foil domains, due to modelling complexity and computational burden. An approach for the simultaneous solution of the three domains as a coupled dynamical system, introduced by the first author and adopted by independent researchers, has recently demonstrated its capability to address this problem. This paper uses this approach, with further developments, in an experimental and theoretical study of a FAB-rotor test rig. The test rig is described in detail, including issues with its commissioning. The theoretical analysis uses a recently introduced modal-based bump foil model that accounts for interaction between the bumps and their inertia. The imposition of pressure constraints on the air film is found to delay the predicted onset of instability speed. The results lend experimental validation to a recent theoretically-based claim that the Gümbel condition may not be appropriate for a practical single-pad FAB. The satisfactory prediction of the salient features of the measured nonlinear behavior shows that the air film is indeed highly influential on the response, in contrast to an earlier finding.

  20. A continuous mixing model for pdf simulations and its applications to combusting shear flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Chen, J.-Y.

    1991-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to that of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.
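
    For intuition, a conventional coalescence/dispersion step (the discrete-in-time baseline that the continuous model reformulates) can be sketched as follows (Python; the particle count, mixing fraction, and number of steps are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    phi = rng.uniform(-1.0, 1.0, 10_000)   # particle values of a scalar

    # One modified-Curl step: randomly chosen particle pairs move partway
    # toward their pair mean, introducing the time discontinuity the paper
    # addresses.
    def cd_step(phi, frac=0.2):
        m = 2 * (int(frac * len(phi)) // 2)      # even number of particles
        idx = rng.permutation(len(phi))[:m]
        a, b = idx[::2], idx[1::2]
        h = rng.random(len(a))                   # mixing extent per pair
        mean = 0.5 * (phi[a] + phi[b])
        phi[a] += h * (mean - phi[a])
        phi[b] += h * (mean - phi[b])
        return phi

    for _ in range(300):
        phi = cd_step(phi)

    # Repeated mixing drives the pdf toward a bell shape; how close the
    # limiting kurtosis gets to the Gaussian value of 3 is one way mixing
    # models are judged.
    z = (phi - phi.mean()) / phi.std()
    print("kurtosis:", np.mean(z**4))
    ```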

  1. Diesel engine emissions and combustion predictions using advanced mixing models applicable to fuel sprays

    NASA Astrophysics Data System (ADS)

    Abani, Neerav; Reitz, Rolf D.

    2010-09-01

    An advanced mixing model was applied to study engine emissions and combustion with different injection strategies, ranging from multiple injections and early injection to grouped-hole nozzle injection, in light- and heavy-duty diesel engines. The model was implemented in the KIVA-CHEMKIN engine combustion code and simulations were conducted at different mesh resolutions. The model was compared with the standard KIVA spray model, which uses the Lagrangian-Drop and Eulerian-Fluid (LDEF) approach, and with a Gas Jet spray model that improves predictions of liquid sprays. A Vapor Particle Method (VPM) is introduced that accounts for sub-grid scale mixing of fuel vapor and more accurately predicts the mixing of fuel vapor over a range of mesh resolutions. The fuel vapor is transported as particles until a certain distance from the nozzle is reached where the local jet half-width is adequately resolved by the local mesh scale. Within this distance the vapor particle is transported while releasing fuel vapor locally, as determined by a weighting factor. The VPM model more accurately predicts fuel-vapor penetration for early-cycle injections and flame lift-off lengths for late-cycle injections. Engine combustion computations show that, compared to the standard KIVA and Gas Jet spray models, the VPM spray model improves predictions of in-cylinder pressure, heat release rate, and engine emissions of NOx, CO, and soot on coarse mesh resolutions. The VPM spray model is thus a good tool for efficiently investigating diesel engine combustion with practical mesh resolutions, thereby saving computer time.

  2. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    PubMed

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute a perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective mapping of these artifacts to a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur, and jerkiness), in contrast to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.

  3. Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Weng, Yingliang

    2016-01-01

    This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio, and momentum, but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article proposes a new method, the quartile method, for constructing stocks' reference groups. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimates from different models. The empirical results show that the spatiotemporal model performs surprisingly well in capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.

  4. Uncertainty in Predicted Neighborhood-Scale Green Stormwater Infrastructure Performance Informed by field monitoring of Hydrologic Abstractions

    NASA Astrophysics Data System (ADS)

    Smalls-Mantey, L.; Jeffers, S.; Montalto, F. A.

    2013-12-01

    Human alterations to the environment provide infrastructure for housing and transportation but have drastically changed local hydrology. Excess stormwater runoff from impervious surfaces generates erosion, overburdens sewer infrastructure, and can pollute receiving water bodies. Increased attention to green stormwater management controls is based on the premise that some of these issues can be mitigated by capturing or slowing the flow of stormwater. However, our ability to predict actual green infrastructure facility performance using physical or statistical methods needs additional validation, and efforts to incorporate green infrastructure controls into hydrologic models are still in their infancy. We use more than three years of field monitoring data to derive facility-specific probability density functions characterizing the hydrologic abstractions provided by a stormwater treatment wetland, a streetside bioretention facility, and a green roof. The monitoring results are normalized by the impervious area treated and incorporated into a neighborhood-scale agent model, allowing probabilistic comparisons of the stormwater capture outcomes associated with alternative urban greening scenarios. Specifically, we compare the uncertainty introduced into the model by facility performance (as represented by the variability in the abstractions) to that introduced by both precipitation variability and the spatial patterns of emergence of different types of green infrastructure.

  5. Development and Current Status of the “Cambridge” Loudness Models

    PubMed Central

    2014-01-01

    This article reviews the evolution of a series of models of loudness developed in Cambridge, UK. The first model, applicable to stationary sounds, was based on modifications of the model developed by Zwicker, including the introduction of a filter to allow for the effects of transfer of sound through the outer and middle ear prior to the calculation of an excitation pattern, and changes in the way that the excitation pattern was calculated. Later, modifications were introduced to the assumed middle-ear transfer function and to the way that specific loudness was calculated from excitation level. These modifications led to a finite calculated loudness at absolute threshold, which made it possible to predict accurately the absolute thresholds of broadband and narrowband sounds, based on the assumption that the absolute threshold corresponds to a fixed small loudness. The model was also modified to give predictions of partial loudness—the loudness of one sound in the presence of another. This allowed predictions of masked thresholds based on the assumption that the masked threshold corresponds to a fixed small partial loudness. Versions of the model for time-varying sounds were developed, which allowed prediction of the masked threshold of any sound in a background of any other sound. More recent extensions incorporate binaural processing to account for the summation of loudness across ears. In parallel, versions of the model for predicting loudness for hearing-impaired ears have been developed and have been applied to the development of methods for fitting multichannel compression hearing aids. PMID:25315375

  6. Vegetation Monitoring with Gaussian Processes and Latent Force Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, Gustau; Svendsen, Daniel; Martino, Luca; Campos, Manuel; Luengo, David

    2017-04-01

    Monitoring vegetation by biophysical parameter retrieval from Earth observation data is a challenging problem, where machine learning is currently a key player. Neural networks, kernel methods, and Gaussian Process (GP) regression have excelled in parameter retrieval tasks at both local and global scales. GP regression is based on solid Bayesian statistics, yields efficient and accurate parameter estimates, and provides interesting advantages over competing machine learning approaches, such as confidence intervals. However, GP models are hampered by a lack of interpretability, which has prevented their widespread adoption by the larger community. In this presentation we will summarize some of our latest developments to address this issue. We will review the main characteristics of GPs and their advantages in standard vegetation monitoring applications. Then, three advanced GP models will be introduced. First, we will derive sensitivity maps for the GP predictive function, which allow us to obtain feature rankings from the model and to assess the influence of examples on the solution. Second, we will introduce a Joint GP (JGP) model that combines in situ measurements and simulated radiative transfer data in a single GP model. The JGP regression provides more sensible confidence intervals for the predictions, respects the physics of the underlying processes, and allows for transferability across time and space. Finally, a latent force model (LFM) for GP modeling that encodes ordinary differential equations to blend data-driven modeling and physical models of the system is presented. The LFM performs multi-output regression, adapts to the signal characteristics, is able to cope with missing data in the time series, and provides explicit latent functions that allow system analysis and evaluation. Empirical evidence of the performance of these models will be presented through illustrative examples.
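
    As a concrete illustration of the property highlighted above, a Bayesian predictive mean with confidence intervals, here is a minimal retrieval-style GP regression sketch using scikit-learn; the data, kernel, and variable names are hypothetical stand-ins, not the authors' setup.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)

    # Hypothetical retrieval task: map a single spectral index to a biophysical
    # parameter; the data are synthetic stand-ins.
    X = rng.uniform(0, 1, size=(60, 1))
    y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(60)

    # RBF kernel plus a noise term; hyperparameters are fit by marginal likelihood.
    kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # The predictive distribution yields a mean and standard deviation per input,
    # i.e., the confidence intervals emphasized in the abstract.
    X_test = np.linspace(0, 1, 5).reshape(-1, 1)
    mean, std = gp.predict(X_test, return_std=True)
    for m, s in zip(mean, std):
        print(f"prediction {m:+.3f} +/- {1.96 * s:.3f}")
    ```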

  7. Combining Structural Modeling with Ensemble Machine Learning to Accurately Predict Protein Fold Stability and Binding Affinity Effects upon Mutation

    PubMed Central

    Garcia Lopez, Sebastian; Kim, Philip M.

    2014-01-01

    Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based methods that try to predict the phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability and help us better understand the molecular causes of diseases. PMID:25243403
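
    The core modelling step, stochastic gradient boosting of decision trees over heterogeneous feature blocks, can be sketched as below with scikit-learn; the feature blocks and target are synthetic stand-ins, not the ELASPIC features or data.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)

    # Hypothetical per-mutation feature matrix mixing the three feature classes
    # named in the abstract: semi-empirical energy terms, sequence conservation,
    # and structural/molecular details (all synthetic here).
    n = 500
    energy_terms = rng.normal(size=(n, 4))
    conservation = rng.uniform(size=(n, 2))
    structural = rng.normal(size=(n, 6))
    X = np.hstack([energy_terms, conservation, structural])
    ddG = X @ rng.normal(size=12) + 0.3 * rng.standard_normal(n)  # stand-in stability change

    # Stochastic gradient boosting of decision trees (subsample < 1 makes it stochastic).
    model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                      learning_rate=0.05, subsample=0.8)
    print("cross-validated R^2:", cross_val_score(model, X, ddG, cv=5, scoring="r2").mean())
    ```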

  8. Forecasting influenza in Hong Kong with Google search queries and statistical model fusion.

    PubMed

    Xu, Qinneng; Gel, Yulia R; Ramirez Ramirez, L Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung

    2017-01-01

    The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, our approach fuses multiple offline and online data sources. Four individual models: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with feedforward neural networks (FNN), are employed to forecast ILI-GOPC both one week and two weeks in advance. The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into the surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of each individual forecasting model, we use statistical model fusion via Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and the covariates. DL with FNN appears to deliver the most competitive predictive performance among the four individual models considered. Combining all four models in a comprehensive BMA framework further improves such predictive evaluation metrics as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting locations of influenza peaks. The proposed approach can be viewed as a feasible alternative for forecasting ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited. The proposed methodology is easily tractable and computationally efficient.
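
    The fusion step can be illustrated compactly: weight each model by an approximation of its posterior probability, here (as a simple stand-in) its Gaussian predictive likelihood over a validation window, and average the forecasts. All numbers below are synthetic placeholders.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical validation window: observed ILI counts and one-week-ahead
    # forecasts from four models (stand-ins for GLM, LASSO, ARIMA, DL+FNN).
    observed = 100 + 10 * rng.standard_normal(20)
    forecasts = (observed[None, :]
                 + np.array([[4.0], [6.0], [9.0], [2.0]]) * rng.standard_normal((4, 20)))

    # Approximate BMA: weight each model by its Gaussian predictive likelihood.
    resid_sd = (forecasts - observed).std(axis=1, ddof=1)
    log_lik = stats.norm.logpdf(observed, loc=forecasts,
                                scale=resid_sd[:, None]).sum(axis=1)
    weights = np.exp(log_lik - log_lik.max())
    weights /= weights.sum()

    next_week = np.array([103.0, 98.5, 110.2, 101.7])   # hypothetical new forecasts
    print("BMA weights:", np.round(weights, 3))
    print("fused forecast:", round(float(weights @ next_week), 1))
    ```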

  9. Budget impact analysis of sFlt-1/PlGF ratio as prediction test in Italian women with suspected preeclampsia.

    PubMed

    Frusca, Tiziana; Gervasi, Maria-Teresa; Paolini, Davide; Dionisi, Matteo; Ferre, Francesca; Cetin, Irene

    2017-09-01

    Preeclampsia (PE) is a pregnancy disease which represents a leading cause of maternal and perinatal mortality and morbidity. Accurate prediction of PE risk could provide an increase in health benefits and better patient management. The objective was to estimate the economic impact of introducing the Elecsys sFlt-1/PlGF ratio test, in addition to standard practice, for the prediction of PE in women with suspected PE in the Italian National Health Service (INHS). A decision tree model was developed to simulate the progression of a cohort of pregnant women from the first presentation of clinical suspicion of PE in the second and third trimesters until delivery. The model provides an estimation of the financial impact of introducing sFlt-1/PlGF versus standard practice. Clinical inputs were derived from the PROGNOSIS study and from a literature review, and validated by national clinical experts. Resources and unit costs were obtained from Italy-specific sources. Healthcare costs associated with the management of a pregnant woman with clinical suspicion of PE equal €2384 when following standard practice versus €1714 when using the sFlt-1/PlGF ratio test. Introduction of sFlt-1/PlGF into hospital practice is therefore cost-saving. Savings are generated primarily through improvement in diagnostic accuracy and reduction in unnecessary hospitalization of women before the onset of PE.
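
    The decision-tree arithmetic behind such a budget impact model reduces to probability-weighted expected costs per arm. The sketch below shows only the mechanics, with purely hypothetical probabilities and costs, not the published model inputs.

    ```python
    # Minimal expected-cost sketch of a two-arm decision tree; every number
    # below is a hypothetical placeholder, not a PROGNOSIS-derived input.
    def expected_cost(p_hospitalize: float, c_hospital: float,
                      c_outpatient: float, c_test: float = 0.0) -> float:
        """Per-patient expected cost: test cost plus a probability-weighted
        mix of hospitalization and outpatient management."""
        return c_test + p_hospitalize * c_hospital + (1 - p_hospitalize) * c_outpatient

    standard = expected_cost(p_hospitalize=0.45, c_hospital=4000, c_outpatient=900)
    with_test = expected_cost(p_hospitalize=0.20, c_hospital=4000,
                              c_outpatient=900, c_test=80)
    print(f"standard practice: EUR {standard:.0f}; with sFlt-1/PlGF: EUR {with_test:.0f}")
    ```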

  10. Variability and Uncertainty Analysis of Contaminant Transport Model Using Fuzzy Latin Hypercube Sampling Technique

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.

    2006-12-01

    Characterization of uncertainty associated with groundwater quality models is often of critical importance, for example when environmental models are employed in risk assessment. Insufficient data, inherent variability, and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving different kinds of parametric variability and uncertainty. General MCS, or variants such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity and the generated samples as crisp values, in effect treating vagueness as randomness. Moreover, when models are used as purely predictive tools, uncertainty and variability create the need to assess the plausible range of model outputs. An improved, systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates and can aid in assessing how various possible model estimates should be weighed. The present study introduces Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach that incorporates both cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness, with confidence intervals given by α-cuts. An important property of this theory is its ability to merge the inexact generated data of the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled, with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results produces indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty in the input variables to the model predictions. The feasibility of the method is validated by assessing the uncertainty propagation of parameter values in estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
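
    A toy version of the hybrid sampling idea: stratified Latin hypercube uniforms drive both an inverse-CDF draw for a probabilistic (noncognitive) parameter and an interval draw on an α-cut for a fuzzy (cognitive) parameter. The parameter names, the lognormal PDF, and the triangular membership function are illustrative assumptions, not the study's inputs.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.stats import qmc

    # Stratified uniforms from a Latin hypercube (one column per parameter).
    n = 100
    u = qmc.LatinHypercube(d=2, seed=4).random(n)

    # Noncognitive parameter: hydraulic conductivity with a lognormal PDF,
    # sampled through the inverse CDF.
    conductivity = stats.lognorm(s=0.5, scale=1e-5).ppf(u[:, 0])

    # Cognitive parameter: sorption coefficient as a triangular fuzzy number
    # (a, m, b); the alpha-cut keeps the interval with membership >= alpha.
    a, m, b, alpha = 0.1, 0.3, 0.8, 0.5
    lo, hi = a + alpha * (m - a), b - alpha * (b - m)   # alpha-cut bounds
    sorption = lo + u[:, 1] * (hi - lo)

    print(conductivity[:3].round(7), sorption[:3].round(3))
    ```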

  11. Analysis of Neuronal Spike Trains, Deconstructed

    PubMed Central

    Aljadeff, Johnatan; Lansdell, Benjamin J.; Fairhall, Adrienne L.; Kleinfeld, David

    2016-01-01

    As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods. PMID:27477016
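
    The simplest instance of the feature-vector models described above is the spike-triggered average under a white-noise stimulus; the self-contained sketch below recovers a known filter from simulated spikes. For the correlated natural stimuli the review emphasizes, the raw average would additionally need decorrelation (whitening).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulate a linear-nonlinear-Poisson neuron with a known filter.
    T, lag = 50_000, 20
    stimulus = rng.standard_normal(T)
    true_filter = np.exp(-np.arange(lag) / 5.0) * np.sin(np.arange(lag) / 2.0)

    drive = np.convolve(stimulus, true_filter)[:T]       # filtered stimulus
    rate = np.log1p(np.exp(drive - 1.0))                 # softplus nonlinearity
    spikes = rng.poisson(rate * 0.05)

    # Spike-triggered average: mean stimulus segment preceding each spike.
    sta = np.zeros(lag)
    for t in np.nonzero(spikes)[0]:
        if t >= lag:
            sta += spikes[t] * stimulus[t - lag + 1:t + 1][::-1]
    sta /= spikes[lag:].sum()

    print("correlation with true filter:", round(np.corrcoef(sta, true_filter)[0, 1], 3))
    ```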

  12. Communication: Understanding molecular representations in machine learning: The role of uniqueness and target similarity

    NASA Astrophysics Data System (ADS)

    Huang, Bing; von Lilienfeld, O. Anatole

    2016-10-01

    The predictive accuracy of Machine Learning (ML) models of molecular properties depends on the choice of the molecular representation. Inspired by the postulates of quantum mechanics, we introduce a hierarchy of representations which meet uniqueness and target similarity criteria. To systematically control target similarity, we rely on interatomic many-body expansions, as implemented in universal force fields, including Bonding, Angular (BA), and higher-order terms. Adding higher-order contributions systematically increases the similarity to the true potential energy and the predictive accuracy of the resulting ML models. We report numerical evidence for the performance of BAML models trained on molecular properties pre-calculated at electron-correlated and density functional levels of theory for thousands of small organic molecules. Properties studied include enthalpies and free energies of atomization, heat capacity, zero-point vibrational energies, dipole moment, polarizability, HOMO/LUMO energies and gap, ionization potential, electron affinity, and electronic excitations. After training, BAML predicts energies or electronic properties of out-of-sample molecules with unprecedented accuracy and speed.
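
    The generic workflow, a fixed-length molecular representation fed to a kernel model, can be sketched as follows; the random "representations" merely stand in for BA-type feature vectors and are not the actual BAML construction.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)

    # Stand-in representations: one fixed-length feature vector per molecule,
    # plus a synthetic property to regress (neither is real chemistry data).
    n_molecules, n_features = 400, 30
    X = rng.standard_normal((n_molecules, n_features))
    y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(n_molecules)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Kernel ridge regression with a Laplacian kernel, a common pairing with
    # many-body molecular representations.
    model = KernelRidge(kernel="laplacian", alpha=1e-3, gamma=0.05).fit(X_tr, y_tr)
    print(f"out-of-sample MAE: {np.abs(model.predict(X_te) - y_te).mean():.3f}")
    ```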

  13. Asymmetric bagging and feature selection for activities prediction of drug molecules.

    PubMed

    Li, Guo-Zheng; Meng, Hao-Hua; Lu, Wen-Cong; Yang, Jack Y; Yang, Mary Qu

    2008-05-28

    Activities of drug molecules can be predicted by QSAR (quantitative structure-activity relationship) models, which avoid the high cost and long cycle of traditional experimental methods. Because the number of drug molecules with positive activity is much smaller than the number of negatives, it is important to predict molecular activities with this unbalanced situation in mind. Here, asymmetric bagging and feature selection are introduced into the problem, and asymmetric bagging of support vector machines (asBagging) is proposed for predicting drug activities under the unbalanced setting. At the same time, the features extracted from the structures of drug molecules affect the prediction accuracy of QSAR models. Therefore, a novel algorithm named PRIFEAB is proposed, which applies an embedded feature selection method to remove redundant and irrelevant features for asBagging. Numerical experiments on a dataset of molecular activities show that asBagging improves the AUC and sensitivity values, and that PRIFEAB with feature selection further improves the prediction ability. Asymmetric bagging can thus help to improve the prediction accuracy for activities of drug molecules, and performing feature selection to retain relevant features from the drug molecule datasets improves it further.
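
    Asymmetric bagging itself is easy to state in code: each bag keeps every minority-class (active) molecule and bootstraps only the majority (inactive) class, and the SVM ensemble aggregates by averaged probabilities. The sketch below uses synthetic data, not the paper's QSAR descriptors.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)

    # Unbalanced synthetic data: ~90% inactive (0), ~10% active (1).
    X, y = make_classification(n_samples=600, weights=[0.9], random_state=0)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]

    models = []
    for _ in range(15):
        bag_neg = rng.choice(neg, size=len(pos), replace=True)  # bootstrap majority only
        idx = np.concatenate([pos, bag_neg])                    # balanced bag
        models.append(SVC(probability=True).fit(X[idx], y[idx]))

    # Aggregate by averaging predicted probabilities across the ensemble.
    proba = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    print("predicted actives:", int((proba > 0.5).sum()), "| true actives:", int(y.sum()))
    ```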

  14. Inflationary cosmology: First 30+ years

    NASA Astrophysics Data System (ADS)

    Sato, Katsuhiko; Yokoyama, Jun'ichi

    2015-08-01

    Starting with an account of historical developments in Japan and Russia, we review inflationary cosmology and its basic predictions in a pedagogical manner. We also introduce the generalized G-inflation model, in terms of which all the known single-field inflation models may be described. This formalism allows us to analyze and compare the many inflationary models that have been proposed simultaneously and within a common framework. Finally, current observational constraints on inflation are reviewed, with particular emphasis on the sensitivity of the inferred constraints to the choice of datasets used.

  15. Effective Potentials for Folding Proteins

    NASA Astrophysics Data System (ADS)

    Chen, Nan-Yow; Su, Zheng-Yao; Mou, Chung-Yu

    2006-02-01

    A coarse-grained off-lattice model that is not biased in any way to the native state is proposed to fold proteins. To predict the native structure in a reasonable time, the model has included the essential effects of water in an effective potential. Two new ingredients, the dipole-dipole interaction and the local hydrophobic interaction, are introduced and are shown to be as crucial as the hydrogen bonding. The model allows successful folding of the wild-type sequence of protein G and may have provided important hints to the study of protein folding.

  16. Operational atmospheric modeling system CARIS for effective emergency response associated with hazardous chemical releases in Korea.

    PubMed

    Kim, Cheol-Hee; Park, Jin-Ho; Park, Cheol-Jin; Na, Jin-Gyun

    2004-03-01

    The Chemical Accidents Response Information System (CARIS) was developed at the Center for Chemical Safety Management in South Korea in order to track and predict the dispersion of hazardous chemicals in the case of an accident or terrorist attack involving chemical companies. The main objective of CARIS is to facilitate an efficient emergency response to hazardous chemical accidents by rapidly providing key information in the decision-making process. In particular, the atmospheric modeling system implemented in CARIS, which is composed of a real-time numerical weather forecasting model and an air pollution dispersion model, can be used as a tool to forecast concentrations and to provide a wide range of assessments associated with various hazardous chemicals in real time. This article introduces the components of CARIS and describes its operational modeling system. Some examples of the operational modeling system and its use for emergency preparedness are presented and discussed. Finally, this article evaluates the current numerical weather prediction model for Korea.

  17. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    With the present state of computational resources, a purely numerical resolution of the turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal among these variable-resolution approaches are two-equation RANS closures and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at very different levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
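
    The core operation of the framework, perturbing the eigenvalues of the Reynolds-stress anisotropy tensor toward a limiting state, can be sketched in a few lines; the stress tensor and the perturbation magnitude below are illustrative only.

    ```python
    import numpy as np

    k = 1.0                                   # turbulent kinetic energy
    R = np.array([[0.9, 0.1, 0.0],            # hypothetical Reynolds stress tensor
                  [0.1, 0.7, 0.0],
                  [0.0, 0.0, 0.4]])

    a = R / (2 * k) - np.eye(3) / 3.0         # anisotropy tensor
    eigval, eigvec = np.linalg.eigh(a)        # eigenvalues in ascending order

    # Move eigenvalues a fraction delta toward the one-component (1C) limit,
    # whose ordered eigenvalues are (-1/3, -1/3, 2/3).
    delta = 0.3
    limit_1c = np.array([-1 / 3, -1 / 3, 2 / 3])
    eigval_pert = (1 - delta) * eigval + delta * limit_1c

    # Reassemble the perturbed stresses for use in the closure.
    a_pert = eigvec @ np.diag(eigval_pert) @ eigvec.T
    R_pert = 2 * k * (a_pert + np.eye(3) / 3.0)
    print(np.round(R_pert, 3))
    ```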

  18. Predictive representations can link model-based reinforcement learning to model-free mechanisms.

    PubMed

    Russek, Evan M; Momennejad, Ida; Botvinick, Matthew M; Gershman, Samuel J; Daw, Nathaniel D

    2017-09-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation.
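
    The foundation named above, TD learning of the successor representation, fits in a few lines. The tabular sketch below learns the SR of a random-walk policy on a small ring of states and reads values off as V = M r; all parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    n_states, gamma, lr = 5, 0.9, 0.1
    M = np.eye(n_states)                      # successor matrix, one row per state
    r = np.zeros(n_states)
    r[3] = 1.0                                # reward in one state

    s = 0
    for _ in range(5000):
        s_next = (s + rng.choice([0, 1])) % n_states     # random-walk policy
        onehot = np.eye(n_states)[s]
        # TD update of the SR row: expected discounted future state occupancies.
        M[s] += lr * (onehot + gamma * M[s_next] - M[s])
        s = s_next

    V = M @ r                                 # values follow directly from the SR
    print(np.round(V, 2))
    ```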

  19. Clustering gene expression data based on predicted differential effects of GV interaction.

    PubMed

    Pan, Hai-Yan; Zhu, Jun; Han, Dan-Fu

    2005-02-01

    Microarray has become a popular biotechnology in biological and medical research. However, systematic and stochastic variabilities in microarray data are expected and unavoidable, so the raw measurements carry inherent "noise" within microarray experiments. Currently, logarithmic ratios are usually analyzed directly by various clustering methods, which may introduce biased interpretations when identifying groups of genes or samples. In this paper, a statistical method based on mixed model approaches is proposed for microarray data cluster analysis. The underlying rationale of this method is to partition the observed total gene expression level into variations caused by different factors using an ANOVA model, and to predict the differential effects of GV (gene by variety) interaction using the adjusted unbiased prediction (AUP) method. The predicted GV interaction effects can then be used as the inputs for cluster analysis. We illustrate the application of our method with a gene expression dataset and elucidate the utility of our approach using an external validation.
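
    The gist of clustering on interaction effects, though not the authors' mixed-model AUP predictor, can be sketched by double-centering the expression matrix to strip gene and variety main effects, then clustering the residual gene-by-variety terms:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(11)

    # Stand-in log-expression matrix: genes x varieties (synthetic values).
    n_genes, n_varieties = 200, 6
    Y = rng.normal(size=(n_genes, n_varieties))

    # Double-centering removes gene and variety main effects, leaving a crude
    # estimate of the gene-by-variety interaction.
    interaction = (Y - Y.mean(axis=1, keepdims=True)
                     - Y.mean(axis=0, keepdims=True) + Y.mean())

    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(interaction)
    print(np.bincount(labels))
    ```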

  1. Measuring the value of accurate link prediction for network seeding.

    PubMed

    Wei, Yijin; Spencer, Gwen

    2017-01-01

    The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample (OAS) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates OAS performance under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on the spread model: in some parameter ranges, investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.

  2. Stochastic simulation of human pulmonary blood flow and transit time frequency distribution based on anatomic and elasticity data.

    PubMed

    Huang, Wei; Shi, Jun; Yen, R T

    2012-12-01

    The objective of our study was to develop a program for computing the transit-time frequency distributions of red blood cells in the human pulmonary circulation, based on our anatomic and elasticity data for blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in the human pulmonary circulation. In this model, the connectivity data of the pulmonary blood vessels were converted into a probability matrix. Based on this model, the transit time of red blood cells in the human pulmonary circulation and the output blood pressure were studied. Additionally, the stochastic simulation model can be used to predict changes of blood flow in the human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate blood flow in the hierarchical structure of the pulmonary circulation system, and to calculate the transit-time distributions and blood pressure outputs.
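
    The probability-matrix formulation lends itself to a compact Monte Carlo sketch: a red blood cell performs a random walk over vessel compartments until it exits, accumulating per-segment transit times. The four-compartment topology and times below are hypothetical and far simpler than the anatomical dataset used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Connectivity as a transition-probability matrix (rows sum to 1 except
    # for the absorbing outlet), plus a transit time per segment.
    P = np.array([[0.0, 0.7, 0.3, 0.0],       # artery -> two capillary paths
                  [0.0, 0.0, 0.0, 1.0],       # capillary path A -> vein
                  [0.0, 0.0, 0.0, 1.0],       # capillary path B -> vein
                  [0.0, 0.0, 0.0, 0.0]])      # vein (absorbing outlet)
    segment_time = np.array([0.2, 1.0, 2.5, 0.3])   # seconds per segment

    def one_transit() -> float:
        state, t = 0, segment_time[0]
        while P[state].sum() > 0:             # walk until the outlet is reached
            state = rng.choice(4, p=P[state])
            t += segment_time[state]
        return t

    times = np.array([one_transit() for _ in range(10_000)])
    print(f"mean transit {times.mean():.2f} s; "
          f"5-95%: {np.percentile(times, [5, 95]).round(2)}")
    ```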

  3. A Simulation Model Articulation of the REA Ontology

    NASA Astrophysics Data System (ADS)

    Laurier, Wim; Poels, Geert

    This paper demonstrates how the REA enterprise ontology can be used to construct simulation models for business processes, value chains and collaboration spaces in supply chains. These models support various high-level and operational management simulation applications, e.g. the analysis of enterprise sustainability and day-to-day planning. First, the basic constructs of the REA ontology and the ExSpect modelling language for simulation are introduced. Second, collaboration space, value chain and business process models and their conceptual dependencies are shown, using the ExSpect language. Third, an exhibit demonstrates the use of value chain models in predicting the financial performance of an enterprise.

  4. A Sequential Ensemble Prediction System at Convection Permitting Scales

    NASA Astrophysics Data System (ADS)

    Milan, M.; Simmer, C.

    2012-04-01

    A Sequential Assimilation Method (SAM), following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasts. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and nonlinear state-space evolution due to convectively driven processes. One way to take full account of nonlinear state developments is the family of particle filter methods; their basic idea is the representation of the model probability density function by a number of ensemble members weighted by their likelihood given the observations. In particular, particle filtering with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we substitute the likelihood-based definition of the weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e., the collapse of the simulated state space. To this end we propose a combination of nudging and resampling that takes account of clustering in the simulated state space. By keeping cluster representatives during resampling and filtering, the method maintains the potential for nonlinear system-state development. We assume that a particle cluster with initially low likelihood may evolve into a state space with higher likelihood at a subsequent filter time, thus mimicking nonlinear system-state developments (e.g., sudden convection initiation) and remedying timing errors for convection due to model errors and/or imperfect initial conditions. We apply a simplified version of the resampling: the particles with the highest weights in each cluster are duplicated; during the model evolution of each particle pair, one particle evolves using the forward model, while the second is nudged towards the radar and satellite observations during its evolution based on the forward model.
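
    The SIR building block, scoring particles against an observation and resampling in proportion to the weights, is sketched below in isolation; the exponential distance-to-weight mapping is a simple stand-in for the metric described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Stand-in ensemble: scalar model states and a single observation.
    n_particles = 50
    particles = rng.normal(0.0, 2.0, size=n_particles)
    observation = 1.5

    # Distance-based weights (smaller distance, larger weight), normalized.
    weights = np.exp(-np.abs(particles - observation))
    weights /= weights.sum()

    # Systematic resampling: duplicates high-weight particles, drops low-weight ones.
    positions = (rng.random() + np.arange(n_particles)) / n_particles
    resampled = particles[np.searchsorted(np.cumsum(weights), positions)]
    print("ensemble spread before/after:",
          particles.std().round(2), resampled.std().round(2))
    ```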

  5. Moving beyond regression techniques in cardiovascular risk prediction: applying machine learning to address analytic challenges

    PubMed Central

    Goldstein, Benjamin A.; Navar, Ann Marie; Carter, Rickey E.

    2017-01-01

    Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to a small number of predictors that operate in the same way on everyone and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for the development of risk prediction models. Typically presented as black-box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis and are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider the task of predicting mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods, including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to the diffuse field of machine learning. PMID:27436868
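
    As a flavor of the workflow the review introduces, the sketch below trains a random forest on synthetic tabular "markers" and computes permutation-based variable importance; it is a generic illustration, not the authors' EHR analysis.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for 13 regularly measured laboratory markers.
    X, y = make_classification(n_samples=1000, n_features=13, n_informative=5,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", round(clf.score(X_te, y_te), 3))

    # Permutation importance: a model-agnostic take on variable importance.
    imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
    print("top marker indices:", np.argsort(imp.importances_mean)[::-1][:3])
    ```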

  6. Analytical model for vibration prediction of two parallel tunnels in a full-space

    NASA Astrophysics Data System (ADS)

    He, Chao; Zhou, Shunhua; Guo, Peijun; Di, Honggui; Zhang, Xiaohui

    2018-06-01

    This paper presents a three-dimensional analytical model for the prediction of ground vibrations from two parallel tunnels embedded in a full-space. The two tunnels are modelled as cylindrical shells of infinite length, and the surrounding soil is modelled as a full-space with two cylindrical cavities. A virtual interface is introduced to divide the soil into a right layer and a left layer. By transforming the cylindrical waves into plane waves, the solution for wave propagation in the full-space with two cylindrical cavities is obtained. Transformations from plane waves back to cylindrical waves are then used to satisfy the boundary conditions on the tunnel-soil interfaces. The proposed model provides a highly efficient tool to predict the ground vibration induced by underground railways, one which accounts for the dynamic interaction between neighbouring tunnels. An analysis of the vibration fields produced over a range of frequencies and soil properties is conducted. When the distance between the two tunnels is smaller than three times the tunnel diameter, the interaction between neighbouring tunnels is highly significant, at times in the order of 20 dB. It is therefore necessary to consider the interaction between neighbouring tunnels when predicting ground vibrations induced by underground railways.

  7. A new possible picture of the hadron structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokrovsky, Yury E.

    A new chiral-scale invariant version of the bag model (CSB) is developed and applied to calculations of masses and radii for single-bag states. The mass formula of the CSB model contains no free parameters and connects the masses and radii of the bags with fundamental QCD scales, namely with Λ_QCD, the QCD vacuum condensates, and the quark masses. For high angular momentum states the CSB model describes hadron Regge trajectories well and predicts thin flux tubes with R_tube ≈ 0.25 fm, close to the small tube radii introduced a posteriori in modern models. For low angular momentum states this model predicts small bag radii R_bag ≈ 0.25 fm, close to the radii associated with constituent quarks. Masses of the lowest angular momentum bags are obtained close to the data for well-known hadron resonances (π(1300), ω(1420), N(1440), Δ(1600), etc.). These resonances are predicted to be almost pure single-bag states. Ground states of SU(3) hadrons (N(940), π(140), etc.), however, are treated as strongly bound multi-bag states: BagBag mesons and BagBagBag baryons, as in the old Fermi, Yang, and Sakata models. As well, this model predicts low-mass excitations of SU(3) hadrons, newly observed for nucleons at masses of 1004, 1044, and 1094 MeV.

  8. Design and application of implicit solvent models in biomolecular simulations.

    PubMed

    Kleinjung, Jens; Fraternali, Franca

    2014-04-01

    We review implicit solvent models and their parametrisation by introducing the concepts and recent developments of the most popular models, with a focus on parametrisation via force matching. An overview of recent applications of the solvation energy term in protein dynamics, modelling, design and prediction is given to illustrate the usability and versatility of implicit solvation in reproducing the physical behaviour of biomolecular systems. Limitations of implicit models are discussed through the example of more challenging systems such as nucleic acids and membranes. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. A Novel Multiscale Physics Based Progressive Failure Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.; Yarrington, Phillip W.

    2008-01-01

    A variable fidelity, multiscale, physics based finite element procedure for predicting progressive damage and failure of laminated continuous fiber reinforced composites is introduced. At every integration point in a finite element model, progressive damage is accounted for at the lamina-level using thermodynamically based Schapery Theory. Separate failure criteria are applied at either the global-scale or the microscale in two different FEM models. A micromechanics model, the Generalized Method of Cells, is used to evaluate failure criteria at the micro-level. The stress-strain behavior and observed failure mechanisms are compared with experimental results for both models.

  10. Sensitivity of the Greenland Ice Sheet to Pliocene sea surface temperatures

    USGS Publications Warehouse

    Hill, Daniel J.; Dolan, Aisling M.; Haywood, Alan M.; Hunter, Stephen J.; Stoll, Danielle K.

    2010-01-01

    PRISM3). Use of these different SSTs within the Hadley Centre GCM (General Circulation Model) and BASISM (British Antarctic Survey Ice Sheet Model) consistently shows large reductions of Pliocene Greenland ice volumes compared to modern. The changes in climate introduced by the use of different SST reconstructions do change the predicted ice volumes, mainly through precipitation feedbacks. However, the models show a relatively low sensitivity of modelled Greenland ice volumes to the different mid-Piacenzian SST reconstructions, with the largest SST-induced changes being 20% of Pliocene ice volume, or less than a metre of sea-level rise.

  11. Artificial neural networks and approximate reasoning for intelligent control in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.

  12. Modeling Humans as Reinforcement Learners: How to Predict Human Behavior in Multi-Stage Games

    NASA Technical Reports Server (NTRS)

    Lee, Ritchie; Wolpert, David H.; Backhaus, Scott; Bent, Russell; Bono, James; Tracey, Brendan

    2011-01-01

    This paper introduces a novel framework for modeling interacting humans in a multi-stage game environment by combining concepts from game theory and reinforcement learning. The proposed model has the following desirable characteristics: (1) bounded rational players, (2) strategic players (i.e., players account for one another's reward functions), and (3) computational feasibility even on moderately large real-world systems. To achieve this we extend level-K reasoning to policy space, which for the first time makes it possible to handle multiple time steps. This allows us to decompose the problem into a series of smaller ones to which we can apply standard reinforcement learning algorithms. We investigate these ideas in a cyber-battle scenario over a smart power grid and discuss the relationship between the behavior predicted by our model and what one might expect of real human defenders and attackers.

  13. Hyperquarks and bosonic preon bound states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmid, Michael L.; Buchmann, Alfons J.

    2009-11-01

    In a model in which leptons, quarks, and the recently introduced hyperquarks are built up from two fundamental spin-1/2 preons, the standard model weak gauge bosons emerge as preon bound states. In addition, the model predicts a host of new composite gauge bosons, in particular those responsible for hyperquark and proton decay. Their presence entails a left-right symmetric extension of the standard model weak interactions and a scheme for a partial and grand unification of nongravitational interactions based on, respectively, the effective gauge groups SU(6)_P and SU(9)_G. This leads to a prediction of the Weinberg angle at low energies in good agreement with experiment. Furthermore, using evolution equations for the effective coupling strengths, we calculate the partial and grand unification scales, the hyperquark mass scale, as well as the mass and decay rate of the lightest hyperhadron.

  14. Social relevance: toward understanding the impact of the individual in an information cascade

    NASA Astrophysics Data System (ADS)

    Hall, Robert T.; White, Joshua S.; Fields, Jeremy

    2016-05-01

    Information Cascades (IC) through a social network occur due to the decision of users to disseminate content. We define this decision process as User Diffusion (UD). IC models typically describe an information cascade by treating a user as a node within a social graph, where a node's reception of an idea is represented by some activation state. The probability of activation then becomes a function of a node's connectedness to other activated nodes as well as, potentially, the history of activation attempts. We enrich this Coarse-Grained User Diffusion (CGUD) model by applying actor type logics to the nodes of the graph. The resulting Fine-Grained User Diffusion (FGUD) model utilizes prior research in actor typing to generate a predictive model regarding the future influence a user will have on an Information Cascade. Furthermore, we introduce a measure of Information Resonance that is used to aid in predictions regarding user behavior.

  15. Vehicle lift-off modelling and a new rollover detection criterion

    NASA Astrophysics Data System (ADS)

    Mashadi, Behrooz; Mostaghimi, Hamid

    2017-05-01

    The modelling and development of a general criterion for the prediction of the rollover threshold is the main purpose of this work. Vehicle dynamics models for the phase after wheel lift-off, when the vehicle moves on two wheels, are derived, and the governing equations are used to develop the rollover threshold. These models include the properties of the suspension and steering systems. In order to study the stability of motion, the steady-state solutions of the equations of motion are obtained. Based on the stability analyses, a new relation is obtained for the rollover threshold in terms of measurable response parameters. The presented criterion predicts the best time for the prevention of vehicle rollover by applying a correcting moment. It is shown that the introduced rollover threshold identifies the state of vehicle motion from which the vehicle can be stabilised with a low energy requirement.

  16. A Three-Parameter Model for Predicting Fatigue Life of Ductile Metals Under Constant Amplitude Multiaxial Loading

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Li, Jing; Zhang, Zhong-ping

    2013-04-01

    In this article, a fatigue damage parameter is proposed to assess the multiaxial fatigue lives of ductile metals based on the critical plane concept: fatigue crack initiation is controlled by the maximum shear strain, while the normal strain and stress provide the other important effect in the fatigue damage process. This fatigue damage parameter introduces a stress-correlated factor, which describes the degree of non-proportional cyclic hardening. In addition, a three-parameter multiaxial fatigue criterion is used to correlate the fatigue lifetime of metallic materials with the proposed damage parameter. Under uniaxial loading, this three-parameter model reduces to the recently developed Zhang's model for predicting the uniaxial fatigue crack initiation life. The accuracy and reliability of the three-parameter model are checked against experimental data found in the literature for six different ductile metals tested under various strain paths with zero/non-zero mean stress.

  17. A comprehensive model of the spatio-temporal stem cell and tissue organisation in the intestinal crypt.

    PubMed

    Buske, Peter; Galle, Jörg; Barker, Nick; Aust, Gabriela; Clevers, Hans; Loeffler, Markus

    2011-01-06

    We introduce a novel dynamic model of stem cell and tissue organisation in murine intestinal crypts. Integrating the molecular, cellular and tissue level of description, this model links a broad spectrum of experimental observations encompassing spatially confined cell proliferation, directed cell migration, multiple cell lineage decisions and clonal competition. Using computational simulations we demonstrate that the model is capable of quantitatively describing and predicting the dynamic behaviour of the intestinal tissue during steady state as well as after cell damage and following selective gain or loss of gene function manipulations affecting Wnt- and Notch-signalling. Our simulation results suggest that reversibility and flexibility of cellular decisions are key elements of robust tissue organisation of the intestine. We predict that the tissue should be able to fully recover after complete elimination of cellular subpopulations including subpopulations deemed to be functional stem cells. This challenges current views of tissue stem cell organisation.

  18. Interface stresses in fiber-reinforced materials with regular fiber arrangements

    NASA Astrophysics Data System (ADS)

    Mueller, W. H.; Schmauder, S.

    The theory of linear elasticity is used here to analyze the stresses inside and at the surface of fiber-reinforced composites. Plane strain, plane stress, and generalized plane strain are analyzed using the shell model and the BHE model, and are numerically studied using finite element analysis. Interface stresses are shown to depend weakly on Poisson's ratio. For equal values of the ratio, generalized plane strain and plane strain results are identical. For small fiber volume fractions, up to 40 vol pct, the shell and BHE models predict the interface stresses very well over a wide range of elastic mismatches and for different fiber arrangements. At higher volume fractions the stresses are influenced by interactions with neighboring fibers. Introducing an external pressure into the shell model allows the prediction of interface stresses in real composites with isolated or regularly arranged fibers.

  19. On the Gause predator-prey model with a refuge: a fresh look at the history.

    PubMed

    Křivan, Vlastimil

    2011-04-07

    This article re-analyses a prey-predator model with a refuge introduced by one of the founders of population ecology, Gause, and his co-workers to explain discrepancies between their observations and the predictions of the Lotka-Volterra prey-predator model. They replaced the linear functional response used by Lotka and Volterra by a saturating functional response with a discontinuity at a critical prey density. At densities below this critical value prey were effectively in a refuge, while at higher densities they were available to predators. Thus, their functional response was of the Holling type III. They analyzed this model and predicted the existence of a limit cycle in predator-prey dynamics. In this article I show that their model is ill posed, because trajectories are not well defined. Using the Filippov method, I define and analyze solutions of the Gause model. I show that, depending on parameter values, there are three possibilities: (1) trajectories converge to a limit cycle, as predicted by Gause, (2) trajectories converge to an equilibrium, or (3) the prey population escapes predator control and grows to infinity. Copyright © 2011 Elsevier Ltd. All rights reserved.
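
    The re-analysed model is straightforward to simulate: logistic prey growth, a functional response that is zero below the refuge threshold and saturating above it, and linear predator conversion and mortality. The forward-Euler sketch below uses illustrative parameters; near the discontinuity a Filippov treatment, as in the paper, is needed for rigorous trajectories.

    ```python
    # Euler sketch of a Gause-type predator-prey model with a prey refuge.
    # All parameter values are illustrative, not Gause's.
    r, K = 1.0, 10.0          # prey growth rate and carrying capacity
    c, h = 1.0, 1.0           # attack rate and handling time
    e, m = 0.5, 0.3           # conversion efficiency and predator mortality
    x_crit = 2.0              # refuge threshold: prey below this are unavailable

    def response(x: float) -> float:
        if x < x_crit:
            return 0.0                        # prey hidden in the refuge
        return c * x / (1.0 + c * h * x)      # saturating consumption above it

    x, y, dt = 5.0, 1.0, 1e-3
    for _ in range(200_000):                  # integrate 200 time units
        dx = r * x * (1 - x / K) - response(x) * y
        dy = e * response(x) * y - m * y
        x, y = max(x + dt * dx, 0.0), max(y + dt * dy, 0.0)

    print(f"state after integration: prey {x:.2f}, predators {y:.2f}")
    ```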

  20. Validation of Shoulder Response of Human Body Finite-Element Model (GHBMC) Under Whole Body Lateral Impact Condition.

    PubMed

    Park, Gwansik; Kim, Taewung; Panzer, Matthew B; Crandall, Jeff R

    2016-08-01

    In previous shoulder impact studies, the 50th-percentile male GHBMC human body finite-element model was shown to have good biofidelity regarding impact force, but under-predicted shoulder deflection by 80% compared to those observed in the experiment. The goal of this study was to validate the response of the GHBMC M50 model by focusing on three-dimensional shoulder kinematics under a whole-body lateral impact condition. Five modifications, focused on material properties and modeling techniques, were introduced into the model and a supplementary sensitivity analysis was done to determine the influence of each modification to the biomechanical response of the body. The modified model predicted substantially improved shoulder response and peak shoulder deflection within 10% of the observed experimental data, and showed good correlation in the scapula kinematics on sagittal and transverse planes. The improvement in the biofidelity of the shoulder region was mainly due to the modifications of material properties of muscle, the acromioclavicular joint, and the attachment region between the pectoralis major and ribs. Predictions of rib fracture and chest deflection were also improved because of these modifications.
