Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been used widely in medicine and the health care sector. Within AI, machine-learning classification and prediction is a major field, and the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most widely used machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Existing predictive models are therefore essential, and current methods must be improved.
A review of statistical updating methods for clinical prediction models.
Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew
2018-01-01
A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate the existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new clinical prediction model from scratch.
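As a concrete illustration of the simplest category above, coefficient updating, the sketch below performs logistic recalibration: the linear predictor of a hypothetical existing model is refit with a new intercept and calibration slope on synthetic data from the target population. The coefficients, data, and variable names are invented; the meta-model and dynamic-updating strategies are not shown.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # Hypothetical existing clinical prediction model: intercept and coefficients.
    b0_old, beta_old = -2.0, np.array([0.8, 0.5, -0.3])

    # New-population data (synthetic stand-in for, e.g., cardiac surgery outcomes).
    X = rng.normal(size=(500, 3))
    true_lp = -1.2 + X @ np.array([0.9, 0.4, -0.2])          # miscalibrated truth
    y = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))

    # Logistic recalibration: regress the outcome on the old linear predictor,
    # re-estimating only an intercept and a calibration slope.
    lp_old = b0_old + X @ beta_old
    recal = sm.Logit(y, sm.add_constant(lp_old)).fit(disp=0)
    print(recal.params)    # [new intercept, calibration slope]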
Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H
2017-07-01
Extracranial robotic radiotherapy employs external markers and a correlation model to trace tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these methods, pLRF-ELM performs prediction by modeling higher-level features, obtained by mapping the raw respiratory motion into the random feature space of the ELM, instead of modeling the raw respiratory motion directly. The developed method is evaluated using a dataset acquired from 31 patients for two horizons in line with the latencies of treatment systems such as CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods and further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
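To make the ELM component concrete, here is a minimal random-feature (basic ELM) regression predicting a surrogate respiratory trace one latency horizon ahead. It omits the local receptive fields that distinguish pLRF-ELM, and the signal, horizon, and parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    fs, horizon, lag, hidden = 25, 12, 50, 200   # Hz, ~0.5 s latency, embedding, neurons
    t = np.arange(0, 120, 1 / fs)
    x = np.sin(2 * np.pi * 0.25 * t) * (1 + 0.2 * np.sin(2 * np.pi * 0.02 * t)) \
        + 0.02 * rng.normal(size=t.size)         # surrogate respiratory motion

    # Build (embedding -> future value) training pairs.
    idx = np.arange(lag, x.size - horizon)
    X = np.stack([x[i - lag:i] for i in idx])
    y = x[idx + horizon]

    # Basic ELM: random hidden layer, ridge-regularized linear read-out.
    W = rng.normal(size=(lag, hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(hidden), H.T @ y)

    y_hat = H @ beta                             # in-sample fit, for brevity
    print("RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))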
Hilkens, N A; Algra, A; Greving, J P
2016-01-01
ESSENTIALS: Prediction models may help to identify patients at high risk of bleeding on antiplatelet therapy. We identified existing prediction models for bleeding and validated them in patients with cerebral ischemia. Five prediction models were identified, all of which had some methodological shortcomings. Performance in patients with cerebral ischemia was poor. Background: Antiplatelet therapy is widely used in secondary prevention after a transient ischemic attack (TIA) or ischemic stroke. Bleeding is the main adverse effect of antiplatelet therapy and is potentially life threatening. Identification of patients at increased risk of bleeding may help target antiplatelet therapy. This study sought to identify existing prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy and to evaluate their performance in patients with cerebral ischemia. We systematically searched PubMed and Embase for existing prediction models up to December 2014. The methodological quality of the included studies was assessed with the CHARMS checklist. Prediction models were externally validated in the European Stroke Prevention Study 2, comprising 6602 patients with a TIA or ischemic stroke. We assessed discrimination and calibration of the included prediction models. Five prediction models were identified, of which two were developed in patients with previous cerebral ischemia. Three studies assessed major bleeding, one studied intracerebral hemorrhage, and one gastrointestinal bleeding. None of the studies met all criteria of good quality. External validation showed poor discriminative performance, with c-statistics ranging from 0.53 to 0.64, and poor calibration. A limited number of prediction models is available to predict intracranial hemorrhage or major bleeding in patients on antiplatelet therapy. The methodological quality of the models varied but was generally low. Predictive performance in patients with cerebral ischemia was poor. To reliably predict the risk of bleeding in patients with cerebral ischemia, a prediction model must be developed according to current methodological standards.
Improved Modeling of Open Waveguide Aperture Radiators for use in Conformal Antenna Arrays
NASA Astrophysics Data System (ADS)
Nelson, Gregory James
Open waveguide apertures have been used as radiating elements in conformal arrays. Individual radiating element model patterns are used in constructing overall array models. The existing models for these aperture radiating elements may not accurately predict the array pattern for TEM waves that are not on boresight for each radiating element. In particular, surrounding structures can affect the far field patterns of these apertures, which ultimately affects the overall array pattern. New models of open waveguide apertures are developed here with the goal of accounting for the surrounding structure effects on the aperture far field patterns such that the new models make accurate pattern predictions. These aperture patterns (both E plane and H plane) are measured in an anechoic chamber, and the manner in which they deviate from existing model patterns is studied. Using these measurements as a basis, existing models for both E and H planes are updated with new factors and terms that allow the prediction of far field open waveguide aperture patterns with improved accuracy. These new and improved individual radiator models are then used to predict overall conformal array patterns. Arrays of open waveguide apertures are constructed and measured in a similar fashion to the individual aperture measurements. These measured array patterns are compared with the newly modeled array patterns to verify the improved accuracy of the new models, as compared with existing models, in making array far field pattern predictions. The array pattern lobe characteristics are then studied for predicting fully circularly conformal arrays of varying radii. The lobe metrics tracked are angular location and magnitude as the radii of the conformal arrays are varied. A constructed, measured array that is close to conforming to a circular surface is compared with a fully circularly conformal modeled array pattern prediction, with the predicted lobe angular locations and magnitudes tracked, plotted, and tabulated. The close match between the patterns of the measured array and the modeled circularly conformal array verifies the validity of the modeled circularly conformal array pattern predictions.
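The pattern-multiplication step described above, building an overall conformal array pattern from individual aperture patterns, can be sketched numerically. The sum below combines hypothetical element patterns on a circular arc with the corresponding position phase terms; the cosine element pattern, frequency, radius, and element count are placeholders, not the improved models developed in this work.

    import numpy as np

    lam = 0.03                      # wavelength in m (10 GHz, hypothetical)
    k = 2 * np.pi / lam
    R, N = 0.15, 8                  # arc radius and element count (hypothetical)
    phi_n = np.linspace(-60, 60, N) * np.pi / 180   # element boresight angles

    theta = np.linspace(-np.pi, np.pi, 1441)        # observation angles
    total = np.zeros_like(theta, dtype=complex)
    for p in phi_n:
        psi = theta - p                                 # angle off element boresight
        elem = np.clip(np.cos(psi), 0.01, None)         # crude open-aperture pattern
        phase = np.exp(1j * k * R * np.cos(theta - p))  # element position phase term
        total += elem * phase

    pat_db = 20 * np.log10(np.abs(total) / np.abs(total).max())
    print("dynamic range (dB):", round(pat_db.min(), 1))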
Model predictions of wind and turbulence profiles associated with an ensemble of aircraft accidents
NASA Technical Reports Server (NTRS)
Williamson, G. G.; Lewellen, W. S.; Teske, M. E.
1977-01-01
The feasibility of predicting conditions under which wind/turbulence environments hazardous to aviation operations exist is studied by examining a number of different accidents in detail. A model of turbulent flow in the atmospheric boundary layer is used to reconstruct wind and turbulence profiles which may have existed at low altitudes at the time of the accidents. The predictions are consistent with available flight recorder data, but neither the input boundary conditions nor the flight recorder observations are sufficiently precise for these studies to be interpreted as verification tests of the model predictions.
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience and developing computational models to predict operator performance in complex situations offer potential methods to address this challenge. Among the concerns with modeling operator performance are that models need to be realistic and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience before being able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.
Ouyang, Yicun; Yin, Hujun
2018-05-01
Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently for one-step prediction, that is, predicting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and uncertainty or error accumulation. The main existing approaches, iterative and independent, either use a one-step model recursively or treat each step of the multi-step task as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in its component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
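The two baseline strategies contrasted above, iterative and independent (direct), are easy to sketch; the VLM mixture itself is more involved and is not reproduced here. The synthetic series and model orders are arbitrary.

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.sin(np.arange(600) * 0.1) + 0.1 * rng.normal(size=600)
    p, H = 10, 5                       # AR order and forecast horizon

    idx = np.arange(p, x.size - H)
    X = np.stack([x[i - p:i] for i in idx])

    # Iterative strategy: one-step AR model applied recursively.
    a1 = np.linalg.lstsq(X, x[idx], rcond=None)[0]
    window = x[-p:].copy()
    iterative = []
    for _ in range(H):
        nxt = window @ a1
        iterative.append(nxt)
        window = np.append(window[1:], nxt)

    # Independent (direct) strategy: a separate model per horizon h.
    direct = [x[-p:] @ np.linalg.lstsq(X, x[idx + h], rcond=None)[0]
              for h in range(1, H + 1)]
    print(np.round(iterative, 3), np.round(direct, 3))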
Markkula, Gustav; Boer, Erwin; Romano, Richard; Merat, Natasha
2018-06-01
A conceptual and computational framework is proposed for modelling of human sensorimotor control and is exemplified for the sensorimotor task of steering a car. The framework emphasises control intermittency and extends existing models by suggesting that the nervous system implements intermittent control using a combination of (1) motor primitives, (2) prediction of sensory outcomes of motor actions, and (3) evidence accumulation of prediction errors. It is shown that approximate but useful sensory predictions in the intermittent control context can be constructed without detailed forward models, as a superposition of simple prediction primitives, resembling neurobiologically observed corollary discharges. The proposed mathematical framework allows straightforward extension to intermittent behaviour from existing one-dimensional continuous models in the linear control and ecological psychology traditions. Empirical data from a driving simulator are used in model-fitting analyses to test some of the framework's main theoretical predictions: it is shown that human steering control, in routine lane-keeping and in a demanding near-limit task, is better described as a sequence of discrete stepwise control adjustments than as continuous control. Results on the possible roles of sensory prediction in control adjustment amplitudes, and of evidence accumulation mechanisms in control onset timing, show trends that match the theoretical predictions; these warrant further investigation. The results for the accumulation-based model align with other recent literature, in a possibly converging case against the type of threshold mechanisms that are often assumed in existing models of intermittent control.
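A minimal caricature of the evidence-accumulation idea: a noisy leaky accumulator integrates a steering error signal, and each threshold crossing triggers a discrete stepwise control adjustment. All signals and parameters below are invented for illustration and do not correspond to the paper's fitted model.

    import numpy as np

    rng = np.random.default_rng(3)
    dt, T = 0.01, 30.0
    n = int(T / dt)
    err = np.sin(0.5 * np.arange(n) * dt) + 0.05 * rng.normal(size=n)  # stand-in error

    k, leak, sigma, thresh, gain = 2.0, 1.0, 0.3, 1.0, 0.5
    A, steer, onsets = 0.0, 0.0, []
    steer_trace = np.empty(n)
    for i in range(n):
        # Leaky stochastic accumulation of the prediction-error evidence.
        A += dt * (k * err[i] - leak * A) + np.sqrt(dt) * sigma * rng.normal()
        if abs(A) > thresh:            # evidence threshold crossed:
            steer += gain * err[i]     # issue one discrete stepwise adjustment
            onsets.append(i * dt)
            A = 0.0                    # reset the accumulator after each adjustment
        steer_trace[i] = steer
    print(len(onsets), "discrete adjustments")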
Cultural Resource Predictive Modeling
2017-10-01
property to manage? a. Yes 2) Do you use CRPM (Cultural Resource Predictive Modeling)? No, but I use predictive modelling informally. For example... resource program and provide support to the test ranges for their missions. This document will provide information such as lessons learned, points... of contact, and resources to the range cultural resource managers. Objective/Scope: Identify existing cultural resource predictive models and
Calibration of PMIS pavement performance prediction models.
DOT National Transportation Integrated Search
2012-02-01
Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...
Combustion of Nitramine Propellants
1983-03-01
through development of a comprehensive analytical model. The ultimate goals are to enable prediction of deflagration rate over a wide pressure range... superior in burn rate prediction, both simple models fail in correlating existing temperature-sensitivity data. (2) In the second part, a... auxiliary condition to enable independent burn rate prediction; improved melt phase model including decomposition-gas bubbles; model for far-field
Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.; O'Brien, Grady M.
2004-01-01
We develop a new observation‐prediction (OPR) statistic for evaluating the importance of system state observations to model predictions. The OPR statistic measures the change in prediction uncertainty produced when an observation is added to or removed from an existing monitoring network, and it can be used to guide refinement and enhancement of the network. Prediction uncertainty is approximated using a first‐order second‐moment method. We apply the OPR statistic to a model of the Death Valley regional groundwater flow system (DVRFS) to evaluate the importance of existing and potential hydraulic head observations to predicted advective transport paths in the saturated zone underlying Yucca Mountain and underground testing areas on the Nevada Test Site. Important existing observations tend to be far from the predicted paths, and many unimportant observations are in areas of high observation density. These results can be used to select locations at which increased observation accuracy would be beneficial and locations that could be removed from the network. Important potential observations are mostly in areas of high hydraulic gradient far from the paths. Results for both existing and potential observations are related to the flow system dynamics and coarse parameter zonation in the DVRFS model. If system properties in different locations are as similar as the zonation assumes, then the OPR results illustrate a data collection opportunity whereby observations in distant, high‐gradient areas can provide information about properties in flatter‐gradient areas near the paths. If this similarity is suspect, then the analysis produces a different type of data collection opportunity involving testing of model assumptions critical to the OPR results.
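Under the first-order second-moment approximation, the prediction variance is z'(J'WJ)^(-1)z, and the OPR statistic is the percent change in prediction standard deviation when an observation row is removed (or a candidate row added). The sketch below uses random placeholder Jacobians and weights, not the DVRFS model.

    import numpy as np

    rng = np.random.default_rng(4)
    n_obs, n_par = 30, 5
    J = rng.normal(size=(n_obs, n_par))   # d(observation)/d(parameter)
    w = np.full(n_obs, 4.0)               # observation weights (1/variance)
    z = rng.normal(size=n_par)            # d(prediction)/d(parameter)

    def pred_sd(J, w):
        # First-order second-moment parameter covariance, propagated to the prediction.
        C = np.linalg.inv(J.T @ (w[:, None] * J))
        return np.sqrt(z @ C @ z)

    base = pred_sd(J, w)
    opr_remove = [100 * (pred_sd(np.delete(J, i, 0), np.delete(w, i)) - base) / base
                  for i in range(n_obs)]
    print("most important existing observation:", int(np.argmax(opr_remove)))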
Product component genealogy modeling and field-failure prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Caleb; Hong, Yili; Meeker, William Q.
2016-04-13
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
Mammographic density, breast cancer risk and risk prediction
Vachon, Celine M; van Gils, Carla H; Sellers, Thomas A; Ghosh, Karthik; Pruthi, Sandhya; Brandt, Kathleen R; Pankratz, V Shane
2007-01-01
In this review, we examine the evidence for mammographic density as an independent risk factor for breast cancer, describe the risk prediction models that have incorporated density, and discuss the current and future implications of using mammographic density in clinical practice. Mammographic density is a consistent and strong risk factor for breast cancer in several populations and across age at mammogram. Recently, this risk factor has been added to existing breast cancer risk prediction models, increasing the discriminatory accuracy with its inclusion, albeit slightly. With validation, these models may replace the existing Gail model for clinical risk assessment. However, absolute risk estimates resulting from these improved models are still limited in their ability to characterize an individual's probability of developing cancer. Promising new measures of mammographic density, including volumetric density, which can be standardized using full-field digital mammography, will likely result in a stronger risk factor and improve accuracy of risk prediction models.
Dynamic prediction in functional concurrent regression with an application to child growth.
Leroux, Andrew; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William
2018-04-15
In many studies, it is of interest to predict the future trajectory of subjects based on their historical data, referred to as dynamic prediction. Mixed effects models have traditionally been used for dynamic prediction. However, the commonly used random intercept and slope model is often not sufficiently flexible for modeling subject-specific trajectories. In addition, there may be useful exposures/predictors of interest that are measured concurrently with the outcome, complicating dynamic prediction. To address these problems, we propose a dynamic functional concurrent regression model to handle the case where both the functional response and the functional predictors are irregularly measured. Currently, such a model cannot be fit by existing software. We apply the model to dynamically predict children's length conditional on prior length, weight, and baseline covariates. Inference on model parameters and subject-specific trajectories is conducted using the mixed effects representation of the proposed model. An extensive simulation study shows that the dynamic functional regression model provides more accurate estimation and inference than existing methods. Methods are supported by fast, flexible, open source software that uses heavily tested smoothing techniques.
Gomes, Anna; van der Wijk, Lars; Proost, Johannes H; Sinha, Bhanu; Touw, Daan J
2017-01-01
Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to avoid potential underdosing of gentamicin in endocarditis patients.
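The two validation statistics used above are medians of (absolute) percentage prediction errors; a short sketch with hypothetical serum levels follows.

    import numpy as np

    def mdpe(pred, obs):
        """Median Prediction Error (%), a bias measure."""
        return np.median(100 * (pred - obs) / obs)

    def mdape(pred, obs):
        """Median Absolute Prediction Error (%), a precision measure."""
        return np.median(np.abs(100 * (pred - obs) / obs))

    pred = np.array([8.1, 1.9, 10.3, 2.2])   # hypothetical gentamicin levels (mg/L)
    obs = np.array([8.0, 2.0, 11.0, 2.1])
    print(mdpe(pred, obs), mdape(pred, obs))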
NASA Astrophysics Data System (ADS)
Zhu, Linqi; Zhang, Chong; Zhang, Chaomo; Wei, Yang; Zhou, Xueqing; Cheng, Yuan; Huang, Yuyang; Zhang, Le
2018-06-01
There is increasing interest in shale gas reservoirs due to their abundant reserves. As a key evaluation criterion, the total organic carbon content (TOC) of a reservoir reflects its hydrocarbon generation potential. Existing TOC calculation models are not very accurate, and there is still room for improvement. In this paper, an integrated hybrid neural network (IHNN) model is proposed for predicting the TOC. The motivation is that the TOC information available to existing algorithms comes mainly from low-TOC reservoirs, where the TOC is easy to evaluate, an inherent limitation of those algorithms when used for prediction. By comparing the prediction models established on 132 rock samples from the shale gas reservoir in the Jiaoshiba area, it can be seen that the accuracy of the proposed IHNN model is much higher than that of the other prediction models: the mean square error on samples not used in building the models was reduced from 0.586 to 0.442. The results also show that the TOC is easier to predict once the log-based prediction has been improved. Furthermore, this paper sets out the next research directions for the prediction model. The IHNN algorithm can help evaluate the TOC of shale gas reservoirs.
van Eijkeren, Jan C H; Olie, J Daniël N; Bradberry, Sally M; Vale, J Allister; de Vries, Irma; Clewell, Harvey J; Meulenbelt, Jan; Hunault, Claudine C
2017-02-01
Kinetic models could potentially assist clinicians in managing cases of lead poisoning. Several models exist that can simulate lead kinetics, but none of them can predict the effect of chelation in lead poisoning. Our aim was to devise a model to predict the effect of succimer (dimercaptosuccinic acid; DMSA) chelation therapy on blood lead concentrations. We integrated a two-compartment kinetic succimer model into an existing PBPK lead model to produce a Chelation Lead Therapy (CLT) model. The accuracy of the model's predictions was assessed by simulating clinical observations in patients poisoned by lead and treated with succimer. The CLT model calculates blood lead concentrations as the sum of the background exposure and the acute or chronic lead poisoning, the latter due either to ingestion of traditional remedies or to occupational exposure to lead-polluted ambient air; the exposure duration was known. The blood lead concentrations predicted by the CLT model were compared to the measured blood lead concentrations. Pre-chelation blood lead concentrations ranged between 99 and 150 μg/dL. The model was able to simulate accurately the blood lead concentrations during and after succimer treatment. The pattern of urine lead excretion was successfully predicted in some patients and poorly predicted in others. Our model is able to predict blood lead concentrations after succimer therapy, at least in situations where the duration of lead exposure is known.
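The structure of the CLT model, a chelator compartment model coupled to lead kinetics, can be caricatured with a toy ODE system in which circulating chelator opens an additional, concentration-dependent elimination pathway for blood lead. Every rate constant and dose below is hypothetical; the actual model is a full PBPK model.

    import numpy as np
    from scipy.integrate import solve_ivp

    ka, kec = 1.2, 0.35        # succimer absorption / elimination (1/h), hypothetical
    keb, kchel = 0.002, 0.05   # baseline and chelation-mediated lead clearance

    def rhs(t, y):
        gut, chel, blood = y   # succimer in gut, succimer in plasma, blood lead
        return [-ka * gut,
                ka * gut - kec * chel,
                -keb * blood - kchel * chel * blood]

    # Single hypothetical dose at t=0; initial blood lead 120 ug/dL.
    sol = solve_ivp(rhs, (0, 72), [10.0, 0.0, 120.0], dense_output=True)
    print(sol.sol(np.array([0, 24, 48, 72]))[2])   # predicted blood lead over 3 days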
Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches
Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.
2013-01-01
At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
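A minimal sketch of the approach: fit a PLS regression of log10 FIB concentration on deliberately correlated environmental covariates, then declare predicted exceedances of the advisory standard using a tunable decision threshold. The data are synthetic, and the specific standard and threshold offset are placeholder choices.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(5)
    n = 200
    turbidity = rng.lognormal(2, 0.5, n)
    rain = rng.exponential(5, n)
    waves = 0.5 * rain + rng.normal(0, 1, n)          # deliberately correlated
    X = np.column_stack([turbidity, rain, waves])
    log_fib = 1.0 + 0.01 * turbidity + 0.05 * rain + 0.2 * rng.normal(size=n)

    # PLS handles the correlated covariates via a small number of latent components.
    pls = PLSRegression(n_components=2).fit(X, log_fib)
    pred = pls.predict(X).ravel()

    standard = np.log10(235)      # E. coli beach action value, CFU/100 mL
    threshold = standard - 0.1    # tuning parameter trades sensitivity vs specificity
    advisory = pred > threshold
    print(advisory.sum(), "predicted exceedances")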
An individual-based model of zebrafish population dynamics accounting for energy dynamics.
Beaudouin, Rémy; Goussen, Benoit; Piccini, Benjamin; Augustine, Starrlight; Devillers, James; Brion, François; Péry, Alexandre R R
2015-01-01
Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget model for individual zebrafish (DEB model) was coupled to an individual-based model (IBM) of zebrafish population dynamics. Next, we fitted the DEB model to new experimental data on zebrafish growth and reproduction, thus improving existing models. We further analysed the DEB model and DEB-IBM using a sensitivity analysis. Finally, the predictions of the DEB-IBM were compared to existing observations on natural zebrafish populations, and the predicted population dynamics are realistic. While our zebrafish DEB-IBM can still be improved by acquiring new experimental data on the most uncertain processes (e.g. survival or feeding), it can already serve to predict the impact of compounds at the population level.
Latent Patient Cluster Discovery for Robust Future Forecasting and New-Patient Generalization.
Qian, Ting; Masino, Aaron J
2016-01-01
Commonly referred to as predictive modeling, the use of machine learning and statistical methods to improve healthcare outcomes has recently gained traction in biomedical informatics research. Given the vast opportunities enabled by large Electronic Health Records (EHR) data and powerful resources for conducting predictive modeling, we argue that it is yet crucial to first carefully examine the prediction task and then choose predictive methods accordingly. Specifically, we argue that there are at least three distinct prediction tasks that are often conflated in biomedical research: 1) data imputation, where a model fills in the missing values in a dataset, 2) future forecasting, where a model projects the development of a medical condition for a known patient based on existing observations, and 3) new-patient generalization, where a model transfers the knowledge learned from previously observed patients to newly encountered ones. Importantly, the latter two tasks (future forecasting and new-patient generalization) tend to be more difficult than data imputation, as they require predictions to be made on potentially out-of-sample data (i.e., data following a different predictable pattern from what has been learned by the model). Using hearing loss progression as an example, we investigate three regression models and show that the modeling of latent clusters is a robust method for addressing the more challenging prediction scenarios. Overall, our findings suggest that there exist significant differences between various kinds of prediction tasks and that it is important to evaluate the merits of a predictive model relative to the specific purpose of a prediction task.
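A stripped-down sketch of the latent-cluster idea for future forecasting: summarize each patient's early observations by a fitted intercept and slope, discover trajectory clusters with a Gaussian mixture, and forecast a later time point from the patient's cluster-mean trend. The data and cluster count are synthetic, and the paper's actual regression models are richer than this.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)
    t_obs = np.arange(5)                             # early visits
    slopes = rng.choice([-2.0, -0.5], size=100)      # two latent progression types
    Y = 60 + slopes[:, None] * t_obs + rng.normal(0, 1, (100, 5))  # e.g. hearing level

    # Per-patient OLS features: (intercept, slope) of the observed trajectory.
    G = np.column_stack([np.ones_like(t_obs), t_obs])
    coef = np.linalg.lstsq(G, Y.T, rcond=None)[0].T   # shape (100, 2)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(coef)
    cluster = gmm.predict(coef)

    # Forecast visit t=10 using each patient's cluster-mean trajectory.
    mu = gmm.means_                                   # cluster (intercept, slope)
    forecast = mu[cluster, 0] + mu[cluster, 1] * 10
    print(forecast[:5])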
Enhancing emotional-based target prediction
NASA Astrophysics Data System (ADS)
Gosnell, Michael; Woodley, Robert
2008-04-01
This work extends existing agent-based target movement prediction to include key ideas of behavioral inertia, steady states, and catastrophic change from existing psychological, sociological, and mathematical work. Existing target prediction work inherently assumes a single steady state for target behavior, and attempts to classify behavior based on a single emotional state set. The enhanced, emotional-based target prediction maintains up to three distinct steady states, or typical behaviors, based on a target's operating conditions and observed behaviors. Each steady state has an associated behavioral inertia, similar to the standard deviation of behaviors within that state. The enhanced prediction framework also allows steady state transitions through catastrophic change and individual steady states could be used in an offline analysis with additional modeling efforts to better predict anticipated target reactions.
Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.
2013-01-01
Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as “nowcasts.” During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts to collect data and develop predictive models by multiple agencies and to compile existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources either manually or by use of tools designed specifically to compile and process data for predictive-model development. Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches that had at least 2 years of data (2010-11 and sometimes earlier) and for 1 beach that had 1 year of data. For most models, software designed for model development by the U.S. Environmental Protection Agency (Virtual Beach) was used. The selected model for each beach was based on a combination of explanatory variables including, most commonly, turbidity, day of the year, change in lake level over 24 hours, wave height, wind direction and speed, and antecedent rainfall for various time periods. Forty-two predictive models were validated against data collected during an independent year (2012) and compared to the current method for assessing recreational water quality, which uses the previous day's E. coli concentration (the persistence model). Goals for good predictive-model performance were responses that were at least 5 percent greater than the persistence model and overall correct responses greater than or equal to 80 percent, sensitivities (percentage of exceedances of the bathing-water standard that were correctly predicted by the model) greater than or equal to 50 percent, and specificities (percentage of nonexceedances correctly predicted by the model) greater than or equal to 85 percent. Out of 42 predictive models, 24 models yielded overall correct responses that were at least 5 percent greater than those of the persistence model.
Predictive-model responses met the performance goals more often than the persistence-model responses in terms of overall correctness (28 versus 17 models, respectively), sensitivity (17 versus 4 models), and specificity (34 versus 25 models). Gaining knowledge of each beach and the factors that affect E. coli concentrations is important for developing good predictive models. Collection of additional years of data with a wide range of environmental conditions may also help to improve future model performance. The USGS will continue to work with local agencies in 2013 and beyond to develop and validate predictive models at beaches and improve existing nowcasts, restructuring monitoring activities to accommodate future uncertainties in funding and resources.
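The three performance measures used in this evaluation, and the persistence baseline they are compared against, reduce to a few lines; the E. coli series below is fabricated.

    import numpy as np

    def nowcast_metrics(pred_exceed, obs_exceed):
        overall = np.mean(pred_exceed == obs_exceed) * 100
        sens = np.mean(pred_exceed[obs_exceed]) * 100        # exceedances caught
        spec = np.mean(~pred_exceed[~obs_exceed]) * 100      # nonexceedances caught
        return overall, sens, spec

    rng = np.random.default_rng(7)
    ecoli = rng.lognormal(4.5, 1.0, 90)            # daily concentrations (CFU/100 mL)
    obs = ecoli > 235                              # observed exceedances

    persistence = np.roll(obs, 1)[1:]              # yesterday's result as today's call
    model_pred = obs[1:] ^ (rng.random(89) < 0.2)  # stand-in model, ~80% agreement
    print(nowcast_metrics(persistence, obs[1:]))
    print(nowcast_metrics(model_pred, obs[1:]))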
NASA Astrophysics Data System (ADS)
Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal
2017-11-01
Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of the two existing approaches by combining a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
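A sketch of the two-step approach: generate per-phase flow curves from a dislocation-based strain-hardening law and combine them with the rule of mixtures. The law shown is one common dislocation-based form used for dual-phase steels; all material parameters are placeholders rather than the calibrated values of this study.

    import numpy as np

    eps = np.linspace(1e-4, 0.15, 300)   # true plastic strain

    def phase_flow(sig0, k, L, alpha=0.33, M=3.0, mu=80e3, b=2.5e-10):
        """One common dislocation-based law (units MPa, m):
        sigma = sig0 + alpha*M*mu*sqrt(b) * sqrt((1 - exp(-M*k*eps)) / (k*L))."""
        return sig0 + alpha * M * mu * np.sqrt(b) * np.sqrt(
            (1 - np.exp(-M * k * eps)) / (k * L))

    sigma_f = phase_flow(sig0=300.0, k=5.0, L=5e-6)     # ferrite (hypothetical)
    sigma_m = phase_flow(sig0=1100.0, k=40.0, L=5e-7)   # martensite (hypothetical)

    Vm = 0.25                                           # martensite fraction
    sigma_dp = Vm * sigma_m + (1 - Vm) * sigma_f        # rule of mixtures
    print(sigma_dp[[0, -1]])                            # composite flow stress range, MPa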
A multidimensional stability model for predicting shallow landslide size and shape across landscapes
David G. Milledge; Dino Bellugi; Jim A. McKean; Alexander L. Densmore; William E. Dietrich
2014-01-01
The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but...
Modeling water yield response to forest cover changes in northern Minnesota
S.C. Bernath; E.S. Verry; K.N. Brooks; P.F. Ffolliott
1982-01-01
A water yield model (TIMWAT) has been developed to predict changes in water yield following changes in forest cover in northern Minnesota. Two versions of the model exist; one predicts changes in water yield as a function of gross precipitation and time after clearcutting. The second version predicts changes in water yield due to changes in above-ground biomass...
Prediction of brittleness based on anisotropic rock physics model for kerogen-rich shale
NASA Astrophysics Data System (ADS)
Qian, Ke-Ran; He, Zhi-Liang; Chen, Ye-Quan; Liu, Xi-Wu; Li, Xiang-Yang
2017-12-01
The construction of a shale rock physics model and the selection of an appropriate brittleness index (BI) are two significant steps that can influence the accuracy of brittleness prediction. On one hand, the existing models of kerogen-rich shale are controversial, so a reasonable rock physics model needs to be built. On the other hand, several types of equations already exist for predicting the BI, whose feasibility needs to be carefully considered. This study constructed a kerogen-rich rock physics model by applying the self-consistent approximation and the differential effective medium theory to model intercoupled clay and kerogen mixtures. The feasibility of our model was confirmed by comparison with classical models, showing better accuracy. Templates were constructed based on our model to link physical properties and the BI. Different equations for the BI had different sensitivities, making them suitable for different types of formations: equations based on Young's modulus were sensitive to variations in lithology, while those using Lamé's coefficients were sensitive to porosity and pore fluids. Physical information must be considered to improve brittleness prediction.
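For reference, the Young's-modulus route to the BI can be sketched directly from sonic and density logs: compute Young's modulus and Poisson's ratio from Vp, Vs, and density, then average their min-max normalized values (a Rickman-style index). The log values below are invented.

    import numpy as np

    vp = np.array([3800., 4100., 4400.])    # m/s (hypothetical log samples)
    vs = np.array([2200., 2450., 2600.])    # m/s
    rho = np.array([2450., 2500., 2550.])   # kg/m^3

    mu = rho * vs**2                                       # shear modulus
    E = mu * (3 * vp**2 - 4 * vs**2) / (vp**2 - vs**2)     # Young's modulus
    nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))       # Poisson's ratio

    En = (E - E.min()) / (E.max() - E.min())
    # Brittle rock has high E and low nu, so the nu normalization is inverted.
    NUn = (nu.max() - nu) / (nu.max() - nu.min())
    BI = 100 * (En + NUn) / 2
    print(BI)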
Dupuy, Madeleine M; Powell, James A; Ramirez, Ricardo A
2017-10-01
Billbugs are native pests of turfgrass throughout North America, primarily managed with preventive, calendar-based insecticide applications. An existing degree-day model (lower development threshold of 10°C, biofix 1 March) developed in the eastern United States for bluegrass billbug, Sphenophorus parvulus (Gyllenhal; Coleoptera: Curculionidae), may not accurately predict adult billbug activity in the western United States, where billbugs occur as a species complex. The objectives of this study were 1) to track billbug phenology and species composition in managed Utah and Idaho turfgrass and 2) to evaluate model parameters that best predict billbug activity, including those of the existing bluegrass billbug model. Tracking billbugs with linear pitfall traps at two sites each in Utah and Idaho, we confirmed a complex of three univoltine species damaging turfgrass consisting of (in descending order of abundance) bluegrass billbug, hunting billbug (Sphenophorus venatus vestitus Chittenden; Coleoptera: Curculionidae), and Rocky Mountain billbug (Sphenophorus cicatristriatus Fabraeus; Coleoptera: Curculionidae). This complex was active from February through mid-October, with peak activity in mid-June. Based on linear regression analysis, we found that the existing bluegrass billbug model was not robust in predicting billbug activity in Utah and Idaho. Instead, the model that best predicts adult activity of the billbug complex accumulates degree-days above 3°C after 13 January. This model predicts adult activity levels important for management within 11 d of observed activity at 77% of sites. In conjunction with outreach and cooperative networking, this predictive degree-day model may assist end users to better time monitoring efforts and insecticide applications against billbug pests in Utah and Idaho by predicting adult activity.
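The selected model reduces to a running sum: accumulate daily degree-days above the 3°C threshold starting from the 13 January biofix. The temperature series in the sketch is fabricated, and the 800-degree-day activity level shown is purely illustrative.

    import numpy as np

    rng = np.random.default_rng(8)
    days = np.arange(1, 366)                                # day of year
    tmean = 12 + 14 * np.sin(2 * np.pi * (days - 105) / 365) \
            + rng.normal(0, 2, days.size)                   # synthetic daily means, deg C

    biofix, base = 13, 3.0                                  # 13 Jan, 3 deg C threshold
    dd = np.where(days >= biofix, np.maximum(tmean - base, 0.0), 0.0)
    cum_dd = np.cumsum(dd)

    # Day of year on which a hypothetical activity threshold is first reached.
    print(days[np.searchsorted(cum_dd, 800.0)])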
Plans for Aeroelastic Prediction Workshop
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Ballmann, Josef; Bhatia, Kumar; Blades, Eric; Boucke, Alexander; Chwalowski, Pawel; Dietz, Guido; Dowell, Earl; Florance, Jennifer P.; Hansen, Thorsten;
2011-01-01
This paper summarizes the plans for the first Aeroelastic Prediction Workshop. The workshop is designed to assess the state of the art of computational methods for predicting unsteady flow fields and aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify computational and experimental areas needing additional research and development. Three subject configurations have been chosen from existing wind tunnel data sets where there is pertinent experimental data available for comparison. For each case chosen, the wind tunnel testing was conducted using forced oscillation of the model at specified frequencies.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
Harrison, David A; Parry, Gareth J; Carpenter, James R; Short, Alasdair; Rowan, Kathy
2007-04-01
To develop a new model to improve risk prediction for admissions to adult critical care units in the UK. Prospective cohort study. The setting was 163 adult, general critical care units in England, Wales, and Northern Ireland, December 1995 to August 2003. Patients were 216,626 critical care admissions. None. The performance of different approaches to modeling physiologic measurements was evaluated, and the best methods were selected to produce a new physiology score. This physiology score was combined with other information relating to the critical care admission-age, diagnostic category, source of admission, and cardiopulmonary resuscitation before admission-to develop a risk prediction model. Modeling interactions between diagnostic category and physiology score enabled the inclusion of groups of admissions that are frequently excluded from risk prediction models. The new model showed good discrimination (mean c index 0.870) and fit (mean Shapiro's R 0.665, mean Brier's score 0.132) in 200 repeated validation samples and performed well when compared with recalibrated versions of existing published risk prediction models in the cohort of patients eligible for all models. The hypothesis of perfect fit was rejected for all models, including the Intensive Care National Audit & Research Centre (ICNARC) model, as is to be expected in such a large cohort. The ICNARC model demonstrated better discrimination and overall fit than existing risk prediction models, even following recalibration of these models. We recommend it be used to replace previously published models for risk adjustment in the UK.
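The discrimination and accuracy measures quoted (the c index and Brier's score) are standard and easy to compute; the sketch below uses scikit-learn on fabricated risk predictions. Shapiro's R, a less common fit statistic, is omitted.

    import numpy as np
    from sklearn.metrics import roc_auc_score, brier_score_loss

    rng = np.random.default_rng(9)
    p = rng.beta(2, 8, size=1000)     # predicted mortality risks (hypothetical)
    y = rng.binomial(1, p)            # outcomes drawn from those risks

    c_index = roc_auc_score(y, p)     # equals the c index for binary outcomes
    brier = brier_score_loss(y, p)
    print(round(c_index, 3), round(brier, 3))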
NASA Astrophysics Data System (ADS)
Lu, Jianbo; Xi, Yugeng; Li, Dewei; Xu, Yuli; Gan, Zhongxue
2018-01-01
A common objective in model predictive control (MPC) design is to achieve a large initial feasible region, a low online computational burden, and satisfactory control performance in the resulting algorithm. It is well known that interpolation-based MPC can achieve a favourable trade-off among these different aspects. However, existing results are usually based on fixed prediction scenarios, which inevitably limits the performance of the obtained algorithms. By replacing the fixed prediction scenarios with time-varying multi-step prediction scenarios, this paper provides a new insight into improving existing MPC designs. The adopted control law is a combination of predetermined multi-step feedback control laws, based on which two MPC algorithms with guaranteed recursive feasibility and asymptotic stability are presented. The efficacy of the proposed algorithms is illustrated by a numerical example.
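The flavor of interpolation-based MPC can be conveyed with a deliberately simple sketch: at each step, search over the interpolation weight between two fixed feedback gains and apply the cheapest weight whose predicted input trajectory respects the constraint. Real algorithms, including the multi-step variants proposed here, enforce feasibility through invariant sets and solve an optimization rather than a grid search; the system matrices and gains below are invented.

    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    K1 = np.array([[8.0, 3.0]])     # aggressive gain (hypothetical)
    K2 = np.array([[1.0, 0.8]])     # conservative gain (hypothetical)
    umax, N = 1.0, 20               # input constraint and prediction horizon

    def predicted_cost(x0, lam):
        # Simulate the interpolated law over the horizon; inf if infeasible.
        K = lam * K1 + (1 - lam) * K2
        x, cost = x0.copy(), 0.0
        for _ in range(N):
            u = -(K @ x).item()
            if abs(u) > umax:
                return np.inf
            cost += x @ x + 0.1 * u * u
            x = A @ x + B.ravel() * u
        return cost

    x = np.array([0.9, 0.0])
    for _ in range(50):             # receding-horizon loop
        lam = min(np.linspace(0, 1, 21), key=lambda w: predicted_cost(x, w))
        u = -(((lam * K1 + (1 - lam) * K2) @ x)).item()
        x = A @ x + B.ravel() * u
    print(x)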
USDA-ARS?s Scientific Manuscript database
Predictive models are valuable tools for assessing food safety. Existing thermal inactivation models for Salmonella in ground chicken do not provide predictions above 71 degrees C, which is below the recommended final cooked temperature of 73.9 degrees C. They also do not predict when all Salmone...
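Classical log-linear thermal inactivation, which models of this kind extend, can be sketched with the D-z formulation; the reference D value and z value below are placeholders, not measured parameters for Salmonella in ground chicken.

    import numpy as np

    D_ref, T_ref, z = 30.0, 60.0, 6.0   # D in s at T_ref (deg C); all hypothetical

    def log10_reduction(T, t):
        """Log-linear model: D(T) = D_ref * 10**(-(T - T_ref)/z); reduction = t/D."""
        return t / (D_ref * 10 ** (-(T - T_ref) / z))

    for T in (68.0, 71.0, 73.9):        # includes the recommended 73.9 deg C endpoint
        print(T, round(log10_reduction(T, t=10.0), 1))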
Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what?
2017-01-01
Assessing the accuracy of predictive models is critical because predictive models have been increasingly used across various disciplines and predictive accuracy determines the quality of the resultant predictions. The Pearson product-moment correlation coefficient (r) and the coefficient of determination (r2) are among the most widely used measures for assessing predictive models for numerical data, although they are argued to be biased, insufficient and misleading. In this study, geometrical graphs were used to illustrate what is used in the calculation of r and r2, and simulations were used to demonstrate the behaviour of r and r2 and to compare three accuracy measures under various scenarios. Relevant confusions about r and r2 have been clarified. The calculation of r and r2 is not based on the differences between the predicted and observed values. The existing error measures suffer various limitations and are unable to indicate accuracy. Variance explained by predictive models based on cross-validation (VEcv) is free of these limitations and is a reliable accuracy measure. Legates and McCabe's efficiency (E1) is also an alternative accuracy measure. Thus, r and r2 do not measure accuracy and are incorrect accuracy measures, and the existing error measures suffer limitations. VEcv and E1 are recommended for assessing the accuracy. The application of these accuracy measures would encourage accuracy-improved predictive models to be developed to generate predictions for evidence-informed decision-making.
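Both recommended measures are direct to compute from cross-validated predictions; a sketch with made-up numbers follows.

    import numpy as np

    def vecv(obs, pred):
        """Variance explained for cross-validation predictions (%)."""
        return 100 * (1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))

    def e1(obs, pred):
        """Legates and McCabe's efficiency (absolute-error analogue)."""
        return 1 - np.sum(np.abs(obs - pred)) / np.sum(np.abs(obs - obs.mean()))

    obs = np.array([3.1, 4.0, 5.2, 6.8, 8.1])
    pred = np.array([3.4, 3.8, 5.6, 6.5, 7.9])   # hypothetical CV predictions
    print(round(vecv(obs, pred), 1), round(e1(obs, pred), 2))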
Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli.
Necciari, Thibaud; Laback, Bernhard; Savel, Sophie; Ystad, Sølvi; Balazs, Peter; Meunier, Sabine; Kronland-Martinet, Richard
2016-01-01
Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. Such effects were avoided in the experiment of this study by using maximally-compact stimuli.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takemasa, Yuichi; Togari, Satoshi; Arai, Yoshinobu
1996-11-01
Vertical temperature differences tend to be great in a large indoor space such as an atrium, and it is important to predict variations of vertical temperature distribution in the early stage of the design. The authors previously developed and reported on a new simplified unsteady-state calculation model for predicting vertical temperature distribution in a large space. In this paper, this model is applied to predicting the vertical temperature distribution in an existing low-rise atrium that has a skylight and is affected by transmitted solar radiation. Detailed calculation procedures that use the model are presented with all the boundary conditions, and analytical simulations are carried out for the cooling condition. Calculated values are compared with measured results. The results of the comparison demonstrate that the calculation model can be applied to the design of a large space. The effects of occupied-zone cooling are also discussed and compared with those of all-zone cooling.
QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.
Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng
2018-05-01
Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergism and antagonism). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model, with a coefficient of determination of 0.9366 and a root mean square error of 0.1345, predicted the toxicities of the 45 mixtures, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
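As an illustration of the modelling step (not the paper's descriptors or data), the sketch below fits a three-descriptor linear QSAR by least squares on synthetic mixtures and reports the coefficient of determination and root mean square error.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(45, 3))                   # three theoretical descriptors per mixture
    y = X @ np.array([0.8, -0.5, 0.3]) + 2.0 + rng.normal(0, 0.1, 45)  # toxicity response

    Xd = np.column_stack([X, np.ones(len(X))])     # descriptors plus intercept
    w, *_ = np.linalg.lstsq(Xd, y, rcond=None)     # least-squares fit
    pred = Xd @ w
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    print(r2, rmse)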
NASA Astrophysics Data System (ADS)
Portan, D. V.; Papanicolaou, G. C.
2018-02-01
From a practical point of view, predictive modeling based on the physics of composite material behavior is wealth generating: it guides material system selection and process choices, cuts down on experimentation and associated costs, and speeds up the time frame from the research stage to the marketplace. The presence of areas with different properties and the existence of an interphase between them have a pronounced influence on the behavior of a composite system. The Viscoelastic Hybrid Interphase Model (VHIM) considers the existence of a non-homogeneous, viscoelastic and anisotropic interphase whose properties depend on the degree of adhesion between the two phases in contact. The model applies to any physical/mechanical property (e.g. mechanical, thermal, electrical and/or biomechanical). Knowing the interphasial variation of a specific property, one can predict the corresponding macroscopic behavior of the composite. Moreover, the model acts as an algorithm and a two-way approach can be used: (i) the phases in contact may be chosen to obtain the desired properties of the final composite system, or (ii) the initial phases in contact determine the final behavior of the composite system, which can be approximately predicted. The VHIM has been proven, amongst others, to be extremely useful in biomaterial design for improved contact with human tissues.
Crayton, Elise; Wolfe, Charles; Douiri, Abdel
2018-01-01
Objective We aim to identify and critically appraise clinical prediction models of mortality and function following ischaemic stroke. Methods Electronic databases, reference lists and citations were searched from inception to September 2015. Studies were selected for inclusion according to pre-specified criteria and critically appraised by independent, blinded reviewers. The discrimination of the prediction models was measured by the area under the receiver operating characteristic curve or c-statistic in random effects meta-analysis. Heterogeneity was measured using I2. Appropriate appraisal tools and reporting guidelines were used in this review. Results 31,395 references were screened, of which 109 articles were included in the review. These articles described 66 different predictive risk models. Appraisal identified poor methodological quality and a high risk of bias for most models. However, all models precede the development of reporting guidelines for prediction modelling studies. The generalisability of models could be improved; less than half of the included models have been externally validated (n = 27/66). 152 predictors of mortality and 192 predictors of functional outcome were identified. No studies assessing the ability to improve patient outcome (model impact studies) were identified. Conclusions Further external validation and model impact studies to confirm the utility of existing models in supporting decision-making are required. Existing models have much potential. Those wishing to predict stroke outcome are advised to build on previous work, and to update and adapt validated models to their specific contexts as opposed to designing new ones. PMID:29377923
2016-10-01
[Recoverable fragment of a table of pre-existing conditions: liver disease, immune deficiency, pulmonary disease, cardiovascular disease, pancreatic cancer and ...] In this report, we analyzed the serum samples for proteins that will help to correctly model disease course, thereby aiding in treatment of patients.
Yamamoto, Yosuke; Terada, Kazuhiko; Ohta, Mitsuyasu; Mikami, Wakako; Yokota, Hajime; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuma, Shingo; Fukuhara, Shunichi
2017-01-01
Objective Diagnosis of community-acquired pneumonia (CAP) in the elderly is often delayed because of atypical presentation and non-specific symptoms, such as appetite loss, falls and disturbance in consciousness. The aim of this study was to investigate the external validity of existing prediction models and the added value of the non-specific symptoms for the diagnosis of CAP in elderly patients. Design Prospective cohort study. Setting General medicine departments of three teaching hospitals in Japan. Participants A total of 109 elderly patients who consulted for upper respiratory symptoms between 1 October 2014 and 30 September 2016. Main outcome measures The reference standard for CAP was chest radiograph evaluated by two certified radiologists. The existing models were externally validated for diagnostic performance by calibration plot and discrimination. To evaluate the added value of the non-specific symptoms to the existing prediction models, we developed an extended logistic regression model. Calibration, discrimination, category-free net reclassification improvement (NRI) and decision curve analysis (DCA) were investigated in the extended model. Results Among the existing models, the model by van Vugt demonstrated the best performance, with an area under the curve of 0.75 (95% CI 0.63 to 0.88); the calibration plot showed good fit despite a significant Hosmer-Lemeshow test (p=0.017). Among the non-specific symptoms, appetite loss had a positive likelihood ratio of 3.2 (2.0–5.3), a negative likelihood ratio of 0.4 (0.2–0.7) and an OR of 7.7 (3.0–19.7). Addition of appetite loss to the model by van Vugt led to improved calibration (p=0.48), an NRI of 0.53 (p=0.019) and higher net benefit by DCA. Conclusions Information on appetite loss improved the performance of an existing model for the diagnosis of CAP in the elderly. PMID:29122806
Miao, Hui; Hartman, Mikael; Bhoo-Pathy, Nirmala; Lee, Soo-Chin; Taib, Nur Aishah; Tan, Ern-Yu; Chan, Patrick; Moons, Karel G M; Wong, Hoong-Seam; Goh, Jeremy; Rahim, Siti Mastura; Yip, Cheng-Har; Verkooijen, Helena M
2014-01-01
In Asia, up to 25% of breast cancer patients present with distant metastases at diagnosis. Given the heterogeneous survival probabilities of de novo metastatic breast cancer, individual outcome prediction is challenging. The aim of the study is to identify existing prognostic models for patients with de novo metastatic breast cancer and validate them in Asia. We performed a systematic review to identify prediction models for metastatic breast cancer. Models were validated in 642 women with de novo metastatic breast cancer registered between 2000 and 2010 in the Singapore Malaysia Hospital Based Breast Cancer Registry. Survival curves for low, intermediate and high-risk groups according to each prognostic score were compared by log-rank test and discrimination of the models was assessed by concordance statistic (C-statistic). We identified 16 prediction models, seven of which were for patients with brain metastases only. Performance status, estrogen receptor status, metastatic site(s) and disease-free interval were the most common predictors. We were able to validate nine prediction models. The capacity of the models to discriminate between poor and good survivors varied from poor to fair with C-statistics ranging from 0.50 (95% CI, 0.48-0.53) to 0.63 (95% CI, 0.60-0.66). The discriminatory performance of existing prediction models for de novo metastatic breast cancer in Asia is modest. Development of an Asian-specific prediction model is needed to improve prognostication and guide decision making.
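The concordance statistic reported above has a simple pairwise form. The sketch below implements a Harrell-style C-statistic on invented survival data; comparable pairs are those where the first patient had the event and failed earlier.

    import numpy as np

    def c_statistic(time, event, risk):
        # among comparable pairs (i had the event and failed before j's
        # follow-up ended), count pairs where the higher-risk patient
        # failed first; score ties count half
        conc = ties = comp = 0
        n = len(time)
        for i in range(n):
            for j in range(n):
                if event[i] and time[i] < time[j]:
                    comp += 1
                    if risk[i] > risk[j]:
                        conc += 1
                    elif risk[i] == risk[j]:
                        ties += 1
        return (conc + 0.5 * ties) / comp

    time = np.array([5.0, 8.0, 3.0, 12.0, 7.0])    # months of follow-up
    event = np.array([1, 1, 1, 0, 1])              # 1 = died
    risk = np.array([0.9, 0.4, 0.8, 0.1, 0.5])     # prognostic score
    print(c_statistic(time, event, risk))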
Sperm economy between female mating frequency and male ejaculate allocation.
Abe, Jun; Kamimura, Yoshitaka
2015-03-01
Why females of many species mate multiply is a major question in evolutionary biology. Furthermore, if females accept matings more than once, ejaculates from different males compete for fertilization (sperm competition), which confronts males with the decision of how to allocate their reproductive resources to each mating event. Although most existing models have examined either female mating frequency or male ejaculate allocation while assuming fixed levels of the opposite sex's strategies, these strategies are likely to coevolve. To investigate how the interaction of the two sexes' strategies is influenced by the level of sperm limitation in the population, we developed models in which females adjust their number of allowable matings and males allocate their ejaculate in each mating. Our model predicts that females mate only once or less than once at an even sex ratio or in an extremely female-biased condition, because of female resistance and sperm limitation in the population, respectively. However, in a moderately female-biased condition, males favor partitioning their reproductive budgets across many females, whereas females favor multiple matings to obtain sufficient sperm, which contradicts the predictions of most existing models. We discuss our model's predictions and relationships with the existing models and demonstrate applications for empirical findings.
Non-parallel coevolution of sender and receiver in the acoustic communication system of treefrogs.
Schul, Johannes; Bush, Sarah L
2002-09-07
Advertisement calls of closely related species often differ in quantitative features such as the repetition rate of signal units. These differences are important in species recognition. Current models of signal-receiver coevolution predict two possible patterns in the evolution of the mechanism used by receivers to recognize the call: (i) classical sexual selection models (Fisher process, good genes/indirect benefits, direct benefits models) predict that close relatives use qualitatively similar signal recognition mechanisms tuned to different values of a call parameter; and (ii) receiver bias models (hidden preference, pre-existing bias models) predict that if different signal recognition mechanisms are used by sibling species, evidence of an ancestral mechanism will persist in the derived species, and evidence of a pre-existing bias will be detectable in the ancestral species. We describe qualitatively different call recognition mechanisms in sibling species of treefrogs. Whereas Hyla chrysoscelis uses pulse rate to recognize male calls, Hyla versicolor uses absolute measurements of pulse duration and interval duration. We found no evidence of either hidden preferences or pre-existing biases. The results are compared with similar data from katydids (Tettigonia sp.). In both taxa, the data are not adequately explained by current models of signal-receiver coevolution.
Consumer preference models: fuzzy theory approach
NASA Astrophysics Data System (ADS)
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
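As a rough illustration of how a linguistic variable might enter such a model, the sketch below encodes a hypothetical price attribute with triangular membership functions; the labels and breakpoints are invented, not taken from the article.

    def triangular(x, a, b, c):
        # membership rises linearly from a to a peak at b, then falls to zero at c
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    price = 14.0
    memberships = {
        "cheap":     triangular(price, 0.0, 5.0, 15.0),
        "moderate":  triangular(price, 10.0, 17.5, 25.0),
        "expensive": triangular(price, 20.0, 30.0, 40.0),
    }
    print(memberships)   # e.g. partially "cheap" and partially "moderate"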
Empirical models for the prediction of ground motion duration for intraplate earthquakes
NASA Astrophysics Data System (ADS)
Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.
2017-07-01
Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number of empirical relationships exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled recorded interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions was developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships of earthquake ground motion duration (i.e., significant and bracketed) with earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites) using data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with an increase in hypocentral distance and to increase with an increase in the magnitude of the earthquake. The significant duration was found to increase with increases in the magnitude and hypocentral distance of the earthquake. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with the existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared with the existing relationships.
Application of Fracture Distribution Prediction Model in Xihu Depression of East China Sea
NASA Astrophysics Data System (ADS)
Yan, Weifeng; Duan, Feifei; Zhang, Le; Li, Ming
2018-02-01
Well logs respond differently to changes in formation characteristics, and outliers are caused by the existence of fractures. For this reason, the development of fractures in a formation can be characterized by fine analysis of logging curves. Well logs such as resistivity, sonic transit time, density, neutron porosity and gamma ray, which are classified as conventional well logs, are more sensitive to formation fractures. The traditional fracture prediction model, which uses a simple weighted average of different logging data to calculate a comprehensive fracture index, is susceptible to subjective factors and exhibits large deviations; a statistical method is therefore introduced. Combining the responses of conventional logging data to the development of formation fractures, a prediction model based on membership functions is established; its essence is to analyse logging data with fuzzy mathematics theory. The fracture prediction results for a well formation in the NX block of the Xihu depression from the two models are compared with those of imaging logging, which shows that the accuracy of the fracture prediction model based on membership functions is better than that of the traditional model. Furthermore, the prediction results are highly consistent with imaging logs and reflect the development of fractures much better. This can provide a reference for engineering practice.
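A minimal sketch of the membership-function idea follows: each conventional log is mapped to a degree of membership in "fractured" and the degrees are fused into a fracture index. The ramp directions and cut-off values are assumptions for illustration, and the paper's fusion rule may differ.

    import numpy as np

    def ramp_up(x, lo, hi):      # membership grows as the log value rises
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    def ramp_down(x, lo, hi):    # membership grows as the log value falls
        return 1.0 - ramp_up(x, lo, hi)

    logs = {"sonic_us_per_ft": 95.0, "resistivity_ohmm": 12.0, "density_g_cc": 2.35}
    m = np.array([
        ramp_up(logs["sonic_us_per_ft"], 80.0, 110.0),    # sonic rises near fractures
        ramp_down(logs["resistivity_ohmm"], 5.0, 50.0),   # resistivity drops
        ramp_down(logs["density_g_cc"], 2.2, 2.7),        # density drops
    ])
    fracture_index = m.mean()    # simple fusion of the membership degrees
    print(fracture_index)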
A linear regression model for predicting PNW estuarine temperatures in a changing climate
Pacific Northwest coastal regions, estuaries, and associated ecosystems are vulnerable to the potential effects of climate change, especially to changes in nearshore water temperature. While predictive climate models simulate future air temperatures, no such projections exist for...
Simón, Luis; Afonin, Alexandr; López-Díez, Lucía Isabel; González-Miguel, Javier; Morchón, Rodrigo; Carretón, Elena; Montoya-Alonso, José Alberto; Kartashev, Vladimir; Simón, Fernando
2014-03-01
Zoonotic filarioses caused by Dirofilaria immitis and Dirofilaria repens are transmitted by culicid mosquitoes; Dirofilaria transmission therefore depends on climatic factors like temperature and humidity. In spite of the dry climate of most of the Spanish territory, there are extensive irrigated crop areas providing moist habitats favourable for mosquito breeding. A GIS model to predict the risk of Dirofilaria transmission in Spain, based on temperature and rainfall data as well as on the distribution of irrigated crop areas, is constructed. The model predicts that a potential risk of Dirofilaria transmission exists in all the Spanish territory. The highest transmission risk exists in several areas of Andalucía, Extremadura, Castilla-La Mancha, Murcia, Valencia, Aragón and Cataluña, where moderate/high temperatures coincide with extensive irrigated crops. High risk is also predicted in the Balearic Islands and at some points of the Canary Islands. The lowest risk is predicted in the cold North and in the scarcely irrigated or non-irrigated dry Southeastern areas. The existence of irrigation locally increases transmission risk in low-rainfall areas of the Spanish territory. The model can contribute to implementing rational preventive therapy guidelines in accordance with the transmission characteristics of each local area. Moreover, the use of humidity-related factors could be of interest in future predictions to be performed in countries with similar environmental characteristics. Copyright © 2014 Elsevier B.V. All rights reserved.
THE MODELING OF THE FATE AND TRANSPORT OF ENVIRONMENTAL POLLUTANTS
Current models that predict the fate of organic compounds released to the environment are based on the assumption that these compounds exist exclusively as neutral species. This assumption is untrue under many environmental conditions, as some molecules can exist as cations, anio...
Accurate and dynamic predictive model for better prediction in medicine and healthcare.
Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S
2018-05-01
Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies to predict different disease outcomes. However, existing predictive models still suffer from limitations in the performance of their predictive outcomes. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model's performance, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impacts on human life. The proposed predictive model improves the predictive performance for TBI. The TBI dataset was developed and approved by neurologists to set its features. The experimental results show that the proposed model has achieved significant results in accuracy, sensitivity, and specificity.
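The three reported metrics follow directly from a confusion matrix. A minimal sketch on invented binary TBI outcome predictions:

    import numpy as np

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # observed outcomes
    y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0])   # model predictions

    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    print(accuracy, sensitivity, specificity)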
A predictive model for floating leaf vegetation in the St. Louis River Estuary
In July 2014, USEPA staff was asked by MPCA to develop a predictive model for floating leaf vegetation (FLV) in the St. Louis River Estuary (SLRE). The existing model (Host et al. 2012) greatly overpredicts FLV in St. Louis Bay probably because it was based on a limited number of...
Sources of Uncertainty in Predicting Land Surface Fluxes Using Diverse Data and Models
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Nemani, Ramakrishna
2010-01-01
In the domain of predicting land surface fluxes, models are used to bring data from large observation networks and satellite remote sensing together to make predictions about present and future states of the Earth. Characterizing the uncertainty about such predictions is a complex process and one that is not yet fully understood. Uncertainty exists about initialization, measurement and interpolation of input variables; model parameters; model structure; and mixed spatial and temporal supports. Multiple models or structures often exist to describe the same processes. Uncertainty about structure is currently addressed by running an ensemble of different models and examining the distribution of model outputs. To illustrate structural uncertainty, a multi-model ensemble experiment we have been conducting using the Terrestrial Observation and Prediction System (TOPS) will be discussed. TOPS uses public versions of process-based ecosystem models that use satellite-derived inputs along with surface climate data and land surface characterization to produce predictions of ecosystem fluxes including gross and net primary production and net ecosystem exchange. Using the TOPS framework, we have explored the uncertainty arising from the application of models with different assumptions, structures, parameters, and variable definitions. With a small number of models, this only begins to capture the range of possible spatial fields of ecosystem fluxes. Few attempts have been made to systematically address the components of uncertainty in such a framework. We discuss the characterization of uncertainty for this approach including both quantifiable and poorly known aspects.
Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research
Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi
2016-01-01
The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods. However, all of them have some shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Comparison with the existing prediction methods shows that the proposed model has good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, so it can be used for real-time traffic flow prediction. PMID:27872637
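The prediction step itself can be sketched in a few lines: the flow arriving at the target road at the next time step is the probability-weighted sum of the upstream flows. The road names, flows and transfer probabilities below are invented, and the parameter estimation (the Newton interior-point step) is omitted.

    # hypothetical upstream flows (vehicles/hour) and transfer probabilities
    upstream_flow = {"road_A": 420.0, "road_B": 310.0, "road_C": 150.0}
    transfer_prob = {"road_A": 0.35, "road_B": 0.20, "road_C": 0.55}

    # flow equation: next flow on the target road is the probability-weighted
    # sum of the upstream flows
    predicted_flow = sum(upstream_flow[r] * transfer_prob[r] for r in upstream_flow)
    print(predicted_flow)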
A general-purpose machine learning framework for predicting properties of inorganic materials
Ward, Logan; Agrawal, Ankit; Choudhary, Alok; ...
2016-08-26
A very active area of materials research is to devise methods that use machine learning to automatically extract predictive models from existing materials data. While prior examples have demonstrated successful models for some applications, many more applications exist where machine learning can make a strong impact. To enable faster development of machine-learning-based models for such applications, we have created a framework capable of being applied to a broad range of materials data. Our method works by using a chemically diverse list of attributes, which we demonstrate are suitable for describing a wide variety of properties, and a novel method for partitioning the data set into groups of similar materials to boost the predictive accuracy. In this manuscript, we demonstrate how this new method can be used to predict diverse properties of crystalline and amorphous materials, such as band gap energy and glass-forming ability.
Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B
2017-05-01
Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for >10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of >4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10⁻²⁰) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., "critical care," "pneumonia," "neurologic evaluation"). Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
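A minimal sketch of the approach with scikit-learn: treat each admission's orders as a document, fit latent Dirichlet allocation, and rank order codes by the inferred topic mixture. The order codes are invented, and only 3 topics are used here (the paper uses up to 32).

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # each admission's first-24-hour orders as one "document" of order codes
    admissions = [
        "cbc bmp troponin ecg aspirin",
        "cbc chest_xray blood_culture ceftriaxone",
        "ct_head neuro_checks keppra",
    ]
    X = CountVectorizer().fit_transform(admissions)
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
    theta = lda.transform(X)                 # per-admission topic mixtures
    scores = theta @ lda.components_         # unnormalised relevance of every order code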
NASA Astrophysics Data System (ADS)
Saleh, F.; Ramaswamy, V.; Georgas, N.; Blumberg, A. F.; Wang, Y.
2016-12-01
Advances in computational resources and modeling techniques are opening the path to effectively integrating existing complex models. In the context of flood prediction, recent extreme events have demonstrated the importance of integrating components of the hydrosystem to better represent the interactions amongst different physical processes and phenomena. As such, there is a pressing need to develop holistic and cross-disciplinary modeling frameworks that effectively integrate existing models and better represent the operative dynamics. This work presents a novel Hydrologic-Hydraulic-Hydrodynamic Ensemble (H3E) flood prediction framework that operationally integrates existing predictive models representing coastal (New York Harbor Observing and Prediction System, NYHOPS), hydrologic (US Army Corps of Engineers Hydrologic Modeling System, HEC-HMS) and hydraulic (2-dimensional River Analysis System, HEC-RAS) components. The state-of-the-art framework is forced with 125 ensemble meteorological inputs from numerical weather prediction models including the Global Ensemble Forecast System, the European Centre for Medium-Range Weather Forecasts (ECMWF), the Canadian Meteorological Centre (CMC), the Short Range Ensemble Forecast (SREF) and the North American Mesoscale Forecast System (NAM). The framework produces, within a 96-hour forecast horizon, on-the-fly Google Earth flood maps that provide critical information for decision makers and emergency preparedness managers. The utility of the framework was demonstrated by retrospectively forecasting an extreme flood event, Hurricane Sandy, in the Passaic and Hackensack watersheds (New Jersey, USA). Hurricane Sandy caused significant damage to a number of critical facilities in this area, including the New Jersey Transit's main storage and maintenance facility. The results of this work demonstrate that ensemble-based frameworks provide improved flood predictions and useful information about associated uncertainties, thus improving the assessment of risks when compared to a deterministic forecast. The work offers perspectives for short-term flood forecasts, flood mitigation strategies and best management practices for climate change scenarios.
NASA Astrophysics Data System (ADS)
Thomas, J. A.; Elmes, G. W.; Clarke, R. T.; Kim, K. G.; Munguira, M. L.; Hochberg, M. E.
1997-11-01
In recent spatial models describing interactions among a myrmecophilous butterfly Maculinea rebeli, a gentian Gentiana cruciata and two competing species of Myrmica ant, we predicted that apparent competition should exist between gentians (the food of young M. rebeli caterpillars) and Myrmica schencki, which supports M. rebeli in its final instar. Here we extend and quantify model predictions about the nature of this phenomenon, and relate them to ecological theory. We predict that: (i) Within sites supporting the butterfly, fewer M. schencki colonies occur in sub-areas containing gentians than in identical habitat lacking this plant. (ii) Where G. cruciata and M. schencki do co-exist, the ant colonies will be less than half the size of those living > 1.5 m from gentians; (iii) The turnover of M. schencki colonies will be much greater than that of other Myrmica species in nest sites situated within 1.5 m of a gentian. All three predictions were supported in the field on 3-6 sites in two mountain ranges, although the exact strength of the apparent competition differed from some model predictions. Field data were also consistent with predictions about apparent mutualisms between gentians and other ants. We suggest that apparent competition is likely to arise in any system in which a specialist enemy feeds sequentially on two or more species during its life-cycle, as occurs in many true parasite-host interactions. We also predict that more complex patterns involving other Myrmica species and G. cruciata occur in our system, with apparent competition existing between them in some sub-areas of a site being balanced by apparent mutualism between them in other sub-areas.
No abstract was prepared or requested. This is a short presentation aiming to present the status of the in silico models and approaches that exist for the prediction of skin sensitization potential and/or potency.
NASA Astrophysics Data System (ADS)
Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.
2017-12-01
Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.
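The FOSM step mentioned above has a compact linear-algebra core: with a Jacobian J of a forecast with respect to the parameters and a prior parameter covariance C, the first-order forecast variance is J C J^T. A minimal sketch with invented stand-in numbers for the groundwater quantities:

    import numpy as np

    J = np.array([[0.8, -0.2, 1.5]])        # sensitivity of one forecast to 3 parameters
    C = np.diag([0.5**2, 1.0**2, 0.3**2])   # prior parameter variances (diagonal)
    forecast_var = (J @ C @ J.T).item()     # first-order forecast variance
    print(forecast_var)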
ADOT state-specific crash prediction models : an Arizona needs study.
DOT National Transportation Integrated Search
2016-12-01
The predictive method in the Highway Safety Manual (HSM) includes a safety performance function (SPF), : crash modification factors (CMFs), and a local calibration factor (C), if available. Two alternatives exist for : applying the HSM prediction met...
Atmospheric prediction model survey
NASA Technical Reports Server (NTRS)
Wellck, R. E.
1976-01-01
As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature, describing the features of the various models in a systematic manner.
A systematic review of predictive models for asthma development in children.
Luo, Gang; Nkoy, Flory L; Stone, Bryan L; Schmick, Darell; Johnson, Michael D
2015-11-28
Asthma is the most common pediatric chronic disease, affecting 9.6% of American children. Delay in asthma diagnosis is prevalent, resulting in suboptimal asthma management. To help avoid delay in asthma diagnosis and advance asthma prevention research, researchers have proposed various models to predict asthma development in children. This paper reviews these models. A systematic review was conducted through searching in PubMed, EMBASE, CINAHL, Scopus, the Cochrane Library, the ACM Digital Library, IEEE Xplore, and OpenGrey up to June 3, 2015. The literature on predictive models for asthma development in children was retrieved, with search results limited to human subjects and children (birth to 18 years). Two independent reviewers screened the literature, performed data extraction, and assessed article quality. The literature search returned 13,101 references in total. After manual review, 32 of these references were determined to be relevant and are discussed in the paper. We identify several limitations of existing predictive models for asthma development in children, and provide preliminary thoughts on how to address these limitations. Existing predictive models for asthma development in children have inadequate accuracy. Efforts to improve these models' performance are needed, but are limited by the lack of a gold standard for asthma development in children.
Liu, Guang-Hui; Shen, Hong-Bin; Yu, Dong-Jun
2016-04-01
Accurately predicting protein-protein interaction sites (PPIs) is currently a hot topic because it has been demonstrated to be very useful for understanding disease mechanisms and designing drugs. Machine-learning-based computational approaches have been broadly utilized and demonstrated to be useful for PPI prediction. However, directly applying traditional machine learning algorithms, which often assume that samples in different classes are balanced, often leads to poor performance because of the severe class imbalance that exists in the PPI prediction problem. In this study, we propose a novel method for improving PPI prediction performance by relieving the severity of class imbalance using a data-cleaning procedure and reducing predicted false positives with a post-filtering procedure: First, a machine-learning-based data-cleaning procedure is applied to remove those marginal targets, which may potentially have a negative effect on training a model with a clear classification boundary, from the majority samples to relieve the severity of class imbalance in the original training dataset; then, a prediction model is trained on the cleaned dataset; finally, an effective post-filtering procedure is further used to reduce potential false positive predictions. Stringent cross-validation and independent validation tests on benchmark datasets demonstrated the efficacy of the proposed method, which exhibits highly competitive performance compared with existing state-of-the-art sequence-based PPIs predictors and should supplement existing PPI prediction methods.
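A rough sketch of the two-stage idea follows, under the assumption that "marginal" majority samples are those a preliminary classifier scores near the decision boundary and that the post-filter is a stricter probability cut-off; the paper's actual cleaning and filtering procedures differ in detail, and the data here are synthetic.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # synthetic stand-in for an imbalanced PPI-site dataset (10% positives)
    X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

    # step 1: data cleaning, drop "marginal" majority samples that a
    # preliminary model scores near the decision boundary (thresholds invented)
    prelim = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
    p = prelim.predict_proba(Xtr)[:, 1]
    keep = (ytr == 1) | (p < 0.3) | (p > 0.7)

    # step 2: train the prediction model on the cleaned set
    model = RandomForestClassifier(random_state=0).fit(Xtr[keep], ytr[keep])

    # step 3: post-filter, a stricter cut-off to reduce false positives
    pred = (model.predict_proba(Xte)[:, 1] >= 0.8).astype(int)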
We incorporate the Regional Atmospheric Chemistry Mechanism (RACM2) into the Community Multiscale Air Quality (CMAQ) hemispheric model and compare model predictions to those obtained using the existing Carbon Bond chemical mechanism with the updated toluene chemistry (CB05TU). Th...
NASA Astrophysics Data System (ADS)
Everett, R. A.; Packer, A. M.; Kuang, Y.
2014-04-01
Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.
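The piecewise linear benchmark can be sketched by fitting one linear PSA trend per treatment phase and extrapolating the current phase; the times, PSA values and phase labels below are synthetic.

    import numpy as np

    t = np.array([0, 1, 2, 3, 4, 5, 6, 7], float)              # months
    psa = np.array([10.0, 7.1, 4.2, 1.5, 2.4, 3.9, 5.2, 6.8])  # ng/mL
    on_therapy = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)

    def linear_fit(t, y):
        # least-squares slope and intercept
        A = np.column_stack([t, np.ones_like(t)])
        return np.linalg.lstsq(A, y, rcond=None)[0]

    slope_on, icpt_on = linear_fit(t[on_therapy], psa[on_therapy])      # decline on therapy
    slope_off, icpt_off = linear_fit(t[~on_therapy], psa[~on_therapy])  # regrowth off therapy

    t_next = 8.0   # predict the next off-therapy visit by extrapolating that phase
    print(slope_off * t_next + icpt_off)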
Modeling evaporation from spent nuclear fuel storage pools: A diffusion approach
NASA Astrophysics Data System (ADS)
Hugo, Bruce Robert
Accurate prediction of evaporative losses from light water reactor nuclear power plant (NPP) spent fuel storage pools (SFPs) is important for activities ranging from sizing of water makeup systems during NPP design to predicting the time available to supply emergency makeup water following severe accidents. Existing correlations for predicting evaporation from water surfaces are optimized only for conditions typical of swimming pools. This new approach, modeling evaporation as a diffusion process, has yielded an evaporation rate model that provides a better fit to published high-temperature evaporation data and to measurements from two SFPs than other published evaporation correlations. Insights from treating evaporation as a diffusion process include correcting for the effects of air flow and solutes on evaporation rate. Accurate modeling of the effects of air flow on evaporation rate is required to explain the observed temperature data from the Fukushima Daiichi Unit 4 SFP during the 2011 loss-of-cooling event; the diffusion model of evaporation provides a significantly better fit to these data than existing evaporation models.
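The diffusion picture can be sketched with a stagnant-film model: molar flux N = (D/delta)(c_s - c_a), with vapour concentrations from the ideal gas law, c = p/(RT). The diffusivity, film thickness and vapour pressures below are illustrative values, not the dissertation's fitted correlation.

    R = 8.314       # J/(mol K)
    D = 2.6e-5      # m^2/s, water vapour in air (approximate)
    delta = 5e-3    # m, assumed boundary-layer (film) thickness
    T_s, T_a = 333.0, 303.0          # surface and air temperatures, K
    p_s, p_a = 19_900.0, 2_100.0     # vapour pressures at surface and in bulk air, Pa

    c_s = p_s / (R * T_s)            # mol/m^3 at the water surface (saturated)
    c_a = p_a / (R * T_a)            # mol/m^3 in the bulk air
    flux = D / delta * (c_s - c_a)   # mol/(m^2 s)
    print(flux * 0.018 * 3600)       # kg evaporated per m^2 per hour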
Guiding Conformation Space Search with an All-Atom Energy Potential
Brunette, TJ; Brock, Oliver
2009-01-01
The most significant impediment for protein structure prediction is the inadequacy of conformation space search. Conformation space is too large and the energy landscape too rugged for existing search methods to consistently find near-optimal minima. To alleviate this problem, we present model-based search, a novel conformation space search method. Model-based search uses highly accurate information obtained during search to build an approximate, partial model of the energy landscape. Model-based search aggregates information in the model as it progresses, and in turn uses this information to guide exploration towards regions most likely to contain a near-optimal minimum. We validate our method by predicting the structure of 32 proteins, ranging in length from 49 to 213 amino acids. Our results demonstrate that model-based search is more effective at finding low-energy conformations in high-dimensional conformation spaces than existing search methods. The reduction in energy translates into structure predictions of increased accuracy. PMID:18536015
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verifying internal validity, comparison with other models (external validity) and ideally validation of a model's predictive properties. The uncertainty associated with any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than would be possible without this additional information.
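As a minimal sketch of one of the named model types, the following runs a three-state Markov cohort model (well / disease / dead) with yearly cycles; the transition probabilities are illustrative only.

    import numpy as np

    # transition probabilities per yearly cycle (rows: from-state, columns: to-state)
    P = np.array([
        [0.90, 0.07, 0.03],   # well
        [0.00, 0.80, 0.20],   # disease
        [0.00, 0.00, 1.00],   # dead (absorbing)
    ])
    cohort = np.array([1.0, 0.0, 0.0])   # everyone starts in "well"
    for year in range(10):
        cohort = cohort @ P
    print(cohort)                        # state occupancy after 10 years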
Predicting the Overall Spatial Quality of Automotive Audio Systems
NASA Astrophysics Data System (ADS)
Koya, Daisuke
The spatial quality of automotive audio systems is often compromised by their less-than-ideal listening environments. Automotive audio systems also need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with reliability similar to formal listening tests but in less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, metrics proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that are interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R² = 0.85 and a root-mean-square error (RMSE) of 11.03%.
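The IACC-based metrics mentioned above rest on a simple computation: the maximum of the normalised cross-correlation of the two ear signals over lags of roughly ±1 ms. A sketch on synthetic signals follows; the exact windowing and lag range of the QESTRAL metrics are not reproduced here.

    import numpy as np

    def iacc(left, right, fs, max_lag_ms=1.0):
        # maximum of the normalised cross-correlation over lags of +/- 1 ms
        max_lag = int(fs * max_lag_ms / 1000)
        norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
        vals = []
        for lag in range(-max_lag, max_lag + 1):
            a = left[max(0, -lag):len(left) - max(lag, 0)]
            b = right[max(0, lag):len(right) - max(-lag, 0)]
            vals.append(np.sum(a * b) / norm)
        return max(vals)

    fs = 48000
    rng = np.random.default_rng(0)
    common = rng.normal(size=fs)                             # shared component
    left = common + 0.3 * rng.normal(size=fs)
    right = np.roll(common, 12) + 0.3 * rng.normal(size=fs)  # ~0.25 ms interaural delay
    print(iacc(left, right, fs))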
Risk prediction models for graft failure in kidney transplantation: a systematic review.
Kaboré, Rémi; Haller, Maria C; Harambat, Jérôme; Heinze, Georg; Leffondré, Karen
2017-04-01
Risk prediction models are useful for identifying kidney recipients at high risk of graft failure, thus optimizing clinical care. Our objective was to systematically review the models that have been recently developed and validated to predict graft failure in kidney transplantation recipients. We used PubMed and Scopus to search for English, German and French language articles published in 2005-15. We selected studies that developed and validated a new risk prediction model for graft failure after kidney transplantation, or validated an existing model with or without updating the model. Data on recipient characteristics and predictors, as well as modelling and validation methods were extracted. In total, 39 articles met the inclusion criteria. Of these, 34 developed and validated a new risk prediction model and 5 validated an existing one with or without updating the model. The most frequently predicted outcome was graft failure, defined as dialysis, re-transplantation or death with functioning graft. Most studies used the Cox model. There was substantial variability in predictors used. In total, 25 studies used predictors measured at transplantation only, and 14 studies used predictors also measured after transplantation. Discrimination performance was reported in 87% of studies, while calibration was reported in 56%. Performance indicators were estimated using both internal and external validation in 13 studies, and using external validation only in 6 studies. Several prediction models for kidney graft failure in adults have been published. Our study highlights the need to better account for competing risks when applicable in such studies, and to adequately account for post-transplant measures of predictors in studies aiming at improving monitoring of kidney transplant recipients. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
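Most studies in the review used the Cox model; a minimal sketch of fitting one to synthetic graft-survival data with the lifelines library follows. The variables, coefficients and data below are illustrative only and do not reproduce any reviewed model.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300

# Synthetic recipients; columns loosely echo commonly used predictors.
df = pd.DataFrame({
    "donor_age":       rng.normal(45, 12, n),
    "cold_ischemia_h": rng.normal(14, 4, n),
    "recipient_age":   rng.normal(50, 13, n),
})
risk = 0.03 * df["donor_age"] + 0.05 * df["cold_ischemia_h"]
df["time"] = rng.exponential(scale=5.0, size=n) * np.exp(-(risk - risk.mean()))
df["event"] = rng.integers(0, 2, n)   # 1 = graft failure observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print("c-index:", round(cph.concordance_index_, 2))  # discrimination
```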
Development of a numerical model to predict physiological strain of firefighter in fire hazard.
Su, Yun; Yang, Jie; Song, Guowen; Li, Rui; Xiang, Chunhui; Li, Jun
2018-02-26
This paper aims to develop a numerical model to predict the heat stress of firefighters under low-level thermal radiation. The model integrates a modified multi-layer clothing model with a human thermoregulation model. It takes into consideration the coupled radiative and conductive heat transfer in the clothing, the size-dependent heat transfer in the air gaps, and the controlling active and controlled passive thermal regulation in the human body. The predicted core temperature and mean skin temperature from the model showed good agreement with the experimental results. A parametric study demonstrated that the radiative intensity has a significant influence on the physiological heat strain. The existence of an air gap had a positive effect on reducing physiological heat strain when the air gap was small; however, when the air gap exceeded 6 mm, a different trend was observed due to the onset of natural convection. Additionally, the duration of physiological heat strain was greater than that of skin burn under the various heat exposures. The findings of this study provide a better understanding of the physiological strain on firefighters and shed light on textile material engineering for achieving higher protective performance.
A model for predicting thermal properties of asphalt mixtures from their constituents
NASA Astrophysics Data System (ADS)
Keller, Merlin; Roche, Alexis; Lavielle, Marc
Numerous theoretical and experimental approaches have been developed to predict the effective thermal conductivity of composite materials such as polymers, foams, epoxies, soils and concrete, yet none of these models has been applied to asphalt concrete. This study attempts to develop a model for predicting the thermal conductivity of asphalt concrete from its constituents, which would benefit the asphalt industry by reducing costs and saving time on laboratory testing: laboratory testing would no longer be required once a pavement mix with the desired thermal properties could be created at the design stage by selecting the correct constituents. This thesis investigated six existing predictive models for applicability to asphalt mixtures, and four standard mathematical techniques were used to develop a regression model to predict the effective thermal conductivity. The effective thermal conductivities of 81 asphalt specimens were used as the response variables, and the thermal conductivities and volume fractions of their constituents were used as the predictors. The statistical analyses showed that the measured thermal conductivities of the mixtures are affected by the bitumen and aggregate content, but not by the air content. Conversely, the predictions of some of the investigated models are highly sensitive to air voids, but not to bitumen and/or aggregate content. Additionally, comparison of the experimental with the analytical data showed that none of the existing models gave satisfactory results; on the other hand, two regression models (Exponential 1* and Linear 3*) are promising for asphalt concrete.
Takada, Toshihiko; Yamamoto, Yosuke; Terada, Kazuhiko; Ohta, Mitsuyasu; Mikami, Wakako; Yokota, Hajime; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuma, Shingo; Fukuhara, Shunichi
2017-11-08
Diagnosis of community-acquired pneumonia (CAP) in the elderly is often delayed because of atypical presentation and non-specific symptoms, such as appetite loss, falls and disturbance in consciousness. The aim of this study was to investigate the external validity of existing prediction models and the added value of non-specific symptoms for the diagnosis of CAP in elderly patients. Prospective cohort study. General medicine departments of three teaching hospitals in Japan. A total of 109 elderly patients who consulted for upper respiratory symptoms between 1 October 2014 and 30 September 2016. The reference standard for CAP was a chest radiograph evaluated by two certified radiologists. The existing models were externally validated for diagnostic performance by calibration plot and discrimination. To evaluate the added value of the non-specific symptoms over the existing prediction models, we developed an extended logistic regression model. Calibration, discrimination, category-free net reclassification improvement (NRI) and decision curve analysis (DCA) were investigated in the extended model. Among the existing models, the model by van Vugt demonstrated the best performance, with an area under the curve of 0.75 (95% CI 0.63 to 0.88); the calibration plot showed good fit despite a significant Hosmer-Lemeshow test (p=0.017). Among the non-specific symptoms, appetite loss had a positive likelihood ratio of 3.2 (2.0-5.3), a negative likelihood ratio of 0.4 (0.2-0.7) and an OR of 7.7 (3.0-19.7). Addition of appetite loss to the model by van Vugt led to improved calibration (p=0.48), an NRI of 0.53 (p=0.019) and higher net benefit by DCA. Information on appetite loss improved the performance of an existing model for the diagnosis of CAP in the elderly. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
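A hedged sketch of the general approach (comparing discrimination of a baseline risk model with an extended model that adds a non-specific symptom, plus likelihood ratios for the symptom alone) is shown below on simulated data; the variable names and effect sizes are invented, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 109  # matches the study size, but the data here are simulated

# Hypothetical variables: a baseline risk score from an existing model
# (e.g. van Vugt) plus a binary "appetite loss" symptom.
base_score = rng.normal(size=n)
appetite_loss = rng.integers(0, 2, n)
logit = -1.0 + 1.2 * base_score + 1.8 * appetite_loss
cap = rng.random(n) < 1 / (1 + np.exp(-logit))    # pneumonia yes/no

X_base = base_score.reshape(-1, 1)
X_ext = np.column_stack([base_score, appetite_loss])

# In-sample AUCs of the baseline vs. extended logistic models.
auc_base = roc_auc_score(cap, LogisticRegression().fit(X_base, cap).predict_proba(X_base)[:, 1])
auc_ext = roc_auc_score(cap, LogisticRegression().fit(X_ext, cap).predict_proba(X_ext)[:, 1])
print(f"AUC base {auc_base:.2f} -> extended {auc_ext:.2f}")

# Positive/negative likelihood ratios for the symptom alone.
sens = appetite_loss[cap].mean()
spec = 1 - appetite_loss[~cap].mean()
print("LR+ =", round(sens / (1 - spec), 2), "LR- =", round((1 - sens) / spec, 2))
```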
Extending BPM Environments of Your Choice with Performance Related Decision Support
NASA Astrophysics Data System (ADS)
Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter
What-if simulations have been identified as one solution for business performance related decision support. Such support is especially useful when it can be generated automatically out of Business Process Management (BPM) environments from the existing business process models and from performance parameters monitored from the executed business process instances. Currently, some of the available BPM environments offer basic performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at the business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations, or a combination of such solutions into already existing BPM environments. The approach abstracts from specific process modelling techniques, which enables automatic decision support spanning processes across numerous BPM environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
Acute oral toxicity data are used to meet both regulatory and non-regulatory needs. Recently, there have been efforts to explore alternative approaches for predicting acute oral toxicity such as QSARs. Evaluating the performance and scope of existing models and investigating the ...
NASA Technical Reports Server (NTRS)
Pinho, Silvestre T.; Davila, C. G.; Camanho, P. P.; Iannucci, L.; Robinson, P.
2005-01-01
A set of three-dimensional failure criteria for laminated fiber-reinforced composites, denoted LaRC04, is proposed. The criteria are based on physical models for each failure mode and take into consideration non-linear matrix shear behaviour. The model for matrix compressive failure is based on the Mohr-Coulomb criterion and it predicts the fracture angle. Fiber kinking is triggered by an initial fiber misalignment angle and by the rotation of the fibers during compressive loading. The plane of fiber kinking is predicted by the model. LaRC04 consists of 6 expressions that can be used directly for design purposes. Several applications involving a broad range of load combinations are presented and compared to experimental data and other existing criteria. Predictions using LaRC04 correlate well with the experimental data, arguably better than most existing criteria. The good correlation seems to be attributable to the physical soundness of the underlying failure models.
Yajima, Airi; Uesawa, Yoshihiro; Ogawa, Chiaki; Yatabe, Megumi; Kondo, Naoki; Saito, Shinichiro; Suzuki, Yoshihiko; Atsuda, Kouichiro; Kagaya, Hajime
2015-05-01
There exist various useful predictive models, such as the Cockcroft-Gault model, for estimating creatinine clearance (CLcr). However, the prediction of renal function is difficult in patients with cancer treated with cisplatin. Therefore, we attempted to construct a new model for predicting CLcr in such patients. Japanese patients with head and neck cancer who had received cisplatin-based chemotherapy were used as subjects. A multiple regression equation was constructed as a model for predicting CLcr values based on background and laboratory data. A model for predicting CLcr, which included body surface area, serum creatinine and albumin, was constructed. The model exhibited good performance prior to cisplatin therapy. In addition, it performed better than previously reported models after cisplatin therapy. The predictive model constructed in the present study displayed excellent potential and was useful for estimating the renal function of patients treated with cisplatin therapy. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
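The sketch below contrasts the classic Cockcroft-Gault estimate with the general shape of the paper's regression model (body surface area, serum creatinine, albumin); the Cockcroft-Gault formula is standard, but the regression coefficients shown are placeholders, as the published values are not reproduced here.

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    # Classic Cockcroft-Gault estimate of creatinine clearance (mL/min).
    clcr = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return clcr * 0.85 if female else clcr

def clcr_cisplatin(bsa_m2, scr_mg_dl, albumin_g_dl,
                   b0=-20.0, b1=60.0, b2=-25.0, b3=10.0):
    # Shape of the paper's model (body surface area, serum creatinine,
    # albumin); the coefficients b0..b3 are invented placeholders.
    return b0 + b1 * bsa_m2 + b2 * scr_mg_dl + b3 * albumin_g_dl

print(round(cockcroft_gault(age=62, weight_kg=60, scr_mg_dl=0.9, female=False), 1))
print(round(clcr_cisplatin(bsa_m2=1.6, scr_mg_dl=0.9, albumin_g_dl=3.8), 1))
```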
Lee, Juyong; Lee, Jinhyuk; Sasaki, Takeshi N; Sasai, Masaki; Seok, Chaok; Lee, Jooyoung
2011-08-01
Ab initio protein structure prediction is a challenging problem that requires both an accurate energetic representation of a protein structure and an efficient conformational sampling method for successful protein modeling. In this article, we present an ab initio structure prediction method which combines dynamic fragment assembly (DFA), a recently suggested novel approach to fragment assembly, with the conformational space annealing (CSA) algorithm. In DFA, model structures are scored by continuous functions constructed from short- and long-range structural restraint information in a fragment library. Here, DFA is represented in the full-atom CHARMM model with the addition of the empirical DFIRE potential. The relative contributions of the various energy terms are optimized using linear programming. Conformational sampling was carried out with the CSA algorithm, which can find low-energy conformations more efficiently than the simulated annealing used in the existing DFA study. The newly introduced DFA energy function and CSA sampling algorithm are implemented in CHARMM. Test results on 30 small single-domain proteins and 13 template-free modeling targets of the 8th Critical Assessment of protein Structure Prediction show that the current method provides prediction results comparable and complementary to those of existing top methods. Copyright © 2011 Wiley-Liss, Inc.
Review: Modelling chemical kinetics and convective heating in giant planet entries
NASA Astrophysics Data System (ADS)
Reynier, Philippe; D'Ammando, Giuliano; Bruno, Domenico
2018-01-01
A review of the existing chemical kinetics models for H2 / He mixtures and related transport and thermodynamic properties is presented as a pre-requisite towards the development of innovative models based on the state-to-state approach. A survey of the available results obtained during the mission preparation and post-flight analyses of the Galileo mission has been undertaken and a computational matrix has been derived. Different chemical kinetics schemes for hydrogen/helium mixtures have been applied to numerical simulations of the selected points along the entry trajectory. First, a reacting scheme, based on literature data, has been set up for computing the flow-field around the probe at high altitude and comparisons with existing numerical predictions are performed. Then, a macroscopic model derived from a state-to-state model has been constructed and incorporated into a CFD code. Comparisons with existing numerical results from the literature have been performed as well as cross-check comparisons between the predictions provided by the different models in order to evaluate the potential of innovative chemical kinetics models based on the state-to-state approach.
A New Framework for Cumulus Parametrization - A CPT in action
NASA Astrophysics Data System (ADS)
Jakob, C.; Peters, K.; Protat, A.; Kumar, V.
2016-12-01
The representation of convection in climate models remains a major Achilles' heel in our pursuit of better predictions of global and regional climate. The basic principle underpinning the parametrisation of tropical convection in global weather and climate models is that there exist discernible interactions between the resolved model scale and the parametrised cumulus scale. Furthermore, there must be at least some predictive power in the larger scales for the statistical behaviour on small scales for us to be able to formally close the parametrised equations. The presentation will discuss a new framework for cumulus parametrisation based on the idea of separating the prediction of cloud area from that of velocity. This idea is put into practice by combining an existing multi-scale stochastic cloud model with observations to arrive at the prediction of the area fraction for deep precipitating convection. Using mid-tropospheric humidity and vertical motion as predictors, the model is shown to reproduce the observed behaviour of both the mean and the variability of deep convective area fraction well. The framework allows for the inclusion of convective organisation and can, in principle, be made resolution-aware or resolution-independent. When combined with simple assumptions about cloud-base vertical motion, the model can be used as a closure assumption in any existing cumulus parametrisation. Results of applying this idea in the ECHAM model indicate significant improvements in the simulation of tropical variability, including but not limited to the MJO. This presentation will highlight how close collaboration of the observational, theoretical and model development communities in the spirit of the climate process teams can lead to significant progress on long-standing issues in climate modelling while preserving the freedom of individual groups to pursue their specific implementation of an agreed framework.
Gene expression models for prediction of longitudinal dispersion coefficient in streams
NASA Astrophysics Data System (ADS)
Sattar, Ahmed M. A.; Gharabaghi, Bahram
2015-05-01
Longitudinal dispersion is the key hydrologic process that governs transport of pollutants in natural streams. It is critical for spill action centers to be able to predict the pollutant travel time and break-through curves accurately following accidental spills in urban streams. This study presents a novel gene expression model for longitudinal dispersion developed using 150 published data sets of geometric and hydraulic parameters in natural streams in the United States, Canada, Europe, and New Zealand. The training and testing of the model were accomplished using randomly-selected 67% (100 data sets) and 33% (50 data sets) of the data sets, respectively. Gene expression programming (GEP) is used to develop empirical relations between the longitudinal dispersion coefficient and various control variables, including the Froude number which reflects the effect of reach slope, aspect ratio, and the bed material roughness on the dispersion coefficient. Two GEP models have been developed, and the prediction uncertainties of the developed GEP models are quantified and compared with those of existing models, showing improved prediction accuracy in favor of GEP models. Finally, a parametric analysis is performed for further verification of the developed GEP models. The main reason for the higher accuracy of the GEP models compared to the existing regression models is that exponents of the key variables (aspect ratio and bed material roughness) are not constants but a function of the Froude number. The proposed relations are both simple and accurate and can be effectively used to predict the longitudinal dispersion coefficients in natural streams.
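As an illustration of the paper's key finding (exponents that vary with the Froude number rather than being constants), the sketch below implements a hypothetical GEP-style relation; the functional form and all coefficients are invented for illustration and are not the published models.

```python
import numpy as np

def dispersion_coefficient(U, H, B, u_star, g=9.81):
    """Hypothetical GEP-style relation for the longitudinal dispersion
    coefficient Kx (m^2/s): the exponents on aspect ratio B/H and on the
    roughness term U/u* vary with the Froude number, mimicking the
    structure (not the published coefficients) of the paper's models."""
    Fr = U / np.sqrt(g * H)
    a = 0.5 + 1.2 * Fr        # Froude-dependent exponent on aspect ratio
    b = 1.0 - 0.4 * Fr        # Froude-dependent exponent on roughness term
    return 0.9 * (B / H) ** a * (U / u_star) ** b * H * u_star

# Example stream: velocity 0.8 m/s, depth 1.2 m, width 20 m, shear velocity 0.07 m/s.
print(round(float(dispersion_coefficient(U=0.8, H=1.2, B=20.0, u_star=0.07)), 2))
```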
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments with existing experimental procedures for two reasons. First, the existing experimental procedures require a parametric model to serve as a proxy for the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges above, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization selects experimental designs on the fly, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as a proxy for the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by requiring fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
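A minimal sketch of the underlying idea (transforming a deterministic criterion into a probabilistic one by treating strength parameters as random variables and estimating the probability of failure by Monte Carlo) follows; a simple principal-stress check stands in for the three-parameter Willam-Warnke surface, and all distributions and values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Strength parameters treated as random variables (Weibull-distributed here
# for illustration); the deterministic criterion below is a simple
# principal-stress check standing in for the Willam-Warnke surface.
N = 100_000
ft = 220.0 * rng.weibull(10.0, N)   # tensile strength samples, MPa
fc = 900.0 * rng.weibull(12.0, N)   # compressive strength samples, MPa

s1, s2 = 180.0, -50.0               # applied principal stresses, MPa
pf = np.mean((s1 > ft) | (s2 < -fc))  # fraction of samples that fail
print(f"probability of failure ~ {pf:.3f}")
```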
Predicted deep-sea coral habitat suitability for the U.S. West coast.
Guinotte, John M; Davies, Andrew J
2014-01-01
Regional scale habitat suitability models provide finer scale resolution and more focused predictions of where organisms may occur. Previous modelling approaches have focused primarily on local and/or global scales, while regional scale models have been relatively few. In this study, regional scale predictive habitat models are presented for deep-sea corals for the U.S. West Coast (California, Oregon and Washington). Model results are intended to aid in future research or mapping efforts, to assess potential coral habitat suitability both within and outside existing bottom trawl closures (i.e. Essential Fish Habitat (EFH)) and to identify suitable habitat within U.S. National Marine Sanctuaries (NMS). Deep-sea coral habitat suitability was modelled at 500 m×500 m spatial resolution using a range of physical, chemical and environmental variables known or thought to influence the distribution of deep-sea corals. Using a spatial partitioning cross-validation approach, maximum entropy models identified slope, temperature, salinity and depth as important predictors for most deep-sea coral taxa. Large areas of highly suitable deep-sea coral habitat were predicted both within and outside of existing bottom trawl closures and NMS boundaries. Habitat suitability predicted over regional scales cannot currently identify coral areas with pinpoint accuracy and probably overpredicts the actual coral distribution, owing to model limitations and unincorporated variables (i.e. data on the distribution of hard substrate) that are known to limit coral distribution. Predicted habitat results should be used in conjunction with multibeam bathymetry, geological mapping and other tools to guide future research efforts to the areas with the highest probability of harboring deep-sea corals. Field validation of predicted habitat is needed to quantify model accuracy, particularly in areas that have not been sampled. PMID:24759613
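A rough sketch of the modelling pattern (presence/absence modelling with spatially blocked cross-validation) is given below, with ordinary logistic regression standing in for maximum entropy and synthetic predictor values; none of the numbers reflect the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(5)
n = 2000

# Synthetic 500 m cells; predictors echo the paper's important ones.
slope = rng.gamma(2.0, 2.0, n)
temp = rng.normal(6, 2, n)            # deg C at depth
salinity = rng.normal(34.3, 0.3, n)
depth = rng.uniform(200, 3000, n)
X = np.column_stack([slope, temp, salinity, depth])

p = 1 / (1 + np.exp(-(-2 + 0.4 * slope - 0.3 * (temp - 6))))
coral = rng.random(n) < p             # presence/absence (simulated)

# Spatial partitioning: cells grouped into contiguous blocks so that
# cross-validation folds are spatially separated (a crude stand-in for
# the paper's spatial partitioning approach).
block = np.arange(n) // 400           # 5 contiguous blocks
model = make_pipeline(StandardScaler(), LogisticRegression())
auc = cross_val_score(model, X, coral, cv=GroupKFold(n_splits=5),
                      groups=block, scoring="roc_auc")
print("blocked CV AUC:", auc.round(2))
```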
A Complete Procedure for Predicting and Improving the Performance of HAWT's
NASA Astrophysics Data System (ADS)
Al-Abadi, Ali; Ertunç, Özgür; Sittig, Florian; Delgado, Antonio
2014-06-01
A complete procedure for predicting and improving the performance of horizontal axis wind turbines (HAWTs) has been developed. The first step is predicting the power extracted by the turbine and the resulting rotor torque, which should be identical to that of the drive unit. The BEM method and a newly developed post-stall treatment for resolving stall-regulated HAWTs are incorporated in the prediction. For that purpose, a modified stall-regulated prediction model, which can predict HAWT performance over the operating range of oncoming wind velocity, is derived from existing models. The model involves radius and chord, which makes it more generally applicable to predicting the performance of HAWTs of different scales and rotor shapes. The second step is modifying the rotor shape through an optimization process, which can be applied to any existing HAWT to improve its performance. A gradient-based optimization is used to adjust the chord and twist angle distributions of the rotor blade to increase the power extraction while keeping the drive torque constant, so that the same drive unit can be retained. The final step is testing the modified turbine to predict its enhanced performance. The procedure is applied to the NREL phase-VI 10 kW turbine as a baseline. The study has proven the applicability of the developed model in predicting the performance of the baseline as well as the optimized turbine. In addition, the optimization method has shown that the power coefficient can be increased while keeping the same design rotational speed.
Predicting fire spread in Arizona's oak chaparral
A. W. Lindenmuth; James R. Davis
1973-01-01
Five existing fire models, both experimental and theoretical, did not adequately predict rate-of-spread (ROS) when tested on single- and multiclump fires in oak chaparral in Arizona. A statistical model developed using essentially the same input variables, but weighted differently, accounted for 81 percent of the variation in ROS. A chemical coefficient that accounts for...
Grossberg, Stephen
2009-01-01
An intimate link exists between the predictive and learning processes in the brain. Perceptual/cognitive and spatial/motor processes use complementary predictive mechanisms to learn, recognize, attend and plan about objects in the world, determine their current value, and act upon them. Recent neural models clarify these mechanisms and how they interact in cortical and subcortical brain regions. The present paper reviews and synthesizes data and models of these processes, and outlines a unified theory of predictive brain processing. PMID:19528003
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
A growing number of risk prediction models have been developed for estimating breast cancer risk in individual women. However, the performance of these models is questionable. We therefore conducted a study to systematically review previous risk prediction models. The results of this review help to identify the most reliable model and indicate the strengths and weaknesses of each model, to guide future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed for four models, while five models had external validation. The Gail and the Rosner and Colditz models were the significant models that were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistic: 0.53-0.66) and in external validation (concordance statistic: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be due to a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive to improvements in discrimination. Therefore, newer methods such as the net reclassification index should be considered when evaluating the improved performance of a newly developed model.
Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?
Heilbron, Micha; Chait, Maria
2017-08-04
Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions, that neural responses are shaped by expectations and that these expectations are hierarchically organized, animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding and on the relation between attention and precision; however, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding and, as such, help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
High Reynolds number turbulence model of rotating shear flows
NASA Astrophysics Data System (ADS)
Masuda, S.; Ariga, I.; Koyama, H. S.
1983-09-01
A Reynolds stress closure model for rotating turbulent shear flows is developed. Special attention is paid to keeping the model constants independent of rotation. First, general forms of the model of a Reynolds stress equation and a dissipation rate equation are derived, the only restrictions of which are high Reynolds number and incompressibility. The model equations are then applied to two-dimensional equilibrium boundary layers and the effects of Coriolis acceleration on turbulence structures are discussed. Comparisons with the experimental data and with previous results in other external force fields show that there exists a very close analogy between centrifugal, buoyancy and Coriolis force fields. Finally, the model is applied to predict the two-dimensional boundary layers on rotating plane walls. Comparisons with existing data confirmed its capability of predicting mean and turbulent quantities without employing any empirical relations in rotating fields.
De Luca, Andrea; Flandre, Philippe; Dunn, David; Zazzi, Maurizio; Wensing, Annemarie; Santoro, Maria Mercedes; Günthard, Huldrych F; Wittkop, Linda; Kordossis, Theodoros; Garcia, Federico; Castagna, Antonella; Cozzi-Lepri, Alessandro; Churchill, Duncan; De Wit, Stéphane; Brockmeyer, Norbert H; Imaz, Arkaitz; Mussini, Cristina; Obel, Niels; Perno, Carlo Federico; Roca, Bernardino; Reiss, Peter; Schülter, Eugen; Torti, Carlo; van Sighem, Ard; Zangerle, Robert; Descamps, Diane
2016-05-01
The objective of this study was to improve the prediction of the impact of HIV-1 protease mutations in different viral subtypes on virological response to darunavir. Darunavir-containing treatment change episodes (TCEs) in patients previously failing PIs were selected from large European databases. HIV-1 subtype B-infected patients were used as the derivation dataset and HIV-1 non-B-infected patients were used as the validation dataset. The adjusted association of each mutation with week 8 HIV RNA change from baseline was analysed by linear regression. A prediction model was derived based on best subset least squares estimation with mutational weights corresponding to regression coefficients. Virological outcome prediction accuracy was compared with that from existing genotypic resistance interpretation systems (GISs) (ANRS 2013, Rega 9.1.0 and HIVdb 7.0). TCEs were selected from 681 subtype B-infected and 199 non-B-infected adults. Accompanying drugs were NRTIs in 87%, NNRTIs in 27% and raltegravir or maraviroc or enfuvirtide in 53%. The prediction model included weighted protease mutations, HIV RNA, CD4 and activity of accompanying drugs. The model's association with week 8 HIV RNA change in the subtype B (derivation) set was R² = 0.47 [average squared error (ASE) = 0.67, P < 10⁻⁶]; in the non-B (validation) set, ASE was 0.91. Accuracy investigated by means of area under the receiver operating characteristic curves with a binary response (above the threshold value of HIV RNA reduction) showed that our final model outperformed models with existing interpretation systems in both training and validation sets. A model with a new darunavir-weighted mutation score outperformed existing GISs in both B and non-B subtypes in predicting virological response to darunavir. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Regional mapping of soil parent material by machine learning based on point data
NASA Astrophysics Data System (ADS)
Lacoste, Marine; Lemercier, Blandine; Walter, Christian
2011-10-01
A machine learning system (MART) has been used to predict soil parent material (SPM) at the regional scale with a 50-m resolution. The use of point-specific soil observations as training data was tested as a replacement for the soil maps introduced in previous studies, with the aim of generating a more even distribution of training data over the study area and reducing information uncertainty. The 27,020-km² study area (Brittany, northwestern France) contains mainly metamorphic, igneous and sedimentary substrates. However, superficial deposits (aeolian loam, colluvial and alluvial deposits) very often represent the actual SPM and are typically under-represented in existing geological maps. In order to calibrate the predictive model, a total of 4920 point soil descriptions were used as training data along with 17 environmental predictors (terrain attributes derived from a 50-m DEM, as well as emissions of K, Th and U obtained by means of airborne gamma-ray spectrometry, geological variables at the 1:250,000 scale and land use maps obtained by remote sensing). Model predictions were then compared: i) during SPM model creation to point data not used in model calibration (internal validation), ii) to the entire point dataset (point validation), and iii) to existing detailed soil maps (external validation). The internal, point and external validation accuracy rates were 56%, 81% and 54%, respectively. Aeolian loam was one of the three most closely predicted substrates. Poor prediction results were associated with uncommon materials and areas with high geological complexity, i.e. areas where existing maps used for external validation were also imprecise. The resultant predictive map turned out to be more accurate than existing geological maps and moreover indicated surface deposits whose spatial coverage is consistent with actual knowledge of the area. This method proves quite useful in predicting SPM within areas where conventional mapping techniques might be too costly or lengthy or where soil maps are insufficient for use as training data. In addition, this method allows producing repeatable and interpretable results, whose accuracy can be assessed objectively.
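MART is essentially gradient-boosted decision trees; the sketch below shows the pattern with scikit-learn's GradientBoostingClassifier on synthetic multi-class data standing in for the terrain, gamma-ray and geological predictors (17 features, several parent-material classes). It is illustrative only and does not reproduce the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 17 environmental predictors and 5 hypothetical
# soil parent material classes.
X, y = make_classification(n_samples=5000, n_features=17, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

# Gradient-boosted trees (the family of models MART belongs to).
mart = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                  max_depth=3, random_state=0)
mart.fit(X_tr, y_tr)
print("held-out accuracy:", round(mart.score(X_te, y_te), 2))
```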
A novel time series link prediction method: Learning automata approach
NASA Astrophysics Data System (ADS)
Moradabadi, Behnaz; Meybodi, Mohammad Reza
2017-09-01
Link prediction is a key social network challenge that uses the network structure to predict future links. Common link prediction approaches use a static graph representation, in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a traditional approach that calculates a similarity metric for each non-connected pair of nodes, sorts the pairs by their similarity scores, and labels the pairs with the highest scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, using deterministic graphs for modeling and analysis of a social network may not be appropriate. In the time-series link prediction problem, the time series of link occurrences is used to predict future links. In this paper, we propose a new time-series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time-series link occurrences are considered.
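A minimal sketch of the mechanism described (one automaton per candidate link, updated through past snapshots with a linear reward-inaction scheme) follows; the link history, learning rate and decision rule are invented for illustration and simplify the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

# Observed occurrences of one candidate link over T-1 past snapshots
# (synthetic; 1 = link present in that snapshot).
history = (rng.random(30) < 0.7).astype(int)

p = np.array([0.5, 0.5])   # action probabilities: [predict-link, predict-no-link]
a = 0.1                    # reward (learning) rate

for snapshot in history:
    action = 0 if rng.random() < p[0] else 1
    correct = (action == 0) == bool(snapshot)
    if correct:
        # Linear reward-inaction (L_RI): move probability toward the
        # rewarded action; on penalty, probabilities stay unchanged.
        p[action] += a * (1 - p[action])
        p[1 - action] = 1 - p[action]

print("predict link at time T?", p[0] > 0.5, "with p =", p.round(2))
```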
Traffic prediction using wireless cellular networks : final report.
DOT National Transportation Integrated Search
2016-03-01
The major objective of this project is to obtain traffic information from existing wireless : infrastructure. : In this project freeway traffic is identified and modeled using data obtained from existing : wireless cellular networks. Most of the prev...
Turbulent flow separation in three-dimensional asymmetric diffusers
NASA Astrophysics Data System (ADS)
Jeyapaul, Elbert
2011-12-01
Turbulent three-dimensional flow separation is more complicated than its two-dimensional counterpart, and the physics of the flow is not well understood. Turbulent flow separation is nearly independent of the Reynolds number, and separation in 3-D occurs at singular points and along convergence lines emanating from these points. Most engineering turbulence research is driven by the need to gain knowledge of the flow field that can be used to improve modeling predictions. This work is motivated by the need for a detailed study of 3-D separation in asymmetric diffusers: to understand the separation phenomena using eddy-resolving simulation methods, assess the predictive ability of existing RANS turbulence models and propose modeling improvements. The Cherry diffuser has been used as a benchmark. All existing linear eddy-viscosity RANS models (k-ω SST, k-ε and v²-f) fail in predicting such flows, placing separation on the wrong side. The geometry has a doubly-sloped wall, with the other two walls orthogonal to each other and aligned with the diffuser inlet, giving the diffuser its asymmetry. The top and side flare angles are different, which gives rise to a different pressure gradient in each transverse direction. Eddy-resolving simulations using the scale-adaptive simulation (SAS) and large eddy simulation (LES) methods have been used to predict separation in the benchmark diffuser and validated. A series of diffusers with the same configuration has been generated, each having the same streamwise pressure gradient and parametrized only by the inlet aspect ratio. The RANS models were put to the test and the flow physics explored using the SAS-generated flow field. The RANS models indicate a transition of the separation surface from the top sloped wall to the side sloped wall at an inlet aspect ratio much lower than observed in the LES results. This over-sensitivity of RANS models to transverse pressure gradients is due to the lack of anisotropy in the linear Reynolds stress formulation. The complexity of the flow separation arises from the effects of lateral straining, streamline curvature, secondary flow of the second kind and transverse pressure gradients on turbulence. Resolving these effects is possible with anisotropy-resolving turbulence models such as the explicit algebraic Reynolds stress model (EARSM). This model has provided accurate predictions of the streamwise and transverse velocities; however, the wall pressure is underpredicted. An improved EARSM model is developed by correcting the coefficients, which predicts a more accurate wall pressure. Scope remains for improving this model by including convective effects and the dynamics of velocity gradient invariants.
UXO Burial Prediction Fidelity
2017-07-01
...been developed to predict the initial penetration depth of underwater mines. SERDP would like to know if and how these existing mine models could be... designed for near-cylindrical mines; for munitions, however, projectile-specific drag, lift, and moment coefficients are needed for estimating... as inputs. Other models have been built to estimate these initial conditions for mines dropped into water. Can these mine models be useful for...
A fuzzy set preference model for market share analysis
NASA Technical Reports Server (NTRS)
Turksen, I. B.; Willson, Ian A.
1992-01-01
Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share prediction).
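The sketch below illustrates the basic ingredients described here (triangular membership functions for a linguistic attribute, and a weighted linear-combination aggregation of preferences); the terms, weights and scales are invented for illustration, not the article's calibrated model.

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function for a linguistic term on [a, c], peak at b.
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0, 1))

# Hypothetical product attribute: price rated on an ordinal 1-9 scale,
# with linguistic terms "cheap" / "moderate" / "expensive".
price = 6.0
cheap = tri(price, 1, 2, 5)
moderate = tri(price, 3, 5, 7)
expensive = tri(price, 5, 8, 9)

# One respondent's preference weights for the terms (ordinal elicitation,
# rather than ratio-scaled conjoint ratings).
w = {"cheap": 0.9, "moderate": 0.6, "expensive": 0.1}

# Overall preference as a weighted linear-combination aggregation, mirroring
# how fuzzy sets can slot into an existing conjoint model structure.
pref = (w["cheap"] * cheap + w["moderate"] * moderate + w["expensive"] * expensive) \
       / (cheap + moderate + expensive)
print("preference for this price level:", round(pref, 2))
```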
Indoor tanning and the MC1R genotype: risk prediction for basal cell carcinoma risk in young people.
Molinaro, Annette M; Ferrucci, Leah M; Cartmel, Brenda; Loftfield, Erikka; Leffell, David J; Bale, Allen E; Mayne, Susan T
2015-06-01
Basal cell carcinoma (BCC) incidence is increasing, particularly in young people, and can be associated with significant morbidity and treatment costs. To identify young individuals at risk of BCC, we assessed existing melanoma or overall skin cancer risk prediction models and built a novel risk prediction model, with a focus on indoor tanning and the melanocortin 1 receptor gene, MC1R. We evaluated logistic regression models among 759 non-Hispanic whites from a case-control study of patients seen between 2006 and 2010 in New Haven, Connecticut. In our data, the adjusted area under the receiver operating characteristic curve (AUC) for a model by Han et al. (Int J Cancer. 2006;119(8):1976-1984) with 7 MC1R variants was 0.72 (95% confidence interval (CI): 0.66, 0.78), while that by Smith et al. (J Clin Oncol. 2012;30(15 suppl):8574) with MC1R and indoor tanning had an AUC of 0.69 (95% CI: 0.63, 0.75). Our base model had greater predictive ability than existing models and was significantly improved when we added ever-indoor tanning, burns from indoor tanning, and MC1R (AUC = 0.77, 95% CI: 0.74, 0.81). Our early-onset BCC risk prediction model incorporating MC1R and indoor tanning extends the work of other skin cancer risk prediction models, emphasizes the value of both genotype and indoor tanning in skin cancer risk prediction in young people, and should be validated with an independent cohort. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Du, Qiang; Li, Yanjun
2015-06-01
In this paper, a multi-scale as-cast grain size prediction model is proposed to predict the as-cast grain size of inoculated aluminum alloy melts solidified under non-isothermal conditions, i.e., in the presence of a temperature gradient. Given the melt composition, inoculation and heat extraction boundary conditions, the model is able to predict the maximum nucleation undercooling, the cooling curve, the primary-phase solidification path and the final as-cast grain size of binary alloys. The proposed model has been applied to two Al-Mg alloys, and comparisons with laboratory and industrial solidification experiments have been carried out. The preliminary conclusion is that the proposed model is a promising microscopic model for use within a multi-scale casting simulation framework.
Promoter Sequences Prediction Using Relational Association Rule Mining
Czibula, Gabriela; Bocicor, Maria-Iuliana; Czibula, Istvan Gergely
2012-01-01
In this paper we approach, from a computational perspective, the problem of promoter sequence prediction, an important problem within the field of bioinformatics. As the conditions for a DNA sequence to function as a promoter are not fully known, machine learning based classification models are still being developed to approach the problem of promoter identification in DNA. We propose a classification model based on relational association rule mining. Relational association rules are a particular type of association rule and describe numerical orderings between attributes that commonly occur over a data set. Our classifier is based on the discovery of relational association rules for predicting whether or not a DNA sequence contains a promoter region. An experimental evaluation of the proposed model and a comparison with similar existing approaches are provided. The obtained results show that our classifier outperforms the existing techniques for identifying promoter sequences, confirming the potential of our proposal. PMID:22563233
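A toy version of the idea (mining ordering relations between numeric attributes that hold with high support in the promoter class, then scoring new sequences by how many of those rules they satisfy) is sketched below on synthetic data; it is a deliberate simplification of the paper's classifier.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(7)

# Synthetic numeric features per sequence (e.g. positional scores);
# promoters are simulated so their attributes tend to increase left-to-right.
X_promoter = np.sort(rng.normal(size=(100, 4)), axis=1)

def rules(X, min_support=0.9):
    # Relational association rules of the form "attribute i < attribute j"
    # that hold in at least min_support of the rows.
    return {(i, j) for i, j in permutations(range(X.shape[1]), 2)
            if np.mean(X[:, i] < X[:, j]) >= min_support}

promoter_rules = rules(X_promoter)

def score(x):
    # Fraction of promoter-class rules a new sequence satisfies.
    return float(np.mean([x[i] < x[j] for i, j in promoter_rules]))

print("promoter-like:", score(np.sort(rng.normal(size=4))))
print("random       :", score(rng.normal(size=4)))
```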
Criteria for predicting the formation of single-phase high-entropy alloys
Troparevsky, M. Claudia; Morris, James R.; Kent, Paul R.; ...
2015-03-15
High entropy alloys constitute a new class of materials whose very existence poses fundamental questions. Originally thought to be stabilized by the large entropy of mixing, these alloys have attracted attention due to their potential applications, yet no model capable of robustly predicting which combinations of elements will form a single phase currently exists. Here we propose a model that, through the use of high-throughput computation of the enthalpies of formation of binary compounds, is able to confirm all known high-entropy alloys while rejecting similar alloys that are known to form multiple phases. Despite the increasing entropy, our model predicts that the number of potential single-phase multicomponent alloys decreases with an increasing number of components: out of more than two million possible 7-component alloys considered, fewer than twenty single-phase alloys are likely.
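The screening logic can be illustrated compactly: a candidate alloy is accepted only if every constituent binary's formation enthalpy falls inside a window (too negative signals an ordered compound, too positive signals segregation). In the sketch below both the enthalpy table and the window bounds are invented placeholders, not the paper's computed values.

```python
from itertools import combinations

# Hypothetical binary formation enthalpies (meV/atom); negative = strong
# compound former, positive = segregating pair. All values are invented.
H = {("Co", "Cr"): -20, ("Co", "Fe"): -5,  ("Co", "Mn"): -8,
     ("Co", "Ni"): 0,   ("Cr", "Fe"): -10, ("Cr", "Mn"): 2,
     ("Cr", "Ni"): -12, ("Fe", "Mn"): 0,   ("Fe", "Ni"): -15,
     ("Mn", "Ni"): -30}

def single_phase(elements, lo=-40, hi=15):
    """Screening rule in the spirit of the paper: accept the alloy only if
    every constituent binary's enthalpy lies inside the window [lo, hi].
    The window bounds here are placeholders."""
    return all(lo <= H[tuple(sorted(pair))] <= hi
               for pair in combinations(elements, 2))

print(single_phase(["Co", "Cr", "Fe", "Mn", "Ni"]))  # e.g. the Cantor alloy
```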
NASA Technical Reports Server (NTRS)
1990-01-01
Lunar base projects, including a reconfigurable lunar cargo launcher, a thermal and micrometeorite protection system, a versatile lifting machine with robotic capabilities, a cargo transport system, the design of a road construction system for a lunar base, and the design of a device for removing lunar dust from material surfaces, are discussed. The emphasis on the Gulf of Mexico project was on the development of a computer simulation model for predicting vessel station keeping requirements. An existing code, used in predicting station keeping requirements for oil drilling platforms operating in North Shore (Alaska) waters was used as a basis for the computer simulation. Modifications were made to the existing code. The input into the model consists of satellite altimeter readings and water velocity readings from buoys stationed in the Gulf of Mexico. The satellite data consists of altimeter readings (wave height) taken during the spring of 1989. The simulation model predicts water velocity and direction, and wind velocity.
Gordon M. Heisler; Richard H. Grant; David J. Nowak; Wei Gao; Daniel E. Crane; Jeffery T. Walton
2003-01-01
Evaluating the impact of ultraviolet-B radiation (UVB) on urban populations would be enhanced by improved predictions of the UVB radiation at the level of human activity. This paper reports the status of plans for incorporating a UVB prediction module into an existing Urban Forest Effects (UFORE) model. UFORE currently has modules to quantify urban forest structure,...
Coarse-Graining Polymer Field Theory for Fast and Accurate Simulations of Directed Self-Assembly
NASA Astrophysics Data System (ADS)
Liu, Jimmy; Delaney, Kris; Fredrickson, Glenn
To design effective manufacturing processes using polymer directed self-assembly (DSA), the semiconductor industry benefits greatly from having a complete picture of stable and defective polymer configurations. Field-theoretic simulations are an effective way to study these configurations and predict defect populations. Self-consistent field theory (SCFT) is a particularly successful theory for studies of DSA. Although other models exist that are faster to simulate, these models are phenomenological or derived through asymptotic approximations, often leading to a loss of accuracy relative to SCFT. In this study, we employ our recently-developed method to produce an accurate coarse-grained field theory for diblock copolymers. The method uses a force- and stress-matching strategy to map output from SCFT simulations into parameters for an optimized phase field model. This optimized phase field model is just as fast as existing phenomenological phase field models, but makes more accurate predictions of polymer self-assembly, both in bulk and in confined systems. We study the performance of this model under various conditions, including its predictions of domain spacing, morphology and defect formation energies.
A Ceramic Fracture Model for High Velocity Impact
1993-05-01
...employ damage concepts appear more relevant than crack growth models for this application. This research adopts existing fracture model concepts and... extends them through applications in an existing finite element continuum mechanics code (hydrocode) to the prediction of the damage and fracture processes... to be accurate in the lower velocity range of this work. Mescall and Tracy [15] investigated the selection of ceramic material for application in armors
Adeli, Hossein; Vitu, Françoise; Zelinsky, Gregory J
2017-02-08
Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts. SIGNIFICANCE STATEMENT The superior colliculus (SC) performs a visual-to-motor transformation vital to overt attention, but existing SC models cannot predict saccades to visually complex real-world stimuli. We introduce a brain-inspired SC model that outperforms state-of-the-art image-based competitors in predicting the sequences of fixations made by humans performing a range of everyday tasks (scene viewing and exemplar and categorical search), making clear the value of looking to the brain for model design. This work is significant in that it will drive new research by making computationally explicit predictions of SC neural population activity in response to naturalistic stimuli and tasks. It will also serve as a blueprint for the construction of other brain-inspired models, helping to usher in the next generation of truly intelligent autonomous systems. Copyright © 2017 the authors 0270-6474/17/371453-15$15.00/0.
The transferability of safety-driven access management models for application to other sites.
DOT National Transportation Integrated Search
2001-01-01
Several research studies have produced mathematical models that predict the safety impacts of selected access management techniques. Since new models require substantial resources to construct, this study evaluated five existing models with regard to...
ERIC Educational Resources Information Center
Kwon, Heekyung
2011-01-01
The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…
Debris-flow runout predictions based on the average channel slope (ACS)
Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.
2008-01-01
Prediction of the runout distance of a debris flow is an important element in the delineation of potentially hazardous areas on alluvial fans and for the siting of mitigation structures. Existing runout estimation methods rely on input parameters that are often difficult to estimate, including volume, velocity, and frictional factors. In order to provide a simple method for preliminary estimates of debris-flow runout distances, we developed a model that provides runout predictions based on the average channel slope (ACS model) for non-volcanic debris flows that emanate from confined channels and deposit on well-defined alluvial fans. This model was developed from 20 debris-flow events in the western United States and British Columbia. Based on a runout estimation method developed for snow avalanches, this model predicts debris-flow runout as an angle of reach from a fixed point in the drainage channel to the end of the runout zone. The best fixed point was found to be the mid-point elevation of the drainage channel, measured from the apex of the alluvial fan to the top of the drainage basin. Predicted runout lengths were more consistent than those obtained from existing angle-of-reach estimation methods. Results of the model compared well with those of laboratory flume tests performed using the same range of channel slopes. The robustness of this model was tested by applying it to three debris-flow events not used in its development: predicted runout ranged from 82 to 131% of the actual runout for these three events. Prediction interval multipliers were also developed so that the user may calculate predicted runout within specified confidence limits. © 2008 Elsevier B.V. All rights reserved.
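To make the angle-of-reach idea concrete, here is a minimal Python sketch of how a runout length follows from the geometry described above. The regression mapping average channel slope to angle of reach uses made-up placeholder coefficients (the published ACS regression is not reproduced here), and the simple fan geometry is an assumption.

```python
import math

def predicted_runout(channel_slope_deg, fixed_pt_height, fixed_pt_dist,
                     fan_slope_deg, a=0.5, b=0.9):
    """Estimate debris-flow runout beyond the fan apex from an angle of reach.

    The angle of reach (alpha) runs from the channel mid-point elevation to the
    end of the runout zone, as in the ACS model.  The linear regression
    alpha = a + b * channel_slope is only a placeholder, not the published fit.
    """
    alpha = math.radians(a + b * channel_slope_deg)   # hypothetical regression
    fan = math.radians(fan_slope_deg)
    # Solve tan(alpha) = (H + R*tan(fan)) / (X + R) for the runout length R,
    # where H, X locate the fixed point above and upstream of the fan apex.
    num = fixed_pt_height - fixed_pt_dist * math.tan(alpha)
    den = math.tan(alpha) - math.tan(fan)
    return num / den

# Example: 20 deg average channel slope, fixed point 250 m above and 600 m
# upstream of the fan apex, 4 deg fan slope.
print(round(predicted_runout(20.0, 250.0, 600.0, 4.0), 1), "m")
```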
NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2012-01-01
This technical highlight describes NREL research to develop Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure to increase the quality and accuracy of energy analysis tools for the building retrofit market. The Building Energy Simulation Test for Existing Homes (BESTEST-EX) is a test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. The diagram illustrates the utility bill calibration test cases. Participants are given input ranges and synthetic utility bills. Software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participant energy savings predictions using calibrated models are compared to NREL predictions using state-of-the-art building energy simulation programs.
Mapping the global depth to bedrock for land surface modelling
NASA Astrophysics Data System (ADS)
Shangguan, W.; Hengl, T.; Yuan, H.; Dai, Y. J.; Zhang, S.
2017-12-01
Depth to bedrock serves as the lower boundary of land surface models, which controls hydrologic and biogeochemical processes. This paper presents a framework for global estimation of depth to bedrock (DTB). Observations were extracted from a global compilation of soil profile data (ca. 130,000 locations) and borehole data (ca. 1.6 million locations). Additional pseudo-observations generated by expert knowledge were added to fill in large sampling gaps. The model training points were then overlaid on a stack of 155 covariates including DEM-based hydrological and morphological derivatives, lithologic units, MODIS surface reflectance bands and vegetation indices derived from the MODIS land products. Global spatial prediction models were developed using random forest and Gradient Boosting Tree algorithms. The final predictions were generated at the spatial resolution of 250 m as an ensemble prediction of the two independently fitted models. The 10-fold cross-validation shows that the models explain 59% of the variation for absolute DTB and 34% for censored DTB (depths deeper than 200 cm are predicted as 200 cm). The model for the occurrence of the R horizon (bedrock) within 200 cm performs well. Visual comparisons of predictions in the study areas where more detailed maps of depth to bedrock exist show that there is a general match with spatial patterns from similar local studies. Limitations of the data set and extrapolation in data-sparse areas should not be ignored in applications. To improve the accuracy of spatial prediction, more borehole drilling logs will need to be added to supplement the existing training points in under-represented areas.
Mapping the global depth to bedrock for land surface modeling
NASA Astrophysics Data System (ADS)
Shangguan, Wei; Hengl, Tomislav; Mendes de Jesus, Jorge; Yuan, Hua; Dai, Yongjiu
2017-03-01
Depth to bedrock serves as the lower boundary of land surface models, which controls hydrologic and biogeochemical processes. This paper presents a framework for global estimation of depth to bedrock (DTB). Observations were extracted from a global compilation of soil profile data (ca. 130,000 locations) and borehole data (ca. 1.6 million locations). Additional pseudo-observations generated by expert knowledge were added to fill in large sampling gaps. The model training points were then overlaid on a stack of 155 covariates including DEM-based hydrological and morphological derivatives, lithologic units, MODIS surface reflectance bands and vegetation indices derived from the MODIS land products. Global spatial prediction models were developed using random forest and Gradient Boosting Tree algorithms. The final predictions were generated at the spatial resolution of 250 m as an ensemble prediction of the two independently fitted models. The 10-fold cross-validation shows that the models explain 59% of the variation for absolute DTB and 34% for censored DTB (depths deeper than 200 cm are predicted as 200 cm). The model for the occurrence of the R horizon (bedrock) within 200 cm performs well. Visual comparisons of predictions in the study areas where more detailed maps of depth to bedrock exist show that there is a general match with spatial patterns from similar local studies. Limitations of the data set and extrapolation in data-sparse areas should not be ignored in applications. To improve the accuracy of spatial prediction, more borehole drilling logs will need to be added to supplement the existing training points in under-represented areas.
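As a concrete illustration of the two-learner ensemble described in both entries above, the following sketch fits scikit-learn stand-ins for the random forest and gradient boosted tree models on synthetic covariates and averages their predictions, censoring at 200 cm. Everything about the data here is simulated; only the overall workflow mirrors the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 155))    # stand-in for the 155-covariate stack
# Synthetic depth-to-bedrock target in cm, non-negative by construction.
y = np.clip(50 + 40 * X[:, 0] + rng.normal(scale=20, size=1000), 0, None)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
gbt = GradientBoostingRegressor(random_state=0).fit(X, y)

# Final prediction: ensemble (mean) of the two independently fitted models,
# censored at 200 cm as done for the censored-DTB product.
pred = (rf.predict(X) + gbt.predict(X)) / 2.0
pred_censored = np.minimum(pred, 200.0)
print(pred_censored[:5])
```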
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6) . Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
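A minimal sketch of the sampling step described above, using SciPy's quasi-Monte Carlo module for a Latin hypercube design over the five varied inputs. The parameter bounds are illustrative placeholders, not the UNM experimental values; each sampled row would correspond to one CTH run.

```python
import numpy as np
from scipy.stats import qmc

# Five varied inputs: driver pressure, driver density, test-section pressure,
# test-section density, He:SF6 mole fraction.  Bounds are hypothetical.
sampler = qmc.LatinHypercube(d=5, seed=1)
unit = sampler.random(n=100)                    # 100 samples in [0, 1]^5
lower = np.array([1.0e5, 0.1, 1.0e4, 1.0, 0.4])
upper = np.array([1.0e6, 1.0, 1.0e5, 6.0, 0.6])
samples = qmc.scale(unit, lower, upper)         # each row defines one CTH run
print(samples.shape)                            # (100, 5)
```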
NASA Astrophysics Data System (ADS)
Benjanirat, Sarun
Next generation horizontal-axis wind turbines (HAWTs) will operate at very high wind speeds. Existing engineering approaches for modeling the flow phenomena are based on blade element theory, and cannot adequately account for 3-D separated, unsteady flow effects. Therefore, researchers around the world are beginning to model these flows using first principles-based computational fluid dynamics (CFD) approaches. In this study, an existing first principles-based Navier-Stokes approach is being enhanced to model HAWTs at high wind speeds. The enhancements include improved grid topology, implicit time-marching algorithms, and advanced turbulence models. The advanced turbulence models include the Spalart-Allmaras one-equation model, k-epsilon, k-omega and Shear Stress Transport (k-omega SST) models. These models are also integrated with detached eddy simulation (DES) models. Results are presented for a range of wind speeds, for a configuration termed National Renewable Energy Laboratory Phase VI rotor, tested at NASA Ames Research Center. Grid sensitivity studies are also presented. Additionally, effects of existing transition models on the predictions are assessed. Data presented include power/torque production, radial distribution of normal and tangential pressure forces, root bending moments, and surface pressure fields. Good agreement was obtained between the predictions and experiments for most of the conditions, particularly with the Spalart-Allmaras-DES model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo
Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
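The following sketch traces the decompose-predict-ensemble pipeline on a toy series. It assumes the third-party PyEMD package (installed as EMD-signal) for CEEMDAN, substitutes a plain grid search for the grey wolf optimizer when tuning the SVR components, and simplifies the final ensemble step to a sum rather than a second optimized SVR.

```python
import numpy as np
from PyEMD import CEEMDAN                 # assumes: pip install EMD-signal
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 400)) + 0.3 * rng.normal(size=400)  # toy PM2.5 proxy

imfs = CEEMDAN()(series)                  # intrinsic mode functions

def lagged(x, p=5):
    """Build (X, y) pairs from p lagged values for one-step-ahead prediction."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

grid = {"C": [1, 10], "gamma": ["scale", 0.1]}
component_preds = []
for imf in imfs:
    X, y = lagged(imf)
    svr = GridSearchCV(SVR(), grid, cv=3).fit(X, y)   # GWO replaced by grid search
    component_preds.append(svr.predict(X))

# Ensemble step: a plain sum of component predictions stands in for the
# paper's second optimized SVR.
final = np.sum(component_preds, axis=0)
print(final[:5])
```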
Seasonal Drought Prediction: Advances, Challenges, and Future Prospects
NASA Astrophysics Data System (ADS)
Hao, Zengchao; Singh, Vijay P.; Xia, Youlong
2018-03-01
Drought prediction is of critical importance to early warning for drought management. This review provides a synthesis of drought prediction based on statistical, dynamical, and hybrid methods. Statistical drought prediction is achieved by modeling the relationship between drought indices of interest and a suite of potential predictors, including large-scale climate indices, local climate variables, and land initial conditions. Dynamical meteorological drought prediction relies on seasonal climate forecasts from general circulation models (GCMs), which can be employed to drive hydrological models for agricultural and hydrological drought prediction, with the predictability determined by both climate forcings and initial conditions. Challenges still exist in drought prediction at long lead times and under a changing environment resulting from natural and anthropogenic factors. Future research prospects to improve drought prediction include, but are not limited to, high-quality data assimilation, improved model development with key processes related to drought occurrence, optimal ensemble forecasts to select or weight ensembles, and hybrid drought prediction to merge statistical and dynamical forecasts.
Recent advances in hypersonic technology
NASA Technical Reports Server (NTRS)
Dwoyer, Douglas L.
1990-01-01
This paper will focus on recent advances in hypersonic aerodynamic prediction techniques. Current capabilities of existing numerical methods for predicting high Mach number flows will be discussed and shortcomings will be identified. Physical models available for inclusion into modern codes for predicting the effects of transition and turbulence will also be outlined and their limitations identified. Chemical reaction models appropriate to high-speed flows will be addressed, and the impact of their inclusion in computational fluid dynamics codes will be discussed. Finally, the problem of validating predictive techniques for high Mach number flows will be addressed.
A thermal sensation prediction tool for use by the profession
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fountain, M.E.; Huizenga, C.
1997-12-31
As part of a recent ASHRAE research project (781-RP), a thermal sensation prediction tool has been developed. This paper introduces the tool, describes the component thermal sensation models, and presents examples of how the tool can be used in practice. Since the main end product of the HVAC industry is the comfort of occupants indoors, tools for predicting occupant thermal response can be an important asset to designers of indoor climate control systems. The software tool presented in this paper incorporates several existing models for predicting occupant comfort.
Fechter, Dominik; Storch, Ilse
2014-01-01
Due to legislative protection, many species, including large carnivores, are currently recolonizing Europe. To address the impending human-wildlife conflicts in advance, predictive habitat models can be used to determine potentially suitable habitat and areas likely to be recolonized. As field data are often limited, quantitative rule based models or the extrapolation of results from other studies are often the techniques of choice. Using the wolf (Canis lupus) in Germany as a model for habitat generalists, we developed a habitat model based on the location and extent of twelve existing wolf home ranges in Eastern Germany, current knowledge on wolf biology, different habitat modeling techniques and various input data to analyze ten different input parameter sets and address the following questions: (1) How do a priori assumptions and different input data or habitat modeling techniques affect the abundance and distribution of potentially suitable wolf habitat and the number of wolf packs in Germany? (2) In a synthesis across input parameter sets, what areas are predicted to be most suitable? (3) Are existing wolf pack home ranges in Eastern Germany consistent with current knowledge on wolf biology and habitat relationships? Our results indicate that depending on which assumptions on habitat relationships are applied in the model and which modeling techniques are chosen, the amount of potentially suitable habitat estimated varies greatly. Depending on a priori assumptions, Germany could accommodate between 154 and 1769 wolf packs. The locations of the existing wolf pack home ranges in Eastern Germany indicate that wolves are able to adapt to areas densely populated by humans, but are limited to areas with low road densities. Our analysis suggests that predictive habitat maps in general, should be interpreted with caution and illustrates the risk for habitat modelers to concentrate on only one selection of habitat factors or modeling technique. PMID:25029506
Improved global prediction of 300 nautical mile mean free air anomalies
NASA Technical Reports Server (NTRS)
Cruz, J. Y.
1982-01-01
Current procedures used for the global prediction of 300nm mean anomalies starting from known values of 1 deg by 1 deg mean anomalies yield unreasonable prediction results when applied to 300nm blocks which have a rapidly varying gravity anomaly field and which contain relatively few observed 60nm blocks. Improvement of overall 300nm anomaly prediction is first achieved by using area-weighted as opposed to unweighted averaging of the 25 generated 60nm mean anomalies inside the 300nm block. Then, improvement of prediction over rough 300nm blocks is realized through the use of fully known 1 deg by 1 deg mean elevations, taking advantage of the correlation that locally exists between 60nm mean anomalies and 60nm mean elevations inside the 300nm block. An improved prediction model which adapts itself to the roughness of the local anomaly field is found to be the model of Least Squares Collocation with systematic parameters, the systematic parameter being the slope b which is a type of Bouguer slope expressing the correlation that locally exists between 60nm mean anomalies and 60nm mean elevations.
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart to ion-channel-related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, B net, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross-validation. Overall, the B net model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
A relevance theory of induction.
Medin, Douglas L; Coley, John D; Storms, Gert; Hayes, Brett K
2003-09-01
A framework theory, organized around the principle of relevance, is proposed for category-based reasoning. According to the relevance principle, people assume that premises are informative with respect to conclusions. This idea leads to the prediction that people will use causal scenarios and property reinforcement strategies in inductive reasoning. These predictions are contrasted with both existing models and normative logic. Judgments of argument strength were gathered in three different countries, and the results showed the importance of both causal scenarios and property reinforcement in category-based inferences. The relation between the relevance framework and existing models of category-based inductive reasoning is discussed in the light of these findings.
NASA Technical Reports Server (NTRS)
Perry, Bruce; Anderson, Molly
2015-01-01
The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.
Cure modeling in real-time prediction: How much does it help?
Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F
2017-08-01
Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intending to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article, we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
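A minimal numerical sketch of how a fitted Weibull cure-mixture model turns into event-count predictions: the survival function combines a cured fraction pi with a Weibull law for the susceptible subjects. All parameter values below are illustrative, not estimates from RTOG 0129.

```python
import numpy as np

def survival(t, pi, shape, scale):
    """Cure-mixture survival: S(t) = pi + (1 - pi) * exp(-(t/scale)**shape)."""
    return pi + (1.0 - pi) * np.exp(-(t / scale) ** shape)

def expected_new_events(n_at_risk, t_now, t_future, pi, shape, scale):
    """Predicted events among n_at_risk subjects still event-free at t_now."""
    cond_surv = survival(t_future, pi, shape, scale) / survival(t_now, pi, shape, scale)
    return n_at_risk * (1.0 - cond_surv)

# 120 subjects still at risk at 12 months; predict events by 24 months.
print(expected_new_events(120, 12.0, 24.0, pi=0.35, shape=1.2, scale=20.0))
```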
Frequency-dependent selection predicts patterns of radiations and biodiversity.
Melián, Carlos J; Alonso, David; Vázquez, Diego P; Regetz, James; Allesina, Stefano
2010-08-26
Most empirical studies support a decline in speciation rates through time, although evidence for constant speciation rates also exists. Declining rates have been explained by invoking pre-existing niches, whereas constant rates have been attributed to non-adaptive processes such as sexual selection and mutation. Trends in speciation rate and the processes underlying it remain unclear, representing a critical information gap in understanding patterns of global diversity. Here we show that the temporal trend in the speciation rate can also be explained by frequency-dependent selection. We construct a frequency-dependent and DNA sequence-based model of speciation. We compare our model to empirical diversity patterns observed for cichlid fish and Darwin's finches, two classic systems for which speciation rates and richness data exist. Negative frequency-dependent selection predicts well both the declining speciation rate found in cichlid fish and explains their species richness. For groups like the Darwin's finches, in which speciation rates are constant and diversity is lower, speciation rate is better explained by a model without frequency-dependent selection. Our analysis shows that differences in diversity may be driven by incipient species abundance with frequency-dependent selection. Our results demonstrate that genetic-distance-based speciation and frequency-dependent selection are sufficient to explain the high diversity observed in natural systems and, importantly, predict decay through time in speciation rate in the absence of pre-existing niches.
Aaron B. Berdanier; Chelcy F. Miniat; James S. Clark
2016-01-01
Accurately scaling sap flux observations to tree or stand levels requires accounting for variation in sap flux between wood types and by depth into the tree. However, existing models for radial variation in axial sap flux are rarely used because they are difficult to implement, there is uncertainty about their predictive ability and calibration measurements...
Deborah M. Finch
2012-01-01
Recent research and species distribution modeling predict large changes in the distributions of species and vegetation types in the western interior of the United States in response to climate change. This volume reviews existing climate models that predict species and vegetation changes in the western United States, and it synthesizes knowledge about climate change...
20171015 - Predicting Exposure Pathways with Machine Learning (ISES)
Prioritizing the risk posed to human health from the thousands of chemicals in the environment requires tools that can estimate exposure rates from limited information. High throughput models exist to make predictions of exposure via specific, important pathways such as residenti...
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is currently still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features while simplifying the process so that it is applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes, owing to the well-developed methods for predicting these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
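As a concrete instance of the recommended process, the sketch below applies plain Monte Carlo sampling to a load-resistance model to estimate a failure probability. The distribution families and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
# Resistance (strength) and load (stress), both in MPa; choices are illustrative.
resistance = rng.lognormal(mean=np.log(500.0), sigma=0.08, size=n)
load = rng.normal(loc=350.0, scale=40.0, size=n)

# Failure occurs whenever the sampled load meets or exceeds the resistance.
p_fail = np.mean(load >= resistance)
print(f"estimated failure probability: {p_fail:.2e}")

# The estimate can then feed a life cycle cost comparison, e.g.
# expected cost = acquisition cost + p_fail * cost of failure.
```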
Predicting responses from Rasch measures.
Linacre, John M
2010-01-01
There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data, but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data may lead to worse fit to future data. The predictive power of several Rasch and Rasch-related models are discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.
Sturm, Marc; Quinten, Sascha; Huber, Christian G.; Kohlbacher, Oliver
2007-01-01
We propose a new model for predicting the retention time of oligonucleotides. The model is based on ν support vector regression using features derived from base sequence and predicted secondary structure of oligonucleotides. Because of the secondary structure information, the model is applicable even at relatively low temperatures where the secondary structure is not suppressed by thermal denaturing. This makes the prediction of oligonucleotide retention time for arbitrary temperatures possible, provided that the target temperature lies within the temperature range of the training data. We describe different possibilities of feature calculation from base sequence and secondary structure, present the results and compare our model to existing models. PMID:17567619
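A small sketch of the regression step, using scikit-learn's NuSVR on toy sequence-derived features. The paper's feature calculation also includes predicted secondary-structure information, which is omitted here, and the retention times below are invented.

```python
from sklearn.svm import NuSVR

def features(seq):
    """Very simple sequence features: mononucleotide counts plus length."""
    return [seq.count(b) for b in "ACGT"] + [len(seq)]

train_seqs = ["ACGTACGT", "AAAACCCC", "GGGTTTAA", "ACACACAC", "TTTTGGGG"]
train_rt = [5.2, 4.1, 6.3, 5.0, 6.8]      # hypothetical retention times (min)

model = NuSVR(nu=0.5, C=10.0).fit([features(s) for s in train_seqs], train_rt)
print(model.predict([features("ACGGTTCA")]))
```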
Model-based learning and the contribution of the orbitofrontal cortex to the model-free world
McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey
2012-01-01
Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030
Hidden markov model for the prediction of transmembrane proteins using MATLAB.
Chaturvedi, Navaneet; Shanker, Sudhanshu; Singh, Vinay Kumar; Sinha, Dhiraj; Pandey, Paras Nath
2011-01-01
Since membrane proteins play a key role in drug targeting, transmembrane protein prediction is an active and challenging area of the biological sciences. Location-based prediction of transmembrane proteins is significant for the functional annotation of protein sequences. Hidden Markov model-based methods have been widely applied for transmembrane topology prediction. Here we present a revised and more easily understood model than an existing one for transmembrane protein prediction. MATLAB scripts were written and compiled for parameter estimation, and the model was applied to amino acid sequences to identify transmembrane segments and their adjacent locations. The estimated model of transmembrane topology was based on the TMHMM model architecture. Only 7 super-states are defined in the given dataset, which were converted to 96 states on the basis of their length in the sequence. The prediction accuracy of the model was observed to be about 74%, which is good in the area of transmembrane topology prediction. We therefore conclude that the hidden Markov model plays a crucial role in transmembrane helix prediction on the MATLAB platform and could also be useful for drug discovery strategies. The database is available for free at bioinfonavneet@gmail.com and vinaysingh@bhu.ac.in.
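To illustrate the kind of computation underlying such predictors, here is a toy two-state (membrane/loop) hidden Markov model decoded with the Viterbi algorithm in pure Python. All transition and emission probabilities are invented for the example and are not the estimated TMHMM-style parameters.

```python
import math

states = ["TM", "LOOP"]
start = {"TM": 0.1, "LOOP": 0.9}
trans = {"TM": {"TM": 0.9, "LOOP": 0.1}, "LOOP": {"TM": 0.05, "LOOP": 0.95}}

def emit(state, aa):
    """Hydrophobic residues are more likely inside the membrane (toy values)."""
    hydrophobic = aa in "AILMFWV"
    if state == "TM":
        return 0.8 if hydrophobic else 0.2
    return 0.3 if hydrophobic else 0.7

def viterbi(seq):
    # Dynamic programming table of log-probabilities, plus back-pointers.
    V = [{s: math.log(start[s]) + math.log(emit(s, seq[0])) for s in states}]
    back = []
    for aa in seq[1:]:
        row, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            row[s] = V[-1][best] + math.log(trans[best][s]) + math.log(emit(s, aa))
            ptr[s] = best
        V.append(row)
        back.append(ptr)
    # Trace the most probable state path backwards.
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return "".join("M" if s == "TM" else "-" for s in reversed(path))

print(viterbi("MKTLLILAVVAAALA"))
```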
AAA gunner model based on observer theory. [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least squares curve fitting method and the Gauss Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
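A minimal discrete-time Luenberger observer, the core ingredient of the gunner model described above, can be sketched as follows. The kinematic plant matrices and the observer gain are illustrative choices, not the identified values from the manned-simulation experiments.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # position/velocity kinematics, dt = 0.1 s
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])               # only position is measured
L = np.array([[0.5], [1.0]])             # observer gain (would be tuned/identified)

x = np.array([[1.0], [0.5]])             # true state
xh = np.zeros((2, 1))                    # observer's estimate
for _ in range(50):
    u = np.array([[0.0]])
    y = C @ x                            # measurement of the true system
    # Observer: propagate the model and correct with the output error.
    xh = A @ xh + B @ u + L @ (y - C @ xh)
    x = A @ x + B @ u
print(xh.ravel(), x.ravel())             # estimate converges to the true state
```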
NASA Astrophysics Data System (ADS)
Zapata, Brian Jarvis
As military and diplomatic representatives of the United States are deployed throughout the world, they must frequently make use of local, existing facilities; it is inevitable that some of these will be load bearing unreinforced masonry (URM) structures. Although generally suitable for conventional design loads, load bearing URM presents a unique hazard, with respect to collapse, when exposed to blast loading. There is therefore a need to study the blast resistance of load bearing URM construction in order to better protect US citizens assigned to dangerous locales. To address this, the Department of Civil and Environmental Engineering at the University of North Carolina at Charlotte conducted three blast tests inside a decommissioned, coal-fired, power plant prior to its scheduled demolition. The power plant's walls were constructed of URM and provided an excellent opportunity to study the response of URM walls in-situ. Post-test analytical studies investigated the ability of existing blast load prediction methodologies to model the case of a cylindrical charge with a low height of burst. It was found that even for the relatively simple blast chamber geometries of these tests, simplified analysis methods predicted blast impulses with an average net error of 22%. The study suggested that existing simplified analysis methods would benefit from additional development to better predict blast loads from cylinders detonated near the ground's surface. A hydrocode, CTH, was also used to perform two and three-dimensional simulations of the blast events. In order to use the hydrocode, Jones Wilkins Lee (JWL) equation of state (EOS) coefficients were developed for the experiment's Unimax dynamite charges; a novel energy-scaling technique was developed which permits the derivation of new JWL coefficients from an existing coefficient set. The hydrocode simulations were able to simulate blast impulses with an average absolute error of 34.5%. Moreover, the hydrocode simulations provided highly resolved spatio-temporal blast loading data for subsequent structural simulations. Equivalent single-degree-of-freedom (ESDOF) structural response models were then used to predict the out-of-plane deflections of blast chamber walls. A new resistance function was developed which permits a URM wall to crack at any height; numerical methodologies were also developed to compute transformation factors required for use in the ESDOF method. When combined with the CTH derived blast loading predictions, the ESDOF models were able to predict out-of-plane deflections with reasonable accuracy. Further investigations were performed using finite element models constructed in LS-DYNA; the models used elastic elements combined with contacts possessing a tension/shear cutoff and the ability to simulate fracture energy release. Using the CTH predicted blast loads and carefully selected constitutive parameters, the LS-DYNA models were able to both qualitatively and quantitatively predict blast chamber wall deflections and damage patterns. Moreover, the finite element models suggested several modes of response which cannot be modeled by current ESDOF methods; the effect of these response modes on the accuracy of ESDOF predictions warrants further study.
Predicted carbonation of existing concrete building based on the Indonesian tropical micro-climate
NASA Astrophysics Data System (ADS)
Hilmy, M.; Prabowo, H.
2018-03-01
This paper aims to predict carbonation progress based on a previous mathematical model. It briefly explains the nature of carbonation, including its processes and effects. The environmental humidity and temperature of an existing concrete building are measured and compared to data from the local Meteorological, Climatological, and Geophysical Agency. The data gained are expressed in the form of annual hygrothermal values, which are used as the input parameter in the carbonation model. The physical properties of the observed building, such as its location, dimensions, and structural materials, are quantified. These data are then utilized as important input parameters for the carbonation coefficients. The relationship between relative humidity and the rate of carbonation is established. The results can provide a basis for the repair and maintenance of existing concrete buildings and for their service-life analysis.
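A common basis for such predictions is the square-root-of-time carbonation model, x(t) = K * sqrt(t), with the coefficient adjusted for local hygrothermal conditions. The sketch below shows the arithmetic; the base coefficient and humidity factor are illustrative assumptions, not values from the observed building.

```python
import math

def carbonation_depth(years, K_base=4.0, humidity_factor=0.8):
    """Carbonation depth in mm after `years`; K in mm/sqrt(year), scaled by RH."""
    return K_base * humidity_factor * math.sqrt(years)

def years_to_reach_cover(cover_mm, K_base=4.0, humidity_factor=0.8):
    """Time until the carbonation front reaches the reinforcement cover depth."""
    return (cover_mm / (K_base * humidity_factor)) ** 2

print(carbonation_depth(30))          # depth after 30 years
print(years_to_reach_cover(30.0))     # service-life style estimate for 30 mm cover
```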
Predicting the evolution of complex networks via similarity dynamics
NASA Astrophysics Data System (ADS)
Wu, Tao; Chen, Leiting; Zhong, Linfeng; Xian, Xingping
2017-01-01
Almost all real-world networks are subject to constant evolution, and plenty of them have been investigated empirically to uncover the underlying evolution mechanisms. However, the evolution prediction of dynamic networks still remains a challenging problem. The crux of the matter is to estimate the future links of dynamic networks. This paper studies the evolution prediction of dynamic networks within the link prediction paradigm. To estimate the likelihood of the existence of links more accurately, an effective and robust similarity index is presented by exploiting network structure adaptively. Moreover, most existing link prediction methods do not make a clear distinction between future links and missing links. In order to predict future links, the networks are regarded as dynamic systems in this paper, and a similarity updating method, the spatial-temporal position drift model, is developed to simulate the evolutionary dynamics of node similarity. The updated similarities are then used as input information for estimating the likelihood of future links. Extensive experiments on real-world networks suggest that the proposed similarity index performs better than baseline methods and that the position drift model performs well for evolution prediction in real-world evolving networks.
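The following toy example illustrates the two ingredients: a structural similarity score and a drift-style update that extrapolates past similarities to anticipate future links. Plain common-neighbour counts stand in for the paper's adaptive similarity index, and the drift is reduced to a linear extrapolation.

```python
# Two snapshots of a small network as adjacency sets (node -> neighbours).
snapshots = [
    {1: {2}, 2: {1, 3}, 3: {2}, 4: set()},            # network at t-1
    {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}},     # network at t
]

def sim(adj, u, v):
    """Common-neighbour similarity (placeholder for the adaptive index)."""
    return len(adj[u] & adj[v])

def predicted_sim(u, v):
    """Extrapolate similarity to t+1 by the drift between the two snapshots."""
    s_prev, s_now = (sim(adj, u, v) for adj in snapshots)
    return s_now + (s_now - s_prev)

candidates = [(1, 4), (2, 4), (1, 3)]
ranked = sorted(candidates, key=lambda p: predicted_sim(*p), reverse=True)
print([(p, predicted_sim(*p)) for p in ranked])
```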
NASA Astrophysics Data System (ADS)
Botha, J. D. M.; Shahroki, A.; Rice, H.
2017-12-01
This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data and time dependent Computational Fluid Dynamics (CFD) calculations are carried out to solve for the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised and the location and mechanism of the primary sources is determined, inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions when compared to the analytic flow solution as well as showing that, for inflow-turbulence noise sources, blade generated turbulence dominates the atmospheric inflow turbulence.
Sakurai Prize: Extended Higgs Sectors--phenomenology and future prospects
NASA Astrophysics Data System (ADS)
Gunion, John
2017-01-01
The discovery of a spin-0 state at 125 GeV with properties close to those predicted for the single Higgs boson of the Standard Model does not preclude the existence of additional Higgs bosons. In this talk, models with extended Higgs sectors are reviewed, including two-Higgs-doublet models with and without an extra singlet Higgs field and supersymmetric models. Special emphasis is given to the limit in which the couplings and properties of one of the Higgs bosons of the extended Higgs sector are very close to those predicted for the single Standard Model Higgs boson while the other Higgs bosons are relatively light, perhaps even having masses close to or below the SM-like 125 GeV state. Constraints on this type of scenario given existing data are summarized and prospects for observing these non-SM-like Higgs bosons are discussed. Supported by the Department of Energy.
Cloud Based Metalearning System for Predictive Modeling of Biomedical Data
Vukićević, Milan
2014-01-01
Rapid growth and storage of biomedical data have enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, the analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selecting the best predictive algorithms for the data at hand with open source big data technologies for the analysis of biomedical data. PMID:24892101
Using models for the optimization of hydrologic monitoring
Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.
2011-01-01
Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations often are selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well-suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter generally is not widely requested as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort, based on an old prediction, may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which then can be ranked for their effectiveness for reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010). The techniques briefly described earlier are described in detail in a U.S. Geological Survey Scientific Investigations Report available on the Internet (Fienen and others, 2010; http://pubs.usgs.gov/sir/2010/5159/). This fact sheet presents a synopsis of the techniques as applied to a synthetic model based on a model constructed using properties from the Lake Michigan Basin (Hoard, 2010).
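Under linear-Gaussian assumptions, the data-worth calculation reduces to comparing prediction variance before and after conditioning on a candidate observation, which is the essence of what PREDUNC computes. The sketch below uses toy sensitivities and covariances; it is not the PEST implementation.

```python
import numpy as np

Cp = np.diag([1.0, 0.5, 2.0])        # prior parameter covariance
y = np.array([0.8, -0.2, 1.5])       # sensitivity of the prediction to parameters
X = np.array([[1.0, 0.3, 0.0]])      # sensitivity of the candidate observation
Ce = np.array([[0.1]])               # observation noise covariance

prior_var = y @ Cp @ y
# Condition the parameter covariance on the observation (Bayesian update /
# Schur complement), then re-evaluate the prediction variance.
gain = Cp @ X.T @ np.linalg.inv(X @ Cp @ X.T + Ce)
Cp_post = Cp - gain @ X @ Cp
post_var = y @ Cp_post @ y

print(prior_var, post_var, "worth =", prior_var - post_var)
```

Repeating this for every candidate location and observation type yields the ranking described above.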
Emura, Takeshi; Nakatochi, Masahiro; Matsui, Shigeyuki; Michimae, Hirofumi; Rondeau, Virginie
2017-01-01
Developing a personalized risk prediction model of death is fundamental for improving patient care and touches on the realm of personalized medicine. The increasing availability of genomic information and large-scale meta-analytic data sets for clinicians has motivated the extension of traditional survival prediction based on the Cox proportional hazards model. The aim of our paper is to develop a personalized risk prediction formula for death according to genetic factors and dynamic tumour progression status based on meta-analytic data. To this end, we extend the existing joint frailty-copula model to a model allowing for high-dimensional genetic factors. In addition, we propose a dynamic prediction formula to predict death given tumour progression events possibly occurring after treatment or surgery. For clinical use, we implement the computation software of the prediction formula in the joint.Cox R package. We also develop a tool to validate the performance of the prediction formula by assessing the prediction error. We illustrate the method with the meta-analysis of individual patient data on ovarian cancer patients.
EVALUATION OF UNSATURATED/VADOSE ZONE MODELS FOR SUPERFUND SITES
Mathematical models of water and chemical movement in soils are being used as decision aids for defining groundwater protection practices for Superfund sites. Numerous transport models exist for predicting movement and degradation of hazardous chemicals through soils. Many of these...
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model the Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the capability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating improved prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the predictive capability demanded of turbulence models in real applications.
Predictive Modeling of Risk Factors and Complications of Cataract Surgery
Gaskin, Gregory L; Pershing, Suzann; Cole, Tyler S; Shah, Nigam H
2016-01-01
Purpose To quantify the relationship between aggregated preoperative risk factors and cataract surgery complications, as well as to build a model predicting outcomes on an individual level, given a constellation of demographic, baseline, preoperative, and intraoperative patient characteristics. Setting Stanford Hospital and Clinics between 1994 and 2013. Design Retrospective cohort study. Methods Patients aged 40 or older who received cataract surgery between 1994 and 2013. Risk factors, complications, and demographic information were extracted from the Electronic Health Record (EHR), based on International Classification of Diseases, 9th edition (ICD-9) codes, Current Procedural Terminology (CPT) codes, drug prescription information, and text data mining using natural language processing. We used a bootstrapped least absolute shrinkage and selection operator (LASSO) model to identify highly predictive variables. We built random forest classifiers for each complication to create predictive models. Results Our data corroborated existing literature on postoperative complications, including the association of intraoperative complications, complex cataract surgery, black race, and/or prior eye surgery with an increased risk of any postoperative complications. We also found a number of other, less well-described risk factors, including systemic diabetes mellitus, young age (<60 years old), and hyperopia as risk factors for complex cataract surgery and intra- and post-operative complications. Our predictive models based on aggregated risk factors outperformed existing published models. Conclusions The constellations of risk factors and complications described here can guide new avenues of research and provide specific, personalized risk assessment for a patient considering cataract surgery. The predictive capacity of our models can enable risk stratification of patients, which has utility as a teaching tool as well as informing quality/value-based reimbursements. PMID:26692059
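A compact sketch of the modelling pipeline, bootstrapped L1 selection followed by a random-forest classifier, is given below using scikit-learn on simulated stand-ins for the EHR-derived features. Thresholds and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                      # 40 candidate risk factors
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(size=500) > 1).astype(int)

# Bootstrapped L1 (LASSO-style) selection: keep features chosen in most resamples.
counts = np.zeros(X.shape[1])
for _ in range(50):
    idx = rng.integers(0, len(y), len(y))
    l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    l1.fit(X[idx], y[idx])
    counts += (l1.coef_.ravel() != 0)
selected = np.where(counts >= 25)[0]                # features stable across resamples

# One random-forest classifier per complication; a single outcome is shown here.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X[:, selected], y)
print("selected features:", selected)
print("training accuracy:", clf.score(X[:, selected], y))
```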
Lee II, Henry; Reusser, Deborah A.; Frazier, Melanie R; McCoy, Lee M; Clinton, Patrick J.; Clough, Jonathan S.
2014-01-01
The “Sea‐Level Affecting Marshes Model” (SLAMM) is a moderate resolution model used to predict the effects of sea level rise on marsh habitats (Craft et al. 2009). SLAMM has been used extensively on both the west coast (e.g., Glick et al., 2007) and east coast (e.g., Geselbracht et al., 2011) of the United States to evaluate potential changes in the distribution and extent of tidal marsh habitats. However, a limitation of the current version of SLAMM, (Version 6.2) is that it lacks the ability to model distribution changes in seagrass habitat resulting from sea level rise. Because of the ecological importance of SAV habitats, U.S. EPA, USGS, and USDA partnered with Warren Pinnacle Consulting to enhance the SLAMM modeling software to include new functionality in order to predict changes in Zostera marina distribution within Pacific Northwest estuaries in response to sea level rise. Specifically, the objective was to develop a SAV model that used generally available GIS data and parameters that were predictive and that could be customized for other estuaries that have GIS layers of existing SAV distribution. This report describes the procedure used to develop the SAV model for the Yaquina Bay Estuary, Oregon, appends a statistical script based on the open source R software to generate a similar SAV model for other estuaries that have data layers of existing SAV, and describes how to incorporate the model coefficients from the site‐specific SAV model into SLAMM to predict the effects of sea level rise on Zostera marina distributions. To demonstrate the applicability of the R tools, we utilize them to develop model coefficients for Willapa Bay, Washington using site‐specific SAV data.
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
Peterson, J.; Dunham, J.B.
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout ( Salvelinus confluentus ) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
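The heart of the combined approach can be shown in a few lines: start from a model-based prior probability of presence and update it with the outcome of surveys that detect the species imperfectly. The numbers below are illustrative, not the bull trout estimates.

```python
def posterior_presence(prior, detect_prob, n_visits, detected):
    """Posterior Pr(present) after n_visits surveys with per-visit detection
    probability detect_prob; `detected` is True if any visit found the species."""
    if detected:
        return 1.0                        # detections are treated as certain
    miss_all = (1.0 - detect_prob) ** n_visits
    return prior * miss_all / (prior * miss_all + (1.0 - prior))

# Habitat model gives a 60% prior chance of presence; three visits with
# per-visit detection probability 0.5 find nothing:
print(posterior_presence(0.6, 0.5, 3, detected=False))   # ~0.158
```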
NASA Technical Reports Server (NTRS)
Kalluri, Sreeramesh
2013-01-01
Structural materials used in engineering applications are routinely subjected to repetitive mechanical loads in multiple directions under non-isothermal conditions. Over the past few decades, several multiaxial fatigue life estimation models (stress- and strain-based) have been developed for isothermal conditions. Historically, numerous fatigue life prediction models have also been developed for thermomechanical fatigue (TMF) life prediction, predominantly for uniaxial mechanical loading conditions. Realistic structural components encounter multiaxial loads and non-isothermal loading conditions, which increase the potential for interaction of damage modes. A need exists for mechanical testing and for the development and verification of life prediction models under such conditions.
Aerodynamic analysis of the Darrieus wind turbines including dynamic-stall effects
NASA Astrophysics Data System (ADS)
Paraschivoiu, Ion; Allet, Azeddine
Experimental data for a 17-m wind turbine are compared with aerodynamic performance predictions obtained with two dynamic stall methods which are based on numerical correlations of the dynamic stall delay with the pitch rate parameter. Unlike the Gormont (1973) model, the MIT model predicts that dynamic stall does not occur in the downwind part of the turbine, although it does exist in the upwind zone. The Gormont model is shown to overestimate the aerodynamic coefficients relative to the MIT model. The MIT model is found to accurately predict the dynamic-stall regime, which is characterized by a plateau oscillating near values of the experimental data for the rotor power vs wind speed at the equator.
Information-theoretic model comparison unifies saliency metrics
Kümmerer, Matthias; Wallis, Thomas S. A.; Bethge, Matthias
2015-01-01
Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is that different metrics and models use different definitions of what a “saliency map” entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. PMID:26655340
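A minimal sketch of the information-gain idea, assuming model and baseline saliency maps that have been converted to probability densities over pixels; the jointly optimized center bias and blurring from the paper are omitted, and the data are synthetic.

```python
import numpy as np

def information_gain(model_density, baseline_density, fix_rows, fix_cols):
    """Average log-likelihood advantage (bits/fixation) of a saliency model
    over a baseline, with both maps normalized to probability densities."""
    m = model_density / model_density.sum()
    b = baseline_density / baseline_density.sum()
    eps = 1e-12  # guard against log(0)
    return np.mean(np.log2(m[fix_rows, fix_cols] + eps)
                   - np.log2(b[fix_rows, fix_cols] + eps))

# Toy example: a 100x100 map with mass concentrated where fixations land.
rng = np.random.default_rng(0)
model = np.ones((100, 100)); model[40:60, 40:60] += 20.0
baseline = np.ones((100, 100))                      # uniform baseline
rows = rng.integers(40, 60, size=50); cols = rng.integers(40, 60, size=50)
print(information_gain(model, baseline, rows, cols))  # > 0: model beats baseline
```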
Efficient statistical mapping of avian count data
Royle, J. Andrew; Wikle, C.K.
2005-01-01
We develop a spatial modeling framework for count data that is efficient to implement in high-dimensional prediction problems. We consider spectral parameterizations for the spatially varying mean of a Poisson model. The spectral parameterization of the spatial process is very computationally efficient, enabling effective estimation and prediction in large problems using Markov chain Monte Carlo techniques. We apply this model to creating avian relative abundance maps from North American Breeding Bird Survey (BBS) data. Variation in the ability of observers to count birds is modeled as spatially independent noise, resulting in over-dispersion relative to the Poisson assumption. This approach represents an improvement over existing approaches used for spatial modeling of BBS data which are either inefficient for continental scale modeling and prediction or fail to accommodate important distributional features of count data thus leading to inaccurate accounting of prediction uncertainty.
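The sketch below illustrates the spirit of the spectral parameterization on a 1-D transect, assuming a low-frequency Fourier basis for the log-mean of a Poisson model; penalized maximum likelihood (scikit-learn's PoissonRegressor) stands in for the paper's Markov chain Monte Carlo machinery, and the observer-effect noise term is omitted.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# 1-D transect of count data whose log-mean varies smoothly in space.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
true_log_mu = 1.0 + 0.8 * np.sin(2 * np.pi * x) + 0.3 * np.cos(4 * np.pi * x)
counts = rng.poisson(np.exp(true_log_mu))

# Spectral (low-frequency Fourier) parameterization of the spatial mean:
# only 2*K basis coefficients are estimated, not one effect per location.
K = 4
basis = np.column_stack([f(2 * np.pi * k * x)
                         for k in range(1, K + 1)
                         for f in (np.sin, np.cos)])

model = PoissonRegressor(alpha=1e-3).fit(basis, counts)
pred = model.predict(basis)  # smoothed relative-abundance surface
print("corr(pred, true mean):", np.corrcoef(pred, np.exp(true_log_mu))[0, 1])
```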
Adaptive Data-based Predictive Control for Short Take-off and Landing (STOL) Aircraft
NASA Technical Reports Server (NTRS)
Barlow, Jonathan Spencer; Acosta, Diana Michelle; Phan, Minh Q.
2010-01-01
Data-based Predictive Control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. The characteristics of adaptive data-based predictive control are particularly appropriate for the control of nonlinear and time-varying systems, such as Short Take-off and Landing (STOL) aircraft. STOL is a capability of interest to NASA because conceptual Cruise Efficient Short Take-off and Landing (CESTOL) transport aircraft offer the ability to reduce congestion in the terminal area by utilizing existing shorter runways at airports, as well as to lower community noise by flying steep approach and climb-out patterns that reduce the noise footprint of the aircraft. In this study, adaptive data-based predictive control is implemented as an integrated flight-propulsion controller for the outer-loop control of a CESTOL-type aircraft. Results show that the controller successfully tracks velocity while attempting to maintain a constant flight path angle, using longitudinal command, thrust and flap setting as the control inputs.
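A minimal sketch of the data-based idea under simplifying assumptions: an input-output (ARX) predictor is identified from data by least squares, with no explicit plant model, and the receding-horizon input is re-solved at every step. The plant, horizon and weights are invented for illustration and are far simpler than an integrated flight-propulsion controller.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown plant, used only to generate input-output data and to simulate.
def plant(y1, y2, u1, u2):
    return 1.6 * y1 - 0.64 * y2 + 0.5 * u1 + 0.2 * u2

u = rng.normal(size=300)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = plant(y[t - 1], y[t - 2], u[t - 1], u[t - 2]) + 0.01 * rng.normal()

# Identify an ARX predictor purely from input-output data (least squares).
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)  # [a1, a2, b1, b2]

def predict(y1, y2, u_prev, u_future, coeffs):
    """Roll the identified model forward over a candidate input sequence."""
    a1, a2, b1, b2 = coeffs
    out = []
    for uf in u_future:
        y_next = a1 * y1 + a2 * y2 + b1 * uf + b2 * u_prev
        out.append(y_next)
        y2, y1, u_prev = y1, y_next, uf
    return np.array(out)

def mpc_step(y1, y2, u_prev, coeffs, ref, H=10, lam=0.1):
    """Receding horizon: solve a small least-squares problem, apply first move."""
    base = predict(y1, y2, u_prev, np.zeros(H), coeffs)
    G = np.column_stack([predict(y1, y2, u_prev, np.eye(H)[j], coeffs) - base
                         for j in range(H)])   # linear response to each input
    u_opt = np.linalg.solve(G.T @ G + lam * np.eye(H), G.T @ (ref - base))
    return u_opt[0]

# Closed loop: drive the output toward a setpoint of 1.0, re-planning each step.
y1 = y2 = u_prev = 0.0
for _ in range(30):
    u_now = mpc_step(y1, y2, u_prev, theta, ref=np.ones(10))
    y2, y1, u_prev = y1, plant(y1, y2, u_now, u_prev), u_now
print("output after 30 steps:", round(y1, 3))  # close to the setpoint
```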
Brown, Jeremiah R; MacKenzie, Todd A; Maddox, Thomas M; Fly, James; Tsai, Thomas T; Plomondon, Mary E; Nielson, Christopher D; Siew, Edward D; Resnic, Frederic S; Baker, Clifton R; Rumsfeld, John S; Matheny, Michael E
2015-12-11
Acute kidney injury (AKI) occurs frequently after cardiac catheterization and percutaneous coronary intervention. Although a clinical risk model exists for percutaneous coronary intervention, no models exist for both procedures, nor do existing models account for risk factors prior to the index admission. We aimed to develop such a model for use in prospective automated surveillance programs in the Veterans Health Administration. We collected data on all patients undergoing cardiac catheterization or percutaneous coronary intervention in the Veterans Health Administration from January 01, 2009 to September 30, 2013, excluding patients with chronic dialysis, end-stage renal disease, renal transplant, and missing pre- and postprocedural creatinine measurement. We used 4 AKI definitions in model development and included risk factors from up to 1 year prior to the procedure and at presentation. We developed our prediction models for postprocedural AKI using the least absolute shrinkage and selection operator (LASSO) and internally validated using bootstrapping. We developed models using 115 633 angiogram procedures and externally validated using 27 905 procedures from a New England cohort. Models had cross-validated C-statistics of 0.74 (95% CI: 0.74-0.75) for AKI, 0.83 (95% CI: 0.82-0.84) for AKIN2, 0.74 (95% CI: 0.74-0.75) for contrast-induced nephropathy, and 0.89 (95% CI: 0.87-0.90) for dialysis. We developed a robust, externally validated clinical prediction model for AKI following cardiac catheterization or percutaneous coronary intervention to automatically identify high-risk patients before and immediately after a procedure in the Veterans Health Administration. Work is ongoing to incorporate these models into routine clinical practice. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
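A schematic of the LASSO-plus-bootstrap recipe on synthetic data: the L1 penalty performs variable selection while fitting, and refitting on bootstrap resamples gives an internal-validation spread for the C-statistic. Cohort, features and penalty strength are all stand-ins; a full optimism correction is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

# Synthetic stand-in for a procedural cohort: rows are procedures,
# columns are candidate pre-procedure risk factors, y marks AKI events.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           weights=[0.9], random_state=0)

# L1 (LASSO-type) penalty shrinks most coefficients to exactly zero,
# performing variable selection and estimation in one step.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("selected predictors:", int((model.coef_ != 0).sum()), "of", X.shape[1])

# Internal validation by bootstrapping: refit on resamples, score on the
# original data, and report the spread of the C-statistic (AUC).
aucs = []
for b in range(200):
    Xb, yb = resample(X, y, random_state=b)
    mb = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xb, yb)
    aucs.append(roc_auc_score(y, mb.predict_proba(X)[:, 1]))
print("bootstrap C-statistic: %.3f (95%% interval %.3f-%.3f)"
      % (np.mean(aucs), np.percentile(aucs, 2.5), np.percentile(aucs, 97.5)))
```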
Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules
NASA Astrophysics Data System (ADS)
Garcia, A., III; Minning, C. P.; Cuddihy, E. F.
A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.
Process-based soil erodibility estimation for empirical water erosion models
USDA-ARS's Scientific Manuscript database
A variety of modeling technologies exist for water erosion prediction, each with specific parameters. It is of interest to scrutinize the parameters of a particular model from the standpoint of their compatibility with the datasets of other models. In this research, functional relationships between soil erodibilit...
Modelling of the 10-micrometer natural laser emission from the mesospheres of Mars and Venus
NASA Technical Reports Server (NTRS)
Deming, D.; Mumma, M. J.
1983-01-01
The NLTE radiative transfer problem is solved to obtain the 00°1 vibrational state population. This model successfully reproduces the existing center-to-limb observations, although higher spatial resolution observations are needed for a definitive test. The model also predicts total fluxes which are close to the observed values. The strength of the emission is predicted to be closely related to the instantaneous near-IR solar heating rate.
Piantadosi, Steven T.; Hayden, Benjamin Y.
2015-01-01
Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613
Bayesian Weibull tree models for survival analysis of clinico-genomic data
Clarke, Jennifer; West, Mike
2008-01-01
An important goal of research involving gene expression data for outcome prediction is to establish the ability of genomic data to define clinically relevant risk factors. Recent studies have demonstrated that microarray data can successfully cluster patients into low- and high-risk categories. However, the need exists for models which examine how genomic predictors interact with existing clinical factors and provide personalized outcome predictions. We have developed clinico-genomic tree models for survival outcomes which use recursive partitioning to subdivide the current data set into homogeneous subgroups of patients, each with a specific Weibull survival distribution. These trees can provide personalized predictive distributions of the probability of survival for individuals of interest. Our strategy is to fit multiple models; within each model we adopt a prior on the Weibull scale parameter and update this prior via Empirical Bayes whenever the sample is split at a given node. The decision to split is based on a Bayes factor criterion. The resulting trees are weighted according to their relative likelihood values and predictions are made by averaging over models. In a pilot study of survival in advanced stage ovarian cancer we demonstrate that clinical and genomic data are complementary sources of information relevant to survival, and we use the exploratory nature of the trees to identify potential genomic biomarkers worthy of further study. PMID:18618012
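The leaf-level ingredient of such trees can be sketched as follows: a maximum-likelihood Weibull fit to right-censored survival times, yielding a predictive survival curve for an individual falling in that node. The Bayes-factor splitting, Empirical Bayes prior updating and model averaging are omitted, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Right-censored survival times: event=1 observed, event=0 censored.
rng = np.random.default_rng(3)
true_shape, true_scale = 1.5, 10.0
t = true_scale * rng.weibull(true_shape, size=300)
censor = rng.uniform(0, 25, size=300)
time = np.minimum(t, censor)
event = (t <= censor).astype(float)

def neg_log_lik(params):
    k, lam = np.exp(params)            # optimize on the log scale for positivity
    z = time / lam
    log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # log f(t)
    log_surv = -z**k                                         # log S(t)
    return -np.sum(event * log_pdf + (1 - event) * log_surv)

fit = minimize(neg_log_lik, x0=[0.0, np.log(time.mean())], method="Nelder-Mead")
k_hat, lam_hat = np.exp(fit.x)
print("shape ~ %.2f (true 1.5), scale ~ %.2f (true 10.0)" % (k_hat, lam_hat))

# Personalized predictive survival curve: S(t) = exp(-(t/scale)^shape)
horizon = np.array([5.0, 10.0, 20.0])
print("P(survive t):", np.exp(-(horizon / lam_hat) ** k_hat).round(3))
```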
Frequency-domain prediction of broadband trailing edge noise from a blunt flat plate
NASA Astrophysics Data System (ADS)
Lee, Gwang-Se; Cheong, Cheolung
2013-10-01
The aim of this study is to develop an efficient methodology for frequency-domain prediction of broadband trailing edge noise from a blunt flat plate where a non-zero pressure gradient may exist in its boundary layer. This is achieved in two ways: (i) by developing new models for point pressure spectra within the boundary layer over a flat plate, and (ii) by deriving a simple formula to approximate the effect of convective velocity on the radiated noise spectrum. Firstly, two types of point pressure spectra, required as input data to predict the trailing edge noise in the frequency domain, are used. One is determined using semi-analytic (S-A) models based on boundary-layer theory combined with existing empirical models. Predictions using these models show good agreement with measurements where the zero-pressure-gradient assumption is valid. However, they agree poorly with large eddy simulation results in which a negative (favorable) pressure gradient is observed within the boundary layer. Based on boundary layer characteristics predicted using the large eddy simulations, a new model for point wall pressure spectra is proposed to account for the effect of a favorable pressure gradient over the blunt flat plate. Sound spectra predicted using these models are compared with measurements to validate the proposed prediction scheme. The advantage of the semi-analytic model is that it can be applied to problems at Reynolds numbers for which the empirical model is not available. In addition, the current models are expected to be applicable to cases where a favorable pressure gradient exists in the boundary layer over a blunt flat plate. Secondly, to quantitatively analyze the contributions of the pressure field within the turbulent boundary layer to trailing edge noise, the total pressure over the surface of the airfoil is decomposed into two constituents: the incident pressure generated in the boundary layer without a trailing edge, and the pressure formed by the scattering of the incident pressure at the trailing edge. Predictions made using each of the incident and scattered pressures reveal that the convective velocity of turbulence in the boundary layer dominantly affects the radiated sound pressure spectrum, both in the gross behavior of the overall acoustic pressure spectrum through the scattered pressure and in the narrow-band small fluctuations of the spectrum through the incident pressure. An interaction term between the incident and scattered pressures is defined, and the incident pressure is shown to contribute to the radiated acoustic pressure through this interaction term. Based on this finding, a simple model to efficiently compute the effects of the convection velocity of turbulence on the radiated sound pressure spectrum is proposed. It is shown that the proposed method can effectively and accurately predict the broadband trailing edge noise from the plate, considering both the incident and scattered contributions.
Chemical structure-based predictive model for methanogenic anaerobic biodegradation potential.
Meylan, William; Boethling, Robert; Aronson, Dallas; Howard, Philip; Tunkel, Jay
2007-09-01
Many screening-level models exist for predicting aerobic biodegradation potential from chemical structure, but anaerobic biodegradation generally has been ignored by modelers. We used a fragment contribution approach to develop a model for predicting biodegradation potential under methanogenic anaerobic conditions. The new model has 37 fragments (substructures) and classifies a substance as either fast or slow, relative to the potential to be biodegraded in the "serum bottle" anaerobic biodegradation screening test (Organization for Economic Cooperation and Development Guideline 311). The model correctly classified 90, 77, and 91% of the chemicals in the training set (n = 169) and two independent validation sets (n = 35 and 23), respectively. Accuracy of predictions of fast and slow degradation was equal for training-set chemicals, but fast-degradation predictions were less accurate than slow-degradation predictions for the validation sets. Analysis of the signs of the fragment coefficients for this and the other (aerobic) Biowin models suggests that in the context of simple group contribution models, the majority of positive and negative structural influences on ultimate degradation are the same for aerobic and methanogenic anaerobic biodegradation.
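A minimal sketch of the fragment-contribution scoring scheme, with invented fragments and coefficients (not the 37 fitted values) and a logistic squashing step added for illustration; the published model's exact functional form may differ.

```python
# Fragment-contribution scoring: a molecule is represented by counts of
# substructures, and a linear model turns those counts into a probability
# of "fast" anaerobic biodegradation. Fragments and coefficients here are
# invented for illustration, not the fitted Biowin fragment values.
import math

coefficients = {"ester": 0.45, "linear_alkyl": 0.20,
                "aromatic_ring": -0.30, "quaternary_carbon": -0.55}
intercept = 0.1

def predict_fast(fragment_counts: dict) -> float:
    score = intercept + sum(coefficients[f] * n
                            for f, n in fragment_counts.items())
    return 1.0 / (1.0 + math.exp(-score))       # squash to a probability

mol = {"ester": 2, "aromatic_ring": 1}          # hypothetical molecule
p = predict_fast(mol)
print("P(fast) = %.2f ->" % p, "fast" if p >= 0.5 else "slow")
```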
Small traveling clusters in attractive and repulsive Hamiltonian mean-field models.
Barré, Julien; Yamaguchi, Yoshiyuki Y
2009-03-01
Long-lasting small traveling clusters are studied in the Hamiltonian mean-field model by comparing attractive and repulsive interactions. Nonlinear Landau damping theory predicts that a Gaussian momentum distribution on a spatially homogeneous background permits the existence of traveling clusters in the repulsive case, as in plasma systems, but not in the attractive case. Nevertheless, extending the analysis to a two-parameter family of momentum distributions of Fermi-Dirac type, we theoretically predict the existence of traveling clusters in the attractive case; these findings are confirmed by direct N-body numerical simulations. The parameter region with the traveling clusters is much reduced in the attractive case with respect to the repulsive case.
Theory of low frequency noise transmission through turbines
NASA Technical Reports Server (NTRS)
Matta, R. K.; Mani, R.
1979-01-01
Improvements of the existing theory of low frequency noise transmission through turbines and development of a working prediction tool are described. The existing actuator-disk model and a new finite-chord model were utilized in an analytical study. The interactive effect of adjacent blade rows, higher order spinning modes, blade-passage shocks, and duct area variations were considered separately. The improved theory was validated using the data acquired in an earlier NASA program. Computer programs incorporating the improved theory were produced for transmission loss prediction purposes. The programs were exercised parametrically and charts constructed to define approximately the low frequency noise transfer through turbines. The loss through the exhaust nozzle and flow(s) was also considered.
DOT National Transportation Integrated Search
2009-07-01
"Considerable data exists for soils that were tested and documented, both for native properties and : properties with pozzolan stabilization. While the data exists there was no database for the Nebraska : Department of Roads to retrieve this data for...
Cox, Louis Anthony Tony
2017-08-01
Concentration-response (C-R) functions relating concentrations of pollutants in ambient air to mortality risks or other adverse health effects provide the basis for many public health risk assessments, benefits estimates for clean air regulations, and recommendations for revisions to existing air quality standards. The assumption that C-R functions relating levels of exposure and levels of response estimated from historical data usefully predict how future changes in concentrations would change risks has seldom been carefully tested. This paper critically reviews literature on C-R functions for fine particulate matter (PM2.5) and mortality risks. We find that most of them describe historical associations rather than valid causal models for predicting effects of interventions that change concentrations. The few papers that explicitly attempt to model causality rely on unverified modeling assumptions, casting doubt on their predictions about effects of interventions. A large literature on modern causal inference algorithms for observational data has been little used in C-R modeling. Applying these methods to publicly available data from Boston and the South Coast Air Quality Management District around Los Angeles shows that C-R functions estimated for one do not hold for the other. Changes in month-specific PM2.5 concentrations from one year to the next do not help to predict corresponding changes in average elderly mortality rates in either location. Thus, the assumption that estimated C-R relations predict effects of pollution-reducing interventions may not be true. Better causal modeling methods are needed to better predict how reducing air pollution would affect public health.
Fiorentine, Robert; Hillhouse, Maureen P
2004-01-01
Although previous research provided empirical support for the main assumptions of the Addicted-Self (A-S) Model of recovery, it is not known whether the model predicts recovery for various gender, ethnic, age, and drug preference populations. It may be that the model predicts recovery only for some groups of addicts and should not be viewed as a general theory of the recovery process. Addressing this concern using data from the Los Angeles Target Cities Drug Treatment Enhancement Project, it was determined that only trivial population differences exist in the primary variables associated with the A-S Model. The A-S Model predicts abstinence with about the same degree of accuracy and parsimony for all populations. The findings indicate that the A-S Model is a general theory of drug and alcohol addictive behavior cessation.
Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction
NASA Astrophysics Data System (ADS)
Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro
Energy conservation in buildings is one of the key issues from an environmental point of view, as it is in the industrial, transportation and residential sectors. Half of the total energy consumption in a building is accounted for by HVAC (Heating, Ventilating and Air Conditioning) systems. In order to realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach with physical and Just-in-Time (JIT) models for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for load prediction model learning are scarce, (2) it has a self-checking function, which continually supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure, and (3) it can adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement in load prediction performance is illustrated.
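A toy rendering of the hybrid idea, assuming an invented physical load equation and a k-nearest-neighbour regressor as the Just-in-Time component; the self-checking function becomes a simple consistency test between the two predictions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(4)

# Illustrative physical model: thermal load grows with outdoor temperature
# above the setpoint and with occupancy (coefficients are made up).
def physical_load(t_out, occupancy):
    return 12.0 * (t_out - 22.0) + 0.8 * occupancy

# Past operation data, with effects the physical model does not capture.
t_out = rng.uniform(24, 36, 500)
occ = rng.integers(0, 200, 500)
load = physical_load(t_out, occ) + 5.0 * np.sin(t_out) + rng.normal(0, 8, 500)

# Just-in-Time component: a local (k-nearest-neighbour) regression built
# around the current operating point, queried on demand.
jit = KNeighborsRegressor(n_neighbors=15).fit(np.column_stack([t_out, occ]), load)

def hybrid_predict(t, n, tol=25.0):
    phys = physical_load(t, n)
    data = jit.predict([[t, n]])[0]
    # Self-check: if the two predictors disagree too much, flag it rather
    # than silently trusting either one.
    if abs(phys - data) > tol:
        print("warning: physical and JIT predictions inconsistent")
    return 0.5 * phys + 0.5 * data   # simple equal-weight blend

print("predicted load:", round(hybrid_predict(30.0, 120), 1))
```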
Towards more accurate and reliable predictions for nuclear applications
NASA Astrophysics Data System (ADS)
Goriely, Stephane; Hilaire, Stephane; Dubray, Noel; Lemaître, Jean-François
2017-09-01
The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most of the nuclear data evaluation and prediction are still performed on the basis of phenomenological nuclear models. For the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. Nowadays mean-field models can be tuned at the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and therefore can replace the phenomenological inputs in the evaluation of nuclear data. The latest achievements to determine nuclear masses within the non-relativistic HFB approach, including the related uncertainties in the model predictions, are discussed. Similarly, recent efforts to determine fission observables within the mean-field approach are described and compared with more traditional existing models.
Thermophysical properties of liquid UO2, ZrO2 and corium by molecular dynamics and predictive models
NASA Astrophysics Data System (ADS)
Kim, Woong Kee; Shim, Ji Hoon; Kaviany, Massoud
2017-08-01
Predicting the fate of accident-melted nuclear fuel-cladding requires the understanding of the thermophysical properties which are lacking or have large scatter due to high-temperature experimental challenges. Using equilibrium classical molecular dynamics (MD), we predict the properties of melted UO2 and ZrO2 and compare them with the available experimental data and the predictive models. The existing interatomic potential models have been developed mainly for the polymorphic solid phases of these oxides, so they cannot be used to predict all the properties accurately. We compare and decipher the distinctions of those MD predictions using the specific property-related autocorrelation decays. The predicted properties are density, specific heat, heat of fusion, compressibility, viscosity, surface tension, and the molecular and electronic thermal conductivities. After the comparisons, we provide readily usable temperature-dependent correlations (including UO2-ZrO2 compounds, i.e. corium melt).
2009-01-01
Background Feed composition has a large impact on the growth of animals, particularly marine fish. We have developed a quantitative dynamic model that can predict the growth and body composition of marine fish for a given feed composition over a timespan of several months. The model takes into consideration the effects of environmental factors, particularly temperature, on growth, and it incorporates detailed kinetics describing the main metabolic processes (protein, lipid, and central metabolism) known to play major roles in growth and body composition. Results For validation, we compared our model's predictions with the results of several experimental studies. We showed that the model gives reliable predictions of growth, nutrient utilization (including amino acid retention), and body composition over a timespan of several months, longer than most of the previously developed predictive models. Conclusion We demonstrate that, despite the difficulties involved, multiscale models in biology can yield reasonable and useful results. The model predictions are reliable over several timescales and in the presence of strong temperature fluctuations, which are crucial factors for modeling marine organism growth. The model provides important improvements over existing models. PMID:19903354
Predictive modeling of synergistic effects in nanoscale ion track formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarkadoula, Eva; Pakarinen, Olli H.; Xue, Haizhou
Molecular dynamics techniques and the inelastic thermal spike model are used to study the coupled effects of inelastic energy loss due to 21 MeV Ni ion irradiation and pre-existing defects in SrTiO3. We determine the dependence on pre-existing defect concentration of nanoscale track formation occurring from the synergy between the inelastic energy loss and the pre-existing atomic defects. We show that the nanoscale ion tracks' size can be controlled by the concentration of pre-existing disorder. This work identifies a major gap in fundamental understanding concerning the role played by defects in electronic energy dissipation and electron–lattice coupling.
Models for predicting adverse outcomes can help reduce and focus animal testing with new and existing chemicals. This short "thought starter" describes how quantitative-structure activity relationship and systems biology models can be used to help define toxicity pathways and li...
Frappier, Vincent; Najmanovich, Rafael J.
2014-01-01
Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form. There is a trade-off between speed and accuracy in different choices. At one extreme one finds accurate but slow molecular-dynamics based methods with all-atom representations and detailed atomic potentials. At the other extreme are fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function including a pairwise atom-type non-bonded interaction term, and thus makes it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms them in the generation of conformational ensembles. Here we introduce a new application for NMA methods with the use of ENCoM in the prediction of the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, based on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for the prediction of the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased at predicting stabilizing mutations. PMID:24762569
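ENCoM's specific atom-type pairwise term is not reproduced here, but the elastic-network backbone it builds on can be sketched compactly: a Hessian assembled from harmonic springs between nearby Cα positions, whose low-frequency eigenvectors are the collective motions. Coordinates and parameters are illustrative.

```python
import numpy as np

def anm_modes(coords, cutoff=12.0, gamma=1.0):
    """Eigenmodes of a Calpha-only elastic network: harmonic springs connect
    residues closer than `cutoff`; low eigenvectors are slow collective motions."""
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2          # off-diagonal H_ij
            H[3*i:3*i+3, 3*j:3*j+3] = block
            H[3*j:3*j+3, 3*i:3*i+3] = block
            H[3*i:3*i+3, 3*i:3*i+3] -= block              # diagonal accumulates
            H[3*j:3*j+3, 3*j:3*j+3] -= block
    vals, vecs = np.linalg.eigh(H)
    return vals[6:], vecs[:, 6:]      # drop six rigid-body zero modes

# Toy "protein": 40 pseudo-residues on a helix (coordinates in angstroms).
i = np.arange(40)
coords = np.column_stack([5 * np.cos(0.6 * i), 5 * np.sin(0.6 * i), 1.5 * i])
eigvals, modes = anm_modes(coords)
print("lowest non-trivial eigenvalues:", np.round(eigvals[:3], 4))
```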
NASA Astrophysics Data System (ADS)
Dash, Rajashree
2017-11-01
Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models with better prediction capability. In this paper, an evolutionary framework is proposed using an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF, and USD/JPY) accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.
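A compact sketch of a functional link network of this general kind: lagged, scaled exchange rates are expanded through trigonometric basis functions and combined linearly. Ridge least squares stands in here for the ISFL weight search, and the series is synthetic.

```python
import numpy as np

def functional_link_features(x):
    """Trigonometric functional expansion of a (scaled) lag window: each lag
    is mapped to [x, sin(k*pi*x), cos(k*pi*x)] for k = 1, 2."""
    feats = [x]
    for k in (1, 2):
        feats += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]
    return np.concatenate(feats, axis=1)

rng = np.random.default_rng(5)
rate = np.cumsum(rng.normal(0, 0.003, 600)) + 1.3       # synthetic rate series
rate = (rate - rate.min()) / (rate.max() - rate.min())  # scale to [0, 1]

lags = 5
X = np.column_stack([rate[i:len(rate) - lags + i] for i in range(lags)])
y = rate[lags:]
Phi = functional_link_features(X)

# Ridge-regularized least squares stands in for the metaheuristic search
# (the paper tunes these weights with improved shuffled frog leaping).
lam = 1e-4
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
pred = Phi @ w
print("in-sample RMSE:", np.sqrt(np.mean((pred - y) ** 2)).round(5))
```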
Prediction of Metabolism of Drugs using Artificial Intelligence: How far have we reached?
Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar
2016-01-01
Information about drug metabolism is an essential component of drug development. Modeling drug metabolism requires identification of the involved enzymes, the rate and extent of metabolism, the sites of metabolism, etc. There have been continuous attempts at predicting the metabolism of drugs using artificial intelligence in an effort to reduce the attrition rate of drug candidates entering preclinical and clinical trials. Currently, a number of predictive models for metabolism are available, using support vector machines, artificial neural networks, Bayesian classifiers, etc. There is an urgent need to review their progress so far and address the existing challenges in the prediction of metabolism. In this attempt, we present the currently available models in the literature and some of the critical issues regarding prediction of drug metabolism.
Toffanin, V; Penasa, M; McParland, S; Berry, D P; Cassandro, M; De Marchi, M
2015-05-01
The aim of the present study was to estimate genetic parameters for calcium (Ca), phosphorus (P) and titratable acidity (TA) in bovine milk predicted by mid-IR spectroscopy (MIRS). Data consisted of 2458 Italian Holstein-Friesian cows sampled once in 220 farms. Information per sample on protein and fat percentage, pH and somatic cell count, as well as test-day milk yield, was also available. (Co)variance components were estimated using univariate and bivariate animal linear mixed models. Fixed effects considered in the analyses were herd of sampling, parity, lactation stage and a two-way interaction between parity and lactation stage; an additive genetic and a residual term were included in the models as random effects. Estimates of heritability for Ca, P and TA were 0.10, 0.12 and 0.26, respectively. Moderate to strong positive phenotypic correlations (0.33 to 0.82) existed between Ca, P and TA, whereas weak to moderate phenotypic correlations (0.00 to 0.45) existed between these traits and both milk quality and yield. Moderate to strong genetic correlations (0.28 to 0.92) existed between Ca, P and TA, and between these predicted traits and both fat and protein percentage (0.35 to 0.91). The existence of heritable genetic variation for Ca, P and TA, coupled with the potential to predict these components during routine cow milk testing, implies that genetic gain in these traits is indeed possible.
NASA Astrophysics Data System (ADS)
Gide, Milind S.; Karam, Lina J.
2016-08-01
With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, they have notable shortcomings. In this work, we discuss the shortcomings of existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density and overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing and future metrics are better at estimating the quality of saliency prediction, and can be used as a benchmark.
Crystal study and econometric model
NASA Technical Reports Server (NTRS)
1975-01-01
An econometric model was developed that can be used to predict demand and supply figures for crystals over a time horizon roughly concurrent with that of NASA's Space Shuttle Program, that is, 1975 through 1990. The model includes an equation to predict the impact on investment in the crystal-growing industry. In fact, two models are presented: the first is a theoretical model which follows rather strictly the standard economic concepts involved in supply and demand analysis; the second is a modified version which, though not quite as theoretically sound, is testable using existing data sources.
A PBPK model for TCE with specificity for the male LE rat that accurately predicts TCE tissue time-course data has not been developed, although other PBPK models for TCE exist. Development of such a model was the present aim. The PBPK model consisted of 5 compartments: fat; slowl...
We developed a numerical model to predict chemical concentrations in indoor environments resulting from soil vapor intrusion and volatilization from groundwater. The model, which integrates new and existing algorithms for chemical fate and transport, was originally...
RAPID ASSESSMENT OF URBAN WETLANDS: FUNCTIONAL ASSESSMENT MODEL DEVELOPMENT AND EVALUATION
The objective of this study was to test the ability of existing hydrogeomorphic (HGM) functional assessment models and our own proposed models to predict rates of nitrate production and removal, functions critical to water quality protection, in forested riparian wetlands in nort...
Size effects in non-linear heat conduction with flux-limited behaviors
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2017-11-01
Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, heat flux cannot exist in problems at sufficiently small scales. The existence of heat flux requires the size of the heat conduction domain to exceed a corresponding critical size, determined by the physical properties and boundary temperatures. These critical sizes can be regarded as the theoretical limits of the applicable ranges of these non-linear heat conduction models with flux-limited behaviors. For sufficiently small-scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models also predict the theoretical possibility of second-law violations and of multiple solutions. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction of the fast-diffusion type, which can also predict flux-limited behaviors.
NASA Technical Reports Server (NTRS)
Morren, Sybil Huang
1991-01-01
Transonic flow of dense gases for two-dimensional, steady-state flow over a NACA 0012 airfoil was predicted analytically. The computer code used to model the dense gas behavior was a modified version of Jameson's FLO52 airfoil code. The modifications to the code enabled modeling of the dense gas behavior near the saturated vapor curve and the critical pressure region, where the fundamental derivative, Gamma, is negative. This negative Gamma region is of interest because nonclassical gas behavior, such as the formation and propagation of expansion shocks and the disintegration of inadmissible compression shocks, may exist there. The results indicated that dense gases with undisturbed thermodynamic states in the negative Gamma region show a significant reduction in the extent of the transonic regime as compared to that predicted by perfect gas theory. The results support existing theories and predictions of nonclassical, dense gas behavior from previous investigations.
NASA Technical Reports Server (NTRS)
Urquhart, Erin A.; Zaitchik, Benjamin F.; Waugh, Darryn W.; Guikema, Seth D.; Del Castillo, Carlos E.
2014-01-01
The effect that climate change and variability will have on waterborne bacteria is a topic of increasing concern for coastal ecosystems, including the Chesapeake Bay. Surface water temperature trends in the Bay indicate a warming pattern of roughly 0.3-0.4 °C per decade over the past 30 years. It is unclear what impact future warming will have on pathogens currently found in the Bay, including Vibrio spp. Using historical environmental data, combined with three different statistical models of Vibrio vulnificus probability, we explore the relationship between environmental change and predicted Vibrio vulnificus presence in the upper Chesapeake Bay. We find that the predicted response of V. vulnificus probability to high temperatures in the Bay differs systematically between models of differing structure. As existing publicly available datasets are inadequate to determine which model structure is most appropriate, the impact of climatic change on the probability of V. vulnificus presence in the Chesapeake Bay remains uncertain. This result points to the challenge of characterizing climate sensitivity of ecological systems in which data are sparse and only statistical models of ecological sensitivity exist.
Butler, Barbara A; Ranville, James F; Ross, Philippe E
2008-06-01
North Fork Clear Creek (NFCC) in Colorado, an acid-mine drainage (AMD) impacted stream, was chosen to examine the distribution of dissolved and particulate Cu, Fe, Mn, and Zn in the water column, with respect to seasonal hydrologic controls. NFCC is a high-gradient stream with discharge directly related to snowmelt and strong seasonal storms. Additionally, conditions in the stream cause rapid precipitation of large amounts of hydrous iron oxides (HFO) that sequester metals. Because AMD-impacted systems are complex, geochemical modeling may assist with predictions and/or confirmations of processes occurring in these environments. This research used Visual-MINTEQ to determine if field data collected over a two and one-half year study would be well represented by modeling with a currently existing model, while limiting the number of processes modeled and without modifications to the existing model's parameters. Observed distributions between dissolved and particulate phases in the water column varied greatly among the metals, with average dissolved fractions being >90% for Mn, approximately 75% for Zn, approximately 30% for Cu, and <10% for Fe. A strong seasonal trend was observed for the metals predominantly in the dissolved phase (Mn and Zn), with increasing concentrations during base-flow conditions and decreasing concentrations during spring-runoff. This trend was less obvious for Cu and Fe. Within hydrologic seasons, storm events significantly influenced in-stream metals concentrations. The most simplified modeling, using solely sorption to HFO, gave predicted percentage particulate Cu results for most samples to within a factor of two of the measured values, but modeling data were biased toward over-prediction. About one-half of the percentage particulate Zn data comparisons fell within a factor of two, with the remaining data being under-predicted. Slightly more complex modeling, which included dissolved organic carbon (DOC) as a solution phase ligand, significantly reduced the positive bias between observed and predicted percentage particulate Cu, while inclusion of hydrous manganese oxide (HMO) yielded model results more representative of the observed percentage particulate Zn. These results indicate that there is validity in the use of an existing model, without alteration and with typically collected water chemistry data, to describe complex natural systems, but that processes considered optimal for one metal might not be applicable for all metals in a given water sample.
Learning Latent Variable and Predictive Models of Dynamical Systems
2009-10-01
This thesis contributes (1) novel learning algorithms for existing dynamical system models that overcome significant limitations of previous...
NASA Astrophysics Data System (ADS)
Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.
2018-05-01
The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering to predicting structural degradation, the adequacy of the chosen process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.
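The point about process noise can be illustrated with a toy particle filter for Paris-law crack growth, where a multiplicative lognormal perturbation keeps every particle's increment positive and therefore respects the monotonic dynamics. All constants are illustrative, not taken from the aeronautical dataset.

```python
import numpy as np

rng = np.random.default_rng(6)

# Paris-law crack growth (monotonic): da/dN = C * (dK)^m, dK = ds*sqrt(pi*a).
C, m, ds = 1e-10, 3.0, 80.0
def growth(a, dN=2000.0):
    return C * (ds * np.sqrt(np.pi * a)) ** m * dN

# Synthetic "true" crack history and noisy length measurements.
a_true, meas = 2e-3, []
for _ in range(50):
    a_true += growth(a_true)
    meas.append(a_true + rng.normal(0.0, 2e-4))

# Particle filter whose process noise is multiplicative and lognormal:
# every particle's increment stays positive, matching the monotonic
# degradation (additive Gaussian noise could make cracks "heal").
N = 2000
particles = 2e-3 * rng.lognormal(0.0, 0.1, N)
for z in meas:
    particles += growth(particles) * rng.lognormal(0.0, 0.2, N)   # propagate
    w = np.exp(-0.5 * ((z - particles) / 2e-4) ** 2) + 1e-300     # likelihood
    particles = particles[rng.choice(N, size=N, p=w / w.sum())]   # resample
print("final estimate: %.2f mm, truth: %.2f mm"
      % (particles.mean() * 1e3, a_true * 1e3))
```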
Reducing usage of the computational resources by event driven approach to model predictive control
NASA Astrophysics Data System (ADS)
Misik, Stefan; Bradac, Zdenek; Cela, Arben
2017-08-01
This paper deals with real-time and optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
Hoogendoorn, Mark; Szolovits, Peter; Moons, Leon M G; Numans, Mattijs E
2016-05-01
Machine learning techniques can be used to extract predictive models for diseases from electronic medical records (EMRs). However, the nature of EMRs makes it difficult to apply off-the-shelf machine learning techniques while still exploiting the rich content of the EMRs. In this paper, we explore the usage of a range of natural language processing (NLP) techniques to extract valuable predictors from uncoded consultation notes and study whether they can help to improve predictive performance. We study a number of existing techniques for the extraction of predictors from the consultation notes, namely a bag-of-words approach and topic modeling. In addition, we develop a dedicated technique to match the uncoded consultation notes with a medical ontology. We apply these techniques as an extension to an existing pipeline to extract predictors from EMRs. We evaluate them in the context of predictive modeling for colorectal cancer (CRC), a disease known to be difficult to diagnose before performing an endoscopy. Our results show that we are able to extract useful information from the consultation notes. The predictive performance of the ontology-based extraction method moves significantly beyond the benchmark of age and gender alone (area under the receiver operating characteristic curve (AUC) of 0.870 versus 0.831). We also observe more accurate predictive models by adding features derived from processing the consultation notes compared to solely using coded data (AUC of 0.896 versus 0.882), although the difference is not significant. The extracted features from the notes are shown to be equally predictive (i.e., there is no significant difference in performance) compared to the coded data of the consultations. It is possible to extract useful predictors from uncoded consultation notes that improve predictive performance. Techniques linking text to concepts in medical ontologies to derive these predictors are shown to perform best for predicting CRC in our EMR dataset. Copyright © 2016 Elsevier B.V. All rights reserved.
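The bag-of-words step can be sketched in a few lines, with invented notes and outcomes; the ontology-matching technique that performed best in the study is not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in for free-text consultation notes and a binary outcome.
notes = [
    "patient reports rectal bleeding and weight loss",
    "routine check, no complaints",
    "abdominal pain, altered bowel habit, anaemia on labs",
    "medication review for hypertension",
    "fatigue and iron deficiency anaemia",
    "vaccination visit, healthy",
]
outcome = [1, 0, 1, 0, 1, 0]   # 1 = later CRC diagnosis (illustrative)

# Bag-of-words turns each note into token counts; the classifier then
# learns which tokens carry predictive signal beyond coded data.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2), min_df=1),
                    LogisticRegression(max_iter=1000))
clf.fit(notes, outcome)
print(clf.predict_proba(["weight loss and bleeding"])[:, 1])
```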
The Role of Masculinity and Depressive Symptoms in Predicting Suicidal Ideation in Homeless Men.
Genuchi, Matthew C
2018-02-20
Men's suicide rates may be influenced by difficulties recognizing externalizing depressive symptoms in men who adhere to hegemonic masculine gender role norms. The purpose of this study was to investigate the ability of externalizing depressive symptoms, internalizing depressive symptoms, and hegemonic masculinity to predict the existence and severity of suicidal ideation. Homeless men (n = 94) completed questionnaires at a resource center in the Rocky Mountain Western United States. Internalizing symptoms predicted the existence of suicidal ideation, and both externalizing and internalizing symptoms predicted increased severity of suicidal ideation. The masculine norms of violence and playboy were correlated with men's suicidal ideation. An externalizing-internalizing model of predicting suicide in men and men's adherence to certain masculine gender role norms may be valuable to further efforts in suicide assessment and prevention.
Link prediction in the network of global virtual water trade
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; Laio, Francesco; Ridolfi, Luca
2016-04-01
Through international food trade, water resources are 'virtually' transferred from the country of production to the country of consumption. International food trade thus implies a network of virtual water flows from exporting to importing countries (i.e., nodes). Given the dynamical behavior of the network, where food-trade relations (i.e., links) are created and dismissed every year, link prediction becomes a challenge. In this study, we propose a novel methodology for link prediction in the virtual water network. The model aims at identifying the main factors (among 17 different variables) driving the creation of a food-trade relation between any two countries over the period between 1986 and 2011. Furthermore, the model can be exploited to investigate the network configuration in the future, under different possible (climatic and demographic) scenarios. The model grounds the existence of a link between any two nodes on the link weight (i.e., the virtual water flow): a link exists when the nodes exchange a minimum (fixed) volume of virtual water. Starting from a set of potential links between any two nodes, we fit the associated virtual water flows (both the real and the null ones) by means of multivariate linear regressions. Then, links with estimated flows higher than a minimum value (i.e., a threshold) are considered active links, while the others are non-active. The discrimination between active and non-active links through the threshold introduces an error (the link-prediction error) because some real links are lost (missed links) and some non-existing links (spurious links) are inevitably introduced into the network. The major drivers are those that most reduce the link-prediction error. Once the structure of the unweighted virtual water network is known, we again apply linear regressions to assess the major factors driving the fluxes traded along (modelled) active links. Results indicate that, on the one hand, population and fertilizer use, together with link properties (such as the distance between nodes), are the major factors driving link creation; on the other hand, population, distance, and gross domestic product are essential to model the flux magnitude. The results are promising, since the model correctly predicts 85% of the 16,422 food-trade links (15% are missed), while spuriously adding to the real network only 5% of non-existing links. The link-prediction error, evaluated as the sum of the percentages of missed and spurious links, is around 20% and is constant over the study period. Only 0.01% of the global virtual water flow is traded along missed links, and an even lower flow (0.003%) is added by the spurious links.
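A stylized version of the threshold mechanism on synthetic country pairs: flows are regressed on a few drivers, and a candidate link is declared active when its fitted flow clears the same minimum-volume threshold, after which missed and spurious links can be counted. Drivers, coefficients and the threshold are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Candidate country pairs described by illustrative drivers. True flows exist
# above a fixed minimum virtual water volume (on a log scale here).
n = 5000
drivers = np.column_stack([rng.normal(10, 1, n),    # log population product
                           rng.normal(7, 1, n),     # log distance
                           rng.normal(0, 1, n)])    # fertilizer use
true_beta = np.array([1.2, -1.5, 0.6])
log_flow = drivers @ true_beta + rng.normal(0, 1.5, n)
THRESH = 1.0                                        # minimum virtual water volume
link_exists = log_flow > THRESH

# Fit the flow regression on all candidate pairs (real and null flows), then
# call a link "active" when its fitted flow clears the same threshold.
X = np.column_stack([np.ones(n), drivers])
beta_hat = np.linalg.lstsq(X, log_flow, rcond=None)[0]
pred_active = X @ beta_hat > THRESH

missed = np.mean(link_exists & ~pred_active) / link_exists.mean()
spurious = np.mean(~link_exists & pred_active) / link_exists.mean()
print("missed links: %.1f%%, spurious links: %.1f%%"
      % (100 * missed, 100 * spurious))
```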
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
Collaborative Research: Robust Climate Projections and Stochastic Stability of Dynamical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilya Zaliapin
This project focused on conceptual exploration of El Nino/Southern Oscillation (ENSO) variability and sensitivity using a Delay Differential Equation developed in the project. We have (i) established the existence and continuous dependence of solutions of the model, (ii) explored multiple model solutions and the distribution of solution extrema, and (iii) established and explored the phase-locking phenomenon and the existence of multiple solutions for the same values of the model parameters. In addition, we have applied to our model the concept of a pullback attractor, which greatly facilitated predictive understanding of the nonlinear model's behavior.
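A delay differential equation of the general conceptual-ENSO form can be integrated with a plain history buffer, as in the sketch below; the specific equation and parameters are illustrative stand-ins, not the project's model.

```python
import numpy as np

# Delay differential equation of a form common in conceptual ENSO models:
# dh/dt = -tanh(kappa * h(t - tau)) + b * cos(2*pi*t), where h is a
# thermocline-depth anomaly, tau a wave-transit delay, and the cosine the
# seasonal forcing. Parameters below are illustrative.
kappa, tau, b = 11.0, 0.6, 1.0
dt, t_end = 0.001, 20.0
n_delay = int(tau / dt)

steps = int(t_end / dt)
h = np.zeros(steps + n_delay)
h[:n_delay] = 0.1            # constant initial history on [-tau, 0]

for i in range(n_delay, steps + n_delay):
    t = (i - n_delay) * dt
    dh = -np.tanh(kappa * h[i - n_delay]) + b * np.cos(2 * np.pi * t)
    h[i] = h[i - 1] + dt * dh     # forward Euler with a delayed argument

trajectory = h[n_delay:]
print("min/max anomaly:", trajectory.min().round(2), trajectory.max().round(2))
```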
Ensemble-based prediction of RNA secondary structures.
Aghaeepour, Nima; Hoos, Holger H
2013-04-24
Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.
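A simplified ensemble in the spirit of this approach (not the actual AveRNA algorithm) can be sketched as weighted voting over predicted base pairs, with the acceptance threshold providing the false-positive/false-negative control mentioned above.

```python
from collections import Counter

# Secondary structures as sets of base pairs (i, j) from three hypothetical
# prediction procedures run on the same sequence.
predictions = [
    {(1, 20), (2, 19), (3, 18), (8, 14)},   # method A
    {(1, 20), (2, 19), (4, 17), (8, 14)},   # method B
    {(1, 20), (2, 19), (3, 18), (9, 13)},   # method C
]
weights = [0.5, 0.3, 0.2]   # e.g., tuned on a training set

support = Counter()
for w, struct in zip(weights, predictions):
    for pair in struct:
        support[pair] += w

# The acceptance threshold directly trades false positives against false
# negatives: raise it for higher-confidence, sparser structures.
threshold = 0.5
consensus = {p for p, s in support.items() if s >= threshold}
print(sorted(consensus))
```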
External validation of preexisting first trimester preeclampsia prediction models.
Allen, Rebecca E; Zamora, Javier; Arroyo-Manzano, David; Velauthar, Luxmilar; Allotey, John; Thangaratinam, Shakila; Aquilina, Joseph
2017-10-01
To validate the increasing number of prognostic models being developed for preeclampsia using our own prospective study. A systematic review of the literature that assessed biomarkers, uterine artery Doppler and maternal characteristics in the first trimester for the prediction of preeclampsia was performed, and models were selected based on predefined criteria. Validation was performed by applying the regression coefficients published in the different derivation studies to our cohort. We assessed the models' discrimination ability and calibration. Twenty models were identified for validation. The discrimination ability observed in the derivation studies (area under the curve, AUC) ranged from 0.70 to 0.96; when these models were validated against the validation cohort, the AUCs varied considerably, ranging from 0.504 to 0.833. Comparing the AUCs obtained in the derivation studies to those in the validation cohort, we found statistically significant differences in several studies. There is currently no definitive prediction model with adequate discrimination for preeclampsia that performs as well when applied to a different population and can differentiate well between the highest and lowest risk groups within the tested population. The large number of pre-existing models limits the value of further model development; future research should be focused on further attempts to validate existing models and on assessing whether their implementation improves patient care. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
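A minimal sketch of this style of external validation, assuming a published logistic model with hypothetical coefficients and predictor names (not those of any model in the review):

```python
# Sketch of external validation: apply published logistic-regression
# coefficients to a new cohort and measure discrimination (AUC).
# Coefficients and predictor names are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

coef = {"intercept": -5.2, "log_PAPPA": -0.8, "MAP": 0.04, "uterine_PI": 1.1}

def predicted_risk(X):
    lp = (coef["intercept"]
          + coef["log_PAPPA"] * X[:, 0]
          + coef["MAP"] * X[:, 1]
          + coef["uterine_PI"] * X[:, 2])
    return 1.0 / (1.0 + np.exp(-lp))

rng = np.random.default_rng(0)
X = rng.normal([0.0, 85.0, 1.6], [0.5, 10.0, 0.4], size=(500, 3))
y = rng.binomial(1, predicted_risk(X))        # stand-in outcomes
print(roc_auc_score(y, predicted_risk(X)))    # discrimination in the new cohort
```

Calibration would additionally compare predicted and observed event rates across risk groups, as the study does.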
Evaluating remedial alternatives for an acid mine drainage stream: A model post audit
Runkel, Robert L.; Kimball, Briant A.; Walton-Day, Katherine; Verplanck, Philip L.; Broshears, Robert E.
2012-01-01
A post audit for a reactive transport model used to evaluate acid mine drainage treatment systems is presented herein. The post audit is based on a paired synoptic approach in which hydrogeochemical data are collected at low (existing conditions) and elevated (following treatment) pH. Data obtained under existing, low-pH conditions are used for calibration, and the resultant model is used to predict metal concentrations observed following treatment. Predictions for Al, As, Fe, H+, and Pb accurately reproduce the observed reduction in dissolved concentrations afforded by the treatment system, and the information provided in regard to standard attainment is also accurate (predictions correctly indicate attainment or nonattainment of water quality standards for 19 of 25 cases). Errors associated with Cd, Cu, and Zn are attributed to misspecification of sorbent mass (precipitated Fe). In addition to these specific results, the post audit provides insight in regard to calibration and sensitivity analysis that is contrary to conventional wisdom. Steps taken during the calibration process to improve simulations of As sorption were ultimately detrimental to the predictive results, for example, and the sensitivity analysis failed to bracket observed metal concentrations.
Zhu, Qing; Riley, William J; Tang, Jinyun
2017-04-01
Terrestrial plants assimilate anthropogenic CO2 through photosynthesis and synthesizing new tissues. However, sustaining these processes requires plants to compete with microbes for soil nutrients, which therefore calls for an appropriate understanding and modeling of nutrient competition mechanisms in Earth System Models (ESMs). Here, we survey existing plant-microbe competition theories and their implementations in ESMs. We found no consensus regarding the representation of nutrient competition and that observational and theoretical support for current implementations is weak. To reconcile this situation, we applied the Equilibrium Chemistry Approximation (ECA) theory to plant-microbe nitrogen competition in a detailed grassland 15N tracer study and found that competition theories in current ESMs fail to capture observed patterns, while the ECA prediction simplifies the complex nature of nutrient competition and quantitatively matches the 15N observations. Since plant carbon dynamics are strongly modulated by soil nutrient acquisition, we conclude that (1) predicted nutrient limitation effects on terrestrial carbon accumulation by existing ESMs may be biased and (2) our ECA-based approach may improve predictions by mechanistically representing plant-microbe nutrient competition. © 2016 by the Ecological Society of America.
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
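A sketch of computing an IPCW-weighted c-statistic with scikit-survival's concordance_index_ipcw, on synthetic data rather than the prostate cancer study's; the paper's estimators and NCAR sensitivity analysis go well beyond this single call:

```python
# Sketch of an IPCW-weighted c-statistic using scikit-survival, which
# implements inverse probability of censoring weighting under CAR.
# Data below are synthetic stand-ins.
import numpy as np
from sksurv.util import Surv
from sksurv.metrics import concordance_index_ipcw

rng = np.random.default_rng(1)
risk = rng.normal(size=200)                # model risk scores
time = rng.exponential(np.exp(-risk))      # higher risk -> shorter survival
cens = rng.exponential(1.0, size=200)      # independent censoring times
y = Surv.from_arrays(event=time <= cens, time=np.minimum(time, cens))

# The censoring distribution is estimated on the training data (reused here).
cindex, *_ = concordance_index_ipcw(y, y, estimate=risk,
                                    tau=np.percentile(y["time"], 80))
print(cindex)
```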
The Use of a Block Diagram Simulation Language for Rapid Model Prototyping
NASA Technical Reports Server (NTRS)
Whitlow, Johnathan E.; Engrand, Peter
1996-01-01
The research performed this summer was a continuation of work performed during the 1995 NASA/ASEE Summer Fellowship. The focus of the work was to expand previously generated predictive models for liquid oxygen (LOX) loading into the external fuel tank of the shuttle. The models, which were developed using a block diagram simulation language known as VisSim, were evaluated on numerous shuttle flights and found to perform well in most cases. Once the models were refined and validated, the predictive methods were integrated into the existing Rockwell software propulsion advisory tool (PAT). Although time was not sufficient to completely integrate the models into PAT, the ability to predict flows and pressures in the orbiter section and graphically display the results was accomplished.
A review of methods for predicting air pollution dispersion
NASA Technical Reports Server (NTRS)
Mathis, J. J., Jr.; Grose, W. L.
1973-01-01
Air pollution modeling and problem areas in air pollution dispersion modeling were surveyed. Emission source inventory, meteorological data, and turbulent diffusion are discussed in terms of developing a dispersion model. Existing mathematical models of urban air pollution, and highway and airport models, are discussed along with their limitations. Recommendations for improving modeling capabilities are included.
Modelling proteins' hidden conformations to predict antibiotic resistance
NASA Astrophysics Data System (ADS)
Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.
2016-10-01
TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, C.H.; Ready, A.B.; Rea, J.
1995-06-01
Versions of the computer program PROATES (PROcess Analysis for Thermal Energy Systems) have been used since 1979 to analyse plant performance improvement proposals relating to existing plant and also to evaluate new plant designs. Several plant modifications have been made to improve performance based on the model predictions, and the predicted performance has been realised in practice. The program was born out of a need to model the overall steady-state performance of complex plant to enable proposals to change plant component items or operating strategy to be evaluated. To do this with confidence it is necessary to model the multiple thermodynamic interactions between the plant components. The modelling system is modular in concept, allowing the configuration of individual plant components to represent any particular power plant design. A library exists of physics-based modules which have been extensively validated and which provide representations of a wide range of boiler, turbine and CW system components. Changes to model data and construction are achieved via a user-friendly graphical model editing/analysis front-end, with results being presented via the computer screen or hard copy. The paper describes briefly the modelling system but concentrates mainly on the application of the modelling system to assess design re-optimisation, firing with different fuels and the re-powering of an existing plant.
Wind power application research on the fusion of deterministic and ensemble prediction
NASA Astrophysics Data System (ADS)
Lan, Shi; Lina, Xu; Yuzhu, Hao
2017-07-01
A fused wind speed product for the wind farm is designed using ensemble prediction wind speed products from the European Centre for Medium-Range Weather Forecasts (ECMWF) and specialized numerical wind power model products based on Mesoscale Model 5 (MM5) and the Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. A single-valued forecast is formed by calculating ensemble statistics of the Bayesian probabilistic forecast representing the uncertainty of the ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and, based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and its confidence interval are provided. The results show that the fused forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the 0-24 h existing deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3 % and the correlation coefficient (R) is increased by 12.5 %. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7 % and R is increased by 14.5 %. Additionally, the MAE did not increase with the length of the forecast lead time.
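A toy sketch of one fusion idea, weighting the ensemble mean and the deterministic forecast by their inverse historical MAEs; the paper's actual scheme uses Bayesian model averaging with ARIMA interpolation, so this conveys only the flavor of the combination step:

```python
# Minimal sketch of forecast fusion: combine an ensemble-mean and a
# deterministic wind speed forecast with weights inversely proportional
# to their recent mean absolute errors over a verification window.
import numpy as np

def fuse(ens_mean, det, obs_hist, ens_hist, det_hist):
    mae_ens = np.mean(np.abs(ens_hist - obs_hist))
    mae_det = np.mean(np.abs(det_hist - obs_hist))
    w_ens = (1 / mae_ens) / (1 / mae_ens + 1 / mae_det)
    return w_ens * ens_mean + (1 - w_ens) * det

# Hypothetical wind speeds (m/s) over a past verification window.
obs = np.array([6.1, 7.4, 5.9, 8.2])
ens = np.array([5.8, 7.9, 6.4, 7.6])
det = np.array([6.9, 6.5, 5.1, 9.0])
print(fuse(ens_mean=7.0, det=7.8, obs_hist=obs, ens_hist=ens, det_hist=det))
```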
Hicks, Katharine E; Zhao, Yichen; Fallah, Nader; Rivers, Carly S; Noonan, Vanessa K; Plashkes, Tova; Wai, Eugene K; Roffey, Darren M; Tsai, Eve C; Paquet, Jerome; Attabib, Najmedden; Marion, Travis; Ahn, Henry; Phan, Philippe
2017-10-01
Traumatic spinal cord injury (SCI) is a debilitating condition with limited treatment options for neurologic or functional recovery. The ability to predict the prognosis of walking post injury with emerging prediction models could aid in rehabilitation strategies and reintegration into the community. To revalidate an existing clinical prediction model for independent ambulation (van Middendorp et al., 2011) using acute and long-term post-injury follow-up data, and to investigate the accuracy of a simplified model using prospectively collected data from a Canadian multicenter SCI database, the Rick Hansen Spinal Cord Injury Registry (RHSCIR). Prospective cohort study. The analysis cohort consisted of 278 adult individuals with traumatic SCI enrolled in the RHSCIR for whom complete neurologic examination data and Functional Independence Measure (FIM) outcome data were available. The FIM locomotor score was used to assess independent walking ability (defined as modified or complete independence in walk or combined walk and wheelchair modality) at 1-year follow-up for each participant. A logistic regression (LR) model based on age and four neurologic variables was applied to our cohort of 278 RHSCIR participants. Additionally, a simplified LR model was created. The Hosmer-Lemeshow goodness-of-fit test was used to check whether the predictive model is applicable to our data set. The performance of the model was verified by calculating the area under the receiver operating characteristic curve (AUC). The accuracy of the model was tested using a cross-validation technique. This study was supported by a grant from The Ottawa Hospital Academic Medical Organization ($50,000 over 2 years). The RHSCIR is sponsored by the Rick Hansen Institute and is supported by funding from Health Canada, Western Economic Diversification Canada, and the provincial governments of Alberta, British Columbia, Manitoba, and Ontario. ET and JP report receiving grants from the Rick Hansen Institute (approximately $60,000 and $30,000 per year, respectively). DMR reports receiving remuneration for consulting services provided to Palladian Health, LLC and Pacira Pharmaceuticals, Inc ($20,000-$30,000 annually), although neither relationship presents a potential conflict of interest with the submitted work. KEH received a grant for involvement in the present study from the Government of Canada as part of the Canada Summer Jobs Program ($3,000). JP reports receiving an educational grant from Medtronic Canada outside of the submitted work ($75,000 annually). TM reports receiving educational fellowship support from AO Spine, AO Trauma, and Medtronic; however, none of these relationships are financial in nature. All remaining authors have no conflicts of interest to disclose. The fitted prediction model generated 85% overall classification accuracy, 79% sensitivity, and 90% specificity. The prediction model was able to accurately classify independent walking ability (AUC 0.889, 95% confidence interval [CI] 0.846-0.933, p<.001) compared with the existing prediction model, despite the use of a different outcome measure (FIM vs. Spinal Cord Independence Measure) to qualify walking ability. A simplified, three-variable LR model based on age and two neurologic variables had an overall classification accuracy of 84%, with 76% sensitivity and 90% specificity, demonstrating comparable accuracy with its five-variable prediction model counterpart.
The AUC was 0.866 (95% CI 0.816-0.916, p<.01), only marginally less than that of the existing prediction model. A simplified predictive model with similar accuracy to a more complex model for predicting independent walking was created, which improves utility in a clinical setting. Such models will allow clinicians to better predict the prognosis of ambulation in individuals who have sustained a traumatic SCI. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
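A minimal sketch of fitting and cross-validating such a simplified three-variable logistic model, with synthetic stand-ins for the RHSCIR variables:

```python
# Sketch of a simplified three-variable logistic model for independent
# walking, assessed with cross-validated AUC. Variables and data are
# hypothetical stand-ins (age plus two neurologic scores), not RHSCIR data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 278
X = np.column_stack([
    rng.normal(45, 15, n),        # age (years)
    rng.integers(0, 6, n),        # e.g. a motor score item
    rng.integers(0, 3, n),        # e.g. a sensory score item
])
logit = -3 + 0.5 * X[:, 1] + 0.8 * X[:, 2] - 0.02 * X[:, 0]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # walked independently?

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(auc.mean())  # cross-validated discrimination
```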
Learning Management System with Prediction Model and Course-Content Recommendation Module
ERIC Educational Resources Information Center
Evale, Digna S.
2017-01-01
Aim/Purpose: This study is an attempt to enhance the existing learning management systems today through the integration of technology, particularly with educational data mining and recommendation systems. Background: It utilized five-year historical data to find patterns for predicting student performance in Java Programming to generate…
MODELING FINE SEDIMENT TRANSPORT IN ESTUARIES
A sediment transport model (SEDIMENT IIIA) was developed to assist in predicting the fate of chemical pollutants sorbed to cohesive sediments in rivers and estuaries. Laboratory experiments were conducted to upgrade an existing two-dimensional, depth-averaged, finite element, coh...
O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; Louzao, Maite
2012-01-01
Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest, and the spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of the five modelling techniques provided a reliable prediction of spatial patterns. We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and use these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.
The Problem with Big Data: Operating on Smaller Datasets to Bridge the Implementation Gap.
Mann, Richard P; Mushtaq, Faisal; White, Alan D; Mata-Cervantes, Gabriel; Pike, Tom; Coker, Dalton; Murdoch, Stuart; Hiles, Tim; Smith, Clare; Berridge, David; Hinchliffe, Suzanne; Hall, Geoff; Smye, Stephen; Wilkie, Richard M; Lodge, J Peter A; Mon-Williams, Mark
2016-01-01
Big datasets have the potential to revolutionize public health. However, there is a mismatch between the political and scientific optimism surrounding big data and the public's perception of its benefit. We suggest a systematic and concerted emphasis on developing models derived from smaller datasets to illustrate to the public how big data can produce tangible benefits in the long term. In order to highlight the immediate value of a small data approach, we produced a proof-of-concept model predicting hospital length of stay. The results demonstrate that existing small datasets can be used to create models that generate a reasonable prediction, facilitating health-care delivery. We propose that greater attention (and funding) needs to be directed toward the utilization of existing information resources in parallel with current efforts to create and exploit "big data."
USDA-ARS?s Scientific Manuscript database
Streambank stabilization techniques are often implemented to reduce sediment loads from unstable streambanks. Process-based models can predict sediment yields with stabilization scenarios prior to implementation. However, a framework does not exist on how to effectively utilize these models to evalu...
Oscar, T P
2017-01-01
Predictive models are valuable tools for assessing food safety. Existing thermal inactivation models for Salmonella and ground chicken do not provide predictions above 71°C, which is below the recommended final cooked temperature of 73.9°C for chicken. They also do not predict when all Salmonella are eliminated without extrapolating beyond the data used to develop them. Thus, a study was undertaken to develop a model for thermal inactivation of Salmonella to elimination in ground chicken at temperatures above those of existing models. Ground chicken thigh portions (0.76 cm³) in microcentrifuge tubes were inoculated with 4.45 ± 0.25 log most probable number (MPN) of a single strain of Salmonella Typhimurium (chicken isolate). They were cooked at 50 to 100°C in 2 or 2.5°C increments in a heating block that simulated two-sided pan frying. A whole sample enrichment, miniature MPN (WSE-mMPN) method was used for enumeration. The lower limit of detection was one Salmonella cell per portion. MPN data were used to develop a multiple-layer feedforward neural network model. Model performance was evaluated using the acceptable prediction zone (APZ) method. The proportion of residuals in an APZ (pAPZ) from -1 log (fail-safe) to 0.5 log (fail-dangerous) was 0.911 (379 of 416) for dependent data and 0.910 (162 of 178) for independent data for interpolation. A pAPZ ≥0.7 indicated that model predictions had acceptable bias and accuracy. There were no local prediction problems because pAPZ for individual thermal inactivation curves ranged from 0.813 to 1.000. Independent data for interpolation satisfied the test data criteria of the APZ method. Thus, the model was successfully validated. Predicted times for a 1-log reduction ranged from 9.6 min at 56°C to 0.71 min at 100°C. Predicted times for elimination ranged from 8.6 min at 60°C to 1.4 min at 100°C. The model will be a valuable new tool for predicting and managing this important risk to public health.
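The APZ criterion itself is straightforward to compute once predictions and observations are on a log scale; a sketch, with an assumed sign convention of predicted minus observed and illustrative values:

```python
# Sketch of the acceptable prediction zone (APZ) check: the proportion of
# residuals (assumed here to be predicted minus observed, in log units)
# falling between -1 log (fail-safe) and 0.5 log (fail-dangerous) should
# be at least 0.7 for acceptable bias and accuracy.
import numpy as np

def pAPZ(pred_log, obs_log, lo=-1.0, hi=0.5):
    resid = pred_log - obs_log
    return np.mean((resid >= lo) & (resid <= hi))

pred = np.array([2.1, 1.4, 0.2, 3.3, 2.8])   # illustrative log MPN values
obs  = np.array([2.5, 1.2, 1.5, 3.1, 2.9])
print(pAPZ(pred, obs))  # model acceptable when this is >= 0.7
```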
Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P
2010-06-01
The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
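A sketch of GP regression with an ARD-style anisotropic kernel in scikit-learn, where per-descriptor length scales play the role of feature relevance (synthetic data; the study used MatLab and its own descriptor set):

```python
# Sketch of GP regression with automatic relevance determination (ARD):
# an anisotropic RBF kernel learns one length scale per descriptor, and a
# large learned length scale flags a descriptor with little influence.
# Descriptor columns mirror those named in the abstract; data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 3))     # columns: log P, melting point, H-bond donors
y = 0.9 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.1, 80)  # third column irrelevant

kernel = RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
print(gpr.kernel_.k1.length_scale)   # largest scale -> least relevant input
```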
Modeling irrigation behavior in groundwater systems
NASA Astrophysics Data System (ADS)
Foster, Timothy; Brozović, Nicholas; Butler, Adrian P.
2014-08-01
Integrated hydro-economic models have been widely applied to water management problems in regions of intensive groundwater-fed irrigation. However, policy interpretations may be limited as most existing models do not explicitly consider two important aspects of observed irrigation decision making, namely the limits on instantaneous irrigation rates imposed by well yield and the intraseasonal structure of irrigation planning. We develop a new modeling approach for determining irrigation demand that is based on observed farmer behavior and captures the impacts on production and water use of both well yield and climate. Through a case study of irrigated corn production in the Texas High Plains region of the United States we predict optimal irrigation strategies under variable levels of groundwater supply, and assess the limits of existing models for predicting land and groundwater use decisions by farmers. Our results show that irrigation behavior exhibits complex nonlinear responses to changes in groundwater availability. Declining well yields induce large reductions in the optimal size of irrigated area and irrigation use as constraints on instantaneous application rates limit the ability to maintain sufficient soil moisture to avoid negative impacts on crop yield. We demonstrate that this important behavioral response to limited groundwater availability is not captured by existing modeling approaches, which therefore may be unreliable predictors of irrigation demand, agricultural profitability, and resilience to climate change and aquifer depletion.
Adaptive Modeling of the International Space Station Electrical Power System
NASA Technical Reports Server (NTRS)
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
Shao, Kan; Small, Mitchell J
2011-10-01
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
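A compact sketch of the BMA combination step, using BIC-based approximate weights over hypothetical per-model BMD uncertainty draws; the study itself derives posteriors and weights from MCMC fits of the two dose-response models:

```python
# Sketch of Bayesian model averaging across two fitted dose-response models.
# BIC values and BMD draw distributions are hypothetical placeholders.
import numpy as np

models = {
    "logistic":       {"bic": 104.2,
                       "bmd_draws": np.random.default_rng(4).normal(12.0, 2.0, 5000)},
    "quantal-linear": {"bic": 101.7,
                       "bmd_draws": np.random.default_rng(5).normal(9.5, 1.5, 5000)},
}

bics = np.array([m["bic"] for m in models.values()])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                   # approximate posterior model weights

# Mix the per-model BMD uncertainty distributions according to the weights.
n = 5000
rng = np.random.default_rng(6)
choice = rng.choice(len(w), size=n, p=w)
draws = np.stack([m["bmd_draws"] for m in models.values()])
bma = draws[choice, np.arange(n)]
print(np.mean(bma), np.percentile(bma, 5))     # BMA BMD and a lower bound (BMDL)
```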
Using models to manage systems subject to sustainability indicators
Hill, M.C.
2006-01-01
Mathematical and numerical models can provide insight into sustainability indicators using relevant simulated quantities, which are referred to here as predictions. To be useful, many concerns need to be considered. Four are discussed here: (a) the mathematical and numerical accuracy of the model; (b) the accuracy of the data used in model development; (c) the information observations provide about aspects of the model important to the predictions of interest, as measured using sensitivity analysis; and (d) the existence of plausible alternative models for a given system. The four issues are illustrated using examples from conservative and transport modelling, and using conceptual arguments. Results suggest that ignoring these issues can produce misleading conclusions.
CCTOP: a Consensus Constrained TOPology prediction web server.
Dobson, László; Reményi, István; Tusnády, Gábor E
2015-07-01
The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of a hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user-specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per-protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access of the CCTOP server is also available, and an example of a client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Positrons from quantum evaporation of primordial black-holes
NASA Technical Reports Server (NTRS)
Durouchoux, P.; Wallyn, P.; Dubus, G.
1997-01-01
The unconfirmed prediction of quantum evaporation of primordial black holes (PBHs) is considered together with the related unanswered questions of whether PBHs ever existed and whether any could still exist. The behavior of the positrons from PBHs is modeled in relation to three facts. Firstly, the integrated emitted number spectrum of positrons is six to eight times larger than that of photons. Secondly, positrons emitted from PBHs lose energy and annihilate, producing a prominent line at 511 keV which is redshifted by the expansion of the universe. Thirdly, these photons may be detectable in the X-ray and low gamma ray energy ranges. The model predicts a flux which is well below the instrument sensitivities of the foreseeable future.
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.
Zhang, Hong; Pei, Yun
2016-08-12
Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions.
Mysara, Mohamed; Elhefnawi, Mahmoud; Garibaldi, Jonathan M
2012-06-01
The investigation of small interfering RNA (siRNA) and its posttranscriptional gene regulation has become an extremely important research topic, both for fundamental reasons and for potential longer-term therapeutic benefits. Several factors affect the functionality of siRNA, including positional preferences, target accessibility and other thermodynamic features. State-of-the-art tools aim to optimize the selection of target siRNAs by identifying those that may have high experimental inhibition. Such tools implement artificial neural network models, such as Biopredsi and ThermoComposition21, and linear regression models, such as DSIR, i-Score and Scales, among others. However, all these models have limitations in performance. In this work, a new neural-network-trained siRNA scoring/efficacy prediction model was developed by combining two existing scoring algorithms (ThermoComposition21 and i-Score), together with the whole stacking energy (ΔG), in a multi-layer artificial neural network. These three parameters were chosen after a comparative combinatorial study of five well-known tools. Our developed model, 'MysiRNA', was trained on 2431 siRNA records and tested using three further datasets. MysiRNA was compared with 11 alternative existing scoring tools in an evaluation study assessing predicted and experimental siRNA efficiency, where it achieved the highest performance both in terms of correlation coefficient (R(2)=0.600) and receiver operating characteristic analysis (AUC=0.808), improving prediction accuracy by up to 18% with respect to sensitivity and specificity of the best available tools. MysiRNA is a novel, freely accessible model capable of predicting siRNA inhibition efficiency with improved specificity and sensitivity. This multiclassifier approach could help improve the performance of prediction in several bioinformatics areas. The MysiRNA model, part of the MysiRNA-Designer package [1], is expected to play a key role in siRNA selection and evaluation. Copyright © 2012 Elsevier Inc. All rights reserved.
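A sketch of the stacking idea, training a small neural network on the two component scores plus ΔG (synthetic data and an arbitrary architecture, not the published MysiRNA network):

```python
# Sketch of the stacking idea behind MysiRNA: a small neural network maps
# two existing scores plus the whole stacking energy (dG) to measured
# inhibition. Data are synthetic; the real model was trained on 2431
# experimentally characterized siRNAs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([
    rng.uniform(0, 1, n),          # ThermoComposition21-style score
    rng.uniform(0, 100, n),        # i-Score-style score
    rng.uniform(-60, -20, n),      # whole stacking energy dG (kcal/mol)
])
y = 0.4 * X[:, 0] + 0.005 * X[:, 1] - 0.004 * X[:, 2] + rng.normal(0, 0.05, n)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                     random_state=0).fit(X, y)
print(model.predict(X[:3]))        # predicted inhibition efficiencies
```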
NASA Astrophysics Data System (ADS)
Xia, Z. M.; Wang, C. G.; Tan, H. F.
2018-04-01
A pseudo-beam model with modified internal bending moment is presented to predict the elastic properties of graphene, including the Young's modulus and Poisson's ratio. In order to overcome a drawback in existing molecular structural mechanics models, which account only for pure bending (a constant bending moment), the presented model accounts for linear bending moments deduced from the balance equations. Based on this pseudo-beam model, an analytical prediction of the Young's modulus and Poisson's ratio of graphene is derived from the strain energy equations using Castigliano's second theorem. The elastic properties of graphene are then calculated and compared with results available in the literature, which verifies the feasibility of the pseudo-beam model. Finally, the pseudo-beam model is utilized to study the twisting wrinkling characteristics of annular graphene. Due to the modifications of the internal bending moment, the wrinkling behaviors of the graphene sheet are predicted accurately. The obtained results show that the pseudo-beam model has a good ability to predict the elastic properties of graphene accurately, especially the out-of-plane deformation behavior.
Ambler, Graeme K; Gohel, Manjit S; Mitchell, David C; Loftus, Ian M; Boyle, Jonathan R
2015-01-01
Accurate adjustment of surgical outcome data for risk is vital in an era of surgeon-level reporting. Current risk prediction models for abdominal aortic aneurysm (AAA) repair are suboptimal. We aimed to develop a reliable risk model for in-hospital mortality after intervention for AAA, using rigorous contemporary statistical techniques to handle missing data. Using data collected during a 15-month period in the United Kingdom National Vascular Database, we applied multiple imputation methodology together with stepwise model selection to generate preoperative and perioperative models of in-hospital mortality after AAA repair, using two thirds of the available data. Model performance was then assessed on the remaining third of the data by receiver operating characteristic curve analysis and compared with existing risk prediction models. Model calibration was assessed by Hosmer-Lemeshow analysis. A total of 8088 AAA repair operations were recorded in the National Vascular Database during the study period, of which 5870 (72.6%) were elective procedures. Both preoperative and perioperative models showed excellent discrimination, with areas under the receiver operating characteristic curve of .89 and .92, respectively. This was significantly better than any of the existing models (area under the receiver operating characteristic curve for best comparator model, .84 and .88; P < .001 and P = .001, respectively). Discrimination remained excellent when only elective procedures were considered. There was no evidence of miscalibration by Hosmer-Lemeshow analysis. We have developed accurate models to assess risk of in-hospital mortality after AAA repair. These models were carefully developed with rigorous statistical methodology and significantly outperform existing methods for both elective cases and overall AAA mortality. These models will be invaluable for both preoperative patient counseling and accurate risk adjustment of published outcome data. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
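A sketch of the pipeline's shape, using scikit-learn's IterativeImputer as a single-imputation stand-in for full multiple imputation (which would pool estimates over several imputed datasets) and a held-out third for AUC assessment:

```python
# Sketch: impute missing preoperative fields, fit a logistic model for
# in-hospital death, and check discrimination by AUC on a held-out third.
# Synthetic data; IterativeImputer here performs single imputation only.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
X = rng.normal(size=(900, 6))                       # preoperative variables
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2))))
X[rng.random(X.shape) < 0.1] = np.nan               # 10% missingness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
imp = IterativeImputer(random_state=0).fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(imp.transform(X_tr), y_tr)
print(roc_auc_score(y_te, clf.predict_proba(imp.transform(X_te))[:, 1]))
```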
Sweat loss prediction using a multi-model approach
NASA Astrophysics Data System (ADS)
Xu, Xiaojiang; Santee, William R.
2011-07-01
A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
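Because MMA is simply the mean of the two model outputs, the computation reduces to a few lines; the sweat loss values below are illustrative, not the study's data:

```python
# The MMA is the mean of the two model outputs; RMSD against observations
# quantifies the gain from averaging. Values are illustrative sweat losses (g).
import numpy as np

scenario = np.array([520., 610., 480., 700.])   # rational model predictions
hsda     = np.array([460., 690., 530., 640.])   # empirical model predictions
obs      = np.array([500., 640., 510., 660.])

mma = (scenario + hsda) / 2.0
for name, pred in [("SCENARIO", scenario), ("HSDA", hsda), ("MMA", mma)]:
    rmsd = np.sqrt(np.mean((pred - obs) ** 2))
    print(name, round(rmsd, 1))   # MMA gives the lowest RMSD here
```

Averaging helps when the two models' errors are weakly correlated, so that overpredictions by one tend to offset underpredictions by the other.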
Near-wall k-epsilon turbulence modeling
NASA Technical Reports Server (NTRS)
Mansour, N. N.; Kim, J.; Moin, P.
1987-01-01
The flow fields from a turbulent channel simulation are used to compute the budgets for the turbulent kinetic energy (k) and its dissipation rate (epsilon). Data from boundary layer simulations are used to analyze the dependence of the eddy-viscosity damping function on the Reynolds number and the distance from the wall. The computed budgets are used to test existing near-wall turbulence models of the k-epsilon type. It was found that the turbulent transport models should be modified in the vicinity of the wall. It was also found that existing models for the different terms in the epsilon-budget are adequate in the region away from the wall, but need modification near the wall. The channel flow is computed using a k-epsilon model with an eddy-viscosity damping function from the data and no damping functions in the epsilon-equation. These computations show that the k-profile can be adequately predicted, but to correctly predict the epsilon-profile, damping functions in the epsilon-equation are needed.
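For reference, the damped eddy-viscosity form at issue is, in standard k-epsilon notation (a textbook expression, not an equation quoted from this report):

```latex
\nu_t \;=\; C_\mu \, f_\mu \, \frac{k^2}{\varepsilon}
```

where the damping function f_mu depends on the turbulence Reynolds number and the wall distance, tends to 1 far from the wall, and supplies the near-wall correction that the simulation budgets above are used to test.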
Huang, Yu-An; You, Zhu-Hong; Chen, Xing
2018-01-01
Drug-Target Interactions (DTI) play a crucial role in discovering new drug candidates and finding new proteins to target for drug development. Although the number of detected DTI obtained by high-throughput techniques has been increasing, the number of known DTI is still limited. On the other hand, experimental methods for detecting the interactions among drugs and proteins are costly and inefficient. Therefore, computational approaches for predicting DTI are drawing increasing attention in recent years. In this paper, we report a novel computational model for predicting DTI using an extremely randomized trees model and protein amino acid information. More specifically, the protein sequence is represented as a Pseudo Substitution Matrix Representation (Pseudo-SMR) descriptor in which the influence of biological evolutionary information is retained. For the representation of drug molecules, a novel fingerprint feature vector is utilized to describe its substructure information. Then the DTI pair is characterized by concatenating the two vector spaces of protein sequence and drug substructure. Finally, the proposed method is explored for predicting the DTI on four benchmark datasets: Enzyme, Ion Channel, GPCRs and Nuclear Receptor. The experimental results demonstrate that this method achieves promising prediction accuracies of 89.85%, 87.87%, 82.99% and 81.67%, respectively. For further evaluation, we compared the performance of the extremely randomized trees model with that of the state-of-the-art Support Vector Machine classifier. We also compared the proposed model with existing computational models and confirmed 15 potential drug-target interactions by searching existing databases. The experimental results show that the proposed method is feasible and promising for predicting drug-target interactions for new drug candidate screening based on sizeable features. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
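A sketch of the classification step with scikit-learn's ExtraTreesClassifier, using random stand-ins for the Pseudo-SMR and fingerprint features (so the score here hovers near chance; real features carry the signal):

```python
# Sketch: concatenate a protein descriptor with a drug substructure
# fingerprint and classify the pair with extremely randomized trees.
# Feature construction (Pseudo-SMR, fingerprints) is replaced by random
# stand-ins, so cross-validated accuracy is near 0.5 on this toy data.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
protein_feats = rng.normal(size=(400, 40))          # Pseudo-SMR stand-in
drug_feats = rng.integers(0, 2, size=(400, 64))     # fingerprint stand-in
X = np.hstack([protein_feats, drug_feats])
y = rng.integers(0, 2, size=400)                    # interact / no interact

clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
```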
Real-time assessments of water quality: expanding nowcasting throughout the Great Lakes
2013-01-01
Nowcasts are systems that inform the public of current bacterial water-quality conditions at beaches on the basis of predictive models. During 2010–12, the U.S. Geological Survey (USGS) worked with 23 local and State agencies to improve existing operational beach nowcast systems at 4 beaches and expand the use of predictive models in nowcasts at an additional 45 beaches throughout the Great Lakes. The predictive models were specific to each beach, and the best model for each beach was based on a unique combination of environmental and water-quality explanatory variables. The variables used most often in models to predict Escherichia coli (E. coli) concentrations or the probability of exceeding a State recreational water-quality standard included turbidity, day of the year, wave height, wind direction and speed, antecedent rainfall for various time periods, and change in lake level over 24 hours. During validation of 42 beach models during 2012, the models performed better than the current method to assess recreational water quality (previous day's E. coli concentration). The USGS will continue to work with local agencies to improve nowcast predictions, enable technology transfer of predictive model development procedures, and implement more operational systems during 2013 and beyond.
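A sketch of what one beach-specific nowcast model might look like, as a logistic regression on the variable types listed above (synthetic data; each operational model uses its own beach-specific variables and thresholds):

```python
# Sketch of a beach nowcast model: logistic regression on environmental
# explanatory variables, predicting the probability that the E. coli
# recreational standard is exceeded. Data and coefficients are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)
n = 400
X = np.column_stack([
    rng.lognormal(2, 1, n),      # turbidity (NTU)
    rng.uniform(0, 2, n),        # wave height (m)
    rng.exponential(10, n),      # 48-h antecedent rainfall (mm)
    rng.uniform(-0.2, 0.2, n),   # 24-h lake-level change (m)
])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.02 * X[:, 0] + X[:, 1]
                                      + 0.03 * X[:, 2] - 3))))

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict_proba(X[:1])[0, 1])   # today's exceedance probability
```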
NASA Technical Reports Server (NTRS)
Pope, L. D.; Wilby, E. G.
1982-01-01
An airplane interior noise prediction model is developed to determine the important parameters associated with sound transmission into the interiors of airplanes, and to identify appropriate noise control methods. Models for stiffened structures, and cabin acoustics with a floor partition, are developed. Validation studies are undertaken using three test articles: a ring-stringer stiffened cylinder, an unstiffened cylinder with floor partition, and a ring-stringer stiffened cylinder with floor partition and sidewall trim. The noise reductions of the three test articles are computed using the theoretical models and compared to measured values. A statistical analysis of the comparison data indicates that there is no bias in the predictions, although a substantial random error exists, so that a discrepancy of more than five or six dB can be expected for about one out of three predictions.
Practical extension of a Lake States tree height model
Don C. Bragg
2008-01-01
By adapting data from national and state champion lists and the predictions of an existing height model, an exponential function was developed to improve tree height estimation. As a case study, comparisons between the original and redesigned model were made with eastern white pine (Pinus strobus L.). For example, the heights...
VERIFICATION OF THE HYDROLOGIC EVALUATION OF LANDFILL PERFORMANCE (HELP) MODEL USING FIELD DATA
The report describes a study conducted to verify the Hydrologic Evaluation of Landfill Performance (HELP) computer model using existing field data from a total of 20 landfill cells at 7 sites in the United States. Simulations using the HELP model were run to compare the predicted...
NASA Technical Reports Server (NTRS)
Foster, John V.; Hartman, David C.
2017-01-01
The NASA Unmanned Aircraft System (UAS) Traffic Management (UTM) project is conducting research to enable civilian low-altitude airspace and UAS operations. A goal of this project is to develop probabilistic methods to quantify risk during failures and off nominal flight conditions. An important part of this effort is the reliable prediction of feasible trajectories during off-nominal events such as control failure, atmospheric upsets, or navigation anomalies that can cause large deviations from the intended flight path or extreme vehicle upsets beyond the normal flight envelope. Few examples of high-fidelity modeling and prediction of off-nominal behavior for small UAS (sUAS) vehicles exist, and modeling requirements for accurately predicting flight dynamics for out-of-envelope or failure conditions are essentially undefined. In addition, the broad range of sUAS aircraft configurations already being fielded presents a significant modeling challenge, as these vehicles are often very different from one another and are likely to possess dramatically different flight dynamics and resultant trajectories and may require different modeling approaches to capture off-nominal behavior. NASA has undertaken an extensive research effort to define sUAS flight dynamics modeling requirements and develop preliminary high fidelity six degree-of-freedom (6-DOF) simulations capable of more closely predicting off-nominal flight dynamics and trajectories. This research has included a literature review of existing sUAS modeling and simulation work as well as development of experimental testing methods to measure and model key components of propulsion, airframe and control characteristics. The ultimate objective of these efforts is to develop tools to support UTM risk analyses and for the real-time prediction of off-nominal trajectories for use in the UTM Risk Assessment Framework (URAF). This paper focuses on modeling and simulation efforts for a generic quad-rotor configuration typical of many commercial vehicles in use today. An overview of relevant off-nominal multi-rotor behaviors will be presented to define modeling goals and to identify the prediction capability lacking in simplified models of multi-rotor performance. A description of recent NASA wind tunnel testing of multi-rotor propulsion and airframe components will be presented illustrating important experimental and data acquisition methods, and a description of preliminary propulsion and airframe models will be presented. Lastly, examples of predicted off-nominal flight dynamics and trajectories from the simulation will be presented.
Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi
2012-03-01
Evaluation of diffusion coefficients of pure compounds in air is of great interest for many diverse industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15K and atmospheric pressure. Four thousand five hundred and seventy nine organic compounds from broad spectrum of chemical families have been investigated to propose a comprehensive and predictive model. The final model is derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results quantified by the following statistical results: Squared Correlation Coefficient=0.9723, Standard Deviation Error=0.003 and Average Absolute Relative Deviation=0.3% for the predicted properties from existing experimental values. Copyright © 2011 Elsevier Ltd. All rights reserved.
Predicting Pilot Performance in Off-Nominal Conditions: A Meta-Analysis and Model Validation
NASA Technical Reports Server (NTRS)
Wickens, C.D.; Hooey, B.L.; Gore, B.F.; Sebok, A.; Koenecke, C.; Salud, E.
2009-01-01
Pilot response to off-nominal (very rare) events represents a critical component to understanding the safety of next generation airspace technology and procedures. We describe a meta-analysis designed to integrate the existing data regarding pilot accuracy of detecting rare, unexpected events such as runway incursions in realistic flight simulations. Thirty-five studies were identified and pilot responses were categorized by expectancy, event location, and whether the pilot was flying with a highway-in-the-sky display. All three dichotomies produced large, significant effects on event miss rate. A model of human attention and noticing, N-SEEV, was then used to predict event noticing performance as a function of event salience and expectancy, and retinal eccentricity. Eccentricity is predicted from steady state scanning by the SEEV model of attention allocation. The model was used to predict miss rates for the expectancy, location and highway-in-the-sky (HITS) effects identified in the meta-analysis. The correlation between model-predicted results and data from the meta-analysis was 0.72.
A Lightweight Radio Propagation Model for Vehicular Communication in Road Tunnels.
Qureshi, Muhammad Ahsan; Noor, Rafidah Md; Shamim, Azra; Shamshirband, Shahaboddin; Raymond Choo, Kim-Kwang
2016-01-01
Radio propagation models (RPMs) are generally employed in Vehicular Ad Hoc Networks (VANETs) to predict path loss in multiple operating environments (e.g. modern road infrastructure such as flyovers, underpasses and road tunnels). For example, different RPMs have been developed to predict propagation behaviour in road tunnels. However, most existing RPMs for road tunnels are computationally complex and are based on field measurements in frequency band not suitable for VANET deployment. Furthermore, in tunnel applications, consequences of moving radio obstacles, such as large buses and delivery trucks, are generally not considered in existing RPMs. This paper proposes a computationally inexpensive RPM with minimal set of parameters to predict path loss in an acceptable range for road tunnels. The proposed RPM utilizes geometric properties of the tunnel, such as height and width along with the distance between sender and receiver, to predict the path loss. The proposed RPM also considers the additional attenuation caused by the moving radio obstacles in road tunnels, while requiring a negligible overhead in terms of computational complexity. To demonstrate the utility of our proposed RPM, we conduct a comparative summary and evaluate its performance. Specifically, an extensive data gathering campaign is carried out in order to evaluate the proposed RPM. The field measurements use the 5 GHz frequency band, which is suitable for vehicular communication. The results demonstrate that a close match exists between the predicted values and measured values of path loss. In particular, an average accuracy of 94% is found with R2 = 0.86.
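As a purely illustrative sketch of a lightweight geometric RPM, consider a log-distance form with a tunnel cross-section term and an additive obstruction loss; the coefficients and functional form here are hypothetical and would have to be fitted to measurements such as the 5 GHz campaign described above, not the paper's actual model:

```python
# Illustrative log-distance path-loss form with a tunnel geometry term and
# an extra attenuation for a moving radio obstacle. All coefficients
# (pl0, n, obstacle_db) are hypothetical placeholders requiring calibration.
import math

def tunnel_path_loss(d, height, width, pl0=46.0, n=1.8, obstacle_db=0.0):
    """Path loss (dB) at distance d (m) for a tunnel of given cross-section."""
    geometry = 10 * math.log10(height * width)   # larger cross-sections guide better
    return pl0 + 10 * n * math.log10(d) - geometry + obstacle_db

print(tunnel_path_loss(d=120, height=5.5, width=9.0))                    # clear tunnel
print(tunnel_path_loss(d=120, height=5.5, width=9.0, obstacle_db=7.5))   # bus ahead
```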
The two-state dimer receptor model: a general model for receptor dimers.
Franco, Rafael; Casadó, Vicent; Mallol, Josefa; Ferrada, Carla; Ferré, Sergi; Fuxe, Kjell; Cortés, Antoni; Ciruela, Francisco; Lluis, Carmen; Canela, Enric I
2006-06-01
Nonlinear Scatchard plots are often found for agonist binding to G-protein-coupled receptors. Because there is clear evidence of receptor dimerization, these nonlinear Scatchard plots can reflect cooperativity on agonist binding to the two binding sites in the dimer. According to this, the "two-state dimer receptor model" has been recently derived. In this article, the performance of the model has been analyzed in fitting data of agonist binding to A(1) adenosine receptors, which are an example of receptor displaying concave downward Scatchard plots. Analysis of agonist/antagonist competition data for dopamine D(1) receptors using the two-state dimer receptor model has also been performed. Although fitting to the two-state dimer receptor model was similar to the fitting to the "two-independent-site receptor model", the former is simpler, and a discrimination test selects the two-state dimer receptor model as the best. This model was also very robust in fitting data of estrogen binding to the estrogen receptor, for which Scatchard plots are concave upward. On the one hand, the model would predict the already demonstrated existence of estrogen receptor dimers. On the other hand, the model would predict that concave upward Scatchard plots reflect positive cooperativity, which can be neither predicted nor explained by assuming the existence of two different affinity states. In summary, the two-state dimer receptor model is good for fitting data of binding to dimeric receptors displaying either linear, concave upward, or concave downward Scatchard plots.
Sediment transport through self-adjusting, bedrock-walled waterfall plunge pools
NASA Astrophysics Data System (ADS)
Scheingross, Joel S.; Lamb, Michael P.
2016-05-01
Many waterfalls have deep plunge pools that are often partially or fully filled with sediment. Sediment fill may control plunge-pool bedrock erosion rates, partially determine habitat availability for aquatic organisms, and affect sediment routing and debris flow initiation. Currently, there exists no mechanistic model to describe sediment transport through waterfall plunge pools. Here we develop an analytical model to predict steady-state plunge-pool depth and sediment-transport capacity by combining existing jet theory with sediment transport mechanics. Our model predicts plunge-pool sediment-transport capacity increases with increasing river discharge, flow velocity, and waterfall drop height and decreases with increasing plunge-pool depth, radius, and grain size. We tested the model using flume experiments under varying waterfall and plunge-pool geometries, flow hydraulics, and sediment size. The model and experiments show that through morphodynamic feedbacks, plunge pools aggrade to reach shallower equilibrium pool depths in response to increases in imposed sediment supply. Our theory for steady-state pool depth matches the experiments with an R2 value of 0.8, with discrepancies likely due to model simplifications of the hydraulics and sediment transport. Analysis of 75 waterfalls suggests that the water depths in natural plunge pools are strongly influenced by upstream sediment supply, and our model provides a mass-conserving framework to predict sediment and water storage in waterfall plunge pools for sediment routing, habitat assessment, and bedrock erosion modeling.
Cogswell, Rebecca; Kobashigawa, Erin; McGlothlin, Dana; Shaw, Robin; De Marco, Teresa
2012-11-01
The Registry to Evaluate Early and Long-Term Pulmonary Arterial Hypertension (PAH) Disease Management (REVEAL) model was designed to predict 1-year survival in patients with PAH. Multivariate prediction models need to be evaluated in cohorts distinct from the derivation set to determine external validity. In addition, limited data exist on the utility of this model in the prediction of long-term survival. REVEAL model performance was assessed to predict 1-year and 5-year outcomes, defined as survival or composite survival or freedom from lung transplant, in 140 patients with PAH. The validation cohort had a higher proportion of human immunodeficiency virus infection (7.9% vs 1.9%, p < 0.0001), methamphetamine use (19.3% vs 4.9%, p < 0.0001), and portal hypertension PAH (16.4% vs 5.1%, p < 0.0001) compared with the development cohort. The C-index of the model to predict survival was 0.765 at 1 year and 0.712 at 5 years of follow-up. The C-index of the model to predict composite survival or freedom from lung transplant was 0.805 and 0.724 at 1 and 5 years of follow-up, respectively. Prediction by the model, however, was weakest among patients with intermediate-risk predicted survival. The REVEAL model had adequate discrimination to predict 1-year survival in this small but clinically distinct validation cohort. Although the model also had predictive ability out to 5 years, prediction was limited among patients of intermediate risk, suggesting our prediction methods can still be improved. Copyright © 2012. Published by Elsevier Inc.
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
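A minimal sketch of the CPDG-based computation described above, assuming a loop-free graph: system reliability is accumulated with an explicit stack as the probability-weighted product of component reliabilities along every entry-to-exit path. The component reliabilities and transition probabilities are invented, and the paper's loop entry/exit handling is omitted.

```python
# Hypothetical CPDG: component reliabilities and transition probabilities.
rel = {"A": 0.99, "B": 0.97, "C": 0.98, "EXIT": 1.0}
edges = {"A": [("B", 0.6), ("C", 0.4)],        # (successor, transition probability)
         "B": [("EXIT", 1.0)],
         "C": [("B", 0.3), ("EXIT", 0.7)]}

def system_reliability(entry="A"):
    """Stack-based traversal: sum path probabilities weighted by reliabilities."""
    total = 0.0
    stack = [(entry, rel[entry])]              # (node, reliability accumulated so far)
    while stack:
        node, acc = stack.pop()
        if node == "EXIT":
            total += acc
            continue
        for succ, p in edges.get(node, []):    # loops would need the paper's
            stack.append((succ, acc * p * rel[succ]))  # entry/exit-point treatment
    return total

print(f"Predicted system reliability: {system_reliability():.4f}")
```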
A new Predictive Model for Relativistic Electrons in Outer Radiation Belt
NASA Astrophysics Data System (ADS)
Chen, Y.
2017-12-01
Relativistic electrons trapped in the Earth's outer radiation belt present a highly hazardous radiation environment for spaceborne electronics. These energetic electrons, with kinetic energies up to several megaelectron-volts (MeV), manifest a highly dynamic and event-specific nature due to the delicate interplay of competing transport, acceleration and loss processes. Therefore, developing a forecasting capability for outer belt MeV electrons has long been a critical and challenging task for the space weather community. Recently, the vital roles of electron resonance with waves (such as chorus and electromagnetic ion cyclotron waves) have been widely recognized; however, it is still difficult for current diffusion radiation belt models to reproduce the behavior of MeV electrons during individual geomagnetic storms, mainly because of the large uncertainties existing in input parameters. In this work, we expanded our previous cross-energy cross-pitch-angle coherence study and developed a new predictive model for MeV electrons over a wide range of L-shells inside the outer radiation belt. This new model uses NOAA POES observations from low-Earth orbits (LEOs) as inputs to provide high-fidelity nowcasts (multiple-hour prediction) and forecasts (> 1 day prediction) of the energization of MeV electrons as well as the evolving MeV electron distributions afterwards during storms. Performance of the predictive model is quantified by long-term in situ data from the Van Allen Probes and LANL GEO satellites. This study adds new science significance to an existing LEO space infrastructure, and provides reliable and powerful tools to the space community.
Model-based learning and the contribution of the orbitofrontal cortex to the model-free world.
McDannald, Michael A; Takahashi, Yuji K; Lopatina, Nina; Pietras, Brad W; Jones, Josh L; Schoenbaum, Geoffrey
2012-04-01
Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Roy, Swagata; Biswas, Srija; Babu, K. Arun; Mandal, Sumantra
2018-05-01
A novel constitutive model has been developed for predicting flow responses of super-austenitic stainless steel over a wide range of strains (0.05-0.6), temperatures (1173-1423 K) and strain rates (0.001-1 s^-1). Further, the predictability of this new model has been compared with the existing Johnson-Cook (JC) and modified Zerilli-Armstrong (M-ZA) models. The JC model is ill-suited for flow prediction, exhibiting a very high (~36%) average absolute error (δ) and a low (~0.92) correlation coefficient (R). On the contrary, the M-ZA model demonstrates relatively lower δ (~13%) and higher R (~0.96) for flow prediction. The incorporation of couplings of processing parameters in the M-ZA model leads to better prediction than the JC model. However, flow analyses of the studied alloy have revealed additional synergistic influences of strain and strain rate, as well as of strain, temperature, and strain rate, beyond those considered in the M-ZA model. Hence, the new phenomenological model has been formulated incorporating all the individual and synergistic effects of the processing parameters and a `strain-shifting' parameter. The proposed model predicted the flow behavior of the alloy with much better correlation and generalization than the M-ZA model, as substantiated by its lower δ (~7.9%) and higher R (~0.99) of prediction.
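For reference, the sketch below evaluates the standard Johnson-Cook form, sigma = (A + B*eps^n)(1 + C*ln(eps_dot/eps_dot_ref))(1 - T*^m), over this paper's processing window; the material constants are placeholders, not the values fitted to this super-austenitic steel.

```python
# Standard Johnson-Cook flow-stress model; constants are illustrative only.
import math

def jc_flow_stress(eps, eps_dot, T, A=300.0, B=600.0, n=0.3, C=0.02, m=1.0,
                   eps_dot_ref=1.0, T_ref=1173.0, T_melt=1673.0):
    """Flow stress (MPa) at plastic strain eps, strain rate eps_dot (1/s), T (K)."""
    T_star = (T - T_ref) / (T_melt - T_ref)      # homologous temperature
    return ((A + B * eps ** n)                   # strain hardening
            * (1.0 + C * math.log(eps_dot / eps_dot_ref))   # strain-rate term
            * (1.0 - T_star ** m))               # thermal softening

print(f"{jc_flow_stress(eps=0.3, eps_dot=0.01, T=1273.0):.1f} MPa")
```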
Weather-based forecasts of California crop yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobell, D B; Cahill, K N; Field, C B
2005-09-26
Crop yield forecasts provide useful information to a range of users. Yields for several crops in California are currently forecast based on field surveys and farmer interviews, while for many crops official forecasts do not exist. As broad-scale crop yields are largely dependent on weather, measurements from existing meteorological stations have the potential to provide a reliable, timely, and cost-effective means to anticipate crop yields. We developed weather-based models of state-wide yields for 12 major California crops (wine grapes, lettuce, almonds, strawberries, table grapes, hay, oranges, cotton, tomatoes, walnuts, avocados, and pistachios), and tested their accuracy using cross-validation over the 1980-2003 period. Many crops were forecast with high accuracy, as judged by the percent of yield variation explained by the forecast, the number of yields with correctly predicted direction of yield change, or the number of yields with correctly predicted extreme yields. The most successfully modeled crop was almonds, with 81% of yield variance captured by the forecast. Predictions for most crops relied on weather measurements well before harvest time, allowing for lead times that were longer than existing procedures in many cases.
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a global top-end NWP model tuning exercise are presented.
The status and challenge of global fire modelling
NASA Astrophysics Data System (ADS)
Hantson, Stijn; Arneth, Almut; Harrison, Sandy P.; Kelley, Douglas I.; Prentice, I. Colin; Rabin, Sam S.; Archibald, Sally; Mouillot, Florent; Arnold, Steve R.; Artaxo, Paulo; Bachelet, Dominique; Ciais, Philippe; Forrest, Matthew; Friedlingstein, Pierre; Hickler, Thomas; Kaplan, Jed O.; Kloster, Silvia; Knorr, Wolfgang; Lasslop, Gitta; Li, Fang; Mangeon, Stephane; Melton, Joe R.; Meyn, Andrea; Sitch, Stephen; Spessa, Allan; van der Werf, Guido R.; Voulgarakis, Apostolos; Yue, Chao
2016-06-01
Biomass burning impacts vegetation dynamics, biogeochemical cycling, atmospheric chemistry, and climate, with sometimes deleterious socio-economic impacts. Under future climate projections it is often expected that the risk of wildfires will increase. Our ability to predict the magnitude and geographic pattern of future fire impacts rests on our ability to model fire regimes, using either well-founded empirical relationships or process-based models with good predictive skill. While a large variety of models exist today, it is still unclear which type of model or degree of complexity is required to model fire adequately at regional to global scales. This is the central question underpinning the creation of the Fire Model Intercomparison Project (FireMIP), an international initiative to compare and evaluate existing global fire models against benchmark data sets for present-day and historical conditions. In this paper we review how fires have been represented in fire-enabled dynamic global vegetation models (DGVMs) and give an overview of the current state of the art in fire-regime modelling. We indicate which challenges still remain in global fire modelling and stress the need for a comprehensive model evaluation and outline what lessons may be learned from FireMIP.
Adaptation of clinical prediction models for application in local settings.
Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M
2012-01-01
When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. We demonstrate how clinical information can direct the updating of a prediction model and the development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) reestimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than that of the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy for dealing with missing predictor values at the time of risk calculation. Extensive knowledge of local, clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
On predicting the extent of magnetic aging in electrical steels
NASA Astrophysics Data System (ADS)
Ray, Santanu Kumar; Mohanty, Omkar Nath
1989-02-01
Magnetic aging of steels is essentially the result of an increase in coercive force, inhibition of ferrite domain wall movement by precipitated carbide particles being the main cause of this increase. In the present work, the nature of the carbides precipitating in four grades of electrical steels has been examined. Existing postulations have been invoked to predict the extent of coercive force enhancement due to metastable (ɛ) and stable (cementite) carbides, which have been observed to precipitate in these steels. The model of Drabecki and Wyslocki, when applied to the case of the metastable carbide, predicts its contribution to the coercive force fairly accurately. None of the existing models, however, succeeds in predicting the extent of the increases accruing from the presence of the stable carbide (cementite) particles. Each of the models takes into account only one or two of the isolated aspects of magnetic interaction between matrix and precipitate. It appears that for cementite, whose several magnetic characteristics are quite different from those of the ferrite matrix, all possible interaction parameters have to be taken into account to determine the actual mechanism.
Augmenting the SCaN Link Budget Tool with Validated Atmospheric Propagation
NASA Technical Reports Server (NTRS)
Steinkerchner, Leo; Welch, Bryan
2017-01-01
In any Earth-Space or Space-Earth communications link, atmospheric effects cause significant signal attenuation. In order to develop a communications system that is cost effective while meeting appropriate performance requirements, it is important to accurately predict these effects for the given link parameters. This project aimed to develop a Matlab™ (The MathWorks, Inc.) program that could augment the existing Space Communications and Navigation (SCaN) Link Budget Tool with accurate predictions of atmospheric attenuation of both optical and radio-frequency signals, according to the SCaN Optical Link Assessment Model Version 5 and the International Telecommunications Union, Radiocommunications Sector (ITU-R) atmospheric propagation loss model, respectively. When compared to data collected from the Advanced Communications Technology Satellite (ACTS), the radio-frequency model predicted attenuation to within 1.3 dB of loss for 95% of measurements. Ultimately, this tool will be integrated into the SCaN Center for Engineering, Networks, Integration, and Communications (SCENIC) user interface in order to support analysis of existing SCaN systems and planning capabilities for future NASA missions.
NASA Astrophysics Data System (ADS)
Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.
2018-03-01
Detecting the existence of software defects as early as possible is the purpose of software defect prediction research. Software defect prediction is required not only to state the existence of defects, but also to provide a prioritized list of which modules require more intensive testing. In this way, the allocation of test resources can be managed efficiently. Learning to rank is one approach that can provide defect module rankings for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We have used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using chaotic Gaussian particle swarm optimization achieve better accuracy on 5 data sets, tie on 5 data sets, and perform worse on 1 data set. Thus, we conclude that the application of chaotic Gaussian particle swarm optimization in the learning-to-rank approach can improve the accuracy of the defect module ranking in data sets that have high-dimensional features.
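A minimal sketch of a chaotic Gaussian PSO of the kind proposed above: a logistic map replaces the uniform random numbers in the velocity update, and particles receive a small Gaussian perturbation. The objective is a toy function standing in for the learning-to-rank loss, and all hyperparameters are illustrative.

```python
# Chaotic Gaussian PSO sketch; the objective is a placeholder, not a ranking loss.
import numpy as np

def chaotic(z):                      # logistic map: the usual chaos generator
    return 4.0 * z * (1.0 - z)

def objective(x):                    # toy stand-in for the learning-to-rank loss
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
n_particles, dim, iters = 20, 5, 100
x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()
z = 0.7                              # chaotic state; avoid the map's fixed points

for _ in range(iters):
    for i in range(n_particles):
        z = chaotic(z); r1 = z       # chaotic numbers replace uniform randoms
        z = chaotic(z); r2 = z
        v[i] = (0.72 * v[i] + 1.49 * r1 * (pbest[i] - x[i])
                + 1.49 * r2 * (gbest - x[i]))
        x[i] += v[i] + rng.normal(scale=0.01, size=dim)   # Gaussian perturbation
        f = objective(x[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = x[i].copy(), f
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best loss found:", objective(gbest))
```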
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
NASA Astrophysics Data System (ADS)
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
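The core idea, treating the mean and standard deviation of a Gaussian input error as extra uncertain parameters, can be shown with a toy Metropolis sampler; the "model" below is a one-line surrogate, not SRH-1D, and the priors and likelihood widths are assumptions.

```python
# Toy Metropolis sampler with an input-error mean/std sampled as parameters.
import numpy as np

rng = np.random.default_rng(2)
q_obs = 100.0            # reported upstream flowrate (the uncertain input)
y_obs = 51.0             # observed output, e.g. a bed-elevation change

def model(theta, q):     # one-line surrogate for the transport model
    return theta * np.sqrt(q) + 0.1 * q

def log_post(theta, mu_err, sd_err):
    if sd_err <= 0.0:
        return -np.inf
    q = q_obs + mu_err                                  # error-corrected input
    resid = y_obs - model(theta, q)
    return (-0.5 * (resid / 0.5) ** 2                   # output likelihood
            - 0.5 * (mu_err / sd_err) ** 2 - np.log(sd_err)  # input-error term
            - 0.5 * ((sd_err - 2.0) / 1.0) ** 2)        # prior on the error scale

state = np.array([4.0, 0.0, 2.0])                       # theta, mu_err, sd_err
lp = log_post(*state)
samples = []
for _ in range(5000):
    prop = state + rng.normal(scale=[0.1, 0.5, 0.1])    # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:            # Metropolis accept/reject
        state, lp = prop, lp_prop
    samples.append(state.copy())

print("posterior means (theta, mu_err, sd_err):", np.mean(samples[1000:], axis=0))
```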
Prediction of physical workload in reduced gravity environments
NASA Technical Reports Server (NTRS)
Goldberg, Joseph H.
1987-01-01
The background, development, and application of a methodology to predict human energy expenditure and physical workload in low gravity environments, such as a Lunar or Martian base, is described. Based on a validated model to predict energy expenditures in Earth-based industrial jobs, the model relies on an elemental analysis of the proposed job. Because the job itself need not physically exist, many alternative job designs may be compared in their physical workload. The feasibility of using the model for prediction of low gravity work was evaluated by lowering body and load weights, while maintaining basal energy expenditure. Comparison of model results was made both with simulated low gravity energy expenditure studies and with reported Apollo 14 Lunar EVA expenditure. Prediction accuracy was very good for walking and for cart pulling on slopes less than 15 deg, but the model underpredicted the most difficult work conditions. This model was applied to example core sampling and facility construction jobs, as presently conceptualized for a Lunar or Martian base. Resultant energy expenditures and suggested work-rest cycles were well within the range of moderate work difficulty. Future model development requirements were also discussed.
Condensation model for the ESBWR passive condensers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Revankar, S. T.; Zhou, W.; Wolf, B.
2012-07-01
In the General Electric's Economic Simplified Boiling Water Reactor (GE-ESBWR), the passive containment cooling system (PCCS) plays a major role in containment pressure control in the case of a loss-of-coolant accident. The PCCS condenser must be able to remove sufficient energy from the reactor containment to prevent the containment from exceeding its design pressure following a design basis accident. There are three PCCS condensation modes depending on the containment pressurization due to coolant discharge: complete condensation, cyclic venting, and through-flow mode. The present work reviews the models and presents their predictive capability along with comparisons against existing data from separate-effects tests. The condensation models in the thermal hydraulics code RELAP5 are also assessed to examine their application to the various flow modes of condensation. The default model in the code predicts complete condensation well, and is essentially the Nusselt solution. The UCB model predicts through-flow well. None of the condensation models in RELAP5 predicts complete condensation, cyclic venting, and through-flow condensation consistently. New condensation correlations are given that accurately predict all three modes of PCCS condensation. (authors)
Nonlinear modal resonances in low-gravity slosh-spacecraft systems
NASA Technical Reports Server (NTRS)
Peterson, Lee D.
1991-01-01
Nonlinear models of low gravity slosh, when coupled to spacecraft vibrations, predict intense nonlinear eigenfrequency shifts at zero gravity. These nonlinear frequency shifts are due to internal quadratic and cubic resonances between fluid slosh modes and spacecraft vibration modes. Their existence has been verified experimentally, and they cannot be correctly modeled by approximate, uncoupled nonlinear models, such as pendulum mechanical analogs. These predictions mean that linear slosh assumptions for spacecraft vibration models can be invalid, and may lead to degraded control system stability and performance. However, a complete nonlinear modal analysis will predict the correct dynamic behavior. This paper presents the analytical basis for these results, and discusses the effect of internal resonances on the nonlinear coupled response at zero gravity.
Statistical Power for a Simultaneous Test of Factorial and Predictive Invariance
ERIC Educational Resources Information Center
Olivera-Aguilar, Margarita; Millsap, Roger E.
2013-01-01
A common finding in studies of differential prediction across groups is that although regression slopes are the same or similar across groups, group differences exist in regression intercepts. Building on earlier work by Birnbaum (1979), Millsap (1998) presented an invariant factor model that would explain such intercept differences as arising due…
Observations and modeling of San Diego beaches during El Niño
NASA Astrophysics Data System (ADS)
Doria, André; Guza, R. T.; O'Reilly, William C.; Yates, M. L.
2016-08-01
Subaerial sand levels were observed at five southern California beaches for 16 years, including notable El Niños in 1997-98 and 2009-10. An existing, empirical shoreline equilibrium model, driven with wave conditions estimated using a regional buoy network, simulates well the seasonal changes in subaerial beach width (e.g. the cross-shore location of the MSL contour) during non-El Niño years, similar to previous results with a 5-year time series lacking an El Niño winter. The existing model correctly identifies the 1997-98 El Niño winter conditions as more erosive than 2009-10, but overestimates shoreline erosion during both El Niños. The good skill of the existing equilibrium model in typical conditions does not necessarily extrapolate to extreme erosion on these beaches, where a few-meters-thick sand layer often overlies more resistant layers. The modest over-prediction of the 2009-10 El Niño is reduced by gradually decreasing the model mobility of highly eroded shorelines (simulating cobbles, kelp wrack, shell hash, or other stabilizing layers). Over-prediction during the more severe 1997-98 El Niño is corrected by stopping model erosion when resilient surfaces (identified with aerial imagery) are reached. The trained model provides a computationally simple (e.g. a nonlinear first-order differential equation) representation of the observed relationship between incident waves and shoreline change.
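A minimal sketch of an equilibrium shoreline model of this type (cf. Yates et al. 2009), extended with the mobility-tapering idea described above: erosion slows as the shoreline approaches an assumed resistant level. All coefficients, the forcing series, and the taper form are illustrative.

```python
# Equilibrium shoreline model sketch with mobility taper near resistant layers.
import numpy as np

a, b = -0.01, 1.5       # equilibrium energy as a linear function E_eq(S) = a*S + b
C = 0.02                # change-rate coefficient (same for erosion and accretion)
S_floor = -25.0         # assumed level of resistant layers (cobbles, bedrock)

def step(S, E, dt=1.0):
    """One day of shoreline change; mobility tapers near the resistant floor."""
    dSdt = C * np.sqrt(E) * ((a * S + b) - E)
    if dSdt < 0 and S < 0.4 * S_floor:         # eroded shoreline: damp erosion
        dSdt *= max(0.0, 1.0 - S / S_floor)    # taper to zero at S_floor
    return S + dSdt * dt

rng = np.random.default_rng(3)
S = 0.0                                        # MSL-contour position (m)
for day in range(365):
    # synthetic wave-energy forcing: an energetic half-year plus noise
    E = 1.0 + 2.5 * max(0.0, np.sin(2 * np.pi * day / 365)) + 0.2 * rng.random()
    S = step(S, E)
print(f"shoreline position after one year: {S:.1f} m")
```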
Prediction of muscle activation for an eye movement with finite element modeling.
Karami, Abbas; Eghtesad, Mohammad; Haghpanah, Seyyed Arash
2017-10-01
In this paper, 3D finite element (FE) modeling is employed in order to predict extraocular muscle activation and investigate force coordination in various motions of the eye orbit. A continuum constitutive hyperelastic model is employed for material description in dynamic modeling of the extraocular muscles (EOMs). Two significant features of this model are accurate mass modeling with the FE method and stimulating the EOMs for motion through a muscle activation parameter. In order to validate the eye model, a forward dynamics simulation of eye motion is carried out by varying the muscle activation. Furthermore, to realize muscle activation prediction in various eye motions, two different tracking-based inverse controllers are proposed. The performance of these two inverse controllers is investigated according to the resulting muscle force magnitudes and muscle force coordination. The simulation results are compared with the available experimental data and well-known existing neurological laws. The comparison supports both the validity of the model and its predictions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
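The flavor of such an input sensitivity analysis can be sketched as follows: permute one input at a time and record the drop in predictive accuracy, which works identically for numeric and nominal variables. The logistic model and synthetic trauma-like data are stand-ins; this is not the paper's exact procedure.

```python
# Permutation-style input sensitivity analysis on a stand-in survival model.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([rng.normal(size=n),              # numeric, e.g. blood pressure
                     rng.integers(0, 3, size=n),      # nominal, e.g. injury type
                     rng.normal(size=n)])             # numeric, e.g. age
logit = 1.2 * X[:, 0] - 0.8 * (X[:, 1] == 2) + 0.1 * X[:, 2]
y = (logit + rng.normal(scale=0.5, size=n)) > 0      # survived yes/no

def fit_logreg(X, y, iters=300, lr=0.05):
    """Plain gradient-ascent logistic regression (stand-in for the network)."""
    A = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w += lr * A.T @ (y - p) / len(y)
    return w

def accuracy(w, X, y):
    A = np.column_stack([np.ones(len(X)), X])
    return np.mean(((1.0 / (1.0 + np.exp(-A @ w))) > 0.5) == y)

w = fit_logreg(X, y)
base = accuracy(w, X, y)
for j, name in enumerate(["blood pressure", "injury type", "age"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])     # break this input's link to the outcome
    print(f"sensitivity of '{name}': {base - accuracy(w, Xp, y):+.3f}")
```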
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction, which has been successfully applied in many domains such as bioinformatics and information retrieval. Link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on the Distance Dependent Chinese Restaurant Process (DDCRP) model, which enables us to utilize information about the topological structure of the network such as shortest paths and connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for the link prediction problem.
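A sketch of the DDCRP prior at the heart of the method: each node draws a link to another node with probability decaying in their network distance (or to itself with weight alpha), and clusters are the connected chains of links. The distance matrix and decay function below are invented, and the Gibbs sweep over link assignments is omitted.

```python
# Draw one link assignment per node under a distance-dependent CRP prior.
import numpy as np

rng = np.random.default_rng(5)
n, alpha = 8, 1.0                           # nodes; self-link (new-cluster) weight
D = rng.integers(1, 5, size=(n, n)).astype(float)
D = np.triu(D, 1); D = D + D.T              # symmetric stand-in for path distances

def decay(d, a=1.0):                        # exponential decay function f(d)
    return np.exp(-d / a)

links = np.empty(n, dtype=int)
for i in range(n):
    w = decay(D[i])
    w[i] = alpha                            # weight of node i linking to itself
    links[i] = rng.choice(n, p=w / w.sum())

def cluster_id(i):
    """Follow links to the cycle they end in; the cycle labels the cluster."""
    path, seen = [], {}
    while i not in seen:
        seen[i] = len(path); path.append(i); i = links[i]
    return int(min(path[seen[i]:]))

print("links   :", links)
print("clusters:", [cluster_id(i) for i in range(n)])
```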
Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model
NASA Astrophysics Data System (ADS)
Sahragard, Nasrolah; Ramli, Abdul Rahman B.; Hamiruce Marhaban, Mohammad; Mansor, Shattri B.
2011-02-01
Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light, including its failure to consider ambient light, the effects of light reflected off the ground, and context-specific information. A previously developed color model has been tested only for a few geographical locations in North America, and its validity for other places in the world is open to question. Moreover, existing surface reflectance models are not easily applied to outdoor images. A reflectance model with combined diffuse and specular reflection in normalized HSV color space could be used to predict color. In this paper, a new daylight color model is developed that gives the color of daylight for a broad range of sky conditions and suits the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces are estimated using the developed color model and a surface reflection function. The results are shown to be highly reliable.
NASA Astrophysics Data System (ADS)
Luo, Junhui; Wu, Chao; Liu, Xianlin; Mi, Decai; Zeng, Fuquan; Zeng, Yongjun
2018-01-01
At present, the prediction of soft foundation settlement mostly uses exponential-curve and hyperbolic deferred approximation methods, and the correlation between their results is poor. The application of neural networks in this area also has limitations, and none of the models used in existing cases adopted the TS (Takagi-Sugeno) fuzzy neural network, whose calculation combines the characteristics of fuzzy systems and neural networks to realize mutually compatible methods. At the same time, a developed and optimized calculation program is convenient for engineering designers. Taking the prediction and analysis of soft foundation settlement of gully soft soil in a granite area along the Guangxi Guihe road as an example, a fuzzy neural network model is established and verified to explore its applicability. The TS fuzzy neural network is used to construct the settlement and deformation prediction model, and the corresponding time response function is established to calculate and analyze the settlement of the soft foundation. The results show that the model's short-term settlement predictions are accurate and the final settlement prediction has engineering reference value.
PREDICTION OF MULTICOMPONENT INORGANIC ATMOSPHERIC AEROSOL BEHAVIOR. (R824793)
Many existing models calculate the composition of the atmospheric aerosol system by solving a set of algebraic equations based on reversible reactions derived from thermodynamic equilibrium. Some models rely on an a priori knowledge of the presence of components in certain relati...
Luo, Gang; Stone, Bryan L; Johnson, Michael D; Nkoy, Flory L
2016-03-07
In young children, bronchiolitis is the most common illness resulting in hospitalization. For children less than age 2, bronchiolitis incurs an annual total inpatient cost of $1.73 billion. Each year in the United States, 287,000 emergency department (ED) visits occur because of bronchiolitis, with a hospital admission rate of 32%-40%. Due to a lack of evidence and objective criteria for managing bronchiolitis, ED disposition decisions (hospital admission or discharge to home) are often made subjectively, resulting in significant practice variation. Studies reviewing admission need suggest that up to 29% of admissions from the ED are unnecessary. About 6% of ED discharges for bronchiolitis result in ED returns with admission. These inappropriate dispositions waste limited health care resources, increase patient and parental distress, expose patients to iatrogenic risks, and worsen outcomes. Existing clinical guidelines for bronchiolitis offer limited improvement in patient outcomes. Methodological shortcomings include that the guidelines provide no specific thresholds for ED decisions to admit or to discharge, have an insufficient level of detail, and do not account for differences in patient and illness characteristics including co-morbidities. Predictive models are frequently used to complement clinical guidelines, reduce practice variation, and improve clinicians' decision making. Used in real time, predictive models can present objective criteria supported by historical data for an individualized disease management plan and guide admission decisions. However, existing predictive models for ED patients with bronchiolitis have limitations, including low accuracy and the assumption that the actual ED disposition decision was appropriate. To date, no operational definition of appropriate admission exists. No model has been built based on appropriate admissions, which include both actual admissions that were necessary and actual ED discharges that were unsafe. The goal of this study is to develop a predictive model to guide appropriate hospital admission for ED patients with bronchiolitis. This study will: (1) develop an operational definition of appropriate hospital admission for ED patients with bronchiolitis, (2) develop and test the accuracy of a new model to predict appropriate hospital admission for an ED patient with bronchiolitis, and (3) conduct simulations to estimate the impact of using the model on bronchiolitis outcomes. We are currently extracting administrative and clinical data from the enterprise data warehouse of an integrated health care system. Our goal is to finish this study by the end of 2019. This study will produce a new predictive model that can be operationalized to guide and improve disposition decisions for ED patients with bronchiolitis. Broad use of the model would reduce iatrogenic risk, patient and parental distress, health care use, and costs and improve outcomes for bronchiolitis patients.
Levy, David; Fergus, Cristin; Rudov, Lindsey; McCormick-Ricket, Iben; Carton, Thomas
2016-02-01
Despite the presence of tobacco control policies, Louisiana continues to experience a high smoking burden and elevated smoking-attributable deaths. The SimSmoke model provides projections of these health outcomes in the face of existing and expanded (simulated) tobacco control polices. The SimSmoke model utilizes population data, smoking rates, and various tobacco control policy measures from Louisiana to predict smoking prevalence and smoking-attributable deaths. The model begins in 1993 and estimates are projected through 2054. The model is validated against existing Louisiana smoking prevalence data. The most powerful individual policy measure for reducing smoking prevalence is cigarette excise tax. However, a comprehensive cessation treatment policy is predicted to save the most lives. A combination of tobacco control policies provides the greatest reduction in smoking prevalence and smoking-attributable deaths. The existing Louisiana excise tax ranks as one of the lowest in the country and the legislature is against further increases. Alternative policy measures aimed at lowering prevalence and attributable deaths are: cessation treatments, comprehensive smoke-free policies, and limiting youth access. These three policies have a substantial effect on smoking prevalence and attributable deaths and are likely to encounter more favor in the Louisiana legislature than increasing the state excise tax.
A link prediction approach to cancer drug sensitivity prediction.
Turki, Turki; Wei, Zhi
2017-10-03
Predicting the response to a drug for cancer patients based on genomic information is an important problem in modern clinical oncology. The problem persists in part because many available drug sensitivity prediction algorithms neither use higher-quality cancer cell lines nor adopt new feature representations, both of which can lead to more accurate prediction of drug responses. By accurately predicting drug responses in cancer, oncologists gain a more complete understanding of the effective treatments for each patient, which is a core goal in precision medicine. In this paper, we model cancer drug sensitivity as a link prediction problem, which is shown to be an effective technique. We evaluate our proposed link prediction algorithms and compare them with an existing drug sensitivity prediction approach based on clinical trial data. The experimental results based on the clinical trial data show the stability of our link prediction algorithms, which yield the highest area under the ROC curve (AUC) and are statistically significant. We propose a link prediction approach to obtain a new feature representation. Compared with an existing approach, the results show that incorporating the new feature representation into the link prediction algorithms significantly improves performance.
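To make the link-prediction framing concrete, the sketch below scores unobserved (cell line, drug) pairs with a simple neighbour-vote rule on a synthetic bipartite sensitivity matrix and evaluates with AUC, the metric reported above. The scoring rule is generic, not the authors' algorithm.

```python
# Bipartite link prediction on a synthetic cell-line x drug sensitivity matrix.
import numpy as np

rng = np.random.default_rng(6)
n_lines, n_drugs = 30, 10
R = (rng.random((n_lines, n_drugs)) < 0.3).astype(float)   # 1 = sensitive link
mask = rng.random(R.shape) < 0.2                           # held-out test pairs
train = R * ~mask                                          # observed training links

sim = train @ train.T                                      # co-response similarity
np.fill_diagonal(sim, 0)
scores = sim @ train                                       # neighbour-vote scores

s, y = scores[mask], R[mask]                               # evaluate held-out pairs
pos, neg = s[y == 1], s[y == 0]
auc = (np.mean(pos[:, None] > neg[None, :])
       + 0.5 * np.mean(pos[:, None] == neg[None, :]))      # ties count half
print(f"link-prediction AUC: {auc:.3f}")
```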
AI techniques in geomagnetic storm forecasting
NASA Astrophysics Data System (ADS)
Lundstedt, Henrik
This review deals with how geomagnetic storms can be predicted with the use of Artificial Intelligence (AI) techniques. Today many different AI techniques have been developed, such as symbolic systems (expert and fuzzy systems) and connectionist systems (neural networks). Integrations of AI techniques also exist, so-called Intelligent Hybrid Systems (IHS). These systems are capable of learning the mathematical functions underlying the operation of non-linear dynamic systems and also of explaining the knowledge they have learned. Very few such powerful systems exist at present. Two examples are the Magnetospheric Specification Forecast Model of Rice University and the Lund Space Weather Model of Lund University. Various attempts to predict geomagnetic storms on long to short time scales are reviewed in this article. Predictions a month to days ahead most often use solar data as input. The first SOHO data are now available. Due to their high temporal and spatial resolution, new solar physics has been revealed. These SOHO data might lead to a breakthrough in these predictions. Predictions hours ahead and shorter rely on real-time solar wind data. WIND gives us real-time data for only part of the day. However, with the launch of the ACE spacecraft in 1997, real-time data during all 24 hours will be available. That might lead to a second breakthrough for predictions of geomagnetic storms.
Modeling the NF-κB mediated inflammatory response predicts cytokine waves in tissue
2011-01-01
Background Waves propagating in "excitable media" are a reliable way to transmit signals in space. A fascinating example where living cells comprise such a medium is Dictyostelium D., which propagates waves of chemoattractant to attract distant cells. While neutrophils chemotax in a similar fashion to Dictyostelium D., it is unclear whether chemoattractant waves exist in mammalian tissues and what mechanisms could propagate them. Results We propose that chemoattractant cytokine waves may naturally develop as a result of the NF-κB response. Using a heuristic mathematical model of NF-κB-like circuits coupled in space, we show that the known characteristics of the NF-κB response favor cytokine waves. Conclusions While the propagating wave of cytokines is generally beneficial for inflammation resolution, our model predicts that special conditions exist that can cause chronic inflammation and recurrence of the acute inflammatory response. PMID:21771307
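A generic excitable-medium sketch conveys the wave mechanism: FitzHugh-Nagumo kinetics with diffusion of the activator stand in for the spatially coupled NF-κB-like circuits (this is not the authors' model). A local stimulus at one end launches a pulse that travels down the tissue.

```python
# 1-D excitable medium: a cytokine-like pulse launched by a local stimulus.
import numpy as np

nx, dx, dt, D = 200, 1.0, 0.05, 1.0
u = np.zeros(nx)            # activator ("cytokine") level along a strip of tissue
w = np.zeros(nx)            # slow recovery variable (negative feedback)
u[:5] = 1.0                 # local inflammatory stimulus at one end

for _ in range(4000):       # integrate to t = 200
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    lap[0] = u[1] - u[0]; lap[-1] = u[-2] - u[-1]          # no-flux boundaries
    du = u * (1 - u) * (u - 0.1) - w + D * lap / dx ** 2   # excitable kinetics
    dw = 0.005 * (0.5 * u - w)                             # slow feedback
    u += dt * du
    w += dt * dw

print(f"cytokine pulse is centred near cell index {np.argmax(u)}")
```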
Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin
The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite-dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions about the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
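A stripped-down version of the state-space idea: a scalar latent irradiance state with linear dynamics, a Kalman filter run on noisy measurements, and the filter's predict step used as the short-term forecast. The dynamics and noise variances, which the paper estimates with filter-based EM, are fixed by hand here.

```python
# Scalar Kalman filter used for one-step-ahead irradiance prediction.
import numpy as np

rng = np.random.default_rng(7)
T, a, c, q, r = 300, 0.98, 10.0, 4.0, 9.0     # steps, dynamics, process/meas. variance

truth = np.empty(T); truth[0] = 500.0          # synthetic irradiance (W/m^2)
for t in range(1, T):
    truth[t] = a * truth[t - 1] + c + rng.normal(scale=np.sqrt(q))
meas = truth + rng.normal(scale=np.sqrt(r), size=T)

x, P = meas[0], 10.0                           # filter initialisation
preds = np.empty(T); preds[0] = x
for t in range(1, T):
    x_pred, P_pred = a * x + c, a * a * P + q  # predict step = one-step forecast
    preds[t] = x_pred
    K = P_pred / (P_pred + r)                  # Kalman gain
    x = x_pred + K * (meas[t] - x_pred)        # measurement update
    P = (1 - K) * P_pred

rmse = np.sqrt(np.mean((preds[1:] - truth[1:]) ** 2))
print(f"one-step-ahead RMSE: {rmse:.2f} W/m^2")
```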
NASA Technical Reports Server (NTRS)
Perkey, D. J.; Kreitzberg, C. W.
1984-01-01
The dynamic prediction model, along with its macro-processor capability and data flow system, from the Drexel Limited-Area and Mesoscale Prediction System (LAMPS) was converted and recoded for the Perkin-Elmer 3220. The previous version of this model was written for the Control Data Corporation 7600 and CRAY-1a computer environment which existed until recently at the National Center for Atmospheric Research. The purpose of this conversion was to prepare LAMPS for porting to computer environments other than that encountered at NCAR. The emphasis was shifted from programming tasks to model simulation and evaluation tests.
NASA Technical Reports Server (NTRS)
Solomon, P. M.; De Zafra, R.; Parrish, A.; Barrett, J. W.
1984-01-01
Ground-based observations of a mm-wave spectral line at 278 GHz have yielded stratospheric chlorine monoxide column density diurnal variation records which indicate that the mixing ratio and column density of this compound above 30 km are about 20 percent lower than model predictions based on 2.1 parts/billion of total stratospheric chlorine. The observed day-to-night variation is, however, in good agreement with recent model predictions, both confirming the existence of a nighttime reservoir for chlorine and verifying the predicted general rate of its storage and retrieval.
Estella Gilbert; James A. Powell; Jesse A. Logan; Barbara J. Bentz
2004-01-01
In all organisms, phenotypic variability is an evolutionary stipulation. Because the development of poikilothermic organisms depends directly on the temperature of their habitat, environmental variability is also an integral factor in models of their phenology. In this paper we present two existing phenology models, the distributed delay model and the Sharpe and...
Chehrazi, Ehsan; Sharif, Alireza; Omidkhah, Mohammadreza; Karimi, Mohammad
2017-10-25
Theoretical approaches that accurately predict the gas permeation behavior of nanotube-containing mixed matrix membranes (nanotube-MMMs) are scarce. This is mainly due to ignoring the effects of nanotube/matrix interfacial characteristics in the existing theories. In this paper, based on the analogy of thermal conduction in polymer composites containing nanotubes, we develop a model to describe gas permeation through nanotube-MMMs. Two new parameters, "interfacial thickness" (a_int) and "interfacial permeation resistance" (R_int), are introduced to account for the role of nanotube/matrix interfacial interactions in the proposed model. The obtained values of a_int, independent of the nature of the permeate gas, increased with increasing nanotube aspect ratio and polymer-nanotube interfacial strength. An excellent correlation between the values of a_int and the polymer-nanotube interaction parameter, χ, helped to accurately reproduce the existing experimental data from the literature without the need to resort to any adjustable parameter. The data include 10 sets of CO2/CH4 permeation, 12 sets of CO2/N2 permeation, 3 sets of CO2/O2 permeation, and 2 sets of CO2/H2 permeation through different nanotube-MMMs. Moreover, the average absolute relative errors between the experimental data and the predicted values of the proposed model are very small (less than 5%) in comparison with those of the existing models in the literature. To the best of our knowledge, this is the first study where such a systematic comparison between model predictions and such extensive experimental data is presented. Finally, the new way of assessing gas permeation data presented in the current work would be a simple alternative to the complex approaches usually utilized to estimate interfacial thickness in polymer composites.
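The resistance analogy can be sketched with a simple series/parallel composite: gas transport through a nanotube pathway sees the tube resistance plus two interfacial resistances (R_int), while an interfacial layer of thickness a_int inflates the effective filler fraction. The mixing rule and all numbers below are illustrative assumptions, not the paper's model.

```python
# Illustrative series/parallel resistance sketch for nanotube-MMM permeation.
def effective_permeability(P_m, P_t, phi, aspect, a_int, d_t, R_int):
    """P_m, P_t: matrix and tube permeabilities; phi: tube volume fraction;
    a_int, d_t: interfacial thickness and tube diameter (same units);
    R_int: interfacial permeation resistance."""
    # The interfacial layer inflates the effective filler volume fraction.
    phi_eff = phi * ((d_t + 2 * a_int) / d_t) ** 2
    # Tube pathway: tube resistance (reduced by aspect ratio) + two interfaces.
    P_tube_path = 1.0 / (1.0 / (P_t * aspect) + 2.0 * R_int)
    # Parallel combination of the matrix and tube pathways.
    return (1 - phi_eff) * P_m + phi_eff * P_tube_path

# Hypothetical CO2 permeation example (permeabilities in Barrer, lengths in nm)
P_eff = effective_permeability(P_m=8.0, P_t=3000.0, phi=0.02, aspect=100,
                               a_int=5.0, d_t=20.0, R_int=0.05)
print(f"effective CO2 permeability: {P_eff:.2f} Barrer")
```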
Entraining IDyOT: Timing in the Information Dynamics of Thinking
Forth, Jamie; Agres, Kat; Purver, Matthew; Wiggins, Geraint A.
2016-01-01
We present a novel hypothetical account of entrainment in music and language, in context of the Information Dynamics of Thinking model, IDyOT. The extended model affords an alternative view of entrainment, and its companion term, pulse, from earlier accounts. The model is based on hierarchical, statistical prediction, modeling expectations of both what an event will be and when it will happen. As such, it constitutes a kind of predictive coding, with a particular novel hypothetical implementation. Here, we focus on the model's mechanism for predicting when a perceptual event will happen, given an existing sequence of past events, which may be musical or linguistic. We propose a range of tests to validate or falsify the model, at various different levels of abstraction, and argue that computational modeling in general, and this model in particular, can offer a means of providing limited but useful evidence for evolutionary hypotheses. PMID:27803682
Mbeutcha, Aurélie; Mathieu, Romain; Rouprêt, Morgan; Gust, Kilian M; Briganti, Alberto; Karakiewicz, Pierre I; Shariat, Shahrokh F
2016-10-01
In the context of customized patient care for upper tract urothelial carcinoma (UTUC), decision-making could be facilitated by risk assessment and prediction tools. The aim of this study was to provide a critical overview of existing predictive models and to review emerging promising prognostic factors for UTUC. A literature search of articles published in English from January 2000 to June 2016 was performed using PubMed. Studies on risk group stratification models and predictive tools in UTUC were selected, together with studies on predictive factors and biomarkers associated with advanced-stage UTUC and oncological outcomes after surgery. Various predictive tools have been described for advanced-stage UTUC assessment, disease recurrence and cancer-specific survival (CSS). Most of these models are based on well-established prognostic factors such as tumor stage, grade and lymph node (LN) metastasis, but some also integrate newly described prognostic factors and biomarkers. These new prediction tools seem to reach a high level of accuracy, but they lack external validation and decision-making analysis. The combinations of patient-, pathology- and surgery-related factors together with novel biomarkers have led to promising predictive tools for oncological outcomes in UTUC. However, external validation of these predictive models is a prerequisite before their introduction into daily practice. New models predicting response to therapy are urgently needed to allow accurate and safe individualized management in this heterogeneous disease.
Cochran, Susan D.; Mays, Vickie M.
2011-01-01
Existing models of attitude-behavior relationships, including the Health Belief Model, the Theory of Reasoned Action, and the Self-Efficacy Theory, are increasingly being used by psychologists to predict human immunodeficiency virus (HIV)-related risk behaviors. The authors briefly highlight some of the difficulties that might arise in applying these models to predicting the risk behaviors of African Americans. These social psychological models tend to emphasize the importance of individualistic, direct control of behavioral choices and deemphasize factors, such as racism and poverty, particularly relevant to that segment of the African American population most at risk for HIV infection. Applications of these models without taking into account the unique issues associated with behavioral choices within the African American community may fail to capture the relevant determinants of risk behaviors. PMID:23529205
Analysis of Highly-Resolved Simulations of 2-D Humps Toward Improvement of Second-Moment Closures
NASA Technical Reports Server (NTRS)
Jeyapaul, Elbert; Rumsey, Christopher
2013-01-01
Fully resolved simulation data of flow separation over 2-D humps has been used to analyze the modeling terms in second-moment closures of the Reynolds-averaged Navier-Stokes equations. Existing models for the pressure-strain and dissipation terms have been analyzed using a priori calculations. All pressure-strain models are incorrect in the high-strain region near separation, although a better match is observed downstream, well into the separated-flow region. Near-wall inhomogeneity causes pressure-strain models to predict incorrect signs for the normal components close to the wall. In a posteriori computations, full Reynolds stress and explicit algebraic Reynolds stress models predict the separation point with varying degrees of success. However, as with one- and two-equation models, the separation bubble size is invariably over-predicted.
Adjemian, Jennifer C Z; Girvetz, Evan H; Beckett, Laurel; Foley, Janet E
2006-01-01
More than 20 species of fleas in California are implicated as potential vectors of Yersinia pestis. Extremely limited spatial data exist for plague vectors, a key component to understanding where the greatest risks for human, domestic animal, and wildlife health exist. This study increases the spatial data available for 13 potential plague vectors by using the ecological niche modeling system Genetic Algorithm for Rule-Set Production (GARP) to predict their respective distributions. Because the available sample sizes in our data set varied greatly from one species to another, we also performed an analysis of the robustness of GARP by using the data available for the flea Oropsylla montana (Baker) to quantify the effects that sample size and the chosen explanatory variables have on the final species distribution map. GARP effectively modeled the distributions of 13 vector species. Furthermore, our analyses show that all of these modeled ranges are robust, with a sample size of six fleas or greater not significantly impacting the percentage of the in-state area where the flea was predicted to be found, or the testing accuracy of the model. The results of this study will help guide the sampling efforts of future studies focusing on plague vectors.
Yatsuya, Hiroshi; Li, Yuanying; Hirakawa, Yoshihisa; Ota, Atsuhiko; Matsunaga, Masaaki; Haregot, Hilawe Esayas; Chiang, Chifa; Zhang, Yan; Tamakoshi, Koji; Toyoshima, Hideaki; Aoyama, Atsuko
2018-03-17
Relatively little evidence exists for type 2 diabetes mellitus (T2DM) prediction models from long-term follow-up studies in East Asians. This study aims to develop a point-based prediction model for 10-year risk of developing T2DM in middle-aged Japanese men. We followed 3,540 male participants of the Aichi Workers' Cohort Study, who were aged 35-64 years and were free of diabetes in 2002, until March 31, 2015. Baseline age, body mass index (BMI), smoking status, alcohol consumption, regular exercise, medication for dyslipidemia, diabetes family history, and blood levels of triglycerides (TG), high density lipoprotein cholesterol (HDLC) and fasting blood glucose (FBG) were examined using a Cox proportional hazards model. Variables significantly associated with T2DM in univariable models were simultaneously entered in a multivariable model for determination of the final model using backward variable selection. Performance of an existing T2DM model when applied to the current dataset was compared to that obtained in the present study's model. During the median follow-up of 12.2 years, 342 incident T2DM cases were documented. The prediction system using points assigned to age, BMI, smoking status, diabetes family history, and TG and FBG showed reasonable discrimination (c-index: 0.77) and goodness-of-fit (Hosmer-Lemeshow test, P = 0.22). The present model outperformed the previous one in the present subjects. The point system, once validated in other populations, could be applied to middle-aged Japanese male workers to identify those at high risk of developing T2DM. Further investigation is also required to examine whether the use of this system will reduce incidence.
Iino, Chikara; Mikami, Tatsuya; Igarashi, Takasato; Aihara, Tomoyuki; Ishii, Kentaro; Sakamoto, Jyuichi; Tono, Hiroshi; Fukuda, Shinsaku
2016-11-01
Multiple scoring systems have been developed to predict outcomes in patients with upper gastrointestinal bleeding. We determined how well these and a newly established scoring model predict the need for therapeutic intervention, excluding transfusion, in Japanese patients with upper gastrointestinal bleeding. We reviewed data from 212 consecutive patients with upper gastrointestinal bleeding. Patients requiring endoscopic intervention, operation, or interventional radiology were allocated to the therapeutic intervention group. Firstly, we compared areas under the curve for the Glasgow-Blatchford, Clinical Rockall, and AIMS65 scores. Secondly, the scores and factors likely associated with upper gastrointestinal bleeding were analyzed with a logistic regression analysis to form a new scoring model. Thirdly, the new model and the existing model were investigated to evaluate their usefulness. Therapeutic intervention was required in 109 patients (51.4%). The Glasgow-Blatchford score was superior to both the Clinical Rockall and AIMS65 scores for predicting therapeutic intervention need (area under the curve, 0.75 [95% confidence interval, 0.69-0.81] vs 0.53 [0.46-0.61] and 0.52 [0.44-0.60], respectively). Multivariate logistic regression analysis retained seven significant predictors in the model: systolic blood pressure <100 mmHg, syncope, hematemesis, hemoglobin <10 g/dL, blood urea nitrogen ≥22.4 mg/dL, estimated glomerular filtration rate ≤60 mL/min per 1.73 m², and antiplatelet medication. Based on these variables, we established a new scoring model with superior discrimination to those of existing scoring systems (area under the curve, 0.85 [0.80-0.90]). We developed a superior scoring model for identifying therapeutic intervention need in Japanese patients with upper gastrointestinal bleeding. © 2016 Japan Gastroenterological Endoscopy Society.
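To make the bedside use of such a point score concrete, the sketch below scores the seven predictors named above. The published point weights are not reproduced here, so each predictor contributes one point; that weighting is an assumption for illustration only.

```python
def gi_bleed_score(sbp, syncope, hematemesis, hb, bun, egfr, antiplatelet):
    """Toy point score over the seven predictors named in the abstract;
    one point per predictor is an illustrative assumption, not the
    published weighting."""
    points = 0
    points += sbp < 100            # systolic blood pressure < 100 mmHg
    points += bool(syncope)
    points += bool(hematemesis)
    points += hb < 10.0            # hemoglobin < 10 g/dL
    points += bun >= 22.4          # blood urea nitrogen >= 22.4 mg/dL
    points += egfr <= 60           # eGFR <= 60 mL/min per 1.73 m^2
    points += bool(antiplatelet)
    return points

# A hypotensive patient with hematemesis, anemia, and elevated BUN
print(gi_bleed_score(sbp=92, syncope=False, hematemesis=True,
                     hb=9.1, bun=30.0, egfr=75, antiplatelet=False))  # -> 4
```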
Marco A. Contreras; Russell A. Parsons; Woodam Chung
2012-01-01
Land managers have been using fire behavior and simulation models to assist in several fire management tasks. These widely-used models use average attributes to make stand-level predictions without considering spatial variability of fuels within a stand. Consequently, as the existing models have limitations in adequately modeling crown fire initiation and propagation,...
Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M
2017-08-01
Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.
NASA Technical Reports Server (NTRS)
Jacobson, I. D.
1978-01-01
The framework for a model of travel demand which will be useful in predicting the total market for air travel between two cities is discussed. Variables to be used in determining the need for air transportation where none currently exists and the effect of changes in system characteristics on attracting latent demand are identified. Existing models are examined in order to provide insight into their strong points and shortcomings. Much of the existing behavioral research in travel demand is incorporated to allow the inclusion of non-economic factors, such as convenience. The model developed is characterized as a market segmentation model. This is a consequence of the strengths of disaggregation and its natural evolution to a usable aggregate formulation. The need for this approach both pedagogically and mathematically is discussed.
Improving Localization Accuracy: Successive Measurements Error Modeling
Abu Ali, Najah; Abu-Elkheir, Mervat
2015-01-01
Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
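A minimal sketch of the two ingredients described above, assuming a one-dimensional position trace: autoregressive coefficients are estimated from the Yule-Walker equations, and a first-order Gauss-Markov step predicts the next position from the current one. The synthetic trace and variable names are illustrative, not the paper's data.

```python
import numpy as np

def fit_ar_yule_walker(x, p):
    """Estimate AR(p) coefficients from a 1-D position trace via the
    Yule-Walker equations (toy version; real traces are 2-D or 3-D)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])  # autocovariances
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:p + 1])

# First-order Gauss-Markov prediction: next position from current position only
trace = 0.1 * np.cumsum(np.random.randn(500))   # synthetic mobility trace
a = fit_ar_yule_walker(trace, p=1)
mu = trace.mean()
print("predicted next position:", mu + a[0] * (trace[-1] - mu))
```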
Modeling postshock evolution of large electropores
NASA Astrophysics Data System (ADS)
Neu, John C.; Krassowska, Wanda
2003-02-01
The Smoluchowski equation (SE), which describes the evolution of pores created by electric shocks, cannot be applied to modeling large and long-lived pores for two reasons: (1) it does not predict pores of radius above 20 nm without also predicting membrane rupture; (2) it does not predict postshock growth of pores. This study proposes a model in which pores are coupled by membrane tension, resulting in a nonlinear generalization of SE. The predictions of the model are explored using examples of homogeneous (all pore radii r are equal) and heterogeneous (0 ≤ r ≤ r_max) distributions of pores. Pores in a homogeneous population either shrink to zero or assume a stable radius corresponding to the minimum of the bilayer energy. For a heterogeneous population, such a stable radius does not exist. All pores, except r_max, shrink to zero and r_max grows to infinity. However, the unbounded growth of r_max is not physical because the number of pores per cell decreases in time and the continuum model loses validity. When the continuum formulation is replaced by the discrete one, the model predicts the coarsening process: all pores, except r_max, shrink to zero and r_max assumes a stable radius. Thus, the model with tension-coupled pores does not predict membrane rupture and the predicted postshock growth of pores is consistent with experimental evidence.
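For reference, the Smoluchowski equation discussed above is commonly written as an advection-diffusion equation for the pore-radius distribution n(r, t); the notation below is assumed, not quoted from the paper:

```latex
\frac{\partial n}{\partial t}
  = D \frac{\partial}{\partial r}
    \left( \frac{\partial n}{\partial r}
         + \frac{n}{kT}\,\frac{\partial W}{\partial r} \right) + S(r)
```

Here W(r) is the pore energy and S(r) a pore creation/destruction term. The tension-coupled model makes W depend on the whole pore population through the effective membrane tension, which is what turns this linear equation into the nonlinear generalization described above.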
The predictability of consumer visitation patterns
NASA Astrophysics Data System (ADS)
Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban
2013-04-01
We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population.
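A minimal sketch of the next-location machinery discussed above: a first-order Markov model fitted to one shopper's visit sequence, optionally blended with population-level transition probabilities. The merchant names and the mixing weight alpha are illustrative assumptions.

```python
from collections import Counter, defaultdict

def fit_markov(visits):
    """First-order Markov transition probabilities over merchant IDs."""
    counts = defaultdict(Counter)
    for a, b in zip(visits, visits[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def predict_next(model, current, population=None, alpha=0.8):
    """Blend individual and population-level transitions (alpha is assumed)."""
    indiv = model.get(current, {})
    pop = (population or {}).get(current, {})
    scores = {m: alpha * indiv.get(m, 0.0) + (1 - alpha) * pop.get(m, 0.0)
              for m in set(indiv) | set(pop)}
    return max(scores, key=scores.get) if scores else None

visits = ["cafe", "grocery", "cafe", "gym", "cafe", "grocery"]
print(predict_next(fit_markov(visits), "cafe"))  # -> 'grocery'
```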
Modeling carbon and nitrogen biogeochemistry in forest ecosystems
Changsheng Li; Carl Trettin; Ge Sun; Steve McNulty; Klaus Butterbach-Bahl
2005-01-01
A forest biogeochemical model, Forest-DNDC, was developed to quantify carbon sequestration in and trace gas emissions from forest ecosystems. Forest-DNDC was constructed by integrating two existing models, PnET and DNDC, with several new features including nitrification, a forest litter layer, and soil freezing and thawing. PnET is a forest physiological model predicting...
Carcinogenicity and Mutagenicity Data: New Initiatives to ...
Current models for prediction of chemical carcinogenicity and mutagenicity rely upon a relatively small number of publicly available data resources, where the data being modeled are highly summarized and aggregated representations of the actual experimental results. A number of new initiatives are underway to improve access to existing public carcinogenicity and mutagenicity data for use in modeling, as well as to encourage new approaches to the use of data in modeling. Rodent bioassay results from the NIEHS National Toxicology Program (NTP) and the Berkeley Carcinogenic Potency Database (CPDB) have provided the largest public data resources for building carcinogenicity prediction models to date. However, relatively few and limited representations of these data have actually informed existing models. Initiatives, such as EPA's DSSTox Database Network, offer elaborated and quality-reviewed presentations of the CPDB and expanded data linkages and coverage of chemical space for carcinogenicity and mutagenicity. In particular, the latest published DSSTox CPDBAS structure-data file includes a number of species-specific and summary activity fields, including a species-specific normalized score for carcinogenic potency (TD50) and various weighted summary activities. These data are being incorporated into PubChem to provide broad
Integrated modelling of H-mode pedestal and confinement in JET-ILW
NASA Astrophysics Data System (ADS)
Saarelma, S.; Challis, C. D.; Garzotti, L.; Frassinetti, L.; Maggi, C. F.; Romanelli, M.; Stokes, C.; Contributors, JET
2018-01-01
A pedestal prediction model, Europed, is built on the existing EPED1 model by coupling it with a core transport simulation using a Bohm-gyroBohm transport model to self-consistently predict a JET-ILW power scan for hybrid plasmas that display weaker power degradation than the IPB98(y, 2) scaling of the energy confinement time. The weak power degradation is reproduced in the coupled core-pedestal simulation. The coupled core-pedestal model is further tested for a 3.0 MA plasma with the highest stored energy achieved in JET-ILW so far, giving a prediction of the stored plasma energy within the error margins of the measured experimental value. A pedestal density prediction model based on neutral penetration is tested on a JET-ILW database, giving a prediction with an average error of 17% from the experimental data when a parameter taking into account the fuelling rate is added to the model. However, the model fails to reproduce the power dependence of the pedestal density, implying missing transport physics in the model. The future JET-ILW deuterium campaign with increased heating power is predicted to reach a plasma energy of 11 MJ, which would correspond to 11-13 MW of fusion power in an equivalent deuterium-tritium plasma, but with isotope effects on pedestal stability and core transport ignored.
Prediction models for successful external cephalic version: a systematic review.
Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein
2015-12-01
To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected different predictor variables together with their defined significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was their impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and the fetal head being palpable. One model was assessed using discrimination and calibration using internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.
Prediction versus aetiology: common pitfalls and how to avoid them.
van Diepen, Merel; Ramspek, Chava L; Jager, Kitty J; Zoccali, Carmine; Dekker, Friedo W
2017-04-01
Prediction research is a distinct field of epidemiologic research, which should be clearly separated from aetiological research. Both prediction and aetiology make use of multivariable modelling, but the underlying research aim and interpretation of results are very different. Aetiology aims at uncovering the causal effect of a specific risk factor on an outcome, adjusting for confounding factors that are selected based on pre-existing knowledge of causal relations. In contrast, prediction aims at accurately predicting the risk of an outcome using multiple predictors collectively, where the final prediction model is usually based on statistically significant, but not necessarily causal, associations in the data at hand. In both scientific and clinical practice, however, the two are often confused, resulting in poor-quality publications with limited interpretability and applicability. A major problem is the frequently encountered aetiological interpretation of prediction results, where individual variables in a prediction model are attributed causal meaning. This article stresses the differences in use and interpretation of aetiological and prediction studies, and gives examples of common pitfalls. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
T-Epitope Designer: A HLA-peptide binding prediction server.
Kangueane, Pandjassarame; Sakharkar, Meena Kishore
2005-05-15
The current challenge in synthetic vaccine design is the development of a methodology to identify and test short antigen peptides as potential T-cell epitopes. Recently, we described a HLA-peptide binding model (using structural properties) capable of predicting peptides binding to any HLA allele. Consequently, we have developed a web server named T-EPITOPE DESIGNER to facilitate HLA-peptide binding prediction. The prediction server is based on a model that defines peptide binding pockets using information gleaned from X-ray crystal structures of HLA-peptide complexes, followed by the estimation of peptide binding to binding pockets. Thus, the prediction server enables the calculation of peptide binding to HLA alleles. This model is superior to many existing methods because of its potential application to any given HLA allele whose sequence is clearly defined. The web server finds potential application in T cell epitope vaccine design. http://www.bioinformation.net/ted/
Modelling proteins’ hidden conformations to predict antibiotic resistance
Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.
2016-01-01
TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM’s specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models’ prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design. PMID:27708258
Prediction of global and local model quality in CASP8 using the ModFOLD server.
McGuffin, Liam J
2009-01-01
The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine learning based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of the global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/. Copyright 2009 Wiley-Liss, Inc.
Bruyndonckx, Robin; Hens, Niel; Verheij, Theo Jm; Aerts, Marc; Ieven, Margareta; Butler, Christopher C; Little, Paul; Goossens, Herman; Coenen, Samuel
2018-05-01
Accurate prediction of the course of an acute cough episode could curb antibiotic overprescribing, but is still a major challenge in primary care. The authors set out to develop a new prediction rule for poor outcome (re-consultation with new or worsened symptoms, or hospital admission) in adults presenting to primary care with acute cough. Data were collected from 2604 adults presenting to primary care with acute cough or symptoms suggestive of lower respiratory tract infection (LRTI) within the Genomics to combat Resistance against Antibiotics in Community-acquired LRTI in Europe (GRACE; www.grace-lrti.org) Network of Excellence. Important signs and symptoms for the new prediction rule were found by combining random forest and logistic regression modelling. Performance to predict poor outcome in acute cough patients was compared with that of existing prediction rules, using the models' area under the receiver operator characteristic curve (AUC), and any improvement obtained by including additional test results (C-reactive protein [CRP], blood urea nitrogen [BUN], chest radiography, or aetiology) was evaluated using the same methodology. The new prediction rule (RISSC85) included the baseline Risk of poor outcome, Interference with daily activities, number of years stopped Smoking (> or < 45 years), severity of Sputum, presence of Crackles, and diastolic blood pressure (> or < 85 mmHg). Though performance of RISSC85 was moderate (sensitivity 62%, specificity 59%, positive predictive value 27%, negative predictive value 86%, AUC 0.63, 95% confidence interval [CI] = 0.61 to 0.67), it outperformed all existing prediction rules used today (highest AUC 0.53, 95% CI = 0.51 to 0.56), and could not be significantly improved by including additional test results (highest AUC 0.64, 95% CI = 0.62 to 0.68). The new prediction rule outperforms all existing alternatives in predicting poor outcome in adult patients presenting to primary care with acute cough and could not be improved by including additional test results. © British Journal of General Practice 2018.
The spatial structure of a nonlinear receptive field.
Schwartz, Gregory W; Okawa, Haruhisa; Dunn, Felice A; Morgan, Josh L; Kerschensteiner, Daniel; Wong, Rachel O; Rieke, Fred
2012-11-01
Understanding a sensory system implies the ability to predict responses to a variety of inputs from a common model. In the retina, this includes predicting how the integration of signals across visual space shapes the outputs of retinal ganglion cells. Existing models of this process generalize poorly to predict responses to new stimuli. This failure arises in part from properties of the ganglion cell response that are not well captured by standard receptive-field mapping techniques: nonlinear spatial integration and fine-scale heterogeneities in spatial sampling. Here we characterize a ganglion cell's spatial receptive field using a mechanistic model based on measurements of the physiological properties and connectivity of only the primary excitatory circuitry of the retina. The resulting simplified circuit model successfully predicts ganglion-cell responses to a variety of spatial patterns and thus provides a direct correspondence between circuit connectivity and retinal output.
Sakoda, Lori C; Henderson, Louise M; Caverly, Tanner J; Wernli, Karen J; Katki, Hormuzd A
2017-12-01
Risk prediction models may be useful for facilitating effective and high-quality decision-making at critical steps in the lung cancer screening process. This review provides a current overview of published lung cancer risk prediction models and their applications to lung cancer screening and highlights both challenges and strategies for improving their predictive performance and use in clinical practice. Since the 2011 publication of the National Lung Screening Trial results, numerous prediction models have been proposed to estimate the probability of developing or dying from lung cancer or the probability that a pulmonary nodule is malignant. Respective models appear to exhibit high discriminatory accuracy in identifying individuals at highest risk of lung cancer or differentiating malignant from benign pulmonary nodules. However, validation and critical comparison of the performance of these models in independent populations are limited. Little is also known about the extent to which risk prediction models are being applied in clinical practice and influencing decision-making processes and outcomes related to lung cancer screening. Current evidence is insufficient to determine which lung cancer risk prediction models are most clinically useful and how to best implement their use to optimize screening effectiveness and quality. To address these knowledge gaps, future research should be directed toward validating and enhancing existing risk prediction models for lung cancer and evaluating the application of model-based risk calculators and its corresponding impact on screening processes and outcomes.
Kohlmayer, Florian; Prasser, Fabian; Kuhn, Klaus A
2015-12-01
With the ARX data anonymization tool structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter method significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that may be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adopts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models which are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
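A toy version of the two-step coding model examined above: generalization of quasi-identifiers followed by suppression of records in equivalence classes smaller than k. Column names, generalization rules, and k are illustrative; ARX's optimal search over transformations is not reproduced.

```python
import pandas as pd

def anonymize(df, quasi_ids, generalize, k=5):
    """Generalize quasi-identifier columns, then suppress records whose
    equivalence class (same generalized values) has fewer than k members."""
    out = df.copy()
    for col, fn in generalize.items():
        out[col] = out[col].map(fn)
    sizes = out.groupby(quasi_ids)[quasi_ids[0]].transform("size")
    return out[sizes >= k]  # suppression step

df = pd.DataFrame({"age": [23, 27, 31, 36, 62],
                   "zip": ["1234", "1233", "1299", "1255", "9900"]})
rules = {"age": lambda a: f"{10 * (a // 10)}-{10 * (a // 10) + 9}",  # decade bins
         "zip": lambda z: z[:2] + "**"}                              # truncate digits
print(anonymize(df, ["age", "zip"], rules, k=2))  # the lone 60-69/99** record is suppressed
```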
NASA Astrophysics Data System (ADS)
Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.
1996-02-01
Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.
Predicting Negative Discipline in Traditional Families: A Multi-Dimensional Stress Model.
ERIC Educational Resources Information Center
Fisher, Philip A.
An attempt is made to integrate existing theories of family violence by introducing the concept of family role stress. Role stressors may be defined as factors inhibiting the enactment of family roles. Multiple regression analyses were performed on data from 190 families to test a hypothesis involving the prediction of negative discipline at…
NASA Astrophysics Data System (ADS)
Huang, Lu; Jiang, Yuyang; Chen, Yuzong
2017-01-01
Synergistic drug combinations enable enhanced therapeutics. Their discovery typically involves the measurement and assessment of the drug combination index (CI), which can be facilitated by the development and application of in-silico CI predictive tools. In this work, we developed and tested the ability of a mathematical model of the drug-targeted EGFR-ERK pathway to predict CIs and to analyze multiple synergistic drug combinations against observations. Our mathematical model was validated against the literature-reported signaling, drug response dynamics, and EGFR-MEK drug combination effect. The predicted CIs and combination therapeutic effects of the EGFR-BRaf, BRaf-MEK, FTI-MEK, and FTI-BRaf inhibitor combinations showed consistent synergism. Our results suggest that existing pathway models may be potentially extended for developing drug-targeted pathway models to predict drug combination CI values, isobolograms, and drug-response surfaces as well as to analyze the dynamics of individual and combinations of drugs. With our model, the efficacy of potential drug combinations can be predicted. Our method complements the developed in-silico methods (e.g. the chemogenomic profile and the statistically inferred network models) by predicting drug combination effects from the perspective of pathway dynamics using experimental or validated molecular kinetic constants, thereby facilitating the collective prediction of drug combination effects in diverse ranges of disease systems.
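The abstract does not spell out how CI is defined in this work; a widely used definition is the Chou-Talalay combination index, where d_1 and d_2 are the doses used in combination to reach effect level x and D_{x,1}, D_{x,2} are the single-agent doses producing the same effect. CI < 1 indicates synergy, CI = 1 additivity, and CI > 1 antagonism:

```latex
\mathrm{CI} = \frac{d_1}{D_{x,1}} + \frac{d_2}{D_{x,2}}
```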
Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris
2016-09-01
Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, high dimensionality, sparsity, and class imbalance of electronic health data and the complexity of risk quantification challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population, by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the international classification of diseases 9th-revision clinical modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from California, State Inpatient Databases, Healthcare Cost and Utilization Project between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data-driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression, resulting in models that are easier to interpret by fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the use of a Tree-Lasso model was as competitive in terms of accuracy (measured by area under the receiver operating characteristic curve-AUC) as the traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided more interpretable models in terms of high-level diagnoses. Additionally, interpretations of models are in accordance with existing medical understanding of pediatric readmission. Best performing models have similar performances, reaching AUC values of 0.783 and 0.779 for traditional Lasso and Tree-Lasso, respectively. However, information loss of Lasso models is 0.35 bits higher compared to the Tree-Lasso model. We propose a method for building predictive models applicable for the detection of readmission risk based on Electronic Health records. Integration of domain knowledge (in the form of the ICD-9-CM taxonomy) and a data-driven, sparse predictive algorithm (Tree-Lasso Logistic Regression) resulted in an increase of interpretability of the resulting model. The models are interpreted for the readmission prediction problem in the general pediatric population in California, as well as several important subpopulations, and the interpretations of models comply with existing medical understanding of pediatric readmission. Finally, quantitative assessment of the interpretability of the models is given, that is beyond simple counts of selected low-level features. Copyright © 2016 Elsevier B.V. All rights reserved.
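A minimal sketch of the data-driven half of this approach: plain L1-regularized (Lasso) logistic regression on sparse, high-dimensional features, evaluated by AUC. scikit-learn has no Tree-Lasso, so the ICD-9-CM hierarchy regularization is not reproduced; the synthetic data stands in for the discharge records.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced, high-dimensional stand-in for ICD-9-CM indicator features
X, y = make_classification(n_samples=5000, n_features=300, n_informative=15,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)  # Lasso baseline
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}, nonzero coefficients: {(clf.coef_ != 0).sum()}")
```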
Validating spatiotemporal predictions of an important pest of small grains.
Merrill, Scott C; Holtzer, Thomas O; Peairs, Frank B; Lester, Philip J
2015-01-01
Arthropod pests are typically managed using tactics applied uniformly to the whole field. Precision pest management applies tactics under the assumption that within-field pest pressure differences exist. This approach allows for more precise and judicious use of scouting resources and management tactics. For example, a portion of a field delineated as attractive to pests may be selected to receive extra monitoring attention. Likely because of the high variability in pest dynamics, little attention has been given to developing precision pest prediction models. Here, multimodel synthesis was used to develop a spatiotemporal model predicting the density of a key pest of wheat, the Russian wheat aphid, Diuraphis noxia (Kurdjumov). Spatially implicit and spatially explicit models were synthesized to generate spatiotemporal pest pressure predictions. Cross-validation and field validation were used to confirm model efficacy. A strong within-field signal depicting aphid density was confirmed with low prediction errors. Results show that the within-field model predictions will provide higher-quality information than would be provided by traditional field scouting. With improvements to the broad-scale model component, the model synthesis approach and resulting tool could improve pest management strategy and provide a template for the development of spatially explicit pest pressure models. © 2014 Society of Chemical Industry.
Multiscale Modeling of Angiogenesis and Predictive Capacity
NASA Astrophysics Data System (ADS)
Pillay, Samara; Byrne, Helen; Maini, Philip
Tumors induce the growth of new blood vessels from existing vasculature through angiogenesis. Using an agent-based approach, we model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, motility of cells through biased random walks, and include birth and death-like processes. We use the transition probabilities associated with the discrete model and a discrete conservation equation for cell occupancy to determine collective cell behavior, in terms of partial differential equations (PDEs). We derive three PDE models incorporating single, multi-species and no volume exclusion. By fitting the parameters in our PDE models and other well-established continuum models to agent-based simulations during a specific time period, and then comparing the outputs from the PDE models and agent-based model at later times, we aim to determine how well the PDE models predict the future behavior of the agent-based model. We also determine whether predictions differ across PDE models and the significance of those differences. This may impact drug development strategies based on PDE models.
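For the single-species, unbiased-walk case with volume exclusion and logistic birth/death, the mean-field limit of such agent-based rules is commonly the Fisher-KPP equation; this is a standard result for simple exclusion processes and stands in here for the paper's three PDE variants, which are not reproduced:

```latex
\frac{\partial C}{\partial t} = D\,\nabla^{2} C + \lambda\,C\,(1 - C)
```

Here C(x, t) is the average cell occupancy, D is set by the hopping rate and lattice spacing, and λ is the net proliferation rate; notably, exclusion leaves the diffusion term linear in this single-species case.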
Computational intelligence in earth sciences and environmental applications: issues and challenges.
Cherkassky, V; Krasnopolsky, V; Solomatine, D P; Valdes, J
2006-03-01
This paper introduces a generic theoretical framework for predictive learning, and relates it to data-driven and learning applications in earth and environmental sciences. The issues of data quality, selection of the error function, incorporation of the predictive learning methods into the existing modeling frameworks, expert knowledge, model uncertainty, and other application-domain specific problems are discussed. A brief overview of the papers in the Special Issue is provided, followed by discussion of open issues and directions for future research.
Simulating the evolution of glyphosate resistance in grains farming in northern Australia.
Thornby, David F; Walker, Steve R
2009-09-01
The evolution of resistance to herbicides is a substantial problem in contemporary agriculture. Solutions to this problem generally consist of the use of practices to control the resistant population once it evolves, and/or to institute preventative measures before populations become resistant. Herbicide resistance evolves in populations over years or decades, so predicting the effectiveness of preventative strategies in particular relies on computational modelling approaches. While models of herbicide resistance already exist, none deals with the complex regional variability in the northern Australian sub-tropical grains farming region. For this reason, a new computer model was developed. The model consists of an age- and stage-structured population model of weeds, with an existing crop model used to simulate plant growth and competition, and extensions to the crop model added to simulate seed bank ecology and population genetics factors. Using awnless barnyard grass (Echinochloa colona) as a test case, the model was used to investigate the likely rate of evolution under conditions expected to produce high selection pressure. Simulating continuous summer fallows with glyphosate used as the only means of weed control resulted in predicted resistant weed populations after approx. 15 years. Validation of the model against the paddock history for the first real-world glyphosate-resistant awnless barnyard grass population shows that the model predicted resistance evolution to within a few years of the real situation. This validation work shows that empirical validation of herbicide resistance models is problematic. However, the model simulates the complexities of sub-tropical grains farming in Australia well, and can be used to investigate, generate and improve glyphosate resistance prevention strategies.
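To make the decade-scale dynamics concrete, the sketch below runs a toy single-locus, random-mating recursion for a resistance allele under yearly herbicide selection. The fitness values are invented for illustration; the model described above layers seed-bank ecology, age/stage structure, and regional agronomy on top of this kind of core.

```python
def resistance_trajectory(p0=1e-6, w_rr=1.0, w_rs=0.6, w_ss=0.3, years=30):
    """Yearly allele-frequency recursion p' = (p^2*w_RR + p(1-p)*w_RS) / w_bar
    under genotype fitnesses w_RR, w_RS, w_SS (illustrative values only)."""
    p, freqs = p0, []
    for _ in range(years):
        w_bar = p * p * w_rr + 2 * p * (1 - p) * w_rs + (1 - p) ** 2 * w_ss
        p = (p * p * w_rr + p * (1 - p) * w_rs) / w_bar
        freqs.append(p)
    return freqs

traj = resistance_trajectory()
print(next(year for year, p in enumerate(traj, 1) if p > 0.5))
# roughly two decades to dominance with these toy fitnesses
```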
NASA Astrophysics Data System (ADS)
Torki-Harchegani, Mehdi; Ghanbarian, Davoud; Sadeghi, Morteza
2015-08-01
To design new dryers or improve existing drying equipment, accurate values of mass transfer parameters are of great importance. In this study, an experimental and theoretical investigation of drying whole lemons was carried out. The whole lemons were dried in a convective hot air dryer at different air temperatures (50, 60 and 75 °C) and a constant air velocity (1 m s⁻¹). In the theoretical consideration, three moisture transfer models, including the Dincer and Dost model, the Bi-G correlation approach, and the conventional solution of Fick's second law of diffusion, were used to determine moisture transfer parameters and predict dimensionless moisture content curves. The predicted results were then compared with the experimental data, and the highest degree of prediction accuracy was achieved by the Dincer and Dost model.
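For reference, the conventional Fick solution mentioned above, for a sphere of radius r with effective moisture diffusivity D_eff, expresses the dimensionless moisture ratio as a series; the Dincer and Dost and Bi-G formulations work instead through lag factors and Biot numbers and are not shown:

```latex
\mathrm{MR} = \frac{M_t - M_e}{M_0 - M_e}
  = \frac{6}{\pi^{2}} \sum_{n=1}^{\infty} \frac{1}{n^{2}}
    \exp\!\left( -\frac{n^{2} \pi^{2} D_{\mathrm{eff}}\, t}{r^{2}} \right)
```

For long drying times only the n = 1 term is usually retained, which makes ln(MR) linear in t and lets D_eff be read off a semi-log plot.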
Brightness perception of unrelated self-luminous colors.
Withouck, Martijn; Smet, Kevin A G; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Koenderink, Jan; Hanselaer, Peter
2013-06-01
The perception of brightness of unrelated self-luminous colored stimuli of the same luminance has been investigated. The Helmholtz-Kohlrausch (H-K) effect, i.e., an increase in brightness perception due to an increase in saturation, is clearly observed. This brightness perception is compared with the calculated brightness according to six existing vision models, color appearance models, and models based on the concept of equivalent luminance. Although these models included the H-K effect and half of them were developed to work with unrelated colors, none of the models seemed to be able to fully predict the perceived brightness. A tentative solution to increase the prediction accuracy of the color appearance model CAM97u, developed by Hunt, is presented.
[GSH fermentation process modeling using entropy-criterion based RBF neural network model].
Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng
2008-05-01
The prediction accuracy and generalization of GSH fermentation process models are often degraded by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, making it a promising option for GSH fermentation process modeling.
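A minimal Gaussian RBF network in the usual least-squares form, to make the architecture concrete: centers from k-means, Gaussian hidden units, and a linear readout. The entropy-criterion parameter learning proposed above is not reproduced, and the synthetic data is a stand-in for fermentation measurements.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_centers=15, width=1.0):
    """RBF network: k-means centers, Gaussian units, least-squares readout."""
    centers = KMeans(n_clusters=n_centers, n_init=10,
                     random_state=0).fit(X).cluster_centers_
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
                 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

def predict_rbf(X, centers, w, width=1.0):
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
                 / (2 * width ** 2))
    return Phi @ w

X = np.linspace(0, 10, 200).reshape(-1, 1)          # stand-in time axis
y = np.sin(X).ravel() + 0.1 * np.random.randn(200)  # noisy process output
centers, w = fit_rbf(X, y)
print(predict_rbf(X[:3], centers, w))
```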
An analytical study of aircraft lateral-directional handling qualities using pilot models
NASA Technical Reports Server (NTRS)
Adams, J. J.; Moore, F. L.
1976-01-01
A procedure for predicting lateral-directional pilot ratings on the basis of the characteristics of the pilot model and the closed-loop system characteristics is demonstrated. A correlation is shown to exist between experimentally obtained pilot ratings and the computed pilot ratings.
DOT National Transportation Integrated Search
2000-01-01
The ability to visualize data has grown immensely as the speed and functionality of Geographic Information Systems (GIS) have increased. Now, with modeling software and GIS, planners are able to view a prediction of the future traffic demands in thei...
Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J
2017-05-01
Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70 and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
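A sketch of the "off the shelf", non-dictionary pipeline the abstract describes: features are drawn from the report text itself (here TF-IDF n-grams) rather than a medical dictionary, feeding a stock classifier. The reports and labels below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for free-text pathology reports (1 = cancer case)
reports = ["infiltrating ductal carcinoma present",
           "benign fibrous tissue, no malignancy identified",
           "adenocarcinoma, moderately differentiated",
           "normal mucosa without atypia"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)
print(clf.predict(["poorly differentiated carcinoma seen"]))  # -> [1]
```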
Carlisle, D.M.; Hawkins, C.P.
2008-01-01
Inferences drawn from regional bioassessments could be strengthened by integrating data from different monitoring programs. We combined data from the US Geological Survey National Water-Quality Assessment (NAWQA) program and the US Environmental Protection Agency Wadeable Streams Assessment (WSA) to expand the scope of an existing River InVertebrate Prediction and Classification System (RIVPACS)-type predictive model and to assess the biological condition of streams across the western US in a variety of landuse classes. We used model-derived estimates of taxon-specific probabilities of capture and observed taxon occurrences to identify taxa that were absent from sites where they were predicted to occur (decreasers) and taxa that were present at sites where they were not predicted to occur (increasers). Integration of 87 NAWQA reference sites increased the scope of the existing WSA predictive model to include larger streams and later season sampling. Biological condition at 336 NAWQA test sites was significantly (p < 0.001) associated with basin land use and tended to be lower in basins with intensive landuse modification (e.g., mixed, urban, and agricultural basins) than in basins with relatively undisturbed land use (e.g., forested basins). Of the 437 taxa observed among reference and test sites, 180 (41%) were increasers or decreasers. In general, decreasers had a different set of ecological traits (functional traits or tolerance values) than did increasers. We could predict whether a taxon was a decreaser or an increaser based on just a few traits, e.g., desiccation resistance, timing of larval development, habit, and thermal preference, but we were unable to predict the type of basin land use from trait states present in invertebrate assemblages. Refined characterization of traits might be required before bioassessment data can be used routinely to aid in the diagnoses of the causes of biological impairment. © 2008 by The North American Benthological Society.
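A minimal sketch of the RIVPACS-style O/E score underlying this analysis: expected richness E sums the model's taxon-specific probabilities of capture (conventionally restricted to taxa with p ≥ 0.5), and O counts how many of those taxa were actually collected; "decreasers" are exactly the predicted-but-absent taxa. The taxa and probabilities below are invented.

```python
def oe_index(prob_capture, observed, threshold=0.5):
    """O/E: observed over expected richness among taxa with p >= threshold."""
    taxa = [t for t, p in prob_capture.items() if p >= threshold]
    E = sum(prob_capture[t] for t in taxa)
    O = sum(1 for t in taxa if t in observed)
    return O / E if E else float("nan")

probs = {"Baetis": 0.9, "Drunella": 0.7, "Simulium": 0.6, "Rhyacophila": 0.3}
print(round(oe_index(probs, observed={"Baetis", "Simulium"}), 2))  # 2 / 2.2 = 0.91
```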
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
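As a rough illustration of the linear-theory calculation described above, the following numpy sketch computes a prediction standard deviation from observation sensitivities and evaluates an OPR-style percent change when one observation is omitted. The array shapes and names are assumptions for exposition, not the OPR-PPR program's actual interface.

```python
import numpy as np

def prediction_std(J, w, z):
    """Linear-theory prediction standard deviation.
    J : (n_obs, n_par) sensitivities of observations to parameters
    w : (n_obs,)       observation weights (1 / error variance)
    z : (n_par,)       sensitivities of the prediction to parameters"""
    cov_par = np.linalg.inv(J.T @ (w[:, None] * J))   # parameter covariance
    return float(np.sqrt(z @ cov_par @ z))

def opr_omit(J, w, z, i):
    """Percent increase in prediction std when observation i is omitted."""
    keep = np.ones(len(w), dtype=bool)
    keep[i] = False
    s_all = prediction_std(J, w, z)
    s_omit = prediction_std(J[keep], w[keep], z)
    return 100.0 * (s_omit - s_all) / s_all

rng = np.random.default_rng(0)
J = rng.standard_normal((20, 3))        # toy sensitivities
w = np.ones(20)
z = np.array([1.0, 0.5, -0.2])
print(f"OPR for omitting obs 0: {opr_omit(J, w, z, 0):.2f}%")
```

The PPR statistic follows the same pattern, with potential parameter information entering as an additional weighted row rather than an omitted one.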
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in their spectral and spatial characteristics. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when applying HEVC. Each spectral band of an HS image is treated as an individual frame of a video. We compare the proposed method with mainstream encoders. The experimental results are validated on three types of HS datasets with different wavelength ranges. The proposed method outperforms existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102
A Lightweight Radio Propagation Model for Vehicular Communication in Road Tunnels
Shamim, Azra; Shamshirband, Shahaboddin; Raymond Choo, Kim-Kwang
2016-01-01
Radio propagation models (RPMs) are generally employed in Vehicular Ad Hoc Networks (VANETs) to predict path loss in multiple operating environments (e.g. modern road infrastructure such as flyovers, underpasses and road tunnels). For example, different RPMs have been developed to predict propagation behaviour in road tunnels. However, most existing RPMs for road tunnels are computationally complex and are based on field measurements in frequency bands not suitable for VANET deployment. Furthermore, in tunnel applications, the consequences of moving radio obstacles, such as large buses and delivery trucks, are generally not considered in existing RPMs. This paper proposes a computationally inexpensive RPM with a minimal set of parameters to predict path loss in an acceptable range for road tunnels. The proposed RPM utilizes geometric properties of the tunnel, such as height and width, along with the distance between sender and receiver, to predict the path loss. The proposed RPM also considers the additional attenuation caused by moving radio obstacles in road tunnels, while requiring negligible overhead in terms of computational complexity. To demonstrate the utility of the proposed RPM, we present a comparative summary and evaluate its performance. Specifically, an extensive data-gathering campaign was carried out to evaluate the proposed RPM. The field measurements use the 5 GHz frequency band, which is suitable for vehicular communication. The results demonstrate that a close match exists between the predicted and measured values of path loss. In particular, an average accuracy of 94% is found with R2 = 0.86. PMID:27031989
NASA Astrophysics Data System (ADS)
Wang, Weijie; Lu, Yanmin
2018-03-01
Most existing Collaborative Filtering (CF) algorithms predict a rating as the preference of an active user toward a given item, which is always a decimal fraction. Meanwhile, the actual ratings in most data sets are integers. In this paper, we discuss and demonstrate why rounding can affect common accuracy metrics differently; we show that rounding is a necessary post-processing step for the predicted ratings that eliminates model prediction bias and improves prediction accuracy. In addition, we propose two new rounding approaches based on the predicted rating probability distribution, which can be used to round a predicted rating to an optimal integer rating and achieve better prediction accuracy than the Basic Rounding approach. Extensive experiments on different data sets validate the correctness of our analysis and the effectiveness of the proposed rounding approaches.
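The abstract does not specify how the predicted rating probability distribution is constructed, so the sketch below simply assumes one is available per item. It contrasts basic rounding with choosing the integer rating that minimises the expected error under that distribution, in the spirit of the proposed approaches; the 1..5 scale is an assumption.

```python
import numpy as np

RATINGS = np.arange(1, 6)               # assumed 1..5 integer scale

def basic_rounding(pred):
    """Round each predicted rating to the nearest integer on the scale."""
    return np.clip(np.rint(pred), RATINGS[0], RATINGS[-1]).astype(int)

def distribution_rounding(prob, metric="mae"):
    """Choose the integer rating minimising the expected error under a
    predicted probability distribution over the ratings.
    prob : (n_items, 5) rows of predicted rating probabilities"""
    diff = RATINGS[None, :, None] - RATINGS[None, None, :]
    err = diff ** 2 if metric == "rmse" else np.abs(diff)
    cost = (err * prob[:, None, :]).sum(axis=-1)   # expected error per candidate
    return RATINGS[np.argmin(cost, axis=1)]

prob = np.array([[0.05, 0.10, 0.20, 0.40, 0.25]])  # one item's distribution
print(basic_rounding(np.array([3.7])), distribution_rounding(prob, "mae"))
```

Under absolute error the optimal integer is the distribution's median rating; under squared error it is the integer nearest the distribution's mean, which is why the two criteria can disagree.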
A Signal to Noise Paradox in Climate Predictions
NASA Astrophysics Data System (ADS)
Eade, R.; Scaife, A. A.; Smith, D.; Dunstone, N. J.; MacLachlan, C.; Hermanson, L.; Ruth, C.
2017-12-01
Recent advances in climate modelling have resulted in the achievement of skilful long-range prediction, particularly that associated with the winter circulation over the north Atlantic (e.g. Scaife et al 2014, Stockdale et al 2015, Dunstone et al 2016), including impacts over Europe and North America, and further afield. However, while highly significant and potentially useful skill exists, the signal-to-noise ratio of the ensemble mean to total variability in these ensemble predictions is anomalously small (Scaife et al 2014), and the correlation between the ensemble mean and historical observations exceeds the proportion of predictable variance in the ensemble (Eade et al 2014). This implies that the real world is more predictable than our climate models suggest. Here we discuss a series of hypothesis tests carried out to assess issues with model mechanisms compared to the observed world, and present the latest findings in our attempt to determine the cause of the anomalously weak predicted signals in our seasonal-to-decadal hindcasts.
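A minimal sketch of the diagnostic at the heart of this paradox, the ratio of predictable components of Eade et al. (2014), assuming an array of observed values and a hindcast ensemble; values above one indicate the ensemble mean is better correlated with the real world than the model's own signal-to-noise ratio would imply.

```python
import numpy as np

def ratio_of_predictable_components(obs, ens):
    """RPC: correlation of the ensemble mean with observations divided by
    the square root of the predictable fraction of ensemble variance
    (ensemble-mean variance over total single-member variance).
    obs : (n_years,)           observed index (e.g. winter NAO)
    ens : (n_years, n_members) hindcast ensemble"""
    mean = ens.mean(axis=1)
    r = np.corrcoef(obs, mean)[0, 1]
    sig2 = mean.var(ddof=1)             # predictable (signal) variance
    tot2 = ens.var(ddof=1)              # total variance over members and years
    return r / np.sqrt(sig2 / tot2)

rng = np.random.default_rng(0)
signal = rng.standard_normal(35)                      # predictable component
obs = signal + rng.standard_normal(35)                # real world
ens = 0.3 * signal[:, None] + rng.standard_normal((35, 40))  # weak model signal
print(ratio_of_predictable_components(obs, ens))      # typically > 1 here
```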
Benchmarking Deep Learning Models on Large Healthcare Datasets.
Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan
2018-06-04
Deep learning models (aka Deep Neural Networks) have revolutionized many fields including computer vision, natural language processing and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works exist that have benchmarked the performance of deep learning models against state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks such as mortality prediction, length of stay prediction, and ICD-9 code group prediction using deep learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) publicly available dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data are used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.
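As an illustration of the benchmarking protocol only (not the paper's Super Learner or deep architectures), a minimal scikit-learn sketch comparing a linear baseline against a small feed-forward network by held-out AUC on stand-in data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

def benchmark(X, y, seed=0):
    """Fit each model on a common split and report held-out AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    models = {
        "logistic": LogisticRegression(max_iter=1000),
        "mlp": MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                             random_state=seed),
    }
    return {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
            for name, m in models.items()}

# stand-in for extracted ICU features and a binary mortality label
X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
print(benchmark(X, y))
```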
NASA Technical Reports Server (NTRS)
Schmidt, Rodney C.; Patankar, Suhas V.
1988-01-01
The use of low Reynolds number (LRN) forms of the k-epsilon turbulence model in predicting transitional boundary layer flow characteristic of gas turbine blades is developed. The research presented consists of: (1) an evaluation of two existing models; (2) the development of a modification to current LRN models; and (3) the extensive testing of the proposed model against experimental data. The prediction characteristics and capabilities of the Jones-Launder (1972) and Lam-Bremhorst (1981) LRN k-epsilon models are evaluated with respect to the prediction of transition on flat plates. Next, the mechanism by which the models simulate transition is considered and the need for additional constraints is discussed. Finally, the transition predictions of a new model are compared with a wide range of different experiments, including transitional flows with free-stream turbulence under conditions of flat plate constant velocity, flat plate constant acceleration, flat plate but strongly variable acceleration, and flow around turbine blade test cascades. In general, the calculation procedure yields good agreement with most of the experiments.
The brain, self and society: a social-neuroscience model of predictive processing.
Kelly, Michael P; Kriznik, Natasha M; Kinmonth, Ann Louise; Fletcher, Paul C
2018-05-10
This paper presents a hypothesis about how social interactions shape and influence predictive processing in the brain. The paper integrates concepts from neuroscience and sociology where a gulf presently exists between the ways that each describe the same phenomenon - how the social world is engaged with by thinking humans. We combine the concepts of predictive processing models (also called predictive coding models in the neuroscience literature) with ideal types, typifications and social practice - concepts from the sociological literature. This generates a unified hypothetical framework integrating the social world and hypothesised brain processes. The hypothesis combines aspects of neuroscience and psychology with social theory to show how social behaviors may be "mapped" onto brain processes. It outlines a conceptual framework that connects the two disciplines and that may enable creative dialogue and potential future research.
Prediction of clinical behaviour and treatment for cancers.
Futschik, Matthias E; Sullivan, Mike; Reeve, Anthony; Kasabov, Nikola
2003-01-01
Prediction of clinical behaviour and treatment for cancers is based on the integration of clinical and pathological parameters. Recent reports have demonstrated that gene expression profiling provides a powerful new approach for determining disease outcome. If clinical and microarray data each contain independent information then it should be possible to combine these datasets to gain more accurate prognostic information. Here, we have used existing clinical information and microarray data to generate a combined prognostic model for outcome prediction for diffuse large B-cell lymphoma (DLBCL). A prediction accuracy of 87.5% was achieved. This constitutes a significant improvement compared to the previously most accurate prognostic model with an accuracy of 77.6%. The model introduced here may be generally applicable to the combination of various types of molecular and clinical data for improving medical decision support systems and individualising patient care.
Multi-scale modeling of tsunami flows and tsunami-induced forces
NASA Astrophysics Data System (ADS)
Qin, X.; Motley, M. R.; LeVeque, R. J.; Gonzalez, F. I.
2016-12-01
The modeling of tsunami flows and tsunami-induced forces in coastal communities with the incorporation of the constructed environment is challenging for many numerical modelers because of the scale and complexity of the physical problem. A two-dimensional (2D) depth-averaged model can be efficient for modeling waves offshore but may not be accurate enough to predict the complex flow, with transient variation in the vertical direction, around constructed environments on land. On the other hand, using a more complex three-dimensional model is much more computationally expensive and can become impractical due to the size of the problem and the meshing requirements near the built environment. In this study, a 2D depth-integrated model and a 3D Reynolds Averaged Navier-Stokes (RANS) model are built to model a 1:50 model-scale, idealized community, representative of Seaside, OR, USA, for which existing experimental data are available for comparison. Numerical results from the two models are compared with each other as well as with the experimental measurements. Both models predict the flow parameters (water level, velocity, and momentum flux in the vicinity of the buildings) accurately in general, except for the time period near the initial impact, where the depth-averaged model can fail to capture the complexities in the flow. Forces predicted using direct integration of predicted pressure on structural surfaces from the 3D model and using momentum flux from the 2D model with the constructed environment are compared, which indicates that force prediction from the 2D model is not always reliable in such a complicated case. Force predictions from integration of the pressure are also compared with forces predicted from bare-earth momentum flux calculations to reveal the importance of incorporating the constructed environment in force prediction models.
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
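A minimal sketch of the model-averaging arithmetic MLBMA rests on, assuming each alternative model supplies an information-criterion value (e.g. KIC) plus a predictive mean and variance; the `prior` argument corresponds to the abstract's strategy of assigning smaller prior probabilities to structurally correlated models. All numbers are toy values.

```python
import numpy as np

def mlbma_weights(ic, prior=None):
    """Posterior model probabilities from an information criterion such as
    KIC or BIC; smaller prior values can penalise correlated models."""
    ic = np.asarray(ic, dtype=float)
    prior = np.full(len(ic), 1.0 / len(ic)) if prior is None else np.asarray(prior)
    w = prior * np.exp(-0.5 * (ic - ic.min()))
    return w / w.sum()

def mlbma_predict(means, variances, weights):
    """Model-averaged mean, with within- plus between-model variance."""
    mu = np.sum(weights * means)
    var = np.sum(weights * (variances + (means - mu) ** 2))
    return mu, var

ic = np.array([102.3, 100.0, 107.9])          # toy criterion values
w = mlbma_weights(ic)
print(mlbma_predict(np.array([1.0, 1.4, 0.7]), np.array([0.2, 0.3, 0.1]), w))
```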
Du, Tianchuan; Liao, Li; Wu, Cathy H
2016-12-01
Identifying the residues in a protein that are involved in protein-protein interaction and identifying the contact matrix for a pair of interacting proteins are two computational tasks at different levels of an in-depth analysis of protein-protein interaction. Various methods for solving these two problems have been reported in the literature. However, the interacting residue prediction and contact matrix prediction were handled by and large independently in those existing methods, though intuitively good prediction of interacting residues will help with predicting the contact matrix. In this work, we developed a novel protein interacting residue prediction system, contact matrix-interaction profile hidden Markov model (CM-ipHMM), with the integration of contact matrix prediction and the ipHMM interaction residue prediction. We propose to leverage what is learned from the contact matrix prediction and utilize the predicted contact matrix as "feedback" to enhance the interaction residue prediction. The CM-ipHMM model showed significant improvement over the previous method that uses the ipHMM for predicting interaction residues only. It indicates that the downstream contact matrix prediction could help the interaction site prediction.
Cao, Pengxing
2017-01-01
Models of within-host influenza viral dynamics have contributed to an improved understanding of viral dynamics and antiviral effects over the past decade. Existing models can be classified into two broad types based on the mechanism of viral control: models utilising target cell depletion to limit the progress of infection and models which rely on timely activation of innate and adaptive immune responses to control the infection. In this paper, we compare how two exemplar models based on these different mechanisms behave and investigate how the mechanistic difference affects the assessment and prediction of antiviral treatment. We find that the assumed mechanism for viral control strongly influences the predicted outcomes of treatment. Furthermore, we observe that for the target cell-limited model the assumed drug efficacy strongly influences the predicted treatment outcomes. The area under the viral load curve is identified as the most reliable predictor of drug efficacy, and is robust to model selection. Moreover, with support from previous clinical studies, we suggest that the target cell-limited model is more suitable for modelling in vitro assays or infection in some immunocompromised/immunosuppressed patients while the immune response model is preferred for predicting the infection/antiviral effect in immunocompetent animals/patients. PMID:28933757
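For concreteness, a sketch of the standard target cell-limited model with an antiviral of efficacy eps reducing viral production, together with the area under the (log) viral-load curve that the authors identify as the robust predictor. All parameter values are illustrative placeholders, not the paper's fits.

```python
import numpy as np
from scipy.integrate import solve_ivp

def target_cell_limited(t, y, beta, delta, p, c, eps):
    """Target cell-limited model; an antiviral of efficacy eps (0..1)
    scales down viral production."""
    T, I, V = y
    dT = -beta * T * V                    # infection of target cells
    dI = beta * T * V - delta * I         # infected-cell turnover
    dV = (1.0 - eps) * p * I - c * V      # production vs clearance
    return [dT, dI, dV]

# illustrative parameters and initial state (placeholder values)
sol = solve_ivp(target_cell_limited, (0.0, 10.0), [4e8, 0.0, 10.0],
                args=(2.7e-5, 4.0, 1.2e-2, 3.0, 0.9), max_step=0.05,
                dense_output=True)
t = np.linspace(0.0, 10.0, 501)
V = np.clip(sol.sol(t)[2], 1e-12, None)
auc_log_v = np.trapz(np.log10(V), t)      # treatment-effect summary
print(f"AUC of log10 viral load: {auc_log_v:.2f}")
```

An immune response model would add equations for innate/adaptive compartments; the point of the paper is that the two structures can yield different predicted treatment outcomes from the same data.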
NASA Technical Reports Server (NTRS)
Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.
2016-01-01
Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full-wave simulation results are used to validate the foundational model.
Nixon, Richard M; Bansback, Nick; Stevens, John W; Brennan, Alan; Madan, Jason
2009-01-01
A model is presented to generate a distribution for the probability of an ACR response at six months for a new treatment for rheumatoid arthritis given evidence from a one- or three-month clinical trial. The model is based on published evidence from 11 randomized controlled trials on existing treatments. A hierarchical logistic regression model is used to find the relationship between the proportion of patients achieving ACR20 and ACR50 at one and three months and the proportion at six months. The model is assessed by Bayesian predictive P-values that demonstrate that the model fits the data well. The model can be used to predict the number of patients with an ACR response for proposed six-month clinical trials given data from clinical trials of one or three months duration. Copyright 2008 John Wiley & Sons, Ltd.
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in a direct simulation Monte Carlo (DSMC) method. Collision partner selection based on the random procedure from nearest neighbor particles and deterministic selection of nearest neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in homogeneous gas, 2D equilibrium flow, and Fourier flow problem. Distribution functions for number of particles and collisions in cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
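A toy sketch of two collision-partner selection rules within a single DSMC cell: the classical random pick and the nearest-neighbour pick that the paper uses as baselines. The paper's new time-spacing and relative-movement criteria would slot in as alternative partner-scoring rules in place of the distance; positions here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_pair_random(pos):
    """Classical scheme: two distinct particles chosen at random in the cell."""
    i, j = rng.choice(len(pos), size=2, replace=False)
    return int(i), int(j)

def select_pair_nearest(pos):
    """Pick one particle at random, then its nearest neighbour as partner."""
    i = int(rng.integers(len(pos)))
    d2 = np.sum((pos - pos[i]) ** 2, axis=1)
    d2[i] = np.inf                      # exclude self
    return i, int(np.argmin(d2))

pos = rng.random((30, 2))               # particle positions in one cell
print(select_pair_random(pos), select_pair_nearest(pos))
```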
A Solution to the Cosmic Conundrum including Cosmological Constant and Dark Energy Problems
NASA Astrophysics Data System (ADS)
Singh, A.
2009-12-01
A comprehensive solution to the cosmic conundrum is presented that also resolves key paradoxes of quantum mechanics and relativity. A simple mathematical model, the Gravity Nullification model (GNM), is proposed that integrates the missing physics of the spontaneous relativistic conversion of mass to energy into the existing physics theories, specifically a simplified general theory of relativity. Mechanistic mathematical expressions are derived for a relativistic universe expansion, which predict both the observed linear Hubble expansion in the nearby universe and the accelerating expansion exhibited by the supernova observations. The integrated model addresses the key questions haunting physics and Big Bang cosmology. It also provides a fresh perspective on the misconceived birth and evolution of the universe, especially the creation and dissolution of matter. The proposed model eliminates singularities from existing models and the need for the incredible and unverifiable assumptions including the superluminous inflation scenario, multiple universes, multiple dimensions, Anthropic principle, and quantum gravity. GNM predicts the observed features of the universe without any explicit consideration of time as a governing parameter.
Improving RNA nearest neighbor parameters for helices by going beyond the two-state model.
Spasic, Aleksandar; Berger, Kyle D; Chen, Jonathan L; Seetin, Matthew G; Turner, Douglas H; Mathews, David H
2018-06-01
RNA folding free energy change nearest neighbor parameters are widely used to predict folding stabilities of secondary structures. They were determined by linear regression to datasets of optical melting experiments on small model systems. Traditionally, the optical melting experiments are analyzed assuming a two-state model, i.e. a structure is either complete or denatured. Experimental evidence, however, shows that structures exist in an ensemble of conformations. Partition functions calculated with existing nearest neighbor parameters predict that secondary structures can be partially denatured, which also directly conflicts with the two-state model. Here, a new approach for determining RNA nearest neighbor parameters is presented. Available optical melting data for 34 Watson-Crick helices were fit directly to a partition function model that allows an ensemble of conformations. Fitting parameters were the enthalpy and entropy changes for helix initiation, terminal AU pairs, stacks of Watson-Crick pairs and disordered internal loops. The resulting set of nearest neighbor parameters shows a 38.5% improvement in the sum of residuals in fitting the experimental melting curves compared to the current literature set.
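For context, the classical two-state description that the paper moves beyond can be written in a few lines. The sketch below computes the fraction of strands paired for a self-complementary duplex from assumed enthalpy and entropy changes; the numbers are illustrative, not the fitted nearest neighbor parameters.

```python
import numpy as np

R = 1.987e-3                       # gas constant, kcal/(mol K)

def fraction_paired(T, dH, dS, ct):
    """Two-state fraction of strands paired for a self-complementary duplex,
    2A <-> A2, at total strand concentration ct (M); dH in kcal/mol,
    dS in kcal/(mol K)."""
    K = np.exp(-(dH - T * dS) / (R * T))    # duplex formation constant
    b = 2.0 * K * ct
    return ((2.0 * b + 1.0) - np.sqrt(4.0 * b + 1.0)) / (2.0 * b)

T = np.linspace(273.15, 373.15, 201)
theta = fraction_paired(T, dH=-50.0, dS=-0.14, ct=1e-4)  # illustrative values
tm = T[np.argmin(np.abs(theta - 0.5))]    # crude melting-temperature estimate
print(f"Tm ~ {tm - 273.15:.1f} C")
```

The partition function approach of the paper replaces the single folded state with an ensemble of partially formed helices, so the fitted enthalpies and entropies no longer assume an all-or-none transition.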
Questioning the Faith - Models and Prediction in Stream Restoration (Invited)
NASA Astrophysics Data System (ADS)
Wilcock, P.
2013-12-01
River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.
ERIC Educational Resources Information Center
Paton, David
2006-01-01
Rational choice models of teenage sexual behaviour lead to radically different predictions than do models that assume such behaviour is random. Existing empirical evidence has not been able to distinguish conclusively between these competing models. I use regional data from England between 1998 and 2001 to examine the impact of recent increases in…
Development of a rotor wake-vortex model, volume 1
NASA Technical Reports Server (NTRS)
Majjigi, R. K.; Gliebe, P. R.
1984-01-01
Certain empirical rotor wake and turbulence relationships were developed using existing low speed rotor wake data. A tip vortex model was developed by replacing the annulus wall with a row of image vortices. An axisymmetric turbulence spectrum model, developed in the context of rotor inflow turbulence, was adapted to predict the turbulence spectrum of the stator gust upwash.
Curtis L. VanderSchaaf; Ryan W. McKnight; Thomas R. Fox; H. Lee Allen
2010-01-01
A model form is presented, with regressors selected for inclusion on the basis of biological rationale, to predict how fertilization, precipitation amounts, and overstory stand density affect understory vegetation biomass. Due to time, economic, and logistic constraints, datasets of large sample sizes generally do not exist for understory vegetation. Thus...
Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen
2014-01-01
This paper proposes a novel model for intra coding in High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate-distortion. It utilizes the spatial statistical correlation for optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829
NASA Technical Reports Server (NTRS)
Douglass, Anne R.; Stolarski, Richard S.
1987-01-01
Atmospheric photochemistry models have been used to predict the sensitivity of the ozone layer to various perturbations. These same models also predict concentrations of chemical species in the present day atmosphere which can be compared to observations. Model results for both present day values and sensitivity to perturbation depend upon input data for reaction rates, photodissociation rates, and boundary conditions. A method of combining the results of a Monte Carlo uncertainty analysis with the existing set of present atmospheric species measurements is developed. The method is used to examine the range of values for the sensitivity of ozone to chlorine perturbations that is possible within the currently accepted ranges for input data. It is found that model runs which predict ozone column losses much greater than 10 percent as a result of present fluorocarbon fluxes produce concentrations and column amounts in the present atmosphere which are inconsistent with the measurements for ClO, HCl, NO, NO2, and HNO3.
Mathematical Modeling of Electrochemical Flow Capacitors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyt, NC; Wainright, JS; Savinell, RF
Electrochemical flow capacitors (EFCs) for grid-scale energy storage are a new technology that is beginning to receive interest. Prediction of the expected performance of such systems is important, as modeling can be a useful avenue in the search for design improvements. Models based on circuit analogues exist to predict EFC performance, but these suffer from deficiencies (e.g., a multitude of fitting constants and the ability to analyze only one spatial direction at a time). In this paper, mathematical models based on three-dimensional macroscopic balances (similar to models for porous electrodes) are reported. Unlike existing three-dimensional porous electrode-based approaches for modeling slurry electrodes, advection (i.e., transport associated with bulk fluid motion) of the overpotential is included in order to account for the surface charge at the interface between flowing particles and the electrolyte. Doing so leads to the presence of overpotential boundary layers that control the performance of EFCs. These models were used to predict the charging behavior of an EFC under both flowing and non-flowing conditions. Agreement with experimental data was good, including proper prediction of the steady-state current that is achieved during charging of a flowing EFC. © The Author(s) 2015. Published by ECS.
Sovány, Tamás; Papós, Kitti; Kása, Péter; Ilič, Ilija; Srčič, Stane; Pintye-Hódi, Klára
2013-06-01
The importance of in silico modeling in the pharmaceutical industry is continuously increasing. The aim of the present study was the development of a neural network model for prediction of the postcompressional properties of scored tablets based on the application of existing data sets from our previous studies. Some important process parameters and physicochemical characteristics of the powder mixtures were used as training factors to achieve the best applicability in a wide range of possible compositions. The results demonstrated that, after some pre-processing of the factors, an appropriate prediction performance could be achieved. However, because of the poor extrapolation capacity, broadening of the training data range appears necessary.
A Model of BGA Thermal Fatigue Life Prediction Considering Load Sequence Effects
Hu, Weiwei; Li, Yaqiu; Sun, Yufeng; Mosleh, Ali
2016-01-01
Accurate testing history data is necessary for all fatigue life prediction approaches, but such data are often deficient, especially for microelectronic devices. Additionally, the sequence of individual load cycles plays an important role in physical fatigue damage. However, most existing models, based on the linear damage accumulation rule, ignore these sequence effects. This paper proposes a thermal fatigue life prediction model for ball grid array (BGA) packages that takes the load sequence effects into consideration. For the purpose of improving the availability and accessibility of testing data, a new failure criterion is discussed and verified by simulation and experimentation. The consequences for fatigue under sequence load conditions are shown. PMID:28773980
Personalized Cancer Medicine: An Organoid Approach.
Aboulkheyr Es, Hamidreza; Montazeri, Leila; Aref, Amir Reza; Vosough, Massoud; Baharvand, Hossein
2018-04-01
Personalized cancer therapy applies specific treatments to each patient. Using personalized tumor models with similar characteristics to the original tumors may result in more accurate predictions of drug responses in patients. Tumor organoid models have several advantages over pre-existing models, including conserving the molecular and cellular composition of the original tumor. These advantages highlight the tremendous potential of tumor organoids in personalized cancer therapy, particularly preclinical drug screening and predicting patient responses to selected treatment regimens. Here, we highlight the advantages, challenges, and translational potential of tumor organoids in personalized cancer therapy and focus on gene-drug associations, drug response prediction, and treatment selection. Finally, we discuss how microfluidic technology can contribute to immunotherapy drug screening in tumor organoids. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Free Wake Numerical Simulation for Darrieus Vertical Axis Wind Turbine Performance Prediction
NASA Astrophysics Data System (ADS)
Belu, Radian
2010-11-01
In the last four decades, several aerodynamic prediction models have been formulated for Darrieus wind turbine performance and characteristics. Two families can be identified: stream-tube and vortex models. The paper presents a simplified numerical technique for simulating vertical axis wind turbine flow, based on lifting line theory and a free vortex wake model, including dynamic stall effects, for predicting the performance of a 3-D vertical axis wind turbine. A vortex model is used in which the wake is composed of trailing stream-wise and shedding span-wise vortices, whose strengths are equal to the change in the bound vortex strength as required by the Helmholtz and Kelvin theorems. Performance parameters are computed by application of the Biot-Savart law along with the Kutta-Joukowski theorem and a semi-empirical stall model. We tested the developed model against an adaptation of the earlier multiple stream-tube performance prediction model for Darrieus turbines. Predictions using our method are shown to compare favorably with existing experimental data and the outputs of other numerical models. The method can accurately predict the local and global performance of a vertical axis wind turbine, and can be used in the design and optimization of wind turbines for built-environment applications.
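As a sketch of the core numerical kernel in such free-wake vortex methods, here is the Biot-Savart velocity induced by one straight filament segment, in the closed form given in standard texts (e.g. Katz and Plotkin); the regularisation threshold and the example geometry are assumptions.

```python
import numpy as np

def segment_induced_velocity(p, a, b, gamma, eps=1e-9):
    """Velocity induced at point p by a straight vortex filament from a to b
    carrying circulation gamma (closed-form Biot-Savart for a segment);
    eps regularises the singularity on the filament axis."""
    r1, r2 = p - a, p - b
    n1, n2 = np.linalg.norm(r1), np.linalg.norm(r2)
    denom = n1 * n2 * (n1 * n2 + r1 @ r2)
    if denom < eps:                       # evaluation point on the filament
        return np.zeros(3)
    return gamma / (4.0 * np.pi) * (n1 + n2) / denom * np.cross(r1, r2)

# velocity at a blade point due to one trailing wake segment (toy numbers)
v = segment_induced_velocity(np.array([0.0, 1.0, 0.0]),
                             np.array([-1.0, 0.0, 0.0]),
                             np.array([1.0, 0.0, 0.0]), gamma=1.0)
print(v)      # ~ [0, 0, 0.1125] for this configuration
```

Summing this contribution over all wake segments gives the induced velocity field; the Kutta-Joukowski theorem then converts bound circulation and local relative velocity into blade loads.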
Elissen, Arianne M J; Struijs, Jeroen N; Baan, Caroline A; Ruwaard, Dirk
2015-05-01
To support providers and commissioners in accurately assessing their local populations' health needs, this study produces an overview of Dutch predictive risk models for health care, focusing specifically on the type, combination and relevance of included determinants for achieving the Triple Aim (improved health, better care experience, and lower costs). We conducted a mixed-methods study combining document analyses, interviews and a Delphi study. Predictive risk models were identified based on a web search and expert input. Participating in the study were Dutch experts in predictive risk modelling (interviews; n=11) and experts in healthcare delivery, insurance and/or funding methodology (Delphi panel; n=15). Ten predictive risk models were analysed, comprising 17 unique determinants. Twelve were considered relevant by experts for estimating community health needs. Although some compositional similarities were identified between models, the combination and operationalisation of determinants varied considerably. Existing predictive risk models provide a good starting point, but optimally balancing resources and targeting interventions on the community level will likely require a more holistic approach to health needs assessment. Development of additional determinants, such as measures of people's lifestyle and social network, may require policies pushing the integration of routine data from different (healthcare) sources. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Modelling dimercaptosuccinic acid (DMSA) plasma kinetics in humans.
van Eijkeren, Jan C H; Olie, J Daniël N; Bradberry, Sally M; Vale, J Allister; de Vries, Irma; Meulenbelt, Jan; Hunault, Claudine C
2016-11-01
No kinetic models presently exist which simulate the effect of chelation therapy on lead blood concentrations in lead poisoning. Our aim was to develop a kinetic model that describes the kinetics of dimercaptosuccinic acid (DMSA; succimer), a commonly used chelating agent, that could be used in developing a lead chelating model. This was a kinetic modelling study. We used a two-compartment model, with a non-systemic gastrointestinal compartment (gut lumen) and the whole body as one systemic compartment. The only data available from the literature were used to calibrate the unknown model parameters. The calibrated model was then validated by comparing its predictions with measured data from three different experimental human studies. The model predicted total DMSA plasma and urine concentrations measured in three healthy volunteers after ingestion of DMSA 10 mg/kg. The model was then validated by using data from three other published studies; it predicted concentrations within a factor of two, representing inter-human variability. A simple kinetic model simulating the kinetics of DMSA in humans has been developed and validated. The interest of this model lies in the future potential to use it to predict blood lead concentrations in lead-poisoned patients treated with DMSA.
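A minimal sketch of the two-compartment structure described above: first-order transfer from the gut lumen into a single systemic compartment with first-order elimination. All parameter values are placeholders for illustration, not the paper's calibrated ones.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dmsa_two_compartment(t, y, ka, ke, F):
    """Gut lumen (non-systemic) and whole body (systemic) compartments,
    first-order absorption (ka) and elimination (ke); F = bioavailability."""
    a_gut, a_sys = y
    return [-ka * a_gut,
            F * ka * a_gut - ke * a_sys]

# placeholder parameters (1/h, 1/h, -, L) for a 10 mg/kg dose, 70 kg adult
ka, ke, F, Vd, dose = 0.5, 0.2, 0.2, 15.0, 700.0
sol = solve_ivp(dmsa_two_compartment, (0.0, 24.0), [dose, 0.0],
                args=(ka, ke, F), t_eval=np.linspace(0.0, 24.0, 97))
plasma = sol.y[1] / Vd                  # total DMSA plasma conc., mg/L
print(f"peak ~ {plasma.max():.2f} mg/L at t = {sol.t[plasma.argmax()]:.1f} h")
```

A lead-chelation extension would couple this to a lead kinetic model through a binding term, which is the future use the authors describe.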
Dragovic, Sanja; Vermeulen, Nico P E; Gerets, Helga H; Hewitt, Philip G; Ingelman-Sundberg, Magnus; Park, B Kevin; Juhila, Satu; Snoeys, Jan; Weaver, Richard J
2016-12-01
The current test systems employed by pharmaceutical industry are poorly predictive for drug-induced liver injury (DILI). The 'MIP-DILI' project addresses this situation by the development of innovative preclinical test systems which are both mechanism-based and of physiological, pharmacological and pathological relevance to DILI in humans. An iterative, tiered approach with respect to test compounds, test systems, bioanalysis and systems analysis is adopted to evaluate existing models and develop new models that can provide validated test systems with respect to the prediction of specific forms of DILI and further elucidation of mechanisms. An essential component of this effort is the choice of compound training set that will be used to inform refinement and/or development of new model systems that allow prediction based on knowledge of mechanisms, in a tiered fashion. In this review, we focus on the selection of MIP-DILI training compounds for mechanism-based evaluation of non-clinical prediction of DILI. The selected compounds address both hepatocellular and cholestatic DILI patterns in man, covering a broad range of pharmacologies and chemistries, and taking into account available data on potential DILI mechanisms (e.g. mitochondrial injury, reactive metabolites, biliary transport inhibition, and immune responses). Known mechanisms by which these compounds are believed to cause liver injury have been described, where many if not all drugs in this review appear to exhibit multiple toxicological mechanisms. Thus, the training compounds selection offered a valuable tool to profile DILI mechanisms and to interrogate existing and novel in vitro systems for the prediction of human DILI.
Huang, Zhengxing; Dong, Wei; Duan, Huilong; Liu, Jiquan
2018-05-01
Acute coronary syndrome (ACS), as a common and severe cardiovascular disease, is a leading cause of death and the principal cause of serious long-term disability globally. Clinical risk prediction of ACS is important for early intervention and treatment. Existing ACS risk scoring models are based mainly on a small set of hand-picked risk factors and often dichotomize predictive variables to simplify the score calculation. This study develops a regularized stacked denoising autoencoder (SDAE) model to stratify clinical risks of ACS patients from a large volume of electronic health records (EHR). To capture characteristics of patients at similar risk levels, and to preserve the discriminating information across different risk levels, two constraints are added to the SDAE to make the reconstructed feature representations contain more risk information about patients, which contributes to a better clinical risk prediction result. We validate our approach on a real clinical dataset consisting of 3464 ACS patient samples. The performance of our approach for predicting ACS risk remains robust, reaching 0.868 and 0.73 in terms of AUC and accuracy, respectively. The obtained results show that the proposed approach achieves a competitive performance compared to state-of-the-art models in dealing with the clinical risk prediction problem. In addition, our approach can extract informative risk factors of ACS via a reconstructive learning strategy. Some of these extracted risk factors are not only consistent with existing medical domain knowledge, but also contain suggestive hypotheses that could be validated by further investigations in the medical domain.
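The abstract does not spell out the two regularisers, so the PyTorch sketch below pairs a denoising autoencoder's reconstruction loss with a simple supervised term on the learned codes as a stand-in for the risk-level constraints; dimensions and data are toy assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegularizedSDAE(nn.Module):
    """One denoising layer for brevity; stacking adds more enc/dec pairs."""
    def __init__(self, d_in, d_code, n_risk_levels):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_code), nn.ReLU())
        self.dec = nn.Linear(d_code, d_in)
        self.clf = nn.Linear(d_code, n_risk_levels)  # stand-in constraint

    def forward(self, x, noise=0.2):
        code = self.enc(x + noise * torch.randn_like(x))  # corrupt, then encode
        return self.dec(code), self.clf(code)

def train_step(model, opt, x, y, lam=0.1):
    recon, logits = model(x)
    loss = F.mse_loss(recon, x) + lam * F.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

model = RegularizedSDAE(d_in=120, d_code=32, n_risk_levels=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 120); y = torch.randint(0, 3, (64,))  # toy EHR batch
print(train_step(model, opt, x, y))
```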
Exploring Higher Education Business Models ("If Such a Thing Exists")
ERIC Educational Resources Information Center
Harney, John O.
2013-01-01
The global economic recession has caused students, parents, and policymakers to reevaluate personal and societal investments in higher education--and has prompted the realization that traditional higher ed "business models" may be unsustainable. Predicting a shakeout, most presidents expressed confidence for their own school's ability to…
Lance A. Vickers; Thomas R. Fox; David L. Loftis; David A. Boucugnani
2013-01-01
The difficulty of achieving reliable oak (Quercus spp.) regeneration is well documented. Application of silvicultural techniques to facilitate oak regeneration largely depends on current regeneration potential. A computer model to assess regeneration potential based on existing advanced reproduction in Appalachian hardwoods was developed by David...
Economic Benefits of Predictive Models for Pest Control in Agricultural Crops
USDA-ARS?s Scientific Manuscript database
Various forms of crop models or decision making tools for managing crops have existed for many years. The potential advantage of all of these decision making tools is that more informed and economically improved crop management or decision making is accomplished. However, examination of some of thes...
We incorporate the Regional Atmospheric Chemistry Mechanism (RACM2) into the Community Multiscale Air Quality (CMAQ) hemispheric model and compare model predictions to those obtained using the existing Carbon Bond chemical mechanism with updated toluene chemistry (CB05TU). The RA...
Tachyon cosmology, supernovae data, and the big brake singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keresztes, Z.; Gergely, L. A.; Gorini, V.
2009-04-15
We compare the existing observational data on type Ia supernovae with the evolutions of the Universe predicted by a one-parameter family of tachyon models which we have introduced recently [Phys. Rev. D 69, 123512 (2004)]. Among the set of the trajectories of the model which are compatible with the data there is a consistent subset for which the Universe ends up in a new type of soft cosmological singularity dubbed big brake. This opens up yet another scenario for the future history of the Universe besides the one predicted by the standard ΛCDM model.
Field investigation of the drift shadow
Su, G.W.; Kneafsey, T.J.; Ghezzehei, T.A.; Cook, P.J.; Marshall, B.D.
2006-01-01
The "Drift Shadow" is defined as the relatively drier region that forms below subsurface cavities or drifts in unsaturated rock. Its existence has been predicted through analytical and numerical models of unsaturated flow. However, these theoretical predictions have not been demonstrated empirically to date. In this project we plan to test the drift shadow concept through field investigations and compare our observations to simulations. Based on modeling studies we have an identified a suitable site to perform the study at an inactive mine in a sandstone formation. Pretest modeling studies and preliminary characterization of the site are being used to develop the field scale tests.
Contamination Effects on EUV Optics
NASA Technical Reports Server (NTRS)
Tveekrem, J.
1999-01-01
During ground-based assembly and upon exposure to the space environment, optical surfaces accumulate both particles and molecular condensibles, inevitably resulting in degradation of optical instrument performance. Currently, this performance degradation (and the resulting end-of-life instrument performance) cannot be predicted with sufficient accuracy using existing software tools. Optical design codes exist to calculate instrument performance, but these codes generally assume uncontaminated optical surfaces. Contamination models exist which predict approximate end-of-life contamination levels, but the optical effects of these contamination levels cannot be quantified without detailed information about the optical constants and scattering properties of the contaminant. The problem is particularly pronounced in the extreme ultraviolet (EUV, 300-1,200 Å) and far ultraviolet (FUV, 1,200-2,000 Å) regimes due to a lack of data and a lack of knowledge of the detailed physical and chemical processes involved. Yet it is in precisely these wavelength regimes that accurate predictions are most important, because EUV/FUV instruments are extremely sensitive to contamination.
Assessing cetacean surveys throughout the Mediterranean Sea: a gap analysis in environmental space.
Mannocci, Laura; Roberts, Jason J; Halpin, Patrick N; Authier, Matthieu; Boisseau, Oliver; Bradai, Mohamed Nejmeddine; Cañadas, Ana; Chicote, Carla; David, Léa; Di-Méglio, Nathalie; Fortuna, Caterina M; Frantzis, Alexandros; Gazo, Manel; Genov, Tilen; Hammond, Philip S; Holcer, Draško; Kaschner, Kristin; Kerem, Dani; Lauriano, Giancarlo; Lewis, Tim; Notarbartolo di Sciara, Giuseppe; Panigada, Simone; Raga, Juan Antonio; Scheinin, Aviad; Ridoux, Vincent; Vella, Adriana; Vella, Joseph
2018-02-15
Heterogeneous data collection in the marine environment has led to large gaps in our knowledge of marine species distributions. To fill these gaps, models calibrated on existing data may be used to predict species distributions in unsampled areas, given that available data are sufficiently representative. Our objective was to evaluate the feasibility of mapping cetacean densities across the entire Mediterranean Sea using models calibrated on available survey data and various environmental covariates. We aggregated 302,481 km of line transect survey effort conducted in the Mediterranean Sea within the past 20 years by many organisations. Survey coverage was highly heterogeneous geographically and seasonally: large data gaps were present in the eastern and southern Mediterranean and in non-summer months. We mapped the extent of interpolation versus extrapolation and the proportion of data nearby in environmental space when models calibrated on existing survey data were used for prediction across the entire Mediterranean Sea. Using model predictions to map cetacean densities in the eastern and southern Mediterranean, characterised by warmer, less productive waters, and more intense eddy activity, would lead to potentially unreliable extrapolations. We stress the need for systematic surveys of cetaceans in these environmentally unique Mediterranean waters, particularly in non-summer months.
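A simple numpy sketch of the kind of environmental-space gap analysis described: flagging prediction cells outside the sampled covariate ranges and scoring how much survey data lies nearby. The univariate range test, the distance radius, and the covariate choices are simplifying assumptions, not the study's exact method.

```python
import numpy as np

def extrapolation_mask(train_env, pred_env):
    """Flag prediction cells whose covariates fall outside the ranges
    sampled by the surveys (a univariate definition of extrapolation;
    multivariate gap metrics would tighten this)."""
    lo, hi = train_env.min(axis=0), train_env.max(axis=0)
    outside = (pred_env < lo) | (pred_env > hi)
    return outside.any(axis=1)          # True -> the model extrapolates here

def fraction_nearby(train_env, pred_env, radius=0.5):
    """Proportion of survey data within a standardised distance of each
    prediction point in environmental space."""
    mu, sd = train_env.mean(axis=0), train_env.std(axis=0)
    t = (train_env - mu) / sd
    p = (pred_env - mu) / sd
    d = np.linalg.norm(p[:, None, :] - t[None, :, :], axis=2)
    return (d < radius).mean(axis=1)

rng = np.random.default_rng(0)
train = rng.normal(0, 1, (500, 3))      # e.g. SST, productivity, eddy energy
grid = rng.normal(1, 2, (1000, 3))      # basin-wide prediction grid
print(extrapolation_mask(train, grid).mean(), fraction_nearby(train, grid)[:5])
```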
Electroweak Symmetry Breaking and the Higgs Boson: Confronting Theories at Colliders
NASA Astrophysics Data System (ADS)
Azatov, Aleksandr; Galloway, Jamison
2013-01-01
In this review, we discuss methods of parsing direct information from collider experiments regarding the Higgs boson and describe simple ways in which experimental likelihoods can be consistently reconstructed and interfaced with model predictions in pertinent parameter spaces. We review prevalent scenarios for extending the electroweak symmetry breaking sector and emphasize their predictions for nonstandard Higgs phenomenology that could be observed in Large Hadron Collider (LHC) data if naturalness is realized in particular ways. Specifically we identify how measurements of Higgs couplings can be used to imply the existence of new physics at particular scales within various contexts. The most dominant production and decay modes of the Higgs-like state observed in the early data sets have proven to be consistent with predictions of the Higgs boson of the Standard Model, though interesting directions in subdominant channels still exist and will require our careful attention in further experimental tests. Slightly anomalous rates in certain channels at the early LHC have spurred effort in model building and spectra analyses of particular theories, and we discuss these developments in some detail. Finally, we highlight some parameter spaces of interest in order to give examples of how the data surrounding the new state can most effectively be used to constrain specific models of weak scale physics.
A new model for approximating RNA folding trajectories and population kinetics
NASA Astrophysics Data System (ADS)
Kirkpatrick, Bonnie; Hajiaghayi, Monir; Condon, Anne
2013-01-01
RNA participates both in functional aspects of the cell and in gene regulation. The interactions of these molecules are mediated by their secondary structure which can be viewed as a planar circle graph with arcs for all the chemical bonds between pairs of bases in the RNA sequence. The problem of predicting RNA secondary structure, specifically the chemically most probable structure, has many useful and efficient algorithms. This leaves RNA folding, the problem of predicting the dynamic behavior of RNA structure over time, as the main open problem. RNA folding is important for functional understanding because some RNA molecules change secondary structure in response to interactions with the environment. The full RNA folding model on at most O(3^n) secondary structures is the gold standard. We present a new subset approximation model for the full model, give methods to analyze its accuracy and discuss the relative merits of our model as compared with a pre-existing subset approximation. The main advantage of our model is that it generates Monte Carlo folding pathways with the same probabilities with which they are generated under the full model. The pre-existing subset approximation does not have this property.
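Monte Carlo folding pathways of the kind the model generates can be sampled with a standard Gillespie procedure over a state graph with transition rates; the three-state landscape below is a toy stand-in for a real secondary-structure ensemble, and the state names and rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_path(rates, start, stop, t_max=100.0):
    """Sample one folding trajectory on a secondary-structure graph.
    rates maps each state to a list of (neighbour, rate) transitions."""
    t, state, path = 0.0, start, [(0.0, start)]
    while state != stop and t < t_max:
        nbrs, k = zip(*rates[state])
        k = np.asarray(k, dtype=float)
        t += rng.exponential(1.0 / k.sum())              # waiting time
        state = nbrs[rng.choice(len(nbrs), p=k / k.sum())]
        path.append((t, state))
    return path

# toy landscape: unfolded -> intermediate -> native
rates = {"U": [("I", 2.0)],
         "I": [("U", 0.5), ("N", 1.0)],
         "N": []}
print(gillespie_path(rates, "U", "N"))
```

A subset approximation restricts the state graph to a tractable set of structures; the property claimed above is that pathway probabilities within the subset match those of the full model.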
Data-Based Predictive Control with Multirate Prediction Step
NASA Technical Reports Server (NTRS)
Barlow, Jonathan S.
2010-01-01
Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
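A minimal sketch of the data-based idea: identify a predictor directly from recorded input-output signals by least squares (here a simple ARX form, not the paper's multirate receding-horizon formulation) and use it for one-step-ahead prediction. The model orders and the toy plant are assumptions.

```python
import numpy as np

def fit_arx(u, y, na=4, nb=4):
    """Identify an ARX predictor y[k] ~ sum(a_i y[k-i]) + sum(b_j u[k-j])
    by least squares, directly from recorded input-output data."""
    n = max(na, nb)
    phi = [np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]] for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(phi), y[n:], rcond=None)
    return theta

def predict_next(theta, y_hist, u_hist, na=4, nb=4):
    """One-step-ahead prediction from the identified coefficients."""
    phi = np.r_[y_hist[-na:][::-1], u_hist[-nb:][::-1]]
    return float(phi @ theta)

rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):                 # "unknown" plant generating the data
    y[k] = 0.8 * y[k - 1] + 0.3 * u[k - 1]
theta = fit_arx(u, y)
print(predict_next(theta, y[:100], u[:100]), 0.8 * y[99] + 0.3 * u[99])
```

Stacking such predictors over a horizon yields the prediction matrices of the receding-horizon cost; the multirate step in the paper reduces how fast those matrices grow with horizon length.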
Yoo, Jin Eun
2018-01-01
A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods, and have focused on selected few indicators instead of utilizing hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS student and teacher variables as possible. Elastic net, the selected machine learning technique in this study, takes advantage of both LASSO and ridge in terms of variable selection and multicollinearity, respectively. A logistic regression model was also employed to predict TIMSS 2011 Korean 4th graders' mathematics achievement. Ten-fold cross-validation with mean squared error was employed to determine the elastic net regularization parameter. Among 162 TIMSS variables explored, 12 student and 5 teacher variables were selected in the elastic net model, and the prediction accuracy, sensitivity, and specificity were 76.06, 70.23, and 80.34%, respectively. This study showed that the elastic net method can be successfully applied to educational large-scale data by selecting a subset of variables with reasonable prediction accuracy and finding new variables to predict students' mathematics achievement. Newly found variables via machine learning can shed light on the existing theories from a totally different perspective, which in turn propagates creation of a new theory or complement of existing ones. This study also examined the current scale development convention from a machine learning perspective.
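A scikit-learn sketch of the study's setup: elastic net-penalised logistic regression with 10-fold cross-validation over the regularisation path, run here on stand-in synthetic data (the study used mean squared error as the CV criterion; the data and dimensions below are assumptions).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

# stand-in for 162 standardized TIMSS variables and a binary
# high/low mathematics achievement label
X, y = make_classification(n_samples=500, n_features=162, n_informative=17,
                           random_state=0)
X = StandardScaler().fit_transform(X)

# elastic net mixes the LASSO (l1) and ridge (l2) penalties; CV selects
# both the mixing ratio and the regularization strength
model = LogisticRegressionCV(penalty="elasticnet", solver="saga",
                             l1_ratios=[0.1, 0.5, 0.9], Cs=10, cv=10,
                             max_iter=5000).fit(X, y)
selected = np.flatnonzero(model.coef_[0])     # variables the penalty kept
print(len(selected), "variables selected")
```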
Biodiversity in environmental assessment-current practice and tools for prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gontier, Mikael; Balfors, Berit; Moertberg, Ulla
Habitat loss and fragmentation are major threats to biodiversity. Environmental impact assessment and strategic environmental assessment are essential instruments used in physical planning to address such problems. Yet there are no well-developed methods for quantifying and predicting impacts of fragmentation on biodiversity. In this study, a literature review was conducted on GIS-based ecological models that have potential as prediction tools for biodiversity assessment. Further, a review of environmental impact statements for road and railway projects from four European countries was performed, to study how impact prediction concerning biodiversity issues was addressed. The results of the study showed the existing gap between research in GIS-based ecological modelling and current practice in biodiversity assessment within environmental assessment.
Predicting future spatial distribution of SOC across entire France
NASA Astrophysics Data System (ADS)
Meersmans, Jeroen; Van Rompaey, Anton; Quine, Tim; Martin, Manuel; Pagé, Christian; Arrouays, Dominique
2013-04-01
Soil organic carbon (SOC) is widely recognized as a key factor controlling soil quality and as a crucial and active component of the global C-cycle. Hence, there is growing interest in monitoring and modeling the spatial and temporal behavior of this pool. So far, considerable effort has been devoted to mapping SOC at national scales for current and/or past situations. Despite some coarse predictions, detailed spatial SOC predictions for the future are still lacking. In this study we aim to predict the future spatial evolution of SOC driven by climate and land use change for France up to the year 2100. To do so, we combined 1) an existing model predicting SOC as a function of soil type, climate, land use and management (Meersmans et al 2012), with 2) eight different spatially explicit IPCC climate change predictions (conducted by CERFACS) and 3) land use change scenario predictions. We created business-as-usual land use change scenarios by extrapolating observed trends and calibrating logistic regression models, incorporating a large set of physical and socio-economic factors, at the regional level in combination with a multi-objective land allocation (MOLA) procedure. The resulting detailed projections of future SOC evolution across all regions of France allow us to identify regions that are most likely to be characterized by a significant gain or loss of SOC, and the degree to which land use decisions/outcomes control the scale of loss and gain. This methodology and the resulting maps can therefore be considered powerful tools to aid decision making concerning appropriate soil management, in order to enlarge SOC storage possibilities and reduce soil-related CO2 fluxes.
A precision medicine approach for psychiatric disease based on repeated symptom scores.
Fojo, Anthony T; Musliner, Katherine L; Zandi, Peter P; Zeger, Scott L
2017-12-01
For psychiatric diseases, rich information exists in the serial measurement of mental health symptom scores. We present a precision medicine framework for using the trajectories of multiple symptoms to make personalized predictions about future symptoms and related psychiatric events. Our approach fits a Bayesian hierarchical model that estimates a population-average trajectory for all symptoms and individual deviations from the average trajectory, then fits a second model that uses individual symptom trajectories to estimate the risk of experiencing an event. The fitted models are used to make clinically relevant predictions for new individuals. We demonstrate this approach on data from a study of antipsychotic therapy for schizophrenia, predicting future scores for positive, negative, and general symptoms, and the risk of treatment failure in 522 schizophrenic patients with observations over 8 weeks. While precision medicine has focused largely on genetic and molecular data, the complementary approach we present illustrates that innovative analytic methods for existing data can extend its reach more broadly. The systematic use of repeated measurements of psychiatric symptoms offers the promise of precision medicine in the field of mental health. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
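A simplified frequentist stand-in for this two-stage idea can be sketched with a linear mixed model for per-patient trajectories followed by a logistic model on the individual deviations; the paper itself fits a Bayesian hierarchical model over multiple symptoms jointly. The variable names and synthetic data below are assumptions.

```python
# Two-stage sketch: random-intercept/random-slope trajectories, then an
# event-risk model on the individual deviations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_pat, n_week = 100, 8
true_slope = rng.normal(-0.5, 0.3, n_pat)        # per-patient improvement rate
rows = []
for i in range(n_pat):
    for w in range(n_week):
        rows.append({"patient": i, "week": w,
                     "score": 20 + true_slope[i] * w + rng.normal(0, 1)})
df = pd.DataFrame(rows)
event = (true_slope > -0.4).astype(int)          # slower improvers fail treatment

# Stage 1: mixed model with a random intercept and slope per patient.
fit = smf.mixedlm("score ~ week", df, groups=df["patient"],
                  re_formula="~week").fit()
re = np.array([v.values for v in fit.random_effects.values()])

# Stage 2: individual deviations predict the risk of treatment failure.
clf = LogisticRegression().fit(re, event)
print("predicted failure risk, first 5 patients:",
      clf.predict_proba(re)[:5, 1].round(2))
```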
Neuner, Matthias; Gamnitzer, Peter; Hofstetter, Günter
2017-01-01
The aims of the present paper are (i) to briefly review single-field and multi-field shotcrete models proposed in the literature; (ii) to propose the extension of a damage-plasticity model for concrete to shotcrete; and (iii) to evaluate the capabilities of the proposed extended damage-plasticity model for shotcrete by comparing the predicted response with experimental data for shotcrete and with the response predicted by shotcrete models available in the literature. The results of the evaluation will be used for recommendations concerning the application and further improvements of the investigated shotcrete models, and they will serve as a basis for the design of a new lab test program complementing the existing ones. PMID:28772445
Speed and Delay Prediction Models for Planning Applications
DOT National Transportation Integrated Search
1999-01-01
Estimation of vehicle speed and delay is fundamental to many forms of transportation planning analyses including air quality, long-range travel forecasting, major investment studies, and congestion management systems. However, existing planning...
Testing the Predictive Power of Coulomb Stress on Aftershock Sequences
NASA Astrophysics Data System (ADS)
Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.
2009-12-01
Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models. We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention since they introduce a new paradigm of neural computing based on quantum entanglement. However, the existing QNN models are mainly based on real-valued quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deep quantum entanglement. We also propose a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), based on the CQN. CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network to provide the memory needed to process time-series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time-series predictions. Two application studies are presented in this paper, covering chaotic time-series prediction and electronics remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ground Motion Prediction Model Using Artificial Neural Network
NASA Astrophysics Data System (ADS)
Dhanya, J.; Raghukanth, S. T. G.
2018-03-01
This article focuses on developing a ground motion prediction equation based on the artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining a genetic algorithm and the Levenberg-Marquardt technique is used for training the model. The present model is developed to predict peak ground velocity and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to the rupture plane (Rrup), shear wave velocity in the region (Vs30) and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by the Pacific Earthquake Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns including the weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city, located in the Himalayan region.
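A minimal sketch of an ANN-style ground motion predictor with the four inputs named above follows. The synthetic records, the toy attenuation relation, and the use of scikit-learn's Adam-trained MLP (rather than the paper's hybrid genetic-algorithm/Levenberg-Marquardt training) are assumptions.

```python
# MLP regression of log peak ground velocity on (Mw, Rrup, Vs30, F).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 2000
Mw = rng.uniform(4.0, 8.0, n)
Rrup = rng.uniform(5.0, 200.0, n)
Vs30 = rng.uniform(180.0, 900.0, n)
F = rng.integers(0, 3, n)                    # encoded focal mechanism
X = np.column_stack([Mw, np.log(Rrup), np.log(Vs30), F])

# Toy attenuation relation standing in for observed log-PGV.
log_pgv = (1.5 * Mw - 1.3 * np.log(Rrup) - 0.4 * np.log(Vs30)
           + 0.1 * F + rng.normal(0, 0.3, n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=2000, random_state=0))
model.fit(X, log_pgv)
print("log PGV (Mw 6.5, Rrup 30 km, Vs30 400 m/s, mechanism 0):",
      model.predict([[6.5, np.log(30.0), np.log(400.0), 0]])[0])
```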
Xu, Dong; Zhang, Yang
2013-01-01
Genome-wide protein structure prediction and structure-based function annotation have been a long-term goal in molecular biology but have not yet become feasible due to difficulties in modeling distant-homology targets. We developed a hybrid pipeline combining ab initio folding and template-based modeling for genome-wide structure prediction, applied to the Escherichia coli genome. The pipeline was tested on 43 known sequences, where QUARK-based ab initio folding simulation generated models with TM-scores 17% higher than those of traditional comparative modeling methods. For 495 unknown hard sequences, 72 are predicted to have a correct fold (TM-score > 0.5) and 321 have a substantial portion of structure correctly modeled (TM-score > 0.35). 317 sequences can be reliably assigned to a SCOP fold family based on structural analogy to existing proteins in the PDB. The presented results, as a case study of E. coli, represent promising progress towards genome-wide structure modeling and fold family assignment using state-of-the-art ab initio folding algorithms. PMID:23719418
Genomic selection using beef commercial carcass phenotypes.
Todd, D L; Roughsedge, T; Woolliams, J A
2014-03-01
In this study, an industry terminal breeding goal was used in a deterministic simulation, using selection index methodology, to predict genetic gain in a beef population modelled on the UK pedigree Limousin, when using genomic selection (GS) and incorporating phenotype information from novel commercial carcass traits. The effect of genotype-environment interaction was investigated by varying, in the model, the genetic correlation between purebred and commercial cross-bred performance (ρX). Three genomic scenarios were considered: (1) genomic breeding values (GBV) + estimated breeding values (EBV) for existing selection traits; (2) GBV for three novel commercial carcass traits + EBV for existing traits; and (3) GBV for novel and existing traits plus EBV for existing traits. Each of the three scenarios was simulated for a range of training population (TP) sizes and with three values of ρX. Scenarios 2 and 3 predicted substantially higher percentage increases over current selection than Scenario 1. A TP of 2000 sires, each with 20 commercial progeny with carcass phenotypes, and assuming a ρX of 0.7, is predicted to increase gain by 40% over current selection in Scenario 3. The percentage increase in gain over current selection increased with decreasing ρX; however, the effect of varying ρX was reduced at large TP sizes for Scenarios 2 and 3. A further non-genomic scenario (4) was considered, simulating a conventional population-wide progeny test using EBV only. With 20 commercial cross-bred progeny per sire, it predicted gain similar to Scenario 3 with TP=5000 and ρX=1.0. The range of increases in genetic gain predicted for terminal traits when using GS is of similar magnitude to those observed after the implementation of BLUP technology in the United Kingdom. It is concluded that implementation of GS in a terminal sire breeding goal using purebred phenotypes alone will be sub-optimal compared with the inclusion of novel commercial carcass phenotypes in genomic evaluations.
sNebula, a network-based algorithm to predict binding between human leukocyte antigens and peptides
Luo, Heng; Ye, Hao; Ng, Hui Wen; Sakkiah, Sugunadevi; Mendrick, Donna L.; Hong, Huixiao
2016-01-01
Understanding the binding between human leukocyte antigens (HLAs) and peptides is important to understand the functioning of the immune system. Since it is time-consuming and costly to measure the binding between large numbers of HLAs and peptides, computational methods including machine learning models and network approaches have been developed to predict HLA-peptide binding. However, there are several limitations for the existing methods. We developed a network-based algorithm called sNebula to address these limitations. We curated qualitative Class I HLA-peptide binding data and demonstrated the prediction performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-validations. This algorithm can predict not only peptides of different lengths and different types of HLAs, but also the peptides or HLAs that have no existing binding data. We believe sNebula is an effective method to predict HLA-peptide binding and thus improve our understanding of the immune system. PMID:27558848
Mei, Suyu; Zhu, Hao
2015-01-26
Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification, wherein negative data sampling is still an open problem. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives. Meanwhile, rational constraints are seldom exerted on model selection to reduce the risk of false positive predictions in most existing computational methods. In this work, we propose a novel negative data sampling method based on a one-class support vector machine (SVM) to predict proteome-wide protein interactions between the HTLV retrovirus and Homo sapiens, wherein the one-class SVM is used to choose reliable and representative negative data, and a two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that the one-class SVM is better suited to negative data sampling than a two-class PPI predictor, and that the predictive-feedback-constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by recent literature. Lastly, Gene Ontology-based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of the HTLV retrovirus.
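A condensed sketch of the sampling idea: fit a one-class SVM on known positive interactions, then take the candidate pairs scored as most unlike the positives as reliable negatives for the final two-class SVM. The feature vectors below are random stand-ins for real protein-pair features, and the sizes and hyperparameters are assumptions.

```python
# One-class-SVM-based negative sampling for a two-class interaction predictor.
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(4)
pos = rng.normal(1.0, 1.0, size=(200, 20))      # known interacting pairs
cand = rng.normal(0.0, 1.5, size=(2000, 20))    # unlabeled candidate pairs

# The one-class SVM learns the support of the positive class.
occ = OneClassSVM(gamma="scale", nu=0.1).fit(pos)

# Lowest decision scores = furthest from the positives: use as negatives.
scores = occ.decision_function(cand)
neg = cand[np.argsort(scores)[:200]]

X = np.vstack([pos, neg])
y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
clf = SVC(probability=True).fit(X, y)           # final two-class predictor
print("mean predicted interaction prob. over candidates:",
      clf.predict_proba(cand)[:, 1].mean().round(3))
```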
Statistical physics of interacting neural networks
NASA Astrophysics Data System (ADS)
Kinzel, Wolfgang; Metzler, Richard; Kanter, Ido
2001-12-01
Recent results on the statistical physics of time series generation and prediction are presented. A neural network is trained on quasi-periodic and chaotic sequences, and its overlaps with the sequence generator as well as its prediction errors are calculated numerically. For each network there exists a sequence for which it completely fails to make predictions. Two interacting networks show a transition to perfect synchronization. A pool of interacting networks shows good coordination in the minority game, a model of competition in a closed market. Finally, as a demonstration, a perceptron predicts bit sequences produced by human beings.
Computational chemistry in 25 years
NASA Astrophysics Data System (ADS)
Abagyan, Ruben
2012-01-01
Here we make some predictions based on three methods: a straightforward extrapolation of existing trends; a self-fulfilling prophecy; and picking some current grievances and predicting that they will be addressed or solved. We predict the growth of multicore computing and a dramatic growth of data, as well as improvements in force fields and sampling methods. We also predict that the effects of therapeutic and environmental molecules on the human body, as well as complex natural chemical signalling, will be understood in terms of three-dimensional models of their binding to specific pockets.
Pino, María J; Castillo, Rosa A; Raya, Antonio; Herruzo, Javier
2017-11-09
To identify possible differences in the level of externalizing behavior problems among children with and without hearing impairment, and to determine whether any relationship exists between this type of problem and parenting practices. The Behavior Assessment System for Children was used to evaluate externalizing variables in a sample of 118 boys and girls divided into two matched groups: 59 with hearing disorders and 59 normal-hearing controls. Significant between-group differences were found in hyperactivity, behavioral problems, and externalizing problems, but not in aggression. Significant differences were also found in various aspects of parenting styles. A model for predicting externalizing behavior problems was constructed, explaining 50% of the variance. Significant differences do exist between adaptation levels in children with and without hearing impairment. Parenting style also plays an important role.
Analysis of Test Case Computations and Experiments for the First Aeroelastic Prediction Workshop
NASA Technical Reports Server (NTRS)
Schuster, David M.; Heeg, Jennifer; Wieseman, Carol D.; Chwalowski, Pawel
2013-01-01
This paper compares computational and experimental data from the Aeroelastic Prediction Workshop (AePW) held in April 2012. This workshop was designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems and to identify computational and experimental areas needing additional research and development. Three subject configurations were chosen from existing wind-tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations, and results from all of these computations were compared at the workshop.
Wasserberg, Gideon; Kotler, B.P.; Morris, D.W.; Abramsky, Z.
2007-01-01
Background: An optimal habitat selection model called centrifugal community organization (CCO) predicts that species, although they have the same primary habitat, may co-exist owing to their ability to use different secondary habitats. Goal: Test the predictions of CCO with field experiments. Species: The Egyptian sand gerbil (40 g), Gerbillus pyramidum, and Allenby's gerbil (25 g), G. andersoni allenbyi. Site: Ashdod sand dunes in the southern coastal plain of Israel. Three sandy habitats are present: shifting, semi-stabilized, and stabilized sand. Gerbils occupied all three habitats. Methods: We surveyed rodent abundance, activity levels, and foraging behaviour while experimentally removing G. pyramidum. Results: Three predictions of the CCO model were supported. Both species did best in the semi-stabilized habitat. However, they differed in their secondary habitats. Gerbillus pyramidum preferred the shifting sand habitat, whereas G. a. allenbyi preferred the stabilized habitat. Habitat selection by both species depended on density. However, in contrast to CCO, G. pyramidum dominated the core habitat and excluded G. a. allenbyi. We term this variant of CCO 'asymmetric CCO'. Conclusions: The fundamental feature of CCO appears valid: co-existence may result not because of what each competing species does best, but because of what they do as a back-up. But in contrast to the prediction of the original CCO model, all dynamic traces of interaction can vanish if the system includes interference competition. © 2007 Gideon Wasserberg.
Fraser, Keith; Bruckner, Dylan M; Dordick, Jonathan S
2018-06-18
Adverse drug reactions, particularly those that result in drug-induced liver injury (DILI), are a major cause of drug failure in clinical trials and drug withdrawals. Hepatotoxicity-mediated drug attrition occurs despite substantial investments of time and money in developing cellular assays, animal models, and computational models to predict its occurrence in humans. Underperformance in predicting hepatotoxicity associated with drugs and drug candidates has been attributed to existing gaps in our understanding of the mechanisms involved in driving hepatic injury after these compounds perfuse and are metabolized by the liver. Herein we assess in vitro, in vivo (animal), and in silico strategies used to develop predictive DILI models. We address the effectiveness of several two- and three-dimensional in vitro cellular methods that are frequently employed in hepatotoxicity screens and how they can be used to predict DILI in humans. We also explore how humanized animal models can recapitulate human drug metabolic profiles and associated liver injury. Finally, we highlight the maturation of computational methods for predicting hepatotoxicity, the untapped potential of artificial intelligence for improving in silico DILI screens, and how knowledge acquired from these predictions can shape the refinement of experimental methods.
Zhang, Sen; Jiang, Haihe; Yin, Yixin; Xiao, Wendong; Zhao, Baoyong
2018-02-20
Gas utilization ratio (GUR) is an important indicator used to evaluate the energy consumption of blast furnaces (BFs). Currently, existing methods cannot predict the GUR accurately. In this paper, we present a novel data-driven model for predicting the GUR. The proposed approach utilizes both the TS fuzzy neural network (TS-FNN) and particle swarm optimization (PSO) to predict the GUR. The PSO algorithm is applied to optimize the parameters of the TS-FNN in order to decrease the error caused by inaccurate initial parameters. This paper also applies the box plot method to eliminate abnormal values from the raw data during preprocessing. This step can handle data that do not obey a normal distribution, a situation caused by complex industrial environments. The prediction results demonstrate that the optimization model based on PSO and the TS-FNN achieves higher prediction accuracy than the TS-FNN and SVM models, and the proposed approach can accurately predict the GUR of the blast furnace, providing an effective way for on-line blast furnace distribution control.
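The box plot preprocessing step can be illustrated in a few lines: values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are flagged as abnormal and dropped. The column name and synthetic values below are assumptions.

```python
# IQR-based outlier removal, the rule underlying box plot whiskers.
import numpy as np
import pandas as pd

df = pd.DataFrame({"gur": np.r_[np.random.default_rng(5).normal(47, 2, 500),
                                [20.0, 80.0]]})   # two abnormal values appended

q1, q3 = df["gur"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = df["gur"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
clean = df[mask]
print(f"removed {len(df) - len(clean)} abnormal samples")
```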
Kattou, Panayiotis; Lian, Guoping; Glavin, Stephen; Sorrell, Ian; Chen, Tao
2017-10-01
The development of a new two-dimensional (2D) model to predict follicular permeation, with integration into a recently reported multi-scale model of transdermal permeation, is presented. The follicular pathway is modelled by diffusion in sebum. The mass transfer and partition properties of solutes in lipid, corneocytes, viable dermis, dermis and systemic circulation are calculated as reported previously [Pharm Res 33 (2016) 1602]. The mass transfer and partition properties in sebum are collected from the existing literature. None of the model input parameters was fitted to the clinical data with which the model prediction is compared. The integrated model has been applied to predict published clinical data on transdermal permeation of caffeine, and the relative importance of the follicular pathway is analysed. Good agreement of the model prediction with the clinical data has been obtained. The simulation confirms that for caffeine the follicular route is important; the maximum bioavailable concentration of caffeine in systemic circulation with open hair follicles is predicted to be 20% higher than when hair follicles are blocked. The follicular pathway contributes not only to fast penetration at short times, but also to the overall systemic bioavailability. With such an in silico model, useful information can be obtained for caffeine disposition and localised delivery in lipid, corneocytes, viable dermis, dermis and the hair follicle. Such detailed information is difficult to obtain experimentally.
Guan, Hongjun; Dai, Zongli; Zhao, Aiwu; He, Jie
2018-01-01
In this paper, we propose a hybrid method to forecast stock prices, called the High-order Fuzzy-fluctuation-Trends-based Back Propagation (HTBP) neural network model. First, we compare each value of the historical training data with the previous day's value to obtain a fluctuation trend time series (FTTS). On this basis, the FTTS is fuzzified into a fuzzy time series (FFTS) according to the amplitude and direction of the fluctuations (increase, equality, decrease). Since the relationship between the FFTS and future fluctuation trends is nonlinear, the HTBP neural network algorithm is used to find the mapping rules through self-learning. Finally, the outputs of the algorithm are used to predict future fluctuations. The proposed model provides some innovative features: (1) it combines fuzzy set theory and neural networks to avoid the overfitting problems that exist in traditional models; (2) the BP neural network can intelligently explore the internal rules that actually exist in sequential data, without the need to analyze the influence factors of specific rules and their paths of action; (3) the hybrid model can reasonably remove noise from the internal rules through proper fuzzy treatment. This paper takes the TAIEX dataset of the Taiwan stock exchange as an example, and compares and analyzes the prediction performance of the model. The experimental results show that this method can predict the stock market in a very simple way. We also use this method to predict the Shanghai stock exchange composite index, further verifying the effectiveness and universality of the method.
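A minimal sketch of the first two steps may help: day-over-day fluctuations are computed, then fuzzified into down/equal/up states scaled by the mean absolute fluctuation. The threshold of half the mean fluctuation is an illustrative assumption, not necessarily the paper's choice.

```python
# Build a fluctuation trend series and fuzzify it by direction and amplitude.
import numpy as np

prices = np.array([100.0, 101.5, 101.4, 103.0, 102.0, 102.1, 104.5])
ftts = np.diff(prices)                      # fluctuation trend time series
m = np.mean(np.abs(ftts))                   # typical fluctuation amplitude

def fuzzify(x, m, half=0.5):
    # Map a fluctuation to a fuzzy state relative to the typical amplitude.
    if x > half * m:
        return "up"
    if x < -half * m:
        return "down"
    return "equal"

ffts = [fuzzify(x, m) for x in ftts]
print(list(zip(np.round(ftts, 2), ffts)))
```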
Validation of Groundwater Models: Meaningful or Meaningless?
NASA Astrophysics Data System (ADS)
Konikow, L. F.
2003-12-01
Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.
Improved Prediction of Harmful Algal Blooms in Four Major South Korea's Rivers Using Deep Learning Models.
Lee, Sangmok; Lee, Donghyun
2018-06-24
Harmful algal blooms are an annual phenomenon that cause environmental damage, economic losses, and disease outbreaks. A fundamental solution to this problem is still lacking; thus, the best option for counteracting the effects of algal blooms is to improve advance warnings (predictions). However, existing physical prediction models have difficulties setting clear coefficients indicating the relationship between each factor when predicting algal blooms, and many variable data sources are required for the analysis. These limitations are accompanied by high time and economic costs. Meanwhile, artificial intelligence and deep learning methods have become increasingly common in scientific research; attempts to apply the long short-term memory (LSTM) model to environmental research problems are increasing because the LSTM model exhibits good performance for time-series data prediction. However, few studies have applied deep learning models or LSTM to algal bloom prediction, especially in South Korea, where algal blooms occur annually. Therefore, we employed the LSTM model for algal bloom prediction in four major rivers of South Korea. We conducted short-term (one week) predictions by employing regression analysis and deep learning techniques on a newly constructed water quality and quantity dataset drawn from 16 dammed pools on the rivers. Three deep learning models (multilayer perceptron, MLP; recurrent neural network, RNN; and long short-term memory, LSTM) were used to predict chlorophyll-a, a recognized proxy for algal activity. The results were compared to those from OLS (ordinary least squares) regression analysis and actual data based on the root mean square error (RMSE). The LSTM model showed the highest prediction rate for harmful algal blooms, and all deep learning models outperformed the OLS regression analysis. Our results reveal the potential for predicting algal blooms using LSTM and deep learning.
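A compact sketch of the LSTM setup described above: sliding windows of past weekly water-quality/quantity measurements predict next week's chlorophyll-a, scored by RMSE. The window length, feature count, and synthetic data are assumptions; the study's own data come from 16 dammed pools.

```python
# One-week-ahead chlorophyll-a prediction with a small Keras LSTM.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(6)
T, n_feat, window = 300, 8, 4                      # weeks, variables, lookback
series = rng.normal(size=(T, n_feat)).cumsum(axis=0)
chl = series[:, 0] * 0.5 + rng.normal(0, 0.1, T)   # toy chlorophyll-a proxy

# Windows of the past `window` weeks predict the next week's value.
X = np.stack([series[t - window:t] for t in range(window, T)])
y = chl[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, n_feat)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

rmse = float(np.sqrt(np.mean((model.predict(X, verbose=0).ravel() - y) ** 2)))
print("in-sample RMSE:", round(rmse, 3))
```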
Modelling Fault Zone Evolution: Implications for fluid flow.
NASA Astrophysics Data System (ADS)
Moir, H.; Lunn, R. J.; Shipton, Z. K.
2009-04-01
Flow simulation models are of major interest to many industries including hydrocarbon, nuclear waste, sequestration of carbon dioxide and mining. One of the major uncertainties in these models is in predicting the permeability of faults, principally in the detailed structure of the fault zone. Studying the detailed structure of a fault zone is difficult because of the inaccessible nature of sub-surface faults and also because of their highly complex nature; fault zones show a high degree of spatial and temporal heterogeneity, i.e. the properties of the fault change as you move along the fault, and they also change with time. It is well understood that faults influence fluid flow characteristics. They may act as a conduit or a barrier, or even as both by blocking flow across the fault while promoting flow along it. Controls on fault hydraulic properties include cementation, stress field orientation, fault zone components and fault zone geometry. Within brittle rocks, such as granite, fracture networks are limited but provide the dominant pathway for flow within this rock type. Research at the EU's Soultz-sous-Forêts Hot Dry Rock test site [Evans et al., 2005] showed that 95% of flow into the borehole was associated with a single fault zone at 3490 m depth, and that 10 open fractures account for the majority of flow within the zone. These data underline the critical role of faults in deep flow systems and the importance of achieving a predictive understanding of fault hydraulic properties. To improve estimates of fault zone permeability, it is important to understand the underlying hydro-mechanical processes of fault zone formation. In this research, we explore the spatial and temporal evolution of fault zones in brittle rock through development and application of a 2D hydro-mechanical finite element model, MOPEDZ. The authors have previously presented numerical simulations of the development of fault linkage structures from two or three pre-existing joints, the results of which compare well to features observed in mapped exposures. For these simple simulations from a small number of pre-existing joints, the fault zone evolves in a predictable way: fault linkage is governed by three key factors: the ratio of σ1 (maximum compressive stress) to σ3 (minimum compressive stress), the original geometry of the pre-existing structures (contractional vs. dilational geometries), and the orientation of the principal stress direction (σ1) with respect to the pre-existing structures. In this paper we present numerical simulations of the temporal and spatial evolution of fault linkage structures from many pre-existing joints. The initial locations, sizes and orientations of these joints are based on field observations of cooling joints in granite from the Sierra Nevada. We show that the constantly evolving geometry and local stress field perturbations contribute significantly to fault zone evolution. The locations and orientations of linkage structures previously predicted by the simple simulations are consistent with the predicted geometries in the more complex fault zones; however, the exact location at which individual structures form is not easily predicted. Markedly different fault zone geometries are predicted when the pre-existing joints are rotated with respect to the maximum compressive stress. In particular, fault surfaces range from evolving smooth linear structures to producing complex 'stepped' fault zone geometries. These geometries have a significant effect on simulations of along- and across-fault flow.
Modeling and prediction of ionospheric scintillation
NASA Technical Reports Server (NTRS)
Fremouw, E. J.
1974-01-01
Scintillation modeling performed thus far is based on the theory of diffraction by a weakly modulating phase screen developed by Briggs and Parkin (1963). Shortcomings of the existing empirical model for the scintillation index are discussed, together with questions of channel modeling, giving attention to the needs of communication engineers. It is pointed out that much improved scintillation index models may be available in a matter of a year or so.
Using connectome-based predictive modeling to predict individual behavior from brain connectivity
Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd
2017-01-01
Neuroimaging is a fast developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs as well as or better than most existing approaches in brain-behavior prediction. Moreover, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement. Depending on the volume of data to be processed, the protocol can take 10-100 minutes for model building, 1-48 hours for permutation testing, and 10-20 minutes for visualization of results. PMID:28182017
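Since the protocol is spelled out in four steps, a condensed sketch on synthetic data may help: edges are selected by correlation with behavior, summarized into positive- and negative-network strengths, fed to a linear model, and evaluated by leave-one-out cross-validation (a simplified stand-in for the protocol's permutation-based significance step). The data sizes and the p < 0.01 selection threshold follow common practice and are assumptions here.

```python
# Condensed connectome-based predictive modeling (CPM) on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sub, n_edge = 60, 500
edges = rng.normal(size=(n_sub, n_edge))            # connectivity per subject
behavior = edges[:, :5].sum(axis=1) + rng.normal(0, 0.5, n_sub)

def cpm_predict(train_x, train_y, test_x, p_thresh=0.01):
    # Step 1: correlate every edge with behavior and keep significant ones.
    rp = [stats.pearsonr(train_x[:, j], train_y) for j in range(train_x.shape[1])]
    r = np.array([v[0] for v in rp])
    p = np.array([v[1] for v in rp])
    pos = (r > 0) & (p < p_thresh)
    neg = (r < 0) & (p < p_thresh)
    # Step 2: summarize each subject as positive- minus negative-network strength.
    f_train = train_x[:, pos].sum(axis=1) - train_x[:, neg].sum(axis=1)
    f_test = test_x[:, pos].sum(axis=1) - test_x[:, neg].sum(axis=1)
    # Step 3: fit a one-variable linear model and predict the held-out subject.
    slope, intercept = np.polyfit(f_train, train_y, 1)
    return slope * f_test + intercept

# Step 4 (simplified): leave-one-out predictions scored against observed behavior.
preds = np.array([cpm_predict(np.delete(edges, i, 0), np.delete(behavior, i),
                              edges[i:i + 1])[0] for i in range(n_sub)])
print("LOO prediction r:", round(stats.pearsonr(preds, behavior)[0], 2))
```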
Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.
2008-01-01
This effort provides an initial glimpse at NASA capabilities available for predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, the Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased-fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher-fidelity scattering code to predicting radiated sound pressure levels for full-scale configurations at relevant frequencies. And the ability to more accurately represent complex shielding surfaces in current lower-fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits/costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next generation aircraft system noise prediction tools.
Updating Known Distribution Models for Forecasting Climate Change Impact on Endangered Species
Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo
2013-01-01
To plan endangered species conservation and to design adequate management programmes, it is necessary to predict their distributional response to climate change, especially under the current situation of rapid change. However, these predictions are customarily made by relating the distribution of the species de novo with climatic conditions, with no regard for previously available knowledge about the factors affecting the species distribution. We propose to take advantage of known species distribution models, updating them with the variables yielded by climatic models before projecting them to the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model based on current climate and topography to a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggest that the main threat for this endangered species would not be climate change, since all forecasting models show that its distribution will be maintained and increased in mainland Spain throughout the 21st century. We remark on the importance of linking conservation biology with distribution modelling by updating existing models, frequently available for endangered species, considering all the known factors conditioning the species' distribution, instead of building new models that are based on climate change variables only. PMID:23840330
Li, Longhai; Feng, Cindy X; Qiu, Shi
2017-06-30
An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on the full dataset. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution, without reference to the actual observation. By following the general theory for importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS with three other existing methods in the literature on two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to the predictive p-values estimated with actual LOOCV and outperform those given by the three existing methods, namely, posterior predictive checking, ordinary importance sampling, and the ghosting method of Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.
The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt.
Viboud, Cécile; Sun, Kaiyuan; Gaffey, Robert; Ajelli, Marco; Fumanelli, Laura; Merler, Stefano; Zhang, Qian; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro
2018-03-01
Infectious disease forecasting is gaining traction in the public health community; however, limited systematic comparisons of model performance exist. Here we present the results of a synthetic forecasting challenge inspired by the West African Ebola crisis in 2014-2015 and involving 16 international academic teams and US government agencies, and compare the predictive performance of 8 independent modeling approaches. Challenge participants were invited to predict 140 epidemiological targets across 5 different time points of 4 synthetic Ebola outbreaks, each involving different levels of interventions and "fog of war" in outbreak data made available for predictions. Prediction targets included 1-4 week-ahead case incidences, outbreak size, peak timing, and several natural history parameters. With respect to weekly case incidence targets, ensemble predictions based on a Bayesian average of the 8 participating models outperformed any individual model and did substantially better than a null auto-regressive model. There was no relationship between model complexity and prediction accuracy; however, the top performing models for short-term weekly incidence were reactive models with few parameters, fitted to a short and recent part of the outbreak. Individual model outputs and ensemble predictions improved with data accuracy and availability; by the second time point, just before the peak of the epidemic, estimates of final size were within 20% of the target. The 4th challenge scenario - mirroring an uncontrolled Ebola outbreak with substantial data reporting noise - was poorly predicted by all modeling teams. Overall, this synthetic forecasting challenge provided a deep understanding of model performance under controlled data and epidemiological conditions. We recommend such "peace time" forecasting challenges as key elements to improve coordination and inspire collaboration between modeling groups ahead of the next pandemic threat, and to assess model forecasting accuracy for a variety of known and hypothetical pathogens. Published by Elsevier B.V.
Expanded modeling of temperature-dependent dielectric properties for microwave thermal ablation
Ji, Zhen; Brace, Christopher L
2011-01-01
Microwaves are a promising source for thermal tumor ablation due to their ability to rapidly heat dispersive biological tissues, often to temperatures in excess of 100 °C. At these high temperatures, tissue dielectric properties change rapidly and, thus, so do the characteristics of energy delivery. Precise knowledge of how tissue dielectric properties change during microwave heating promises to facilitate more accurate simulation of device performance and helps optimize device geometry and energy delivery parameters. In this study, we measured the dielectric properties of liver tissue during high-temperature microwave heating. The resulting data were compiled into either a sigmoidal function of temperature or an integration of the time–temperature curve for both relative permittivity and effective conductivity. Coupled electromagnetic–thermal simulations of heating produced by a single monopole antenna using the new models were then compared to simulations with existing linear and static models, and experimental temperatures in liver tissue. The new sigmoidal temperature-dependent model more accurately predicted experimental temperatures when compared to temperature–time integrated or existing models. The mean percent differences between simulated and experimental temperatures over all times were 4.2% for sigmoidal, 10.1% for temperature–time integration, 27.0% for linear and 32.8% for static models at the antenna input power of 50 W. Correcting for tissue contraction improved agreement for powers up to 75 W. The sigmoidal model also predicted substantial changes in heating pattern due to dehydration. We can conclude from these studies that a sigmoidal model of tissue dielectric properties improves prediction of experimental results. More work is needed to refine and generalize this model. PMID:21791728
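The sigmoidal form described above can be sketched directly: dielectric properties stay near their cool-tissue values and then drop sharply around water vaporization. The parameter values below are illustrative assumptions, not the fitted values reported in the paper.

```python
# Sigmoidal temperature dependence of tissue dielectric properties.
import numpy as np

def sigmoidal_property(T, low, high, T0=95.0, width=5.0):
    """Smoothly interpolate from `high` (cool tissue) to `low` (desiccated)."""
    return low + (high - low) / (1.0 + np.exp((T - T0) / width))

T = np.linspace(37, 130, 5)
eps_r = sigmoidal_property(T, low=5.0, high=48.0)    # relative permittivity
sigma = sigmoidal_property(T, low=0.1, high=2.0)     # effective conductivity (S/m)
for t, e, s in zip(T, eps_r, sigma):
    print(f"T={t:6.1f} C  eps_r={e:5.1f}  sigma={s:4.2f}")
```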
Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola
2016-01-01
Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
Imbalanced target prediction with pattern discovery on clinical data repositories.
Chan, Tak-Ming; Li, Yuxi; Chiau, Choo-Chiap; Zhu, Jane; Jiang, Jie; Huo, Yong
2017-04-20
Clinical data repositories (CDR) have great potential to improve outcome prediction and risk modeling. However, most clinical studies require careful study design, dedicated data collection efforts, and sophisticated modeling techniques before a hypothesis can be tested. We aim to bridge this gap, so that clinical domain users can perform first-hand prediction on existing repository data without complicated handling, and obtain insightful patterns of imbalanced targets for a formal study before it is conducted. We specifically target interpretability for domain users, such that the model can be conveniently explained and applied in clinical practice. We propose an interpretable pattern model which is tolerant of noise and missing data in practice datasets. To address the challenge of imbalanced targets of interest in clinical research, e.g., death rates of less than a few percent, the geometric mean of sensitivity and specificity (G-mean) is employed as the optimization criterion, for which a simple but effective heuristic algorithm is developed. We compared pattern discovery to clinically interpretable methods on two retrospective clinical datasets, containing 14.9% deaths within 1 year in the thoracic dataset and 9.1% deaths in the cardiac dataset. In spite of the imbalance challenge evident for other methods, pattern discovery consistently shows competitive cross-validated prediction performance. Compared to logistic regression, Naïve Bayes, and decision tree, pattern discovery achieves statistically significantly (p-values < 0.01, Wilcoxon signed rank test) better averaged testing G-means and F1-scores (harmonic mean of precision and sensitivity). Without requiring sophisticated technical processing of data and tweaking, the prediction performance of pattern discovery is consistently comparable to the best achievable performance. Pattern discovery has demonstrated itself to be robust and valuable for target prediction on existing clinical data repositories with imbalance and noise. The prediction results and interpretable patterns can provide insights in an agile and inexpensive way for potential formal studies.
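The optimization criterion fits in one line: the geometric mean of sensitivity and specificity, which stays low unless both are high and so resists the majority-class bias of plain accuracy on imbalanced targets. The toy labels below are illustrative.

```python
# G-mean = sqrt(sensitivity * specificity) from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])   # ~20% positives
y_pred = np.array([0, 0, 0, 0, 0, 1, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
g_mean = np.sqrt(sensitivity * specificity)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"G-mean={g_mean:.2f}")
```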
Malloy, Timothy; Zaunbrecher, Virginia; Beryt, Elizabeth; Judson, Richard; Tice, Raymond; Allard, Patrick; Blake, Ann; Cote, Ila; Godwin, Hilary; Heine, Lauren; Kerzic, Patrick; Kostal, Jakub; Marchant, Gary; McPartland, Jennifer; Moran, Kelly; Nel, Andre; Ogunseitan, Oladele; Rossi, Mark; Thayer, Kristina; Tickner, Joel; Whittaker, Margaret; Zarker, Ken
2017-09-01
Alternatives analysis (AA) is a method used in regulation and product design to identify, assess, and evaluate the safety and viability of potential substitutes for hazardous chemicals. It requires toxicological data for the existing chemical and potential alternatives. Predictive toxicology uses in silico and in vitro approaches, computational models, and other tools to expedite toxicological data generation in a more cost-effective manner than traditional approaches. The present article briefly reviews the challenges associated with using predictive toxicology in regulatory AA, then presents 4 recommendations for its advancement. It recommends using case studies to advance the integration of predictive toxicology into AA, adopting a stepwise process to employing predictive toxicology in AA beginning with prioritization of chemicals of concern, leveraging existing resources to advance the integration of predictive toxicology into the practice of AA, and supporting transdisciplinary efforts. The further incorporation of predictive toxicology into AA would advance the ability of companies and regulators to select alternatives to harmful ingredients, and potentially increase the use of predictive toxicology in regulation more broadly. Integr Environ Assess Manag 2017;13:915-925. © 2017 SETAC.
Improve the prediction of RNA-binding residues using structural neighbours.
Li, Quan; Cao, Zanxia; Liu, Haiyan
2010-03-01
The interactions of RNA-binding proteins (RBPs) with RNA play key roles in managing some of the cell's basic functions. The identification and prediction of RNA-binding sites is important for understanding the RNA-binding mechanism. Computational approaches are being developed to predict RNA-binding residues based on sequence- or structure-derived features. To achieve higher prediction accuracy, improvements on current prediction methods are necessary. We identified that the structural neighbours of RNA-binding and non-RNA-binding residues have different amino acid compositions. Combining this structure-derived feature with evolutionary (PSSM) and other structural information (secondary structure and solvent accessibility) significantly improves the predictions over existing methods. Using a multiple linear regression approach and 6-fold cross validation, our best model achieves an overall correct rate of 87.8% and an MCC of 0.47, with a specificity of 93.4%, and correctly predicts 52.4% of the RNA-binding residues for a dataset containing 107 non-homologous RNA-binding proteins. Compared with existing methods, including the amino acid compositions of structural neighbours leads to a clear improvement. A web server was developed for predicting RNA-binding residues in a protein sequence (or structure), which is available at http://mcgill.3322.org/RNA/.
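A rough sketch of the reported setup, multiple linear regression with 6-fold cross validation scored by MCC, is given below; the 0.5 decision threshold and the placeholder feature matrix are assumptions for illustration, not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.metrics import matthews_corrcoef

# X: per-residue features (PSSM columns, secondary structure, solvent
# accessibility, structural-neighbour composition); y: 1 if residue binds RNA.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 30)), rng.integers(0, 2, 500)  # placeholder data

scores = []
for train, test in KFold(n_splits=6, shuffle=True, random_state=0).split(X):
    reg = LinearRegression().fit(X[train], y[train])
    y_hat = (reg.predict(X[test]) >= 0.5).astype(int)  # threshold regression output
    scores.append(matthews_corrcoef(y[test], y_hat))
print(f"mean 6-fold MCC: {np.mean(scores):.2f}")
```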
NASA Astrophysics Data System (ADS)
Cappelli, Mark; Young, Christopher
2016-10-01
We present continued efforts towards introducing physical models for cross-magnetic field electron transport into Hall thruster discharge simulations. In particular, we seek to evaluate whether such models accurately capture ion dynamics, both averaged and resolved in time, through comparisons with measured ion velocity distributions which are now becoming available for several devices. Here, we describe a turbulent electron transport model that is integrated into 2-D hybrid fluid/PIC simulations of a 72 mm diameter laboratory thruster operating at 400 W. We also compare this model's predictions with one recently proposed by Lafleur et al. Introducing these models into 2-D hybrid simulations is relatively straightforward and leverages the existing framework for solving the electron fluid equations. The models are tested for their ability to capture the time-averaged experimental discharge current and its fluctuations due to ionization instabilities. Model predictions are also more rigorously evaluated against recent laser-induced fluorescence measurements of time-resolved ion velocity distributions.
Quicksilver: Fast predictive image registration - A deep learning approach.
Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc
2017-09-01
This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled at test time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.
Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.
Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret
2005-01-01
Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
Improving acceptance for Higgs events at CDF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sforza, Federico; /INFN, Pisa
2008-03-01
The Standard Model of elementary particles predicts the existence of the Higgs boson as responsible for electroweak symmetry breaking, the process by which fermions and vector bosons acquire mass. The existence of the Higgs boson is one of the most important open questions in present high energy physics research. This work concerns the search for WH associated production at the CDF II experiment (Collider Detector at Fermilab).
A zero-equation turbulence model for two-dimensional hybrid Hall thruster simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappelli, Mark A., E-mail: cap@stanford.edu; Young, Christopher V.; Cha, Eunsun
2015-11-15
We present a model for electron transport across the magnetic field of a Hall thruster and integrate this model into 2-D hybrid particle-in-cell simulations. The model is based on a simple scaling of the turbulent electron energy dissipation rate and the assumption that this dissipation results in Ohmic heating. Implementing the model into 2-D hybrid simulations is straightforward and leverages the existing framework for solving the electron fluid equations. The model recovers the axial variation in the mobility seen in experiments, predicting the generation of a transport barrier which anchors the region of plasma acceleration. The predicted xenon neutral and ion velocities are found to be in good agreement with laser-induced fluorescence measurements.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
A systems biology approach to investigate the antimicrobial activity of oleuropein.
Li, Xianhua; Liu, Yanhong; Jia, Qian; LaMacchia, Virginia; O'Donoghue, Kathryn; Huang, Zuyi
2016-12-01
Oleuropein and its hydrolysis products are olive phenolic compounds that have antimicrobial effects on a variety of pathogens, with the potential to be utilized in food and pharmaceutical products. While the existing research is mainly focused on individual genes or enzymes that are regulated by oleuropein for antimicrobial activities, little work has been done to integrate intracellular genes, enzymes and metabolic reactions for a systematic investigation of the antimicrobial mechanism of oleuropein. In this study, the first genome-scale modeling method was developed to predict the system-level changes of intracellular metabolism triggered by oleuropein in Staphylococcus aureus, a common food-borne pathogen. To simulate the antimicrobial effect, an existing S. aureus genome-scale metabolic model was extended by adding the missing nitric oxide reactions, and exchange rates of potassium, phosphate and glutamate were adjusted in the model as suggested by previous research to mimic the stress imposed by oleuropein on S. aureus. The developed modeling approach was able to match S. aureus growth rates with experimental data for five oleuropein concentrations. The reactions with large flux changes were identified, and the enzymes of fifteen of these reactions were validated by existing research for their important roles in oleuropein metabolism. When compared with experimental data, the up/down gene regulation of 80% of these enzymes was correctly predicted by our modeling approach. This study indicates that the genome-scale modeling approach provides a promising avenue for revealing the intracellular metabolic basis of the antimicrobial properties of oleuropein.
McMeekin, Tom; Bowman, John; McQuestin, Olivia; Mellefont, Lyndal; Ross, Tom; Tamplin, Mark
2008-11-30
This paper considers the future of predictive microbiology by exploring the balance that exists between science, applications and expectations. Attention is drawn to the development of predictive microbiology as a sub-discipline of food microbiology and of technologies that are required for its applications, including a recently developed biological indicator. As we move into the era of systems biology, in which physiological and molecular information will be increasingly available for incorporation into models, predictive microbiologists will be faced with new experimental and data handling challenges. Overcoming these hurdles may be assisted by interacting with microbiologists and mathematicians developing models to describe the microbial role in ecosystems other than food. Coupled with a commitment to maintain strategic research, as well as to develop innovative technologies, the future of predictive microbiology looks set to fulfil "great expectations".
Direct CFD Predictions of Low Frequency Sounds Generated by a Helicopter Main Rotor
NASA Technical Reports Server (NTRS)
Sim, Ben W.; Potsdam, Mark A.; Conner, Dave A.; Conner, Dave A.; Watts, Michael E.
2010-01-01
The use of CFD to directly predict helicopter main rotor noise is shown to be quite promising as an alternative means of low frequency source noise evaluation. Results using existing state-of-the-art grid structures and finite-difference schemes demonstrated that small perturbation pressures, associated with acoustic radiation, can be extracted with some degree of fidelity. Accuracy of the predictions is demonstrated by comparison with predictions from conventional acoustic analogy-based models, and with measurements obtained from wind tunnel and flight tests of the MD-902 helicopter at several operating conditions. Findings show that the direct CFD approach is quite successful in yielding low frequency results due to thickness and steady loading noise mechanisms. Mid-to-high frequency content, due to blade-vortex interactions, is not predicted because of CFD modeling and grid constraints.
Continuum Lowering and Fermi-Surface Rising in Strongly Coupled and Degenerate Plasmas
NASA Astrophysics Data System (ADS)
Hu, S. X.
2017-08-01
Continuum lowering is a well-known and important physics concept that describes the ionization potential depression (IPD) in plasmas caused by thermal- or pressure-induced ionization of outer-shell electrons. The existing IPD models are often used to characterize plasma conditions and to gauge opacity calculations. Recent precision measurements have revealed deficits in our understanding of continuum lowering in dense hot plasmas. However, these investigations have so far been limited to IPD in strongly coupled but nondegenerate plasmas. Here, we report a first-principles study of K-edge shifting in both strongly coupled and fully degenerate carbon plasmas, with quantum molecular dynamics calculations based on all-electron density-functional theory. The resulting K-edge shift versus plasma density, as a probe of continuum lowering and Fermi-surface rising, is found to be significantly different from the predictions of existing IPD models. In contrast, a simple "single-atom-in-box" model developed in this work accurately predicts the K-edge locations given by the ab initio calculations.
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (the mean square error between observed and predicted Froude numbers). To study the impact of bed load transport parameters, six different models based on four non-dimensional groups are presented. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%), and the ICA returns better results than the GA for all six models. The results of these two algorithms were also compared with a multi-layer perceptron and existing equations.
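Roulette wheel parent selection, mentioned above, weights each candidate by its fitness; the sketch below is illustrative Python, and the 1/(1 + MSE) transform used to turn a minimization objective into a positive fitness is an assumption rather than the authors' exact choice:

```python
import numpy as np

def roulette_wheel_select(population, fitness, rng):
    """Pick one parent with probability proportional to its fitness."""
    p = np.asarray(fitness, dtype=float)
    p /= p.sum()
    return population[rng.choice(len(population), p=p)]

rng = np.random.default_rng(1)
population = [rng.normal(size=4) for _ in range(20)]   # candidate coefficient sets
mse = [float(np.mean(ind**2)) for ind in population]   # stand-in for Froude-number MSE
fitness = [1.0 / (1.0 + e) for e in mse]               # lower error -> higher fitness
parent = roulette_wheel_select(population, fitness, rng)
```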
O’Brien, C. J.; Barr, C. M.; Price, P. M.; ...
2017-10-31
There has recently been a great deal of interest in employing immiscible solutes to stabilize nanocrystalline microstructures. Existing modeling efforts largely rely on mesoscale Monte Carlo approaches that employ a simplified model of the microstructure and result in highly homogeneous segregation to grain boundaries. However, there is ample evidence from experimental and modeling studies that demonstrates segregation to grain boundaries is highly non-uniform and sensitive to boundary character. This work employs a realistic nanocrystalline microstructure with experimentally relevant global solute concentrations to illustrate inhomogeneous boundary segregation. Furthermore, experiments quantifying segregation in thin films are reported that corroborate the prediction that grain boundary segregation is highly inhomogeneous. In addition to grain boundary structure modifying the degree of segregation, the existence of a phase transformation between low and high solute content grain boundaries is predicted. In order to conduct this study, new embedded atom method interatomic potentials are developed for Pt, Au, and the PtAu binary alloy.
Reassessing Pliocene temperature gradients
NASA Astrophysics Data System (ADS)
Tierney, J. E.
2017-12-01
With CO2 levels similar to present, the Pliocene Warm Period (PWP) is one of our best analogs for climate change in the near future. Temperature proxy data from the PWP describe dramatically reduced zonal and meridional temperature gradients that have proved difficult to reproduce with climate model simulations. Recently, debate has emerged regarding the interpretation of the proxies used to infer Pliocene temperature gradients; these interpretations affect the magnitude of inferred change and the degree of inconsistency with existing climate model simulations of the PWP. Here, I revisit the issue using Bayesian proxy forward modeling and prediction that propagates known uncertainties in the Mg/Ca, UK'37, and TEX86 proxy systems. These new spatiotemporal predictions are quantitatively compared to PWP simulations to assess probabilistic agreement. Results show generally good agreement between existing Pliocene simulations from the PlioMIP ensemble and SST proxy data, suggesting that exotic changes in the ocean-atmosphere are not needed to explain the Pliocene climate state. Rather, the spatial changes in SST during the Pliocene are largely consistent with elevated CO2 forcing.
NASA Astrophysics Data System (ADS)
Singleton, V. L.; Gantzer, P.; Little, J. C.
2007-02-01
An existing linear bubble plume model was improved, and data collected from a full-scale diffuser installed in Spring Hollow Reservoir, Virginia, were used to validate the model. The depth of maximum plume rise was simulated well for two of the three diffuser tests. Temperature predictions deviated from measured profiles near the maximum plume rise height, but predicted dissolved oxygen profiles compared very well with observations. A sensitivity analysis was performed. The gas flow rate had the greatest effect on predicted plume rise height and induced water flow rate, both of which were directly proportional to gas flow rate. Oxygen transfer within the hypolimnion was independent of all parameters except initial bubble radius, and was inversely proportional to radius for radii greater than approximately 1 mm. The results of this work suggest that plume dynamics and oxygen transfer can successfully be predicted for linear bubble plumes using the discrete-bubble approach.
NASA Astrophysics Data System (ADS)
Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.
2016-08-01
Given the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the photovoltaic inverter's output characteristics and response speed. In the MCPC method designed for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted and reference currents is adopted as the cost function, following the model predictive control approach. Based on this cost function, the optimal space voltage vector is selected. The photovoltaic inverter automatically switches between two control modes, giving priority to active or reactive power control according to the operating state, which effectively improves the inverter's LVRT capability. Simulation and experimental results prove that the proposed method is correct and effective.
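A minimal sketch of the MCPC selection step follows, assuming a simplified one-step current prediction in the stationary alpha-beta frame; all circuit parameters and the load model are illustrative, not taken from the paper:

```python
import numpy as np

Ts, L = 1e-4, 5e-3                          # sample time [s], filter inductance [H]
e = np.array([310.0, 0.0])                  # grid EMF (alpha, beta), assumed
i_k = np.array([4.0, 0.5])                  # measured current at step k
i_ref = np.array([5.0, 0.0])                # reference current from the LVRT mode

# Space voltage vectors of a two-level inverter; V0 and V7 coincide at the origin.
Vdc = 700.0
vectors = [2 / 3 * Vdc * np.array([np.cos(m * np.pi / 3), np.sin(m * np.pi / 3)])
           for m in range(6)] + [np.zeros(2)]

def cost(v):
    i_pred = i_k + Ts / L * (v - e)         # one-step prediction i(k+1)
    return np.sum(np.abs(i_ref - i_pred))   # sum of absolute current errors

best = min(vectors, key=cost)               # vector applied during the next period
```

Swapping the reference currents between active- and reactive-priority values is then enough to realize the two control modes described above.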
Predicted and measured boundary layer refraction for advanced turboprop propeller noise
NASA Technical Reports Server (NTRS)
Dittmar, James H.; Krejsa, Eugene A.
1990-01-01
Currently, boundary layer refraction presents a limitation to the measurement of forward arc propeller noise measured on an acoustic plate in the NASA Lewis 8- by 6-Foot Supersonic Wind Tunnel. The use of a validated boundary layer refraction model to adjust the data could remove this limitation. An existing boundary layer refraction model is used to predict the refraction for cases where boundary layer refraction was measured. In general, the model exhibits the same qualitative behavior as the measured refraction. However, the prediction method does not show quantitative agreement with the data. In general, it overpredicts the amount of refraction for the far forward angles at axial Mach number of 0.85 and 0.80 and underpredicts the refraction at axial Mach numbers of 0.75 and 0.70. A more complete propeller source description is suggested as a way to improve the prediction method.
First-Principles Prediction of Liquid/Liquid Interfacial Tension.
Andersson, M P; Bennetzen, M V; Klamt, A; Stipp, S L S
2014-08-12
The interfacial tension between two liquids is the free energy per unit surface area required to create that interface. Interfacial tension is a determining factor for two-phase liquid behavior in a wide variety of systems ranging from water flooding in oil recovery processes and remediation of groundwater aquifers contaminated by chlorinated solvents to drug delivery and a host of industrial processes. Here, we present a model for predicting interfacial tension from first principles using density functional theory calculations. Our model requires no experimental input and is applicable to liquid/liquid systems of arbitrary compositions. The consistency of the predictions with experimental data is significant for binary, ternary, and multicomponent water/organic compound systems, which offers confidence in using the model to predict behavior where no data exists. The method is fast and can be used as a screening technique as well as to extend experimental data into conditions where measurements are technically too difficult, time consuming, or impossible.
Photovoltaic performance models: an evaluation with actual field data
NASA Astrophysics Data System (ADS)
TamizhMani, Govindasamy; Ishioye, John-Paul; Voropayev, Arseniy; Kang, Yi
2008-08-01
Prediction of energy production is crucial to the design and installation of building integrated photovoltaic systems. This prediction should be attainable based on commonly available parameters such as system size, orientation and tilt angle. Several commercially available as well as freely downloadable software tools exist to predict energy production. Six software models have been evaluated in this study: PV Watts, PVsyst, MAUI, Clean Power Estimator, Solar Advisor Model (SAM) and RETScreen. This evaluation has been done by comparing the monthly, seasonally and annually predicted data with the actual field data obtained over a one-year period on a large number of residential PV systems ranging between 2 and 3 kWdc. All the systems are located in Arizona, within the Phoenix metropolitan area, which lies at latitude 33° North and longitude 112° West, and are all connected to the electrical grid.
2009-06-24
These drawings depict explanations for the source of intense heat that has been measured coming from Enceladus' south polar region. These models predict that water could exist in a deep layer as an ocean or sea, and also near the surface.
Deep-Learning-Based Drug-Target Interaction Prediction.
Wen, Ming; Zhang, Zhimin; Niu, Shaoyu; Sha, Haozhi; Yang, Ruihan; Yun, Yonghuan; Lu, Hongmei
2017-04-07
Identifying interactions between known drugs and targets is a major challenge in drug repositioning. In silico prediction of drug-target interaction (DTI) can speed up the expensive and time-consuming experimental work by providing the most potent DTIs. In silico prediction of DTI can also provide insights about potential drug-drug interactions and promote the exploration of drug side effects. Traditionally, the performance of DTI prediction depends heavily on the descriptors used to represent the drugs and the target proteins. In this paper, to accurately predict new DTIs between approved drugs and targets without separating the targets into different classes, we developed a deep-learning-based algorithmic framework named DeepDTIs. It first abstracts representations from raw input descriptors using unsupervised pretraining and then applies known label pairs of interaction to build a classification model. Compared with other methods, it is found that DeepDTIs matches or outperforms other state-of-the-art methods. DeepDTIs can be further used to predict whether a new drug targets some existing targets or whether a new target interacts with some existing drugs.
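The pretrain-then-classify pipeline can be illustrated with a shallow stand-in, an RBM feature extractor followed by a logistic classifier; this is not the DeepDTIs architecture, and the data below are placeholders:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((1000, 400))      # [drug fingerprint | protein descriptor] pairs
y = rng.integers(0, 2, 1000)     # 1 = known drug-target interaction

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=10,
                         random_state=0)),          # unsupervised feature learning
    ("clf", LogisticRegression(max_iter=1000)),     # supervised on known pairs
])
model.fit(X, y)
ranked = model.predict_proba(X[:5])[:, 1]           # scores for candidate DTIs
```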
The Anatomy of a Likely Donor: Econometric Evidence on Philanthropy to Higher Education
ERIC Educational Resources Information Center
Lara, Christen; Johnson, Daniel
2014-01-01
In 2011, philanthropic giving to higher education institutions totaled $30.3 billion, an 8.2% increase over the previous year. Roughly, 26% of those funds came from alumni donations. This article builds upon existing economic models to create an econometric model to explain and predict the pattern of alumni giving. We test the model using data…
James T. Peterson; Jason Dunham
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult- to-sample species, and models of species...
Yuan Fang; Ge Sun; Peter Caldwell; Steven G. McNulty; Asko Noormets; Jean-Christophe Domec; John King; Zhiqiang Zhang; Xudong Zhang; Guanghui Lin; Guangsheng Zhou; Jingfeng Xiao; Jiquan Chen
2015-01-01
Evapotranspiration (ET) is arguably the most uncertain ecohydrologic variable for quantifying watershed water budgets. Although numerous ET and hydrological models exist, accurately predicting the effects of global change on water use and availability remains challenging because of model deficiency and/or a lack of input parameters. The objective of this study was to...
Physical characteristics of shrub and conifer fuels for fire behavior models
Jonathan R. Gallacher; Thomas H. Fletcher; Victoria Lansinger; Sydney Hansen; Taylor Ellsworth; David R. Weise
2017-01-01
The physical properties and dimensions of foliage are necessary inputs for some fire spread models. Currently, almost no data exist on these plant characteristics to fill this need. In this report, we measured the physical properties and dimensions of the foliage from 10 live shrub and conifer fuels throughout a 1-year period. We developed models to predict relative...
An integrated model of soil, hydrology, and vegetation for carbon dynamics in wetland ecosystems
Yu Zhang; Changsheng Li; Carl C. Trettin; Harbin Li; Ge Sun
2002-01-01
Wetland ecosystems are an important component in global carbon (C) cycles and may exert a large influence on global climate change. Predictions of C dynamics require us to consider interactions among many critical factors of soil, hydrology, and vegetation. However, few such integrated C models exist for wetland ecosystems. In this paper, we report a simulation model...
Modeling Aromatic Liquids: Toluene, Phenol, and Pyridine.
Baker, Christopher M; Grant, Guy H
2007-03-01
Aromatic groups are now acknowledged to play an important role in many systems of interest. However, existing molecular mechanics methods provide a poor representation of these groups. In a previous paper, we have shown that the molecular mechanics treatment of benzene can be improved by the incorporation of an explicit representation of the aromatic π electrons. Here, we develop this concept further, developing charge-separation models for toluene, phenol, and pyridine. Monte Carlo simulations are used to parametrize the models, via the reproduction of experimental thermodynamic data, and our models are shown to outperform an existing atom-centered model. The models are then used to make predictions about the structures of the liquids at the molecular level and are tested further through their application to the modeling of gas-phase dimers and cation-π interactions.
NASA Astrophysics Data System (ADS)
Hogg, Charlie; Dalziel, Stuart; Huppert, Herbert; Imberger, Jorg; Department of Applied Mathematics; Theoretical Physics Team; CentreWater Research Team
2014-11-01
Dense gravity currents feed fluid into confined basins in lakes, the oceans and many industrial applications. Existing models of the circulation and mixing in such basins are often based on the currents entraining ambient fluid. However, recent observations have suggested that uni-directional entrainment into a gravity current does not fully describe the mixing in such currents. Laboratory experiments were carried out which visualised peeling detrainment from the gravity current occurring when the ambient fluid was stratified. A theoretical model of the observed peeling detrainment was developed to predict the stratification in the basin. This new model gives a better approximation of the stratification observed in the experiments than the pre-existing entraining model. The model can now be developed such that it integrates into operational models of lakes.
Simulating the evolution of glyphosate resistance in grains farming in northern Australia
Thornby, David F.; Walker, Steve R.
2009-01-01
Background and Aims The evolution of resistance to herbicides is a substantial problem in contemporary agriculture. Solutions to this problem generally consist of the use of practices to control the resistant population once it evolves, and/or to institute preventative measures before populations become resistant. Herbicide resistance evolves in populations over years or decades, so predicting the effectiveness of preventative strategies in particular relies on computational modelling approaches. While models of herbicide resistance already exist, none deals with the complex regional variability in the northern Australian sub-tropical grains farming region. For this reason, a new computer model was developed. Methods The model consists of an age- and stage-structured population model of weeds, with an existing crop model used to simulate plant growth and competition, and extensions to the crop model added to simulate seed bank ecology and population genetics factors. Using awnless barnyard grass (Echinochloa colona) as a test case, the model was used to investigate the likely rate of evolution under conditions expected to produce high selection pressure. Key Results Simulating continuous summer fallows with glyphosate used as the only means of weed control resulted in predicted resistant weed populations after approx. 15 years. Validation of the model against the paddock history for the first real-world glyphosate-resistant awnless barnyard grass population shows that the model predicted resistance evolution to within a few years of the real situation. Conclusions This validation work shows that empirical validation of herbicide resistance models is problematic. However, the model simulates the complexities of sub-tropical grains farming in Australia well, and can be used to investigate, generate and improve glyphosate resistance prevention strategies. PMID:19567415
A simplified approach to quasi-linear viscoelastic modeling
Nekouzadeh, Ali; Pryse, Kenneth M.; Elson, Elliot L.; Genin, Guy M.
2007-01-01
The fitting of quasi-linear viscoelastic (QLV) constitutive models to material data often involves somewhat cumbersome numerical convolution. A new approach to treating quasi-linearity in one dimension is described and applied to characterize the behavior of reconstituted collagen. This approach is based on a new principle for including nonlinearity and requires considerably less computation than other comparable models for both model calibration and response prediction, especially for smoothly applied stretching. Additionally, the approach allows relaxation to adapt with the strain history. The modeling approach is demonstrated through tests on pure reconstituted collagen. Sequences of “ramp-and-hold” stretching tests were applied to rectangular collagen specimens. The relaxation force data from the “hold” was used to calibrate a new “adaptive QLV model” and several models from literature, and the force data from the “ramp” was used to check the accuracy of model predictions. Additionally, the ability of the models to predict the force response on a reloading of the specimen was assessed. The “adaptive QLV model” based on this new approach predicts collagen behavior comparably to or better than existing models, with much less computation. PMID:17499254
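For context, the classical Fung QLV relation that such models simplify writes the stress as a convolution of a reduced relaxation function G with the rate of the instantaneous elastic response (standard textbook form, not reproduced from the paper):

\[ \sigma(t) = \int_{0}^{t} G(t-\tau)\, \frac{\partial \sigma^{e}\big(\varepsilon(\tau)\big)}{\partial \varepsilon}\, \dot{\varepsilon}(\tau)\, \mathrm{d}\tau \]

Avoiding the numerical evaluation of this convolution at every time step, and letting the relaxation behavior adapt with the strain history, is where the reported computational savings come from.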
NASA Astrophysics Data System (ADS)
Flores, A. N.; Pathak, C. S.; Senarath, S. U.; Bras, R. L.
2009-12-01
Robust hydrologic monitoring networks represent a critical element of decision support systems for effective water resource planning and management. Moreover, process representation within hydrologic simulation models is steadily improving, while at the same time computational costs are decreasing due to, for instance, readily available high performance computing resources. The ability to leverage these increasingly complex models together with the data from these monitoring networks to provide accurate and timely estimates of relevant hydrologic variables within a multiple-use, managed water resources system would substantially enhance the information available to resource decision makers. Numerical data assimilation techniques provide mathematical frameworks through which uncertain model predictions can be constrained to observational data to compensate for uncertainties in the model forcings and parameters. In ensemble-based data assimilation techniques such as the ensemble Kalman Filter (EnKF), information in observed variables such as canal, marsh and groundwater stages are propagated back to the model states in a manner related to: (1) the degree of certainty in the model state estimates and observations, and (2) the cross-correlation between the model states and the observable outputs of the model. However, the ultimate degree to which hydrologic conditions can be accurately predicted in an area of interest is controlled, in part, by the configuration of the monitoring network itself. In this proof-of-concept study we developed an approach by which the design of an existing hydrologic monitoring network is adapted to iteratively improve the predictions of hydrologic conditions within an area of the South Florida Water Management District (SFWMD). The objective of the network design is to minimize prediction errors of key hydrologic states and fluxes produced by the spatially distributed Regional Simulation Model (RSM), developed specifically to simulate the hydrologic conditions in several intensively managed and hydrologically complex watersheds within the SFWMD system. In a series of synthetic experiments RSM is used to generate the notionally true hydrologic state and the relevant observational data. The EnKF is then used as the mechanism to fuse RSM hydrologic estimates with data from the candidate network. The performance of the candidate network is measured by the prediction errors of the EnKF estimates of hydrologic states, relative to the notionally true scenario. The candidate network is then adapted by relocating existing observational sites to unobserved areas where predictions of local hydrologic conditions are most uncertain and the EnKF procedure repeated. Iteration of the monitoring network continues until further improvements in EnKF-based predictions of hydrologic conditions are negligible.
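For reference, the stochastic EnKF analysis step that carries out this propagation updates each forecast ensemble member \(x_i^f\) as (standard form, not specific to the RSM implementation):

\[ x_i^{a} = x_i^{f} + K\big(y + \epsilon_i - H x_i^{f}\big), \qquad K = P^{f} H^{\mathsf{T}} \big(H P^{f} H^{\mathsf{T}} + R\big)^{-1}, \]

where \(P^{f}\) is the forecast error covariance estimated from the ensemble, \(H\) maps model states to the observed canal, marsh and groundwater stages, \(R\) is the observation error covariance, and \(\epsilon_i \sim \mathcal{N}(0, R)\) are observation perturbations. The gain \(K\) encodes exactly the two factors noted above: the relative certainty of states and observations, and the state-observation cross-correlation.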
Towards feasible and effective predictive wavefront control for adaptive optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyneer, L A; Veran, J
We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over 1 year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.
Predicting chaos in memristive oscillator via harmonic balance method.
Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai
2012-12-01
This paper studies the possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, also known as the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system into a Lur'e model and present a prediction of the existence of chaotic behaviors. To ensure the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.
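In the describing function setting, the Lur'e decomposition separates the system into a linear transfer function \(G(j\omega)\) and a static nonlinearity replaced by its describing function \(N(A)\); a limit cycle of amplitude \(A\) and frequency \(\omega\) is then predicted where the first-harmonic balance condition holds (standard formulation, not reproduced from the paper):

\[ 1 + N(A)\,G(j\omega) = 0 \quad\Longleftrightarrow\quad G(j\omega) = -\frac{1}{N(A)}. \]

The distortion index mentioned above quantifies the energy in the neglected higher harmonics, and thus how far this first-harmonic approximation can be trusted.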
Deep neural networks for modeling visual perceptual learning.
Wenliang, Li; Seitz, Aaron R
2018-05-23
Understanding visual perceptual learning (VPL) has become increasingly more challenging as new phenomena are discovered with novel stimuli and training paradigms. While existing models aid our knowledge of critical aspects of VPL, the connections shown by these models between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adaptable to explain new findings. Here, we show that a well-known instance of deep neural network (DNN), while not designed specifically for VPL, provides a computational model of VPL with enough complexity to be studied at many levels of analyses. After learning a Gabor orientation discrimination task, the DNN model reproduced key behavioral results, including increasing specificity with higher task precision, and also suggested that learning precise discriminations could asymmetrically transfer to coarse discriminations when the stimulus conditions varied. In line with the behavioral findings, the distribution of plasticity moved towards lower layers when task precision increased, and this distribution was also modulated by tasks with different stimulus types. Furthermore, learning in the network units demonstrated close resemblance to extant electrophysiological recordings in monkey visual areas. Altogether, the DNN fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. Although the comparisons were mostly qualitative, the DNN provides a new method of studying VPL and can serve as a testbed for theories and assist in generating predictions for physiological investigations. SIGNIFICANCE STATEMENT Visual perceptual learning (VPL) has been found to cause changes at multiple stages of the visual hierarchy. We found that training a deep neural network (DNN) on an orientation discrimination task produced similar behavioral and physiological patterns found in human and monkey experiments. Unlike existing VPL models, the DNN was pre-trained on natural images to reach high performance in object recognition but was not designed specifically for VPL, and yet it fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. When used with care, this unbiased and deep-hierarchical model can provide new ways of studying VPL from behavior to physiology. Copyright © 2018 the authors.
A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
2014-01-01
Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that were previously used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent, and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to a linear form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
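The abstract does not give the exact formula of the general weighted profile method, so the sketch below is only a plausible Jaccard-weighted flavor of it, with hypothetical function names:

```python
def jaccard(a, b):
    """Jaccard coefficient between two sets, e.g. the ADR profiles of two drugs."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def weighted_profile_score(query_profile, known_profiles, adr):
    """Known drugs vote for the candidate ADR in proportion to their similarity
    to the query drug (hypothetical scoring, not the paper's exact method)."""
    weights = [jaccard(query_profile, p) for p in known_profiles]
    votes = sum(w for w, p in zip(weights, known_profiles) if adr in p)
    total = sum(weights)
    return votes / total if total else 0.0

drugs = [{"nausea", "rash"}, {"nausea", "headache"}, {"dizziness"}]
print(weighted_profile_score({"nausea"}, drugs, "headache"))
```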
Radvansky, Gabriel A.; D’Mello, Sidney K.; Abbott, Robert G.; Bixler, Robert E.
2016-01-01
The Fluid Events Model is aimed at predicting changes in the actions people take on a moment-by-moment basis. In contrast with other research on action selection, this work does not investigate why some course of action was selected, but rather the likelihood of discontinuing the current course of action and selecting another in the near future. This is done using both task-based and experience-based factors. Prior work evaluated this model in the context of trial-by-trial, independent, interactive events, such as choosing how to copy a figure of a line drawing. In this paper, we extend this model to more covert event experiences, such as reading narratives, as well as to continuous interactive events, such as playing a video game. To this end, the model was applied to existing data sets of reading time and event segmentation for written and picture stories. It was also applied to existing data sets of performance in a strategy board game, an aerial combat game, and a first person shooter game in which a participant’s current state was dependent on prior events. The results revealed that the model predicted behavior changes well, taking into account both the theoretically defined structure of the described events, as well as a person’s prior experience. Thus, theories of event cognition can benefit from efforts that take into account not only how events in the world are structured, but also how people experience those events. PMID:26858673
Improving the accuracy of energy baseline models for commercial buildings with occupancy data
Liang, Xin; Hong, Tianzhen; Shen, Geoffrey Qiping
2016-07-07
More than 80% of energy is consumed during the operation phase of a building's life cycle, so energy efficiency retrofit for existing buildings is considered a promising way to reduce energy use in buildings. The investment strategies of retrofit depend on the ability to quantify energy savings by "measurement and verification" (M&V), which compares actual energy consumption to how much energy would have been used without the retrofit (called the "baseline" of energy use). Although numerous models exist for predicting the baseline of energy use, a critical limitation is that occupancy has not been included as a variable. However, occupancy rate is essential for energy consumption and was emphasized by previous studies. This study develops a new baseline model which is built upon the Lawrence Berkeley National Laboratory (LBNL) model but includes the use of building occupancy data. The study also proposes metrics to quantify the accuracy of prediction and the impacts of variables. However, the results show that including occupancy data does not significantly improve the accuracy of the baseline model, especially for HVAC load. The reasons are discussed further. In addition, sensitivity analysis is conducted to show the influence of parameters in baseline models. To conclude, the results from this study can help us understand the influence of occupancy on energy use, improve energy baseline prediction by including the occupancy factor, reduce risks of M&V and facilitate investment strategies of energy efficiency retrofit.
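A minimal sketch of such an occupancy-augmented baseline regression is shown below; the predictors, linear form, and synthetic data are assumptions for illustration, not the LBNL model itself:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 24 * 7 * 8                                  # eight weeks of hourly data
temp = rng.normal(18, 6, n)                     # outdoor temperature [°C]
tow = np.arange(n) % 168                        # hour-of-week index
occ = rng.random(n)                             # occupancy rate in [0, 1]
energy = 50 + 2.0 * np.maximum(temp - 20, 0) + 30 * occ + rng.normal(0, 3, n)

X = np.column_stack([np.maximum(temp - 20, 0),  # cooling-driven load proxy
                     occ,                       # the occupancy variable under test
                     np.sin(2 * np.pi * tow / 168),
                     np.cos(2 * np.pi * tow / 168)])
baseline = LinearRegression().fit(X, energy)
# M&V savings estimate: baseline.predict(X_post) minus measured post-retrofit use.
```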
The prediction of acoustical particle motion using an efficient polynomial curve fit procedure
NASA Technical Reports Server (NTRS)
Marshall, S. E.; Bernhard, R.
1984-01-01
A procedure is examined whereby the acoustic modal parameters, natural frequencies and mode shapes, in the cavities of transportation vehicles are determined experimentally. The acoustic mode shapes are described in terms of the particle motion. The acoustic modal analysis procedure is tailored to existing minicomputer-based spectral analysis systems.
Formal Models of Word Recognition. Final Report.
ERIC Educational Resources Information Center
Travers, Jeffrey R.
Existing mathematical models of word recognition are reviewed and a new theory is proposed in this research. The new theory integrates earlier proposals within a single framework, sacrificing none of the predictive power of the earlier proposals, but offering a gain in theoretical economy. The theory holds that word recognition is accomplished by…
A Computational Model for Predicting Gas Breakdown
NASA Astrophysics Data System (ADS)
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
Degradation Prediction Model Based on a Neural Network with Dynamic Windows
Zhang, Xinghui; Xiao, Lei; Kang, Jianshe
2015-01-01
Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where enough run-to-failure condition monitoring data are available have been thoroughly researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., from normal operation to failure. Only a certain number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods have poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To address these dilemmas, this paper proposes a RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that the approach does not need to assume that the degradation trajectory follows a certain distribution. The other is that it can adapt to variation in the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated by real field data and simulation data. PMID:25806873
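Rolling prediction over a sliding window, the third step above, can be sketched as follows; the network size, window length, threshold, and synthetic degradation trace are assumptions for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def rolling_rul(history, w, threshold, max_steps=500):
    """Train on (last w values -> next value) pairs, then feed predictions back
    until the degradation indicator crosses the failure threshold."""
    X = np.array([history[i:i + w] for i in range(len(history) - w)])
    y = np.array(history[w:])
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=0).fit(X, y)
    window = list(history[-w:])
    for step in range(1, max_steps + 1):
        nxt = float(net.predict([window])[0])
        if nxt >= threshold:
            return step                  # predicted RUL in time steps
        window = window[1:] + [nxt]
    return max_steps                     # no crossing within the horizon

trace = list(np.linspace(0.1, 0.6, 60)) # synthetic degradation indicator
print(rolling_rul(trace, w=5, threshold=0.9))
```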
Park, Hahnbeom; Bradley, Philip; Greisen, Per; Liu, Yuan; Mulligan, Vikram Khipple; Kim, David E.; Baker, David; DiMaio, Frank
2017-01-01
Most biomolecular modeling energy functions for structure prediction, sequence design, and molecular docking, have been parameterized using existing macromolecular structural data; this contrasts molecular mechanics force fields which are largely optimized using small-molecule data. In this study, we describe an integrated method that enables optimization of a biomolecular modeling energy function simultaneously against small-molecule thermodynamic data and high-resolution macromolecular structural data. We use this approach to develop a next-generation Rosetta energy function that utilizes a new anisotropic implicit solvation model, and an improved electrostatics and Lennard-Jones model, illustrating how energy functions can be considerably improved in their ability to describe large-scale energy landscapes by incorporating both small-molecule and macromolecule data. The energy function improves performance in a wide range of protein structure prediction challenges, including monomeric structure prediction, protein-protein and protein-ligand docking, protein sequence design, and prediction of the free energy changes by mutation, while reasonably recapitulating small-molecule thermodynamic properties. PMID:27766851
The use of computer models to predict temperature and smoke movement in high bay spaces
NASA Technical Reports Server (NTRS)
Notarianni, Kathy A.; Davis, William D.
1993-01-01
The Building and Fire Research Laboratory (BFRL) was given the opportunity to make measurements during fire calibration tests of the heat detection system in an aircraft hangar with a nominal 30.4 m (100 ft) ceiling height near Dallas, TX. Fire gas temperatures resulting from an approximately 8250 kW isopropyl alcohol pool fire were measured above the fire and along the ceiling. The results of the experiments were then compared to predictions from the computer fire models DETACT-QS, FPETOOL, and LAVENT. In section A of the analysis, DETACT-QS and FPETOOL significantly underpredicted the gas temperature. LAVENT, at the position below the ceiling corresponding to maximum temperature and velocity, provided better agreement with the data. For large spaces, hot gas transport time and an improved fire plume dynamics model should be incorporated into the computer fire model activation routines. A computational fluid dynamics (CFD) model, HARWELL FLOW3D, was then used to model the hot gas movement in the space. Reasonable agreement was found between the temperatures predicted from the CFD calculations and the temperatures measured in the aircraft hangar. In section B, an existing NASA high bay space was modeled using the CFD model. The NASA space was a clean room, 27.4 m (90 ft) high, with forced horizontal laminar flow. The purpose of this analysis was to determine how the existing fire detection devices would respond to various size fires in the space. The analysis was conducted for 32 MW, 400 kW, and 40 kW fires.
Real Time Land-Surface Hydrologic Modeling Over Continental US
NASA Technical Reports Server (NTRS)
Houser, Paul R.
1998-01-01
The land surface component of the hydrological cycle is fundamental to the overall functioning of atmospheric and climate processes. Spatially and temporally variable rainfall and available energy, combined with land surface heterogeneity, cause complex variations in all processes related to surface hydrology. The characterization of the spatial and temporal variability of water and energy cycles is critical to improving our understanding of land surface-atmosphere interaction and the impact of land surface processes on climate extremes. Because accurate knowledge of these processes and their variability is important for climate predictions, most Numerical Weather Prediction (NWP) centers have incorporated land surface schemes in their models. However, errors in the NWP forcing accumulate in the surface water and energy stores, leading to incorrect surface water and energy partitioning and related processes. This has motivated NWP centers to impose ad hoc corrections to the land surface states to prevent this drift. A proposed methodology is to develop Land Data Assimilation schemes (LDAS), which are uncoupled models forced with observations and therefore not affected by NWP forcing biases. The proposed research is being implemented as a real time operation using an existing Surface Vegetation Atmosphere Transfer Scheme (SVATS) model at a 40 km resolution across the United States to evaluate these critical science questions. The model will be forced with real time output from numerical prediction models, satellite data, and radar precipitation measurements. Model parameters will be derived from existing GIS vegetation and soil coverages. The model results will be aggregated to various scales to assess water and energy balances, and these will be validated against various in-situ observations.
Numerical simulation of experiments in the Giant Planet Facility
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.
1979-01-01
Utilizing a series of existing computer codes, ablation experiments in the Giant Planet Facility are numerically simulated. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.
Le Moullec, Y; Potier, O; Gentric, C; Leclerc, J P
2011-05-01
This paper presents an experimental and numerical study of an activated sludge channel pilot plant. Concentration profiles of oxygen, COD, NO(3) and NH(4) were measured for several operating conditions. These profiles were compared to those simulated with three different modelling approaches, namely a systemic approach, CFD and compartmental modelling. For all three approaches, the kinetics model was the ASM-1 model (Henze et al., 2001). The three approaches allowed a reasonable simulation of all the concentration profiles except for ammonium, for which the simulation results were far from the experimental ones. The analysis of the results showed that the role of the kinetics model is of primary importance for predicting the performance of activated sludge reactors. The fact that existing kinetics parameters in the literature have been determined by parametric optimisation using a systemic model limits the reliability of the prediction of local concentrations and of the local design of activated sludge reactors. Copyright © 2011 Elsevier Ltd. All rights reserved.
Pérez-Jorge, Sergi; Pereira, Thalia; Corne, Chloe; Wijtten, Zeno; Omar, Mohamed; Katello, Jillo; Kinyua, Mark; Oro, Daniel; Louzao, Maite
2015-01-01
Along the East African coast, marine top predators face an increasing number of anthropogenic threats, which requires the implementation of effective and urgent conservation measures to protect essential habitats. Understanding the role that habitat features play in a marine top predator's distribution and abundance is a crucial step in evaluating the suitability of an existing Marine Protected Area (MPA), originally designated for the protection of coral reefs. We developed species distribution models (SDM) for the IUCN data deficient Indo-Pacific bottlenose dolphin (Tursiops aduncus) in southern Kenya. We followed a comprehensive ecological modelling approach to study the environmental factors influencing the occurrence and abundance of dolphins while developing SDMs. Through the combination of ensemble prediction maps, we defined recurrent, occasional and unfavourable habitats for the species. Our results showed the influence of dynamic and static predictors on the dolphins' spatial ecology: dolphins may select shallow areas (5-30 m), close to reefs (< 500 m) and oceanic fronts (< 10 km) and adjacent to the 100 m isobath (< 5 km). We also predicted a significantly higher occurrence and abundance of dolphins within the MPA. Recurrent and occasional habitats covered large percentages of the existing MPA (47% and 57% using presence-absence and abundance models, respectively). However, the MPA does not adequately encompass all occasional and recurrent areas, and within this context we propose extending the MPA to incorporate them, as they are likely key habitats for this highly mobile species. The results from this study provide two key conservation and management tools: (i) an integrative habitat modelling approach to predict key marine habitats, and (ii) the first study evaluating the effectiveness of an existing MPA for marine mammals in the Western Indian Ocean.
A crystallographic model for the tensile and fatigue response for Rene N4 at 982 C
NASA Technical Reports Server (NTRS)
Sheh, M. Y.; Stouffer, D. C.
1990-01-01
An anisotropic constitutive model based on crystallographic slip theory was formulated for nickel-base single-crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the existence of back stress in single crystals. The results showed that the back stress effect of reverse inelastic flow on the unloading stress is orientation-dependent, and a back stress state variable in the inelastic flow equation is necessary for predicting inelastic behavior. Model correlations and predictions of experimental data are presented for the single crystal superalloy Rene N4 at 982 C.
NASA Astrophysics Data System (ADS)
Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie
2015-08-01
The business failure of numerous companies results in financial crises. The high social costs associated with such crises have spurred the search for effective tools for business risk prediction, among which support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machine. However, existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machine, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machine, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with support vector machine as the base predictive model, four specific predictive models were produced: pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble built with random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines built on variables from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines produced dominant performance compared with pure support vector machine and the support vector machine ensemble.
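A minimal scikit-learn sketch of the four variants on synthetic data follows; the dataset, the PCA dimension and the bagging settings are assumptions, with random sampling plus group decision approximated by bagging with majority voting.

```python
# Sketch of the four APMF-WHAE model variants on synthetic data: pure SVM,
# PCA+SVM hybrid, an SVM ensemble (random sampling + group decision,
# approximated by bagging with majority vote), and an ensemble of hybrids.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)  # stand-in for firm-risk data

models = {
    "pure SVM": make_pipeline(StandardScaler(), SVC()),
    "hybrid PCA+SVM": make_pipeline(StandardScaler(),
                                    PCA(n_components=10), SVC()),
    "SVM ensemble": BaggingClassifier(
        make_pipeline(StandardScaler(), SVC()),
        n_estimators=25, max_samples=0.7, random_state=0),
    "ensemble of hybrids": BaggingClassifier(
        make_pipeline(StandardScaler(), PCA(n_components=10), SVC()),
        n_estimators=25, max_samples=0.7, random_state=0),
}
for name, model in models.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```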
In-silico wear prediction for knee replacements--methodology and corroboration.
Strickland, M A; Taylor, M
2009-07-22
The capability to predict in-vivo wear of knee replacements is a valuable pre-clinical analysis tool for implant designers. Traditionally, time-consuming experimental tests provided the principal means of investigating wear. Today, computational models offer an alternative. However, the validity of these models has not been demonstrated across a range of designs and test conditions, and several different formulas are in contention for estimating wear rates, limiting confidence in the predictive power of these in-silico models. This study collates and retrospectively simulates a wide range of experimental wear tests using fast rigid-body computational models with extant wear prediction algorithms, to assess the performance of current in-silico wear prediction tools. The number of tests corroborated gives a broader, more general assessment of the performance of these wear-prediction tools, and provides better estimates of the wear 'constants' used in computational models. High-speed rigid-body modelling allows a range of alternative algorithms to be evaluated. Whilst most cross-shear (CS)-based models perform comparably, the 'A/A+B' wear model appears to offer the best predictive power amongst existing wear algorithms. However, the range and variability of experimental data leaves considerable uncertainty in the results. More experimental data with reduced variability and more detailed reporting of studies will be necessary to corroborate these models with greater confidence. With simulation times reduced to only a few minutes, these models are ideally suited to large-volume 'design of experiment' or probabilistic studies (which are essential if pre-clinical assessment tools are to begin addressing the degree of variation observed clinically and in explanted components).
Predictability of malaria parameters in Sahel under the S4CAST Model.
NASA Astrophysics Data System (ADS)
Diouf, Ibrahima; Rodríguez-Fonseca, Belen; Deme, Abdoulaye; Cisse, Moustapha; Ndione, Jaques-Andre; Gaye, Amadou; Suárez-Moreno, Roberto
2016-04-01
An extensive literature documents ENSO impacts on infectious diseases, including malaria; other studies have focused on cholera, dengue and Rift Valley Fever. This study explores the seasonal predictability of malaria outbreaks over the Sahel from antecedent SSTs of the Pacific and Atlantic basins. SST can be considered a source of predictability because of its direct influence on rainfall and temperature, and thus on related variables such as malaria parameters. In this work, the S4CAST model is applied to study the predictability of Sahelian malaria parameters from the leading maximum covariance analysis (MCA) covariability mode, in the framework of climate and health. The results will help decision makers better access climate forecasts and apply them to malaria transmission risk.
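S4CAST is built around maximum covariance analysis; the sketch below extracts a leading covariability mode between two synthetic anomaly fields via SVD of their cross-covariance and uses it for a regression-based prediction. The shapes, the synthetic data and the regression step are assumptions for demonstration, not the S4CAST code.

```python
# Sketch of maximum covariance analysis (MCA) between an SST anomaly field
# and a malaria-related field. Fields are synthetic; shapes and the
# regression step are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
nt, nx, ny = 120, 50, 30                 # months, SST points, malaria points
shared = rng.normal(size=nt)             # hidden common signal
sst = np.outer(shared, rng.normal(size=nx)) + rng.normal(size=(nt, nx))
malaria = np.outer(shared, rng.normal(size=ny)) + rng.normal(size=(nt, ny))

# Remove the time mean, then take the SVD of the cross-covariance matrix.
sst -= sst.mean(axis=0)
malaria -= malaria.mean(axis=0)
C = sst.T @ malaria / (nt - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Leading mode: expansion coefficients (time series) for each field.
ec_sst = sst @ U[:, 0]        # predictor time series
ec_mal = malaria @ Vt[0]      # predictand time series

# Predict the malaria coefficient from the SST coefficient (least squares),
# then map it back onto the spatial pattern of mode 1.
slope = (ec_sst @ ec_mal) / (ec_sst @ ec_sst)
malaria_hat = np.outer(slope * ec_sst, Vt[0])
print("squared covariance fraction of mode 1:", s[0]**2 / (s**2).sum())
```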
Acoustic Measurements of Small Solid Rocket Motor
NASA Technical Reports Server (NTRS)
Vargas, Magda B.; Kenny, R. Jeremy
2010-01-01
Rocket acoustic noise can induce loads and vibration on the vehicle as well as the surrounding structures. Models have been developed to predict these acoustic loads based on scaling existing solid rocket motor data. The NASA Marshall Space Flight Center acoustics team has measured several small solid rocket motors (thrust below 150,000 lbf) to anchor prediction models. This data will provide NASA the capability to predict the acoustic environments and consequent vibro-acoustic response of larger rockets (thrust above 1,000,000 lbf) such as those planned for the NASA Constellation program. This paper presents the methods used to measure acoustic data during the static firing of small solid rocket motors and the trends found in the data.
Model-Based Fatigue Prognosis of Fiber-Reinforced Laminates Exhibiting Concurrent Damage Mechanisms
NASA Technical Reports Server (NTRS)
Corbetta, M.; Sbarufatti, C.; Saxena, A.; Giglio, M.; Goebel, K.
2016-01-01
Prognostics of large composite structures is a topic of increasing interest in the field of structural health monitoring for aerospace, civil, and mechanical systems. Along with recent advancements in real-time structural health data acquisition and processing for damage detection and characterization, model-based stochastic methods for life prediction are showing promising results in the literature. Among various model-based approaches, particle-filtering algorithms are particularly capable of coping with uncertainties associated with the process. These include uncertainties about information on the damage extent and the inherent uncertainties of the damage propagation process. Some efforts have shown successful applications of particle filtering-based frameworks for predicting the matrix crack evolution and structural stiffness degradation caused by repetitive fatigue loads. Effects of other damage modes such as delamination, however, are not incorporated in these works. It is well established that delamination and matrix cracks not only co-exist in most laminate structures during the fatigue degradation process but also affect each other's progression. Furthermore, delamination significantly alters the stress-state in the laminates and accelerates the material degradation leading to catastrophic failure. Therefore, the work presented herein proposes a particle filtering-based framework for predicting a structure's remaining useful life with consideration of multiple co-existing damage-mechanisms. The framework uses an energy-based model from the composite modeling literature. The multiple damage-mode model has been shown to suitably estimate the energy release rate of cross-ply laminates as affected by matrix cracks and delamination modes. The model is also able to estimate the reduction in stiffness of the damaged laminate. This information is then used in the algorithms for life prediction capabilities. First, a brief summary of the energy-based damage model is provided. Then, the paper describes how the model is embedded within the prognostic framework and how the prognostics performance is assessed using observations from run-to-failure experiments.
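For illustration, a minimal particle-filter prognosis loop follows; a Paris-law-like growth rule stands in for the paper's energy-based multi-damage model, and all constants, noise levels and the failure size are assumptions for demonstration.

```python
# Minimal particle-filter prognosis sketch. A Paris-law-like growth rule
# stands in for the paper's energy-based multi-damage model; constants,
# noise levels and the failure size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, C_true, m = 2000, 5e-3, 1.3
particles = np.column_stack([
    1.0 + rng.normal(0, 0.05, n),          # damage size a
    10 ** rng.uniform(-2.7, -1.8, n),      # uncertain growth-rate constant C
])
weights = np.full(n, 1.0 / n)

def step(a, C):                            # damage growth over one load block
    return a + C * a ** m

a_true, measurements = 1.0, []
for _ in range(30):                        # simulate noisy condition monitoring
    a_true = step(a_true, C_true)
    measurements.append(a_true + rng.normal(0, 0.02))

for z in measurements:                     # filter: predict, weight, resample
    particles[:, 0] = step(particles[:, 0], particles[:, 1])
    weights *= np.exp(-0.5 * ((z - particles[:, 0]) / 0.02) ** 2)
    weights /= weights.sum()
    idx = rng.choice(n, n, p=weights)      # multinomial resampling
    particles, weights = particles[idx], np.full(n, 1.0 / n)

# RUL: propagate each particle forward until the failure size is reached.
a_fail, rul = 1.3, np.zeros(n)
for i, (a, C) in enumerate(particles):
    while a < a_fail and rul[i] < 500:
        a, rul[i] = step(a, C), rul[i] + 1
print(f"median RUL = {np.median(rul):.0f} blocks, 90% band "
      f"[{np.percentile(rul, 5):.0f}, {np.percentile(rul, 95):.0f}]")
```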
Dimer-based model for heptaspanning membrane receptors.
Franco, Rafael; Casadó, Vicent; Mallol, Josefa; Ferré, Sergi; Fuxe, Kjell; Cortés, Antonio; Ciruela, Francisco; Lluis, Carmen; Canela, Enric I
2005-07-01
The existence of intramembrane receptor-receptor interactions for heptaspanning membrane receptors is now fully accepted, but a model considering dimers as the basic unit that binds to two ligand molecules is lacking. Here, we propose a two-state-dimer model in which the ligand-induced conformational changes from one component of the dimer are communicated to the other. Our model predicts cooperativity in binding, which is relevant because the other current models fail to address this phenomenon satisfactorily. Our two-state-dimer model also predicts the variety of responses elicited by full or partial agonists, neutral antagonists and inverse agonists. This model can aid our understanding of the operation of heptaspanning receptors and receptor channels, and, potentially, be important for improving the treatment of cardiovascular, neurological and neuropsychiatric diseases.
Na, Okpin; Cai, Xiao-Chuan; Xi, Yunping
2017-01-01
The prediction of chloride-induced corrosion is very important because it governs the durable life of concrete structures. To simulate more realistic durability performance of concrete structures, complex scientific methods and more accurate material models are needed. In order to predict robust results for corrosion initiation time and to resolve the thin layer from the concrete surface to the reinforcement, a large number of fine meshes are also used. The purpose of this study is to suggest a more realistic physical model of coupled hygro-chemo transport and to implement the model with a parallel finite element algorithm. Furthermore, a microclimate model with environmental humidity and seasonal temperature is adopted. As a result, a prediction model of chloride diffusion under unsaturated conditions was developed with parallel algorithms and applied to an existing bridge to validate the model under multiple boundary conditions. As the number of processors increased, the computational time decreased until the number of processors reached an optimum; beyond that point, the computational time increased because the communication time between processors grew. The framework of the present model can be extended in future work to simulate multi-species de-icing salt ingress into non-saturated concrete structures. PMID:28772714
Campbell, William; Ganna, Andrea; Ingelsson, Erik; Janssens, A Cecile J W
2016-01-01
We propose a new measure of assessing the performance of risk models, the area under the prediction impact curve (auPIC), which quantifies the performance of risk models in terms of their average health impact in the population. Using simulated data, we explain how the prediction impact curve (PIC) estimates the percentage of events prevented when a risk model is used to assign high-risk individuals to an intervention. We apply the PIC to the Atherosclerosis Risk in Communities (ARIC) Study to illustrate its application toward prevention of coronary heart disease. We estimated that if the ARIC cohort received statins at baseline, 5% of events would be prevented when the risk model was evaluated at a cutoff threshold of 20% predicted risk compared to 1% when individuals were assigned to the intervention without the use of a model. By calculating the auPIC, we estimated that an average of 15% of events would be prevented when considering performance across the entire interval. We conclude that the PIC is a clinically meaningful measure for quantifying the expected health impact of risk models that supplements existing measures of model performance. Copyright © 2016 Elsevier Inc. All rights reserved.
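A minimal sketch of computing a PIC and its area follows; the simulated risks, outcomes and the assumed 25% relative risk reduction are illustrative, not the ARIC analysis.

```python
# Sketch of the prediction impact curve (PIC): for each risk cutoff, the
# share of all events prevented if everyone above the cutoff is treated.
# Simulated risks, outcomes and the assumed relative risk reduction are
# illustrative assumptions, not the ARIC analysis.
import numpy as np

rng = np.random.default_rng(3)
risk = rng.beta(2, 8, 10_000)               # predicted risks for a cohort
events = rng.random(10_000) < risk          # outcomes consistent with risks
rrr = 0.25                                  # assumed treatment effect

cutoffs = np.linspace(0, 1, 101)
pic = np.array([rrr * events[risk >= c].sum() / events.sum()
                for c in cutoffs])          # events prevented / total events
au_pic = (0.5 * (pic[1:] + pic[:-1]) * np.diff(cutoffs)).sum()
print(f"PIC at 20% cutoff: {pic[20]:.1%}, auPIC: {au_pic:.3f}")
```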
NASA Astrophysics Data System (ADS)
Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph
2018-05-01
This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
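As a schematic of the calibration-with-model-error idea, the sketch below infers Arrhenius ignition-delay parameters together with an error magnitude by random-walk Metropolis sampling. The surrogate mechanism, priors, data and proposal scales are all assumptions for demonstration, and the non-intrusive spectral projection step is omitted.

```python
# Sketch of calibration with an embedded error term: fit Arrhenius
# ignition-delay parameters to synthetic "benchmark" data while also
# inferring extra variability (sigma) representing model inadequacy.
# The mechanism, data and priors are illustrative, not the paper's setup.
import numpy as np

rng = np.random.default_rng(7)
T = np.linspace(900, 1200, 12)                       # temperatures, K
lntau_bench = 40.0 / (8.314e-3 * T) - 30.0 + rng.normal(0, 0.05, T.size)

def loglike(theta):
    lnA, Ea, lnsig = theta
    sig = np.exp(lnsig)                              # model error + noise scale
    resid = lntau_bench - (Ea / (8.314e-3 * T) + lnA)
    return -0.5 * np.sum(resid**2 / sig**2 + 2 * np.log(sig))

# Random-walk Metropolis over (ln A, Ea, ln sigma).
theta = np.array([-30.0, 40.0, np.log(0.1)])
samples, ll = [], loglike(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.05, 0.5, 0.05])
    llp = loglike(prop)
    if np.log(rng.random()) < llp - ll:
        theta, ll = prop, llp
    samples.append(theta.copy())
post = np.array(samples[5000:])
print("posterior means (lnA, Ea, sigma):",
      post[:, 0].mean().round(2), post[:, 1].mean().round(2),
      np.exp(post[:, 2]).mean().round(3))
```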
On the Space-Time Structure of Sheared Turbulence
NASA Astrophysics Data System (ADS)
de Maré, Martin; Mann, Jakob
2016-09-01
We develop a model that predicts all two-point correlations in high Reynolds number turbulent flow, in both space and time. This is accomplished by combining the design philosophies behind two existing models, the Mann spectral velocity tensor, in which isotropic turbulence is distorted according to rapid distortion theory, and Kristensen's longitudinal coherence model, in which eddies are simultaneously advected by larger eddies as well as decaying. The model is compared with data from both observations and large-eddy simulations and is found to predict spatial correlations comparable to the Mann spectral tensor and temporal coherence better than any known model. Within the developed framework, Lagrangian two-point correlations in space and time are also predicted, and the predictions are compared with measurements of isotropic turbulence. The required input to the models, which are formulated as spectral velocity tensors, can be estimated from measured spectra or be derived from the rate of dissipation of turbulent kinetic energy, the friction velocity and the mean shear of the flow. The developed models can, for example, be used in wind-turbine engineering, in applications such as lidar-assisted feed forward control and wind-turbine wake modelling.
Predictive Finite Rate Model for Oxygen-Carbon Interactions at High Temperature
NASA Astrophysics Data System (ADS)
Poovathingal, Savio
An oxidation model for carbon surfaces is developed to predict ablation rates for carbon heat shields used in hypersonic vehicles. Unlike existing empirical models, the approach used here was to probe gas-surface interactions individually and then based on an understanding of the relevant fundamental processes, build a predictive model that would be accurate over a wide range of pressures and temperatures, and even microstructures. Initially, molecular dynamics was used to understand the oxidation processes on the surface. The molecular dynamics simulations were compared to molecular beam experiments and good qualitative agreement was observed. The simulations reproduced cylindrical pitting observed in the experiments where oxidation was rapid and primarily occurred around a defect. However, the studies were limited to small systems at low temperatures and could simulate time scales only of the order of nanoseconds. Molecular beam experiments at high surface temperature indicated that a majority of surface reaction products were produced through thermal mechanisms. Since the reactions were thermal, they occurred over long time scales which were computationally prohibitive for molecular dynamics to simulate. The experiments provided detailed dynamical data on the scattering of O, O2, CO, and CO2 and it was found that the data from molecular beam experiments could be used directly to build a model. The data was initially used to deduce surface reaction probabilities at 800 K. The reaction probabilities were then incorporated into the direct simulation Monte Carlo (DSMC) method. Simulations were performed where the microstructure was resolved and dissociated oxygen convected and diffused towards it. For a gas-surface temperature of 800 K, it was found that despite CO being the dominant surface reaction product, a gas-phase reaction forms significant CO2 within the microstructure region. It was also found that surface area did not play any role in concentration of reaction products because the reaction probabilities were in the diffusion dominant regime. The molecular beam data at different surface temperatures was then used to build a finite rate model. Each reaction mechanism and all rate parameters of the new model were determined individually based on the molecular beam data. Despite the experiments being performed at near vacuum conditions, the finite rate model developed using the data could be used at pressures and temperatures relevant to hypersonic conditions. The new model was implemented in a computational fluid dynamics (CFD) solver and flow over a hypersonic vehicle was simulated. The new model predicted similar overall mass loss rates compared to existing models, however, the individual species production rates were completely different. The most notable difference was that the new model (based on molecular beam data) predicts CO as the oxidation reaction product with virtually no CO2 production, whereas existing models predict the exact opposite trend. CO being the dominant oxidation product is consistent with recent high enthalpy wind tunnel experiments. The discovery that measurements taken in molecular beam facilities are able to determine individual reaction mechanisms, including dependence on surface coverage, opens up an entirely new way of constructing ablation models.
Non-thermal hydrogen atoms in the terrestrial upper thermosphere.
Qin, Jianqi; Waldrop, Lara
2016-12-06
Model predictions of the distribution and dynamical transport of hydrogen atoms in the terrestrial atmosphere have long-standing discrepancies with ultraviolet remote sensing measurements, indicating likely deficiencies in conventional theories regarding this crucial atmospheric constituent. Here we report the existence of non-thermal hydrogen atoms that are much hotter than the ambient oxygen atoms in the upper thermosphere. Analysis of satellite measurements indicates that the upper thermospheric hydrogen temperature, more precisely the mean kinetic energy of the atomic hydrogen population, increases significantly with declining solar activity, contrary to contemporary understanding of thermospheric behaviour. The existence of hot hydrogen atoms in the upper thermosphere, which is the key to reconciling model predictions and observations, is likely a consequence of low atomic oxygen density leading to incomplete collisional thermalization of the hydrogen population following its kinetic energization through interactions with hot atomic or ionized constituents in the ionosphere, plasmasphere or magnetosphere.
An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models
NASA Technical Reports Server (NTRS)
Mack, Robert J.
2003-01-01
Two wing/fuselage/nacelle/fin concepts were designed to check the validity and applicability of sonic-boom minimization theory, sonic-boom analysis methods, and the low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so the engine-nacelle volume and the nacelle-wing interference lift disturbances could be accurately predicted; and (3) current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable/tolerable sonic-boom overpressures during cruise.
On thermonuclear ignition criterion at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Baolian; Kwan, Thomas J. T.; Wang, Yi-Ming
2014-10-15
Sustained thermonuclear fusion at the National Ignition Facility remains elusive. Although recent experiments approached or exceeded the anticipated ignition thresholds, the nuclear performance of the laser-driven capsules was well below predictions in terms of energy and neutron production. Such discrepancies between expectations and reality motivate a reassessment of the physics of ignition. We have developed a predictive analytical model from fundamental physics principles. Based on the model, we obtained a general thermonuclear ignition criterion in terms of the areal density and temperature of the hot fuel. This newly derived ignition threshold and its alternative forms explicitly show the minimum requirements of the hot fuel pressure, mass, areal density, and burn fraction for achieving ignition. Comparison of our criterion with existing theories, simulations, and the experimental data shows that our ignition threshold is more stringent than those in the existing literature and that our results are consistent with the experiments.
Structural optimization and recent large ground antenna installations
NASA Technical Reports Server (NTRS)
Levy, Roy
1989-01-01
Within the past several years, the Jet Propulsion Laboratory has designed and built major ground antenna structures in Spain, Australia, and California. One of the antennas at each location is a 70 meter-diameter structure that is a retrofit of the existing 64 meter antenna. The 64 meter existing antennas were first stripped back to a 34 meter interior and then completely new construction with deeper trusses was added to extend the interior to 70 meters. The 70 meter project included the rare opportunity to collect field data to compare with predictions of the finite-element analytical models. The new quadripod design was tested for its lower mode natural frequencies and the main reflector was measured by theodolite to determine deflections of subsets of the backup-structure deformations under load. The emphasis here is to examine measurement results and possibly provide some appreciation of the relationship of predictions made from the design model to actual measurements.
Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.
2014-01-01
Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the watertable to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
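As a schematic of how an analytical TTD model is calibrated to tracer data (using a simple exponential TTD rather than the paper's scale-dependent dispersivity model), consider the following sketch; the tracer input history and the observed concentration are hypothetical.

```python
# Sketch of the general TTD-calibration idea: convolve a tracer input
# history with a candidate travel time distribution (here exponential,
# for simplicity; not the paper's scale-dependent dispersivity model)
# and fit the mean age to an observed concentration.
import numpy as np
from scipy.optimize import minimize_scalar

years = np.arange(1950, 2021)
tracer_in = np.interp(years, [1950, 1970, 2000, 2020], [0, 2, 8, 6])  # toy input

def simulated_conc(mean_age, sample_year=2020):
    ages = sample_year - years                    # age of each input year
    pdf = np.exp(-ages / mean_age) / mean_age     # exponential TTD
    pdf /= pdf.sum()                              # discrete normalization
    return (tracer_in * pdf).sum()

observed = 5.1                                    # hypothetical measurement
fit = minimize_scalar(lambda m: (simulated_conc(m) - observed) ** 2,
                      bounds=(1, 200), method="bounded")
print(f"fitted mean age ~ {fit.x:.1f} years")
```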
Evaluation of free modeling targets in CASP11 and ROLL.
Kinch, Lisa N; Li, Wenlin; Monastyrskyy, Bohdan; Kryshtafovych, Andriy; Grishin, Nick V
2016-09-01
We present an assessment of 'template-free modeling' (FM) in CASP11 and ROLL. Community-wide server performance suggested the use of automated scores similar to previous CASPs would provide a good system of evaluating performance, even in the absence of comprehensive manual assessment. The CASP11 FM category included several outstanding examples, including successful prediction by the Baker group of a 256-residue target (T0806-D1) that lacked sequence similarity to any existing template. The top server model prediction by Zhang's Quark, which was apparently selected and refined by several manual groups, encompassed the entire fold of target T0837-D1. Methods from the same two groups tended to dominate overall CASP11 FM and ROLL rankings. Comparison of top FM predictions with those from the previous CASP experiment revealed progress in the category, particularly reflected in high prediction accuracy for larger protein domains. FM prediction models for two cases were sufficient to provide functional insights that were otherwise not obtainable by traditional sequence analysis methods. Importantly, CASP11 abstracts revealed that alignment-based contact prediction methods brought about much of the CASP11 progress, producing both of the functionally relevant models as well as several of the other outstanding structure predictions. These methodological advances enabled de novo modeling of much larger domain structures than was previously possible and allowed prediction of functional sites. Proteins 2016; 84(Suppl 1):51-66. © 2015 Wiley Periodicals, Inc.
Anbalakan, K; Chua, D; Pandya, G J; Shelat, V G
2015-02-01
Emergency surgery for perforated peptic ulcer (PPU) is associated with significant morbidity and mortality. Accurate and early risk stratification is important. The primary aim of this study is to validate various existing mortality risk prediction models (MRPMs); the secondary aim is to audit our experience of managing PPU. 332 patients who underwent emergency surgery for PPU at a single institution from January 2008 to December 2012 were studied. Clinical and operative details were collected. Four MRPMs were validated: the American Society of Anesthesiology (ASA) score, Boey's score, the Mannheim peritonitis index (MPI) and the Peptic ulcer perforation (PULP) score. Median age was 54.7 years (range 17-109 years) with male predominance (82.5%). 61.7% presented within 24 h of onset of abdominal pain. Median length of stay was 7 days (range 2-137 days). Intra-abdominal collection, leakage, re-operation and 30-day mortality rates were 8.1%, 2.1%, 1.2% and 7.2% respectively. All four MRPMs predicted intra-abdominal collection and mortality; however, only MPI predicted leak (p = 0.01) and re-operation (p = 0.02) rates. The area under the curve for predicting mortality was 75%, 72%, 77.2% and 75% for the ASA score, Boey's score, MPI and PULP score respectively. Emergency surgery for PPU has low morbidity and mortality in our experience. MPI is the only scoring system that predicts all four outcomes: intra-abdominal collection, leak, re-operation and mortality. All four MRPMs had similar, fair accuracy in predicting mortality; however, given geographic and demographic diversity and the inherent weaknesses of existing MRPMs, the quest for an ideal model should continue. Copyright © 2015 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
FIELD INVESTIGATIONS OF THE DRIFT SHADOW
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. W. Su, T. J. Kneafsey, T. A. Ghezzehei, B. D. Marshall, and P. J. Cook
The "Drift Shadow" is defined as the relatively drier region that forms below subsurface cavities or drifts in unsaturated rock. Its existence has been predicted through analytical and numerical models of unsaturated flow. However, these theoretical predictions have not been demonstrated empirically to date. In this project, the investigators plan to test the drift shadow concept through field investigations and compare their observations to simulations. Based on modeling studies, they have identified a suitable site to perform the study at an inactive mine in a sandstone formation. Pretest modeling studies and preliminary characterization of the site are being used to develop the field scale tests.
Reactor pressure vessel embrittlement: Insights from neural network modelling
NASA Astrophysics Data System (ADS)
Mathew, J.; Parfitt, D.; Wilford, K.; Riddle, N.; Alamaniotis, M.; Chroneos, A.; Fitzpatrick, M. E.
2018-04-01
Irradiation embrittlement of steel pressure vessels is an important consideration for the operation of current and future light water nuclear reactors. In this study we employ an ensemble of artificial neural networks in order to provide predictions of the embrittlement using two literature datasets, one based on US surveillance data and the second from the IVAR experiment. We use these networks to examine trends with input variables and to assess various literature models including compositional effects and the role of flux and temperature. Overall, the networks agree with the existing literature models and we comment on their more general use in predicting irradiation embrittlement.
Mean stress and the exhaustion of fatigue-damage resistance
NASA Technical Reports Server (NTRS)
Berkovits, Avraham
1989-01-01
Mean-stress effects on fatigue life are critical in isothermal and thermomechanically loaded materials and composites. Unfortunately, existing mean-stress life-prediction methods do not incorporate physical fatigue damage mechanisms. An objective is to examine the relation between mean-stress induced damage (as measured by acoustic emission) and existing life-prediction methods. Acoustic emission instrumentation has indicated that, as with static yielding, fatigue damage results from dislocation buildup and motion until dislocation saturation is reached, after which void formation and coalescence predominate. Correlation of damage processes with similar mechanisms under monotonic loading led to a reinterpretation of Goodman diagrams for 40 alloys and a modification of Morrow's formulation for life prediction under mean stresses. Further testing, using acoustic emission to monitor dislocation dynamics, can generate data for developing a more general model for fatigue under mean stress.
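For reference, Morrow's mean-stress formulation in its standard textbook form (the work above proposes a modification of it, which is not reproduced here) is:

```latex
\varepsilon_a = \frac{\sigma_f' - \sigma_m}{E}\,(2N_f)^b + \varepsilon_f'\,(2N_f)^c
```

where ε_a is the strain amplitude, σ_m the mean stress, E the elastic modulus, σ_f' and ε_f' the fatigue strength and ductility coefficients, b and c the corresponding exponents, and 2N_f the number of reversals to failure.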
A Fully Coupled Multi-Rigid-Body Fuel Slosh Dynamics Model Applied to the Triana Stack
NASA Technical Reports Server (NTRS)
London, K. W.
2001-01-01
A somewhat general multibody model is presented that accounts for energy dissipation associated with fuel slosh and which unifies some of the existing more specialized representations. This model is used to predict the nutation growth time constant for the Triana Spacecraft, or Stack, consisting of the Triana Observatory mated with the Gyroscopic Upper Stage, or GUS (which includes the solid rocket motor, SRM, booster). At the nominal spin rate of 60 rpm and with 145 kg of hydrazine propellant on board, a time constant of 116 s is predicted for worst case sloshing of a spherical slug model, compared to 1,681 s (nominal) and 1,043 s (worst case) for sloshing of a three degree of freedom pendulum model.
Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa; Ravindran, Balaraman; Moustafa, Ahmed A.
2014-01-01
Although empirical and neural studies show that serotonin (5HT) plays many functional roles in the brain, prior computational models mostly focus on its role in behavioral inhibition. In this study, we present a model of risk based decision making in a modified Reinforcement Learning (RL)-framework. The model depicts the roles of dopamine (DA) and serotonin (5HT) in Basal Ganglia (BG). In this model, the DA signal is represented by the temporal difference error (δ), while the 5HT signal is represented by a parameter (α) that controls risk prediction error. This formulation that accommodates both 5HT and DA reconciles some of the diverse roles of 5HT particularly in connection with the BG system. We apply the model to different experimental paradigms used to study the role of 5HT: (1) Risk-sensitive decision making, where 5HT controls risk assessment, (2) Temporal reward prediction, where 5HT controls time-scale of reward prediction, and (3) Reward/Punishment sensitivity, in which the punishment prediction error depends on 5HT levels. Thus the proposed integrated RL model reconciles several existing theories of 5HT and DA in the BG. PMID:24795614
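A minimal sketch of the division of labor described above: a TD error (DA-like) learns expected reward, a risk prediction error learns reward variance, and a 5HT-like parameter alpha weighs risk in the action utility. The bandit task and exact update rules are illustrative, not the paper's equations.

```python
# Sketch of risk-sensitive action selection in the spirit of the model:
# delta (DA-like TD error) learns expected reward; a risk estimate learns
# squared error; alpha (5HT-like) weighs risk in the utility. The bandit
# task and the exact update rules are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
arms = [(1.0, 0.1), (1.0, 2.0)]          # same mean, low vs high variance
Q, risk = np.zeros(2), np.zeros(2)
lr, alpha, beta = 0.1, 0.5, 3.0          # alpha: risk sensitivity (5HT role)

choices = []
for trial in range(2000):
    utility = Q - alpha * np.sqrt(risk)  # risk-averse utility
    p = np.exp(beta * utility); p /= p.sum()
    a = rng.choice(2, p=p)
    r = rng.normal(*arms[a])
    delta = r - Q[a]                     # DA-like reward prediction error
    xi = delta**2 - risk[a]              # risk prediction error
    Q[a] += lr * delta
    risk[a] += lr * xi
    choices.append(a)

print("fraction choosing low-variance arm:", 1 - np.mean(choices[-500:]))
```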
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.
2017-07-20
Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) the expected unstable AE spectrum and (ii) the resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool to analyze existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral beam heated NSTX discharge is used as reference to illustrate the potential of a reduced fast ion transport model, known as the kick model, that has recently been implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model further discussed. Overall, the reduced model captures the main properties of the instabilities and associated effects on the fast ion population. Finally, additional information from the actual experiment enables further tuning of the model's parameters to achieve a close match with measurements.
Use of machine learning methods to reduce predictive error of groundwater models.
Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal
2014-01-01
Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameters, and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of structure in the error of the physically-based model. © 2013, National GroundWater Association.
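A minimal sketch of the complementary DDM idea with one of the two techniques (support vector regression): learn the structured part of a physical model's error from covariates, then subtract the predicted error from new model output. The toy "physical model" and features are illustrative stand-ins, not the Republican River or Spokane Valley models.

```python
# Sketch of the complementary data-driven model idea: learn the structured
# part of a physical model's error with support vector regression and use
# it to correct new predictions. The toy "physical model" and features are
# illustrative stand-ins for a calibrated groundwater model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.uniform(0, 10, (500, 3))                 # e.g. pumping, recharge, time
true_head = 50 - 2 * X[:, 0] + np.sin(X[:, 2])   # "reality"
model_head = 50 - 2 * X[:, 0]                    # physical model misses a term
residual = true_head - model_head + rng.normal(0, 0.05, 500)

Xtr, Xte, rtr, rte = train_test_split(X, residual, random_state=0)
ddm = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(Xtr, rtr)

rmse_before = np.sqrt(np.mean(rte ** 2))
rmse_after = np.sqrt(np.mean((rte - ddm.predict(Xte)) ** 2))
print(f"RMSE before correction {rmse_before:.3f}, after {rmse_after:.3f}")
```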
Predicting geogenic arsenic contamination in shallow groundwater of south Louisiana, United States.
Yang, Ningfang; Winkel, Lenny H E; Johannesson, Karen H
2014-05-20
Groundwater contaminated with arsenic (As) threatens the health of more than 140 million people worldwide. Previous studies indicate that geology and sedimentary depositional environments are important factors controlling groundwater As contamination. The Mississippi River delta has broadly similar geology and sedimentary depositional environments to the large deltas in South and Southeast Asia, which are severely affected by geogenic As contamination and therefore may also be vulnerable to groundwater As contamination. In this study, logistic regression is used to develop a probability model based on surface hydrology, soil properties, geology, and sedimentary depositional environments. The model is calibrated using 3286 aggregated and binary-coded groundwater As concentration measurements from Bangladesh and verified using 78 As measurements from south Louisiana. The model's predictions are in good agreement with the known spatial distribution of groundwater As contamination of Bangladesh, and the predictions also indicate high risk of As contamination in shallow groundwater from Holocene sediments of south Louisiana. Furthermore, the model correctly predicted 79% of the existing shallow groundwater As measurements in the study region, indicating good performance of the model in predicting groundwater As contamination in shallow aquifers of south Louisiana.
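The model form is standard logistic regression on binary-coded arsenic exceedance; a minimal sketch on synthetic stand-in predictors follows (the named features are hypothetical, not the calibrated Bangladesh model).

```python
# Sketch of the study's model form: logistic regression of binary arsenic
# exceedance on surface/geologic predictors. Predictor names and the
# synthetic data are illustrative, not the calibrated Bangladesh model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 3286                                      # matches the calibration count
X = np.column_stack([
    rng.random(n),                            # e.g. Holocene sediment fraction
    rng.random(n),                            # e.g. soil clay content
    rng.random(n),                            # e.g. topographic wetness
])
logit = -2 + 3 * X[:, 0] + 1.5 * X[:, 1]      # synthetic "truth"
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # As above / below threshold

clf = LogisticRegression().fit(X, y)
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
print("P(As exceedance) at a new site:",
      clf.predict_proba([[0.8, 0.5, 0.3]])[0, 1].round(2))
```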
Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model.
Xianfang, Wang; Junmei, Wang; Xiaolei, Wang; Yue, Zhang
2017-01-01
The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To address the information redundancy that arises with current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance level of each feature for the result, and attributes with smaller F values are filtered out in a rough selection step. Secondly, redundancy degree is calculated by the Pearson Correlation Coefficient, and a threshold is set to filter out attributes with weak independence, yielding the refined feature set. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show the proposed AVC-SVM model reaches an overall accuracy of 91.98% and an average accuracy of 92.17% with a total of 68 parameters. The proposed model provides highly useful information for further experimental research. The prediction model will be accessible free of charge at our web server.
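A minimal sketch of the described pipeline (ANOVA F-value rough selection, Pearson-correlation redundancy refinement, SVM classification) on synthetic data follows; the thresholds and the data are illustrative assumptions.

```python
# Sketch of the AVC-SVM pipeline as described: rough selection by ANOVA
# F value, redundancy filtering by Pearson correlation, then SVM
# classification. Thresholds and the synthetic data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=60, n_informative=10,
                           random_state=0)   # stand-in for conotoxin features

# Step 1: rough selection: drop features with small F value.
F, _ = f_classif(X, y)
keep = np.where(F > np.median(F))[0]

# Step 2: refinement: among kept features, drop one of any highly
# correlated pair (weak independence) using Pearson correlation.
selected = []
for j in keep[np.argsort(-F[keep])]:         # strongest features first
    r = [abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) for k in selected]
    if not r or max(r) < 0.9:
        selected.append(j)

# Step 3: SVM on the refined feature set.
acc = cross_val_score(SVC(), X[:, selected], y, cv=5).mean()
print(f"{len(selected)} features kept, cv accuracy {acc:.3f}")
```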